Searching for this error led me to this article: http://blog.csdn.net/wanghai__/article/details/5752199
Quoting the article's explanation:
Cause: every time you run namenode format, a new namespaceID is generated, while tmp/dfs/data still holds the ID from the previous format. Formatting the namenode wipes the namenode's data but does not wipe the datanode's data, so the datanode fails to start. What you should do is clear everything under tmp before each format.
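In other words, the namespaceID recorded on the namenode no longer matches the one stored on the datanode. A quick way to confirm this is to compare the two VERSION files; the sketch below is mine, not from the article, and it assumes dfs.name.dir and dfs.data.dir were left at their defaults under hadoop.tmp.dir (/usr/local/hadoop-datastore/hadoop-hadoop in the tutorial quoted below):
# compare the namespaceID recorded by the namenode and by the datanode
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
# if the two values differ, the datanode will refuse to start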
Clearing tmp didn't work for me either, which was frustrating. The article also quotes a long English passage; at first I only read the first workaround (the one the author recommended) and skipped the second.
Workaround 1: Start from scratch
I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:
1. stop the cluster
2. delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
3. reformat the namenode (NOTE: all HDFS data is lost during this process!)
4. restart the cluster
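As a rough command-level sketch of these four steps (my addition, not from the article; it assumes the single-node setup from the same tutorial, commands run from the Hadoop installation directory, and the data directory named in step 2, so double-check the path before deleting anything):
# Workaround 1 in commands (destroys all HDFS data!)
bin/stop-all.sh
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
bin/hadoop namenode -format
bin/start-all.sh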
If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be ok during the initial setup/testing), you might give the second approach a try.
Workaround 2: Updating namespaceID of problematic datanodes
Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:
1. stop the datanode
2. edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode
3. restart the datanode
If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).
If you wonder what the contents of VERSION look like, here's one of mine:
#contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
Later I took a look at the second workaround, and it turned out to be really easy: if the namespaceID values don't match, just edit them so they are consistent and you're done.
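In command form, that fix might look roughly like the sketch below (my own, not from the article; it assumes the tutorial's paths, a single problematic datanode, and GNU sed, and reuses the namespaceID 393514426 from the VERSION sample above):
# Workaround 2 in commands, run on the problematic datanode
bin/hadoop-daemon.sh stop datanode
# look up the namenode's current namespaceID
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION
# write that value into the datanode's VERSION, e.g. if the namenode reports 393514426:
sed -i 's/^namespaceID=.*/namespaceID=393514426/' /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
bin/hadoop-daemon.sh start datanode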