Formatting the NameNode of my HDFS filesystem fails. Can anyone help me find where my configuration is wrong?

tangxinyu318 2015-07-17 11:46:28

************************************************************/
15/07/17 23:34:59 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/07/17 23:34:59 INFO namenode.NameNode: createNameNode [-format]
15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-cda62f09-97a1-4f13-a58f-e64c01e4c1ba
15/07/17 23:34:59 INFO namenode.FSNamesystem: No KeyProvider found.
15/07/17 23:34:59 INFO namenode.FSNamesystem: fsLock is fair:true
15/07/17 23:35:00 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/07/17 23:35:00 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/07/17 23:35:00 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/07/17 23:35:00 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jul 17 23:35:00
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map BlocksMap
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/07/17 23:35:00 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: defaultReplication = 1
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxReplication = 512
15/07/17 23:35:00 INFO blockmanagement.BlockManager: minReplication = 1
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/07/17 23:35:00 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/07/17 23:35:00 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/07/17 23:35:00 INFO namenode.FSNamesystem: fsOwner = tangxinyu (auth:SIMPLE)
15/07/17 23:35:00 INFO namenode.FSNamesystem: supergroup = supergroup
15/07/17 23:35:00 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/07/17 23:35:00 INFO namenode.FSNamesystem: HA Enabled: false
15/07/17 23:35:00 INFO namenode.FSNamesystem: Append Enabled: true
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map INodeMap
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/07/17 23:35:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map cachedBlocks
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^18 = 262144 entries
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/07/17 23:35:00 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/07/17 23:35:00 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^15 = 32768 entries
15/07/17 23:35:00 INFO namenode.NNConf: ACLs enabled? false
15/07/17 23:35:00 INFO namenode.NNConf: XAttrs enabled? true
15/07/17 23:35:00 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/07/17 23:35:00 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1283852822-127.0.1.1-1437147300357
15/07/17 23:35:00 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:941)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/07/17 23:35:00 FATAL namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:941)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/07/17 23:35:00 INFO util.ExitUtil: Exiting with status 1
15/07/17 23:35:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.1.1
************************************************************/
My configuration is as follows:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
</property>

</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
</property>

<property>
<name>dfs.data.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/dfs/data</value>
<description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hd/dfs/name</value>
</property>

</configuration>

mapred-site.xml.template

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
<description>Host or IP and port of JobTracker.</description>
</property>
</configuration>
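
Reading the log against these files: the format actually used /usr/local/hd/dfs/name (the dfs.namenode.name.dir value in hdfs-site.xml, which overrides both the misplaced copy of that key in core-site.xml and the deprecated dfs.name.dir key), and creating its current/ subdirectory failed, which normally means the running user cannot write under /usr/local/hd. A minimal hdfs-site.xml sketch that keeps a single metadata directory under the user-writable home path and uses the file: URI form the WARN lines ask for (assuming /home/tangxinyu/hadoop-2.6.0 is writable by the tangxinyu user shown as fsOwner in the log):

<configuration>
<property>
<!-- one authoritative NameNode metadata dir, written as a URI -->
<name>dfs.namenode.name.dir</name>
<value>file:/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/tangxinyu/hadoop-2.6.0/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

With this, dfs.namenode.name.dir should also be removed from core-site.xml, and fs.default.name can be renamed to its Hadoop 2.x replacement fs.defaultFS (the old key still works but is deprecated).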



12 replies
小步吖 2018-10-12
This is a permissions problem.
ERROR namenode.NameNode: java.io.IOException: Cannot create directory /export/home/dfs/name/current
ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /usr/local/hadoop/hdfsconf/name/current
The cause is that the permissions on /usr/hadoop/tmp were never set; change them with:

sudo chown -R hadoop:hadoop /usr/hadoop/tmp
sudo chmod -R a+w /usr/local/hadoop
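
In the OP's case the failing path is /usr/local/hd/dfs/name, so the same idea applies there. A sketch, assuming the format is run as the tangxinyu user shown in the log's fsOwner line (the tangxinyu group name is a guess, substitute your own):

# pre-create the NameNode metadata directory and hand it to the hadoop user
sudo mkdir -p /usr/local/hd/dfs/name
sudo chown -R tangxinyu:tangxinyu /usr/local/hd/dfs
# verify: the owner column should now show tangxinyu
ls -ld /usr/local/hd/dfs/name
# then re-run the format as that user
hdfs namenode -format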
说文科技 2018-01-21
This problem ultimately comes from formatting repeatedly, which leaves the NameNode unable to write its contents into the tmp directory; just delete the tmp directory and format again.
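
A sketch of that clean-slate re-format with the paths from the OP's config, assuming nothing on this HDFS needs to be kept (deleting these directories wipes all HDFS data):

# stop HDFS first ($HADOOP_HOME/sbin must be on PATH, or cd there)
stop-dfs.sh
# remove the stale storage directories named in the configs
rm -rf /home/tangxinyu/hadoop-2.6.0/tmp /home/tangxinyu/hadoop-2.6.0/dfs/name /home/tangxinyu/hadoop-2.6.0/dfs/data
# format and start again
hdfs namenode -format
start-dfs.sh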
xi4m00 2016-05-26
I deleted the /dfs directory and the format succeeded.
helphone 2016-02-01
These two warnings, "Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.", mean the value in hdfs-site.xml should be written as a URI, e.g. <value>file:/home/tangxinyu/hadoop-2.6.0/dfs/name</value>. I still haven't solved my own re-formatting problem.
pww71 2016-02-01
pwwMap has been updated: the read cache is optimized to read files using a small random buffer, more than doubling diskmap performance. http://sourceforge.net/projects/pwwhashmap/files/stats/timeline
qq_27059213 2016-01-09
Judging from your first two directory settings (/home/tangxinyu/...), you are not running as root, but <name>dfs.namenode.name.dir</name> is set to <value>/usr/local/hd/dfs/name</value>, and that directory probably needs root privileges to write to.
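
A quick way to test that theory; a sketch, assuming a stock layout where /usr/local is owned by root:

# show owner and mode of every path component down to the name dir
namei -l /usr/local/hd/dfs/name
# if the deepest existing component is owned by root and the format
# runs as tangxinyu, creating .../name/current fails exactly as in the log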
qq_14941653 2015-12-29
Did you solve it? Please share.
ZhouSanduo18 2015-08-11
Quoting Gamer_gyt (reply #4):
The directory being formatted must not already exist; HDFS enforces this to avoid deleting other data.
I use hadoop-2.2.0 and also ran into trouble formatting the namenode, but my problem was the opposite: the format succeeded and the namenode then refused to start, because re-formatting changed the clusterID and left it inconsistent, so the namenode could not come up. The OP's problem is probably not what reply #4 describes. Note these lines: "15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration." (printed twice). Could the configuration files be wrong? I have not used hadoop-2.6.0 and don't know how to configure it, but the OP could check the relevant configuration files for errors.
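
For the clusterID mismatch described above, a sketch of how to compare the two IDs on disk, assuming the storage paths from the OP's config (each storage dir keeps a VERSION file under current/):

# clusterID written by the most recent namenode -format
grep clusterID /home/tangxinyu/hadoop-2.6.0/dfs/name/current/VERSION
# clusterID the DataNode still carries from an earlier format
grep clusterID /home/tangxinyu/hadoop-2.6.0/dfs/data/current/VERSION
# if they differ, either wipe the DataNode dir and restart, or edit the
# DataNode's VERSION file to carry the NameNode's clusterID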
Gamer_gyt
The directory being formatted must not already exist; HDFS enforces this to avoid deleting other data.
ariser 2015-07-20
The newer releases no longer have that file, do they?
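
One detail worth flagging either way: properties edited in mapred-site.xml.template are never loaded, because Hadoop only reads mapred-site.xml. A sketch of the usual step, assuming the default 2.6.0 layout under $HADOOP_HOME/etc/hadoop (and note that mapred.job.tracker is a Hadoop 1.x key; on 2.x with YARN one normally sets mapreduce.framework.name to yarn instead):

cd $HADOOP_HOME/etc/hadoop
# the template is only a skeleton; copy it to the file Hadoop actually reads
cp mapred-site.xml.template mapred-site.xml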
FightForProgrammer 2015-07-20
From the error message, it looks like /usr/local/hd/dfs/name/current cannot be created. Could it be a permissions problem?
夜无边CN 2015-07-18
Does the directory /usr/local/hd/dfs/name exist, and do you have permission on it?
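
A one-minute probe answering both questions; a sketch, assuming it is run as the same user that runs the format:

# does the directory exist at all?
[ -d /usr/local/hd/dfs/name ] && echo exists || echo missing
# can this user create entries inside it?
touch /usr/local/hd/dfs/name/.probe 2>/dev/null && { echo writable; rm /usr/local/hd/dfs/name/.probe; } || echo not-writable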
