Formatting the HDFS filesystem's NameNode fails — can anyone help me figure out what's wrong in my configuration?
************************************************************/
15/07/17 23:34:59 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/07/17 23:34:59 INFO namenode.NameNode: createNameNode [-format]
15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/07/17 23:34:59 WARN common.Util: Path /usr/local/hd/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-cda62f09-97a1-4f13-a58f-e64c01e4c1ba
15/07/17 23:34:59 INFO namenode.FSNamesystem: No KeyProvider found.
15/07/17 23:34:59 INFO namenode.FSNamesystem: fsLock is fair:true
15/07/17 23:35:00 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/07/17 23:35:00 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/07/17 23:35:00 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/07/17 23:35:00 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jul 17 23:35:00
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map BlocksMap
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/07/17 23:35:00 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: defaultReplication = 1
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxReplication = 512
15/07/17 23:35:00 INFO blockmanagement.BlockManager: minReplication = 1
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/07/17 23:35:00 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/07/17 23:35:00 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/07/17 23:35:00 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/07/17 23:35:00 INFO namenode.FSNamesystem: fsOwner = tangxinyu (auth:SIMPLE)
15/07/17 23:35:00 INFO namenode.FSNamesystem: supergroup = supergroup
15/07/17 23:35:00 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/07/17 23:35:00 INFO namenode.FSNamesystem: HA Enabled: false
15/07/17 23:35:00 INFO namenode.FSNamesystem: Append Enabled: true
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map INodeMap
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/07/17 23:35:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map cachedBlocks
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^18 = 262144 entries
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/07/17 23:35:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/07/17 23:35:00 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/07/17 23:35:00 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/07/17 23:35:00 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/07/17 23:35:00 INFO util.GSet: VM type = 64-bit
15/07/17 23:35:00 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/07/17 23:35:00 INFO util.GSet: capacity = 2^15 = 32768 entries
15/07/17 23:35:00 INFO namenode.NNConf: ACLs enabled? false
15/07/17 23:35:00 INFO namenode.NNConf: XAttrs enabled? true
15/07/17 23:35:00 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/07/17 23:35:00 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1283852822-127.0.1.1-1437147300357
15/07/17 23:35:00 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:941)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/07/17 23:35:00 FATAL namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:941)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/07/17 23:35:00 INFO util.ExitUtil: Exiting with status 1
15/07/17 23:35:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.1.1
************************************************************/
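The root cause is right in the stack trace: `java.io.IOException: Cannot create directory /usr/local/hd/dfs/name/current`. The format ran as user `tangxinyu` (see the `fsOwner` line in the log), but `/usr/local` is normally writable only by root, so the NameNode cannot create its storage directory there. A minimal check, with the suggested `chown` target assumed from the `fsOwner` log line:

```shell
# Hypothetical check: can the current user create and write the
# NameNode storage directory the configuration points at?
check_name_dir() {
  dir="$1"
  mkdir -p "$dir" 2>/dev/null && [ -w "$dir" ]
}

if check_name_dir /usr/local/hd/dfs/name; then
  echo "name dir ok"
else
  # Assumed fix: give ownership to the user who runs "hdfs namenode -format".
  echo "need: sudo mkdir -p /usr/local/hd/dfs/name && sudo chown -R tangxinyu /usr/local/hd"
fi
```

After fixing ownership (or moving the name dir under `/home/tangxinyu`, which you already use elsewhere in the configs), re-run `hdfs namenode -format`.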
My configuration is as follows:
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/dfs/name</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/tangxinyu/hadoop-2.6.0/dfs/data</value>
<description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hd/dfs/name</value>
</property>
</configuration>
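Two config issues are visible above, both worth checking. First, the NameNode storage directory is set three times with two different values: `dfs.namenode.name.dir` appears in core-site.xml and again in hdfs-site.xml, alongside the deprecated `dfs.name.dir` (its old name). The log's warning about `/usr/local/hd/dfs/name` shows that the `dfs.namenode.name.dir` value in hdfs-site.xml won. Second, the `Path ... should be specified as a URI` warning asks for the `file://` scheme. A possible cleaned-up hdfs-site.xml fragment — a sketch only, keeping a single authoritative entry per setting and reusing the `/home/tangxinyu` paths already present in the original (which path you actually want is your call, but pick one):

```xml
<!-- Sketch: one name dir and one data dir, each as a file:// URI,
     under a directory the hadoop user can write. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/tangxinyu/hadoop-2.6.0/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/tangxinyu/hadoop-2.6.0/dfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```

Remove the `dfs.namenode.name.dir` block from core-site.xml entirely — NameNode storage settings belong in hdfs-site.xml.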
mapred-site.xml.template
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
<description>Host or IP and port of JobTracker.</description>
</property>
</configuration>
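Unrelated to the format failure, but worth noting: Hadoop only loads `mapred-site.xml`; a file still named `mapred-site.xml.template` is ignored, so the JobTracker setting above is not taking effect. A small sketch, with the config path assumed from the install location in your configs:

```shell
# activate_mapred_conf: Hadoop ignores mapred-site.xml.template and only
# reads mapred-site.xml, so copy the template into place.
activate_mapred_conf() {
  conf_dir="$1"
  cp "$conf_dir/mapred-site.xml.template" "$conf_dir/mapred-site.xml"
}

# For this question's layout (path assumed, not confirmed by the log):
# activate_mapred_conf /home/tangxinyu/hadoop-2.6.0/etc/hadoop
```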