HDFS: insufficient permissions

tpj11 2014-07-16 05:44:29
While installing Hadoop 2.4.0 I need to run hdfs namenode -format, but it fails with an error. How do I get sufficient permissions?
The environment is Ubuntu 14.04.
ww@HDName:~$ sudo gedit /etc/hostname

(gedit:4742): IBUS-WARNING **: The owner of /home/ww/.config/ibus/bus is not root!
ww@HDName:~$ hdfs namenode -format
14/07/16 17:35:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = HDName/192.168.56.168
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.0
STARTUP_MSG: classpath = (略)
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG: java = 1.7.0_55
************************************************************/
14/07/16 17:35:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/07/16 17:35:26 INFO namenode.NameNode: createNameNode [-format]
14/07/16 17:35:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-a39bc3cc-a877-4267-8683-f51c351c362e
14/07/16 17:35:28 INFO namenode.FSNamesystem: fsLock is fair:true
14/07/16 17:35:28 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/16 17:35:28 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/16 17:35:28 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/16 17:35:28 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/07/16 17:35:28 INFO util.GSet: Computing capacity for map BlocksMap
14/07/16 17:35:28 INFO util.GSet: VM type = 64-bit
14/07/16 17:35:28 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/07/16 17:35:28 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/07/16 17:35:28 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/16 17:35:28 INFO blockmanagement.BlockManager: defaultReplication = 1
14/07/16 17:35:28 INFO blockmanagement.BlockManager: maxReplication = 512
14/07/16 17:35:28 INFO blockmanagement.BlockManager: minReplication = 1
14/07/16 17:35:28 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/07/16 17:35:29 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/07/16 17:35:29 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/16 17:35:29 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/07/16 17:35:29 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/07/16 17:35:29 INFO namenode.FSNamesystem: fsOwner = ww (auth:SIMPLE)
14/07/16 17:35:29 INFO namenode.FSNamesystem: supergroup = supergroup
14/07/16 17:35:29 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/16 17:35:29 INFO namenode.FSNamesystem: HA Enabled: false
14/07/16 17:35:29 INFO namenode.FSNamesystem: Append Enabled: true
14/07/16 17:35:29 INFO util.GSet: Computing capacity for map INodeMap
14/07/16 17:35:29 INFO util.GSet: VM type = 64-bit
14/07/16 17:35:29 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/07/16 17:35:29 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/07/16 17:35:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/16 17:35:29 INFO util.GSet: Computing capacity for map cachedBlocks
14/07/16 17:35:29 INFO util.GSet: VM type = 64-bit
14/07/16 17:35:29 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/07/16 17:35:29 INFO util.GSet: capacity = 2^18 = 262144 entries
14/07/16 17:35:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/16 17:35:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/16 17:35:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/07/16 17:35:29 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/16 17:35:29 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/16 17:35:29 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/07/16 17:35:29 INFO util.GSet: VM type = 64-bit
14/07/16 17:35:29 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/07/16 17:35:29 INFO util.GSet: capacity = 2^15 = 32768 entries
14/07/16 17:35:29 INFO namenode.AclConfigFlag: ACLs enabled? false
14/07/16 17:35:29 INFO namenode.FSImage: Allocated new BlockPoolId: BP-210837843-192.168.56.168-1405503329548
14/07/16 17:35:29 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:334)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:845)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1256)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1370)
14/07/16 17:35:29 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot create directory /usr/local/hadoop_store/hdfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:334)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:845)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1256)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1370)
14/07/16 17:35:29 INFO util.ExitUtil: Exiting with status 1
14/07/16 17:35:29 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at HDName/192.168.56.168
************************************************************/


ww@HDName:~$ start-dfs.sh
14/07/16 17:36:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [HDName]
HDName: mkdir: cannot create directory '/usr/local/hadoop/logs': Permission denied
HDName: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
HDName: starting namenode, logging to /usr/local/hadoop/logs/hadoop-ww-namenode-HDName.out
HDName: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 151: /usr/local/hadoop/logs/hadoop-ww-namenode-HDName.out: No such file or directory
HDName: head: cannot open '/usr/local/hadoop/logs/hadoop-ww-namenode-HDName.out' for reading: No such file or directory
HDName: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 166: /usr/local/hadoop/logs/hadoop-ww-namenode-HDName.out: No such file or directory
HDName: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 167: /usr/local/hadoop/logs/hadoop-ww-namenode-HDName.out: No such file or directory
localhost: mkdir: cannot create directory '/usr/local/hadoop/logs': Permission denied
localhost: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-ww-datanode-HDName.out
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 151: /usr/local/hadoop/logs/hadoop-ww-datanode-HDName.out: No such file or directory
localhost: head: cannot open '/usr/local/hadoop/logs/hadoop-ww-datanode-HDName.out' for reading: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 166: /usr/local/hadoop/logs/hadoop-ww-datanode-HDName.out: No such file or directory
localhost: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 167: /usr/local/hadoop/logs/hadoop-ww-datanode-HDName.out: No such file or directory
192.168.5.201: ssh: connect to host 192.168.5.201 port 22: No route to host
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory '/usr/local/hadoop/logs': Permission denied
0.0.0.0: chown: cannot access '/usr/local/hadoop/logs': No such file or directory
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-ww-secondarynamenode-HDName.out
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 151: /usr/local/hadoop/logs/hadoop-ww-secondarynamenode-HDName.out: No such file or directory
0.0.0.0: head: cannot open '/usr/local/hadoop/logs/hadoop-ww-secondarynamenode-HDName.out' for reading: No such file or directory
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 166: /usr/local/hadoop/logs/hadoop-ww-secondarynamenode-HDName.out: No such file or directory
0.0.0.0: /usr/local/hadoop/sbin/hadoop-daemon.sh: line 167: /usr/local/hadoop/logs/hadoop-ww-secondarynamenode-HDName.out: No such file or directory
14/07/16 17:37:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
7 replies
[Deleted user] 2016-07-11
$ hadoop fs -chmod -R 777 /
qq_14941653 2015-12-29
I solved it. The cause was that /usr/local/hadoop_store/dfs/namenode/current had no permissions. Go in as root and grant ownership of that directory: chown -R ubuntu:ubuntu /usr/local/hadoop_store. Everyone's username may differ, so replace 'ubuntu' here with your own.
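
A minimal sketch of that fix, assuming the user ww from the transcript above and the storage path from the error message (substitute your own user:group and path):
# Give the Hadoop user ownership of the NameNode storage tree
# (replace ww:ww with your own user and group)
$ sudo chown -R ww:ww /usr/local/hadoop_store
# Confirm the ownership, then re-run the format
$ ls -ld /usr/local/hadoop_store/hdfs/namenode
$ hdfs namenode -format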
qq_14941653 2015-12-29
How did you solve it? I'm running into the same problem as you. Please share.
on_way_ 2014-07-22
Quoting reply #3 from csdalijingang:
$ hadoop fs -chmod -R 777 /
That's way too drastic; who would dare hire a programmer like that. When HDFS starts, it needs to write its logs under the directory specified by ${hadoop.log.dir} (by default the logs folder under the install directory), so you only need to open up the permissions on that one folder. See: http://jiacai2050.github.io/blog/2014/07/17/installation-of-cdh-5-0-2-tar-gz-with-high-availability/
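
For reference, a sketch of that narrower fix, again assuming user ww and the default install path shown in the transcript:
# Create the log directory and hand it to the Hadoop user,
# instead of chmod 777 on the whole filesystem
$ sudo mkdir -p /usr/local/hadoop/logs
$ sudo chown -R ww:ww /usr/local/hadoop/logs
# Alternatively, point the logs somewhere the user can already write,
# by setting HADOOP_LOG_DIR in etc/hadoop/hadoop-env.sh, e.g.
#   export HADOOP_LOG_DIR=/home/ww/hadoop-logs
$ start-dfs.sh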
tchqiq 2014-07-18
sudo -i. If you're just starting out, switch to root and do everything as root for now...
WL135266 2014-07-18
$ hadoop fs -chmod -R 777 /
WL135266 2014-07-18
$ hadoop fs -chmod 777 /
