[Help] Error when running WordCount from Eclipse

hello_dodo 2017-07-11 06:52:19
I have already tried setting dfs.permissions to false in hdfs-site.xml, but the job still fails.
The console output is below. Can anyone explain what is causing this?
DEBUG [main] - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time)
DEBUG [main] - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time)
DEBUG [main] - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, value=[GetGroups], valueName=Time)
DEBUG [main] - UgiMetrics, User and group related metrics
DEBUG [main] - Kerberos krb5 configuration not found, setting default realm to empty
DEBUG [main] - Creating new Groups object
DEBUG [main] - Trying to load the custom-built native-hadoop library...
DEBUG [main] - Loaded the native-hadoop library
DEBUG [main] - Using JniBasedUnixGroupsMapping for Group resolution
DEBUG [main] - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
DEBUG [main] - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
DEBUG [main] - hadoop login
DEBUG [main] - hadoop login commit
DEBUG [main] - using local user:NTUserPrincipal: dodo
DEBUG [main] - Using user: "NTUserPrincipal: dodo" with name dodo
DEBUG [main] - User entry: "dodo"
DEBUG [main] - UGI loginUser:dodo (auth:SIMPLE)
DEBUG [main] - dfs.client.use.legacy.blockreader.local = false
DEBUG [main] - dfs.client.read.shortcircuit = false
DEBUG [main] - dfs.client.domain.socket.data.traffic = false
DEBUG [main] - dfs.domain.socket.path =
DEBUG [main] - multipleLinearRandomRetry = null
DEBUG [main] - rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@33990a0c
DEBUG [main] - getting client out of cache: org.apache.hadoop.ipc.Client@2a54a73f
DEBUG [main] - Both short-circuit local reads and UNIX domain socket are disabled.
DEBUG [main] - DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
DEBUG [main] - PrivilegedAction as:dodo (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
DEBUG [main] - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
DEBUG [main] - Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
DEBUG [main] - Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
INFO [main] - Connecting to ResourceManager at /0.0.0.0:8032
DEBUG [main] - PrivilegedAction as:dodo (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
DEBUG [main] - Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
DEBUG [main] - Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
DEBUG [main] - getting client out of cache: org.apache.hadoop.ipc.Client@2a54a73f
DEBUG [main] - Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
DEBUG [main] - Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
DEBUG [main] - PrivilegedAction as:dodo (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
DEBUG [main] - Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
DEBUG [main] - PrivilegedAction as:dodo (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:162)
DEBUG [main] - PrivilegedAction as:dodo (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
DEBUG [main] - The ping interval is 60000 ms.
DEBUG [main] - Connecting to hadoop22/192.168.56.22:9000
DEBUG [IPC Client (1433666880) connection to hadoop22/192.168.56.22:9000 from dodo] - IPC Client (1433666880) connection to hadoop22/192.168.56.22:9000 from dodo: starting, having connections 1
DEBUG [IPC Parameter Sending Thread #0] - IPC Client (1433666880) connection to hadoop22/192.168.56.22:9000 from dodo sending #0
DEBUG [IPC Client (1433666880) connection to hadoop22/192.168.56.22:9000 from dodo] - IPC Client (1433666880) connection to hadoop22/192.168.56.22:9000 from dodo got value #0
DEBUG [main] - Call: getFileInfo took 195ms
DEBUG [main] - getStagingAreaDir: dir=/tmp/hadoop-yarn/staging/dodo/.staging
DEBUG [main] - PrivilegedActionException as:dodo (auth:SIMPLE) cause:java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/dodo/.staging is not as expected. It is owned by Administrators. The directory must be owned by the submitter dodo or by dodo
Exception in thread "main" java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/dodo/.staging is not as expected. It is owned by Administrators. The directory must be owned by the submitter dodo or by dodo
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:120)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at hadoop.WordCount.main(WordCount.java:106)
DEBUG [Thread-2] - stopping client from cache: org.apache.hadoop.ipc.Client@2a54a73f
3 replies
中华大表哥 2019-11-15
It's already 2019. Did you ever solve this?
The last lines say it clearly: the staging directory has the wrong owner. "The ownership on the staging directory /tmp/hadoop-yarn/staging/dodo/.staging is not as expected. It is owned by Administrators. The directory must be owned by the submitter dodo or by dodo." As for dfs.permissions: when it is true, the HDFS permission system is enabled; when it is false, permission checking is switched off, but no other behavior changes. Flipping this parameter does not alter the mode, owner, or group of any file or directory, so it cannot fix an ownership mismatch like this one.
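If that diagnosis is right, the fix is to make the staging directory actually belong to the submitting user, not to disable permission checks. A minimal sketch, assuming you can run commands as an HDFS superuser (the paths are copied from the log; the group name `supergroup` is a guess, substitute your cluster's default group):

```shell
# Option 1: hand the existing staging directory over to the submitting user.
hdfs dfs -chown -R dodo:supergroup /tmp/hadoop-yarn/staging/dodo/.staging

# Option 2: delete the stale staging directory so the next job submission
# recreates it with the correct owner automatically.
hdfs dfs -rm -r /tmp/hadoop-yarn/staging/dodo/.staging
```

On a Windows client, another common workaround is to submit the job as the directory's current owner, e.g. by setting the environment variable `HADOOP_USER_NAME` before launching Eclipse, but fixing the ownership in HDFS is the cleaner approach.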
coder_szc 2019-11-09
It's been more than two years. Did you ever solve this?
