Hi everyone, my Hadoop install keeps failing. I'm posting the debug output below — could someone please take a look and tell me where I went wrong? Thanks in advance!

Yyeyeye 2018-06-11 10:33:50
hduser@master:~$ hadoop fs -ls /
18/06/11 22:25:54 DEBUG util.Shell: setsid exited with exit code 0
18/06/11 22:25:55 DEBUG conf.Configuration: parsing URL jar:file:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar!/core-default.xml
18/06/11 22:25:55 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@3cc863dd
18/06/11 22:25:55 DEBUG conf.Configuration: parsing URL file:/usr/local/hadoop/etc/hadoop/core-site.xml
18/06/11 22:25:55 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@d0cc4c1
18/06/11 22:25:57 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of successful kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/06/11 22:25:57 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of failed kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/06/11 22:25:57 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[GetGroups], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/06/11 22:25:57 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/06/11 22:25:57 DEBUG util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
18/06/11 22:25:57 DEBUG security.Groups: Creating new Groups object
18/06/11 22:25:57 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/06/11 22:25:57 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
18/06/11 22:25:58 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
18/06/11 22:25:58 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
18/06/11 22:25:58 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
18/06/11 22:25:58 DEBUG security.UserGroupInformation: hadoop login
18/06/11 22:25:58 DEBUG security.UserGroupInformation: hadoop login commit
18/06/11 22:25:58 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hduser
18/06/11 22:25:58 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: hduser" with name hduser
18/06/11 22:25:58 DEBUG security.UserGroupInformation: User entry: "hduser"
18/06/11 22:25:58 DEBUG security.UserGroupInformation: UGI loginUser:hduser (auth:SIMPLE)
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
18/06/11 22:25:59 DEBUG hdfs.DFSClient: No KeyProvider found.
18/06/11 22:26:00 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
18/06/11 22:26:00 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@2399c277
18/06/11 22:26:00 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$1@26fc13bc: starting with interruptCheckPeriodMs = 60000
18/06/11 22:26:02 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
18/06/11 22:26:02 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
18/06/11 22:26:02 DEBUG ipc.Client: The ping interval is 60000 ms.
18/06/11 22:26:02 DEBUG ipc.Client: Connecting to master/192.168.56.100:9000
18/06/11 22:26:02 DEBUG ipc.Client: closing ipc connection to master/192.168.56.100:9000: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:265)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1625)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
18/06/11 22:26:02 DEBUG ipc.Client: IPC Client (896894357) connection to master/192.168.56.100:9000 from hduser: closed
ls: Call From master/192.168.56.100 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
18/06/11 22:26:02 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG ipc.Client: Stopping client
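The key line is the `java.net.ConnectException` when dialing master/192.168.56.100:9000 — the client side is fine, but nothing is accepting connections on the NameNode RPC port (most often the NameNode process isn't running, or `fs.defaultFS` points at the wrong host/port). A minimal sketch of how to probe the port from the shell (the `check_port` helper and the 59999 example port are illustrative, not from the thread):

```shell
# Illustrative helper: test whether anything is listening on host:port
# using bash's /dev/tcp pseudo-device (assumes bash, not plain sh).
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "OPEN"
  else
    echo "CLOSED"
  fi
}

# On the cluster you would run something like:
#   check_port 192.168.56.100 9000   # the address from the error message
#   jps                              # is a NameNode process listed at all?
# Here, probing a local port with no listener prints CLOSED:
check_port 127.0.0.1 59999
```

If the port is CLOSED and `jps` shows no NameNode, check the NameNode log under `$HADOOP_HOME/logs` and try `start-dfs.sh`; a NameNode that exits immediately on startup often means the metadata directory was never formatted (`hdfs namenode -format`) or `core-site.xml` points somewhere unexpected.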
4 replies
迷途1503 2018-07-19
Quoting reply #3 from jiaoyang1503:
1. Poor network causing connection timeouts — increase the heartbeat interval.
2. The user you are accessing HDFS with has no permissions on the HDFS cluster.

Check whether hduser has permission to access HDFS.
迷途1503 2018-07-19
1. Poor network causing connection timeouts — increase the heartbeat interval.
2. The user you are accessing HDFS with has no permissions on the HDFS cluster.
kxiaozhuk 2018-07-06
Check the firewall on the 192.168.56.100 machine, and check whether port 9000 is actually open.
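Following up on the firewall/port suggestion: on the NameNode host you would typically check the listening sockets and firewall rules directly. One subtle pitfall worth knowing: if the NameNode is bound only to loopback (a classic symptom of an `/etc/hosts` entry mapping `master` to 127.0.0.1 or 127.0.1.1), remote clients still get "Connection refused" even though the process is up. A sketch of that check (the `ss` output below is fabricated for illustration):

```shell
# Real checks you would run on 192.168.56.100 (output varies per machine):
#   sudo ss -tlnp | grep 9000      # is anything bound to the port?
#   sudo iptables -L -n            # any rules blocking 9000?
#   sudo ufw status                # on Ubuntu, is the firewall open?
#
# Classify a (fabricated) socket-listing line: loopback-only binding is
# the case that silently breaks remote clients.
sample="LISTEN 0 128 127.0.0.1:9000 *:*"
case "$sample" in
  *127.0.0.1:9000*) status="bound to loopback only" ;;
  *:9000*)          status="listening on 9000" ;;
  *)                status="not listening on 9000" ;;
esac
echo "$status"
```

If you see the loopback-only case, fix `/etc/hosts` so `master` resolves to 192.168.56.100 on every node, then restart HDFS.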
Please show us your core-site.xml configuration.
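For reference, a minimal core-site.xml consistent with the master:9000 address in the error would look roughly like this (an illustrative sketch — the poster's actual file may differ):

```xml
<configuration>
  <!-- NameNode RPC address; Hadoop 2.x uses fs.defaultFS
       (fs.default.name is the deprecated 1.x name). -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

Whatever the file says, `master` must resolve to 192.168.56.100 (not 127.0.0.1/127.0.1.1) in `/etc/hosts` on every node, and the same value must be used when formatting and starting the NameNode.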
