My Hadoop install keeps throwing an error on every command. I'm posting the full debug output below; could someone please take a look and tell me where I went wrong? Thanks in advance!

Yyeyeye 2018-06-19 05:38:35
hduser@master:~$ hadoop fs -ls /
18/06/11 22:25:54 DEBUG util.Shell: setsid exited with exit code 0
18/06/11 22:25:55 DEBUG conf.Configuration: parsing URL jar:file:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.6.0.jar!/core-default.xml
18/06/11 22:25:55 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@3cc863dd
18/06/11 22:25:55 DEBUG conf.Configuration: parsing URL file:/usr/local/hadoop/etc/hadoop/core-site.xml
18/06/11 22:25:55 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@d0cc4c1
18/06/11 22:25:57 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of successful kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/06/11 22:25:57 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of failed kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/06/11 22:25:57 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[GetGroups], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
18/06/11 22:25:57 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/06/11 22:25:57 DEBUG util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
18/06/11 22:25:57 DEBUG security.Groups: Creating new Groups object
18/06/11 22:25:57 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/06/11 22:25:57 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
18/06/11 22:25:58 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
18/06/11 22:25:58 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
18/06/11 22:25:58 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
18/06/11 22:25:58 DEBUG security.UserGroupInformation: hadoop login
18/06/11 22:25:58 DEBUG security.UserGroupInformation: hadoop login commit
18/06/11 22:25:58 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hduser
18/06/11 22:25:58 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: hduser" with name hduser
18/06/11 22:25:58 DEBUG security.UserGroupInformation: User entry: "hduser"
18/06/11 22:25:58 DEBUG security.UserGroupInformation: UGI loginUser:hduser (auth:SIMPLE)
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
18/06/11 22:25:59 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
18/06/11 22:25:59 DEBUG hdfs.DFSClient: No KeyProvider found.
18/06/11 22:26:00 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
18/06/11 22:26:00 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@2399c277
18/06/11 22:26:00 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$1@26fc13bc: starting with interruptCheckPeriodMs = 60000
18/06/11 22:26:02 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
18/06/11 22:26:02 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
18/06/11 22:26:02 DEBUG ipc.Client: The ping interval is 60000 ms.
18/06/11 22:26:02 DEBUG ipc.Client: Connecting to master/192.168.56.100:9000
18/06/11 22:26:02 DEBUG ipc.Client: closing ipc connection to master/192.168.56.100:9000: Connection refused
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
at org.apache.hadoop.ipc.Client.call(Client.java:1438)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:265)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1625)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
18/06/11 22:26:02 DEBUG ipc.Client: IPC Client (896894357) connection to master/192.168.56.100:9000 from hduser: closed
ls: Call From master/192.168.56.100 to master:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
18/06/11 22:26:02 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@de98793
18/06/11 22:26:02 DEBUG ipc.Client: Stopping client
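
For reference, the client reads the NameNode address from the core-site.xml shown at the top of the log. A quick way to double-check which address it is configured with (judging from the connection attempt it should be hdfs://master:9000; some older guides use the deprecated key fs.default.name instead):

$ grep -A1 -E 'fs.defaultFS|fs.default.name' /usr/local/hadoop/etc/hadoop/core-site.xml
#     <name>fs.defaultFS</name>
#     <value>hdfs://master:9000</value>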
12 replies
thinkerhb 2018-10-20
Looks like a network problem?
kClown1 2018-09-22
Looks like passwordless SSH isn't set up, or the firewall hasn't been turned off.
Mr.别离 2018-08-29
Check whether you can ping the host; if you can, take a look at the passwordless SSH configuration.
CaseyChen5213 2018-08-23
Check the processes first. If they're all running normally, it's most likely a passwordless SSH problem. I ran into "Connection refused" before too, and it went away after I reconfigured passwordless SSH. P.S.: when I set it up, the master node just wouldn't take the configuration; I had to tear it down and redo it several times before it worked.
pucheung 2018-08-23
Run jps to check whether the NameNode started successfully.
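For example, a minimal check (paths assume the /usr/local/hadoop layout from your log; the NameNode log file usually follows the hadoop-<user>-namenode-<host>.log naming pattern):

$ jps
# A healthy HDFS master should list at least:
#   NameNode
#   SecondaryNameNode     <- on a typical small setup
# If NameNode is missing, look at its log for the reason and restart HDFS:
$ tail -n 50 /usr/local/hadoop/logs/hadoop-hduser-namenode-master.log
$ /usr/local/hadoop/sbin/start-dfs.sh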
qq_39403536 2018-08-21
Try ssh-ing into your own machine to confirm that passwordless login actually works.
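Something like this (a minimal sketch; assumes an RSA key and the default ~/.ssh paths):

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # skip if a key already exists
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh master                                       # should log in with no password prompt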
weitao1010 2018-08-20
1. Can you ping the IP?
2. Is the firewall turned off?
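
For example (the firewall commands depend on your distribution; ufw is the Ubuntu frontend, firewalld the CentOS/RHEL one):

$ ping -c 3 192.168.56.100
$ sudo ufw status && sudo ufw disable              # Ubuntu
$ sudo systemctl stop firewalld                    # CentOS/RHEL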
五哥 2018-07-25
18/06/11 22:26:02 DEBUG ipc.Client: IPC Client (896894357) connection to master/192.168.56.100:9000 from hduser: closed

Check the firewall, and also the port.
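For example, to see whether anything is listening on port 9000 and on which address (a quick sketch; netstat may need sudo to show process names):

$ sudo netstat -tlnp | grep 9000
# tcp  0  0 192.168.56.100:9000 ... LISTEN <pid>/java   <- reachable from the network
# tcp  0  0 127.0.0.1:9000      ... LISTEN <pid>/java   <- loopback only: remote clients get "Connection refused"
# (no output)                                           <- nothing listening: the NameNode is probably not running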
Yyeyeye 2018-06-25
I've already checked everything you all mentioned and I'm sure those are fine. Are there any other possible causes?!
曹宇飞丶 2018-06-20
This is the checklist from the Hadoop wiki page linked in the error message (http://wiki.apache.org/hadoop/ConnectionRefused):
1. Check that the hostname the client is using is correct. If it's in a Hadoop configuration option, examine it carefully and try doing a ping by hand.
2. Check that the IP address the client is trying to talk to for the hostname is correct.
3. Make sure the destination address in the exception isn't 0.0.0.0; this means that you haven't actually configured the client with the real address for that service, and instead it is picking up the server-side property telling it to listen on every port for connections.
4. If the error message says the remote service is on "127.0.0.1" or "localhost", that means the configuration file is telling the client that the service is on the local server. If your client is trying to talk to a remote system, then your configuration is broken.
5. Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
6. Check that the port the client is trying to talk to matches the port the server is offering its service on. The netstat command is useful there.
7. On the server, try a telnet localhost <port> to see if the port is open there.
8. On the client, try a telnet <server> <port> to see if the port is accessible remotely.
9. Try connecting to the server/port from a different machine, to see if it is just the single client misbehaving.
10. If your client and the server are in different subdomains, it may be that the configuration of the service is only publishing the basic hostname rather than the fully qualified domain name. A client in a different subdomain may then unintentionally attempt to connect to a host in the local subdomain, and fail.
11. If you are using a Hadoop-based product from a third party, please use the support channels provided by the vendor.
12. Please do not file bug reports related to your problem, as they will be closed as Invalid.
A concrete check for steps 7 and 8 on this cluster follows below.
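For steps 7 and 8, with port 9000 taken from the error message above:

$ telnet localhost 9000          # step 7: run on the server (master)
$ telnet master 9000             # step 8: run from the client machine
# "Connection refused" from telnet means nothing is accepting connections on that port.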
