Connection refused error when running the WordCount example from Eclipse on Red Hat Linux

lizhike 2013-05-15 04:03:59
When I run the WordCount example from Eclipse on Red Hat Linux, it fails with a Connection refused error. I've been searching Baidu for several days without finding a solution, and changing the configuration files didn't help either. Does anyone know how to fix this? Any help would be much appreciated!

Here is the error output:
13/05/15 15:56:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s).
13/05/15 15:56:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s).
13/05/15 15:56:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s).
13/05/15 15:56:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s).
13/05/15 15:56:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s).
13/05/15 15:56:21 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-hadoop/mapred/staging/hadoop201620249/.staging/job_local_0001
13/05/15 15:56:21 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
Exception in thread "main" java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:136)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:887)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at WordCount.main(WordCount.java:65)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
at org.apache.hadoop.ipc.Client.call(Client.java:1050)
... 23 more
撸大湿 2013-05-20
localhost/127.0.0.1:8020 vs.

<name>mapred.job.tracker</name>
<value>localhost:9001</value>

The error comes from a mismatch between the port being called from Eclipse and the ports in your configuration files; compare the two values above. One way to pin the ports in the Eclipse run is sketched below. There are some other problems as well, listed here for you:
- No fs tmp directory is specified. If you don't set a tmp directory, you need to specify the NAME, DATA, and MAPRED SYSTEM directories individually.
- No MAPRED LOCAL DIR is specified.
- No hostnames are configured in /etc/hosts, which will leave you with a lot of lingering problems.

Hadoop questions can be asked in the Hadoop forum: http://bbs.csdn.net/forums/hadoop
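A minimal sketch of the port fix when launching from Eclipse, assuming the values quoted above (hdfs://localhost:9000 and localhost:9001, Hadoop 1.x property names); WordCountDriver and the argument paths are illustrative names, not from the original post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Eclipse does not read HADOOP_HOME/conf, so the built-in defaults win
        // (NameNode port 8020, local job runner). Pin the cluster values here,
        // or load the real files with conf.addResource(new Path(...)) instead.
        conf.set("fs.default.name", "hdfs://localhost:9000");
        conf.set("mapred.job.tracker", "localhost:9001");

        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        // ... set mapper, reducer, and output key/value classes as in the
        // standard WordCount example ...
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}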
Leeezk 2013-05-20
Quoting the reply from tntzbzc in post #1:
Call to localhost/127.0.0.1:8020: this is a configuration problem. Post the following configuration files:
/etc/hosts
HADOOP_HOME/conf/core-site.xml
HADOOP_HOME/conf/hdfs-site.xml
HADOOP_HOME/conf/mapred-site.xml
/etc/hosts is configured as follows:

10.21.66.101 lee #Added by NetworkManager
127.0.0.1 localhost.localdomain localhost
::1 lee localhost6.localdomain6 localhost6

hadoop_home/conf/core-site.xml is configured as follows:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hadoop_home/conf/hdfs-site.xml is configured as follows:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

hadoop_home/conf/mapred-site.xml is configured as follows:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

I've spent several more days fiddling with this and the problem is still unsolved, but I'm relieved to see a moderator step in. Please advise.
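These files look consistent to me, yet the log shows the client calling port 8020 rather than the configured 9000, which makes me suspect the Eclipse run never reads core-site.xml at all (the 8020 default typically comes from an hdfs:// URI with no explicit port, for example from the Eclipse plugin's location settings). A quick check of what the job client actually resolves, a minimal sketch (ConfCheck is just an illustrative name):

import org.apache.hadoop.conf.Configuration;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // If this prints the built-in defaults ("file:///" and "local") instead
        // of hdfs://localhost:9000 and localhost:9001, the XML files above are
        // not on the Eclipse run's classpath.
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
    }
}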
撸大湿 2013-05-16
Call to localhost/127.0.0.1:8020: this is a configuration problem. Post the following configuration files:
/etc/hosts
HADOOP_HOME/conf/core-site.xml
HADOOP_HOME/conf/hdfs-site.xml
HADOOP_HOME/conf/mapred-site.xml
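While you gather those files, also confirm that something is actually listening on the port being called; if the NameNode is down or bound to a different port, you get the same Connection refused no matter what the client configuration says. A minimal probe, just a sketch (host and port taken from the log above; PortProbe is an illustrative name):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        // Defaults match the failing call in the log; override on the command line.
        String host = args.length > 0 ? args[0] : "localhost";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8020;
        Socket s = new Socket();
        try {
            // Throws ConnectException ("Connection refused") if nothing listens
            // there, which is exactly what the Hadoop IPC client reports above.
            s.connect(new InetSocketAddress(host, port), 2000);
            System.out.println(host + ":" + port + " is reachable");
        } finally {
            s.close();
        }
    }
}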
