Eclipse on Windows 7 fails to connect to Hadoop on a CentOS VM

DaveANote 2017-06-16 01:52:37
Hi all, I'm new to Hadoop. Following an online tutorial, I set up Hadoop in pseudo-distributed mode. When I connect to it from Eclipse on Windows 7 and try to upload a file, I get an error.

Basic information:
CentOS version: 7.2
The CentOS firewall is disabled;
From Windows 7 I can ping the CentOS IP;
From Windows 7 I can SSH into CentOS using its IP;
After starting Hadoop on CentOS, jps shows:
17872 NameNode
19297 Jps
18764 ResourceManager
18078 DataNode
18884 NodeManager
18427 SecondaryNameNode

I can open the NameNode web UI at localhost:50070, and it shows port 9000:
Overview 'localhost:9000' (active)
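
The "Overview 'localhost:9000'" banner suggests fs.defaultFS points at localhost. As a minimal sketch (assuming the stock pseudo-distributed tutorial setup; the actual file may differ), core-site.xml would contain:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- "localhost" resolves to 127.0.0.1, so the NameNode RPC port 9000
         is bound to the loopback interface only -->
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>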

The VM uses bridged networking, and a static IP is configured in CentOS.

Output of ip addr on CentOS:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:c0:c3:55 brd ff:ff:ff:ff:ff:ff
inet 10.10.XXX.XXX/24 brd 10.10.YYY.YYY scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fec0:c355/64 scope link
valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 52:54:00:fc:1b:57 brd ff:ff:ff:ff:ff:ff
inet 192.168.AAA.AAA/24 brd 192.168.WWW.WWW scope global virbr0
valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 500
link/ether 52:54:00:fc:1b:57 brd ff:ff:ff:ff:ff:ff


Error message:
Exception in thread "main" java.net.ConnectException: Call From <computer name>/192.168.FFF.FFF (the IPv4 address of the "VMware Network Adapter VMnet1" Ethernet adapter) to 10.10.XXX.XXX (the CentOS IP, which I can ping and reach over SSH):9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1474)
at org.apache.hadoop.ipc.Client.call(Client.java:1401)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at $Proxy14.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy15.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2742)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2713)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1819)
at HadoopTest.MakeDir(HadoopTest.java:31)
at HadoopTest.main(HadoopTest.java:19)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1523)
at org.apache.hadoop.ipc.Client.call(Client.java:1440)
... 21 more

Code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopTest {

    public static void main(String[] args) throws IOException {
        MakeDir();
    }

    // Create a directory named X1
    public static void MakeDir() throws IOException {
        System.out.println("start");
        Configuration conf = getConf();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/X1");
        fs.mkdirs(path);
        fs.close();
        System.out.println("done");
    }

    private static Configuration getConf() {
        Configuration conf = new Configuration();
        // This line is the key: it mirrors the value in the Hadoop config files.
        // fs.default.name is the deprecated key for fs.defaultFS; Hadoop treats
        // a schemeless value as hdfs://host:port.
        conf.set("fs.default.name", "10.10.XXX.XXX:9000");
        return conf;
    }
}
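
Before digging into the Hadoop client, it may help to rule out plain TCP connectivity from Windows to port 9000. A minimal sketch (PortCheck is a hypothetical throwaway class, not part of the project; the target is the masked CentOS address):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Attempt a raw TCP connect to the NameNode RPC port with a 3 s timeout.
        // "Connection refused" here means nothing is listening on that
        // address/port, so the Hadoop client cannot succeed either.
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("10.10.XXX.XXX", 9000), 3000);
            System.out.println("port 9000 is reachable");
        }
    }
}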

Could anyone point out where the problem is?
1 reply
DaveANote 2017-06-16
Update: I copied the same code into the Linux system and ran it from Eclipse there, after changing getConf() as follows:

private static Configuration getConf() {
    Configuration conf = new Configuration();
    // This line is the key: it mirrors the value in the Hadoop config files.
    // conf.set("fs.default.name", "10.10.XXX.XXX:9000"); // before the change
    conf.set("fs.default.name", "127.0.0.1:9000"); // after the change
    return conf;
}

The directory X1 was created successfully; the program ran without errors.
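
Taken together, the two results point at the NameNode listening only on the loopback interface: 127.0.0.1:9000 works from inside the guest, while 10.10.XXX.XXX:9000 is refused from outside. One way to confirm this on the CentOS guest (a sketch; ss -tlnp works as well if net-tools is not installed):

netstat -tlnp | grep 9000

If the Local Address column shows 127.0.0.1:9000, pointing fs.defaultFS in core-site.xml at the static IP (or 0.0.0.0) and restarting HDFS should let the Windows client connect.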
