Problem binding to [master:9000] java.net.BindException: Cannot assign requested

gakki_smile 2017-08-01 09:50:49
A cluster set up on three cloud servers.
When I run start-all.sh, jps on the master shows only the Jps process itself... Could someone please help?
java.net.BindException: Problem binding to [master:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:719)
at org.apache.hadoop.ipc.Server.bind(Server.java:419)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:561)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2166)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:897)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:505)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:480)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:742)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:311)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:614)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
2017-08-01 09:39:13,605 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-08-01 09:39:13,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/47.93.10.102

[hadoop@master logs]$ cat /etc/hosts
::1 localhost localhost.localdomain localhost6 localhost6.localdomain
127.0.0.1 localhost
47.93.10.102 master
118.89.106.74 slave1
115.159.50.76 slave2
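
The hosts file above maps master to 47.93.10.102, the server's public IP. On most cloud VMs (Alibaba Cloud included) the public address is NAT-mapped and is never actually assigned to a local network interface, and a process can only bind to addresses that exist on a local interface (or to 0.0.0.0), which is exactly what "Cannot assign requested address" means. A quick check (a diagnostic sketch, not from the original post):

# List the IPv4 addresses assigned to local interfaces.
ip addr show | grep 'inet '

# If 47.93.10.102 does not appear in this list, the NameNode cannot
# bind master:9000 while /etc/hosts resolves master to that public IP.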

yarn-site.xml:

<configuration>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>47.93.10.102:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>47.93.10.102:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>47.93.10.102:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>47.93.10.102:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>47.93.10.102:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
core-site.xml:

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://47.93.10.102:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/chadoop/hadoop/hadoop-2.5.0/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
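
One thing worth flagging in core-site.xml: fs.defaultFS points at the raw public IP. A common alternative (my suggestion, not something the poster tried here) is to reference the hostname instead and let each machine's /etc/hosts decide what master resolves to:

<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>

With this, the master can resolve master to a locally bindable address while the slaves resolve it to the public IP they use to reach it.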
hdfs-site.xml:

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>dfs.namenode.secondary.http-address</name>
<value>47.93.10.102:9011</value>
<description>HTTP address of the secondary (backup) NameNode</description>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/chadoop/hadoop/hadoop-2.5.0/dfs/name</value>
<description>Location of the NameNode directory</description>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/chadoop/hadoop/hadoop-2.5.0/dfs/data</value>
<description>Location of the DataNode storage directory</description>
</property>

<property>
<name>dfs.replication</name>
<value>3</value>
<description>Number of replicas kept by HDFS</description>
</property>

<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

</configuration>



mapred-site.xml:

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>Use YARN as the MapReduce scheduling framework</description>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>47.93.10.102:10020</value>
<description>Address of the MapReduce job history server</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>47.93.10.102:19888</value>
<description>Web UI address of the MapReduce job history server</description>
</property>
</configuration>

These should be all the main config files. I'm a complete beginner here; any guidance would be much appreciated, thanks!
5 replies
gakki_smile 2017-08-01
Of course, the slaves still can't connect to master:9000 that way. Still looking for a solution.
gakki_smile 2017-08-01
Hi, I dug into this a bit more and found:

netstat -an |grep 9000
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN

It is listening on localhost:9000, so the slaves can't connect to master:9000. After I changed the address to 127.0.0.1 on the master:

jps
4075 Jps
2734 ResourceManager
3550 NameNode
2590 SecondaryNameNode

everything starts up normally. So my question is: how do I get the machine to listen on master:9000?
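
The usual fix in this situation (a sketch of a suggestion, not from the thread itself; it assumes the 172.17.182.176 seen in the netstat dump elsewhere in this thread really is the master's private interface address, so verify with ip addr first) is to make the hostname master resolve to the private IP on the master itself and to the public IP on the slaves, then reference the hostname in the configs:

# /etc/hosts on the master only (172.17.182.176 is an assumption):
172.17.182.176 master
118.89.106.74 slave1
115.159.50.76 slave2

# /etc/hosts on each slave (public IP, reachable through the cloud NAT):
47.93.10.102 master

With fs.defaultFS set to hdfs://master:9000 on every node, the NameNode binds the private address (which the public IP is NAT-forwarded to), and the slaves connect via the public IP.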
夜无边CN 2017-08-01
About <name>fs.defaultFS</name> <value>hdfs://47.93.10.102:9000</value>: did you change this default port? Make sure all the other machines were updated to match.
gakki_smile 2017-08-01
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      698/httpd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1049/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2145/master
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      699/php-fpm: master
tcp        0      0 172.17.182.176:40334    106.11.68.13:80         ESTABLISHED 2034/AliYunDun
tcp        0     52 172.17.182.176:22       60.208.111.194:10645    ESTABLISHED 8440/sshd: root@pts
tcp        0      0 172.17.182.176:22       58.56.96.29:10134       ESTABLISHED 8310/sshd: root@pts
tcp6       0      0 :::3306                 :::*                    LISTEN      2004/mysqld
tcp6       0      0 :::21                   :::*                    LISTEN      706/vsftpd
udp        0      0 172.17.182.176:123      0.0.0.0:*                           715/ntpd
udp        0      0 127.0.0.1:123           0.0.0.0:*                           715/ntpd
udp        0      0 0.0.0.0:123             0.0.0.0:*                           715/ntpd
udp6       0      0 :::123                  :::*                                715/ntpd

The port isn't occupied...
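
One observation about the listing above (mine, not from the thread): it does show a LISTEN socket on 127.0.0.1:9000 owned by 699/php-fpm: master, so if this dump was taken on the Hadoop master, port 9000 is in fact taken on the loopback interface and would collide with a NameNode told to listen on 127.0.0.1:9000. A quick way to see who owns the port:

# Show the owning PID/program for any listener on port 9000 (root needed for -p)
sudo netstat -tlnp | grep ':9000'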
gakki_smile 2017-08-01
Quoting reply #2 from w574717155:
About <name>fs.defaultFS</name> <value>hdfs://47.93.10.102:9000</value>: did you change this default port? Make sure all the other machines were updated to match.
I changed it and it still doesn't work... Please take another look at my problem and help me figure it out.
