Problem binding to [master:9000] java.net.BindException: Cannot assign requested address
A cluster built on three cloud servers.
After running start-all.sh, jps on the master shows only the jps process itself... hoping someone can help.
java.net.BindException: Problem binding to [master:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:719)
at org.apache.hadoop.ipc.Server.bind(Server.java:419)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:561)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2166)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:897)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:505)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:480)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:742)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:311)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:614)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
2017-08-01 09:39:13,605 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-08-01 09:39:13,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/47.93.10.102
[hadoop@master logs]$ cat /etc/hosts
::1 localhost localhost.localdomain localhost6 localhost6.localdomain
127.0.0.1 localhost
47.93.10.102 master
118.89.106.74 slave1
115.159.50.76 slave2
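On many cloud providers the public IP mapped to the VM (47.93.10.102 in this /etc/hosts) is NAT-forwarded and is not actually assigned to any local network interface, and binding to an address no interface owns is exactly what raises "Cannot assign requested address". A minimal sketch (not specific to Hadoop) to check whether an address is bindable on this machine:

```python
import socket

def can_bind(host, port=0):
    """Try to bind a TCP socket to host:port; port 0 lets the OS pick one."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        # EADDRNOTAVAIL here corresponds to the NameNode's BindException
        return False
    finally:
        s.close()

print(can_bind("127.0.0.1"))  # loopback is always bindable
```

If `can_bind("47.93.10.102")` returns False on the master while `ip addr` shows only a private address, the usual fix is to point `master` in /etc/hosts (on the master itself) at the private address rather than the public one.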
yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>47.93.10.102:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>47.93.10.102:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>47.93.10.102:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>47.93.10.102:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>47.93.10.102:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
core-site.xml:
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://47.93.10.102:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/chadoop/hadoop/hadoop-2.5.0/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
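fs.defaultFS above points at the public IP 47.93.10.102, which is the address the NameNode then tries to bind. A hedged way to discover which address the VM actually owns (the target address is arbitrary; a UDP connect() sends no packets, it only consults the routing table):

```python
import socket

def local_ip():
    """Return the IP of the interface the default route would use.

    connect() on a UDP socket transmits nothing; it only selects a
    route, so this works without sending traffic to the target.
    Falls back to loopback when no usable route exists.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 53))  # arbitrary public address, never contacted
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(local_ip())
```

If this prints a private address (e.g. 10.x or 172.16-31.x) rather than 47.93.10.102, the public IP is NAT-only, and fs.defaultFS should use the hostname or the private address instead.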
hdfs-site.xml:
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>47.93.10.102:9011</value>
<description>备份namenode的http地址</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/chadoop/hadoop/hadoop-2.5.0/dfs/name</value>
<description>directory where the NameNode stores its metadata</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/chadoop/hadoop/hadoop-2.5.0/dfs/data</value>
<description>directories where the DataNode stores its blocks</description>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
<description>number of replicas kept by HDFS</description>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml:
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<description>sets YARN as the MapReduce execution framework</description>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>47.93.10.102:10020</value>
<description>address of the MapReduce JobHistory server</description>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>47.93.10.102:19888</value>
<description>web UI address of the MapReduce JobHistory server</description>
</property>
</configuration>
These should be the main configuration files. I'm a beginner here, so any advice would be much appreciated, thanks!