Error setting up a fully distributed cluster (the DataNode will not start)

maomingjie001 2013-05-16 07:27:04
The DataNode error is:
2013-05-16 18:36:02,426 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException: mmj27: mmj27: Name or service not known
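This error means the JVM cannot resolve the machine's own hostname (mmj27), so the DataNode dies before it can register. A minimal check-and-fix sketch, assuming a Linux node; the IP below is a placeholder, substitute the node's real address:

hostname                 # should print mmj27
getent hosts mmj27       # empty output means the name does not resolve
# If it does not resolve, map the hostname to the node's real IP in /etc/hosts, e.g.:
#   192.168.10.27   mmj27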

qq_24032087 2015-08-08
2015-08-08 09:50:18,600 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 2 to reach 2 Not able to place enough replicas
2015-08-08 09:50:18,601 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2015-08-08 09:50:18,601 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000, call addBlock(/tmp/hadoop-root/mapred/system/jobtracker.info, DFSClient_NONMAPREDUCE_-1998092591_1, null) from 192.168.79.1:49384: error: java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-root/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
Moderator, could you take a look at this error for me? Is it because I am running out of space?
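For reference, "could only be replicated to 0 nodes" usually means the NameNode sees no live DataNodes rather than a full disk; a quick way to check both, using Hadoop 1.x command names:

hadoop dfsadmin -report   # live DataNode count plus per-node capacity and usage
df -h /tmp                # free space under the default hadoop.tmp.dir location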
maomingjie001 2013-05-18
Bro, help me with one more problem. Today I tried to run a job remotely from local Eclipse. The output did get written to the HDFS on the server, but the server's 50030 page shows no completed jobs. It feels like it ran locally in pseudo-distributed mode and then just uploaded the results.
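That suspicion has a standard cause in Hadoop 1.x: if the configuration the Eclipse client picks up leaves mapred.job.tracker at its default value of "local", the job runs in-process with LocalJobRunner, so the output can still land on the remote HDFS while the 50030 page never shows a job. A hedged check on the client machine (the config path is an assumption; whatever config sits on the client's classpath is what actually counts):

grep -A1 'mapred.job.tracker' "$HADOOP_HOME"/conf/mapred-site.xml \
  || echo "mapred.job.tracker unset -> defaults to 'local' (LocalJobRunner)"

If it is unset or "local", point it at master:9001 in the client-side config.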
撸大湿 2013-05-17
192.168.10.25: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=master

192.168.10.26: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=slave1

192.168.10.27: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=slave2
----------------------------------------------------------------------------
/etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.10.25 master
192.168.10.26 slave1
192.168.10.27 slave2
Sync this file to all three machines.
----------------------------------------------------------------------------
Create the following on every machine:
/opt/hadoopdata/tmp
/opt/hadoopdata/hdfs/sname
HADOOP_HOME/conf/excludes
Delete everything under /opt/hadoopdata/tmp.
Edit core-site.xml (HADOOP_HOME/conf/core-site.xml):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoopdata/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <!-- file system properties -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/opt/hadoopdata/hdfs/sname</value>
    <final>true</final>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/conf/excludes</value>
  </property>
</configuration>
----------------------------------------------------------------------------
Create the following directories on every machine:
/opt/hadoopdata/hdfs/name
/opt/hadoopdata/hdfs/data
Edit hdfs-site.xml (HADOOP_HOME/conf/hdfs-site.xml):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>slaver1:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoopdata/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoopdata/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <!--property>
    <name>dfs.datanode.du.reserved</name>
    <value>21474836480</value>
  </property-->
  <property>
    <name>dfs.balance.bandwidthPerSec</name>
    <value>1048576</value>
    <description>1MB PER SEC</description>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>15</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>60</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
----------------------------------------------------------------------------
Create the following directory on every machine:
/opt/hadoopdata/hdfs/maprtmp
Edit mapred-site.xml (HADOOP_HOME/conf/mapred-site.xml):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>http://master:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/opt/hadoopdata/hdfs/maprtmp</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>6</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx256m</value>
    <!-- Not marked as final so jobs can include JVM debugging options -->
  </property>
</configuration>
----------------------------------------------------------------------------
Edit HADOOP_HOME/conf/masters:
slaver1
Edit HADOOP_HOME/conf/slaves:
master
slaver1
slaver2
----------------------------------------------------------------------------
Create /opt/hadoopdata/hadooppids on every machine.
Edit HADOOP_HOME/conf/hadoop-env.sh:
export HADOOP_PID_DIR=/opt/hadoopdata/hadooppids
Check that the export JAVA_HOME=XXXXX line points at your JDK directory.
----------------------------------------------------------------------------
Make sure SSH works between all three machines.
Make sure telnet to ports 9000, 9001, 50060, 50030, 50070 and 50090 works.
----------------------------------------------------------------------------
Sync everything under HADOOP_HOME/conf/ across the three machines.
----------------------------------------------------------------------------
Format the namespace.
----------------------------------------------------------------------------
Start Hadoop. (A shell sketch of these final checks and the start sequence follows below.)
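A minimal shell sketch of those final checks and the start sequence, assuming a Hadoop 1.x install and the hostnames defined in the /etc/hosts above; adjust host and port per daemon (50090 lives on the secondary NameNode host, 50060 on each TaskTracker host):

# SSH reachability between the nodes
for h in master slave1 slave2; do ssh "$h" hostname; done

# Port reachability (shown against master as an example)
for p in 9000 9001 50030 50060 50070 50090; do
  telnet master "$p" </dev/null
done

# Format the namespace -- this destroys existing HDFS metadata, run it once
hadoop namenode -format

# Start the HDFS and MapReduce daemons from the master
start-all.sh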
maomingjie001 2013-05-17
It's 1.0.4.
撸大湿 2013-05-17
I am getting a bit lost here. What Hadoop version are you on? I will write a config for you.
maomingjie001 2013-05-17
mapred-site:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
  </property>
</configuration>
maomingjie001 2013-05-17
OK, I rebooted the servers and turned off the firewall, and now the NameNode will not start. The NameNode error is:
Failed to initialize recovery manager. org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /data/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
The JobTracker error is:
192.168.10.25:42236: error: java.io.IOException: File /data/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /data/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
撸大湿 2013-05-17
Problem binding to master/192.168.10.25:8021 : Address already in use
This is probably your JobTracker being brought up a second time.
Also, your mapred-site config is a mess; strip out the comments and keep only the basic settings.
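A quick way to confirm and clear a stale JobTracker that still holds port 8021 (a sketch; the netstat flags are Linux-specific, and jps ships with the JDK):

netstat -tlnp | grep :8021                      # which PID owns the port
jps                                             # lists the running Hadoop daemons
kill "$(jps | awk '/JobTracker/ {print $1}')"   # stop the stale JobTracker
stop-mapred.sh && start-mapred.sh               # then restart MapReduce cleanly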
maomingjie001 2013-05-17
The JobTracker error is:
09:38:10,518 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to master/192.168.10.25:8021 : Address already in use
    at org.apache.hadoop.ipc.Server.bind(Server.java:227)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
maomingjie001 2013-05-17
Yesterday I made the changes you suggested and turned the firewall off again, and everything started up fine. But when I stop the cluster, the JobTracker and NameNode will not shut down. The NameNode log shows:
2013-05-17 09:22:19,604 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addChild(FSDirectory.java:1099)
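One common reason stop-all.sh cannot stop the NameNode or JobTracker is that the PID files default to /tmp and get cleaned up, so the stop scripts no longer know which processes to kill; this is a guess for this cluster, not a confirmed diagnosis. A quick check:

ls /tmp/hadoop-*.pid 2>/dev/null || echo "PID files missing"
# Moving them somewhere durable avoids this, e.g. in conf/hadoop-env.sh:
#   export HADOOP_PID_DIR=/opt/hadoopdata/hadooppids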
maomingjie001 2013-05-17
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <!-- dfs.name.dir: where the NameNode keeps its data on the local filesystem -->
    <name>dfs.name.dir</name>
    <value>/data/hadoop/name</value>
  </property>
  <property>
    <!-- dfs.data.dir: where the DataNode keeps its data on the local filesystem -->
    <name>dfs.data.dir</name>
    <value>/data/hadoop/data</value>
  </property>
  <property>
    <!-- HDFS replication factor: 2 means each file uploaded to HDFS is stored as 2 replicas -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <!-- hadoop.job.ugi: the user and group -->
    <name>hadoop.job.ugi</name>
    <value>hadoop,supergroup</value>
  </property>
</configuration>
maomingjie001 2013-05-17
masters contains:
master
slaves contains:
master
slave1
slave2
撸大湿 2013-05-17
Post your HADOOP_HOME/conf/hdfs-site.xml. This is how I configure it:
<property>
  <name>dfs.http.address</name>
  <value>master:50070</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>slaver1:50090</value>
</property>
maomingjie001 2013-05-17
I have now changed conf/masters on every machine to:
master
and conf/slaves on every machine to:
master
slave1
slave2
The result is that all three servers are now running a TaskTracker and a DataNode, including the master node. Moderator, what is going on here?
撸大湿 2013-05-17
Also, turn off the firewall on all the DataNodes (iptables stop). If that still doesn't work, leave your QQ number and I'll add you; this is hard to sort out here.
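Spelled out for RHEL/CentOS-style init scripts (which this cluster's /etc/sysconfig layout suggests):

service iptables stop    # stop the firewall now, on every node
chkconfig iptables off   # keep it off across reboots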
撸大湿 2013-05-17
The HADOOP_HOME/conf/masters file is not the master's address; it is the SecondaryNameNode's address. Try this: delete everything in the HADOOP_HOME/conf/masters file, restart Hadoop, and see what happens.
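Concretely, this is how the Hadoop 1.x start scripts read the two files:

cat "$HADOOP_HOME"/conf/masters   # hosts where start-dfs.sh launches a SecondaryNameNode
cat "$HADOOP_HOME"/conf/slaves    # hosts where a DataNode and a TaskTracker are launched
# The NameNode itself runs on whichever host you invoke start-dfs.sh from.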
maomingjie001 2013-05-17
Er... is it like this on every server? And why is masters set to slaver1?
Edit HADOOP_HOME/conf/masters:
slaver1
Edit HADOOP_HOME/conf/slaves:
master
slaver1
slaver2
撸大湿 2013-05-17
You didn't follow my setup. Configure it exactly as I posted in reply #27.
maomingjie001 2013-05-17
2013-05-17 14:30:18,175 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_-8047366707695225231_1439 bad datanode[0] nodes == null
2013-05-17 14:30:18,175 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/home/hadoop/tmp/mapred/system/jobtracker.info" - Aborting...
2013-05-17 14:30:18,175 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://master:9000/home/hadoop/tmp/mapred/system/jobtracker.info failed!
2013-05-17 14:30:18,176 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2013-05-17 14:30:18,180 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager. org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /home/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
This is the JobTracker error. Can you tell where the problem is?
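Given the reformat-and-restart cycles earlier in the thread, one classic cause of "replicated to 0 nodes" is a namespaceID mismatch: re-formatting the NameNode assigns it a new namespaceID, and DataNodes still holding the old one refuse to register. A hedged check, using the dfs.name.dir/dfs.data.dir paths from the hdfs-site.xml posted in this thread (adjust if yours differ):

grep namespaceID /data/hadoop/name/current/VERSION   # on the NameNode
grep namespaceID /data/hadoop/data/current/VERSION   # on each DataNode
# If the IDs differ, clear the DataNode's data dir and restart that DataNode.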
撸大湿 2013-05-17
Did you configure it following what I posted in reply #27?