Hadoop 1.0.4 reports an error at startup, but putting files and running WordCount both work fine — why?

poiunet 2013-01-11 11:56:30
The log is below. I've already adjusted the firewall and directory permissions. At first even putting files failed; now both put and the WordCount job run without problems, but the error still appears every time I start up. Can anyone explain what's going on? I've searched through a lot of threads without figuring it out — any help appreciated!

2013-01-11 23:32:55,016 INFO org.mortbay.log: jetty-6.1.26
2013-01-11 23:32:55,135 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50030_job____yn7qmk, using /tmp/Jetty_0_0_0_0_50030_job____yn7qmk_1105396104756041622
2013-01-11 23:32:55,678 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2013-01-11 23:32:55,682 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-01-11 23:32:55,682 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.
2013-01-11 23:32:55,683 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2013-01-11 23:32:55,683 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2013-01-11 23:32:56,011 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2013-01-11 23:32:56,059 INFO org.apache.hadoop.mapred.JobHistory: Creating DONE folder at file:/usr/hadoop-1.0.4/logs/history/done
2013-01-11 23:32:56,061 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode
2013-01-11 23:32:56,064 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030
2013-01-11 23:32:56,064 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030
2013-01-11 23:32:56,066 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive
2013-01-11 23:32:56,156 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /home/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy5.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy5.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)

2013-01-11 23:32:56,156 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
2013-01-11 23:32:56,156 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/home/hadoop/tmp/mapred/system/jobtracker.info" - Aborting...
2013-01-11 23:32:56,156 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://master:9000/home/hadoop/tmp/mapred/system/jobtracker.info failed!
2013-01-11 23:32:56,156 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
2013-01-11 23:32:56,164 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /home/hadoop/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy5.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy5.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)
2013-01-11 23:33:06,166 WARN org.apache.hadoop.mapred.JobTracker: Retrying...
2013-01-11 23:33:06,329 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
2013-01-11 23:33:06,338 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
2013-01-11 23:33:06,338 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
2013-01-11 23:33:06,338 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-01-11 23:33:06,338 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes
2013-01-11 23:33:06,341 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
2013-01-11 23:33:06,339 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-01-11 23:33:06,340 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2013-01-11 23:33:06,340 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
2013-01-11 23:33:06,340 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
2013-01-11 23:33:06,340 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
2013-01-11 23:33:06,340 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
2013-01-11 23:33:06,340 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
2013-01-11 23:33:06,341 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
2013-01-11 23:33:06,341 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
2013-01-11 23:33:06,341 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
2013-01-11 23:33:06,341 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
2013-01-11 23:33:06,347 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
2013-01-11 23:33:10,050 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/localhost
2013-01-11 23:33:10,051 INFO org.apache.hadoop.mapred.JobTracker: Adding tracker tracker_localhost:localhost/127.0.0.1:47152 to host localhost
2013-01-11 23:39:46,035 INFO org.apache.hadoop.mapred.JobInProgress: job_201301112332_0001: nMaps=3 nReduces=1 max=-1
2013-01-11 23:39:46,038 INFO org.apache.hadoop.mapred.JobTracker: Job job_201301112332_0001 added successfully for user 'hadoop' to queue 'default'
2013-01-11 23:39:46,039 INFO org.apache.hadoop.mapred.AuditLogger: USER=hadoop IP=192.168.1.103 OPERATION=SUBMIT_JOB TARGET=job_201301112332_0001 RESULT=SUCCESS
5 replies
poiunet 2013-01-17
Nobody's replying anymore~~~
easonworld 2013-01-14
Quoting reply #2 from abc41106:
Quoting reply #1 from abc41106: This may be caused by the metadata on the datanodes being inconsistent with the master node. Stop all services, delete the files in each datanode's temp directory, then reformat the namenode. Note that reformatting the namenode loses all data, so think twice before doing it.
Use the method from reply #1 when the namenode's and datanodes' jobid don't match. It might work in this case too, but be careful about reformatting the namenode.
poiunet 2013-01-14
The jobid matches, and I've already reformatted countless times. I'm a beginner. Right now it reports this error every time I start up, but putting files and running simple little jobs works fine — the error only shows at startup.
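For anyone checking the same thing: in Hadoop 1.x the ID the namenode and datanodes must agree on is the namespaceID stored in each node's `current/VERSION` file. A minimal comparison sketch (`check_ns_match` is a hypothetical helper, not a Hadoop command; the paths in the usage comment assume hadoop.tmp.dir=/home/hadoop/tmp, as in the log above):

```shell
# Extract the namespaceID from a Hadoop storage-directory VERSION file.
namespace_id() {
    grep '^namespaceID=' "$1" | cut -d= -f2
}

# Compare the namespaceID of a namenode and a datanode VERSION file.
check_ns_match() {
    nn_id=$(namespace_id "$1")
    dn_id=$(namespace_id "$2")
    if [ -n "$nn_id" ] && [ "$nn_id" = "$dn_id" ]; then
        echo "match:$nn_id"
    else
        echo "mismatch:nn=$nn_id,dn=$dn_id"
    fi
}

# Usage on a live node (adjust to your dfs.name.dir / dfs.data.dir):
#   check_ns_match /home/hadoop/tmp/dfs/name/current/VERSION \
#                  /home/hadoop/tmp/dfs/data/current/VERSION
```

If this reports a mismatch, the datanode will refuse to join the namenode's namespace, which produces exactly the "could only be replicated to 0 nodes" error.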
abc41106 2013-01-12
Quoting reply #1 from abc41106:
This may be caused by the metadata on the datanodes being inconsistent with the master node. Stop all services, delete the files in each datanode's temp directory, then reformat the namenode.
But note that reformatting the namenode loses all data, so think twice before doing it.
abc41106 2013-01-12
This may be caused by the metadata on the datanodes being inconsistent with the master node. Stop all services, delete the files in each datanode's temp directory, then reformat the namenode.
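The steps above, sketched as Hadoop 1.x commands (an operational sketch, not something to run blindly — it assumes hadoop.tmp.dir=/home/hadoop/tmp as in the log, and reformatting destroys all HDFS data):

```shell
# WARNING: reformatting the namenode erases all HDFS metadata and data.
stop-all.sh                           # stop all Hadoop daemons
rm -rf /home/hadoop/tmp/dfs/data/*    # clear the datanode storage dir (run on every datanode)
hadoop namenode -format               # reformat the namenode
start-all.sh                          # bring the cluster back up
hadoop dfsadmin -report               # confirm the datanode(s) registered again
```

After the reformat, a fresh namespaceID is generated and the wiped datanodes adopt it on first startup, which is why deleting their storage directories first is required.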
