Please take a look at this problem: job.splitmetainfo does not exist

lc999 2018-09-26 12:24:56
I wrote a simple MapReduce program locally, packaged it, and deployed it to the Hadoop cluster. When I run it, it fails with the error below. Could anyone help me figure out what is wrong?
18/09/25 23:24:11 INFO input.FileInputFormat: Total input paths to process : 2
18/09/25 23:24:11 INFO mapreduce.JobSubmitter: number of splits:2
18/09/25 23:24:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1537882228671_0006
18/09/25 23:24:13 INFO impl.YarnClientImpl: Submitted application application_1537882228671_0006
18/09/25 23:24:13 INFO mapreduce.Job: The url to track the job: http://s111:8088/proxy/application_1537882228671_0006/
18/09/25 23:24:13 INFO mapreduce.Job: Running job: job_1537882228671_0006
18/09/25 23:24:15 INFO mapreduce.Job: Job job_1537882228671_0006 running in uber mode : false
18/09/25 23:24:15 INFO mapreduce.Job: map 0% reduce 0%
18/09/25 23:24:15 INFO mapreduce.Job: Job job_1537882228671_0006 failed with state FAILED due to: Application application_1537882228671_0006 failed 2 times due to AM Container for appattempt_1537882228671_0006_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://s111:8088/cluster/app/application_1537882228671_0006Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1537882228671_0006/job.splitmetainfo does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1537882228671_0006/job.splitmetainfo does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
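
Note the `file:` scheme in the Diagnostics line: the AM container is trying to read the staging directory `/tmp/hadoop-yarn/staging/hadoop/.staging/...` from the NodeManager's local filesystem rather than from HDFS. This usually happens when `fs.defaultFS` is not set (or the client/NodeManager cannot see `core-site.xml` on its classpath), so Hadoop falls back to the local filesystem and the split metadata written by the client is invisible to the node running the ApplicationMaster. A minimal configuration sketch that would avoid this, assuming the NameNode runs on `s111` at port 9000 (the host is taken from the tracking URL in the log; the port is a hypothetical default):

```xml
<!-- core-site.xml: point the default filesystem at HDFS, not file:/// -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- hypothetical NameNode address; use your cluster's actual host:port -->
    <value>hdfs://s111:9000</value>
  </property>
</configuration>

<!-- mapred-site.xml: make sure MapReduce actually runs on YARN -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

These files must be present (and identical in the relevant properties) on the machine submitting the job and on every cluster node; also verify that `HADOOP_CONF_DIR` points at the directory containing them when you run `hadoop jar`.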
