Problem running MapReduce on a Hadoop cluster: java.lang.Exception: org.apache.hadoop.ipc.RemoteException

weixin_44096545 2019-04-03 11:44:12
I've gone through a lot of material online but still can't get this resolved; hoping someone more experienced can point me in the right direction.
The job output is as follows:
19/04/03 08:33:39 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/04/03 08:33:39 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/04/03 08:33:40 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/04/03 08:33:40 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
19/04/03 08:33:40 INFO input.FileInputFormat: Total input paths to process : 2
19/04/03 08:33:40 INFO mapreduce.JobSubmitter: number of splits:2
19/04/03 08:33:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1133939493_0001
19/04/03 08:33:40 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/04/03 08:33:40 INFO mapreduce.Job: Running job: job_local1133939493_0001
19/04/03 08:33:40 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/04/03 08:33:40 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/04/03 08:33:40 INFO mapred.LocalJobRunner: Waiting for map tasks
19/04/03 08:33:40 INFO mapred.LocalJobRunner: Starting task: attempt_local1133939493_0001_m_000000_0
19/04/03 08:33:41 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
19/04/03 08:33:41 INFO mapred.MapTask: Processing split: hdfs://s1:8020/user/ubuntu/Test/1901_1.gz:0+11445
19/04/03 08:33:41 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/03 08:33:41 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/03 08:33:41 INFO mapred.MapTask: soft limit at 83886080
19/04/03 08:33:41 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/03 08:33:41 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/03 08:33:41 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/03 08:33:41 INFO mapred.MapTask: Starting flush of map output
19/04/03 08:33:41 INFO mapred.LocalJobRunner: Starting task: attempt_local1133939493_0001_m_000001_0
19/04/03 08:33:41 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
19/04/03 08:33:41 INFO mapred.MapTask: Processing split: hdfs://s1:8020/user/ubuntu/Test/1901_2.gz:0+11210
19/04/03 08:33:41 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/03 08:33:41 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/03 08:33:41 INFO mapred.MapTask: soft limit at 83886080
19/04/03 08:33:41 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/03 08:33:41 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/03 08:33:41 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/03 08:33:41 INFO mapred.MapTask: Starting flush of map output
19/04/03 08:33:41 INFO mapred.LocalJobRunner: map task executor complete.
19/04/03 08:33:41 WARN mapred.LocalJobRunner: job_local1133939493_0001
java.lang.Exception: org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:359)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:552)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.sortLocatedBlocks(DatanodeManager.java:359)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:552)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

at org.apache.hadoop.ipc.Client.call(Client.java:1470)
at org.apache.hadoop.ipc.Client.call(Client.java:1401)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.getBlockLocations(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1209)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1199)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1189)
at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:275)
at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:242)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:235)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1487)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:302)
at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:545)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:783)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
19/04/03 08:33:41 INFO mapreduce.Job: Job job_local1133939493_0001 running in uber mode : false
19/04/03 08:33:41 INFO mapreduce.Job: map 0% reduce 0%
19/04/03 08:33:41 INFO mapreduce.Job: Job job_local1133939493_0001 failed with state FAILED due to: NA
19/04/03 08:33:41 INFO mapreduce.Job: Counters: 0
1 reply
That's a NullPointerException you're hitting there.
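Incidentally, the two WARN lines near the top of the log ("Hadoop command-line option parsing not performed. Implement the Tool interface..." and "No job jar file set. User classes may not be found...") are worth fixing first. Below is a minimal sketch of a driver that implements Tool and sets the job jar; it is not the original poster's code, and the class name, paths and key/value types are placeholders, but it shows the pattern those two warnings point to:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver class; substitute your own mapper/reducer and types.
public class MaxTemperatureDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperatureDriver <input path> <output path>");
            return -1;
        }
        // getConf() holds the configuration ToolRunner populated from the
        // generic options (-D, -conf, -fs, -jt), so cluster settings are honored.
        Job job = Job.getInstance(getConf(), "max temperature");
        // Ships the jar containing this class with the job, which addresses
        // "No job jar file set. User classes may not be found."
        job.setJarByClass(MaxTemperatureDriver.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // setMapperClass/setReducerClass omitted here; set them as in your job.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner performs the command-line option parsing the first WARN complains about.
        System.exit(ToolRunner.run(new MaxTemperatureDriver(), args));
    }
}

You would then launch it with something like "hadoop jar your-job.jar MaxTemperatureDriver /user/ubuntu/Test /user/ubuntu/out" (jar and path names here are examples). Also note that the log shows the job running under LocalJobRunner (job_local1133939493_0001) while reading splits from hdfs://s1:8020, so it may be worth checking that the client is actually picking up the cluster-side mapred-site.xml/yarn-site.xml. The NullPointerException itself is thrown on the NameNode in DatanodeManager.sortLocatedBlocks, which suggests the problem is on the HDFS/server side rather than in your mapper code.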
