Win10 Eclipse can access HDFS, but cannot add files!!!

勤劳的skl 2017-07-20 12:30:15
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Before;
import org.junit.Test;

public class HdfsTest {

    FileSystem fs = null;

    @Before
    public void init() throws Exception {
        Configuration config = new Configuration();
        config.setBoolean("dfs.permissions", false);
        fs = FileSystem.get(URI.create("hdfs://mini0:9000"), config);
    }

    @Test
    public void addFile() throws Exception {
        // Upload a file to the HDFS distributed file system
        fs.copyFromLocalFile(new Path("E:\\hdfs.txt"), new Path("/"));
        // Release resources
        fs.close();
    }
}

Error printed to the console:
java.io.EOFException: End of File Exception between local host is: "ZKJ/192.168.203.200"; destination host is: "mini0":9000; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1473)
at org.apache.hadoop.ipc.Client.call(Client.java:1400)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1977)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:496)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:348)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1873)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1838)
at hdfs_test.HdfsTest.addFile(HdfsTest.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1072)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:967)

Error from the NameNode log (after this exception the NameNode drops the connection, which the client then sees as the EOFException above):
java.lang.IllegalArgumentException: Null user
at org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1203)
at org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1190)
at org.apache.hadoop.util.ProtoUtil.getUgi(ProtoUtil.java:138)
at org.apache.hadoop.util.ProtoUtil.getUgi(ProtoUtil.java:120)
at org.apache.hadoop.ipc.Server$Connection.processConnectionContext(Server.java:1663)
at org.apache.hadoop.ipc.Server$Connection.processRpcOutOfBandRequest(Server.java:1892)
at org.apache.hadoop.ipc.Server$Connection.processOneRpc(Server.java:1768)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1532)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:763)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:636)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:607)
2017-07-20 08:13:01,015 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-07-20 08:13:01,015 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-07-20 08:13:31,015 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
2017-07-20 08:13:31,016 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-07-20 08:14:01,016 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-07-20 08:14:01,016 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
2017-07-20 08:14:31,017 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2017-07-20 08:14:31,017 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).


What I had already tried:
1. Turning off HDFS safe mode:
hadoop dfsadmin -safemode leave
2. Disabling HDFS permission checks by setting dfs.permissions to false in hdfs-site.xml.
Neither helped!
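For reference, the hdfs-site.xml change from step 2 would look like the sketch below (note: on Hadoop 2.x the property is named dfs.permissions.enabled; dfs.permissions is the older 1.x name, still accepted as a deprecated alias, and either way it must be set on the NameNode and the NameNode restarted). That said, the NameNode log above fails with "Null user" while the connection is still being set up, before any permission check runs, which would explain why neither change helped:

    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>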

Could anyone take a look? Many thanks!!!
3 replies
勤劳的skl 2017-07-24
Quoting reply #2 from u013313550:
[quote=Quoting reply #1 from zyc2011:] It might be insufficient permissions; try changing your code: fs = FileSystem.get(new URI("hdfs://mini0:9000"), conf, "root"); // the last argument is the user name
Thanks, that code is correct. There is also another way: when running the code in Eclipse, set -DHADOOP_USER_NAME=<user name>.[/quote] What gets set is a VM argument.
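For later readers, a minimal sketch of that VM-argument route, assuming a Hadoop 2.x client (UserGroupInformation falls back to the HADOOP_USER_NAME environment variable or system property when no user is passed explicitly; "root" below is just a placeholder):

    // Eclipse: Run Configurations -> Arguments -> VM arguments:
    //     -DHADOOP_USER_NAME=root
    //
    // The equivalent in code, set before the first FileSystem.get() call:
    System.setProperty("HADOOP_USER_NAME", "root");
    FileSystem fs = FileSystem.get(URI.create("hdfs://mini0:9000"), new Configuration());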
勤劳的skl 2017-07-24
Quoting reply #1 from zyc2011:
It might be insufficient permissions; try changing your code: fs = FileSystem.get(new URI("hdfs://mini0:9000"), conf, "root"); // the last argument is the user name
Thanks, that code is correct. There is also another way: when running the code in Eclipse, set -DHADOOP_USER_NAME=<user name>.
zyc2011 2017-07-21
It might be insufficient permissions; try changing your code: fs = FileSystem.get(new URI("hdfs://mini0:9000"), conf, "root"); // the last argument is the user name
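Applied to the test class from the question, the suggested fix would look roughly like this (a sketch; replace "root" with whatever user owns the target directory on HDFS):

    @Before
    public void init() throws Exception {
        Configuration config = new Configuration();
        // Pass the remote user name explicitly; in this thread the NameNode
        // rejected the connection with "IllegalArgumentException: Null user"
        // until a user was supplied.
        fs = FileSystem.get(new URI("hdfs://mini0:9000"), config, "root");
    }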
