Hadoop is configured, but hdfs dfs commands don't work

ksgt950817 2016-09-06 10:02:41
The error log is as follows.
If you can solve this, all 200 points go to you.

16/09/06 03:23:40 DEBUG util.Shell: setsid exited with exit code 0
16/09/06 03:23:40 DEBUG conf.Configuration: parsing URL jar:file:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.2.jar!/core-default.xml
16/09/06 03:23:40 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@3327bd23
16/09/06 03:23:40 DEBUG conf.Configuration: parsing URL file:/usr/local/hadoop/etc/hadoop/core-site.xml
16/09/06 03:23:40 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@6f195bc3
16/09/06 03:23:41 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
16/09/06 03:23:41 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
16/09/06 03:23:41 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, always=false, sampleName=Ops, type=DEFAULT, valueName=Time, value=[GetGroups])
16/09/06 03:23:41 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
16/09/06 03:23:41 DEBUG security.Groups: Creating new Groups object
16/09/06 03:23:41 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
16/09/06 03:23:41 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
16/09/06 03:23:41 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
16/09/06 03:23:41 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
16/09/06 03:23:41 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
16/09/06 03:23:41 DEBUG security.UserGroupInformation: hadoop login
16/09/06 03:23:41 DEBUG security.UserGroupInformation: hadoop login commit
16/09/06 03:23:41 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
16/09/06 03:23:41 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
16/09/06 03:23:41 DEBUG security.UserGroupInformation: User entry: "root"
16/09/06 03:23:41 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
16/09/06 03:23:41 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://rodefs
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
16/09/06 03:23:41 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
16/09/06 03:23:41 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
16/09/06 03:23:41 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@757277dc
16/09/06 03:23:41 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@548e6d58
16/09/06 03:23:42 DEBUG unix.DomainSocketWatcher: org.apache.hadoop.net.unix.DomainSocketWatcher$2@41b0dbf6: starting with interruptCheckPeriodMs = 60000
16/09/06 03:23:42 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
16/09/06 03:23:42 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
16/09/06 03:23:42 DEBUG ipc.Client: The ping interval is 60000 ms.
16/09/06 03:23:42 DEBUG ipc.Client: Connecting to user1/172.25.65.76:8020
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root: starting, having connections 1
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root sending #0
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root got value #0
16/09/06 03:23:42 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 40ms
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root sending #1
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root got value #1
16/09/06 03:23:42 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 9ms
16/09/06 03:23:42 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@548e6d58
16/09/06 03:23:42 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@548e6d58
16/09/06 03:23:42 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@548e6d58
16/09/06 03:23:42 DEBUG ipc.Client: Stopping client
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root: closed
16/09/06 03:23:42 DEBUG ipc.Client: IPC Client (1010953501) connection to user1/172.25.65.76:8020 from root: stopped, remaining connections 0
5 replies
飞啊飞123 2016-09-22
Check the value of fs.default.name in your core-site.xml — what did you set it to? I set mine to hdfs://hadoop1:9000, and then I had to set the machine's hostname to hadoop1 as well. See if that helps.
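For reference, a minimal core-site.xml in the spirit of that suggestion might look like the following; the hostname hadoop1 and port 9000 are this reply's example values, not requirements:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- fs.default.name is the old (deprecated) key; in Hadoop 2.x the
       same setting is called fs.defaultFS. Either way it tells every
       client which NameNode to talk to, so the hostname here must
       resolve (e.g. via an /etc/hosts entry) to the NameNode's
       address. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
</configuration>
```

One caveat: if the value is a logical HA nameservice rather than a real host (the OP's log shows the logical URI hdfs://rodefs), it has to match dfs.nameservices in hdfs-site.xml instead of resolving as a hostname.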
塞卡骆伊 2016-09-21
Quoting reply #2 by ajun945:
> Quoting reply #1 by dj159357: Post your NameNode startup log.
> [ajun945's full "hdfs namenode -format" log quoted here; it is identical to the log in ajun945's own reply.]
Try deleting everything under the tmp directory, then run the format again.
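A sketch of that procedure, assuming a single-node install under /usr/local/hadoop and the default hadoop.tmp.dir of /tmp/hadoop-&lt;user&gt;. Check what dfs.namenode.name.dir and dfs.datanode.data.dir actually point at before deleting anything, and note that reformatting destroys all existing HDFS metadata:

```shell
# Stop HDFS first so no daemon is holding the old state.
/usr/local/hadoop/sbin/stop-dfs.sh

# Remove the stale NameNode/DataNode directories. /tmp/hadoop-$(whoami)
# is only the default location; use whatever your configs specify.
rm -rf /tmp/hadoop-$(whoami)

# Reformat (wipes all HDFS metadata) and restart.
hdfs namenode -format
/usr/local/hadoop/sbin/start-dfs.sh
```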
ajun945 2016-09-20
Quoting reply #1 by dj159357:
> Post your NameNode startup log.
I'm hitting the same problem as the OP, which is strange. The system is CentOS 6.2. Running "hdfs namenode -format" produces the following:

16/09/19 20:12:41 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar... (rest of classpath omitted for length)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.7.0_111
************************************************************/
16/09/19 20:12:41 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/09/19 20:12:41 INFO namenode.NameNode: createNameNode [-format]
16/09/19 20:12:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-3c7698c3-75b4-4965-96e4-75e6906554fd
16/09/19 20:12:42 INFO namenode.FSNamesystem: No KeyProvider found.
16/09/19 20:12:42 INFO namenode.FSNamesystem: fsLock is fair:true
16/09/19 20:12:42 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/09/19 20:12:42 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/09/19 20:12:42 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/09/19 20:12:42 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Sep 19 20:12:42
16/09/19 20:12:42 INFO util.GSet: Computing capacity for map BlocksMap
16/09/19 20:12:42 INFO util.GSet: VM type       = 32-bit
16/09/19 20:12:42 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
16/09/19 20:12:42 INFO util.GSet: capacity      = 2^22 = 4194304 entries
16/09/19 20:12:42 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/09/19 20:12:42 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/09/19 20:12:42 INFO blockmanagement.BlockManager: maxReplication             = 512
16/09/19 20:12:42 INFO blockmanagement.BlockManager: minReplication             = 1
16/09/19 20:12:42 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/09/19 20:12:42 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/09/19 20:12:42 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/09/19 20:12:42 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/09/19 20:12:42 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/09/19 20:12:42 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
16/09/19 20:12:42 INFO namenode.FSNamesystem: supergroup          = supergroup
16/09/19 20:12:42 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/09/19 20:12:42 INFO namenode.FSNamesystem: HA Enabled: false
16/09/19 20:12:42 INFO namenode.FSNamesystem: Append Enabled: true
16/09/19 20:12:42 FATAL namenode.NameNode: Failed to start namenode.
java.lang.InternalError
	at sun.security.ec.SunEC.initialize(Native Method)
	at sun.security.ec.SunEC.access$000(SunEC.java:49)
	at sun.security.ec.SunEC$1.run(SunEC.java:61)
	at sun.security.ec.SunEC$1.run(SunEC.java:58)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.ec.SunEC.<clinit>(SunEC.java:58)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at java.lang.Class.newInstance(Class.java:383)
	at sun.security.jca.ProviderConfig$2.run(ProviderConfig.java:221)
	at sun.security.jca.ProviderConfig$2.run(ProviderConfig.java:206)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.jca.ProviderConfig.doLoadProvider(ProviderConfig.java:206)
	at sun.security.jca.ProviderConfig.getProvider(ProviderConfig.java:187)
	at sun.security.jca.ProviderList.getProvider(ProviderList.java:233)
	at sun.security.jca.ProviderList$ServiceList.tryGet(ProviderList.java:434)
	at sun.security.jca.ProviderList$ServiceList.access$200(ProviderList.java:376)
	at sun.security.jca.ProviderList$ServiceList$1.hasNext(ProviderList.java:486)
	at javax.crypto.KeyGenerator.nextSpi(KeyGenerator.java:339)
	at javax.crypto.KeyGenerator.<init>(KeyGenerator.java:169)
	at javax.crypto.KeyGenerator.getInstance(KeyGenerator.java:224)
	at org.apache.hadoop.security.token.SecretManager.<init>(SecretManager.java:143)
	at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.<init>(AbstractDelegationTokenSecretManager.java:104)
	at org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSecretManager.<init>(DelegationTokenSecretManager.java:95)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.createDelegationTokenSecretManager(FSNamesystem.java:7282)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:893)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:755)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:934)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
16/09/19 20:12:42 INFO util.ExitUtil: Exiting with status 1
16/09/19 20:12:42 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
ksgt950817 2016-09-20
Quoting reply #2 by ajun945:
> Quoting reply #1 by dj159357: Post your NameNode startup log.
> [ajun945's full "hdfs namenode -format" log quoted here; it is identical to the log in ajun945's own reply.]
Your problem isn't the same as mine: yours is a native-library version mismatch. Search online for how to fix it.
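On the native-library point: Hadoop ships a built-in check that reports which native components actually loaded, which is a quicker first step than guessing. The output varies by build, and the libhadoop path below assumes a /usr/local/hadoop install:

```shell
# Lists each native component (hadoop, zlib, snappy, lz4, bzip2,
# openssl) with true/false depending on whether it loaded.
hadoop checknative -a

# "Unable to load native-hadoop library" is often a 32-bit JVM paired
# with 64-bit native libs (or vice versa); compare the two:
file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
java -version
```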
飞啊飞123 2016-09-18
Post your NameNode startup log.
