Problem with Hadoop configuration

supheros 2013-10-20 02:43:15
I downloaded and installed hadoop 0.20.2 and finished configuring it, but running ./hadoop namenode -format prints nothing at all. start-all.sh then appears to start everything normally, but stop-all.sh does not behave correctly. Did ./hadoop namenode -format fail to run? Any help appreciated.
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/bin# ./hadoop namenode -format
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/bin# ./hadoop namenode -format
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/bin# ./hadoop namenode -format
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/bin# ./start-all.sh
starting namenode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-namenode-huranjie-IdeaPad-Y430.out
localhost: starting datanode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-datanode-huranjie-IdeaPad-Y430.out
localhost: starting secondarynamenode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-huranjie-IdeaPad-Y430.out
starting jobtracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-jobtracker-huranjie-IdeaPad-Y430.out
localhost: starting tasktracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-root-tasktracker-huranjie-IdeaPad-Y430.out
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/bin# ./stop-all.sh
no jobtracker to stop
localhost: stopping tasktracker
no namenode to stop
localhost: no datanode to stop
localhost: no secondarynamenode to stop
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/bin#
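For what it's worth, "no namenode to stop" usually means the daemon died right after start-all.sh launched it (the .log files will say why): the stop scripts only look for a pid file (hadoop-&lt;user&gt;-&lt;daemon&gt;.pid under HADOOP_PID_DIR, /tmp by default) and check whether the recorded pid is still alive. A rough sketch of that check, using a scratch directory instead of the real HADOOP_PID_DIR:

```shell
# Simulate how stop-all.sh decides between "stopping <daemon>" and
# "no <daemon> to stop": find the pid file, then test the pid with kill -0.
# Directory and file names here are illustrative, not the real scripts.
PID_DIR=$(mktemp -d)
echo $$ > "$PID_DIR/hadoop-root-namenode.pid"   # pretend this shell is the namenode

check_daemon() {
  pidfile="$PID_DIR/hadoop-root-$1.pid"
  if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "stopping $1"
  else
    echo "no $1 to stop"
  fi
}

check_daemon namenode   # prints: stopping namenode
check_daemon datanode   # prints: no datanode to stop (no pid file was written)
```

Note the corollary: if the pid files live in /tmp and /tmp gets cleaned, the same "no ... to stop" message appears even while the daemons are still running.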


The three configuration files are as follows:
root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/conf# cat core-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop/hadoop-${user.name}</value>
</property>
</configuration>
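One hedged observation on core-site.xml above: hadoop.tmp.dir sits under /tmp, which is typically emptied on reboot, so the namenode metadata created by -format (and, by default, the HDFS block data) can silently disappear. A variant pointing at a persistent directory would look like this; the path is only an example:

```xml
<property>
<name>hadoop.tmp.dir</name>
<!-- example persistent path; any directory outside /tmp avoids reboot cleanup -->
<value>/usr/hadoop_tmp/hadoop-${user.name}</value>
</property>
```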


root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/conf# cat hdfs-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>


root@huranjie-IdeaPad-Y430:/usr/hadoop-0.20.2/conf# cat mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>

9 replies
xiaofanac66 2013-10-23
I run hadoop 0.20.2 with JDK 1.7 and have no problems at all! I once had a NameNode that could not format; the cause was that my machine's hostname was not registered in the /etc/hosts file. Once I added my hostname there, everything was OK.
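The /etc/hosts fix described here can be scripted. The sketch below works on a scratch copy rather than the real /etc/hosts, and the hostname is the one from this thread's shell prompt:

```shell
# Ensure the machine's hostname has an entry in the hosts file; the namenode
# can fail to format/start when its own hostname does not resolve.
HOSTS_FILE=$(mktemp)                  # scratch stand-in for /etc/hosts
echo "127.0.0.1 localhost" > "$HOSTS_FILE"
HOST=huranjie-IdeaPad-Y430            # hostname from this thread's prompt
if ! grep -qw "$HOST" "$HOSTS_FILE"; then
  echo "127.0.1.1 $HOST" >> "$HOSTS_FILE"   # register the hostname
fi
grep -w "$HOST" "$HOSTS_FILE"         # prints: 127.0.1.1 huranjie-IdeaPad-Y430
```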
撸大湿 2013-10-22
Post $HADOOP_HOME/logs/XXXX.log here.
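In answer to the "where are the logs" question later in the thread: each daemon writes to $HADOOP_HOME/logs (the same paths the start-all.sh output above prints), with file names built from the user, daemon, and hostname. A tiny helper reproducing that naming convention:

```shell
# Daemon log naming: hadoop-<user>-<daemon>-<hostname>.log (plus a .out twin)
log_name() { echo "hadoop-$1-$2-$3.log"; }

log_name root namenode huranjie-IdeaPad-Y430
# prints: hadoop-root-namenode-huranjie-IdeaPad-Y430.log
# so the namenode errors in this thread would be in, e.g.:
#   tail -n 100 /usr/hadoop-0.20.2/logs/hadoop-root-namenode-huranjie-IdeaPad-Y430.log
```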
supheros 2013-10-22
I just switched to an older version of hadoop and everything is OK! I guess the newer hadoop needs different commands to start the task and job daemons, but I don't know what those commands are. As for the problem I hit earlier, it turned out to be a JDK issue! And I was using the official JDK 1.7.0_40, of all things. Speechless.
supheros 2013-10-22
(continuation of the format output from the reply below; the long classpath listing is omitted)
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG: java = 1.7.0_45
************************************************************/
13/10/22 16:14:13 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-eaf0fcf0-95dd-4cb3-8128-e8fdc905de7c
13/10/22 16:14:14 INFO namenode.HostFileManager: read includes: HostSet( )
13/10/22 16:14:14 INFO namenode.HostFileManager: read excludes: HostSet( )
13/10/22 16:14:14 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/10/22 16:14:14 INFO util.GSet: Computing capacity for map BlocksMap
13/10/22 16:14:14 INFO util.GSet: VM type = 32-bit
13/10/22 16:14:14 INFO util.GSet: 2.0% max memory = 889 MB
13/10/22 16:14:14 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/10/22 16:14:14 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/10/22 16:14:14 INFO blockmanagement.BlockManager: defaultReplication = 1
13/10/22 16:14:14 INFO blockmanagement.BlockManager: maxReplication = 512
13/10/22 16:14:14 INFO blockmanagement.BlockManager: minReplication = 1
13/10/22 16:14:14 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
13/10/22 16:14:14 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
13/10/22 16:14:14 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/10/22 16:14:14 INFO blockmanagement.BlockManager: encryptDataTransfer = false
13/10/22 16:14:14 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
13/10/22 16:14:14 INFO namenode.FSNamesystem: supergroup = supergroup
13/10/22 16:14:14 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/10/22 16:14:14 INFO namenode.FSNamesystem: HA Enabled: false
13/10/22 16:14:14 INFO namenode.FSNamesystem: Append Enabled: true
13/10/22 16:14:15 INFO util.GSet: Computing capacity for map INodeMap
13/10/22 16:14:15 INFO util.GSet: VM type = 32-bit
13/10/22 16:14:15 INFO util.GSet: 1.0% max memory = 889 MB
13/10/22 16:14:15 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/10/22 16:14:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/10/22 16:14:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/10/22 16:14:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/10/22 16:14:15 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
13/10/22 16:14:15 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/10/22 16:14:15 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/10/22 16:14:15 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/10/22 16:14:15 INFO util.GSet: VM type = 32-bit
13/10/22 16:14:15 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
13/10/22 16:14:15 INFO util.GSet: capacity = 2^16 = 65536 entries
13/10/22 16:14:15 INFO common.Storage: Storage directory /usr/hadoop_tmp/dfs/name has been successfully formatted.
13/10/22 16:14:15 INFO namenode.FSImage: Saving image file /usr/hadoop_tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/10/22 16:14:15 INFO namenode.FSImage: Image file /usr/hadoop_tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 196 bytes saved in 0 seconds.
13/10/22 16:14:15 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/10/22 16:14:15 INFO util.ExitUtil: Exiting with status 0
13/10/22 16:14:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at huranjie-IdeaPad-Y430/127.0.1.1
************************************************************/

The output of ./start-all.sh and the namenode log are as follows:
root@huranjie-IdeaPad-Y430:/usr/hadoop-2.2.0/sbin# ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/hadoop-2.2.0/logs/hadoop-root-namenode-huranjie-IdeaPad-Y430.out
localhost: starting datanode, logging to /usr/hadoop-2.2.0/logs/hadoop-root-datanode-huranjie-IdeaPad-Y430.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-huranjie-IdeaPad-Y430.out
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop-2.2.0/logs/yarn-root-resourcemanager-huranjie-IdeaPad-Y430.out
localhost: starting nodemanager, logging to /usr/hadoop-2.2.0/logs/yarn-root-nodemanager-huranjie-IdeaPad-Y430.out
root@huranjie-IdeaPad-Y430:/usr/hadoop-2.2.0/sbin# jps
9194 NameNode
9816 ResourceManager
10099 Jps
9400 DataNode
10035 NodeManager
9674 SecondaryNameNode
root@huranjie-IdeaPad-Y430:/usr/hadoop-2.2.0/sbin# 
But with the new version only http://localhost:50070 opens in the browser; 50030 and 50060 are not open, and jps shows no tasktracker or jobtracker. Do those need to be started separately?
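On the missing 50030/50060 UIs: in Hadoop 2.x the MRv1 jobtracker and tasktracker no longer exist. YARN's ResourceManager and NodeManager (both visible in the jps output above) take over their roles, and their web UIs default to ports 8088 and 8042 rather than 50030 and 50060. A sketch of the daemon correspondence:

```shell
# Map an MRv1 daemon name to its YARN (Hadoop 2.x) replacement.
mrv2_equivalent() {
  case "$1" in
    jobtracker)  echo "ResourceManager" ;;   # web UI moved from 50030 to 8088
    tasktracker) echo "NodeManager" ;;       # web UI moved from 50060 to 8042
    *)           echo "$1" ;;
  esac
}

mrv2_equivalent jobtracker    # prints: ResourceManager
mrv2_equivalent tasktracker   # prints: NodeManager
```

So nothing extra needs starting for MapReduce itself; as the deprecation notice above says, start-dfs.sh plus start-yarn.sh is the 2.x way to bring everything up.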
supheros 2013-10-22
Today I switched to the latest hadoop 2.2.0 and the latest JDK, and it seems to have started. namenode -format runs now as well; the jps output is below. Does this count as started?
6536 SecondaryNameNode
6894 NodeManager
6049 NameNode
6255 DataNode
6680 ResourceManager
7817 Jps
Below is the output of ./hadoop namenode -format. Could someone take a look and tell me whether the format and startup succeeded?

root@huranjie-IdeaPad-Y430:/usr/hadoop-2.2.0/bin# ./hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
13/10/22 16:14:13 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = huranjie-IdeaPad-Y430/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.2.0
STARTUP_MSG: classpath = /usr/hadoop-2.2.0/etc/hadoop:... (long classpath listing omitted; the output continues in the reply above)
wo111180611 2013-10-22
Run bin/hadoop dfsadmin -report and you'll know whether it came up. For the specific problem, check the logs.
supheros 2013-10-21
Quoting reply #1 from rucypli:
"Check the namenode error log."
May I ask where the log is located? Thanks.
rucypli 2013-10-20
Check the namenode error log.
