Ubuntu in a VM: Hadoop installed and configured, but formatting the NameNode fails with "class not found"

浮生(FS) · Blog Expert · 2015-06-22 11:06:13
Formatting the NameNode fails with:
Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
Running `hadoop version` fails as well:
Could not find or load main class org.apache.hadoop.util.VersionInfo
Has anyone run into this? Any help would be appreciated.

JDK install directory: /home/ubuntu/jdk-1.7
Hadoop install directory: /home/ubuntu/hadoop-2.4.0
/etc/profile is configured as follows:

In hadoop-env.sh and yarn-env.sh I set:
export JAVA_HOME=/home/ubuntu/jdk-1.7
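For what it's worth, "could not find or load main class ... VersionInfo" usually means the hadoop-common jar is not on the classpath at all, which traces back to the HADOOP_HOME/PATH exports in /etc/profile or to an incompletely unpacked tarball. A sketch of the environment one would expect for this layout (the paths are the ones from this post; the variable names are the usual convention, not something verified against the poster's actual profile, which is not shown):

```shell
# Expected /etc/profile additions for this layout (an assumption --
# the poster's actual profile content is not included in the post):
export JAVA_HOME=/home/ubuntu/jdk-1.7
export HADOOP_HOME=/home/ubuntu/hadoop-2.4.0
export PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"

# The missing class lives in hadoop-common; if this glob matches nothing,
# HADOOP_HOME is wrong or the tarball was unpacked incompletely.
for jar in "$HADOOP_HOME"/share/hadoop/common/hadoop-common-*.jar; do
  [ -e "$jar" ] && echo "found: $jar" \
                || echo "hadoop-common jar missing under $HADOOP_HOME"
done
```

After editing /etc/profile, remember the change only takes effect in new shells (or after `source /etc/profile`).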
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>

  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/temp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop01:9001</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/ubuntu/dfs/name</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/ubuntu/dfs/data</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
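One detail worth double-checking in the two files above: hadoop.tmp.dir points under /home/hadoop while the NameNode/DataNode directories live under /home/ubuntu. If any of these directories is missing or not writable by the user running Hadoop, the format step can fail. A sketch of pre-creating them (the PREFIX variable exists only so the snippet can be dry-run in a scratch directory; on the real machine drop it and use the literal paths):

```shell
# Directories referenced by core-site.xml and hdfs-site.xml above.
# PREFIX is a dry-run convenience, not part of the real setup.
PREFIX="${PREFIX:-$(mktemp -d)}"
mkdir -p "$PREFIX/home/hadoop/temp" \
         "$PREFIX/home/ubuntu/dfs/name" \
         "$PREFIX/home/ubuntu/dfs/data"
ls "$PREFIX/home/ubuntu/dfs"   # expect: data  name
```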
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop01:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop01:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop01:8031</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop01:8033</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop01:8088</value>
  </property>
</configuration>
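Note that none of the *-site.xml files above are even read at the point of failure: "could not find or load main class" happens before Hadoop parses any configuration, so the cause is the shell environment or the jar layout rather than these XML files. One way to confirm (HADOOP_HOME here is assumed to be the poster's path) is to try loading VersionInfo with an explicit classpath:

```shell
# If this prints a version, the jars are intact and the launcher
# environment is at fault; if it still fails, the unpacked tree is broken.
HADOOP_HOME=/home/ubuntu/hadoop-2.4.0
if [ -d "$HADOOP_HOME/share/hadoop/common" ]; then
  java -cp "$HADOOP_HOME/share/hadoop/common/*:$HADOOP_HOME/share/hadoop/common/lib/*" \
    org.apache.hadoop.util.VersionInfo
else
  echo "no Hadoop tree at $HADOOP_HOME; adjust the path first"
fi
```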
5 replies
诱惑小鱼丸 2015-07-17
It's almost certainly a configuration problem, or a problem with the jars. What architecture is your system, and which build did you download? The official binaries work as-is on 32-bit systems; on a 64-bit system you need to compile Hadoop yourself.
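Following up on the architecture point: a quick way to compare the machine's architecture against the bundled native library (the HADOOP_HOME path is the poster's; also note that a 32-bit native library on a 64-bit OS normally only causes a "unable to load native-hadoop library" warning, whereas "could not load main class" points at the Java classpath):

```shell
# Check OS architecture vs. the native library shipped in the tarball.
HADOOP_HOME=/home/ubuntu/hadoop-2.4.0
uname -m   # x86_64 means a 64-bit OS
if [ -e "$HADOOP_HOME/lib/native/libhadoop.so.1.0.0" ]; then
  file "$HADOOP_HOME/lib/native/libhadoop.so.1.0.0"  # reports 32-bit vs 64-bit ELF
else
  echo "native library not found under $HADOOP_HOME/lib/native"
fi
```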
atrbyb 2015-06-25
Have a look at the official documentation; this is usually caused by a mistake in a config file. Double-check core-site.xml and hdfs-site.xml for typos.
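One quick way to act on this advice is to check each config file for XML well-formedness, since a stray or unclosed tag makes the daemons fail at startup. A sketch, assuming xmllint (from libxml2-utils) is installed and HADOOP_HOME is the poster's path:

```shell
# Well-formedness check for the four site configs; skips gracefully
# when a file or xmllint itself is missing.
HADOOP_HOME=/home/ubuntu/hadoop-2.4.0
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
  cfg="$HADOOP_HOME/etc/hadoop/$f"
  if [ -e "$cfg" ] && command -v xmllint >/dev/null 2>&1; then
    xmllint --noout "$cfg" && echo "$f: OK"
  else
    echo "$f: skipped (file or xmllint missing)"
  fi
done
```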
浮生(FS) 2015-06-22
(re-posts the same directory layout and configuration files as in the original question)
浮生(FS) 2015-06-22
Can really no one solve this? Has nobody else hit it? ... Please help.
浮生(FS) 2015-06-22
Waiting online for a reply; please bump.
