HBase cannot start HMaster; can anyone help figure out what's wrong?

harley123 2014-08-21 10:51:51
Hadoop 2.4.1 and ZooKeeper have both started normally and HDFS works fine, but after installing and configuring HBase only the HRegionServers come up; the HMaster fails to start. The configuration is as follows:

HBase 0.98 installation and configuration
Master node: hadoop0; slave nodes: hadoop1, hadoop2, hadoop3
1. Download hbase-0.98.5-hadoop2-bin.tar.gz from the official site
2. Extract it into /usr/local/app, rename it to hbase (no build needed), and set the environment variables
#tar -zxvf hbase-0.98.5-hadoop2-bin.tar.gz
#mv hbase-0.98.5-hadoop2 hbase
#vi /etc/profile    add or modify the following entries:
export HBASE_HOME=/usr/local/app/hbase
export PATH=.:$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
#source /etc/profile
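To confirm the new environment is picked up, a quick optional check (hbase version just prints the HBase build info):
#echo $HBASE_HOME
#hbase version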
3. Configure conf/hbase-env.sh under the HBase directory
export JAVA_HOME=/usr/local/jdk1.7.0_60
export HBASE_MANAGES_ZK=false    (last line: false means HBase uses the external ZooKeeper ensemble; true lets HBase manage its own ZooKeeper, as in standalone mode)
4. Configure conf/hbase-site.xml under the HBase directory
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop0:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop0,hadoop1,hadoop2</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
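To double-check that the hbase.rootdir URI above is actually reachable from the master node, a quick sanity check against the same HDFS address:
#hdfs dfs -ls hdfs://hadoop0:9000/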
5. Configure conf/regionservers under the HBase directory
hadoop1
hadoop2
hadoop3
6. Replace the Hadoop jars
#cd /usr/local/app/hbase/lib/
#rm -rf hadoop*.jar
#find /usr/local/hadoop/share/hadoop -name "hadoop*.jar" | xargs -i cp {} /usr/local/app/hbase/lib/
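Optionally, a quick sanity check that the copied jars now match the running Hadoop version:
#hadoop version
#ls /usr/local/app/hbase/lib/hadoop-common*.jar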
7. Distribute the finished configuration to all nodes
Copy the hbase directory from hadoop0 to hadoop1, hadoop2, and hadoop3
Copy /etc/profile from hadoop0 to hadoop1, hadoop2, and hadoop3, then run source /etc/profile on each node (a sketch of the copy commands is shown below)
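A sketch of the copy, assuming passwordless SSH as root between the nodes (repeat for hadoop2 and hadoop3):
#scp -r /usr/local/app/hbase root@hadoop1:/usr/local/app/
#scp /etc/profile root@hadoop1:/etc/profile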

The startup log is as follows:
2014-08-21 08:25:08,804 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/app/hbase/bin
2014-08-21 08:25:08,805 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop1:2181,hadoop0:2181,hadoop2:2181 sessionTimeout=90000 watcher=master:60000, quorum=hadoop1:2181,hadoop0:2181,hadoop2:2181, baseZNode=/hbase
2014-08-21 08:25:08,893 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=master:60000 connecting to ZooKeeper ensemble=hadoop1:2181,hadoop0:2181,hadoop2:2181
2014-08-21 08:25:09,026 INFO [main-SendThread(hadoop1:2181)] zookeeper.ClientCnxn: Opening socket connection to server hadoop1/10.172.113.219:2181. Will not attempt to authenticate using SASL (unknown error)
2014-08-21 08:25:09,099 INFO [main-SendThread(hadoop1:2181)] zookeeper.ClientCnxn: Socket connection established to hadoop1/10.172.113.219:2181, initiating session
2014-08-21 08:25:09,186 INFO [main-SendThread(hadoop1:2181)] zookeeper.ClientCnxn: Session establishment complete on server hadoop1/10.172.113.219:2181, sessionid = 0x147f5e0b1630000, negotiated timeout = 40000
2014-08-21 08:25:09,368 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: starting
2014-08-21 08:25:09,367 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2014-08-21 08:25:10,169 INFO [master:hadoop0:60000] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-08-21 08:25:10,373 INFO [master:hadoop0:60000] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2014-08-21 08:25:10,385 INFO [master:hadoop0:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2014-08-21 08:25:10,385 INFO [master:hadoop0:60000] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-08-21 08:25:10,464 INFO [master:hadoop0:60000] http.HttpServer: Jetty bound to port 60010
2014-08-21 08:25:10,464 INFO [master:hadoop0:60000] mortbay.log: jetty-6.1.26
2014-08-21 08:25:12,445 INFO [master:hadoop0:60000] mortbay.log: Started SelectChannelConnector@0.0.0.0:60010
2014-08-21 08:25:12,938 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available
2014-08-21 08:25:12,969 INFO [master:hadoop0:60000] master.ActiveMasterManager: Registered Active Master=hadoop0,60000,1408580703155
2014-08-21 08:25:12,989 INFO [master:hadoop0:60000] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2014-08-21 08:25:13,238 FATAL [master:hadoop0:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "hadoop0":9000; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:742)
at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:400)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1452)
at org.apache.hadoop.ipc.Client.call(Client.java:1381)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy17.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy17.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:594)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2230)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:993)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:977)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:446)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:896)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:441)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:152)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException
... 25 more
2014-08-21 08:25:13,280 INFO [master:hadoop0:60000] master.HMaster: Aborting
2014-08-21 08:25:13,281 DEBUG [master:hadoop0:60000] master.HMaster: Stopping service threads
2014-08-21 08:25:13,283 INFO [master:hadoop0:60000] ipc.RpcServer: Stopping server on 60000
2014-08-21 08:25:13,284 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2014-08-21 08:25:13,293 INFO [master:hadoop0:60000] master.HMaster: Stopping infoServer
2014-08-21 08:25:13,294 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2014-08-21 08:25:13,294 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2014-08-21 08:25:13,304 INFO [master:hadoop0:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2014-08-21 08:25:13,480 INFO [master:hadoop0:60000] zookeeper.ZooKeeper: Session: 0x147f5e0b1630000 closed
2014-08-21 08:25:13,481 INFO [master:hadoop0:60000] master.HMaster: HMaster main thread exiting
2014-08-21 08:25:13,480 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
2014-08-21 08:25:13,482 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:194)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2794)
13 replies
_testing 2015-11-03
Did you ever solve this, OP? I've hit the same problem.
飓风zj 2015-01-23
Also, Hadoop may be in safe mode; you need to turn that off (look up how). In that state HDFS does not accept writes, so HBase just keeps waiting and then aborts on timeout.
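For reference, safe mode can be checked and, if necessary, left with the standard Hadoop 2.x commands (run on the NameNode, hadoop0 in this setup):
#hdfs dfsadmin -safemode get
#hdfs dfsadmin -safemode leave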
飓风zj 2015-01-23
You need to fix the host name: check both the hosts file and the hostname.
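For example, /etc/hosts on every node would need entries along these lines (only hadoop1's address is known from the log; the others are placeholders), and hadoop0 must not be mapped to 127.0.0.1:
10.172.113.x    hadoop0
10.172.113.219  hadoop1
10.172.113.x    hadoop2
10.172.113.x    hadoop3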
lingco 2015-01-13
Run hostname and see what it returns; if it is localhost, change it.
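A couple of quick checks on hadoop0 (standard Linux commands):
#hostname
#getent hosts hadoop0
#ping -c 1 hadoop0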
Leo1005103 2014-12-29
Copy the hadoop xxxxx.jar files into HBase's lib folder and try again.
cybloveqcl 2014-12-11
It's clearly a wrong hostname-to-IP mapping.
GeekStuff 2014-12-10
Fix the hostname and the hosts file.
江南浙里 2014-12-09
I ran into the same problem too.
SG90 2014-08-22
This guy's post says you can only use the hostname, not the IP: http://blog.csdn.net/chenxingzhen001/article/details/7756129
SG90 2014-08-22
Could this be related to version requirements?
harley123 2014-08-22
@zh_yi Awesome, changing it to the IP worked. What I can't figure out is that /etc/hosts already has hadoop0 configured, and Hadoop, ZooKeeper, and Hive all start fine with that configuration, so why not HBase?
SG90 2014-08-21
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "hadoop0":9000; java.net.UnknownHostException. That looks like a hostname problem.
zh_yi 2014-08-21
把"hadoop0":9000改成IP :9000试试。我遇到过同样的问题。
