Configuring HBase: HMaster on the master node won't start.

qqfly1to19 2012-04-12 11:01:53
The situation: after booting, HMaster comes up briefly and then goes down, and after that it won't start again. When I shut down HBase, the slave nodes only stop their ZooKeeper daemons; the HRegionServer (storage) processes are not stopped.
An excerpt of the log file is below. I haven't been able to find this error anywhere online, so I'm looking for guidance. What should I do about this broken pipe?
2012-04-11 02:32:57,914 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-04-11 02:32:57,917 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting
2012-04-11 02:32:58,099 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting
2012-04-11 02:32:58,101 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting
2012-04-11 02:32:58,101 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting
2012-04-11 02:32:58,101 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting
2012-04-11 02:32:58,101 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting
2012-04-11 02:32:58,101 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting
2012-04-11 02:32:58,101 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting
2012-04-11 02:32:58,102 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting
2012-04-11 02:32:58,102 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting
2012-04-11 02:32:58,104 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting
2012-04-11 02:32:58,653 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=ubuntu,60000,1334136776012
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-04-11 02:32:59,037 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-04-11 02:32:59,038 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-04-11 02:32:59,038 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2012-04-11 02:32:59,195 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/ubuntu,60000,1334136776012 from backup master directory
2012-04-11 02:32:59,266 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/ubuntu,60000,1334136776012 already deleted, and this is not a retry
2012-04-11 02:32:59,266 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=ubuntu,60000,1334136776012
2012-04-11 02:32:59,877 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Call to Ubuntu-chaiying0/192.168.17.133:9000 failed on local exception: java.io.IOException: Broken pipe
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:482)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:94)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:458)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:336)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:122)
at sun.nio.ch.IOUtil.write(IOUtil.java:93)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:352)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:779)
at org.apache.hadoop.ipc.Client.call(Client.java:1047)
... 18 more
2012-04-11 02:32:59,898 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2012-04-11 02:32:59,898 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2012-04-11 02:32:59,898 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2012-04-11 02:32:59,898 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2012-04-11 02:32:59,898 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2012-04-11 02:32:59,898 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2012-04-11 02:32:59,899 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2012-04-11 02:32:59,900 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2012-04-11 02:32:59,904 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-04-11 02:32:59,904 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-04-11 02:33:00,034 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-04-11 02:33:00,035 INFO org.apache.zookeeper.ZooKeeper: Session: 0x136a0bf9ffc0000 closed
2012-04-11 02:33:00,035 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
2012-04-11 02:33:00,035 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1637)
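The FATAL entry above shows the master dying while DFSClient opens its first RPC connection to the NameNode at Ubuntu-chaiying0/192.168.17.133:9000: the connection is accepted and then dropped while HBase writes the very first call (getProtocolVersion), which is the typical symptom of an HBase/Hadoop client-jar mismatch rather than a network outage. Still, a minimal check to rule out basic HDFS problems first, run on the master host (the address is taken from the log above; adjust if your NameNode runs elsewhere, and note this assumes hbase.rootdir points at this NameNode):

# Is the NameNode RPC port reachable from the HBase master host?
telnet 192.168.17.133 9000

# Can the Hadoop client itself list the filesystem that hbase.rootdir should point at?
hadoop fs -ls hdfs://192.168.17.133:9000/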
7 replies
zc888168 2013-06-18
Try switching HBase to 0.90.5.
qqfly1to19 2012-04-13
[Quote from reply #2:]

There's a version conflict between Hadoop and HBase; the jar packages need adjusting.
[/Quote]
Which HBase version should I use, then? My Hadoop is hadoop-0.20.2 and my HBase is hbase-0.92.1. As for Java, I'm using java-6-openjdk, since OpenJDK is the only JDK available to me right now; it works fine with Hadoop, and JAVA_HOME is set to /usr/lib/jvm/java-6-openjdk.
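One thing worth checking with that combination: hbase-0.92.1 ships its own Hadoop client jar in its lib directory, and if that jar comes from a different Hadoop line than the 0.20.2 cluster it talks to, the NameNode drops the RPC connection exactly as in the log above. A quick comparison (the paths are placeholders for wherever the two tarballs are unpacked):

# Hadoop client jar bundled with HBase
ls /path/to/hbase-0.92.1/lib/hadoop*.jar

# Core jar of the Hadoop version the cluster actually runs
ls /path/to/hadoop-0.20.2/hadoop*core*.jar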
hello_o721 2012-04-13
There's a version conflict between Hadoop and HBase; the jar packages need adjusting.
qqfly1to19 2012-04-13
Hoping someone more experienced can point me in the right direction; I'm completely stuck here...
风停雨歇云淡 2012-04-13
I remember that right after installing, you have to go into the lib directory under the HBase install and replace the bundled Hadoop jar with the jar from the Hadoop you actually installed, i.e. hadoop-0.20.2. Give that a try.
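A sketch of that jar swap, with placeholder install paths (/usr/local/hbase-0.92.1 and /usr/local/hadoop-0.20.2 stand in for the real locations); do the same replacement on every node that runs HBase, and restart all HBase daemons afterwards:

# Move the bundled Hadoop client jar out of the way
cd /usr/local/hbase-0.92.1/lib
mv hadoop-core-*.jar /tmp/    # exact bundled jar name may differ

# Copy in the core jar from the Hadoop 0.20.2 installation the cluster runs
cp /usr/local/hadoop-0.20.2/hadoop-0.20.2-core.jar .

# Restart HBase so every daemon picks up the replaced jar
/usr/local/hbase-0.92.1/bin/stop-hbase.sh
/usr/local/hbase-0.92.1/bin/start-hbase.sh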
qqfly1to19 2012-04-13
Here's what the most recent run looks like:
org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:152)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:103)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1637)


Master node (jps):
5182 Jps
4609 SecondaryNameNode
4683 JobTracker
4367 NameNode

Slave node 1:
4098 TaskTracker
4341 HQuorumPeer
3910 DataNode
4593 Jps
4534 HRegionServer
Slave node 2:
4286 HRegionServer
4345 Jps
3858 DataNode
4045 TaskTracker
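The master's jps listing has no HMaster at all, which matches the log: the process aborts during startup rather than crashing later. The most recent abort reason will be at the end of the master log; a sketch assuming a tarball install under /usr/local/hbase-0.92.1 (the default log file name pattern is hbase-<user>-master-<hostname>.log):

tail -n 100 /usr/local/hbase-0.92.1/logs/hbase-*-master-*.log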
qqfly1to19 2012-04-13
Still looking for guidance and help...
HBase installation and configuration

1. Prerequisites: a working HDFS distributed filesystem and a ZooKeeper ensemble.
2. Upload the HBase package to every Linux machine: hbase-0.98.12.1-hadoop2-bin.tar.gz
3. Unpack the tarball: tar -zxvf hbase-0.98.12.1-hadoop2-bin.tar.gz
4. Copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf:
   cd /root/hadoop/etc/hadoop
   scp -r hdfs-site.xml /root/hbase-0.98.12.1-hadoop2/conf/
   scp -r core-site.xml /root/hbase-0.98.12.1-hadoop2/conf/
5. Configuring the HBase cluster means editing three files (the ZooKeeper ensemble must already be installed).
6. Edit hbase-env.sh:
   export JAVA_HOME=/usr/java/jdk1.7.0_xxx
   Tell HBase to use the external ZooKeeper:
   export HBASE_MANAGES_ZK=false
7. Edit hbase-site.xml (vim hbase-site.xml) and set:
   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://namenode/hbase</value>  <!-- use your real NameNode host/nameservice -->
   </property>
   <property>
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/opt/zookeeper</value>
   </property>
   <property>
     <name>hbase.cluster.distributed</name>
     <value>true</value>
   </property>
   <property>
     <name>hbase.zookeeper.quorum</name>
     <value>node11,node12,node13</value>
   </property>
8. Edit regionservers to list the region server hosts (vim regionservers):
   node11
   node12
   node13
9. Specify a standby HBase master. Note: the file does not exist by default and must be created (vim backup-masters):
   node12
10. Copy the HBase configuration to the other nodes:
    cd /root/hbase-0.98.12.1-hadoop2
    scp -r conf node12:/root/hbase-0.98.12.1-hadoop2/
    scp -r conf node13:/root/hbase-0.98.12.1-hadoop2/
11. Set up passwordless SSH keys and synchronize the clocks across the nodes.
12. Start everything:
    (1) Start ZooKeeper on each node: /home/zookeeper-xxx/bin/./zkServer.sh start
    (2) Start the HDFS cluster: /root/hadoop/sbin/./start-dfs.sh
    (3) Start HBase from the master node: /root/hbase-0.98.12.1-hadoop2/bin/./start-hbase.sh
13. Open the web UIs in a browser:
    node11:60010  node12:60010
    node11:50070  node12:50070
14. For cluster reliability, start additional HMasters with:
    hbase-daemon.sh start master
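A quick way to confirm the cluster actually came up after the start sequence in step 12, assuming the same install path as above:

# On the master node HMaster should be listed; on each slave, HRegionServer
jps

# Ask HBase itself for the cluster status (number of live/dead region servers)
echo "status" | /root/hbase-0.98.12.1-hadoop2/bin/hbase shell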
