Error when creating a table via the HBase Java API

Mr3-Water 2017-03-14 10:00:49
I set up a single-node Hadoop development environment on a virtual machine, and I'm writing code on the host machine to work with HBase, but I hit the exception below.
If anyone is familiar with this area, please give me some pointers.

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.apache.hadoop.hbase.client.HBaseAdmin@f6a5f9
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Mar 14 09:47:03 CST 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=76731: row 'blog,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16020,1489449361159, seqNum=0

at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:416)
at com.water.hbase.utils.HBaseUtils.createTable(HBaseUtils.java:44)
at com.water.hbase.utils.ApplicationMain.main(ApplicationMain.java:9)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:134)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=76731: row 'blog,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16020,1489449361159, seqNum=0
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:416)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:722)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
... 4 more
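The bottom of the trace is the key: the client looked up the `blog` table's region in `hbase:meta` and was told the RegionServer is at `localhost,16020`, so from the host machine it connects to its *own* localhost and gets `Connection refused`. A minimal plain-JDK sketch (host/port taken from the trace) to confirm which address is actually reachable:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean probe(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The RegionServer registered itself as "localhost,16020", so a client
        // on the host machine dials its own localhost, where nothing listens
        // on 16020 -> "Connection refused".
        System.out.println("localhost:16020 -> " + probe("localhost", 16020, 3000));
        System.out.println("192.168.2.227:16020 -> " + probe("192.168.2.227", 16020, 3000));
    }
}
```

If the VM IP probe succeeds while the localhost probe fails, the cluster is fine and the problem is purely the hostname the RegionServer advertises.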



鱿鱼ing 2017-03-14
Check whether your own IP is also on the 192.168 subnet. If it is, add 192.168.2.227 to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts. If it isn't, it's best to switch the VM to bridged networking and obtain a new IP.
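A quick way to confirm the hosts entry took effect is to resolve the name with the JDK, the same way the HBase client will ("hbase-vm" below is a made-up example name, not from this thread):

```java
import java.net.InetAddress;

public class HostsCheck {
    /** Resolve a hostname the same way the HBase client will. */
    public static String resolve(String name) throws Exception {
        return InetAddress.getByName(name).getHostAddress();
    }

    public static void main(String[] args) throws Exception {
        // After adding a line such as
        //   192.168.2.227  hbase-vm
        // to C:\Windows\System32\drivers\etc\hosts ("hbase-vm" is an example
        // name), resolving it should print the VM's address, not 127.0.0.1:
        System.out.println(resolve(args.length > 0 ? args[0] : "localhost"));
    }
}
```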
Mr3-Water 2017-03-14
Quoting reply #2 by qq_30831935:
Besides, I think that localhost should be changed to the VM's IP.
This is what the code looks like:

public class HBaseUtils {

    static Configuration conf;

    static {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.2.227");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
    }

    /**
     * Create a table.
     *
     * @param tableName table name
     * @param family    column families
     * @throws Exception
     */
    public static void createTable(String tableName, String[] family) throws Exception {
        HBaseAdmin hBaseAdmin = new HBaseAdmin(conf);
        System.out.println(hBaseAdmin.toString());
        HTableDescriptor tableDescriptor = new HTableDescriptor(tableName);
        for (int i = 0; i < family.length; i++) {
            tableDescriptor.addFamily(new HColumnDescriptor(family[i]));
        }
        if (hBaseAdmin.tableExists(tableName)) {
            System.out.println("table is exist!");
            System.exit(0);
        } else {
            hBaseAdmin.createTable(tableDescriptor);
            System.out.println("create table success!");
        }
    }
}
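For reference, the `new HBaseAdmin(conf)` constructor used above was deprecated in HBase 1.0. A sketch of the same logic on the `ConnectionFactory`/`Admin` API, with the same quorum settings; it needs a reachable cluster, so treat it as an untested outline rather than a verified fix:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseUtilsV2 {
    public static void createTable(String tableName, String[] families) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.2.227");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // A Connection is heavyweight; in real code create it once and reuse it.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            TableName name = TableName.valueOf(tableName);
            if (admin.tableExists(name)) {
                System.out.println("table is exist!");
                return;
            }
            HTableDescriptor descriptor = new HTableDescriptor(name);
            for (String family : families) {
                descriptor.addFamily(new HColumnDescriptor(family));
            }
            admin.createTable(descriptor);
            System.out.println("create table success!");
        }
    }
}
```

Note this would still fail with the same timeout until the hostname problem is fixed; the API change just removes the deprecation and leaks no connections.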
Mr3-Water 2017-03-14
Quoting reply #2 by qq_30831935:
Besides, I think that localhost should be changed to the VM's IP.

static {
    conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "192.168.2.227");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    // conf.set("hbase.master", "master");
    // conf.set("hbase.rootdir", "hdfs://master:60000/hbase");
    // conf.setBoolean("hbase.cluster.distributed", true);
    // conf.setInt("hbase.client.scanner.caching", 10000);
    // conf.set("zookeeper.znode.parent", "/hbase");
    // conf.addResource("hbase-site.xml");
    // conf.addResource("core-site.xml");
    // conf.addResource("hdfs-site.xml");
}

I'm setting the VM's IP right here... and putting the file you mentioned on the classpath doesn't work either.
鱿鱼ing 2017-03-14
Besides, I think that localhost should be changed to the VM's IP.
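That matches the trace: the meta table records the RegionServer as `hostname=localhost,16020`, i.e. the server registered itself under a name only resolvable from inside the VM. Assuming the VM runs HBase 1.2 or later, one fix is to make the RegionServer advertise a routable address in the VM's `hbase-site.xml` and restart HBase:

```xml
<configuration>
  <!-- Advertise an address remote clients can reach instead of "localhost".
       hbase.regionserver.hostname exists in HBase 1.2+; on older versions,
       fix the VM's /etc/hosts so its hostname does not map to 127.0.0.1. -->
  <property>
    <name>hbase.regionserver.hostname</name>
    <value>192.168.2.227</value>
  </property>
</configuration>
```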
鱿鱼ing 2017-03-14
When I used it, I had to put hbase-site.xml under the project. Or post your code and configuration and we can take a look.
鱿鱼ing 2017-03-14
Quoting reply #7 by zmj132113:
    Quoting reply #6 by qq_30831935: Check whether your own IP is also on the 192.168 subnet. If it is, add 192.168.2.227 to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts. If it isn't, it's best to switch the VM to bridged networking and obtain a new IP.
    Running the program on the VM works. Thank you!
Haha, I worked on an HBase-related project before, operating an HBase cluster on Linux servers from my Windows machine, and that does work. If you run into more problems, feel free to @ me.
Mr3-Water 2017-03-14
Quoting reply #6 by qq_30831935:
Check whether your own IP is also on the 192.168 subnet. If it is, add 192.168.2.227 to the Windows hosts file at C:\Windows\System32\drivers\etc\hosts. If it isn't, it's best to switch the VM to bridged networking and obtain a new IP.
Running the program on the VM works. Thank you!
