Hive + HBase integration error: Failed with exception java.io.IOException: java.lang.NullPointerException

ramontop1 2014-08-29 03:35:13
I've only been working with Hadoop for a short while, with data mining projects coming up later, and I'm stuck on integrating Hive and HBase. Each runs fine on its own, but creating an HBase-backed table from Hive has been a real pain. Loading data into it works too; it's querying that fails.

Creating the table:

hive> CREATE TABLE hbase_table_1(key int, value string)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
> TBLPROPERTIES ("hbase.table.name" = "xyz");
OK
Time taken: 1.603 seconds
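
As a side note, since the table is not EXTERNAL, the storage handler creates the underlying HBase table xyz itself; a quick sanity check from the hbase shell (assuming it's on the PATH):

hbase shell
list             # 'xyz' should appear
describe 'xyz'   # should show the single column family 'cf1'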


Loading data:

hive> INSERT OVERWRITE TABLE hbase_table_1 SELECT * FROM pokes WHERE foo=86;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1409219405692_0016, Tracking URL = http://cluster01:8888/proxy/application_1409219405692_0016/
Kill Command = /home/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1409219405692_0016
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2014-08-29 15:40:20,961 Stage-0 map = 0%, reduce = 0%
2014-08-29 15:40:29,406 Stage-0 map = 100%, reduce = 0%, Cumulative CPU 3.89 sec
MapReduce Total cumulative CPU time: 3 seconds 890 msec
Ended Job = job_1409219405692_0016
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 3.89 sec HDFS Read: 6016 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 890 msec
OK
Time taken: 21.2 seconds
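
(pokes is presumably the sample table from the Hive getting-started guide, with columns foo INT and bar STRING.) Since the insert job succeeded, the row should already be visible on the HBase side, which narrows the failure to the read path; a quick check from the hbase shell:

hbase shell
scan 'xyz'   # expect one row keyed 86 with column cf1:val = val_86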


And then... Failed with exception java.io.IOException:java.lang.NullPointerException

hive> SELECT * FROM hbase_table_1;
OK
Failed with exception java.io.IOException:java.lang.NullPointerException
Time taken: 0.131 seconds


hive.log:

2014-08-29 15:25:42,121 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(135)) - </PERFLOG method=Driver.run start=1409297141731 end=1409297142121 duration=390 from=org.apache.hadoop.hive.ql.Driver>
2014-08-29 15:25:42,204 ERROR [main]: CliDriver (SessionState.java:printError(545)) - Failed with exception java.io.IOException:java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:636)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:534)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:137)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1519)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:285)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.net.DNS.reverseDns(DNS.java:92)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:218)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:184)
at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:479)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:418)
at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:561)
... 14 more

2014-08-29 15:25:42,204 INFO [main]: exec.TableScanOperator (Operator.java:close(574)) - 0 finished. closing...
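
The NPE originates in org.apache.hadoop.net.DNS.reverseDns, which HBase's TableInputFormatBase calls while computing splits to map each region server's IP back to a hostname through a DNS PTR lookup; when the resolver can't produce a record for that IP, the call falls over with exactly this trace. Reverse resolution can be tested outside Hive from any node (the IP is a placeholder; try each region server's address, and note that host and dig query DNS directly and ignore /etc/hosts):

host 192.168.1.101            # should print a hostname, not NXDOMAIN
dig -x 192.168.1.101 +short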


hive-site.xml:

<configuration>

  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://cluster01:9000/hive/warehouse</value>
  </property>

  <property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://cluster01:9000/hive/scratchdir</value>
  </property>

  <property>
    <name>hive.querylog.location</name>
    <value>file:///var/hadoop/hive/logs</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://cluster01:3306/hive?createDatabaseIfNotExist=true</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>

  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///home/hadoop/hive-0.13.1/lib/hive-hbase-handler-0.13.1.jar,file:///home/hadoop/hive-0.13.1/lib/protobuf-java-2.5.0.jar,file:///home/hadoop/hive-0.13.1/lib/hbase-client-0.96.2-hadoop2.jar,file:///home/hadoop/hive-0.13.1/lib/hbase-common-0.96.2-hadoop2.jar,file:///home/hadoop/hive-0.13.1/lib/hbase-protocol-0.96.2-hadoop2.jar,file:///home/hadoop/hive-0.13.1/lib/hbase-server-0.96.2-hadoop2.jar,file:///home/hadoop/hive-0.13.1/lib/zookeeper-3.4.6.jar,file:///home/hadoop/hive-0.13.1/lib/guava-11.0.2.jar</value>
  </property>

  <property>
    <name>hive.zookeeper.quorum</name>
    <value>cluster01,cluster02,cluster03,cluster04,cluster05,cluster06,cluster07</value>
  </property>

</configuration>
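
Unrelated to the NPE, but worth double-checking: hive.zookeeper.quorum only points Hive's own ZooKeeper client (used for locking) at the ensemble; the HBase storage handler looks for hbase.zookeeper.quorum instead. If hbase-site.xml isn't already on Hive's classpath, a property along these lines is typically needed as well (a sketch; point it at whichever nodes actually run HBase's ZooKeeper):

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>cluster01,cluster02,cluster03</value>
</property>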


Other details:
hadoop@cluster01:~$ java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

hadoop@cluster01:~$ uname -a
Linux cluster01 3.13.0-34-generic #60-Ubuntu SMP Wed Aug 13 15:45:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

hadoop@cluster01:~$ ls | grep -
hadoop-2.2.0
hbase-0.96.2
hive-0.13.1
mahout-0.9
zookeeper-3.4.6
4 replies
ramontop1 2014-09-03
OK, I spent this morning reading up on DNS servers, stood one up, and the error is gone.

hive> select * from hbase_table_1;
OK
86      val_86
Time taken: 0.214 seconds, Fetched: 1 row(s)
Odd that it insists on us running DNS, but with a DNS server in place, expanding the cluster later will be much more convenient.
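
For anyone hitting the same wall: a full-blown BIND setup isn't required. dnsmasq is a lightweight alternative, since out of the box it answers both forward (A) and reverse (PTR) queries from the /etc/hosts of the machine it runs on. A minimal sketch (addresses are made up; list every node):

# /etc/hosts on the dnsmasq host -- dnsmasq serves A and PTR records from these entries
192.168.1.101 cluster01
192.168.1.102 cluster02
# ...one line per node, through cluster07

# /etc/resolv.conf on every cluster node
nameserver 192.168.1.101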
ramontop1 2014-09-02
Tried select key from hbase_table_1 — the log looks DNS-related. Could something be wrong with my DNS? Googling like mad.

hive> select key from hbase_table_1;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.lang.NullPointerException
        at org.apache.hadoop.net.DNS.reverseDns(DNS.java:92)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:218)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:184)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:479)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:291)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:372)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:316)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:518)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
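
For what it's worth, the lookup DNS.reverseDns performs can be reproduced in a few lines of standalone Java: build the in-addr.arpa name for an IP and ask JNDI's DNS provider for its PTR record. A sketch for probing one host at a time (the default hostname is a placeholder; IPv4 only):

import java.net.InetAddress;

import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class ReverseDnsCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder default host; pass a region server hostname or IP as the first argument
        InetAddress addr = InetAddress.getByName(args.length > 0 ? args[0] : "cluster01");
        byte[] ip = addr.getAddress();          // network byte order, 4 bytes for IPv4
        // Reverse the octets to form the PTR query name, e.g. 1.2.3.4 -> 4.3.2.1.in-addr.arpa
        String ptrName = (ip[3] & 0xff) + "." + (ip[2] & 0xff) + "."
                + (ip[1] & 0xff) + "." + (ip[0] & 0xff) + ".in-addr.arpa";
        // The same kind of JNDI DNS lookup reverseDns issues; a missing PTR record
        // typically surfaces as a NamingException or a null attribute
        DirContext ctx = new InitialDirContext();
        try {
            Attributes attrs = ctx.getAttributes("dns:///" + ptrName, new String[] { "PTR" });
            System.out.println(ptrName + " -> " + attrs.get("PTR").get());
        } finally {
            ctx.close();
        }
    }
}

If this throws a NamingException (or the PTR attribute comes back null) for any region server's address, Hive's split calculation will trip over the same missing record.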
SG90 2014-08-30
I can't tell where the problem is.
ramontop1 2014-08-30
Yesterday, while configuring the Eclipse plugin to connect to Hadoop, clicking the Hadoop location after every startup also popped up a NullPointerException. I'm completely lost.
If I really can't track this down, would you suggest switching versions?
