hive-0.12: INSERT fails with "return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask"

sterling_wu 2014-10-22 05:27:56
hadoop-1.1.2, hbase-0.94.23, zookeeper-3.4.6, hive-0.12.0, mysql, redhat el6.
In Hive I created an HBase-integrated table, hbase_table_6, and successfully added one record to it through the hbase shell.
The record is visible from Hive, but inserting data into the table from Hive fails with:
Job Submission failed with exception 'java.lang.IllegalArgumentException(Can not create a Path from an empty string)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
I'm a Hadoop beginner; any pointers would be much appreciated.

My configuration follows http://blog.csdn.net/hguisu/article/details/7282050 ;
The first error I hit was "return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask". I spent a whole day on it without solving it, and now the error has somehow changed to the one above.

Execution output:
<<<<<<<<<<<<<<<<<<
[hadoop@icity2 ~]$ hive --auxpath /home/hadoop/hive/lib/hbase-0.94.23.jar, /home/hadoop/hive/lib/hive-hbase-handler-0.12.0.jar, /home/hadoop/hive/lib/zookeeper-3.4.6.jar

Logging initialized using configuration in jar:file:/home/hadoop/hive/lib/hive-common-0.12.0.jar!/hive-log4j.properties
hive> select * from hbase_table_6;
OK
100 360buy.com
Time taken: 5.505 seconds, Fetched: 1 row(s)
hive> insert overwrite table hbase_table_6 select * from pokes1 where foo=86;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.lang.IllegalArgumentException: Can not create a Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:82)
at org.apache.hadoop.fs.Path.<init>(Path.java:90)
at org.apache.hadoop.fs.Path.<init>(Path.java:50)
at org.apache.hadoop.mapred.JobClient.copyRemoteFiles(JobClient.java:688)
at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:792)
at org.apache.hadoop.mapred.JobClient.copyAndConfigureFiles(JobClient.java:717)
at org.apache.hadoop.mapred.JobClient.access$400(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:927)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:144)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Job Submission failed with exception 'java.lang.IllegalArgumentException(Can not create a Path from an empty string)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
hive>
>>>>>>>>>>>>>>>>>>


hive-site.xml:
<<<<<<<<<<<<<<<<<<
<property>
<name>hive.exec.scratchdir</name>
<value>/hive/scratchdir</value>
<description>Scratch space for Hive jobs</description>
</property>

<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/${user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://10.103.41.56:3306/hiveMeta?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hivedb</value>
<description>username to use against metastore database</description>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>

<property>
<name>hive.metastore.warehouse.dir</name>
<value>/hive/warehousedir</value>
<description>location of default database for the warehouse</description>
</property>
>>>>>>>>>>>>>>>>>>
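A side note on the exception itself: Hadoop builds a Path object from every comma-separated entry in the auxiliary-jar list, so a trailing comma or empty entry in --auxpath (or hive.aux.jars.path) produces exactly this "Can not create a Path from an empty string" error. The hive invocation in the transcript above has spaces after the commas, which the shell splits into separate arguments, leaving --auxpath with a trailing comma. A sketch of the invocation with the list passed as one argument (same jar paths as in the transcript); this is a likely trigger to rule out, not a confirmed diagnosis:

```shell
# All three jars in one comma-separated argument, with no spaces:
# a space after a comma makes the shell cut the list short, leaving a
# trailing comma that Hadoop turns into an empty Path.
hive --auxpath /home/hadoop/hive/lib/hbase-0.94.23.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.12.0.jar,/home/hadoop/hive/lib/zookeeper-3.4.6.jar
```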
22 replies
vigiles 2015-05-16
OP, please take a look at http://q.cnblogs.com/q/72132/ . I'm hitting the same problem, though my environment is much simpler, and I can't figure out the cause. Please help.
sterling_wu 2014-10-23
Quoting reply #5 by wulinshishen:
When starting hive, add -hiveconf hive.root.logger=DEBUG,console and see whether it gives more detailed output.
It's too long to post in full; here is the last part:

14/10/23 12:06:47 DEBUG hdfs.DFSClient: DFSClient for block blk_-3850808407694401370_1888 Replies for seqno 139 are SUCCESS SUCCESS
14/10/23 12:06:47 DEBUG hdfs.DFSClient: DFSClient for block blk_-3850808407694401370_1888 Replies for seqno 140 are SUCCESS SUCCESS
14/10/23 12:06:47 DEBUG hdfs.DFSClient: Closing old block blk_-3850808407694401370_1888
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #96
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #96
14/10/23 12:06:47 DEBUG ipc.RPC: Call: complete 12
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #97
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #97
14/10/23 12:06:47 DEBUG ipc.RPC: Call: setReplication 6
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #98
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #98
14/10/23 12:06:47 DEBUG ipc.RPC: Call: setPermission 6
14/10/23 12:06:47 DEBUG exec.Utilities: No plan file found: hdfs://icity0:9000/hive/scratchdir/hive_2014-10-23_12-06-35_679_5471194250451391944-1/-mr-10001/e038a7b3-20af-42d8-b7f1-758136a41a9d/reduce.xml
14/10/23 12:06:47 INFO mapred.JobClient: Cleaning up the staging area hdfs://icity0:9000/home/hadoop/hadoop/tmp/mapred/staging/hadoop/.staging/job_201410211348_0035
java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:514)
 at java.util.Properties.setProperty(Properties.java:161)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:419)
 at org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1840)
 at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:947)
 at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:951)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
 at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
 at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:144)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Job Submission failed with exception 'java.lang.NullPointerException(null)'
14/10/23 12:06:47 ERROR exec.Task: Job Submission failed with exception 'java.lang.NullPointerException(null)'
java.lang.NullPointerException
 at java.util.Hashtable.put(Hashtable.java:514)
 at java.util.Properties.setProperty(Properties.java:161)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:419)
 at org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1840)
 at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:947)
 at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:951)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
 at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:886)
 at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
 at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:144)
 at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
 at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
 at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1414)
 at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1192)
 at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1020)
 at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #99
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #99
14/10/23 12:06:47 DEBUG ipc.RPC: Call: getFileInfo 3
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #100
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #100
14/10/23 12:06:47 DEBUG ipc.RPC: Call: delete 8
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #101
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #101
14/10/23 12:06:47 DEBUG ipc.RPC: Call: getFileInfo 1
14/10/23 12:06:47 INFO ql.Driver: </PERFLOG method=task.MAPRED.Stage-0 start=1414037201287 end=1414037207867 duration=6580>
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/10/23 12:06:47 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/10/23 12:06:47 INFO ql.Driver: </PERFLOG method=Driver.execute start=1414037201281 end=1414037207877 duration=6596>
14/10/23 12:06:47 INFO ql.Driver: <PERFLOG method=releaseLocks>
14/10/23 12:06:47 INFO ql.Driver: </PERFLOG method=releaseLocks start=1414037207877 end=1414037207877 duration=0>
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop sending #102
14/10/23 12:06:47 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop got value #102
14/10/23 12:06:47 DEBUG ipc.RPC: Call: delete 7
14/10/23 12:06:47 INFO ql.Driver: <PERFLOG method=releaseLocks>
14/10/23 12:06:47 INFO ql.Driver: </PERFLOG method=releaseLocks start=1414037207890 end=1414037207890 duration=0>
hive> 14/10/23 12:06:52 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9001 from hadoop: closed
14/10/23 12:06:52 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9001 from hadoop: stopped, remaining connections 1
14/10/23 12:06:57 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop: closed
14/10/23 12:06:57 DEBUG ipc.Client: IPC Client (47) connection to icity0/10.103.41.54:9000 from hadoop: stopped, remaining connections 0
------------------------------------------------
No plan file found: hdfs://icity0:9000/hive/scratchdir/hive_2014-10-23_12-06-35_679_5471194250451391944-1/-mr-10001/e038a7b3-20af-42d8-b7f1-758136a41a9d/reduce.xml
Is this a clue?
sterling_wu 2014-10-23
Quoting reply #4 by sky_walker85:
What is hadoop's fs.default.name set to?

core-site.xml
<<<<<<<<<<<<<<<<<<
<property>
<name>fs.default.name</name>
<value>hdfs://icity0:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop/tmp</value>
</property>
>>>>>>>>>>>>>>>>>>

hdfs-site.xml
<<<<<<<<<<<<<<<<<<
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/home/hadoop/hadoop/conf/excludes</value>
</property>
>>>>>>>>>>>>>>>>>>

Three nodes: icity0, icity1, icity2. icity0 is the NameNode + JobTracker; the other two are slaves. MySQL and Hive are installed only on icity2.
wulinshishen (reply #5)
When starting hive, add -hiveconf hive.root.logger=DEBUG,console and see whether it gives more detailed output.
skyWalker_ONLY 2014-10-23
What is hadoop's fs.default.name set to?
sterling_wu 2014-10-23
Many thanks to sky_walker85 and wulinshishen for the quick replies. I tried both suggestions and still get the same error. This is what I added:

<property>
<name>hive.aux.jars.path</name>
<value>file:///home/hadoop/hive/lib/hbase-0.94.23.jar,file:///home/hadoop/hive/lib/hive-hbase-handler-0.12.0.jar,file:///home/hadoop/hive/lib/zookeeper-3.4.6.jar,file:///home/hadoop/hive/lib/guava-11.0.2.jar,file:///home/hadoop/hive/lib/hive-common-0.12.0.jar,file:///home/hadoop/hive/lib/protobuf-java-2.4.1.jar</value>
</property>

Two questions:
1. My HBase is 0.94.23 and has no hbase-common*.jar, only hbase-0.94.23.jar and hbase-0.94.23-tests.jar. The HBase jar bundled with Hive is hbase-0.94.6.1.jar, which looked rather old, so I used the jar from my HBase install and copied it into hive/lib.
2. hbase-0.94.23 ships protobuf-java-2.4.0a.jar, while hive-0.12 ships protobuf-java-2.4.1.jar. Could that be related?
sterling_wu 2014-10-23
Aha, solved it! The slow responses earlier were probably because I had the wrong ZooKeeper port. ZooKeeper is configured to listen on port 2181; after starting hive with that port and creating a new integrated table, I can now insert records. Many thanks to you both. As for why my Hive had to move to the NameNode host, I'll dig into that later.
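For readers hitting the same symptom: Hive's HBase handler takes the ZooKeeper ensemble and client port from the session configuration, and they must match what ZooKeeper actually listens on (2181 unless zoo.cfg says otherwise). A sketch of the fix as a hive-site.xml fragment, using the host names from this thread; putting it in the config avoids retyping the -hiveconf flags on every start:

```xml
<!-- Ensemble and port that Hive's HBaseStorageHandler will contact.
     clientPort must match the clientPort in zoo.cfg (2181 here). -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>icity0,icity1,icity2</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```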
sterling_wu 2014-10-23
Thanks for the advice, but I don't really have a choice: this is a task from work, and I first need a working environment for testing. Without the integration, operating HBase directly is too cumbersome to test against.
skyWalker_ONLY 2014-10-23
Quoting reply #18 by sterling_wu:
CREATE TABLE hbase_table_7(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz7");
This is really slow: still no response. Earlier, select * from hbase_table_6; also took over 20 minutes to return, and it too reported "missing table". select * from pokes1 where foo=86, however, returns quickly. What I did was scp the hive directory from icity2 over to icity0, add the corresponding environment variables, and start Hive again on icity0.
One question: with CREATE TABLE pokes1 (foo INT, bar STRING);, does the table live in MySQL or in Hive? If pokes1 is in MySQL, and the mapping definition for hbase_table_6 is also in MySQL, I don't understand why it would report "missing table".

That table lives in Hive; MySQL only stores Hive's metadata, such as table names, functions, and so on. Also, I'd suggest not rushing the HBase integration: first get a solid grasp of Hive itself, then try integrating HBase. For Hive tutorials see http://blog.csdn.net/column/details/hive-home.html
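To see the split for yourself: table definitions live in the metastore database (hiveMeta in the hive-site.xml posted above), while the rows live in HDFS under hive.metastore.warehouse.dir, or in HBase for handler tables. A quick read-only check against the metastore; TBLS and its TBL_NAME/TBL_TYPE columns are part of the standard Hive metastore schema:

```sql
-- Run in the MySQL client against the metastore database.
-- Both pokes1 and hbase_table_6 should be listed here: the metastore
-- records the definition regardless of where the data is stored.
USE hiveMeta;
SELECT TBL_NAME, TBL_TYPE FROM TBLS;
```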
sterling_wu 2014-10-23
CREATE TABLE hbase_table_7(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz7");
This is really slow: still no response. Earlier, select * from hbase_table_6; also took over 20 minutes to return, and it too reported "missing table". select * from pokes1 where foo=86, however, returns quickly. What I did was scp the hive directory from icity2 over to icity0, add the corresponding environment variables, and start Hive again on icity0.
One question: with CREATE TABLE pokes1 (foo INT, bar STRING);, does the table live in MySQL or in Hive? If pokes1 is in MySQL, and the mapping definition for hbase_table_6 is also in MySQL, I don't understand why it would report "missing table".
sterling_wu 2014-10-23
show tables does list the table, though. Let me try creating a new table and see.
sterling_wu 2014-10-23
query returned non-zero code 40000, case:failed ParseExeception line 1:12 missing table at 'hbase_table_6' near '<EOF>'
skyWalker_ONLY 2014-10-23
Quoting reply #14 by sterling_wu:
Hive has been moved to the NameNode host icity0; MySQL stays on icity2. On icity0 I start:
hive --service metastore -hiveconf hbase.zookeeper.quorum=icity0,icity1,icity2 -hiveconf hbase.zookeeper.property.clientPort=2222
hive --service hiveserver -hiveconf hbase.zookeeper.quorum=icity0,icity1,icity2 -hiveconf hbase.zookeeper.property.clientPort=2222
Then I log into Hive with hive -h icity0 -p 10000. show tables works; select * from hbase_table_6; returns without an error but also without any output, and the hiveserver window shows "OK".

Try running an insert.
sterling_wu 2014-10-23
Hive has been moved to the NameNode host icity0; MySQL stays on icity2. On icity0 I start:
hive --service metastore -hiveconf hbase.zookeeper.quorum=icity0,icity1,icity2 -hiveconf hbase.zookeeper.property.clientPort=2222
hive --service hiveserver -hiveconf hbase.zookeeper.quorum=icity0,icity1,icity2 -hiveconf hbase.zookeeper.property.clientPort=2222
Then I log into Hive with hive -h icity0 -p 10000. show tables works; select * from hbase_table_6; returns without an error but also without any output, and the hiveserver window shows "OK".
skyWalker_ONLY 2014-10-23
Quoting reply #12 by sterling_wu:
Does MySQL also need to move to the NameNode host along with Hive? I had permission problems when first setting up MySQL. I installed MySQL on icity2, created the user, and ran grant all privileges on *.* to 'hivedb'; but mysql -u hive -p refused to log in locally no matter what. Only after a separate grant all privileges on *.* to 'hivedb'@'localhost'; could I log in. Then show tables failed again after starting Hive; the debug output showed access denied for 'hivedb'@'icity2', which was quite frustrating, so I also ran grant all privileges on *.* to 'hivedb'@'icity2';
mysql> select user,host,password from mysql.user;
+--------+-----------+-------------------------------------------+
| user   | host      | password                                  |
+--------+-----------+-------------------------------------------+
| root   | localhost | *6F1F0C7FA57187E0D959C29F4BC7F826ED5A9169 |
| root   | icity2    |                                           |
| root   | 127.0.0.1 |                                           |
|        | localhost |                                           |
|        | icity2    |                                           |
| hivedb | %         | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hivedb | localhost | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hivedb | icity2    | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
+--------+-----------+-------------------------------------------+

You don't need to move MySQL. For the MySQL issues, see the blog post http://blog.csdn.net/skywalker_only/article/details/37872833; I ran into a problem similar to yours before.
sterling_wu 2014-10-23
Does MySQL also need to move to the NameNode host along with Hive? I had permission problems when first setting up MySQL. I installed MySQL on icity2, created the user, and ran grant all privileges on *.* to 'hivedb'; but mysql -u hive -p refused to log in locally no matter what. Only after a separate grant all privileges on *.* to 'hivedb'@'localhost'; could I log in. Then show tables failed again after starting Hive; the debug output showed access denied for 'hivedb'@'icity2', which was quite frustrating, so I also ran grant all privileges on *.* to 'hivedb'@'icity2';

mysql> select user,host,password from mysql.user;
+--------+-----------+-------------------------------------------+
| user   | host      | password                                  |
+--------+-----------+-------------------------------------------+
| root   | localhost | *6F1F0C7FA57187E0D959C29F4BC7F826ED5A9169 |
| root   | icity2    |                                           |
| root   | 127.0.0.1 |                                           |
|        | localhost |                                           |
|        | icity2    |                                           |
| hivedb | %         | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hivedb | localhost | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hivedb | icity2    | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
+--------+-----------+-------------------------------------------+
skyWalker_ONLY 2014-10-23
I doubt it's a version problem. Try installing Hive on the NameNode host, although I suspect that's not it either.
sterling_wu 2014-10-23
Table 16.2. RHEL/CentOS 6
Project: Download
Hadoop: hadoop-2.2.0.2.0.6.0-76.tar.gz
Pig: pig-0.12.0.2.0.6.0-76.tar.gz
Hive and HCatalog: hive-0.12.0.2.0.6.0-76.tar.gz, hcatalog-0.12.0.2.0.6.0-76.tar.gz
HBase and ZooKeeper: hbase-0.96.0.2.0.6.0-76-hadoop2-bin.tar.gz, zookeeper-3.4.5.2.0.6.0-76.tar.gz
------------------------------------------------------------------
My versions: hadoop-1.1.2, hbase-0.94.23, zookeeper-3.4.6, hive-0.12, none of them recompiled. Is the implication that my hadoop and hbase versions are too old?
sterling_wu 2014-10-23
Quoting reply #8 by wulinshishen:
insert overwrite table hbase_table_6 select * from pokes1 where foo=86; try turning * into explicit columns. If that doesn't work, try a different hive build: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.8.0/bk_installing_manually_book/content/rpm-chap13.html

Replacing * with explicit columns still fails. Are you suggesting the hive-0.12.0.2.0.6.0-76.tar.gz build from that link?
wulinshishen (reply #8)
insert overwrite table hbase_table_6 select * from pokes1 where foo=86; try turning * into explicit columns. If that doesn't work, try a different hive build: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.8.0/bk_installing_manually_book/content/rpm-chap13.html
(2 more replies not shown)
