Hive MapReduce job fails with: this version of libhadoop was built without snappy support

lookat800 2018-01-26 04:29:28
Environment: CDH 5.7 (parcel install) on CentOS 6.8.
While Hive runs the MapReduce job, the job log shows the following error:

native snappy library not available: this version of libhadoop was built without snappy support.
———————————————— The Hive CLI session log follows ————————————————
Query ID = SJZX_20180126111919_ab2e3db7-5a13-4741-a05f-8744b48faad7
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1516936313320_0002, Tracking URL = http://dsjpt1.test.com:8088/proxy/application_1516936313320_0002/
Kill Command = /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/hadoop/bin/hadoop job -kill job_1516936313320_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-01-26 11:20:18,895 Stage-1 map = 100%, reduce = 100%
Ended Job = job_1516936313320_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1516936313320_0002_m_000000 (and more) from job job_1516936313320_0002

Task with the most failures(1):
-----
Task ID:
task_1516936313320_0002_m_000000

URL:
http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1516936313320_0002&tipid=task_1516936313320_0002_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:134)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:165)
at org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:114)
at org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:97)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1606)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1486)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:460)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:388)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:302)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:187)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:230)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
————————————————————————————————————————————————————————————————————-
I assumed the native libraries were unavailable, but hadoop checknative reports that they are:

18/01/26 03:25:03 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
18/01/26 03:25:03 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so


The question at http://ask.csdn.net/questions/179588 looks similar to this one.


I also suspected that Snappy compression was not installed properly and that HBase Snappy compression would fail as well, so I tested it: I was able to create a Snappy-compressed table and put data into it (roughly the commands sketched below).
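For reference, a minimal sketch of the HBase shell check, assuming a throwaway table; the table and column family names are just examples I chose for illustration:

  create 'snappy_test', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
  put 'snappy_test', 'row1', 'cf:q1', 'value1'
  scan 'snappy_test'

If the cluster-side Snappy codec were broken, creating or writing to such a table would be expected to fail, so this at least confirms the native Snappy library itself is usable on the cluster.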
Can any expert help resolve this?



lookat800 2018-01-28
I have solved the problem myself; leaving a note here for anyone else who hits it. The cause was that the Ubertask optimization was enabled on the YARN side: mapreduce.job.ubertask.enable had been set to true. Removing that parameter, together with the corresponding mapreduce.job.ubertask.maxmaps and mapreduce.job.ubertask.maxreduces parameters, fixed it. (This fits the stack trace above: the LocalContainerLauncher frames suggest the map task was running in uber mode inside the ApplicationMaster JVM, which apparently did not have the native Snappy library on its library path.)
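For reference, a minimal sketch of the change, assuming the override lives in mapred-site.xml (on CDH this would normally be done through Cloudera Manager rather than by editing the file directly); the property names are the standard MapReduce ones, and the values shown simply turn uber mode off:

  <!-- mapred-site.xml: disable uber mode so tasks run in normal YARN containers -->
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>false</value>
  </property>
  <!-- mapreduce.job.ubertask.maxmaps / mapreduce.job.ubertask.maxreduces only matter
       while uber mode is enabled; remove the overrides or leave them at their defaults -->

As a quicker test (my own suggestion, not what was done here), the same setting can be applied for a single Hive session:

  set mapreduce.job.ubertask.enable=false;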
