Problem integrating Flume with Kafka

zhtwave 2016-03-29 05:50:06
In Flume, I use Kafka as the input source and write the data to HDFS through a memory channel. When the volume of data in Kafka is large, the following error occurs:

16/03/24 16:57:31 ERROR network.BoundedByteBufferReceive: OOME with size 722234
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at kafka.network.BoundedByteBufferReceive.byteBufferAllocate(BoundedByteBufferReceive.scala:80)
at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:63)
at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:81)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:71)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:109)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:109)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:108)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:108)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:107)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:96)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:88)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
16/03/24 16:57:32 ERROR consumer.ConsumerFetcherThread: [ConsumerFetcherThread-flume_ibd105-1458809849949-5f4498cc-0-46], Error due to
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ConsumerFetcherThread-flume_ibd105-1458809849949-5f4498cc-0-45" java.lang.OutOfMemoryError: GC overhead limit exceeded
16/03/24 16:57:32 ERROR consumer.ConsumerFetcherThread: [ConsumerFetcherThread-flume_ibd105-1458809849949-5f4498cc-0-45], Error due to
16/03/24 16:57:32 INFO consumer.ConsumerFetcherThread: [ConsumerFetcherThread-flume_ibd105-1458809849949-5f4498cc-0-46], Stopped
Exception in thread "ConsumerFetcherThread-flume_ibd105-1458809849949-5f4498cc-0-43" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "metrics-meter-tick-thread-2" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "ConsumerFetcherThread-flume_ibd105-1458809849949-5f4498cc-0-44" java.lang.OutOfMemoryError: GC overhead limit exceeded
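
For reference, a minimal sketch of the setup described above (Kafka source, memory channel, HDFS sink) in Flume 1.6-era properties syntax. The agent name, topic, ZooKeeper address, and HDFS path are hypothetical placeholders:

# Hypothetical agent "a1": Kafka source -> memory channel -> HDFS sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Kafka source (ZooKeeper-based consumer, as used by Flume 1.6)
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.zookeeperConnect = zkhost:2181
a1.sources.r1.topic = my_topic
a1.sources.r1.channels = c1

# Memory channel: every buffered event lives on the agent's heap,
# so a large backlog plus a small heap is exactly what can trigger
# the OutOfMemoryError shown above
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

# HDFS sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events/%Y%m%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.channel = c1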
燕少༒江湖 2017-09-10
How do you specify it? Could you explain concretely? I've hit this problem and don't know how to handle it.
zhtwave 2016-05-10
The problem is solved. It was because, at launch time, -conf did not point to the actual configuration, so the default configuration was used, which only gives about 400 MB of memory. You have to point it at the real configuration files and set the runtime memory size there (mine is 16 GB)!
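
For anyone asking how to specify it: a sketch of the fix described above, assuming a standard Flume installation layout (the heap sizes and file names are illustrative). The agent's JVM heap is set through JAVA_OPTS in conf/flume-env.sh, and --conf must point at the directory containing that file or the setting is never picked up:

# conf/flume-env.sh: raise the agent's JVM heap (16 GB, as in the fix above)
export JAVA_OPTS="-Xms4g -Xmx16g"

# Launch with --conf pointing at the directory that holds flume-env.sh;
# otherwise Flume falls back to its small default heap settings
flume-ng agent --conf conf --conf-file conf/kafka-to-hdfs.properties --name a1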
zhtwave 2016-04-25
I could really use some strong support from the experts here, thanks!
