Writing data from Spark to HBase

ghhg 2014-08-28 09:41:20
val result: org.apache.spark.rdd.RDD[(String, Int)]
result.foreach(res => {
  var put = new Put(java.util.UUID.randomUUID().toString.reverse.getBytes())
    .add("lv6".getBytes(), res._1.toString.getBytes(), res._2.toString.getBytes)
  table.put(put)
})
That is the program; result holds (key, value) pairs. Saving them to HBase fails with all kinds of not-serializable errors:
Exception in thread "Thread-3" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:186)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.hadoop.hbase.client.HTablePool$PooledHTable
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1044)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1028)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1026)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1026)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:771)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$16$$anonfun$apply$1.apply$mcVI$sp(DAGScheduler.scala:901)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$16$$anonfun$apply$1.apply(DAGScheduler.scala:898)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$16$$anonfun$apply$1.apply(DAGScheduler.scala:898)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$16.apply(DAGScheduler.scala:898)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskCompletion$16.apply(DAGScheduler.scala:897)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:897)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1226)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
...
chencang_xuejishu 2018-07-11
Just add this line and it works:
System.setProperty("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
LinkSe7en 2018-07-10
The key problem is that you reference the outer HTable object inside the foreach closure, and HTable is not serializable, so it has to be created inside the closure. But creating an HTable means creating an HConnection, and that is a heavyweight operation, so the recommendation is to do it with foreachPartition:

tb.foreachPartition( it => {
  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.hbase.TableName
  import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}

  // Build the HBase configuration once per partition
  val hdfsConf = new Configuration(true)
  hdfsConf.set("hbase.master", "datanode01:60000")
  hdfsConf.set("hbase.zookeeper.property.clientPort", "2181")
  hdfsConf.set("hbase.zookeeper.quorum", "namenode01:2181,datanode01:2181,datanode02:2181")

  // One connection and one table handle per partition, created on the executor
  val hbaseConn = ConnectionFactory.createConnection(hdfsConf)
  val tableName = TableName.valueOf("dev", "table_ttt")
  val table = hbaseConn.getTable(tableName)

  for (t <- it) {
    val put = new Put(ByteArrayUtil.toByte(t._1))
    put.addColumn("tb".getBytes, "tb".getBytes(), ByteArrayUtil.toByte(t._2))
    table.put(put)
  }
  table.close()
  hbaseConn.close()
})

This is way faster.
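A minimal sketch of the same idea with HBase's BufferedMutator, which buffers puts client-side and flushes them in batches (assuming an HBase 1.0+ client; the connection settings, table name, and the ByteArrayUtil helper are the same assumptions as above):

tb.foreachPartition( it => {
  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.hbase.TableName
  import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}

  val conf = new Configuration(true)
  conf.set("hbase.zookeeper.quorum", "namenode01:2181,datanode01:2181,datanode02:2181")

  val conn = ConnectionFactory.createConnection(conf)
  // BufferedMutator queues puts and writes them to HBase in batches
  val mutator = conn.getBufferedMutator(TableName.valueOf("dev", "table_ttt"))
  for (t <- it) {
    val put = new Put(ByteArrayUtil.toByte(t._1))
    put.addColumn("tb".getBytes, "tb".getBytes, ByteArrayUtil.toByte(t._2))
    mutator.mutate(put)
  }
  mutator.close()  // flushes any remaining buffered puts
  conn.close()
})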
smile326 2018-07-09
Quoting reply #3 from lsignsjisfsf:
"Following; I ran into the same problem."

How was this eventually resolved?
furanger 2015-01-26
Try this:

object Blaher {
  def blah(row: Array[String]) {
    val hConf = new HBaseConfiguration()
    val hTable = new HTable(hConf, "table")
    val thePut = new Put(Bytes.toBytes(row(0)))
    thePut.add(Bytes.toBytes("cf"), Bytes.toBytes(row(0)), Bytes.toBytes(row(0)))
    hTable.put(thePut)
  }
}

object TheMain extends Serializable {
  def run() {
    val ssc = new StreamingContext(sc, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9977, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.map(_.split(","))
    val store = words.foreachRDD(rdd => rdd.foreach(Blaher.blah))
    ssc.start()
  }
}

TheMain.run()

Making it serializable this way does the trick.
q79969786 2015-01-22
messages.map(new Function<Tuple2<String, String>, String>() {
    @Override
    public String call(Tuple2<String, String> tuple2) {
        return tuple2._2();
    }
}).foreach(new Function2<JavaRDD<String>, Time, Void>() {
    private HTableInterface table = null;

    @Override
    public Void call(JavaRDD<String> values, Time time) throws Exception {
        values.foreach(new VoidFunction<String>() {
            @Override
            public void call(String str) throws Exception {
                HConnection connection = HConnectionManager.createConnection(HBaseConfiguration.create());
                table = connection.getTable(tableName);
                String[] strings = SPACE.split(str);
                String tableName = strings[0];
                String type = strings[1];
                //if(null == table){
                connection.getTable(tableName);
                table.setAutoFlush(false);
                //}
                if (type.equals("DELETE")) {
                    Delete d = new Delete(strings[2].getBytes());
                    try {
                        table.delete(d);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                } else {
                    Put p = new Put(Bytes.toBytes(strings[2]));
                    for (int i = 3; i < strings.length; i++) {
                        String cstr = strings[i];
                        int index = cstr.indexOf("=");
                        if (index > 0) {
                            p.add(Bytes.toBytes(columnFamily),
                                  Bytes.toBytes(cstr.substring(0, index)),
                                  Bytes.toBytes(cstr.substring(index + 1)));
                        } else {
                            p.add(Bytes.toBytes(columnFamily), Bytes.toBytes(cstr), Bytes.toBytes(""));
                        }
                    }
                    table.put(p);
                }
                table.flushCommits();
            }
        });
        return null;
    }
});
lsignsjisfsf 2014-10-11
Following; I ran into the same problem.
All the classes involved need to be serializable. If that still doesn't work, you can give up foreach and use a different approach instead: http://blog.csdn.net/fighting_one_piece/article/details/38437647 Hope this helps.
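One such alternative is writing the RDD through HBase's TableOutputFormat instead of a per-record foreach; a minimal sketch, assuming the question's result RDD and its "lv6" column family plus a 0.98-era HBase client (the table name is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job

// Point TableOutputFormat at the target table
val hConf = HBaseConfiguration.create()
hConf.set(TableOutputFormat.OUTPUT_TABLE, "my_table")  // hypothetical table name
val job = Job.getInstance(hConf)
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

// Turn each (key, value) pair into a Put and let Spark write the whole RDD
val puts = result.map { case (k, v) =>
  val put = new Put(Bytes.toBytes(k))
  put.add(Bytes.toBytes("lv6"), Bytes.toBytes(k), Bytes.toBytes(v.toString))
  (new ImmutableBytesWritable, put)
}
puts.saveAsNewAPIHadoopDataset(job.getConfiguration)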
coolbamboo2008 2014-08-29
I'm also just getting started with Spark. My suggestion for this problem: have the classes involved implement the Serializable interface. It also looks like your Spark is integrated with Hadoop, and I'm not sure whether some Hadoop-internal classes are not serializable.
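A minimal sketch of that idea, assuming a 0.9x-era HBase client (the table name is illustrative, the "lv6" family is taken from the question): keep the non-serializable HTable behind a serializable wrapper as a @transient lazy val, so each executor rebuilds it instead of shipping it in the closure.

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{HTable, Put}

// Hypothetical wrapper: the object is serializable, while the HTable is
// @transient and re-created lazily on each executor that touches it.
object HBaseWriter extends Serializable {
  @transient lazy val table = new HTable(HBaseConfiguration.create(), "my_table")

  def write(key: String, value: Int): Unit = {
    val put = new Put(key.getBytes)
    put.add("lv6".getBytes, key.getBytes, value.toString.getBytes)
    table.put(put)
  }
}

// Usage: the closure now only references the serializable wrapper
result.foreach { case (k, v) => HBaseWriter.write(k, v) }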
