
I am running a Spark Streaming job and want to publish result_DStream, which is of type DStream[GenericData.Record], to Kafka. I used the code below for that purpose, but whenever it tries to publish to Kafka it fails with a Task not serializable exception.

val prod_props : Properties = new Properties() 
prod_props.put("bootstrap.servers" , "localhost:9092") 
prod_props.put("key.serializer" , "org.apache.kafka.common.serialization.StringSerializer") 
prod_props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer") 

val _producer : KafkaProducer[String , Array[Byte]] = new KafkaProducer(prod_props) 

result_DStream.foreachRDD(r => {
  r.foreachPartition(it => {
    while (it.hasNext) {
      val schema = new Schema.Parser().parse(schema_string)
      val recordInjection : Injection[GenericRecord , Array[Byte]] = GenericAvroCodecs.toBinary(schema)
      val record : GenericData.Record = it.next()
      val byte : Array[Byte] = recordInjection.apply(record)
      val prod_record : ProducerRecord[String , Array[Byte]] = new ProducerRecord("sample_topic_name_9" , byte)
      _producer.send(prod_record)
    }
  })
})

How can I fix this? I have already tried the usual suggestions, such as not using non-serializable classes inside the lambda and using foreachPartition instead of foreach. As far as I can tell, the problem is schema or recordInjection.

org.apache.spark.SparkException: Task not serializable 
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304) 
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294) 
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) 
at org.apache.spark.SparkContext.clean(SparkContext.scala:2062) 
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:919) 
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918) 
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) 
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) 
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918) 
at HadoopMetrics_Online$$anonfun$main$3.apply(HadoopMetrics_Online.scala:187) 
at HadoopMetrics_Online$$anonfun$main$3.apply(HadoopMetrics_Online.scala:186) 
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661) 
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:661) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:50) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:50) 
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:426) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:49) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) 
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:49) 
at scala.util.Try$.apply(Try.scala:161) 
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:224) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:224) 
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) 
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:223) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.NotSerializableException: org.apache.kafka.clients.producer.KafkaProducer 
Serialization stack: 
- object not serializable (class: org.apache.kafka.clients.producer.KafkaProducer, value: org.apache.kafka.clients.producer.KafkaProducer@252f5489) 
- field (class: HadoopMetrics_Online$$anonfun$main$3, name: _producer$1, type: class org.apache.kafka.clients.producer.KafkaProducer) 
- object (class HadoopMetrics_Online$$anonfun$main$3, <function1>) 
- field (class: HadoopMetrics_Online$$anonfun$main$3$$anonfun$apply$1, name: $outer, type: class HadoopMetrics_Online$$anonfun$main$3) 
- object (class HadoopMetrics_Online$$anonfun$main$3$$anonfun$apply$1, <function1>) 
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40) 
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47) 
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101) 
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301) 
... 30 more 

It would be helpful if you added the stacktrace – hadooper


I have added the stacktrace – JSR29

Answer

KafkaProducer is not serializable, and you are closing over it in the foreachPartition method. You need to declare it inside:

resultDStream.foreachRDD(r => {
  r.foreachPartition(it => {
    // the producer is created here, on the executor, so it never has to be serialized
    val producer : KafkaProducer[String , Array[Byte]] = new KafkaProducer(prod_props)
    while (it.hasNext) {
      val schema = new Schema.Parser().parse(schema_string)
      val recordInjection : Injection[GenericRecord , Array[Byte]] = GenericAvroCodecs.toBinary(schema)
      val record : GenericData.Record = it.next()
      val byte : Array[Byte] = recordInjection.apply(record)
      val prod_record : ProducerRecord[String , Array[Byte]] = new ProducerRecord("sample_topic_name_9" , byte)
      producer.send(prod_record)
    }
  })
})
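
If creating a new KafkaProducer for every partition of every micro-batch turns out to be too expensive, a common refinement is to keep one producer per executor JVM in a lazily initialized holder, and to build the schema and injection once per partition rather than once per record. The following is only a sketch under those assumptions, reusing the same imports, prod_props and schema_string as above; the KafkaSink object is a hypothetical helper, not part of any Spark or Kafka API:

object KafkaSink {
  // hypothetical helper: one KafkaProducer per executor JVM, created on first use and reused across batches
  private var producer: KafkaProducer[String, Array[Byte]] = null

  def getOrCreate(props: Properties): KafkaProducer[String, Array[Byte]] = synchronized {
    if (producer == null) {
      producer = new KafkaProducer[String, Array[Byte]](props)
      sys.addShutdownHook { producer.close() }   // flush and close when the executor JVM exits
    }
    producer
  }
}

result_DStream.foreachRDD(r => {
  r.foreachPartition(it => {
    val producer = KafkaSink.getOrCreate(prod_props)        // reused on this executor across batches
    val schema = new Schema.Parser().parse(schema_string)   // parsed once per partition, not per record
    val recordInjection : Injection[GenericRecord , Array[Byte]] = GenericAvroCodecs.toBinary(schema)
    it.foreach { record =>
      val bytes = recordInjection.apply(record)
      producer.send(new ProducerRecord[String, Array[Byte]]("sample_topic_name_9" , bytes))
    }
  })
})

This works because java.util.Properties is serializable, so prod_props can safely be captured by the closure, while the producer itself is only ever created on the executor side.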

On a side note - the Scala naming convention for variable names is camelCase, not snake_case.


How can I tell in the future which class's object is causing the problem? – JSR29


@JSR29 Spark has the `ClosureCleaner`, which tells you the exact field causing the problem. Read the stacktrace: `object not serializable (class: org.apache.kafka.clients.producer.KafkaProducer, value: org.apache.kafka.clients.producer.KafkaProducer@252f5489)` –
