Spark Streaming: how a cached KafkaConsumer shared across threads causes "KafkaConsumer is not safe for multi-threaded access", and how to fix it

Symptom

[INFO ] 2020-06-28 23:23:22,092 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Removing CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0) from cache
[INFO ] 2020-06-28 23:23:22,092 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Cache miss for CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0)
[INFO ] 2020-06-28 23:23:22,095 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Initial fetch for spark-executor-GPBPAnalysis-group-prodtest gshopper_logs 0 1804944
[WARN ] 2020-06-28 23:23:22,096 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Putting block rdd_722699_5 failed due to an exception
[WARN ] 2020-06-28 23:23:22,096 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Block rdd_722699_5 could not be removed as it was not found on disk or in memory
[WARN ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Putting block rdd_722700_5 failed due to an exception
[INFO ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Removed CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0) from cache
[WARN ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
Block rdd_722700_5 could not be removed as it was not found on disk or in memory
[INFO ] 2020-06-28 23:23:22,097 method:org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
Cache miss for CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0)
[ERROR] 2020-06-28 23:23:22,098 method:org.apache.spark.internal.Logging$class.logError(Logging.scala:91)
Exception in task 1007.2 in stage 243931.0 (TID 61808749)
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
        at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:2286)
        at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2270)
        at org.apache.kafka.clients.consumer.KafkaConsumer.seek(KafkaConsumer.java:1543)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.seek(CachedKafkaConsumer.scala:95)
        at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:69)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:223)
        at org.apache.spark.streaming.kafka010.KafkaRDD$KafkaRDDIterator.next(KafkaRDD.scala:189)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:215)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1038)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336)
        at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1055)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
        at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.sc

In the log above, "Block rdd_722700_5 could not be removed as it was not found on disk or in memory" shows that caching the RDD block failed.

"Cache miss for CacheKey(spark-executor-GPBPAnalysis-group-prodtest,gshopper_logs,0)" shows that the lookup for the cached KafkaConsumer failed, so a new consumer had to be created for that partition.

Hypothesis: because the cached KafkaConsumer lookup failed, a failed job's retry created a new KafkaConsumer for the same partition while another task was still using one, and the two threads touching the same consumer triggered "java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access".

Possible scenarios:

- First, once the cache is broken through, any job that re-executes will end up creating a duplicate KafkaConsumer for the same partition.
- 1. Speculative execution runs a second copy of the task (disabled by default)
- 2. A failed job is re-executed from the beginning
- 3. The same RDD is reused multiple times inside foreachRDD
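A low-risk mitigation for the contention itself, assuming the spark-streaming-kafka-0-10 integration, is to disable the executor-side consumer cache so each task builds its own KafkaConsumer rather than competing for a cached one (at the cost of re-creating consumers per task); pinning speculation off rules out cause 1. A sketch of the settings:

```properties
# spark-defaults.conf (or pass each as --conf on spark-submit)

# Disable the cached KafkaConsumer on executors (spark-streaming-kafka-0-10),
# so concurrent tasks for the same partition no longer share one consumer.
spark.streaming.kafka.consumer.cache.enabled=false

# Speculative execution is off by default; set it explicitly to rule out
# a second speculative copy of the task touching the same consumer.
spark.speculation=false
```

Disabling the cache trades some per-task consumer setup overhead for correctness, which is usually acceptable while the root cause is being fixed.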

Solution:

- 1. Enable checkpointing to cut the RDD lineage
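A minimal sketch of this fix (the context, stream, and checkpoint path below are placeholders, not taken from the original job): checkpointing cuts the lineage back to the Kafka source, so a recomputed or reused RDD is read from the checkpoint instead of re-pulling from Kafka through the cached consumer; persisting before reuse avoids a second action recomputing the KafkaRDD at all.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

val sparkConf = new SparkConf().setAppName("GPBPAnalysis")
val ssc = new StreamingContext(sparkConf, Seconds(10))

// Hypothetical checkpoint directory; any reliable storage (HDFS/S3) works.
ssc.checkpoint("hdfs:///checkpoints/gpbp-analysis")

// `stream` stands for the DStream created via KafkaUtils.createDirectStream.
stream.foreachRDD { rdd =>
  // Persist before reusing the RDD, so a second action does not recompute
  // the KafkaRDD (a recompute would go back through the cached consumer).
  rdd.persist(StorageLevel.MEMORY_AND_DISK)

  // Checkpoint the RDD to cut its lineage back to the Kafka source.
  rdd.checkpoint()

  // ... run the multiple actions on rdd here ...

  rdd.unpersist()
}
```

With the lineage cut, a retried job no longer re-reads the Kafka partition, so it never contends with another task for the same KafkaConsumer.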
