I am running a Spark job written in Scala 2.10.4 on a Spark 1.4.0 cluster (backed by HDFS and managed by YARN), using version 2.6.1 of the Jackson module from the Maven repository.
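For reference, these versions would be declared roughly as follows in an sbt-style build (a sketch only; my project actually resolves the artifacts from the Maven repository, and the spark-core / jackson-module-scala coordinates are assumed from the description above):

    // Sketch of the versions described above (sbt syntax; assumed coordinates)
    scalaVersion := "2.10.4"

    libraryDependencies ++= Seq(
      "org.apache.spark"             %% "spark-core"           % "1.4.0" % "provided",
      "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.1"
    )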
When I run the code locally from my IDE (IntelliJ IDEA v14) against the in-memory cluster, everything works, but when the job runs on my remote cluster (an EMR cluster in an AWS VPC) I hit the following exception:
java.lang.AbstractMethodError: com.company.scala.framework.utils.JsonParser$$anon$1.com$fasterxml$jackson$module$scala$experimental$ScalaObjectMapper$_setter_$com$fasterxml$jackson$module$scala$experimental$ScalaObjectMapper$$typeCache_$eq(Lorg/spark-project/guava/cache/LoadingCache;)V
    at com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper$class.$init$(ScalaObjectMapper.scala:50)
    at com.company.scala.framework.utils.JsonParser$$anon$1.<init>(JsonParser.scala:14)
    at com.company.scala.framework.utils.JsonParser$.<init>(JsonParser.scala:14)
    at com.company.scala.framework.utils.JsonParser$.<clinit>(JsonParser.scala)
    at com.company.migration.Migration$.printAllKeys(Migration.scala:21)
    at com.company.migration.Main$$anonfun$main$1.apply(Main.scala:22)
    at com.company.migration.Main$$anonfun$main$1.apply(Main.scala:22)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:199)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:56)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
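For context, the frame at JsonParser.scala:14 comes from an ObjectMapper constructed with the ScalaObjectMapper mixin; a simplified sketch of that pattern (not my exact code, and the parse helper is only illustrative) looks like this:

    import com.fasterxml.jackson.databind.ObjectMapper
    import com.fasterxml.jackson.module.scala.DefaultScalaModule
    import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper

    object JsonParser {
      // The anonymous "new ObjectMapper with ScalaObjectMapper" is what the compiler
      // emits as JsonParser$$anon$1, the class named in the AbstractMethodError above.
      private val mapper = new ObjectMapper() with ScalaObjectMapper
      mapper.registerModule(DefaultScalaModule)

      // Illustrative helper only; the real JsonParser API is not shown here.
      def parse[T: Manifest](json: String): T = mapper.readValue[T](json)
    }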
I tried searching the web for this exception with no luck. I also looked for a similar question here and found only one thread, which has no accepted answer, and none of the answers there helped me.
Hoping to find some help here.
Thanks.