Spark running against Cassandra fails with ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition (details inside)


I am using Spark 2.0.0 (local standalone), spark-cassandra-connector 2.0.0-M1 and Scala 2.11. I am working on a project in my IDE, and every time I run a Spark command I get

ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:253)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

My build.sbt file:

libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.0-M1"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"

So in essence, this is the error message:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 13, 192.168.0.12): java.lang.ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition

The thing is, if I run spark-shell with the spark-cassandra-connector:

$ ./spark-shell --jars /home/Applications/spark-2.0.0-bin-hadoop2.7/spark-cassandra-connector-assembly-2.0.0-M1-22-gab4eda2.jar

I can use Spark with Cassandra without any error messages.

Any clue on how to resolve this strange incompatibility?

EDIT:

Interestingly, from the point of view of the worker node, when I run the program the connector throws

`java.io.InvalidClassException: com.datastax.spark.connector.rdd.CassandraTableScanRDD; local class incompatible: stream classdesc serialVersionUID = 1517205208424539072, local class serialVersionUID = 6631934706192455668`

and that is what eventually produces the ClassNotFound (it fails to bind because of the conflict). But the project has only ever used Spark and connector 2.0 with Scala 2.11; there is no version incompatibility anywhere.

Answer (Russ):

In Spark, just because you build against a library does not mean it will be included on the runtime classpath. If you had included

--jars  /home/Applications/spark-2.0.0-bin-hadoop2.7/spark-cassandra-connector-assembly-2.0.0-M1-22-gab4eda2.jar

with your application, it would have included all of those required libraries at runtime and on all of the remote JVMs.
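
As a concrete illustration (not part of the original answer), a similar effect can be had when the driver is launched straight from the IDE by pointing the SparkConf at that jar, so Spark ships it to the executors. This is only a sketch: the master URL, the Cassandra host and the object name are placeholder assumptions, while the jar path is the one from the question:

    import org.apache.spark.{SparkConf, SparkContext}

    object CassandraApp {  // hypothetical entry point
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("cassandra-app")
          .setMaster("spark://192.168.0.12:7077")               // assumption: standalone master hosting the executor seen in the trace
          .set("spark.cassandra.connection.host", "127.0.0.1")  // assumption: Cassandra reachable on localhost
          // Ship the connector assembly to every executor JVM so classes such as
          // CassandraPartition can be deserialized there; the IDE/sbt dependency
          // only covers the driver's own classpath.
          .setJars(Seq("/home/Applications/spark-2.0.0-bin-hadoop2.7/spark-cassandra-connector-assembly-2.0.0-M1-22-gab4eda2.jar"))

        val sc = new SparkContext(conf)
        // ... Cassandra table reads/writes go here ...
        sc.stop()
      }
    }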

So basically what you are seeing is that in your first example none of the connector libraries were on the runtime classpath, while in the spark-shell example they were.
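
An alternative sketch (again not from the answer itself) is to bundle the connector into the application jar with sbt-assembly, so that whatever gets submitted already contains the connector classes. The plugin version and the "provided" scoping below are illustrative assumptions:

    // project/plugins.sbt
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

    // build.sbt: Spark itself is already on the cluster, so mark it "provided";
    // the connector stays a compile dependency and is packed into the fat jar.
    libraryDependencies += "org.apache.spark"   %% "spark-core"                % "2.0.0" % "provided"
    libraryDependencies += "org.apache.spark"   %% "spark-sql"                 % "2.0.0" % "provided"
    libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.0-M1"

Running sbt assembly then produces a single fat jar that can be handed to spark-submit, so the driver and the executor JVMs see exactly the same connector classes.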
