Conversion error from CSV to JavaRDD

I don't know why I keep getting a NoSuchMethodError when storing CSV data in a JavaRDD. I defined the following class, whose instances will be the records from the CSV file:

public class Historical_Data_Record implements Serializable {
    String tripduration;
    String starttime;
    String stoptime;
    String start_station_id;
    String start_station_name;
    long start_station_latitude;
    long start_station_longitude;
    String stop_station_id;
    String stop_station_name;
    long stop_station_latitude;
    long stop_station_longitude;
    String bikeid;
    String usertype;
    String birth_year;
    int gender; // 1 = male, 0 = female
}
I then have the following code, which creates Historical_Data_Record objects by reading the data from the CSV and storing them in a JavaRDD:

public static final JavaRDD<Historical_Data_Record> get_Historical_Data(JavaSparkContext sc, String filename){
    // get the data using the configuration parameters 
    final JavaRDD<Historical_Data_Record> rdd_records = sc.textFile(filename).map(
        new Function<String, Historical_Data_Record>() {
            private static final long serialVersionUID = 1L;

            public Historical_Data_Record call(String line) throws Exception {
                String[] fields = line.split(",");

                Historical_Data_Record sd = new Historical_Data_Record();           
                sd.tripduration = fields[0];
                sd.starttime = fields[1];
                sd.stoptime = fields[2];
                sd.start_station_id = fields[3];
                sd.start_station_name = fields[4];
                sd.start_station_latitude = Long.valueOf(fields[5]).longValue();
                sd.start_station_longitude = Long.valueOf(fields[6]).longValue();
                sd.stop_station_id = fields[7]; 
                sd.stop_station_name = fields[8];
                sd.stop_station_latitude = Long.valueOf(fields[9]).longValue();
                sd.stop_station_longitude = Long.valueOf(fields[10]).longValue();
                sd.bikeid = fields[11];
                sd.usertype = fields[12];
                sd.birth_year = fields[13];
                sd.gender = Integer.parseInt(fields[14]);
                return sd;
    }});

    return rdd_records;

}
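Note that this parser assumes the file has no header row. If it does, rather than editing the file by hand, a common pattern with the same Spark 1.x Java API is to filter the header out. This is a sketch, not part of the original code; it reuses the Function type already used above:

JavaRDD<String> lines = sc.textFile(filename);
final String header = lines.first(); // first line of the file, i.e. the header row
JavaRDD<String> data = lines.filter(new Function<String, Boolean>() {
    public Boolean call(String line) {
        return !line.equals(header); // keep every line except the header
    }
});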
But when I run the following code,

    JavaRDD<Historical_Data_Record> aData = Spark.get_Historical_Data(sc, filename);
where sc is the JavaSparkContext and filename is just a string containing the path to the file, I get the following error:

2014-11-03 11:04:42.959 java[5856:1b03] Unable to load realm info from SCDynamicStore
14/11/03 11:04:43 WARN storage.BlockManager: Putting block broadcast_0 failed
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
    at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
    at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
    at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
    at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
    at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
    at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
    at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
    at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
    at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
    at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
    at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
    at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
    at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
    at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:545)
    at org.apache.spark.SparkContext.textFile(SparkContext.scala:457)
    at org.apache.spark.api.java.JavaSparkContext.textFile(JavaSparkContext.scala:164)
    at com.big_data.citibike_project.Spark.get_Historical_Data(Spark.java:19)
    at com.big_data.citibike_project.Main.main(Main.java:18)

At first I thought it might be because of the header row, so I removed it, but I got the same error. Can anyone help me?

Spark uses a fairly old version of Guava (14.0.1), and it looks like one of your dependencies is pulling in a newer, incompatible version. A NoSuchMethodError at runtime almost always means the code was compiled against one version of a library but a different version ended up on the classpath. Try pinning Guava to the version Spark uses.
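For example, if you build with Maven (an assumption; the post doesn't say which build tool is used), pinning Guava in dependencyManagement overrides whatever version the transitive dependencies bring in:

<dependencyManagement>
  <dependencies>
    <!-- Force the Guava version Spark 1.x was built against -->
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>14.0.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>

Alternatively, add an <exclusion> for com.google.guava:guava under whichever dependency drags in the newer copy.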


Also, this one may be of interest -

Actually, I just ran textFile("csv_file") on its own and it gives me the same error. Does anyone know what is going on?
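One way to confirm that the wrong Guava jar is winning (a hypothetical diagnostic, not from the original thread) is to print where the JVM actually loaded the class in question from:

import com.google.common.hash.HashFunction;

public class GuavaCheck {
    public static void main(String[] args) {
        // Prints the jar that supplied Guava's HashFunction at runtime; if it
        // is not guava-14.0.1.jar, another copy is shadowing Spark's version.
        System.out.println(HashFunction.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}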