java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: 1

Tags: java, apache-spark, apache-spark-dataset

I get an error when trying to join two datasets, one loaded from a database and the other from a CSV file. The error message looks like this:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 14, localhost, executor driver): java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: 1
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, targetString), StringType), true, false) AS targetString#205
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, deviceName), StringType), true, false) AS deviceName#206
staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, alarmDetectionCode), StringType), true, false) AS alarmDetectionCode#207
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:292)
at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:593)
at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:593)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write

It looks like a schema mismatch occurs when the Spark application joins the two datasets, but I have no idea how it happens. My Java code is shown below:

SparkSession ss = sparkContextManager.createThreadLocalSparkSession(functionId);
JavaSparkContext jsc = new JavaSparkContext(ss.sparkContext());

// Build a schema of nullable string columns from the CSV column names.
List<StructField> fields = new ArrayList<>();
for (String fieldName : columns) {
    StructField field = DataTypes.createStructField(fieldName, DataTypes.StringType, true);
    fields.add(field);
}
StructType schema = DataTypes.createStructType(fields);

// Read the CSV and turn each line into a Row, padding short lines to the schema width.
List<String[]> tmpContent = LocalFileUtilsCustomize.readCsv(tempPath);
List<Row> content = new ArrayList<>();
for (String[] s : tmpContent) {
    if (s[0].isEmpty()) {
        continue;
    }
    Row r;
    if (s.length < columns.size()) {
        String[] tmpS = new String[columns.size()];
        System.arraycopy(s, 0, tmpS, 0, s.length);
        r = RowFactory.create((Object[]) tmpS);
    } else {
        r = RowFactory.create((Object[]) s);
    }
    content.add(r);
}

Dataset<Row> searchInfo = ss.createDataFrame(content, schema);
searchInfo.show();

Dataset<Row> result = deviceInfoDataset.join(
        searchInfo,
        deviceInfoDataset.col("deviceName").equalTo(searchInfo.col("deviceName")));
result.show();

Dataset schemas:

device
+--------+----------+----------+
|ctgry_cd|deviceInfo|deviceName|
+--------+----------+----------+
searchinfo
+------------+----------+------------------+
|targetString|deviceName|alarmDetectionCode|
+------------+----------+------------------+
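
For reference, here is a minimal, self-contained sketch of the kind of Row that triggers the ArrayIndexOutOfBoundsException: 1 shown in the stack trace above. The column names are taken from the searchinfo schema; the local SparkSession and everything else in the sketch is assumed for illustration and is not part of the original code.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class ShortRowSketch {
    public static void main(String[] args) {
        // Assumed local session; the original code obtains its session from sparkContextManager.
        SparkSession spark = SparkSession.builder().master("local[*]").appName("short-row-sketch").getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        // Three nullable string fields, matching the searchinfo schema shown above.
        List<StructField> fields = new ArrayList<>();
        for (String name : Arrays.asList("targetString", "deviceName", "alarmDetectionCode")) {
            fields.add(DataTypes.createStructField(name, DataTypes.StringType, true));
        }
        StructType schema = DataTypes.createStructType(fields);

        // A CSV line with a single cell produces a one-field Row, although the schema declares three fields.
        Row shortRow = RowFactory.create((Object[]) new String[] { "1" });

        Dataset<Row> searchInfo = spark.createDataFrame(jsc.parallelize(Arrays.asList(shortRow)), schema);

        // Any action that forces Spark to encode this Row against the three-field schema has to read
        // field index 1 (deviceName), which is out of bounds for the one-field Row, so the job fails with
        // "Error while encoding: java.lang.ArrayIndexOutOfBoundsException: 1" -- the error in the question.
        searchInfo.show();

        spark.stop();
    }
}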

This problem turned out to be more complicated than I thought, for two reasons. 1. My dataset contains an empty row that comes from the CSV. Even then, I can create and show this dataset with the following code:

SparkSession ss = sparkContextManager.createThreadLocalSparkSession(functionId);
JavaSparkContext jsc = new JavaSparkContext(ss.sparkContext());

// Build a schema of nullable string columns from the CSV column names.
List<StructField> fields = new ArrayList<>();
for (String fieldName : columns) {
    StructField field = DataTypes.createStructField(fieldName, DataTypes.StringType, true);
    fields.add(field);
}
StructType schema = DataTypes.createStructType(fields);

// Read the CSV and convert each line to a Row as-is, without any padding.
List<String[]> tmpContent = LocalFileUtilsCustomize.readCsv(tempPath);
List<Row> content = jsc.parallelize(tmpContent)
        .map(l -> RowFactory.create((Object[]) l))
        .collect();

Dataset<Row> searchInfo = ss.createDataFrame(content, schema);
searchInfo.show();
But when I tried to join the two datasets and show the result, I got this error. I then tried to remove the empty row, but I still got the error. In the end I realized that even though I set "nullable = true", I still have to make sure every row of the CSV has the same number of columns as the schema. So the fix for this problem looks like this:

SparkSession ss = sparkContextManager.createThreadLocalSparkSession(functionId);
JavaSparkContext jsc = new JavaSparkContext(ss.sparkContext());

// Build a schema of nullable string columns from the CSV column names.
List<StructField> fields = new ArrayList<>();
for (String fieldName : columns) {
    StructField field = DataTypes.createStructField(fieldName, DataTypes.StringType, true);
    fields.add(field);
}
StructType schema = DataTypes.createStructType(fields);

List<String[]> tmpContent = LocalFileUtilsCustomize.readCsv(tempPath);
List<Row> content = new ArrayList<>();
for (String[] s : tmpContent) {
    // Skip empty CSV lines.
    if (s[0].isEmpty()) {
        continue;
    }
    Row r;
    // Pad rows that have fewer cells than the schema so every Row matches the column count.
    if (s.length < columns.size()) {
        String[] tmpS = new String[columns.size()];
        System.arraycopy(s, 0, tmpS, 0, s.length);
        r = RowFactory.create((Object[]) tmpS);
    } else {
        r = RowFactory.create((Object[]) s);
    }
    content.add(r);
}

Dataset<Row> searchInfo = ss.createDataFrame(content, schema);
searchInfo.show();

Dataset<Row> result = deviceInfoDataset.join(
        searchInfo,
        deviceInfoDataset.col("deviceName").equalTo(searchInfo.col("deviceName")));
result.show();
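
As a side note (not part of the original answer): the manual System.arraycopy padding can be written more compactly with java.util.Arrays.copyOf, which extends the array to the schema width and fills the missing trailing cells with null; those nulls are accepted because the fields were created with nullable = true. A sketch of just that loop, reusing the variable names from the code above:

for (String[] s : tmpContent) {
    // Skip empty CSV lines, as the original loop does.
    if (s.length == 0 || s[0].isEmpty()) {
        continue;
    }
    // Arrays.copyOf pads the extra slots with null when the target length is larger than the source.
    String[] padded = s.length < columns.size() ? Arrays.copyOf(s, columns.size()) : s;
    content.add(RowFactory.create((Object[]) padded));
}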

Comments:

Have you tried this? deviceInfoDataset.join(searchInfo, "deviceName").show()

Yes, I tried it. It looks like my CSV data has rows with different numbers of columns, but the schema is always the same, like this. CSV data: "3","130.180.138.56","Tunnel6","0:0:0:0:0:0:0:0" / "1" / "2","130.180.138.56" / "2","130.180.138.56". Schema: datainfo, ipaddress, nodename, location, time

Are the spark_core and spark_sql versions the same?

Yes, both are 2.11.
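
For completeness, the join suggested in the first comment uses the Dataset.join(Dataset<?> right, String usingColumn) overload: it joins on the shared column name and keeps only one deviceName column in the output, whereas the Column-equality form used in the question keeps both. A sketch using the variable names from the question:

// Column-equality join from the question: the result contains deviceName twice,
// once from each side of the join.
Dataset<Row> resultEq = deviceInfoDataset.join(
        searchInfo,
        deviceInfoDataset.col("deviceName").equalTo(searchInfo.col("deviceName")));
resultEq.show();

// "Using column" join from the comment: joins on deviceName by name and keeps
// a single deviceName column in the result.
Dataset<Row> resultUsing = deviceInfoDataset.join(searchInfo, "deviceName");
resultUsing.show();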