
Apache Spark: loading data into a Spark DataFrame when the source has no delimiters

Tags: apache-spark, hadoop, apache-spark-sql

I have a dataset with no delimiters:

111222333444
555666777888
Desired output:

|_c1_|_c2_|_c3_|_c4_|
|111 |222 |333 |444 |
|555 |666 |777 |888 |
I have tried the following to achieve this:

val myDF = spark.sparkContext.textFile("myFile").toDF()
val myNewDF = myDF.withColumn("c1", substring(col("value"), 0, 3))
                  .withColumn("c2", substring(col("value"), 3, 6))
                  .withColumn("c3", substring(col("value"), 6, 9))
                  .withColumn("c4", substring(col("value"), 9, 12))
                  .drop("value")
                  .show()
But I need to multiply c4 by 100, and its data type is string rather than double.

Update: I ran into another scenario. When I execute this:

val myNewDF = myDF.withColumn("c1", expr("substring(value, 0, 3)"))
.withColumn("c2", expr("substring(value, 3, 6)"))
.withColumn("c3", expr("substring(value, 6, 9)"))
.withColumn("c4", (expr("substring(value, 9, 12)").cast("double") * 100))
.drop("value")
.show(5, false) // it only shows the value column I dropped and c1


myNewDF.printSchema // only shows 2 rows. Why is it not showing all 4 newly created columns?

Leaving a little for you to do yourself, such as reading the file and naming the Dataset/DataFrame columns explicitly, this simulated RDD approach should help:

val rdd = sc.parallelize(Seq(("111222333444"), 
                             ("555666777888")
                            )
                        )

val df = rdd.map(x => (x.slice(0,3), x.slice(3,6), x.slice(6,9), x.slice(9,12))).toDF()  
df.show(false)  
Returns:

+---+---+---+---+
|_1 |_2 |_3 |_4 |
+---+---+---+---+
|111|222|333|444|
|555|666|777|888|
+---+---+---+---+ 
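If you want the same slicing straight off the file, here is a minimal sketch, assuming the path "myFile" from the question and naming the columns explicitly, as the answer suggests:

// same fixed-width slicing, but reading the file ("myFile" is the
// hypothetical path from the question) and naming the columns up front
val fileDF = sc.textFile("myFile")
               .map(x => (x.slice(0,3), x.slice(3,6), x.slice(6,9), x.slice(9,12)))
               .toDF("c1", "c2", "c3", "c4")
fileDF.show(false)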

Using a DF:

import org.apache.spark.sql.functions._
val df = sc.parallelize(Seq(("111222333444"), 
                        ("555666777888"))
                    ).toDF()

val df2 = df.withColumn("c1", expr("substring(value, 1, 3)"))
            .withColumn("c2", expr("substring(value, 4, 3)"))
            .withColumn("c3", expr("substring(value, 7, 3)"))
            .withColumn("c4", expr("substring(value, 10, 3)"))
df2.show(false)
Returns:

+------------+---+---+---+---+
|value       |c1 |c2 |c3 |c4 |
+------------+---+---+---+---+
|111222333444|111|222|333|444|
|555666777888|555|666|777|888|
+------------+---+---+---+---+
You can drop the value column; that I leave to you.
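For completeness, a minimal sketch of that drop, against df2 from above:

// remove the original undelimited column, keeping only c1..c4
val df3 = df2.drop("value")
df3.show(false)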

Similar to the answer above, but it gets complicated if the chunks are not all of size 3.

For your updated question, the double multiplied by 100:

val df2 = df.withColumn("c1", expr("substring(value, 1, 3)"))
            .withColumn("c2", expr("substring(value, 4, 3)"))
            .withColumn("c3", expr("substring(value, 7, 3)"))
            .withColumn("c4", expr("substring(value, 10, 3)").cast("double") * 100)
Create a test DataFrame:

scala> var df = Seq(("111222333444"),("555666777888")).toDF("s")
df: org.apache.spark.sql.DataFrame = [s: string]
Split the column into an array of 3-character chunks:

scala> var res = df.withColumn("temp",split(col("s"),"(?<=\\G...)"))
res: org.apache.spark.sql.DataFrame = [s: string, temp: array<string>]
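From the chunk array, one possible way to finish, assuming the four chunks map to columns c1..c4 as in the desired output, with the last one cast to double for the *100 requirement:

scala> var res2 = res.select(col("temp")(0).as("c1"), col("temp")(1).as("c2"), col("temp")(2).as("c3"), (col("temp")(3).cast("double") * 100).as("c4"))
res2: org.apache.spark.sql.DataFrame = [c1: string, c2: string, c3: string, c4: double]

res2.show(false) should then give the four named columns, with c4 holding 44400.0 and 88800.0 for the sample rows.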

Comments:

What have you tried so far?

val myDF = spark.sparkContext.textFile("myFile").toDF; val myNewDF = myDF.withColumn("c1", substring(col("value"), 0, 3)).withColumn("c2", substring(col("value"), 3, 6)).show

Different question now I see; modified answer based on your new question and the last one.

Yes, I found the error: I had a carriage return in front of each one, which was causing the issue.

vals, not vars; withColumn is the Scala paradigm. vars if you have no meaningful names left.

Thanks @thebluephantom, but when I execute df2.printSchema it only shows 2 rows, just the starting column and c1. Please advise; I need the created columns to run some SQL queries.

Odd, that looks strange, as I cannot see such an issue.