
Apache Spark: partitioning (with partitionBy) has no effect when writing a Delta Lake


When I first write a Delta Lake, it makes no difference whether I use partitioning (with partitionBy) or not.

Repartitioning on the same column before the write only changes the number of Parquet files. Explicitly making the partition column not nullable does not change anything either.
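For context, a minimal sketch of the kind of write involved is shown below. The DataFrame construction and SparkSession setup are illustrative assumptions; only the column names CONTENT and PARTITION and the target path come from the commit log further down:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("delta-partition-test").getOrCreate()
import spark.implicits._

// Illustrative data: CONTENT values spread over a handful of PARTITION keys
val df = (0 until 100).map(i => (i, i % 4)).toDF("CONTENT", "PARTITION")

df.write
  .format("delta")
  .partitionBy("PARTITION")   // expected to produce PARTITION=<value> subdirectories
  .mode("errorifexists")      // matches the "ErrorIfExists" mode visible in the commit log
  .save("PARTITIONED_DELTA_LAKE")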

Versions:

  • Spark 2.4 (actually 2.4.0.0-mapr-620)
  • Scala 2.11.12
  • Delta Lake 0.5.0 (io.delta:delta-core_2.11:jar:0.5.0; see the dependency sketch after this list)
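For reference, a build definition matching these versions might look roughly like this. This is a sketch only; the vanilla spark-sql artifact stands in for the MapR-specific Spark build, and the rest of the project setup is assumed:

// Hypothetical build.sbt excerpt matching the versions listed above
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "2.4.0" % "provided",
  "io.delta"         %% "delta-core" % "0.5.0"
)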
The resulting Delta Lake directory looks like this:

ls -1 PARTITIONED_DELTA_LAKE
_delta_log
    00000000000000000000.json
part-00000-a3015965-b101-4f63-87de-1d06a7662312-c000.snappy.parquet
part-00007-3155dde1-9f41-49b5-908e-08ce6fc077af-c000.snappy.parquet
part-00014-047f6a28-3001-4686-9742-4e4dbac05c53-c000.snappy.parquet
part-00021-e0d7f861-79e9-41c9-afcd-dbe688720492-c000.snappy.parquet
part-00028-fe3da69d-660a-445b-a99c-0e7ad2f92bf0-c000.snappy.parquet
part-00035-d69cfb9d-d320-4d9f-9b92-5d80c88d1a77-c000.snappy.parquet
part-00043-edd049a2-c952-4f7b-8ca7-8c0319932e2d-c000.snappy.parquet
part-00050-38eb3348-9e0d-49af-9ca8-a323e58b3712-c000.snappy.parquet
part-00057-906312ad-8556-4696-84ba-248b01664688-c000.snappy.parquet
part-00064-31f5d03d-2c63-40e7-8fe5-a8374eff9894-c000.snappy.parquet
part-00071-e1afc2b9-aa5b-4e7c-b94a-0c176523e9f1-c000.snappy.parquet

cat PARTITIONED_DELTA_LAKE/_delta_log/00000000000000000000.json
{"commitInfo":{"timestamp":1579073383370,"operation":"WRITE","operationParameters":{"mode":"ErrorIfExists","partitionBy":"[]"},"isBlindAppend":true}}
{"protocol":{"minReaderVersion":1,"minWriterVersion":2}}
{"metaData":{"id":"2cdd6fbd-bffa-415e-9c06-94ffc2048cbe","format":{"provider":"parquet","options":{}},"schemaString":"{\"type\":\"struct\",\"fields\":[{\"name\":\"CONTENT\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}},{\"name\":\"PARTITION\",\"type\":\"integer\",\"nullable\":true,\"metadata\":{}}]}","partitionColumns":[],"configuration":{},"createdTime":1579073381183}}
{"add":{"path":"part-00000-a3015965-b101-4f63-87de-1d06a7662312-c000.snappy.parquet","partitionValues":{},"size":363,"modificationTime":1579073382329,"dataChange":true}}
{"add":{"path":"part-00007-3155dde1-9f41-49b5-908e-08ce6fc077af-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382545,"dataChange":true}}
{"add":{"path":"part-00014-047f6a28-3001-4686-9742-4e4dbac05c53-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382237,"dataChange":true}}
{"add":{"path":"part-00021-e0d7f861-79e9-41c9-afcd-dbe688720492-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382583,"dataChange":true}}
{"add":{"path":"part-00028-fe3da69d-660a-445b-a99c-0e7ad2f92bf0-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382893,"dataChange":true}}
{"add":{"path":"part-00035-d69cfb9d-d320-4d9f-9b92-5d80c88d1a77-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382488,"dataChange":true}}
{"add":{"path":"part-00043-edd049a2-c952-4f7b-8ca7-8c0319932e2d-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073383262,"dataChange":true}}
{"add":{"path":"part-00050-38eb3348-9e0d-49af-9ca8-a323e58b3712-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382683,"dataChange":true}}
{"add":{"path":"part-00057-906312ad-8556-4696-84ba-248b01664688-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382416,"dataChange":true}}
{"add":{"path":"part-00064-31f5d03d-2c63-40e7-8fe5-a8374eff9894-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382549,"dataChange":true}}
{"add":{"path":"part-00071-e1afc2b9-aa5b-4e7c-b94a-0c176523e9f1-c000.snappy.parquet","partitionValues":{},"size":625,"modificationTime":1579073382511,"dataChange":true}}
I would have expected something like:

ls -1 PARTITIONED_DELTA_LAKE
_delta_log
    00000000000000000000.json
PARTITION=0
   part-00000-a3015965-b101-4f63-87de-1d06a7662312-c000.snappy.parquet
   ...

cat PARTITIONED_DELTA_LAKE/_delta_log/00000000000000000000.json
..."partitionBy":"[PARTITION]"...
..."partitionColumns":[PARTITION]...
..."partitionValues":{0}...

As pointed out, the Spark version in use was too old. I have tried the code above with the following Spark versions:

  • 2.4.0
  • 2.4.1
  • 2.4.2
Only with 2.4.2 does partitioning work as expected. A fix that went into that release is likely the reason:

… Users can specify columns in partitionBy, and our internal data sources will use this information. Unfortunately, for external systems, this data is silently dropped with no feedback given to the users.
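After moving to a fixed Spark version, one way to sanity-check that the partition columns really reached the table metadata is to read the first commit file of the _delta_log with plain Spark. This is a sketch only: it peeks at the transaction log directly rather than going through a public Delta API, it assumes an active SparkSession available as spark (e.g. in spark-shell), and the path is the one from the listings above:

// Read the first Delta commit (JSON lines) and show the recorded partition columns;
// after a successful partitioned write this should list PARTITION instead of [].
val commit = spark.read.json("PARTITIONED_DELTA_LAKE/_delta_log/00000000000000000000.json")
commit
  .select("metaData.partitionColumns")
  .where("metaData is not null")
  .show(false)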


Could this be the reason?
Thanks for pointing this out! I must have overlooked it :) Something promising did go into 2.4.2. I will try to verify whether updating the version solves my problem and post the result here.
Thanks @Flocor4! Nothing beats learning together!