MySQL Spark join with Cassandra table on timestamp partition key gets stuck


I am trying to filter a small part of a huge C* table by using:

    import com.datastax.spark.connector._ // provides joinWithCassandraTable

    // Build an RDD of partition keys and join it with the C* table
    val snapshotsFiltered = sc.parallelize(startDate to endDate)
      .map(TableKey(_))
      .joinWithCassandraTable("listener", "snapshots_tspark")

    println("Done Join")

    // Get only the snapshots and create an RDD temp table
    val jsons = snapshotsFiltered.map(_._2.getString("snapshot"))
    val jsonSchemaRDD = sqlContext.jsonRDD(jsons)
    jsonSchemaRDD.registerTempTable("snapshots_json")
with:

    case class TableKey(created: Long) // (created, imei, when) --> created = partition key | imei, when = clustering columns
And the Cassandra table schema is:

    CREATE TABLE listener.snapshots_tspark (
        created timestamp,
        imei text,
        when timestamp,
        snapshot text,
        PRIMARY KEY (created, imei, when)
    ) WITH CLUSTERING ORDER BY (imei ASC, when ASC)
        AND bloom_filter_fp_chance = 0.01
        AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
        AND comment = ''
        AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
        AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99.0PERCENTILE';
The problem is that after the println completes, the process freezes, with no errors showing on the Spark master UI:

    [Stage 0:>                                                                                                                                (0 + 2) / 2]
Does the join not work when the partition key is a timestamp? Why does it freeze?

By using:

    sc.parallelize(startDate to endDate)

with startDate and endDate as Longs generated from dates in the format:

    "yyyy-MM-dd HH:mm:ss"

I had made Spark build a huge array (100,000+ objects) to join with the C* table. It was not actually stuck at all: C* was working hard to perform the join and return the data.
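
To illustrate why that range gets so large, here is a minimal sketch of my own (not from the original post); it assumes the Longs come from parsing the boundary dates with java.text.SimpleDateFormat, as the format string above suggests:

    import java.text.SimpleDateFormat

    // Hypothetical illustration: parse the boundary dates as the format
    // string suggests and count how many keys `startDate to endDate`
    // would materialize.
    val fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
    val startMs = fmt.parse("2015-10-29 12:00:00").getTime // epoch millis
    val endMs   = fmt.parse("2015-10-30 12:00:00").getTime

    // As epoch seconds, one day is already 86,400 join keys; as epoch
    // milliseconds it is 86,400,000. joinWithCassandraTable issues a
    // partition lookup for every single one of them.
    println(s"seconds: ${(endMs - startMs) / 1000}, millis: ${endMs - startMs}")

The join is not frozen; it is simply grinding through one partition lookup per element of the range.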

In the end, I changed the range to:

    case class TableKey(created_dh: String)

    val data = Array("2015-10-29 12:00:00", "2015-10-29 13:00:00", "2015-10-29 14:00:00", "2015-10-29 15:00:00")
    val snapshotsFiltered = sc.parallelize(data, 2).map(TableKey(_)).joinWithCassandraTable("listener", "snapshots_tnew")

Now everything runs fine.
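
As a side note, if the list of hour buckets grows, it can be generated rather than hard-coded. A small sketch under the same assumptions (the "yyyy-MM-dd HH:mm:ss" format and one partition per hour); hourlyBuckets is a hypothetical helper, not part of the original code:

    import java.text.SimpleDateFormat
    import java.util.Calendar

    // Hypothetical helper: produce one "yyyy-MM-dd HH:mm:ss" string per
    // hour between `start` and `end` (inclusive), matching the
    // hard-coded Array above.
    def hourlyBuckets(start: String, end: String): Seq[String] = {
      val fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
      val cal = Calendar.getInstance()
      cal.setTime(fmt.parse(start))
      val endDate = fmt.parse(end)
      val buckets = scala.collection.mutable.ArrayBuffer.empty[String]
      while (!cal.getTime.after(endDate)) {
        buckets += fmt.format(cal.getTime)
        cal.add(Calendar.HOUR_OF_DAY, 1)
      }
      buckets.toSeq
    }

    val data = hourlyBuckets("2015-10-29 12:00:00", "2015-10-29 15:00:00")
    // data: Seq("2015-10-29 12:00:00", ..., "2015-10-29 15:00:00")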

Comments:

Have you checked whether you have enough resources to run the job?

@eliasah Yes. Memory: 5.5 GB total, 512.0 MB used. If the collect of snapshotsFiltered returns empty, will the next stage get stuck?

No, that is not the reason. It might be stuck mainly due to a lack of resources; here it may be because the query plan you are trying to execute is very complicated.

@eliasah Why is it complicated? It is only supposed to aggregate on the created timestamp and bring back the rows matching "created > " + startDate + " and created < " + endDate.