Apache Spark: Dataset count takes a long time

Tags: apache-spark, apache-spark-sql

I am using the count function only to check whether the count is greater than 0. But getting the size for a specific column with 4,000,000 rows takes more than 5 minutes.

Below is my code snippet:

// Filter both datasets down to the rows for one manufacturer
Dataset<Row> specficManufacturerdetailsSource =
        source.filter(col("ManufacturerSource").equalTo(individualManufacturerName));
Dataset<Row> specficManufacturerdetailsTarget =
        target.filter(col("ManufacturerTarget").equalTo(individualManufacturerName));

// Each count() triggers a full scan over the filtered dataset
long manufacturerSourceCount = specficManufacturerdetailsSource.count();
long manufacturerTargetCount = specficManufacturerdetailsTarget.count();

System.out.println("Size of specific manufacturer source ML: " + manufacturerSourceCount
        + ", size of specific manufacturer target: " + manufacturerTargetCount);
if (manufacturerSourceCount > 0 && manufacturerTargetCount > 0) {
    // ...
}

Given your requirement above, you do not need the full count.

You can use findFirst instead of count: if manufacturerSourceCount.isPresent() returns true, that means the count is greater than 0.
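The idea behind the answer is that an existence check can short-circuit after the first match, while count() must touch every matching element. A minimal sketch of that difference with plain Java streams (the manufacturer names here are made-up sample data, not from the original post):

```java
import java.util.List;
import java.util.Optional;

public class ExistenceCheck {
    public static void main(String[] args) {
        List<String> manufacturers = List.of("Acme", "Globex", "Acme", "Initech");

        // Full count: the stream traverses every element to produce an exact total.
        long count = manufacturers.stream()
                .filter(m -> m.equals("Acme"))
                .count();

        // Existence check: findFirst() stops as soon as one match is found.
        Optional<String> first = manufacturers.stream()
                .filter(m -> m.equals("Acme"))
                .findFirst();

        System.out.println(count > 0);          // true
        System.out.println(first.isPresent());  // true, without scanning the rest
    }
}
```

Note that findFirst belongs to the Java Stream API, not to Spark's Dataset. The analogous Spark checks would be specficManufacturerdetailsSource.isEmpty() (available since Spark 2.4) or specficManufacturerdetailsSource.takeAsList(1).isEmpty() on older versions, both of which avoid counting all 4,000,000 rows.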