Apache Spark: removing empty strings from a Spark RDD


I have an RDD that I'm tokenizing like this, to give me a list of tokens per document:

import re

data = sqlContext.read.load('file.csv', format='com.databricks.spark.csv',
                            header='true', inferSchema='true')
data = data.rdd.map(lambda x: x.desc)
stopwords = set(sc.textFile('stopwords.txt').collect())

tokens = (data.map(lambda document: document.strip().lower())
              .map(lambda document: re.split(r"[\s;,#]", document))
              .map(lambda words: [str(w) for w in words if w not in stopwords]))

>>> print tokens.take(5)
[['35', 'year', 'wild', 'elephant', 'named', 'sidda', 'villagers', 'manchinabele', 'dam', 'outskirts', 'bengaluru', '', 'cared', 'wildlife', 'activists', 'suffered', 'fracture', 'developed', 'mu'], ['tamil', 'nadu', 'vivasayigal', 'sangam', 'reiterates', 'demand', 'declaring', 'tamil', 'nadu', 'drought', 'hit', 'sanction', 'compensation', 'affected', 'farmers'], ['triggers', 'rumours', 'income', 'tax', 'raids', 'quarries'], ['', 'president', 'barack', 'obama', 'ordered', 'intelligence', 'agencies', 'review', 'cyber', 'attacks', 'foreign', 'intervention', '2016', 'election', 'deliver', 'report', 'leaves', 'office', 'january', '20', '', '2017'], ['death', 'note', 'driver', '', 'bheema', 'nayak', '', 'special', 'land', 'acquisition', 'officer', '', 'alleging', 'laundered', 'mining', 'baron', 'janardhan', 'reddys', 'currency', 'commission', '']]
There are a few empty-string '' items in these lists that I'm not able to remove. How can I get rid of them?
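For context on where those empty strings come from: re.split yields an empty token whenever two delimiters from the character class are adjacent (e.g. a comma followed by a space), or when the string starts or ends with a delimiter. A minimal standalone sketch, no Spark needed:

import re

# Adjacent delimiters (',' then ' ') and the trailing '#' each produce ''
print(re.split(r"[\s;,#]", "wild, elephant#"))
# ['wild', '', 'elephant', '']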

This won't work:

tokens = tokens.filter(lambda lst: filter(None, lst))
This should work:

tokens = tokens.map(lambda lst: filter(None, lst))
RDD.filter expects a function that returns a boolean: it keeps or drops whole elements of the RDD based on that result. Here the lambda returns a list, and any non-empty list is truthy, so nearly every document is kept unchanged. What you actually want is to transform each list, which is what map does; the inner filter(None, lst) then strips the empty strings out of each one.
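One caveat, assuming you may run this under Python 3 as well: there, filter(None, lst) returns a lazy filter object rather than a list, so a list comprehension is the safer, version-independent way to write the same fix:

# Drop empty strings from each token list; works on Python 2 and 3
tokens = tokens.map(lambda lst: [t for t in lst if t])

print(tokens.take(1))
# e.g. [['35', 'year', 'wild', 'elephant', ...]] with no '' entries left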