Apache Spark spark.ml StringIndexer throws "Unseen label" on fit()
I am preparing a toy spark.ml example. Spark version 1.6.0, running on top of Oracle JDK version 1.8.0_65, pyspark, IPython notebook.
First of all, this is mostly not about handling unseen labels as such: the exception is thrown while fitting the pipeline to the dataset, not while transforming it. And suppressing the exception might not be a solution here, since I'm afraid the dataset gets messed up pretty badly in that case.
My dataset is about 800 MB uncompressed, so it may be hard to reproduce (smaller subsets seem to dodge the problem).
The dataset looks like this:
+--------------------+-----------+-----+-------+-----+--------------------+
| url| ip| rs| lang|label| txt|
+--------------------+-----------+-----+-------+-----+--------------------+
|http://3d-detmold...|217.160.215|378.0| de| 0.0|homwillkommskip c...|
| http://3davto.ru/| 188.225.16|891.0| id| 1.0|оформить заказ пе...|
| http://404.szm.com/| 85.248.42| 58.0| cs| 0.0|kliknite tu alebo...|
| http://404.xls.hu/| 212.52.166|168.0| hu| 0.0|honlapkészítés404...|
|http://a--m--a--t...| 66.6.43|462.0| en| 0.0|back top archiv r...|
|http://a-wrf.ru/c...| 78.108.80|126.0|unknown| 1.0| |
|http://a-wrf.ru/s...| 78.108.80|214.0| ru| 1.0|установк фаркопна...|
+--------------------+-----------+-----+-------+-----+--------------------+
The value being predicted is label. The entire pipeline fitted to it:
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer, OneHotEncoder, Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

train, test = munge(src_dataframe).randomSplit([70., 30.], seed=12345)

pipe_stages = [
    StringIndexer(inputCol='lang', outputCol='lang_idx'),
    OneHotEncoder(inputCol='lang_idx', outputCol='lang_onehot'),
    Tokenizer(inputCol='ip', outputCol='ip_tokens'),
    HashingTF(numFeatures=2**10, inputCol='ip_tokens', outputCol='ip_vector'),
    Tokenizer(inputCol='txt', outputCol='txt_tokens'),
    HashingTF(numFeatures=2**18, inputCol='txt_tokens', outputCol='txt_vector'),
    VectorAssembler(inputCols=['lang_onehot', 'ip_vector', 'txt_vector'], outputCol='features'),
    LogisticRegression(labelCol='label', featuresCol='features')
]

pipe = Pipeline(stages=pipe_stages)
pipemodel = pipe.fit(train)
And here is the stack trace:
Py4JJavaError: An error occurred while calling o10793.fit.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 627.0 failed 1 times, most recent failure: Lost task 18.0 in stage 627.0 (TID 23259, localhost): org.apache.spark.SparkException: Unseen label: pl-PL.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1.apply(RDD.scala:1025)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.reduce(RDD.scala:1007)
at org.apache.spark.rdd.RDD$$anonfun$treeAggregate$1.apply(RDD.scala:1136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.treeAggregate(RDD.scala:1113)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:271)
at org.apache.spark.ml.classification.LogisticRegression.train(LogisticRegression.scala:159)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:90)
at org.apache.spark.ml.Predictor.fit(Predictor.scala:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.SparkException: Unseen label: pl-PL.
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:157)
at org.apache.spark.ml.feature.StringIndexerModel$$anonfun$4.apply(StringIndexer.scala:153)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalExpr2$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:51)
at org.apache.spark.sql.execution.Project$$anonfun$1$$anonfun$apply$1.apply(basicOperators.scala:49)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:282)
at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:171)
at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:78)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
The most interesting line is:
org.apache.spark.SparkException: Unseen label: pl-PL.
I have no idea how pl-PL, which is a value from the lang column, could have gotten mixed up with the label column, which is a float, not a string.
EDIT: some hasty conclusions, corrected thanks to @zero323.

I have looked into it further and found that pl is a value from the test part of the dataset, not the training part. So now I don't even know where to look for the culprit: it may well be the randomSplit code rather than StringIndexer, and who knows what else.
How can I investigate this?

Unseen label is a generic message. The most likely problem is with the following stage:
StringIndexer(inputCol='lang', outputCol='lang_idx')
with pl present in train("lang") and not present in test("lang").
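A quick way to check whether a value really is confined to one side of the split is to diff the distinct values of the column between the two parts (a sketch, reusing the train and test dataframes from the question; keep in mind that, as it turns out below, the split itself may not be stable across recomputations):

# Compare the distinct lang values on the two sides of the split.
# A value present in one set but not the other is "unseen" for a
# StringIndexerModel fitted on the other side.
train_langs = {row['lang'] for row in train.select('lang').distinct().collect()}
test_langs = {row['lang'] for row in test.select('lang').distinct().collect()}
print(test_langs - train_langs)
print(train_langs - test_langs)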
You can correct it using setHandleInvalid with skip:
from pyspark.ml.feature import StringIndexer

train = sc.parallelize([(1, "foo"), (2, "bar")]).toDF(["k", "v"])
test = sc.parallelize([(3, "foo"), (4, "foobar")]).toDF(["k", "v"])

indexer = StringIndexer(inputCol="v", outputCol="vi")
indexer.fit(train).transform(test).show()
## Py4JJavaError: An error occurred while calling o112.showString.
## : org.apache.spark.SparkException: Job aborted due to stage failure:
## ...
## org.apache.spark.SparkException: Unseen label: foobar.
indexer.setHandleInvalid("skip").fit(train).transform(test).show()
## +---+---+---+
## |  k|  v| vi|
## +---+---+---+
## |  3|foo|1.0|
## +---+---+---+
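Note what skip does to the output above: it does not fix the row, it drops it. The test row with the unseen value foobar simply disappears, which is exactly why, as the question feared, suppressing the error can quietly mess up a dataset.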
Or, in the most recent versions, you can use keep:
indexer.setHandleInvalid("keep").fit(train).transform(test).show()
## +---+------+---+
## |  k|     v| vi|
## +---+------+---+
## |  3|   foo|0.0|
## |  4|foobar|2.0|
## +---+------+---+
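With keep, the unseen value is retained and mapped to an extra bucket after all the labels seen during fit (hence foobar gets index 2.0 above), so no rows are lost; downstream stages just have to cope with the additional category.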
OK, I think I got it. At least I got it working: caching the dataframe (including the train/test parts) solved the problem. That is what I found in this JIRA issue:
So it is not a bug, just the fact that randomSample may yield different results on an identical, but differently partitioned, dataset. Apparently some of my munging functions (or the Pipeline) involve repartitioning, so the result of recomputing the train set from its definition may differ from the train set the indexer was fitted on.
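In code, the fix amounts to materializing the splits before fitting anything (a minimal sketch, reusing munge and src_dataframe from the question):

# Cache the parent dataframe and force both splits to be computed once,
# so later recomputations cannot move rows between train and test.
munged = munge(src_dataframe).cache()
train, test = munged.randomSplit([70., 30.], seed=12345)
train.cache().count()
test.cache().count()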
What I am still curious about is the reproducibility: it is always the "pl" rows that end up in the wrong part of the dataset, i.e. the repartitioning is not random. It is deterministic, just inconsistent, and I wonder how exactly that happens.

Where in the pipeline do you put setHandleInvalid("skip")?

@mikeL Wherever you define the StringIndexer. It is a parameter of the indexer, not of the Pipeline.
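For example, with the column names from the question, that could look like this (a sketch; only the indexer stage changes):

from pyspark.ml.feature import StringIndexer

# handleInvalid is a parameter of the StringIndexer stage, not of the
# Pipeline; it can be passed at construction time...
lang_indexer = StringIndexer(inputCol='lang', outputCol='lang_idx',
                             handleInvalid='skip')

# ...or set afterwards, before the stage list is handed to Pipeline(stages=...):
lang_indexer.setHandleInvalid('skip')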