Python IndexError: string index out of range


I am very new to Spark programming. I am trying to implement a map and a reduceByKey on the following dataset, which has 15 fields:

rdd = sc.parallelize([
    ("West", "Apple", 2.0, 10, 2.0, 10, 2.0, 10, 2.0, 10, 2.0, 10, 2.0, 2.0, 10),
    ("West", "Apple", 3.0, 10, 2.0, 10, 2.0, 10, 2.0, 10, 2.0, 10, 2.0, 10, 2.0)])
This is my map function, where I try to create a tuple with multiple keys and multiple values:

rdd1 = rdd.map(lambda x: ((x[0],x[1]),(x[2],x[3],x[4],x[5],x[6],x[7],x[8],x[9],x[10],x[11],x[12],x[13],x[14])))
Next I try a reduceByKey (to implement SQL-like aggregation on the values in the tuple above).

This reduce function works as expected for value indexes 0-4, but when I try indexes 5-14 I get an IndexError:

rdd2 = rdd1.reduceByKey(lambda x,y: (x[10]+','+y[10]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark/python/pyspark/rdd.py", line 1277, in take
res = self.context.runJob(self, takeUpToNumLeft, p, True)
File "/opt/spark/python/pyspark/context.py", line 897, in runJob
allowLocal)
File "/opt/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py",   line 538, in __call__
File "/opt/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError15/08/28 01:23:22 WARN TaskSetManager: Lost task 1.0 in stage 78.0 (TID 91, localhost): TaskKilled (killed  intentionally)
: An error occurred while calling  z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure:   Task 0 in stage 78.0 failed 1 times, most recent failure: Lost task 0.0 in   stage 78.0 (TID 90, localhost):   org.apache.spark.api.python.PythonException: Traceback (most recent call   last):
File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/opt/spark/python/pyspark/rdd.py", line 2330, in pipeline_func
return func(split, prev_func(split, iterator))
File "/opt/spark/python/pyspark/rdd.py", line 2330, in pipeline_func
return func(split, prev_func(split, iterator))
File "/opt/spark/python/pyspark/rdd.py", line 316, in func
return f(iterator)
File "/opt/spark/python/pyspark/rdd.py", line 1758, in combineLocally
merger.mergeValues(iterator)
File "/opt/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 268, in mergeValues
d[k] = comb(d[k], v) if k in d else creator(v)
File "<stdin>", line 1, in <lambda>
IndexError: string index out of range

at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:315)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at      java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:695)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
This looks like a fairly significant error. I am not sure whether it is caused by my machine's hardware, by my implementation of the reduce function, or by something in Spark itself.

Any help is appreciated.

File "<stdin>", line 1, in <lambda>
IndexError: string index out of range
The error occurs in your lambda function. The sequence you are indexing (a tuple, list, or string) does not have as many elements as the function was written to expect.
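
The "string" in the message is the clue to where that short sequence comes from: with reduceByKey, the value returned by one call to the lambda is fed back in as x on the next call, so once the lambda returns a short string such as "A,B", a later call tries to read position 10 of that string. Below is a minimal, Spark-free sketch of that behaviour; the reducer and the sample strings are made up purely to reproduce the shape of the failure (and they also show why indexes 0-4 appear to work, since the intermediate strings are still long enough for those positions):

from functools import reduce

# Same shape as the lambda in the question.
reducer = lambda x, y: x[10] + ',' + y[10]

# Three made-up values for the same key; each is long enough for the first call.
values = ["0123456789A", "0123456789B", "0123456789C"]

# First call:  x = "0123456789A", y = "0123456789B"  ->  returns "A,B"
# Second call: x = "A,B" (the previous result), and "A,B"[10] is past its end.
reduce(reducer, values)   # IndexError: string index out of range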

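
In general the function you pass to reduceByKey needs to return the same kind of value it receives, because its result becomes the next x. Two hedged sketches along those lines, reusing the field positions from the question only as an illustration (field 12 of the original record is the value the question's lambda indexed as x[10], and the element-wise sum assumes the remaining fields are numeric):

# Sketch 1: pull the single field of interest out in the map step, so the
# reduce function only ever sees plain strings and concatenation stays valid.
rdd1 = rdd.map(lambda x: ((x[0], x[1]), str(x[12])))
rdd2 = rdd1.reduceByKey(lambda x, y: x + ',' + y)

# Sketch 2: keep the value a tuple of the same shape on both sides of the
# reduce, e.g. an element-wise sum over the 13 remaining fields.
rdd3 = rdd.map(lambda x: ((x[0], x[1]), x[2:])) \
          .reduceByKey(lambda a, b: tuple(p + q for p, q in zip(a, b)))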