Apache Spark: error with SQLContext import and parallelize in PySpark
Tags: apache-spark, dataframe, pyspark, rdd

I get the following error:
TypeError: parallelize() missing 1 required positional argument: 'c'
when running this snippet:
line = "Hello, world"
sc.parallelize(list(line)).collect()
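(For reference, once sc is a real SparkContext instance, as the answer below explains, the same call works and returns the characters of the string. A minimal sketch, assuming a local session; the app name here is made up:)

from pyspark.sql import SparkSession

# hypothetical local session just for this demo
spark = SparkSession.builder.master("local[*]").appName("ParallelizeDemo").getOrCreate()
sc = spark.sparkContext

line = "Hello, world"
print(sc.parallelize(list(line)).collect())
# ['H', 'e', 'l', 'l', 'o', ',', ' ', 'w', 'o', 'r', 'l', 'd']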
I also have a problem when creating a DataFrame from a list of strings with just one column:
from pyspark.sql.types import *
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
schema = StructType([StructField("name", StringType(), True)])
df3 = sqlContext.createDataFrame(fuzzymatchIntro, schema)
df3.printSchema()
I get the following error:

----> 3 sqlContext = SQLContext(sc)
AttributeError: type object 'SparkContext' has no attribute '_jsc'

Thanks in advance.

Looking at your comments above, it seems you have initialized the sparkContext the wrong way:
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext
spark = SparkSession.builder.appName("DFTest").getOrCreate()
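Note that sc = SparkContext binds the name to the class itself, not to an instance, and that one mistake explains both errors: sc.parallelize(list(line)) passes the list as self, leaving the required argument c missing, and SQLContext(sc) fails because the class object has no _jsc attribute (only an instance does). A minimal sketch of the same Python pitfall, using a hypothetical Demo class:

class Demo:
    def method(self, c):
        return c

d = Demo            # missing parentheses: d is the class, not an instance
d.method("hello")   # TypeError: method() missing 1 required positional argument: 'c'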
The correct way is shown below; the resulting spark object (a SparkSession) can also do everything sqlContext does:
from pyspark.sql.session import SparkSession
spark = SparkSession.builder.appName("DFTest").getOrCreate()
sc = spark.sparkContext
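Since Spark 2.x, SparkSession supersedes SQLContext, so the DataFrame from the question can be built directly with spark.createDataFrame. A sketch under the assumption that fuzzymatchIntro is a list of one-element tuples (the sample values below are made up); each row must be a tuple or Row, not a bare string:

from pyspark.sql.types import StructType, StructField, StringType
from pyspark.sql.session import SparkSession

spark = SparkSession.builder.appName("DFTest").getOrCreate()

# hypothetical stand-in for the fuzzymatchIntro list from the question
fuzzymatchIntro = [("Alice",), ("Bob",)]

schema = StructType([StructField("name", StringType(), True)])
df3 = spark.createDataFrame(fuzzymatchIntro, schema)
df3.printSchema()
# root
#  |-- name: string (nullable = true)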