Python PySpark - UnicodeEncodeError: 'ascii' codec can't encode character '\ufffd' in position 124: ordinal not in range(128)


When I try to display a Spark dataframe on the terminal with the following code, I get a UnicodeEncodeError:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import NotFoundError
### Creating Spark Session
spark = SparkSession \
                .builder \
                .appName("test") \
                .config("spark.executor.heartbeatInterval","60s") \
                .getOrCreate() 

spark.conf.set('spark.sql.session.timeZone', 'UTC')
spark.sparkContext.setLogLevel("ERROR")

es_server_ip = "elasticsearch"
es_server_port = "9200"
es_conn = Elasticsearch("http://user:password@elasticsearch:9200",use_ssl=False,verify_certs=True)


# Function to read a dataframe from an Elasticsearch index
def readFromES(esIndex,esQuery):
    esDf = spark.read.format("org.elasticsearch.spark.sql") \
            .option("es.nodes",es_server_ip ) \
            .option("es.port",es_server_port) \
            .option("es.net.http.auth.user", "user") \
            .option("es.net.http.auth.pass", "password") \
            .option("es.net.ssl","false") \
            .option("es.net.ssl.cert.allow.self.signed","true") \
            .option("es.read.metadata", "false") \
            .option("es.mapping.date.rich", "false") \
            .option("es.query",esQuery) \
            .load(esIndex)
    return esDf

#defining the elastic search query
q_ci = """{
       "query": {
        "match_all": {}
      }
    }"""

#invoking the function and saving the data to df1
df1 = readFromES("test_delete",q_ci)
df1.show(truncate=False)
Error:

df1.show(truncate=False)
  File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 382, in show
UnicodeEncodeError: 'ascii' codec can't encode character '\ufffd' in position 124: ordinal not in range(128)
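For what it is worth, the failure is not specific to show() itself: show() ends in a plain Python print(), which encodes with whatever encoding sys.stdout uses, so any non-ASCII character fails when the driver's stdout encoding is ASCII. A quick diagnostic sketch of my own (the encoding you see depends on your locale settings):

import sys

# If this prints 'ascii' or 'ANSI_X3.4-1968', any non-ASCII print will fail.
print(sys.stdout.encoding)

# Under an ASCII stdout this raises the very same UnicodeEncodeError
# reported above for df1.show().
print('\ufffd')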

The output I need is as follows:

+--------------------+------+-----+
|hostname            |kpi   |value|
+--------------------+------+-----+
|host4               |cpu   |95   |
|host3               |disk  |90   |
|Apr�ngli            |cpu   |78   |
|host2               |memory|85   |
+--------------------+------+-----+
You can simulate the dataframe with the code below:

data1 = [("Apr�ngli","cpu",78),
       ("host2","memory",85),
       ("host3","disk",90),
       ("host4","cpu",95),
    ]
schema1= StructType([ \
    StructField("hostname",StringType(),True), \
    StructField("kpi",StringType(),True), \
    StructField("value",IntegerType(),True)
        ])
df1 = spark.createDataFrame(data=data1,schema=schema1)
df1.printSchema()
df1.show(truncate=False)
Steps I have taken: as mentioned in other Stack Overflow answers, I did the following, but I still receive the error:

export PYTHONIOENCODING=utf8
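Exporting PYTHONIOENCODING only helps if the variable actually reaches the Python driver process that spark-submit starts. If it does not, one workaround (my own sketch, not something tried in the original post) is to rewrap sys.stdout as UTF-8 inside the script; sys.stdout.reconfigure() only exists from Python 3.7, so on Python 3.6 an io.TextIOWrapper does the job:

import io
import sys

# Rewrap stdout so that print(), and therefore df.show(), encodes output
# as UTF-8 regardless of the locale the driver process inherited.
# Assumes Python 3, where sys.stdout.buffer is the underlying byte stream.
if (sys.stdout.encoding or "").lower() not in ("utf-8", "utf8"):
    sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")

df1.show(truncate=False)  # should no longer raise UnicodeEncodeError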
Version details:

PYTHON_VERSION=3.6.8
Spark version 2.4.5

Is this specific to PySpark, or would you get the same exception if you execute print('\ufffd')?

If I execute print('\ufffd'), I get the same error. By the way, FYI: I am submitting the code with spark-submit.
\ufffd is � (the REPLACEMENT CHARACTER) in "Apr�ngli". Fix it and save the script in utf-8 (perhaps with the line # -*- coding: utf-8 -*-).

@JosefZ Actually the character � comes from Elasticsearch; I created the dataframe here only to simulate it.
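Since the replacement character comes from the Elasticsearch data itself, a further option (a sketch of my own, not suggested in the comments; the column name hostname is taken from the sample schema) is to substitute U+FFFD in the affected column before displaying:

from pyspark.sql import functions as F

# Swap the Unicode replacement character (U+FFFD) for a plain '?' so the
# rendered table contains only ASCII and show() succeeds on any terminal.
df_clean = df1.withColumn("hostname", F.regexp_replace("hostname", "\ufffd", "?"))
df_clean.show(truncate=False)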