Apache Spark: writing a PySpark DataFrame to Phoenix

This is my code for writing the DataFrame df to Phoenix:

df.write \
    .format("org.apache.phoenix.spark") \
    .mode("overwrite") \
    .option("table", "TABLETEST") \
    .option("zkUrl", "10.10.10.151:2181") \
    .save()
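
For context, the read side of the same connector takes the same two options; this is only a minimal sketch (df2 and the session lookup are not part of the failing job), assuming the table and ZooKeeper address are the ones above:

from pyspark.sql import SparkSession

# Reuse the session that is already active in the job.
spark = SparkSession.builder.getOrCreate()

# Read TABLETEST back through the same Phoenix connector and options.
df2 = spark.read \
    .format("org.apache.phoenix.spark") \
    .option("table", "TABLETEST") \
    .option("zkUrl", "10.10.10.151:2181") \
    .load()
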
When the write job runs, it first shows the connection status:

INFO ZooKeeper: Initiating client connection, connectString=10.10.10.151:2181 sessionTimeout=90000 watcher=hconnection-0x4fd269230x0, quorum=10.10.10.151:2181, baseZNode=/hbase
INFO ClientCnxn: Opening socket connection to server 10.10.10.151/10.10.10.151:2181. Will not attempt to authenticate using SASL (unknown error)
INFO ClientCnxn: Socket connection established to 10.10.10.151/10.10.10.151:2181, initiating session
INFO ClientCnxn: Session establishment complete on server 10.10.10.151/10.10.10.151:2181, sessionid = 0x1610f2bcaee003d, negotiated timeout = 90000
But then it shows the following errors and keeps retrying the call indefinitely:

INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=68188 ms ago, cancelled=false, msg=row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=bigdata-datanode,16020,1516377985241, seqNum=0
INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=88386 ms ago, cancelled=false, msg=row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=bigdata-datanode,16020,1516377985241, seqNum=0
I have also added the JARs and hbase-site.xml as shown below:

 /opt/spark/jars
... 
phoenix-core-4.12.0-HBase-1.2.jar
phoenix-spark-4.12.0-HBase-1.2.jar
hbase-common-1.2.0.jar
hbase-client-1.2.0.jar
hbase-protocol-1.2.0.jar
hbase-server-1.2.0.jar
...
/opt/spark/conf
...
hbase-site.xml
...
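
For completeness, this is roughly how the session and DataFrame are created before the df.write shown above runs; a minimal sketch, where the app name and the sample rows are hypothetical stand-ins for my real data, and the jars and hbase-site.xml listed here are assumed to be picked up from the default classpath:

from pyspark.sql import SparkSession

# Plain session; nothing is configured in code because the Phoenix/HBase jars
# in /opt/spark/jars and the hbase-site.xml in /opt/spark/conf are already on
# the driver and executor classpath.
spark = SparkSession.builder.appName("phoenix-write-test").getOrCreate()

# Hypothetical sample rows; the column names would have to match the columns
# of TABLETEST for the Phoenix write to succeed.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["ID", "COL1"])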