
Python - Ingesting Kafka data into HBase via PySpark: error while calling None.org.apache.spark.streaming.api.java.JavaStreamingContext

Tags: python, python-3.x, apache-spark, pyspark, apache-kafka

I am trying to ingest real-time Kafka data into HBase via PySpark, driven by a configuration. I have a problem with Spark Streaming; that is, I am running into an error like the following:

Py4JJavaError: An error occurred while calling None.org.apache.spark.streaming.api.java.JavaStreamingContext.
: java.lang.NullPointerException
    at org.apache.spark.streaming.api.java.JavaStreamingContext.<init>(JavaStreamingContext.scala:130)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Update

I think the problem was related to sc.stop(). Removing it and changing sc = SparkContext(conf=config) to SparkContext.getOrCreate(conf=config) seems to have solved the problem: stopping the context apparently leaves sc without a live JavaSparkContext, which is why the JavaStreamingContext constructor failed with the NullPointerException above.
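
In code, the change amounts to this (same names as the full script below):

# Before: the second stop() leaves sc without a live JVM context, so the
# StreamingContext constructor receives a null JavaSparkContext:
#   sc.stop()
#   sc = SparkContext(conf=config)
#   sc.stop()
# After: reuse the running context, or create a fresh one if none exists.
sc = SparkContext.getOrCreate(conf=config)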

import os
# Put the Kafka 0-8 assembly jar on the classpath before pyspark starts the JVM.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /my/path/spark/spark-streaming-kafka-0-8-assembly_2.11-2.4.0.jar pyspark-shell'

import findspark
findspark.init()
import pyspark
import random
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
# from pyspark_ext import *
import happybase

appName = "Kafka_MapR-Streams_to_HBase"
config = SparkConf().setAppName(appName)  

# spark.batchDuration and spark.streaming.timeout are app-level keys (not
# standard Spark properties); they are read back below via config.get().
props = []
props.append(("spark.rememberDuration", "10"))
props.append(("spark.batchDuration", "10"))
props.append(("spark.eventLog.enabled", "true"))
props.append(("spark.streaming.timeout", "30"))
props.append(("spark.ui.enabled", "true"))

config = config.setAll(props)

# Per the update above: reuse the running SparkContext (or create one) instead
# of stopping and recreating it, which is what triggered the NullPointerException.
sc = SparkContext.getOrCreate(conf=config)
ssc = StreamingContext(sc, int(config.get("spark.batchDuration")))

def runApplication(ssc, config):
  ssc.start()
  if config.get("spark.streaming.timeout") == '':
    # No timeout configured: block until the stream is terminated externally.
    ssc.awaitTermination()
  else:
    stopped = ssc.awaitTerminationOrTimeout(int(config.get("spark.streaming.timeout")))
    if not stopped:
      # The timeout elapsed before termination, so stop the context ourselves.
      print("Stopping streaming context after timeout...")
      ssc.stop(True)
      print("Streaming context stopped.")

hbase_table = 'clicks'
hconn = happybase.Connection('hostname')  
ctable = hconn.table(hbase_table)
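
The script above configures everything but never actually consumes from Kafka or writes to HBase. A minimal sketch of that missing last step follows, assuming a broker at broker-host:9092, a topic named clicks, and a column family cf (none of these appear in the original post). The driver-side hconn/ctable handles cannot be pickled to executors, so each partition opens its own connection:

import uuid

kafka_params = {"metadata.broker.list": "broker-host:9092"}   # assumed broker address
stream = KafkaUtils.createDirectStream(ssc, ["clicks"], kafka_params)  # assumed topic name

def save_partition(partition):
  # Open one HBase connection per partition on the executor side.
  conn = happybase.Connection('hostname')
  table = conn.table(hbase_table)
  with table.batch() as batch:
    for key, value in partition:
      # Assumed schema: Kafka message key as the row key (random key if
      # absent), payload into the assumed column family 'cf'.
      row = key if key is not None else uuid.uuid4().hex
      batch.put(row, {b'cf:payload': value.encode('utf-8')})
  conn.close()

stream.foreachRDD(lambda rdd: rdd.foreachPartition(save_partition))

runApplication(ssc, config)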