Spark Java: How to move data from an HTTP source to a Couchbase sink?

I have a .gz file on a web server that I want to consume in a streaming fashion, inserting the data into Couchbase. The .gz archive contains a single file, which in turn contains one JSON object per line.

Since Spark has no HTTP receiver, I wrote my own (shown below) and use it to do the inserts. However, when run, the job does not actually insert anything. I suspect this is down to my inexperience with Spark and not knowing how to start the job and wait for termination. As you can see below, there are two places where such a call could be made.

The receiver:

public class HttpReceiver extends Receiver<String> {
    private final String url;

    public HttpReceiver(String url) {
        super(MEMORY_AND_DISK());
        this.url = url;
    }

    @Override
    public void onStart() {
        new Thread(() -> receive()).start();
    }

    private void receive() {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setAllowUserInteraction(false);
            conn.setInstanceFollowRedirects(true);
            conn.setRequestMethod("GET");
            conn.setReadTimeout(60 * 1000);

            InputStream gzipStream = new GZIPInputStream(conn.getInputStream());
            Reader decoder = new InputStreamReader(gzipStream, UTF_8);
            BufferedReader reader = new BufferedReader(decoder);

            String json = null;
            while (!isStopped() && (json = reader.readLine()) != null) {
                store(json);
            }
            reader.close();
            conn.disconnect();
        } catch (IOException e) {
            stop(e.getMessage(), e);
        }
    }

    @Override
    public void onStop() {

    }
}
public void load(String url) throws StreamingQueryException, InterruptedException {
    JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(1000));
    JavaReceiverInputDStream<String> lines = ssc.receiverStream(new HttpReceiver(url));

    lines.foreachRDD(rdd ->
            sql.read().json(rdd)
                    .select(new Column("id"),
                            new Column("name"),
                            new Column("rating"),
                            new Column("review_count"),
                            new Column("hours"),
                            new Column("attributes"))
                    .writeStream()
                    .option("idField", "id")
                    .format("com.couchbase.spark.sql")
                    .start()
//                    .awaitTermination(sparkProperties.getTerminationTimeoutMillis())
    );

//    ssc.start();
    ssc.awaitTerminationOrTimeout(sparkProperties.getTerminationTimeoutMillis());
}
EDIT 1: But it throws IllegalStateException: SparkContext has been shutdown:

11004 [JobScheduler] ERROR org.apache.spark.streaming.scheduler.JobScheduler  - Error running job streaming job 1488664987000 ms.0
java.lang.IllegalStateException: SparkContext has been shutdown
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1981)
    at org.apache.spark.rdd.RDD$$anonfun$fold$1.apply(RDD.scala:1088)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.RDD.fold(RDD.scala:1082)
    at org.apache.spark.sql.execution.datasources.json.InferSchema$.infer(InferSchema.scala:69)
EDIT 2: It turns out the error in EDIT 1 was caused by a @PostDestruct method of mine that closed the context. I am using Spring, and the bean is supposed to be a singleton, but somehow Spark caused it to be destroyed before the job finished. I have now removed the @PostDestruct and made some changes; the following seems to work, but there is an open question:

public void load(String dataDirURL, String format) throws StreamingQueryException, InterruptedException {
    JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(1000));
    JavaReceiverInputDStream<String> lines = ssc.receiverStream(new HttpReceiver(dataDirURL));

    lines.foreachRDD(rdd -> {
        try {
            Dataset<Row> select = sql.read().json(rdd)
                    .select("id", "name", "rating", "review_count", "hours", "attributes");
            couchbaseWriter(select.write()
                    .option("idField", "id")
                    .format(format))
                    .couchbase();
        } catch (Exception e) {
            // From time to time this throws AnalysisException: cannot resolve '`id`' given input columns: []
        }
    });

    ssc.start();
    ssc.awaitTerminationOrTimeout(sparkProperties.getTerminationTimeoutMillis());
}
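
A likely cause of that intermittent AnalysisException is an empty micro-batch: schema inference over an empty RDD produces no columns, so select("id") cannot be resolved. A minimal sketch of a guard, reusing the foreachRDD body from the method above:

lines.foreachRDD(rdd -> {
    // Skip empty micro-batches: json() infers no columns from an empty RDD,
    // which is what produces "cannot resolve '`id`' given input columns: []".
    if (rdd.isEmpty()) {
        return;
    }
    Dataset<Row> select = sql.read().json(rdd)
            .select("id", "name", "rating", "review_count", "hours", "attributes");
    couchbaseWriter(select.write()
            .option("idField", "id")
            .format(format))
            .couchbase();
});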

Answering my own question, here is what I finally got working without any exceptions:

public void load(String dataDirURL, String format) throws InterruptedException {
    JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(1000));
    JavaReceiverInputDStream<String> lines = ssc.receiverStream(new HttpReceiver(dataDirURL));

    ObjectMapper objectMapper = new ObjectMapper();

    lines.foreachRDD(rdd -> {
                JavaRDD<RawJsonDocument> docRdd = rdd
                        .filter(content -> !isEmpty(content))
                        .map(content -> {
                            String id = "";
                            String modifiedContent = "";
                            try {
                                ObjectNode node = objectMapper.readValue(content, ObjectNode.class);
                                if (node.has("id")) {
                                    id = node.get("id").textValue();
                                    modifiedContent = objectMapper.writeValueAsString(node.retain(ALLOWED_FIELDS));
                                }
                            } catch (IOException e) {
                                e.printStackTrace();
                            } finally {
                                return RawJsonDocument.create(id, modifiedContent);
                            }
                        })
                        .filter(doc -> !isEmpty(doc.id()));
                couchbaseDocumentRDD(docRdd)
                        .saveToCouchbase(UPSERT);
            }
    );

    ssc.start();
    ssc.awaitTerminationOrTimeout(sparkProperties.getTerminationTimeoutMillis());
}
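
For completeness, ALLOWED_FIELDS, isEmpty, couchbaseDocumentRDD and UPSERT are not declared in the snippet above. The following is a sketch of the supporting declarations I am assuming: ALLOWED_FIELDS presumably mirrors the columns selected in the earlier attempts, and the connector imports are my guess at the Couchbase Spark connector's Java API.

// Assumed supporting declarations (not shown in the original snippet); the exact
// connector imports are an assumption about the Couchbase Spark connector's Java API.
import static com.couchbase.spark.StoreMode.UPSERT;
import static com.couchbase.spark.japi.CouchbaseDocumentRDD.couchbaseDocumentRDD;
import static org.apache.commons.lang3.StringUtils.isEmpty;   // assumed source of isEmpty()

import java.util.Arrays;
import java.util.List;

// Presumably mirrors the columns selected in the DataFrame-based attempts above.
private static final List<String> ALLOWED_FIELDS =
        Arrays.asList("id", "name", "rating", "review_count", "hours", "attributes");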

writeStream looks like the problem. You are not using Structured Streaming here, so use a plain write followed by save instead, as in sql.read().json(rdd).select(…).write().option(…).format(…).save()

@zero323, no, see EDIT 1. I also tried dropping couchbaseWriter and calling save directly, with the same exception. Judging by the source code, couchbaseWriter seems to call save internally anyway.

This actually looks like an exception thrown during schema inference, not in the writer.

@zero323, see EDIT 2.

Then it is most likely a problem with the data. If I were you, I would test this in batch mode first (take one batch, dump the rdd to a text file, and see where to go from there).
Lost task 1.0 in stage 2.0 (TID 4, localhost, executor driver): com.couchbase.client.java.error.DocumentAlreadyExistsException
at com.couchbase.client.java.CouchbaseAsyncBucket$13.call(CouchbaseAsyncBucket.java:475)
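
For reference, here is a minimal sketch of the batch-write variant suggested in the comments, applied to the load() method from the question. The SaveMode handling is an assumption: the DocumentAlreadyExistsException above suggests the default mode inserts rather than upserts, so the sketch uses SaveMode.Overwrite (org.apache.spark.sql.SaveMode), which I am assuming maps to upsert semantics in the Couchbase DataFrame source.

lines.foreachRDD(rdd -> {
    if (rdd.isEmpty()) {
        return;                                 // nothing to write for this micro-batch
    }
    // Plain write()/save() per micro-batch instead of writeStream(): this job uses
    // DStream-based streaming, not Structured Streaming.
    sql.read().json(rdd)
            .select("id", "name", "rating", "review_count", "hours", "attributes")
            .write()
            .mode(SaveMode.Overwrite)           // assumption: upsert semantics, avoiding
                                                // DocumentAlreadyExistsException on re-runs
            .option("idField", "id")
            .format("com.couchbase.spark.sql")
            .save();
});

ssc.start();                                    // start only after all outputs are defined
ssc.awaitTerminationOrTimeout(sparkProperties.getTerminationTimeoutMillis());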