
Java byte[] keeps accumulating data while streaming instead of pushing only the current chunk

Tags: java, tomcat, java-8, streaming, spring-4

I am using Java 8, Spring 4.3.x, and Tomcat 7.

Below is my code for streaming huge amounts of data from the database in chunks.

public ResponseEntity<StreamingResponseBody> streamRecords(HttpServletRequest request, 
            HttpServletResponse response,....) 
    throws Exception {      
    int recordSize = 5000;
    int pageSize = 1000;
    StreamingResponseBody responseBody = outputStream -> {
        byte[] bytes = null;
        for (int skip = 0 ; skip <= recordSize ; skip += pageSize) {
            logger.error("SKIP: " + skip);
            try {
                JsonObject records = new JsonObject();
                .
                .//somewhere down the call is below code
                .//dbCursor = dbCollection.find().skip(skip).limit(pageSize);
                .
                .
                .
                bytes = records.toString().getBytes();
                outputStream.write(bytes);
                outputStream.flush();
                logger.error("BYTE LENGTH: " + bytes.length);
            } catch (Exception e) {
                    logger.error("Exception occured while streaming records: " + e.getMessage() + "\n", e);
                    outputStream.close();
            }
        }
        outputStream.close();
    };
    return ResponseEntity.ok()
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=sample.csv")
            .contentType(MediaType.APPLICATION_JSON)
            .body(responseBody);
}

The output is as below.

2019-05-14/15:48:30.850/IST[MvcAsync1] ERROR - SKIP: 0
2019-05-14/15:48:49.596/IST[MvcAsync1] ERROR - BYTE LENGTH: 282462
2019-05-14/15:48:49.606/IST[MvcAsync1] ERROR - SKIP: 1000
2019-05-14/15:49:07.777/IST[MvcAsync1] ERROR - BYTE LENGTH: 292462
2019-05-14/15:49:07.793/IST[MvcAsync1] ERROR - SKIP: 2000
2019-05-14/15:49:26.426/IST[MvcAsync1] ERROR - BYTE LENGTH: 282462
2019-05-14/15:49:26.433/IST[MvcAsync1] ERROR - SKIP: 3000
2019-05-14/15:49:45.595/IST[MvcAsync1] ERROR - BYTE LENGTH: 292462
2019-05-14/15:49:45.625/IST[MvcAsync1] ERROR - SKIP: 4000
2019-05-14/15:50:03.962/IST[MvcAsync1] ERROR - BYTE LENGTH: 282462
2019-05-14/15:50:03.996/IST[MvcAsync1] ERROR - SKIP: 5000
2019-05-14/15:50:24.028/IST[MvcAsync1] ERROR - BYTE LENGTH: 292462

The bytes must be a fresh copy of the current chunk, but when I write to outputStream they somehow also carry the previous chunks. Based on recordSize I expect 5000 records, but I get 21000 records.
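
For context, a worked sum added for illustration (not part of the original question): the loop runs six pages (skip = 0, 1000, ..., 5000), and if each write carried every record fetched so far rather than only the current 1000-record page, the page totals would add up to exactly the figure reported:

1000 + 2000 + 3000 + 4000 + 5000 + 6000 = 21000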


Any suggestions would be very helpful.

How do you read records? It looks like you are not skipping the previously read ones.
@Lino: Thanks for the reply, I do skip the previously read ones. I wrote shorter code here for quick understanding.
The part of the code that probably contains the error seems to have been left out. It looks to me like the object that fills "records" is not emptied before the next iteration.
@wesleydekeirsmaker: Thanks for the reply, if you look at JsonObject records = new JsonObject(), I always create a new instance inside the for loop.
Did you resolve the problem @Ankursoni?
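
To illustrate what the comment about "records" not being emptied would mean in practice, here is a minimal, hypothetical sketch (the names are made up and the inner loop merely stands in for the elided dbCollection.find().skip(skip).limit(pageSize) code): if the structure being filled outlives each page and is never cleared, every chunk written to the stream also contains all previously fetched records.

import java.util.ArrayList;
import java.util.List;

public class AccumulationSketch {
    public static void main(String[] args) {
        // Suspected pattern: the accumulating structure lives outside the per-page scope.
        List<String> fetched = new ArrayList<>();
        for (int skip = 0; skip <= 5000; skip += 1000) {
            // Stands in for fetching one 1000-record page from the database.
            for (int i = 0; i < 1000; i++) {
                fetched.add("record-" + (skip + i));
            }
            // Because 'fetched' is never cleared, each chunk holds every record
            // read so far: 1000, 2000, ..., 6000 entries.
            System.out.println("chunk size: " + fetched.size());
        }
        // Creating the list inside the loop (or clearing it after each flush)
        // would keep every chunk at exactly 1000 records.
    }
}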