
Java: 1 MB quota limit on Blobstore objects in Google App Engine?

I am using the Blobstore to save images. When I try to store an image larger than 1 MB, I get the following exception:

com.google.apphosting.api.ApiProxy$RequestTooLargeException: The request to API call datastore_v3.Put() was too large.
I thought the Blobstore supported much larger objects than that.

Here is the Java code that stores the image:

private void putInBlobStore(final String mimeType, final byte[] data) throws IOException {
    final FileService fileService = FileServiceFactory.getFileService();
    final AppEngineFile file = fileService.createNewBlobFile(mimeType);
    final FileWriteChannel writeChannel = fileService.openWriteChannel(file, true);
    // The entire payload goes through a single write() call here,
    // which is what trips the 1 MB per-API-call limit
    writeChannel.write(ByteBuffer.wrap(data));
    writeChannel.closeFinally();
}

The maximum object size is 2 GB, but each API call can handle at most 1 MB. That is certainly true for reads, and I suspect the same applies to writes. So you could try splitting the write into chunks of 1 MB or less and see if that helps.

As Brummo suggested above, it works if you split the object into chunks < 1 MB. Here's some code:

public BlobKey putInBlobStoreString(String fileName, String contentType, byte[] filebytes) throws IOException {
    // Get a file service
    FileService fileService = FileServiceFactory.getFileService();
    AppEngineFile file = fileService.createNewBlobFile(contentType, fileName);
    // Open a channel to write to it
    boolean lock = true;
    FileWriteChannel writeChannel = fileService.openWriteChannel(file, lock);
    // Buffer the input and write it out in 0.5 MB chunks,
    // staying safely under the 1 MB per-API-call limit
    BufferedInputStream in = new BufferedInputStream(new ByteArrayInputStream(filebytes));
    byte[] buffer = new byte[524288]; // 0.5 MB buffers
    int read;
    while ((read = in.read(buffer)) > 0) { // -1 means end of stream
        // Wrap only the bytes actually read, so a partial final
        // chunk does not write stale data from the buffer
        ByteBuffer bb = ByteBuffer.wrap(buffer, 0, read);
        writeChannel.write(bb);
    }
    writeChannel.closeFinally();
    return fileService.getBlobKey(file);
}
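
For completeness, here is a minimal sketch of serving the stored image back. The servlet class and the "blob-key" request parameter name are assumptions for illustration; BlobstoreService.serve() is the standard API for streaming a blob to the response:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;

// Sketch: the class name and parameter name are illustrative assumptions
public class ImageServlet extends HttpServlet {
    private final BlobstoreService blobstoreService =
            BlobstoreServiceFactory.getBlobstoreService();

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // serve() streams the blob to the response without loading it
        // into memory, so the 1 MB per-call fetch limit does not apply here
        BlobKey blobKey = new BlobKey(req.getParameter("blob-key"));
        blobstoreService.serve(blobKey, res);
    }
}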

Here's how I read and write large files:

public byte[] readImageData(BlobKey blobKey, long blobSize) {
    BlobstoreService blobStoreService = BlobstoreServiceFactory
            .getBlobstoreService();
    byte[] allTheBytes = new byte[0];
    long amountLeftToRead = blobSize;
    long startIndex = 0;
    while (amountLeftToRead > 0) {
        // Stay under the per-call fetch limit
        long amountToReadNow = Math.min(
                BlobstoreService.MAX_BLOB_FETCH_SIZE - 1, amountLeftToRead);

        // fetchData() takes an inclusive end index, hence the -1
        byte[] chunkOfBytes = blobStoreService.fetchData(blobKey,
                startIndex, startIndex + amountToReadNow - 1);

        allTheBytes = ArrayUtils.addAll(allTheBytes, chunkOfBytes);

        amountLeftToRead -= amountToReadNow;
        startIndex += amountToReadNow;
    }

    return allTheBytes;
}
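
If the blob's size is not already known, it can be read from the blob's metadata before calling readImageData; a small sketch using the standard BlobInfoFactory API:

// Look up the blob's size from its metadata (BlobInfoFactory and
// BlobInfo are part of the standard Blobstore API)
BlobInfo info = new BlobInfoFactory().loadBlobInfo(blobKey);
byte[] data = readImageData(blobKey, info.getSize());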

public BlobKey writeImageData(byte[] bytes) throws IOException {
    FileService fileService = FileServiceFactory.getFileService();

    AppEngineFile file = fileService.createNewBlobFile("image/jpeg");
    boolean lock = true;
    FileWriteChannel writeChannel = fileService
            .openWriteChannel(file, lock);

    writeChannel.write(ByteBuffer.wrap(bytes));
    writeChannel.closeFinally();

    return fileService.getBlobKey(file);
}
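
If the single write() call above still triggers RequestTooLargeException on your SDK version, the same method can be written with chunking, along the lines of the earlier answer. A sketch, where the method name and the 0.5 MB chunk size are assumptions chosen to stay under the 1 MB per-call limit:

// Hedged variant of writeImageData() that writes in 0.5 MB slices
// to stay under the 1 MB per-API-call limit
public BlobKey writeImageDataChunked(byte[] bytes) throws IOException {
    FileService fileService = FileServiceFactory.getFileService();
    AppEngineFile file = fileService.createNewBlobFile("image/jpeg");
    FileWriteChannel writeChannel = fileService.openWriteChannel(file, true);

    final int chunkSize = 524288; // 0.5 MB (assumed safe margin)
    for (int offset = 0; offset < bytes.length; offset += chunkSize) {
        int length = Math.min(chunkSize, bytes.length - offset);
        writeChannel.write(ByteBuffer.wrap(bytes, offset, length));
    }
    writeChannel.closeFinally();
    return fileService.getBlobKey(file);
}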

I tried splitting the write into several writeChannel.write calls, but got the same result.

What does "each API call can handle at most 1 MB" mean? Which API? Does it mean 1 MB per (application) request?

No, I would guess it means per function call, more or less, not per web request that triggers the code (otherwise it would be impossible to handle more than 1 MB at all).

It looks like splitting the data into smaller chunks did the trick. I was still getting the exception because I was also trying to store a large datastore record (which has a hard 1 MB limit). Since the exception stack trace was on a different thread, I assumed the Blobstore was causing the problem. Google: you owe me a few hours of debugging. If you had included the stack trace (or looked at it carefully), we could have helped you.

Update: the code above seems to work for me. There no longer seems to be a 1 MB limit…

There is a constant for the maximum blob fetch size: BlobstoreService.MAX_BLOB_FETCH_SIZE = 1015808. I tested this in a local unit test and it works for both reads and writes (a sketch of such a test follows below).
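
A minimal sketch of such a local test. It assumes the App Engine testing JARs (appengine-testing and the API stubs) are on the classpath, JUnit 4 is available, and the writeImageData()/readImageData() methods above are accessible from the test class:

import static org.junit.Assert.assertArrayEquals;
import java.util.Random;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.tools.development.testing.LocalBlobstoreServiceTestConfig;
import com.google.appengine.tools.development.testing.LocalFileServiceTestConfig;
import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

public class BlobRoundTripTest {
    // Stubs for both the Files API and the Blobstore API
    private final LocalServiceTestHelper helper = new LocalServiceTestHelper(
            new LocalFileServiceTestConfig(), new LocalBlobstoreServiceTestConfig());

    @Before
    public void setUp() { helper.setUp(); }

    @After
    public void tearDown() { helper.tearDown(); }

    @Test
    public void roundTripsBlobLargerThanOneMegabyte() throws Exception {
        byte[] original = new byte[3 * 1024 * 1024]; // 3 MB, over the 1 MB per-call limit
        new Random(42).nextBytes(original);

        BlobKey key = writeImageData(original);            // writer from above
        byte[] copy = readImageData(key, original.length); // reader from above

        assertArrayEquals(original, copy);
    }
}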