
Java: How to avoid a BufferOverflowException?


I am trying to use a ByteBuffer with big-endian byte order correctly.

Before storing it in a Cassandra database, I am trying to combine a couple of fields into a single ByteBuffer.

The byte array I am going to write to Cassandra is composed of three parts, as described below:

short employeeId = 32767;
long lastModifiedDate = 1379811105109L;
byte[] attributeValue = os.toByteArray();
Now I need to snappy-compress the attributeValue data before storing it in Cassandra:

employeeId (not snappy compressed)
lastModifiedDate (not snappy compressed)
attributeValue (snappy compressed)
Now I will write employeeId, lastModifiedDate, and the snappy-compressed attributeValue together as a single byte array, and write the resulting byte array to Cassandra. Then I will have a C++ program that retrieves the byte array data from Cassandra, deserializes it to extract employeeId and lastModifiedDate, and snappy-uncompresses attributeValue from it.

To do this, I am using a ByteBuffer with big-endian byte order.

I have put together this code:

public static void main(String[] args) throws Exception {

        String text = "Byte Buffer Test";
        byte[] attributeValue = text.getBytes();

        long lastModifiedDate = 1289811105109L;
        short employeeId = 32767;

        // snappy compressing it and this line gives BufferOverflowException
        byte[] compressed = Snappy.compress(attributeValue);

        int size = 2 + 8 + 4 + attributeValue.length; // short is 2 bytes, long 8 and int 4

        ByteBuffer bbuf = ByteBuffer.allocate(size); 

        bbuf.order(ByteOrder.BIG_ENDIAN);
        bbuf.putShort(employeeId);
        bbuf.putLong(lastModifiedDate);
        bbuf.putInt(attributeValue.length);
        bbuf.put(compressed); // storing the snappy compressed data

        bbuf.rewind();

        // best approach is copy the internal buffer
        byte[] bytesToStore = new byte[size];
        bbuf.get(bytesToStore);

        // write bytesToStore in Cassandra...

        // Now retrieve the Byte Array data from Cassandra and deserialize it...
        byte[] allWrittenBytesTest = bytesToStore;//magicFunctionToRetrieveDataFromCassandra();

        // I am not sure whether the below read code will work fine or not..
        ByteBuffer bb = ByteBuffer.wrap(allWrittenBytesTest);

        bb.order(ByteOrder.BIG_ENDIAN);
        bb.rewind();

        short extractEmployeeId = bb.getShort();
        long extractLastModifiedDate = bb.getLong();
        int extractAttributeValueLength = bb.getInt();
        byte[] extractAttributeValue = new byte[extractAttributeValueLength];

        bb.get(extractAttributeValue); // read attributeValue from the remaining buffer

        System.out.println(extractEmployeeId);
        System.out.println(extractLastModifiedDate);
        System.out.println(new String(Snappy.uncompress(extractAttributeValue)));

}
Somehow, the above code throws a BufferOverflowException:

Exception in thread "main" java.nio.BufferOverflowException
    at java.nio.HeapByteBuffer.put(HeapByteBuffer.java:165)
    at java.nio.ByteBuffer.put(ByteBuffer.java:813)
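The stack trace points at HeapByteBuffer.put, which throws whenever you try to write more bytes than the buffer has remaining. A minimal standalone sketch of that failure mode (not the original code):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class OverflowDemo {
    public static void main(String[] args) {
        // Allocate 4 bytes but try to put 5: length > remaining(),
        // so put() throws BufferOverflowException before writing anything.
        ByteBuffer buf = ByteBuffer.allocate(4);
        try {
            buf.put(new byte[5]);
        } catch (BufferOverflowException e) {
            System.out.println("overflow: tried 5 bytes, room for " + buf.remaining());
        }
    }
}
```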
The reason I compress the data before storing it in Cassandra is that when my C++ code retrieves the data from Cassandra, it should still be compressed, so that it does not take up much memory in our C++ map. We only decompress it when somebody asks us for it.


Can anyone take a look and let me know what I am doing wrong here? And how should I read the data back?

When allocating the ByteBuffer, you should use the compressed length.

You are not calculating size correctly. You can't really tell that from the exception alone, but is it thrown at Snappy.compress(attributeValue)? Unlikely. Do you see Snappy anywhere in the stack trace?
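Following both answers, a sketch of the fix: size the buffer, and the length prefix, from compressed.length rather than attributeValue.length. For very short inputs a compressor's output is typically larger than its input, which is exactly what overflows the undersized buffer. Since snappy-java may not be on your classpath, java.util.zip.Deflater/Inflater stand in for Snappy.compress/uncompress below; the sizing logic is the same either way.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class BufferSizeFix {
    // The buffer must be sized from the COMPRESSED length, and the length
    // prefix must record compressed.length so the reader knows how many
    // bytes to pull back out before uncompressing.
    static byte[] serialize(short employeeId, long lastModifiedDate, byte[] compressed) {
        ByteBuffer buf = ByteBuffer.allocate(2 + 8 + 4 + compressed.length);
        buf.order(ByteOrder.BIG_ENDIAN);
        buf.putShort(employeeId);
        buf.putLong(lastModifiedDate);
        buf.putInt(compressed.length);
        buf.put(compressed);
        return buf.array(); // backing array holds exactly the bytes written
    }

    public static void main(String[] args) throws Exception {
        byte[] attributeValue = "Byte Buffer Test".getBytes("UTF-8");

        // Deflater stands in for Snappy.compress here.
        Deflater deflater = new Deflater();
        deflater.setInput(attributeValue);
        deflater.finish();
        byte[] tmp = new byte[attributeValue.length + 64];
        int n = deflater.deflate(tmp);
        byte[] compressed = Arrays.copyOf(tmp, n);

        byte[] stored = serialize((short) 32767, 1289811105109L, compressed);

        // Deserialize in the same order the fields were written.
        ByteBuffer bb = ByteBuffer.wrap(stored).order(ByteOrder.BIG_ENDIAN);
        short id = bb.getShort();
        long date = bb.getLong();
        byte[] extracted = new byte[bb.getInt()];
        bb.get(extracted);

        // Inflater stands in for Snappy.uncompress.
        Inflater inflater = new Inflater();
        inflater.setInput(extracted);
        byte[] out = new byte[256];
        int m = inflater.inflate(out);
        System.out.println(id + " " + date + " " + new String(out, 0, m, "UTF-8"));
    }
}
```

Note that ByteBuffer.wrap on the retrieved array, followed by getShort/getLong/getInt/get in the same order as the writes, is exactly the read approach the question asks about; it works as long as the length prefix matches what was actually stored.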