Java: unexpected shutdown in Cassandra due to an OutOfMemoryError

I am using Cassandra 2.0.9 and I am seeing unexpected shutdowns caused by an OutOfMemoryError; the error occurs while compaction is running in the background. Before that there were some warnings about tombstones, but I have already set the grace period to one day.
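The one-day grace period mentioned above corresponds to gc_grace_seconds = 86400 on the table. As a sketch (keyspace and table names are taken from the log below; this assumes cqlsh against a running 2.0 node), it can be inspected and set like this:

```shell
# Inspect the current tombstone grace period
# (in Cassandra 2.0 the schema lives in system.schema_columnfamilies)
cqlsh -e "SELECT gc_grace_seconds FROM system.schema_columnfamilies \
          WHERE keyspace_name = 'mykeyspace' AND columnfamily_name = 'user_metrics_overview';"

# Set it to one day (86400 seconds); note that tombstones only become
# purgeable after this period AND a compaction of the affected SSTables
cqlsh -e "ALTER TABLE mykeyspace.user_metrics_overview WITH gc_grace_seconds = 86400;"
```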

WARN [ReadStage:95] 2016-03-09 06:10:31,548 SliceQueryFilter.java (line 225) Read 1 live and 21072 tombstoned cells in mykeyspace.user_metrics_overview (see tombstone_warn_threshold). 1 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
ERROR [CompactionExecutor:68695] 2016-03-09 06:10:31,550 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:68695,1,main]
java.lang.OutOfMemoryError: Direct buffer memory
        at java.nio.Bits.reserveMemory(Bits.java:658)
        at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
        at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
        at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174)
        at sun.nio.ch.IOUtil.read(IOUtil.java:195)
        at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:110)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
        at org.apache.cassandra.io.compress.CompressedThrottledReader.reBuffer(CompressedThrottledReader.java:41)
        at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:280)
        at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.computeNext(SSTableScanner.java:262)
        at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.computeNext(SSTableScanner.java:203)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.io.sstable.SSTableScanner.hasNext(SSTableScanner.java:183)
        at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
        at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
        at org.apache.cassandra.db.compaction.CompactionIterable.iterator(CompactionIterable.java:47)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:129)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
 INFO [StorageServiceShutdownHook] 2016-03-09 06:10:31,551 ThriftServer.java (line 141) Stop listening to thrift clients
ERROR [CompactionExecutor:68695] 2016-03-09 06:10:31,551 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:68695,1,main]
java.lang.IllegalThreadStateException
        at java.lang.Thread.start(Thread.java:705)
        at org.apache.cassandra.service.CassandraDaemon$2.uncaughtException(CassandraDaemon.java:205)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.handleOrLog(DebuggableThreadPoolExecutor.java:220)
        at org.apache.cassandra.db.compaction.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:973)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
 INFO [StorageServiceShutdownHook] 2016-03-09 06:10:31,590 Server.java (line 182) Stop listening for CQL clients
 INFO [StorageServiceShutdownHook] 2016-03-09 06:10:31,590 Gossiper.java (line 1279) Announcing shutdown

Run a memory analyzer to see what is consuming the memory. Inspect a heap dump to analyze what is taking up the space.

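As a sketch of that advice (process lookup pattern and file paths are placeholders), a heap dump of the Cassandra process can be captured with the JDK's jmap and then opened in a memory analyzer such as Eclipse MAT:

```shell
# Find the Cassandra JVM's process id (the pattern is a placeholder)
CASSANDRA_PID=$(pgrep -f CassandraDaemon)

# Capture a heap dump of live objects to a file for offline analysis
jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof "$CASSANDRA_PID"

# Alternatively, have the JVM write a dump automatically on the next OOM
# by adding this to conf/cassandra-env.sh:
#   JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/cassandra"
```

One caveat: this particular OutOfMemoryError is for direct (off-heap) buffer memory, so a heap dump will only show it indirectly, through the java.nio.DirectByteBuffer objects that reference the off-heap allocations.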

The JVM could not allocate a direct buffer; you may want to increase the limit, which will most likely resolve this. For example: -XX:MaxDirectMemorySize=256m or 512m. The default, I believe, is 64m.

I haven't set the -XX:MaxDirectMemorySize parameter. If no value is set, what is the default? @MadhusudanaReddySunnapu

I think it is JVM-dependent, but for the Sun/Oracle JVM the default is 64MB. How much memory does this node have? I don't think tombstones are your main problem here.
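A sketch of applying that suggestion in conf/cassandra-env.sh (the 512m value is an example from the comment above, not a sizing recommendation):

```shell
# In conf/cassandra-env.sh: raise the cap on direct (off-heap) buffer
# memory that java.nio.ByteBuffer.allocateDirect() may reserve.
JVM_OPTS="$JVM_OPTS -XX:MaxDirectMemorySize=512m"
```

The node must be restarted for the flag to take effect; afterwards it should be visible in the running process's command line (e.g. via `ps aux | grep MaxDirectMemorySize`).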
create column family user_metrics_overview
with column_type = 'Standard' 
and comparator = 'ReversedType(org.apache.cassandra.db.marshal.TimeUUIDType)' 
and default_validation_class = 'BytesType' 
and key_validation_class = 'BytesType';