Hazelcast memory usage keeps increasing


I have a Hazelcast cluster running on two machines.

The only objects in the cluster are maps. While analyzing the log files, I noticed that the health monitor started reporting a slow increase in memory consumption, even though no new entries were being added to the maps (see the sample log entries below).

Any idea what could be causing the memory to increase?


2015-09-16 10:45:49 INFO  HealthMonitor:? - [10.11.173.129]:5903
[dev] [3.2.1] memory.used=97.6M, memory.free=30.4M,
memory.total=128.0M, memory.max=128.0M, memory.used/total=76.27%,
memory.used/max=76.27%, load.process=0.00%, load.system=1.00%,
load.systemAverage=3.00%, thread.count=96, thread.peakCount=107,
event.q.size=0, executor.q.async.size=0, executor.q.client.size=0,
executor.q.operation.size=0, executor.q.query.size=0,
executor.q.scheduled.size=0, executor.q.io.size=0,
executor.q.system.size=0, executor.q.operation.size=0,
executor.q.priorityOperation.size=0, executor.q.response.size=0,
operations.remote.size=1, operations.running.size=0, proxy.count=2,
clientEndpoint.count=0, connection.active.count=2,
connection.count=2

2015-09-16 10:46:02 INFO  InternalPartitionService:? - [10.11.173.129]:5903 [dev] [3.2.1] Remaining migration tasks in queue = 51
2015-09-16 10:46:12 DEBUG TeleavisoIvrLoader:71 - Checking for new files...
2015-09-16 10:46:13 INFO  InternalPartitionService:? - [10.11.173.129]:5903 [dev] [3.2.1] All migration tasks has been completed, queues are empty.

2015-09-16 10:46:19 INFO  HealthMonitor:? - [10.11.173.129]:5903
[dev] [3.2.1] memory.used=103.9M, memory.free=24.1M,
memory.total=128.0M, memory.max=128.0M, memory.used/total=81.21%,
memory.used/max=81.21%, load.process=0.00%, load.system=1.00%,
load.systemAverage=2.00%, thread.count=73, thread.peakCount=107,
event.q.size=0, executor.q.async.size=0, executor.q.client.size=0,
executor.q.operation.size=0, executor.q.query.size=0,
executor.q.scheduled.size=0, executor.q.io.size=0,
executor.q.system.size=0, executor.q.operation.size=0,
executor.q.priorityOperation.size=0, executor.q.response.size=0,
operations.remote.size=0, operations.running.size=0, proxy.count=2,
clientEndpoint.count=0, connection.active.count=2,
connection.count=2

2015-09-16 10:46:49 INFO  HealthMonitor:? - [10.11.173.129]:5903
[dev] [3.2.1] memory.used=105.1M, memory.free=22.9M,
memory.total=128.0M, memory.max=128.0M, memory.used/total=82.11%,
memory.used/max=82.11%, load.process=0.00%, load.system=1.00%,
load.systemAverage=1.00%, thread.count=73, thread.peakCount=107,
event.q.size=0, executor.q.async.size=0, executor.q.client.size=0,
executor.q.operation.size=0, executor.q.query.size=0,
executor.q.scheduled.size=0, executor.q.io.size=0,
executor.q.system.size=0, executor.q.operation.size=0,
executor.q.priorityOperation.size=0, executor.q.response.size=0,
operations.remote.size=0, operations.running.size=0, proxy.count=2,
clientEndpoint.count=0, connection.active.count=2,
connection.count=2
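For context, the memory.* figures in those HealthMonitor lines come straight from the JVM's heap counters. A minimal, Hazelcast-free sketch of how the same numbers can be read from any running JVM (the class name `HeapSnapshot` is mine, not from Hazelcast):

```java
import java.util.Locale;

public class HeapSnapshot {

    // Convert bytes to megabytes, matching the "M" unit used in the log lines.
    static double mb(long bytes) {
        return bytes / (1024.0 * 1024.0);
    }

    // Rebuild the memory.* part of a HealthMonitor line from the JVM's own heap counters.
    static String describeHeap() {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();   // memory.total
        long free  = rt.freeMemory();    // memory.free
        long used  = total - free;      // memory.used
        long max   = rt.maxMemory();     // memory.max
        return String.format(Locale.ROOT,
                "memory.used=%.1fM, memory.free=%.1fM, memory.total=%.1fM, "
              + "memory.max=%.1fM, memory.used/total=%.2f%%",
                mb(used), mb(free), mb(total), mb(max), 100.0 * used / total);
    }

    public static void main(String[] args) {
        System.out.println(describeHeap());
    }
}
```

Watching these values over time alongside a heap dump (`jmap -dump`) is a quick way to tell whether the growth is map data, threads, or something else on the heap.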


Is the map being used? Are entries being added? What is the map's configuration? Perhaps this is related to expiration... entries get added... and eventually expire.

I also notice the JVM has very little memory... 128 MB. @pveentjer, I believe that is the amount of memory Hazelcast allocates for its own needs, usually a subset of the machine's total RAM. Personally, on a node with 12 GB of physical memory I see memory.total and memory.max values of less than 1 GB.

@FernandaCoolaud, do you also see the thread.count and thread.peakCount values increasing? That could indicate a locking problem causing more and more threads to be allocated for Hazelcast's use. It would help if you shared code snippets showing how you use Hazelcast, and how and when you put data into it and read data out of it.
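If expiration or eviction is the suspect, the map section of hazelcast.xml is where TTL and eviction are configured in Hazelcast 3.x. A hedged sketch — the map name and the values here are illustrative placeholders, not taken from the question:

```xml
<hazelcast>
  <map name="myMap">
    <!-- Entries expire this many seconds after being put (0 = never) -->
    <time-to-live-seconds>300</time-to-live-seconds>
    <!-- Entries untouched for this long become eligible for eviction (0 = never) -->
    <max-idle-seconds>0</max-idle-seconds>
    <!-- Evict least-recently-used entries once the limit below is crossed -->
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="USED_HEAP_PERCENTAGE">75</max-size>
  </map>
</hazelcast>
```

Note that with memory.max=128.0M the member has very little headroom to begin with; raising -Xmx and setting an eviction limit as above are usually worth trying before hunting for a leak.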