
Cassandra: very high read iowait

Tags: cassandra, iowait


I have a three-node Cassandra cluster. When I run multi-threaded queries against it, the I/O load is very high. Each node holds about 80 GB of data. I use the Time Window Compaction Strategy with a 10-hour window, and each SSTable is about 1 GB. Can anyone help me? Thanks.
Data is written at about 10,000 records per second, and the cluster holds roughly 10 billion records in total. The schema is below:

CREATE TABLE point_warehouse.point_period (
    point_name text,
    year text,
    time timestamp,
    period int,
    time_end timestamp,
    value text,
    PRIMARY KEY ((point_name, year), time)
) WITH CLUSTERING ORDER BY (time DESC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.TimeWindowCompactionStrategy', 'compaction_window_size': '10', 'compaction_window_unit': 'HOURS', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 2592000
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
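As a rough sanity check of the TWCS settings above, the figures from the question (80 GB per node, 10-hour windows, the schema's 30-day default TTL) can be plugged in to see whether the observed ~1 GB SSTables are what one would expect. This is a back-of-the-envelope sketch using only numbers stated in the post:

```python
# Back-of-the-envelope check of the TWCS settings, using the figures
# from the question (assumed to be accurate; nothing here is measured).

SECONDS_PER_HOUR = 3600

default_ttl_s = 2_592_000          # default_time_to_live from the schema (30 days)
window_hours = 10                  # compaction_window_size = 10, unit = HOURS
data_per_node_gb = 80              # stated data size per node

ttl_hours = default_ttl_s / SECONDS_PER_HOUR          # 720 hours of retained data
windows_per_node = ttl_hours / window_hours           # number of TWCS windows kept
gb_per_window = data_per_node_gb / windows_per_node   # expected size per window

print(f"windows retained per node: {windows_per_node:.0f}")
print(f"data per fully-compacted window: {gb_per_window:.2f} GB")
```

This gives 72 windows of about 1.1 GB each, which matches the ~1 GB SSTables reported, so compaction itself appears to be behaving as configured and the I/O pressure is more likely on the read path.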
The query is:

SELECT * FROM point_period
WHERE point_name = ? AND year = '2017' AND time > '2017-05-23 12:53:24'
ORDER BY time ASC LIMIT 1 ALLOW FILTERING;
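One thing worth noting about this query: the partition key (point_name, year) is fully restricted and time is the clustering column, so Cassandra can serve it as an ordinary slice query and ALLOW FILTERING should not be required. A sketch of the same lookup without it (an assumption worth verifying against your own workload, since reading ASC against a table clustered DESC still pays a reversed-read cost):

-- Same lookup without ALLOW FILTERING (a sketch; verify against your workload).
-- The partition key is fully restricted and time is the clustering column,
-- so this is a plain slice query within one partition.
SELECT * FROM point_warehouse.point_period
WHERE point_name = ? AND year = '2017'
  AND time > '2017-05-23 12:53:24'
ORDER BY time ASC
LIMIT 1;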


When this query is executed concurrently, the I/O load becomes very high, around 200 MB/s. Thanks.

How can we help with this without the schema and the query? — Yes, my mistake, thanks; added above.