Oracle compressed index - how to determine which columns to compress

Tags: oracle, indexing

I want to create the following index on an Oracle applications table:

create index xxhr_api_transactions_idx1
  on hr.hr_api_transactions (status, process_name, nvl(selected_person_id, -1))
  compress 3;
The table has 62,421 rows in total. The status column has 10 distinct values and the process_name column has 23. The selected_person_id column has 17,419 distinct values, but only 43,530 rows actually have a value; the rest are null (the person does not yet exist in the new-hire workflow).

My query is similar to:

select *
from   hr.hr_api_transactions psth   
where  psth.process_name in ('TFG_HR_NEW_HIRE_PLACE_JSP_PRC', 'HR_NEW_HIRE_PLACE_JSP_PRC', 'HR_NEWHIRE_JSP_PRC')   -- TFG specific.
--and    nvl(psth.selected_person_id, -1) in (:p_person_id, -1)   -- 1118634
and    psth.status not in ('W', 'S')   -- Work in Progress, Saved For Later.

My question is: should I use COMPRESS 3 or COMPRESS 2? With 17,419 distinct values (and 18,891 nulls) out of 62,421 rows in total, is it still worthwhile to compress the selected_person_id column?

Compression estimates and advisors are notoriously unreliable. This is one of those tasks you really have to test for yourself. You can measure the effect of compression by checking DBA_SEGMENTS.BYTES.

Compression is a trade-off between CPU and size. In my experience, the CPU overhead of basic index compression is very small. As long as the size savings are at least a few percent, I would recommend the higher compression setting.
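One way to gauge the CPU side is simply to time the actual workload against each version of the index. A sketch using the SQL*Plus SET TIMING command (the hint and index name assume the index defined above):

```sql
-- Sketch: run the real query against each compression setting and
-- compare elapsed time. SET TIMING is a SQL*Plus client command.
set timing on

select /*+ index(psth xxhr_api_transactions_idx1) */ count(*)
from   hr.hr_api_transactions psth
where  psth.process_name in ('TFG_HR_NEW_HIRE_PLACE_JSP_PRC',
                             'HR_NEW_HIRE_PLACE_JSP_PRC',
                             'HR_NEWHIRE_JSP_PRC')
and    psth.status not in ('W', 'S');

set timing off
```

Run it several times per setting so that buffer-cache effects even out before comparing the timings.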

Use the code below to test the segment size for everything from no compression up to COMPRESS 3. Make sure you test with a realistic amount of data. Oracle allocates space in extents; if you use a small test size, you will mostly be measuring block-size overhead.

drop index hr.xxhr_api_transactions_idx1;
create index hr.xxhr_api_transactions_idx1 on hr.hr_api_transactions (status, process_name, nvl(selected_person_id, -1));
select bytes/1024/1024/1024 gb from dba_segments where owner = 'HR' and segment_name = 'XXHR_API_TRANSACTIONS_IDX1';

drop index hr.xxhr_api_transactions_idx1;
create index hr.xxhr_api_transactions_idx1 on hr.hr_api_transactions (status, process_name, nvl(selected_person_id, -1)) compress 1;
select bytes/1024/1024/1024 gb from dba_segments where owner = 'HR' and segment_name = 'XXHR_API_TRANSACTIONS_IDX1';

drop index hr.xxhr_api_transactions_idx1;
create index hr.xxhr_api_transactions_idx1 on hr.hr_api_transactions (status, process_name, nvl(selected_person_id, -1)) compress 2;
select bytes/1024/1024/1024 gb from dba_segments where owner = 'HR' and segment_name = 'XXHR_API_TRANSACTIONS_IDX1';

drop index hr.xxhr_api_transactions_idx1;
create index hr.xxhr_api_transactions_idx1 on hr.hr_api_transactions (status, process_name, nvl(selected_person_id, -1)) compress 3;
select bytes/1024/1024/1024 gb from dba_segments where owner = 'HR' and segment_name = 'XXHR_API_TRANSACTIONS_IDX1';
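To help interpret the results, it can also be useful to count how many distinct key prefixes each setting would deduplicate. A rough sketch against the table described in the question (prefix compression saves space by storing each distinct prefix once per leaf block, so a low prefix count means more repeats to share):

```sql
-- Rough prefix-cardinality check (a sketch; assumes the table above).
-- COMPRESS 2 deduplicates (status, process_name) prefixes;
-- COMPRESS 3 additionally includes nvl(selected_person_id, -1),
-- so a much higher prefixes_c3 count suggests little extra sharing.
select count(distinct status || '|' || process_name)        as prefixes_c2,
       count(distinct status || '|' || process_name || '|' ||
             nvl(selected_person_id, -1))                   as prefixes_c3,
       count(*)                                             as total_rows
from   hr.hr_api_transactions;
```

If prefixes_c3 is close to total_rows, the third column's prefixes are mostly unique and COMPRESS 3 is unlikely to beat COMPRESS 2; the segment-size test above remains the deciding measurement.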