Hadoop: Using Sorted Tables in Hive


In summary: I think my system is ignoring the notion of a pre-sorted table. I was hoping to save time on the sort step, since I am working with pre-sorted data, but the query plan seems to indicate an intermediate sort step anyway.

The gory details follow:

Setup =======

I have the following flags set: =============

set hive.enforce.bucketing = true;
set mapred.reduce.tasks=8;
set mapred.map.tasks=8;
Here I create a table to keep a temporary copy of the data on disk ========

CREATE TABLE trades
      (symbol STRING, exchange STRING, price FLOAT, volume INT, cond
INT, bid FLOAT, ask FLOAT, time STRING)
PARTITIONED BY (dt STRING)
CLUSTERED BY (symbol) SORTED BY (symbol, time) INTO 8 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
 STORED AS TEXTFILE;
Here I copy the on-disk data into the table. Incidentally, this data is already clustered by symbol and sorted by time, but I can't seem to get Hive to take advantage of that... i.e. to avoid a re-sort.

LOAD DATA LOCAL INPATH '%(dir)s2010-05-07'
INTO TABLE trades
partition (dt='2010-05-07');
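A caveat about the step above (my understanding of Hive's behavior, not something stated in the question): LOAD DATA only moves files into the table's directory; it does not check or enforce the CLUSTERED BY / SORTED BY metadata, so Hive cannot trust that the loaded files really are bucketed and sorted. A sketch of letting Hive build the buckets itself, via a hypothetical staging table `trades_staging` over the raw files:

```sql
-- Sketch: instead of LOAD DATA (which just moves files, unverified), insert
-- through Hive so that it produces the 8 sorted buckets itself.
set hive.enforce.bucketing = true;

insert overwrite table trades partition (dt='2010-05-07')
select symbol, exchange, price, volume, cond, bid, ask, time
from trades_staging                 -- hypothetical table over the raw files
distribute by symbol sort by symbol, time;
```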
I use this final table below to force the bucketing =========== and enforce the sort order ===========

set hive.enforce.bucketing = true;
set mapred.reduce.tasks=8;
set mapred.map.tasks=8;
CREATE TABLE alltrades
      (symbol STRING, exchange STRING, price FLOAT, volume INT, cond
INT, bid FLOAT, ask FLOAT, time STRING)
CLUSTERED BY (symbol) SORTED BY (symbol, time) INTO 8 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
 STORED AS TEXTFILE;
The data is loaded from the Hive table ==========

insert overwrite table alltrades
select symbol, exchange, price, volume, cond, bid, ask, time
from trades
distribute by symbol sort by symbol, time;
Disappointingly, any query over alltrades that wants things sorted by symbol, time triggers a re-sort... Is there a way around this? Also, is there a way to make the whole process work in one query step instead of two?
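On the "one query step instead of two" part: one possibility (a sketch, assuming the raw tab-delimited files can be exposed through a hypothetical external table `trades_ext` at a hypothetical HDFS location) is to skip the intermediate managed table and populate alltrades directly:

```sql
-- Sketch: expose the raw files as an external table, then bucket and sort in
-- a single insert, instead of LOAD DATA followed by a second copy.
CREATE EXTERNAL TABLE trades_ext
      (symbol STRING, exchange STRING, price FLOAT, volume INT, cond INT,
       bid FLOAT, ask FLOAT, time STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/path/to/raw/trades';     -- hypothetical location of the raw files

set hive.enforce.bucketing = true;
insert overwrite table alltrades
select symbol, exchange, price, volume, cond, bid, ask, time
from trades_ext
distribute by symbol sort by symbol, time;
```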

Why the sort doesn't seem to work =======

Note that the table is both constructed with and populated via sort clauses (SORTED BY in the DDL, sort by in the insert). I worry that throwing these away would cause future reducers to behave as if no sorting were needed.

Here is the plan for a query that, to my mind, should not involve a sort... but in fact it does ========

hive> explain select symbol, time, price from alltrades sort by symbol, time;
OK
ABSTRACT SYNTAX TREE:
 (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME alltrades)))
(TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT
(TOK_SELEXPR (TOK_TABLE_OR_COL symbol)) (TOK_SELEXPR (TOK_TABLE_OR_COL
time)) (TOK_SELEXPR (TOK_TABLE_OR_COL price))) (TOK_SORTBY
(TOK_TABSORTCOLNAMEASC (TOK_TABLE_OR_COL symbol))
(TOK_TABSORTCOLNAMEASC (TOK_TABLE_OR_COL time)))))

STAGE DEPENDENCIES:
 Stage-1 is a root stage
 Stage-0 is a root stage

STAGE PLANS:
 Stage: Stage-1
   Map Reduce
     Alias -> Map Operator Tree:
       alltrades
         TableScan
           alias: alltrades
           Select Operator
             expressions:
                   expr: symbol
                   type: string
                   expr: time
                   type: string
                   expr: price
                   type: float
             outputColumnNames: _col0, _col1, _col2
             Reduce Output Operator
               key expressions:
                     expr: _col0
                     type: string
                     expr: _col1
                     type: string
               sort order: ++
               tag: -1
               value expressions:
                     expr: _col0
                     type: string
                     expr: _col1
                     type: string
                     expr: _col2
                     type: float
     Reduce Operator Tree:
       Extract
         File Output Operator
           compressed: false
           GlobalTableId: 0
           table:
               input format: org.apache.hadoop.mapred.TextInputFormat
               output format:
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat

 Stage: Stage-0
   Fetch Operator
     limit: -1

Have you checked the effect of set hive.enforce.sorting = true? From the Hive configuration documentation:

hive.enforce.sorting
Default: false
Whether to enforce sorting. If true, while inserting into the table, sorting is enforced.

You may also find it very useful to read the implementation of org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer#genBucketingSortingDest:
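A minimal sketch of that suggestion, reusing the statements from the question (nothing new beyond the extra flag):

```sql
-- Enforce both the bucketing and the sort order at insert time, so that the
-- table's CLUSTERED BY / SORTED BY metadata actually matches the data.
set hive.enforce.bucketing = true;
set hive.enforce.sorting = true;

insert overwrite table alltrades
select symbol, exchange, price, volume, cond, bid, ask, time
from trades
distribute by symbol sort by symbol, time;
```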


hive.enforce.bucketing
does not sort the data set globally. Instead, it writes sorted data into each bucket (8 per partition in your example). So a global sort step is still required to satisfy the query you are running.
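Where per-bucket sorting does pay off is in joins between two tables bucketed and sorted the same way. A sketch, using a hypothetical second table `allquotes` bucketed and sorted like alltrades (the flags below are real Hive settings, though whether they kick in depends on the Hive version):

```sql
-- Sketch: sorted buckets enable a sort-merge bucket (SMB) map join between
-- two tables CLUSTERED BY and SORTED BY the same key into the same number of
-- buckets, avoiding the shuffle and reduce-side sort entirely.
set hive.optimize.bucketmapjoin = true;
set hive.optimize.bucketmapjoin.sortedmerge = true;
set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

select /*+ MAPJOIN(q) */ t.symbol, t.time, t.price, q.bid, q.ask
from alltrades t join allquotes q       -- allquotes is hypothetical
  on t.symbol = q.symbol;
```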

Hope this helps, Nat


Also take a look at

Thanks, but I gave up on Hive a while ago... I wanted something lighter-weight, like Python's Disco.