Hadoop: saving Flume output to a Hive table using the Hive sink


I am trying to configure Flume to save its output to a Hive table using the Hive sink type. I have a single-node cluster and use the MapR Hadoop distribution.

Here is my Flume configuration:

agent1.sources = source1
agent1.channels = channel1
agent1.sinks = sink1

agent1.sources.source1.type = exec
agent1.sources.source1.command = cat /home/andrey/flume_test.data

agent1.sinks.sink1.type = hive
agent1.sinks.sink1.channel = channel1
agent1.sinks.sink1.hive.metastore = thrift://127.0.0.1:9083
agent1.sinks.sink1.hive.database = default
agent1.sinks.sink1.hive.table = flume_test
agent1.sinks.sink1.useLocalTimeStamp = false
agent1.sinks.sink1.round = true
agent1.sinks.sink1.roundValue = 10
agent1.sinks.sink1.roundUnit = minute
agent1.sinks.sink1.serializer = DELIMITED
agent1.sinks.sink1.serializer.delimiter = "," 
agent1.sinks.sink1.serializer.serdeSeparator = ','
agent1.sinks.sink1.serializer.fieldnames = id,message

agent1.channels.channel1.type = FILE
agent1.channels.channel1.transactionCapacity = 1000000
agent1.channels.channel1.checkpointInterval = 30000
agent1.channels.channel1.maxFileSize = 2146435071
agent1.channels.channel1.capacity = 10000000
agent1.sources.source1.channels = channel1
My data file, flume_test.data:

1,AAAAAAAA
2,BBBBBBB
3,CCCCCCCC
4,DDDDDD
5,EEEEEEE
6,FFFFFFFFFFF
7,GGGGGG
8,HHHHHHH
9,IIIIII
10,JJJJJJ
11,KKKKKK
12,LLLLLLLL
13,MMMMMMMMM
14,NNNNNNNNN
15,OOOOOOOO
16,PPPPPPPPPP
17,QQQQQQQ
18,RRRRRRR
19,SSSSSSSS
This is how I created the table in Hive:

create table flume_test(id string, message string)
clustered by (message) into 1 buckets
STORED AS ORC tblproperties ("orc.compress"="NONE");
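One thing worth checking: the Flume Hive sink writes through the Hive Streaming API, which requires the target table to be bucketed, stored as ORC, and marked transactional (ACID). The DDL above does not set the `transactional` property explicitly. A hedged sketch of the same table with it set (the 5-bucket count here is illustrative, matching the later test):

```sql
-- Sketch: same table as above, but explicitly marked transactional,
-- which the Hive Streaming API used by the Flume Hive sink requires.
create table flume_test (id string, message string)
clustered by (message) into 5 buckets
stored as orc
tblproperties ("orc.compress" = "NONE",
               "transactional" = "true");
```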
When I use only 1 bucket, the `select * from flume_test` command in the hive shell returns only an OK status and no data. If I use more than 1 bucket, it returns error messages.

For example, the errors after a select on the Hive table with 5 buckets:

hive> select * from flume_test;
OK
2015-06-18 10:04:57,6909 ERROR Client fs/client/fileclient/cc/client.cc:1385 Thread: 10141 Open failed for file /user/hive/warehouse/flume_test/delta_0004401_0004500/bucket_00, LookupFid error No such file or directory(2)
2015-06-18 10:04:57,6941 ERROR Client fs/client/fileclient/cc/client.cc:1385 Thread: 10141 Open failed for file /user/hive/warehouse/flume_test/delta_0004401_0004500/bucket_00, LookupFid error No such file or directory(2)
2015-06-18 10:04:57,6976 ERROR Client fs/client/fileclient/cc/client.cc:1385 Thread: 10141 Open failed for file /user/hive/warehouse/flume_test/delta_0004401_0004500/bucket_00, LookupFid error No such file or directory(2)
2015-06-18 10:04:57,7044 ERROR Client fs/client/fileclient/cc/client.cc:1385 Thread: 10141 Open failed for file /user/hive/warehouse/flume_test/delta_0004401_0004500/bucket_00, LookupFid error No such file or directory(2)
Time taken: 0.914 seconds
The Hive table data is stored in the /user/hive/warehouse/flume_test directory, and it is not empty:

-rwxr-xr-x   3 andrey andrey          4 2015-06-17 16:28 /user/hive/warehouse/flume_test/_orc_acid_version
drwxr-xr-x   - andrey andrey          2 2015-06-17 16:28 /user/hive/warehouse/flume_test/delta_0004301_0004400
The delta directory contains:

-rw-r--r--   3 andrey andrey        991 2015-06-17 16:28 /user/hive/warehouse/flume_test/delta_0004301_0004400/bucket_00000
-rwxr-xr-x   3 andrey andrey          8 2015-06-17 16:28 /user/hive/warehouse/flume_test/delta_0004301_0004400/bucket_00000_flush_length
I cannot read the /user/hive/warehouse/flume_test/delta_0004301_0004400/bucket_00000 ORC file even with Pig.

I also tried setting these variables after creating the table in Hive, but it did not help:

set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.compactor.initiator.on = true;
set hive.compactor.worker.threads = 2;
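Note that `set` commands only apply to that one Hive shell session. For streaming ingest and compaction, these properties generally need to be configured server-side in `hive-site.xml` (on the metastore host) and the metastore restarted. A sketch, using the same values as the `set` commands above plus `hive.support.concurrency`, which ACID tables also need:

```xml
<!-- hive-site.xml fragment (metastore side); restart the metastore after editing -->
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>2</value>
</property>
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
```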

I found some examples on the internet, but they were incomplete, and since I'm new to Flume I couldn't make sense of them. :)

Adding these two lines to the configuration solved my problem, although I still get errors when reading the table from Hive. The table can be read and returns the correct results, but with errors:

agent1.sinks.sink1.hive.txnsPerBatchAsk = 2
agent1.sinks.sink1.batchSize = 10 
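For context on why these two lines help (annotated sketch; the stated defaults are from the Flume 1.x user guide and may differ in other versions):

```properties
# hive.txnsPerBatchAsk: number of transactions the sink requests from
# Hive per transaction batch (default 100); a small value makes batches
# turn over quickly on a tiny test file.
agent1.sinks.sink1.hive.txnsPerBatchAsk = 2
# batchSize: maximum number of events written to Hive in a single
# transaction (default 15000); a small value forces frequent commits,
# so select * sees data without waiting for a large batch to fill.
agent1.sinks.sink1.batchSize = 10
```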

It looks like you haven't generated an .avsc file. You seem to be creating the Hive table from an Avro file, hence the errors.