
Hadoop: Flume Avro sink/source with the Cloudera Quickstart VM


Is it possible to set up a Flume client/collector structure with an Avro sink/source inside the Cloudera Quickstart CDH VM? I know this has no practical use, but I want to understand how Flume works with Avro files and how I can later process them with Pig and the like.

I tried several configurations, but none of them worked. It seems to me that I would need several agents, but the VM can only run one.

My last attempt was:

    agent.sources = reader avro-collection-source
    agent.channels = memoryChannel memoryChannel2
    agent.sinks = avro-forward-sink hdfs-sink

    # Client
    agent.sources.reader.type = exec
    agent.sources.reader.command = tail -f /home/flume/avro/source.txt
    agent.sources.reader.logStdErr = true
    agent.sources.reader.restart = true
    agent.sources.reader.channels = memoryChannel

    agent.sinks.avro-forward-sink.type = avro
    agent.sinks.avro-forward-sink.hostname = 127.0.0.1
    agent.sinks.avro-forward-sink.port = 80
    agent.sinks.avro-forward-sink.channel = memoryChannel

    agent.channels.memoryChannel.type = memory
    agent.channels.memoryChannel.capacity = 10000
    agent.channels.memoryChannel.transactionCapacity = 100

    # Collector
    agent.sources.avro-collection-source.type = avro
    agent.sources.avro-collection-source.bind = 127.0.0.1
    agent.sources.avro-collection-source.port = 80
    agent.sources.avro-collection-source.channels = memoryChannel2

    agent.sinks.hdfs-sink.type = hdfs
    agent.sinks.hdfs-sink.hdfs.path = /var/flume/avro
    agent.sinks.hdfs-sink.channel = memoryChannel2

    agent.channels.memoryChannel2.type = memory
    agent.channels.memoryChannel2.capacity = 20000
    agent.channels.memoryChannel2.transactionCapacity = 2000

Thanks for any suggestions.
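One mistake that is easy to make in configurations like the one above is pointing a sink or source at a channel that was never declared on the agent's `channels` line; Flume only reports this at startup. A small stdlib-only Python sketch (the helper name and the check itself are my own, not part of Flume) that catches such wiring errors before starting the agent:

```python
# Sanity check for Flume-style properties wiring (hypothetical helper, not
# part of Flume): verifies that every sink's ".channel" and every source's
# ".channels" entry references a channel declared on "<agent>.channels".
def check_wiring(props_text, agent="agent"):
    props = {}
    for line in props_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

    declared = set(props.get(agent + ".channels", "").split())
    problems = []
    for key, value in props.items():
        # sinks use ".channel" (exactly one), sources use ".channels" (several)
        if key.endswith(".channel"):
            used = {value}
        elif key.endswith(".channels") and key != agent + ".channels":
            used = set(value.split())
        else:
            continue
        for ch in sorted(used - declared):
            problems.append(f"{key} references undeclared channel '{ch}'")
    return problems

conf = """
agent.channels = memoryChannel
agent.sinks = s1
agent.sinks.s1.channel = memoryChannel2
"""
print(check_wiring(conf))  # flags the undeclared memoryChannel2
```

Running this over a properties file before `flume-ng` saves a restart cycle whenever a channel name was mistyped or left out of the declaration line.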

I think this can be done, within a single agent. In the example below, one source (source1) reads from a spooling-directory source and hands its events to an Avro sink. A second source (source2) is an Avro source bound to the same host and port as source1's Avro sink, so the events loop back into the agent, giving you the client/collector flow you are looking for. Adapt this configuration file to your needs:

# Sources, channels, and sinks are defined per
# agent name, in this case 'tier1'.
dataplatform.sources  = source1 source2
dataplatform.channels = channel1 channel3
dataplatform.sinks    = sink2 sink3


# For each source, channel, and sink, set standard properties.
dataplatform.sources.source1.type         = spooldir
dataplatform.sources.source1.spoolDir     = /home/flume/flume-sink-clean/
dataplatform.sources.source1.deserializer.maxLineLength = 1000000
dataplatform.sources.source1.deletePolicy = immediate
dataplatform.sources.source1.batchSize    = 10000
dataplatform.sources.source1.decodeErrorPolicy = IGNORE

# Channel Type
dataplatform.channels.channel1.type = FILE
dataplatform.channels.channel1.checkpointDir = /home/flume/flume_file_channel/dataplatform/file-channel/checkpoint
dataplatform.channels.channel1.dataDirs = /home/flume/flume_file_channel/dataplatform/file-channel/data
dataplatform.channels.channel1.write-timeout = 60
dataplatform.channels.channel1.use-fast-replay = true
dataplatform.channels.channel1.transactionCapacity = 1000000
dataplatform.channels.channel1.maxFileSize = 2146435071
dataplatform.channels.channel1.capacity = 100000000


# Describe Sink2
dataplatform.sinks.sink2.type = avro
dataplatform.sinks.sink2.hostname = 0.0.0.0
dataplatform.sinks.sink2.port = 20002
dataplatform.sinks.sink2.batch-size = 10000

# Describe source2
dataplatform.sources.source2.type = avro
dataplatform.sources.source2.bind = 0.0.0.0
dataplatform.sources.source2.port = 20002


# Channel3: Source 2 to Channel3 to Local
dataplatform.channels.channel3.type = FILE
dataplatform.channels.channel3.checkpointDir = /home/flume/flume_file_channel/local/file-channel/checkpoint
dataplatform.channels.channel3.dataDirs = /home/flume/flume_file_channel/local/file-channel/data
dataplatform.channels.channel3.transactionCapacity = 1000000
dataplatform.channels.channel3.checkpointInterval = 30000
dataplatform.channels.channel3.maxFileSize = 2146435071
dataplatform.channels.channel3.capacity = 10000000

# Describe Sink3 (Local File System)
dataplatform.sinks.sink3.type = file_roll
dataplatform.sinks.sink3.sink.directory = /home/flume/flume-sink/
dataplatform.sinks.sink3.sink.rollInterval = 60
dataplatform.sinks.sink3.batchSize = 1000

# Bind the sources and sinks to the channels
dataplatform.sources.source1.channels = channel1
dataplatform.sources.source2.channels = channel3
dataplatform.sinks.sink2.channel = channel1
dataplatform.sinks.sink3.channel = channel3
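To exercise this pipeline, drop a finished file into source1's spooling directory. The spooldir source expects files to be complete and immutable once they become visible, so the usual approach is to write to a staging directory on the same filesystem and then rename the file in. A minimal Python sketch (the function name is mine; the spool path comes from the config above):

```python
import os
import tempfile

def spool_file(lines, spool_dir, staging_dir):
    """Write lines to a staging file, then move it into the spooling
    directory in a single rename, so Flume never sees a half-written file.
    staging_dir must be on the same filesystem as spool_dir."""
    os.makedirs(staging_dir, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(suffix=".log", dir=staging_dir)
    with os.fdopen(fd, "w") as f:
        for line in lines:
            f.write(line + "\n")
    final_path = os.path.join(spool_dir, os.path.basename(tmp_path))
    os.rename(tmp_path, final_path)  # atomic on the same filesystem
    return final_path

# Example (paths from the config above; adjust to your VM):
# spool_file(["event one", "event two"],
#            "/home/flume/flume-sink-clean", "/home/flume/staging")
```

The agent itself can then be started with something like `flume-ng agent --conf conf --conf-file dataplatform.conf --name dataplatform`; the `--name` argument must match the agent prefix used in the properties file, `dataplatform` here.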