MapReduce: How to schedule an HBase MapReduce job through Oozie?

I want to schedule an HBase MapReduce job with Oozie, and I am facing the following problem:

How/where do I specify these properties in the Oozie workflow?

    i.  the table name for the Mapper/Reducer
    ii. the Scan object for the Mapper


    Scan scan = new Scan(new Get());
    scan.setMaxVersions();
    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(VALUE));
    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(DATE));

    Job job = new Job(conf, JOB_NAME + "_" + TABLE_USER);
    // These are the two properties I need to supply from the Oozie workflow:
    TableMapReduceUtil.initTableMapperJob(TABLE_USER, scan,
            Mapper.class, Text.class, Text.class, job);
    TableMapReduceUtil.initTableReducerJob(DETAILS_TABLE,
            Reducer.class, job);

Please let me know the best way to schedule an HBase MapReduce job with Oozie.

Thanks :) :)

In my opinion, the best way to schedule an HBase MapReduce job is to schedule it as a java action running a plain .java main class. It works well, and there is no need to write code to convert the Scan into a string and so on. So I schedule my jobs as java actions until I have a better option. The workflow is essentially the standard Oozie java-action example:

    <workflow-app xmlns="uri:oozie:workflow:0.1" name="java-main-wf">
        <start to="java-node"/>
        <action name="java-node">
            <java>
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <configuration>
                    <property>
                        <name>mapred.job.queue.name</name>
                        <value>${queueName}</value>
                    </property>
                </configuration>
                <main-class>org.apache.oozie.example.DemoJavaMain</main-class>
                <arg>Hello</arg>
                <arg>Oozie!</arg>
                <arg>This</arg>
                <arg>is</arg>
                <arg>Demo</arg>
                <arg>Oozie!</arg>
            </java>
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <kill name="fail">
            <message>Java failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
        </kill>
        <end name="end"/>
    </workflow-app>
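
For this particular HBase job, the main class launched by the java action could look roughly like the sketch below. It is only a sketch: the package and class names, column family, and qualifiers are placeholders, and MyTableMapper/MyTableReducer stand in for the mapper and reducer classes referenced in the <map-reduce> example further down.

    // A minimal sketch only; package, class, and column names are placeholders.
    package com.hbase.driver;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    import com.hbase.mapper.MyTableMapper;
    import com.hbase.reducer.MyTableReducer;

    public class HBaseJobDriver {

        private static final String TABLE_USER = "TABLE_USER";
        private static final String DETAILS_TABLE = "DETAILS_TABLE";
        private static final String FAMILY = "cf";     // placeholder column family
        private static final String VALUE = "value";   // placeholder qualifier
        private static final String DATE = "date";     // placeholder qualifier

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();

            // Build the Scan in plain Java; nothing has to be serialized for Oozie.
            Scan scan = new Scan();
            scan.setMaxVersions();
            scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(VALUE));
            scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(DATE));

            Job job = new Job(conf, "hbase-mr-job_" + TABLE_USER);
            job.setJarByClass(HBaseJobDriver.class);

            // The table names and the Scan object are passed here, in code.
            TableMapReduceUtil.initTableMapperJob(TABLE_USER, scan,
                    MyTableMapper.class, Text.class, Text.class, job);
            TableMapReduceUtil.initTableReducerJob(DETAILS_TABLE,
                    MyTableReducer.class, job);

            // Avoid System.exit() inside an Oozie java action; throw to signal failure.
            if (!job.waitForCompletion(true)) {
                throw new RuntimeException("HBase MapReduce job failed");
            }
        }
    }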

You can also schedule the job with the <map-reduce> action, but that is not as simple as the java action. It takes considerably more effort, but it can be treated as an alternative:

     <action name='jobSample'>
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <!-- This is required for new api usage -->
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <!-- HBASE CONFIGURATIONS -->
                <property>
                    <name>hbase.mapreduce.inputtable</name>
                    <value>TABLE_USER</value>
                </property>
                <property>
                    <name>hbase.mapreduce.scan</name>
                    <value>${wf:actionData('get-scanner')['scan']}</value>
                </property>
                <property>
                    <name>hbase.zookeeper.property.clientPort</name>
                    <value>${hbaseZookeeperClientPort}</value>
                </property>
                <property>
                    <name>hbase.zookeeper.quorum</name>
                    <value>${hbaseZookeeperQuorum}</value>
                </property>
                <!-- MAPPER CONFIGURATIONS -->
                <property>
                    <name>mapreduce.inputformat.class</name>
                    <value>org.apache.hadoop.hbase.mapreduce.TableInputFormat</value>
                </property>
                <property>
                    <name>mapred.mapoutput.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapred.mapoutput.value.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.map.class</name>
                    <value>com.hbase.mapper.MyTableMapper</value>
                </property>
                <!-- REDUCER CONFIGURATIONS -->
                <property>
                    <name>mapreduce.reduce.class</name>
                    <value>com.hbase.reducer.MyTableReducer</value>
                </property>
                <property>
                    <name>hbase.mapred.outputtable</name>
                    <value>DETAILS_TABLE</value>
                </property>
                 <property>
                    <name>mapreduce.outputformat.class</name>
                    <value>org.apache.hadoop.hbase.mapreduce.TableOutputFormat</value>
                </property>

                <property>
                    <name>mapred.map.tasks</name>
                    <value>${mapperCount}</value>
                </property>
                <property>
                    <name>mapred.reduce.tasks</name>
                    <value>${reducerCount}</value>
                </property>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error
            message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
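
Note that the hbase.mapreduce.scan property above is filled from ${wf:actionData('get-scanner')['scan']}, i.e. from a preceding java action (named get-scanner here) that must declare <capture-output/> and emit the serialized Scan. A rough sketch of such a helper is below; it assumes your HBase version exposes TableMapReduceUtil.convertScanToString() as a public method, and the package, class, and column names are placeholders.

    // Sketch of a 'get-scanner' helper; names and column details are placeholders.
    package com.hbase.driver;

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.util.Properties;

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetScanner {

        public static void main(String[] args) throws Exception {
            // Build the same Scan the <map-reduce> action should run with.
            Scan scan = new Scan();
            scan.setMaxVersions();
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"));
            scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("date"));

            // Serialize the Scan into the string format hbase.mapreduce.scan expects
            // (public in recent HBase releases; older ones may need a small wrapper).
            String serializedScan = TableMapReduceUtil.convertScanToString(scan);

            // Oozie passes this path to <java> actions that declare <capture-output/>.
            String outputFile = System.getProperty("oozie.action.output.properties");
            Properties props = new Properties();
            props.setProperty("scan", serializedScan);
            try (OutputStream os = new FileOutputStream(new File(outputFile))) {
                props.store(os, null);
            }
        }
    }

The property key "scan" matches the ['scan'] lookup in the workflow configuration above.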

Hi @100诸神, could you give me a sample? I tried to run a MapReduce job in Oozie as a java action to export data from the database, but it failed. I am really confused now.