Python map/reduce two-phase sort of counts


This Python 3 program attempts to produce a frequency list of words from a text file using map/reduce. I would like to know how to sort the word counts, represented as 'count' in the second reducer's yield statement, so that the largest count values appear last. Currently, the tail of the results looks like this:

"0002"  "wouldn"
"0002"  "wrap"
"0002"  "x"
"0002"  "xxx"
"0002"  "young"
"0002"  "zone"
For context, I pass any word text file into the python3 program as follows:

"0002"  "wouldn"
"0002"  "wrap"
"0002"  "x"
"0002"  "xxx"
"0002"  "young"
"0002"  "zone"
python MapReduceWordFreqCounter.py book.txt
Here is the code for MapReduceWordFreqCounter.py:

from mrjob.job import MRJob
from mrjob.step import MRStep
import re

# match runs of word characters and apostrophes (i.e. split on whitespace and punctuation)
WORD_REGEXP = re.compile(r"[\w']+")

class MapReduceWordFreqCounter(MRJob):

    def steps(self):
        return [
            MRStep(mapper=self.mapper_get_words,
                   reducer=self.reducer_count_words),
            MRStep(mapper=self.mapper_make_counts_key,
                   reducer=self.reducer_output_words)
        ]

    def mapper_get_words(self, _, line):
        words = WORD_REGEXP.findall(line)
        for word in words:
            yield word.lower(), 1

    def reducer_count_words(self, word, values):
        yield word, sum(values)

    def mapper_make_counts_key(self, word, count):
        # zero-pad the count so the string keys sort in numeric order (up to 9999)
        yield str(count).rjust(4, '0'), word

    def reducer_output_words(self, count, words):
        for word in words:
            yield count, word

if __name__ == '__main__':
    MapReduceWordFreqCounter.run()

You have to set a custom sort comparator for your job.

If this were written in Java, it would look like:

job.setSortComparatorClass(SortKeyComparator.class);
You have to provide a class that gives the reverse order:

public class SortKeyComparator extends Text.Comparator {

    @Override
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        return (-1) * super.compare(b1, s1, l1, b2, s2, l2);
    }
}

I guess the Python Hadoop API has some similar way to achieve this.
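mrjob does not expose a comparator hook directly, but the equivalent of the Java comparator above can be passed through to Hadoop Streaming via jobconf. A minimal sketch, assuming Hadoop 2.x property names and a job that actually runs on Hadoop/EMR (the inline/local runners may not honor these settings); the class name MRSortedCounts is hypothetical:

from mrjob.job import MRJob

class MRSortedCounts(MRJob):
    # standard Hadoop Streaming properties, not mrjob-specific APIs
    JOBCONF = {
        # sort reducer input with the key-field-based comparator
        'mapreduce.job.output.key.comparator.class':
            'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
        # '-k1,1r' reverses the order of field 1, like the Java comparator
        # above; drop the 'r' for ascending order
        'mapreduce.partition.keycomparator.options': '-k1,1r',
        # one reducer produces a single, globally ordered output file
        'mapreduce.job.reduces': '1',
    }
    # ... steps as in MapReduceWordFreqCounter above ...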

For the MRJob reduce steps, do not expect the results to arrive sorted on the 'count' key.

Here, the MRJob import allows you to run the code both locally and on AWS Elastic MapReduce clusters. MRJob does the heavy lifting of execution, using the YARN API and Hadoop Streaming for the distributed data transfer between the map and reduce jobs.

For example, to run this MRJob locally:
python MapReduceWordFreqCounter.py books.txt > counts.txt

To run on a single EMR node:
python MapReduceWordFreqCounter.py -r emr books.txt > counts.txt

To run on 25 EMR nodes:
python MapReduceWordFreqCounter.py -r emr --num-ec2-instances=25 books.txt > counts.txt

To troubleshoot a distributed EMR job (substitute your job ID):
python -m mrjob.tools.emr.fetch_logs --find-failure j-1NXEBAEQFDFT

Here, when run on four nodes, the reduced results come out ordered, but in four separate parts of the output. It turns out that forcing the reducer to produce a single ordered file has no performance advantage over sorting the results in a post-run job step. So one way to solve this particular problem is to use the Linux sort command:

sort word_frequency_list.txt > sorted_word_frequency_list.txt
This produces these 'tail' results:

"0970"  "of"
"1191"  "a"
"1292"
"1420"  "your"
"1561"  "you"
"1828"  "to"
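Note that the zero-padded keys make plain lexicographic sort agree with numeric order only for counts below 10000. A small post-processing sketch that sorts numerically instead, assuming mrjob's default tab-separated, JSON-encoded output was saved to counts.txt:

import json

# mrjob's default output protocol writes a JSON-encoded key and value,
# separated by a tab
with open('counts.txt') as f:
    pairs = [line.rstrip('\n').split('\t', 1) for line in f if line.strip()]

# decode the JSON strings, then sort on the numeric value of the count
decoded = [(json.loads(count), json.loads(word)) for count, word in pairs]
for count, word in sorted(decoded, key=lambda pair: int(pair[0])):
    print(count, word)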

More generally, there are frameworks on top of Hadoop that are well suited to this kind of processing. For this problem, Pig could be used to read the processed file and sort the counts.

Pig can be run through the Grunt shell or with Pig scripts (written in the case-sensitive Pig Latin syntax). Pig scripts follow this template:
1) A LOAD statement to read the data
2) A series of 'transformation' statements to process the data
3) A DUMP/STORE statement to save the results

To sort the counts with Pig:

reducer_count_output = LOAD 'word_frequency_list.txt' using PigStorage('  ') AS (word_count:chararray, word_name:chararray);
counts_words_ordered = ORDER reducer_count_output BY word_count ASC;
STORE counts_words_ordered INTO 'counts_words_ordered' USING PigStorage(':', '-schema');
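These statements can be entered one at a time in the Grunt shell, or saved to a script file (the name sort_counts.pig here is just an example) and run in Pig's local mode:
pig -x local sort_counts.pig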