Hadoop performance

Tags: performance, hadoop, mapreduce

I installed Hadoop 1.0.0 and tried the word count example (single-node cluster). It took 2 minutes 48 seconds to complete. Then I tried the standard Linux word count program, which ran in 10 ms on the same set of files (180 kB of data). Am I doing something wrong, or is Hadoop just very slow?

time hadoop jar /usr/share/hadoop/hadoop*examples*.jar wordcount someinput someoutput
12/01/29 23:04:41 INFO input.FileInputFormat: Total input paths to process : 30
12/01/29 23:04:41 INFO mapred.JobClient: Running job: job_201201292302_0001
12/01/29 23:04:42 INFO mapred.JobClient:  map 0% reduce 0%
12/01/29 23:05:05 INFO mapred.JobClient:  map 6% reduce 0%
12/01/29 23:05:15 INFO mapred.JobClient:  map 13% reduce 0%
12/01/29 23:05:25 INFO mapred.JobClient:  map 16% reduce 0%
12/01/29 23:05:27 INFO mapred.JobClient:  map 20% reduce 0%
12/01/29 23:05:28 INFO mapred.JobClient:  map 20% reduce 4%
12/01/29 23:05:34 INFO mapred.JobClient:  map 20% reduce 5%
12/01/29 23:05:35 INFO mapred.JobClient:  map 23% reduce 5%
12/01/29 23:05:36 INFO mapred.JobClient:  map 26% reduce 5%
12/01/29 23:05:41 INFO mapred.JobClient:  map 26% reduce 8%
12/01/29 23:05:44 INFO mapred.JobClient:  map 33% reduce 8%
12/01/29 23:05:53 INFO mapred.JobClient:  map 36% reduce 11%
12/01/29 23:05:54 INFO mapred.JobClient:  map 40% reduce 11%
12/01/29 23:05:56 INFO mapred.JobClient:  map 40% reduce 12%
12/01/29 23:06:01 INFO mapred.JobClient:  map 43% reduce 12%
12/01/29 23:06:02 INFO mapred.JobClient:  map 46% reduce 12%
12/01/29 23:06:06 INFO mapred.JobClient:  map 46% reduce 14%
12/01/29 23:06:09 INFO mapred.JobClient:  map 46% reduce 15%
12/01/29 23:06:11 INFO mapred.JobClient:  map 50% reduce 15%
12/01/29 23:06:12 INFO mapred.JobClient:  map 53% reduce 15%
12/01/29 23:06:20 INFO mapred.JobClient:  map 56% reduce 15%
12/01/29 23:06:21 INFO mapred.JobClient:  map 60% reduce 17%
12/01/29 23:06:28 INFO mapred.JobClient:  map 63% reduce 17%
12/01/29 23:06:29 INFO mapred.JobClient:  map 66% reduce 17%
12/01/29 23:06:30 INFO mapred.JobClient:  map 66% reduce 20%
12/01/29 23:06:36 INFO mapred.JobClient:  map 70% reduce 22%
12/01/29 23:06:37 INFO mapred.JobClient:  map 73% reduce 22%
12/01/29 23:06:45 INFO mapred.JobClient:  map 80% reduce 24%
12/01/29 23:06:51 INFO mapred.JobClient:  map 80% reduce 25%
12/01/29 23:06:54 INFO mapred.JobClient:  map 86% reduce 25%
12/01/29 23:06:55 INFO mapred.JobClient:  map 86% reduce 26%
12/01/29 23:07:02 INFO mapred.JobClient:  map 90% reduce 26%
12/01/29 23:07:03 INFO mapred.JobClient:  map 93% reduce 26%
12/01/29 23:07:07 INFO mapred.JobClient:  map 93% reduce 30%
12/01/29 23:07:09 INFO mapred.JobClient:  map 96% reduce 30%
12/01/29 23:07:10 INFO mapred.JobClient:  map 96% reduce 31%
12/01/29 23:07:12 INFO mapred.JobClient:  map 100% reduce 31%
12/01/29 23:07:22 INFO mapred.JobClient:  map 100% reduce 100%
12/01/29 23:07:28 INFO mapred.JobClient: Job complete: job_201201292302_0001
12/01/29 23:07:28 INFO mapred.JobClient: Counters: 29
12/01/29 23:07:28 INFO mapred.JobClient:   Job Counters 
12/01/29 23:07:28 INFO mapred.JobClient:     Launched reduce tasks=1
12/01/29 23:07:28 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=275346
12/01/29 23:07:28 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/01/29 23:07:28 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/01/29 23:07:28 INFO mapred.JobClient:     Launched map tasks=30
12/01/29 23:07:28 INFO mapred.JobClient:     Data-local map tasks=30
12/01/29 23:07:28 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=137186
12/01/29 23:07:28 INFO mapred.JobClient:   File Output Format Counters 
12/01/29 23:07:28 INFO mapred.JobClient:     Bytes Written=26287
12/01/29 23:07:28 INFO mapred.JobClient:   FileSystemCounters
12/01/29 23:07:28 INFO mapred.JobClient:     FILE_BYTES_READ=71510
12/01/29 23:07:28 INFO mapred.JobClient:     HDFS_BYTES_READ=89916
12/01/29 23:07:28 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=956282
12/01/29 23:07:28 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=26287
12/01/29 23:07:28 INFO mapred.JobClient:   File Input Format Counters 
12/01/29 23:07:28 INFO mapred.JobClient:     Bytes Read=85860
12/01/29 23:07:28 INFO mapred.JobClient:   Map-Reduce Framework
12/01/29 23:07:28 INFO mapred.JobClient:     Map output materialized bytes=71684
12/01/29 23:07:28 INFO mapred.JobClient:     Map input records=2574
12/01/29 23:07:28 INFO mapred.JobClient:     Reduce shuffle bytes=71684
12/01/29 23:07:28 INFO mapred.JobClient:     Spilled Records=6696
12/01/29 23:07:28 INFO mapred.JobClient:     Map output bytes=118288
12/01/29 23:07:28 INFO mapred.JobClient:     CPU time spent (ms)=39330
12/01/29 23:07:28 INFO mapred.JobClient:     Total committed heap usage (bytes)=5029167104
12/01/29 23:07:28 INFO mapred.JobClient:     Combine input records=8233
12/01/29 23:07:28 INFO mapred.JobClient:     SPLIT_RAW_BYTES=4056
12/01/29 23:07:28 INFO mapred.JobClient:     Reduce input records=3348
12/01/29 23:07:28 INFO mapred.JobClient:     Reduce input groups=1265
12/01/29 23:07:28 INFO mapred.JobClient:     Combine output records=3348
12/01/29 23:07:28 INFO mapred.JobClient:     Physical memory (bytes) snapshot=4936278016
12/01/29 23:07:28 INFO mapred.JobClient:     Reduce output records=1265
12/01/29 23:07:28 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=26102546432
12/01/29 23:07:28 INFO mapred.JobClient:     Map output records=8233

real    2m48.886s
user    0m3.300s
sys 0m0.304s


time wc someinput/*
  178  1001  8674 someinput/capacity-scheduler.xml
  178  1001  8674 someinput/capacity-scheduler.xml.bak
    7     7   196 someinput/commons-logging.properties
    7     7   196 someinput/commons-logging.properties.bak
   24    35   535 someinput/configuration.xsl
   80   122  1968 someinput/core-site.xml
   80   122  1972 someinput/core-site.xml.bak
    1     0     1 someinput/dfs.exclude
    1     0     1 someinput/dfs.include
   12    36   327 someinput/fair-scheduler.xml
   45   192  2141 someinput/hadoop-env.sh
   45   192  2139 someinput/hadoop-env.sh.bak
   20   137   910 someinput/hadoop-metrics2.properties
   20   137   910 someinput/hadoop-metrics2.properties.bak
  118   582  4653 someinput/hadoop-policy.xml
  118   582  4653 someinput/hadoop-policy.xml.bak
  241   623  6616 someinput/hdfs-site.xml
  241   623  6630 someinput/hdfs-site.xml.bak
  171   417  6177 someinput/log4j.properties
  171   417  6177 someinput/log4j.properties.bak
    1     0     1 someinput/mapred.exclude
    1     0     1 someinput/mapred.include
   12    15   298 someinput/mapred-queue-acls.xml
   12    15   298 someinput/mapred-queue-acls.xml.bak
  338   897  9616 someinput/mapred-site.xml
  338   897  9630 someinput/mapred-site.xml.bak
    1     1    10 someinput/masters
    1     1    18 someinput/slaves
   57    89  1243 someinput/ssl-client.xml.example
   55    85  1195 someinput/ssl-server.xml.example
 2574  8233 85860 total

real    0m0.009s
user    0m0.004s
sys 0m0.000s

It depends on many factors, including the configuration, the machine, the memory settings, the JVM settings, and so on. You also need to subtract the JVM startup time.

It ran much faster for me. That said, on a small data set it will of course be slower than a dedicated C program; just consider what it is doing "under the hood".


Try it on terabytes of data spread across thousands of files and see what happens.

Your input data is small, which is why you observe Hadoop taking a long time. The job creation process in Hadoop is very heavyweight, since it involves many moving parts. If the input data were large, you would see Hadoop doing better than wc.

To improve Hadoop's performance:

  • Set the number of mappers and reducers appropriately.

    [Judging by your job's output, you are already launching plenty of mappers and reducers. Use only as many as you need; launching too many mappers or reducers will not improve performance.]

  • Use larger data sets (terabytes, or at least gigabytes).

    [In Hadoop, the default block size is 64 MB.]

  • Install Hadoop on some other machines and try running on a multi-node cluster. That will improve performance.
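For the first bullet, in Hadoop 1.x the task counts can be hinted in mapred-site.xml. The property names below are the real 1.x ones; the values are only illustrative for a small single-node setup, not tuning advice:

```xml
<!-- mapred-site.xml fragment (values illustrative only) -->
<property>
  <name>mapred.map.tasks</name>
  <value>2</value>  <!-- a hint only: the InputFormat decides the real split count -->
</property>
<property>
  <name>mapred.reduce.tasks</name>
  <value>1</value>
</property>
```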


Hadoop is the next big thing. In addition to the other answers, there is one more factor:
you have 30 files to process, and therefore 30 tasks to execute. The Hadoop MR overhead per task execution is between 1 and 3 seconds. If you merged the data into a single file, performance would improve substantially, but there would still be the job startup overhead.
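That merge step can be sketched locally with ordinary shell tools (the directory and file names here are made up for illustration; in a real setup the merged file would then be uploaded with `hadoop fs -put` before running the job):

```shell
# Many tiny files mean many map tasks; merging them yields one split.
mkdir -p someinput_demo
printf 'hello world\n'  > someinput_demo/a.txt
printf 'hello hadoop\n' > someinput_demo/b.txt

# Concatenate everything into a single input file.
cat someinput_demo/*.txt > combined.txt
wc -l combined.txt
```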

I think a local native program will always outperform Hadoop here. Hadoop MR is built with scalability and fault tolerance in mind, which in many cases comes at the cost of raw performance.

As Dave said, Hadoop is optimized to handle large amounts of data, not toy examples. There is a "waking up the elephant" tax charged to get things going, which you do not pay when working on smaller data sets.
You can look into the details of what is actually going on.

Compared to a native application you can run from the terminal, Hadoop will usually have some overhead. You would certainly get a better time if you increased the number of mappers to 2, and you should be able to do that. If the wordcount example does not support setting the number of mappers and reducers, try this one.

Usage:

hadoop jar ./target/wordcount.jar -r 1 -m 4 <input> <output>

Hadoop's strength is its ability to spread work across many nodes to process GB/TB of data; in general, it will not beat anything a single computer can do in a couple of minutes.

Hmm... there is a confusion here, or let me try to clear it up.

Suppose you have a problem that can be solved with, say, O(n) complexity. If you apply Hadoop with, let's assume, K machines, the complexity is reduced by a factor of K. So in your case the task (the Hadoop job) should execute faster.
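The O(n)/K picture, however, ignores the fixed per-job overhead, which is what dominates a 180 kB input. A toy model of this, with purely illustrative numbers:

```shell
# total time = fixed job overhead + work / K  (all numbers assumed)
awk 'BEGIN {
  overhead = 30.0     # seconds of job setup/teardown, illustrative
  work     = 0.01     # seconds of actual counting for ~180 kB
  for (K = 1; K <= 8; K *= 2)
    printf "K=%d machines -> %.2f s\n", K, overhead + work / K
}'
```

Adding machines barely changes the total, because almost all of the time is overhead rather than work.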

So what went wrong?

Assume you have a standard Hadoop installation with all the standard Hadoop configuration, and also assume you are running Hadoop in local mode by default.

1) You are running the program on a single node, so do not expect the running time to beat a standard program. (Things would be different with a multi-node cluster.)

Now the question arises: since a single machine is used, shouldn't the running time be the same?

The answer is no. In Hadoop, the data is first read by the record reader, which emits key-value pairs that are passed to the mapper; the mapper processes them and emits key-value pairs (assuming no combiner is used); the data is then sorted and shuffled and passed to the reduce phase, after which the output is written to HDFS. So there is much more overhead here, and that is why you perceive lower performance.
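That pipeline (read records, map to key-value pairs, sort/shuffle, reduce) has the same shape as the classic Unix word-count pipeline, which works as a mental model of the stages:

```shell
# map: emit one word per line; shuffle: sort groups equal keys
# together; reduce: uniq -c counts each group.
printf 'hello world\nhello hadoop\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c
```

The difference is that Hadoop pays for JVM startup, task scheduling, and HDFS I/O around each of these stages, while the shell pipeline pays almost nothing.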


If you want to see what Hadoop can do, run the same task on a K-node cluster over petabytes of data, and run a single-threaded application on the same data. I guarantee you will be amazed.

Even though Hadoop is not meant for small files like this, it can still be tuned to some extent. The total input is only 180 kB, yet there are 30 input splits, so 30 mappers are launched, which is unnecessary in this case. Hadoop has to be tuned according to the number of nodes and the input data. The split size is governed by "dfs.block.size" in hdfs-site.xml (64 MB by default in 1.x), but note that FileInputFormat creates at least one split per file, so with 30 input files you get at least 30 mappers regardless of block size; merging the input into a single file smaller than one block would let this word count run with a single mapper and improve performance noticeably.
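For reference, the property lives in hdfs-site.xml; the value below is the 1.x default of 64 MB, and it only applies to files written after the change:

```xml
<!-- hdfs-site.xml fragment; 67108864 bytes = 64 MB, the 1.x default -->
<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
</property>
```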
