<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>C++ Blog - mysileng - Category: Hadoop</title><link>http://www.cppblog.com/mysileng/category/20280.html</link><description /><language>zh-cn</language><lastBuildDate>Tue, 25 Jun 2013 03:41:55 GMT</lastBuildDate><pubDate>Tue, 25 Jun 2013 03:41:55 GMT</pubDate><ttl>60</ttl><item><title>Compiling Hadoop 0.20.2 in Eclipse</title><link>http://www.cppblog.com/mysileng/archive/2013/06/24/201276.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Mon, 24 Jun 2013 10:58:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/06/24/201276.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/201276.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/06/24/201276.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/201276.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/201276.html</trackback:ping><description><![CDATA[<p><strong>1. 
Download the Hadoop Source Code<br /></strong>Source code for every Hadoop sub-project is available at <a href="http://svn.apache.org/repos/asf/hadoop">http://svn.apache.org/repos/asf/hadoop</a>; use SVN to download it. Note that instead of checking out trunk, you should check out only the release tag you need from the tags directory on SVN, e.g.:<br /><a href="http://svn.apache.org/repos/asf/hadoop/common/trunk">http://svn.apache.org/repos/asf/hadoop/common/tag/release-0.20.2</a></p><p><br /><strong>2. Preparing the Build Environment</strong></p><p><strong>2.1. Operating System</strong></p><p><strong>CentOS 5.5</strong></p><p><strong>2.2. Hadoop Version<br /></strong>hadoop-0.20.2-release</p><p><strong>2.3. Network Access</strong><br />Building Hadoop depends on many third-party libraries, but the build tool Ant downloads any missing ones automatically, so the machine must be able to reach the Internet.</p><p><strong>2.4. Java<br /></strong>Building Hadoop requires JDK 1.6 or later: <a href="http://java.sun.com/javase/downloads/index.jsp">http://java.sun.com/javase/downloads/index.jsp</a>.<br />After installing it, set the JAVA_HOME environment variable.</p><p><strong>2.5. Ant</strong><br />Ant is needed to build Hadoop and can be downloaded from <a href="http://ant.apache.org/ivy/download.cgi">http://ant.apache.org/ivy/download.cgi</a>.</p><p>After installing it, set the ANT_HOME environment variable.</p><p><strong>2.6. Eclipse</strong></p><p>Eclipse can be downloaded from <a href="http://www.eclipse.org/downloads/">http://www.eclipse.org/downloads/</a>.</p><p>&nbsp;</p><p><strong>3. Building Hadoop</strong></p><p><strong>3.1. Build Steps</strong><br />Step 1) In Eclipse's Package Explorer, right-click and choose New-&gt;Java Project, as shown below:<br /><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130624185506.jpg" width="825" height="570" alt="" /></p><p>In the dialog shown above, click the Browse button, select the hadoop-0.20.2 source directory, and set the project name to hadoop-0.20.2-dev. Once the import finishes and you return to the Eclipse workbench, hadoop-0.20.2 appears in the workspace, but the project is marked with red error crosses. This is because Eclipse defaults to the Java Builder rather than the Ant Builder, so the next step is to switch to the Ant Builder.</p><p>Step 2) Set the builder to Ant: right-click hadoop-0.20.2-dev-&gt;Properties-&gt;Builders:</p><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130624185543.jpg" width="855" height="600" alt="" /><br /><p>Click the Browse File System button, select the build.xml file in the hadoop-0.20.2 source directory, and set Name to Ant_Builder (any name will do, but Ant_Builder is recommended because it says what the builder is):</p><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130624185624.jpg" width="849" height="820" alt="" /><br /><p>Every Hadoop member needs to be built into a jar, so make the change shown below:</p><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130624185710.jpg" width="874" height="794" alt="" /><br /><p>When that is done, return to the main Builders dialog, move Java Builder down the list, and uncheck it.<br />Back in the Eclipse workbench, because Manual Build was selected earlier, the build has to be triggered by hand. When it succeeds you will see BUILD SUCCESSFUL.</p><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130624185741.jpg" width="735" height="290" alt="" /><br /><p>Note: if Build Automatically is checked in the menu shown above, the Build entry may not appear in the project's context menu.<br />During the build, Ant automatically downloads the libraries it depends on. After hadoop-0.20.2 builds successfully, the generated hadoop-core-0.20.2-dev.jar can be found in the build directory.</p><p>&nbsp;</p><p><strong>3.2. Errors During the Build</strong></p><p>1. Depending on your Eclipse or operating system version, the Eclipse plugin shipped with Hadoop may not work properly.<br />Fix:<br />1) Edit $HADOOP_HOME/src/contrib/build-contrib.xml and add the line:<br />&lt;property name="eclipse.home" location="/home/gushui/eclipse"/&gt;<br />replacing /home/gushui/eclipse with your own $ECLIPSE_HOME.<br /><br />2) Edit $HADOOP_HOME/src/contrib/eclipse-plugin/src/java/org/apache/hadoop/eclipse/launch/HadoopApplicationLaunchShortcut.java:<br />comment out the original import org.eclipse.jdt.internal.debug.ui.launcher.JavaApplicationLaunchShortcut;<br />and replace it with import org.eclipse.jdt.debug.ui.launchConfigurations.JavaApplicationLaunchShortcut;<br /><br />2. Error:</p><p>Build failed</p><p>Cannot write to the specified tarfile!</p><p>Fix:</p><p>In Build.xml under the hadoop-0.20.2-dev directory, comment out the tar task:<br />&lt;!--<br />&lt;tar compression="gzip" destfile="${build.classes}/bin.tgz"&gt;<br />&nbsp;&nbsp;&lt;tarfileset dir="bin" mode="755"/&gt;<br />&lt;/tar&gt;<br />--&gt;<br />after which the build succeeds.</p><p>&nbsp;</p><p>Reference: <a href="http://blog.csdn.net/basicthinker/article/details/6174442">http://blog.csdn.net/basicthinker/article/details/6174442</a></p><p>Reference: <a href="http://hi.baidu.com/xxjjyy2008/blog/item/7b5ed10f20e6a9346059f335.html">http://hi.baidu.com/xxjjyy2008/blog/item/7b5ed10f20e6a9346059f335.html</a></p><p>Reference: <a href="http://hadoop.hadoopor.com/thread-941-1-1.html" target="_blank">http://hadoop.hadoopor.com/thread-941-1-1.html</a></p><p><a href="http://trac.nchc.org.tw/cloud/wiki/waue/2010/0211" target="_blank">http://trac.nchc.org.tw/cloud/wiki/waue/2010/0211</a></p><p>Reposted from http://www.cnblogs.com/zyumeng/archive/2013/03/22/2975165.html</p><img src ="http://www.cppblog.com/mysileng/aggbug/201276.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-06-24 18:58 <a href="http://www.cppblog.com/mysileng/archive/2013/06/24/201276.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Unit Testing in Hadoop with MRUnit</title><link>http://www.cppblog.com/mysileng/archive/2013/04/03/199065.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Wed, 03 Apr 2013 03:27:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/04/03/199065.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/199065.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/04/03/199065.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/199065.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/199065.html</trackback:ping><description><![CDATA[<p>Original post: cnblogs 逖靖寒&nbsp;<a href="http://gpcuster.cnblogs.com/" style="color: 
#993300;">http://gpcuster.cnblogs.com</a></p><h2>Prerequisites</h2><p>1. Familiarity with JUnit 4.x.<br />2. An understanding of how mock objects are used in unit testing.<br />3. An understanding of Hadoop's MapReduce programming model.</p><p>If you are not familiar with JUnit and mocks, first read <a href="http://www.cnblogs.com/gpcuster/archive/2009/10/03/1577716.html">[翻译]Unit testing with JUnit 4.x and EasyMock in Eclipse - Tutorial</a>.</p><p>If you are not familiar with Hadoop's MapReduce programming model, first read the <a href="http://hadoop.apache.org/common/docs/r0.20.1/mapred_tutorial.html">Map/Reduce Tutorial</a>.</p><h2>Introduction</h2><p>MRUnit is a framework developed by Cloudera specifically for writing unit tests for MapReduce code in Hadoop.</p><p>It can be used both with the classic org.apache.hadoop.mapred.* model from the 0.18.x releases and with the new org.apache.hadoop.mapreduce.* model in 0.20.x.</p><p>The official description reads:</p><blockquote><p>MRUnit is a unit test library designed to facilitate easy integration between your MapReduce development process and standard development and testing tools such as JUnit. MRUnit contains mock objects that behave like classes you interact with during MapReduce execution (e.g., InputSplit and OutputCollector) as well as test harness "drivers" that test your program's correctness while maintaining compliance with the MapReduce semantics. Mapper and Reducer implementations can be tested individually, as well as together to form a full MapReduce job.</p></blockquote><h2>Installation</h2><p>Current Hadoop distributions do not include MRUnit by default. You need to download a redistributed version from Cloudera's website.</p><p>Recommended version: <a href="http://archive.cloudera.com/cdh/2/hadoop-0.20.1+133.tar.gz">hadoop-0.20.1+133.tar.gz</a>.</p><p>After downloading it, you will find the jar we need, hadoop-0.20.1+133-mrunit.jar, in the hadoop-0.20.1+133\contrib\mrunit directory.</p><p>To use MRUnit, add hadoop-0.20.1+133-mrunit.jar and the JUnit 4.x jar, junit.jar, to the classpath of your Hadoop project.</p><h2>Example</h2><p>Code is the best documentation, so let's start with a simple map unit test:</p><pre>package gpcuster.cnblogs.com;

import junit.framework.TestCase;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.junit.Before;
import org.junit.Test;
import org.apache.hadoop.mrunit.MapDriver;

public class TestExample extends TestCase {

  private Mapper&lt;Text, Text, Text, Text&gt; mapper;
  private MapDriver&lt;Text, Text, Text, Text&gt; driver;

  @Before
  public void setUp() {
    mapper = new IdentityMapper&lt;Text, Text&gt;();
    driver = new MapDriver&lt;Text, Text, Text, Text&gt;(mapper);
  }

  @Test
  public void testIdentityMapper() {
    driver.withInput(new Text("foo"), new Text("bar"))
          .withOutput(new Text("foo"), new Text("bar"))
          .runTest();
  }
}</pre><p>The map under test in this example is org.apache.hadoop.mapred.lib.IdentityMapper, a trivial map function that emits exactly what it receives.</p><p>org.apache.hadoop.mrunit.MapDriver is the class we import from the MRUnit framework specifically for testing maps.</p><p>We specify the input with withInput, the expected output with withOutput, and then run the test with runTest.</p><h2>Features</h2><p>1. To test a Map, use MapDriver.<br />2. To test a Reduce, use ReduceDriver.<br />3. To test a complete MapReduce job, use MapReduceDriver.<br />4. To test an operation composed of several chained MapReduce jobs, use PipelineMapReduceDriver.</p><h2>Implementation</h2><p>The MRUnit framework is very lean; its core unit testing relies on JUnit.</p><p>Because the MapReduce functions we write take an OutputCollector object, MRUnit implements its own set of mock objects to control the OutputCollector's operations.</p><h2>Limitations</h2><p>Reading the MRUnit source code reveals that:</p><p>1. The partitioning and sorting steps of the MapReduce framework are not supported: values emitted by the Map are shuffled and fed straight into the Reduce.<br />2. MapReduce jobs implemented with Streaming are not supported.</p><p>Despite these limitations, MRUnit is sufficient for most needs.</p><h2>References</h2><p><a title="http://www.cloudera.com/hadoop-mrunit" href="http://www.cloudera.com/hadoop-mrunit">http://www.cloudera.com/hadoop-mrunit</a></p><p>&nbsp;</p><p>Original post: cnblogs 逖靖寒&nbsp;<a href="http://gpcuster.cnblogs.com/">http://gpcuster.cnblogs.com</a></p><img src ="http://www.cppblog.com/mysileng/aggbug/199065.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-04-03 11:27 <a href="http://www.cppblog.com/mysileng/archive/2013/04/03/199065.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>MapReduce Partition Analysis</title><link>http://www.cppblog.com/mysileng/archive/2013/04/01/199028.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Mon, 01 Apr 2013 13:10:00 
GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/04/01/199028.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/199028.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/04/01/199028.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/199028.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/199028.html</trackback:ping><description><![CDATA[&nbsp;&nbsp;&nbsp;&nbsp; 摘要: 转自:http://blog.oddfoo.net/2011/04/17/mapreduce-partition%E5%88%86%E6%9E%90-2/Partition所处的位置Partition位置Partition主要作用就是将map的结果发送到相应的reduce。这就对partition有两个要求：1）均衡负载，尽量的将工作均匀的分配给不同的reduce。2）效率，分配速度一定要快。Ma...&nbsp;&nbsp;<a href='http://www.cppblog.com/mysileng/archive/2013/04/01/199028.html'>阅读全文</a><img src ="http://www.cppblog.com/mysileng/aggbug/199028.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-04-01 21:10 <a href="http://www.cppblog.com/mysileng/archive/2013/04/01/199028.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>hadoop namenode启动过程详细剖析及瓶颈分析</title><link>http://www.cppblog.com/mysileng/archive/2013/03/28/198897.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Thu, 28 Mar 2013 10:52:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/03/28/198897.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/198897.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/03/28/198897.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/198897.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/198897.html</trackback:ping><description><![CDATA[<h2>NameNode中几个关键的数据结构</h2><h3>FSImage</h3><p style="margin: 
0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Namenode会将HDFS的文件和目录元数据存储在一个叫fsimage的二进制文件中，每次保存fsimage之后到下次保存之间的所有hdfs操作，将会记录在editlog文件中，当editlog达到一定的大小（bytes，由fs.checkpoint.size参数定义）或从上次保存过后一定时间段过后（sec，由fs.checkpoint.period参数定义），namenode会重新将内存中对整个HDFS的目录树和文件元数据刷到fsimage文件中。Namenode就是通过这种方式来保证HDFS中元数据信息的安全性。</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Fsimage是一个二进制文件，当中记录了HDFS中所有文件和目录的元数据信息，在我的hadoop的HDFS版中，该文件的中保存文件和目录的格式如下：</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_128283242641Uj.gif" alt="" width="909" height="111" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">当namenode重启加载fsimage时，就是按照如下格式协议从文件流中加载元数据信息。从fsimag的存储格式可以看出，fsimage保存有如下信息：</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">1.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;首先是一个image head，其中包含：</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">a)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;imgVersion(int)：当前image的版本信息</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">b)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;namespaceID(int)：用来确保别的HDFS instance中的datanode不会误连上当前NN。</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 
">c)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;numFiles(long)：整个文件系统中包含有多少文件和目录</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">d)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;genStamp(long)：生成该image时的时间戳信息。</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">2.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;接下来便是对每个文件或目录的源数据信息，如果是目录，则包含以下信息：</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">a)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;path(String)：该目录的路径，如&#8221;/user/build/build-index&#8221;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">b)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;replications(short)：副本数（目录虽然没有副本，但这里记录的目录副本数也为3）</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">c)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;mtime(long)：该目录的修改时间的时间戳信息</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">d)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;atime(long)：该目录的访问时间的时间戳信息</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">e)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;blocksize(long)：目录的blocksize都为0</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">f)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;numBlocks(int)：实际有多少个文件块，目录的该值都为-1，表示该item为目录</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 
">g)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;nsQuota(long)：namespace Quota值，若没加Quota限制则为-1</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">h)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;dsQuota(long)：disk Quota值，若没加限制则也为-1</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">i)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;username(String)：该目录的所属用户名</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">j)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;group(String)：该目录的所属组</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">k)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;permission(short)：该目录的permission信息，如644等，有一个short来记录。</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">3.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;若从fsimage中读到的item是一个文件，则还会额外包含如下信息：</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">a)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;blockid(long)：属于该文件的block的blockid，</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">b)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;numBytes(long)：该block的大小</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">c)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;genStamp(long)：该block的时间戳</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 
">当该文件对应的numBlocks数不为1，而是大于1时，表示该文件对应有多个block信息，此时紧接在该fsimage之后的就会有多个blockid，numBytes和genStamp信息。</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">因此，在namenode启动时，就需要对fsimage按照如下格式进行顺序的加载，以将fsimage中记录的HDFS元数据信息加载到内存中。</p><h3>BlockMap</h3><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">从以上fsimage中加载如namenode内存中的信息中可以很明显的看出，在fsimage中，并没有记录每一个block对应到哪几个datanodes的对应表信息，而只是存储了所有的关于namespace的相关信息。而真正每个block对应到datanodes列表的信息在hadoop中并没有进行持久化存储，而是在所有datanode启动时，每个datanode对本地磁盘进行扫描，将本datanode上保存的block信息汇报给namenode，namenode在接收到每个datanode的块信息汇报后，将接收到的块信息，以及其所在的datanode信息等保存在内存中。HDFS就是通过这种块信息汇报的方式来完成&nbsp;block -&gt; datanodes list的对应表构建。Datanode向namenode汇报块信息的过程叫做<strong style="margin: 0px; padding: 0px; ">blockReport</strong>，而namenode将block -&gt; datanodes list的对应表信息保存在一个叫<strong style="margin: 0px; padding: 0px; ">BlocksMap</strong>的数据结构中。</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">BlocksMap的内部数据结构如下：&nbsp;&nbsp;&nbsp;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_1282832577NUG7.gif" alt="" style="margin: 0px; padding: 0px; border: none; " />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">如上图显示，BlocksMap实际上就是一个Block对象对BlockInfo对象的一个Map表，其中Block对象中只记录了blockid，block大小以及时间戳信息，这些信息在fsimage中都有记录。而BlockInfo是从Block对象继承而来，因此除了Block对象中保存的信息外，还包括代表该block所属的HDFS文件的INodeFile对象引用以及该block所属datanodes列表的信息（即上图中的DN1，DN2，DN3，该数据结构会在下文详述）。</p><p 
style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Consequently, once the namenode has started and finished loading the fsimage, every key of the BlocksMap (that is, every Block object) is already in place, and in each key's BlockInfo value everything is populated except the array holding the owning datanodes, which is still empty. In other words, after the fsimage is loaded, the only thing missing from the BlocksMap is the mapping from each block to its list of datanodes, and that missing information is built from the <strong style="margin: 0px; padding: 0px; ">blockReport</strong>s received from the datanodes as described above. Once the blockReports sent by all datanodes have been processed, the BlocksMap structure is complete.</p><h3>The datanode list structure inside the BlocksMap</h3><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">In BlockInfo, the datanodes holding the block are kept in an Object[] array, but the array holds more than the datanode list. It actually contains the following:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_1282832653nh2L.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The figure shows a block with three replicas, placed on datanodes DN1, DN2 and DN3; each datanode corresponds to one triplet. The second element of a triplet ("prev block" in the figure) is a reference to the BlockInfo of the previous block on that datanode, and the third element ("next block") is a reference to the BlockInfo of the next block on that datanode. A BlockInfo contains one such triplet for every replica of its block.</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The namenode uses this structure for the block -&gt; datanode list mapping in order to save memory. Because the block -&gt; datanodes mapping is held in memory, it grows with the number of files (and hence blocks) in HDFS and already consumes a large amount of namenode memory; additionally keeping a datanode -&gt; block list table in the same fashion would cost considerably more. In practice, queries for the block list of a given datanode are rare, while looking up the datanode list of a given block is the common case. So the namenode stores only the block -&gt; datanode list mapping as shown above; when a datanode -&gt; block list query does arise, it can be answered by following the "next block" references through this structure, with no need to keep a datanode -&gt; block list table in memory.</p><h2>The NameNode startup process</h2><h3>fsimage loading</h3><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Loading the fsimage accomplishes the following:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">1. read every directory and every file recorded in this HDFS instance's fsimage;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">2. initialize the metadata of each directory and file;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">3. build the in-memory image of the whole namespace from the directory and file paths;</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">4. for each file, read all of its blockids and insert them into the BlocksMap.</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The whole loading flow is shown below:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_1282832793TFNr.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 
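">The four numbered steps above amount to a loop over the image records. The following is an illustrative sketch with invented types (the real loader in Hadoop 0.20 parses a binary stream inside FSImage/FSDirectory):</p>

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the fsimage load loop: every record contributes its
// path to the namespace image, and file records also register their blockids
// in a blocks map whose datanode lists start out empty. Types are invented.
class Entry {
    final String path; final boolean isFile; final List<Long> blockIds;
    Entry(String p, boolean f, List<Long> b) { path = p; isFile = f; blockIds = b; }
}

public class FsimageLoadSketch {
    // Returns the blockid -> (still empty) datanode-list map built during the load.
    static Map<Long, List<String>> load(List<Entry> fsimage, Map<String, Entry> namespace) {
        Map<Long, List<String>> blocksMap = new HashMap<>();
        for (Entry e : fsimage) {
            namespace.put(e.path, e);                     // steps 1-3: namespace image
            if (e.isFile)
                for (long id : e.blockIds)                // step 4: register blockids
                    blocksMap.put(id, new ArrayList<>()); // locations still unknown
        }
        return blocksMap;
    }

    public static void main(String[] args) {
        List<Entry> img = Arrays.asList(
                new Entry("/user", false, Collections.<Long>emptyList()),
                new Entry("/user/a.txt", true, Arrays.asList(1L, 2L)));
        Map<String, Entry> ns = new HashMap<>();
        System.out.println(load(img, ns).size() + " blocks awaiting blockReports");
    }
}
```

The empty datanode lists left behind by this loop are exactly what the blockReport phase described next fills in.
<p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 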
">As the figure shows, loading the fsimage is actually very simple: the namenode sequentially reads file and directory metadata from the fsimage, builds the whole namespace in memory, and records each file's blockids in the BlocksMap, where every block's datanode list is empty for now. Once the fsimage is loaded, the entire HDFS directory tree has been initialized in memory; all that is missing is the datanode list for each block of each file. That information has to come from the datanodes' blockReports, so after loading the fsimage the namenode process enters an RPC wait state, waiting for all datanodes to send their blockReports.</p><h3>The blockReport phase</h3><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">At startup, each datanode scans the directories in which it stores HDFS blocks (dfs.data.dir) and sends the resulting block information to the namenode as a long array via an RPC call. On receiving a datanode's blockReport RPC, the namenode parses the block array out of the call and inserts the blocks into the BlocksMap. Since the only thing the BlocksMap lacks at this point is each block's datanode information, and the namenode can tell from the report which datanode's blocks are being reported, the blockReport phase is simply the process of filling in, for each reported block, the datanode-list triplets in the BlocksMap. The flow is shown below:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_128283285152J9.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">When every datanode has reported its blocks and the namenode has processed each report, namenode startup is finished, and the block -&gt; datanodes mapping in the BlocksMap is fully initialized. If the safe-mode exit threshold has been reached by then, HDFS leaves safe mode on its own and begins serving.</p><h1><a name="_Toc268446377" style="margin: 0px; padding: 0px; color: #336699; ">Profiling the startup process and analyzing its bottlenecks</a></h1><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">With the namenode startup process understood in detail, the time spent in each function of each phase can be profiled. The profiling again covers two phases: fsimage loading and blockReport processing.</p><h2>Profiling and bottleneck analysis of the fsimage loading phase</h2><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The following profiling data was collected from a real fsimage load on the staging cluster:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_1282832976Z25j.gif" alt="" width="765" height="92" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The data shows that the costly operations in the fsimage load are <strong style="margin: 0px; padding: 0px; ">FSDirectory.addToParent</strong>, <strong style="margin: 0px; padding: 0px; ">FSImage.readString</strong> and <strong style="margin: 0px; padding: 0px; ">PermissionStatus.read</strong>, which account for 73%, 15% and 8% of the load respectively, 96% of the total combined. FSImage.readString and PermissionStatus.read both read data from the fsimage file stream (a String and a short, respectively); there is little room to optimize them, although enlarging the stream's buffer can buy a small improvement. FSDirectory.addToParent, however, accounts for 73% of the load on its own, so it offers the most room for optimization.</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The profiling data inside the addToParent call is as follows:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_12828330345m9e.gif" alt="" width="759" height="127" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 
">The data above shows that of the 73% spent in addToParent, 66 percentage points go to INode.getPathComponents-related calls: 36 points in INode.getPathNames and 30 points in INode.getPathComponents. The breakdown of these two costly operations is as follows:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_1282833100y9i1.gif" alt="" width="839" height="218" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The 36% of processing time spent in INode.getPathNames goes entirely to String.split calls that break file and directory paths into components. The roughly 30% spent in INode.getPathComponents is ultimately consumed by the native Java operation that extracts a String's byte array.</p><h2>Profiling and bottleneck analysis of the blockReport phase</h2><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Because blockReport is an RPC from the datanodes to the namenode, once the namenode enters the wait-for-blockReport stage it starts both RPC listener threads and RPC handler threads. The time split between RPC handling and RPC listening is shown below:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_12828331820m3h.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Optimizing the RPC listener threads is a separate topic, to be discussed in another issue; since blockReport work actually runs on the RPC handler threads, only the handler threads' performance data is of interest here.</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The call timings while the namenode processes blockReports are as follows:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_1282833251doLC.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">During namenode startup, processing the blockReports from the datanodes takes up almost all of the RPC handling time (48/49). The time inside the blockReport handling logic breaks down as follows:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_12828333102I59.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The data shows that blockReport time is dominated by FSNamesystem.addStoredBlock and DatanodeDescriptor.reportDiff, at 37/48 and 10/48 respectively. For each reported block, FSNamesystem.addStoredBlock records the block-to-datanode association in the in-memory BlocksMap, so the method is called once per block; it was called <strong style="margin: 0px; padding: 0px; "><span style="margin: 0px; padding: 0px; color: red; ">774819</span></strong> times over the whole run. The other costly operation, DatanodeDescriptor.reportDiff (described in detail above), compares the blocks a datanode reports against the namenode's in-memory BlocksMap to decide which blocks must be added to the BlocksMap, which must go on the toRemove queue, and which on the toValidate queue. Because it must look up every reported block in the BlocksMap and in several other namenode maps, this step is also very expensive: the call counts show reportDiff was invoked only 14 times during startup (one per reporting datanode), yet it consumed 10/48 of the time, making it another major bottleneck of the blockReport phase.</p><p style="margin: 0px; padding: 0px; color: 
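#333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The add/remove/validate classification that reportDiff performs can be pictured as a set difference. The following is an illustrative sketch with invented names, not the HDFS 0.20 implementation:</p>

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch of a reportDiff-style comparison: given the blockids the
// namenode already associates with a datanode and the blockids that datanode
// just reported, split them into "to add" and "to remove" sets.
public class ReportDiffSketch {
    static Set<Long> toAdd(Set<Long> stored, Set<Long> reported) {
        Set<Long> add = new TreeSet<>(reported);
        add.removeAll(stored);      // reported but not yet recorded for this datanode
        return add;
    }
    static Set<Long> toRemove(Set<Long> stored, Set<Long> reported) {
        Set<Long> remove = new TreeSet<>(stored);
        remove.removeAll(reported); // previously recorded but no longer reported
        return remove;
    }
    public static void main(String[] args) {
        Set<Long> stored = new HashSet<>(Arrays.asList(1L, 2L, 3L));
        Set<Long> reported = new HashSet<>(Arrays.asList(2L, 3L, 4L));
        System.out.println("toAdd=" + toAdd(stored, reported)
                + " toRemove=" + toRemove(stored, reported)); // toAdd=[4] toRemove=[1]
    }
}
```

The cost the profile attributes to the real reportDiff comes from each membership test having to consult the BlocksMap and several other namenode maps rather than a single in-memory set as above.
<p style="margin: 0px; padding: 0px; color: 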
#333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Besides reportDiff, the addStoredBlock calls consume 37/48 of the total blockReport time. The method's purpose is to insert each block reported by a datanode into the BlocksMap. Its runtime data is shown below:</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img src="http://hi.csdn.net/attachment/201008/26/0_12828333662q8q.gif" alt="" style="margin: 0px; padding: 0px; border: none; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Within addStoredBlock, the two main costs are FSNamesystem.countNodes and DatanodeDescriptor.addBlock. The latter is an ordinary Java list insertion. countNodes tallies, among each block's replicas in the BlocksMap, how many are in the live state, how many are decommissioning, and how many are corrupt. During namenode startup and initialization, however, the maps holding corrupt and decommissioning blocks are still empty, and the program logic only needs the number of live replicas. So at startup countNodes need not count the corrupt and decommissioning replicas of each block at all; counting only live replicas would make countNodes much lighter during the startup phase and save startup time.</p><h2>Summary of the bottleneck analysis</h2><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The profiling data and bottleneck analysis show that, apart from the suboptimal path splitting, the costly parts of the fsimage loading phase lie almost entirely in native Java operations, such as reading data from the byte stream and extracting the byte[] array from a String object.</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">The cost of the blockReport phase, by contrast, stems largely from the current namenode design and its in-memory structures. The clearest examples of waste are the countNodes and reportDiff work done during namenode startup, where parts of the blockReport handling are unnecessary at initialization time. Extracting just the necessary operations into code paths invoked only during startup would improve namenode startup performance.</p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><br style="margin: 0px; padding: 
0px; " /></p><p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Ref:&nbsp;<a href="http://blog.csdn.net/ae86_fc/article/details/5842020" style="margin: 0px; padding: 0px; color: #336699; text-decoration: none; ">http://blog.csdn.net/ae86_fc/article/details/5842020</a></p><img src ="http://www.cppblog.com/mysileng/aggbug/198897.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-03-28 18:52 <a href="http://www.cppblog.com/mysileng/archive/2013/03/28/198897.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Hadoop secondary sort (partitioning and grouping in Map/Reduce)</title><link>http://www.cppblog.com/mysileng/archive/2013/03/25/198814.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Mon, 25 Mar 2013 11:38:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/03/25/198814.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/198814.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/03/25/198814.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/198814.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/198814.html</trackback:ping><description><![CDATA[<p style="margin: 0px; padding: 0px; color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">1. The concept of secondary sort:</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; 
">Sort rows by the first field, then sort rows whose first field is equal by the second field, taking care not to disturb the result of the <em style="margin: 0px; padding: 0px; ">first sort</em>.</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">For example, given the input file:</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">20 21&nbsp;<br style="margin: 0px; padding: 0px; " />50 51&nbsp;<br style="margin: 0px; padding: 0px; " />50 52&nbsp;<br style="margin: 0px; padding: 0px; " />50 53&nbsp;<br style="margin: 0px; padding: 0px; " />50 54&nbsp;<br style="margin: 0px; padding: 0px; " />60 51&nbsp;<br style="margin: 0px; padding: 0px; " />60 53&nbsp;<br style="margin: 0px; padding: 0px; " />60 52&nbsp;<br style="margin: 0px; padding: 0px; " />60 56&nbsp;<br style="margin: 0px; padding: 0px; " />60 57&nbsp;<br style="margin: 0px; padding: 0px; " />70 58&nbsp;<br style="margin: 0px; padding: 0px; " />60 61&nbsp;<br style="margin: 0px; padding: 0px; " />70 54&nbsp;<br style="margin: 0px; padding: 0px; " />70 55&nbsp;<br style="margin: 0px; padding: 0px; " />70 56&nbsp;<br style="margin: 0px; padding: 0px; " />70 57&nbsp;<br style="margin: 0px; padding: 0px; " />70 58&nbsp;<br style="margin: 0px; padding: 0px; " />1 2&nbsp;<br style="margin: 0px; padding: 0px; " />3 4&nbsp;<br style="margin: 0px; padding: 0px; " />5 6&nbsp;<br style="margin: 0px; padding: 0px; " />7 82&nbsp;<br style="margin: 0px; padding: 0px; " />203 21&nbsp;<br style="margin: 0px; padding: 0px; " />50 512&nbsp;<br style="margin: 0px; padding: 0px; " />50 522&nbsp;<br style="margin: 0px; padding: 0px; " />50 53&nbsp;<br style="margin: 0px; padding: 0px; " />530 54&nbsp;<br style="margin: 0px; padding: 0px; " />40 511&nbsp;<br style="margin: 0px; padding: 0px; " />20 53&nbsp;<br style="margin: 0px; padding: 0px; 
" />20 522&nbsp;<br style="margin: 0px; padding: 0px; " />60 56&nbsp;<br style="margin: 0px; padding: 0px; " />60 57&nbsp;<br style="margin: 0px; padding: 0px; " />740 58&nbsp;<br style="margin: 0px; padding: 0px; " />63 61&nbsp;<br style="margin: 0px; padding: 0px; " />730 54&nbsp;<br style="margin: 0px; padding: 0px; " />71 55&nbsp;<br style="margin: 0px; padding: 0px; " />71 56&nbsp;<br style="margin: 0px; padding: 0px; " />73 57&nbsp;<br style="margin: 0px; padding: 0px; " />74 58&nbsp;<br style="margin: 0px; padding: 0px; " />12 211&nbsp;<br style="margin: 0px; padding: 0px; " />31 42&nbsp;<br style="margin: 0px; padding: 0px; " />50 62&nbsp;<br style="margin: 0px; padding: 0px; " />7 8</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">Expected output (with separator lines):</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />1&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 2&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />3&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 4&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />5&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 6&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 8&nbsp;<br style="margin: 0px; padding: 0px; " />7&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 82&nbsp;<br style="margin: 0px; padding: 0px; " 
/>------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />12&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 211&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />20&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 21&nbsp;<br style="margin: 0px; padding: 0px; " />20&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 53&nbsp;<br style="margin: 0px; padding: 0px; " />20&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 522&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />31&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 42&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />40&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 511&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 51&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 52&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 53&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 53&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 54&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 62&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 512&nbsp;<br style="margin: 0px; padding: 0px; " />50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 522&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 51&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 52&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 53&nbsp;<br style="margin: 0px; padding: 0px; " 
/>60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 56&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 56&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 57&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 57&nbsp;<br style="margin: 0px; padding: 0px; " />60&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 61&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />63&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 61&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />70&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 54&nbsp;<br style="margin: 0px; padding: 0px; " />70&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 55&nbsp;<br style="margin: 0px; padding: 0px; " />70&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 56&nbsp;<br style="margin: 0px; padding: 0px; " />70&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 57&nbsp;<br style="margin: 0px; padding: 0px; " />70&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 58&nbsp;<br style="margin: 0px; padding: 0px; " />70&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 58&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />71&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 55&nbsp;<br style="margin: 0px; padding: 0px; " />71&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 56&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />73&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 57&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />74&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 58&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />203&nbsp;&nbsp;&nbsp;&nbsp; 21&nbsp;<br style="margin: 0px; padding: 0px; " 
/>------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />530&nbsp;&nbsp;&nbsp;&nbsp; 54&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />730&nbsp;&nbsp;&nbsp;&nbsp; 54&nbsp;<br style="margin: 0px; padding: 0px; " />------------------------------------------------&nbsp;<br style="margin: 0px; padding: 0px; " />740&nbsp;&nbsp;&nbsp;&nbsp; 58</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">2. How it works</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">Use map and reduce classes of the following shape (pay close attention to the input and output types; IntPair is a user-defined type):</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">public static class Map extends Mapper&lt;LongWritable, Text, IntPair, IntWritable&gt;&nbsp;<br style="margin: 0px; padding: 0px; " />public static class Reduce extends Reducer&lt;IntPair, NullWritable, IntWritable, IntWritable&gt;</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">The map phase uses job.setInputFormatClass(TextInputFormat) as the input format. Note that the map output must match the types declared in the custom Mapper, i.e. &lt;IntPair, IntWritable&gt;, ultimately producing a list of &lt;IntPair, IntWritable&gt; pairs. At the end of the map phase, the list is first partitioned by the class set with job.setPartitionerClass, each partition mapping to one reducer, and each partition is then sorted by the key comparator class set with job.setSortComparatorClass. As can be seen, this is itself already a secondary sort. If no key comparator class is set with job.setSortComparatorClass, the compareTo method implemented by the key is used instead. Of the examples that follow, the first uses the compareTo implemented by IntPair, while the next defines a dedicated key comparator class.</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">In the reduce phase, after a reducer has received all the map output mapped to it, it again sorts all the data pairs with the key comparator class set by job.setSortComparatorClass, and then begins building a value iterator per key. This is where grouping comes in, via the grouping comparator class set with job.setGroupingComparatorClass: whenever this comparator considers two keys equal, they belong to the same group, their values are placed in one value iterator, and the iterator's key is the first of all the keys belonging to the group. Finally the Reducer's reduce method is invoked once for every (key, value iterator) pair. Again, the input and output types must match those declared in the custom Reducer.</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">3. Concrete steps</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">(1) Define the key</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; color: #000000; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 19px; background-color: #f5f5f5; ">In MapReduce every key must be comparable and sortable, and the comparison happens twice: first via the partitioner, then by value. In this example the two comparisons are: first sort by the first field, then sort rows with equal first fields by the second field. Accordingly we build a composite class IntPair with two fields: partitioning orders by the first field, and the in-partition comparison orders by the second field.&nbsp;<br style="margin: 0px; padding: 0px; " />Every custom key should implement the interface WritableComparable, since it must be serializable and comparable, and should override these methods:<br /><div style="background-color: #eeeeee; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><!--<br /><br />Code highlighting produced by Actipro CodeHighlighter (freeware)<br />http://www.CodeHighlighter.com/<br 
/><br />--><span style="color: #008000; ">//&nbsp;deserialization: reconstruct the IntPair from the binary stream</span><br /><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;readFields(DataInput&nbsp;in)&nbsp;<span style="color: #0000FF; ">throws</span>&nbsp;IOException<br /><span style="color: #008000; ">//&nbsp;serialization: convert the IntPair into binary for transmission</span><br /><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">void</span>&nbsp;write(DataOutput&nbsp;out)<br /><span style="color: #008000; ">//&nbsp;key comparison</span><br /><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">int</span>&nbsp;compareTo(IntPair&nbsp;o)<br /><span style="color: #008000; ">//&nbsp;two further methods every new key class should override;<br />//&nbsp;hashCode()&nbsp;is&nbsp;used&nbsp;by&nbsp;the&nbsp;HashPartitioner&nbsp;(the&nbsp;default&nbsp;partitioner&nbsp;in&nbsp;MapReduce)</span><br /><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">int</span>&nbsp;hashCode()<br /><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">boolean</span>&nbsp;equals(Object&nbsp;right)</div>(2) Because the key is user-defined, the following classes must be defined as well:&nbsp;<br style="margin: 0px; padding: 0px; " />(2.1) The partitioner class. This is the key's first comparison.&nbsp;<br /><div style="background-color: #eeeeee; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;FirstPartitioner&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;Partitioner&lt;IntPair,IntWritable&gt;</div><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; ">Set the Partitioner in the job with setPartitionerClass.&nbsp;<br style="margin: 0px; padding: 0px; " />(2.2) The key comparator class. This is the key's second comparison. It is a comparator and must extend WritableComparator (i.e. implement the RawComparator interface).</p><p style="margin-top: 10px; margin-bottom: 10px; padding: 0px; ">(This is the second approach mentioned above; the code in part 3 does not implement this class but compares directly with the compareTo method, in which case the setting on the following line is not needed.)&nbsp;<br style="margin: 0px; padding: 0px; " />Set the key comparator class in the job with setSortComparatorClass.</p><div style="background-color: #eeeeee; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;KeyComparator&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;WritableComparator</div>(2.3) The grouping comparator class. When the value iterator for a key is built in the reduce phase, any two keys with the same first field belong to the same group and their values go into one value iterator. It is a comparator and must extend WritableComparator.&nbsp;<br /><div style="background-color: #eeeeee; border: 1px solid #cccccc; padding: 4px 5px 4px 4px; width: 98%; word-break: break-all; "><span style="color: #0000FF; ">public</span>&nbsp;<span style="color: #0000FF; ">static</span>&nbsp;<span style="color: #0000FF; ">class</span>&nbsp;GroupingComparator&nbsp;<span style="color: #0000FF; ">extends</span>&nbsp;WritableComparator</div>The grouping comparator class must also define a constructor, and override public int 
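compare(WritableComparable w1, WritableComparable w2). The interplay of the sort comparator and the grouping comparator can be demonstrated without Hadoop. The following is an illustrative plain-Java sketch: pairs are sorted by both fields, then reduce-style groups are formed by comparing the first field only, mimicking what the framework does between map and reduce:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Plain-Java sketch of secondary sort: sort by (first, second), then form
// groups by comparing first only, as a GroupingComparator would.
public class SecondarySortSketch {
    static class IntPair implements Comparable<IntPair> {
        final int first, second;
        IntPair(int f, int s) { first = f; second = s; }
        // Sort-comparator role: order by first field, then by second field.
        public int compareTo(IntPair o) {
            return first != o.first ? Integer.compare(first, o.first)
                                    : Integer.compare(second, o.second);
        }
    }
    public static void main(String[] args) {
        List<IntPair> pairs = new ArrayList<>(Arrays.asList(
                new IntPair(50, 54), new IntPair(20, 53),
                new IntPair(50, 51), new IntPair(20, 21)));
        Collections.sort(pairs);            // natural order = compareTo above
        Integer group = null;
        for (IntPair p : pairs) {           // grouping-comparator role: same first field
            if (group == null || p.first != group) {
                System.out.println("------------");
                group = p.first;
            }
            System.out.println(p.first + "\t" + p.second);
        }
    }
}
```

In the real job the grouping decision is made by the configured comparator rather than an explicit loop. As noted, the grouping comparator class must define a constructor and override public int 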
compare(WritableComparable w1, WritableComparable w2)&nbsp;<br style="margin: 0px; padding: 0px; " />Alternatively, the grouping comparator class can implement the RawComparator interface.&nbsp;<br style="margin: 0px; padding: 0px; " />Set the grouping comparator class in the job with setGroupingComparatorClass.&nbsp;<br style="margin: 0px; padding: 0px; " />Also note: if the reduce input and output types differ, do not reuse the reducer as a Combiner, because a Combiner's output is the reducer's input; define a separate Combiner instead.&nbsp;<br /><br /><br />Source:&nbsp;<a href="http://www.cnblogs.com/dandingyy/archive/2013/03/08/2950703.html" style="font-family: Arial; line-height: 26px; ">http://www.cnblogs.com/dandingyy/archive/2013/03/08/2950703.html</a></p></p><img src ="http://www.cppblog.com/mysileng/aggbug/198814.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-03-25 19:38 <a href="http://www.cppblog.com/mysileng/archive/2013/03/25/198814.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Questions you may be asked in a Hadoop interview</title><link>http://www.cppblog.com/mysileng/archive/2013/03/18/198540.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Mon, 18 Mar 2013 05:03:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/03/18/198540.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/198540.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/03/18/198540.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/198540.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/198540.html</trackback:ping><description><![CDATA[<p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">Questions you may be asked in a Hadoop interview. How many can you answer?</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">1. How does Hadoop work?</p><p style="color: 
#333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">2. How does MapReduce work?</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">3. How does HDFS store data?</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">4. Give a simple example of how a MapReduce job runs.</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">5. The interviewer poses a problem for you to solve with MapReduce.</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">For example: there are 10 folders, each containing 1000000 URLs; find the top 1000000 URLs.</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">6. What is the role of the Combiner in Hadoop?</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">Src:&nbsp;<a href="http://p-x1984.javaeye.com/blog/859843" target="_blank" style="color: #444444; text-decoration: none; ">http://p-x1984.javaeye.com/blog/859843</a></p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><br /></p><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><span style="line-height: 18px; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; color: #222222; "><strong><em><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><em><strong>Q1. Name the most common InputFormats defined in&nbsp;<strong>Hadoop</strong>? 
Which one is the default?&nbsp;</strong></em></span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><br /></span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; ">The following are the most common InputFormats defined in</span>&nbsp;<span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><strong>Hadoop</strong></span>&nbsp;<span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><br /></span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; ">- TextInputFormat</span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><br /></span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; ">- KeyValueInputFormat</span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><br /></span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; ">- SequenceFileInputFormat</span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; "><br /></span><span style="line-height: normal; font-style: normal; font-family: Simsun; color: #000000; font-size: 16px; font-weight: normal; ">TextInputFormat is the default.</span></em></strong></span></div><span style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">Q2. 
What is the difference between the TextInputFormat and KeyValueInputFormat classes</span><br style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; " /><span style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">TextInputFormat: It reads lines of text files and provides the byte offset of the line as the key to the Mapper and the actual line as the value to the mapper</span><br style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; " /><span style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">KeyValueInputFormat: Reads text files and parses each line into a key, value pair. Everything up to the first tab character is sent as the key to the Mapper and the remainder of the line is sent as the value to the mapper.</span><br style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; " /><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q3. What is InputSplit in&nbsp;<strong style="color: black; ">Hadoop</strong></em></strong></div><div>When a&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;job is run, it splits input files into chunks and assigns each split to a mapper to process. This is called an Input Split&nbsp;</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q4. 
How is the splitting of files invoked in the&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;Framework&nbsp;</em></strong></div><div>It is invoked by the&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;framework by running the getSplits() method of the InputFormat class (such as FileInputFormat) defined by the user&nbsp;</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q5. Consider case scenario: In M/R system,</em></strong></div><div><strong><em>&nbsp;&nbsp; &nbsp;- HDFS block size is 64 MB</em></strong></div><div><strong><em>&nbsp;&nbsp; &nbsp;- Input format is FileInputFormat</em></strong></div><div><strong><em>&nbsp;&nbsp; &nbsp;- We have 3 files of size 64KB, 65MB and 127MB&nbsp;</em></strong></div><div><strong><em>then how many input splits will be made by the&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;framework?</em></strong></div><div><strong style="color: black; ">Hadoop</strong>&nbsp;will make 5 splits as follows&nbsp;</div><div>- 1 split for the 64KB file&nbsp;</div><div>- 2 splits for the 65MB file&nbsp;</div><div>- 2 splits for the 127MB file&nbsp;</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q6. What is the purpose of RecordReader in&nbsp;<strong style="color: black; ">Hadoop</strong></em></strong></div><div>The InputSplit has defined a slice of work, but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. 
The RecordReader instance is defined by the InputFormat&nbsp;</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q7. After the Map phase finishes, the&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;framework does "Partitioning, Shuffle and sort". Explain what happens in this phase?</em></strong></div><div>- Partitioning</div><div>Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine for all of its output (key, value) pairs which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same</div><div><strong><br /></strong></div><div>- Shuffle</div><div>After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.</div><div></div><div>- Sort</div><div>Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;before they are presented to the Reducer&nbsp;</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q9. 
If no custom partitioner is defined in&nbsp;<strong style="color: black; ">hadoop</strong>, then how is data partitioned before it is sent to the reducer</em></strong>&nbsp;</div><div>The default partitioner computes a hash value for the key and assigns the partition based on this result&nbsp;</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><strong><em>Q10. What is a Combiner</em></strong>&nbsp;</div><div>The Combiner is a "mini-reduce" process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.</div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div style="line-height: normal; font-family: Simsun; color: #000000; font-size: 16px; "><div style="margin: 0px; "><strong><em>Q11. 
Give an example scenario where a combiner can be used and where it cannot be used</em></strong></div></div><div style="line-height: normal; font-family: Simsun; color: #000000; font-size: 16px; "><div style="margin: 0px; ">There can be several examples; the following are the most common ones</div></div><div style="line-height: normal; font-family: Simsun; color: #000000; font-size: 16px; "><div><div style="margin: 0px; ">- Scenario where you can use a combiner</div></div><div><div style="margin: 0px; ">&nbsp;&nbsp;Getting the list of distinct words in a file</div></div><div><div style="margin: 0px; "></div></div><div><div style="margin: 0px; ">- Scenario where you cannot use a combiner</div></div><div><div style="margin: 0px; ">&nbsp;&nbsp;Calculating the mean of a list of numbers&nbsp;</div></div></div></div><div style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; "><div><div style="margin: 0px; "><strong><em>Q12.&nbsp;What is the job tracker</em></strong></div></div><div><div style="margin: 0px; ">Job Tracker is the service within&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;that runs Map Reduce jobs on the cluster</div></div><div><div style="margin: 0px; "></div></div><div><div style="margin: 0px; "><strong><em>Q13. 
What are some typical functions of the Job Tracker</em></strong></div></div><div><div><div style="margin: 0px; ">The following are some typical tasks of the Job Tracker</div></div><div><div style="margin: 0px; ">- Accepts jobs from clients</div></div><div><div style="margin: 0px; ">-&nbsp;It talks to the NameNode to determine the location of the data</div></div><div><div style="margin: 0px; ">-&nbsp;It locates TaskTracker nodes with available slots at or near the data</div></div><div><div style="margin: 0px; ">-&nbsp;It submits the work to the chosen Task Tracker nodes and monitors the progress of each task by receiving heartbeat signals from the Task tracker&nbsp;</div></div></div><div><div style="margin: 0px; "></div></div><div><div style="margin: 0px; "><strong><em>Q14.&nbsp;What is a task tracker</em></strong></div></div><div><div style="margin: 0px; ">Task Tracker is a node in the cluster that accepts tasks - Map, Reduce and Shuffle operations - from a JobTracker&nbsp;</div></div><div><div style="margin: 0px; "><strong><em><br /></em></strong></div></div><div><div style="margin: 0px; "><strong><em>Q15. What's the relationship between Jobs and Tasks in&nbsp;<strong style="color: black; ">Hadoop</strong></em></strong></div></div><div><div style="margin: 0px; ">One job is broken down into one or many tasks in&nbsp;<strong style="color: black; ">Hadoop</strong>.&nbsp;</div></div><div><div style="margin: 0px; "></div></div><div><div style="margin: 0px; "><strong><em>Q16. Suppose&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;spawned 100 tasks for a job and one of the tasks failed. 
What will&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;do?</em></strong></div></div><div><div style="margin: 0px; ">It will restart the task on some other task tracker, and only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job</div></div><div><div style="margin: 0px; "></div></div><div><div style="margin: 0px; "><strong><em>Q17.&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;achieves parallelism by dividing the tasks across many nodes; it is possible for a few slow nodes to rate-limit the rest of the program and slow down the program. What mechanism does&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;provide to combat this</em></strong> &nbsp;</div></div><div><div style="margin: 0px; ">Speculative Execution&nbsp;</div></div><div><div style="margin: 0px; "></div></div><div><div><div style="margin: 0px; "><strong><em>Q18. How does speculative execution work in&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;</em></strong></div></div><div><div style="margin: 0px; ">The Job tracker makes different task trackers process the same input. When tasks complete, they announce this fact to the Job Tracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively,&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;tells the Task Trackers to abandon the tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully, first.&nbsp;</div></div></div><div><div style="margin: 0px; "></div></div><div><div><div style="margin: 0px; "><strong><em>Q19. 
Using the command line in Linux, how will you&nbsp;</em></strong></div></div><div><div style="margin: 0px; "><strong><em>- see all jobs running in the&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;cluster</em></strong></div></div><div><div style="margin: 0px; "><strong><em>- kill a job</em></strong></div></div><div><div style="margin: 0px; ">-&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;job -list</div></div><div><div style="margin: 0px; ">-&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;job -kill jobid&nbsp;</div></div></div><div><div style="margin: 0px; "></div></div><div><div style="margin: 0px; "><strong><em>Q20. What is&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;Streaming&nbsp;</em></strong></div><div style="margin: 0px; ">Streaming is a generic API that allows programs written in virtually any language to be used as&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;Mapper and Reducer implementations&nbsp;</div><div></div></div><div><div style="margin: 0px; "><br /><div style="margin: 0px; "><strong><em>Q21. What is the characteristic of the streaming API that makes it flexible enough to run map reduce jobs in languages like perl, ruby, awk etc.&nbsp;</em></strong></div><div style="margin: 0px; "><strong style="color: black; ">Hadoop</strong>&nbsp;Streaming allows arbitrary programs to be used for the Mapper and Reducer phases of a Map Reduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.</div><div style="margin: 0px; "><span style="line-height: normal; font-family: Simsun; color: #000000; font-size: 16px; "><strong><em>Q22. 
What is Distributed Cache in&nbsp;<strong>Hadoop</strong></em></strong></span><span style="line-height: normal; font-family: Simsun; color: #000000; font-size: 16px; "><br /></span><span style="line-height: normal; font-family: Simsun; color: #000000; font-size: 16px; ">Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.</span></div><div style="margin: 0px; "><span style="line-height: 18px; "><strong><em>Q23. What is the benefit of Distributed Cache; why can't we just have the file in HDFS and have the application read it&nbsp;</em></strong><br />This is because the distributed cache is much faster. It copies the file to all task trackers at the start of the job. Now if a task tracker runs 10 or 100 mappers or reducers, it will use the same copy of the distributed cache. On the other hand, if your MR job code reads the file from HDFS directly, then every mapper will try to access it from HDFS; hence if a task tracker runs 100 map tasks then it will try to read this file 100 times from HDFS. Also, HDFS is not very efficient when used like this.<br /><br /><strong><em>Q24. What mechanism does the&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;framework provide to synchronize changes made in the Distributed Cache during the runtime of the application&nbsp;</em></strong><br />This is a trick&nbsp;<strong style="color: black; ">question</strong>. There is no such mechanism. The Distributed Cache by design is read only during the time of Job execution<br /><br /><strong><em>Q25. Have you ever used Counters in&nbsp;<strong style="color: black; ">Hadoop</strong>? 
Give us an example scenario</em></strong><br />Anybody who claims to have worked on a&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;project is expected to have used counters<br /><br /><strong><em>Q26. Is it possible to provide multiple inputs to&nbsp;<strong style="color: black; ">Hadoop</strong>? If yes, then how can you give multiple directories as input to the&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;job&nbsp;</em></strong><br />Yes, the input format class provides methods to add multiple directories as input to a&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;job<br /><br /><strong><em>Q27. Is it possible to have&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;job output in multiple directories? If yes, then how&nbsp;</em></strong><br />Yes, by using the MultipleOutputs class<br /><br /><strong><em>Q28. What will a&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;job do if you try to run it with an output directory that is already present? Will it<br />- overwrite it<br />- warn you and continue<br />- throw an exception and exit</em></strong><br />The&nbsp;<strong style="color: black; ">hadoop</strong>&nbsp;job will throw an exception and exit.<br /><br /><strong><em>Q29. How can you set an arbitrary number of mappers to be created for a job in&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;</em></strong><br />This is a trick question. You cannot set it<br /><br /><strong><em>Q30. 
How can you set an arbitrary number of reducers to be created for a job in&nbsp;<strong style="color: black; ">Hadoop</strong>&nbsp;</em></strong><br />You can either do it programmatically by using the setNumReduceTasks method of the JobConf class or set it up as a configuration setting</span></div></div></div></div><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">&nbsp;</p><p style="color: #333333; font-family: verdana, sans-serif; font-size: 13px; line-height: 17px; background-color: #ffffff; ">Src:<a href="http://xsh8637.blog.163.com/blog/#m=0&amp;t=1&amp;c=fks_084065087084081065083083087095086082081074093080080069" target="_blank" style="color: #444444; text-decoration: none; ">http://xsh8637.blog.163.com/blog/#m=0&amp;t=1&amp;c=fks_084065087084081065083083087095086082081074093080080069</a></p><img src ="http://www.cppblog.com/mysileng/aggbug/198540.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-03-18 13:03 <a href="http://www.cppblog.com/mysileng/archive/2013/03/18/198540.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>A small-file solution based on Hadoop SequenceFile</title><link>http://www.cppblog.com/mysileng/archive/2013/03/04/198218.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Mon, 04 Mar 2013 11:28:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/03/04/198218.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/198218.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/03/04/198218.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/198218.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/198218.html</trackback:ping><description><![CDATA[<div><p 
style="text-indent: 55.4pt;"><strong>基于</strong><strong><span style="font-size: 16pt;">Hadoop Sequencefile</span></strong><strong>的小文件解决方案</strong><strong></strong></p> <p>&nbsp;</p> <p><strong>一、</strong><strong> </strong><strong>概述</strong><strong></strong></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;">&nbsp;&nbsp; <span style="font-size: 9pt;">小文件是指文件</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">size</span><span style="font-size: 9pt;">小于</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HDFS</span><span style="font-size: 9pt;">上</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">block</span><span style="font-size: 9pt;">大小的文件。这样的文件会给</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">hadoop</span><span style="font-size: 9pt;">的扩展性和性能带来严重问题。首先，在</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HDFS</span><span style="font-size: 9pt;">中，任何</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">block</span><span style="font-size: 9pt;">，文件或者目录在内存中均以对象的形式存储，每个对象约占</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">150byte</span><span style="font-size: 9pt;">，如果有</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">1000 0000</span><span style="font-size: 9pt;">个小文件，每个文件占用一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">block</span><span style="font-size: 9pt;">，则</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">namenode</span><span style="font-size: 9pt;">大约需要</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">2G</span><span style="font-size: 9pt;">空间。如果存储</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">1</span><span style="font-size: 9pt;">亿个文件，则</span><span 
style="font-size: 9pt; font-family: Verdana,sans-serif;">namenode</span><span style="font-size: 9pt;">需要</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">20G</span><span style="font-size: 9pt;">空间。这样</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">namenode</span><span style="font-size: 9pt;">内存容量严重制约了集群的扩展。</span> <span style="font-size: 9pt;">其次，访问大量小文件速度远远小于访问几个大文件。</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HDFS</span><span style="font-size: 9pt;">最初是为流式访问大文件开发的，如果访问大量小文件，需要不断的从一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">datanode</span><span style="font-size: 9pt;">跳到另一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">datanode</span><span style="font-size: 9pt;">，严重影响性能。最后，处理大量小文件速度远远小于处理同等大小的大文件的速度。每一个小文件要占用一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">slot</span><span style="font-size: 9pt;">，而</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">task</span><span style="font-size: 9pt;">启动将耗费大量时间甚至大部分时间都耗费在启动</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">task</span><span style="font-size: 9pt;">和释放</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">task</span><span style="font-size: 9pt;">上。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><strong>二、</strong><strong><span style="font-family: Verdana,sans-serif;">Hadoop</span></strong><strong>自带的解决方案</strong></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; text-indent: 18pt; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt;">对于小文件问题，</span><span style="font-size: 9pt; font-family: 
Verdana,sans-serif;">Hadoop</span><span style="font-size: 9pt;">本身也提供了几个解决方案，分别为：</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">Hadoop Archive</span><span style="font-size: 9pt;">，</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">Sequence file</span><span style="font-size: 9pt;">和</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">CombineFileInputFormat</span><span style="font-size: 9pt;">。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt;">（</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">1</span><span style="font-size: 9pt;">）</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;"> Hadoop Archive</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt; font-family: Verdana,sans-serif;">Hadoop Archive</span><span style="font-size: 9pt;">或者</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HAR</span><span style="font-size: 9pt;">，是一个高效地将小文件放入</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HDFS</span><span style="font-size: 9pt;">块中的文件存档工具，它能够将多个小文件打包成一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HAR</span><span style="font-size: 9pt;">文件，这样在减少</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">namenode</span><span style="font-size: 9pt;">内存使用的同时，仍然允许对文件进行透明的访问。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span 
style="font-size: 9pt;">使用</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HAR</span><span style="font-size: 9pt;">时需要两点，第一，对小文件进行存档后，原文件并不会自动被删除，需要用户自己删除；第二，创建</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">HAR</span><span style="font-size: 9pt;">文件的过程实际上是在运行一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">mapreduce</span><span style="font-size: 9pt;">作业，因而需要有一个</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">hadoop</span><span style="font-size: 9pt;">集群运行此命令。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt; color: red;">该方案需人工进行维护，适用管理人员的操作，而且</span>har<span style="font-size: 9pt; color: red;">文件一旦创建，</span>Archives<span style="font-size: 9pt; color: red;">便不可改变，不能应用于多用户的互联网操作。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt;">（</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">2</span><span style="font-size: 9pt;">）</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;"> Sequence file</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt; font-family: Verdana,sans-serif;">sequence file</span><span style="font-size: 9pt;">由一系列的二进制</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">key/value</span><span style="font-size: 9pt;">组成，如果为</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">key</span><span 
style="font-size: 9pt;">小文件名，</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">value</span><span style="font-size: 9pt;">为文件内容，则可以将大批小文件合并成一个大文件。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt; font-family: Verdana,sans-serif;">Hadoop-0.21.0</span><span style="font-size: 9pt;">中提供了</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">SequenceFile</span><span style="font-size: 9pt;">，包括</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">Writer</span><span style="font-size: 9pt;">，</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">Reader</span><span style="font-size: 9pt;">和</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">SequenceFileSorter</span><span style="font-size: 9pt;">类进行写，读和排序操作。如果</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">hadoop</span><span style="font-size: 9pt;">版本低于</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">0.21.0</span><span style="font-size: 9pt;">的版本，实现方法可参见</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">[3]</span><span style="font-size: 9pt;">。</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;">&nbsp;</p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt; color: red;">该方案对于小文件的存取都比较自由，不限制用户和文件的多少，但是</span>SequenceFile<span style="font-size: 9pt; color: red;">文件不能追加写入，适用于一次性写入大量小文件的操作。</span></p> <p style="margin: 3.75pt 
0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;">&nbsp;</p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt;">（</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">3</span><span style="font-size: 9pt;">）</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">CombineFileInputFormat</span></p> <p style="margin: 3.75pt 0cm; background: white none repeat scroll 0% 0%; line-height: 13.5pt; -moz-background-clip: -moz-initial; -moz-background-origin: -moz-initial; -moz-background-inline-policy: -moz-initial;"><span style="font-size: 9pt; font-family: Verdana,sans-serif;">CombineFileInputFormat</span><span style="font-size: 9pt;">是一种新的</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">inputformat</span><span style="font-size: 9pt;">，用于将多个文件合并成一个单独的</span><span style="font-size: 9pt; font-family: Verdana,sans-serif;">split</span><span style="font-size: 9pt;">，另外，它会考虑数据的存储位置。</span></p> <p><span style="color: #ff0000;">该方案版本比较老，网上资料甚少，从资料来看应该没有第二种方案好。</span></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>三、<strong>小文件问题解决方案</strong></p> <p style="text-indent: 22.5pt;">在原有<span style="font-size: 9pt; font-family: Verdana,sans-serif;">HDFS</span>基础上添加一个小文件处理模块，具体操作流程如下<span style="font-size: 9pt; font-family: Verdana,sans-serif;">:</span></p> <p><span style="font-size: 9pt; font-family: Verdana,sans-serif;">&nbsp; &nbsp; &nbsp; &nbsp;1.&nbsp;&nbsp;&nbsp;</span>当用户上传文件时，判断该文件是否属于小文件，如果是，则交给小文件处理模块处理，否则，交给通用文件处理模块处理。</p> <p><span style="font-size: 9pt; font-family: Verdana,sans-serif;">&nbsp; &nbsp; &nbsp; &nbsp;2.&nbsp;&nbsp;</span>在小文件模块中开启一定时任务，其主要功能是当模块中文件总<span style="font-size: 9pt; font-family: 
Verdana,sans-serif;">size</span>大于<span style="font-size: 9pt; font-family: Verdana,sans-serif;">HDFS</span>上<span style="font-size: 9pt; font-family: Verdana,sans-serif;">block</span>大小时，则通过<span style="font-size: 9pt; font-family: Verdana,sans-serif;">SequenceFile</span>组件以文件名做<span style="font-size: 9pt; font-family: Verdana,sans-serif;">key</span>，相应的文件内容为<span style="font-size: 9pt; font-family: Verdana,sans-serif;">value</span>，将这些小文件一次性写入<span style="font-size: 9pt; font-family: Verdana,sans-serif;">hdfs</span>模块。</p> <p><span style="font-size: 9pt; font-family: Verdana,sans-serif;">&nbsp; &nbsp; &nbsp; &nbsp;3.&nbsp;</span>同时删除已处理的文件，并将结果写入数据库。</p> <p><span style="font-size: 9pt; font-family: Verdana,sans-serif;">&nbsp; &nbsp; &nbsp; &nbsp;4.&nbsp;&nbsp;</span>当用户进行读取操作时，可根据数据库中的结果标志来读取文件。</p></div><br />转自:http://lxm63972012.iteye.com/blog/1429011<img src ="http://www.cppblog.com/mysileng/aggbug/198218.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-03-04 19:28 <a href="http://www.cppblog.com/mysileng/archive/2013/03/04/198218.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>hadoop jar xxxx.jar的流程 </title><link>http://www.cppblog.com/mysileng/archive/2013/03/02/198176.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Sat, 02 Mar 2013 09:28:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/03/02/198176.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/198176.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/03/02/198176.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/198176.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/198176.html</trackback:ping><description><![CDATA[<div>jar -cvf xxx.jar .<br 
/>hadoop jar xxx.jar class-name [input] [output]<br />----------------------------------------------------------------------<br />hadoop jar hadoop-0.20.2-examples.jar [class name]的实质是:</div><div>1.利用hadoop这个脚本启动一个jvm进程;</div><div>2.jvm进程去运行org.apache.hadoop.util.RunJar这个java类;</div><div>3.org.apache.hadoop.util.RunJar解压hadoop-0.20.2-examples.jar到hadoop.tmp.dir/hadoop-unjar*/目录下;</div><div>4.org.apache.hadoop.util.RunJar动态地加载并运行Main-Class或指定的Class;</div><div>5.Main-Class或指定的Class中设定Job的各项属性</div><div>6.提交job到JobTracker上并监视运行情况。</div><div></div><div>注意：以上都是在jobClient上执行的。</div><div></div><div>运行jar文件的时候，jar会被解压到hadoop.tmp.dir/hadoop-unjar*/目录下（如：/home/hadoop/hadoop-fs/dfs/temp/hadoop-unjar693919842639653083, 注意：这个目录是JobClient的目录，不是JobTracker的目录）。解压后的文件为：</div><div>drwxr-xr-x 2 hadoop hadoop 4096 Jul 30 15:40 META-INF</div><div>drwxr-xr-x 3 hadoop hadoop 4096 Jul 30 15:40 org</div><div>有图有真相：<br /><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130302164152.jpg" width="581" height="268" alt="" /><br /><div>提交job的实质是：</div><div>生成${job-id}/job.xml文件到hdfs://${mapred.system.dir}/（比如hdfs://bcn152:9990/home/hadoop/hadoop-fs/dfs/temp/mapred/system/job_201007301137_0012/job.xml），job的描述包括jar文件的路径，map|reduce类路径等等。</div><div>上传${job-id}/job.jar文件到hdfs://${mapred.system.dir}/（比如hdfs://bcn152:9990/home/hadoop/hadoop-fs/dfs/temp/mapred/system/job_201007301137_0012/job.jar）</div><div>有图有真相：</div><img src="http://www.cppblog.com/images/cppblog_com/mysileng/QQ截图20130302164522.jpg" width="486" height="68" alt="" /><br /><div>生成job之后，通过static JobClient.runJob()就会向jobTracker提交job:</div><div>JobClient jc = new JobClient(job);</div><div>RunningJob rj = jc.submitJob(job);</div><div>之后JobTracker就会调度此job。</div><div></div><div>提交job之后，使用下面的代码获取job的进度：</div><div>&nbsp; &nbsp; try {</div><div>&nbsp; &nbsp; &nbsp; if (!jc.monitorAndPrintJob(job, rj)) {</div><div>&nbsp; &nbsp; &nbsp; &nbsp; throw new IOException("Job failed!");</div><div>&nbsp; &nbsp; &nbsp; 
}</div><div>&nbsp; &nbsp; } catch (InterruptedException ie) {</div><div>&nbsp; &nbsp; &nbsp; Thread.currentThread().interrupt();</div><div>&nbsp; &nbsp; }</div><br /><br /></div><img src ="http://www.cppblog.com/mysileng/aggbug/198176.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-03-02 17:28 <a href="http://www.cppblog.com/mysileng/archive/2013/03/02/198176.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>hadoop 序列化源码浅析 (转)</title><link>http://www.cppblog.com/mysileng/archive/2013/01/15/197301.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Tue, 15 Jan 2013 13:48:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2013/01/15/197301.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/197301.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2013/01/15/197301.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/197301.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/197301.html</trackback:ping><description><![CDATA[&nbsp;&nbsp;&nbsp;&nbsp; 摘要: 转自：http://my.oschina.net/tuzibuluo/blog?catalog=1278261.Writable接口&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Hadoop&nbsp;并没有使用&nbsp;JAVA&nbsp;的序列化，而是引入了自己实的序列化系统，&nbsp;package&nbsp;org.apache.hadoop.io&nbsp;这个...&nbsp;&nbsp;<a href='http://www.cppblog.com/mysileng/archive/2013/01/15/197301.html'>阅读全文</a><img src ="http://www.cppblog.com/mysileng/aggbug/197301.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2013-01-15 21:48 <a href="http://www.cppblog.com/mysileng/archive/2013/01/15/197301.html#Feedback" target="_blank" 
style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>HADOOP_CLASSPATH设置(转)</title><link>http://www.cppblog.com/mysileng/archive/2012/12/28/196753.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Fri, 28 Dec 2012 12:44:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2012/12/28/196753.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/196753.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2012/12/28/196753.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/196753.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/196753.html</trackback:ping><description><![CDATA[<div style="color: #323e32; font-family: simsun; background-color: #d8cce0; ">在写hadoop程序编译时，往往需要HADOOP_CLASSPATH路径，可通过以下方式进行在编译脚本中设置：</div><div style="color: #323e32; font-family: simsun; background-color: #d8cce0; ">for f in $HADOOP_HOME/hadoop-*.jar; do<br />CLASSPATH=${CLASSPATH}:$f<br />done<br /><br />for f in $HADOOP_HOME/lib/*.jar; do<br />CLASSPATH=${CLASSPATH}:$f<br />done<br /><br />for f in $HIVE_HOME/lib/*.jar; do<br />CLASSPATH=${CLASSPATH}:$f<br />done</div>&nbsp;<br />转自：<a href="http://blog.sina.com.cn/s/blog_62a9902f01017x7j.html">http://blog.sina.com.cn/s/blog_62a9902f01017x7j.html</a><img src ="http://www.cppblog.com/mysileng/aggbug/196753.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2012-12-28 20:44 <a href="http://www.cppblog.com/mysileng/archive/2012/12/28/196753.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>CentOS 5.5 安装hadoop-0.21.0(转)</title><link>http://www.cppblog.com/mysileng/archive/2012/12/25/196620.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Tue, 25 Dec 2012 12:54:00 
GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2012/12/25/196620.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/196620.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2012/12/25/196620.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/196620.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/196620.html</trackback:ping><description><![CDATA[<p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">倒腾了一天，终于在CentOS上装上了hadoop-0.21.0，特此记录，以备后用。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">操作系统：CentOS 5.5</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">Hadoop：hadoop-0.21.0<br />JDK：1.6.0_17<br />namenode主机名:master，namenode的IP:<span style="color: #000000; ">192.168.90.91</span><br />datanode主机名:slave，datanode的IP:<span style="color: #000000; ">192.168.90.94<br /><br /></span><span style="color: #ff0000; ">第一步：安装并启动ssh服务<br /></span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">CentOS 5.5安装完毕之后以及默认启动了sshd服务，可以在&#8220;系统&#8221;－&gt;&#8220;管理&#8221;-&gt;&#8220;服务&#8221;中查看sshd服务是否启动。当然了，如果机器上没有安装ssh服务，则执行命令<span style="color: #0055ff; ">sudo yum install ssh<span style="color: #000000; ">来安装。安装</span></span></span><span style="color: #000000; ">rsync，它是一个远程数据同步工具，可通过 LAN/WAN 快速同步多台主机间的文件</span><span style="color: #000000; ">，执行命令<span style="color: #0055ff; ">sudo yum install rsync。修改每个节点的/etc/hosts文件，将</span>&nbsp;namenode和datanode的IP信息加入到该文件的尾部：</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">192.168.90.91</span>&nbsp;master<br /><span style="color: #000000; ">192.168.90.94</span>&nbsp;slave<br /></p><p 
style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #ff0000; ">第二步，配置SSH服务</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（1），（2）是针对每一台机器</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（1）创建hadoop用户名与用户组</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp; 运行命令<span style="color: #0055ff; ">su - root</span>，注意，不是命令<span style="color: #0055ff; ">su root</span>，后者不能携带root用户的参数信息，是不能执行创建用户组和用户命令的。执行命令：<span style="color: #0055ff; ">groupadd hadoop</span>和命令<span style="color: #0055ff; ">useradd -g hadoop hadoop。<span style="color: #ff0000; ">注意不能在/home目录下创建hadoop目录，否则创建hadoop用户会失败。创建好用户以后最好是重新启动计算机，以hadoop用户登录系统。这样在之后的操作中就不需要su到hadoop用户下，而且也不会纠缠于文件的owner问题。</span></span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">（2）生成ssh密钥</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">&nbsp;&nbsp;&nbsp;&nbsp; 如果是其他用户登录的则切换到hadoop用户下，执行命令<span style="color: #0055ff; ">su - hadoop，在/home/hadoop目录下执行命令：</span></span>ssh-keygen -t rsa（一路回车，选择默认的保存路径），密钥生成成功之后，进入.ssh目录，执行<span style="color: #0055ff; ">cd .ssh</span>，执行命令：<span style="color: #0055ff; ">cp id_rsa.pub authorized_keys</span>。这个时候运行ssh localhost，让系统记住用户，之后ssh localhost就不需要再输入密码了。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（3）交换公钥</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp; 将namenode上的公钥拷贝到datanode，在hadoop用户的用户目录下（/home/hadoop）下执行命令ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@slave。同理，也可以将datanode上的公钥拷贝到namenode，但这不是必须的。这样两台机器在hadoop用户下互相ssh就不需要密码了。</p><p style="color: #333333; font-family: Arial; 
line-height: 26px; background-color: #ffffff; ">&nbsp;</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #ff0000; ">第三步，安装JDK1.6或以上（每台机器）</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（1）执行命令yum install jdk</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（2）如果第一步没有找到源码包，那么就需要到官网上下载了，https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u22-oth-JPR@CDS-CDS_Developer。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（3）新建目录/usr/java，将源码包jdk-6u22-linux-i586.bin复制到该目录下，执行命令chmod a+x&nbsp;<span style="color: #0055ff; ">jdk-6u22-linux-i586.bin</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 使当前用户拥有对<span style="color: #0055ff; ">jdk-6u22-linux-i586.bin</span>的执行权限。执行命令<span style="color: #0055ff; ">sudo ./jdk-6u22-linux-i586.bin</span>进行安装</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（4）修改/etc/profile来添加环境变量，/etc/profile中设置的环境变量就像Windows下环境变量中的系统变量一样，所有用户都可以使用。<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 用文本编辑器打开/etc/profile<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #0055ff; "># vi /etc/profile</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 在最后加入以下几行：<br /><span style="color: #0055ff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export JAVA_HOME=/usr/java/jdk1.6.0_22</span><br /><span style="color: #0055ff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar</span><br /><span style="color: #0055ff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; export PATH=$PATH:$JAVA_HOME/bin</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 这样我们就设置好了JDK，在centos下&nbsp;<span style="color: #0055ff; ">source /etc/profile</span>&nbsp;就可以生效了.</p><p style="color: #333333; font-family: Arial; line-height: 26px; 
background-color: #ffffff; ">运行命令java -version可以判断是否安装成功</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">第四步，安装hadoop</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">原来现在才开始安装hadoop，准备工作也作得太多了，废话少说。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">（1）新建目录/usr/local/hadoop，将hadoop-0.21.0.tar.gz解压缩到该目录下，执行命令<span style="color: #0055ff; ">sudo tar -xvzf hadoop-0.21.0.tar.gz</span>，修改/etc/profile文件，将hadoop的安装目录append到文件最后：</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #0055ff; ">export HADOOP_HOME=/usr/local/hadoop/hadoop-0.21.0</span><br /><span style="color: #0055ff; ">export PATH=$HADOOP_HOME/bin:$PATH<br /><span style="color: #000000; ">（2）配置/conf/hadoop-env.sh文件，修改java_home环境变量<br />export JAVA_HOME=/usr/java/jdk1.6.0_22/<br /></span></span><span style="font-size: 15px; line-height: 24px; background-color: #f5f7f8;">export HADOOP_CLASSPATH=.</span><br /><span style="color: #0055ff; "><span style="color: #000000; ">（3）配置 core-site.xml 文件<br />&lt;configuration&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;property&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;name&gt;hadoop.tmp.dir&lt;/name&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;value&gt;/usr/local/hadoop/hadoop-0.21.0/tmp&lt;/value&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style="color: #ff0000; ">(注意，请先在 hadoopinstall 目录下建立 tmp 文件夹)</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;description&gt;A base for other temporary directories.&lt;/description&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;/property&gt;<br />&lt;!-- file system properties --&gt;<br 
/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;property&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;name&gt;fs.default.name&lt;/name&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;value&gt;hdfs://master:54310&lt;/value&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;/property&gt;<br />&lt;/configuration&gt;<br />（4）配置 hdfs-site.xml 文件<br />&lt;configuration&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;property&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;name&gt;dfs.replication&lt;/name&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;value&gt;1&lt;/value&gt;<span style="color: #ff0000; ">（这里共两台机器，如果将主节点也配置为datanode，则这里可以写2）</span><br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;/property&gt;<br />&lt;/configuration&gt;<br />（5）配置 mapred-site.xml 文件<br />&lt;configuration&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;property&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;name&gt;mapred.job.tracker&lt;/name&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;value&gt;master:54311&lt;/value&gt;<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; &lt;/property&gt;<br />&lt;/configuration&gt;<br />（6）配置 conf/masters 文件，加入 namenode 的 ip 地址<br />master<br />（7）配置 slaves 文件，加入所有 datanode 的 ip 地址<br /></span></span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">slave</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #ff0000; ">(如果之前的</span><span style="color: #ff0000; ">hdfs-site.xml文件中的拷贝数设置为2，则需要将master也加入到slaves文件中</span><span style="color: #ff0000; ">)</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">（8）将 
namenode 上配置好的 hadoop 所在文件夹 hadoop-0.21.0 复制到<br />datanode 的/usr/local/hadoop/目录下（实际上 masters、slaves 文件是不必要的，复制了也<br />没问题）。<br />（9）配置datanode的/etc/profile 文件，在文件尾append下列内容：<br /></span><span style="color: #0055ff; ">export HADOOP_HOME=/usr/local/hadoop/hadoop-0.21.0</span><br /><span style="color: #0055ff; ">export PATH=$HADOOP_HOME/bin:$PATH<br /><br /><span style="color: #ff0000; ">第五步，启动hadoop</span><br /></span><span style="color: #000000; ">首先记得关闭系统的防火墙，root用户下执行命令</span><span style="color: #0055ff; ">/etc/init.d/iptables stop</span>，运行命令<span style="color: #0055ff; ">/etc/init.d/iptables status</span>检查防火墙状态。<span style="color: #000000; ">hadoop用户下，在namenode的/usr/local/hadoop/hadoop-0.21.0/bin目录下打开终端，执行命令<span style="color: #0055ff; ">hadoop namenode -format，<span style="color: #000000; ">格式化namenode。<span style="color: #ff0000; ">注意，</span></span></span></span><span style="color: #ff0000; ">/usr/local/hadoop/hadoop-0.21.0/tmp目录必须是可写的，否则在格式化时会出现异常。</span><span style="color: #000000; ">执行命令<span style="color: #0055ff; ">start-all.sh</span>启动hadoop集群，执行命令<span style="color: #0055ff; ">jps</span>查看进程，执行命令<span style="color: #0055ff; ">hadoop dfsadmin -report</span>查看状态。在浏览器中输入http://master:50070以web方式查看集群状态，在http://master:50030查看jobtracker的运行状态。<br />PS：格式化namenode的时候最好将节点的tmp目录清空、删除logs目录中的文件。<br /></span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">到这里，基于CentOS5.5的hadoop集群搭建完毕！</span></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><span style="color: #000000; ">参考资料：http://www.ibm.com/developerworks/cn/linux/l-hadoop-2/index.html</span></p><img src ="http://www.cppblog.com/mysileng/aggbug/196620.html" width = "1" height = "1" /><br><br><div align=right><a 
style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2012-12-25 20:54 <a href="http://www.cppblog.com/mysileng/archive/2012/12/25/196620.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>从Hadoop框架与MapReduce模式中谈海量数据处理（含淘宝技术架构）(转)</title><link>http://www.cppblog.com/mysileng/archive/2012/12/23/196549.html</link><dc:creator>鑫龙</dc:creator><author>鑫龙</author><pubDate>Sun, 23 Dec 2012 11:55:00 GMT</pubDate><guid>http://www.cppblog.com/mysileng/archive/2012/12/23/196549.html</guid><wfw:comment>http://www.cppblog.com/mysileng/comments/196549.html</wfw:comment><comments>http://www.cppblog.com/mysileng/archive/2012/12/23/196549.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/mysileng/comments/commentRss/196549.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/mysileng/services/trackbacks/196549.html</trackback:ping><description><![CDATA[<blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><blockquote dir="ltr" style="margin-right: 0px; "><h3>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 从hadoop框架与MapReduce模式中谈海量数据处理</h3></blockquote></blockquote><h3><a name="t1" style="color: rgb(51, 102, 153); "></a>前言</h3><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 几周前，当我最初听到，以致后来初次接触Hadoop与MapReduce这两个东西，我便稍显兴奋，觉得它们很是神秘，而神秘的东西常能勾起我的兴趣，在看过介绍它们的文章或论文之后，觉得Hadoop是一项富有趣味和挑战性的技术，且它还牵扯到了一个我更加感兴趣的话题：海量数据处理。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 由此，最近凡是空闲时，便在看&#8220;Hadoop&#8221;，&#8220;MapReduce&#8221;&#8220;海量数据处理&#8221;这方面的论文。但在看论文的过程中，总觉得那些论文都是浅尝辄止，常常看的很不过瘾，总是一个东西刚要讲到紧要处，它便结束了，让我好生&#8220;愤懑&#8221;。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 
尽管我对这个Hadoop与MapReduce知之甚浅，但我还是想记录自己的学习过程，说不定，关于这个东西的学习能督促我最终写成和&#8220;经典算法研究系列&#8221;一般的一系列文章。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Ok，闲话少说。本文从最基本的mapreduce模式，Hadoop框架开始谈起，然后由各自的架构引申开来，谈到海量数据处理，最后谈谈淘宝的海量数据产品技术架构，以为了兼备浅出与深入之效，最终，希望得到读者的喜欢与支持。谢谢。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 由于本人是初次接触这两项技术，文章有任何问题，欢迎不吝指正。再谢一次。Ok，咱们开始吧。</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><h3><a name="t2" style="color: rgb(51, 102, 153); "></a>第一部分、mapreduce模式与hadoop框架深入浅出</h3></blockquote><h3><a name="t3" style="color: rgb(51, 102, 153); "></a>架构扼要</h3><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 想读懂此文，读者必须先要明确以下几点，以作为阅读后续内容的基础知识储备：</p><ol style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><li>Mapreduce是一种模式。</li><li>Hadoop是一种框架。</li><li>Hadoop是一个实现了mapreduce模式的开源的分布式并行编程框架。</li></ol><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 所以，你现在，知道了什么是mapreduce，什么是hadoop，以及这两者之间最简单的联系，而本文的主旨即是，一句话概括：<strong>在hadoop的框架上采取mapreduce的模式处理海量数据</strong>。下面，咱们可以依次深入学习和了解mapreduce和hadoop这两个东西了。</p><h3><a name="t4" style="color: rgb(51, 102, 153); "></a>Mapreduce模式</h3><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 前面说了，mapreduce是一种模式，一种什么模式呢?一种云计算的核心计算模式，一种分布式运算技术，也是简化的分布式编程模式，它主要用于解决问题的程序开发模型，也是开发人员拆解问题的方法。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Ok，光说不上图，没用。如下图所示，mapreduce模式的主要思想是将自动分割要执行的问题（例如程序）拆解成map（映射）和reduce（化简）的方式，流程图如下图1所示：</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; 
"><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816520nlst.gif" style="border: none; " /></p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 在数据被分割后通过Map 函数的程序将数据映射成不同的区块，分配给计算机机群处理达到分布式运算的效果，在通过Reduce 函数的程序将结果汇整，从而输出开发者需要的结果。</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; MapReduce 借鉴了函数式程序设计语言的设计思想，其软件实现是指定一个Map 函数，把键值对(key/value)映射成新的键值对(key/value)，形成一系列中间结果形式的key/value 对，然后把它们传给Reduce(规约)函数，把具有相同中间形式key 的value 合并在一起。Map 和Reduce 函数具有一定的关联性。函数描述如表1 所示：</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816570mW63.gif" style="border: none; " /></p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; MapReduce致力于解决大规模数据处理的问题，因此在设计之初就考虑了数据的局部性原理，利用局部性原理将整个问题分而治之。MapReduce集群由普通PC机构成，为<span style="font-family: 'Times New Roman'; ">无共享式架构。</span>在处理之前，将数据集分布至各个节点。处理时，每个节点就近读取本地存储的数据处理（map），将处理后的数据进行合并（combine）、排序（shuffle and sort）后再分发（至reduce节点），避免了大量数据的传输，提高了处理效率。无共享式架构的另一个好处是配合复制（replication）策略，集群可以具有良好的容错性，一部分节点的down机对集群的正常工作不会造成影响。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; ok，你可以再简单看看下副图，整幅图是有关hadoop的作业调优参数及原理，图的左边是MapTask运行示意图，右边是ReduceTask运行示意图：</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816620g8Ru.gif" style="border: none; " /></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 如上图所示，其中map阶段，当map 
task开始运算，并产生中间数据后并非直接而简单的写入磁盘，它首先利用内存buffer来对已经产生的buffer进行缓存，并在内存buffer中进行一些预排序来优化整个map的性能。而上图右边的reduce阶段则经历了三个阶段，分别Copy-&gt;Sort-&gt;reduce。我们能明显的看出，其中的Sort是采用的归并排序，即merge sort。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 了解了什么是mapreduce，接下来，咱们可以来了解实现了mapreduce模式的开源框架&#8212;hadoop。</p><h3><a name="t5" style="color: rgb(51, 102, 153); "></a>Hadoop框架</h3><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 前面说了，hadoop是一个框架，一个什么样的框架呢?Hadoop 是一个实现了MapReduce 计算模型的开源分布式并行编程框架，程序员可以借助Hadoop 编写程序，将所编写的程序运行于计算机机群上，从而实现对海量数据的处理。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 此外，Hadoop 还提供一个分布式文件系统(HDFS）及分布式数据库（HBase）用来将数据存储或部署到各个计算节点上。所以，你可以大致认为：<u><strong>Hadoop</strong>=<strong>HDFS</strong>（文件系统，数据存储技术相关）+<strong>HBase</strong>（数据库）+<strong>MapReduce</strong>（数据处理</u>）。Hadoop 框架如图2 所示：</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816645ZfTU.gif" style="border: none; " /></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 借助Hadoop 框架及云计算核心技术MapReduce 来实现数据的计算和存储，并且将HDFS 分布式文件系统和HBase 分布式数据库很好的融入到云计算框架中，从而实现云计算的分布式、并行计算和存储，并且得以实现很好的处理大规模数据的能力。</p><h3><a name="t6" style="color: rgb(51, 102, 153); "></a>Hadoop的组成部分</h3><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p>&nbsp;&nbsp;&nbsp; 我们已经知道，Hadoop是Google的MapReduce一个Java实现。MapReduce是一种简化的分布式编程模式，让程序自动分布到一个由普通机器组成的超大集群上并发执行。<strong><u>Hadoop主要由HDFS、MapReduce和HBase</u></strong>等组成。具体的hadoop的组成如下图：</p><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_131381852997kj.gif" height="400" width="601" style="border: none; width: 608px; height: 418px; " 
/></p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 由上图，我们可以看到：</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 1、&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Hadoop HDFS</strong>是Google GFS存储系统的开源实现，主要应用场景是作为并行计算环境（MapReduce）的基础组件，同时也是BigTable（如HBase、HyperTable）的底层分布式文件系统。HDFS采用master/slave架构。一个HDFS集群是有由一个Namenode和一定数目的Datanode组成。Namenode是一个中心服务器，负责管理文件系统的namespace和客户端对文件的访问。<em>Datanode</em>在集群中一般是一个节点一个，负责管理节点上它们附带的存储。在内部，一个文件其实分成一个或多个block，这些block存储在Datanode集合里。如下图所示（<u>HDFS体系结构图</u>）：</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816699CJaQ.gif" style="border: none; " /></p></blockquote><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 2、&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<strong>Hadoop MapReduce</strong>是一个使用简易的软件框架，基于它写出来的应用程序能够运行在由上千个商用机器组成的大型集群上，并以一种可靠容错的方式并行处理上TB级别的数据集。</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p align="left">&nbsp;&nbsp;&nbsp; 一个MapReduce作业（job）通常会把输入的数据集切分为若干独立的数据块，由 Map任务（task）以完全并行的方式处理它们。框架会对Map的输出<strong>先进行排序</strong>，然后把结果输入给Reduce任务。通常作业的输入和输出都会被存储在文件系统中。整个框架负责任务的调度和监控，以及重新执行已经失败的任务。如下图所示（<u>Hadoop MapReduce处理流程图</u>）：</p><p align="left"><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816714BlYw.gif" style="border: none; " /></p></blockquote><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>&nbsp;&nbsp;&nbsp; 
3、&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Hive是基于Hadoop的一个数据仓库工具，处理能力强而且成本低廉。</strong></p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p align="left"><strong>主要特点</strong>：</p><p align="left">存储方式是将结构化的数据文件映射为一张数据库表。提供类SQL语言，实现完整的SQL查询功能。可以将SQL语句转换为MapReduce任务运行，十分适合数据仓库的统计分析。</p><p align="left"><strong>不足之处：</strong></p><p align="left">采用行存储的方式（SequenceFile）来存储和读取数据。效率低：当要读取数据表某一列数据时需要先取出所有数据然后再提取出某一列的数据，效率很低。同时，它还占用较多的磁盘空间。</p><p align="left">由于以上的不足，有人（查礼博士）介绍了一种将分布式数据处理系统中以记录为单位的存储结构变为以列为单位的存储结构，进而减少磁盘访问数量，提高查询处理性能。这样，由于相同属性值具有相同数据类型和相近的数据特性，以属性值为单位进行压缩存储的压缩比更高，能节省更多的存储空间。如下图所示（<u>行列存储的比较图</u>）：</p><p align="left"><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816776mUUH.gif" style="border: none; " /></p></blockquote><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>4、&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;HBase</strong></p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p>&nbsp;&nbsp;&nbsp; HBase是一个分布式的、面向列的开源数据库，它不同于一般的关系数据库,是一个适合于非结构化数据存储的数据库。另一个不同的是HBase基于列的而不是基于行的模式。HBase使用和 BigTable非常相同的数据模型。用户存储数据行在一个表里。一个数据行拥有一个可选择的键和任意数量的列，一个或多个列组成一个ColumnFamily，一个Fmaily下的列位于一个HFile中，易于缓存数据。表是疏松的存储的，因此用户可以给行定义各种不同的列。在HBase中数据按主键排序，同时表按主键划分为多个HRegion，如下图所示（<u>HBase数据表结构图</u>）：</p><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_131381682603ZU.gif" style="border: none; " /></p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Ok，行文至此，看似洋洋洒洒近千里，但若给读者造成阅读上的负担，则不是我本意。接下来的内容，我不会再引用诸多繁杂的专业术语，以给读者心里上造成不良影响。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 
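</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 为了把上文所说的map、shuffle/sort、reduce三个阶段的数据流向落到代码上，这里补充一个不依赖hadoop的纯Java最小模拟（笔者注：MiniMapReduce这个类及其中的方法均为本文虚构的示意代码，并非hadoop的API；真实作业中这三个阶段由框架在集群上分布式调度执行）：</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MiniMapReduce {
    // map阶段：把一行输入映射成一批 (单词, 1) 形式的中间key/value对
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String w : line.trim().split("\\s+"))
            if (!w.isEmpty()) pairs.add(Map.entry(w, 1));
        return pairs;
    }

    // shuffle/sort阶段：把相同key的value归并到一起，TreeMap顺带完成按key排序
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs)
            groups.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        return groups;
    }

    // reduce阶段：对每个key的value列表做汇总（这里是求和，即词频统计）
    static Map<String, Integer> reduce(Map<String, List<Integer>> groups) {
        Map<String, Integer> result = new TreeMap<>();
        for (Map.Entry<String, List<Integer>> e : groups.entrySet()) {
            int sum = 0;
            for (int v : e.getValue()) sum += v;
            result.put(e.getKey(), sum);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> inter = new ArrayList<>();
        for (String line : new String[] {"hello hadoop", "hello mapreduce"})
            inter.addAll(map(line));
        System.out.println(reduce(shuffle(inter))); // {hadoop=1, hello=2, mapreduce=1}
    }
}
```

<p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 这里用TreeMap模拟shuffle阶段&#8220;按key归并排序&#8221;的效果，只是示意；真实hadoop中如上文所述，是在map端内存buffer中预排序，再在reduce端经Copy-&gt;Sort（归并排序）-&gt;reduce完成。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 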
我再给出一副图，算是对上文所说的hadoop框架及其组成部分做个总结，如下图所示，便是hadoop的内部结构，我们可以看到，海量的数据交给hadoop处理后，在hadoop的内部中，正如上文所述：hadoop提供一个分布式文件系统（HDFS）及分布式数据库（Hbase）用来存储或部署到各个计算点上，最终在内部采取mapreduce的模式对其数据进行处理，然后输出处理结果：</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816841DQQI.gif" style="border: none; " /></p><p><br /></p><h3><a name="t7" style="color: rgb(51, 102, 153); "></a>第二部分、淘宝海量数据产品技术架构解读&#8212;学习海量数据处理经验</h3></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 在上面的本文的第一部分中，我们已经对mapreduce模式及hadoop框架有了一个深入而全面的了解。不过，如果一个东西，或者一个概念不放到实际应用中去，那么你对这个理念永远只是停留在理论之内，无法向实践迈进。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Ok，接下来，本文的第二部分，咱们以淘宝的数据魔方技术架构为依托，通过介绍淘宝的海量数据产品技术架构，来进一步学习和了解海量数据处理的经验。</p><h3><a name="t8" style="color: rgb(51, 102, 153); "></a>淘宝海量数据产品技术架构</h3><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 如下图2-1所示，即是淘宝的海量数据产品技术架构，咱们下面要针对这个架构来一一剖析与解读。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 相信，看过本博客内其它文章的细心读者，定会发现，图2-1最初见于本博客内的此篇文章：从几幅架构图中偷得半点海量数据处理经验之上，同时，此图2-1最初发表于《程序员》8月刊，作者：朋春。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 在此之前，有一点必须说明的是：本文下面的内容大都是参考自朋春先生的这篇文章：<u>淘宝数据魔方技术架构解析</u>所写，我个人所作的工作是对这篇文章的一种解读与关键技术和内容的抽取，以为读者更好的理解淘宝的海量数据产品技术架构。与此同时，还能展示我自己读此篇的思路与感悟，顺带学习，何乐而不为呢?。</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Ok，不过，与本博客内之前的那篇文章（几幅架构图中偷得半点海量数据处理经验）不同，本文接下来，要详细阐述这个架构。我也做了不少准备工作（如把这图2-1打印了下来，经常琢磨）：</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p>&nbsp;<img 
alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816866b9b9.gif" style="border: none; " /></p><p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Figure 2-1 Technology architecture of Taobao's big-data products</p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; As the figure shows, Taobao's big-data product architecture divides into five layers. From top to bottom they are: the data source layer, the computing layer, the storage layer, the query layer, and the product layer. Let us look at each in turn:</p><ol style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><li>Data source layer. It holds the transaction data of Taobao's stores. Data produced here is transported in near real time, via DataX, DbSync, and TimeTunnel, to the "Yunti" (云梯) cluster described in the next point.</li><li>Computing layer. Here Taobao runs a Hadoop cluster, which we will call Yunti; it is the core of this layer. Every day, Yunti runs a variety of MapReduce computations over the data products.</li><li>Storage layer. This layer uses two systems: MyFox and Prom. MyFox is a cluster of distributed relational MySQL databases; Prom is a NoSQL storage cluster built on Hadoop's HBase technology (recall from Part 1 that HBase is the distributed open-source database within Hadoop).</li><li>Query layer. Here lives a component called glider, which exposes a RESTful interface over HTTP; a data product fetches the data it wants through a unique URL. Data queries themselves go through MyFox; the MyFox query process is described in detail below.</li><li>&nbsp;Product layer. Self-explanatory; no further discussion needed.</li></ol><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Next we focus on MyFox and Prom in the third layer, the storage layer, then briefly analyze glider's architecture, and finally look at caching, which will conclude the article.</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; As we know, relational databases are widely applied in industry today: Oracle, MySQL, DB2, Sybase, SQL Server, and so on.</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>MyFOX</strong></p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Taobao chose MySQL's MyISAM engine as the underlying data-storage engine, and to cope with the massive data volume they designed MyFOX, a query proxy layer over a distributed MySQL cluster.</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; 
background-color: #ffffff; margin-right: 0px; "><p>The figure below shows the MyFOX data query process:</p><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816897fTJ3.gif" style="border: none; " /></p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Figure 2-2 MyFOX data query process</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p>&nbsp;&nbsp;&nbsp; Each MyFOX node holds two kinds of data: hot-node data and cold-node data. As the names suggest, hot nodes hold the newest and most frequently accessed data, while cold nodes hold older data that is accessed far less often. To store these two kinds of data, and with hardware conditions and storage cost in mind, one naturally chooses two different kinds of disk for the two access patterns, as shown below:</p><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_131381690985qz.gif" style="border: none; " /></p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Figure 2-3 MyFOX node structure</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; For "hot nodes", 15,000 RPM SAS disks were chosen; at two machines per node, the unit storage cost is about 45,000 RMB/TB. Correspondingly, for "cold data" they chose 7,500 RPM SATA disks, whose platters hold more data each, bringing the storage cost down to about 16,000 RMB/TB.</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>Prom</strong></p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; 
background-color: #ffffff; margin-right: 0px; "><p>For reasons of length, this article will not dwell further on Prom. The two figures below show Prom's storage structure and its query process, respectively:</p><p><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_131381694575YP.gif" style="border: none; " /></p><p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Figure 2-4 Prom storage structure</p><p>&nbsp;<img alt="" src="http://hi.csdn.net/attachment/201108/20/0_13138169589vHu.gif" style="border: none; " /></p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Figure 2-5 Prom query process</p><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>glider's technical architecture</strong></p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p>&nbsp;&nbsp;&nbsp;&nbsp;<img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313816987Nd2a.gif" style="border: none; " /></p><p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Figure 2-6 glider's technical architecture</p></blockquote><p style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; 
In this layer, the query layer, Taobao's guiding idea is to use a middle layer to isolate the front end from the back end. Glider, that middle layer, performs JOIN and UNION computations across the heterogeneous tables, insulates the front-end products from the back-end storage, and provides a unified data-query service.</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>Caching</strong></p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Beyond isolating the front and back ends and integrating data across heterogeneous "tables", glider has another role that must not be overlooked: cache management. One thing to understand is that within a given time window, the data in a data product can be treated as read-only; this is the theoretical basis for using caches to improve performance.</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">As Figure 2-6 above shows, glider has two cache tiers: a level-2 cache based on the individual heterogeneous "tables" (datasources) and, after aggregation, a level-1 cache based on the individual request. On top of that, each heterogeneous "table" may have its own internal caching mechanism.</p><blockquote dir="ltr" style="font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><p align="left"><img alt="" src="http://hi.csdn.net/attachment/201108/20/0_1313817000lHd5.gif" style="border: none; " /></p><p align="left">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Figure 2-7 Cache-control system</p></blockquote><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; Figure 2-7 shows the Data Cube's approach to cache control. Every user request carries cache-control "commands": the query string in the URL and the "If-None-Match" header in the HTTP request. These commands are passed down layer by layer until they finally reach the heterogeneous-"table" modules of the underlying storage.</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; A cache system typically has to confront two problems: cache penetration, and the avalanche effect when entries expire.</p><ol style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><li><div 
align="left">Cache penetration means querying for data that is guaranteed not to exist. The cache is populated passively on misses, and for fault-tolerance a lookup that finds nothing in the storage layer writes nothing to the cache; as a result, every request for that non-existent key travels all the way down to storage, defeating the purpose of the cache. The most common remedy is a Bloom filter (introduced in another article of mine: ), which hashes every key that could possibly exist into a sufficiently large bitmap; a key that certainly does not exist is intercepted by the bitmap, sparing the underlying storage system the query load.</div></li></ol><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; In the Data Cube, Taobao adopted a blunter approach: if a query returns an empty result (whether because the data does not exist or because of a system fault), the empty result is still cached, but with a very short expiry time, at most five minutes.</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2、The avalanche effect on cache expiry has a frightening impact on the underlying systems, but unfortunately there is no perfect solution for it today. Most system designers use locks or queues to guarantee single-threaded (single-process) writes to the cache, so that when an entry expires, the flood of concurrent requests does not land on the underlying storage system.</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; ">&nbsp;&nbsp;&nbsp; In the Data Cube, Taobao designed a cache-expiry mechanism that, in theory, spreads the expiry times of the various clients' data evenly along the time axis, which goes some way toward avoiding the avalanche caused by entries all expiring at once.</p><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>References:</strong></p><ol dir="ltr" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; margin-right: 0px; "><li><div align="left">A Massive Data Storage Model Based on Cloud Computing (基于云计算的海量数据存储模型), 侯建 et al.</div></li><li><div align="left">Massive Log Data Processing Based on Hadoop (基于hadoop的海量日志数据处理), 王小森.</div></li><li><div align="left">A Large-Scale Data Processing System Based on Hadoop (基于hadoop的大规模数据处理系统), 王丽兵.</div></li><li><div align="left">An Analysis of the Technology Architecture of Taobao's Data Cube (淘宝数据魔方技术架构解析), 朋春.</div></li><li><div align="left">Hadoop Job Tuning Parameters and Principles (Hadoop作业调优参数整理及原理), guili.</div></li></ol><p align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>Reader comments</strong> from @xdylxdyl:</p><ol style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><li><div align="left">We want to count all the books in the library. You count up shelf #1, I count up shelf #2. That's map. The more people we get, the faster it goes. Now we get together and add our individual counts. 
That's reduce.</div></li><li><div align="left">The cache penetration, the architecture, and the empty-result caching in the Data Cube have nothing to do with Hadoop. If the goal was to present a concrete Hadoop application, the Data Cube part doesn't really make that clear.</div></li><li><div align="left">It feels like you've mixed two different things together. Both are valuable, though; arguably the Data Cube's architecture matters even more than Hadoop, and essentially every large Internet company ends up building something like it. Caching null objects for five minutes may not turn out well; if the null objects aren't particularly large and the data sees few updates and inserts, maintaining them in real time is also worth considering.</div></li><li><div align="left">Hadoop itself is heavyweight. In the Data Cube, is it playing a real-time data-processing role, or only doing offline analysis?</div></li></ol><p dir="ltr" align="left" style="color: #333333; font-family: Arial; line-height: 26px; background-color: #ffffff; "><strong>Conclusion</strong>: writing is a way of learning. <strong>Please respect others' work and credit the source when reposting. Thank you. July, 2011/8/20. The end.<br /><br />Reprinted from:</strong>
<a href="http://blog.csdn.net/v_july_v/article/details/6704077">http://blog.csdn.net/v_july_v/article/details/6704077</a>
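 The Bloom filter mentioned above as the standard answer to cache penetration can be sketched in a few lines. The following is a minimal illustrative Python version, not Taobao's actual implementation; the bit-array size and hash count are arbitrary assumptions chosen for the example:

```python
import hashlib

class BloomFilter:
    # Minimal Bloom filter: hash every key that could exist into a fixed-size
    # bit array. A lookup that misses any one of its bits is guaranteed absent,
    # so that request never needs to reach the storage layer.
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k pseudo-independent bit positions by salting one strong hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means definitely absent; True means probably present.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
for key in ("item:1001", "item:1002", "shop:42"):
    bf.add(key)

print(bf.might_contain("item:1001"))  # True: the key was added
print(bf.might_contain("item:9999"))  # almost certainly False: intercepted
```

In the cache path described above, `might_contain` would be consulted before querying MySQL or HBase: only keys the filter admits are allowed through, and the filter is rebuilt or extended as new keys are ingested. The trade-off is a small, tunable false-positive rate in exchange for never missing a truly absent key.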
</p><img src ="http://www.cppblog.com/mysileng/aggbug/196549.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/mysileng/" target="_blank">鑫龙</a> 2012-12-23 19:55 <a href="http://www.cppblog.com/mysileng/archive/2012/12/23/196549.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item></channel></rss>