﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>C++博客-小乐-随笔分类-Courses</title><link>http://www.cppblog.com/sosi/category/15078.html</link><description>Virtual Reality </description><language>zh-cn</language><lastBuildDate>Fri, 03 Jan 2014 16:35:55 GMT</lastBuildDate><pubDate>Fri, 03 Jan 2014 16:35:55 GMT</pubDate><ttl>60</ttl><item><title>GOF23 Design Pattern</title><link>http://www.cppblog.com/sosi/archive/2011/03/11/141604.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Fri, 11 Mar 2011 13:34:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2011/03/11/141604.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/141604.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2011/03/11/141604.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/141604.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/141604.html</trackback:ping><description><![CDATA[<p>创建型模式(Creational Pattern)   <br />1、 抽象工厂模式(Abstract Factory Pattern)    <br />介绍    <br />提供一个创建一系列相关或相互依赖对象的接口，而无需指定它们具体的类。</p>  <p>2、 建造者模式(Builder Pattern)   <br />介绍    <br />将一个复杂对象的构建与它的表示分离，使得同样的构建过程可以创建不同的表示。</p>  <p>3、 原型模式(Prototype Pattern)   <br />介绍    <br />用原型实例指定创建对象的种类，并且通过拷贝这个原型来创建新的对象。</p>  <p>4、 工厂方法模式(Factory Method Pattern)   <br />介绍    <br />定义一个用于创建对象的接口，让子类决定将哪一个类实例化。Factory Method使一个类的实例化延迟到其</p>  <p>子类。</p>  <p>5、 单例模式(Singleton Pattern)   <br />介绍    <br />保证一个类仅有一个实例，并提供一个访问它的全局访问点。</p>  <p>结构型模式(Structural Pattern)   <br />6、 适配器模式(Adapter Pattern)    <br />介绍    <br />将一个类的接口转换成客户希望的另外一个接口。Adapter模式使得原本由于接口不兼容而不能一起工作的那</p>  <p>些类可以一起工作。</p>  <p>7、 桥接模式(Bridge Pattern)   <br />介绍    
<br />将抽象部分与它的实现部分分离，使它们都可以独立地变化。</p>  <p>8、 组合模式(Composite Pattern)   <br />介绍    <br />将对象组合成树形结构以表示“部分-整体”的层次结构。它使得客户对单个对象和复合对象的使用具有一致</p>  <p>性。</p>  <p>9、 装饰模式(Decorator Pattern)   <br />介绍    <br />动态地给一个对象添加一些额外的职责。就扩展功能而言，它比生成子类方式更为灵活。</p>  <p>10、 外观模式(Facade Pattern)   <br />介绍    <br />为子系统中的一组接口提供一个一致的界面，Facade模式定义了一个高层接口，这个接口使得这一子系统更</p>  <p>加容易使用。</p>  <p>11、 享元模式(Flyweight Pattern)   <br />介绍    <br />运用共享技术有效地支持大量细粒度的对象。</p>  <p>12、 代理模式(Proxy Pattern)   <br />介绍    <br />为其他对象提供一个代理以控制对这个对象的访问。</p>  <p>行为型模式(Behavioral Pattern)   <br />13、 责任链模式(Chain of Responsibility Pattern)    <br />介绍    <br />为解除请求的发送者和接收者之间耦合，而使多个对象都有机会处理这个请求。将这些对象连成一条链，并</p>  <p>沿着这条链传递该请求，直到有一个对象处理它。</p>  <p>14、 命令模式(Command Pattern)   <br />介绍    <br />将一个请求封装为一个对象，从而使你可用不同的请求对客户进行参数化；对请求排队或记录请求日志，以</p>  <p>及支持可取消的操作。</p>  <p>15、 解释器模式(Interpreter Pattern)   <br />介绍    <br />给定一个语言，定义它的文法的一种表示，并定义一个解释器, 该解释器使用该表示来解释语言中的句子。</p>  <p>16、 迭代器模式(Iterator Pattern)   <br />介绍    <br />提供一种方法顺序访问一个聚合对象中各个元素, 而又不需暴露该对象的内部表示。</p>  <p>17、 中介者模式(Mediator Pattern)   <br />介绍    <br />用一个中介对象来封装一系列的对象交互。中介者使各对象不需要显式地相互引用，从而使其耦合松散，而</p>  <p>且可以独立地改变它们之间的交互。</p>  <p>18、 备忘录模式(Memento Pattern)   <br />介绍    <br />在不破坏封装性的前提下，捕获一个对象的内部状态，并在该对象之外保存这个状态。这样以后就可将该对</p>  <p>象恢复到保存的状态。</p>  <p>19、 观察者模式(Observer Pattern)   <br />介绍    <br />定义对象间的一种一对多的依赖关系,以便当一个对象的状态发生改变时,所有依赖于它的对象都得到通知并</p>  <p>自动刷新。</p>  <p>20、 状态模式(State Pattern)   <br />介绍    <br />允许一个对象在其内部状态改变时改变它的行为。对象看起来似乎修改了它所属的类。</p>  <p>21、 策略模式(Strategy Pattern)   <br />介绍    <br />定义一系列的算法，把它们一个个封装起来，并且使它们可相互替换。本模式使得算法的变化可独立于使用</p>  <p>它的客户。</p>  <p>22、 模板方法模式(Template Method Pattern)   <br />介绍    <br />定义一个操作中的算法的骨架，而将一些步骤延迟到子类中。Template Method使得子类可以不改变一个算</p>  <p>法的结构即可重定义该算法的某些特定步骤。</p>  <p>23、 访问者模式(Visitor Pattern)   <br />介绍    <br />表示一个作用于某对象结构中的各元素的操作。它使你可以在不改变各元素的类的前提下定义作用于这些元</p>  <p>素的新操作。</p><img src ="http://www.cppblog.com/sosi/aggbug/141604.html" width = "1" height = "1" /><br><br><div align=right><a 
style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2011-03-11 21:34 <a href="http://www.cppblog.com/sosi/archive/2011/03/11/141604.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>浅谈多元数据的可视化技术</title><link>http://www.cppblog.com/sosi/archive/2011/02/27/140759.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Sun, 27 Feb 2011 13:37:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2011/02/27/140759.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/140759.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2011/02/27/140759.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/140759.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/140759.html</trackback:ping><description><![CDATA[<p>多元数据是最常见的数据类型。人们经常都要作出一系列的决定，比如吃什么，买什么新手机，去哪里旅游，住什么旅馆，等等。这类决策往往是基于多元数据的分析：食物中热量有多高，碳水化合物多少，是否含有反式脂肪，三聚氰胺等添加剂等；相机的价格，像素多少，光圈大小，焦距范围，能否红外拍摄，等等。多元数据的分析还能帮助我们发现一些数据间的联系，并进行预测。</p>  <p>对于简单的多元数据，最常见的可视化方法是散点图（scatterplot）。比如，对于二维数据，通常的方法就把它们直接画成二维坐标上的一系列点，从而可以看出数据变化趋势。对于更高维的数据，一种方法是把数据的每种属性用不同的图形，颜色，纹理等，表示在二维坐标上。比如下面的图是由<a href="http://www.gapminder.org/">www.gapminder.org</a>生成的，用来显示了世界上国家的财富和人均寿命的关系。横轴是人均收入，纵轴是平均寿命，每个国家表示一个圆，圆的大小表示该国的人口，圆的颜色对应于所在的区域。这样一张图表达了一个国家的4个属性。左边的图是基于1980年的数据，而右边的图是基于2009年的数据。图中那个大大的红圆就是我们中国，我们可以很直观的看出这30年中国的发展还是很给力！网站上还能生成动画，演示各个圆的位置变换，像中国那么大的圆还能动那么快，也是独一无二的。</p>  <p><img alt="" src="https://lh4.googleusercontent.com/nqI6gD8N19QrPHm35RyrJrreksy1_hi2sEb9x7ADq-t8QXo8SHuc0zphLtQmBlJDXNz7FPJDSk89ak9eYC3HC-uA4kVyE_f7L1_TE7OoiHTON3parQ" width="307" /><img alt="" src="https://lh4.googleusercontent.com/nkCPx9nv9sY65auQw9O4yyF-SXTUcTft7olystcW3d3E7c91ZbbRjtnMaGcmslrpxVBKWhW_TFmu5m8ZpC4kCb7KtKoYg3mDXh5vF33GmKCMBEhufw" width="306" /></p>  <p>对于有更多属性的数据，用上面的方法就往往不能显示了。常见的方法是用散点图矩阵（scatterplot 
matrix），如果数据有N维（N个属性），所有的属性两两组合就生成N x N个二维散点图。把这些图排列成N x N的矩阵。这样可以观察任意两个属性间的关系。比如下面这张图就显示了汽车的主要属性间的散点图【1】。其中MPG代表了汽车的油耗（每加仑油能开的旅程），从图里我们看出，发动机马力越大MPG越小，车越重MPG越小，年龄越大MPG越小，等等。   <br /><img alt="" src="https://lh4.googleusercontent.com/TZDdx86RuQ-g-WOTyK6JPlgIOsiqtwPFk73xy0M1geHaMt83Q_Wn-gAuBHzJJ5X1RpadMA7p7DoNCBapQWUm-KGUg1dFSFnVXCGXlHPmsrCQn2vxkQ" width="564" /></p>  <p>【1】M. O. Ward. Xmdvtool: Integrating multiple methods for visualizing multivariate data. In Proceedings of IEEE Conference on Visualization, 1994.</p>  <p>&#160;</p>  <p>散点图可以很好的显示少量属性的数据。但是，即使用散点图矩阵也无法解决属性更多的情况，而且每个小图只能显示2个属性间的联系，多个属性间的关系并不是很 直观。这种局限性主要来自将数据投影在了二维直角坐标系上。人们提出一系列的方法来解决这个局限性，而其中最著名的是1981年Alfred Inselberg 提出的平行坐标系（Parallel Coordinates）。这种方法把各个属性表示成一系列的平行的轴，组成一个平行坐标系，每条数据就表示成这个坐标系里的一条直线。比如我们有这样一 组数据   <br /><img alt="" src="https://lh5.googleusercontent.com/dhNkuA5HoT9bV6z3H2QH71-9aTiOqEPB8Z7TkSqMbNmpAVtYAa7v-OzkaXy1SV7Mxz7CMTur0PiZnhkDujYpEip1-g=s512" width="309" /></p>  <p>可以方便的表示成这样的平行坐标   <br /><img alt="" src="https://lh4.googleusercontent.com/gj7ytY46vk6R3AgaIT4gMDHUsDs83hGaLBJHRIaz6jy8Hd9Nm7jbAZd6h-8M5d7hI4sD0Mxa0ez7qDDVTEiHo2ijFCxDFSI4MKdSKfL7XfjguXEsGA" width="324" /></p>  <p>平行坐标系的优势在于发现大规模数据间的属性联系。比如再回到前面的汽车的数据，用平行坐标可以表示成【1】<img alt="" src="https://lh3.googleusercontent.com/KfvqDuK0W293UbOjnyxpj5jEaeaJiTUkH4P3k1kgO2fiLVC1DrwDilznywKrSzcqGVBjRTKQTOp-eaO2Magmc84R9OiImEK_NI1Kmq7hdVMScOex9w" width="573" />    <br />多个属性间的联系比散点图要清晰。比如可以清楚看出来Cylinder多的车，MPG相对小，但是马力大；Cylinder小的车，MPG相对大，但是马力 小。在平行坐标里，我们还可以方便的进行交互式删选数据，方便观察，比如下面的图，我们可以看一下，Cylinders多，MPG小，马力大的车的其他属性怎么样。<img alt="" src="https://lh4.googleusercontent.com/eM4jWPIOVrpMJf8Ccvx88DpJdPD4J69A1SpX187e7HKWcPnWykQjidEWV3cWpk-iRFo3gDP7KbyzLNCXvmchEAnuka5DsEQFaTaQXH_Rxug99cvJEA" width="578" />    <br />当数据过多的时候，平行坐标系里的线就会很多，数据间的联系就看不清楚了，就像下面的左图。一种解决方法是把线画成半透明，这样主要的线的趋势就会随着线的数目的增加而清晰，像下面的右图【2】。当数据很大的时候，主要的趋势的分析通常是数据的分析的第一步。    <br /><img alt="" 
src="https://lh3.googleusercontent.com/uZzmYER-EL_Np9PEea0e7RzubJ1Ng5e3jg1DC5xpQ2rMoDlmOamP1qgYOf9MsenU6LAH1YUJISCtaxNiXulw_f5peyB1_Gdnx45sYSV_zL3x-1lzJg" width="552" /></p>  <p>再回到我们前面第一个例子，虽然看上去学历和收入成反比，但是如果我们有更多的数据，像汽车的例子一样，平行坐标也可以给我们更清楚显示收入和学历，年龄等等的关系，所以还在读高学位的朋友先不要灰心啊。</p>  <p>【1】M. O. Ward. Xmdvtool: Integrating multiple methods for visualizing multivariate data. In Proceedings of IEEE Conference on Visualization, 1994.   <br />【2】 Chad Jones, et al. An Integrated Exploration Approach to Visualizing Multivariate Particle Data. Computing in Science &amp; Engineering, Volume 10, Number 4, July/August, 2008, pp. 20-29</p>  <p>&#160;</p>  <p>作为多元数据可视化的主要两种方法，散点图和平行坐标各有优缺点。散点图通常只能显现两个属性间的联系，但是每条数据都表示成一个点，从而减少了总的像素数量和视觉复杂度；平行坐标能显示多个属性间的联系，但是每条数据（多条属性）表示成一条线，增加了像素的数量，当数据量很大的时候，复杂度就大大增加，而无法显示数据间的联系。2009年北大的袁晓如教授在IEEE VisWeek 上发表了篇文章【1】，提出了一种技术将两种方法结合在一块。基本的想法是在平行坐标里的轴之间显示散点图，从而使原来在平行坐标里看不清的数据趋势用散点图表示。像下面的左图显示的是平行坐标图，而右图结合了散点图。</p>  <p><img alt="" src="https://lh3.googleusercontent.com/2-UVKOnxnyT8l8kgolWdRWNXG7kYdD__rTI7UUAvmwJA8a9TgW55YcEl9grhKIyH1epY4yvAOlnxdiX-0TrdzvqI2igKWCtuL8Jkw8eyU9rNxzhS2A" width="296" /><img alt="" src="https://lh6.googleusercontent.com/Lw59X-UegipcfTOZzKoCjB6Z6NMpOZLyUj4YieO0_medguee6L16Miyx0zW8qwPsMBoXBL5c8g6H949vaAzHdJ43Mu5L2qCuSl56AKjOjszqOro9rA" width="290" /></p>  <p>在两个平行轴之间画散点图的基本思路是把右边的轴旋转90度，这样与左边的形成直角坐标系，散点图的绘制和解读也就与传统的一致。画散点图的区域中原来的直线变成穿过散点图的曲线，这样保持了原来平行坐标里的连接关系，也支持通过线来进行数据筛选的功能。这种新的技术也用到了前面所说的汽车数据的例子，有兴趣的话找这篇论文读一读吧。</p>  <p>【1】 Xiaoru Yuan, et al. Scattering Points in Parallel Coordinates. IEEE Transactions on Visualization and Computer Graphics, November/December 2009 (vol. 15 no. 
6)</p>  <p></p>  <p></p>  <p>&#160;</p>  <p>在科学计算可视化的时候，看到了这篇blog，文章讲解得非常好！作者也是非常牛的！特此转载到此！</p>  <p><a href="http://www.vizinsight.com/2010/12/%E6%B5%85%E8%B0%88%E5%A4%9A%E5%85%83%E6%95%B0%E6%8D%AE%E7%9A%84%E5%8F%AF%E8%A7%86%E5%8C%96%E6%8A%80%E6%9C%AF%EF%BC%88%E4%B8%8B%EF%BC%89/">http://www.vizinsight.com/2010/12/%E6%B5%85%E8%B0%88%E5%A4%9A%E5%85%83%E6%95%B0%E6%8D%AE%E7%9A%84%E5%8F%AF%E8%A7%86%E5%8C%96%E6%8A%80%E6%9C%AF%EF%BC%88%E4%B8%8B%EF%BC%89/</a></p>  <p>&#160;</p>  <p>大家可以去顶一下这个blog</p>  <p><a href="http://www.vizinsight.com/">http://www.vizinsight.com/</a></p><img src ="http://www.cppblog.com/sosi/aggbug/140759.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2011-02-27 21:37 <a href="http://www.cppblog.com/sosi/archive/2011/02/27/140759.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>怀念六次Regional(from NJU&amp;amp;HKUST ikki)</title><link>http://www.cppblog.com/sosi/archive/2010/10/29/131783.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Fri, 29 Oct 2010 12:17:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/29/131783.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/131783.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/29/131783.html#Feedback</comments><slash:comments>2</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/131783.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/131783.html</trackback:ping><description><![CDATA[<p>&#160;&#160; 今天看了一下博弈（game theory）的几个问题，看到了一篇文章，ikki的怀念自己ACM/ICPC日子的文章。</p>  <p>&#160;&#160; 全文转载如下：</p>  <p>又是一个赛季……看到很多不知名的ID开始在OJ上的出头，看到以前我们传统意义上觉得可能是“弱校”的一些学校也开始奋起，真的很为他们感到高兴，尤其是天津大学，看到天大已经可以在没有RoBa的情况下也能力压其他的名校拿到学校名次第三，真的还是有很多感慨的。   <br />于是，作为styc笔下的&quot;a notorious martyr of acm/icpc&quot;，怀念我的6次Regional。    <br />1) 
Shanghai 2004    <br />上来就全场第一个Submit了C - The Counting Problem，没有看数据规模就暴力了一下，没有任何悬念的TLE掉了，然后开始搞H - Tian Ji - The Horse    <br />Racing，看了一眼题目就说是最大匹配，开始拍匹配，又没算复杂度，匹配匹完之后发现又TLE了……最搞笑的事情是我当时还写了个Hopcroft-Karp匹配……后来发现说可以最优匹配...当然，也TLE了……    <br />最后我在比赛场上几乎就是无所事事，凭着我们队长jwise的神勇发挥在最后一个小时顶住全场N多气球但是我们没有气球的压力，把那两个题目过了..最后一分钟我好像是在贪心J - Jamie's Contact Group，后来才知道那个题目是个网络流。    <br />第一次比赛总是很难忘，不过现在回想起来除了反应出来自己弱也没什么其他可以说的。    <br />2) Beijing 2005    <br />一个暑假在POJ上割了800+题目之后发现很多题目好像可以写一写了，这就有了自己的Beijing Regional之行。我如果没有记错这场比赛是ACRush第一次出道acm/icpc，开场3分钟ACRush过掉了E – Holiday    <br />Hotel，全场震惊，后来大家发现这是一个菜题纷纷AC，我好像是在10分钟还是15分钟过掉了这个题目。在这个时候ACRush过掉了G – Desert    <br />King，15分钟，过了2个，大家就纷纷觉得G也是个菜题，然后开始看G，我也不例外……不过看着看着我先以为G可以贪心，后来想了想（我不记得有没有交程序验证了），反正过不掉，后来突然想起来这个题目好像在SRbGa的黑书上有写类似的……所以拿出黑书，找到“最优比率生成树”    <br />，现场学算法，现场过掉了……好像那个题目还有一点点卡常数因子，因为是稠密图所以Kruskal过不掉，好像焦哥当时是被卡在了这里。    <br />过了两个题目之后队友发现了A – Angle and Squares是个弱题，写了一下过了，中间好像还以为忘记了sin和cos是弧度制的调了很久……然后就是比较顺利的用匹配过掉了C – Purifying Machine，当时4题的时候自己真的以为快要出线了，因为好像还有两个小时。    <br />然后就是痛苦的2个小时了，先是看到了我觉得最有希望过的D – Finding Treasure，明显就是一个高斯消元嘛，但是怎么都过不去……怎么写也过不去，虽然后来我知道更简单的做法是随机代点，但是高斯消元也是可过的……但是我就是过不去……B – Get Luffy    <br />Out是个2SAT，但是我不懂2SAT……所以看着队友眼睁睁的贪心了很久，很久很久……后来传闻说这个题目数据弱，搜搜也过了……其他题目我连题目都没看……    <br />3) Hangzhou 2005    <br />在Beijing拿了个第七，宋老师当时还对我们队寄予厚望说争取在杭州出线，我也真以为我们能圆个Final梦，很开心的去了杭州，这次比赛HQM，ZY，YSY好像是一队，并且很顺利的他们切掉了ACRush……    <br />上来很顺利的切了A – Auction，然后看到B – Bridge发现不会那个积分，看到C -    <br />Cell我一下就开心了，说我KAO这不是个LCA么，拿出例程开始敲，敲完发现RE了，顿时FT，以为数组开小了数据不合法，所以测试了很久很久很久……甚至到最后自己开始写了一个测试机开始生成数据，在自己机器上测试发现也RE了……那个时候想到原来是stack    <br />overflow…我就问队友会不会把递归改非递归，得到的答案是不会……当时我就知道这个题目，完全知道问题在哪里，知道该怎么做，但是不可能过了……当然了，后来我知道那个题目对应于白色路径定理，可以直接DFS……但是还是要写非递归的DFS，我还是过不掉，所以死在了DFS上。    <br />B发现不可做之后我开始翻手头的大学数学书，在大学数学书上找到了一个类似的式子，把那个抛物线长度积分积出来了，记得当时的judge还有点卡常数，不过幸好，B – Bridge过了。当时的情况我记得我的判断是出4可以出线，所以只能拼题数了，把I –    <br 
/>Instructions丢给what20，what20看了看告诉我是贪心，他肯定能过，我就没管了，后来发现怎么写也不过怎么写也不过，到最后比赛还有20分钟结束我看了一下题目我说KAO这不是个最短路么，删掉他的支离破碎的程序开始重写……但是来不及了……在最后比赛结束的时候，我的程序好    <br />像刚刚写出了可以compile的雏形，but that’s definitely too late.    <br />说说G – Generator，这个题目是具体数学上的原题，我的队友猜出了一个公式，我口算了一下说好像和答案差的远……根本没试……2题，结束了杭州之行，连铜牌都没有。    <br />赛后Savior跟我说的一句话，我到今天都记得。因为我们在北京的名次说不定你该运气好还能出线，所以Savior说：“Final见啦……” 这一句话成为我永远的伤，无限逼近但是永远进不去……后来，我们还确实扎扎实实的为了从Beijing拿名额出线做了一番争取，Beijing一开始给了非常    <br />少的名额，我就想去争取这个名额，发信给李文新老师和黄金熊，北大的李文新老师还为    <br />我们争取名额，最后帮我们排名前面一位的USTC争取到了，我们没有……还记得当时USTC在POJ的BBS上说谢谢李老师，李老师说不用写我，要谢谢Ikki……哎，虽然看到USTC的兄弟出线也不错，但是好歹是为他人做了个嫁衣裳……    <br />4) Beijing 2006    <br />2006年，我准备出国了，在准备GRE和GRE SUB……所以我其实基本没做什么准备，但是这次组到了JiangYY同学和FreeDian同学，当时我感觉这是南大近几年能组出的最强队了，所以对Final也充满了美好的YY。    <br />还记得JiangYY同学跟我说：“拿金牌和出线哪个意义更大，我们显然要出线……”好吧，那就出线吧，清华的比赛，赛前那一天晚上朱泽园同学跑来告诉我们说明天你们不要紧张，我们赛前试题这套题目很简单，你们的实力应该能做出7题……我在这个时候开玩笑跟JiangYY说，如果明天    <br />考平面图最大流你会做不，JYY说当然懂，我全懂，不就是Dijkstra么……我说你会做就好，反正我是不懂……    <br />比赛的时候FreeDian先告诉我E – Guess是贪心，迅速的过了E，当时的速度好像差一点就是全场第一个过E的，然后发现H –    <br />Ruler明显可做，但是我不知道怎么做，当时先猜了一个猜想，发现猜想不对，后来搜，发现那个数据规模搜肯定超时了，所以我就问JiangYY 同学怎么做，我说“JiangYY你不是还写过如何搜索的文章么……你搜一个。”JiangYY同学说毛……我搜不出……后来实在没办法了，我开始加随机    <br />化的剪枝和一些小的优化，并且拿一个暴力的搜索和那个随机化的优化在不停的对拍，等到对拍那个优化后版本能全过了，我就交。    <br />为了这个题目我写了N个贪心版 + 一个暴力版 + N个改进优化版 + 数据生成对拍器，当我过了之后我兴奋不已，但是细细一看，咦，我好像交错程序把那个暴力版交上去了，我KAO，原来这个题目数据弱，暴力搜索就能过……当时要杀人的心都有了。    <br />差点忘了本次比赛最戏剧性的场景了，JiangYY同学翻题目，翻了一下告诉我说“昊哥，真的出平面图最大流了。”我说那好啊，你不是会做么……JiangYY同学说：“会个毛，我就知道Dijkstra，不知道怎么Dijkstra。”我KAO当时我在场上心都凉了，然后YY同学就在不停的建模怎么Dijks    <br />tra……一会告诉我要三次，一会告诉我要6次，一会告诉我要12次Dijkstra……这就是B – Animal Run的悲惨遭遇。    <br />后来YY同学，自称为过了USACO所有Gold Contest图论题，图论小王子的YY同学，开始搞G – What a Special    <br />Graph，然后一会告诉我要收缩一下花做DFS啥的……后来我说不对…幸好我没被迷惑住开始敲G……赛后和Bamboo的一句话，Bamboo 说：“我们判断一个队，有没有前途，就是看他在不在搞G，如果在搞，就没前途了……”G是个论文题，不可做的…当然了，这套题目我当时在赛场上好像是    <br />都看了的，A – Robot有点想法但是不会，I – A Funny Stone Game太像DP了，但是也不会……都不会@_@    <br />2题，再一次结束了，结束了我本科生涯的acm/icpc。    <br />5) Seoul 2007    <br 
/>阴差阳错的来到了HKUST之后，可以出国比赛，Seoul和Danang也成为了人生acm/icpc的绝响，最后的两个第四也让我永远对Final说了声再见。其实Seoul的失败，可以说死在我的手上，B –    <br />Editor是一个非常弱的最长公共子串的题目，串长小于5000，明显DP嘛，不知道我当时大脑为什么抽筋了，开始写Suffix Array，而且还是写的是线性时间的Suffix    <br />Array，写完之后发现自己很久没写后缀数组了，过不去了，调不过样例，这个时候细细一想才把B用近乎暴力的方式过了……这个时候在时间上我们已经浪费将近一个小时了，后来发现如果我们把这一个小时节约在罚时上，每个题目都少了不少的时间，我们应该就第三名出线了……第三名    <br />也是7题，罚时比我们好一点，比我们少了123分钟的罚时，如果我们这道题目没有按照我YY的方式做，就……应该是第三了吧？    <br />当然，现实没有如果，而且我那两个队友当时还问我说是不是可以DP的样子，我直接斩钉截铁的说“不可以！”并且告诉他们这个是后缀数组经典题，他们听完就不说话开始任我YY的写了……    <br />我犯的第二宗罪就是那个I – Turtle Graphics，当时我过完一遍题目看到I – Turtle    <br />Graphics之后我顿时就想到了CERC还是NEERC一道也是这种走水平竖直方向乱搞的题目，那个题目我好像不会，顿时心理阴影产生了，当队友问我这个题目可做否，我直接一句话“这个题目我以前好像看过，不会做，你们别看了……”从始至终，那道I就没有被我们碰过……后来发现是个弱    <br />题。    <br />人说，自作孽，不可活。    <br />6) Danang 2007    <br />其实Danang赛区我对自己的表现也还是满意的了，主要是题目太RP，同样的C – Prime k-tuple，某队Miller-Rabin就能过，我Miller-Rabin就TLE，D – The longest constant    <br />game，这次确确实实是要上后缀数组了，很可惜我只带了O(nlogn)版本的，发现居然卡这个logn的常数，后来实在没办法乱搞了O(n) 的后缀数组才过，E – Lazy Susan，这个题目我和Math Guy现场推了好久的规律，最后搞出来的时候真的是非常兴奋，J –Space    <br />Beacon，这个题目是我最怕写的题目，陈琛敢于上手，并且在最后WA的时候我给他现场写数据生成器暴力对拍，在最后2分钟的时候过掉这个题目，PH的一声咆哮Yes，全场为我们的鼓掌……虽然最后由于种种原因诡异的被Rejudge了一个题目排名掉到了第四，但是第一次，让我感觉在赛场    <br />上似乎没有遗憾。在4小时封板的时候我也第一次看到了Hong Kong University of Science and Technology居然站在当时的第一位上，离Final的梦，当时让我感觉是这么近。    <br />这么多次，其实合作最愉快的还是在HKUST，尽管队员的实力不是特别强，但是他们肯配合我肯为我做事情想题目，一直坚持不放弃的精神我也一直可以看到……真的很希望能和他们一起站在Final的赛场上。    <br />只是可惜，这一切都已经是似水流年。</p>  <p>&#160;</p>  <p>&#160;&#160; 突然很感慨，就像ikki最后一句，这一切都是似水流年。。。</p>  <p>&#160;&#160; 在Arena上有他的个人比赛记录，看到了那句logo:Never underestimate or just rely on your potential.是对自己的忠告！加油！或许，某一天当ikki看到自己的这篇总结还能激励着某些人前进行的时候，他或许也会感叹一句，似水流年吧。。。。</p><img src ="http://www.cppblog.com/sosi/aggbug/131783.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-29 20:17 <a href="http://www.cppblog.com/sosi/archive/2010/10/29/131783.html#Feedback" target="_blank" 
style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>EECS Graphics 2</title><link>http://www.cppblog.com/sosi/archive/2010/10/24/131067.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Sun, 24 Oct 2010 09:21:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/24/131067.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/131067.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/24/131067.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/131067.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/131067.html</trackback:ping><description><![CDATA[<p>人眼的特殊结构，是不对光谱进行分析的！</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/GFB2JKOIV0X666KQHKKN.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="GFB2JKOI~V0X@666KQ)HKKN" border="0" alt="GFB2JKOI~V0X@666KQ)HKKN" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/GFB2JKOIV0X666KQHKKN_thumb.jpg" width="454" height="210" /></a></p>  <p>&#160;</p>  <p> 这张图片是经典啊，两种黄色的程度是一样的，给人的感觉非常不一样。。。。</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/HXE477FCD6XUQBG1TEDX0.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="HX]E477FCD6XUQB~G1TEDX0" border="0" alt="HX]E477FCD6XUQB~G1TEDX0" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/HXE477FCD6XUQBG1TEDX0_thumb.jpg" width="416" height="258" /></a></p>  <p>&#160;</p>  <p>人的视锥对光的感觉是实质上是这样一个积分。。。</p>  <p>&#160;</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/RD8ENRET55GZWFVH_7RHF.jpg"><img style="border-bottom: 0px; border-left: 0px; display: 
inline; border-top: 0px; border-right: 0px" title="RD8ENRET55GZWFVH_7]RH@F" border="0" alt="RD8ENRET55GZWFVH_7]RH@F" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/RD8ENRET55GZWFVH_7RHF_thumb.jpg" width="478" height="169" /></a></p>  <p>&#160;</p>  <p>我们所看到的RGB，当然不仅仅是R，是很多不同频率的光的混合结构，而且仅仅由R，G，B不能构成整个的色彩空间。。。其实这是一个必然，CIE的色彩空间很显然不是一个严格的三角形，所以看不全是应该的。。。当然，我们的全彩打印，当然也做不到每一种彩色都打印出来，是一个非常不规则的区域。。</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/9W6DWOBV3VZY6Y68.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="`9W6D$W{OBV3{V%Z~Y6(Y68" border="0" alt="`9W6D$W{OBV3{V%Z~Y6(Y68" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/9W6DWOBV3VZY6Y68_thumb.jpg" width="484" height="241" /></a></p>  <p>&#160;</p>  <p>孔雀的尾巴为什么五颜六色，色彩斑斓。。不要天真的以为用不同颜色材质就可以模拟。。。</p>  <p>答案： Iridescence。。。汉语一般译作“虹彩”或“彩虹色”，其成因是薄膜干涉。。。</p>  <p>由于半透明表皮的表层厚度不同，（这个类似于肥皂泡的五颜六色）。。所以出现下面这种情况。</p>  <p>&#160;</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/JL%60K4SU2(SORWUL9$%7DJY@NG_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="JL`K4SU2(SORWUL9$}JY@NG" border="0" alt="JL`K4SU2(SORWUL9$}JY@NG" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/JL%60K4SU2(SORWUL9$%7DJY@NG_thumb.jpg" width="539" height="329" /></a></p>  <p>&#160;</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/12EL3DSEW%25ZL3Z5%25%5D%5DPXQWO_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="12EL3DSEW%ZL3Z5%]]PXQWO" border="0" 
alt="12EL3DSEW%ZL3Z5%]]PXQWO" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics2_D611/12EL3DSEW%25ZL3Z5%25%5D%5DPXQWO_thumb.jpg" width="450" height="248" /></a></p>  <p>&#160;</p>  <p>OK，暂时到此结束。</p><img src ="http://www.cppblog.com/sosi/aggbug/131067.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-24 17:21 <a href="http://www.cppblog.com/sosi/archive/2010/10/24/131067.html#Feedback" target="_blank" style="text-decoration:none;">发表评论</a></div>]]></description></item><item><title>Expectation-maximization algorithm  EM算法</title><link>http://www.cppblog.com/sosi/archive/2010/10/20/130569.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Wed, 20 Oct 2010 06:44:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/20/130569.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130569.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/20/130569.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130569.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130569.html</trackback:ping><description><![CDATA[<p>&nbsp;&nbsp;&nbsp;&nbsp; In <a href="http://en.wikipedia.org/wiki/Statistics">statistics</a>, an <strong>expectation-maximization</strong> (<strong>EM</strong>) <strong>algorithm</strong> is a <a href="http://en.wikipedia.org/wiki/Iterative_method">method</a> for finding <a href="http://en.wikipedia.org/wiki/Maximum_likelihood">maximum likelihood</a> or<a href="http://en.wikipedia.org/wiki/Maximum_a_posteriori">maximum a posteriori</a> (MAP) estimates of <a href="http://en.wikipedia.org/wiki/Parameter">parameters</a> in <a href="http://en.wikipedia.org/wiki/Statistical_model">statistical models</a>, where <strong><font color="#ff0000">the model depends on unobserved 
</font></strong><a href="http://en.wikipedia.org/wiki/Latent_variable"><strong><font color="#ff0000">latent variables</font></strong></a><strong><font color="#ff0000">.</font></strong> EM is an <a href="http://en.wikipedia.org/wiki/Iterative_method">iterative method</a> which alternates between performing an expectation (E) step, which computes the expectation of the log-likelihood evaluated using the current estimate of the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the <em>E</em> step. These parameter-estimates are then used to determine the distribution of the latent variables in the next E step.</p>  <p>&nbsp;</p>  <p>&nbsp; EM算法可用于很多问题的框架，其中需要估计一组描述概率分布的参数<img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" />，只给定了由此产生的全部数据中能观察到的一部分！</p>  <p>&nbsp; EM算法是一种迭代算法，它由基本的两个步骤组成：</p>  <p>&nbsp; E step：估计期望步骤</p>  <p>&nbsp; 使用对参数的现有估计，计算完整数据的log似然函数关于隐变量的条件期望</p>  <p>&nbsp; M step: 最大化期望步骤</p>  <p>&nbsp; 求出使E步中得到的期望log似然最大化的参数，获得对参数更好的估计。用新计算的参数估计代替之前的参数估计，进行下一步的迭代！</p>  <p>&nbsp;</p>  <p>&nbsp; </p>  <p>观测数据：观测到的随机变量X的IID样本：</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_2.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_thumb.png" width="244" height="51" /></a> </p>  <p>缺失数据：未观测到的隐含变量(隐变量)Y的值：</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_4.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" 
src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_thumb_1.png" width="244" height="56" /></a> </p>  <p>完整数据： 包含观测到的随机变量X和未观测到的随机变量Y的数据，Z=(X,Y)</p>  <p>&nbsp;</p>  <p>似然函数：(似然函数的几种写法)</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/JL%7D)D_HBNI489~H%7DGCRMWVJ_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="JL})D_HBNI489~H}GCRMWVJ" border="0" alt="JL})D_HBNI489~H}GCRMWVJ" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/JL%7D)D_HBNI489~H%7DGCRMWVJ_thumb.jpg" width="381" height="95" /></a></p>  <p>log似然函数为：</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_6.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_thumb_2.png" width="545" height="83" /></a></p>  <p>E step：用对参数的现有估计<img alt="\boldsymbol\theta^{(t)}" src="http://upload.wikimedia.org/math/c/d/5/cd50a7515b9fdfb7102bb2da1634f8cc.png" />计算log似然函数关于隐变量Y的条件期望</p>  <p>&nbsp; <a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_8.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; margin-left: 0px; border-top: 0px; margin-right: 0px; border-right: 0px" title="image" border="0" alt="image" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_thumb_3.png" width="398" height="100" /></a> </p>  <p>其中需要用到贝叶斯公式：</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_10.png"><img 
style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_thumb_4.png" width="350" height="157" /></a>&nbsp;</p>  <p>M step：最大化该期望，获得对参数更好的估计</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_12.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="image" border="0" alt="image" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/image_thumb_5.png" width="307" height="73" /></a> </p>  <p>&nbsp;</p>  <p>维基中的表述是这样子：</p>  <p>Given a <a href="http://en.wikipedia.org/wiki/Statistical_model">statistical model</a> consisting of a set <img alt="\mathbf{X}" src="http://upload.wikimedia.org/math/5/9/8/598f6444904755dda4a859a1e377468e.png" /> of observed data, a set of unobserved latent data or <a href="http://en.wikipedia.org/wiki/Missing_values">missing values</a> <strong>Y,</strong> and a vector of unknown parameters <img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" />, along with a <a href="http://en.wikipedia.org/wiki/Likelihood_function">likelihood function</a> <img alt="L(\boldsymbol\theta; \mathbf{X}, \mathbf{Z}) = p(\mathbf{X}, \mathbf{Z}|\boldsymbol\theta)" src="http://upload.wikimedia.org/math/4/e/4/4e499deec73c2898a0487c2a5763c60d.png" />, the <a href="http://en.wikipedia.org/wiki/Maximum_likelihood_estimate">maximum likelihood estimate</a> (MLE) of the unknown parameters is determined by the <a href="http://en.wikipedia.org/wiki/Marginal_likelihood">marginal likelihood</a> of the observed data&nbsp; </p>  <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <a 
href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/CR%25M2I%5BQD88%5BN5$3(H))%25ZR_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="CR%M2I[QD88[N5$3(H))%ZR" border="0" alt="CR%M2I[QD88[N5$3(H))%ZR" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/CR%25M2I%5BQD88%5BN5$3(H))%25ZR_thumb.jpg" width="363" height="93" /></a></p>  <p>However, this quantity is often <a href="http://en.wikipedia.org/wiki/Intractable">intractable</a>.</p>  <p>The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:</p> <dl><dd><strong>Expectation step (E-step)</strong>: Calculate the <a href="http://en.wikipedia.org/wiki/Expected_value">expected value</a> of the <a href="http://en.wikipedia.org/wiki/Log_likelihood">log likelihood</a> function, with respect to the <a href="http://en.wikipedia.org/wiki/Conditional_probability_distribution">conditional distribution</a> of <strong>Y </strong>given <img alt="\mathbf{X}" src="http://upload.wikimedia.org/math/5/9/8/598f6444904755dda4a859a1e377468e.png" /> under the current estimate of the parameters <img alt="\boldsymbol\theta^{(t)}" src="http://upload.wikimedia.org/math/c/d/5/cd50a7515b9fdfb7102bb2da1634f8cc.png" />: </dd></dl>  <p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/A7DFNWMY)KAI%5DT5)_OMKRUD_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="A7DFNWMY)KAI]T5)_OMKRUD" border="0" alt="A7DFNWMY)KAI]T5)_OMKRUD" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/Expectationmaximizationalgorithm_C294/A7DFNWMY)KAI%5DT5)_OMKRUD_thumb.jpg" width="431" height="87" /></a></p> <dl><dd><strong>Maximization step 
(M-step)</strong>: Find the parameter that maximizes this quantity: <dl><dd><img alt="\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta} \operatorname{arg\,max} \ Q(\boldsymbol\theta|\boldsymbol\theta^{(t)}) \, " src="http://upload.wikimedia.org/math/1/c/9/1c9373de85c0d107bc159afd8eb8c088.png" /></dd></dl></dd></dl>  <p>Note that in typical models to which EM is applied:</p>  <ol>   <li>The observed data points <img alt="\mathbf{X}" src="http://upload.wikimedia.org/math/5/9/8/598f6444904755dda4a859a1e377468e.png" /> may be <a href="http://en.wikipedia.org/wiki/Discrete">discrete</a> (taking one of a fixed number of values, or taking values that must be integers) or <a href="http://en.wikipedia.org/wiki/Continuous">continuous</a> (taking a continuous range of real numbers, possibly infinite). There may in fact be a vector of observations associated with each data point. </li>    <li>The <a href="http://en.wikipedia.org/wiki/Missing_values">missing values</a> (aka <a href="http://en.wikipedia.org/wiki/Latent_variables">latent variables</a>) <strong>Y</strong> are <a href="http://en.wikipedia.org/wiki/Discrete">discrete</a>, drawn from a fixed number of values, and there is one latent variable per observed data point. </li>    <li>The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. 
associated with all data points whose corresponding latent variable has a particular value).</li> </ol>  <p>However, it is possible to apply EM to other sorts of models.</p>  <p>The motivation is as follows.<font color="#ff00ff"> If we know the value of the parameters <img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" />, we can usually find the value of the latent variables<strong> Y</strong> by maximizing the log-likelihood over all possible values of <strong>Y,</strong> either simply by iterating over <strong>Y </strong>or through an algorithm such as the </font><a href="http://en.wikipedia.org/wiki/Viterbi_algorithm"><font color="#ff00ff">Viterbi algorithm</font></a><font color="#ff00ff"> for </font><a href="http://en.wikipedia.org/wiki/Hidden_Markov_model"><font color="#ff00ff">hidden Markov models</font></a><font color="#ff00ff">. Conversely, if we know the value of the latent variables <strong>Y</strong>, we can find an estimate of the parameters <img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" /> fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both <img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" /> and <strong>Y</strong> are unknown</font>:</p>  <ol>   <li><font color="#ff0000">First, initialize the parameters <img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" /> to some random values. </font></li>    <li><font color="#ff0000">Compute the best value for <strong>Y </strong>given these parameter values. 
</font></li>    <li><font color="#ff0000">Then, use the just-computed values of <strong>Y</strong> to compute a better estimate for the parameters <img alt="\boldsymbol\theta" src="http://upload.wikimedia.org/math/f/3/7/f371c7df934e1fa0f81fb845eb819600.png" />. Parameters associated with a particular value of <strong>Y</strong> will use only those data points whose associated latent variable has that value. </font></li>    <li><font color="#ff0000">Finally, iterate until convergence.</font></li> </ol>  <p>The algorithm as just described will in fact work, and is commonly called <em>hard EM</em>. The <a href="http://en.wikipedia.org/wiki/K-means_algorithm">K-means algorithm</a> is an example of this class of algorithms.</p>  <p>However, we can do somewhat better: rather than making a hard choice for <strong>Y</strong> given the current parameter values and averaging only over the set of data points associated with a particular value of <strong>Y</strong>, we instead determine the probability of each possible value of <strong>Y</strong> for each data point, and then use the probabilities associated with a particular value of <strong>Y</strong> to compute a <a href="http://en.wikipedia.org/wiki/Weighted_average">weighted average</a> over the entire set of data points. The resulting algorithm is commonly called <em>soft EM</em>, and is the type of algorithm normally associated with EM. The counts used to compute these weighted averages are called <em>soft counts</em> (as opposed to the <em>hard counts</em> used in a hard-EM-type algorithm such as K-means). The probabilities computed for <strong>Y</strong> are <a href="http://en.wikipedia.org/wiki/Posterior_probabilities">posterior probabilities</a> and are what is computed in the E-step. 
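To make the E-step and M-step concrete, here is a minimal soft-EM sketch for a two-component one-dimensional Gaussian mixture (an illustrative sketch only; the function and variable names are my own, not from any library or from the original post):

```python
import math
import random

def em_gmm_1d(xs, iters=50):
    """Soft EM for a two-component 1-D Gaussian mixture.
    theta = (weights w, means mu, variances var)."""
    # Simple initialization (EM is sensitive to the starting point).
    w = [0.5, 0.5]
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior probability ("soft count") of each component per point,
        # under the current parameter estimate theta^(t).
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate each parameter as a weighted average
        # over the entire data set, weighted by the soft counts.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against a collapsing component
    return w, mu, var

# Two well-separated clusters around 0 and 10.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(10, 1) for _ in range(200)]
w, mu, var = em_gmm_1d(data)
print(sorted(mu))  # the two estimated means, near 0 and 10
```

Note the contrast with hard EM: no point is ever assigned outright to one component; both means are updated from every point, weighted by its posterior.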
The soft counts used to compute new parameter values are what is computed in the M-step.</p>  <p>Summary:</p>  <p>EM is frequently used for <a href="http://en.wikipedia.org/wiki/Data_clustering">data clustering</a> in <a href="http://en.wikipedia.org/wiki/Machine_learning">machine learning</a> and <a href="http://en.wikipedia.org/wiki/Computer_vision">computer vision</a>.</p>  <p>EM converges to a local optimum, but it is not guaranteed to reach the global optimum.</p>  <p>EM is rather sensitive to the initial values, so a good, fast initialization procedure is usually needed.</p>  <p>&nbsp;</p>  <p>This is from my Machine Learning course; I will stop the summary here.&nbsp;The next task is a write-up of GM_EM: multivariate Gaussian density estimation!</p><img src ="http://www.cppblog.com/sosi/aggbug/130569.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-20 14:44 <a href="http://www.cppblog.com/sosi/archive/2010/10/20/130569.html#Feedback" target="_blank" style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>k-means clustering</title><link>http://www.cppblog.com/sosi/archive/2010/10/19/130483.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Tue, 19 Oct 2010 10:57:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/19/130483.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130483.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/19/130483.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130483.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130483.html</trackback:ping><description><![CDATA[<p>&#160;&#160;&#160;&#160;&#160; In <a href="http://en.wikipedia.org/wiki/Statistics">statistics</a> and <a href="http://en.wikipedia.org/wiki/Machine_learning">machine learning</a>, <b><i>k</i>-means clustering</b> is a method of <a href="http://en.wikipedia.org/wiki/Cluster_analysis">cluster analysis</a> which aims to <a href="http://en.wikipedia.org/wiki/Partition_of_a_set">partition</a> <i>n</i> 
observations into <i>k</i> clusters in which each observation belongs to the cluster with the nearest <a href="http://en.wikipedia.org/wiki/Mean">mean</a>. It is similar to the <a href="http://en.wikipedia.org/wiki/Expectation-maximization_algorithm">expectation-maximization algorithm</a> for <a href="http://en.wikipedia.org/wiki/Mixture_model">mixtures</a> of <a href="http://en.wikipedia.org/wiki/Gaussian_distribution">Gaussians</a> in that they both attempt to find the centers of natural clusters in the data, as well as in the iterative refinement approach employed by both algorithms.</p>  <p>&#160;</p>  <h4>Description</h4>  <p>Given a set of observations (<b>x</b><sub>1</sub>, <b>x</b><sub>2</sub>, …, <b>x</b><sub><i>n</i></sub>), where each observation is a <i>d</i>-dimensional real vector, <i>k</i>-means clustering aims to partition the <i>n</i> observations into <i>k</i> sets (<i>k</i> &lt; <i>n</i>) <b>S</b> = {<i>S</i><sub>1</sub>, <i>S</i><sub>2</sub>, …, <i>S</i><sub><i>k</i></sub>} so as to minimize the within-cluster sum of squares (WCSS):</p> <dl><dd><img alt="\underset{\mathbf{S}} \operatorname{arg\,min} \sum_{i=1}^{k} \sum_{\mathbf x_j \in S_i} \left\| \mathbf x_j - \boldsymbol\mu_i \right\|^2 " src="http://upload.wikimedia.org/math/5/0/4/504ff7618c6ce2cbe0255e1aa4dfbc73.png" /></dd></dl>  <p>where <i><b>μ</b></i><sub><i>i</i></sub> is the mean of points in <i>S</i><sub><i>i</i></sub>.</p>  <p>&#160;</p>  <h4>Algorithms</h4>  <p>Regarding computational complexity, the <i>k</i>-means clustering problem is:</p>  <ul>   <li><b><a href="http://en.wikipedia.org/wiki/NP-hard">NP-hard</a></b> in general Euclidean space <i>d</i> even for 2 clusters <sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-3">[4]</a></sup><sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-4">[5]</a></sup></li>    <li><b>NP-hard</b> for a general number of clusters <i>k</i> even in the plane <sup><a 
href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-5">[6]</a></sup></li>    <li>If <i>k</i> and <i>d</i> are fixed, the problem can be exactly solved in time <i><b>O(n<sup>dk+1</sup> log n)</b></i>, where <i>n</i> is the number of entities to be clustered <sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-6">[7]</a></sup></li> </ul>  <p>Thus, a variety of <a href="http://en.wikipedia.org/wiki/Heuristic_algorithm">heuristic algorithms</a> are generally used.</p>  <p>&#160;</p>  <p>So note that this is a typical NP-hard problem; what we usually look for and use in practice are heuristic methods.</p>  <h3>Standard algorithm</h3>  <p>The most common algorithm uses <font color="#ff0000">an iterative refinement technique</font>.</p>  <p> Due to its ubiquity it is often called the <b><i>k</i>-means algorithm</b>; it is also referred to as <b><a href="http://en.wikipedia.org/wiki/Lloyd%27s_algorithm">Lloyd's algorithm</a></b>, particularly in the computer science community.</p>  <p>Given an initial set of <i>k</i> means <b>m</b><sub>1</sub><sup>(1)</sup>,…,<b>m</b><sub><i>k</i></sub><sup>(1)</sup>, which may be specified randomly or by some heuristic, the algorithm proceeds by alternating between two steps:<sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-7">[8]</a></sup></p> <dl><dd><b>Assignment step</b>: Assign each observation to the cluster with the closest mean (i.e. <font color="#ff0000">partition the observations according to the </font><a href="http://en.wikipedia.org/wiki/Voronoi_diagram"><font color="#ff0000">Voronoi diagram</font></a><font color="#ff0000"> generated by the means (equivalently, the original space is partitioned into the <i>k</i> cells of the Voronoi diagram of the means; the norm here is the 2-norm, i.e. Euclidean distance, which is what corresponds to the Voronoi diagram)</font>). 
<dl><dd><img alt="S_i^{(t)} = \left\{ \mathbf x_j&#160;: \big\| \mathbf x_j - \mathbf m^{(t)}_i \big\| \leq \big\| \mathbf x_j - \mathbf m^{(t)}_{i^*} \big\| \text{ for all }i^*=1,\ldots,k \right\} " src="http://upload.wikimedia.org/math/0/7/f/07fa3a2602838281be3a79accd2f9117.png" /></dd><dd>&#160;</dd></dl></dd><dd><b>Update step</b>: Calculate the new means to be the centroid of the observations in the cluster. <dl><dd><img alt="\mathbf m^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{\mathbf x_j \in S^{(t)}_i} \mathbf x_j " src="http://upload.wikimedia.org/math/6/f/3/6f3fd0c008b8c1b0bd1da3cec7033706.png" /></dd><dd>Recompute the means.</dd></dl></dd></dl>  <p>The algorithm is deemed to have converged when the assignments no longer change. </p>  <p>&#160;</p>  <p><img alt="" src="http://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/K_Means_Example_Step_1.svg/124px-K_Means_Example_Step_1.svg.png" width="124" height="120" /><img alt="" src="http://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/K_Means_Example_Step_2.svg/139px-K_Means_Example_Step_2.svg.png" width="139" height="120" /><img alt="" src="http://upload.wikimedia.org/wikipedia/commons/thumb/3/3e/K_Means_Example_Step_3.svg/139px-K_Means_Example_Step_3.svg.png" width="139" height="120" /><img alt="" src="http://upload.wikimedia.org/wikipedia/commons/thumb/d/d2/K_Means_Example_Step_4.svg/139px-K_Means_Example_Step_4.svg.png" width="139" height="120" /></p>  <p>The figures above show the overall flow of the algorithm.</p>  <p>&#160;</p>  <p>As it is a heuristic algorithm, there is no guarantee that it will converge to the global optimum, and the result may depend on the initial clusters. As the algorithm is usually very fast, it is common to run it <font color="#ff0000">multiple times</font> with different starting conditions. 
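The assignment and update steps above can be sketched in a few lines of Python (an illustrative sketch for one-dimensional points only; the function and variable names are my own, not from the original post):

```python
import random

def kmeans(xs, k, iters=100):
    """Lloyd's algorithm for 1-D points: alternate assignment and update steps."""
    xs_sorted = sorted(xs)
    # Spread the initial means across the data (a simple heuristic initialization).
    means = [xs_sorted[(2 * i + 1) * len(xs) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the cluster with the closest mean
        # (in 1-D the squared Euclidean distance is just (x - m)**2).
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: (x - means[j]) ** 2)
            clusters[nearest].append(x)
        # Update step: each mean becomes the centroid of its cluster.
        new_means = [sum(c) / len(c) if c else m for c, m in zip(clusters, means)]
        if new_means == means:  # assignments no longer change: converged
            break
        means = new_means
    return sorted(means)

random.seed(1)
pts = [random.gauss(0, 0.5) for _ in range(100)] + [random.gauss(5, 0.5) for _ in range(100)]
centers = kmeans(pts, 2)
print(centers)  # two means, near 0 and 5
```

The empty-cluster guard (`if c else m`) is one common convention; real implementations may instead re-seed an empty cluster.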
However, in the worst case, <i>k</i>-means can be very slow to converge: in particular it has been shown that there exist certain point sets, even in 2 dimensions, on which <i>k</i>-means takes exponential time, that is 2<sup>Ω(<var>n</var>)</sup>, to converge<sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-8">[9]</a></sup><sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-9">[10]</a></sup>. <strong>These point sets do not seem to arise in practice: this is corroborated by the fact that the </strong><a href="http://en.wikipedia.org/wiki/Smoothed_analysis"><strong>smoothed</strong></a><strong> running time of <i>k</i>-means is polynomial<sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-10">[11]</a></sup>.</strong></p>  <p><strong>The worst-case running time is 2<sup>Ω(<var>n</var>)</sup>, but in practice it generally behaves like a polynomial-time algorithm.</strong></p>  <p>The &quot;assignment&quot; step is also referred to as <b>expectation step</b>, the &quot;update step&quot; as <b>maximization step</b>, making this algorithm a variant of the <i>generalized</i> <a href="http://en.wikipedia.org/wiki/Expectation-maximization_algorithm">expectation-maximization algorithm</a>.</p>  <p></p>  <p></p>  <p></p>  <p></p>  <h3>Variations</h3>  <ul>   <li>The <a href="http://en.wikipedia.org/wiki/Expectation-maximization_algorithm">expectation-maximization algorithm</a> (EM algorithm) maintains probabilistic assignments to clusters, instead of deterministic assignments, and multivariate Gaussian distributions instead of means. </li>    <li><a href="http://en.wikipedia.org/wiki/K-means%2B%2B">k-means++</a> seeks to choose better starting clusters. 
</li>    <li>The filtering algorithm uses <a href="http://en.wikipedia.org/wiki/Kd-tree">kd-trees</a> to speed up each k-means step.<sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-11">[12]</a></sup></li>    <li>Some methods attempt to speed up each k-means step using <a href="http://en.wikipedia.org/wiki/Coreset">coresets</a><sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-12">[13]</a></sup> or the <a href="http://en.wikipedia.org/wiki/Triangle_inequality">triangle inequality</a>.<sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-13">[14]</a></sup></li>    <li>Escape local optima by swapping points between clusters.<sup><a href="http://en.wikipedia.org/wiki/K-means_clustering#cite_note-hartigan1979-14">[15]</a></sup></li> </ul>  <p><sup></sup></p>  <p><sup></sup></p>  <h4>Discussion</h4>  <p><img alt="File:Iris Flowers Clustering kMeans.svg" src="http://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Iris_Flowers_Clustering_kMeans.svg/660px-Iris_Flowers_Clustering_kMeans.svg.png" width="660" height="309" /></p>  <p><i>k</i>-means clustering result for the <a href="http://en.wikipedia.org/wiki/Iris_flower_data_set">Iris flower data set</a> and actual species visualized using <a href="http://en.wikipedia.org/wiki/Environment_for_DeveLoping_KDD-Applications_Supported_by_Index-Structures">ELKI</a>. Cluster means are marked using larger, semi-transparent symbols.</p>  <p><img alt="File:ClusterAnalysis Mouse.svg" src="http://upload.wikimedia.org/wikipedia/commons/thumb/0/09/ClusterAnalysis_Mouse.svg/800px-ClusterAnalysis_Mouse.svg.png" width="800" height="323" /></p>  <p><i>k</i>-means clustering and EM clustering on an artificial dataset (&quot;mouse&quot;). 
<font color="#800080">The tendency of <i>k</i>-means to produce equi-sized clusters leads to bad results, while EM benefits from the Gaussian distribution present in the data set</font></p>  <p>The two key features of <i>k</i>-means which make it efficient are often regarded as its biggest drawbacks:</p>  <ul>   <li><a href="http://en.wikipedia.org/wiki/Euclidean_distance">Euclidean distance</a> is used as a <a href="http://en.wikipedia.org/wiki/Metric_(mathematics)">metric</a> and <a href="http://en.wikipedia.org/wiki/Variance">variance</a> is used as a measure of cluster scatter. </li>    <li>The number of clusters <i>k</i> is an input parameter: an inappropriate choice of <i>k</i> may yield poor results. That is why, when performing k-means, it is important to run diagnostic checks for <a href="http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set">determining the number of clusters in the data set</a>.</li> </ul>  <p>A key limitation of <i>k</i>-means is its cluster model.<font color="#ff0000"> The concept is based on spherical clusters that are separable in a way so that the mean value converges towards the cluster center.</font> The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When for example applying <i>k</i>-means with a value of <i>k</i> = 3 onto the well-known <a href="http://en.wikipedia.org/wiki/Iris_flower_data_set">Iris flower data set</a>, the result often fails to separate the three <a href="http://en.wikipedia.org/wiki/Iris_(plant)">Iris</a> species contained in the data set. With <i>k</i> = 2, the two visible clusters (one containing two species) will be discovered, whereas with<i>k</i> = 3 one of the two clusters will be split into two even parts. In fact, <i>k</i> = 2 is more appropriate for this data set, despite the data set containing 3 <i>classes</i>. 
As with any other clustering algorithm, the <i>k</i>-means result relies on the data set to satisfy the assumptions made by the clustering algorithms. It works very well on some data sets, while failing miserably on others.</p>  <p><font color="#ff0000">The result of <i>k</i>-means can also be seen as the </font><a href="http://en.wikipedia.org/wiki/Voronoi_diagram"><font color="#ff0000">Voronoi cells</font></a><font color="#ff0000"> of the cluster means.</font> Since data is split halfway between cluster means, this can lead to suboptimal splits as can be seen in the &quot;mouse&quot; example. The Gaussian models used by the <a href="http://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation-maximization algorithm</a> (which can be seen as a generalization of <i>k</i>-means) are more flexible here by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than <i>k</i>-means as well as correlated clusters (not in this example).</p>  <p>&#160;</p>  <p>This post introduces the concepts; code and a paper on k-means optimization will come later:</p>  <p>Fast Hierarchical Clustering Algorithm Using Locality-Sensitive Hashing</p><img src ="http://www.cppblog.com/sosi/aggbug/130483.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-19 18:57 <a href="http://www.cppblog.com/sosi/archive/2010/10/19/130483.html#Feedback" target="_blank" style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>2010 Baidu Campus Recruitment Exam R D-C-2</title><link>http://www.cppblog.com/sosi/archive/2010/10/18/130245.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Mon, 18 Oct 2010 04:12:00 
GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/18/130245.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130245.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/18/130245.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130245.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130245.html</trackback:ping><description><![CDATA[<p>2010 Baidu Campus Recruitment Exam&#160; R D-C-2   <br />Part 1: Short answers (30 points)    <br />1.&#160;&#160;&#160; Define a stack data structure with an additional min function that returns the smallest element on the stack; min, push, and pop must all run in O(1) time. Briefly describe your approach. (10 points)    <br />2.&#160;&#160;&#160; Read the code, explain the meaning of its output, and find the flaws.&#160; (10 points)    <br />Question 1. Write out the first 7 lines of the code's output and explain the meaning of the sequence.    <br />Question 2. Does the code contain any unsafe hazard? Why?    <br />#include &lt;stdio.h&gt;    <br />#include &lt;string.h&gt; </p>  <p>const int MAX_LEN = 128;   <br />const int MAX_LINE = 20;    <br />int main(int argc, char* argv[])    <br />{    <br />&#160;&#160;&#160; char str[MAX_LEN] = &quot;1&quot;;    <br />&#160;&#160;&#160; char tmp_str[MAX_LEN] = &quot;&quot;;    <br />&#160;&#160;&#160; char buf[MAX_LEN] = &quot;&quot;; </p>  <p>&#160;&#160;&#160; printf(&quot;%s\n&quot;,str);   <br />&#160;&#160;&#160; for (int line = 1;line &lt;= MAX_LINE;++line)    <br />&#160;&#160;&#160; {    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; strcpy(tmp_str,str);    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; str[0] = '\0';    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; for (int i=0;tmp_str[i] != 0;++i)    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; {    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; char ch = tmp_str[i];    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; int count = 1;    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; for (;tmp_str[i+1] == tmp_str[i];++i)    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; {    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; ++count;    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; }    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; sprintf(buf,&quot;%d%c&quot;,count,ch);    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; strcat(str,buf);    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; }    <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; printf(&quot;%s\n&quot;,str);    <br />&#160;&#160;&#160; }    <br />&#160;&#160;&#160; return 0;    <br />} </p>  <p>3.&#160;&#160;&#160; Suppose the data is stored, respectively, in a linear list, a balanced binary tree, or a hash table. Analyze the advantages and disadvantages of each. (10 points) </p>  <p>Part 2: Algorithm and program design (40 points)   <br />1.&#160;&#160;&#160; There is a circular string of beads, m beads in total; each bead has its own color, and there are n (n&lt;=10) colors altogether. Cut out a contiguous segment that contains all the different colors and is as short as possible. Describe the idea of your algorithm in detail (pseudocode may be given to help if needed), and analyze its time and space complexity. (20 points)    <br />2.&#160;&#160;&#160; Design a strnumcmp function. Compared with the ordinary strcmp, the difference is that when digits are encountered in the strings, they are compared by numeric value. When only one of the two strings has a digit at that position, fall back to the ordinary strcmp behavior. (20 points)    <br />For example,    <br />&#160;&#160; strnumcmp ordering: &quot;abc&quot;&lt;&quot;abc#&quot;&lt;&quot;abc1&quot;&lt;&quot;abc2&quot;&lt;&quot;abc10&quot;&lt;&quot;abcd&quot;    <br />ordinary strcmp ordering: &quot;abc&quot;&lt;&quot;abc#&quot;&lt;&quot;abc1&quot;&lt;&quot;abc10&quot;&lt;&quot;abc2&quot;&lt;&quot;abcd&quot;    <br />Requirement: give complete code; subject to meeting the goal, make it as efficient and concise as possible. </p>  <p>Part 3: System design (30 points)   <br />Large-scale dictionaries are often used in large-scale data processing. We need to handle a dictionary of word collocations, under the following conditions:    <br />1)&#160;&#160;&#160; The entries in the dictionary are collocations of two words. For example, if the dictionary has the two words 今天 (today) and 晚上 (tonight), the collocations they form are 今天|晚上 and 晚上|今天.    <br />2)&#160;&#160;&#160; The word set is large, on the order of 100,000 words.    <br />3)&#160;&#160;&#160; A word does not collocate with every other word; it usually collocates with no more than 10,000 others.    <br />4)&#160;&#160;&#160; The dictionary is read-heavy: usually over a thousand requests per second, with almost no writes.    <br />Design a dictionary service system that, when the request is a collocation of two words, can quickly return the information about that collocation. Use as few resources as possible, and estimate the machine resources required.</p><img src ="http://www.cppblog.com/sosi/aggbug/130245.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-18 12:12 <a href="http://www.cppblog.com/sosi/archive/2010/10/18/130245.html#Feedback" target="_blank" 
style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>Hat-Check Problem</title><link>http://www.cppblog.com/sosi/archive/2010/10/17/130199.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Sun, 17 Oct 2010 07:21:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/17/130199.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130199.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/17/130199.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130199.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130199.html</trackback:ping><description><![CDATA[<p>This is exercise 5.2-4 in Introduction to Algorithms (CLRS).</p>  <p>&#160; Each of n customers gives his hat to the front-desk clerk before entering a hotel. When each customer leaves, the clerk picks a hat at random and hands it back. Question: what is the expected number of customers who get back their own hat?</p>  <p>Start from the simple cases: for n=2 and n=3 it is easy to work out the result res=1.   <br />For n&gt;=4, let us derive it rigorously with probability.    <br />First define P(n) as the answer to Bernoulli's misdelivered-letters (derangement) problem:    <br />P_n=n!\sum_{i=2}^{n}\frac{(-1)^i}{i!}</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/HatCheckproblem_D7EE/SLE%5B7%7DIS3_~VFKO3%7DMB71JL_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="SLE[7}IS3_~VFKO3}MB71JL" border="0" alt="SLE[7}IS3_~VFKO3}MB71JL" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/HatCheckproblem_D7EE/SLE%5B7%7DIS3_~VFKO3%7DMB71JL_thumb.jpg" width="344" height="126" /></a></p>  <p>Then the number of arrangements in which exactly i of the n people are mismatched (so the other n-i people get their own hats) is C(n,i)P(i).&#160;&#160; <br />There are n! arrangements in total, and 2^n match patterns (view the people as a 0/1 string, where 1 means the person got his own hat).    <br />So the answer is the sum of    <br />(n-i)*C(n,i)P(i)/n!,    <br />which simplifies to&#160; <br />\sum_{i=2}^{n-1}\frac{n-i}{(n-i)!}\sum_{j=2}^{i}\frac{(-1)^j}{j!}+\frac{1}{(n-1)!}</p>  <p>(More directly: by linearity of expectation, let X_j indicate that customer j gets his own hat; E[X_j]=1/n, so the expected number is n*(1/n)=1.)</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/HatCheckproblem_D7EE/U~HZ%5BAVXP)X2TC%604%7D(3J%5BXV_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="U~HZ[AVXP)X2TC`4}(3J[XV" border="0" alt="U~HZ[AVXP)X2TC`4}(3J[XV" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/HatCheckproblem_D7EE/U~HZ%5BAVXP)X2TC%604%7D(3J%5BXV_thumb.jpg" width="480" height="140" /></a>    <br />One can show that this expression equals 1,    <br />so the answer is 1.</p><img src ="http://www.cppblog.com/sosi/aggbug/130199.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-17 15:21 <a href="http://www.cppblog.com/sosi/archive/2010/10/17/130199.html#Feedback" target="_blank" style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>EECS Graphics 1</title><link>http://www.cppblog.com/sosi/archive/2010/10/16/130154.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Sat, 16 Oct 2010 10:51:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/16/130154.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130154.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/16/130154.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130154.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130154.html</trackback:ping><description><![CDATA[<p> This is the start of a series of reading notes on Fundamentals of Computer Graphics!</p>  <p>Lecture 1 </p>  <ul>   <li>Shirley 1, 3.8 </li>    <li>Review Shirley 2.1-2.4, 2.10-2.11, 5</li> </ul>  <p>Several ways of representing computer graphics:</p>  <p>Sampling-based (pixel) methods and object-based methods.&#160; I had never paid attention to the difference before, but it is real: as the image is magnified, an object-based representation does not degrade.</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics1_1067D/F)FMX8_%7D%253O%5D7K%5DU(2%60WM~F_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="F)FMX8_}%3O]7K]U(2`WM~F" border="0" alt="F)FMX8_}%3O]7K]U(2`WM~F" 
src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics1_1067D/F)FMX8_%7D%253O%5D7K%5DU(2%60WM~F_thumb.jpg" width="433" height="229" /></a></p>  <p>The book also summarizes a third, functional representation: this is the classic vector graphics, where the outlines are functions and the interior is filled with simple colors! Below is a famous fractal.</p> <a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics1_1067D/6E2K(TJ%5BV)B0V_@7MG5L%7B55_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="6E2K(TJ[V)B0V_@7MG5L{55" border="0" alt="6E2K(TJ[V)B0V_@7MG5L{55" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics1_1067D/6E2K(TJ%5BV)B0V_@7MG5L%7B55_thumb.jpg" width="352" height="232" /></a>  <p>&#160;</p>  <p>At the end of the chapter, the author points out that our eyes do not perceive absolute intensity values! For example, A and B in the figure have the same image gray level.</p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics1_1067D/%7DZ%5BKDEGDG3X3DW$ZCZ%25I4UD_2.jpg"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="}Z[KDEGDG3X3DW$ZCZ%I4UD" border="0" alt="}Z[KDEGDG3X3DW$ZCZ%I4UD" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/EECSGraphics1_1067D/%7DZ%5BKDEGDG3X3DW$ZCZ%25I4UD_thumb.jpg" width="655" height="418" /></a></p>  <p>Then on to the book itself; it feels pretty good. Haha&#160;&#160;&#160;&#160; !_!~~</p><img src ="http://www.cppblog.com/sosi/aggbug/130154.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-16 18:51 <a href="http://www.cppblog.com/sosi/archive/2010/10/16/130154.html#Feedback" target="_blank" style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>Must-install Python components (repost)</title><link>http://www.cppblog.com/sosi/archive/2010/10/16/130146.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Sat, 16 Oct 2010 09:45:00 
GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/16/130146.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130146.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/16/130146.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130146.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130146.html</trackback:ping><description><![CDATA[<p>Must-install Python components</p>  <p>A record of the Python components I always install after getting a new machine; basically all of them are installed via setup.py.</p>  <ul>   <li>simplejson <a href="http://pypi.python.org/pypi/simplejson">http://pypi.python.org/pypi/simplejson</a></li>    <li>Imaging (PIL) <a href="http://www.pythonware.com/products/pil/">http://www.pythonware.com/products/pil/</a></li>    <li>Tornado <a href="http://www.tornadoweb.org/">http://www.tornadoweb.org/</a></li>    <li>MySQLdb <a href="http://sourceforge.net/projects/mysql-python/">http://sourceforge.net/projects/mysql-python/</a></li>    <li>couchdb-python <a href="http://code.google.com/p/couchdb-python/">http://code.google.com/p/couchdb-python/</a></li>    <li>redis-py <a href="http://github.com/andymccurdy/redis-py">http://github.com/andymccurdy/redis-py</a></li>    <li>Twisted <a href="http://twistedmatrix.com/">http://twistedmatrix.com/</a></li>    <li>Storm <a href="https://storm.canonical.com/">https://storm.canonical.com/</a></li>    <li>SQLAlchemy <a href="http://www.sqlalchemy.org/">http://www.sqlalchemy.org/</a></li> </ul>  <p>Of course, there are also CouchDBX, redis (via MacPorts), memcached (via MacPorts), and the GAE SDK.</p>  <p>&#160;</p>  <p>Compiled by a Machine Learning postgraduate. Fun!</p>  <p>Re:</p>  <p><a href="http://hello-math.appspot.com/python-component">http://hello-math.appspot.com/python-component</a></p><img src ="http://www.cppblog.com/sosi/aggbug/130146.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" 
target="_blank">Sosi</a> 2010-10-16 17:45 <a href="http://www.cppblog.com/sosi/archive/2010/10/16/130146.html#Feedback" target="_blank" style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>Course Reflections: Algorithm Analysis and Design</title><link>http://www.cppblog.com/sosi/archive/2010/10/16/130129.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Sat, 16 Oct 2010 07:08:00 GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/16/130129.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/130129.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/16/130129.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/130129.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/130129.html</trackback:ping><description><![CDATA[<p>&#160; This is my first time taking Algorithm Analysis and Design. The teacher is really quite good; of all the algorithms courses I have attended, this one has the best summaries. Here are a few diagrams I drew myself:</p>  <p>&#160;</p>  <p>&#160;</p>  <p><strong>When we face a practical problem, what should we do?</strong></p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/4075521487a5_D4C2/%E7%AE%97%E6%B3%95%E5%88%86%E6%9E%90%E9%80%94%E5%BE%84.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="算法分析途径" border="0" alt="算法分析途径" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/4075521487a5_D4C2/%E7%AE%97%E6%B3%95%E5%88%86%E6%9E%90%E9%80%94%E5%BE%84_thumb.png" width="511" height="451" /></a></p>  <p>&#160;</p>  <p><strong>Once it is abstracted into a mathematical problem, the way we should think is:</strong></p>  <p><a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/4075521487a5_D4C2/%E6%95%B0%E5%AD%A6%E9%97%AE%E9%A2%98%E8%A7%A3%E7%9A%84%E6%96%B9%E5%BC%8F.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="数学问题解的方式" border="0" alt="数学问题解的方式" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/4075521487a5_D4C2/%E6%95%B0%E5%AD%A6%E9%97%AE%E9%A2%98%E8%A7%A3%E7%9A%84%E6%96%B9%E5%BC%8F_thumb.png" width="624" height="497" /></a>&#160;&#160; </p>  <p>&#160;</p>  <p><strong>When observing, what do we need to do?</strong></p>  <p></p>  <p></p>  <p></p>  <p></p> <a href="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/4075521487a5_D4C2/%E8%A7%82%E5%AF%9F%E7%9A%84%E7%9B%AE%E6%A0%87.png"><img style="border-bottom: 0px; border-left: 0px; display: inline; border-top: 0px; border-right: 0px" title="观察的目标" border="0" alt="观察的目标" src="http://www.cppblog.com/images/cppblog_com/sosi/WindowsLiveWriter/4075521487a5_D4C2/%E8%A7%82%E5%AF%9F%E7%9A%84%E7%9B%AE%E6%A0%87_thumb.png" width="598" height="230" /></a>  <p></p>  <p>&#160;</p><img src ="http://www.cppblog.com/sosi/aggbug/130129.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-16 15:08 <a href="http://www.cppblog.com/sosi/archive/2010/10/16/130129.html#Feedback" target="_blank" style="text-decoration:none;">Leave a comment</a></div>]]></description></item><item><title>Common English Expressions in Mathematics</title><link>http://www.cppblog.com/sosi/archive/2010/10/08/129066.html</link><dc:creator>Sosi</dc:creator><author>Sosi</author><pubDate>Fri, 08 Oct 2010 11:53:00 
GMT</pubDate><guid>http://www.cppblog.com/sosi/archive/2010/10/08/129066.html</guid><wfw:comment>http://www.cppblog.com/sosi/comments/129066.html</wfw:comment><comments>http://www.cppblog.com/sosi/archive/2010/10/08/129066.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/sosi/comments/commentRss/129066.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/sosi/services/trackbacks/129066.html</trackback:ping><description><![CDATA[<h5><strong>Common English Expressions for Mathematics</strong></h5>  <h5>&#160;</h5>  <h5>1. Logic&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; <br />∃&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; there exists     <br />∀&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; for all     <br />p⇒q&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; p implies q / if p, then q     <br />p⇔q&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; p if and only if q / p is equivalent to q / p and q are equivalent     <br />2. Sets&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; <br />x∈A&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x belongs to A / x is an element (or a member) of A     <br />x∉A&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x does not belong to A / x is not an element (or a member) of A     <br />A⊂B&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A is contained in B / A is a subset of B     <br />A⊃B&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A contains B / B is a subset of A     <br />A∩B&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A cap B / A meet B / A intersection B     <br />A∪B&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A cup B / A join B / A union B     <br />A\B&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A minus B / the difference between A and B     <br 
/>A×B&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A cross B / the Cartesian product of A and B     <br />3. Real numbers&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; <br />x+1&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x plus one     <br />x-1&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x minus one     <br />x±1&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x plus or minus one     <br />xy&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; xy / x multiplied by y     <br />(x - y)(x + y)&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x minus y, x plus y     <br />x/y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x over y     <br />=&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the equals sign     <br />x = 5&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x equals 5 / x is equal to 5     <br />x≠5&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x (is) not equal to 5     <br />x≡y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x is equivalent to (or identical with) y     <br />x≢y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x is not equivalent to (or identical with) y     <br />x &gt; y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x is greater than y     <br />x≥y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x is greater than or equal to y     <br />x &lt; y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x is less than y     <br />x≤y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x is less than or equal to y     <br />0 &lt; x &lt; 1&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; zero is less than x is less than 1     <br />0≤x≤1&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; zero is less than or equal to x is less than or equal to 1     <br />| x |&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; mod x / modulus x     <br />x<sup>2</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x squared / x (raised) to the power 2     <br />x<sup>3</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x cubed     <br />x<sup>4</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x to the fourth / x to the power four     <br />x<sup>n</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x to the nth / x to the power n     <br />x<sup>−n</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x to the (power) minus n     <br />√x&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; (square) root x / the square root of x     <br />∛x&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; cube root (of) x     <br />∜x&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; fourth root (of) x     <br /><sup>n</sup>√x&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; nth root (of) x     <br />(x + y)<sup>2</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x plus y all squared     <br />(x / y)<sup>2</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x over y all squared     <br />n!&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; n factorial     <br />x̂&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x hat     <br />x̄&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x bar     <br />x̃&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x tilde     <br />x<sub>i</sub>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; xi / x subscript i / x suffix i / x sub i     <br />∑<sub>i=1</sub><sup>n</sup> a<sub>i</sub>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the sum from i equals one to n of a i / the sum as i runs from 1 to n of the a i     <br />4. 
Linear algebra&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; <br />‖x‖&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the norm (or modulus) of x     <br />OA⃗&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; OA / vector OA     <br />OA̅&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; OA / the length of the segment OA     <br />A<sup>T</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A transpose / the transpose of A     <br />A<sup>−1</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; A inverse / the inverse of A     <br />5. Functions&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; <br />f(x)&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; fx / f of x / the function f of x     <br />f:S→T&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; a function f from S to T     <br />x→y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; x maps to y / x is sent (or mapped) to y     <br />f′(x)&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; f prime x / f dash x / the (first) derivative of f with respect to x     <br />f″(x)&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; f double-prime x / f double-dash x / the second derivative of f with respect to x     <br />f‴(x)&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; f triple-prime x / f triple-dash x / the third derivative of f with respect to x     <br />f<sup>(4)</sup>(x)&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; f four x / the fourth derivative of f with respect to x     <br />∂f/∂x<sub>1</sub>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the partial (derivative) of f with respect to x<sub>1</sub>     <br />∂<sup>2</sup>f/∂x<sub>1</sub><sup>2</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the second partial (derivative) of f with respect to x<sub>1</sub>     <br />∫<sub>0</sub><sup>∞</sup>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the integral from zero to infinity     <br />lim<sub>x→0</sub>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the limit as x approaches zero     <br />lim<sub>x→0<sup>+</sup></sub>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the limit as x approaches zero from above     <br />lim<sub>x→0<sup>−</sup></sub>&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; the limit as x approaches zero from below     <br />log<sub>e</sub>y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; log y to the base e / log to the base e of y / natural log (of) y     <br />ln y&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; log y to the base e / log to the base e of y / natural log (of) y     <br />一般词汇 General vocabulary     <br />&#160;&#160;&#160; 数学 mathematics, maths(BrE), math(AmE)     <br />&#160;&#160;&#160; 公理 axiom     <br />&#160;&#160;&#160; 定理 theorem     <br />&#160;&#160;&#160; 计算 calculation     <br />&#160;&#160;&#160; 运算 operation     <br />&#160;&#160;&#160; 证明 prove     <br />&#160;&#160;&#160; 假设 hypothesis, hypotheses(pl.)     <br />&#160;&#160;&#160; 命题 proposition     <br />&#160;&#160;&#160; 算术 arithmetic     <br />&#160;&#160;&#160; 加 plus(prep.), add(v.), addition(n.)     <br />&#160;&#160;&#160; 被加数 augend, summand     <br />&#160;&#160;&#160; 加数 addend     <br />&#160;&#160;&#160; 和 sum     <br />&#160;&#160;&#160; 减 minus(prep.), subtract(v.), subtraction(n.)     <br />&#160;&#160;&#160; 被减数 minuend     <br />&#160;&#160;&#160; 减数 subtrahend     <br />&#160;&#160;&#160; 差 difference     <br />&#160;&#160;&#160; 乘 times(prep.), multiply(v.), multiplication(n.)     <br />&#160;&#160;&#160; 被乘数 multiplicand, faciend     <br />&#160;&#160;&#160; 乘数 multiplicator     <br />&#160;&#160;&#160; 积 product     <br />&#160;&#160;&#160; 除 divided by(prep.), divide(v.), division(n.)     
<br />&#160;&#160;&#160; 被除数 dividend     <br />&#160;&#160;&#160; 除数 divisor     <br />&#160;&#160;&#160; 商 quotient     <br />&#160;&#160;&#160; 等于 equals, is equal to, is equivalent to     <br />&#160;&#160;&#160; 大于 is greater than     <br />&#160;&#160;&#160; 小于 is less than     <br />&#160;&#160;&#160; 大于等于 is greater than or equal to     <br />&#160;&#160;&#160; 小于等于 is less than or equal to     <br />&#160;&#160;&#160; 运算符 operator     <br />&#160;&#160;&#160; 数字 digit     <br />&#160;&#160;&#160; 数 number     <br />&#160;&#160;&#160; 自然数 natural number     <br />&#160;&#160;&#160; 整数 integer     <br />&#160;&#160;&#160; 小数 decimal     <br />&#160;&#160;&#160; 小数点 decimal point     <br />&#160;&#160;&#160; 分数 fraction     <br />&#160;&#160;&#160; 分子 numerator     <br />&#160;&#160;&#160; 分母 denominator     <br />&#160;&#160;&#160; 比 ratio     <br />&#160;&#160;&#160; 正 positive     <br />&#160;&#160;&#160; 负 negative     <br />&#160;&#160;&#160; 零 null, zero, nought, nil     <br />&#160;&#160;&#160; 十进制 decimal system     <br />&#160;&#160;&#160; 二进制 binary system     <br />&#160;&#160;&#160; 十六进制 hexadecimal system     <br />&#160;&#160;&#160; 权 weight, significance     <br />&#160;&#160;&#160; 进位 carry     <br />&#160;&#160;&#160; 截尾 truncation     <br />&#160;&#160;&#160; 四舍五入 round     <br />&#160;&#160;&#160; 下舍入 round down     <br />&#160;&#160;&#160; 上舍入 round up     <br />&#160;&#160;&#160; 有效数字 significant digit     <br />&#160;&#160;&#160; 无效数字 insignificant digit     <br />&#160;&#160;&#160; 代数 algebra     <br />&#160;&#160;&#160; 公式 formula, formulae(pl.)     
<br />&#160;&#160;&#160; 单项式 monomial     <br />&#160;&#160;&#160; 多项式 polynomial, multinomial     <br />&#160;&#160;&#160; 系数 coefficient     <br />&#160;&#160;&#160; 未知数 unknown, x-factor, y-factor, z-factor     <br />&#160;&#160;&#160; 等式，方程式 equation     <br />&#160;&#160;&#160; 一次方程 simple equation     <br />&#160;&#160;&#160; 二次方程 quadratic equation     <br />&#160;&#160;&#160; 三次方程 cubic equation     <br />&#160;&#160;&#160; 四次方程 quartic equation     <br />&#160;&#160;&#160; 不等式 inequality     <br />&#160;&#160;&#160; 阶乘 factorial     <br />&#160;&#160;&#160; 对数 logarithm     <br />&#160;&#160;&#160; 指数，幂 exponent     <br />&#160;&#160;&#160; 乘方 power     <br />&#160;&#160;&#160; 二次方，平方 square     <br />&#160;&#160;&#160; 三次方，立方 cube     <br />&#160;&#160;&#160; 四次方 the fourth power     <br />&#160;&#160;&#160; n次方 the nth power     <br />&#160;&#160;&#160; 开方 evolution, extraction     <br />&#160;&#160;&#160; 二次方根，平方根 square root     <br />&#160;&#160;&#160; 三次方根，立方根 cube root     <br />&#160;&#160;&#160; 四次方根 the fourth root     <br />&#160;&#160;&#160; n次方根 the nth root     <br />&#160;&#160;&#160; 集合 set     <br />&#160;&#160;&#160; 元素 element     <br />&#160;&#160;&#160; 空集 empty set     <br />&#160;&#160;&#160; 子集 subset     <br />&#160;&#160;&#160; 交集 intersection     <br />&#160;&#160;&#160; 并集 union     <br />&#160;&#160;&#160; 补集 complement     <br />&#160;&#160;&#160; 映射 mapping     <br />&#160;&#160;&#160; 函数 function     <br />&#160;&#160;&#160; 定义域 domain, field of definition     <br />&#160;&#160;&#160; 值域 range     <br />&#160;&#160;&#160; 常量 constant     <br />&#160;&#160;&#160; 变量 variable     <br />&#160;&#160;&#160; 单调性 monotonicity     <br />&#160;&#160;&#160; 奇偶性 parity     <br />&#160;&#160;&#160; 周期性 periodicity     <br />&#160;&#160;&#160; 图象 graph     <br />&#160;&#160;&#160; 数列 sequence     <br />&#160;&#160;&#160; 级数 series     <br />&#160;&#160;&#160; 微积分 calculus     <br />&#160;&#160;&#160; 微分 
differential     <br />&#160;&#160;&#160; 导数 derivative     <br />&#160;&#160;&#160; 极限 limit     <br />&#160;&#160;&#160; 无穷大 infinite(a.), infinity(n.)     <br />&#160;&#160;&#160; 无穷小 infinitesimal     <br />&#160;&#160;&#160; 积分 integral     <br />&#160;&#160;&#160; 定积分 definite integral     <br />&#160;&#160;&#160; 不定积分 indefinite integral     <br />&#160;&#160;&#160; 有理数 rational number     <br />&#160;&#160;&#160; 无理数 irrational number     <br />&#160;&#160;&#160; 实数 real number     <br />&#160;&#160;&#160; 虚数 imaginary number     <br />&#160;&#160;&#160; 复数 complex number     <br />&#160;&#160;&#160; 矩阵 matrix     <br />&#160;&#160;&#160; 行列式 determinant     <br />&#160;&#160;&#160; 几何 geometry     <br />&#160;&#160;&#160; 点 point     <br />&#160;&#160;&#160; 线 line     <br />&#160;&#160;&#160; 面 plane     <br />&#160;&#160;&#160; 体 solid     <br />&#160;&#160;&#160; 线段 segment     <br />&#160;&#160;&#160; 射线 ray     <br />&#160;&#160;&#160; 平行 parallel     <br />&#160;&#160;&#160; 相交 intersect     <br />&#160;&#160;&#160; 角 angle     <br />&#160;&#160;&#160; 角度 degree     <br />&#160;&#160;&#160; 弧度 radian     <br />&#160;&#160;&#160; 锐角 acute angle     <br />&#160;&#160;&#160; 直角 right angle     <br />&#160;&#160;&#160; 钝角 obtuse angle     <br />&#160;&#160;&#160; 平角 straight angle     <br />&#160;&#160;&#160; 周角 perigon     <br />&#160;&#160;&#160; 底 base     <br />&#160;&#160;&#160; 边 side     <br />&#160;&#160;&#160; 高 height     <br />&#160;&#160;&#160; 三角形 triangle     <br />&#160;&#160;&#160; 锐角三角形 acute triangle     <br />&#160;&#160;&#160; 直角三角形 right triangle     <br />&#160;&#160;&#160; 直角边 leg     <br />&#160;&#160;&#160; 斜边 hypotenuse     <br />&#160;&#160;&#160; 勾股定理 Pythagorean theorem     <br />&#160;&#160;&#160; 钝角三角形 obtuse triangle     <br />&#160;&#160;&#160; 不等边三角形 scalene triangle     <br />&#160;&#160;&#160; 等腰三角形 isosceles triangle     <br />&#160;&#160;&#160; 等边三角形 equilateral triangle     <br />&#160;&#160;&#160; 四边形 
quadrilateral     <br />&#160;&#160;&#160; 平行四边形 parallelogram     <br />&#160;&#160;&#160; 矩形 rectangle     <br />&#160;&#160;&#160; 长 length     <br />&#160;&#160;&#160; 宽 width</h5><img src ="http://www.cppblog.com/sosi/aggbug/129066.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/sosi/" target="_blank">Sosi</a> 2010-10-08 19:53 <a href="http://www.cppblog.com/sosi/archive/2010/10/08/129066.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item></channel></rss>