﻿<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:trackback="http://madskills.com/public/xml/rss/module/trackback/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/"><channel><title>C++ Blog - Nest - Category: Multithreading</title><link>http://www.cppblog.com/ccl0326/category/14990.html</link><description>about:blank</description><language>zh-cn</language><lastBuildDate>Thu, 16 Dec 2010 17:25:23 GMT</lastBuildDate><pubDate>Thu, 16 Dec 2010 17:25:23 GMT</pubDate><ttl>60</ttl><item><title>The Semantics of Mutexes and Condition Variables</title><link>http://www.cppblog.com/ccl0326/archive/2010/12/16/136638.html</link><dc:creator>Vincent</dc:creator><author>Vincent</author><pubDate>Thu, 16 Dec 2010 07:35:00 GMT</pubDate><guid>http://www.cppblog.com/ccl0326/archive/2010/12/16/136638.html</guid><wfw:comment>http://www.cppblog.com/ccl0326/comments/136638.html</wfw:comment><comments>http://www.cppblog.com/ccl0326/archive/2010/12/16/136638.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/ccl0326/comments/commentRss/136638.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/ccl0326/services/trackbacks/136638.html</trackback:ping><description><![CDATA[<p>The semantics of mutexes and condition variables</p>
<p><br>A mutex: I want to operate on a piece of shared data, but I'm afraid you'll operate on it at the same time, and then everything falls apart. So I take a lock; now I can work on the shared data while you stay outside the critical section. When I'm done, I drop the lock, and you can pick it up, go in, and do your own work.</p>
<p>&nbsp;</p>
<p>A condition variable: I want to know whether some condition on a piece of shared data has been met, and I care a lot about it. If all I had were a mutex, I'd keep entering the critical section just to check the condition, which would be a tragedy. While awake I'd burn CPU without getting anything done, only polling over and over, and it would be a loss for everyone else too: each time I hold the lock, nobody else can enter the critical section to do real work. Polling is always painful, so let's wait to be notified instead, and that's where condition variables come in. I still take the lock, enter the critical section, look at the shared data, and discover that the condition isn't met yet. So I call pthread_cond_wait(), which first releases the lock so others can go modify the shared data. And then? Then I sleep, until the condition occurs: someone finishes modifying the shared data and sends me a notification, I wake up holding the lock again, and carry on with what I wanted to do...</p>
<p>&nbsp;</p>
<img src ="http://www.cppblog.com/ccl0326/aggbug/136638.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/ccl0326/" target="_blank">Vincent</a> 2010-12-16 15:35 <a href="http://www.cppblog.com/ccl0326/archive/2010/12/16/136638.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>Thread Internals</title><link>http://www.cppblog.com/ccl0326/archive/2010/12/16/136635.html</link><dc:creator>Vincent</dc:creator><author>Vincent</author><pubDate>Thu, 16 Dec 2010 06:37:00 GMT</pubDate><guid>http://www.cppblog.com/ccl0326/archive/2010/12/16/136635.html</guid><wfw:comment>http://www.cppblog.com/ccl0326/comments/136635.html</wfw:comment><comments>http://www.cppblog.com/ccl0326/archive/2010/12/16/136635.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/ccl0326/comments/commentRss/136635.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/ccl0326/services/trackbacks/136635.html</trackback:ping><description><![CDATA[<p>1.<br>Called from the main thread:<br>(1) pthread_create( &amp;thread_a, NULL, thread_function, NULL);<br>(2) pthread_create( &amp;thread_b, NULL, thread_function, NULL);<br>(3) pthread_create( &amp;thread_c, NULL, thread_function, NULL);</p>
<p>&nbsp;</p>
<p><br>At call (2), thread b can assume that thread a already exists.<br>But once (2) has completed, the main thread has no idea whether thread a or thread b will run first; it cannot assume at this point that thread a executes before thread b,<br>because how time slices will be assigned to the threads is unknown here.</p>
<p>&nbsp;</p>
<p>2.<br>myglobal = myglobal + 1;<br>myglobal is a global variable, and several threads are incrementing it concurrently.<br>Should myglobal = myglobal + 1; be protected by a lock?<br>It definitely should.<br>First, we don't know whether myglobal = myglobal + 1; (or ++myglobal;) can be compiled down to a single machine instruction.<br>And even if ++myglobal were compiled to a single instruction,<br>on a multiprocessor a plain increment instruction is not automatically atomic across CPUs: two cores can read, modify, and write the same location at the same time,<br>so the result is still unpredictable.</p>
<p><a href="http://www.ibm.com/developerworks/cn/linux/thread/posix_thread2/index.html"><br>The content above is paraphrased from http://www.ibm.com/developerworks/cn/linux/thread/posix_thread2/index.html</a><br></p>
<img src ="http://www.cppblog.com/ccl0326/aggbug/136635.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/ccl0326/" target="_blank">Vincent</a> 2010-12-16 14:37 <a href="http://www.cppblog.com/ccl0326/archive/2010/12/16/136635.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item><item><title>[Repost] Thread Synchronization: Spin Locks vs. Mutexes and Semaphores</title><link>http://www.cppblog.com/ccl0326/archive/2010/09/21/127247.html</link><dc:creator>Vincent</dc:creator><author>Vincent</author><pubDate>Tue, 21 Sep 2010 07:15:00 GMT</pubDate><guid>http://www.cppblog.com/ccl0326/archive/2010/09/21/127247.html</guid><wfw:comment>http://www.cppblog.com/ccl0326/comments/127247.html</wfw:comment><comments>http://www.cppblog.com/ccl0326/archive/2010/09/21/127247.html#Feedback</comments><slash:comments>0</slash:comments><wfw:commentRss>http://www.cppblog.com/ccl0326/comments/commentRss/127247.html</wfw:commentRss><trackback:ping>http://www.cppblog.com/ccl0326/services/trackbacks/127247.html</trackback:ping><description><![CDATA[POSIX threads (Pthreads for short) is a widely used API for parallel programming on multi-core platforms. Thread synchronization is an essential communication mechanism in parallel programming, and its most typical application is using the lock facilities provided by Pthreads to protect critical sections shared between threads (another common synchronization mechanism is the barrier).<br><br>Pthreads provides several kinds of locks:<br>(1) Mutex: pthread_mutex_***<br>(2) Spin lock: pthread_spin_***<br>(3) Condition variable: pthread_cond_***<br>(4) Read/write lock: pthread_rwlock_***<br><br>The main mutex APIs are:<br>
<pre>
pthread_mutex_lock(pthread_mutex_t *mutex);
pthread_mutex_trylock(pthread_mutex_t *mutex);
pthread_mutex_unlock(pthread_mutex_t *mutex);
</pre>
The main spin lock APIs are:<br>
<pre>
pthread_spin_lock(pthread_spinlock_t *lock);
pthread_spin_trylock(pthread_spinlock_t *lock);
pthread_spin_unlock(pthread_spinlock_t *lock);
</pre>
In terms of implementation, a mutex is a sleep-waiting lock. Say a dual-core machine runs two threads, A on Core0 and B on Core1. If thread A calls pthread_mutex_lock to take a lock on a critical section while that lock is held by thread B, thread A blocks: Core0 performs a context switch and puts thread A on a wait queue, so Core0 is free to run other tasks (another thread C, say) instead of busy-waiting. A spin lock is different: it is a busy-waiting lock. If thread A requests the lock with pthread_spin_lock, it busy-waits on Core0, retrying the lock request over and over, until it gets the lock.<br><br>If you read the source of NPTL (Native POSIX Thread Library), the glibc implementation of the Pthreads API (the command "getconf GNU_LIBPTHREAD_VERSION" prints the NPTL version on your system), you will find that when pthread_mutex_lock() fails to take the lock, it issues a system call (on Linux, a futex wait) and puts the current thread on that mutex's wait queue. A spin lock, on the other hand, can be thought of as a lock operation implemented in inline assembly inside a while(1) loop (I recall a paper mentioning that in the Linux kernel the spin lock operation needs only two CPU instructions, and the unlock just one). Those interested can look at the pthreads implementation in the sanos microkernel (mutex.c, spinlock.c): it differs from the NPTL code, but it is simple and easy to follow, which helps a great deal in understanding the characteristics of spin locks and mutexes.<br><br>So which performs better in real programs, the mutex or the spin lock? We know spin locks are used very widely in the Linux kernel; does that mean spin locks perform better? Let's test with actual code (make sure a recent g++ is installed on your system).<br>
<pre>
// Name: spinlockvsmutex1.cc
// Source: http://www.alexonlinux.com/pthread-mutex-vs-pthread-spinlock
// Compile (spin lock version): g++ -o spin_version -DUSE_SPINLOCK spinlockvsmutex1.cc -lpthread
// Compile (mutex version): g++ -o mutex_version spinlockvsmutex1.cc -lpthread
#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/syscall.h&gt;
#include &lt;errno.h&gt;
#include &lt;sys/time.h&gt;
#include &lt;list&gt;
#include &lt;pthread.h&gt;

#define LOOPS 50000000

using namespace std;

list&lt;int&gt; the_list;

#ifdef USE_SPINLOCK
pthread_spinlock_t spinlock;
#else
pthread_mutex_t mutex;
#endif

// Get the thread id
pid_t gettid() { return syscall( __NR_gettid ); }

void *consumer(void *ptr)
{
    int i;

    printf("Consumer TID %lu\n", (unsigned long)gettid());

    while (1)
    {
#ifdef USE_SPINLOCK
        pthread_spin_lock(&amp;spinlock);
#else
        pthread_mutex_lock(&amp;mutex);
#endif

        if (the_list.empty())
        {
#ifdef USE_SPINLOCK
            pthread_spin_unlock(&amp;spinlock);
#else
            pthread_mutex_unlock(&amp;mutex);
#endif
            break;
        }

        i = the_list.front();
        the_list.pop_front();

#ifdef USE_SPINLOCK
        pthread_spin_unlock(&amp;spinlock);
#else
        pthread_mutex_unlock(&amp;mutex);
#endif
    }

    return NULL;
}

int main()
{
    int i;
    pthread_t thr1, thr2;
    struct timeval tv1, tv2;

#ifdef USE_SPINLOCK
    pthread_spin_init(&amp;spinlock, 0);
#else
    pthread_mutex_init(&amp;mutex, NULL);
#endif

    // Creating the list content...
    for (i = 0; i &lt; LOOPS; i++)
        the_list.push_back(i);

    // Measuring time before starting the threads...
    gettimeofday(&amp;tv1, NULL);

    pthread_create(&amp;thr1, NULL, consumer, NULL);
    pthread_create(&amp;thr2, NULL, consumer, NULL);

    pthread_join(thr1, NULL);
    pthread_join(thr2, NULL);

    // Measuring time after threads finished...
    gettimeofday(&amp;tv2, NULL);

    if (tv1.tv_usec &gt; tv2.tv_usec)
    {
        tv2.tv_sec--;
        tv2.tv_usec += 1000000;
    }

    printf("Result - %ld.%ld\n", tv2.tv_sec - tv1.tv_sec,
           tv2.tv_usec - tv1.tv_usec);

#ifdef USE_SPINLOCK
    pthread_spin_destroy(&amp;spinlock);
#else
    pthread_mutex_destroy(&amp;mutex);
#endif

    return 0;
}
</pre>
The program runs as follows: the main thread first initializes a list and, per the value of LOOPS, inserts that many entries into it; it then creates two new threads, both of which execute consumer() and pop entries from the list concurrently. The main thread measures the time from creating the two threads until both finish, printed as the "Result" line below.<br><br>Test machine:<br>Ubuntu 9.04 x86_64<br>Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz<br>4.0 GB Memory<br><br>The test results:<br>
<pre>
pxcwan@pxcwan-desktop:~/Workspace/mutex$ g++ -o spin_version -DUSE_SPINLOCK spinvsmutex1.cc -lpthread
pxcwan@pxcwan-desktop:~/Workspace/mutex$ g++ -o mutex_version spinvsmutex1.cc -lpthread
pxcwan@pxcwan-desktop:~/Workspace/mutex$ time ./spin_version
Consumer TID 5520
Consumer TID 5521
Result - 5.888750

real    0m10.918s
user    0m15.601s
sys     0m0.804s

pxcwan@pxcwan-desktop:~/Workspace/mutex$ time ./mutex_version
Consumer TID 5691
Consumer TID 5692
Result - 9.116376

real    0m14.031s
user    0m12.245s
sys     0m4.368s
</pre>
The spin lock version clearly performs better in this program. Also note the sys time: the mutex version spends far more time in system calls, precisely because the mutex makes a system call whenever the lock is contended.<br><br>But does that mean the spin lock is simply better? Let's look at another example program, one whose lock contention is extremely fierce:<br>
<pre>
//Name: svm2.c
//Source: http://www.solarisinternals.com/wiki/index.php/DTrace_Topics_Locks
//Compile (spin lock version): gcc -o spin -DUSE_SPINLOCK svm2.c -lpthread
//Compile (mutex version): gcc -o mutex svm2.c -lpthread
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;pthread.h&gt;
#include &lt;sys/syscall.h&gt;

#define THREAD_NUM 2

pthread_t g_thread[THREAD_NUM];
#ifdef USE_SPINLOCK
pthread_spinlock_t g_spin;
#else
pthread_mutex_t g_mutex;
#endif
__uint64_t g_count;

pid_t gettid()
{
    return syscall(SYS_gettid);
}

void *run_amuck(void *arg)
{
    int i, j;

    printf("Thread %lu started.\n", (unsigned long)gettid());

    for (i = 0; i &lt; 10000; i++) {
#ifdef USE_SPINLOCK
        pthread_spin_lock(&amp;g_spin);
#else
        pthread_mutex_lock(&amp;g_mutex);
#endif
        for (j = 0; j &lt; 100000; j++) {
            if (g_count++ == 123456789)
                printf("Thread %lu wins!\n", (unsigned long)gettid());
        }
#ifdef USE_SPINLOCK
        pthread_spin_unlock(&amp;g_spin);
#else
        pthread_mutex_unlock(&amp;g_mutex);
#endif
    }

    printf("Thread %lu finished!\n", (unsigned long)gettid());

    return (NULL);
}

int main(int argc, char *argv[])
{
    int i, threads = THREAD_NUM;

    printf("Creating %d threads...\n", threads);
#ifdef USE_SPINLOCK
    pthread_spin_init(&amp;g_spin, 0);
#else
    pthread_mutex_init(&amp;g_mutex, NULL);
#endif
    for (i = 0; i &lt; threads; i++)
        pthread_create(&amp;g_thread[i], NULL, run_amuck, (void *) i);

    for (i = 0; i &lt; threads; i++)
        pthread_join(g_thread[i], NULL);

    printf("Done.\n");

    return (0);
}
</pre>
The defining feature of this program is its very large critical section, so the two threads contend for the lock extremely heavily. This is of course an extreme case; in real applications critical sections are not this large and lock contention is not this fierce. The results show the mutex version performing better:<br>
<pre>
pxcwan@pxcwan-desktop:~/Workspace/mutex$ time ./spin
Creating 2 threads...
Thread 31796 started.
Thread 31797 started.
Thread 31797 wins!
Thread 31797 finished!
Thread 31796 finished!
Done.

real    0m5.748s
user    0m10.257s
sys     0m0.004s

pxcwan@pxcwan-desktop:~/Workspace/mutex$ time ./mutex
Creating 2 threads...
Thread 31801 started.
Thread 31802 started.
Thread 31802 wins!
Thread 31802 finished!
Thread 31801 finished!
Done.

real    0m4.823s
user    0m4.772s
sys     0m0.032s
</pre>
Another detail worth noting is that the spin lock burns much more user time. The two threads run on two separate cores, but most of the time only one of them can hold the lock, so the other keeps busy-waiting on its core at 100% CPU. With a mutex, a failed lock request triggers a context switch, which frees a core for other computation. (The context switching also costs the thread that holds the lock: when it releases the lock it must ask the operating system to wake the blocked threads, which is extra overhead.)<br><br>Summary<br>(1) A mutex suits scenarios where lock operations are very frequent, and it adapts better overall. Although it costs more than a spin lock (mainly context switches), it copes with the complex scenarios of real-world development, providing far more flexibility while keeping reasonable performance.<br><br>(2) A spin lock has better lock/unlock performance (fewer CPU instructions), but it only suits critical sections with very short run times. In practical software development, unless you understand your program's locking behavior very well, using spin locks is not a good idea (a typical multithreaded program performs lock operations tens of thousands of times; if too many of them are contended lock requests, a great deal of time is wasted in idle spinning).<br><br>(3) The safer approach is probably to start (conservatively) with a mutex, then try a spin lock as a tuning step if you still need more performance. After all, our programs rarely have performance demands like the Linux kernel's (whose most common lock operations are the spin lock and the rw lock).<br><br>Addendum, March 3, 2010: this view is supported by Oracle's documentation:<br><br>During configuration, Berkeley DB selects a mutex implementation for the architecture. Berkeley DB normally prefers blocking-mutex implementations over non-blocking ones. For example, Berkeley DB will select POSIX pthread mutex interfaces rather than assembly-code test-and-set spin mutexes because pthread mutexes are usually more efficient and less likely to waste CPU cycles spinning without getting any work accomplished. 
<br><br>p.s. Calling either syscall(SYS_gettid) or syscall( __NR_gettid ) returns the current thread's id :)<br><br>When reposting, please credit: www.parallellabs.com<br>------------------------------------------------------------------------------<br><br>spinlock and Linux kernel scheduling<br><br><br>　　Author: Liu Hongtao, senior lecturer at the Farsight (华清远见) embedded training center, ARM-authorized ATC instructor.<br><br>　　Plenty of articles already introduce how to use spin locks, but some of the finer points are never quite driven home. Here I want to take up the details that I think people most often find confusing.<br><br>　　1. Spin locks (spinlock) in brief<br><br>　　A spin lock can be held by at most one kernel task at any moment, so at any given time only one thread is allowed inside the critical section. This provides the locking service needed on multiprocessor machines, and in preemptive kernels running on a uniprocessor.<br><br>　　2. Semaphores in brief<br><br>　　The semaphore is worth introducing here too, because its usage resembles that of the spin lock. A semaphore in Linux is a sleeping lock. If a task tries to acquire a semaphore that is already held, the semaphore puts it on a wait queue and lets it sleep; the processor is then free to execute other code. When the process holding the semaphore releases it, one of the tasks on the wait queue is woken up and can then acquire the semaphore.<br><br>　　3. Spin locks vs. semaphores<br><br>　　In many places either a spin lock or a semaphore may be chosen, but in some places only one of them can be used. The table below compares their usage.<br><br>　　Table 1-1: Comparison of spin locks and semaphores [the table's contents did not survive in this copy of the page]<br><br>　　4. Spin locks and Linux kernel scheduling<br><br>　　Let's discuss case 3 of Table 1-1 (the other cases are easier to understand): if the critical section may contain code that sleeps, a spin lock must not be used, or deadlock may result.<br><br>　　So why may code protected by a semaphore sleep, while spin-lock-protected code must not?<br><br>　　First look at how spin locks are implemented. The basic form of a spin lock is:<br><br>
<pre>
spin_lock(&amp;mr_lock);
// critical section
spin_unlock(&amp;mr_lock);
</pre>
　　Tracing the implementation of spin_lock(&amp;mr_lock):<br><br>
<pre>
#define spin_lock(lock) _spin_lock(lock)
#define _spin_lock(lock) __LOCK(lock)
#define __LOCK(lock) \
do { preempt_disable(); __acquire(lock); (void)(lock); } while (0)
</pre>
　　Note preempt_disable(): this call disables preemption (spin_unlock re-enables it). So a region protected by a spin lock runs in a non-preemptible state; even while "spinning" without having obtained the lock, preemption remains disabled. Knowing this, we can understand why spin-lock-protected code must not sleep. Suppose code in the middle of a spin-lock-protected region sleeps, and a process switch occurs: another process may then call into this same spinlock-protected code. As we now know, even in the "spinning" state of failing to get the lock, preemption is disabled, and spinning itself never sleeps, so no further process scheduling can happen on this processor, and deadlock naturally follows.<br><br>　　To summarize the behavior of spin locks:<br><br>　　● Uniprocessor, non-preemptive kernel: spin locks are compiled away entirely;<br><br>　　● Uniprocessor, preemptive kernel: a spin lock merely acts as a switch that toggles kernel preemption;<br><br>　　● Multiprocessor: only here do spin locks come fully into their own; in the kernel they mainly prevent concurrent access to critical sections from multiple processors, and races caused by kernel preemption.<br><br>　　5. When Linux preemption happens<br><br>　　Finally, let's look at when preemption occurs in Linux. Preemption divides into user preemption and kernel preemption.<br><br>　　User preemption happens:<br><br>　　● when returning to user space from a system call;<br><br>　　● when returning to user space from an interrupt handler.<br><br>　　Kernel preemption happens:<br><br>　　● when returning to kernel space from an interrupt handler, provided the kernel is preemptible at that time;<br><br>　　● when kernel code becomes preemptible again (e.g. at spin_unlock);<br><br>　　● when a task in the kernel explicitly calls schedule();<br><br>　　● when a task in the kernel blocks.<br><br>　　Ordinary process scheduling happens after a timer interrupt, when the current process is found to have used up its time slice, triggering preemption. We often exploit the fact that kernel preemption may occur on return from an interrupt handler to improve the responsiveness of certain I/O operations: when an I/O event occurs, its interrupt handler is activated; finding a process waiting on that I/O event, it wakes that process and sets the need_resched flag of the currently executing process, so that on return from the interrupt handler the scheduler is invoked and the process that was waiting on the I/O event (very likely) gets to run, guaranteeing a relatively fast (millisecond-level) response to the I/O event. So when an I/O event occurs, the process handling it preempts the current process, and the system's response time is independent of the length of the scheduling time slice.<br>
<img src ="http://www.cppblog.com/ccl0326/aggbug/127247.html" width = "1" height = "1" /><br><br><div align=right><a style="text-decoration:none;" href="http://www.cppblog.com/ccl0326/" target="_blank">Vincent</a> 2010-09-21 15:15 <a href="http://www.cppblog.com/ccl0326/archive/2010/09/21/127247.html#Feedback" target="_blank" style="text-decoration:none;">Post a comment</a></div>]]></description></item></channel></rss>