
September 18, 2012


Original article: Installing a Python Scientific Computing Environment on Ubuntu, by HyryStudio


On Ubuntu, Python modules can usually be installed with either apt-get or pip. apt-get is Ubuntu's built-in package manager, while pip is a tool for installing Python extension modules; pip typically downloads a module's source code, then compiles and installs it.

Ubuntu 12.04 ships with Python 2.7.3 by default. First install pip, Python's tool for installing and managing extension libraries, with the following command:

sudo apt-get install python-pip

Install the Python development headers so that other extension libraries can be compiled later (this takes about 92.8 MB of disk space):

sudo apt-get install python-dev

IPython

To install the latest IPython 0.13 beta, which provides an improved version of the IPython notebook, download the IPython source code and run its install command. The commands below first install the version-control tool git, then clone the latest IPython source from the project's development repository and install it:

cd
sudo apt-get install git
git clone https://github.com/ipython/ipython.git
cd ipython
sudo python setup.py install

To install the current stable release instead, run:

sudo apt-get install ipython

After installation, run the ipython command to verify that it starts correctly.

For the IPython notebook to work, tornado and pyzmq must also be installed:

sudo pip install tornado
sudo apt-get install libzmq-dev
sudo pip install pyzmq
sudo pip install pygments

Next, test the IPython notebook:

cd
mkdir notebook
cd notebook
ipython notebook

To use LaTeX math formulas offline in IPython, install MathJax. First start the IPython notebook with:

sudo ipython notebook

Then, in the IPython notebook interface, enter:

from IPython.external.mathjax import install_mathjax
install_mathjax()

NumPy, SciPy, and matplotlib

All three libraries can be installed quickly with apt-get:

sudo apt-get install python-numpy
sudo apt-get install python-scipy
sudo apt-get install python-matplotlib
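A quick, optional sanity check (this snippet is ours, not part of the original guide): confirm that each of the three libraries imports and report its version.

```python
# Try to import each library; record its version string, or None if missing.
results = {}
for name in ("numpy", "scipy", "matplotlib"):
    try:
        mod = __import__(name)
        results[name] = mod.__version__
    except ImportError:
        results[name] = None

print(results)  # e.g. {'numpy': '1.6.1', 'scipy': '0.9.0', 'matplotlib': '1.1.1'}
```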

To compile and install them with pip instead, first use apt-get to pull in all the build dependencies:

sudo apt-get build-dep python-numpy
sudo apt-get build-dep python-scipy

Then install with pip:

sudo pip install numpy
sudo pip install scipy
Note that build-dep installs a large number of packages, including Python 3.2.

PyQt4 and Spyder

The following commands install PyQt4, Qt Designer, the PyQt4 development tools, and the documentation:

sudo apt-get install python-qt4
sudo apt-get install qt4-designer
sudo apt-get install pyqt4-dev-tools
sudo apt-get install python-qt4-doc

After installation, the documentation can be found in:

/usr/share/doc/python-qt4-doc

With PyQt4 in place, install Spyder with:

sudo apt-get install spyder

Spyder is updated frequently; the latest version can be installed with:

sudo pip install spyder --upgrade

Cython and SWIG

Cython and SWIG are tools for writing Python extension modules:

sudo pip install cython
sudo apt-get install swig

Run cython --version and swig -version to check the installed versions.

ETS

ETS (Enthought Tool Suite) is a set of scientific-computing packages developed by Enthought; its Mayavi component provides 3-D data visualization via VTK.

First install the libraries needed to compile ETS:

sudo apt-get install python-dev libxtst-dev scons python-vtk  pyqt4-dev-tools python2.7-wxgtk2.8 python-configobj
sudo apt-get install libgl1-mesa-dev libglu1-mesa-dev

Create an ets directory, download ets.py into it, and run ets.py to clone the latest ETS sources and install them:

mkdir ets
cd ets
wget https://github.com/enthought/ets/raw/master/ets.py
python ets.py clone
sudo python ets.py develop
#sudo python ets.py install    # alternatively, run the install sub-command

If everything went well, running the mayavi2 command will start Mayavi.

OpenCV

Compiling OpenCV requires the cmake build tool and a number of dependency libraries:

sudo apt-get install build-essential
sudo apt-get install cmake
sudo apt-get install cmake-gui
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev libjasper-dev

Then download the latest OpenCV source from http://sourceforge.net/projects/opencvlibrary/ and unpack it. Create a build directory named release and launch cmake-gui:

mkdir release
cmake-gui

In the GUI, select the OpenCV source directory and the release output directory, press Configure, set the build options as needed, then press Generate and quit cmake-gui. Enter the build directory and run:

cd release
make
sudo make install

After installation, start IPython and enter import cv2 to verify that OpenCV loads correctly.
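A slightly fuller check than a bare import (this snippet and its synthetic test image are illustrative, not from the original article): build a small image, and if cv2 is available, run a Gaussian blur on it to confirm the library actually works.

```python
import numpy as np

# A small synthetic test image: a black square on a white background.
img = np.full((64, 64), 255, dtype=np.uint8)
img[16:48, 16:48] = 0

try:
    import cv2
    blurred = cv2.GaussianBlur(img, (5, 5), 1.0)
    ok = blurred.shape == img.shape
    print("OpenCV", cv2.__version__, "loaded, blur ok:", ok)
except ImportError:
    # cv2 is only importable after "make install" has completed successfully.
    ok = False
    print("OpenCV is not installed correctly")
```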

 

posted @ 2012-09-18 13:02 polly | Views (885) | Comments (0)


August 19, 2012

Google Earth Coordinates: U.S. Aircraft Carriers

  Listed here are all the active and retired U.S. aircraft carriers that have been located so far, including:

  Kitty Hawk, CV-63: 35°17'29.66"N, 139°39'43.67"E

  John F. Kennedy, CV-67: 30°23'50.91"N, 81°24'14.86"W

  Nimitz, CVN-68: 32°42'47.88"N, 117°11'22.49"W

  Eisenhower, CVN-69: 36°57'27.13"N, 76°19'46.35"W

  Lincoln, CVN-72: 47°58'53.54"N, 122°13'42.94"W

  George Washington, CVN-73: 36°57'32.90"N, 76°19'45.10"W

  Truman, CVN-75: 36°48'53.25"N, 76°17'49.29"W

  Intrepid, CV-11: 40°45'53.88"N, 74°0'4.22"W

  Lexington, CV-16: 27°48'54.13"N, 97°23'19.65"W

  Constellation: 47°33'11.30"N, 122°39'17.24"W

  Independence: 47°33'7.53"N, 122°39'30.13"W

  Ranger: 47°33'10.63"N, 122°39'9.53"W

  Forrestal and Saratoga: 41°31'39.59"N, 71°18'58.70"W

  America: 39°53'6.36"N, 75°10'45.55"W

posted @ 2012-08-19 10:34 polly | Views (297) | Comments (0)

This list covers all retired and active U.S. Navy aircraft carriers with hull classifications CV, CVA, CVB, CVL, or CVN. Ships numbered after CVA-58 are supercarriers (displacement over 75,000 tons); CVN-65 and CVN-68 onward are nuclear-powered carriers.

The smaller escort carriers (Escort Aircraft Carriers, CVE) are listed separately in the list of U.S. Navy escort carriers.

Hull number  Name  Class  Notes
CV-1  Langley  —  Converted from the collier USS Jupiter
CV-2  Lexington  Lexington class  Sunk after heavy damage at the Battle of the Coral Sea, May 8, 1942
CV-3  Saratoga  Lexington class  Sunk during the Bikini Atoll nuclear weapons tests, July 25, 1946
CV-4  Ranger  Ranger class  Decommissioned October 18, 1946
CV-5  Yorktown  Yorktown class  Sunk at the Battle of Midway, June 7, 1942
CV-6  Enterprise  Yorktown class  Decommissioned February 17, 1947
CV-7  Wasp  Wasp class  Sunk by a Japanese submarine, September 15, 1942
CV-8  Hornet  Yorktown class  Sunk after heavy damage at the Battle of the Santa Cruz Islands, October 27, 1942
CV-9  Essex  Essex class  Decommissioned June 30, 1969
CV-10  Yorktown  Essex class  Decommissioned June 27, 1970
CV-11  Intrepid  Essex class  Decommissioned March 15, 1974
CV-12  Hornet  Essex class  Decommissioned June 24, 1970
CV-13  Franklin  Essex class  Decommissioned February 17, 1947
CV-14  Ticonderoga  Essex class  Long-hull Essex
CV-15  Randolph  Essex class  Long-hull Essex
CV-16  Lexington  Essex class  Decommissioned November 8, 1991
CV-17  Bunker Hill  Essex class  Decommissioned January 9, 1947
CV-18  Wasp  Essex class  Decommissioned July 1, 1972
CV-19  Hancock  Essex class  Long-hull Essex
CV-20  Bennington  Essex class  Decommissioned January 15, 1970
CV-21  Boxer  Essex class  Long-hull Essex
CVL-22  Independence  Independence class  Converted from a Cleveland-class light cruiser
CVL-23  Princeton  Independence class  Converted from a Cleveland-class light cruiser
CVL-24  Belleau Wood  Independence class  Converted from a Cleveland-class light cruiser
CVL-25  Cowpens  Independence class  Converted from a Cleveland-class light cruiser
CVL-26  Monterey  Independence class  Converted from a Cleveland-class light cruiser
CVL-27  Langley  Independence class  Converted from a Cleveland-class light cruiser
CVL-28  Cabot  Independence class  Converted from a Cleveland-class light cruiser
CVL-29  Bataan  Independence class  Converted from a Cleveland-class light cruiser
CVL-30  San Jacinto  Independence class  Converted from a Cleveland-class light cruiser
CV-31  Bon Homme Richard  Essex class  Decommissioned July 2, 1971
CV-32  Leyte  Essex class  Long-hull Essex
CV-33  Kearsarge  Essex class  Long-hull Essex
CV-34  Oriskany  Essex class  Long-hull Essex
CV-35  Reprisal  Essex class  Cancelled during construction
CV-36  Antietam  Essex class  Long-hull Essex
CV-37  Princeton  Essex class  Long-hull Essex
CV-38  Shangri-La  Essex class  Long-hull Essex
CV-39  Lake Champlain  Essex class  Long-hull Essex
CV-40  Tarawa  Essex class  Long-hull Essex
CVB-41  Midway  Midway class  Decommissioned April 11, 1992
CVB-42  Franklin D. Roosevelt  Midway class
CVB-43  Coral Sea  Midway class
CVB-44  —  —  Construction cancelled
CV-45  Valley Forge  Essex class  Long-hull Essex
CV-46  Iwo Jima  Essex class  Construction cancelled
CV-47  Philippine Sea  Essex class  Long-hull Essex
CVL-48  Saipan  Saipan class  Decommissioned January 14, 1970
CVL-49  Wright  Saipan class  Decommissioned May 27, 1970
CV-50 to CV-55  —  Essex class  Construction cancelled
CVB-56 to CVB-57  —  Midway class  Cancelled during construction
CVA-58  United States  United States class  Cancelled during construction
CVA-59  Forrestal  Forrestal class  Decommissioned September 11, 1993
CVA-60  Saratoga  Forrestal class  Decommissioned August 20, 1994
CVA-61  Ranger  Forrestal class  Decommissioned July 10, 1993
CV-62  Independence  Forrestal class  Decommissioned September 30, 1998
CV-63  Kitty Hawk  Kitty Hawk class  Decommissioned May 12, 2009
CV-64  Constellation  Kitty Hawk class  Decommissioned August 6, 2003
CVN-65  Enterprise  Enterprise class  In service
CVA-66  America  Kitty Hawk class  Decommissioned August 9, 1996
CV-67  John F. Kennedy  (Improved) Kitty Hawk class  Decommissioned August 1, 2007
CVN-68  Nimitz  Nimitz class  In service
CVN-69  Dwight D. Eisenhower  Nimitz class  In service
CVN-70  Carl Vinson  Nimitz class  In service
CVN-71  Theodore Roosevelt  Nimitz class  In service
CVN-72  Abraham Lincoln  Nimitz class  In service
CVN-73  George Washington  Nimitz class  In service
CVN-74  John C. Stennis  Nimitz class  In service
CVN-75  Harry S. Truman  Nimitz class  In service
CVN-76  Ronald Reagan  Nimitz class  In service
CVN-77  George H. W. Bush  Nimitz class  In service
CVN-78  Gerald R. Ford  Ford class  Under construction
CVN-79  John F. Kennedy  Ford class  Under construction
CVN-80  (unnamed)  Ford class  Planned

posted @ 2012-08-19 10:33 polly | Views (270) | Comments (0)

August 10, 2012

Hyperspectral imaging is a new generation of optoelectronic detection technology. It emerged in the 1980s and is still developing rapidly. The term is used in contrast with multispectral imaging: a hyperspectral image carries far richer image and spectral information than a multispectral image does. Classified by the spectral resolution of the sensor, spectral imaging techniques generally fall into three categories.

 

(1) Multispectral imaging — spectral resolution on the order of Δλ/λ = 0.1; such sensors typically have only a few bands in the visible and near-infrared region.

 

(2) Hyperspectral imaging — spectral resolution on the order of Δλ/λ = 0.01; such sensors have tens to hundreds of bands in the visible and near-infrared region, with spectral resolution down to the nanometer level.

 

(3) Ultraspectral imaging — spectral resolution on the order of Δλ/λ = 0.001; such sensors can have thousands of bands in the visible and near-infrared region.
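The three categories above can be captured in a tiny helper function; the function name and the numeric thresholds below are illustrative order-of-magnitude cutoffs of ours, not standardized values.

```python
def spectral_class(band_width_nm, center_nm):
    """Rough classification of a sensor by relative spectral resolution dl/l.
    Cutoffs are illustrative order-of-magnitude boundaries, not standards."""
    r = band_width_nm / center_nm
    if r >= 0.05:
        return "multispectral"   # dl/l ~ 0.1
    if r >= 0.005:
        return "hyperspectral"   # dl/l ~ 0.01
    return "ultraspectral"       # dl/l ~ 0.001

print(spectral_class(50, 500))   # multispectral
print(spectral_class(10, 500))   # hyperspectral
print(spectral_class(0.5, 500))  # ultraspectral
```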

 

Spectral analysis is, of course, an important research tool throughout the natural sciences: spectral techniques can probe a specimen's physical structure, chemical composition, and other properties. Spectral measurement is point-based, whereas image measurement captures spatial variation; each has its strengths and weaknesses. Spectral imaging technology is therefore the natural outgrowth of spectral analysis and image analysis, and the product of their combination. It resolves both spectra and images, so it can be used not only for qualitative and quantitative analysis of a target, but also for locating it spatially.

 

The main working component of a hyperspectral imaging system is the imaging spectrometer, a new type of sensor whose development formally began in the early 1980s. The goal was to acquire large volumes of image data in narrow, contiguous spectral bands, so that every pixel carries an almost continuous spectrum. The result is a series of optical images at successive wavelengths, usually comprising tens to hundreds of bands with a spectral resolution of roughly 1–10 nm. Because a hyperspectral image provides a nearly continuous spectral curve for every pixel, it captures spatial information about the target while delivering far richer spectral data than multispectral imaging; this data can feed complex models for discriminating, classifying, and identifying the materials in the image.

 

A hyperspectral image of a target thus contains three intertwined kinds of information: spatial, spectral, and radiometric. It records the image features of the spatial distribution of surface objects, and at the same time allows the radiant intensity and spectral signature of any chosen pixel or group of pixels to be extracted. Image, radiance, and spectrum are the three key characteristics of a hyperspectral image, and their organic combination is what a hyperspectral image is.

 

Hyperspectral image data form a data cube: the image pixel coordinates run along the x and y axes, and wavelength runs along the third (z) axis. The cube consists of consecutive 2-D images stacked along the spectral axis at intervals set by the spectral resolution.
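In NumPy terms, such a data cube is just a 3-D array; the shapes, wavelength range, and values below are made up purely for illustration.

```python
import numpy as np

# A synthetic hyperspectral cube: rows (y) x columns (x) x bands (z axis).
ny, nx, nbands = 4, 5, 100
wavelengths = np.linspace(400.0, 1000.0, nbands)  # nm, ~6 nm band spacing
cube = np.random.default_rng(0).random((ny, nx, nbands))

# Spatial slice: one band across the whole scene -> a 2-D image.
band_image = cube[:, :, 42]
# Spectral slice: one pixel across all bands -> a near-continuous spectrum.
spectrum = cube[1, 3, :]

print(band_image.shape)  # (4, 5)
print(spectrum.shape)    # (100,)
```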

posted @ 2012-08-10 10:42 polly | Views (434) | Comments (0)

July 30, 2012

Q: Link error LNK2019: unresolved external symbol _cvCreateImage
A: Change the solution platform to x64:
Solution Platform dropdown on the toolbar → Configuration Manager → Active solution platform → New → type or select the new platform name → x64.
Problem solved!



Q: Error C1189: Building MFC application with /MD[d] (CRT dll version) requires MFC shared dll version. Please #define _AFXDLL or do not use /MD[d]
A: Go to the project properties (Project menu, Properties). Set 'Use of MFC' to "Use MFC in a Shared DLL". You have to make this change for both the debug and release configurations.

posted @ 2012-07-30 11:57 polly | Views (468) | Comments (0)

July 25, 2012

Algorithm efficiency, prior features, and the algorithm framework: to be finished this week.

posted @ 2012-07-25 19:02 polly | Views (226) | Comments (0)

July 24, 2012

  1. Introduction
  2. The Idea
  3. The Gaussian Case
  4. Experiments with Black-and-White Images
  5. Experiments with Color Images
  6. References

Introduction

Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term "filtering", the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location. For example, Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood, in which the weights decrease with distance from the neighborhood center. Although formal and quantitative explanations of this weight fall-off can be given, the intuition is that images typically vary slowly over space, so near pixels are likely to have similar values, and it is therefore appropriate to average them together. The noise values that corrupt these nearby pixels are mutually less correlated than the signal values, so noise is averaged away while signal is preserved.
The assumption of slow spatial variations fails at edges, which are consequently blurred by linear low-pass filtering. How can we prevent averaging across edges, while still averaging within smooth regions?
Many efforts have been devoted to reducing this undesired effect. Bilateral filtering is a simple, non-iterative scheme for edge-preserving smoothing.


The Idea

The basic idea underlying bilateral filtering is to do in the range of an image what traditional filters do in its domain. Two pixels can be close to one another, that is, occupy nearby spatial locations, or they can be similar to one another, that is, have nearby values, possibly in a perceptually meaningful fashion.
Consider a shift-invariant low-pass domain filter applied to an image:

    h(x) = k_d^{-1}(x) ∬ f(ξ) c(ξ, x) dξ

The bold font for f and h emphasizes the fact that both input and output images may be multi-band. In order to preserve the DC component, it must be

    k_d(x) = ∬ c(ξ, x) dξ
Range filtering is similarly defined:

    h(x) = k_r^{-1}(x) ∬ f(ξ) s(f(ξ), f(x)) dξ

In this case, the kernel s measures the photometric similarity between pixels. The normalization constant in this case is

    k_r(x) = ∬ s(f(ξ), f(x)) dξ
The spatial distribution of image intensities plays no role in range filtering taken by itself. Combining intensities from the entire image, however, makes little sense, since the distribution of image values far away from x ought not to affect the final value at x. In addition, one can show that range filtering without domain filtering merely changes the color map of an image, and is therefore of little use. The appropriate solution is to combine domain and range filtering, thereby enforcing both geometric and photometric locality. Combined filtering can be described as follows:

    h(x) = k^{-1}(x) ∬ f(ξ) c(ξ, x) s(f(ξ), f(x)) dξ

with the normalization

    k(x) = ∬ c(ξ, x) s(f(ξ), f(x)) dξ
Combined domain and range filtering will be denoted as
bilateral filtering. It replaces the pixel value at x with an average of similar and nearby pixel values. In smooth regions, pixel values in a small neighborhood are similar to each other, and the bilateral filter acts essentially as a standard domain filter, averaging away the small, weakly correlated differences between pixel values caused by noise. Consider now a sharp boundary between a dark and a bright region, as in figure 1(a).

Figure 1: (a) a sharp boundary between a dark and a bright region; (b) the similarity function s for a filter centered near the step; (c) the filtered output.

When the bilateral filter is centered, say, on a pixel on the bright side of the boundary, the similarity function
s assumes values close to one for pixels on the same side, and values close to zero for pixels on the dark side. The similarity function is shown in figure 1(b) for a 23x23 filter support centered two pixels to the right of the step in figure 1(a). The normalization term k(x) ensures that the weights for all the pixels add up to one. As a result, the filter replaces the bright pixel at the center by an average of the bright pixels in its vicinity, and essentially ignores the dark pixels. Conversely, when the filter is centered on a dark pixel, the bright pixels are ignored instead. Thus, as shown in figure 1(c), good filtering behavior is achieved at the boundaries, thanks to the domain component of the filter, and crisp edges are preserved at the same time, thanks to the range component.


The Gaussian Case

A simple and important case of bilateral filtering is shift-invariant Gaussian filtering, in which both the closeness function c and the similarity function s are Gaussian functions of the Euclidean distance between their arguments. More specifically, c is radially symmetric:

    c(ξ, x) = exp( −(1/2) ( d(ξ, x) / σ_d )² )

where

    d(ξ, x) = ‖ξ − x‖

is the Euclidean distance. The similarity function s is perfectly analogous to c:

    s(ξ, x) = exp( −(1/2) ( δ(f(ξ), f(x)) / σ_r )² )

where

    δ(φ, f) = ‖φ − f‖

is a suitable measure of distance in intensity space. In the scalar case, this may be simply the absolute value of the pixel difference or, since noise increases with image intensity, an intensity-dependent version of it. Just as this form of domain filtering is shift-invariant, the Gaussian range filter introduced above is insensitive to overall additive changes of image intensity. Of course, the range filter is shift-invariant as well.
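As a concrete illustration of the Gaussian case, here is a minimal brute-force NumPy sketch of the combined domain-and-range filter; the function name, parameters, and demo image are ours, not from the original article, and the code favors clarity over speed.

```python
import numpy as np

def bilateral_filter(img, sigma_d=3.0, sigma_r=50.0):
    """Brute-force Gaussian bilateral filter for a 2-D grayscale array.
    Direct transcription of the combined-filter equations; O(n * w^2)."""
    img = img.astype(np.float64)
    radius = int(3 * sigma_d)
    # Precompute the domain (closeness) kernel c over the window.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    c = np.exp(-0.5 * (yy**2 + xx**2) / sigma_d**2)

    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range (similarity) kernel s, recomputed per pixel.
            s = np.exp(-0.5 * ((window - img[i, j]) / sigma_r) ** 2)
            weights = c * s
            out[i, j] = (weights * window).sum() / weights.sum()
    return out

# Demo: a noisy step edge. The filter smooths each side but keeps the jump,
# because the range kernel gives cross-edge pixels near-zero weight.
rng = np.random.default_rng(0)
step = np.zeros((20, 20))
step[:, 10:] = 100.0
noisy = step + rng.normal(0, 4, size=(20, 20))
smooth = bilateral_filter(noisy, sigma_d=2.0, sigma_r=30.0)
```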


Experiments with Black-and-White Images

Figure 2 (a) and (b) show the potential of bilateral filtering for the removal of texture. The picture "simplification" illustrated by figure 2 (b) can be useful for data reduction without loss of overall shape features in applications such as image transmission, picture editing and manipulation, image description for retrieval.

Figure 2: (a) original image; (b) bilateral-filtered result.

Bilateral filtering with parameters σd = 3 pixels and σr = 50 intensity values is applied to the image in figure 3 (a) to yield the image in figure 3 (b). Notice that most of the fine texture has been filtered away, and yet all contours are as crisp as in the original image. Figure 3 (c) shows a detail of figure 3 (a), and figure 3 (d) shows the corresponding filtered version. The two onions have assumed a graphics-like appearance, and the fine texture has gone. However, the overall shading is preserved, because it is well within the band of the domain filter and is almost unaffected by the range filter. Also, the boundaries of the onions are preserved.

Figure 3: (a) original image; (b) filtered image; (c) detail of (a); (d) corresponding filtered detail.


Experiments with Color Images

For black-and-white images, intensities between any two gray levels are still gray levels. As a consequence, when smoothing black-and-white images with a standard low-pass filter, intermediate levels of gray are produced across edges, thereby producing blurred images. With color images, an additional complication arises from the fact that between any two colors there are other, often rather different colors. For instance, between blue and red there are various shades of pink and purple. Thus, disturbing color bands may be produced when smoothing across color edges. The smoothed image does not just look blurred, it also exhibits odd-looking, colored auras around objects.

Figure 4: (a) detail of a red jacket against a blue sky; (b) standard low-pass smoothing; (c) edge-preserving smoothing of each color band separately; (d) bilateral filtering in CIE-Lab space.

Figure 4 (a) shows a detail from a picture with a red jacket against a blue sky. Even in this unblurred picture, a thin pink-purple line is visible, and is caused by a combination of lens blurring and pixel averaging. In fact, pixels along the boundary, when projected back into the scene, intersect both red jacket and blue sky, and the resulting color is the pink average of red and blue. When smoothing, this effect is emphasized, as the broad, blurred pink-purple area in figure 4 (b) shows.
To address this difficulty, edge-preserving smoothing could be applied to the red, green, and blue components of the image separately. However, the intensity profiles across the edge in the three color bands are in general different. Smoothing the three color bands separately results in an even more pronounced pink and purple band than in the original, as shown in figure 4 (c). The pink-purple band, however, is not widened as in the standard-blurred version of figure 4 (b).
A much better result can be obtained with bilateral filtering. In fact, a bilateral filter allows combining the three color bands appropriately, and measuring photometric distances between pixels in the combined space. Moreover, this combined distance can be made to correspond closely to perceived dissimilarity by using Euclidean distance in the
CIE-Lab color space. This color space is based on a large body of psychophysical data concerning color-matching experiments performed by human observers. In this space, small Euclidean distances are designed to correlate strongly with the perception of color discrepancy as experienced by an "average" color-normal human observer. Thus, in a sense, bilateral filtering performed in the CIE-Lab color space is the most natural type of filtering for color images: only perceptually similar colors are averaged together, and only perceptually important edges are preserved. Figure 4 (d) shows the image resulting from bilateral smoothing of the image in figure 4 (a). The pink band has shrunk considerably, and no extraneous colors appear.

Figure 5: (a) original image; (b) one iteration of bilateral filtering; (c) five iterations.

Figure 5 (c) shows the result of five iterations of bilateral filtering of the image in figure 5 (a). While a single iteration produces a much cleaner image (figure 5 (b)) than the original, and is probably sufficient for most image processing needs, multiple iterations have the effect of flattening the colors in an image considerably, but without blurring edges. The resulting image has a much smaller color map, and the effects of bilateral filtering are easier to see when displayed on a printed page. Notice the cartoon-like appearance of figure 5 (c). All shadows and edges are preserved, but most of the shading is gone, and no "new" colors are introduced by filtering.
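A rough way to see the effect of iteration is a self-contained 1-D sketch (the function, signal, and parameters are illustrative, not from the article): repeated passes drive each smooth region toward a constant while the large jump survives.

```python
import numpy as np

def bilateral_1d(signal, sigma_d=2.0, sigma_r=10.0):
    """One pass of a 1-D Gaussian bilateral filter (illustrative sketch)."""
    n = len(signal)
    r = int(3 * sigma_d)
    idx = np.arange(-r, r + 1)
    c = np.exp(-0.5 * (idx / sigma_d) ** 2)          # domain kernel
    padded = np.pad(signal.astype(float), r, mode="edge")
    out = np.empty(n)
    for i in range(n):
        window = padded[i:i + 2*r + 1]
        s = np.exp(-0.5 * ((window - padded[i + r]) / sigma_r) ** 2)
        w = c * s                                     # combined weights
        out[i] = (w * window).sum() / w.sum()
    return out

# Noisy step signal: five iterations flatten each side toward a constant
# while the 0 -> 100 jump stays sharp.
rng = np.random.default_rng(1)
signal = np.concatenate([np.zeros(50), np.full(50, 100.0)])
signal += rng.normal(0, 3, size=100)
y = signal.copy()
for _ in range(5):
    y = bilateral_1d(y, sigma_d=2.0, sigma_r=10.0)
```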


References

[1] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images", Proceedings of the 1998 IEEE International Conference on Computer Vision, Bombay, India.
[2] T. Boult, R.A. Melter, F. Skorina, and I. Stojmenovic, "G-neighbors", Proceedings of the SPIE Conference on Vision Geometry II, pages 96-109, 1993.
[3] R.T. Chin and C.L. Yeh, "Quantitative evaluation of some edge-preserving noise-smoothing techniques", Computer Vision, Graphics, and Image Processing, 23:67-91, 1983.
[4] L.S. Davis and A. Rosenfeld, "Noise cleaning by iterated local averaging", IEEE Transactions on Systems, Man, and Cybernetics, 8:705-710, 1978.
[5] R.E. Graham, "Snow-removal - a noise-stripping process for picture signals", IRE Transactions on Information Theory, 8:129-144, 1961.
[6] N. Himayat and S.A. Kassam, "Approximate performance analysis of edge preserving filters", IEEE Transactions on Signal Processing, 41(9):2764-77, 1993.
[7] T.S. Huang, G.J. Yang, and G.Y. Tang, "A fast two-dimensional median filtering algorithm", IEEE Transactions on Acoustics, Speech, and Signal Processing, 27(1):13-18, 1979.
[8] J.S. Lee, "Digital image enhancement and noise filtering by use of local statistics", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2(2):165-168, 1980.
[9] M. Nagao and T. Matsuyama, "Edge preserving smoothing", Computer Graphics and Image Processing, 9:394-407, 1979.
[10] P.M. Narendra, "A separable median filter for image noise smoothing", IEEE Transactions on Pattern Analysis and Machine Intelligence, 3(1):20-29, 1981.
[11] K.J. Overton and T.E. Weymouth, "A noise reducing preprocessing algorithm", Proceedings of the IEEE Computer Science Conference on Pattern Recognition and Image Processing, pages 498-507, Chicago, IL, 1979.
[12] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.
[13] G. Ramponi, "A rational edge-preserving smoother", Proceedings of the International Conference on Image Processing, volume 1, pages 151-154, Washington, DC, 1995.
[14] G. Sapiro and D.L. Ringach, "Anisotropic diffusion of color images", Proceedings of the SPIE, volume 2657, pages 471-382, 1996.
[15] D.C.C. Wang, A.H. Vagnucci, and C.C. Li, "A gradient inverse weighted smoothing scheme and the evaluation of its performance", Computer Vision, Graphics, and Image Processing, 15:167-181, 1981.
[16] G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, John Wiley and Sons, New York, NY, 1982.
[17] L. Yin, R. Yang, M. Gabbouj, and Y. Neuvo, "Weighted median filters: a tutorial", IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 43(3):155-192, 1996.

posted @ 2012-07-24 20:39 polly | Views (1995) | Comments (0)

When writing programs we often run into type-conversion problems. Here is a summary of some common conversions.

1. Converting between const char* (C-style strings) and string:

       (1) A const char* can be assigned directly to a string, for example:

               const char* pchar = "qwerasdf";
               string str = pchar;

       (2) A string converts to a C-style string through its c_str() member, for example:

               string str = "qwerasdf";
               const char* pchar = str.c_str();

 

2. A const char* can be assigned directly to a CString, for example:

       const char* pchar = "qwerasdf";
       CString str = pchar;

3. Converting between string and CString:

   Neither type assigns directly to the other, but the two conversions above chain together. To go from string to CString, convert the string to const char* with c_str() and assign that:

       CString cstr;
       string str = "asdasd";
       cstr = str.c_str();

   Going the other way, cast the CString to const char* (valid in ANSI/MBCS builds, where LPCTSTR is const char*) and construct the string from it; calling c_str() then yields a const char* again:

       CString cStr = "adsad";
       string str = (LPCTSTR)cStr;
       const char* pchar = str.c_str();
4. double or int to string:

       double temp = 3.14;
       stringstream strStream;
       strStream << temp;
       string ss = strStream.str();

   string to double or int: std::string has no atoi/atof members; use atoi(str.c_str()) and atof(str.c_str()), or read the value back out of a stringstream with >>.

As the examples above show, chaining these conversions turns what would otherwise require cumbersome helper functions into something simple and readable.

posted @ 2012-07-24 20:34 polly | Views (688) | Comments (0)