
Author: 佳音很近

Link: https://www.zhihu.com/question/47221998/answer/294423911

Source: Zhihu

Copyright belongs to the author. For commercial reprints, contact the author for authorization; for non-commercial reprints, please credit the source.


- Press Ctrl+N to create a new text document.
- Open the WeChat article you want to copy, move the mouse to the top of the browser, and switch from high-speed mode to compatibility mode (as shown in the figure).
- Press Ctrl+C to copy the WeChat article.
- Open Word and press Ctrl+V to paste.
- Double-click an image to open the picture-editing toolbar and adjust the image size.
- Click another image and press F4 to repeat the previous step (a quick way to resize the remaining images).
- When everything is done, give the document a final check.


Now that Caffe2 has been officially released, that recommendation will surely be updated to the new version.

Caffe2's basic unit of computation is the Operator. Given the appropriate number and types of input parameters, each Operator encapsulates the computation logic it needs. The overall differences between Caffe and Caffe2 are shown in the figure below:

The official docs provide a tutorial for migrating from Caffe to Caffe2, and the migration is said to be very simple.

How is Caffe2 different from PyTorch?

That is another common question.

Caffe2 excels at mobile and large-scale deployment. Its newly added multi-GPU support puts it on par with Torch in that regard, but as noted above Caffe2 goes further: it supports distributed training across multiple GPUs in one machine, or across multiple machines each holding one or more GPUs.

PyTorch is well suited to research, experimentation, and trying out different neural networks; Caffe2 leans toward industrial applications, with a particular focus on performance on mobile devices.

Yangqing Jia in his own words

After Caffe2's release, its author Yangqing Jia posted four rounds of answers on reddit. "Yangqing here," he opened, identifying himself up front.

Someone asked what the point of building Caffe2 was, given that PyTorch, TensorFlow, MXNet, and many other frameworks already exist.

Jia said the Caffe2 and PyTorch teams work closely together. They regard Caffe2 as the production option and Torch as the research option. In building AI modules they also follow an "unframework" philosophy: components such as Gloo, NNPACK, and FAISS can be used with any deep learning framework.

Someone asked whether Caffe2 accepts outside contributions.

Jia said they love external contributions and will keep pushing on the open-source front.

Someone asked whether Caffe2 reuses Torch's codebase, and about CUDA and related support.

Jia said they plan to have Caffe2, Torch, and PyTorch share back ends. The frameworks already share Gloo for distributed training, and THCTensor, THNN, and other C/C++ libraries will be shared as well.

At the GPU level, Caffe2 uses CUDA and cuDNN. Jia's team also experimented with OpenCL, but found that CUDA on NVIDIA GPUs worked better.

On other platforms, such as iOS, Caffe2 uses platform-specific technology such as Metal; the official Metal implementation was to be released within a day or two.

Someone asked whether Caffe2 supports dynamic graphs.

Jia answered no, and said this was a deliberate choice by the Caffe2 and PyTorch teams. Caffe2's job is to deliver the best possible performance; if you want extremely flexible computation, choose PyTorch. Jia considers this the better approach, since a "one framework fits all" design tends to hurt performance.

As a result, Caffe2 currently supports only very limited dynamic control, such as dynamic RNNs.
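The flexibility being traded away here is per-call control flow. A minimal PyTorch sketch (assuming PyTorch is installed; the function below is hypothetical) of the dynamic behavior that a static-graph design restricts:

```python
import torch

def step(x):
    # This branch is ordinary Python, evaluated anew on every call, so the
    # computation performed can differ from one input to the next -- exactly
    # the dynamic behavior a static-graph framework like Caffe2 limits.
    if x.sum() > 0:
        return x * 2
    return -x

print(step(torch.tensor([1.0, 2.0])))   # takes the first branch
print(step(torch.tensor([-3.0, 1.0])))  # takes the second branch
```

In Caffe2 the net is declared once up front, so data-dependent branching like this has to be expressed through the framework's limited control operators instead.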

Finally, 量子位 leaves the links:

Caffe2 homepage: http://caffe2.ai/

GitHub repo: https://github.com/caffe2/caffe2

Reference:

https://blog.csdn.net/zchang81/article/details/70316864?utm_source=itdadao&utm_medium=referral

Reading record: read twice



*At The Data Incubator, we pride ourselves on having the most up to date data science curriculum available. Much of our curriculum is based on feedback from corporate and government partners about the technologies they are using and learning. In addition to their feedback we wanted to develop a data-driven approach for determining what we should be teaching in our data science corporate training and our free fellowship for masters and PhDs looking to enter data science careers in industry. Here are the results.*

**The Rankings**

Below is a ranking of 23 open-source deep learning libraries that are useful for Data Science, based on Github and Stack Overflow activity, as well as Google search results. The table shows standardized scores, where a value of 1 means one standard deviation above average (average = score of 0). For example, Caffe is one standard deviation above average in Github activity, while deeplearning4j is close to average. See below for methods.

**Results and Discussion**

The ranking is based on equally weighting three components: Github (stars and forks), Stack Overflow (tags and questions), and Google Results (total and quarterly growth rate). These were obtained using available APIs. Coming up with a comprehensive list of deep learning toolkits was tricky – in the end, we scraped five different lists that we thought were representative (see methods below for details). Computing standardized scores for each metric allows us to see which packages stand out in each category. The full ranking is here, while the raw data is here.
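A standardized score is just a z-score: subtract the mean and divide by the standard deviation. A quick sketch with made-up star counts (the numbers below are illustrative, not the real data):

```python
from statistics import mean, pstdev

# Hypothetical GitHub star counts for a few libraries (not real data).
stars = {"tensorflow": 70000, "caffe": 21000, "keras": 20000, "theano": 7000}

mu = mean(stars.values())
sigma = pstdev(stars.values())  # population standard deviation

# A z-score of 1 means one standard deviation above the average of 0.
z = {name: (count - mu) / sigma for name, count in stars.items()}
for name, score in sorted(z.items(), key=lambda kv: -kv[1]):
    print(f"{name:11s} {score:+.2f}")
```

By construction the scores average to zero, so outliers like TensorFlow stand out immediately.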

**TensorFlow dominates the field with the largest active community**

TensorFlow is at least two standard deviations above the mean on every metric we calculated. TensorFlow has almost three times as many Github forks and more than six times as many Stack Overflow questions as the second most popular framework, Caffe. First open-sourced by the Google Brain team in 2015, TensorFlow has climbed over more senior libraries such as Theano (4) and Torch (8) for the top spot on our list. While TensorFlow is distributed with a Python API running on a C++ engine, several of the libraries on our list can utilize TensorFlow as a back-end and offer their own interfaces. These include Keras (2), which will soon be part of core TensorFlow, and Sonnet (6). The popularity of TensorFlow is likely due to a combination of its general-purpose deep learning framework, flexible interface, good-looking computational graph visualizations, and Google's significant developer and community resources.

**Caffe has yet to be replaced by Caffe2**

Caffe takes a strong third place on our list with more Github activity than all of its competitors (excluding TensorFlow). Caffe is traditionally thought of as more specialized than TensorFlow and was developed with a focus on image processing, object recognition, and pre-trained convolutional neural networks. Facebook released Caffe2 (11) in April 2017, and it already ranks in the top half of the deep learning libraries. Caffe2 is a more lightweight, modular, and scalable version of Caffe that includes recurrent neural networks. Caffe and Caffe2 are separate repos, so data scientists can continue to use the original Caffe. However, there are migration tools such as Caffe Translator that provide a means of using Caffe2 to drive existing Caffe models.

**Keras is the most popular front-end for deep learning**

Keras (2) is the highest-ranked non-framework library. Keras can be used as a front-end for TensorFlow (1), Theano (4), MXNet (7), CNTK (9), or deeplearning4j (14). Keras performed better than average on all three metrics measured. The popularity of Keras is likely due to its simplicity and ease of use. Keras allows for fast prototyping at the cost of some of the flexibility and control that comes from working directly with a framework. Keras is a favorite of data scientists experimenting with deep learning on their data sets. The development and popularity of Keras continues, with RStudio recently releasing an R interface for Keras.

**Theano continues to hold a top spot even without large industry support**

In a sea of new deep learning frameworks, Theano (4) has the distinction of being the oldest library in our rankings. Theano pioneered the use of the computational graph and remains popular in the research community for deep learning and machine learning in general. Theano is essentially a numerical computation library for Python, but can be used with high-level deep learning wrappers like Lasagne (15). While Google supports TensorFlow (1) and Keras (2), Facebook backs PyTorch (5) and Caffe2 (11), MXNet (7) is the official deep learning framework of Amazon Web Services, and Microsoft designed and maintains CNTK (9), Theano remains popular without official support from a technology industry giant.

**Sonnet is the fastest growing library**

Early in 2017 Google's DeepMind publicly released the code for Sonnet (6), a high-level object-oriented library built on top of TensorFlow. The number of pages returned in Google search results for Sonnet grew by 272% this quarter compared to the last, the largest gain of all the libraries we ranked. Although Google acquired the British artificial intelligence company in 2014, DeepMind and Google Brain have remained mostly independent teams. DeepMind focuses on artificial general intelligence, and Sonnet can help a user build on top of their specific AI ideas and research.

**Python is the language of deep learning interfaces**

PyTorch (5), a framework whose sole interface is in Python, is the second fastest growing library on our list. Compared to last quarter, PyTorch had 236% more Google search results. Out of the 23 open-source deep learning frameworks and wrappers we ranked, only three did not have interfaces in Python: Dlib (10), MatConvNet (20), and OpenNN (23). C++ and R interfaces were available in just seven and six of the 23 libraries, respectively. While the data science community is somewhat close to a consensus when it comes to using Python, there are still many options for deep learning libraries.

**Limitations**

As with any analysis, decisions were made along the way. All source code and data is on our Github Page. The full list of deep learning libraries came from a few sources.

Naturally, libraries that have been around longer will have higher metrics, and therefore a higher ranking. The only metric that takes this into account is the Google search quarterly growth rate.

The data presented a few difficulties:

- Neural Designer and Wolfram Mathematica are proprietary and were removed
- CNTK is also called the Microsoft Cognitive Toolkit, but we only used the original CNTK name
- Neon was changed to Nervana Neon
- Paddle was changed to PaddlePaddle
- Some libraries were obviously derivatives of other libraries, such as Caffe and Caffe2. We decided to treat libraries individually if they had unique GitHub repositories.

**Methods**

All source code and data is on our Github Page.

We first generated a list of 23 open-source deep learning libraries from each of five different sources, and then collected metrics for all of them, to come up with the ranking. Github data is based on both stars and forks, Stack Overflow data is based on tags and questions containing the package name, and Google Results are based on total number of Google search results over the last five years and the quarterly growth rate of results calculated over the past three months as compared to the prior three months.
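The quarterly growth rate behind the Google Results metric is a simple percent change; a sketch with hypothetical counts:

```python
def quarterly_growth(current_quarter: int, prior_quarter: int) -> float:
    """Percent change in search-result counts, quarter over quarter."""
    return 100.0 * (current_quarter - prior_quarter) / prior_quarter

# E.g. a library whose result count rose from 2500 to 9300 pages
# grew by 272%, the figure reported for Sonnet above.
print(f"{quarterly_growth(9300, 2500):.0f}%")
```

Because it is a ratio rather than a raw count, this is the one metric on which a young library can beat an established one.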

A few other notes:

- Several of the library names are common words (caffe, chainer, lasagne); for this reason, the search terms used to determine the number of Google search results included the library name together with the term "deep learning".
- Any unavailable Stack Overflow counts were converted to a zero count.
- Counts were standardized to mean 0 and standard deviation 1, and then averaged to get the Github and Stack Overflow scores, which, combined with the Search Results score, give the Overall score.
- Some manual checks were done to confirm Github repository location.

All data was downloaded on September 14, 2017.

**Resources**

Source code is available on The Data Incubator's GitHub. If you're interested in learning more, consider:

- Data science corporate training
- Free eight-week fellowship for masters and PhDs looking to enter industry
- Hiring Data Scientists

**Authors**


for Excel with Header" and the results can then be pasted into Excel.

Reference:

https://wenku.baidu.com/view/62f8ea0776c66137ee061916.html


You will often see a Table or Figure environment followed by [htb], where h means here, t means top, and b means bottom, i.e., the candidate positions for the float in the text.

The [htb] options are tried in the order given: h, then t, then b.

\begin{figure}[htb!]

\begin{table}[H]: what does [H] do? 潇云: it pins the figure or table right after the surrounding text, so it does not drift away. (The [H] specifier is provided by the float package.)

2. Adjusting table row height, column spacing, and overall size. You can use:

\begin{table} \renewcommand{\arraystretch}{1.5} % increase row height to 1.5x

\begin{table} \addtolength{\tabcolsep}{-2pt} % reduce column padding by 2pt

\begin{table} \small % set the table in a smaller font

3. Table alignment and borders

\begin{tabular}{|l||r|r|r|c|}

Here l means left, r means right, and c means center. The line above declares a table with 5 columns whose contents are aligned left, right, right, right, and center respectively; the | characters draw vertical rules between columns (|| draws a double rule).
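A minimal compilable sketch combining the placement, sizing, and alignment options above (the table contents are made up for illustration):

```latex
\documentclass{article}
\usepackage{float} % provides the [H] placement specifier
\begin{document}
\begin{table}[H]
  \renewcommand{\arraystretch}{1.5} % rows 1.5x taller
  \addtolength{\tabcolsep}{-2pt}    % slightly tighter column padding
  \small                            % smaller font for the whole table
  \centering
  \begin{tabular}{|l||r|r|r|c|}     % left, right x3, center; | = vertical rule
    \hline
    name  & a & b & c & note \\
    \hline
    row 1 & 1 & 2 & 3 & x    \\
    \hline
  \end{tabular}
  \caption{Placement and alignment demo.}
\end{table}
\end{document}
```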

Reference:http://blog.csdn.net/bennyfun79/article/details/4083797


and "LaTeX 2e sample file" use \documentstyle and \documentclass, respectively. If you want to use \usepackage, you have to use the "LaTeX 2e sample file" rather than the "LaTeX 2.09 sample file".


Method 1: the clumsy way, wrapping each equation one by one in

\begin{small}

\begin{equation}

\ldots

\end{equation}

\end{small}


Method 2:

Define a new environment.

At the start (in the preamble), add:

\newenvironment{sequation}{\begin{equation}\small}{\end{equation}}


Demo output:

Demo code:

\documentclass{article}

\usepackage[includemp,body={398pt,550pt},footskip=30pt,%

marginparwidth=60pt,marginparsep=10pt]{geometry}

\newenvironment{sequation}{\begin{equation}\small}{\end{equation}}

\newenvironment{tequation}{\begin{equation}\tiny}{\end{equation}}

\begin{document}

\begin{tequation}

\int_a^b f(x) \mathrm{d}x=A

\end{tequation}

\begin{sequation}

\int_a^b f(x) \mathrm{d}x=A

\end{sequation}

\begin{equation}

\int_a^b f(x) \mathrm{d}x=A

\end{equation}

\end{document}


For method 1, if \small is still not small enough, you can use \footnotesize (see p. 101 of my LaTeX textbook). I used this to shrink Equation 3 of the MvFS paper I submitted to KDD.
