The README file is well written, and I fully understand the Examples section (including Precomputed Kernels: "Constructing a linear kernel matrix and then using the precomputed kernel gives exactly the same testing error as using the LIBSVM built-in linear kernel." A kernel is just a similarity measure; you can define whatever similarity you like.)

Setting libsvm_options:

Examples of options: -s 0 -c 10 -t 1 -g 1 -r 1 -d 3
Classify a binary data with polynomial kernel (u'v+1)^3 and C = 10

options:

-s svm_type : set type of SVM (default 0)

0 -- C-SVC

1 -- nu-SVC

2 -- one-class SVM

3 -- epsilon-SVR

4 -- nu-SVR
What is the full name of C-SVC?
C-SVC (C-support vector classification), nu-SVC (nu-support vector classification), one-class SVM (distribution estimation), epsilon-SVR (epsilon-support vector regression), nu-SVR (nu-support vector regression)

-t kernel_type : set type of kernel function (default 2)

0 -- linear: u'*v

1 -- polynomial: (gamma*u'*v + coef0)^degree

2 -- radial basis function: exp(-gamma*|u-v|^2)

3 -- sigmoid: tanh(gamma*u'*v + coef0)

-d degree : set degree in kernel function (default 3)

-g gamma : set gamma in kernel function (default 1/num_features)

-r coef0 : set coef0 in kernel function (default 0)

-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)

-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)

-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)

-m cachesize : set cache memory size in MB (default 100)

-e epsilon : set tolerance of termination criterion (default 0.001)

-h shrinking: whether to use the shrinking heuristics, 0 or 1 (default 1)

-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)

-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)

The k in the -g option means the number of attributes in the input data.
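The four kernel formulas listed above can be checked numerically. A small pure-Python sketch (Python is used here only for a self-contained illustration; gamma, coef0, and degree default to the values discussed above):

```python
import math

def linear(u, v):
    # u'*v
    return sum(a * b for a, b in zip(u, v))

def polynomial(u, v, gamma=1.0, coef0=0.0, degree=3):
    # (gamma*u'*v + coef0)^degree
    return (gamma * linear(u, v) + coef0) ** degree

def rbf(u, v, gamma=1.0):
    # exp(-gamma*|u-v|^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq_dist)

def sigmoid(u, v, gamma=1.0, coef0=0.0):
    # tanh(gamma*u'*v + coef0)
    return math.tanh(gamma * linear(u, v) + coef0)

u, v = [1.0, 2.0], [3.0, 4.0]
print(linear(u, v))                  # 11.0
print(polynomial(u, v, coef0=1.0))   # (11+1)^3 = 1728.0, the '-t 1 -g 1 -r 1 -d 3' kernel
```

Note that `rbf(u, u)` is always 1, the maximum similarity, which matches the intuition that a kernel is a similarity measure.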

(2) How to use a linear kernel?

matlab> % Linear Kernel

matlab> model_linear = svmtrain(train_label, train_data, '-t 0');

Strictly speaking, the linear kernel also requires tuning the parameter C, just like the Gaussian kernel. Libing Wang says C = 1 usually works well and tuning may not make much difference, though it depends on the data set. From the SVM objective function you can see that a large C heavily penalizes the slack variables, pushing them toward 0, i.e. toward classifying the entire training set correctly. This gives high accuracy on the training set, but generalization is not necessarily good, i.e. performance on the test set may suffer. A smaller C allows some boundary points to be misclassified, treating them as noise, which tends to generalize better.
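The trade-off can be seen directly in the soft-margin objective 0.5*||w||^2 + C*sum(xi). A toy Python calculation (the numbers are hypothetical, chosen only to illustrate how C shifts the preference):

```python
def svm_objective(w_norm_sq, slacks, C):
    # Soft-margin SVM objective: 0.5*||w||^2 + C * (sum of slack variables xi).
    return 0.5 * w_norm_sq + C * sum(slacks)

# A wide-margin solution that tolerates a few misclassified points (nonzero slacks)...
wide = svm_objective(w_norm_sq=1.0, slacks=[0.5, 0.3], C=1.0)
# ...versus a narrow-margin solution that fits every training point (zero slack).
narrow = svm_objective(w_norm_sq=4.0, slacks=[], C=1.0)
print(wide, narrow)  # 1.3 2.0 -> with C=1 the wide margin is preferred

# With a large C the same slacks become expensive, so the zero-slack solution wins.
wide_bigC = svm_objective(1.0, [0.5, 0.3], C=100.0)
narrow_bigC = svm_objective(4.0, [], C=100.0)
print(wide_bigC, narrow_bigC)  # 80.5 2.0
```

This is exactly the behavior described above: large C drives the optimizer toward fitting all training points; small C trades a few training errors for a wider margin.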

(3) How to use a Gaussian (RBF) kernel?

matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07');

Discussed with Libing on 2015-04-20; possible explanations: with few samples the Gaussian kernel may not be suitable, and the parameter range searched was limited, so a wider range might give better results.

(4) How to perform cross validation?

The README file says: "If the '-v' option is specified, cross validation is conducted and the returned model is just a scalar: cross-validation accuracy for classification and mean-squared error for regression."

(5) How to tune the two parameters (C and gamma) of the Gaussian kernel?
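The standard answer is a grid search over exponentially growing (C, gamma) pairs, keeping the pair with the best cross-validation accuracy. A Python sketch of the bookkeeping (cross_val_accuracy is a hypothetical stand-in for calling svmtrain with the '-v' option; the grid ranges follow the usual LIBSVM guide recommendation):

```python
def grid_search(cross_val_accuracy,
                log2c_range=range(-5, 16, 2),
                log2g_range=range(-15, 4, 2)):
    # Try exponentially growing (C, gamma) pairs; keep the best CV accuracy.
    best = (None, None, float('-inf'))  # (C, gamma, cv accuracy)
    for log2c in log2c_range:
        for log2g in log2g_range:
            c, g = 2.0 ** log2c, 2.0 ** log2g
            acc = cross_val_accuracy(c, g)  # e.g. svmtrain(..., '-v 5') in MATLAB
            if acc > best[2]:
                best = (c, g, acc)
    return best

# Toy stand-in "accuracy" peaking at C=8, gamma=0.125 (purely illustrative).
fake_acc = lambda c, g: -abs(c - 8) - abs(g - 0.125)
print(grid_search(fake_acc))  # (8.0, 0.125, 0.0)
```

After the coarse grid finds a good region, a finer grid around that (C, gamma) is usually searched.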

(6) How to use a precomputed kernel?

To use a precomputed kernel, you must include the sample serial number as the first column of the training and testing data (assume your kernel matrix is K, # of instances is n):
matlab> K1 = [(1:n)', K]; % include sample serial number as first column
matlab> model = svmtrain(label_vector, K1, '-t 4');
matlab> [predict_label, accuracy, dec_values] = svmpredict(label_vector, K1, model); % test the training data

We give the following detailed example by splitting heart_scale into 150 training and 120 testing data.  Constructing a linear kernel matrix and then using the precomputed kernel gives exactly the same testing error as using the LIBSVM built-in linear kernel.
matlab>
matlab> % Split Data
matlab> train_data = heart_scale_inst(1:150,:);
matlab> train_label = heart_scale_label(1:150,:);
matlab> test_data = heart_scale_inst(151:270,:);
matlab> test_label = heart_scale_label(151:270,:);
matlab>
matlab> % Linear Kernel
matlab> model_linear = svmtrain(train_label, train_data, '-t 0');
matlab> [predict_label_L, accuracy_L, dec_values_L] = svmpredict(test_label, test_data, model_linear);
matlab>
matlab> % Precomputed Kernel
matlab> model_precomputed = svmtrain(train_label, [(1:150)', train_data*train_data'], '-t 4');
matlab> [predict_label_P, accuracy_P, dec_values_P] = svmpredict(test_label, [(1:120)', test_data*train_data'], model_precomputed);
matlab>
matlab> accuracy_L % Display the accuracy using linear kernel
matlab> accuracy_P % Display the accuracy using precomputed kernel
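The equivalence above rests on the fact that train_data*train_data' is exactly the matrix of pairwise linear-kernel values. A pure-Python check of that identity on toy data (the variable names are illustrative):

```python
def linear_kernel_matrix(X, Z):
    # K[i][j] = <X[i], Z[j]>, the Gram matrix of linear-kernel values.
    return [[sum(a * b for a, b in zip(x, z)) for z in Z] for x in X]

train = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
test = [[2.0, 1.0]]

K_train = linear_kernel_matrix(train, train)  # plays the role of train_data*train_data'
K_test = linear_kernel_matrix(test, train)    # plays the role of test_data*train_data'

# The training Gram matrix is symmetric, as any valid kernel matrix must be.
assert all(K_train[i][j] == K_train[j][i]
           for i in range(3) for j in range(3))
print(K_test)  # [[2.0, 2.0, 3.0]]
```

Note that the test-side matrix is test instances against *training* instances, which is why the MATLAB example above computes test_data*train_data' rather than test_data*test_data'.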

(7) How to obtain probability estimates?
For probability estimates, you need '-b 1' for training and testing:
matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07 -b 1');
matlab> [predict_label, accuracy, prob_estimates] = svmpredict(heart_scale_label, heart_scale_inst, model, '-b 1');

matlab> model = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07');
matlab> [predict_label, accuracy, dec_values] = svmpredict(heart_scale_label, heart_scale_inst, model); % test the training data

[predicted_label, accuracy, decision_values/prob_estimates] = svmpredict(testing_label_vector, testing_instance_matrix, model [, 'libsvm_options']);

Meaning of the svmpredict outputs:
predicted_label is a vector of predicted labels (so lines 12-14 of CLSlibsvmC are not needed);

The function 'svmpredict' has three outputs. The first one, predicted_label, is a vector of predicted labels. The second output, accuracy, is a vector including accuracy (for classification), mean squared error, and squared correlation coefficient (for regression). The third is a matrix containing decision values or probability estimates (if '-b 1' is specified). If k is the number of classes, for decision values, each row includes results of predicting k(k-1)/2 binary-class SVMs. For probabilities, each row contains k values indicating the probability that the testing instance is in each class. Note that the order of classes here is the same as the 'Label' field in the model structure.
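For the k(k-1)/2 decision values, LIBSVM's 1-vs-1 prediction is a majority vote over the pairwise classifiers. A Python sketch of that vote for one test instance (the pair ordering (0,1), (0,2), ..., and the tie-breaking rule here are assumptions for illustration, not guaranteed to match LIBSVM internals exactly):

```python
from itertools import combinations

def one_vs_one_vote(dec_values, labels):
    # dec_values: k*(k-1)/2 decision values, one per class pair (i, j),
    # ordered (0,1), (0,2), ..., (0,k-1), (1,2), ...
    # A positive value votes for class i, a negative one for class j.
    votes = {lab: 0 for lab in labels}
    for (i, j), d in zip(combinations(range(len(labels)), 2), dec_values):
        votes[labels[i] if d > 0 else labels[j]] += 1
    return max(labels, key=lambda lab: votes[lab])  # ties go to the earlier label

# k = 3 classes -> 3 pairwise classifiers: (A,B), (A,C), (B,C).
print(one_vs_one_vote([1.2, -0.5, -0.8], ['A', 'B', 'C']))  # 'C'
```

Here (A,B) votes A, while (A,C) and (B,C) both vote C, so C wins 2-1.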

(8) How does LibSVM implement one-versus-rest and one-versus-one multi-class classification?
The definitions of one-versus-rest and one-versus-one are in my pattern recognition notes, back of page 4 (also see Sun Jixiang's textbook, p. 47). I asked Libing Wang and Junge Zhang; neither had looked into this deeply. Based on "If k is the number of classes, for decision values, each row includes results of predicting k(k-1)/2 binary-class SVMs. For probabilities, each row contains k values indicating the probability that the testing instance is in each class.", I thought the probabilities were produced one-versus-rest (i.e. with the -b 1 option), and both of them thought my understanding was probably right. Junge said that for the PASCAL and ImageNet competitions they trained k SVMs (one-versus-rest; one-versus-one was not used because it is too slow and the results are probably similar) rather than using SVM directly on the multi-class problem.
2013-09-10 reply from the LIBSVM author: "Libsvm implements only 1vs1. For 1vsrest, you can check the following libsvm faq:"

http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f808
Q: LIBSVM supports 1-vs-1 multi-class classification. If instead I would like to use 1-vs-rest, how to implement it using MATLAB interface?

Please use code in the following directory. The following example shows how to train and test the problem dna (training and testing).

[trainY trainX] = libsvmread('./dna.scale');
[testY testX] = libsvmread('./dna.scale.t');
model = ovrtrain(trainY, trainX, '-c 8 -g 4');
[pred ac decv] = ovrpredict(testY, testX, model);
fprintf('Accuracy = %g%%\n', ac * 100);
Conduct CV on a grid of parameters
bestcv = 0;
for log2c = -1:2:3,
    for log2g = -4:2:1,
        cmd = ['-q -c ', num2str(2^log2c), ' -g ', num2str(2^log2g)];
        cv = get_cv_ac(trainY, trainX, cmd, 3);
        if (cv >= bestcv),
            bestcv = cv; bestc = 2^log2c; bestg = 2^log2g;
        end
        fprintf('%g %g %g (best c=%g, g=%g, rate=%g)\n', log2c, log2g, cv, bestc, bestg, bestcv);
    end
end
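The logic behind ovrtrain/ovrpredict above can be sketched in plain Python: train one binary scorer per class (that class as +1, all others as -1) and predict the class whose scorer is most confident. The scorer functions below are hypothetical stand-ins for per-class binary SVM decision functions:

```python
def ovr_predict(scorers, x):
    # scorers: {class_label: decision_function}; each function is the binary
    # classifier trained with that class as +1 and the rest as -1.
    # Predict the class whose one-vs-rest scorer gives the largest value.
    return max(scorers, key=lambda lab: scorers[lab](x))

# Toy stand-in scorers for 3 classes (illustrative only, not real SVMs).
scorers = {
    'A': lambda x: x[0] - x[1],
    'B': lambda x: x[1] - x[0],
    'C': lambda x: -abs(x[0] - x[1]),
}
print(ovr_predict(scorers, [2.0, 5.0]))  # 'B'
```

This is the k-classifier scheme Junge described for the competitions: k binary problems instead of k(k-1)/2 pairwise ones.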

(9) How to obtain the accuracy in cross-validation mode?

--------------------------------------------------------------------------------------------------------------------------------------------------------
http://blog.sina.com.cn/s/blog_64b046c701018c8n.html
Brief notes on the differences between MATLAB's built-in svm functions and libsvm

1. The only model in MATLAB's built-in svm functions is C-SVC (C-support vector classification), while the libsvm toolbox offers C-SVC, nu-SVC (nu-support vector classification), one-class SVM (distribution estimation), epsilon-SVR (epsilon-support vector regression), nu-SVR (nu-support vector regression), and more.
2. MATLAB's built-in svm functions support only classification, not regression; libsvm supports both.
3. MATLAB's built-in svm functions support only binary classification, so multi-class problems must be implemented yourself with a multi-class algorithm; libsvm supports multi-class classification via the 1v1 algorithm.
4. With the RBF kernel, MATLAB's built-in svm functions apparently cannot tune the kernel parameter gamma (only the default seems usable), while libsvm allows tuning it.
5. libsvm solves the quadratic programming problem with SMO, while MATLAB's built-in svm functions offer three QP solvers: the classic quadratic method, SMO, and least squares. (This is the only advantage of MATLAB's built-in svm functions I have found so far.)
--------------------------------------------------------------------------------------------------------------------------------------------------------

SVM theory

Derive the kernelized form of SVM (Eric Xing's lecture notes) + Sections 4.3 and 4.4 of M. Belkin, P. Niyogi, and V. Sindhwani, "Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples," J. Machine Learning Research, vol. 7, pp. 2399-2434, 2006 + Ensemble Manifold Regularization (TPAMI 2012).

"ZhuMLSS14.pdf" is very good introductory material.