SVM is a complex algorithm that cannot be covered in full in a single blog post; see the explanation on Zhihu:
https://www.zhihu.com/question/21094489.
For details, refer to the Baidu Baike link:
https://baike.baidu.com/item/K%E5%9D%87%E5%80%BC%E8%81%9A%E7%B1%BB%E7%AE%97%E6%B3%95/15779627?fromtitle=K-means&fromid=4934806&fr=aladdin.
https://blog.csdn.net/ruthywei/article/details/83045288
Use the SVM algorithm to classify two datasets. The code for the iris dataset is as follows:

from sklearn.svm import SVC
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV


def plot_point2(dataArr, labelArr, Support_vector_index):
    # Scatter-plot the samples colored by class, then circle the support vectors in red
    for i in range(np.shape(dataArr)[0]):
        if labelArr[i] == 0:
            plt.scatter(dataArr[i][0], dataArr[i][1], c='b', s=20)
        elif labelArr[i] == 1:
            plt.scatter(dataArr[i][0], dataArr[i][1], c='y', s=20)
        else:
            plt.scatter(dataArr[i][0], dataArr[i][1], c='g', s=20)
    for j in Support_vector_index:
        # Hollow markers (c='none') with a red edge highlight the support vectors
        plt.scatter(dataArr[j][0], dataArr[j][1], s=100, c='none', alpha=0.5,
                    linewidth=1.5, edgecolor='red')
    plt.show()


if __name__ == "__main__":
    # Load the iris dataset and keep only the first two features so the result can be plotted in 2D
    iris = load_iris()
    x, y = iris.data, iris.target
    x = x[:, :2]
    X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
    # Train a linear-kernel SVM (one-vs-rest decision function for the three iris classes)
    clf = SVC(C=1, cache_size=200, class_weight=None, coef0=0.0,
              decision_function_shape='ovr', degree=3, gamma=0.1, kernel='linear')
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))
    # Indices of the support vectors found during training
    Support_vector_index = clf.support_
    plot_point2(x, y, Support_vector_index)
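GridSearchCV is imported in the snippet above but not used in the visible code. A minimal sketch of how it could be applied to tune C and gamma for an RBF-kernel SVC on the same data is shown below; the parameter grid and kernel choice here are illustrative assumptions, not taken from the original post.

from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV

# Illustrative sketch: grid-search over C and gamma (grid values are assumptions)
iris = load_iris()
x, y = iris.data[:, :2], iris.target
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)

param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))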