Iris Data Analysis Project (with Detailed Code and Results)
Dataset source: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html#sklearn.datasets.load_iris
1. Data Import

The iris data can be loaded directly from scikit-learn's datasets module via sklearn.datasets.load_iris(*, return_X_y=False, as_frame=False).
The returned object exposes two main attributes: iris.data and iris.target.
```python
import numpy as np
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()      # load the iris data directly from sklearn's datasets module;
                                 # the result behaves like a dict (a Bunch)
print(type(iris['data']))        # type of the data array
print(iris['data'].shape)        # shape of the feature matrix
print(iris['target'].shape)      # shape of the target vector
print(iris['target_names'])      # class names stored in the Bunch
print(iris['target'])            # encoded target column

# Split into X and y
X, y = iris.data, iris.target

# Convert to a single DataFrame (features plus class label)
iris_data = pd.DataFrame(np.hstack((X, y.reshape(-1, 1))),
                         index=range(X.shape[0]),
                         columns=['sepal_length_cm', 'sepal_width_cm',
                                  'petal_length_cm', 'petal_width_cm', 'class'])

iris_data.info()                 # check the column dtypes
```
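As an aside, the load_iris signature quoted above also accepts as_frame=True, which returns the data as a pandas DataFrame directly. A minimal sketch of that shortcut (an addition for illustration, not used in the rest of this post):

```python
from sklearn import datasets

# Sketch: as_frame=True returns a Bunch whose .frame attribute is already a
# DataFrame with the four feature columns plus an encoded 'target' column,
# so the manual np.hstack / pd.DataFrame construction above becomes optional.
iris_frame = datasets.load_iris(as_frame=True)
df = iris_frame.frame
print(df.head())
print(df['target'].value_counts())   # 50 samples per class
```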
Next, as usual, run a descriptive analysis:
```python
iris_data.describe()   # summary statistics
```
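Since the overall describe() mixes the three species together, a small follow-up sketch (an addition, not from the original post) can show the per-class means as well:

```python
# Sketch: mean of each measurement per class, assuming iris_data was built as above
print(iris_data.groupby('class').mean())
```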
```python
import seaborn as sns

# Visual analysis: pairwise relationships and distributions of the features
sns.pairplot(iris_data.dropna(), hue='class')
```
```python
import matplotlib.pyplot as plt

fig = plt.gcf()
fig.set_size_inches(12, 8)
ax = sns.heatmap(iris_data.corr(), annot=True, cmap='GnBu',
                 linewidths=1, linecolor='k', square=True, mask=False,
                 vmin=-1, vmax=1,
                 cbar_kws={"orientation": "vertical"}, cbar=True)
```
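Note that the heatmap above treats the encoded class label as just another numeric column. As a sanity check, here is a small sketch (an addition, not from the original post) of the same correlations restricted to the four measurement columns:

```python
# Sketch: correlation matrix of the four measurement columns only
# (the encoded 'class' label is excluded here).
feature_cols = ['sepal_length_cm', 'sepal_width_cm',
                'petal_length_cm', 'petal_width_cm']
print(iris_data[feature_cols].corr().round(2))
```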
```python
# Train/test split
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                     random_state=20, shuffle=True)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# The output shows the 150 samples split into 120 training and 30 test samples.
```
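With only 150 samples, it can also be worth stratifying the split so each species keeps the same proportion in both sets. The sketch below is a variant of the split above, added here for illustration rather than taken from the original post:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Sketch: stratified variant of the same 80/20 split
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=20, shuffle=True, stratify=y)
print(np.bincount(y_tr), np.bincount(y_te))   # per-class counts in each set
```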
Now plot the training data to see how the three flower classes are distributed:
Reference: https://blog.csdn.net/Shine_rise/article/details/102975238
```python
# Visualize the training data (sepal length vs. sepal width, colored by class)
plt.scatter(X_train[y_train == 0][:, 0], X_train[y_train == 0][:, 1], color='r')
plt.scatter(X_train[y_train == 1][:, 0], X_train[y_train == 1][:, 1], color='g')
plt.scatter(X_train[y_train == 2][:, 0], X_train[y_train == 2][:, 1], color='b')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.show()
```
```python
from sklearn.linear_model import LogisticRegression
from sklearn import svm, metrics
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Logistic Regression
model = LogisticRegression()
model.fit(X_train, y_train)
prediction = model.predict(X_test)
print('The accuracy of the Logistic Regression is {0}'.format(metrics.accuracy_score(prediction, y_test)))

# Support Vector Machine
model = svm.SVC()
model.fit(X_train, y_train)
prediction = model.predict(X_test)
print('The accuracy of the SVM is: {0}'.format(metrics.accuracy_score(prediction, y_test)))

# Decision Tree
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
prediction = model.predict(X_test)
print('The accuracy of the Decision Tree is: {0}'.format(metrics.accuracy_score(prediction, y_test)))

# K-Nearest Neighbours
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
prediction = model.predict(X_test)
print('The accuracy of the KNN is: {0}'.format(metrics.accuracy_score(prediction, y_test)))
```
Result: the SVM scores highest, while the other three models all give the same accuracy as each other.
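A single 80/20 split can make this ranking somewhat split-dependent. As a rough cross-check (an addition, not in the original post), 5-fold cross-validation over the full dataset could be sketched like this; the ranking it produces may differ from the one above:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Sketch: 5-fold cross-validated accuracy for the same four model families.
# max_iter is raised for LogisticRegression to avoid a convergence warning
# (an assumption on my part, not a setting from the post).
X, y = load_iris(return_X_y=True)
models = {
    'LogisticRegression': LogisticRegression(max_iter=1000),
    'SVM': SVC(),
    'DecisionTree': DecisionTreeClassifier(),
    'KNN (k=3)': KNeighborsClassifier(n_neighbors=3),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f'{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})')
```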
```python
# In scikit-learn 1.2+, plot_confusion_matrix has been removed;
# ConfusionMatrixDisplay.from_estimator is the current API for this plot.
from sklearn.metrics import ConfusionMatrixDisplay

class_names = iris.target_names     # class names used as display labels
np.set_printoptions(precision=2)    # print arrays with two decimal places

# Plot the confusion matrix, with and without normalization
titles_options = [("Confusion matrix, without normalization", None),
                  ("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
    disp = ConfusionMatrixDisplay.from_estimator(model, X_test, y_test,
                                                 display_labels=class_names,
                                                 cmap=plt.cm.Blues,
                                                 normalize=normalize)
    disp.ax_.set_title(title)
    print(title)
    print(disp.confusion_matrix)
plt.show()
```
Output:
Now do the same for the SVM, using a single combined block (model fitting plus plotting):
```python
from sklearn import svm, metrics
from sklearn.metrics import ConfusionMatrixDisplay

# Support Vector Machine
model = svm.SVC()
model.fit(X_train, y_train)
prediction = model.predict(X_test)
print('The accuracy of the SVM is: {0}'.format(metrics.accuracy_score(prediction, y_test)))

class_names = iris.target_names     # class names used as display labels
np.set_printoptions(precision=2)    # print arrays with two decimal places

# Plot the confusion matrix, with and without normalization
titles_options = [("Confusion matrix, without normalization", None),
                  ("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
    disp = ConfusionMatrixDisplay.from_estimator(model, X_test, y_test,
                                                 display_labels=class_names,
                                                 cmap=plt.cm.Blues,
                                                 normalize=normalize)
    disp.ax_.set_title(title)
    print(title)
    print(disp.confusion_matrix)
plt.show()
```
As the resulting plots show, the predictions line up neatly along the diagonal. The conclusion matches the earlier accuracy_score comparison: the SVM is more accurate than the KNN.
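Beyond the confusion matrices, a per-class precision/recall/F1 summary can round out the picture. A minimal sketch, assuming the SVM block above has just been run (so model, X_test, y_test, prediction and class_names are in scope); this is an addition, not part of the original post:

```python
from sklearn.metrics import classification_report

# Sketch: per-class precision, recall and F1 for the fitted SVM
print(classification_report(y_test, prediction, target_names=class_names))
```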