Color, Image, Grayscale
Published: 2020-12-06 · Source: Internet
Classification Performance on the Original Dataset

First, train a ResNet model on the original dataset to establish a baseline. The model's performance during training (accuracy and loss) is shown below.
```python
# Train model
model_orig, history_orig = run_resnet(X_tr, y_tr, X_va, y_va)

# Show model performance during training
plot_history(history_orig)

# Evaluate model
loss_orig_tr, acc_orig_tr = model_orig.evaluate(X_tr, y_tr)
loss_orig_va, acc_orig_va = model_orig.evaluate(X_va, y_va)
loss_orig_te, acc_orig_te = model_orig.evaluate(X_te, y_te)

# Report classification report and confusion matrix
plot_classification_report(X_te, y_te, model_orig)
```

```
Train score: loss = 0.0537 - accuracy = 0.9817
Valid score: loss = 0.1816 - accuracy = 0.9492
Test score:  loss = 0.1952 - accuracy = 0.9421
```

With the model trained, let's look at the corresponding confusion matrix to see how well it detects the eight target classes. The confusion matrix on the left shows the number of correctly/incorrectly identified samples, while the one on the right shows the proportions for each target class.
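The `plot_classification_report` helper is not shown in this part of the series; as a hedged sketch, the two confusion-matrix panels it draws (raw counts on the left, per-class proportions on the right) can be reproduced with scikit-learn's `confusion_matrix`. The toy labels below are stand-ins, not the article's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy stand-ins for y_te and a model's argmax predictions
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])

# Left panel: raw counts of correct/incorrect samples
cm_counts = confusion_matrix(y_true, y_pred)

# Right panel: each row normalized by the true-class count
cm_ratio = confusion_matrix(y_true, y_pred, normalize="true")

print(cm_counts)
print(cm_ratio.round(2))
```

With `normalize="true"`, each row sums to 1, which is what makes per-class comparison easy even when classes are imbalanced.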
Classification Performance on the Grayscale Dataset

Now do the same with the grayscale-converted images. How does the model perform during training?
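The grayscale arrays `X_tr_gray`, `X_va_gray`, and `X_te_gray` were prepared in an earlier preprocessing step not shown here. As a hedged sketch of one plausible recipe, a luminosity-weighted average of the RGB channels can be repeated over three channels so the ResNet input shape stays unchanged:

```python
import numpy as np

def to_grayscale(x):
    """x: float array of shape (N, H, W, 3); returns a gray version, same shape."""
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    gray = np.tensordot(x, weights, axes=([-1], [0]))  # (N, H, W)
    return np.repeat(gray[..., None], 3, axis=-1)      # back to (N, H, W, 3)

# Random data standing in for the training images
X_tr = np.random.rand(4, 32, 32, 3).astype("float32")
X_tr_gray = to_grayscale(X_tr)
print(X_tr_gray.shape)  # (4, 32, 32, 3)
```

Repeating the single gray channel three times is a convenience so the same ResNet architecture accepts both datasets without modification.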
```python
# Train model
model_gray, history_gray = run_resnet(X_tr_gray, y_tr, X_va_gray, y_va)

# Show model performance during training
plot_history(history_gray)
```

And what about the confusion matrix?
```python
# Evaluate model
loss_gray_tr, acc_gray_tr = model_gray.evaluate(X_tr_gray, y_tr)
loss_gray_va, acc_gray_va = model_gray.evaluate(X_va_gray, y_va)
loss_gray_te, acc_gray_te = model_gray.evaluate(X_te_gray, y_te)

# Report classification report and confusion matrix
plot_classification_report(X_te_gray, y_te, model_gray)
```

```
Train score: loss = 0.1118 - accuracy = 0.9619
Valid score: loss = 0.2255 - accuracy = 0.9287
Test score:  loss = 0.2407 - accuracy = 0.9220
```

Classification Performance on the Realigned and Stretched Dataset
Do the same with the realigned and stretched images.
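Like the grayscale arrays, `X_tr_stretch` and friends come from an earlier preprocessing step not shown here. As a hedged guess at the "stretch" part only, a minimal per-channel contrast stretch rescales each image channel to [0, 1]:

```python
import numpy as np

def stretch_channels(x):
    """x: (N, H, W, C) float array; min-max stretch each channel of each image."""
    lo = x.min(axis=(1, 2), keepdims=True)
    hi = x.max(axis=(1, 2), keepdims=True)
    return (x - lo) / np.maximum(hi - lo, 1e-8)  # guard against flat channels

# Random data with a deliberately narrow value range
X = np.random.rand(2, 8, 8, 3) * 0.5 + 0.2
X_stretch = stretch_channels(X)
print(X_stretch.min(), X_stretch.max())
```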
```python
# Train model
model_stretch, history_stretch = run_resnet(X_tr_stretch, y_tr, X_va_stretch, y_va)

# Show model performance during training
plot_history(history_stretch)
```

And the confusion matrix:
```python
# Evaluate model
loss_stretch_tr, acc_stretch_tr = model_stretch.evaluate(X_tr_stretch, y_tr)
loss_stretch_va, acc_stretch_va = model_stretch.evaluate(X_va_stretch, y_va)
loss_stretch_te, acc_stretch_te = model_stretch.evaluate(X_te_stretch, y_te)

# Report classification report and confusion matrix
plot_classification_report(X_te_stretch, y_te, model_stretch)
```

```
Train score: loss = 0.0229 - accuracy = 0.9921
Valid score: loss = 0.1672 - accuracy = 0.9533
Test score:  loss = 0.1975 - accuracy = 0.9491
```

Classification Performance on the PCA-Transformed Dataset
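The PCA-transformed arrays were also created in an earlier step. One plausible recipe, sketched here under that assumption, is to fit PCA on the flattened training images, project, reconstruct, and reshape back to image form:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_transform(x_fit, x_apply, n_components=64):
    """Fit PCA on x_fit, then project-and-reconstruct x_apply (N, H, W, C)."""
    n, h, w, c = x_apply.shape
    pca = PCA(n_components=n_components)
    pca.fit(x_fit.reshape(len(x_fit), -1))
    flat = pca.transform(x_apply.reshape(n, -1))
    recon = pca.inverse_transform(flat)  # back to pixel space
    return recon.reshape(n, h, w, c)

# Random data standing in for the training images
X_tr = np.random.rand(100, 8, 8, 3)
X_tr_pca = pca_transform(X_tr, X_tr, n_components=16)
print(X_tr_pca.shape)  # (100, 8, 8, 3)
```

Reconstructing back to pixel space (rather than feeding the components directly) keeps the data image-shaped, so the same ResNet can be trained on it unchanged.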
```python
# Train model
model_pca, history_pca = run_resnet(X_tr_pca, y_tr, X_va_pca, y_va)

# Show model performance during training
plot_history(history_pca)
```

And the confusion matrix:
```python
# Evaluate model
loss_pca_tr, acc_pca_tr = model_pca.evaluate(X_tr_pca, y_tr)
loss_pca_va, acc_pca_va = model_pca.evaluate(X_va_pca, y_va)
loss_pca_te, acc_pca_te = model_pca.evaluate(X_te_pca, y_te)

# Report classification report and confusion matrix
plot_classification_report(X_te_pca, y_te, model_pca)
```

```
Train score: loss = 0.0289 - accuracy = 0.9918
Valid score: loss = 0.1459 - accuracy = 0.9509
Test score:  loss = 0.1898 - accuracy = 0.9448
```

Model Comparison

Having trained all of these individual models, let's see how they relate to each other and where they differ. First, plot each model's predictions on the test set side by side to compare how the different models predict the same samples.
```python
# Compute model-specific predictions
y_pred_orig = model_orig.predict(X_te).argmax(axis=1)
y_pred_gray = model_gray.predict(X_te_gray).argmax(axis=1)
y_pred_stretch = model_stretch.predict(X_te_stretch).argmax(axis=1)
y_pred_pca = model_pca.predict(X_te_pca).argmax(axis=1)

# Aggregate all model predictions
target = y_te.ravel()
predictions = np.array([y_pred_orig, y_pred_gray, y_pred_stretch, y_pred_pca])[
    :, np.argsort(target)
]

# Plot individual model predictions
plt.figure(figsize=(20, 3))
plt.imshow(predictions, aspect="auto", interpolation="nearest", cmap="rainbow")
plt.xlabel(f"Predictions for all {predictions.shape[1]} test samples")
plt.ylabel("Model")
plt.yticks(ticks=range(4), labels=["Orig", "Gray", "Stretched", "PCA"]);
```

We can see that, except for the model trained on the original dataset, none of the models makes mistakes on the 8th target class (the red region). So our preprocessing does seem useful. Of the three approaches, the realigned and stretched dataset appears to perform best. To back up this claim, let's look at the test accuracies of our four models.
```python
# Collect accuracies
accs_te = np.array([acc_orig_te, acc_gray_te, acc_stretch_te, acc_pca_te]) * 100

# Plot accuracies
plt.figure(figsize=(8, 3))
plt.title("Test accuracy for our four models")
plt.bar(["Orig", "Gray", "Stretched", "PCA"], accs_te, alpha=0.5)
plt.hlines(accs_te[0], -0.4, 3.4, colors="black", linestyles="dotted")
plt.ylim(90, 98);
```

Model Stacking

All four models differ somewhat, so let's go one step further and train a "meta" model that takes the predictions of our four models as its input.
```python
# Compute prediction probabilities for all models and data sets
y_prob_tr_orig = model_orig.predict(X_tr)
y_prob_tr_gray = model_gray.predict(X_tr_gray)
y_prob_tr_stretch = model_stretch.predict(X_tr_stretch)
y_prob_tr_pca = model_pca.predict(X_tr_pca)

y_prob_va_orig = model_orig.predict(X_va)
y_prob_va_gray = model_gray.predict(X_va_gray)
y_prob_va_stretch = model_stretch.predict(X_va_stretch)
y_prob_va_pca = model_pca.predict(X_va_pca)

y_prob_te_orig = model_orig.predict(X_te)
y_prob_te_gray = model_gray.predict(X_te_gray)
y_prob_te_stretch = model_stretch.predict(X_te_stretch)
y_prob_te_pca = model_pca.predict(X_te_pca)

# Combine prediction probabilities into meta data sets
y_prob_tr = np.concatenate(
    [y_prob_tr_orig, y_prob_tr_gray, y_prob_tr_stretch, y_prob_tr_pca], axis=1)
y_prob_va = np.concatenate(
    [y_prob_va_orig, y_prob_va_gray, y_prob_va_stretch, y_prob_va_pca], axis=1)
y_prob_te = np.concatenate(
    [y_prob_te_orig, y_prob_te_gray, y_prob_te_stretch, y_prob_te_pca], axis=1)

# Combine training and validation datasets
y_prob_train = np.concatenate([y_prob_tr, y_prob_va], axis=0)
y_train = np.concatenate([y_tr, y_va], axis=0).ravel()
```

There are many different classification models to choose from, but to keep things short and compact, let's quickly train a multilayer perceptron classifier and compare its score against the other four models.
```python
from sklearn.neural_network import MLPClassifier

# Create MLP classifier
clf = MLPClassifier(
    hidden_layer_sizes=(32, 16),
    activation="relu",
    solver="adam",
    alpha=0.42,
    batch_size=120,
    learning_rate="adaptive",
    learning_rate_init=0.001,
    max_iter=100,
    shuffle=True,
    random_state=24,
    early_stopping=True,
    validation_fraction=0.15,
)

# Train model
clf.fit(y_prob_train, y_train)

# Compute prediction accuracy of meta classifier
acc_meta_te = np.mean(clf.predict(y_prob_te) == y_te.ravel())

# Collect accuracies
accs_te = np.array([acc_orig_te, acc_gray_te, acc_stretch_te, acc_pca_te, 0]) * 100
accs_meta = np.array([0, 0, 0, 0, acc_meta_te]) * 100

# Plot accuracies
plt.figure(figsize=(8, 3))
plt.title("Test accuracy for all five models")
plt.bar(["Orig", "Gray", "Stretched", "PCA", "Meta"], accs_te, alpha=0.5)
plt.bar(["Orig", "Gray", "Stretched", "PCA", "Meta"], accs_meta, alpha=0.5)
plt.hlines(accs_te[0], -0.4, 4.4, colors="black", linestyles="dotted")
plt.ylim(90, 98);
```

Fantastic! Training four different models on the original dataset and the three color-transformed datasets, and then using their prediction probabilities to train a new meta classifier, helped us push the initial prediction accuracy from 94% up to 96.4%!
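As a lighter-weight alternative to the stacked MLP (not part of the original article's approach), a simple majority vote over the four models' class predictions is a cheap ensemble baseline worth comparing against:

```python
import numpy as np

def majority_vote(preds):
    """preds: (n_models, n_samples) int array -> per-sample majority class.

    Ties are broken in favor of the lower class index.
    """
    n_classes = preds.max() + 1
    # Count votes per class for each sample column
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)  # (n_samples,)

# Example: 4 models, 3 test samples (stand-ins for y_pred_orig, ..., y_pred_pca)
preds = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 1, 2],
                  [0, 2, 2]])
print(majority_vote(preds))  # [0 1 2]
```

Unlike the meta classifier, a hard vote ignores each model's confidence, which is usually why probability-based stacking wins when the base models are well calibrated.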