{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Multiclass Classification - Measurement metrics\n", "\n", "Selecting the best metrics for evaluating the performance of a given...


Edit the code as follows:


1. Apply parameter optimization (for example, GridSearchCV); see the sketch after this list.


2. Discuss the results of every cell (include the discussion at the top of each cell).


* Use the same Jupyter notebook provided.


* In cell 2, replace the 'Data_Glioblastoma5Patients_SC.csv' file with the 'Data_ExpressionmRNA_9classes.csv' file.
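A minimal sketch of the requested parameter optimization, assuming a RandomForestClassifier; the parameter grid values and the `X_train`/`y_train` variables are illustrative placeholders, not taken from the original notebook:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Illustrative grid; the actual values to search are an assumption.
param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 30],
    "max_features": ["sqrt", "log2"],
}

# Stratified folds keep class proportions stable across splits,
# which matters for the imbalanced multiclass data discussed below.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid=param_grid,
    scoring="f1_macro",   # macro-F1 treats every class equally
    cv=cv,
    n_jobs=-1,
)

# X_train, y_train are assumed to come from the data-loading cell.
# search.fit(X_train, y_train)
# print(search.best_params_, search.best_score_)
```

The same pattern applies to the other classifiers in the notebook; only the estimator and the grid change.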




{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Multiclass Classification - Measurement metrics\n", "\n", "Selecting the best metrics for evaluating the performance of a given classifier on dataset is guided by a number of consideration including the class-balance and expected outcomes. One particular performance measure may evaluate a classifier from a single perspective and often fail to measure others. Consequently, there is no unified metric to select measure the generalized performance of a classifier.\n", "\n", "Two methods, micro-averaging and macro-averaging, are used to extract a single number for each of the precision, recall and other metrices across multiple classes. A macro-average calculates the metric autonomously for each class to calculate the average. In contrast, the micro-average calculates average metric from the aggregate contributions of all classes. Micro -average is used in unbalanced datasets as this method takes the frequency of each class into consideration. The micro average precision, recall, and accuracy scores are mathematically equivalent.\n", "\n", "Classification report: \n", "The classification report provides the main classification metrics on a per-class basis. a) Precision (tp / (tp + fp) ) measures the ability of a classifier to identify only the correct instances for each class. b) Recall (tp / (tp + fn) is the ability of a classifier to find all correct instances per class. c) F1 score is a weighted harmonic mean of precision and recall normalized between 0 and 1. F score of 1 indicates a perfect balance as precision and the recall are inversely related. A high F1 score is useful where both high recall and precision is important.\n", "d) Support is the number of actual occurrences of the class in the test data set. Imbalanced support in the training data may indicate the need for stratified sampling or rebalancing.\n", "\n", "Confusion Matrix: \n", "A confusion matrix shows the combination of the actual and predicted classes. Each row of the matrix represents the instances in a predicted class, while each column represents the instances in an actual class. It is a good measure of wether models can account for the overlap in class properties and to understand which classes are most easily confused.\n", "\n", "Class Prediction Error: \n", "This is a useful extension of the confusion matrix and visualizes the misclassified classes as a stacked bar. Each bar is a composite measure of predicted classes.\n", "\n", "Aggregate metrics:\n", "These provide a score for the overall performance of the classifier across the class spectrum.\n", "\n", "Cohen’s Kappa: \n", "This is one of the best metrics for evaluating multi-class classifiers on imbalanced datasets. The traditional metrics from the classification report are biased towards the majority class and assumes an identical distribution of the actual and predicted classes. In contrast, Cohen’s Kappa Statistic measures the proximity of the predicted classes to the actual classes when compared to a random classification. The output is normalized between 0 and 1 the metrics for each classifier, therefore can be directly compared across the classification task. Generally closer the score is to one, better the classifier.\n", "\n", "Cross-Entropy: \n", "Cross entropy measures the extent to which the predicted probabilities match the given data, and is useful for probabilistic classifiers such as Naïve Bayes. 
Cross-Entropy:
Cross-entropy measures how well the predicted class probabilities match the observed data and is useful for probabilistic classifiers such as Naïve Bayes. It generalizes the logarithmic loss (log loss) widely used to train neural networks and quantifies the cost of inaccurate probabilistic predictions; the classifier with the lowest log loss is preferred.

Matthews Correlation Coefficient (MCC):
MCC, originally devised for binary classification on unbalanced classes, has been extended to evaluate multiclass classifiers by computing the correlation coefficient between the observed and predicted classifications. A coefficient of +1 represents a perfect prediction, 0 is comparable to a random prediction, and −1 indicates an inverse prediction.

Cell 1 (code) — library imports:

    # libraries
    import pandas as pd
    import numpy as np
    import seaborn as sns
    from sklearn.model_selection import train_test_split
    from sklearn.model_selection import StratifiedKFold

    # Visualizers
    from yellowbrick.classifier import ClassificationReport
    from yellowbrick.classifier import ClassPredictionError
    from yellowbrick.classifier import ConfusionMatrix
    from yellowbrick.classifier import ROCAUC
    from yellowbrick.classifier import PrecisionRecallCurve
    import matplotlib.pyplot as plt

    # Metrics
    from sklearn.metrics import cohen_kappa_score
    from sklearn.metrics import hamming_loss
    from sklearn.metrics import log_loss
    from sklearn.metrics import zero_one_loss
    from sklearn.metrics import matthews_corrcoef

    # Classifiers
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn import svm
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import ExtraTreesClassifier
    from xgboost import XGBClassifier
    from sklearn.neural_network import MLPClassifier

    import warnings
    warnings.filterwarnings('ignore')

Cell 2 (code) — output (truncated): the loaded expression dataframe has shape (430, 5949); the printed head shows the first rows of the gene columns (A2M, AAAS, AAK1, AAMP, AARS, AARSD1, AASDH, ..., ZSWIM6, ZSWIM7, ZUFSP, ZW10) before the listing is cut off.
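A minimal, self-contained sketch of how the aggregate metrics above (log loss, MCC, Cohen's kappa) might be computed for one classifier; the synthetic data from make_classification is only a stand-in for the train/test split that a later cell would produce from the real dataframe:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, log_loss, matthews_corrcoef
from sklearn.model_selection import train_test_split

# Synthetic 3-class stand-in for the real expression data (assumption).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_proba = model.predict_proba(X_test)  # class-membership probabilities

# Cross-entropy / log loss: needs probabilities, lower is better.
print("log loss:", log_loss(y_test, y_proba, labels=model.classes_))

# Matthews correlation coefficient: +1 perfect, ~0 random, -1 inverse.
print("MCC:", matthews_corrcoef(y_test, y_pred))

# Cohen's kappa: chance-corrected agreement between predicted and actual classes.
print("kappa:", cohen_kappa_score(y_test, y_pred))
```

The yellowbrick visualizers imported in cell 1 (ClassificationReport, ConfusionMatrix, ClassPredictionError) follow the same pattern on an estimator: fit on the training split, score on the test split, then show the plot.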