Let’s start with a high-level overview of machine learning…
ML is one of the most exciting branches of Artificial Intelligence: it aims to give computers the ability to learn without being explicitly programmed, simulating how the human mind learns. Machine learning has become a quintessential piece of our daily technology, in many more places than one might expect.
In ML we train a model on large volumes of data so that it can make predictions and decisions on its own.
Machine learning techniques are of three types:
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
Supervised learning is the one we run into most often, so we are not going to cover unsupervised or reinforcement learning here.
Supervised learning, as the name suggests, involves a "supervisor": we train the machine on labelled data, where each example comes with the correct answer.
Supervised learning is of two types: classification and regression. We focus on classification here.
Classification: a classification problem is one where the target variable is a category. Given one or more inputs, a classification model tries to draw a conclusion from observed values and predict which category an example belongs to.
Suppose an ML model is used to detect breaches: then there are only two categories, breached or safe.
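To make the breached/safe example concrete, here is a minimal sketch of a two-class classifier. The feature ("failed login attempts") and the threshold are illustrative assumptions; a real model would learn its decision boundary from labelled data rather than use a hand-picked rule.

```python
def classify_connection(failed_logins: int, threshold: int = 3) -> str:
    """Label a connection 'breached' or 'safe' with a simple threshold rule.

    The feature and threshold here are hypothetical stand-ins for
    what a trained classifier would learn from data.
    """
    return "breached" if failed_logins > threshold else "safe"

# Classify a few hypothetical connections
labels = [classify_connection(n) for n in [0, 1, 5, 8]]
```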
Now let's dive into the main topic: the confusion matrix.
A confusion matrix is a performance measurement for machine learning classification problems. It is a simple table that shows how the classification model performs on test data for which the true values are known.
The confusion matrix helps us identify the correct predictions of a model for different individual classes as well as the errors.
Confusion matrix metrics are performance measures that help us find the accuracy of our classifier.
There are 4 main counts:
TP (true positive): the number of times the predicted positive value equals the actual positive value.
TN (true negative): the number of times the predicted negative value equals the actual negative value.
FP (false positive): the number of times our model wrongly predicts a negative value as positive.
FN (false negative): the number of times our model wrongly predicts a positive value as negative.
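The four counts can be tallied directly from paired true and predicted labels. A minimal pure-Python sketch, reusing the hypothetical breached/safe labels from the breach-detection example:

```python
def confusion_counts(y_true, y_pred, positive="breached"):
    """Count TP, TN, FP, FN for a binary classification run."""
    tp = tn = fp = fn = 0
    for truth, pred in zip(y_true, y_pred):
        if pred == positive and truth == positive:
            tp += 1          # correctly flagged attack
        elif pred != positive and truth != positive:
            tn += 1          # correctly passed normal traffic
        elif pred == positive and truth != positive:
            fp += 1          # false alarm
        else:
            fn += 1          # missed attack
    return tp, tn, fp, fn

# Hypothetical test-set labels
y_true = ["breached", "safe", "breached", "safe", "safe"]
y_pred = ["breached", "safe", "safe", "breached", "safe"]

tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)  # correct predictions / all predictions
```

With real data one would normally reach for a library routine such as scikit-learn's `confusion_matrix`, but the logic is exactly this tally.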
The most unwanted, and usually riskiest, error is the false negative; ideally we want zero false-negative predictions, especially in fields like medicine, cybersecurity, and autonomous driving.
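The metric that directly penalizes false negatives is recall: the fraction of actual positives the model catches. A short sketch (the example numbers are hypothetical):

```python
def recall(tp: int, fn: int) -> float:
    """Recall = TP / (TP + FN): the share of actual positives caught.

    Every false negative (a missed attack, a missed diagnosis)
    drags this value down, which is why safety-critical systems
    optimize for recall rather than raw accuracy.
    """
    return tp / (tp + fn)

# Hypothetical detector: 90 attacks caught, 10 missed
r = recall(90, 10)  # 0.9
```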
This data set was used for The Third International Knowledge Discovery and Data Mining Tools Competition, held in conjunction with KDD-99, The Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector: a predictive model capable of distinguishing between "bad" connections (intrusions or attacks) and "good" normal connections. The database contains a standard set of data to be audited, including a wide variety of intrusions simulated in a military network environment.
In the KDD99 dataset, these four attack categories (DoS, U2R, R2L, and probe) are divided into 22 different attack classes, tabulated below:
In the KDD Cup 99, the criterion used to evaluate participant entries was the Cost Per Test (CPT), computed from the confusion matrix and a given cost matrix. Here:
• True Positive (TP): the number of connections detected as attacks that are actually attacks.
• True Negative (TN): the number of connections detected as normal that are actually normal.
• False Positive (FP): the number of connections detected as attacks that are actually normal (false alarms).
• False Negative (FN): the number of connections detected as normal that are actually attacks.
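Cost Per Test is the average misclassification cost over the test set: each cell of the confusion matrix is weighted by the corresponding cell of the cost matrix and the total is divided by the number of examples. The 2x2 matrices below are illustrative assumptions for a binary normal/attack setup, not the actual KDD Cup 99 matrices (which cover five classes):

```python
def cost_per_test(confusion, cost):
    """CPT = (1/N) * sum over cells of confusion[i][j] * cost[i][j]."""
    n = sum(sum(row) for row in confusion)  # total number of test examples
    total = sum(c * k
                for conf_row, cost_row in zip(confusion, cost)
                for c, k in zip(conf_row, cost_row))
    return total / n

# Hypothetical confusion matrix: rows = actual (normal, attack),
# columns = predicted (normal, attack)
confusion = [[50, 5],    # 50 TN, 5 FP
             [10, 35]]   # 10 FN, 35 TP
# Hypothetical cost matrix: correct predictions cost 0,
# a false alarm costs 1, a missed attack costs 2
cost = [[0, 1],
        [2, 0]]

cpt = cost_per_test(confusion, cost)  # (5*1 + 10*2) / 100 = 0.25
```

Note how the higher cost assigned to missed attacks makes false negatives dominate the score, which matches the intuition above that false negatives are the riskiest errors in intrusion detection.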