Support Vector Machines
Here you will learn about Support Vector Machines (SVMs), one of the most widely used and most powerful classification algorithms.
Support Vector Machines are one of the most widely used classification algorithms in Machine Learning. They can also be used for regression problems, and we have already seen their implementation in the previous lessons.

If the data is linearly separable (meaning a single hyperplane can separate the classes), then the Support Vector Machine (SVM) is simple: it finds the decision boundary that is most distant from the points of both classes that lie nearest to it.

If the data is not linearly separable (nonlinear), then the Kernel Trick is used in SVM. It involves mapping the feature space from its current dimension to a higher dimension in which the classes are easily separable by a decision boundary. One of the benefits of Support Vector Machines is that they work very well on small datasets.
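The mapping idea behind the Kernel Trick can be sketched with a toy one-dimensional example (the data values here are made up): points that no single threshold on $x$ can separate become linearly separable once each point is mapped to $(x, x^2)$:

```python
import numpy as np

# 1-D points: class 1 lies on BOTH sides of class 0, so no single
# threshold on x can separate the two classes (hypothetical toy data).
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([1, 0, 0, 0, 0, 1])

# Map each point to a higher dimension: phi(x) = (x, x**2).
phi = np.column_stack([x, x**2])

# In the (x, x**2) plane the classes ARE linearly separable:
# the horizontal line x2 = 2.5 puts every class-1 point above it
# and every class-0 point below it.
separable = np.all((phi[:, 1] > 2.5) == (y == 1))
print(separable)  # True
```

Kernelized SVMs perform an equivalent mapping implicitly, without ever computing the higher-dimensional coordinates explicitly.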

The data points closest to the hyperplane, which determine its position and orientation, are called support vectors.

Because the hyperplane is determined only by the support vectors, SVM classification is relatively robust to outliers.
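As a minimal sketch of these ideas (assuming scikit-learn is installed; the toy data below is made up), a linear SVM exposes the points that determine the hyperplane through its `support_vectors_` attribute:

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable 2-D data (hypothetical values).
X = np.array([[0, 0], [1, 0], [0, 1],
              [3, 3], [4, 3], [3, 4]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the maximum-margin hyperplane between the classes.
clf = SVC(kernel="linear", C=1000.0)  # large C approximates a hard margin
clf.fit(X, y)

# Only the points nearest the boundary become support vectors.
print(clf.support_vectors_)
print(clf.predict(X))  # recovers the training labels on this easy data
```

Points far from the boundary do not appear in `support_vectors_`; moving them around (without crossing the margin) leaves the hyperplane unchanged, which is the robustness mentioned above.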
Mathematical intuition
From Logistic Regression, we know that:
$y$ is the actual label of the instance.
$\hat{y} = w_0x_0 + w_1x_1 + w_2x_2 + \dots + w_nx_n$
$\hat{y} = w^Tx$
$\hat{y}_{Logistic} = \frac{1}{1+e^{-\hat{y}}}$

If $y = 1$, we want $\hat{y}_{Logistic}\approx1$, i.e., $w^Tx \gg 0$

If $y = 0$, we want $\hat{y}_{Logistic}\approx0$, i.e., $w^Tx \ll 0$
The cost function for one instance of Logistic Regression from the previous lesson is shown below.
$Cost(\hat{y}_{Logistic}, y) = \begin{cases} -log(\hat{y}_{Logistic}) & y=1 \\ -log(1-\hat{y}_{Logistic}) & y=0 \end{cases}$
It can also be written as:
$-(y\,log(\hat{y}_{Logistic}) + (1-y)\,log(1-\hat{y}_{Logistic}))$
If y = 0
$cost_0(z) = -log(1-\frac{1}{1+e^{-z}})$
If y = 1
$cost_1(z) = -log(\frac{1}{1+e^{-z}})$
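These two cost pieces can be evaluated directly; a minimal sketch in plain Python:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost_1(z):
    # Cost contribution when the true label is y = 1.
    return -math.log(sigmoid(z))

def cost_0(z):
    # Cost contribution when the true label is y = 0.
    return -math.log(1.0 - sigmoid(z))

# A confident, correct score is cheap; a confident, wrong score is expensive.
print(round(cost_1(5.0), 4))   # 0.0067: w^T x >> 0 agrees with y = 1
print(round(cost_1(-5.0), 4))  # 5.0067: w^T x << 0 contradicts y = 1
```

By symmetry, $cost_0(-z) = cost_1(z)$: the penalty for confidently predicting the wrong class is the same regardless of which class is the true one.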
Support Vector Machine Cost Function
From the mathematical intuition above, we derive the following cost function for Support Vector Machines. It should be minimized.
$C\sum_{i=1}^{m}[y^i\,cost_1(w^Tx^i) + (1 - y^i)\,cost_0(w^Tx^i)]$
Here is the Regularized version.
$C\sum_{i=1}^{m}[y^i\,cost_1(w^Tx^i) + (1 - y^i)\,cost_0(w^Tx^i)] + \frac{1}{2}\sum_{j=1}^{n}w^{2}_j$
Notice that
$C = \frac{1}{\alpha}$

If $C$ is large, then $\alpha$ will be small, and there is a chance of overfitting. To review, overfitting means the model fits the training data well but generalizes poorly to unseen data. This situation is also known as High Variance, Low Bias.

If $C$ is small, then $\alpha$ will be large, and there is a chance of underfitting. To review, underfitting means the model does not even fit the training data well and will also perform poorly on unseen data. This situation is also known as High Bias, Low Variance.

This concept in Machine Learning is known as the Bias-Variance tradeoff, and hyperparameters ($C$, $\alpha$, and $\lambda$) are chosen in a way that makes the model perform well. The study of choosing the right hyperparameters is known as hyperparameter optimization.
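The $C = \frac{1}{\alpha}$ relationship is not just notation: scikit-learn's own `LogisticRegression` uses the same convention, with `C` defined as the inverse of the regularization strength. A small sketch (assuming scikit-learn is installed; the toy data is made up) showing that a larger `C`, meaning weaker regularization, yields larger learned weights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, roughly separable 1-D data (hypothetical values).
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# C is the inverse of the regularization strength (C = 1/alpha):
# large C -> weak regularization  -> larger weights, risk of overfitting;
# small C -> strong regularization -> smaller weights, risk of underfitting.
weak_reg = LogisticRegression(C=100.0).fit(X, y)
strong_reg = LogisticRegression(C=0.01).fit(X, y)

print(abs(weak_reg.coef_[0][0]) > abs(strong_reg.coef_[0][0]))  # True
```

The heavily regularized model is pulled toward small weights (a flatter, less confident decision function), which is exactly the High Bias, Low Variance regime described above.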
Kernel Trick
If the dataset is linearly separable, then SVM works as below. In the case below, we have three candidate hyperplanes (A, B, and C).