
Implementation of Support Vector Machines

Explore the implementation of Support Vector Machines as a powerful binary classification method. Understand how SVM works by maximizing the margin between classes and handling outliers. This lesson guides you through plotting data points, solving for hyperplane parameters, and applying SVM to predict outcomes in real datasets.


Support vector machines (SVM) are widely regarded as some of the best classifiers in supervised learning, able to analyze complex data while downplaying the influence of outliers.

Developed within the computer science community in the 1990s, SVM was originally designed as a double-barrel prediction technique capable of predicting both numeric and categorical outcomes. Today, SVM is used mostly as a classification technique for predicting categorical outcomes, similar to logistic regression.

SVM mirrors logistic regression in binary prediction scenarios: both attempt to separate classes based on the mathematical relationship between variables. Unlike logistic regression, however, SVM places the decision boundary at the maximum possible distance from the data points on either side of the partition.

Its key feature is the margin: the distance between the boundary line and the nearest data point, multiplied by two. A wide margin lets the model absorb new data points and outliers that would push a logistic regression boundary line out of position.
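To make the margin concrete, here is a minimal sketch using scikit-learn's SVC on the example dataset introduced below. The near-hard-margin setting (a very large C) and the variable names are my own assumptions, not part of the lesson.

```python
# Minimal linear-SVM sketch on the lesson's toy data (assumed usage).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, -1], [2, -1],   # positively labelled
              [4, 0], [5, 1], [5, -1], [6, 0]])   # negatively labelled
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # very large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]                 # weight vector of the separating hyperplane
b = clf.intercept_[0]            # bias term
margin = 2 / np.linalg.norm(w)   # width of the margin, 2 / ||w||
print(f"w = {w}, b = {b:.2f}, margin = {margin:.2f}")
print("support vectors:\n", clf.support_vectors_)
```

On this data, the fit recovers the vertical boundary $x_1 = 3$ with a margin of 2, and the reported support vectors are exactly the points nearest the boundary.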

Example

Given the following positively labelled data points {(1,1), (2,1), (1,-1), (2,-1)} and the following negatively labelled data points {(4,0), (5,1), (5,-1), (6,0)}:

  • Plot all the given data points (see the sketch after this example).
  • Once the points are plotted, identify the support vectors, i.e., the points from each class that lie closest to the opposing class.
...

$s_1=\binom{2}{1}$, $s_2=\binom{2}{-1}$, and $s_3=\binom{4}{0}$ ...
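Picking up the worked example, below is a hedged sketch of the classic by-hand procedure: plot the points, augment each support vector with a bias component of 1, and solve the small linear system $\sum_j \alpha_j\,(\tilde{s}_j \cdot \tilde{s}_i) = y_i$ for the multipliers. The augmentation step and all variable names are assumptions based on the standard presentation of this example, not quotes from the lesson.

```python
# Sketch of the worked example: plot the data, then solve for the hyperplane.
import numpy as np
import matplotlib.pyplot as plt

pos = np.array([[1, 1], [2, 1], [1, -1], [2, -1]])   # positively labelled
neg = np.array([[4, 0], [5, 1], [5, -1], [6, 0]])    # negatively labelled

# Step 1: plot all given data points.
plt.scatter(pos[:, 0], pos[:, 1], marker="+", label="positive")
plt.scatter(neg[:, 0], neg[:, 1], marker="o", label="negative")
plt.legend()
plt.show()

# Step 2: augment the support vectors with a bias component of 1.
s = np.array([[2, 1, 1],    # s1, positive
              [2, -1, 1],   # s2, positive
              [4, 0, 1]])   # s3, negative
y = np.array([1, 1, -1])    # target label for each support vector

# Step 3: solve  sum_j alpha_j (s_j . s_i) = y_i  for the alphas.
K = s @ s.T                      # pairwise dot products
alphas = np.linalg.solve(K, y)   # -> [ 3.25  3.25 -3.5 ]

# Step 4: the augmented weight vector is the alpha-weighted sum of vectors.
w_tilde = alphas @ s             # -> [-1.  0.  3.]  i.e. (w1, w2, b)
print("w =", w_tilde[:2], "b =", w_tilde[2])
# Separating hyperplane: -x1 + 0*x2 + 3 = 0, the vertical line x1 = 3.
```

The recovered hyperplane $x_1 = 3$ sits halfway between the closest positive points ($x_1 = 2$) and the closest negative point ($x_1 = 4$), which is exactly the maximum-margin boundary described above.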