Linear Separability
Explore the concept of linear separability in datasets and understand when a hard-margin support vector machine can perfectly classify data. Learn how transforming the feature space using polynomial expansion can make non-linearly separable data linearly separable, enabling SVMs to find optimal hyperplanes. This lesson also covers checking optimization feasibility with cvxpy and visualizing support vectors, laying a foundation for advanced SVM techniques.
In this lesson, we explore the concept of linear separability, which is key to understanding when a hard-margin SVM can perfectly classify data. A dataset is linearly separable if a single hyperplane can divide the classes without any misclassifications. However, many real-world datasets are not linearly separable in their original feature space. We’ll see how this affects the feasibility of the SVM optimization problem, and how transforming the feature space with techniques such as polynomial feature expansion can make non-linearly separable data linearly separable. This sets the foundation for understanding the kernel trick and more advanced SVM techniques.
Linear separability
If the data isn’t linearly separable, then for every possible hyperplane (w, b), at least one point ...