K-Means Clustering
In this lesson, you’ll learn about clustering algorithms, which form the core of unsupervised learning and involve grouping similar items together.
Clustering
Clustering is a well-known unsupervised learning technique. It involves forming clusters, or groups of items, such that items in the same cluster are more similar to each other than to items in other clusters. In this lesson, we will look into K-means clustering.
K-means clustering
K-means clustering, as the name suggests, looks for a fixed number of clusters ($k$) in the dataset. The mean or center of a cluster is represented by $\mu$, which is also called the cluster centroid or average point. K-means relies on the idea of similarity and dissimilarity when assigning instances to their respective clusters.
Similarity can also be thought of as proximity. It is a numerical measure of how alike two data instances are. Cosine similarity is one of the most commonly used similarity measures. For non-negative feature vectors, it takes values between 0 and 1, where a higher value indicates more similar instances. The cosine similarity between two feature vectors $x$ and $y$ is given as

$$\text{sim}(x, y) = \frac{x \cdot y}{\|x\|\,\|y\|}$$

where $\|x\|$ and $\|y\|$ are the Euclidean norms of the feature vectors $x$ and $y$, respectively. The Euclidean norm is given as

$$\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$$
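As a quick illustration, here is a minimal Python/NumPy sketch of cosine similarity; the function name and the example vectors are hypothetical and chosen only for demonstration.

```python
import numpy as np

def cosine_similarity(x, y):
    # Dot product of the vectors divided by the product of their Euclidean norms
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(cosine_similarity(x, y))  # 1.0, because y points in the same direction as x
```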
The notion of dissimilarity can be understood using the concept of distance between two points. It is a numerical measure of how far apart, or different, two data instances are. The Euclidean distance is one of the most commonly used measures of dissimilarity between instances; lower values indicate more similar instances. The Euclidean distance between two instances $x$ and $y$ is given as

$$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
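A similar sketch for the Euclidean distance, again with made-up example vectors:

```python
import numpy as np

def euclidean_distance(x, y):
    # Square root of the sum of squared coordinate-wise differences
    return np.sqrt(np.sum((x - y) ** 2))

x = np.array([1.0, 2.0])
y = np.array([4.0, 6.0])
print(euclidean_distance(x, y))  # 5.0
```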
In the case of categorical features, if two feature values are the same, their similarity is 1; otherwise, it is 0.
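This simple matching rule could be sketched as follows; the color values are hypothetical examples.

```python
def categorical_similarity(a, b):
    # 1 if the two categorical values match, 0 otherwise
    return 1 if a == b else 0

print(categorical_similarity("red", "red"))   # 1
print(categorical_similarity("red", "blue"))  # 0
```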
How does the K-means algorithm work?
In K-means clustering, we repeat the following steps iteratively (a runnable sketch follows the list).
- Randomly pick $k$ initial centroids from the dataset.
- Assign each instance to the closest centroid by computing the chosen similarity or distance measure.
- Recompute the centroids. Each centroid is the center (average) point of its cluster.
- Repeat the assignment and recomputation steps until the cluster assignments no longer change.
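The following is a minimal NumPy sketch of these steps, assuming Euclidean distance for the assignment step; the function name, the toy data, and the iteration cap are illustrative choices rather than part of the lesson.

```python
import numpy as np

def k_means(X, k, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)

    # Randomly pick k instances from the dataset as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)

    for _ in range(max_iters):
        # Assign each instance to the closest centroid (Euclidean distance)
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = distances.argmin(axis=1)

        # Stop once the assignments no longer change
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels

        # Recompute each centroid as the average point of its cluster
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)

    return centroids, labels

# Toy example: two well-separated blobs of made-up 2D points
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
centroids, labels = k_means(X, k=2)
print(centroids)
```

Running the sketch on the toy data should recover one centroid near each blob, since the assignments stop changing once every point sits closest to the centroid of its own blob.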