What are self-organizing maps (SOMs)?

Self-organizing maps (SOMs) are a type of unsupervised artificial neural network trained with a competitive learning algorithm. They map high-dimensional data onto a lower-dimensional representation, making the data easier to interpret by reducing its complexity. The model consists of two layers: an input layer and an output layer.

The structure of a simple SOM is provided below. Clusters are marked as V, inputs as X, and weights as W.

Structure of self-organizing maps

Process of selection

  1. We initialize the weights with randomly picked numbers.
  2. We select a training example at random.
  3. We select the winning vector, i.e., the one at the shortest Euclidean distance from the sample.
  4. We update the weights of the winning vector.
  5. We repeat steps 2–4 until the process of selection is completed.
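The selection steps above can be sketched as a short loop. This is a minimal illustration, assuming a hypothetical map of two units, four input features, two made-up training samples, a fixed seed, and a fixed number of iterations; none of these sizes come from the algorithm itself:

```python
import random

random.seed(0)  # illustrative seed, chosen only for reproducibility

n_units, n_features = 2, 4
# Step 1: initialize the weights with randomly picked numbers
weights = [[random.random() for _ in range(n_features)] for _ in range(n_units)]
samples = [[1, 1, 0, 0], [0, 0, 1, 1]]  # hypothetical training examples
alpha = 0.5  # learning rate

for step in range(20):
    # Step 2: select a training example at random
    x = random.choice(samples)
    # Step 3: the winning vector has the shortest Euclidean distance to the sample
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
    k = dists.index(min(dists))
    # Step 4: update the winner's weights toward the sample
    weights[k] = [wi + alpha * (xi - wi) for xi, wi in zip(x, weights[k])]
# Step 5: the loop repeats steps 2-4 until training is complete

print([[round(w, 2) for w in row] for row in weights])
```

After enough iterations, each unit's weight vector drifts toward one of the training samples, which is exactly the clustering behavior the steps describe.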

The formula for Euclidean distance is as follows:

D(j) = \sum_{i} \left(x_i^k - w_{ij}\right)^2

The weights of the winning vector are then updated as:

w_{ij}(\text{new}) = w_{ij}(\text{old}) + \alpha(t)\left(x_i^k - w_{ij}(\text{old})\right)

The learning rate at time t is denoted here by \alpha(t). The winning vector is denoted by j. The i^{th} feature and the k^{th} training example are indexed by i and k, respectively. After the completion of the process, a winning vector is selected for each sample.
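As a concrete illustration, the following sketch applies the Euclidean distance and weight update described above to a single training example, using the same initial weights and learning rate as the code example below (the variable names are illustrative):

```python
sample = [1, 1, 1, 0]                                   # one training example
weights = [[0.2, 0.6, 0.3, 0.9], [0.5, 0.2, 0.7, 0.3]]  # initial weight vectors
alpha = 0.4                                             # learning rate

# D(j): squared Euclidean distance from the sample to each weight vector
D = [sum((x - w) ** 2 for x, w in zip(sample, row)) for row in weights]
print([round(d, 2) for d in D])  # [2.1, 1.07]

# The winning vector j is the one with the shortest distance
j = D.index(min(D))
print(j)  # 1

# Update rule: w_new = w_old + alpha * (x - w_old)
weights[j] = [w + alpha * (x - w) for x, w in zip(sample, weights[j])]
print([round(w, 2) for w in weights[j]])  # [0.7, 0.52, 0.82, 0.18]
```

Each component of the winner moves 40% of the way toward the sample, which is how repeated updates pull a cluster's weights toward the examples it wins.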

Code example

The Python implementation for the SOM process is provided below:

class SOM:
    def winner(self, weights, sample):
        # Compute the squared Euclidean distance to each cluster's weights
        D0, D1 = 0, 0
        for i in range(len(sample)):
            D0 += (sample[i] - weights[0][i]) ** 2
            D1 += (sample[i] - weights[1][i]) ** 2
        # The winning vector is the one with the shortest distance
        return 0 if D0 < D1 else 1

    def update(self, weights, sample, K, alpha):
        # Move the winning vector toward the training sample
        for i in range(len(weights[K])):
            weights[K][i] += alpha * (sample[i] - weights[K][i])
        return weights

def main():
    # Initialization of weights
    weights = [[0.2, 0.6, 0.3, 0.9], [0.5, 0.2, 0.7, 0.3]]
    # Training examples
    T = [[1, 1, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 1, 0, 1]]
    ob = SOM()
    epochs, alpha = 3, 0.4
    for epoch in range(epochs):
        for sample in T:
            K = ob.winner(weights, sample)  # compute the winning vector
            weights = ob.update(weights, sample, K, alpha)  # update it
    # Classify the test sample
    s = [0, 1, 0, 0]
    K = ob.winner(weights, s)
    print(f"Test sample {s} belongs to cluster {K}", "\n")
    print("The trained weights are", *weights, sep="\n")

main()

Code explanation

  • Line 1: We create a class SOM.
  • Lines 2–9: We define the winner function, which computes the squared Euclidean distance from the sample to each cluster's weight vector and returns the index of the nearest one.
  • Lines 11–15: We define the update function, which moves each component of the winning vector toward the sample by a fraction alpha.
  • Lines 19–21: We initialize the weights and the T array of training examples.
  • Line 22: We create an object of SOM as ob.
  • Line 23: We initialize the epochs and alpha values.
  • Lines 24–27: We run a nested loop over epochs and training samples, where we compute the winning vector for each sample and then update it.
  • Lines 29–32: We classify the test sample and print the cluster it belongs to, followed by the trained weights.
  • Line 34: We call the main driver.


Copyright ©2026 Educative, Inc. All rights reserved