The **hill-climbing algorithm** is a local search algorithm used in mathematical optimization. An important property of local search algorithms is that the path to the goal does not matter, only the goal itself matters. Because of this, we do not need to worry about which path we took in order to reach a certain goal state; all that matters is that we reached it.

The basic principle behind the algorithm is moving between neighboring states, always stepping toward the neighbor with the higher value (the higher "elevation"). This greedy working principle also makes the algorithm susceptible to getting stuck at local maxima.
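To make this concrete, here is a minimal 1D sketch (an illustration, not part of the article's code) of greedy ascent stopping at a local maximum even though a higher peak exists elsewhere:

```python
# A tiny 1D landscape: the global maximum is 9 at index 6,
# but there is a smaller local peak of 5 at index 2.
landscape = [1, 3, 5, 4, 2, 6, 9, 7]

state = 1  # start on the slope of the left-hand (smaller) hill
while True:
    # Candidate neighbours: one step left or right, clipped to the array bounds.
    neighbours = [i for i in (state - 1, state + 1) if 0 <= i < len(landscape)]
    best = max(neighbours, key=lambda i: landscape[i])
    if landscape[best] <= landscape[state]:
        break  # no uphill move exists: we are at a peak
    state = best

print(state, landscape[state])  # stops at index 2 (value 5), a local maximum
```

Starting at index 1, the greedy loop climbs to index 2 and stops there, never reaching the global maximum at index 6.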

We will now look at the pseudocode for this algorithm and some visual examples in order to gain clarity on its workings.

```
HillClimbing(problem) {
    currentState = problem.startState
    goal = false
    while (!goal) {
        neighbour = highest valued successor of currentState
        if neighbour.value <= currentState.value
            goal = true
        else
            currentState = neighbour
    }
}
```

We begin with a starting state that we assign to the `currentState` variable. Following that, we perform a loop until we reach our goal state.

The objective function of our algorithm is to find the maximum-valued state or, in simpler terms, a "peak".

In order to proceed, we first find out the immediate neighbors of our current state. The way this is done is left up to the reader since the data organization of the problem may vary.
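As one possible illustration (not the only way to do it), here is how the neighbors of a cell in an m × n matrix could be enumerated by iterating over row and column offsets:

```python
# Hypothetical helper: enumerate the (up to eight) grid neighbours of a
# cell in an m x n matrix, skipping positions outside the matrix bounds.
def neighbours_of(state, shape):
    r, c = state
    rows, cols = shape
    result = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the cell itself
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                result.append((nr, nc))
    return result

print(neighbours_of((0, 0), (10, 10)))  # a corner cell has only 3 neighbours
print(len(neighbours_of((5, 5), (10, 10))))  # an interior cell has all 8
```

The offset-loop approach is equivalent to writing out each of the eight cases by hand, as the full code below does; which style you prefer is a matter of taste.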

After we have found the neighbors, we take the highest-valued `neighbour` and compare it with `currentState`.

If the neighbor's value is higher than our current state's, we move to the neighboring state; otherwise, we end the loop (since, according to the algorithm, we have found our peak).

```python
import numpy as np

def find_neighbours(state, landscape):
    neighbours = []
    dim = landscape.shape
    # left neighbour
    if state[0] != 0:
        neighbours.append((state[0] - 1, state[1]))
    # right neighbour
    if state[0] != dim[0] - 1:
        neighbours.append((state[0] + 1, state[1]))
    # top neighbour
    if state[1] != 0:
        neighbours.append((state[0], state[1] - 1))
    # bottom neighbour
    if state[1] != dim[1] - 1:
        neighbours.append((state[0], state[1] + 1))
    # top left
    if state[0] != 0 and state[1] != 0:
        neighbours.append((state[0] - 1, state[1] - 1))
    # bottom left
    if state[0] != 0 and state[1] != dim[1] - 1:
        neighbours.append((state[0] - 1, state[1] + 1))
    # top right
    if state[0] != dim[0] - 1 and state[1] != 0:
        neighbours.append((state[0] + 1, state[1] - 1))
    # bottom right
    if state[0] != dim[0] - 1 and state[1] != dim[1] - 1:
        neighbours.append((state[0] + 1, state[1] + 1))
    return neighbours

# Current optimization objective: local/global maximum
def hill_climb(curr_state, landscape):
    neighbours = find_neighbours(curr_state, landscape)
    ascended = False
    next_state = curr_state
    # Find the neighbour with the greatest value
    for neighbour in neighbours:
        if landscape[neighbour[0]][neighbour[1]] > landscape[next_state[0]][next_state[1]]:
            next_state = neighbour
            ascended = True
    return ascended, next_state

def main():
    landscape = np.random.randint(1, high=50, size=(10, 10))
    print(landscape)
    start_state = (3, 6)  # matrix index coordinates
    current_state = start_state
    count = 1
    ascending = True
    while ascending:
        print("\nStep #", count)
        print("Current state coordinates: ", current_state)
        print("Current state value: ", landscape[current_state[0]][current_state[1]])
        count += 1
        ascending, current_state = hill_climb(current_state, landscape)
    print("\nStep #", count)
    print("Optimization objective reached.")
    print("Final state coordinates: ", current_state)
    print("Final state value: ", landscape[current_state[0]][current_state[1]])

main()
```

Please try and fiddle with the given code! A few things to try out are:

- Change the coordinates of the start state.
- Modify the optimization objective of the algorithm (maybe try and find the local or global minimum instead of maximum as a start)
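As a starting point for the second exercise, here is one hedged sketch of a descent variant: the same greedy loop, but with the comparison flipped so we move to the lowest neighbor instead of the highest (it uses 4-connected neighbors for brevity; the function name `hill_descend` is ours, not from the article):

```python
import numpy as np

# Descent variant: identical in structure to hill_climb, but the comparison
# is flipped (<) so each step moves to the LOWEST neighbouring value.
def hill_descend(state, landscape):
    rows, cols = landscape.shape
    moved = False
    best = state
    # Check the four orthogonal neighbours, clipped to the matrix bounds.
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = state[0] + dr, state[1] + dc
        if 0 <= r < rows and 0 <= c < cols:
            if landscape[r][c] < landscape[best[0]][best[1]]:
                best = (r, c)
                moved = True
    return moved, best

landscape = np.array([[9, 8, 7],
                      [6, 5, 4],
                      [3, 2, 1]])
state = (0, 0)
moving = True
while moving:
    moving, state = hill_descend(state, landscape)
print(state)  # (2, 2), the minimum of this landscape
```

On this small example the landscape decreases monotonically toward the bottom-right corner, so the descent reaches the global minimum; on a random landscape it can get stuck at a local minimum, just as the ascent version gets stuck at local maxima.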

The hill-climbing algorithm shown above is just one variant, called *steepest-ascent hill climbing*.

We may also change the objective function to find the local minimum and make the algorithm descend instead of climb.

Other variants that prioritize exploring the state space also exist, such as *stochastic hill climbing* and *random-restart hill climbing*.
