# Understanding Genetic Algorithms

Learn what genetic algorithms are and how they relate to the process of optimization.


## The process of optimization

In a world of competition, people are always searching for the best: the best job, the best diet, the best financial plan, and so on. Unfortunately, with so many options, it’s impossible to make the best decisions all the time. Fortunately, humans have evolved to navigate the complexity of everyday life and make informed decisions that ultimately lead to success.

While the brain is naturally wired to make informed decisions, computers are not. Computers are naive — they can only do what we program them to do. So, how do we program a computer to make informed decisions, and why is this even necessary?

Consider this example: we are tasked with designing the shipping route for a large shipping company. We are given a list of fifteen cities, and our job is to pick the shortest route through them to save the company money on gas and travel expenses. At first, we might think it’s best to calculate every possible path between the cities since there are only fifteen. Unfortunately, the number of possible paths is 15! = 1,307,674,368,000, which is over 1.3 *trillion*. This is an example of the **traveling salesman** problem, in which the goal is to find the shortest route that visits every city in a given set.

The number of possible paths grows at a **factorial** rate. The factorial of a number n, written n!, is the product of every positive integer from 1 to n. In the shipping example with fifteen cities, we can calculate the number of paths by multiplying every integer from 1 to 15.
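The factorial explosion is easy to see with a few lines of Python (the city counts 5, 10, and 15 below are just illustrative checkpoints):

```python
# Each additional city multiplies the number of possible paths,
# so the count grows factorially with the number of cities.
import math

for n in (5, 10, 15):
    print(f"{n} cities -> {math.factorial(n):,} possible paths")
```

At five cities there are only 120 paths; at fifteen the count has already passed a trillion.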

Nobody has enough time to calculate the distance of over a trillion paths, so we need a different approach. We could pick a random starting city and travel to the nearest unvisited city after every stop. This strategy might produce the shortest path, but there is no guarantee. We could even build one such path starting from each city and choose the shortest of those, which would mean calculating the distance of only fifteen paths. Unfortunately, experimenting with different strategies is still time-consuming, and without a calculated approach, we might miss the best strategy.
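The "travel to the next closest city" idea is known as the nearest-neighbor heuristic, and it can be sketched in a few lines. The four cities and their coordinates below are made up purely for illustration:

```python
# Nearest-neighbor heuristic: from each city, always travel to the
# closest unvisited city. The coordinates here are hypothetical.
import math

cities = {"A": (0, 0), "B": (3, 4), "C": (6, 0), "D": (3, 1)}

def distance(p, q):
    return math.dist(p, q)

def nearest_neighbor_route(start):
    route, remaining = [start], set(cities) - {start}
    while remaining:
        here = cities[route[-1]]
        nxt = min(remaining, key=lambda c: distance(here, cities[c]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

def route_length(route):
    return sum(distance(cities[a], cities[b]) for a, b in zip(route, route[1:]))

# Try every starting city and keep the shortest greedy route.
best = min((nearest_neighbor_route(c) for c in cities), key=route_length)
print(best, round(route_length(best), 2))
```

With fifteen cities this computes only fifteen candidate routes instead of over a trillion, though nothing guarantees any of them is the true shortest path.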

So how can we make the best decisions and teach a computer to do the same?

The answer is **optimization**. Optimization is the practice of making the best possible decisions in a given situation. We can think of optimization as the search for the best. Humans are great at optimizing; it’s natural for us to find and make the best decisions for ourselves. Computers can be great at optimizing too; they just need a little help.

Optimization algorithms are techniques for solving optimization problems: problems where the goal is to find the best of something. We know that an algorithm is a series of instructions, so we can define an optimization algorithm as a series of instructions for finding the best solution to a problem. While there are countless optimization algorithms, one of the oldest and most common is the genetic algorithm.

## What are genetic algorithms?

Genetic algorithms are a class of optimization algorithms based on evolution and natural selection. They use strategies loosely based on genetics and biology to produce optimal, or near-optimal, solutions to complicated problems. Initially conceived in the 1960s, genetic algorithms were intended simply as a technique for creating adaptable programs. Today, genetic algorithms are used in numerous applications in fields like artificial intelligence and finance. They’re great at solving difficult optimization problems and lend themselves nicely to parallel computing and distributed architectures. They can even yield solutions to the shipping problem described earlier.
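To make those genetics-inspired strategies concrete, here is a minimal sketch of the loop most genetic algorithms share. Everything in it is illustrative rather than taken from this lesson: the toy problem is evolving a bitstring toward all ones (often called OneMax), and the population size, mutation rate, and selection scheme are arbitrary choices:

```python
# A minimal genetic-algorithm loop on a toy problem: evolve bitstrings
# toward all ones. All parameters below are illustrative.
import random

random.seed(0)  # fixed seed so the run is reproducible

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(chromosome):
    return sum(chromosome)          # count of 1s; higher is better

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]        # single-point crossover

def mutate(chromosome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in chromosome]

# Start from a random population of candidate solutions.
population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Reproduction: crossover plus mutation refills the population.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), "/", LENGTH)
```

Each step borrows its name from biology: fitness scores individuals, selection favors the fitter ones, crossover recombines parents, and mutation injects random variation.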

### The first genetic algorithm

The first genetic algorithm was introduced by John Holland at the University of Michigan in the 1960s. However, evolutionary algorithms had been around long before that. Early artificial intelligence researchers believed evolution was the key to creating truly intelligent programs. Today, the field of evolutionary computation has many, somewhat loosely defined, branches of research, such as evolution strategies, genetic programming, and genetic algorithms.

At their core, optimization problems are **search problems**. Search problems require us to navigate an area, like a maze, to find an objective, like the end of the maze. Optimization problems are essentially the same, except that there are many candidate solutions and we want the best one. Imagine a maze with multiple exits. Our goal is to exit the maze as quickly as possible. In other words, we want to find the shortest path to any of the maze exits.

There are two basic approaches for solving search problems: brute-force search and informed search.

Understanding the difference between brute-force search and informed search makes clear why optimization techniques like genetic algorithms are so useful.
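A brute-force search is easy to write for a tiny instance, and writing one makes the contrast vivid: it simply measures every possible ordering. The four-city layout below is hypothetical:

```python
# Brute-force search: measure every possible ordering and keep the shortest.
# With n cities this examines n! routes, which is why it breaks down
# long before fifteen cities.
import itertools
import math

cities = {"A": (0, 0), "B": (3, 4), "C": (6, 0), "D": (3, 1)}  # made-up coordinates

def route_length(route):
    return sum(math.dist(cities[a], cities[b]) for a, b in zip(route, route[1:]))

best = min(itertools.permutations(cities), key=route_length)
print(best, round(route_length(best), 2))
```

Four cities means only 24 permutations, so this finishes instantly; the same loop over fifteen cities would have to examine over a trillion permutations, which is exactly where informed search takes over.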