# Introduction to Computation

Get an introduction to the concepts of computation, abstract machines, and languages.

Computer science predates computers. The theoretical foundations of computation were laid long before the first computer was built. (Babbage's Analytical Engine of 1837 is considered the first manifestation of a design with the computational power of general-purpose computers. The idea of an algorithm, which is the essence of computer science, is centuries old, and many automated mechanical processes got their start during the first Industrial Revolution. Here, though, we are mainly interested in more abstract ideas, as in software.) What we now call the theory of computation was developed in the mid-twentieth century by researchers seeking a systematic method of solving all mathematical problems. Their efforts led to the design of digital computers and, equally importantly, revealed what automatic computation cannot do. But first, let's be clear on exactly what we mean by computation.

## What is computation?

A computation is the execution of a self-contained procedure that processes input, unaided by outside intervention once it starts, and, if all goes well, eventually halts, yielding output that corresponds to the input given. For example, running a compiler on source code as input and receiving an executable file as output is a computation. A computation that always halts is called an algorithm.
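To make this concrete, here is a minimal sketch in Python (the function name and the parity task are illustrative, not from the source): a self-contained procedure that reads its input, runs without outside intervention, and halts with an output that corresponds to the input. Because it halts on every input, it is an algorithm.

```python
def parity(bits: str) -> str:
    """A self-contained procedure: it reads the input string once,
    needs no outside intervention, and always halts."""
    ones = 0
    for symbol in bits:
        if symbol == "1":
            ones += 1
    # The output corresponds to the input: the count of 1s is even or odd.
    return "even" if ones % 2 == 0 else "odd"

print(parity("10110"))  # three 1s -> "odd"
```

The loop visits each symbol exactly once, so the procedure is guaranteed to halt no matter what input it is given.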

*Figure: Computation as a black box*

## Abstract machines and languages

For our purposes, a computation invokes a procedure representing some underlying mathematical model—a computing mechanism that we call an *abstract machine*: a theoretical model of a computing system. (For example, the model for a simple calculator is the basic operations of arithmetic. The abstract machines we cover in this course are various types of finite automata.) A computation also requires a symbolic notation, or *language*, to express it. A language is a set of strings with some inherent structure.
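Since a language is just a set of strings, membership in a language can be checked by a procedure. The following sketch (a hypothetical example, not from the source) tests membership in a simple language over the alphabet {0, 1}: the nonempty binary strings that end in 0.

```python
def in_language(s: str) -> bool:
    """Membership test for a simple language over the alphabet {0, 1}:
    the set of nonempty binary strings ending in 0."""
    return len(s) > 0 and set(s) <= {"0", "1"} and s.endswith("0")

# A language is a set of strings; some are members, some are not.
print(in_language("10"))   # True
print(in_language("101"))  # False
```

Deciding whether a string belongs to a language is the core question the abstract machines in this course are built to answer.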

In computer programming, we need to agree on which strings of symbols are valid for describing programs and data. Source code is just a string of symbols fed into a compiler or interpreter and needs to conform to the syntax of the programming language used. Ultimately, input, output, and procedures appear as streams of symbols from some alphabet (denoted by ${\Sigma}$). For many programming languages, the alphabet is the set of Unicode characters. For a computer, it is the two-symbol alphabet $\{0,1\}$. The fact that we think of the symbols $0$ and $1$ as numbers is immaterial. We are not concerned in this course about the notion of data types; we just process strings, usually one symbol at a time. With this simple approach, one can grasp all that is possible with automatic computation. Types are abstractions built upon this elementary foundation.