Quantum Computing

Learn what tasks quantum computers are good at and the definitions of superposition, interference, and entanglement.

What is quantum computing?

Quantum computing is a different form of computation. It uses three fundamental properties of quantum physics: superposition, interference, and entanglement.

Superposition

Superposition refers to the quantum phenomenon where a quantum system can exist in multiple states concurrently.

Strictly speaking, though, the quantum system does not exist in multiple states concurrently. It exists in a complex linear combination of a state 0 and a state 1. It is a different kind of combination that is neither “or” nor “and.” We will explore this state in depth in this course.
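
As a small illustration (a NumPy sketch; the variable names are ours), a qubit state can be written as a normalized vector of two complex amplitudes, and measurement probabilities come from the squared magnitudes of those amplitudes:

```python
import numpy as np

# A qubit state is a complex linear combination a|0> + b|1>,
# stored as a length-2 vector of amplitudes with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition: neither "0 or 1" nor "0 and 1",
# but a linear combination of both basis states.
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared amplitude magnitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5]
```

Upon measurement, this state yields 0 or 1 with equal probability, even though before measurement it is neither.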

Interference

Quantum interference is what allows us to bias quantum systems toward the desired state. The idea is to create a pattern of interference where the paths leading to wrong answers interfere destructively and cancel out, but the paths leading to the correct answer reinforce each other.
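
A minimal way to see interference at work (a NumPy sketch, not a full simulator) is to apply the Hadamard gate twice: the first application creates an equal superposition, and the second makes the paths leading to 1 cancel destructively, returning the system to 0 with certainty:

```python
import numpy as np

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)

once = H @ ket0    # amplitudes [0.707..., 0.707...]
twice = H @ once   # the two paths to |1> carry opposite signs and cancel

print(np.round(np.abs(twice) ** 2, 3))  # [1. 0.] -- back to |0> with certainty
```

The amplitudes leading to 1 arrive with opposite signs and cancel, while those leading to 0 reinforce each other: exactly the pattern quantum algorithms try to engineer for their correct answers.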

Entanglement

Entanglement is a robust correlation between quantum particles. Entangled particles remain perfectly correlated even if they are separated by great distances.
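
We can sketch this correlation with a small state-vector simulation (NumPy; the seed and sample size are arbitrary). Preparing the Bell state and sampling measurements, the two bits always agree:

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2): Hadamard on qubit 0, then CNOT.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = CNOT @ np.kron(H, I) @ np.array([1, 0, 0, 0], dtype=complex)

# Sample measurements: outcomes 00, 01, 10, 11 with prob |amplitude|^2.
rng = np.random.default_rng(0)
outcomes = rng.choice(4, size=1000, p=np.abs(state) ** 2)

# Only 00 (index 0) and 11 (index 3) ever occur: the bits are perfectly
# correlated, no matter how far apart the qubits are taken.
assert set(outcomes) <= {0, 3}
```

Each individual measurement is random, yet the two bits never disagree.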

How does machine learning work?

There are myriad machine learning algorithms out there, but every one of them has three components:

  • The representation describes the inner architecture the algorithm uses to represent the knowledge. It may consist of rules, instances, decision trees, support vector machines, neural networks, and so on.

  • The evaluation is a function used to evaluate candidate algorithm parameterizations. Examples include accuracy, precision and recall, squared error, posterior probability, cost, margin, entropy, and more.

  • The optimization describes the way candidate algorithm parameterizations are generated. It is also known as the search process; examples include combinatorial, convex, and constrained optimization.

The first step of machine learning is the development of the architecture. The architecture specifies the parameters whose values hold the representation of the knowledge. This step determines how suited the solution will be to solve a specific problem. More parameters are not always better. For example, if a linear function can solve our problem, trying to solve it with a solution that consists of millions of parameters will likely fail. On the other hand, an architecture with very few parameters may be insufficient to solve complex problems such as natural language understanding.

Once we settle on the architecture to represent the knowledge, we train the machine learning algorithm with examples. The more parameters the architecture has, the more examples we need. Next, the algorithm tries to predict the label of each instance. Finally, we use the evaluation function to measure how well the algorithm has performed.

The optimizer then adjusts the parameters of the representation toward values that promise better performance with respect to the measured evaluation. It may even involve changing the architecture of the representation.

Learning does not happen in giant leaps. Instead, it takes tiny steps. Depending on the complexity of the problem, it takes many iterations of this general process before the machine can reliably put the correct label on an instance.
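
As a toy illustration of this loop (the data, model, and learning rate here are ours), take a linear model as the representation, squared error as the evaluation, and gradient descent as the optimization:

```python
import numpy as np

# Representation: a linear model y = w*x + b (two parameters).
# Evaluation: mean squared error.
# Optimization: gradient descent, taken in tiny steps.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0          # toy data generated with w=2, b=1

w, b = 0.0, 0.0            # initial guess for the parameters
lr = 0.05                  # learning rate: the size of each tiny step
for _ in range(2000):
    pred = w * x + b       # predict the label of each instance
    err = pred - y         # evaluate: how far off are we?
    # Optimize: nudge the parameters along the error gradient.
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 3), round(b, 3))  # close to 2.0 and 1.0
```

Each pass through the loop is one of the "tiny steps": predict, evaluate, adjust, and repeat until the parameters settle near the values that generated the data.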

What tasks are quantum computers good at?

The world of quantum mechanics is different from the physics we experience in our everyday situations, and so is the world of quantum computing from classical (digital) computing.

What makes quantum computing so powerful isn’t its processing speed, which is quite slow, or its memory, which is only a few quantum bits.

What makes quantum computing so powerful is the algorithms it makes possible because these algorithms exhibit complexity characteristics that are different from their classical equivalents. To understand what that means, let’s have a brief look at complexity theory. Complexity theory is the study of the computational effort required to run an algorithm.

For instance, the computational effort of addition is \mathcal{O}(n). This means that the effort of adding two numbers increases linearly with the size, in digits, of the numbers. The computational effort of multiplication is \mathcal{O}(n^2). The effort increases with the square of the number size. These algorithms are said to be solvable in polynomial time.
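
We can make these digit counts concrete by instrumenting the schoolbook algorithms (a sketch; the helper names are ours):

```python
def to_digits(n):
    """Little-endian decimal digits of a nonnegative integer."""
    return [int(d) for d in str(n)[::-1]]

def schoolbook_add(a, b):
    """Schoolbook addition; returns (digits, number of digit operations)."""
    ops, carry, out = 0, 0, []
    for da, db in zip(a, b):
        ops += 1                      # one digit-plus-carry step per column
        s = da + db + carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return out, ops

def schoolbook_mul(a, b):
    """Schoolbook multiplication; every digit pair is combined once."""
    ops = 0
    out = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            ops += 1                  # one digit-times-digit step per pair
            s = out[i + j] + da * db + carry
            out[i + j] = s % 10
            carry = s // 10
        out[i + len(b)] += carry
    return out, ops

# Operation counts grow linearly for addition, quadratically for multiplication.
for n in (10, 100):
    a = to_digits(10 ** n - 1)        # an n-digit number: 99...9
    _, add_ops = schoolbook_add(a, a)
    _, mul_ops = schoolbook_mul(a, a)
    print(n, add_ops, mul_ops)        # prints n, n, and n*n
```

Adding two n-digit numbers touches each column once, while long multiplication pairs every digit of one factor with every digit of the other, hence n versus n² operations.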

But these problems are comparably simple. For example, the best known algorithm for solving the problem of factorization, finding the prime factors of an n-digit number, is \mathcal{O}(e^{n^{1/3}}). This means that the effort increases exponentially in the cube root of the number of digits.

We should not underestimate the difference between \mathcal{O}(n^2) and \mathcal{O}(e^{n^{1/3}}) complexity. While our smartphone can multiply numbers with 800 digits in a few seconds, the factorization of such numbers takes about 2,000 years on a supercomputer.
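
Big-O notation hides constant factors, so small inputs can be misleading; still, a quick computation (a sketch) shows how the exponential term eventually dwarfs the polynomial one as n grows:

```python
import math

# Compare the two growth rates at increasing n. Big-O hides constants,
# but e^(n^(1/3)) eventually overwhelms n^2 no matter the constants.
for n in (100, 10_000, 1_000_000):
    poly = n ** 2
    expo = math.exp(n ** (1 / 3))
    print(f"n={n}: n^2={poly:.3g}, e^(n^(1/3))={expo:.3g}")
```

For small n the polynomial is larger, but by n = 10,000 the exponential term has already overtaken it, and the gap then widens explosively.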

A proper quantum algorithm, such as Shor’s algorithm, can use superposition to evaluate all possible factors of a number simultaneously. Rather than calculating the result directly, it uses interference to combine all possible answers in a way that yields the correct answer. This algorithm solves the factorization problem with \mathcal{O}\left((\log n)^2 (\log \log n) (\log \log \log n)\right) complexity. This is polynomial complexity, just like multiplication.
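
The number-theoretic reduction at the heart of Shor's algorithm, from factoring N to finding the period of a^x mod N, can be sketched classically (the quantum computer finds the period with superposition and interference; here we find it by brute force, so this sketch gains no speedup):

```python
from math import gcd

def factor_via_period(N, a):
    """Reduce factoring to period finding, as in Shor's algorithm.

    Returns a nontrivial factor of N, or None if this choice of a fails.
    """
    if gcd(a, N) != 1:
        return gcd(a, N)              # lucky guess already shares a factor
    # Find the smallest r > 0 with a^r = 1 (mod N). This is the step
    # a quantum computer performs with superposition and interference.
    r, val = 1, a % N
    while val != 1:
        val = (val * a) % N
        r += 1
    if r % 2 == 1:
        return None                   # odd period: try another a
    candidate = gcd(pow(a, r // 2) - 1, N)
    return candidate if 1 < candidate < N else None

print(factor_via_period(15, 7))  # → 3, since 7 has period 4 mod 15
```

Classically, finding the period r takes exponentially many multiplications in the worst case; the quantum speedup comes entirely from performing that one step efficiently.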

Quantum computing is powerful because it promises to solve certain types of mathematical calculations with reduced complexity.