
Multiplication Algorithms

Understand the historical development and foundational concepts of multiplication algorithms. Learn how to implement the lattice method and peasant multiplication in C++, and explore classical compass-and-straightedge constructions. This lesson enhances your ability to multiply large numbers efficiently through step-by-step algorithmic approaches.

Historical roots of algorithms

Although they have been a topic of formal academic study for only a few decades, algorithms have been with us since the dawn of civilization. Descriptions of step-by-step arithmetic computation are among the earliest examples of written human language, long predating the expositions by Fibonacci and al-Khwarizmi, or even the place value notation they popularized.

Lattice multiplication

The most familiar method for multiplying large numbers, at least for American students, is the lattice algorithm. This algorithm was popularized in Liber Abaci by Fibonacci, who learned it from Arabic sources including al-Khwarizmi, who in turn learned it from Indian sources including the 7th-century treatise Brahmasphutasiddhanta by Brahmagupta, who may have learned it in turn from Chinese sources. The oldest surviving descriptions of the algorithm appear in The Mathematical Classic of Sunzi, written in China between the 3rd and 5th centuries, and in Eutocius of Ascalon’s commentaries on Archimedes’ Measurement of the Circle, written around 500 CE. However, there is evidence that the algorithm was known much earlier. Eutocius credits the method to Okytokion (Ὠκυτόκιον), a lost treatise of Apollonius of Perga, who lived around 300 BCE. The Sumerians recorded multiplication tables on clay tablets as early as 2600 BCE, suggesting that they may have used the lattice algorithm.

The lattice algorithm assumes that the input numbers are represented as explicit strings of digits. Here, we’ll assume that we’re working in base ten, but the algorithm generalizes immediately to any other base. To simplify notation, the input consists of a pair of arrays X[0 .. m − 1] and Y[0 .. n − 1], representing the numbers

x = \sum_{i=0}^{m-1} X[i] \cdot 10^i \quad \text{and} \quad y = \sum_{j=0}^{n-1} Y[j] \cdot 10^j,

and similarly, the output consists of a single array Z[0 .. m + n − 1], representing the product

z = x \cdot y = \sum_{k=0}^{m+n-1} Z[k] \cdot 10^k.

The algorithm uses addition and single-digit multiplication as primitive operations. Addition can be performed using a simple for loop. In practice, single-digit multiplication is performed using a lookup table, either carved into clay tablets, painted on strips of wood or bamboo, written on paper, stored in read-only memory, or memorized by the computator. The entire lattice algorithm can be summarized by the formula

x.y=i=0m1j=0n1(X[i].Y[j].10i+j)x.y = \sum_{i=0}^{m-1}\sum_{j=0}^{n-1}(X[i].Y[j].10^{i+j})

Different variants of the lattice algorithm evaluate the partial products X[i] · Y[j] · 10^{i+j} in different orders and use different strategies for computing their sum. For example, in Liber Abaci, Fibonacci describes a variant that considers the mn partial products in increasing order of significance, as shown in modern pseudocode below.

Algorithm


FibonacciMultiply(X[0 .. m − 1], Y[0 .. n − 1]):
  hold ← 0
  for k ← 0 to n + m − 1
    for all i and j such that i + j = k
      hold ← hold + X[i] · Y[j]
    Z[k] ← hold mod 10
    hold ← ⌊hold/10⌋
  return Z[0 .. m + n − 1]


Implementation

C++
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// Multiply two numbers given as little-endian digit arrays:
// X[i] is the coefficient of 10^i.
vector<int> FibonacciMultiply(const vector<int>& X, const vector<int>& Y)
{
    int m = X.size();
    int n = Y.size();
    vector<int> Z(m + n, 0);
    int hold = 0;
    for (int k = 0; k < m + n - 1; k++)
    {
        // Add every partial product X[i] * Y[j] with i + j = k
        for (int i = 0; i < m; i++)
        {
            int j = k - i;
            if (j >= 0 && j < n)
            {
                hold += X[i] * Y[j];
            }
        }
        Z[k] = hold % 10;  // keep the last digit of the running sum
        hold /= 10;        // carry the rest to the next position
    }
    Z[m + n - 1] = hold;   // final carry
    // Remove leading zeros, which sit at the back of the little-endian array
    while (Z.size() > 1 && Z.back() == 0)
    {
        Z.pop_back();
    }
    // Reverse so the digits print most significant first
    reverse(Z.begin(), Z.end());
    return Z;
}

int main()
{
    vector<int> X = { 1, 2, 3 };  // digits of 321, least significant first
    vector<int> Y = { 4, 5 };     // digits of 54, least significant first
    vector<int> Z = FibonacciMultiply(X, Y);
    cout << "The product of the two inputs is: ";
    for (size_t i = 0; i < Z.size(); i++)
    {
        cout << Z[i];
    }
    cout << endl;  // prints 17334 = 321 * 54
    return 0;
}

Explanation

  • We compute the sizes of the input digit arrays X and Y, store them in the variables m and n, and create a result vector Z of size m + n with all elements initialized to 0.

  • For each digit position k of the result, the inner loop considers every pair of indices i and j with i + j = k and adds the partial product X[i] · Y[j] to the running total hold.

  • Once all partial products for position k have been accumulated, the last digit of hold is stored in Z[k], and the remaining digits are carried over to the next iteration.

  • After the main loop, the final carry is stored in Z[m + n − 1]. Any leading zeros, which sit at the back of the little-endian array, are removed by calling Z.pop_back() until the last element is nonzero, and the vector is reversed so that the digits appear most significant first.

Fibonacci’s algorithm is often executed by storing all the partial products in a two-dimensional table (often called a tableau or grate or lattice) and then summing along the diagonals with appropriate carries. American elementary-school students are taught to multiply one factor (the multiplicand) by each digit in the other factor (the multiplier), writing down all the multiplicand-by-digit products before adding them up. This was also the method described by Eutocius, although he fittingly considered the multiplier digits from left to right. Both of these variants (and several others) are described and illustrated side by side in the anonymous 1478 textbook L’Arte dell’Abbaco, also known as the Treviso Arithmetic, the first printed mathematics book in the West.

Storing all the partial products in a two-dimensional table

Multiplier digits from left to right

All of these variants of the lattice algorithm, and other similar variants described by Sunzi, al-Khwarizmi, Fibonacci, L’Arte dell’Abbaco, and many other sources, compute the product of any m-digit number and any n-digit number in O(mn) time. The running time of every variant is dominated by the number of single-digit multiplications.

Duplation and mediation

The lattice algorithm isn’t the oldest multiplication algorithm for which we have direct recorded evidence. An even older, and arguably simpler, algorithm, which doesn’t rely on place-value notation, is sometimes called Russian peasant multiplication, Ethiopian peasant multiplication, or just peasant multiplication. A variant of this algorithm was copied into the Rhind papyrus by the Egyptian scribe Ahmes around 1650 BCE, from a document he claimed was (then) about 350 years old. This algorithm was still taught in elementary schools in Eastern Europe in the late 20th century; it was also commonly used by early digital computers that didn’t implement integer multiplication directly in hardware.

The peasant multiplication algorithm reduces the difficult task of multiplying arbitrary numbers to a sequence of four simpler operations: (1) determining parity (even or odd), (2) addition, (3) duplation (doubling a number), and (4) mediation (halving a number, rounding down).

Algorithm


PeasantMultiply(x, y):
  prod ← 0
  while x > 0
    if x is odd
      prod ← prod + y
    x ← ⌊x/2⌋
    y ← y + y
  return prod

Implementation

C++
#include <iostream>
using namespace std;

// Multiply x and y using only parity tests, halving, doubling, and addition.
int PeasantMultiply(int x, int y)
{
    int prod = 0;
    while (x > 0)
    {
        if (x % 2 == 1)  // x is odd: add the current y to the product
        {
            prod += y;
        }
        x /= 2;          // mediation: integer division already rounds down
        y = y + y;       // duplation: double y
    }
    return prod;
}

int main()
{
    int x = 7;
    int y = 9;
    int prod = PeasantMultiply(x, y);
    cout << x << " * " << y << " = " << prod << endl;  // 7 * 9 = 63
    return 0;
}

Explanation

  • The PeasantMultiply function takes two integer arguments x and y and computes their product using only the four primitive operations listed above: parity tests, addition, duplation, and mediation.

  • On each iteration of the loop, if x is odd, the value of y is added to prod. The value of x is then halved (integer division of non-negative values already rounds down, so no floor call is needed), and the value of y is doubled by adding y to itself. The loop ends when x reaches 0, at which point prod holds the product.

Multiplication by duplation and mediation

The correctness of this algorithm follows by induction from the following recursive identity, which holds for all non-negative integers x and y:

x \cdot y = \begin{cases} 0 & \text{if } x = 0 \\ \lfloor x/2 \rfloor \cdot (y+y) & \text{if } x \text{ is even} \\ \lfloor x/2 \rfloor \cdot (y+y) + y & \text{if } x \text{ is odd} \end{cases}

Arguably, this recurrence is the peasant multiplication algorithm. Don’t let the iterative pseudocode fool you; the algorithm is fundamentally recursive!

As stated, PeasantMultiply performs O(log x) parity, addition, and mediation operations. However, we can improve this bound to O(log min{x, y}) by swapping the two arguments when x > y. Assuming the numbers are represented using any reasonable place-value notation (like binary, decimal, Babylonian sexagesimal, Egyptian duodecimal, Roman numeral, Chinese counting rods, bead positions on an abacus, and so on), each operation requires at most O(log(xy)) = O(log max{x, y}) single-digit operations. So, the overall running time of the algorithm is O(log min{x, y} · log max{x, y}) = O(log x · log y).

In other words, this algorithm requires O(mn)O(mn) time to multiply an m-digit number by an n-digit number; up to constant factors, this is the same running time as the lattice algorithm. This algorithm requires (a constant factor!) more paperwork to execute by hand than the lattice algorithm, but the necessary primitive operations are arguably easier for humans to perform. In fact, the two algorithms are equivalent when numbers are represented in binary.

Compass and straightedge

Classical Greek geometers identified numbers (or, more accurately, magnitudes) with line segments of the appropriate length, which they manipulated using two simple mechanical tools—the compass and the straightedge—versions of which had already been in common use by surveyors, architects, and other artisans for centuries. Using only these two tools, these scholars reduced several complex geometric constructions to the following primitive operations, starting with one or more identified reference points.

  • Draw the unique line passing through two distinct identified points.
  • Draw the unique circle centered at one identified point and passing through another.
  • Identify the intersection point (if any) of two lines.
  • Identify the intersection points (if any) of a line and a circle.
  • Identify the intersection points (if any) of two circles.

In practice, Greek geometry students almost certainly drew their constructions on an abax (ἄβαξ), a table covered in dust or sand. Centuries earlier, Egyptian surveyors carried out many of the same constructions using ropes to determine straight lines and circles on the ground. However, Euclid and other Greek geometers presented compass and straightedge constructions as precise mathematical abstractions: points are ideal points, lines are ideal lines, and circles are ideal circles.

The illustration below shows an algorithm, described in Euclid’s Elements about 2300 years ago, for multiplying or dividing two magnitudes. The input consists of four distinct points A, B, C, and D, and the goal is to construct a point Z such that |AZ| = |AC| · |AD| / |AB|. In particular, if we define |AB| to be our unit of length, the algorithm computes the product of |AC| and |AD|. Notice that Euclid first defines a new primitive operation RightAngle (as modern programmers would phrase it) by writing a subroutine. The correctness of the algorithm follows from the observation that triangles ACE and AZF are similar. The second and third lines of the main algorithm are ambiguous, because α intersects any circle centered at A at two distinct points, but the algorithm is actually correct no matter which intersection points are chosen for E and F.

Algorithm


〈〈Construct the line perpendicular to ℓ passing through P.〉〉

RightAngle(ℓ, P):
  choose an arbitrary point A on ℓ
  A, B ← Intersect(Circle(P, A), ℓ)
  C, D ← Intersect(Circle(A, B), Circle(B, A))
  return Line(C, D)

〈〈Construct a point Z such that |AZ| = |AC||AD|/|AB|.〉〉

MultiplyOrDivide(A, B, C, D):
  α ← RightAngle(Line(A, C), A)
  E ← Intersect(Circle(A, B), α)
  F ← Intersect(Circle(A, D), α)
  β ← RightAngle(Line(E, C), F)
  γ ← RightAngle(β, F)
  return Intersect(γ, Line(A, C))


Multiplication by compass and straightedge

Euclid’s algorithm reduces the problem of multiplying two magnitudes (lengths) to a series of primitive compass and straightedge operations. These operations are difficult to implement precisely on a modern digital computer, but Euclid’s algorithm wasn’t designed for a digital computer. It was designed for the Platonic ideal geometer, wielding the Platonic ideal compass and the Platonic ideal straightedge, who could execute each operation perfectly in constant time by definition. In this model of computation, MultiplyOrDivide runs in O(1) time.