
Multiplication Algorithms

Discover the historical development and key concepts of multiplication algorithms. Learn lattice and peasant multiplication methods, their recursive and iterative implementations, and their efficiency analysis. This lesson enhances your understanding of fundamental algorithms used for multiplying numbers, crucial for problem solving in Java programming.

Historical roots of algorithms

Although they have been a topic of formal academic study for only a few decades, algorithms have been with us since the dawn of civilization. Descriptions of step-by-step arithmetic computation are among the earliest examples of written human language, long predating the expositions by Fibonacci and al-Khwarizmi, or even the place-value notation they popularized.

Lattice multiplication

The most familiar method for multiplying large numbers, at least for American students, is the lattice algorithm. This algorithm was popularized by Fibonacci in Liber Abaci; Fibonacci learned it from Arabic sources including al-Khwarizmi, who in turn learned it from Indian sources including Brahmagupta’s 7th-century treatise Brahmasphutasiddhanta, and the Indian mathematicians may in turn have learned it from Chinese sources. The oldest surviving descriptions of the algorithm appear in The Mathematical Classic of Sunzi, written in China between the 3rd and 5th centuries CE, and in Eutocius of Ascalon’s commentaries on Archimedes’ Measurement of the Circle, written around 500 CE, but there is evidence that the algorithm was known much earlier. Eutocius credits the method to a lost treatise of Apollonius of Perga, who lived in the 3rd century BCE, entitled Okytokion (Ὠκυτόκιον). The Sumerians recorded multiplication tables on clay tablets as early as 2600 BCE, suggesting that they may have used the lattice algorithm.

The lattice algorithm assumes that the input numbers are represented as explicit strings of digits; we’ll assume here that we’re working in base ten, but the algorithm generalizes immediately to any other base. To simplify notation, the input consists of a pair of arrays X[0..m−1] and Y[0..n−1], representing the numbers

x = \sum_{i=0}^{m-1} X[i] \cdot 10^i \qquad \text{and} \qquad y = \sum_{j=0}^{n-1} Y[j] \cdot 10^j,

and similarly, the output consists of a single array Z[0..m+n−1], representing the product

z = x \cdot y = \sum_{k=0}^{m+n-1} Z[k] \cdot 10^k.
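
To make this representation concrete, here is a small illustrative sketch; the helper names toDigits and fromDigits are hypothetical, not part of the algorithm. They convert between an int and a little-endian digit array, where index i holds the coefficient of 10^i.

Java
import java.util.*;

public class Main {
    // Hypothetical helpers (for illustration only): convert between an int and a
    // little-endian digit array, where index i holds the coefficient of 10^i.
    public static ArrayList<Integer> toDigits(int x) {
        ArrayList<Integer> digits = new ArrayList<Integer>();
        do {
            digits.add(x % 10);   // next least significant digit
            x /= 10;
        } while (x > 0);
        return digits;
    }

    public static int fromDigits(ArrayList<Integer> digits) {
        int value = 0;
        for (int i = digits.size() - 1; i >= 0; i--) {
            value = value * 10 + digits.get(i);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(toDigits(321));               // prints [1, 2, 3]
        System.out.println(fromDigits(toDigits(321)));   // prints 321
    }
}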

The algorithm uses addition and single-digit multiplication as primitive operations. Addition can be performed with a simple for loop that propagates carries. In practice, single-digit multiplication is performed using a lookup table, either carved into clay tablets, painted on strips of wood or bamboo, written on paper, stored in read-only memory, or memorized by the human computer. The entire lattice algorithm can be summarized by the formula

x \cdot y = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left( X[i] \cdot Y[j] \cdot 10^{i+j} \right).
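
As a concrete sketch of the addition primitive mentioned above (assuming the same little-endian digit arrays), a hypothetical addDigits routine might look like this: it adds two digit arrays with a single for loop and one carry digit.

Java
import java.util.*;

public class Main {
    // Sketch of the addition primitive: add two little-endian digit arrays
    // with a single for loop, propagating a carry digit.
    public static ArrayList<Integer> addDigits(ArrayList<Integer> X, ArrayList<Integer> Y) {
        int len = Math.max(X.size(), Y.size());
        ArrayList<Integer> sum = new ArrayList<Integer>();
        int carry = 0;
        for (int i = 0; i < len; i++) {
            int s = carry;
            if (i < X.size()) s += X.get(i);
            if (i < Y.size()) s += Y.get(i);
            sum.add(s % 10);    // digit at position i
            carry = s / 10;     // carry into position i + 1
        }
        if (carry > 0) {
            sum.add(carry);
        }
        return sum;
    }

    public static void main(String[] args) {
        // [9, 9] represents 99 and [7, 2] represents 27; 99 + 27 = 126.
        ArrayList<Integer> X = new ArrayList<Integer>(Arrays.asList(9, 9));
        ArrayList<Integer> Y = new ArrayList<Integer>(Arrays.asList(7, 2));
        System.out.println(addDigits(X, Y));   // prints [6, 2, 1]
    }
}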

Different variants of the lattice algorithm evaluate the partial products X[i] · Y[j] · 10^{i+j} in different orders and use different strategies for computing their sum. For example, in Liber Abaci, Fibonacci describes a variant that considers the mn partial products in increasing order of significance, as shown in modern pseudocode below.

Algorithm


FibonacciMultiply(X[0..m1],Y[0..n1]):hold0for k0 to n+m1for all i and j such that i+j=kholdhold+X[i]Y[j]Z[k]hold mod 10holdhold/10return Z[0..m+n1]\underline{FibonacciMultiply(X [0 .. m − 1], Y [0 .. n − 1]):} \\ \hspace{0.4cm} hold←0 \\ \hspace{0.4cm} for\space k ← 0\space to\space n + m − 1 \\ \hspace{1cm} for\space all\space i\space and\space j\space such\space that\space i+j=k \\ \hspace{1.7cm} hold←hold+X[i]·Y[j] \\ \hspace{1cm} Z[k] ← hold\space mod\space 10 \\ \hspace{1cm} hold ← ⌊hold/10⌋ \\ \hspace{0.45cm} return\space Z[0..m+n−1]

Implementation

Java
import java.util.*;

public class Main {
    // Multiply two numbers given as digit arrays, least significant digit first,
    // following Fibonacci's variant of the lattice algorithm.
    public static ArrayList<Integer> FibonacciMultiply(ArrayList<Integer> X, ArrayList<Integer> Y) {
        int m = X.size();
        int n = Y.size();
        ArrayList<Integer> Z = new ArrayList<Integer>(Collections.nCopies(m + n, 0));
        int hold = 0;
        for (int k = 0; k < m + n - 1; k++) {
            // Add every partial product X[i] * Y[j] with i + j = k.
            for (int i = 0; i < m; i++) {
                int j = k - i;
                if (j >= 0 && j < n) {
                    hold += X.get(i) * Y.get(j);
                }
            }
            Z.set(k, hold % 10);   // digit at position k
            hold /= 10;            // carry the rest to the next position
        }
        Z.set(m + n - 1, hold);    // the final carry is the most significant digit
        // Remove leading zeros (stored at the end of the little-endian array).
        while (Z.size() > 1 && Z.get(Z.size() - 1) == 0) {
            Z.remove(Z.size() - 1);
        }
        Collections.reverse(Z);    // most significant digit first, for printing
        return Z;
    }

    public static void main(String[] args) {
        // [1, 2, 3] represents 321 and [4, 5] represents 54; 321 * 54 = 17334.
        ArrayList<Integer> X = new ArrayList<Integer>(Arrays.asList(1, 2, 3));
        ArrayList<Integer> Y = new ArrayList<Integer>(Arrays.asList(4, 5));
        ArrayList<Integer> Z = FibonacciMultiply(X, Y);
        System.out.print("The product of two input vectors is: ");
        for (int i = 0; i < Z.size(); i++) {
            System.out.print(Z.get(i));
        }
        System.out.println();
    }
}

Explanation

  • The variables m and n store the number of digits in the input arrays X and Y, respectively.
  • Inside the loop over k, the index j = k − i selects the digit of Y that pairs with X[i], so that every partial product with i + j = k is added to hold.
  • The final while loop removes any leading zeros from the output array Z before it is reversed for printing.

Fibonacci’s algorithm is often executed by storing all the partial products in a two-dimensional table (often called a tableau or grate or lattice) and then summing along the diagonals with appropriate carries. American elementary-school students are taught to multiply one factor (the multiplicand) by each digit in the other factor (the multiplier), writing down all the multiplicand-by-digit products before adding them up. This was also the method described by Eutocius, although he fittingly considered the multiplier digits from left to right. Both of these variants (and several others) are described and illustrated side by side in the anonymous 1458 textbook L’Arte dell’Abbaco, also known as the Treviso Arithmetic, the first printed mathematics book in the West.

Storing all the partial products in a two-dimensional table
Multiplier digits from left to right
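
The tableau idea can be sketched directly in Java. The following illustrative latticeMultiply routine (a sketch, not a transcription of any historical source) first fills a grid with all the single-digit partial products X[i] · Y[j] and then sums each diagonal i + j = k with carries, using the same little-endian digit arrays as the listing above.

Java
import java.util.*;

public class Main {
    // Sketch of the tableau ("lattice") variant: fill a grid with all single-digit
    // partial products, then sum each diagonal i + j = k with carries.
    // Digit arrays are least-significant-digit first, as in the listing above.
    public static ArrayList<Integer> latticeMultiply(ArrayList<Integer> X, ArrayList<Integer> Y) {
        int m = X.size();
        int n = Y.size();
        int[][] grid = new int[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                grid[i][j] = X.get(i) * Y.get(j);      // the tableau of partial products
            }
        }
        ArrayList<Integer> Z = new ArrayList<Integer>(Collections.nCopies(m + n, 0));
        int carry = 0;
        for (int k = 0; k < m + n - 1; k++) {          // sum along the diagonal i + j = k
            int sum = carry;
            for (int i = 0; i < m; i++) {
                int j = k - i;
                if (j >= 0 && j < n) {
                    sum += grid[i][j];
                }
            }
            Z.set(k, sum % 10);
            carry = sum / 10;
        }
        Z.set(m + n - 1, carry);
        return Z;                                      // little-endian; may end in a zero digit
    }

    public static void main(String[] args) {
        // [3, 2, 1] represents 123 and [5, 4] represents 45; 123 * 45 = 5535.
        ArrayList<Integer> X = new ArrayList<Integer>(Arrays.asList(3, 2, 1));
        ArrayList<Integer> Y = new ArrayList<Integer>(Arrays.asList(5, 4));
        System.out.println(latticeMultiply(X, Y));     // prints [5, 3, 5, 5, 0]
    }
}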

All of these variants of the lattice algorithm, and other similar variants described by Sunzi, al-Khwarizmi, Fibonacci, L’Arte dell’Abbaco, and many other sources, compute the product of any m-digit number and any n-digit number in O(mn) time. The running time of every variant is dominated by the number of single-digit multiplications.

Duplation and mediation

The lattice algorithm is not the oldest multiplication algorithm for which we have direct recorded evidence. An even older and arguably simpler algorithm, which does not rely on place-value notation, is sometimes called Russian peasant multiplication, Ethiopian peasant multiplication, or just peasant multiplication. A variant of this algorithm was copied into the Rhind papyrus by the Egyptian scribe Ahmes around 1650 BCE, from a document he claimed was (then) about 350 years old. This algorithm was still taught in elementary schools in Eastern Europe in the late 20th century; it was also commonly used by early digital computers that did not implement integer multiplication directly in hardware.

The peasant multiplication algorithm reduces the difficult task of multiplying arbitrary numbers to a sequence of four simpler operations:

  • Determining parity (even or odd)
  • Addition
  • Duplation (doubling a number)
  • Mediation (halving a number, rounding down)

Algorithm


PeasantMultiply(x,y):prod0while x>0if x is oddprodprod+yxx/2yy+yreturn prod\underline{PeasantMultiply(x, y):} \\ \hspace{0.4cm} prod←0 \\ \hspace{0.4cm} while\space x > 0 \\ \hspace{1cm} if\space x\space is\space odd \\ \hspace{1.7cm} prod ← prod + y \\ \hspace{1cm} x ← ⌊x/2⌋ \\ \hspace{1cm} y←y+y \\ \hspace{0.45cm} return\space prod

Implementation

Java
public class Main {
    // Peasant multiplication: repeatedly halve x and double y,
    // adding y to the product whenever x is odd.
    public static int PeasantMultiply(int x, int y) {
        int prod = 0;
        while (x > 0) {
            if (x % 2 == 1) {          // parity check
                prod += y;
            }
            x = Math.floorDiv(x, 2);   // mediation (halve, rounding down)
            y += y;                    // duplation (double)
        }
        return prod;
    }

    public static void main(String[] args) {
        int x = 7;
        int y = 9;
        int prod = PeasantMultiply(x, y);
        System.out.println(x + " * " + y + " = " + prod);
    }
}

Explanation

  • PeasantMultiply implements the algorithm above: while x is positive, it adds y to prod whenever x is odd, then halves x (mediation) and doubles y (duplation).
  • The main method multiplies 7 by 9 and prints 7 * 9 = 63 on the console.

Multiplication by duplation and mediation

The correctness of this algorithm follows by induction from the following recursive identity, which holds for all non-negative integers x and y:

x \cdot y = \begin{cases} 0 & \text{if } x = 0 \\ \lfloor x/2 \rfloor \cdot (y + y) & \text{if } x \text{ is even} \\ \lfloor x/2 \rfloor \cdot (y + y) + y & \text{if } x \text{ is odd} \end{cases}

Arguably, this recurrence is the peasant multiplication algorithm. Don’t let the iterative pseudocode fool you; the algorithm is fundamentally recursive!
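
To make that point concrete, here is a minimal recursive sketch in Java that follows the identity above directly (the name PeasantMultiplyRec is hypothetical, and the sketch assumes non-negative inputs whose product fits in a long).

Java
public class Main {
    // Recursive peasant multiplication, following the recursive identity above.
    public static long PeasantMultiplyRec(long x, long y) {
        if (x == 0) {
            return 0;                                  // base case
        }
        long half = PeasantMultiplyRec(x / 2, y + y);  // floor(x/2) * (y + y)
        return (x % 2 == 0) ? half : half + y;         // add y back when x is odd
    }

    public static void main(String[] args) {
        System.out.println(PeasantMultiplyRec(7, 9));  // prints 63
    }
}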

As stated, PeasantMultiply performs O(log x) parity, addition, and mediation operations, but we can improve this bound to O(log min{x, y}) by swapping the two arguments when x > y. Assuming the numbers are represented using any reasonable place-value notation (like binary, decimal, Babylonian sexagesimal, Egyptian duodecimal, Roman numerals, Chinese counting rods, bead positions on an abacus, and so on), each operation requires at most O(log(xy)) = O(log max{x, y}) single-digit operations, so the overall running time of the algorithm is O(log min{x, y} · log max{x, y}) = O(log x · log y).

In other words, this algorithm requires O(mn) time to multiply an m-digit number by an n-digit number; up to constant factors, this is the same running time as the lattice algorithm. This algorithm requires (a constant factor!) more paperwork to execute by hand than the lattice algorithm, but the necessary primitive operations are arguably easier for humans to perform. In fact, the two algorithms are equivalent when numbers are represented in binary.
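
The argument swap described above is a one-line change to the iterative version. Here is a minimal sketch (the name PeasantMultiplyFast is hypothetical, and non-negative inputs whose product fits in a long are assumed) in which the loop always halves the smaller factor, so it executes O(log min{x, y}) iterations.

Java
public class Main {
    // Peasant multiplication with the argument swap: always halve the smaller factor.
    public static long PeasantMultiplyFast(long x, long y) {
        if (x > y) {            // swap so that x <= y
            long tmp = x;
            x = y;
            y = tmp;
        }
        long prod = 0;
        while (x > 0) {
            if (x % 2 == 1) {   // parity
                prod += y;
            }
            x /= 2;             // mediation
            y += y;             // duplation
        }
        return prod;
    }

    public static void main(String[] args) {
        System.out.println(PeasantMultiplyFast(123456, 7));   // prints 864192
    }
}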

Compass and straightedge

Classical Greek geometers identified numbers (or, more accurately, magnitudes) with line segments of the appropriate length, which they manipulated using two simple mechanical tools—the compass and the straightedge—versions of which had already been in common use by surveyors, architects, and other artisans for centuries. Using only these two tools, these scholars reduced several complex geometric constructions to the following primitive operations, starting with one or more identified reference points.

  • Draw the unique line passing through two distinct identified points.
  • Draw the unique circle centered at one identified point and passing through another.
  • Identify the intersection point (if any) of two lines.
  • Identify the intersection points (if any) of a line and a circle.
  • Identify the intersection points (if any) of two circles.

In practice, Greek geometry students almost certainly drew their constructions on an abax (ἄβαξ), a table covered in dust or sand. Centuries earlier, Egyptian surveyors carried out many of the same constructions using ropes to determine straight lines and circles on the ground. However, Euclid and other Greek geometers presented compass and straightedge constructions as precise mathematical abstractions: points are ideal points, lines are ideal lines, and circles are ideal circles.

The illustration below shows an algorithm, described in Euclid’s Elements more than 2000 years ago, for multiplying or dividing two magnitudes. The input consists of four distinct points A, B, C, and D, and the goal is to construct a point Z such that |AZ| = |AC| · |AD| / |AB|. In particular, if we define |AB| to be our unit of length, then the algorithm computes the product of |AC| and |AD|. Notice that Euclid first defines a new primitive operation RightAngle by (as modern programmers would phrase it) writing a subroutine. The correctness of the algorithm follows from the observation that triangles ACE and AZF are similar. The second and third lines of the main algorithm are ambiguous, because α intersects any circle centered at A at two distinct points, but the algorithm is actually correct no matter which intersection points are chosen for E and F.

Algorithm


Construct the line perpendicular to l passing through P.

\underline{RightAngle(l, P):} \\ \hspace{0.4cm} choose\space a\space point\space A \in l \\ \hspace{0.4cm} A, B ← Intersect(Circle(P, A), l) \\ \hspace{0.4cm} C, D ← Intersect(Circle(A, B), Circle(B, A)) \\ \hspace{0.45cm} return\space Line(C, D)

Construct a point Z such that |AZ| = |AC||AD|/|AB|.

MultiplyOrDivide(A,B,C,D):αRightAngle(Line(A,C),A)EIntersect(Circle(A,B),α)FIntersect(Circle(A,D),α)βRightAngle(Line(E,C),F)γRightAngle(β,F)return Intersect(γ,Line(A,C))\underline{MultiplyOrDivide(A,B,C,D):} \\ \hspace{0.4cm} α ← RightAngle(Line(A, C ), A) \\ \hspace{0.4cm} E ← Intersect(Circle(A, B), α) \\ \hspace{0.4cm} F ← Intersect(Circle(A, D), α) \\ \hspace{0.4cm} β ←RightAngle(Line(E, C), F) \\ \hspace{0.4cm} γ ← RightAngle(β,F) \\ \hspace{0.45cm} return\space Intersect(γ,Line(A,C))


Multiplication by compass and straightedge

Euclid’s algorithm reduces the problem of multiplying two magnitudes (lengths) to a series of primitive compass and straightedge operations. These operations are difficult to implement precisely on a modern digital computer, but Euclid’s algorithm wasn’t designed for a digital computer. It was designed for the Platonic ideal geometer, wielding the Platonic ideal compass and the Platonic ideal straightedge, which could execute each operation perfectly in constant time by definition. In this model of computation, MultiplyOrDivide runs in O(1) time.
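
As a rough modern illustration (a coordinate-geometry sketch, not Euclid’s ideal model), the following program places A at the origin and C on the positive x-axis, so that α is the y-axis, E = (0, |AB|), and F = (0, |AD|); it then intersects the line through F parallel to EC with line AC and checks that the resulting point Z satisfies |AZ| = |AC| · |AD| / |AB|. The method name multiplyByConstruction and the magnitudes ab, ac, and ad are illustrative.

Java
public class Main {
    // Coordinate sketch of the construction, assuming positive magnitudes.
    // A = (0, 0), C = (ac, 0), E = (0, ab), F = (0, ad); γ is the line through F
    // parallel to EC, and Z is its intersection with line AC (the x-axis).
    public static double multiplyByConstruction(double ab, double ac, double ad) {
        double[] C = { ac, 0 };
        double[] E = { 0, ab };
        double[] F = { 0, ad };
        double dx = C[0] - E[0];   // direction of line EC
        double dy = C[1] - E[1];
        double t = -F[1] / dy;     // parameter where F + t*(dx, dy) reaches y = 0
        return F[0] + t * dx;      // x-coordinate of Z, i.e., |AZ|
    }

    public static void main(String[] args) {
        double ab = 1.0, ac = 6.0, ad = 2.5;                     // illustrative magnitudes
        System.out.println(multiplyByConstruction(ab, ac, ad));  // prints 15.0
        System.out.println(ac * ad / ab);                        // prints 15.0 as well
    }
}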