The Deutsch algorithm (named after its originator David Deutsch) is a vivid and simple illustration of the power of quantum computing. Beyond achieving this admirable goal, it has no practical purpose, though it does seem to have inspired more useful algorithms. This blog post works through the arithmetic to show that the algorithm works (I did the sums to convince myself as I couldn’t grasp what was going on in Dirac notation). I don’t attempt to explain why it works.
The problem
Here’s the problem the Deutsch algorithm solves.
Suppose you are presented with a function \(f : \{0, 1\} \rightarrow \{0, 1\}\). You don’t know how \(f\) is defined, but you can call it with a \(0\) or a \(1\) and see what result it gives. The result will always be a \(0\) or a \(1\).
Your task is to decide whether or not \(f\) is constant, that is, \(f(0) = f(1)\). If \(f\) isn’t constant, it is said to be balanced since for one input the answer is \(0\) and for the other it is \(1\).
There are four possible functions, \(f_i\), as follows:

| Function | \(f_i(0)\) | \(f_i(1)\) | Type |
| --- | --- | --- | --- |
| \(f_0\) | 0 | 0 | Constant |
| \(f_1\) | 0 | 1 | Balanced |
| \(f_2\) | 1 | 0 | Balanced |
| \(f_3\) | 1 | 1 | Constant |
Classically, to distinguish between the constant and balanced functions we would simply call \(f(0)\) and \(f(1)\), look at the results, and see which function it is. The Deutsch quantum algorithm can distinguish between a constant and balanced function by calling the function only once. It can’t tell us which function it is, just whether it is constant or balanced.
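For contrast, the classical approach can be sketched in a few lines of Python (the function and variable names here are my own, not from the algorithm):

```python
# The four possible functions f : {0, 1} -> {0, 1}, as in the table above
fs = [lambda x: 0,      # f0: constant
      lambda x: x,      # f1: balanced
      lambda x: 1 - x,  # f2: balanced
      lambda x: 1]      # f3: constant

def classify(f):
    # Classically we need two calls to f: one per input.
    return "constant" if f(0) == f(1) else "balanced"

print([classify(f) for f in fs])
# ['constant', 'balanced', 'balanced', 'constant']
```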
The circuit
Here is the circuit that solves it (diagram mildly simplified from Alice Flarend and Bob Hilborn, 2022, Fig 11.3, p. 163):
The gate \(U_f\) is a representation of the unknown function, \(H\) is the Hadamard gate (named after Jacques Hadamard), and \(M\) is measurement. Let’s work through the arithmetic. I’ll start with the easy bits at the input and output before tackling \(U_f\) in the middle.
The inputs
To get it going, first we need to apply \(H\) to the basis state \(|1\rangle\).
\(\displaystyle H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\),
\(\displaystyle |1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}\), so
\(\displaystyle H|1\rangle = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}\).
Next, since the circuit starts with two qubits both in the state \(H|1\rangle\), we need to calculate the tensor product \(H|1\rangle \otimes H|1\rangle\):
\(\displaystyle H|1\rangle \otimes H|1\rangle = \frac{1}{2} \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}\).
In ket notation, reading from the top to the bottom of the column matrix, this is equivalent to:
\(\displaystyle \frac{1}{2}|00\rangle-\frac{1}{2}|01\rangle-\frac{1}{2}|10\rangle+\frac{1}{2}|11\rangle\).
\(|00\rangle\) means the inputs (reading the circuit top to bottom) were \(0\) and \(0\), \(|01\rangle\) means \(0\) and \(1\), and so on.
This pair of qubits is in superposition. If we measured after this step, each of the four input possibilities would be observed with equal probability. (Use Born's rule: square the magnitude of each amplitude; each gives \(1/4\).)
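The input-state arithmetic above is easy to check with NumPy's `kron` (a small sketch; assumes NumPy is installed):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket1 = np.array([0.0, 1.0])            # |1>

inp = np.kron(H @ ket1, H @ ket1)      # H|1> tensor H|1>
print(inp)          # 0.5 * [1, -1, -1, 1]
print(inp ** 2)     # Born probabilities: 0.25 each
```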
The output
After the \(U_f\) magic happens (next section), we undo the Hadamard on the top path (register) of the circuit. Another tensor product gives us the appropriate matrix, this time of the Hadamard gate and the identity matrix, \(I\), defined as:
\(\displaystyle I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\).
So, \(\displaystyle H \otimes I = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{bmatrix}\).
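Again, `kron` confirms this (a sketch, reusing the same NumPy setup):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

undo = np.kron(H, I)  # Hadamard on the first qubit, identity on the second
print(np.round(undo * np.sqrt(2)).astype(int))
# [[ 1  0  1  0]
#  [ 0  1  0  1]
#  [ 1  0 -1  0]
#  [ 0  1  0 -1]]
```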
The middle
Now we are left with the four implementations of \(U_f\), the overall structure of which is as follows (Mermin, 2007, p. 42):
Note that the first input, \(|x\rangle\), to the gate remains unchanged at the output, and the second output is the exclusive-OR (XOR) of \(y\) and \(f(x)\). (In a future blog post I explain why this helps.) Here are the sums for each \(U_f\) (the pictures are from Mermin, 2007, p. 42; I did the 'rithmetic):
\(U_{f_0}\):
This is the easiest – it's just the identity matrix. In the spirit of how we combined the inputs, it's the following tensor product:
\(\displaystyle U_{f_0} = I \otimes I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\),
the structure of which pleases me for some reason (how the \(4 \times 4\) identity matrix has a couple of \(2 \times 2\) identity matrices in the top-left and bottom-right corners, and how \(\otimes\) makes it happen!).
\(U_{f_1}\):
This is the controlled NOT gate:
\(U_{f_1} = \mbox{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\).
\(U_{f_2}\):
This takes the NOT (a.k.a. \(X\)) of the second input and then applies CNOT to both inputs. Let’s take the first step first:
\(I \otimes X = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\).
(This is mildly pleasing: note the \(2 \times 2\) NOT gate at the top left and bottom right.)
Now left-multiply it by CNOT:
\(U_{f_2} = \mbox{CNOT} (I \otimes X) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\).
\(U_{f_3}\):
Another easy one: just NOT the second input, so we need \(I \otimes X\), which we already calculated above:
\(U_{f_3} = I \otimes X = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\).
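Each \(U_{f_i}\) above can be checked against the rule \(U_f|x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle\). A sketch, using the matrices as derived above:

```python
import numpy as np

I = np.eye(2, dtype=int)
X = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

Ufs = [np.kron(I, I),          # U_f0
       CNOT,                   # U_f1
       CNOT @ np.kron(I, X),   # U_f2
       np.kron(I, X)]          # U_f3
fs = [lambda x: 0, lambda x: x, lambda x: 1 - x, lambda x: 1]

def basis(x, y):
    # Column vector for |xy>: index 2x + y, matching |00>,|01>,|10>,|11>
    v = np.zeros(4, dtype=int)
    v[2 * x + y] = 1
    return v

# Check U_f |x>|y> = |x>|y xor f(x)> on all four basis states
ok = all(np.array_equal(U @ basis(x, y), basis(x, y ^ f(x)))
         for U, f in zip(Ufs, fs) for x in (0, 1) for y in (0, 1))
print(ok)  # True
```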
Gluing it all together
Now the fun bit in a big table. Let
\(\mbox{Inp} = H|1\rangle \otimes H|1\rangle\) and
\(\mbox{Undo} = H \otimes I\).
That is, \(\mbox{Inp}\) is the input state and \(\mbox{Undo}\) is the step in the Deutsch circuit that applies the final Hadamard gate before measurement.
| Function | \(U_{f_i}\,\mbox{Inp}\) | \(\mbox{Undo}\,U_{f_i}\,\mbox{Inp}\) |
| --- | --- | --- |
| \(f_0\) (constant) | \(\displaystyle \frac{1}{2} \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}\) | \(\displaystyle \frac{1}{\sqrt{2}} \begin{bmatrix} 0 \\ 0 \\ 1 \\ -1 \end{bmatrix}\) |
| \(f_1\) (balanced) | \(\displaystyle \frac{1}{2} \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}\) | \(\displaystyle \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \\ 0 \\ 0 \end{bmatrix}\) |
| \(f_2\) (balanced) | \(\displaystyle \frac{1}{2} \begin{bmatrix} -1 \\ 1 \\ -1 \\ 1 \end{bmatrix}\) | \(\displaystyle \frac{1}{\sqrt{2}} \begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\) |
| \(f_3\) (constant) | \(\displaystyle \frac{1}{2} \begin{bmatrix} -1 \\ 1 \\ 1 \\ -1 \end{bmatrix}\) | \(\displaystyle \frac{1}{\sqrt{2}} \begin{bmatrix} 0 \\ 0 \\ -1 \\ 1 \end{bmatrix}\) |
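The final column can be reproduced numerically; this sketch applies each \(U_{f_i}\) to the input state and then the final Hadamard:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
CNOT = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

ket1 = np.array([0.0, 1.0])
inp = np.kron(H @ ket1, H @ ket1)     # Inp = H|1> tensor H|1>
undo = np.kron(H, I)                  # Undo = H tensor I

Ufs = [np.kron(I, I), CNOT, CNOT @ np.kron(I, X), np.kron(I, X)]
for i, U in enumerate(Ufs):
    final = undo @ U @ inp
    # Print each final state scaled by sqrt(2) to show the integer pattern
    print(f"f{i}:", np.round(final * np.sqrt(2)).astype(int))
# f0: [ 0  0  1 -1]
# f1: [ 1 -1  0  0]
# f2: [-1  1  0  0]
# f3: [ 0  0 -1  1]
```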
To see what we would measure, use Born's rule and square the amplitudes. This gives the following probabilities for each qubit pair:

| Function | \(P(|00\rangle)\) | \(P(|01\rangle)\) | \(P(|10\rangle)\) | \(P(|11\rangle)\) |
| --- | --- | --- | --- | --- |
| \(f_0\) (constant) | \(0\) | \(0\) | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
| \(f_1\) (balanced) | \(\frac{1}{2}\) | \(\frac{1}{2}\) | \(0\) | \(0\) |
| \(f_2\) (balanced) | \(\frac{1}{2}\) | \(\frac{1}{2}\) | \(0\) | \(0\) |
| \(f_3\) (constant) | \(0\) | \(0\) | \(\frac{1}{2}\) | \(\frac{1}{2}\) |
If the function is constant, the probability that the first qubit is a \(1\), which I'll write \(P(|1\_\rangle)\), is:
\(\displaystyle P(|1\_\rangle) = P(|10\rangle) + P(|11\rangle) = \frac{1}{2} + \frac{1}{2} = 1\).
We observe either \(|10\rangle\) or \(|11\rangle\) with equal probability, but the first qubit is always \(1\).
If the function is balanced, the probability that the first qubit is a \(0\) is:
\(\displaystyle P(|0\_\rangle) = P(|00\rangle) + P(|01\rangle) = \frac{1}{2} + \frac{1}{2} = 1\).
For this possibility, we observe either \(|00\rangle\) or \(|01\rangle\) with equal probability, but the first qubit is always a \(0\).
It worked! Looking at this first qubit tells us whether the function was constant or balanced. The second qubit has a 50–50 chance of being a \(1\) or a \(0\), so it doesn't tell us anything helpful.
See this post for the output when I ran the algorithm on an actual quantum computer, ibmq_quito.
But why does it work?!
As I mentioned at the outset, I am not going to attempt to explain why it works, but Mermin’s (2007, p. 41) potted history of the algorithm may be of interest:
“Deutsch’s problem is the simplest example of a quantum tradeoff that sacrifices particular information to acquire relational information. A crude version of it appeared in a 1985 paper by David Deutsch that, together with a 1982 paper by Richard Feynman, launched the whole field. In that early version the trick could be executed successfully only half the time. It took a while for people to realize that the trick could be accomplished every single time.”
References
Flarend, A., & Hilborn, B. (2022). Quantum computing: from Alice to Bob. Oxford University Press.
Mermin, N. D. (2007). Quantum computer science: an introduction. Cambridge University Press.