Linear Algebra
for Quantum Computing
Vectors, complex numbers, inner products, Dirac notation, matrices and eigenvalues: everything needed to understand qubits from scratch.
Vectors – what they really are
A vector is an ordered list of numbers. Think of it as an arrow in space: it has a direction and a length. In mathematics it is written as a column:
| v₁ |
| v₂ |
| v₃ |
A vector in 2D has 2 components, in 3D it has 3, and in general in N dimensions it has N. We work in 2D for now because it is easy to visualize, but everything generalizes.
Column notation (transpose)
We often write vectors compactly. The symbol ᵀ (transpose) turns a row into a column, so (3, 2)ᵀ is the column vector:
| 3 |
| 2 |
This will come in handy with quantum notation.
The vector v = (−2, 3)ᵀ has vertical component equal to:
Vector addition is performed component by component:

| 1 |   |  3 |   | 1+3    |   | 4 |
| 2 | + | −1 | = | 2+(−1) | = | 1 |
Calculate (2, 5)ᵀ + (−1, 3)ᵀ:
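Component-by-component addition is easy to check in code. A minimal Python/NumPy sketch (the language and variable names are illustrative, since the text shows no code of its own):

```python
import numpy as np

# Vectors as 1-D NumPy arrays; "+" adds component by component
u = np.array([1, 2])
v = np.array([3, -1])

print(u + v)  # [4 1]
```

The same pattern checks your answer to the exercise: np.array([2, 5]) + np.array([-1, 3]).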
Vector Space, Norm and Hilbert Space
What is a vector space?
A vector space is simply a set of vectors on which two operations are defined: addition and scalar multiplication. The set ℝ² (all pairs of real numbers) is a vector space, and so is ℂᴺ (vectors with N complex components).
What is a Hilbert space?
A Hilbert space is simply a vector space equipped with an additional structure called the inner product (covered in section 3). In practice, for quantum computing:
For a qubit N = 2, so the Hilbert space is ℂ²: the vectors have 2 complex components.
Don't be intimidated by the name. "Hilbert" is simply the mathematician who studied these structures. Physically, every quantum state is a unit vector in this space.
Calculate the norm of the following vectors:
a) v = (3, 4)ᵀ → ‖v‖ = √(3² + 4²) = ?
b) u = (1, 0)ᵀ → ‖u‖ = ?
To make a unit vector, divide by its norm: û = v / ‖v‖
Normalize v = (3, 4)ᵀ. Write the first component as a fraction (e.g. 3/5):
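The norm and the normalization step can both be sketched in NumPy (np.linalg.norm computes √(v₁² + v₂²); names are illustrative):

```python
import numpy as np

v = np.array([3.0, 4.0])
norm = np.linalg.norm(v)      # sqrt(3^2 + 4^2) = 5.0
v_hat = v / norm              # unit vector: (3/5, 4/5)

print(norm)                   # 5.0
print(np.linalg.norm(v_hat))  # 1.0 (up to rounding) -> v_hat is a unit vector
```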
Complex Numbers – conjugate and phase
A complex number z has two parts, a real part and an imaginary part: z = a + jb, where j is the imaginary unit (j² = −1).
Visualization – the Argand plane
Every complex number is a point in the plane (or an arrow from the origin). The horizontal axis is the real part, the vertical axis is the imaginary part.
The Complex Conjugate z*
The conjugate of z = a + jb is obtained simply by flipping the sign of the imaginary part: z* = a − jb.
Geometrically on the Argand plane, the conjugate is the reflection across the real axis.
Modulus |z| – the "length" of a complex number
The modulus of z is its distance from the origin, always a real number ≥ 0: |z| = √(a² + b²).
Squared modulus |z|² – fundamental in quantum
In quantum computing |z|² appears constantly. There is an elegant trick using the conjugate: z · z* = (a + jb)(a − jb) = a² + b² = |z|².
The phase φ
Every complex number has an angle (phase) relative to the real axis. It can be written in polar form: z = |z|(cos φ + j sin φ) = |z|e^(jφ).
Write the conjugate of these numbers (remember: only change the sign of j):
a) z = 5 + 2j b) z = −3 − j c) z = 4 (pure real)
Calculate |z|² = a² + b² for:
a) z = 3 + 4j → 3² + 4² = ?
b) z = 1/√2 + j/√2 → (1/√2)² + (1/√2)² = ? (appears very often in quantum!)
Given z = 2 + 3j, verify that z · z* = |z|² by multiplying explicitly:
Substitute j² = −1, then add:
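Python's built-in complex type uses the same j notation as these notes, so the z·z* trick can be verified directly; a small sketch:

```python
# Python writes complex numbers with j, just like these notes
z = 2 + 3j

print(z.conjugate())      # (2-3j) -> flip the sign of the imaginary part
print(z * z.conjugate())  # (13+0j) -> a^2 + b^2, with no imaginary leftover
print(abs(z) ** 2)        # ~13: |z|^2 computed via the modulus instead
```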
Inner Product – measuring "alignment"
The inner product (or dot product) of two vectors measures how "aligned" they are. For real vectors in ℝᴺ: u · v = u₁v₁ + u₂v₂ + … (multiply matching components, then sum).
Geometrically: u · v = ‖u‖ · ‖v‖ · cos(θ), where θ is the angle between the two vectors.
Orthogonality
Two vectors are orthogonal (perpendicular) if their inner product is zero: the angle between them is 90°.
Inner product with complex numbers
When the components are complex numbers (as in quantum), the inner product is modified: the first vector must be conjugated: ⟨u, v⟩ = u₁*v₁ + u₂*v₂ + …
Calculate u · v with u = (2, 3)ᵀ and v = (4, −1)ᵀ
u · v = 2·4 + 3·(−1) = ?
Are a = (1, 1)ᵀ and b = (1, −1)ᵀ orthogonal? Calculate the inner product:
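np.dot implements exactly this multiply-matching-components-and-sum rule for real vectors; a quick sketch of both exercises:

```python
import numpy as np

u = np.array([2, 3])
v = np.array([4, -1])
print(np.dot(u, v))    # 2*4 + 3*(-1) = 5

a = np.array([1, 1])
b = np.array([1, -1])
print(np.dot(a, b))    # 0 -> a and b are orthogonal
```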
Dirac Notation – the language of Quantum
Quantum computing uses a special notation invented by Paul Dirac, which looks strange at first but is very convenient. Let's learn it step by step.
Ket |v⟩ – the column vector
The symbol |v⟩ (read "ket v") is simply a column vector. Nothing mysterious:
| α |
| β |
Bra ⟨v| – the conjugate transpose
The symbol ⟨v| (read "bra v") is the conjugate transpose of the ket. That is: transpose the column to a row, and conjugate every complex component. If |v⟩ = (α, β)ᵀ, then:
⟨v| = ( α*  β* )
Inner product in Dirac notation
Writing the bra next to the ket forms the "bracket" ⟨u|v⟩: the row ⟨u| multiplies the column |v⟩:
              | v₁ |
( u₁*  u₂* ) ·| v₂ | = u₁*v₁ + u₂*v₂
Given |ψ⟩ = (2, −3)ᵀ (real components),
a) The corresponding bra ⟨ψ| is a ___ vector (row/column)?
b) What is the second component of ⟨ψ|?
Given |u⟩ = (1, 2)ᵀ and |v⟩ = (3, 4)ᵀ (all real),
calculate ⟨u|v⟩ = (1, 2)·(3, 4)ᵀ = 1·3 + 2·4 = ?
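NumPy's np.vdot conjugates its first argument, which is exactly the bra-ket rule ⟨u|v⟩; a sketch (array names are illustrative):

```python
import numpy as np

u = np.array([1, 2])   # |u>
v = np.array([3, 4])   # |v>
print(np.vdot(u, v))   # 1*3 + 2*4 = 11 (conjugation changes nothing for reals)

# With complex components the conjugation matters:
p = np.array([1j, 0])
print(np.vdot(p, p))   # (1+0j): (-j)(j) = 1, so <p|p> is the squared norm
```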
Why |0⟩ = (1, 0)ᵀ and |1⟩ = (0, 1)ᵀ?
This is a fundamental question. The short answer: it is a conventional choice, like choosing a coordinate system in geometry. Here is the full explanation.
The standard basis of ℝ²
In 2D, the "standard basis" vectors are the simplest ones pointing along the axes:
e₁ = | 1 |    e₂ = | 0 |
     | 0 |         | 1 |
Every vector is a combination of them, for example:
| 3 |       | 1 |       | 0 |
| 2 | = 3 · | 0 | + 2 · | 1 |
The connection to qubits
A qubit can be in one of two distinguishable and opposite states: "zero" and "one" (think: spin-up / spin-down, horizontal / vertical, etc.).
We choose to represent them with the standard basis vectors:
|0⟩ = | 1 |    |1⟩ = | 0 |
      | 0 |          | 1 |
Why these two vectors and not others?
We could choose any other pair of orthogonal unit vectors. For example:
| 1/√2 |        |  1/√2 |
| 1/√2 |  and   | −1/√2 |
The choice of |0⟩ = (1,0)ᵀ and |1⟩ = (0,1)ᵀ is called the standard computational basis and is the most widely used by convention.
Verify that |0⟩ and |1⟩ form an orthonormal basis:
a) ⟨0|0⟩ = (1, 0)·(1, 0)ᵀ = ?
b) ⟨1|1⟩ = (0, 1)·(0, 1)ᵀ = ?
c) ⟨0|1⟩ = (1, 0)·(0, 1)ᵀ = ? (orthogonal?)
Every vector can be written as a combination of |0⟩ and |1⟩. Write v = (3, −2)ᵀ in ket notation:
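Reading off the coefficients is just an inner product with each basis ket; a NumPy sketch of the decomposition v = 3|0⟩ − 2|1⟩:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
v = np.array([3, -2])

c0 = np.vdot(ket0, v)          # <0|v> = 3
c1 = np.vdot(ket1, v)          # <1|v> = -2
print(c0, c1)
print(c0 * ket0 + c1 * ket1)   # [ 3 -2] -> recombines to v
```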
The Qubit – the basic quantum state
A qubit is the basic unit of quantum computing. Unlike a classical bit (which is only 0 or 1), a qubit can be in a superposition of |0⟩ and |1⟩.
|ψ⟩ = α|0⟩ + β|1⟩ = α | 1 | + β | 0 | = | α |
                      | 0 |     | 1 |   | β |
Why |α|² + |β|² = 1?
This is exactly the normalization condition we studied. In Dirac terms:
⟨ψ|ψ⟩ = ( α*  β* ) · | α | = |α|² + |β|² = 1
                     | β |
Physically: the total probability of finding the qubit in some state must be 1 (100%).
Qubit examples
For each one, check whether it is a valid qubit (norm = 1):
a) |ψ⟩ = (1/2)|0⟩ + (√3/2)|1⟩ → |1/2|² + |√3/2|² = ?
b) |ψ⟩ = (1/2)|0⟩ + (1/2)|1⟩ → valid?
Given |ψ⟩ = (1/√3)|0⟩ + β|1⟩ with β real and positive,
find β knowing it is a valid qubit.
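The normalization check is one line of arithmetic; a small Python helper (the function name is made up for this sketch):

```python
import numpy as np

def is_valid_qubit(alpha, beta, tol=1e-9):
    """A state alpha|0> + beta|1> is valid iff |alpha|^2 + |beta|^2 = 1."""
    return abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < tol

print(is_valid_qubit(1/2, np.sqrt(3)/2))   # True  (exercise a)
print(is_valid_qubit(1/2, 1/2))            # False (exercise b)
```

You can also use it to check your answer to the last exercise.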
Measurement – where the quantum world meets the classical
If we have |ψ⟩ = α|0⟩ + β|1⟩, what happens when we measure it? The outcome is random: we obtain |0⟩ with probability |α|² and |1⟩ with probability |β|² (the Born rule).
State collapse
After measuring and obtaining, say, |0β©, the system's state becomes |0β©. The next measurement gives 0 with certainty (probability 1). The system has "forgotten" the original amplitudes.
Given |ψ⟩ = (√3/2)|0⟩ + (1/2)|1⟩
a) P(measure |0⟩) = |√3/2|² = ?
b) P(measure |1⟩) = |1/2|² = ?
c) Which outcome is more likely?
As in your notes: |ψ⟩ = (|0⟩ + |1⟩)/√2
a) Write α and β explicitly
b) Calculate P(|0⟩) and P(|1⟩)
c) Why is this a perfect random number generator?
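The Born rule turns amplitudes into sampling probabilities, so a measurement can be simulated with a random choice; a NumPy sketch (seed and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

psi = np.array([1, 1]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2              # Born rule -> [0.5, 0.5]

# 10000 independent preparations and measurements of the same state
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(probs)
print(outcomes.mean())   # close to 0.5: a fair coin, i.e. a random bit generator
```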
Matrices and Quantum Gates – transforming qubits
How to read a matrix
A matrix is a grid of numbers organized in rows and columns. A 2×2 matrix has 2 rows and 2 columns:
| a | b |
| c | d |
The compact notation (0 1 ; 1 0) is read as: row 1 = (0, 1), row 2 = (1, 0).
That is exactly:
( 0 1 )  ← row 1: "0 1"
( 1 0 )  ← row 2: "1 0"
(0 1 ; 1 0), ((0,1),(1,0)) and the two-row form are three ways of writing the same thing.
Scaling to larger matrices – 3×3 and beyond
Same logic: each block separated by ";" is a row. A 3×3 matrix has 3 blocks:
| a | b | c |
| d | e | f |
| g | h | i |
((a,b,c),
 (d,e,f),
 (g,h,i))
| 1 | 0 | 0 |
| 0 | 1 | 0 |
| 0 | 0 | 1 |
4×4 Matrix – two qubits
CNOT gate, the most common 2-qubit gate: 4 rows separated by ";":
| 1 | 0 | 0 | 0 |
| 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 |
| 0 | 0 | 1 | 0 |
((1,0,0,0),
 (0,1,0,0),
 (0,0,0,1),
 (0,0,1,0))
Counting check: (1 2 3 ; 4 5 6) has one ";" → 2 rows, with 3 numbers each → 3 columns: a 2×3 matrix.
Matrix-vector product
The rule: each row of the matrix takes the inner product with the column vector:
| a | b |   | x |   | a·x + b·y |
| c | d | · | y | = | c·x + d·y |
| 0 | 1 |   | 1 |   | 0·1+1·0 |   | 0 |
| 1 | 0 | · | 0 | = | 1·1+0·0 | = | 1 |
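The row-times-column rule is NumPy's @ operator; a sketch of the same product of X with (1, 0)ᵀ:

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
ket0 = np.array([1, 0])

print(X @ ket0)   # [0 1] -> each row of X dotted with the column (1, 0)
```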
Unitary matrices – the quantum condition
In quantum computing, gates must be unitary matrices: UᴴU = I, where Uᴴ is the conjugate transpose of U.
That is, Uᴴ = (Uᵀ)*: transpose, then conjugate each element.
Why? Because a unitary preserves the vector norm: ‖U|ψ⟩‖ = ‖|ψ⟩‖ = 1. States remain valid states!
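Unitarity and norm preservation are both one-line checks; a NumPy sketch using the matrix (1/√2)·((1,1),(1,−1)) as an example:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Unitary condition: conjugate-transpose times the matrix gives the identity
print(np.allclose(H.conj().T @ H, np.eye(2)))   # True

# Consequence: the norm of any state is preserved
psi = np.array([0.6, 0.8])                      # a valid state (norm 1)
print(np.linalg.norm(H @ psi))                  # 1.0 (up to rounding)
```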
Gates from your notes – in both notations
X = | 0 | 1 |        Z = | 1 |  0 |
    | 1 | 0 |            | 0 | −1 |

Y = | 0 | −j |       H = (1/√2) · | 1 |  1 |
    | j |  0 |                    | 1 | −1 |
Calculate X|1⟩ by multiplying the matrix by the vector:
| 0 | 1 |   | 0 |
| 1 | 0 | · | 1 | = ?
Calculate H|0⟩ where H = (1/√2)·[[1,1],[1,−1]]
Eigenvalues and Eigenvectors – in matrix form
For a 2×2 matrix L = ((a, b),(c, d)), everything revolves around the matrix L − λI:
| a−λ | b   |
| c   | d−λ |
Step 1 – Rewrite as a homogeneous system
Start from L|u⟩ = λ|u⟩ and move everything to the left. Recall that λ|u⟩ = λI|u⟩ where I is the identity matrix:
(L − λI)|u⟩ = 0 written out:

| l₁₁−λ | l₁₂   |   | u₁ |   | 0 |
| l₂₁   | l₂₂−λ | · | u₂ | = | 0 |

where L − λI is computed entry by entry:

( a b )       ( 1 0 )   ( a−λ  b   )
( c d )  − λ  ( 0 1 ) = ( c    d−λ )
Step 2 – Existence condition: det = 0
The system (L − λI)|u⟩ = 0 has a non-zero solution only if (L − λI) is singular (non-invertible). This occurs when the determinant is zero: det(L − λI) = 0.
Full example – Pauli Z, all steps
Step 1 – Write the matrix:
| 1 |  0 |
| 0 | −1 |
Step 2 – Calculate Z − λI by subtracting λ from the diagonal:
| 1 |  0 |     | 1 | 0 |   | 1−λ |    0 |
| 0 | −1 | − λ | 0 | 1 | = |   0 | −1−λ |
Step 3 – Calculate the determinant (diagonal matrix: product of diagonal elements):
det(Z − λI) = (1−λ)(−1−λ) = λ² − 1
Step 4 – Solve the characteristic equation:
λ² − 1 = 0  →  λ = ±1
Step 3 – Finding the eigenvectors
For each eigenvalue λᵢ, substitute into (L − λᵢI) and solve (L − λᵢI)|u⟩ = 0:
Case λ₁ = +1 – substitute λ = +1:

| 1−1 |    0 |   | 0 |  0 |
|   0 | −1−1 | = | 0 | −2 |

| 0 |  0 |   | u₁ |   | 0 |
| 0 | −2 | · | u₂ | = | 0 |

The second row gives −2u₂ = 0 → u₂ = 0, while u₁ is free. Choosing u₁ = 1:
| 1 |
| 0 |  = |0⟩
Case λ₂ = −1 – substitute λ = −1:

| 1+1 |    0 |   | 2 | 0 |
|   0 | −1+1 | = | 0 | 0 |

| 2 | 0 |   | u₁ |   | 0 |
| 0 | 0 | · | u₂ | = | 0 |

Now the first row gives 2u₁ = 0 → u₁ = 0, while u₂ is free. Choosing u₂ = 1:
| 0 |
| 1 |  = |1⟩
Second example – Pauli X, non-trivial eigenvectors
Gate X has the same eigenvalues as Z (±1) but different and more interesting eigenvectors:
Steps 1–4 – Eigenvalues: the characteristic equation is again λ² − 1 = 0, so λ₁ = +1, λ₂ = −1
Eigenvector for λ₁ = +1:

| 0−1 |   1 |   | −1 |  1 |
|   1 | 0−1 | = |  1 | −1 |

| −1 |  1 |   | u₁ |   | 0 |
|  1 | −1 | · | u₂ | = | 0 |

Row 1 gives −u₁ + u₂ = 0 → u₁ = u₂. Choosing (1, 1)ᵀ and normalizing:
| 1 |       | 1/√2 |
| 1 |   →   | 1/√2 |  = |+⟩
Eigenvector for λ₂ = −1: the same procedure gives u₂ = −u₁, i.e. (1, −1)ᵀ, which normalizes to (1/√2, −1/√2)ᵀ = |−⟩.
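np.linalg.eig recovers both eigen-pairs of X in one call, a handy cross-check of the hand calculation (eigenvectors come back as columns, fixed only up to sign/phase):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
vals, vecs = np.linalg.eig(X)

print(np.sort(vals))             # [-1.  1.]
plus = vecs[:, np.argmax(vals)]  # column for the eigenvalue +1
print(np.abs(plus))              # [0.707... 0.707...] -> |+> up to sign
```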
Normalizing an eigenvector
An eigenvector found by solving the system has arbitrary components. To use it as a quantum state it must be normalized: divide by its norm.
| a |                         | a/√(a²+b²) |
| b |  divided by √(a²+b²) →  | b/√(a²+b²) |
The connection to quantum measurement
The possible measurement outcomes = eigenvalues of L (always real for Hermitian matrices).
After measurement the state collapses to the eigenvector corresponding to the outcome obtained.
Pauli Z – measures "is it |0⟩ or |1⟩?" – eigenvalues ±1, eigenvectors |0⟩, |1⟩
Pauli X – measures "is it |+⟩ or |−⟩?" – eigenvalues ±1, eigenvectors |+⟩, |−⟩
Given the matrix A = ((3, 1),(1, 3)), write A − λI in matrix form.
Remember: λ is subtracted only from the diagonal elements.
With A − λI = ((3−λ, 1),(1, 3−λ)), calculate the determinant:
Solve with the quadratic formula: λ = (6 ± √(36−32)) / 2 = (6 ± ?) / 2
Substitute λ = 4 in (A − λI)|u⟩ = 0:

| 3−4 |   1 |   | −1 |  1 |
|   1 | 3−4 | = |  1 | −1 |

| −1 |  1 |   | u₁ |   | 0 |
|  1 | −1 | · | u₂ | = | 0 |

From row 1: −u₁ + u₂ = 0 → u₂ = u₁. Choose u₁ = 1, then normalize.
The normalized eigenvector |u₁⟩ = ?
Verify that |u₁⟩ = (1/√2, 1/√2)ᵀ is truly an eigenvector of A with λ = 4, by computing A|u₁⟩:
| 3 | 1 |   | 1/√2 |   | 3/√2 + 1/√2 |   | 4/√2 |
| 1 | 3 | · | 1/√2 | = | 1/√2 + 3/√2 | = | 4/√2 |

What is 4/√2 simplified? And is this equal to 4·(1/√2) in each component, i.e. λ·|u₁⟩? (yes/no)
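The same verification in NumPy: multiply A by the candidate eigenvector and compare with λ times the vector (a sketch; allclose absorbs rounding):

```python
import numpy as np

A = np.array([[3, 1],
              [1, 3]])
u1 = np.array([1, 1]) / np.sqrt(2)   # the normalized eigenvector

print(A @ u1)                        # [2.828... 2.828...] = (4/sqrt(2), 4/sqrt(2))
print(np.allclose(A @ u1, 4 * u1))   # True -> A|u1> = 4|u1>
```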
Practical tricks to speed up calculations
If M = aI + bL, then M|u⟩ = aI|u⟩ + bL|u⟩ = a|u⟩ + bλ|u⟩ = (a + bλ)|u⟩.
Eigenvectors are unchanged; eigenvalues become a + bλ.
Example: A = ((3,1),(1,3)) = 3I + X → same eigenvectors as X (i.e. |+⟩ and |−⟩), eigenvalues 3±1 = 4 and 2.
If D = ((d₁, 0), (0, d₂)) then the eigenvalues are simply d₁ and d₂, and the eigenvectors are e₁ = (1,0)ᵀ and e₂ = (0,1)ᵀ. No calculation needed.
Example: Pauli Z = ((1,0),(0,−1)) → eigenvalues +1 and −1 at a glance, eigenvectors |0⟩ and |1⟩.
For a 2×2 matrix the characteristic equation is always λ² − tr(L)·λ + det(L) = 0, where:
tr(L) = sum of diagonal elements, det(L) = ad − bc
So λ₁ + λ₂ = tr(L) and λ₁ · λ₂ = det(L). You can verify eigenvalues without redoing all the multiplications.
Example: Pauli X = ((0,1),(1,0)) → tr = 0, det = −1 → λ₁+λ₂ = 0 and λ₁·λ₂ = −1 → must be +1 and −1 ✓
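The trace/determinant shortcut is easy to confirm numerically; a sketch on the matrix A = ((3,1),(1,3)) from the exercise:

```python
import numpy as np

L = np.array([[3, 1],
              [1, 3]])
vals = np.linalg.eigvals(L)           # the two eigenvalues, 4 and 2

print(vals.sum(), np.trace(L))        # both 6: lambda1 + lambda2 = tr(L)
print(vals.prod(), np.linalg.det(L))  # both ~8: lambda1 * lambda2 = det(L)
```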
If U is unitary (UᴴU = I), all its eigenvalues satisfy |λ| = 1. In quantum this means eigenvalues always lie on the unit circle in the complex plane. For Pauli matrices (which are also Hermitian) they are real, so they are exactly ±1.
If you find an eigenvalue with |λ| ≠ 1 for a unitary matrix, you have made a calculation error.
If L = Lᴴ (Hermitian) and λ₁ ≠ λ₂, then the corresponding eigenvectors satisfy ⟨u₁|u₂⟩ = 0. No need to verify this: it is guaranteed by construction.
Example: ⟨+|−⟩ = (1/√2)(1/√2) + (1/√2)(−1/√2) = 1/2 − 1/2 = 0 ✓ (X is Hermitian, λ₁ ≠ λ₂ → orthogonal)
For a 2×2 Hermitian matrix, once the first eigenvector (a, b)ᵀ is found, the second is automatically (−b*, a*)ᵀ (normalized). So you only need to solve one of the two equations and get the other for free.
Example: found |+⟩ = (1/√2, 1/√2)ᵀ for X → the other is (−1/√2, 1/√2)ᵀ... which is |−⟩ up to a global phase (irrelevant in quantum).