Interactive Notes

Linear Algebra
for Quantum Computing

Vectors, complex numbers, inner products, Dirac notation, matrices and eigenvalues β€” everything needed to understand qubits from scratch.

Section 01

Vectors β€” what they really are

A vector is an ordered list of numbers. Think of it as an arrow in space: it has a direction and a length. In mathematics it is written as a column:

v = ( v₁ )
    ( v₂ )
    ( v₃ )
column vector with 3 components

A vector in 2D has 2 components, in 3D it has 3, and in general in N dimensions it has N. We work in 2D for now because it is easy to visualize β€” but everything generalizes.

When you write v = (3, 2), you are saying: "start at the origin, go 3 units right and 2 up". The vector is the arrow from (0,0) to (3,2).
Click on the plane to place a vector and read its components

Column notation (transpose)

We often write vectors compactly. The symbol ᵀ (transpose) turns a row into a column, so a row of components is read as a column vector:

v = (3, 2)α΅€  =
3
2
same thing, two ways of writing it

This will come in handy with quantum notation.

Ex 1.1 — Reading a vector

The vector v = (βˆ’2, 3)α΅€ has vertical component equal to:

vβ‚‚ =
Ex 1.2 — Vector addition

Vector addition is performed component by component:

( 1 ) + (  3 ) = ( 1+3    ) = ( 4 )
( 2 )   ( −1 )   ( 2+(−1) )   ( 1 )

Calculate (2, 5)α΅€ + (βˆ’1, 3)α΅€:

Result = ( , )α΅€
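Componentwise addition is easy to check numerically. A quick NumPy sketch (Python/NumPy is not required by these notes, just a convenient way to verify), using the worked example from the text:

```python
import numpy as np

u = np.array([1, 2])
v = np.array([3, -1])
w = u + v   # componentwise: (1+3, 2+(-1)) = (4, 1)
print(w)
```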
Section 02

Vector Space, Norm and Hilbert Space

What is a vector space?

A vector space is simply a set of vectors on which two operations are defined: addition and scalar multiplication. The set ℝ² (all pairs of real numbers) is a vector space, and so is β„‚α΄Ί (vectors with N complex components).

Definition β€” Norm (length of a vector)
The norm of a vector v = (v₁, vβ‚‚, ..., vβ‚™) is its "geometric length":
β€–vβ€– = √( v₁² + vβ‚‚Β² + ... + vβ‚™Β² )
In 2D this is the Pythagorean theorem! If v = (3, 4)α΅€, then β€–vβ€– = √(9+16) = √25 = 5.
Click to place a vector β€” see the norm computed in real time
A vector with norm = 1 is called a unit vector or normalized vector. In quantum computing, all states are unit vectors (β€–Οˆβ€– = 1). This ensures the probabilities sum to 1.

What is Hilbert Space?

A Hilbert Space is simply a vector space equipped with an additional structure called the inner product (covered in Section 04). In practice, for quantum computing:

Hilbert space = space of complex vectors β„‚α΄Ί + inner product.
For a qubit N=2, therefore the Hilbert space is β„‚Β² β€” the vectors have 2 complex components.

Don't be intimidated by the name. "Hilbert" is simply the mathematician who studied these structures. Physically, every quantum state is a unit vector in this space.

Ex 2.1 — Computing the norm

Calculate the norm of the following vectors:

a) v = (3, 4)α΅€ β†’ β€–vβ€– = √(3Β² + 4Β²) = ?

b) u = (1, 0)α΅€ β†’ β€–uβ€– = ?

β€–vβ€– = β€–uβ€– =
Ex 2.2 — Normalizing a vector

To make a unit vector, divide by its norm: Γ» = v / β€–vβ€–

Normalize v = (3, 4)ᵀ. Write the first component as a fraction (e.g. 3/5):

û₁ = Γ»β‚‚ =
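Norm and normalization can also be checked in NumPy (just a sketch to verify your hand calculation, not part of the notes):

```python
import numpy as np

v = np.array([3.0, 4.0])
norm = np.linalg.norm(v)   # sqrt(3^2 + 4^2) = 5, the Pythagorean theorem
v_hat = v / norm           # unit vector: (3/5, 4/5) = (0.6, 0.8)
print(norm, v_hat)
```

Dividing by the norm always yields a vector of length 1, which is exactly the normalization condition quantum states must satisfy.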
Section 03

Complex Numbers β€” conjugate and phase

A complex number z has two parts: a real part and an imaginary part:

z = a + jb

a = real part    Re(z)
b = imaginary part    Im(z)
j = imaginary unit, with the property j² = −1
In physics and engineering j is used for the imaginary unit (mathematicians use i). You will see both in your notes β€” they mean the same thing.

visualization β€” the Argand Plane

Every complex number is a point in the plane (or an arrow from the origin). The horizontal axis is the real part, the vertical axis is the imaginary part. Click to explore.

Click to place a complex number β€” read modulus and phase in real time

The Complex Conjugate z*

The conjugate of z = a + jb is obtained simply by flipping the sign of the imaginary part:

z  = a + jb
z* = a − jb   ← only the sign of j changes!

(3+4j)* = 3 − 4j
(2−5j)* = 2 + 5j
(7)* = 7   ← pure reals: unchanged
(jb)* = −jb   ← pure imaginary: sign flips

Geometrically on the Argand plane, the conjugate is the reflection across the real axis.

z (blue) and z* (red) β€” reflection across the horizontal axis. Click to move z.

Modulus |z| β€” the "length" of a complex number

The modulus of z is its distance from the origin β€” always a real number β‰₯ 0:

|z| = √( a² + b² )   ← Pythagorean theorem!

Example: z = 3+4j  →  |z| = √(9+16) = √25 = 5

Squared modulus |z|Β² β€” fundamental in quantum

In quantum computing |z|Β² appears constantly. There is an elegant trick using the conjugate:

|z|Β² = z Β· z* = (a+jb)(aβˆ’jb) = aΒ² + bΒ²

Expanding:
(a+jb)(aβˆ’jb) = aΒ² βˆ’ ajb + ajb βˆ’ jΒ²bΒ²
= aΒ² βˆ’ jΒ²bΒ²
= aΒ² βˆ’ (βˆ’1)Β·bΒ²  β† jΒ² = βˆ’1 !
= aΒ² + bΒ²  βœ“
This is why |Ξ±|Β² always appears in quantum measurements: it is the measurement probability β€” a positive real number, perfect for a probability!

The phase Ο†

Every complex number has an angle (phase) relative to the real axis. It can be written in polar form:

z = |z| Β· ejΟ† = |z| Β· (cos Ο† + jΒ·sin Ο†)← Euler's formula
Ο† = arctan( b / a )
In quantum, the phase φ of the amplitudes does not change probabilities (|e^(jφ)|² = 1 always), but it is crucial for interference — the engine of quantum algorithms.
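Python has complex numbers built in (it writes `1j` for the imaginary unit, matching the engineering j used here). A small sketch of conjugate, squared modulus and phase, just for checking:

```python
import cmath

z = 3 + 4j
z_conj = z.conjugate()      # 3 - 4j: flip the sign of the imaginary part
mod2 = (z * z_conj).real    # |z|^2 = z * z* = a^2 + b^2 = 25, a real number
phase = cmath.phase(z)      # the angle phi; phase() uses atan2, so all quadrants work
print(z_conj, mod2, abs(z), phase)
```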
Ex 3.1 — Complex conjugate

Write the conjugate of these numbers (remember: only change the sign of j):

a) z = 5 + 2j     b) z = βˆ’3 βˆ’ j     c) z = 4 (pure real)

a) z* = b) z* = c) z* =
Ex 3.2 — Squared modulus |z|²

Calculate |z|Β² = aΒ² + bΒ² for:

a) z = 3 + 4j    β†’   3Β² + 4Β² = ?

b) z = 1/√2 + j/√2    β†’   (1/√2)Β² + (1/√2)Β² = ?   (appears very often in quantum!)

a) |z|Β² = b) |z|Β² =
Ex 3.3 — Multiplying z · z* — the trick

Given z = 2 + 3j, verify that z Β· z* = |z|Β² by multiplying explicitly:

(2+3j)(2βˆ’3j) = 4 βˆ’ 6j + 6j βˆ’ 9jΒ² = 4 βˆ’ 9Β·(?)

Substitute jΒ² = βˆ’1, then add:

jΒ² = zΒ·z* =
Section 04

Inner Product β€” measuring "alignment"

The inner product (or dot product) of two vectors measures how "aligned" they are. For real vectors in ℝᴺ:

⟨u, v⟩ = u₁v₁ + uβ‚‚vβ‚‚ + ... + uβ‚™vβ‚™

Geometrically: u Β· v = β€–uβ€– Β· β€–vβ€– Β· cos(ΞΈ) where ΞΈ is the angle between the two vectors.

Drag the slider to change the angle between the vectors β€” observe how the inner product changes
ΞΈ = 45Β°

Orthogonality

Two vectors are orthogonal (perpendicular) if their inner product is zero β€” the angle between them is 90Β°.

u Β· v = 0   βŸΊ   u βŠ₯ v(orthogonal)

Inner product with complex numbers

When the components are complex numbers (as in quantum), the inner product is modified: the first vector must be conjugated:

⟨u, v⟩ = u₁*Β·v₁ + uβ‚‚*Β·vβ‚‚ + ...← the * denotes complex conjugate
The conjugate of a complex number z = a + jb is z* = a βˆ’ jb. Only the sign of the imaginary part changes. On the Argand plane this is the reflection across the real axis.
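NumPy implements exactly this convention: `np.vdot` conjugates its first argument before summing the products. A sketch (illustrative values chosen here, not from the notes):

```python
import numpy as np

u = np.array([1 + 1j, 0])
v = np.array([1 + 1j, 2])
ip = np.vdot(u, v)   # conjugates u first: (1-1j)(1+1j) + 0*2 = 2
print(ip)
```

Note that without the conjugation, ⟨u|u⟩ could come out complex or negative; with it, ⟨u|u⟩ is always a real number ≥ 0, so it can serve as a squared length.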
Ex 4.1 — Real inner product

Calculate u Β· v with u = (2, 3)α΅€ and v = (4, βˆ’1)α΅€

u Β· v = 2Β·4 + 3Β·(βˆ’1) = ?

u Β· v =
Ex 4.2 — Checking orthogonality

Are a = (1, 1)α΅€ and b = (1, βˆ’1)α΅€ orthogonal? Calculate the inner product:

a Β· b = Orthogonal?
Section 05

Dirac Notation β€” the language of Quantum

Quantum computing uses a special notation invented by Paul Dirac, which looks strange at first but is very convenient. Let's learn it step by step.

Ket |v⟩ β€” the column vector

The symbol |v⟩ (read "ket v") is simply a column vector. Nothing mysterious:

|ψ⟩ = ( α )   ← just a regular column vector!
      ( β )
Same as writing ψ = (α, β)ᵀ

Bra ⟨v| β€” the conjugate transpose

The symbol ⟨v| (read "bra v") is the conjugate transpose of the ket. That is: transpose the column to a row, and conjugate every complex component:

If  |ψ⟩ = ( α )   then   ⟨ψ| = ( α* β* )
          ( β )

For real vectors: ⟨ψ| = ( α β )   ← transposed row
Bra + Ket = "Bracket". The product ⟨u|v⟩ = inner product of u and v. That is the whole idea!

Inner product in Dirac notation

⟨u|v⟩ = ( u₁* u₂* ) · ( v₁ ) = u₁*·v₁ + u₂*·v₂
                      ( v₂ )
Identical to the inner product from the previous section.
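In code, a ket is a column array, a bra is its conjugate transpose, and the bracket is a matrix product. A NumPy sketch (a check, not part of the notes):

```python
import numpy as np

ket_u = np.array([[1], [2]])   # column vector = ket |u>
ket_v = np.array([[3], [4]])   # ket |v>
bra_u = ket_u.conj().T         # conjugate transpose = bra <u|, a row vector
ip = (bra_u @ ket_v).item()    # <u|v> = 1*3 + 2*4 = 11
print(ip)
```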
Ex 5.1 — Writing bra and ket

Given |ψ⟩ = (2, βˆ’3)α΅€ (real components),

a) The corresponding bra ⟨ψ| is a ___ vector (row/column)?

b) What is the second component of ⟨ψ|?

type: ⟨ψ|β‚‚ =
Ex 5.2 — Computing ⟨u|v⟩

Given |u⟩ = (1, 2)α΅€ and |v⟩ = (3, 4)α΅€ (all real),

calculate ⟨u|v⟩ = (1, 2)Β·(3, 4)α΅€ = 1Β·3 + 2Β·4 = ?

⟨u|v⟩ =
Section 06

Why |0⟩ = (1, 0)α΅€ and |1⟩ = (0, 1)α΅€?

This is a fundamental question. The short answer: it is a conventional choice, like choosing a coordinate system in geometry. Here is the full explanation.

The standard basis of ℝ²

In 2D, the "standard basis" vectors are the simplest ones pointing along the axes:

e₁ = ( 1 )     e₂ = ( 0 )
     ( 0 )          ( 1 )

Any vector is a combination of them. For example:

( 3 ) = 3·( 1 ) + 2·( 0 ) = 3·e₁ + 2·e₂
( 2 )     ( 0 )     ( 1 )

The connection to qubits

A qubit can be in one of two distinguishable and opposite states: "zero" and "one" (think: spin-up / spin-down, horizontal / vertical, etc.).

We choose to represent them with the standard basis vectors:

|0⟩ = ( 1 )   ← "spin up"
      ( 0 )
|1⟩ = ( 0 )   ← "spin down"
      ( 1 )
The reason for this choice: |0⟩ and |1⟩ are orthogonal (⟨0|1⟩ = 0) and normalized (⟨0|0⟩ = 1). They form an orthonormal basis β€” the most convenient basis type in all of quantum computing.
Basis vectors |0⟩ and |1⟩ in the plane β€” orthogonal and unit length

Why these two vectors and not others?

We could choose any other pair of orthogonal unit vectors. For example:

|+⟩ = ( 1/√2 )     |−⟩ = (  1/√2 )
      ( 1/√2 )           ( −1/√2 )
→ Hadamard basis — equally valid!

The choice of |0⟩=(1,0)α΅€ and |1⟩=(0,1)α΅€ is called the standard computational basis and is the most widely used by convention.
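Both bases can be checked numerically: each pair should be orthonormal (inner product 1 with itself, 0 with its partner). A NumPy sketch, just as a verification:

```python
import numpy as np

ket0 = np.array([1, 0])                 # computational basis
ket1 = np.array([0, 1])
plus = np.array([1, 1]) / np.sqrt(2)    # Hadamard basis
minus = np.array([1, -1]) / np.sqrt(2)

# <0|0>, <0|1>, <+|+>, <+|->  should be  1, 0, 1, 0
checks = [np.vdot(ket0, ket0), np.vdot(ket0, ket1),
          np.vdot(plus, plus), np.vdot(minus, plus)]
print(checks)
```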

Ex 6.1 — Verifying the basis

Verify that |0⟩ and |1⟩ form an orthonormal basis:

a) ⟨0|0⟩ = (1, 0)Β·(1, 0)α΅€ = ?

b) ⟨1|1⟩ = (0, 1)Β·(0, 1)α΅€ = ?

c) ⟨0|1⟩ = (1, 0)Β·(0, 1)α΅€ = ? (orthogonal?)

⟨0|0⟩= ⟨1|1⟩= ⟨0|1⟩=
Ex 6.2 — Decomposition in the basis

Every vector can be written as a combination of |0⟩ and |1⟩. Write v = (3, βˆ’2)α΅€ in ket notation:

v = ___ Β· |0⟩  +  ___ Β· |1⟩
v = |0⟩ + |1⟩
Section 07

The Qubit β€” the basic quantum state

A qubit is the basic unit of quantum computing. Unlike a classical bit (which is only 0 or 1), a qubit can be in a superposition of |0⟩ and |1⟩.

Definition β€” Qubit
|ψ⟩ = α|0⟩ + β|1⟩ = α·( 1 ) + β·( 0 ) = ( α )
                      ( 0 )     ( 1 )   ( β )

α, β ∈ ℂ   (complex numbers)
|α|² + |β|² = 1   (unit norm)
α and β are called probability amplitudes.

Why |Ξ±|Β² + |Ξ²|Β² = 1?

This is exactly the normalization condition we studied. In Dirac terms:

⟨ψ|ψ⟩ = ( α* β* ) · ( α ) = |α|² + |β|² = 1
                    ( β )

Physically: the total probability of finding the qubit in some state must be 1 (100%).

The unit circle β€” all real qubits lie on it. Click to place a state.

Qubit examples

|ψ⟩ = |0⟩           α=1, β=0  →  1²+0² = 1 ✓
|ψ⟩ = |1⟩           α=0, β=1  →  0²+1² = 1 ✓
(|0⟩+|1⟩)/√2        α=β=1/√2  →  1/2+1/2 = 1 ✓
(|0⟩−|1⟩)/√2        α=1/√2, β=−1/√2  →  1/2+1/2 = 1 ✓
The last example (|0⟩+|1⟩)/√2 is the QRNG from your notes! Measuring this state gives 0 or 1 each with exactly 50% probability.
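The validity check |α|² + |β|² = 1 is one line of code. A small sketch (the helper `is_valid_qubit` is just an illustrative name, not from the notes):

```python
import numpy as np

def is_valid_qubit(alpha, beta, tol=1e-12):
    """A state alpha|0> + beta|1> is a valid qubit iff |alpha|^2 + |beta|^2 = 1."""
    return abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < tol

s = 1 / np.sqrt(2)
# the four examples above: |0>, |1>, (|0>+|1>)/sqrt(2), (|0>-|1>)/sqrt(2)
states = [(1, 0), (0, 1), (s, s), (s, -s)]
print([is_valid_qubit(a, b) for a, b in states])
```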
Ex 7.1 — Valid qubit or not?

For each one, check whether it is a valid qubit (norm = 1):

a) |ψ⟩ = (1/2)|0⟩ + (√3/2)|1⟩ β†’ |1/2|Β² + |√3/2|Β² = ?

b) |ψ⟩ = (1/2)|0⟩ + (1/2)|1⟩ β†’ valid?

a) sum = valid?
b) sum = valid?
Ex 7.2 — Finding the missing β

Given |ψ⟩ = (1/√3)|0⟩ + β|1⟩ with β real and positive,

find Ξ² knowing it is a valid qubit.

1. Write the normalization condition: (1/√3)² + β² = 1
2. 1/3 + β² = 1  →  β² = 2/3  →  β = ?
Ξ² =
Section 08

Measurement β€” where the quantum world meets the classical

If we have |ψ⟩ = α|0⟩ + β|1⟩, what happens when we measure it?

Born Rule β€” Measurement Probability
P( outcome |Ο†βŸ© ) = |βŸ¨Ο†|ψ⟩|Β²

P( |0⟩ ) = |⟨0|ψ⟩|² = |α|²
P( |1⟩ ) = |⟨1|ψ⟩|² = |β|²
After measurement, the state collapses to the outcome obtained.
The symbol |Β·|Β² denotes the squared modulus. For real numbers |x|Β² = xΒ². For complex z = a+jb, |z|Β² = aΒ² + bΒ². Always a real number β‰₯ 0 β€” perfect for a probability!
Measurement simulator β€” move the slider and press Measure
ΞΈ = 45Β° Ξ±=cos ΞΈ, Ξ²=sin ΞΈ

State collapse

After measuring and obtaining, say, |0⟩, the system's state becomes |0⟩. The next measurement gives 0 with certainty (probability 1). The system has "forgotten" the original amplitudes.

This is why quantum computing differs from classical computing: before measurement the system is in superposition (both 0 and 1 simultaneously), but measurement "forces" it to pick one.
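The Born rule can be simulated classically by sampling outcomes with probabilities |α|² and |β|². A sketch for the QRNG state (the seed and shot count are arbitrary choices for the demo):

```python
import numpy as np

# Born rule for the QRNG state (|0> + |1>)/sqrt(2): P(0) = P(1) = 1/2
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
probs = [abs(alpha) ** 2, abs(beta) ** 2]

# Simulate repeated measurements; each shot "collapses" to 0 or 1
rng = np.random.default_rng(0)          # seeded for reproducibility
outcomes = rng.choice([0, 1], size=10_000, p=probs)
freq0 = (outcomes == 0).mean()          # empirical frequency of outcome 0
print(probs, freq0)
```

With many shots the empirical frequency approaches the Born-rule probability of 0.5.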
Ex 8.1 — Computing probabilities

Given |ψ⟩ = (√3/2)|0⟩ + (1/2)|1⟩

a) P(measure |0⟩) = |√3/2|² = ?

b) P(measure |1⟩) = |1/2|² = ?

c) Which outcome is more likely?

P(|0⟩)= P(|1⟩)= more likely:
Ex 8.2 — QRNG — from the notes

As in your notes: |ψ⟩ = (|0⟩ + |1⟩)/√2

a) Write Ξ± and Ξ² explicitly

b) Calculate P(|0⟩) and P(|1⟩)

c) Why is this a perfect random number generator?

Ξ± = Ξ² =
P(|0⟩)= P(|1⟩)=
Section 09

Matrices and Quantum Gates β€” transforming qubits

How to read a matrix

A matrix is a grid of numbers organized in rows and columns. A 2Γ—2 matrix has 2 rows and 2 columns:

Standard notation:
( a b )
( c d )
Compact notation: ((a,b), (c,d))   or: [[a,b],[c,d]]

Each inner pair = one row. First pair = row 1, second = row 2.
Example β€” gate X written compactly: (0 1 ; 1 0)
Read as: row 1 = (0, 1)  Β·  row 2 = (1, 0)
That is exactly:
X = ( 0 1 ) ← row 1: "0 1"
    ( 1 0 ) ← row 2: "1 0"
So (0 1 ; 1 0) and ((0,1),(1,0)) and the two-row form are three ways of writing the same thing.

Scaling to larger matrices β€” 3Γ—3 and beyond

Same logic: each block separated by ";" is a row. A 3Γ—3 matrix has 3 blocks:

Standard:
( a b c )
( d e f )
( g h i )
Compact: ((a,b,c), (d,e,f), (g,h,i))

Example — 3×3 identity matrix:
( 1 0 0 )
( 0 1 0 )
( 0 0 1 )
((1,0,0), (0,1,0), (0,0,1))

4Γ—4 Matrix β€” two qubits

CNOT gate, the most common 2-qubit gate — 4 rows separated by ";":

( 1 0 0 0 )
( 0 1 0 0 )
( 0 0 0 1 )
( 0 0 1 0 )
((1,0,0,0), (0,1,0,0), (0,0,0,1), (0,0,1,0))
Trick for reading compact notation: count ";" and add 1 β†’ rows. Count numbers in the first block β†’ columns. (1 2 3 ; 4 5 6) = 1 ";" β†’ 2 rows, 3 numbers β†’ 3 columns = 2Γ—3 matrix.

Matrix-vector product

The rule: each row of the matrix takes the inner product with the column vector:

( a b ) · ( x ) = ( a·x + b·y )   ← each row dotted with the components
( c d )   ( y )   ( c·x + d·y )

Example β€” X|0⟩:
X|0⟩ =
01
10
Β·
1
0
=
0Β·1+1Β·0
1Β·1+0Β·0
=
0
1
= |1⟩
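The row-by-column rule is exactly NumPy's `@` operator. A quick check of X|0⟩ and X|1⟩ (just a sketch for verification):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
ket0 = np.array([1, 0])
result = X @ ket0   # each row dotted with the column: X|0> = (0, 1) = |1>
print(result)
```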

Unitary matrices β€” the quantum condition

In quantum computing, gates must be unitary matrices: Uα΄΄U = I, where Uα΄΄ is the conjugate transpose of U.

Definition β€” Unitary Matrix
U is unitary if Uα΄΄ Β· U = I
where Uα΄΄ = (Uα΅€)* = transpose then conjugate each element.

Why? Because it preserves the vector norm: β€–U|ΟˆβŸ©β€– = β€–|ΟˆβŸ©β€– = 1. States remain valid states!
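Both properties, Uᴴ·U = I and norm preservation, are easy to verify numerically for the Hadamard gate (a check, not part of the notes):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Unitarity: U^H U = I, where U^H is the conjugate transpose
is_unitary = np.allclose(H.conj().T @ H, np.eye(2))

# Norm preservation: ||H|psi>|| = ||psi|| = 1 for a valid state
psi = np.array([1, 0])
norm_preserved = np.isclose(np.linalg.norm(H @ psi), 1.0)
print(is_unitary, norm_preserved)
```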

Gates from your notes β€” in both notations

Gate X (NOT):         ( 0 1 )    ((0,1),(1,0))
                      ( 1 0 )
Gate Z (Phase flip):  ( 1  0 )   ((1,0),(0,−1))
                      ( 0 −1 )
Gate Y:               ( 0 −j )   ((0,−j),(j,0))
                      ( j  0 )
Gate H (Hadamard):    1/√2 · ( 1  1 )   1/√2·((1,1),(1,−1))
                             ( 1 −1 )
Gate visualization β€” observe how it transforms the basis vectors
Ex 9.1 — Applying gate X

Calculate X|1⟩ by multiplying the matrix by the vector:

X|1⟩ = ( 0 1 ) · ( 0 ) = ?
       ( 1 0 )   ( 1 )
X|1⟩ = ( , )α΅€ =
Ex 9.2 — Hadamard gate — superposition

Calculate H|0⟩ where H = (1/√2)Β·[[1,1],[1,βˆ’1]]

1. First row × |0⟩ = (1/√2)·(1·1 + 1·0) = ?
2. Second row × |0⟩ = (1/√2)·(1·1 + (−1)·0) = ?
3. Result = (1/√2)|0⟩ + (1/√2)|1⟩ = ?
coeff. on |0⟩: coeff. on |1⟩:
Section 10

Eigenvalues and Eigenvectors β€” in matrix form

Definition β€” Eigenvector and Eigenvalue
A vector |u⟩ β‰  0 is an eigenvector of matrix L with eigenvalue Ξ» if:
L |u⟩ = λ |u⟩
The matrix scales the vector by Ξ» without changing its direction.
Generic vectors change direction; eigenvectors (coloured) stay on the same line β€” select the matrix

Step 1 β€” Rewrite as a homogeneous system

Start from L|u⟩ = λ|u⟩ and move everything to the left. Recall that λ|u⟩ = λI|u⟩ where I is the identity matrix:

Start from:   L|u⟩ = λ|u⟩
Rearrange:    L|u⟩ − λI|u⟩ = 0  →  (L − λI)|u⟩ = 0

In matrix form:
( l₁₁−λ   l₁₂   ) · ( u₁ ) = ( 0 )
( l₂₁    l₂₂−λ )    ( u₂ )   ( 0 )
Subtracting Ξ»I means subtracting Ξ» only from the main diagonal, because I has 1 on the diagonal and 0 elsewhere:
( a b ) βˆ’ Ξ»( 1 0 ) = ( aβˆ’Ξ» b )
( c d ) ( 0 1 ) ( c dβˆ’Ξ» )

Step 2 β€” Existence condition: det = 0

The system (L βˆ’ Ξ»I)|u⟩ = 0 has a non-zero solution only if (L βˆ’ Ξ»I) is singular (non-invertible). This occurs when the determinant is zero:

det(L βˆ’ Ξ»I) = 0  β† characteristic equation

For a 2Γ—2 matrix:
det( a−λ   b  ) = (a−λ)(d−λ) − b·c = 0
   ( c    d−λ )

Expanding:
ad − aλ − dλ + λ² − bc = 0
λ² − (a+d)λ + (ad−bc) = 0   ← polynomial of degree 2
For a matrix NΓ—N the characteristic equation is a polynomial of degree N in Ξ» β†’ there are always N eigenvalues (counting multiplicities), possibly complex.

Full example β€” Pauli Z, all steps

Step 1 β€” Write the matrix:

Z = ( 1  0 )
    ( 0 −1 )

Step 2 β€” Calculate Z βˆ’ Ξ»I by subtracting Ξ» from the diagonal:

Z βˆ’ Ξ»I =
10
0βˆ’1
βˆ’ Ξ» Β·
10
01
=
1βˆ’Ξ»0
0βˆ’1βˆ’Ξ»

Step 3 β€” Calculate the determinant (diagonal matrix: product of diagonal elements):

det(Z βˆ’ Ξ»I) = (1βˆ’Ξ»)Β·(βˆ’1βˆ’Ξ»)

Expanding:
(1βˆ’Ξ»)(βˆ’1βˆ’Ξ») = βˆ’1 βˆ’ Ξ» + Ξ» + λ²
= λ² βˆ’ 1

Step 4 β€” Solve the characteristic equation:

λ² βˆ’ 1 = 0  β†’  λ² = 1  β†’  Ξ» = Β±1

→λ₁ = +1     Ξ»β‚‚ = βˆ’1

Step 3 β€” Finding the eigenvectors

For each eigenvalue Ξ»β‚–, substitute into (L βˆ’ Ξ»β‚–I) and solve (L βˆ’ Ξ»β‚–I)|u⟩ = 0:

Case λ₁ = +1 — substitute λ=+1:

(Z − I) = ( 1−1    0   ) = ( 0  0 )
          ( 0    −1−1 )    ( 0 −2 )

( 0  0 ) · ( u₁ ) = ( 0 )
( 0 −2 )   ( u₂ )   ( 0 )

row 1: 0·u₁ + 0·u₂ = 0  →  u₁ free
row 2: −2·u₂ = 0  →  u₂ = 0

Choose u₁=1  →  |u₁⟩ = ( 1 ) = |0⟩ ✓
                       ( 0 )

Case Ξ»β‚‚ = βˆ’1 β€” substitute Ξ»=βˆ’1:

1+10
0βˆ’1+1
=
20
00
Β·
u₁
uβ‚‚
=
0
0

row 1:
2Β·u₁ = 0  β†’  u₁ = 0
row 2:
0 = 0  β†’  uβ‚‚ free

Choose uβ‚‚=1  β†’  |uβ‚‚βŸ© =
0
1
= |1⟩ βœ“

Second example β€” Pauli X, non-trivial eigenvectors

Gate X has the same eigenvalues as Z (Β±1) but different and more interesting eigenvectors:

Steps 1–4 β€” Eigenvalues (computed above): λ₁ = +1, Ξ»β‚‚ = βˆ’1

Eigenvector for λ₁ = +1:

(X − I) = ( 0−1   1   ) = ( −1  1 )
          ( 1     0−1 )   (  1 −1 )

( −1  1 ) · ( u₁ ) = ( 0 )
(  1 −1 )   ( u₂ )   ( 0 )

row 1: −u₁ + u₂ = 0  →  u₂ = u₁

Choose u₁=1, u₂=1. Normalize:
|u₁⟩ = 1/√2 · ( 1 ) = ( 1/√2 ) = |+⟩
              ( 1 )   ( 1/√2 )

Eigenvector for Ξ»β‚‚ = βˆ’1:

0+1
1
1
0+1
=
1
1
1
1
Β·
u₁
uβ‚‚
=
0
0

row 1:
u₁ + uβ‚‚ = 0  β†’  uβ‚‚ = βˆ’u₁

Choose u₁=1, uβ‚‚=βˆ’1. Normalize:
|uβ‚‚βŸ© = 1/√2 Β·
1
βˆ’1
=
1/√2
βˆ’1/√2
= |βˆ’βŸ©
The eigenvectors of X are |+⟩ and |βˆ’βŸ© β€” the same states the Hadamard gate produces from |0⟩ and |1⟩! Measuring X means measuring in the "Hadamard basis" instead of the computational one.

Normalizing an eigenvector

An eigenvector found by solving the system has arbitrary components. To use it as a quantum state it must be normalized: divide by its norm.

|u⟩ = ( a )    ‖|u⟩‖ = √(a²+b²)
      ( b )

|û⟩ = 1/√(a²+b²) · ( a ) = ( a/√(a²+b²) )
                   ( b )   ( b/√(a²+b²) )
Check: ⟨û|û⟩ = a²/(a²+b²) + b²/(a²+b²) = 1 ✓

The connection to quantum measurement

Every quantum observable is a Hermitian matrix L.
The possible measurement outcomes = eigenvalues of L (always real for Hermitian matrices).
After measurement the state collapses to the eigenvector corresponding to the outcome obtained.

Pauli Z β†’ measures "is it |0⟩ or |1⟩?" β†’ eigenvalues Β±1, eigenvectors |0⟩, |1⟩
Pauli X β†’ measures "is it |+⟩ or |βˆ’βŸ©?" β†’ eigenvalues Β±1, eigenvectors |+⟩, |βˆ’βŸ©
Ex 10.1 — Subtracting λI from the diagonal

Given the matrix A = ((3, 1),(1, 3)), write A βˆ’ Ξ»I in matrix form.

Remember: Ξ» is subtracted only from the diagonal elements.

A βˆ’ Ξ»I = (( , 1), (1, ))
Ex 10.2 — Determinant and eigenvalues of A = ((3,1),(1,3))

With A βˆ’ Ξ»I = ((3βˆ’Ξ», 1),(1, 3βˆ’Ξ»)), calculate the determinant:

det(A βˆ’ Ξ»I) = (3βˆ’Ξ»)(3βˆ’Ξ») βˆ’ 1Β·1
= (3βˆ’Ξ»)Β² βˆ’ 1 = 9 βˆ’ 6Ξ» + λ² βˆ’ 1
= λ² βˆ’ 6Ξ» + 8 = 0

Solve with the quadratic formula: Ξ» = (6 Β± √(36βˆ’32)) / 2 = (6 Β± ?) / 2

√(36βˆ’32) = λ₁ = Ξ»β‚‚ =
Ex 10.3 — Finding the eigenvector for λ₁ = 4

Substitute Ξ»=4 in (A βˆ’ Ξ»I)|u⟩ = 0:

3βˆ’41
13βˆ’4
=
βˆ’11
1βˆ’1
Β·
u₁
uβ‚‚
=
0
0

From row 1: βˆ’u₁ + uβ‚‚ = 0 β†’ uβ‚‚ = u₁. Choose u₁=1, then normalize.

The normalized eigenvector |uβ‚βŸ© = ?

|uβ‚βŸ© = ( , )α΅€
Ex 10.4 — Direct verification — L|u⟩ = λ|u⟩

Verify that |uβ‚βŸ© = (1/√2, 1/√2)α΅€ is truly an eigenvector of A with Ξ»=4, by computing A|uβ‚βŸ©:

A|uβ‚βŸ© =
31
13
Β·
1/√2
1/√2
=
3/√2 + 1/√2
1/√2 + 3/√2
=
4/√2
4/√2

What is 4/√2 simplified? And is this equal to 4Β·(1/√2) = λ·|uβ‚βŸ©? (yes/no)

4/√2 = A|uβ‚βŸ© = Ξ»|uβ‚βŸ©?

Practical tricks to speed up calculations

Trick 1 β€” aI + bL has the same eigenvectors as L
If M = aI + bL, then M|u⟩ = aI|u⟩ + bL|u⟩ = a|u⟩ + bλ|u⟩ = (a + bλ)|u⟩.
Eigenvectors are unchanged; eigenvalues become a + bΞ».

Example: A = ((3,1),(1,3)) = 3I + X  β†’  same eigenvectors as X (i.e. |+⟩ and |βˆ’βŸ©), eigenvalues 3Β±1 = 4 and 2.
Trick 2 β€” Diagonal matrices: eigenvalues = diagonal elements
If D = ((d₁, 0), (0, dβ‚‚)) then the eigenvalues are simply d₁ and dβ‚‚, and the eigenvectors are e₁=(1,0)α΅€ and eβ‚‚=(0,1)α΅€. No calculation needed.

Example: Pauli Z = ((1,0),(0,βˆ’1))  β†’  eigenvalues +1 and βˆ’1 at a glance, eigenvectors |0⟩ and |1⟩.
Trick 3 β€” Trace and determinant
For a 2Γ—2 matrix the characteristic equation is always λ² βˆ’ tr(L)Β·Ξ» + det(L) = 0, where:
tr(L) = sum of diagonal elements    det(L) = ad βˆ’ bc
So λ₁ + Ξ»β‚‚ = tr(L) and λ₁ Β· Ξ»β‚‚ = det(L). You can verify eigenvalues without redoing all the multiplications.

Example: Pauli X = ((0,1),(1,0))  β†’  tr=0, det=βˆ’1  β†’  λ₁+Ξ»β‚‚=0 and λ₁·λ₂=βˆ’1  β†’  must be +1 and βˆ’1 βœ“
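Tricks 1 and 3 are quick to confirm numerically for A = 3I + X (a sketch to verify the shortcut, not part of the notes):

```python
import numpy as np

A = np.array([[3, 1],
              [1, 3]])
evals = np.linalg.eigvalsh(A)   # eigenvalues of a Hermitian matrix: 2 and 4

# Trick 3: lambda1 + lambda2 = tr(A), lambda1 * lambda2 = det(A)
tr_ok = np.isclose(evals.sum(), np.trace(A))           # 2 + 4 = 6
det_ok = np.isclose(evals.prod(), np.linalg.det(A))    # 2 * 4 = 8
print(evals, tr_ok, det_ok)
```

Consistent with Trick 1: A = 3I + X, so its eigenvalues are 3 ± 1 with the same eigenvectors |+⟩ and |−⟩ as X.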
Trick 4 β€” Unitary matrices: eigenvalues have modulus 1
If U is unitary (Uα΄΄U = I), all its eigenvalues satisfy |Ξ»| = 1. In quantum this means eigenvalues always lie on the unit circle in the complex plane. For Pauli matrices (which are also Hermitian) they are real β†’ so they are exactly Β±1.

If you find an eigenvalue with |Ξ»| β‰  1 for a unitary matrix, you have made a calculation error.
Trick 5 β€” Eigenvectors of Hermitian matrices are orthogonal
If L = Lα΄΄ (Hermitian) and λ₁ β‰  Ξ»β‚‚, then the corresponding eigenvectors satisfy ⟨u₁|uβ‚‚βŸ© = 0. No need to verify this: it is guaranteed by construction.

Example: ⟨+|βˆ’βŸ© = (1/√2)(1/√2) + (1/√2)(βˆ’1/√2) = 1/2 βˆ’ 1/2 = 0 βœ“   (X is Hermitian, λ₁≠λ₂ β†’ orthogonal)
Trick 6 β€” Once you find one eigenvector, the other is orthogonal to it
For a 2Γ—2 Hermitian matrix, once the first eigenvector (a, b)α΅€ is found, the second is automatically (βˆ’b*, a*)α΅€ (normalized). So you only need to solve one of the two equations and get the other for free.

Example: found |+⟩ = (1/√2, 1/√2)α΅€ for X  β†’  the other is (βˆ’1/√2, 1/√2)α΅€... which is |βˆ’βŸ© up to a global phase (irrelevant in quantum).