Synapse

An interconnected graph of micro-tutorials

Prerequisites: The Dot Product · Matrices

Matrix-Vector Multiplication

This is an early draft. Content may change as it gets reviewed.

The most important operation in linear algebra: multiplying a matrix by a vector to get a new vector.

The mechanics

$$A\mathbf{v} = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 \\ 6 \end{pmatrix} = \begin{pmatrix} 1 \cdot 5 + 2 \cdot 6 \\ 3 \cdot 5 + 4 \cdot 6 \end{pmatrix} = \begin{pmatrix} 17 \\ 39 \end{pmatrix}$$

Each entry of the result is the dot product of a row of $A$ with the vector $\mathbf{v}$. If $A$ is $m \times n$ and $\mathbf{v}$ has $n$ entries, the result has $m$ entries. The dimensions must match: the number of columns in $A$ must equal the number of entries in $\mathbf{v}$.
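The worked example above can be checked directly in code. This is a minimal sketch assuming NumPy; the row-by-row version makes the "dot product with each row" view explicit.

```python
import numpy as np

# The matrix and vector from the worked example above.
A = np.array([[1, 2],
              [3, 4]])
v = np.array([5, 6])

# A @ v computes the full matrix-vector product at once.
result = A @ v
print(result)  # [17 39]

# Equivalent, entry by entry: each output entry is the
# dot product of one row of A with v.
by_rows = np.array([np.dot(row, v) for row in A])
print(by_rows)  # [17 39]
```

If the dimensions don't match (columns of `A` versus entries of `v`), NumPy raises an error rather than guessing.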

The geometric meaning

Matrix-vector multiplication is a transformation: the matrix takes a vector and maps it to a new vector. Different matrices produce different transformations: rotations, reflections, scalings, shears, and projections all arise this way.

Every linear transformation can be represented as a matrix, and every matrix represents a linear transformation. This is why matrices are so central: they are the language of linear maps.
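To make "different matrices, different transformations" concrete, here is a small sketch (assuming NumPy) with two standard examples: a rotation matrix and a diagonal scaling matrix, each applied to the same vector.

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

scaling = np.array([[2, 0],   # stretch x by 2
                    [0, 3]])  # stretch y by 3

v = np.array([1.0, 0.0])

print(rotation @ v)  # approximately (0, 1): v rotated a quarter turn
print(scaling @ v)   # (2, 0): v stretched along the x-axis
```

Note that the columns of each matrix are exactly where it sends the basis vectors $\mathbf{e}_1$ and $\mathbf{e}_2$: reading off the columns tells you the whole transformation.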

Try it yourself

Try It: Matrix as Transformation

The purple and green arrows show where the matrix sends the two basis vectors $\mathbf{e}_1 = (1,0)$ and $\mathbf{e}_2 = (0,1)$. The faint grid shows how the entire space gets warped. Try the presets, then edit the numbers directly.

When the determinant is zero, the matrix collapses 2D space onto a line (or a point) — information is lost.
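The collapse is easy to see numerically. In this sketch (assuming NumPy), the second row of the matrix is twice the first, so the determinant is zero and every output is a multiple of the same vector.

```python
import numpy as np

# Singular matrix: the second row is 2x the first, so det(A) = 0.
A = np.array([[1, 2],
              [2, 4]])
print(np.linalg.det(A))  # ≈ 0

# No matter which vector goes in, the output lies on the
# line spanned by (1, 2) -- all of 2D space collapses onto it.
for v in [np.array([1, 0]), np.array([0, 1]), np.array([3, -1])]:
    print(A @ v)  # always a multiple of (1, 2)
```

Once collapsed, distinct inputs can map to the same output, which is the precise sense in which "information is lost": the transformation cannot be undone.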

What “linear” means

A transformation $T$ is linear if it preserves addition and scalar multiplication:

$$T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$$ $$T(c\mathbf{v}) = c \, T(\mathbf{v})$$

In words: transforming the sum equals the sum of the transforms, and scaling before or after transforming gives the same result. Straight lines stay straight. The origin doesn’t move.

This might sound restrictive, but an enormous number of useful operations are linear — including everything in PCA and factor analysis.
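Both linearity properties can be verified numerically for any matrix. A quick sketch, assuming NumPy, using a randomly generated matrix and vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))        # an arbitrary matrix
u = rng.standard_normal(2)
v = rng.standard_normal(2)
c = 3.5

# Additivity: transforming the sum equals the sum of the transforms.
print(np.allclose(A @ (u + v), A @ u + A @ v))  # True

# Homogeneity: scaling before or after transforming gives the same result.
print(np.allclose(A @ (c * v), c * (A @ v)))    # True
```

These checks pass for every matrix, because matrix-vector multiplication is built out of sums and scalar products, which distribute exactly as the two rules require.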

The key question

Given a matrix $A$ and a vector $\mathbf{v}$, the product $A\mathbf{v}$ generally points in a completely different direction from $\mathbf{v}$.

But what if there were special vectors where $A\mathbf{v}$ pointed in the same direction as $\mathbf{v}$ — just scaled by some factor? Those special vectors would tell you the “natural” directions of the transformation. They’re called eigenvectors, and they’re the subject of the next node.
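As a small numerical preview (the matrix here is an illustrative choice, assuming NumPy), compare a typical vector with one of these special directions:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 2]])

# A typical vector changes direction under A:
v = np.array([1.0, 0.0])
print(A @ v)  # [2. 1.] -- no longer along (1, 0)

# But (1, 1) keeps its direction: A merely scales it by 3.
w = np.array([1.0, 1.0])
print(A @ w)  # [3. 3.], which is exactly 3 * w
```

How to find such directions systematically, and what the scaling factors mean, is exactly what the next node covers.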