Linear Algebra
Basics
The first two lectures review and introduce vectors, matrices, and their algebraic operations. This is material you learned in your Discrete Structures class, which you can review here.
Matrix Algebra
Once we have objects that represent the information we want, and operations we can perform on them, the next logical question is: can we build an algebra? At this level, algebra means identifying the rules we can apply to manipulate equations and solve for unknown variables.
The Rules of Matrix Algebra
While matrix algebra feels similar to the math we learn in grade school, there are some critical distinctions—particularly regarding commutativity and division.
Addition and Subtraction: You can add and subtract matrices ($A + B$ or $A - B$). These operations are commutative: $A + B = B + A$.
Multiplication: You can multiply matrices ($AB$), but it is not commutative. Order matters: $AB \neq BA$ in general.
Division: You cannot divide matrices ($A / B$ does not exist).
Associativity: Both addition and multiplication are associative: $(A + B) + C = A + (B + C)$ and $(AB)C = A(BC)$.
Distributivity: Matrices follow the distributive property: $A(B + C) = AB + AC$.
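To make these rules concrete, here is a quick check in Python with NumPy (a small sketch; the library choice and example matrices are my own, not from the notes):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

# Addition is commutative.
assert np.array_equal(A + B, B + A)

# Multiplication is associative and distributive...
assert np.array_equal((A @ B) @ C, A @ (B @ C))
assert np.array_equal(A @ (B + C), A @ B + A @ C)

# ...but NOT commutative: A @ B and B @ A differ in general.
print(np.array_equal(A @ B, B @ A))  # False
```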
Solving Without Division: Inverses and Identities
Without a division operation, we solve for variables by using inverses and identities. In regular arithmetic, the multiplicative inverse of $x$ is $x^{-1} = \frac{1}{x}$. When you multiply them, you get the multiplicative identity, 1.

To do this with matrices, we must first define the Identity Matrix ($I$). For a square matrix, the identity is a matrix of all zeros with ones along the main diagonal:

$$I = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

Multiplying any matrix by the identity returns the original matrix: $AI = IA = A$. Therefore, the inverse of a matrix $A$ (written $A^{-1}$) is a matrix such that their product results in the identity:

$$A A^{-1} = A^{-1} A = I$$

While there are simple formulas for the inverse of small matrices ($2 \times 2$), larger matrices are much more difficult to invert and typically require Gaussian Elimination.
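These properties are easy to verify numerically; a minimal sketch with NumPy (the example matrix is my own):

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
I = np.eye(2)  # the 2x2 identity matrix

# Multiplying by the identity returns the original matrix.
assert np.allclose(A @ I, A)

# The matrix times its inverse gives the identity (up to float error).
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, I)
```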
Non-Square Matrices and the Transpose¶
If a matrix is not square, it doesn’t have a standard inverse. To handle this, we “make it square” using the transpose (). To transpose a matrix, you switch the rows and columns:
If , then .
The product of a matrix and its transpose is always a square matrix. By taking the inverse of that resulting square matrix, we can calculate what is known as the pseudo-inverse.
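A brief numerical sketch of this idea (the tall matrix below is my own example; NumPy’s built-in pseudo-inverse agrees with the formula whenever $A$ has full column rank):

```python
import numpy as np

# A tall (non-square) matrix: 3 rows, 2 columns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# A.T @ A is square (2x2), so it can be inverted.
pseudo = np.linalg.inv(A.T @ A) @ A.T

# NumPy's built-in pseudo-inverse gives the same result.
assert np.allclose(pseudo, np.linalg.pinv(A))
```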
Putting it Together: Solving an Equation
Now we can perform algebra to solve for a vector $\mathbf{x}$ in the linear system $A\mathbf{x} = \mathbf{b}$:

$$\begin{aligned} A\mathbf{x} &= \mathbf{b} \\ A^{-1}A\mathbf{x} &= A^{-1}\mathbf{b} \\ I\mathbf{x} &= A^{-1}\mathbf{b} \\ \mathbf{x} &= A^{-1}\mathbf{b} \end{aligned}$$
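In code this derivation is a single line; a small sketch with NumPy (my choice of system):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

# x = A^{-1} b, exactly as derived above.
x = np.linalg.inv(A) @ b

# In practice np.linalg.solve is preferred: it avoids forming the
# inverse explicitly, which is faster and more numerically stable.
assert np.allclose(x, np.linalg.solve(A, b))
print(x)  # [0.8 1.4]
```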
Gauss-Jordan Elimination
The Gauss-Jordan algorithm is the typical method we use to find the inverse of a square matrix. It is a variation of Gaussian Elimination. We manipulate our matrix using these three rules:
Exchange two rows in the matrix.
Multiply a row by a non-zero constant.
Add or subtract the scalar multiple of one row to another row.
The Algorithm
Create the partitioned matrix $[\,A \mid I\,]$, where $I$ is the identity matrix.
Swap the rows so that the row with the largest, leftmost nonzero entry is on top.
Multiply the top row by a scalar so that the top row’s leading entry becomes 1.
Add/subtract multiples of the top row to the other rows so that all other entries in the column containing the top row’s leading entry are all zero.
Repeat steps 2-4 for the next leftmost nonzero entry until all the leading entries are 1.
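Here is a minimal sketch of these steps in Python with NumPy; the function name and structure are my own illustration, not the course’s reference implementation:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by row-reducing the partitioned matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # step 1: build [A | I]
    for col in range(n):
        # Step 2: swap so the largest-magnitude entry in this column is on top.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        # Step 3: scale the pivot row so its leading entry becomes 1.
        M[col] /= M[col, col]
        # Step 4: subtract multiples of the pivot row to zero out the column.
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]  # the right half now holds A^{-1}
```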
Example
Let’s invert the following matrix:

$$A = \begin{bmatrix} 4 & 0 & 4 \\ -1 & 3 & -4 \\ -3 & 1 & -2 \end{bmatrix}$$

Create the partitioned matrix:

$$\left[\begin{array}{ccc|ccc} 4 & 0 & 4 & 1 & 0 & 0 \\ -1 & 3 & -4 & 0 & 1 & 0 \\ -3 & 1 & -2 & 0 & 0 & 1 \end{array}\right]$$

The entry with the largest magnitude in the first column is already in the top row, so we skip the swap.

Multiply row 1 by $\frac{1}{4}$ to get a 1 in the upper left:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ -1 & 3 & -4 & 0 & 1 & 0 \\ -3 & 1 & -2 & 0 & 0 & 1 \end{array}\right]$$

Get zeros in the rest of the first column. Since row 2 has a -1, we replace row 2 with (Row 1 + Row 2):

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ 0 & 3 & -3 & \frac{1}{4} & 1 & 0 \\ -3 & 1 & -2 & 0 & 0 & 1 \end{array}\right]$$

Get a zero in the last row. Replace row 3 with (Row 3 + 3 Row 1):

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ 0 & 3 & -3 & \frac{1}{4} & 1 & 0 \\ 0 & 1 & 1 & \frac{3}{4} & 0 & 1 \end{array}\right]$$

Move to the second column. We want a 1 where the 3 is, so we divide the second row by 3:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ 0 & 1 & -1 & \frac{1}{12} & \frac{1}{3} & 0 \\ 0 & 1 & 1 & \frac{3}{4} & 0 & 1 \end{array}\right]$$

The first row already has a 0 in this column. Replace row 3 with (Row 3 - Row 2):

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ 0 & 1 & -1 & \frac{1}{12} & \frac{1}{3} & 0 \\ 0 & 0 & 2 & \frac{2}{3} & -\frac{1}{3} & 1 \end{array}\right]$$

Fix the last column. Multiply row 3 by $\frac{1}{2}$:

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ 0 & 1 & -1 & \frac{1}{12} & \frac{1}{3} & 0 \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{6} & \frac{1}{2} \end{array}\right]$$

Replace row 2 with (Row 2 + Row 3):

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & \frac{1}{4} & 0 & 0 \\ 0 & 1 & 0 & \frac{5}{12} & \frac{1}{6} & \frac{1}{2} \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{6} & \frac{1}{2} \end{array}\right]$$

Final step: Replace row 1 with (Row 1 - Row 3):

$$\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -\frac{1}{12} & \frac{1}{6} & -\frac{1}{2} \\ 0 & 1 & 0 & \frac{5}{12} & \frac{1}{6} & \frac{1}{2} \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{6} & \frac{1}{2} \end{array}\right]$$

And we’re done. The inverted matrix is:

$$A^{-1} = \begin{bmatrix} -\frac{1}{12} & \frac{1}{6} & -\frac{1}{2} \\ \frac{5}{12} & \frac{1}{6} & \frac{1}{2} \\ \frac{1}{3} & -\frac{1}{6} & \frac{1}{2} \end{bmatrix}$$
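As a quick sanity check (NumPy again, my own sketch), we can confirm the result numerically; the gauss_jordan_inverse function sketched earlier should produce the same matrix:

```python
import numpy as np

A = np.array([[ 4.0, 0.0,  4.0],
              [-1.0, 3.0, -4.0],
              [-3.0, 1.0, -2.0]])
A_inv = np.array([[-1/12, 1/6, -1/2],
                  [ 5/12, 1/6,  1/2],
                  [ 1/3, -1/6,  1/2]])

# A times its inverse should give the 3x3 identity (up to float error).
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(np.linalg.inv(A), A_inv)
```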
Complexity Note
For an $n \times n$ matrix, the algorithm operates by fixing each of the $n$ columns. For each column, it operates on $n$ rows, and for each row, it performs $O(n)$ operations across the entire row. Therefore, Gauss-Jordan Elimination has a computational complexity of $O(n^3)$.
Linear Regression
We have seen how perceptrons classify data using a line (or, in higher dimensions, a hyperplane) to separate in-category data from out-of-category data. Classification is a special machine learning task where the outputs are binary. However, we often want to make predictions where our outputs are real values.
Example: If you are a bank, and someone applies for a line of credit:
Classification: Deciding whether or not to grant the line of credit (Yes/No).
Regression: Deciding how much credit to give (a specific number).
When the task of the system is to output a real number that matches the data, it is called regression. Regression techniques were initially developed in the 19th century but are now understood as a core part of the ML landscape.
As with earlier models, our training data is a collection of input-output pairs $(\mathbf{x}_i, y_i)$, where the output values $y_i$ are now real numbers. Our goal is to find a function (our hypothesis) that takes an $\mathbf{x}$ and outputs the appropriate $y$.
Fundamental Assumptions
The function we are trying to find is linear.
The data is noisy, meaning it is impossible to find an exact match function—there will always be some error.
Defining the Error
First, let’s call the function we develop $h$, for “hypothesis” (since $f$ was already used for our feature function). Our error will be defined as:

$$E = \sum_{i=1}^{N} \left( h(\mathbf{x}_i) - y_i \right)^2$$

Since $h$ is linear, it is just some constants (weights) times each of the input elements:

$$h(\mathbf{x}) = \sum_{j=0}^{d} w_j x_j = \mathbf{w}^T \mathbf{x}$$

Note: We start at 0 for the bias term ($w_0$, which multiplies a constant feature $x_0 = 1$). The reason $\mathbf{w}$ is transposed is that both $\mathbf{w}$ and $\mathbf{x}$ are treated as column vectors.
The Matrix Approach
Next, we’ll treat all the input as a single giant matrix $X$ (by transposing and stacking each input, so row $i$ of $X$ is $\mathbf{x}_i^T$):

$$X = \begin{bmatrix} \mathbf{x}_1^T \\ \mathbf{x}_2^T \\ \vdots \\ \mathbf{x}_N^T \end{bmatrix}$$

Now, we can re-write the error function as:

$$E(\mathbf{w}) = \|X\mathbf{w} - \mathbf{y}\|^2 = (X\mathbf{w} - \mathbf{y})^T (X\mathbf{w} - \mathbf{y}) = \mathbf{w}^T X^T X \mathbf{w} - 2\,\mathbf{w}^T X^T \mathbf{y} + \mathbf{y}^T \mathbf{y}$$

The last step above is simply the result of multiplying the expression out.
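To see that the matrix form really is the same sum of squared errors, here is a small check (the random data and names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 examples, 3 features each
y = rng.normal(size=5)
w = rng.normal(size=3)

# Per-example sum of squared errors...
loose = sum((X[i] @ w - y[i]) ** 2 for i in range(5))

# ...equals the matrix form of the error.
r = X @ w - y
assert np.isclose(loose, r @ r)
```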
Finding the Minimum Error
What we want to do is find the weights that minimize that error: $\mathbf{w}^* = \arg\min_{\mathbf{w}} E(\mathbf{w})$.

To do this, we do what we always do: take the derivative and set it equal to 0 to find the critical points. We write this in vector calculus as $\nabla_{\mathbf{w}} E(\mathbf{w}) = \mathbf{0}$, which is defined as:

$$\nabla_{\mathbf{w}} E = \begin{bmatrix} \frac{\partial E}{\partial w_0} & \frac{\partial E}{\partial w_1} & \cdots & \frac{\partial E}{\partial w_d} \end{bmatrix}^T$$

Useful Vector Calculus Identities:

$$\nabla_{\mathbf{w}} \left( \mathbf{w}^T \mathbf{a} \right) = \mathbf{a} \qquad \nabla_{\mathbf{w}} \left( \mathbf{w}^T A \mathbf{w} \right) = (A + A^T)\,\mathbf{w}$$

$X^T X$ is symmetrical, so $\nabla_{\mathbf{w}} \left( \mathbf{w}^T X^T X \mathbf{w} \right) = 2\,X^T X \mathbf{w}$.

Using these facts, we can determine the gradient:

$$\nabla_{\mathbf{w}} E = 2\,X^T X \mathbf{w} - 2\,X^T \mathbf{y} = 2\left( X^T X \mathbf{w} - X^T \mathbf{y} \right)$$
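A quick way to trust this gradient is to compare it against a finite-difference approximation; this sketch (my own construction) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
w = rng.normal(size=3)

E = lambda w: np.sum((X @ w - y) ** 2)

# Analytic gradient from the derivation above.
grad = 2 * (X.T @ X @ w - X.T @ y)

# Central-difference approximation, one coordinate at a time.
eps = 1e-6
approx = np.array([
    (E(w + eps * np.eye(3)[j]) - E(w - eps * np.eye(3)[j])) / (2 * eps)
    for j in range(3)
])
assert np.allclose(grad, approx, atol=1e-4)
```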
The Solution (The Algorithm)
Since the factor of 2 will never be 0, we only need to worry about the term in the parentheses:

$$X^T X \mathbf{w} - X^T \mathbf{y} = 0 \quad\Longrightarrow\quad X^T X \mathbf{w} = X^T \mathbf{y}$$

Now, we simply invert and multiply:

$$\mathbf{w} = \left( X^T X \right)^{-1} X^T \mathbf{y}$$

And that last line is literally the whole algorithm.
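Here is the whole algorithm as a few lines of NumPy (the synthetic data, seed, and variable names are my own illustration):

```python
import numpy as np

# Synthetic noisy data from a known line: y = 3x + 2 + noise.
rng = np.random.default_rng(42)
x = rng.uniform(-5, 5, size=50)
y = 3 * x + 2 + rng.normal(scale=0.5, size=50)

# Build X with a column of ones so w[0] is the bias term w_0.
X = np.column_stack([np.ones_like(x), x])

# The normal equations: w = (X^T X)^{-1} X^T y.
w = np.linalg.inv(X.T @ X) @ X.T @ y
print(w)  # approximately [2, 3]

# np.linalg.lstsq solves the same problem with better numerical stability.
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(w, w_lstsq)
```

Recovering weights close to the true slope 3 and bias 2 is exactly the behavior the derivation predicts: the noise keeps the fit from being exact, but the squared error is minimized.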