Mathematics Math21b Spring 2009
Linear Algebra and Differential Equations
Lecture plan
Course Head: Oliver Knill
Office: SciCtr 434
1. Lecture of Week 1: Introduction to linear systems (Feb 2) A central point of this week is Gauss-Jordan elimination, introduced in the second lecture. After introducing ourselves, we look at examples of systems of linear equations. The aim is to illustrate where such systems occur and how one can solve them with 'ad hoc' methods. This involves combining equations in a clever way or eliminating variables until only one variable is left. We see examples with no solution, several solutions, or exactly one solution.
2. Lecture of Week 1: matrices and Gauss-Jordan elimination (Feb 4) We rewrite systems of linear equations using matrices and introduce the Gauss-Jordan elimination steps: scaling a row, swapping rows, or subtracting a multiple of one row from another row. We also see an example where there is neither exactly one solution nor no solution, but infinitely many. Unlike in multi-variable calculus, we distinguish between column vectors and row vectors. Column vectors are n x 1 matrices, and row vectors are 1 x m matrices. A general n x m matrix has m columns and n rows. The output of Gauss-Jordan elimination is a matrix rref(A) which is in reduced row echelon form: the first nonzero entry in each row is 1, called a leading 1, every column with a leading 1 has no other nonzero entries, and every row above a row with a leading 1 has its leading 1 further to the left. "Leaders want to be first, alone in their column and have leaders above them to the left".
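For readers who like to experiment on a computer, here is a minimal sketch of row reduction using Python's sympy library (not part of the course); the system is a made-up example.

  import sympy as sp

  # Augmented matrix of the made-up system x + 2y = 3, 3x + 4y = 5
  A = sp.Matrix([[1, 2, 3],
                 [3, 4, 5]])
  R, pivots = A.rref()   # reduced row echelon form and the pivot columns
  print(R)               # Matrix([[1, 0, -1], [0, 1, 2]]) -> x = -1, y = 2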
3. Lecture of Week 1: on solutions of linear systems (Feb 6) How many solutions does a system of linear equations have? The goal of this lecture is to see that there are three possibilities: exactly one solution, no solution, or infinitely many solutions. This can be explained very well geometrically, as well as algorithmically from the Gauss-Jordan elimination point of view. We also mention that one can see a system of linear equations Ax = b in two different ways: the column picture tells us that b = x1 v1 + ... + xn vn is a linear combination of the column vectors vi of A; the row picture tells us that the dot products of the row vectors wi of A with x are the components wi . x = bi of the vector b.
1. Lecture of Week 2: Linear transformations and their inverses (Feb 9) This week provides a link between the geometric and algebraic description of linear transformations. Linear transformations are introduced formally as transformations T(x) = A x, where A is a matrix. We learn how to distinguish between linear and nonlinear transformations and between linear and affine transformations. The transformation T(x) = x+5, for example, is not linear because 0 is not mapped to 0. We characterize linear transformations by 3 properties: (i) T(0) = 0, (ii) T(x+y) = T(x) + T(y), and (iii) T(s x) = s T(x).
2. Lecture of Week 2: Linear transformations in geometry (Feb 13) Rotations, dilations, projections, reflections, rotation-dilations or shears: how are they described algebraically? The main point of this lecture is to see how to translate back and forth between the algebraic and the geometric description. The key fact is that the column vectors of a matrix are the images of the standard basis vectors. We derive the matrices for each of the mentioned geometric transformations.
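As a small illustration of the "columns are images of the basis vectors" principle, here is a sketch in Python with numpy (not part of the course), building a rotation matrix from where e1 and e2 go.

  import numpy as np

  # For a rotation by angle t, e1 goes to (cos t, sin t) and e2 to (-sin t, cos t);
  # these images become the columns of the matrix.
  t = np.pi / 2
  R = np.column_stack([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
  print(R @ np.array([1.0, 0.0]))   # approximately (0, 1): a quarter turn of e1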
3. Lecture of Week 2: Matrix product and inverse (Feb 15) The composition of linear transformations corresponds to the product of matrices. The inverse of a transformation is described by the inverse of the matrix. We can thus treat square matrices much like numbers: they can be added and multiplied, and many matrices have inverses. There are only two things to be careful about: the product of two matrices is not commutative, and many nonzero matrices have no inverse. If we take the product of an n x p matrix with a p x m matrix, we obtain an n x m matrix. The dot product is a special case of the matrix product, between a 1 x n matrix and an n x 1 matrix. It produces a 1 x 1 matrix, which is a scalar.
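A quick numerical sketch (numpy, made-up matrices, not part of the course) of the two warnings: products do not commute, while invertible matrices behave like nonzero numbers.

  import numpy as np

  A = np.array([[1., 2.], [3., 4.]])
  B = np.array([[0., 1.], [1., 0.]])     # swapping matrix
  print(A @ B)                           # [[2, 1], [4, 3]]  (columns of A swapped)
  print(B @ A)                           # [[3, 4], [1, 2]]  (rows of A swapped) -> AB != BA
  print(np.linalg.inv(A) @ A)            # the identity matrix, since A is invertible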
1. Lecture of Week 3: Image and kernel (Feb 17) We define the notion of a linear subspace of n-dimensional space and the span of a set of vectors. This is a preparation for the more abstract definition of linear spaces appearing later. The main algorithm is the computation of the kernel and image of a linear transformation using row reduction. The image of a matrix A is spanned by the columns of A for which rref(A) has a leading 1. The kernel of a matrix A is parametrized by "free variables", the variables for which there is no leading 1 in rref(A). For an n x n matrix, the kernel is trivial if and only if the matrix is invertible. The kernel is always nontrivial if the n x m matrix satisfies m > n. If we have more variables than equations, then the system has either no solution or infinitely many solutions.
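Here is a small sympy sketch (not part of the course, made-up matrix) showing kernel and image read off from row reduction.

  import sympy as sp

  A = sp.Matrix([[1, 2, 3],
                 [2, 4, 6]])             # made-up 2x3 matrix of rank 1
  print(A.rref()[0])                     # leading 1 only in the first column
  print(A.columnspace())                 # image: spanned by the first column of A
  print(A.nullspace())                   # kernel: two free variables -> two basis vectors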
2. Lecture of Week 3: Basis and linear independence (Feb 19) With the previously defined "span" and the newly introduced linear independence, one can define a basis of a linear space: a set of vectors which spans the space and is linearly independent. The standard basis of Rn is an example of a basis. We show that if we have a basis, then every vector can be represented uniquely as a linear combination of the basis elements. A typical task is to find a basis of the kernel and a basis of the image of a linear transformation.
1. Lecture of Week 4: Dimension and linear spaces (Feb 22) This is a rather abstract week. The concept of abstract linear spaces allows us to talk also about linear spaces of functions, for example. This will be useful for applications in differential equations. We show first that the number of basis elements is independent of the basis; it is called the dimension. The proof uses that if p vectors are linearly independent and q vectors span a linear subspace V, then p is less than or equal to q. We see the rank-nullity theorem: dim ker(A) + dim im(A) equals the number of columns of A.
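A one-line sanity check of the rank-nullity theorem in sympy (not part of the course), on a made-up matrix:

  import sympy as sp

  A = sp.Matrix([[1, 2, 3, 4],
                 [2, 4, 6, 8],
                 [1, 1, 1, 1]])
  print(A.rank() + len(A.nullspace()), A.cols)   # dim im(A) + dim ker(A) = 4 = number of columns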
2. Lecture of Week 4: Coordinates (Feb 27) We look at the coordinates [x]B of a vector with respect to an arbitrary basis B. Given a basis B of Rn, one defines the matrix S which contains the basis vectors as columns. The coordinate transformation from the standard basis to the new basis is encoded in this invertible matrix S. We learn how to write a matrix A in the new basis: it is B = S^-1 A S. If two matrices satisfy this relation, they are called similar.
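A short numpy sketch (not part of the course, made-up matrix and basis) of the change of basis B = S^-1 A S:

  import numpy as np

  A = np.array([[2., 1.], [0., 3.]])
  S = np.column_stack([[1., 0.], [1., 1.]])      # new basis v1 = (1,0), v2 = (1,1) as columns
  B = np.linalg.inv(S) @ A @ S
  print(B)                                       # [[2., 0.], [0., 3.]]: A is diagonal in this basis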
3. Lecture of Week 4: Linear spaces I (Feb 29) We generalize the concept of linear subspaces of Rn and consider abstract linear spaces. Examples are the space X = C([a,b]) of continuous functions on the interval [a,b], the space C^infinity of all smooth functions on the real axis, the space P5 of polynomials of degree at most 5, and the space M2 of 2x2 matrices.
1. Lecture of Week 5: Review for hourly I (Mar 3) This is a review for the first midterm. The plenary review already covered the main material, so this review can focus on questions, practice exams, or more True/False problems.
2. Lecture of Week 5: Linear spaces II (Mar 5) In this lecture we continue to study linear spaces. Since this is a rather abstract subject, it cannot hurt to see it again; it is also part of the exam. We can also look at the space of all n x m matrices as a linear space, as well as at spaces of solutions of differential equations. While the topic of linear differential equations will be treated later, this is a preview of how linear algebra enters into differential equations: the solution spaces of linear differential equations are linear spaces.
3. Lecture of Week 5: orthonormal bases and orthogonal projections (Mar 7) We review orthogonality of vectors u, v, expressed by u.v = 0, and then define the notion of an orthonormal basis, a basis which consists of unit vectors which are all orthogonal to each other. The orthogonal complement of a linear space V in Rn is defined as the set of all vectors perpendicular to all vectors in V. It can be found as the kernel of the matrix which contains a basis of V as rows. We then define the orthogonal projection onto a linear space. Given an orthonormal basis {u1, ..., un} of V, we have a formula for the orthogonal projection: P(x) = (u1.x) u1 + ... + (un.x) un.
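The projection formula in action, as a small numpy sketch (not part of the course) with a made-up orthonormal basis of a plane V in R^3:

  import numpy as np

  # P(x) = (u1.x) u1 + (u2.x) u2 for the orthonormal basis u1, u2 of V
  u1 = np.array([1., 1., 0.]) / np.sqrt(2)
  u2 = np.array([0., 0., 1.])
  x = np.array([3., 4., 5.])
  Px = (u1 @ x) * u1 + (u2 @ x) * u2
  print(Px)                              # [3.5, 3.5, 5.], the orthogonal projection of x onto V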
1. Lecture of Week 6: Gram-Schmidt and QR factorization (Mar 10) Gram-Schmidt orthogonalization leads to the QR decomposition of a matrix. We will look at this process geometrically as well as algebraically. The QR decomposition of an m x n matrix A produces a matrix Q with orthonormal columns and an upper triangular n x n matrix R so that A = QR. If A is an n x n matrix, then Q is what will later be called an orthogonal matrix. The Gram-Schmidt process is useful to obtain an orthonormal basis from a given basis.
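A quick numerical check of the QR decomposition with numpy (not part of the course), on a made-up 3x2 matrix:

  import numpy as np

  A = np.array([[1., 1.],
                [1., 0.],
                [0., 1.]])
  Q, R = np.linalg.qr(A)                       # Q has orthonormal columns, R is upper triangular
  print(np.allclose(Q @ R, A))                 # True: A = QR
  print(np.allclose(Q.T @ Q, np.eye(2)))       # True: the columns of Q are orthonormal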
2. Lecture of Week 6: Orthogonal transformations (Mar 12) After defining the transpose of a matrix, we look at orthogonal matrices, matrices for which A^T A = I. Rotations and reflections are examples of orthogonal transformations.
3. Lecture of Week 6: Least squares and data fitting (Mar 14) This is an important lecture from the application point of view. We learn how to fit data points with any finite set of functions. A very important special case is fitting a set of data with a linear function. The process is very general: we write down a linear system of equations Ax = b which pretends that all data lie on the curve. Then we find the least squares solution x of this system, which is x = (A^T A)^-1 A^T b. This formula can easily be derived from the fact that b - Ax is perpendicular to the image of A.
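Here is a small numpy sketch (not part of the course) fitting a line y = c + m t through made-up data using the normal equations:

  import numpy as np

  t = np.array([0., 1., 2., 3.])
  y = np.array([1., 2., 2., 4.])               # made-up data points that are not exactly on a line
  A = np.column_stack([np.ones_like(t), t])    # the system A x = y pretends the data lie on the line
  x = np.linalg.solve(A.T @ A, A.T @ y)        # least squares solution x = (A^T A)^-1 A^T y
  print(x)                                     # [0.9, 0.9]: the best fit is y = 0.9 + 0.9 t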
1. Lecture of Week 7: Determinants I (Mar 17) The determinant is defined by the permutation definition det(A) = sum over all patterns p of (-1)^upcrossings(p) A(1,p(1)) ... A(n,p(n)). It immediately implies the Laplace expansion. The computation of the determinant can be tedious. A faster method is to use Gauss-Jordan elimination. To derive this, one has to see what happens under the three elimination steps: swap, scale and subtract.
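As an illustration (not part of the course), here is a short Python sketch of the Laplace expansion along the first row, checked against numpy's determinant on a made-up 3x3 matrix:

  import numpy as np

  def laplace_det(A):
      # Laplace expansion along the first row (for illustration only; cost grows like n!)
      n = A.shape[0]
      if n == 1:
          return A[0, 0]
      return sum((-1) ** j * A[0, j] *
                 laplace_det(np.delete(np.delete(A, 0, 0), j, 1)) for j in range(n))

  A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
  print(laplace_det(A), np.linalg.det(A))      # both 18.0 (up to rounding)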
2. Lecture of Week 7: Determinants II (Mar 19) Some techniques for computing the determinant: the permutation definition, Gauss-Jordan elimination, Laplace expansion, and special formulas for triangular matrices and partitioned matrices. Later we will learn how to compute determinants by computing eigenvalues.
3. Lecture of Week 7: Eigenvalues (Mar 21) Eigenvalues and eigenvectors are defined relatively late in this course. It is good to see them in concrete examples like rotations, reflections and shears. As in the book, it is a good idea to motivate eigenvalues with a discrete dynamical system problem, like finding the growth rate of the Fibonacci sequence. Here it becomes evident why computing eigenvalues and eigenvectors matters.
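A quick numerical look at the Fibonacci motivation (numpy sketch, not part of the course): the growth rate is the largest eigenvalue of the recursion matrix.

  import numpy as np

  # The Fibonacci recursion (x_{n+1}, x_n) = A (x_n, x_{n-1}) with A = [[1,1],[1,0]]
  A = np.array([[1., 1.], [1., 0.]])
  eigenvalues = np.linalg.eigvals(A)
  print(np.max(eigenvalues.real))              # 1.618..., the golden ratio: the growth rate
  print((1 + np.sqrt(5)) / 2)                  # the same number in closed form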
SPRING BREAK Hurray! We hope you enjoy the break.
1. Lecture of Week 8: Eigenvectors (Mar 31) After spring break, you might have forgotten about eigenvalues and need a reminder. Computing eigenvectors reduces to the computation of the kernel of a linear transformation.
2. Lecture of Week 8: Diagonalization (Apr 2) If all eigenvalues of a matrix A are different, one can diagonalize A. We also see that if eigenvalues are repeated, as for the shear matrix, one can not always diagonalize. If the eigenvalues are complex, as for a rotation, one can not diagonalize over the reals. Since we like diagonalization, we include complex numbers from now on.
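A numerical sketch of diagonalization with numpy (not part of the course), on a made-up matrix with distinct eigenvalues:

  import numpy as np

  A = np.array([[4., 1.], [2., 3.]])                 # eigenvalues 5 and 2 are distinct
  eigenvalues, S = np.linalg.eig(A)                  # the columns of S are eigenvectors
  D = np.diag(eigenvalues)
  print(np.allclose(S @ D @ np.linalg.inv(S), A))    # True: A = S D S^-1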
3. Lecture of Week 8: Complex eigenvalues (Apr 4) We start with a very short review of complex numbers in class. Course assistants will do more to get you up to speed with complex numbers. The fundamental theorem of algebra says that a polynomial of degree n has n roots, when counted with multiplicities. We express the determinant and trace of a matrix in terms of its eigenvalues. Unlike in the real case, these formulas now hold for any matrix.
1. Lecture of Week 9: Review for second midterm (Apr 7) We review for the second midterm in section. Since there was a plenary review for all students covering the theory, one can focus on questions, see the big picture, or discuss some True/False problems.
2. Lecture of Week 9: Stability (Apr 9) We study the stability problem for discrete dynamical systems. The absolute values of the eigenvalues determine the stability of the transformation. If all eigenvalues are smaller than 1 in absolute value, then the origin is asymptotically stable. We also mention the case when the matrix is not diagonalizable, like S/2, where S is the shear matrix and the shear expansion competes with the contraction on the diagonal.
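A numerical illustration of the stability criterion (numpy sketch, not part of the course, made-up matrix):

  import numpy as np

  # If all eigenvalues have absolute value < 1, the orbit x, Ax, A^2 x, ... tends to 0.
  A = np.array([[0.5, 0.4], [-0.3, 0.6]])
  print(np.abs(np.linalg.eigvals(A)))          # both moduli are below 1
  x = np.array([1., 1.])
  for _ in range(50):
      x = A @ x
  print(np.linalg.norm(x))                     # tiny: the origin is asymptotically stable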
3. Lecture of Week 9: Symmetric matrices (Apr 11) The main point of this lecture is to see that symmetric matrices can be diagonalized. The key fact is that eigenvectors of a symmetric matrix to different eigenvalues are perpendicular to each other. This implies, for example, that for a symmetric matrix, kernel and image are perpendicular to each other. It is enough to see the diagonalization theorem intuitively: if one perturbs the matrix so that the eigenvalues are different, then one can diagonalize. We also want to see that diagonalization is not always possible. The shear is the bad guy. One can also mention without proof the Jordan normal form, which gives the definitive answer to the diagonalization question.
1. Lecture of Week 10: Differential equations I (Apr 14) We learn to solve linear differential equations by diagonalization. We discuss linear stability of the origin. Unlike in the discrete time case, where the absolute values of the eigenvalues mattered, the real parts of the eigenvalues are now important. It is always good to keep in mind the one-dimensional case, where these facts are obvious. The point is that linear algebra allows us to reduce the higher dimensional case to the one-dimensional case.
2. Lecture of Week 10: Differential equations II (Apr 16) A second lecture is necessary for the important topic of applying linear algebra to solve differential equations x' = A x. While the central idea is to diagonalize A and solve y' = D y, where D is the diagonal matrix of eigenvalues, we can proceed a bit faster: write the initial condition x(0) as a linear combination of eigenvectors, x(0) = a1 v1 + ... + an vn, and get x(t) = a1 v1 e^(l1 t) + ... + an vn e^(ln t). We also look at examples where the eigenvalues li of the matrix A are complex. An important case for later is the harmonic oscillator with and without damping. There would be many more interesting examples from physics.
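The eigenvector recipe for x' = A x, as a small numpy sketch (not part of the course, made-up matrix and initial condition):

  import numpy as np

  # x(t) = a1 v1 e^(l1 t) + ... + an vn e^(ln t), with x(0) expanded in the eigenbasis
  A = np.array([[0., 1.], [2., 1.]])           # eigenvalues 2 and -1
  l, V = np.linalg.eig(A)                      # columns of V are eigenvectors vi
  x0 = np.array([1., 0.])
  a = np.linalg.solve(V, x0)                   # coefficients ai of x(0) in the eigenbasis
  t = 1.0
  print(V @ (a * np.exp(l * t)))               # x(1), approximately [2.71, 4.68]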
3. Lecture of Week 10: Nonlinear systems (Apr 18) This section is covered in a separate handout, also written by Otto Bretscher. How can nonlinear systems in two dimensions be analyzed using linear algebra? The key concepts are finding nullclines and equilibria and determining the nature of the equilibria by linearizing the system near them. Good examples are competing species systems (like the Murray example in the handout), predator-prey systems (like the Volterra system) or mechanical systems.
1. Lecture of Week 11: Operators on function spaces (Apr 21) We study linear maps (operators) on linear spaces. The main example is the differentiation operator D as well as polynomials in the operator D like D^2 + D + 1. The goal is to understand that we can see solutions of differential equations as kernels of linear operators, or write partial differential equations in the form u_t = T(u), where T is a linear operator.
2. Lecture of Week 11: linear differential operators (Apr 23) The main goal is to be able to solve linear higher order differential equations p(D) f = g using the operator method. Factor the polynomial p(D) = (D-a1)(D-a2)...(D-an) and invert each linear factor (D-ai). This is a general method which works unconditionally. It allows us to put together a "cookbook method" which describes how to find a special solution of the inhomogeneous problem.
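One can check a cookbook answer symbolically with Python's sympy (not part of the course); the equation below is a made-up example of the form p(D) f = g with p(D) = (D-1)(D-2).

  import sympy as sp

  t = sp.symbols('t')
  f = sp.Function('f')
  # (D-1)(D-2) f = f'' - 3 f' + 2 f = e^(3t)
  sol = sp.dsolve(f(t).diff(t, 2) - 3*f(t).diff(t) + 2*f(t) - sp.exp(3*t), f(t))
  print(sol)        # f(t) = C1*exp(t) + C2*exp(2*t) + exp(3*t)/2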
3. Lecture of Week 11: inner product spaces (Apr 25) As a preparation for Fourier theory, we introduce the concept of an inner product which generalizes the dot product. For 2pi-periodic functions, one can define <f,g> as the integral of f(x) g(x) from -pi to pi, divided by pi. It has all the properties we know from the dot product in finite dimensions. Another example of an inner product, on the space of matrices, is <A,B> = tr(A^T B). We mention that we can now do a lot of the geometry we did before: examples are length, angle, Gram-Schmidt orthogonalization, projections, reflections, orthogonal transformations, etc.
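A symbolic check of this inner product with sympy (not part of the course), using two sample trigonometric functions:

  import sympy as sp

  x = sp.symbols('x')
  def ip(f, g):
      # the inner product <f,g> = (1/pi) * integral of f*g from -pi to pi
      return sp.integrate(f * g, (x, -sp.pi, sp.pi)) / sp.pi

  print(ip(sp.cos(2*x), sp.sin(3*x)))   # 0: different basis functions are perpendicular
  print(ip(sp.cos(2*x), sp.cos(2*x)))   # 1: cos(n x) is a unit vector in this inner product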
1. Lecture of Week 12: Fourier theory I (Apr 29) The expansion of a function with respect to the orthonormal basis 1/sqrt(2), cos(nx), sin(nx) leads to Fourier theory. A nice example to see how Fourier theory is useful is to derive the Leibniz series for pi/4. The main motivation is that the Fourier basis is an orthonormal eigenbasis of the operator D^2; it diagonalizes this operator. We will then use this for partial differential equations.
2. Lecture of Week 12: Fourier theory II (Apr 31) The Parseval identity is the "Pythagorean theorem" of Fourier theory. It is useful to estimate how fast a finite sum approximation converges. We also mention applications like the computation of series by the Parseval identity or by relating them to a Fourier series. Nice examples are the computations of zeta(2) or zeta(4) using the Parseval identity.
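As a sanity check (numpy, not part of the course): the Parseval identity applied to f(x) = x on (-pi, pi) gives zeta(2) = sum of 1/n^2 = pi^2/6, which can be verified numerically.

  import numpy as np

  n = np.arange(1, 100001)
  print(np.sum(1.0 / n**2), np.pi**2 / 6)      # both approximately 1.6449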
3. Lecture of Week 12: Partial differential equations (May 2) Linear partial differential equations u_t = p(D) u with an even polynomial p are solved in the same way as ordinary differential equations: by diagonalization. The Fourier basis diagonalizes the "matrix" D^2 and so the polynomial p(D^2). This is much more powerful than the separation of variables method, which we do not do in this course. For example, the PDE u_tt = u_xx - u_xxxx + 10 u can be solved nicely with Fourier methods. We can even solve partial differential equations with a driving force, like u_tt = u_xx - u + sin(t), or higher dimensional heat and wave equations, as the homework shows.
Please send questions and comments to math21b@fas.harvard.edu
Math21b (Exam Group 1) | Oliver Knill | Spring 2009 | Department of Mathematics | Faculty of Arts and Sciences | Harvard University