Mathematics Math21b Spring 2008
Linear Algebra and Differential Equations
Lecture plan
Course Head: Oliver Knill
Office: SciCtr 434
1. Lecture Week 1: Introduction to linear systems (Feb 4) The central point of this week is Gauss-Jordan elimination. After introducing ourselves, we look at examples of systems of linear equations. The point is to illustrate where such systems occur and how one can solve them with 'ad hoc' methods. This involves solving equations by combining equations in a clever way or by eliminating variables until only one variable is left. The course assistants organize problem session times.
2. Lecture Week 1: matrices and Gauss-Jordan elimination (Feb 6) We rewrite systems of linear equations using matrices and introduce the Gauss-Jordan elimination steps: scaling a row, swapping two rows, or subtracting a multiple of one row from another row. We also see examples where a system has no solution or more than one solution. Unlike in multivariable calculus, we do distinguish between column vectors and row vectors. Column vectors are nx1 matrices, and row vectors are 1xm matrices. A general nxm matrix has m columns and n rows. The end product of Gauss-Jordan elimination is a new matrix rref(A) which is in reduced row echelon form: the first nonzero entry in each row is 1, called a leading 1, every column with a leading 1 has no other nonzero entries, and every row above a row with a leading 1 has a leading 1 to the left. "Leaders want to be first, alone in their column and have leaders above them to the left."
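For readers who want to experiment on the computer, here is an optional sketch (not part of the course material) computing the reduced row echelon form with the sympy library; the small system is made up for illustration:

    # optional sketch: rref of an augmented matrix with sympy
    from sympy import Matrix

    # augmented matrix of the system x + 2y = 5, 3x + 4y = 6
    A = Matrix([[1, 2, 5],
                [3, 4, 6]])
    R, pivot_columns = A.rref()   # R is the reduced row echelon form of A
    print(R)                      # Matrix([[1, 0, -4], [0, 1, 9/2]])
    print(pivot_columns)          # (0, 1): the columns containing a leading 1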
3. Lecture Week 1: on solutions of linear systems (Feb 8) How many solutions does a system of linear equations have? The goal of the lecture is to see that there are three possibilities: exactly one solution, no solution, or infinitely many solutions. This can be explained very well geometrically, as well as algorithmically from the Gauss-Jordan elimination point of view. We also mention that one can see a system of linear equations Ax=b in two different ways: the column picture says that b = x1 v1 + ... + xn vn is a combination of the column vectors vi of A, while the row picture says that the dot products of the row vectors wi with x give the components wi . x = bi of b.
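Both pictures can be checked numerically; in this optional sketch the matrix and vector are chosen only for illustration:

    # optional sketch: column picture and row picture of A x = b
    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.]])
    x = np.array([5., -1.])

    column_picture = x[0] * A[:, 0] + x[1] * A[:, 1]    # x1 v1 + x2 v2
    row_picture = np.array([A[0, :] @ x, A[1, :] @ x])  # (w1 . x, w2 . x)
    print(np.allclose(A @ x, column_picture))   # True
    print(np.allclose(A @ x, row_picture))      # True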
1. Lecture Week 2: Linear transformations and their inverses (Feb 11) This week provides a link between the geometric and the algebraic description of linear transformations. Linear transformations are introduced formally as transformations T(x) = A x, where A is a matrix. We learn how to distinguish between linear and nonlinear transformations, and between linear and affine transformations. The transformation T(x) = x+5, for example, is not linear because 0 is not mapped to 0. We characterize linear transformations by 3 properties: (i) T(0) = 0, (ii) T(x+y) = T(x) + T(y) and (iii) T(s x) = s T(x).
2. Lecture Week 2: Linear transformations in geometry (Feb 13) Rotations, dilations, projections, reflections, rotation-dilations and shears. How are they described algebraically? The main point is to see how to go back and forth between the algebraic and the geometric description. The key fact is that the column vectors of a matrix are the images of the standard basis vectors. We derive the matrices for each of the mentioned geometric transformations.
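The key fact can be verified directly; this optional sketch uses a rotation by an angle chosen just for illustration:

    # optional sketch: the columns of a matrix are the images of the basis vectors
    import numpy as np

    t = np.pi / 3
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])   # counterclockwise rotation by the angle t

    e1, e2 = np.array([1., 0.]), np.array([0., 1.])
    print(np.allclose(R @ e1, R[:, 0]))   # True: the first column is the image of e1
    print(np.allclose(R @ e2, R[:, 1]))   # True: the second column is the image of e2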
3. Lecture Week 2: Matrix product and inverse (Feb 15) The composition of linear transformations corresponds to the product of matrices. The inverse of a transformation is described by the inverse of the matrix. Square matrices can therefore be treated much like numbers: we can add them, multiply them with scalars, and many matrices have inverses. There are two things to be careful about: the product of two matrices is not commutative, and many nonzero matrices have no inverse. If we take the product of an nxp matrix with a pxm matrix, we obtain an nxm matrix. The dot product is a special case of the matrix product, between a 1xn matrix and an nx1 matrix; it produces a 1x1 matrix, a scalar.
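Both caveats are easy to see in small examples; this optional sketch uses a shear and a rotation chosen for illustration:

    # optional sketch: AB differs from BA, and a nonzero matrix can fail to be invertible
    import numpy as np

    A = np.array([[1., 1.],
                  [0., 1.]])    # a shear
    B = np.array([[0., -1.],
                  [1.,  0.]])   # a rotation by 90 degrees
    print(np.allclose(A @ B, B @ A))   # False: the product is not commutative

    C = np.array([[1., 2.],
                  [2., 4.]])    # nonzero, but the second row is twice the first
    print(np.linalg.det(C))            # 0.0: C has no inverse
    print(np.linalg.inv(A) @ A)        # the identity matrix: the shear is invertible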
1. Lecture Week 3: Image and kernel (Feb 20) We define the notion of a linear subspace of n-dimensional space and the span of a set of vectors. This is a preparation for the more abstract definition of linear spaces which comes later. The main algorithm is the computation of the kernel and the image of a linear transformation using row reduction. The image of a matrix A is spanned by the columns of A which have a leading 1 in rref(A). The kernel of a matrix A is parametrized by the "free variables", the variables for which there is no leading 1 in rref(A). For an nxn matrix, the kernel is trivial if and only if the matrix is invertible. The kernel is always nontrivial if the nxm matrix satisfies m>n, that is, if there are more variables than equations.
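Both computations can be checked with sympy; the matrix in this optional sketch is made up for illustration:

    # optional sketch: image and kernel of a matrix with sympy
    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6]])      # rank 1: only one leading 1 in rref(A)
    print(A.rref()[0])           # Matrix([[1, 2, 3], [0, 0, 0]])
    print(A.columnspace())       # a basis of the image: the single column (1, 2)
    print(A.nullspace())         # a basis of the kernel: two vectors, one per free variable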
2. Lecture Week 3: Basis and linear independence (Feb 22) With the previously defined "span" and the newly introduced linear independence, one can define a basis for a linear space. It is a set of vectors which span the space and are linearly independent. The standard basis of Rn is an example of a basis. We show that if we have a basis, then every vector can be represented uniquely as a linear combination of basis elements. A typical task is to find a basis of the kernel and a basis of the image of a linear transformation.
1. Lecture Week 4: Dimension and linear spaces (Feb 25) This is a rather abstract week. The concept of abstract linear spaces allows us to talk also about linear spaces of functions, for example. This will be useful for applications in differential equations. We show first that the number of basis elements is independent of the basis. It is called the dimension. The proof uses the fact that if p vectors in a linear subspace V are linearly independent and q vectors span V, then p is less than or equal to q. We see the rank-nullity theorem: dim ker(A) + dim im(A) equals the number of columns of A.
2. Lecture Week 4: Coordinates (Feb 27) While everybody knows what a coordinate of a vector is, we look at coordinates [x]B with respect to an arbitrary basis B. Given a basis B in Rn, one defines the matrix S which contains the basis vectors as columns. This invertible matrix S translates between the new and the standard coordinates: x = S [x]B, so [x]B = S-1 x. We learn how to write a matrix A in the new basis: it is B = S-1 A S. If two matrices satisfy this relation, they are called similar.
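An optional numerical sketch (the basis and the matrix are chosen for illustration only):

    # optional sketch: coordinates [x]_B and the matrix S^{-1} A S
    import numpy as np

    v1, v2 = np.array([1., 1.]), np.array([1., -1.])   # a basis of R^2
    S = np.column_stack([v1, v2])                      # basis vectors as columns

    x = np.array([3., 1.])
    x_B = np.linalg.solve(S, x)     # coordinates [x]_B, defined by S [x]_B = x
    print(x_B)                      # [2. 1.]: x = 2 v1 + 1 v2

    A = np.array([[0., 1.],
                  [1., 0.]])        # reflection about the line y = x
    B = np.linalg.inv(S) @ A @ S    # the same transformation in the basis v1, v2
    print(B)                        # the diagonal matrix diag(1, -1)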
3. Lecture Week 4: Linear spaces I (Feb 29) We generalize the concept of linear subspaces of Rn and consider abstract linear spaces. Examples are the space X=C([a,b]) of continuous functions on the interval [a,b], the space P5 of polynomials of degree at most 5, and the space of 2x2 matrices.
1. Lecture Week 5: Review for hourly I (Mar 3) This is the review for the first midterm. The plenary review already covered all the material again, so this review can focus on questions or on some TF problems.
2. Lecture Week 5: Linear spaces II (Mar 5) In this lecture we continue to look at linear spaces. Since this is a rather abstract subject, it cannot hurt to see it again; it is also part of the exam. We can look at the space of all nxm matrices as a linear space, as well as at spaces of solutions of differential equations. While the topic of linear differential equations will be treated later, this is a preview of how linear algebra enters differential equations: the solution spaces of linear differential equations are linear spaces.
3. Lecture Week 5: orthonormal bases and orthogonal projections (Mar 7) We review orthogonality between vectors u,v, expressed by u.v=0, and then define the notion of an orthonormal basis, a basis which consists of unit vectors which are all orthogonal to each other. The orthogonal complement of a linear subspace V of Rn is defined as the set of all vectors perpendicular to all vectors in V. It can be found as the kernel of the matrix which contains a basis of V as rows. We then define the orthogonal projection onto a linear subspace. Given an orthonormal basis {u1, ..., un} of V, we have a formula for the orthogonal projection: P(x) = (u1.x) u1 + ... + (un.x) un.
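The projection formula is easy to try out; in this optional sketch the orthonormal basis of a plane in R^3 is made up for illustration:

    # optional sketch: P(x) = (u1.x) u1 + (u2.x) u2 for an orthonormal basis u1, u2 of V
    import numpy as np

    u1 = np.array([1., 0., 0.])
    u2 = np.array([0., 1., 1.]) / np.sqrt(2)   # together with u1 an orthonormal basis of a plane V

    x = np.array([2., 3., 5.])
    Px = (u1 @ x) * u1 + (u2 @ x) * u2         # orthogonal projection of x onto V
    print(Px)                                  # [2. 4. 4.]
    print((x - Px) @ u1, (x - Px) @ u2)        # both 0: the error x - P(x) is perpendicular to V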
1. Lecture Week 6: Gram-Schmidt and QR factorization (Mar 10) Gram-Schmidt orthogonalization leads to the QR factorization of a matrix. We will look at this process geometrically as well as algebraically.
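An optional sketch (the matrix is made up; numpy's QR routine agrees with Gram-Schmidt on the columns up to signs):

    # optional sketch: QR factorization A = QR with orthonormal columns in Q and upper triangular R
    import numpy as np

    A = np.array([[1., 1.],
                  [1., 0.],
                  [0., 1.]])                # a 3x2 matrix with independent columns
    Q, R = np.linalg.qr(A)
    print(np.allclose(Q.T @ Q, np.eye(2)))  # True: the columns of Q are orthonormal
    print(np.allclose(Q @ R, A))            # True: A = Q R
    print(np.allclose(R, np.triu(R)))       # True: R is upper triangular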
2. Lecture Week 6: Orthogonal transformations (Mar 12) After defining the transpose of a matrix, we look at orthogonal matrices, matrices for which A^T A = I. Rotations and reflections are examples of orthogonal transformations.
3. Lecture Week 6: Least squares and data fitting (Mar 14) This is an important lecture from the point of view of applications. We learn how to fit data points with any finite set of functions. An example is to fit a set of data points with a linear function.
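Here is an optional sketch with made-up data points, fitting a line by least squares:

    # optional sketch: least squares fit of a line y = c0 + c1 t to data points
    import numpy as np

    t = np.array([0., 1., 2., 3.])
    y = np.array([1.1, 1.9, 3.2, 3.9])

    A = np.column_stack([np.ones_like(t), t])   # columns: the functions 1 and t evaluated at the data
    c, residual, rank, sv = np.linalg.lstsq(A, y, rcond=None)
    print(c)                                    # about [1.07, 0.97]: the best intercept and slope

    # equivalently, solve the normal equations A^T A c = A^T y
    print(np.linalg.solve(A.T @ A, A.T @ y))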
1. Lecture Week 7: Determinants I (Mar 17) The book now defines the determinant by Laplace expansion. The advantage of the older permutation definition is that it easily implies the Laplace expansion and allows one to derive all the properties of determinants comfortably from the original definition. It is fine to go with the Laplace expansion as the book does.
2. Lecture Week 7: Determinants II (Mar 19) Some techniques students should know: computation of the determinant by Gauss-Jordan elimination, by Laplace expansion, for triangular matrices and for partitioned matrices.
3. Lecture Week 7: Eigenvalues (Mar 21) Eigenvalues and eigenvectors are defined relatively late in this course. It is good to see them in concrete examples like rotations, reflections and shears. As in the book, it is a good idea to motivate eigenvalues with a discrete dynamical system problem, like finding the growth rate of the Fibonacci sequence. Here it becomes evident why computing eigenvalues and eigenvectors matters.
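An optional sketch of the Fibonacci example, using the standard 2x2 recursion matrix:

    # optional sketch: the growth rate of the Fibonacci numbers is the largest eigenvalue
    import numpy as np

    A = np.array([[1., 1.],
                  [1., 0.]])            # (F_{n+1}, F_n) = A (F_n, F_{n-1})
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                  # about [ 1.618, -0.618]
    print((1 + np.sqrt(5)) / 2)         # the golden ratio, the growth rate of F_n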
SPRING BREAK Hurray! We hope you enjoy the break.
1. Lecture Week 8: Eigenvectors (Mar 31) After spring break, you might have forgotten about eigenvalues and need a reminder. Computing eigenvectors reduces to the computation of a kernel: the eigenvectors to an eigenvalue lambda are the nonzero vectors in the kernel of A - lambda I.
2. Lecture Week 8: Diagonalization (Apr 2) If all eigenvalues of a matrix A are different, one can diagonalize A. We also see that diagonalization can fail when eigenvalues coincide, as for the shear matrix. If the eigenvalues are complex, like for a rotation, one cannot diagonalize over the reals. Since we like diagonalization, we include complex numbers from now on.
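An optional sketch (the diagonalizable matrix is made up for illustration; the shear is the example from the lecture):

    # optional sketch: diagonalization A = S D S^{-1}, and a shear which cannot be diagonalized
    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])                         # eigenvalues 3 and 1
    eigenvalues, S = np.linalg.eig(A)                # the columns of S are eigenvectors
    D = np.diag(eigenvalues)
    print(np.allclose(A, S @ D @ np.linalg.inv(S)))  # True: A = S D S^{-1}

    shear = np.array([[1., 1.],
                      [0., 1.]])                     # both eigenvalues are 1
    vals, vecs = np.linalg.eig(shear)
    print(np.abs(np.linalg.det(vecs)))               # essentially 0: the eigenvectors are parallel, no eigenbasis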
3. Lecture Week 8: Complex eigenvalues (Apr 4) We start with a very short review of complex numbers in class. Course assistants will do more to get you up to speed with complex numbers. The fundamental theorem of algebra says that a polynomial of degree n has n roots, when counted with multiplicities. We express the determinant and the trace of a matrix in terms of its eigenvalues. Unlike in the real case, these formulas now hold for any matrix.
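An optional check of the trace and determinant formulas, using the 90-degree rotation as example:

    # optional sketch: trace = sum of eigenvalues, determinant = product of eigenvalues
    import numpy as np

    A = np.array([[0., -1.],
                  [1.,  0.]])                       # rotation by 90 degrees, eigenvalues i and -i
    eigenvalues = np.linalg.eigvals(A)
    print(eigenvalues)                              # [0.+1.j, 0.-1.j]
    print(np.prod(eigenvalues), np.linalg.det(A))   # both equal 1
    print(np.sum(eigenvalues), np.trace(A))         # both equal 0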
1. Lecture Week 9: Review for second midterm (Apr 7) We review for the second midterm in section. Since there was a plenary review for all students covering the theory, one could focus on questions and see the big picture or discuss some TF problems.
2. Lecture Week 9: Stability (Apr 9) We study the stability problem for discrete dynamical systems. The absolute values of the eigenvalues determine the stability of the transformation. If all eigenvalues are smaller than 1 in absolute value, then the origin is asymptotically stable. We also mention the case when the matrix is not diagonalizable, like S/2, where S is the shear: here the shear expansion competes with the contraction in the diagonal.
3. Lecture Week 9: Symmetric matrices (Apr 11) The main point of this lecture is to see that symmetric matrices can be diagonalized. The key fact is that eigenvectors of a symmetric matrix to different eigenvalues are perpendicular to each other. This implies for example that for a symmetric matrix, kernel and image are perpendicular to each other. It is enough to see the diagonalization theorem intuitively: if one perturbs the matrix so that the eigenvalues are different, then one can diagonalize. We also want to see that diagonalization is not always possible in general. The shear is the bad guy. One can also mention, without proof, the Jordan normal form, which gives the definitive answer to the diagonalization question.
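An optional sketch of the spectral theorem (the symmetric matrix is made up for illustration):

    # optional sketch: a symmetric matrix has an orthonormal basis of eigenvectors
    import numpy as np

    A = np.array([[2., 1., 0.],
                  [1., 2., 1.],
                  [0., 1., 2.]])                            # symmetric
    eigenvalues, Q = np.linalg.eigh(A)                      # eigh is the routine for symmetric matrices
    print(np.allclose(Q.T @ Q, np.eye(3)))                  # True: the eigenvectors are orthonormal
    print(np.allclose(A, Q @ np.diag(eigenvalues) @ Q.T))   # True: A = Q D Q^T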
1. Lecture Week 10: Differential equations I (Apr 14) We learn to solve linear differential equations by diagonalization. We discuss the linear stability of the origin. Unlike in the discrete time case, where the absolute values of the eigenvalues mattered, it is now the real parts of the eigenvalues that are important. It is always good to keep in mind the one-dimensional case, where these facts are obvious. The point is that linear algebra allows us to reduce the higher-dimensional case to the one-dimensional case.
2. Lecture Week 10: Differential equations II (Apr 16) A second lecture is necessary for the important topic of applying linear algebra to solve differential equations x' = A x. While the central idea is to diagonalize A and solve y' = D y, where D is the diagonal matrix, we can proceed a bit faster: write the initial condition x(0) as a linear combination of eigenvectors, x(0) = a1 v1 + ... + an vn, and get x(t) = a1 exp(l1 t) v1 + ... + an exp(ln t) vn. We also look at examples where the eigenvalues l1, ..., ln of the matrix A are complex. An important case for later is the harmonic oscillator with and without damping. There would be many more interesting examples from physics.
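An optional sketch of this recipe (the matrix and the initial condition are chosen for illustration):

    # optional sketch: solve x' = A x by expanding x(0) in an eigenbasis of A
    import numpy as np

    A = np.array([[0., 1.],
                  [1., 0.]])              # eigenvalues 1 and -1
    x0 = np.array([2., 0.])

    eigenvalues, V = np.linalg.eig(A)     # the columns of V are eigenvectors v1, v2
    a = np.linalg.solve(V, x0)            # coefficients in x(0) = a1 v1 + a2 v2

    def x(t):
        # x(t) = a1 exp(l1 t) v1 + a2 exp(l2 t) v2
        return V @ (a * np.exp(eigenvalues * t))

    print(x(0.0))                         # recovers the initial condition [2, 0]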
3. Lecture Week 10: Nonlinear systems (Apr 18) This section is covered in a separate handout, also written by Otto Bretscher. How can nonlinear systems in two dimensions be analyzed using linear algebra? The key concepts are finding nullclines and equilibria and determining the nature of the equilibria by linearizing the system near them. Good examples are competing-species systems (like the Murray example in the handout), predator-prey systems (like the Volterra system) or mechanical systems.
1. Lecture Week 11: Operators on function spaces (Apr 21) We study linear maps (operators) on linear spaces. The main example is the differentiation operator D, as well as polynomials in D like D^2 + D + 1. The goal is to understand that we can see solutions of differential equations as kernels of linear operators, or write partial differential equations in the form ut = T(u) where T is a linear operator.
2. Lecture Week 11: linear differential operators (Apr 23) The main goal is to be able to solve linear higher order differential equations p(D) f = g using the operator method. Factor the polynomial p(D) = (D-a1) (D-a2) ... (D-an) and invert each linear factor (D-ai). This is a general method which works unconditionally. It allows one to put together a "cookbook method" which describes how to find a particular solution of the inhomogeneous problem.
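The inversion of a single factor can be written down explicitly; here is a brief sketch of that computation (the right hand side e^{2t} in the last line is just a sample, not taken from the course notes):

    % sketch: inverting one factor (D-a) f = g with the integrating factor e^{-at}
    f' - a f = g
      \implies (e^{-at} f)' = e^{-at} g
      \implies f(t) = e^{at} \int_0^t e^{-as} g(s)\, ds + C e^{at}
    % example: (D-1) f = e^{2t} has the particular solution f(t) = e^{2t}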
3. Lecture Week 11: inner product spaces (Apr 25) As a preparation for Fourier theory, we introduce the concept of an inner product which generalizes the dot product. For 2pi-periodic functions, one takes the integral of f.g from -pi to pi and divides by pi. This inner product has all the properties we know from the dot product in finite dimensions. An example of an inner product on matrices is <A,B> = tr(A^T B). We mention that we can now do a lot of the geometry we did before in a larger context. Examples are Gram-Schmidt orthogonalization, projections, reflections, the concept of coordinates in a basis, orthogonal transformations, etc.
1. Lecture Week 12: Fourier theory I (Apr 28) The expansion of a function with respect to the orthonormal basis 1/sqrt(2), cos(nx), sin(nx) leads to Fourier theory. A nice example to see how Fourier theory is useful is to derive the Leibniz series for pi/4. The main motivation is that the Fourier basis is an orthonormal eigenbasis of the operator D^2; it diagonalizes this operator. We will then use this for partial differential equations.
2. Lecture Week 12: Fourier theory II (Apr 30) The Parseval identity is the "Pythagorean theorem" of Fourier theory. It is useful for estimating how fast a finite sum approximation converges. We also mention applications like the computation of series by the Parseval identity or by relating them to a Fourier series. Nice examples are the computations of zeta(2) or zeta(4) using the Parseval identity.
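As an optional numerical sanity check of the zeta(2) example (one standard choice is to apply the Parseval identity to f(x) = x, which gives zeta(2) = pi^2/6):

    # optional sketch: numerical check of zeta(2) = pi^2/6
    import numpy as np

    n = np.arange(1, 100000)
    print(np.sum(1.0 / n**2))   # about 1.64492
    print(np.pi**2 / 6)         # 1.6449340...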
3. Lecture Week 12: Partial differential equations (May 2) Linear partial differential equations ut = p(D) u with a polynomial p are solved in the same way as ordinary differential equations, by diagonalization. Fourier theory achieves that the "matrix" D is diagonalized and so is the polynomial p(D). This is much more powerful than the separation of variables method, which we do NOT do in this course. For example, the PDE utt = uxx - uxxxx + 10 u can be solved nicely with Fourier theory, just as we solve the wave equation. We could even solve partial differential equations with a driving force, like utt = uxx - u + sin(t), but unfortunately time is up.
Please send questions and comments to math21b@fas.harvard.edu
Math21b | Oliver Knill | Spring 2008 | Department of Mathematics | Faculty of Arts and Sciences | Harvard University