Mathematics Math21b Spring 2007
Linear Algebra and Differential Equations
Lecture plan
Course Head: Oliver Knill
Office: SciCtr 434
1. Lecture: Introduction to linear systems (Feb 5) The central point of this week is Gauss-Jordan elimination. After introducing ourselves, we look at examples of systems of linear equations. The point is to illustrate where such systems occur and how one can solve them with 'ad hoc' methods: combining equations in a clever way or eliminating variables until only one variable is left. The course assistants organize problem session times.
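A minimal sketch of the kind of 'ad hoc' computation meant here (the particular numbers are only illustrative, not necessarily the example used in class):
\[
\begin{aligned}
x + y &= 3\\
x - y &= 1
\end{aligned}
\]
Adding the two equations gives 2x = 4, so x = 2 and then y = 1.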
2. Lecture: Matrices and Gauss-Jordan elimination (Feb 7) We rewrite systems of linear equations using matrices and introduce the Gauss-Jordan elimination steps: scaling a row, swapping two rows, or subtracting a multiple of one row from another row. We also see examples where there is not exactly one solution, but no solution or many solutions. Unlike in multivariable calculus, we do distinguish between column vectors and row vectors. Column vectors are nx1 matrices, and row vectors are 1xm matrices. A general nxm matrix has m columns and n rows. The end product of Gauss-Jordan elimination is a new matrix rref(A) which is in row reduced echelon form: the first nonzero entry in each row is 1, called a leading 1, every column with a leading 1 has no other nonzero entries, and every row above a row with a leading 1 has a leading 1 further to the left. "Leaders want to be first, alone in their column and have leaders above them to the left."
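A small hypothetical example of what the elimination process produces:
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 7 \end{pmatrix}
\longrightarrow
\mathrm{rref}(A) = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]
(subtract twice the first row from the second, then three times the new second row from the first). The leading 1s sit in the first and third columns; the second column contains no leading 1.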
3. Lecture: On solutions of linear systems (Feb 9) How many solutions does a system of linear equations have? The goal of the lecture is to see that there are three possibilities: exactly one solution, no solution, or infinitely many solutions. This can be explained very well geometrically as well as algorithmically, from the Gauss-Jordan elimination point of view. We also mention that one can see a system of linear equations Ax=b in two different ways: the column picture tells us that b = x1 v1 + ... + xn vn is a linear combination of the column vectors vi of A; the row picture tells us that the dot products of the row vectors wi with x give the components wi . x = bi of b.
1. Lecture: Linear transformations and their inverses (Feb 12) This week provides a link between the geometric and the algebraic description of linear transformations. Linear transformations are introduced formally as transformations T(x) = A x, where A is a matrix. We learn how to distinguish between linear and nonlinear as well as between linear and affine transformations. The transformation T(x) = x+5, for example, is not linear because 0 is not mapped to 0. We characterize linear transformations by three properties: (i) T(0) = 0, (ii) T(x+y) = T(x) + T(y) and (iii) T(s x) = s T(x).
2. Lecture: Linear transformations in geometry (Feb 14) Rotations, dilations, projections, reflections, rotation-dilations and shears: how are they described algebraically? The main point is to see how to go back and forth between the algebraic and the geometric description. The key fact is that the column vectors of a matrix are the images of the standard basis vectors. We derive the matrices for each of the mentioned geometric transformations.
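For example, for the rotation by an angle t, the images of the standard basis vectors give the columns of the matrix (a standard fact, recorded here only to illustrate the principle):
\[
e_1 \mapsto \begin{pmatrix} \cos t \\ \sin t \end{pmatrix},\qquad
e_2 \mapsto \begin{pmatrix} -\sin t \\ \cos t \end{pmatrix},\qquad
A = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}.
\]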
3. Lecture: Matrix product and inverse (Feb 16) The composition of linear transformations corresponds to the product of matrices. The inverse of a transformation is described by the inverse of the matrix. So square matrices can be treated much like numbers: we can add them, multiply them with scalars, and many matrices have inverses. There are two things to be careful about: the product of two matrices is not commutative, and many nonzero matrices have no inverse. If we take the product of an nxp matrix with a pxm matrix, we obtain an nxm matrix. The dot product is a special case of the matrix product, between a 1xn matrix and an nx1 matrix. It produces a 1x1 matrix, a scalar.
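A small illustration of non-commutativity (the matrices are chosen here only as an example):
\[
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
\]
The two products do not even agree, and both factors are nonzero matrices without inverses.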
3. Week: Linear subspaces (2/19 - 2/23) The concept of a linear subspace is a preparation for the more abstract definition of linear spaces. The key point is the notion of a basis. As an application, we compute a basis for the image and the kernel of a linear transformation: the column vectors belonging to the pivot columns form a basis for the image, and writing a general kernel element using free variables leads to a basis for the kernel.
1. Lecture: Image and kernel (Feb 21) We define the notion of a linear subspace of n-dimensional space and the span of a set of vectors. The main goal is to compute the kernel and the image of a linear transformation using row reduction. The image of a matrix A is spanned by those columns of A which have a leading 1 in rref(A). The kernel of a matrix A is parametrized by the "free variables", the variables for which there is no leading 1 in rref(A). For an nxn matrix, the kernel is trivial if and only if the matrix is invertible. The kernel is always nontrivial if the nxm matrix satisfies m>n, that is, if there are more variables than equations.
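As an illustration (the numbers are again chosen only for this sketch): for
\[
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 7 \end{pmatrix},\qquad
\mathrm{rref}(A) = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
the image is spanned by the first and third columns (1,2) and (3,7) of A, while the kernel, parametrized by the free variable x2 = t, is spanned by (-2,1,0).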
2. Lecture: Basis and linear independence (Feb 23) With the previously defined "span" and the newly introduced notion of linear independence, one can define a basis for a linear space. It is a set of vectors which span the space and are linearly independent. The standard basis in Rn is an example of a basis. We show that if we have a basis, then every vector can be represented uniquely as a linear combination of basis elements. A typical task is to find a basis for the kernel and a basis for the image of a linear transformation.
4. Week: Dimension (2/26 - 3/2) This is a rather abstract week. The concept of abstract linear spaces allows us to also talk about linear spaces of functions, for example. This will be useful for applications to differential equations.
1. Lecture: Dimension and linear spaces We show that the number of basis elements is independent of the basis. It is called the dimension. The proof uses the fact that if p vectors are linearly independent and q vectors span a linear subspace V, then p is less than or equal to q. We also see the rank-nullity theorem: dim ker(A) + dim im(A) equals the number of columns of A.
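A sketch of how the theorem is used, with a hypothetical matrix: a 2x3 matrix whose rref has leading 1s in two columns has a 2-dimensional image and one free variable, so
\[
\dim\ker(A) + \dim\operatorname{im}(A) = 1 + 2 = 3,
\]
the number of columns.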
2. Lecture: Coordinates While everybody knows what a coordinate of a vector is, we look at coordinates [x]B with respect to an arbitrary basis B. Given a basis B in Rn, one defines the matrix S which contains the basis vectors as columns. The coordinate change from the standard basis to the new basis is given by this invertible matrix S. We learn how to write a matrix A in the new basis: it is B = S^-1 A S. If two matrices satisfy this relation, they are called similar.
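A small worked example of the kind meant here (the choice of basis is only illustrative): for the reflection A about the line x1 = x2 and the basis consisting of v1 = (1,1) and v2 = (1,-1),
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\qquad
S = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\qquad
B = S^{-1} A S = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\]
because A v1 = v1 and A v2 = -v2.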
3. Lecture: Linear spaces I We generalize the concept of linear subspaces of Rn and consider abstract linear spaces. Examples are the space X=C([a,b]) of continuous functions on the interval [a,b], the space P5 of polynomials of degree less than or equal to 5, or the space of 2x2 matrices.
1. Lecture: Linear spaces II In this lecture we continue to look at linear spaces. This is a rather abstract subject, it cannot hurt to see it again, and it is also part of the exam. We can look at the space of all nxm matrices as a linear space, as well as at spaces of solutions of differential equations. While the topic of linear differential equations will be treated in detail later, this is a preview of how linear algebra enters into differential equations: the solution spaces of linear differential equations are linear spaces.
2. Lecture: Review This is a review for the first midterm. The plenary review already covered all the material, so this review can focus on questions or on looking at some TF problems.
3. Lecture: Orthonormal bases and orthogonal projections We review orthogonality between vectors u,v, expressed by u.v=0, and then define the notion of an orthonormal basis, a basis which consists of unit vectors which are all orthogonal to each other. The orthogonal complement of a linear subspace V in Rn is defined as the set of all vectors perpendicular to all vectors in V. It can be found as the kernel of the matrix which contains a basis of V as rows. We then define the orthogonal projection onto a linear subspace. Given an orthonormal basis {u1, ..., un} of V, we have a formula for the orthogonal projection: P(x) = (u1.x) u1 + ... + (un.x) un.
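A one-line illustration of the projection formula (the numbers are chosen only for this sketch): projecting x = (3,1,2) onto the line spanned by the unit vector u1 = (1,1,0)/sqrt(2) gives
\[
P(x) = (u_1\cdot x)\,u_1 = \frac{4}{\sqrt{2}}\cdot\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\\0\end{pmatrix} = \begin{pmatrix}2\\2\\0\end{pmatrix}.
\]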
1. Lecture: Gram-Schmidt and QR factorization Gram-Schmidt orthogonalization leads to the QR factorization of a matrix. We look at this process geometrically as well as algebraically.
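In the case of two column vectors v1, v2, the process can be summarized as follows (a sketch of the general pattern, not a complete treatment):
\[
u_1 = \frac{v_1}{\|v_1\|},\qquad
w_2 = v_2 - (u_1\cdot v_2)\,u_1,\qquad
u_2 = \frac{w_2}{\|w_2\|},
\]
and then A = QR, where Q has the columns u1, u2 and R = Q^T A is upper triangular.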
2. Lecture: Orthogonal transformations After defining the transpose of a matrix, we look at orthogonal matrices, matrices for which A^T A = I. Rotations and reflections are examples of orthogonal transformations.
3. Lecture: Least squares and data fitting This is an important lecture from the point of view of applications. We learn how to fit data points with any finite set of functions. An example is to fit a set of data points with a linear function.
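A minimal sketch of the method, with made-up data points: to fit a line y = c + m x to the points (0,0), (1,1), (2,3), one solves the normal equations A^T A x* = A^T b with
\[
A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix},\qquad
b = \begin{pmatrix} 0 \\ 1 \\ 3 \end{pmatrix},\qquad
A^T A = \begin{pmatrix} 3 & 3 \\ 3 & 5 \end{pmatrix},\qquad
A^T b = \begin{pmatrix} 4 \\ 7 \end{pmatrix},
\]
which gives the best fit c = -1/6 and m = 3/2.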
1. Lecture: Determinants I The book now defines the determinant by Laplace expansion. The advantage of the older permutation definition is that it easily implies the Laplace expansion and allows one to derive all the properties of determinants comfortably from the original definition. It is fine to go with the Laplace expansion as the book does.
2. Lecture: Determinants II Some techniques students should know: computation of the determinant by Gauss-Jordan elimination, by Laplace expansion, for triangular matrices and for partitioned matrices.
3. Lecture: Eigenvalues Eigenvalues and eigenvectors are defined relatively late in this course. It is good to see them in concrete examples like rotations, reflections and shears. As in the book, it is a good idea to motivate eigenvalues with a discrete dynamical system, like the problem of finding the growth rate of the Fibonacci sequence. Here it becomes evident why computing eigenvalues and eigenvectors matters.
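The Fibonacci example in brief: the recursion can be written with a matrix, and the growth rate is the largest eigenvalue, the golden ratio.
\[
\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix}
= \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix},\qquad
\lambda^2 - \lambda - 1 = 0,\qquad
\lambda = \frac{1\pm\sqrt{5}}{2}.
\]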
SPRING BREAK Hurray! We hope you enjoy the break.
1. Lecture: Eigenvectors After spring break, you might have forgotten about eigenvalues and need to be reminded of them. Computing eigenvectors reduces to a kernel computation: the eigenvectors to an eigenvalue are the nonzero vectors in the kernel of A minus that eigenvalue times the identity.
2. Lecture: Diagonalization If all eigenvalues of a matrix A are different, one can diagonalize A. We also see that if eigenvalues coincide, as for the shear matrix, it can happen that one cannot diagonalize. If the eigenvalues are complex, as for a rotation, one cannot diagonalize over the reals. Since we like diagonalization, we include complex numbers from now on.
3. Lecture: Complex eigenvalues We start with a very short review of complex numbers in class. Course assistants will do more to get you up to speed with complex numbers. The fundamental theorem of algebra says that a polynomial of degree n has n roots, when counted with multiplicity. We express the determinant and the trace of a matrix in terms of its eigenvalues. Unlike in the real case, these formulas now hold for any matrix.
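The formulas in question, stated over the complex numbers:
\[
\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n,\qquad
\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n,
\]
where the eigenvalues are counted with algebraic multiplicity. For the rotation by 90 degrees, for example, the eigenvalues are i and -i, with product 1 = det and sum 0 = trace.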
1. Lecture: Review for second midterm We review for the second midterm in section. Since there was a plenary review for all students covering the theory, one can focus on questions, see the big picture, or discuss some TF problems.
2. Lecture: Stability We study the stability problem for discrete dynamical systems. The absolute values of the eigenvalues determine the stability of the transformation. If all eigenvalues are smaller than 1 in absolute value, then the origin is asymptotically stable. We also mention the case where the matrix is not diagonalizable, like S/2, where S is the shear matrix and where the shear expansion competes with the contraction on the diagonal (see the sketch below).
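A short computation illustrating how this competition plays out, assuming the standard upper triangular shear S (this particular form is an assumption of the sketch):
\[
A = \tfrac{1}{2}S = \begin{pmatrix} 1/2 & 1/2 \\ 0 & 1/2 \end{pmatrix},\qquad
A^n = \begin{pmatrix} (1/2)^n & n\,(1/2)^n \\ 0 & (1/2)^n \end{pmatrix} \to 0,
\]
so the origin is still asymptotically stable: the polynomial factor n loses against the exponential decay of (1/2)^n.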
3. Lecture: Symmetric matrices The main point of this lecture is to see that symmetric matrices can be diagonalized. The key fact is that eigenvectors of a symmetric matrix to different eigenvalues are perpendicular to each other. This implies for example that for a symmetric matrix, kernel and image are perpendicular to each other. It is enough to see the diagonalization theorem intuitively: if one perturbs the matrix so that the eigenvalues are different, then one can diagonalize. We also want to see that diagonalization is not always possible. The shear is the bad guy. One can also mention without proof the Jordan normal form, which gives the definitive answer to the diagonalization question.
1. Lecture: Differential equations I We learn to solve linear differential equations by diagonalization. We discuss linear stability of the origin. Unlike in the discrete time case, where the absolute values of the eigenvalues mattered, it is now the real parts of the eigenvalues that are important. Always keep in mind the one-dimensional case, where these facts are obvious. The point is that linear algebra allows us to reduce the higher dimensional case to the one-dimensional case.
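The reduction to the one-dimensional case, in formulas (a sketch of the general mechanism): if A = S D S^{-1} with D = diag(lambda_1, ..., lambda_n), then the solution of x' = A x is
\[
x(t) = S\, e^{Dt}\, S^{-1} x(0),\qquad
e^{Dt} = \operatorname{diag}\big(e^{\lambda_1 t},\ldots,e^{\lambda_n t}\big),
\]
so the origin is asymptotically stable when all eigenvalues have negative real part, just as in the one-dimensional equation x' = a x with solution x(t) = e^{at} x(0).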
2. Lecture: Differential equations II A second lecture is necessary for this central topic. We also look at examples where the eigenvalues are complex like the harmonic oscillators with and without damping. There are many more interesting examples from physics.
3. Lecture: Nonlinear systems This section is covered in a separate handout also written by Otto Bretscher. How can nonlinear systems in two dimensions be analyzed using linear algebra? Many students have seen this topic in single variable calculus already, but not the linear algebra aspect of it.
1. Lecture: Function spaces We are a bit out of order and only now look at linear maps on abstract linear spaces. If it has been mentioned before, it cannot hurt to hear it again. The abstractness of this section is a major difficulty for students.
2. Lecture: Linear differential operators The main goal is to be able to solve linear higher order differential equations p(D) f = g using the operator method: factor the polynomial p(D) = (D-a1)(D-a2) ... (D-an) and invert each linear factor (D-ai).
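For a single factor, the inversion can be written down explicitly (a sketch; the constant of integration carries the homogeneous solution):
\[
(D-a)f = g \quad\Longrightarrow\quad f(t) = e^{at}\int_0^t e^{-as} g(s)\,ds + C\,e^{at}.
\]
For the homogeneous equation, factoring p(D) = (D-1)(D-2), for example, produces the solutions e^t and e^{2t}.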
3. Lecture: Inner product spaces This is a preparation for Fourier theory. We introduce the concept of an inner product <f,g>, which generalizes the dot product. For 1-periodic functions, one takes as <f,g> the integral of f.g from 0 to 1.
1. Lecture: Fourier theory I The expansion of a function with respect to the orthonormal basis 1/sqrt(2), cos(nx), sin(nx) leads to Fourier theory.
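With the inner product <f,g> = (1/pi) times the integral of f g over [-pi,pi] (one common normalization; the constants depend on the chosen period), the expansion reads
\[
f(x) = \frac{a_0}{\sqrt{2}} + \sum_{n\ge 1}\big(a_n\cos(nx) + b_n\sin(nx)\big),\qquad
a_0 = \langle f,\tfrac{1}{\sqrt 2}\rangle,\quad a_n = \langle f,\cos(nx)\rangle,\quad b_n = \langle f,\sin(nx)\rangle.
\]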
2. Lecture: Fourier theory II We also mention applications like the computation of series via the Parseval identity or by relating them to a Fourier series. A nice example is the Leibniz series for pi/4.
3. Lecture: Partial differential equations Linear partial differential equations u_t = p(D) u with a polynomial p are solved in the same way as ordinary differential equations, by diagonalization. Fourier theory achieves that the "matrix" D is diagonalized, and with it the polynomial p(D).
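A sketch with the simplest example: for the heat equation u_t = u_xx on [0,pi] with u(0,t) = u(pi,t) = 0, expanding u(x,t) as a sum of b_n(t) sin(nx) turns the partial differential equation into decoupled ordinary differential equations
\[
b_n'(t) = -n^2\, b_n(t),\qquad b_n(t) = e^{-n^2 t}\, b_n(0).
\]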
Please send questions and comments to math21b@fas.harvard.edu
Math21b | Oliver Knill | Spring 2007 | Department of Mathematics | Faculty of Arts and Sciences | Harvard University