If you find a mistake, omission, etc., please let me know by e-mail.

The orange balls mark our current location in the course, and the current problem set.

Tony Feng's section and office hours:

Section: Wednesdays (starting September 8), 4-5 PM, Science Center 103b

Office hours: Thursdays (starting September 9), 7-8 PM, Science Center 310

**revised:** now Thursdays 8-9 and 9-10 PM, both in
Science Center 411,
at least in the beginning of the term.
- [cf. Axler, p.3]
Unless noted otherwise,
**F** may be an arbitrary field, not only **R** or **C**. The most important fields other than those of real and complex numbers are the field **Q** of rational numbers, and the finite fields **Z**/*p***Z** (*p* prime). Other examples are the field **Q**(*i*) of complex numbers with rational real and imaginary parts; more generally, **Q**(*d*^{1/2}) for any nonsquare rational number *d*; the “*p*-adic numbers” **Q**_{p} (*p* prime), of which we'll say more when we study topology next term; and more exotic finite fields such as the 9-element field (**Z**/3**Z**)(*i*). Here's a review of the axioms for fields, vector spaces, and related mathematical structures. [**corrected** 7.ix.10: I somehow forgot to include the associative property among the vector space axioms )-:] - [cf. Axler, p.22] We define the *span* of an arbitrary subset *S* of (or tuple in) a vector space *V* as follows: it is the set of all (finite) linear combinations *a*_{1}*v*_{1} + … + *a*_{n}*v*_{n} with each *v*_{i} in *S* and each *a*_{i} in *F*. This is still the smallest vector subspace of *V* containing *S*. In particular, if *S* is empty, its span is by definition {0}. We do *not* require that *S* be finite. - Unlike Axler, we discuss “quotient vector spaces”
(and even quotient modules) explicitly and introduce them early.
If
*N* is a submodule of the module *M* over some ring *A*, the *quotient module* *M*/*N* consists of all equivalence classes in *M*, where we define the equivalence relation “∼” to mean that *m* ∼ *m*′ if and only if *m* − *m*′ is contained in *N*. One must then check that the module operations inherited from *M* do give *M*/*N* the structure of a well-defined *A*-module. [“Well-defined” means that the operations do not depend on the choice of representative from an equivalence class.] A familiar example is the finite cyclic group **Z**/*n***Z** (recall that abelian group = **Z**-module). In the case of vector spaces, the dimension of a quotient vector space *V*/*U* is called the *codimension* in *V* of its subspace *U*; if *V* is finite-dimensional, this codimension equals dim(*V*) − dim(*U*). - Warning: in general the space *P*(*F*) of polynomials in *X* (a.k.a. *F*[*X*]), and its subspaces *P*_{n}(*F*) of polynomials of degree at most *n*, might not be naturally identified with a subspace of the space *F*^{F} of functions from *F* to itself. The problem is that two different polynomials may yield the same function. For example, if *F* is the field of 2 elements the polynomial *X*^{2} − *X* gives rise to the zero function. In general different polynomials can represent the same function if and only if *F* is finite — do you see why? - In Axler's Theorem 2.10, the hypothesis that the spanning set
be finite (implicit in Axler's use of “lists”)
is not necessary as long as the vector space
*V* is finite dimensional. Indeed let *S* be an arbitrary spanning set. Since *V* is finite dimensional, it has a finite spanning set *S*′. Write each element of *S*′ as a linear combination of elements of *S*. These linear combinations involve only finitely many elements of *S*, so *S* has a finite subset *S*_{0} that spans *S*′ and thus spans *V*. Now apply Axler's argument to find a subset of *S*_{0}, and thus *a fortiori* of *S*, that is a basis of *V*. - Here's an extreme example of how basic theorems about
finite-dimensional vector spaces can become utterly false for
finitely-generated modules: a module generated by just one element
can have a submodule that is not finitely generated. Indeed,
for any field
*F* let *A* be the ring of polynomials in infinitely many variables *X*_{j}. As usual we can regard *A* as a module over itself, with a single generator 1. A submodule is then just an ideal of the ring. Choose the ideal *I* generated by all the *X*_{j}, which consists of all polynomials with constant coefficient equal to 0. Then if there are infinitely many indices *j*, the ideal *I* is infinitely generated; indeed any generating set must be at least as large as the index set of *j*’s, so for every cardinal ℵ we can make a ring *A* with a singly-generated module (namely *A* itself) and with a submodule that cannot be generated by fewer than ℵ elements.

For a subtler example, consider the ring we might call “*F*[*X*^{1/2^∞}]”, consisting of *F*-linear combinations of monomials *X*^{n/2^k} for arbitrary nonnegative integers *n* and *k*. Again let *I* be the ideal generated by the nonconstant monomials, which is not finitely generated, though there are generating sets that are “only” countably infinite. The new behavior involves the countable generating set {*X*^{1/2^k} : *k* ≥ 0}: there is no minimal generating subset, because each *X*^{1/2^k} is a multiple of *X*^{1/2^k′} for any *k*′ > *k*. Likewise for the ring generated by all monomials *X*^{r} with *r* a nonnegative rational number (or even all *X*^{r} with *r* a nonnegative real number). - Suppose
*T*: *V* → *W* is a linear transformation. Axler's notation for the image of *T* was already becoming rather old-fashioned when he wrote his book; these days simply *T*(*V*) is common (and likewise for any function at all). The terminology “null space” (whether one or two words) for *T*^{−1}(0) is also somewhat quaint; we usually say “kernel” and write “ker(*T*)” [and (LA)TeX already provides the command `\ker` to typeset this properly]. - Axler also unaccountably soft-pedals the important notion of
**duality**. [**corrected** 18.ix.10: If *W* is the direct sum of *U* and *V* then the copy of *U*^{*} in *W*^{*} is the “annihilator” in *W*^{*} of *V*, __not__ of *U*.] - About “Lemma 3.?”:
some notes and warnings about the behavior of
Hom(*V*, *W*) under (finite or infinite) direct sums. - Here's a brief preview of abstract nonsense (a.k.a. diagram-chasing), and a diagram-chasing interpretation of quotients and duality.
- Axler proves the Fundamental Theorem of Algebra using
complex analysis, which cannot be assumed in Math 55a
(we'll get to it at the end of 55b).
Here's a proof using the topological tools
we'll develop at the start of 55b.
(Axler gives the complex-analytic proof on page 67.)
[**corrected** 24.ix.10 to fix a minor typo: when f-tilde is introduced, it is f-tilde that vanishes at zero, not f] - We shall need some “eigenstuff” also in an infinite-dimensional setting, so will not assume that any vector space is (nonzero) finite dimensional unless we really must.
- If
*T* is a linear operator on a vector space *V*, and *U* is an invariant subspace, then the quotient space *V*/*U* inherits an action of *T*. Moreover, the annihilator of *U* in *V*^{*} is an invariant subspace for the action of the adjoint operator *T*^{*} on *V*^{*}. (Make sure you understand why both these claims hold.) - Triangular matrices are intimately related with “flags”. A (complete) *flag* in a finite dimensional vector space *V* is a sequence of subspaces {0} = *V*_{0}, *V*_{1}, *V*_{2}, …, *V*_{n} = *V*, with each *V*_{i} of dimension *i* and containing *V*_{i−1}. A basis *v*_{1}, *v*_{2}, …, *v*_{n} determines a flag: *V*_{i} is the span of the first *i* basis vectors. Another basis *w*_{1}, *w*_{2}, …, *w*_{n} determines the same flag if and only if each *w*_{i} is a linear combination of *v*_{1}, *v*_{2}, …, *v*_{i} (necessarily with nonzero *v*_{i} coefficient). The *standard flag* in *F*^{n} is the flag obtained in this way from the standard basis of unit vectors *e*_{1}, *e*_{2}, …, *e*_{n}. The punchline is that, just as a diagonal matrix is one that respects the standard basis (equivalently, the associated decomposition of *V* as a direct sum of 1-dimensional subspaces), *an upper-triangular matrix is one that respects the standard flag.* Note that the *i*-th diagonal entry of a triangular matrix gives the action on the one-dimensional quotient space *V*_{i}/*V*_{i−1} (each *i* = 1, …, *n*).
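The punchline is easy to test numerically; here is a small sketch (my own illustration, not from the course text, assuming numpy and an arbitrary 3×3 upper-triangular example):

```python
import numpy as np

# Sketch: an upper-triangular matrix respects the standard flag
# V_1 ⊂ V_2 ⊂ V_3 with V_i = span(e_1, ..., e_i), and its i-th diagonal
# entry gives the induced action on the quotient V_i / V_{i-1}.
T = np.array([[2., 5., 1.],
              [0., 3., 4.],
              [0., 0., 7.]])
n = T.shape[0]

# T(V_i) ⊆ V_i: the first i columns of T vanish below row i.
for i in range(1, n + 1):
    assert np.allclose(T[i:, :i], 0.0)

# Modulo V_{i-1} (i.e. ignoring the first i-1 coordinates), T scales by T[i,i].
rng = np.random.default_rng(0)
for i in range(n):
    v = np.zeros(n)
    v[: i + 1] = rng.normal(size=i + 1)    # an arbitrary vector of V_i
    assert np.isclose((T @ v)[i], T[i, i] * v[i])
```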

- One of many applications is the
**trace** of an operator on a finite dimensional *F*-vector space *V*. This is a linear map from Hom(*V*, *V*) to *F*. We can define it simply as the composition of two maps: our identification of Hom(*V*, *V*) with the tensor product of *V*^{*} and *V*, and the natural map from this tensor product to *F* coming from the bilinear map taking (*v*^{*}, *v*) to *v*^{*}(*v*). We'll see that this is the same as the classical definition: the trace of *T* is the sum of the diagonal entries of the matrix of *T* with respect to any basis. The coordinate-independent construction via tensor algebra explains why the trace does not change under change of basis. (The invariance can also be proved by checking explicitly that *AB* and *BA* have the same trace for any square matrices *A*, *B* of the same size.) - Here are some basic definitions and facts about general **norms** on real and complex vector spaces. - Just as we can study bilinear symmetric forms on a vector space over any field, not just **R**, we can study sesquilinear conjugate-symmetric forms on a vector space over any field *with a conjugation*, not just **C**. Here a “conjugation” on a field *F* is a field automorphism σ: *F* → *F* such that σ is not the identity but σ^{2} is (that is, σ is an involution). Given a basis {*v*_{i}} for *V*, a sesquilinear form ⟨.,.⟩ on *V* is determined by the field elements *a*_{i,j} = ⟨*v*_{i}, *v*_{j}⟩, and is conjugate-symmetric if and only if *a*_{j,i} = σ(*a*_{i,j}) for all *i*, *j*. Note that the “diagonal entries” *a*_{i,i} — and more generally ⟨*v*, *v*⟩ for any *v* in *V* — must be elements of the subfield of *F* fixed by σ. - “Sylvester's Law of Inertia” states that
for a nondegenerate pairing on a finite-dimensional vector space
*V*/*F*, where either *F* = **R** and the pairing is bilinear and symmetric, or *F* = **C** and the pairing is sesquilinear and conjugate-symmetric, *the counts of positive and negative inner products for an orthogonal basis constitute an invariant of the pairing* and do not depend on the choice of orthogonal basis. (This invariant is known as the “signature” of the pairing.) The key trick in proving this result is as follows. Suppose *V* is the orthogonal direct sum of subspaces *U*_{1}, *U*_{2} for which the pairing is positive definite on *U*_{1} and negative definite on *U*_{2}. Then any subspace *W* of *V* on which the pairing is positive definite has dimension no greater than dim(*U*_{1}). Proof: On the intersection of *W* with *U*_{2}, the pairing is both positive and negative definite; hence that subspace is {0}. The claim follows by a dimension count, and we quickly deduce Sylvester's Law. - Over any field not of characteristic 2,
we know that for any non-degenerate
symmetric pairing on a finite-dimensional vector space
there is an orthogonal basis, or equivalently
a choice of basis such that the pairing is
(*x*, *y*) = Σ_{i} *a*_{i}*x*_{i}*y*_{i} for some nonzero scalars *a*_{i}. But in general it can be quite hard to decide whether two different collections of *a*_{i} yield isomorphic pairings. Even over **Q** the answer is already tricky in dimensions 2 and 3, and I don't think it's known in a vector space of arbitrary dimension. Over a finite field of odd size there are always exactly two possibilities, as we'll see in a few weeks. - A regular graph of degree
*d* is a *Moore graph of girth 5* if any two different vertices are linked by a unique path of length at most 2. Such a graph necessarily has *n* = 1 + *d* + *d*(*d*−1) = *d*^{2} + 1 vertices. Let *A* be the adjacency matrix, and **1** the all-ones vector. Then (because each vertex has degree *d*) **1** is a *d*-eigenvector of *A*. We have (1 + *A* + *A*^{2})*v* = *d* *v* + ⟨*v*, **1**⟩ **1** for all *v* (proof: check on unit vectors and use linearity). Thus *A* takes the orthogonal complement of **R**·**1** to itself and satisfies 1 + *A* + *A*^{2} = *d* on that space. Since this quadratic equation has distinct roots *m* and −1−*m* for some *m* > 0 (with 1 + *m* + *m*^{2} = *d*), it follows that the orthogonal complement of **R**·**1** is the direct sum of the corresponding eigenspaces. Let *d*_{1} and *d*_{2} be their dimensions. These sum to *n* − 1 = *d*^{2}, and satisfy *m* *d*_{1} + (−1−*m*) *d*_{2} + *d* = 0 because the matrix *A* has trace zero. This lets us solve for *d*_{1} and *d*_{2}; in particular we find that their difference is (2*d* − *d*^{2}) / (2*m*+1). Since that's an integer, either *d* = 2 (giving the pentagon graph) or *m* is an integer. Substituting *m*^{2} + *m* + 1 for *d*, we find that 16(*d*_{1} − *d*_{2}) is an integer plus 15/(2*m*+1), whence *m* is one of 0, 1, 2, or 7. The first of these is impossible, and the others give *d* = 3, 7, or 57 as claimed. It is clear that the pentagon is the unique Moore graph of girth 5 and degree 2, and it is not hard to check that the Petersen graph is the unique example with *d* = 3. For *d* = 7 there is again a unique graph up to an interesting group of automorphisms; see this proof outline, which also lets one count the automorphisms of this graph. There is no spectral graph theory there, but this approach is suggested by the inequality you'll prove in the fifth problem of problem set 7: if the 50 vertices are divided into two equal subsets, what's the smallest possible number of edges between them?

For more of this kind of application of linear algebra to combinatorics, see for example Cameron and van Lint's text *Designs, Graphs, Codes and their Links* (London Math. Society, 1991, reprinted 1996). - All of Chapter 8 works over an arbitrary algebraically closed field,
not only over
**C** (except for the minor point about extracting square roots, which breaks down in characteristic 2); and the first section (“Generalized Eigenvalues”) works over any field. - We don't stop at Corollary 8.8: let
*T* be any operator on a vector space *V* over a field *F*, not assumed algebraically closed. If *V* is finite-dimensional, then The Following Are Equivalent:

(1) There exists a nonnegative integer *k* such that *T*^{k} = 0;

(2) For any vector *v*, there exists a nonnegative integer *k* such that *T*^{k}*v* = 0;

(3) *T*^{n} = 0, where *n* = dim(*V*).

Note that (1) and (2) make no mention of the dimension, but are still not equivalent for operators on infinite-dimensional spaces. We readily deduce the further equivalent conditions:

(4) There exists a basis for *V* for which *T* has an upper-triangular matrix with every diagonal entry equal to zero;

(5) Every upper-triangular matrix for *T* has zeros on the diagonal, and there exists at least one upper-triangular matrix for *T*.

Recall that the second part of (5) is automatic if *F* is algebraically closed. - The space of generalized 0-eigenvectors (the maximal subspace on which *T* is nilpotent) is sometimes called the *nilspace* of *T*. It is an invariant subspace. When *V* is finite dimensional, *V* is the direct sum of the nilspace and another invariant subspace *V*′, consisting of the intersection of the subspaces *T*^{k}(*V*) as *k* ranges over all positive integers. See Exercise 8.11; this can be used to quickly prove Theorem 8.23 and consequences such as Cayley-Hamilton (Theorem 8.20). - The dimension of the space of generalized *c*-eigenvectors (i.e., of the nilspace of *T*−*c*I) is usually called the *algebraic* multiplicity of *c* (since it's the multiplicity of *c* as a root of the characteristic polynomial of *T*), to distinguish it from the “geometric multiplicity”, which is the dimension of ker(*T*−*c*I).
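For a concrete contrast between the two multiplicities, here is a small numerical sketch (my own example, not the course's, assuming numpy), using a 3×3 Jordan-type matrix:

```python
import numpy as np

# Sketch: a 3x3 matrix with eigenvalue c = 2 of algebraic multiplicity 3
# but geometric multiplicity 1.
c = 2.0
T = np.array([[c, 1., 0.],
              [0., c, 1.],
              [0., 0., c]])
n = T.shape[0]
A = T - c * np.eye(n)

# geometric multiplicity: dim ker(T - cI)
geom = n - np.linalg.matrix_rank(A)
# algebraic multiplicity: dim of the nilspace of T - cI, i.e. dim ker (T - cI)^n
alg = n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n))

assert geom == 1 and alg == 3
```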

We'll *define* the determinant of an operator *T*
on a finite dimensional space *V* as follows:
*T* induces a linear operator *T'*
on the top exterior power of *V*;
this exterior power is one-dimensional,
so an operator on it is multiplication by some scalar;
det(*T*) is by definition the scalar
corresponding to *T'*.
The “top exterior power”
is a subspace of the “exterior algebra” of *V*,
which is the quotient of the tensor algebra by
the ideal generated by {*v*⊗*v* : *v* in *V*}.
We'll still have to construct the sign homomorphism from the
symmetric group on dim(*V*) letters to {±1}, to show that the
top (i.e., the dim(*V*)-th) exterior power has dimension 1
rather than zero.
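Once the sign homomorphism is in hand, the determinant is also given by the familiar sum over the symmetric group; a quick sketch (an illustration only, assuming numpy) comparing that formula with numpy's determinant:

```python
import itertools
import numpy as np

def sign(perm):
    """Sign of a permutation given as a tuple, by counting inversions."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_leibniz(A):
    """det A = sum over sigma in S_n of sgn(sigma) * prod_i A[i, sigma(i)]."""
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

A = np.array([[1., 2., 0.], [3., 1., 4.], [0., 2., 5.]])
assert np.isclose(det_leibniz(A), np.linalg.det(A))
```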

Interlude: normal subgroups;
short exact sequences in the context of groups:
A subgroup *H* of *G* is **normal**
(satisfies *H* = *g*^{−1}*Hg* for all *g* in *G*) __iff__
*H* is the kernel of some group homomorphism from *G*
__iff__ the injection *H* → *G* fits into a short exact sequence
{1} → *H*
→ *G*
→ *Q*
→ {1},
where *Q* is the **quotient group**
*G*/*H*. [Such a sequence is also commonly written 1 → *H*
→ *G*
→ *Q*
→ 1.]
For example, the sign homomorphism on the symmetric group *S*_{n}
has kernel *A*_{n}, giving the short exact sequence
1 → *A*_{n}
→ *S*_{n}
→ {±1}
→ 1.
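These equivalences can be sanity-checked by brute force on a small group; here is a sketch (my own, with *S*_3 stored as tuples) showing that the kernel of the sign homomorphism is normal while the subgroup generated by one transposition is not:

```python
from itertools import permutations

# S_3 as all permutations of {0, 1, 2}.
G = list(permutations(range(3)))

def compose(p, q):          # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

def is_normal(H):
    # H = g^{-1} H g for all g in G
    return all(compose(inverse(g), compose(h, g)) in H for g in G for h in H)

A3 = {p for p in G if sign(p) == 1}      # kernel of sign: identity + two 3-cycles
H2 = {(0, 1, 2), (1, 0, 2)}              # generated by the transposition (0 1)

assert is_normal(A3) and not is_normal(H2)
```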

Some more tidbits about exterior algebra:

- If
*w*, *w*′ are elements of the *m*-th and *m*′-th exterior powers of *V*, then *w**w*′ = (−1)^{mm′} *w*′*w*; that is, *w* and *w*′ commute unless *m* and *m*′ are both odd, in which case they anticommute. - If *m* + *m*′ = *n* = dim(*V*) then the natural pairing from the *m*-th and *m*′-th exterior powers to the *n*-th is nondegenerate, and so identifies the *m*-th exterior power canonically with the dual of the *m*′-th, *tensored with the top (n-th) exterior power*. - In particular, if *m* = 1, and *T* is any invertible operator on *V*, then we find that the induced action of *T* on the (*n*−1)st exterior power is the same as its action on *V*^{*} multiplied by det(*T*). This yields the formula connecting the inverse and cofactor matrix of an invertible matrix (a formula which you may also know in the guise of “Cramer's rule”). - For each *m* there is a natural non-degenerate pairing between the *m*-th exterior powers of *V* and *V*^{*}, which identifies these exterior powers with each other's dual.
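The inverse-cofactor formula is easy to verify numerically; a sketch (the helper `adjugate` is my own name, assuming numpy):

```python
import numpy as np

def adjugate(A):
    """Cofactor (adjugate) matrix: adj(A)[j, i] = (-1)^(i+j) * minor_ij(A)."""
    n = A.shape[0]
    adj = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
# A^{-1} = adj(A) / det(A): the formula behind Cramer's rule
assert np.allclose(np.linalg.inv(A), adjugate(A) / np.linalg.det(A))
```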

We'll also show that a symmetric (or Hermitian) matrix
is positive definite __iff__ all its eigenvalues are positive
__iff__ it has positive principal minors
(the “principal minors” are the determinants
of the square submatrices of all orders containing the (1,1) entry).
More generally we'll show that the eigenvalue signs determine
the signature, as does the sequence of signs of principal minors
if they are all nonzero. More precisely: an invertible
symmetric/Hermitian matrix has signature
(*r*, *s*) where *r* is the number of positive eigenvalues and
*s* is the number of negative eigenvalues;
if its principal minors are all nonzero then
*r* is the number of *j* in
{1, …, *n*} such that the *j*-th and (*j*−1)-st minors have the same sign, and *s* is the number of *j* in that range such that the
*j*-th and (*j*−1)-st minors have opposite signs (for *j* = 1, compare with the “0th minor”, which is taken to be 1). In particular the determinant of an invertible matrix of signature (*r*, *s*) has sign (−1)^{s}; and a principal submatrix has signature (*r*′, *s*′) for some *r*′ ≤ *r* and *s*′ ≤ *s*.
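Here is a numerical sketch of the minor-sign rule (my own example matrix, assuming numpy), checked against the eigenvalue counts:

```python
import numpy as np

# An invertible symmetric matrix whose leading principal minors are all nonzero.
A = np.array([[2., 1., 0.],
              [1., -1., 2.],
              [0., 2., 1.]])
n = A.shape[0]

# minors[j] = j-th leading principal minor, with the "0th minor" taken to be 1
minors = [1.0] + [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
r = sum(minors[j - 1] * minors[j] > 0 for j in range(1, n + 1))  # sign agreements
s = sum(minors[j - 1] * minors[j] < 0 for j in range(1, n + 1))  # sign changes

eig = np.linalg.eigvalsh(A)
assert (r, s) == (int(np.sum(eig > 0)), int(np.sum(eig < 0)))
```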

For positive definiteness, we have the two further
equivalent conditions: the symmetric (or Hermitian) matrix
*A* = (*a*_{ij}) is positive definite

For any matrix

Here's a brief introduction to field algebra and Galois theory.

Our source for *representation theory* of finite groups
(on finite-dimensional vector spaces over **C**)
will be Artin's *Algebra*, Chapter 9. We'll omit
sections 3 and 10 (which require not just topology and
calculus but also, at least for §3, some material beyond 55b
to do properly, namely the construction of Haar measures);
also we won't spend much time on §7, which works out in detail
the representation theory of a specific group that Artin
calls *I* (the icosahedral group, isomorphic with
*A*_{5}). We will, however, discuss the group algebra **C**[*G*]
of a finite group *G* (a.k.a. the algebra of
functions on *G* under convolution).
See for instance Chapter 1 of *Representation Theory*
by Fulton and Harris (mentioned in class). A canonical treatment
of representations of finite groups is
Serre's *Linear Representations of Finite Groups*,
which is the only entry for this chapter in the list of
“Suggestions for Further Reading” at the end of Artin's book
(see p.604).

While we'll work almost exclusively over **C**,
most of the results work equally well (though with somewhat different
proofs) over any field *F* that contains the roots of unity
of order #(*G*), provided the characteristic of *F* is not a factor of #(*G*).

Here's an alternative viewpoint on representations of a finite group
*G* (not in Artin, though you can find it elsewhere, e.g.
Fulton-Harris pages 36ff.): a representation of *G* over
a field *F* is equivalent to a module for the
*group ring*
*F*[*G*], an *F*-algebra (commutative iff *G* is commutative) that consists of the
formal *F*-linear combinations of elements of *G*; as an *F*-vector space, *F*[*G*] may be identified with *F*^{G}.

Just as the regular representation is
*F*[*G*] viewed as a module over itself, the decomposition of the regular representation into irreducibles corresponds to a decomposition of the algebra *F*[*G*] as the direct sum of endomorphism algebras End(*V*), with *V* ranging over all the
irreducible representations of *G*. Comparing dimensions
gives the formula
#(*G*) = Σ_{V} dim(*V*)^{2}. [The resulting *F*-linear isomorphism from *F*[*G*] to ⊕_{V} End(*V*) generalizes the discrete Fourier transform: when *G*
is commutative we recover the DFT of
Problem Set 10].
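For commutative *G* = **Z**/*n***Z** this specializes to the convolution theorem for the discrete Fourier transform; a quick numpy sketch (not tied to the problem set's notation):

```python
import numpy as np

# The group algebra C[Z/n] is the ring of sequences under cyclic convolution;
# the DFT turns convolution into pointwise multiplication, realizing C[Z/n]
# as a product of n copies of C.
rng = np.random.default_rng(1)
n = 8
a, b = rng.normal(size=n), rng.normal(size=n)

# cyclic convolution: (a * b)(k) = sum_j a(j) b(k - j mod n)
conv = np.array([sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)])
assert np.allclose(np.fft.fft(conv), np.fft.fft(a) * np.fft.fft(b))
```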

In case we haven't officially said it yet:
the *unitary group* *U*_{n} that
figures prominently in Artin's treatment (see the start of 9.2, p.310)
is the subgroup of GL_{n}(**C**) consisting of the matrices *g* that preserve the standard Hermitian inner product on **C**^{n}, i.e. that satisfy *g**g*^{*} = 1.

Artin derives the diagonalizability of each ρ(*g*) from the fact that ρ(*g*) satisfies the polynomial equation *X*^{n}−1 = 0 (with *n* the order of *g*), whose roots are distinct.

Let *T* be the character table of a group of *N* elements,
and let *T*^{*} be its conjugate transpose. The orthogonality relations for the irreducible characters say that *TDT*^{*} = *NI*, where *D* is the diagonal matrix whose diagonal entries
are the sizes of the conjugacy classes of *G*.
But then *NI* is also
*DT*^{*}*T*, so the *columns* of *T*
are also orthogonal,
and the column indexed by conjugacy class *C*
has inner product *N*/#(*C*) with itself. That is, for any *g*, *h* we have
Σ_{χ} χ(*g*) χ(*h*)^{*} = 0 unless *g* and *h* are conjugate, in which case
χ(*g*) = χ(*h*) for every χ, and Σ_{χ} |χ(*g*)|^{2} = *N*/#(*C*) where *C* is the conjugacy class of *g*.
In particular, taking *g* = id and *C* = {id}, we recover the fact that *N* is the sum of the squares of the dimensions of
the irreducible representations.
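A sketch of these identities for *G* = *S*_{3} (the character table entered by hand; numpy assumed):

```python
import numpy as np

# S_3 (N = 6): rows indexed by the irreducibles (trivial, sign, 2-dimensional),
# columns by the conjugacy classes (identity, transpositions, 3-cycles)
# of sizes 1, 3, 2.
T = np.array([[1, 1, 1],
              [1, -1, 1],
              [2, 0, -1]], dtype=float)
D = np.diag([1., 3., 2.])
N = 6

# row orthogonality: T D T* = N I
assert np.allclose(T @ D @ T.conj().T, N * np.eye(3))
# hence column orthogonality: D T* T = N I, i.e. column C has norm^2 = N/#(C)
assert np.allclose(D @ T.conj().T @ T, N * np.eye(3))
# the identity column: N = sum of squares of the dimensions
assert np.isclose(np.sum(T[:, 0] ** 2), N)
```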

Let *G* be a group of *N* elements, and
(*V*,ρ) a representation of *G* over a field in which *N* is nonzero. Let *P* be the operator
*N*^{−1} Σ_{g∈G} ρ(*g*). Then *P* is a *G*-endomorphism of *V*,
i.e. it commutes with each ρ(*g*). Moreover *P* is an *idempotent*: it satisfies
*P*^{2} = *P*, which makes *V* the direct sum of the
eigenspaces *V*_{0} and *V*_{1} of *P*,
which are respectively the kernel and image of *P*.
It follows that *P* has trace dim(*V*_{1}); but the trace is also
*N*^{−1} Σ_{g∈G} χ(*g*) = ⟨χ, 1⟩, where χ is the character of *V*.
On the other hand *V*_{1} is precisely the subspace *V*^{G} of *G*-invariant vectors of *V*.
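A sketch (mine, not Artin's) with *G* = *S*_{3} acting on **C**^{3} by permutation matrices, where the invariant subspace is the line of constant vectors:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))
N = len(G)

def rho(p):
    """Permutation matrix sending e_i to e_{p(i)}."""
    M = np.zeros((3, 3))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

P = sum(rho(p) for p in G) / N
assert np.allclose(P @ P, P)                  # idempotent
assert np.isclose(np.trace(P), 1.0)           # dim V^G = <chi, 1> = 1
assert np.allclose(P @ np.array([1., 2., 3.]), np.full(3, 2.0))  # image: constants
```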

To get from this to the dimension of
Hom_{G}(*V*,*V*’) for two representations *V*, *V*’, we want to regard Hom(*V*,*V*’) as a representation of *G* via its identification with *V*^{*}⊗*V*’. But how should *G* act on
*V*^{*}? Not by the adjoints (ρ(*g*))^{*}: those compose in the wrong order (unless *G* is abelian): we have
(ρ(*g*))^{*}(ρ(*h*))^{*}
= (ρ(*hg*))^{*}, not (ρ(*gh*))^{*}. The correct action of *G* on
*V*^{*} is ρ^{*}(*g*)
= (ρ(*g*^{−1}))^{*} for each *g*. This yields the *dual representation*
(sometimes called the “contragredient representation”)
(*V*^{*},ρ^{*}) of (*V*,ρ), whose character is the complex conjugate χ^{*} of the character χ of (*V*,ρ). Then Hom(*V*,*V*’)
= *V*^{*}⊗*V*’ is a representation of *G*,
and we can check that the invariant subspace is precisely
Hom_{G}(*V*,*V*’). Hence dim Hom_{G}(*V*,*V*’) = ⟨χ^{*}χ', 1⟩ = ⟨χ', χ⟩. Since this is a nonnegative integer — *a fortiori* real —
it is also equal to ⟨χ, χ'⟩.
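The wrong-order phenomenon can be checked mechanically; a sketch (my own setup) with *S*_{3} acting by permutation matrices:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):          # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def rho(p):
    M = np.zeros((3, 3))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

# dual representation: g -> rho(g^{-1})^T is a homomorphism ...
dual = lambda p: rho(inverse(p)).T
assert all(np.allclose(dual(compose(p, q)), dual(p) @ dual(q))
           for p in G for q in G)
# ... whereas transpose alone reverses the composition order for some pair:
assert any(not np.allclose(rho(compose(p, q)).T, rho(p).T @ rho(q).T)
           for p in G for q in G)
```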

NB Artin's definition of the inner product on class functions,
or indeed on any functions on *G* (Chapter 9, (5.8), page 318),
makes this inner product semilinear in the __first__ variable.
This is consistent with his treatment elsewhere in the book
(see Chapter 4, (4.4), page 250), and is occasionally more convenient
— e.g. here we wouldn't have had to finish by observing that
the inner product is real and thus its own conjugate — but is
rarer than the convention that we (following Axler) have been using.
We have to tweak our development accordingly.

The operators introduced by Artin towards the end of his proof of
(most of) Theorem 5.9 have further uses when φ is the
character of an irreducible representation *V*.
Indeed consider
*P*_{φ} := *N*^{−1} Σ_{g∈G} φ^{*}(*g*) ρ(*g*), acting on a representation *W*. Since φ is a class
function we again find that *P*_{φ} is a *G*-endomorphism of *W*.
Thus (by Schur) if *W* is irreducible then
*P*_{φ} is multiplication by some scalar, and we can compute that scalar from the trace of *P*_{φ} on *W*
(whether or not *W* is irreducible).
Using the orthogonality relations we deduce that if
*W* is irreducible then
*P*_{φ} = 0 unless *W* is isomorphic with *V*, in which case
*P*_{φ} = 1/dim(*V*) times the identity. Hence if *W* is the direct sum of irreducibles
*V*_{i}, then dim(*V*)·*P*_{φ} is the projection of *W* onto the direct sum of those *V*_{i} that are isomorphic with *V*.
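As a sanity check (not in Artin), one can compute *P*_{φ} for the permutation representation of *S*_{3}, which decomposes as trivial ⊕ standard; with φ the character of the 2-dimensional standard representation, dim(*V*)·*P*_{φ} should be the projection onto the sum-zero subspace:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))
N = len(G)

def rho(p):
    M = np.zeros((3, 3))
    for i, pi in enumerate(p):
        M[pi, i] = 1.0
    return M

# character of the standard representation: chi_permutation - chi_trivial
# (real-valued here, so phi* = phi)
phi = {p: np.trace(rho(p)) - 1 for p in G}

P_phi = sum(phi[p] * rho(p) for p in G) / N
proj = 2 * P_phi                               # dim(V) = 2
Q = np.eye(3) - np.ones((3, 3)) / 3            # projection onto sum-zero subspace
assert np.allclose(proj, Q)
```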

If you know about integral closures and related ideas (see for instance
the beginning of the summary of field algebra)
then you can also use *P*_{φ} to show that the dimension of each irreducible representation *V*
is a factor of *N* = #(*G*): one checks that *N*/dim(*V*) is an algebraic integer, because *N·P*_{φ}
= Σ_{g∈G} φ^{*}(*g*) ρ(*g*) acts on *V* as the scalar *N*/dim(*V*), and this scalar is integral over **Z**; being also rational, it satisfies *N* / dim(*V*) ∈ **Z**.

Our final topic for Math 55a is the
Sylow theorems, covered by Artin in
Chapter 6, Section 4 (pages 205–208).
Corollary 4.3 (a finite group *G* contains
an element of order *p* for every prime factor
*p* of #(*G*)) is Cauchy's theorem.

The proof Artin gives for Sylow I can be extended to give the
*p*”*p ^{e}m*,

Artin's treatment of Sylow II may be easier to read if you just replace
each instance of *s* by *H*
(recall that *s* was introduced as the coset
1*H* of *H*).

First problem set / Linear Algebra I: vector space basics; an introduction to convolution rings

Second problem set / Linear Algebra II:
dimension of vector spaces; torsion groups/modules and divisible groups

Third problem set / Linear Algebra III:
Countable vs. uncountable dimension of vector spaces;
linear transformations and duality

NB: The “trickiness” of 2ii has nothing to do with
subtleties of infinite cardinals, the Axiom of Choice, etc.;
all you should need about countability (beyond the definition) is that
a countable union of countable sets is countable (as in Cantor's proof
of the countability of the rationals).

For Problem 9, the dual is the product
(not sum!) of the *V _{i}*.
If

For 2ii, the sequences

Here's Curt McMullen's proof, which does not depend on the cardinality of

Fourth problem set / Linear Algebra IV:
Eigenstuff, and a projective overture (in which
“*U*^{⊥}” denotes the annihilator of *U*,
viewed in the projective space **P***V*^{*})

Fifth problem set / Linear Algebra V:
Tensors, eigenstuff cont'd, and a bit on inner products

Problem 5i can be less confusing if you start with compositions of maps
*U* → *V* → *W* for a fixed middle space *V*.
Thus if *A* is in Hom(*V*,*W*)
then the map taking *X* to *AX* goes from
Hom(*U*,*V*) to Hom(*U*,*W*), i.e. from *U*^{*}⊗*V* to *U*^{*}⊗*W*.

Sixth problem set / Linear Algebra VI: Pairings and inner products, cont'd

Seventh problem set / Linear Algebra VII:
More about the spectrum, including spectral graph theory;
symplectic structures

For 2i, changing each coordinate of a λ-eigenvector *v*
to its absolute value preserves
⟨*v*, *v*⟩ and does not decrease ⟨*Av*, *v*⟩.

• For 2ii, show that if *v _{i}* > 0

• Finally, for 2iii if there is a

• NB The term “connected”, and the connection with Problem 5 (see the next paragraph), should suggest forming a graph with

• There is an analogous result for matrices with nonnegative entries that are not required to be symmetric: there is a real eigenvalue λ such that

For 5ii, let

Eighth problem set / Linear Algebra VIII:
Nilpotent operators and related topics; a natural inner product structure
on operators on a finite-dimensional space; exterior algebra;
recreational applications of the sign homomorphism on S_{n}

**corrected** 29.x.10
(unipotent operator in 3, not “unipotent vector”)

Ninth problem set / Linear Algebra IX:
More on exterior algebra and determinants

**corrected** to replace duplicated Problem 4 (ouch)
and fix the Axler quote

**corrected again** (ouch-prime)
to fix Problem 2:
__not__

For problem 2, certainly if
ω = *v*_{1} ^ *v*_{2}
then
ω ^ ω = *v*_{1} ^ *v*_{2} ^ *v*_{1} ^ *v*_{2}
= − *v*_{1} ^ *v*_{1} ^ *v*_{2} ^ *v*_{2}
= 0. But if ω = *v*_{1} ^ *v*_{2} + *v*_{3} ^ *v*_{4} for linearly independent {*v*_{1}, *v*_{2}, *v*_{3}, *v*_{4}}, then ω ^ ω = 2 *v*_{1} ^ *v*_{2} ^ *v*_{3} ^ *v*_{4} ≠ 0.

For problem 3, choose any basis *B* for *V*;
let ψ be the wedge product of all 4*n* basis vectors, and
use for *W* a basis of pure exterior products of
2*n*-element subsets of *B*.
Then two such products pair to zero unless the corresponding subsets are complementary in *B*, in which case they pair to ±ψ. Hence
*W* is the orthogonal direct sum of planes of signature (1, 1), so with
*r* := dim(*W*)/2 the pairing on *W* has signature (*r*, *r*): diagonalizing each plane gives an orthogonal basis with *r* positive and *r* negative vectors; take for *W*_{+} the span of the *r* positive basis vectors, and for
*W*_{−} the span of the *r* negative ones.

Tenth problem set / Linear Algebra X:
Signatures and real polynomials; determinants and distances;
Pontrjagin duality

**corrected**: in 3(ii), each
*x _{i}* and

also,

and,

• For 1(iii), the point is to find a polynomial you can compute easily from the coefficients of

Eleventh and final problem set:
Representations of finite groups

**corrected** overnight: in problem 3,
*S ^{k}*

Problem 3 shows one approach and a few typical applications for a result often known as “Burnside's Lemma” [though it was already well-known in Burnside's day, going back at least to Cauchy (1845) according to this Wikipage]. I thank Lewis Stiller for pointing out the connection with representation theory some years back.
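A sketch of the lemma in action (my own example: binary necklaces of length 6 under rotation, counted both by direct orbit enumeration and by averaging fixed points):

```python
from itertools import product

# Binary colorings of a length-6 cycle, up to the rotation group Z/6.
n = 6
colorings = list(product([0, 1], repeat=n))

def rotate(c, k):
    return c[k:] + c[:k]

# direct orbit count: one canonical (lexicographically least) rotation per orbit
orbits = {min(rotate(c, k) for k in range(n)) for c in colorings}

# Burnside/Cauchy: number of orbits = (1/|G|) * sum over g of #fixed points of g
burnside = sum(sum(1 for c in colorings if rotate(c, k) == c)
               for k in range(n)) // n

assert len(orbits) == burnside == 14
```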

The construction illustrated by Problem 4(iii) is the “Molien series” or “Hilbert-Molien series” of the ring of invariants of a representation of a finite group. (See for example this Wikipage; as of this writing, the example there happens to be the same as the one in this problem.) In general the denominator can always be taken to be a product of factors

For problem 7(iii), there are some alternative solutions for this particular case, but the intended solution was to show more generally: if

The

Another typo