Lecture notes for Math 55a: Honors Abstract Algebra (Fall 2017)

If you find a mistake, omission, etc., please let me know by e-mail.



Ceci n’est pas un Math 55a syllabus
[No, you don’t have to know French to take Math 55a. Googling ceci+n'est suffices to turn up the explanation, such as it is.]
The CAs for Math 55a are Vikram Sundar (vikramsundar@college) and Rohil Prasad (prasad01@college)
[if writing from outside the Harvard network, append .harvard.edu to ...@college].
CA office hours are Monday 8-10 PM in the Leverett Dining Hall, starting September 4 (same place and time that Math Night will start the week following).
Thanks to Vikram for setting up this Dropbox link for the CAs’ notes from class.

Section times:
Vikram Sundar: Monday 1-2 PM; Science Center room 112 on Sep.11, and room 222 from Sep.18 on.
Rohil Prasad: Thursday 4-5 PM, Science Center room 411
! If you are coming to class but not officially registered for Math 55 (e.g. you are auditing, or still undecided between 25a and 55a but officially signed up for 25a), send me your e-mail address so that I and the CAs can include you in class announcements.
My office hours for the week of 18-22 September will be Wednesday (Sep.20), not the usual Tuesday. (Still 7:30 to 9:00 PM in the Lowell House Dining Hall.)
Here is some more information from last year on the number 5777 etc. (converted to MathJax and with the added remark on $5779 = L_{27}/L_9$); as noted in the Sep.20 lecture, the fact that the palindrome 5775 factors so smoothly ($3 \cdot 5^2 \cdot 7 \cdot 11$) is also due in part to the fact that $5776 = 76^2$. Shanah Tovah!
! The diagnostic quiz will be given Wednesday, September 27 in class (11:07 AM to 12:00 noon). It will cover only material from the first three problem sets.


August 30: “Math blackboard” ($\rm\TeX$’s \mathbb font), such as $\mathbb R$, is a printed representation of a handwritten representation of ordinary boldface such as $\bf R$. When using $\rm\TeX$ (or $\rm\LaTeX$ etc.), you might as well use normal boldface. Either $\mathbf R$ or $\mathbb R$ means the set of real numbers, whether considered as a field, abelian group, metric space (more on this in Math 55b), or whatever other structure is relevant. Likewise $\mathbf C$ = $\mathbb C$ = the set of complex numbers; $\mathbf Q$ = $\mathbb Q$ = the set of rational numbers (quotients of integers — since the initial letter of “rational(s)” is preempted by the use of $\bf R$ for the reals); $\mathbf Z$ = $\mathbb Z$ = the set of integers (from German Zahlen); and in Axler, $\mathbf F$ = $\mathbb F$ = the field $\bf R$ or $\bf C$.

At least in the beginning of the linear algebra unit, we’ll be following the Axler textbook closely enough that supplementary lecture notes should not be needed. Some important extensions/modifications to the treatment in Axler:

While the third edition of Axler includes quotients and duality, it still lacks tensor algebra. This is no surprise, but it will not stop us in Math 55! Here’s an introduction. [As you might guess from \oplus, the TeXism for the tensor-product symbol is \otimes.]
Corrected 14.x.2017 [Alec Sun]: at the end of the first display on page 2, it’s $w_{ij}$, not $u_i \otimes v_j$. We’ll define the determinant of an operator $T$ on a finite-dimensional space $V$ as follows: $T$ induces a linear operator $\wedge^n T$ on the top exterior power $\wedge^n V$ of $V$ (where $n = \dim V);$ this exterior power is one-dimensional, so an operator on it is multiplication by some scalar; $\det(T)$ is by definition the scalar corresponding to $\wedge^n T$. The “top exterior power” is a subspace of the “exterior algebra” $\wedge^\bullet (V)$ of $V$, which is the quotient of the tensor algebra by the two-sided ideal generated by $\{ v \otimes v : v \in V\}.$ (Recall that this ideal also contains $v \otimes w + w \otimes v$ for all $v,w \in V.)$ We’ll still have to construct the sign homomorphism from the symmetric group of degree $\dim V$ to $\{1, -1\}$ to make sure that this exterior algebra is as large as we expect it to be, and in particular that the $(\dim(V))$-th exterior power has dimension 1 rather than zero.
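For a concrete instance of this definition (a worked $2 \times 2$ example, not from Axler): if $T$ has matrix $\left(\begin{smallmatrix} a & b \cr c & d \end{smallmatrix}\right)$ with respect to a basis $e_1, e_2$ of $V$, then using $e_i \wedge e_i = 0$ and $e_2 \wedge e_1 = -e_1 \wedge e_2$ we compute

```latex
(\wedge^2 T)(e_1 \wedge e_2)
  = (T e_1) \wedge (T e_2)
  = (a e_1 + c e_2) \wedge (b e_1 + d e_2)
  = (ad - bc)\, e_1 \wedge e_2,
```

so $\det T = ad - bc$, recovering the familiar formula.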

Interlude: normal subgroups; short exact sequences in the context of groups
A subgroup $H$ of $G$ is normal (satisfies $H = g^{-1} \! H g$ for all $g \in G)$ iff $H$ is the kernel of some group homomorphism from $G$ iff the injection $H \hookrightarrow G$ fits into a short exact sequence $\{1\} \to H \to G \to Q \to \{1\},$ in which case $Q$ is the quotient group $G/H.$ [The notation {1} for the one-element (“trivial”) group is usually abbreviated to plain 1, as in $1 \to H \to G \to Q \to 1.]$ This is not in Axler but can be found in any introductory text in abstract algebra; see for instance Artin, Chapter 2, section 10.
Examples: $1 \to A_n \to S_n \to \{ \pm 1 \};$ also, the determinant homomorphism ${\rm GL}_n(F) \to F^*$ gives the short exact sequence $1 \to {\rm SL}_n(F) \to {\rm GL}_n(F) \to F^* \to 1,$ and this works even if $F$ is just a commutative ring with unit as long as $F^*$ is understood as the group of invertible elements of $F$ — for example, ${\bf Z}^* = \{\pm 1\}.$
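A quick numerical sanity check of the second example (function names here are mine, chosen for illustration): the determinant is multiplicative, so its kernel ${\rm SL}_n$ is a normal subgroup of ${\rm GL}_n$, and over ${\bf Z}$ the invertible matrices are exactly those of determinant $\pm 1$.

```python
# Sketch: det as a homomorphism GL_2 -> F^*, checked on 2x2 integer matrices.

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[2, 1], [1, 1]]   # det 1: an element of SL_2(Z)
B = [[3, 5], [1, 2]]   # det 1 as well
C = [[1, 2], [0, -1]]  # det -1: still invertible over Z, since -1 is a unit

# det is multiplicative, so SL_2 (det = 1) is the kernel of det on GL_2:
assert det2(mat_mul(A, B)) == det2(A) * det2(B) == 1
assert det2(mat_mul(A, C)) == det2(A) * det2(C) == -1
```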

Some more tidbits about exterior algebra:

More will be said about exterior algebra when differential forms appear in Math 55b.

We’ll also show that a symmetric (or Hermitian) matrix is positive definite iff all its eigenvalues are positive iff it has positive principal minors (the “principal minors” are the determinants of the square submatrices of all orders containing the (1,1) entry). More generally we’ll show that the eigenvalue signs determine the signature, as does the sequence of signs of principal minors if they are all nonzero. More precisely: an invertible symmetric/Hermitian matrix has signature $(r,s)$ where $r$ is the number of positive eigenvalues and $s$ is the number of negative eigenvalues; if its principal minors are all nonzero then $r$ is the number of $j \in \{ 1, 2, \ldots, n \}$ such that the $j$-th and ($j-1$)-st minors have the same sign, and $s$ is the number of $j$ in that range such that the $j$-th and ($j-1$)-st minors have opposite sign [for $j=1$ we always count the “zeroth minor” as being the positive number 1]. This follows inductively from the fact that the determinant has sign $(-1)^s$ and the signature $(r',s')$ of the restriction of a pairing to a subspace has $r' \leq r$ and $s' \leq s.$
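The minor-sign recipe above is easy to try on a small example (the helper names below are mine, not the course's): for a $3 \times 3$ symmetric matrix built from a $2 \times 2$ block of signature $(1,1)$ plus a positive diagonal entry, the sign pattern of the principal minors recovers the signature $(2,1)$.

```python
# Sketch: signature of an invertible symmetric matrix from its principal
# minors, assuming all minors are nonzero (as in the statement above).

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def signature_from_minors(A):
    """(r, s): count sign agreements/disagreements of consecutive minors,
    with the 'zeroth minor' taken to be +1."""
    n = len(A)
    minors = [1] + [det([row[:k] for row in A[:k]]) for k in range(1, n + 1)]
    r = sum(1 for j in range(1, n + 1) if minors[j] * minors[j - 1] > 0)
    return r, n - r

# The 2x2 block [[2,1],[1,-1]] has det -3, so one eigenvalue of each sign;
# the extra diagonal entry 3 contributes one more positive eigenvalue.
A = [[2, 1, 0], [1, -1, 0], [0, 0, 3]]
assert signature_from_minors(A) == (2, 1)
```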

For positive definiteness, we have the two further equivalent conditions: the symmetric (or Hermitian) matrix $A = (a_{jk})$ is positive definite iff there is a basis $(v_j)$ of $F^n$ such that $a_{j,k} = \langle v_j, v_k \rangle$ for all $j,k$, and iff there is an invertible matrix $B$ such that $A = B^* \! B.$ For example, the matrix with entries $1 / (j+k-1)$ (“Hilbert matrix”) is positive-definite, because it is the matrix of inner products (integrals on [0,1]) of the basis $1, x, x^2, \ldots, x^{n-1}$ for the polynomials of degree $\lt n.$ See the 10th problem set for a calculus-free proof of the positivity of the Hilbert matrix, and an evaluation of its determinant.
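One can verify this claim about the Hilbert matrix directly in exact arithmetic (a sketch, with my own variable names): the entry $1/(j+k-1)$ is exactly $\int_0^1 x^{j-1} x^{k-1}\,dx$, and by Sylvester's criterion positivity of all the leading principal minors confirms positive definiteness.

```python
# Sketch: the 4x4 Hilbert matrix as a Gram matrix, checked with exact
# rational arithmetic.

from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

n = 4
# <x^a, x^b> = integral_0^1 x^(a+b) dx = 1/(a+b+1); with a = j-1, b = k-1
# this is 1/(j+k-1), the Hilbert matrix entry.
H = [[Fraction(1, j + k - 1) for k in range(1, n + 1)]
     for j in range(1, n + 1)]

# Sylvester's criterion: all leading principal minors positive.
minors = [det([row[:k] for row in H[:k]]) for k in range(1, n + 1)]
assert all(m > 0 for m in minors)
```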

Our source for representation theory of finite groups (on finite-dimensional vector spaces over $\bf C$) will be Artin’s Algebra, Chapter 9. We’ll omit sections 3 and 10 (which require not just topology and calculus but also, at least for §3, some material beyond 55b to do properly, namely the construction of Haar measures); also we won’t spend much time on §7, which works out in detail the representation theory of a specific group that Artin calls $I$ (the icosahedral group, a.k.a. $A_5$). There are many other sources for this material, some of which take a somewhat different point of view via the “group algebra” ${\bf C}[G]$ of a finite group $G$ (a.k.a. the algebra of functions on $G$ under convolution). See for instance Chapter 1 of Representation Theory by Fulton and Harris (mentioned in class); some further introductory remarks in this direction are a couple of paragraphs below. A canonical treatment of representations of finite groups is Serre’s Linear Representations of Finite Groups, which is the only entry for this chapter in the list of “Suggestions for Further Reading” at the end of Artin’s book (see p.604).

While we’ll work almost exclusively over $\bf C$, most of the results work equally well (though with somewhat different proofs) over any field $F$ that contains the roots of unity of order $\#(G)$, as long as the characteristic of $F$ is not a factor of $\#(G)$. [We also use the notation $|G|$ for the cardinality $\#(G)$.] Without roots of unity, many more results are different, but there is still a reasonably satisfactory theory. Dropping the characteristic condition leads to much trickier territory, e.g. even Maschke’s theorem (every finite-dimensional representation is a direct sum of irreducibles) fails; some natural problems are still unsolved a century-plus later!

Here’s an alternative viewpoint on representations of a finite group $G$ (not in Artin, though you can find it elsewhere, e.g. Fulton-Harris pages 36ff.): a representation of $G$ over a field $F$ is equivalent to a module for the group ring $F[G].$ The group ring is an associative $F$-algebra (commutative iff $G$ is commutative) that consists of the formal $F$-linear combinations of group elements. This means that $F[G]$ is $F^G$ as an $F$-vector space, and the algebra structure is defined by setting $e_{g_1} e_{g_2} = e_{g_1 g_2}$ for all $g_1, g_2 \in G$, together with the $F$-bilinearity of the product. It follows that if we identify elements of the group ring with functions $G \to F$ then the multiplication rule is $(f_1 * f_2)(g) = \sum_{g_1 g_2 = g} f_1(g_1) \, f_2(g_2)$ — yes, it’s convolution again. To identify an $F[G]$-module with a representation, use the action of $F$ to define the vector space structure, and let $\rho(g)$ act by multiplication by the unit vector $e_g$. In particular, the regular representation is $F[G]$ regarded in the usual way as a module over itself. If we identify the image of this representation with certain permutation matrices of order $\#(G)$, we get an explicit model of $F[G]$ as a subalgebra of the algebra of square matrices of the same order. For example, if $G = {\bf Z} / n {\bf Z}$ we recover the algebra of circulant matrices of order $n$.
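The circulant example is easy to check by hand (a sketch; the function names are mine): for $G = {\bf Z}/n{\bf Z}$ the convolution product on functions $G \to F$ agrees with multiplication by the circulant matrix of the first factor.

```python
# Sketch: group-ring multiplication for Z/nZ is convolution, and the
# regular representation realizes it as circulant matrices.

def convolve(f1, f2):
    """(f1 * f2)(g) = sum over a + b = g (mod n) of f1(a) f2(b)."""
    n = len(f1)
    return [sum(f1[a] * f2[(g - a) % n] for a in range(n)) for g in range(n)]

def circulant(f):
    """Matrix of multiplication-by-f in the basis e_0, ..., e_{n-1}:
    entry (i, j) is f(i - j mod n)."""
    n = len(f)
    return [[f[(i - j) % n] for j in range(n)] for i in range(n)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

f1, f2 = [1, 2, 0, 3], [4, 0, 1, 0]
# Multiplying by the circulant matrix of f1 is the same as convolving with f1:
assert mat_vec(circulant(f1), f2) == convolve(f1, f2)
```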

The regular representation of a group $G$ is the special case $G=S$ of the permutation representation associated to an action of $G$ on a set $S$ (which in turn can be defined as a homomorphism, call it $h,$ from $G$ to the group of permutations of $S$; NB as with linear representations there is no requirement that $h$ be injective — if it is injective, the action is said to be “faithful”). The permutation representation associated to $h$, call it $\rho_h,$ has dimension $\#S$, and can be regarded as the vector space ${\bf C}^S$ with basis $\{ e_s : s \in S \}$ indexed by $S$. (Thus we usually assume that $S$ is finite.) Any $g \in G$ takes $e_s$ to $e_{g(s)}$ (more fully, to $e_{(h(g))(s)}$); this together with linearity gives the action of $G$ on all of ${\bf C}^S$. Warning: if we write the typical element of ${\bf C}^S$ as $\sum_{s \in S} c_s e_s$ then $\rho_h(g)$ takes it to $\sum_{s \in S} c_s e_{g(s)}$, which in general is not the same thing as $\sum_{s \in S} c_{g(s)} e_s$ as you might expect, but $\sum_{s\in S} c_{g^{-1}(s)} e_s$. Indeed if we tried to define a representation by $(\rho(g)) \left(\sum_{s \in S} c_s e_s\right) = \sum_{s \in S} c_{g(s)} e_s$ then we would find that $\rho(g_1) \rho(g_2)$ is not $\rho(g_1 g_2)$ but $\rho(g_2 g_1)$ [check this!], so we wouldn’t get a representation at all unless $G$ is abelian!
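Here is the warning made concrete (a sketch with my own helper names, permutations of $\{0,1,2\}$ written as lists): the rule $c_s \mapsto c_{g^{-1}(s)}$ gives a homomorphism, while the "expected" rule $c_s \mapsto c_{g(s)}$ reverses the order of composition.

```python
# Sketch: the correct coordinate action vs. the tempting-but-wrong one.

def compose(g1, g2):
    """(g1 g2)(s) = g1(g2(s)), permutations as lists."""
    return [g1[g2[s]] for s in range(len(g2))]

def inverse(g):
    inv = [0] * len(g)
    for s, t in enumerate(g):
        inv[t] = s
    return inv

def rho(g, c):
    """Correct action: sends sum c_s e_s to sum c_s e_{g(s)},
    i.e. the new s-coordinate is c_{g^{-1}(s)}."""
    g_inv = inverse(g)
    return [c[g_inv[s]] for s in range(len(c))]

def rho_naive(g, c):
    """Tempting but wrong: new s-coordinate is c_{g(s)}."""
    return [c[g[s]] for s in range(len(c))]

g1, g2 = [1, 2, 0], [1, 0, 2]   # a 3-cycle and a transposition
c = [10, 20, 30]                # coordinates of a vector in C^S

assert rho(g1, rho(g2, c)) == rho(compose(g1, g2), c)               # homomorphism
assert rho_naive(g1, rho_naive(g2, c)) == rho_naive(compose(g2, g1), c)  # reversed!
```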

Another way to describe this action: identify ${\bf C}^S$ with the space of maps $S \to {\bf C}$ (so $\sum_{s \in S} c_s e_s$ corresponds to the map $s \mapsto c_s$), and then $\rho_h(g)$ takes the map $f$ to $f \circ h(g^{-1}).$ This all works with $\bf C$ replaced by any ground field, as does the formula for the character of this representation, which states that $\chi_{\rho_h} (g)$ is the number of elements of $S$ fixed by $h(g)$ (though as usual this is not as informative in positive characteristic).
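The character formula is also quick to check (sketch, my names): the character of $\rho_h$ at $g$ is the trace of the permutation matrix of $h(g)$, and the only basis vectors contributing to the diagonal are those with $g(s) = s$.

```python
# Sketch: trace of a permutation matrix = number of fixed points.

def perm_matrix(g):
    """Matrix sending e_s to e_{g(s)}: entry (i, j) is 1 iff i = g(j)."""
    n = len(g)
    return [[1 if i == g[j] else 0 for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def fixed_points(g):
    return sum(1 for s, t in enumerate(g) if s == t)

# identity, a transposition, a 3-cycle: 3, 1, 0 fixed points respectively.
for g in ([0, 1, 2], [1, 0, 2], [1, 2, 0]):
    assert trace(perm_matrix(g)) == fixed_points(g)
```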

It seems Artin does not mention the following: if $\phi$ is the character of an irreducible representation $U$, then for any representation $(V,\rho)$ the map $P_\phi = \frac{\dim U}{\#G} \sum_g \overline{\phi(g)} \rho(g)$ is a $G$-endomorphism, and if $V$ is irreducible then (by Schur and the orthogonality formula) $P_\phi$ is the identity if $V \cong U$ and zero otherwise. That is, $P_\phi$ is projection to the “$U$-isotypic subspace” of $V$: if $V = \oplus_i V_i$ is any decomposition into irreducibles then $P_\phi$ is projection to the subsum of those $V_i$ that are isomorphic with $U$. In particular, this isotypic subspace is an invariant of $V$ (whether or not $V$ is finite dimensional). This is a grand generalization of the fact that if $\iota$ is any involution of a vector space $V$ (over a field of characteristic other than 2) then $\frac12(1 \pm \iota)$ is projection to the $(\pm 1)$-eigenspace of $\iota$ (which is the special case $G = \{1, \iota\}).$
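The special case at the end can be checked in a few lines (a minimal sketch, my own names, with $\iota$ the involution swapping two coordinates): $\frac12(1 \pm \iota)$ land in the $(\pm 1)$-eigenspaces and sum to the identity.

```python
# Sketch: the projectors (1/2)(1 +/- iota) for an involution iota.

from fractions import Fraction

half = Fraction(1, 2)

def iota(v):
    """An involution: swap the two coordinates of v."""
    return [v[1], v[0]]

def proj(v, sign):
    """(1/2)(1 + sign * iota) applied to v."""
    w = iota(v)
    return [half * (v[i] + sign * w[i]) for i in range(2)]

v = [Fraction(3), Fraction(7)]
plus, minus = proj(v, +1), proj(v, -1)

assert iota(plus) == plus                         # (+1)-eigenspace
assert iota(minus) == [-x for x in minus]         # (-1)-eigenspace
assert [plus[i] + minus[i] for i in range(2)] == v  # projectors sum to 1
```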

(In particular, $\sum_g \overline{\phi(g)} \rho(g)$ acts on $U$ by multiplication by $\#G \, / \dim(U);$ once one knows enough about algebraic integers and related concepts, it follows that this is an integer, and thus that the dimension of every irreducible representation of a finite group is a factor of the group order. We alas will not be able to prove this fact in Math 55.)

Finally, apropos of orthogonality of characters: if $T$ is the character table then orthogonality of characters is tantamount to $T D T^* = |G| I,$ where $T^*$ is the Hermitian transpose and $D$ is the diagonal matrix whose diagonal entries are the sizes of the conjugacy classes (in the same order as the columns of the table). If $G$ is abelian, this simplifies to $T T^* = |G| I$, which then implies $T^* T = |G| I$ so the columns of $T$ are orthogonal too — which we have seen already. This remains true in the general case: since $T$ is invertible (with inverse $|G|^{-1} D T^*),$ we may conjugate $T D T^* = |G| I$ by $T$ to deduce $D T^* T = |G| I$ and thus $T^* T = |G| D^{-1}$. This says that the columns of $T$ are orthogonal (with respect to the usual complex inner product), and for any $g \in G$ the column of $T$ corresponding to the conjugacy class $[g]$ has squared norm $|G| / \#[g],$ which is the size of the centralizer of $g$: $$\sum_\chi |\chi(g)|^2 = \# C_g.$$ For example, for the character table $$ \left[ \begin{array}{rrr}1 & 1 & 1 \cr 2 & 0 & -1 \cr 1 & -1 & 1\end{array} \right] $$ of the symmetric group $S_3$, these squared norms are $6, 2, 3$, which are indeed the sizes of the corresponding centralizers. As a special case, taking $g=1$ we again recover the sum-of-squares formula $|G| = \sum_\chi \chi(1)^2.$
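Both identities are easy to verify numerically for the $S_3$ table above (a sketch; the variable names are mine):

```python
# Sketch: check T D T* = |G| I and the column norms for the S_3
# character table displayed above.

T = [[1, 1, 1],
     [2, 0, -1],
     [1, -1, 1]]
class_sizes = [1, 3, 2]   # {e}, transpositions, 3-cycles
G_order = 6

# Row orthogonality weighted by class sizes: T D T* = |G| I.
for i in range(3):
    for j in range(3):
        s = sum(class_sizes[k] * T[i][k] * T[j][k] for k in range(3))
        assert s == (G_order if i == j else 0)

# Column norms: sum over chi of |chi(g)|^2 = |G| / #[g], the centralizer order.
col_norms = [sum(T[i][k] ** 2 for i in range(3)) for k in range(3)]
assert col_norms == [6, 2, 3]
assert all(col_norms[k] == G_order // class_sizes[k] for k in range(3))
```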

A few remarks around Artin’s development in Chapter 6 leading up to the Sylow theorems:


Thanks to Vikram for this $\rm\LaTeX$ template for problem-set solutions (here’s what the resulting PDF looks like). They ask that e-mail submissions of problem sets have “Math 55 homework” in the Subject line.

First problem set / Linear Algebra I: vector space basics; an introduction to convolution rings
Clarifications:
• “Which if any of these basic results would fail if $\bf F$ were replaced by $\bf Z$?” — but don’t worry about this for problems 7 and 24, which specify $\bf R$.
• Problem 12: If you see how to compute this efficiently but not what this has to do with Problem 8, please keep looking for the connection.
Here’s the “Proof of Concept” mini-crossword with links concerning the ∎ symbol. Here’s an excessively annotated solution.

Second problem set / Linear Algebra II: dimension of vector spaces; torsion groups/modules and divisible groups
About Problem 5: You may wonder: if not determinants, what can you use? See Axler, Chapter 4, namely 4.8 through 4.12 (pages 121–123), and note that the proof of 4.8 (using techniques we won’t cover till next week) can be replaced by the ordinary algorithm for polynomial long division, which you probably learned with real coefficients but works over any field. While I’m at it, 4.7 (page 120) works over any infinite field; Axler’s proof is special to the real and complex numbers, but 4.12 yields the result in general. (We already remarked that this result does not hold for finite fields.)

Third problem set / Linear Algebra III: Countable vs. uncountable dimension of vector spaces; linear transformations and duality
corrected 18 September (Mark Kong):
• Problem 2: Suppose that for some (finite) $n$ we can extend $B_0$ by $n$ vectors (not “extend $B$ by $n$ vectors” etc.).
• Also, in Problem 1 Mark notes that one already needs a bit of the Axiom of Choice even to prove the fact (which I blithely asserted in class) that a countable union of countable, or even finite, sets is itself countable. (If you can enumerate a countable disjoint union $\bigcup_{i=1}^\infty S_i$ of countable or finite sets, then you can choose an element of $\prod_{i=1}^\infty S_i$ by choosing from each $S_i$ the element that comes earliest in the enumeration.) Go ahead and assume this for Problem 1.
• (And in Problem 10 it’s subsets of fewer than $e$ elements of $F$, not $e$-element subsets — but that’s still not polynomial in $q$.)

Fourth problem set / Linear Algebra IV: Duality, and connections with projective spaces and with vector spaces of polynomials
corrected 27.ix.2017: In problem 2ii, we need nonzero $x \in F$ such that $x^n \neq 1$ (not “$x^n=1$”, which always exists); and the introductory sentence now makes explicit the intention that $F$ is a finite field of $q$ elements also for problem 3. (Noted by Forrest Flesher)

Fifth problem set / Linear Algebra V: “Eigenstuff” (preceded by prelude: exact sequences and more duality)
corrected 8.x.2017: CJ Dowd is the first to note that in problem 9 (Axler 5A:31) we cannot quite let $\bf F$ be arbitrary: if it is finite and of size less than $m$ then it cannot contain enough pairwise distinct eigenvalues to accommodate $v_i$ for each $i=1,2,\ldots,m$! Fortunately this is the only obstruction, so for this problem assume that $\bf F$ contains at least $m$ distinct elements.

Sixth problem set / Linear Algebra VI: $\bigotimes$ (and also eigenstuff cont’d, and a bit on inner products)

Seventh problem set / Linear Algebra VII: Inner products etc.

Eighth problem set / Linear Algebra VIII: The spectral theorem; spectral graph theory; symplectic structures
Problems 7 and 8 postponed till Friday, 3 November at noon.

Ninth problem set / Linear Algebra IX: Trace, determinant, and more exterior algebra

Tenth problem set: Linear Algebra X (determinants and distances); representations of finite abelian groups (Discrete Fourier transform)
(Yes, in Problem 9i the equation “$A^4 = N^2$” means $A^4 = N^2 I$ [i.e. $P(A) = 0$ where $P$ is the polynomial $X^4 - N^2 \in {\bf C}[X]$].)
correction to 1i (CJ Dowd): the equality condition is not quite right when $\det A = 0$ (and $n \geq 3)$: in that case equality holds iff some $v_i = 0$, and then the other $v_j$ are orthogonal to that $v_i$ but need not be orthogonal to each other.

Problem set 10.99557…

Eleventh and final problem set: Representations of finite abelian groups
corrected 28.xi.2017: Fan Zhou notes that in part (i) of Problem 5 $g_1,g_2$ are in $G_1,G_2$ respectively, not $V_1,V_2$.