If you find a mistake, omission, etc., please let me know by e-mail.

The orange balls mark our current location in the course, and the current problem set.

[No, you don’t have to know French to take Math 55a.]

The CAs for Math 55a are Vikram Sundar and Rohil Prasad.


CA office hours are Monday 8-10 PM in the Leverett Dining Hall, starting September 4 (same place and time that Math Night will start the week following).

Thanks to Vikram for setting up this Dropbox link for the CAs’ notes from class.

**Section times**:

Vikram Sundar: Monday 1-2 PM; Science Center room 112 on Sep.11,
and room 222 from Sep.18 on.

Rohil Prasad: Thursday 4-5 PM, Science Center room 411

If you are coming to class but not
officially registered for Math 55 (e.g. you are auditing,
or still undecided between 25a and 55a but officially signed up for 25a),
send me your e-mail address
so that the CAs and I can include you in class announcements.

My office hours for the week of 18-22 September
will be **Wednesday** (Sep.20), not the usual Tuesday.
(Still 7:30 to 9:00 PM in the Lowell House Dining Hall.)

Here
is some more information from last year on the number 5777 etc.
(converted to MathJax and with the added remark on $5779 = L_{27}/L_9$);
as noted in the Sep.20 lecture, the fact that the palindrome 5775 factors
so smoothly ($3 \cdot 5^2 \cdot 7 \cdot 11$)
is also due in part to the fact that $5776 = 76^2$. Shanah Tovah!
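These numerical facts are easy to verify; here is a quick stdlib-only Python check (the helper names `factorize` and `lucas` are mine, not from the course notes):

```python
# Sanity check of the numerology above: 5775 = 3 * 5^2 * 7 * 11,
# 5776 = 76^2, and 5779 = L_27 / L_9 where L_n are the Lucas numbers.

def factorize(n):
    """Trial-division factorization; fine for small n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def lucas(n):
    """Lucas numbers: L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert factorize(5775) == [3, 5, 5, 7, 11]
assert 5776 == 76 ** 2 == lucas(9) ** 2   # L_9 = 76
assert lucas(27) == 5779 * lucas(9)       # 5779 = L_27 / L_9
```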

The diagnostic quiz will be given
**Wednesday, September 27** in class (11:07 AM to 12:00 noon).
It will cover only material from the first three problem sets.

August 30: “Math blackboard” ($\rm\TeX$’s …)

At least in the beginning of the **linear algebra**
unit, we’ll be following the Axler textbook closely enough that
supplementary lecture notes should not be needed. Some important
extensions/modifications to the treatment in Axler:

- [see Axler, page 5] *Pace* the boxed note on that page, virtually *all* mathematicians say and write “$n$-tuple” (more fully, “ordered $n$-tuple”), while I cannot recall another instance of “list” used for this as Axler does. (One sometimes sees “tuple” for an $n$-tuple of unspecified length $n$, and “ordered pair” and perhaps “ordered triple”, “ordered quadruple”, etc. for $n = 2, 3, 4, \ldots$.)
- [cf. Axler, Notation 1.6 on page 4, and the “Digression on Fields” on page 10]

Unless noted otherwise, $\bf F$ may be an arbitrary field, not only $\bf R$ or $\bf C$. The most important fields other than those of real and complex numbers are the field $\bf Q$ of rational numbers, and the finite fields ${\bf Z} / p {\bf Z}$ ($p$ prime). Other examples are: the field ${\bf Q}(i)$ of complex numbers with rational real and imaginary parts; more generally, ${\bf Q}(d^{1/2})$ for any non-square rational number $d$; the “$p$-adic numbers” ${\bf Q}_p$ ($p$ prime), of which we’ll say more when we study topology next term; and more exotic finite fields such as the 9-element field $({\bf Z}/3{\bf Z})(i)$. Here’s a review of the axioms for fields, vector spaces, and related mathematical structures.
- [cf. Axler, p.28 ff.] We define the *span* of an arbitrary subset $S$ of (or tuple in) a vector space $V$ as follows: it is the set of all (finite) linear combinations $a_1 v_1 + \cdots + a_n v_n$ with each $v_i$ in $S$ and each $a_i$ in $F\!$. This is still the smallest vector subspace of $V$ containing $S$. In particular, if $S$ is empty, its span is by definition $\{0\}$. We do *not* require that $S$ be finite.
- Warning: in general the space $F[X]$ (a.k.a. ${\cal P}(F)$) of polynomials in $X$, and its subspaces ${\cal P}_n(F)$ of polynomials of degree at most $n$, might not be naturally identified with a subspace of the space $F^F$ of functions from $F$ to itself. The problem is that two different polynomials may yield the same function. For example, if $F$ is the field of $2$ elements then the polynomial $X^2-X$ gives rise to the zero function. In general, different polynomials can represent the same function from the field $F$ to itself if and only if $F$ is finite — do you see why?
- (See also Exercise 11 in Axler 1.C, assigned as part of the first problem set)
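The warning above about polynomials versus the functions they induce can be checked directly; here is a small sketch in plain Python, representing ${\bf Z}/p{\bf Z}$ by the integers $0,\dots,p-1$ (the helper name `poly_to_function` is mine):

```python
# Over a finite field F of q elements there are only q^q functions F -> F,
# but infinitely many polynomials, so distinct polynomials must collide.
# Example from the text: over Z/2Z the polynomial X^2 - X induces
# the zero function, since 0^2 - 0 = 0 and 1^2 - 1 = 0.

def poly_to_function(coeffs, p):
    """Evaluate the polynomial sum(coeffs[i] * X^i) at every point of Z/pZ,
    returning the induced function as a tuple of values."""
    return tuple(sum(c * x**i for i, c in enumerate(coeffs)) % p
                 for x in range(p))

# X^2 - X has coefficient list [0, -1, 1]; over Z/2Z it induces
# the zero function:
assert poly_to_function([0, -1, 1], 2) == (0, 0)

# More generally X^p - X induces the zero function on Z/pZ
# (Fermat's little theorem):
for p in [2, 3, 5, 7]:
    coeffs = [0] * (p + 1)
    coeffs[1], coeffs[p] = -1, 1          # the polynomial X^p - X
    assert poly_to_function(coeffs, p) == (0,) * p
```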

If $U_i$ are any subspaces of a vector space $V\!$, then so is their intersection $\cap_i U_i$. Note that this is not limited to finite intersections: $i$ could range over an “index set” $I$ of any cardinality (so we would write the intersection as $\cap_{i \in I} U_i$). We don’t usually want to intersect an *empty* family of sets (do you see why not?), but for subsets of a given set $V$ we can declare that $\cap_{i\in\emptyset} U_i = V$.
- For any field (or even any ring) $F$ there is a canonical
ring homomorphism, call it $h$, from $\bf Z$ to $F\!$.
“Ring homomorphism” means:
$h(0) = 0$, $h(1) = 1$, and for any integers $m,n$ we have
$h(m+n) = h(m) + h(n)$ and $h(mn) = h(m) \, h(n)$
(and $h(m-n) = h(m) - h(n)$, but this already follows from the
other properties, as indeed does $h(0)=0$).
But this doesn’t quite mean that we get an isomorphic copy of
$\bf Z$ in $F\!$, because $h$ might not be injective.
Equivalently, the *kernel* (that is, the preimage $h^{-1}(\{0\}) = \{n : h(n) = 0\}$, call it $I$) might be larger than just $\{0\}$. In general, $I$ must be an *ideal*, i.e. an additive subgroup of $\bf Z$ that is closed under multiplication by *arbitrary* integers (whether in $I$ or not — this mimics the definition of a subspace, though as it happens for ideals in $\bf Z$ it’s automatic). Now every ideal in $\bf Z$ is either the zero ideal $\{0\}$ or $(n) := \{ cn \mid c \in {\bf Z}\}$ for some integer $n > 0$ (namely the least positive element of the ideal), called the (positive) *generator* of the ideal. When $F$ is a ring, any $n$ may arise as the generator of $\ker(h)$, most easily for the ring ${\bf Z} / n {\bf Z}$ of integers $\bmod n$. But if $F$ is a field and $\ker h = (n)$ then $n$ must be either zero or prime, lest $F$ have zero divisors (elements $a$ and $b$, neither zero, for which $ab=0$). This $n$ is then called the *characteristic* of the field $F\!$. The familiar fields $\bf Q$, $\bf R$, $\bf C$ all have characteristic zero. For any prime $p$, there are fields of characteristic $p$, notably the “prime field” ${\bf Z} / p {\bf Z}$ (mentioned above; the key fact here, from elementary (but nontrivial) number theory, is that any nonzero element of ${\bf Z} / p {\bf Z}$ has a multiplicative inverse). This field ${\bf Z} / p {\bf Z}$ and other finite fields have important uses in number theory, combinatorics, computer science, and elsewhere, often using the linear algebra that we develop in Math 55a.
- [cf. the boxed note on page 42 of Axler] It is natural to wonder whether
**every** vector space, finite-dimensional or not, has a basis. The polynomial ring $F[z]$, considered as a vector space over $F$ (and denoted by a fancy script $\mathcal P$ in Axler), does have a basis (the powers of $z$), as does a polynomial ring in several variables, or even infinitely many (see the next item); but does $F^\infty$? The answer is yes — but only under the Axiom of Choice (equivalently, Zorn’s Lemma)! [I can write “but only under” because it is known that Choice/Zorn is *equivalent* to the claim that every vector space has a basis. Don’t spend too much time trying to find an explicit basis for $F^\infty$, or for $\bf R$ as a vector space over $\bf Q$ (a “Hamel basis”)…] Using the same tool one can prove analogues of some other results in Chapter 2, such as 2.33 (p.41: every linearly independent set extends to a basis), and thus 2.34 (p.42: every subspace is a direct summand; again, don’t spend too much time trying to do this explicitly for $\bf Q$ as a subspace of the $\bf Q$-vector space $\bf R$, or for $\oplus_{n\geq1} F$ as a subspace of $F^\infty$!). NB some other results clearly fail in infinite dimensions, even when we have an explicit basis; e.g. the even powers of $z$ form a linearly independent subset of $F[z]$ that has the same cardinality as a basis but is not a basis.
- However, 2.31 (p.40: every spanning list contains a basis)
still holds with no further axioms for spanning sets $S$
of arbitrary size, as long as $V$ is finite dimensional.
The reason is that $V$ has a finite spanning set, say $S_0$,
and every element
of $S_0$ is a linear combination of elements of $S$, and since linear combinations are of necessity finite it takes only a finite subset of $S$ to span $S_0$ and thus $V$. Now apply the proof of 2.31 to this finite subset. We may call this generalization “2.31+”.
- Here’s an extreme example of
how basic theorems about finite-dimensional vector spaces can become
utterly false for finitely-generated modules:
a module generated by just one element
can have a submodule that is not finitely generated.
Indeed, for any field $F$, let $A$ be the ring of polynomials
in infinitely many
variables $X_j$. [The letter $A$ is a common name for a ring, from French *anneau*, cognate with English “annulus”.] As usual we can regard $A$ as a module over itself, with a single generator 1. Then a submodule is just an ideal of the ring. Choose the ideal $I$ generated by all the $X_j$, which consists of all polynomials with constant coefficient equal to 0. If there are infinitely many indices $j$, then $I$ is infinitely generated; indeed any generating set must be at least as large as the index set of $j$’s, so for every cardinal $\aleph$ we can make a ring $A$ with a singly-generated module (namely $A$ itself) and with a submodule that cannot be generated by fewer than $\aleph$ elements.

For a subtler example, consider the ring we might call “$F[X^{1/2^\infty}]$”, consisting of $F$-linear combinations of monomials $X^{n/2^k}$ for arbitrary nonnegative integers $n$ and $k$. Again let $I$ be the ideal generated by the nonconstant monomials, which is not finitely generated, though there are generating sets that are “only” countably infinite. The new behavior involves the countable generating set $\{ X^{1/2^k} \mid k \geq 0 \}$: there is no minimal generating subset, because each $X^{1/2^k}$ is a multiple of $X^{1/2^{k'}}$ for any $k' \gt k$. Likewise for the ring generated by all monomials $X^r$ with $r$ any nonnegative rational number (or even all $X^r$ with $r$ any nonnegative real number).
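The redundancy of each generator can be illustrated mechanically; in this sketch a monomial $X^r$ is represented just by its exponent $r$ as a Python `Fraction` (the helper names are mine):

```python
from fractions import Fraction

def is_dyadic_exponent(r):
    """True if r = n / 2^k with n, k nonnegative integers, i.e. r is a
    nonnegative rational whose denominator is a power of 2 -- exactly
    the exponents of monomials in the ring "F[X^(1/2^infty)]"."""
    return r >= 0 and (r.denominator & (r.denominator - 1)) == 0

def divides(a, b):
    """X^a divides X^b inside the ring iff X^(b-a) is also in the ring."""
    return is_dyadic_exponent(b - a)

gens = [Fraction(1, 2**k) for k in range(10)]   # exponents of X^(1/2^k)
# Each generator is a multiple of the next one, so it can be discarded:
for k in range(9):
    assert divides(gens[k + 1], gens[k])
# ... hence no subset of this generating set is minimal.
```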

(When $A$ is Noetherian, submodules of finitely-generated modules *are* finitely generated, but might still require more generators; for example, there are Noetherian rings $A$ with “non-principal ideals” $I$, which give examples of a 1-generator module with a submodule that requires at least 2 generators.)
- Please avoid Axler’s notation “product” and
“$V \times W$” (p.91, 3.71 ff.).
I understand the motivation for this notation: it is formally correct,
and avoids the need to distinguish between “external direct sum”
(the usual name for that vector space) and “internal direct sum”
(a vector space sum [within some larger vector space] that happens to be direct).
The problem with this is that in Math 55 (and ubiquitously in the literature)
we shall introduce before long a “tensor product”
$V \otimes W$ of vector spaces, whose dimension is the product of the
dimensions of $V$
and $W$ when those two dimensions are finite; and it would be a much bigger source of confusion to have *that* notation coexist with “$V \times W$” where the dimensions add. So please stick with “$V \oplus W$” and the name “external direct sum” — or if you must, “Cartesian product” to avoid confusion with tensor products. For a possibly infinite Cartesian product, which is *not* the same as a direct sum (because an element of the direct sum must have only finitely many nonzero components), we still have the notation $\Pi_{i \in I} V_i$ to distinguish the Cartesian product from the direct sum $\oplus_{i \in I} V_i$.
- Apropos Axler 2.43 (page 47), a warning: the formula $\dim(U+W) = \dim(U)+\dim(W) - \dim(U\cap W)$, and its analogy with the inclusion-exclusion principle, may lead you to expect a similar formula for $\dim(U_1+U_2+U_3)$ for any three subspaces $U_1,U_2,U_3$ of a vector space; but that expected generalization is (in)famously false in general (and likewise for four or more subspaces)!
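Here is a concrete check of that failure, with three distinct lines through the origin in $F^2$; the rank routine below is a minimal exact Gaussian elimination over $\bf Q$ (my own helper, stdlib only):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r, col, ncols = 0, 0, len(rows[0]) if rows else 0
    while r < len(rows) and col < ncols:
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r, col = r + 1, col + 1
    return r

# Three distinct lines through the origin in F^2:
U1, U2, U3 = [[1, 0]], [[0, 1]], [[1, 1]]
# dim(U1 + U2 + U3) is the rank of the stacked spanning vectors:
assert rank(U1 + U2 + U3) == 2
# But each pairwise (and the triple) intersection is {0}, so the naive
# inclusion-exclusion guess would be 1 + 1 + 1 - 0 - 0 - 0 + 0 = 3.  Wrong!
```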
- As with the notions of span and linear combination, the definition of
a linear transformation makes sense for modules over any
ring $A$ (whether commutative or not), and in that generality is called an *$A$-module homomorphism* (so you now know the “morphisms” in the “category” of $A$-modules); when $A$ is a skew field, we still call this a linear transformation, and the “rank-nullity theorem” (3.22, page 63) still holds for finite-dimensional vector spaces in that context.
- Suppose $T: V \to W$ is a linear transformation.
Axler’s notation for the image of
*T* was already becoming rather old-fashioned when he wrote the first edition of his book; these days simply $T(V)$ is common (and likewise for any function at all). The terminology “null space” (whether one or two words) for $T^{-1}(\{0\})$ is also somewhat quaint, and we usually say “kernel” and write “$\ker(T)$” [and $\rm\LaTeX$ already provides the command `\ker` to typeset this properly]. While I’m at it, best to avoid the use of “one-to-one” to mean “injective” (see boxed note on page 60), because it is also sometimes used for “bijective”. Also, the $\rm\LaTeX$ for ${\cal L}(V,W)$ is `{\cal L}(V,W)`; note the brackets around `\cal L`, without which you would get $\cal L(V,W)$.
- Here’s a page of notes on “Lemma 3.?” and some related observations on how $\rm Hom$ connects with finite and infinite direct sums.
- More notes on notation: I understand why Axler wants to distinguish
$V'$ and $T'$ (dual space and transformation) from $V^*$ and $T^*$, and
$U^0$ (annihilator) from $U^\perp$ (see the boxed note on page 104).
I’ll try to stick with $U^0$ in this class. But for the duals,
using “ $\!\phantom|'\!$ ” this way incurs a steep price: it deprives us of the very useful
construction exemplified by “let $V$ and $V'$ be vector spaces”;
we already have few enough good letters to name mathematical structures
that even $\pi$ is pressed into double duty (not just $3.14159\ldots$ but
also the quotient projection from $V$ to $V/U$). I’ll stick with the common $V^*$ and $T^*$ here.
- An equivalent statement of the identity $(ST)^* = T^* S^*$
(third part of 3.101, page 104 of Axler), together with $(I_V)^* = I_{V^*}$
(which Axler might not even bother stating explicitly),
is that duality of vector spaces and linear transformations constitutes a
“contravariant functor” from the category of $F$-vector
spaces and linear transformations to itself.
- The results about quotient spaces and duality in sections E and F of
Chapter 3 are often described in terms of
*exact sequences*. A sequence $\ \cdots \to L \to M \to N \to \cdots\ $ of linear transformations (or $A$-module homomorphisms, “etc.”) is said to be “exact at $M$” if the kernel of the map $M \to N$ is the image of the map $L \to M$ (that is, if the elements of $M$ that go to zero in $N$ are precisely those that come from $L$). The sequence is “exact” if it is exact at each step with both an incoming and an outgoing map. In particular, a map $M \to N$ is injective __iff__ it extends to a sequence $0 \to M \to N$ that is exact at $M$, and surjective __iff__ it extends to a sequence $M \to N \to 0$ that is exact at $N$. [In this context “0” is commonly used for the trivial vector space (or module, etc.) $\{0\}$. Note that in each case there is no choice about the function from or to that trivial vector space 0, and likewise at least for modules. Another notation that signals injectivity is $M \hookrightarrow N$ (${\rm\LaTeX}$: `\hookrightarrow`, with the extra hook suggesting $\subset$); likewise $M \to\!\!\!\!\to N$ for a surjective map.] Thus the map is an isomorphism __iff__ $0 \to M \to N \to 0$ is exact (at both $M$ and $N$). Even more easily, $0 \to M \to 0$ is exact __iff__ $M=0$. A *short exact sequence* is the next case, with three modules other than the initial and final 0. The standard example is $0 \to L \to M \to N \to 0$ where the map $L \to M$ is an inclusion map (thus an injection) and the map $M \to N$ is the quotient map $M \to M/L$ (thus a surjection). In general if $0 \to L \to M \to N \to 0$ is a short exact sequence then the injection $L \to M$ identifies $L$ with a submodule of $M$, and then the surjection $M \to N$ is identified with the quotient map.
More generally, *any* homomorphism $L \to M$ extends (uniquely up to equivalence) to an exact sequence with *four* modules between the outer zeros: $0 \to K \to L \to M \to N \to 0$, where $K$ is the kernel of the map $L \to M$, and $N$ is its “*cokernel*”, that is, the quotient of $M$ by the image of $L$.

Now consider the case of vector spaces. Then to each linear transformation $V \to W$ we associate the dual transformation $V^* \leftarrow W^*$, with the dual of a composition $V \to W \to X$ being the composition of the dual transformations $V^* \leftarrow W^* \leftarrow X^*$ in reverse order; this makes duality a “contravariant functor” on the category of $F$-vector spaces. The key fact is that *for finite-dimensional vector spaces, duality preserves exactness* of sequences of linear transformations. Thus starting from any linear $V \to W$, we can extend to an exact sequence $0 \to U \to V \to W \to X \to 0$ with $U$ the kernel and $X$ the cokernel, and dualize to deduce the exactness of $0 \leftarrow U^* \leftarrow V^* \leftarrow W^* \leftarrow X^* \leftarrow 0$ with $V^* \leftarrow W^*$ the dual map. This immediately encodes Axler 3.108 (page 107): the map $V \to W$ is surjective __iff__ $X$ is zero __iff__ $X^*$ is zero __iff__ the dual map is injective. Likewise for 3.110 (p.108) via the vanishing of $U$ and $U^*$. With a bit more work we can get the general relations $\ker(T^*) = ({\rm im}(T))^0$ (3.107, p.106) and ${\rm im}(T^*) = (\ker(T))^0$ (3.109, p.107) between the kernels and images of $T$ and its dual, again assuming that $T$ is a linear map between finite-dimensional vector spaces. Conversely, the fact that duality preserves exactness (for sequences of linear maps between finite-dimensional vector spaces) can be deduced as a special case of 3.107 and 3.109. You can now understand this joke (such as it is).
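In coordinates, the dual of a map is given by the transpose matrix, so these statements can be spot-checked numerically. A sketch (again with a minimal exact rank helper of my own; a check of examples, of course, not a proof):

```python
from fractions import Fraction

def rank(rows):
    """Row rank by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r, col, ncols = 0, 0, len(rows[0]) if rows else 0
    while r < len(rows) and col < ncols:
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r, col = r + 1, col + 1
    return r

def transpose(m):
    """Matrix of the dual map, in the dual bases."""
    return [list(col) for col in zip(*m)]

# T: F^3 -> F^2 given by a 2x3 matrix.
T = [[1, 0, 2],
     [0, 1, 3]]

# T is surjective iff rank(T) = 2 (#rows); its dual T* is injective iff
# rank of the transpose equals its #columns, which is again 2:
assert rank(T) == 2                     # T surjective
assert rank(transpose(T)) == 2          # T* injective

# And always rank(T*) = rank(T), reflecting im(T*) = (ker T)^0:
assert rank(T) == rank(transpose(T))
```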

- Being a special case of ${\rm Hom}$, duality makes sense
in the more general setting of modules over a
ring $A$: the dual of an $A$-module $M$ is $M^* := {\rm Hom}(M,A)$, the $A$-module of $A$-linear homomorphisms from $M$ to $A$. This still gives a “contravariant functor” from the category of $A$-modules to itself: an $A$-module homomorphism $M \to N$ gives rise in the same way to an $A$-module homomorphism $M^* \leftarrow N^*$ with the direction reversed, consistent with identity and composition. But, as you might suspect by now, our theorems about the kernel and image of the dual of a linear transformation can fail in this more general setting, even when applied to finitely-generated modules. We already see this for injections and surjections: for a linear transformation $T : V \to W$, we saw that if $T$ is injective then the dual transformation $T^*$ is surjective, and vice versa. Only one of these two results holds for injections and surjections of $A$-modules; can you see which one it is, and give a counterexample for the other (already for $A = \bf Z$)?
- Another way to think about the eigen-basics: “Lemma 5.0”:
If $T$ is an operator on any vector
space $V$, and $\lambda$ any scalar, then $U$ is an invariant subspace for $T$ __iff__ it is an invariant subspace for $T - \lambda I$. So, for instance, since $\ker T$ is an invariant subspace, so is $\ker(T-\lambda I)$, a.k.a. the $\lambda$-eigenspace.
- Yet another note on notation: Axler’s name
“$T/U$” (for the operator on $V/U$ induced from the action of $T$ on a vector space $V$ with an invariant subspace $U$, see 5.14 on p.137) is a nice notation, but (unlike $T|_U$ for the restriction of $T$ to $U$) is seen rarely if at all in the research literature. Normally it will be called plain $T$, or possibly $\overline T$ (since it is constructed by descending to $V/U$ the composition of $T$ with the quotient map $V \to V/U$).
- Let $T$ be a linear operator on $V$. The algebraic
properties of polynomial evaluation
at $T$ can be summarized by saying that the map from $F[X]$ to ${\rm End}(V)$ that takes any polynomial $P$ to $P(T)$ is not just linear but a *ring homomorphism*. [Since $F[X]$ is a commutative ring, so is the image of this homomorphism, even though ${\rm End}(V)$ is not commutative once $\dim(V) > 1$.] In particular the kernel is an ideal in $F[X]$; when $V$ is finite dimensional, this ideal must be nonzero, and its generator is what we shall call the “minimal polynomial” of $T$. *Special case:* if $V$ is $F$ itself, then we naturally identify ${\rm End}(V)$ with $F$, and we get for any field element $x$ the evaluation homomorphism from $F[X]$ to $F$ that takes any polynomial to its value at $x$.
- Axler proves the Fundamental Theorem of Algebra using complex analysis, which cannot be assumed in Math 55a (we’ll get to it at the end of 55b). Here’s a proof using the topological tools we’ll develop at the start of 55b. (Axler gives one standard complex-analytic proof in 4.13 on page 124.) Here are two other equivalent conditions for algebraic closure, in terms of irreducible polynomials and finite(-dimensional) field extensions.
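To make the ring-homomorphism property concrete: the sketch below evaluates polynomials at a $2\times2$ integer matrix via Horner’s rule and checks both the multiplicative property $(PQ)(T) = P(T)\,Q(T)$ and the minimal polynomial $X^2+1$ of a rotation by 90 degrees (all helper names are mine):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def poly_at(coeffs, T):
    """P(T) for P = sum_i coeffs[i] * X^i, evaluated by Horner's rule."""
    acc = [[0] * len(T) for _ in T]
    for c in reversed(coeffs):
        acc = mat_add(mat_mul(acc, T),
                      [[c * e for e in row] for row in identity(len(T))])
    return acc

def poly_mul(p, q):
    """Product of polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

T = [[0, -1],
     [1,  0]]        # rotation by 90 degrees

# Evaluation at T is a ring homomorphism: it is multiplicative,
P, Q = [1, 2, 3], [4, 0, 5]      # two arbitrary test polynomials
assert poly_at(poly_mul(P, Q), T) == mat_mul(poly_at(P, T), poly_at(Q, T))
# and T satisfies its minimal polynomial X^2 + 1:
assert poly_at([1, 0, 1], T) == [[0, 0], [0, 0]]
```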
- Triangular matrices are intimately related with “flags”.
A (complete)
*flag* in a finite dimensional vector space $V$ is a sequence of subspaces $\{0\} = V_0, V_1, V_2, \ldots, V_n = V$, with each $V_i$ of dimension $i$ and (for $1\leq i\leq n$) containing $V_{i-1}$. A basis $v_1,v_2,\ldots,v_n$ determines a flag: $V_i$ is the span of the first $i$ basis vectors. Another basis $w_1,w_2,\ldots,w_n$ determines the same flag if and only if each $w_i$ is a linear combination of $v_1,v_2,\ldots,v_i$ (necessarily with nonzero $v_i$ coefficient). The *standard flag* in $F^n$ is the flag obtained in this way from the standard basis of unit vectors $e_1,e_2,\ldots,e_n$. The punchline is that, just as a diagonal matrix is one that respects the standard basis (equivalently, the associated decomposition of $V$ as a direct sum of 1-dimensional subspaces), *an upper-triangular matrix is one that respects the standard flag.* Note that the $i$-th diagonal entry of a triangular matrix gives the action on the one-dimensional quotient space $V_i / V_{i-1}$ (again for each $i=1,\ldots,n$).
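In coordinates the punchline is just a statement about matrix columns: $T e_j \in V_j$ means column $j$ has zeros below position $j$. A tiny sketch (helper names mine):

```python
def is_upper_triangular(A):
    """Zero below the main diagonal."""
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i > j)

def respects_standard_flag(A):
    """A preserves each V_j = span(e_1, ..., e_j) iff the image of each
    basis vector e_j (the j-th column of A) lies in V_j, i.e. the column
    has zeros below position j -- the same condition, stated flag-wise."""
    n = len(A)
    return all(A[i][j] == 0 for j in range(n) for i in range(j + 1, n))

A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
B = [[1, 0, 0],
     [2, 3, 0],
     [0, 0, 4]]
assert is_upper_triangular(A) and respects_standard_flag(A)
assert not is_upper_triangular(B) and not respects_standard_flag(B)
```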

- One of many applications is the
**trace** of an operator on a finite dimensional $F$-vector space $V$. This is a linear map from ${\rm Hom}(V,V)$ to $F$. We can define it simply as the composition of two maps: our identification of ${\rm Hom}(V,V)$ with the tensor product of $V^*$ and $V$, and the natural map from this tensor product to $F$ coming from the bilinear map taking $(v^*,v)$ to $v^*(v)$. We shall see that this is the same as the classical definition: the trace of $T$ is the sum of the diagonal entries of the matrix of $T$ with respect to any basis. The coordinate-independent construction via tensor algebra explains why the trace does not change under change of basis. (The invariance can also be proved by checking explicitly that $AB$ and $BA$ have the same trace for any square matrices $A,B$ of the same size.) Once we’ve constructed the trace, we have a series of invariants ${\rm tr}(T^k)$ ($k=1,2,3,\ldots$; the $k=0$ trace is ${\rm tr}(I_V) = \dim V$). If $T$ has an upper-triangular matrix $(a_{ij})$ then the diagonal entries of $T^k$ are $a_{ii}^k$, so ${\rm tr}(T^k) = \sum_i a_{ii}^k$. In characteristic zero (or characteristic $>\dim V$), that’s enough to construct the characteristic polynomial of $T$ [with apologies for using two mathematical senses of “characteristic” in the same sentence...]; but to do it in general we’ll have to work harder.
- Here are some basic definitions and facts about general **norms** on real and complex vector spaces.
- Just as we can study bilinear symmetric forms
on a vector space over any field, not
just $\bf R$, we can study sesquilinear conjugate-symmetric forms on a vector space over any field *with a conjugation*, not just $\bf C$. Here a “conjugation” on a field $F$ is a field automorphism $\sigma: F \to F$ such that $\sigma$ is not the identity but $\sigma^2$ is the identity (that is, $\sigma$ is an involution). Given a basis $\{v_i\}$ for $V$, a sesquilinear form $\langle \cdot, \cdot \rangle$ on $V$ is determined by the field elements $a_{i,\,j} = \langle v_i, v_j \rangle$, and is conjugate-symmetric if and only if $a_{j,i} = \sigma(a_{i,\,j})$ for all $i,j$. Note that the “diagonal entries” $a_{i,i} = \langle v_i, v_i \rangle$ — and more generally $\langle v,v \rangle$ for any $v \in V$ — must be elements of the subfield of $F$ fixed by $\sigma$.
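For $F = \bf C$ with $\sigma$ complex conjugation this is the familiar Hermitian setup; here is a quick numeric illustration using Python’s built-in complex numbers (with the convention, one of two in use, that the form is conjugate-linear in its first argument):

```python
# A conjugate-symmetric ("Hermitian") matrix of inner products
# a_ij = <v_i, v_j>:
A = [[2 + 0j, 1 + 3j],
     [1 - 3j, 5 + 0j]]

# Conjugate symmetry: a_ji = sigma(a_ij) for all i, j.
n = len(A)
assert all(A[j][i] == A[i][j].conjugate() for i in range(n) for j in range(n))

def form(x, y, A=A):
    """<x, y> = sum_i sum_j conj(x_i) a_ij y_j (sesquilinear in x)."""
    return sum(x[i].conjugate() * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

# <v, v> always lands in the fixed field of sigma -- here the reals:
for v in [[1 + 2j, 3 - 1j], [0j, 1j], [2 + 0j, -1 + 4j]]:
    assert form(v, v).imag == 0
```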

Thanks to Vikram for this $\rm\LaTeX$ template for problem-set solutions (here’s what the resulting PDF looks like). They ask that e-mail submissions of problem sets have “Math 55 homework” in the Subject line.

First problem set / Linear Algebra I:
vector space basics; an introduction to convolution rings

**Clarifications**:

•
“Which if any of these basic results would fail if
$\bf F$ were replaced by $\bf Z$?” —
but don’t worry about this for problems 7 and 24,
which specify $\bf R$.

•
Problem 12: If you see how to compute this efficiently but not
what this has to do with Problem 8, please keep looking for
the connection.

Here’s
the “Proof of Concept” mini-crossword with links concerning
the ∎ symbol.
Here’s
an excessively annotated solution.

Second problem set / Linear Algebra II:
dimension of vector spaces; torsion groups/modules and divisible groups

**About Problem 5:** You may wonder: if not determinants,
what *can* you use? See Axler, Chapter 4, namely 4.8 through 4.12,
which suffice over an *infinite*
field; Axler’s proof is special to the real and complex numbers,
but 4.12 yields the result in general. (We already remarked that
this result does not hold for finite fields.)

Third problem set / Linear Algebra III:
Countable vs. uncountable dimension of vector spaces;
linear transformations and duality

**corrected** 18 September (Mark Kong):

• Problem 2: Suppose that for some (finite) $n$ we can extend $B_0$ by
$n$ vectors (*not* “extend $B$ by $n$ vectors” etc.).

• Also, in Problem 1 Mark notes that one already needs a bit of the
Axiom of Choice even to prove the fact (which I blithely asserted in class)
that a countable union of countable, or even finite, sets is itself countable.
(If you can enumerate a countable disjoint union
$\bigcup_{i=1}^\infty S_i$ of countable or finite sets, then you can choose
an element of $\prod_{i=1}^\infty S_i$ by choosing from each $S_i$
the element that comes earliest in the enumeration.) Go ahead and
assume this for Problem 1.

• (And in Problem 10 it’s subsets of *fewer than*
$e$ elements of $F$, not $e$-element subsets —
but that’s still not polynomial

**corrected** 27.ix.2017:
In problem 2ii, we need nonzero $x \in F$
such that $x^n \neq 1$ (not “$x^n=1$” which always exists);

and the introductory sentence now makes explicit the intention that
$F$ is a finite field of $q$ elements also for problem 3.

(Noted by Forrest Flesher)

**corrected** 8.x.2017: CJ Dowd is the first to note that
in problem 9 (Axler 5A:31) we cannot quite let $\bf F$ be arbitrary:

if it is finite and of size less than $m$ then it cannot contain
enough pairwise distinct eigenvalues to accommodate $v_i$

for each $i=1,2,\ldots,m$ ! Fortunately this is the only obstruction,
so for this problem assume that $\bf F$ contains

at least $m$ distinct elements.