Math 55b: Honors Real and Complex Analysis
Lecture notes, etc., for Math 55b: Honors Real and Complex Analysis (Spring [2017-]2018)

If you find a mistake, omission, etc., please let me know by e-mail.

The orange balls mark our current location in the course, and the current problem set.


Office hours will be in the Lowell House Dining Hall as in Math 55a (usually Tuesday after Math Table, so 7:30-9:00 PM), or by appointment.
Math Night will again happen Monday evenings/nights, usually 8-10, in Leverett Dining Hall, starting January 22. The course CA’s will again hold office hours there.
Section times:
Rohil Prasad: Thursday 3-4 PM, Science Center room 309A.
Vikram Sundar: Wednesday 4-5 PM, Science Center room 304.
Vikram’s notes for 55b will be here.
Our first topic is the topology of metric spaces, a fundamental tool of modern mathematics that we shall use mainly as a key ingredient in our rigorous development of differential and integral calculus over $\bf R$ and $\bf C$. To supplement the treatment in Rudin’s textbook, I wrote up 20-odd pages of notes in six sections; copies will be distributed in class, and you also may view them and print out copies in advance from the PDF files linked below. [Some of the explanations, e.g. of notations such as $f(\cdot)$ and the triangle inequality in $\bf C$, will not be necessary; they were needed when this material was the initial topic of Math 55a, and it doesn’t feel worth the effort to delete them now that it’s been moved to 55b. Likewise for the comment about the Euclidean distance at the top of page 2 of the initial handout on “basic definitions and examples”.]

Metric Topology I
Basic definitions and examples: the metric spaces Rn and other product spaces; isometries; boundedness and function spaces

The “sup metric” on $X^S$ is sometimes also called the “uniform metric” because $d(\,f,g) \leq r$ is equivalent to a bound, $d(\,f(s),g(s)) \leq r$ for all $s \in S$, that is “uniform” in the sense that it’s independent of the choice of $s$. Likewise for the sup metric on the space of bounded functions from $S$ to an arbitrary metric space $X$ (see the next paragraph).
If $S$ is an infinite set and $X$ is an unbounded metric space then we can’t use our definition of $X^S$ as a metric space because $\sup_S d_X(\,f(s),g(s))$ might be infinite. But the bounded functions from $S$ to $X$ do constitute a metric space under the same definition of $d_{X^S}$. A function is said to be “bounded” if its image is a bounded set. You should check that $d_{X^S}(\,f,g)$ is in fact finite for bounded $f$ and $g$.
Now that metric topology is in 55b, not 55a, the following observation can be made: if $X$ is $\bf R$ or $\bf C$, the bounded functions in $X^S$ constitute a vector space, and the sup metric comes from a norm on that vector space: $d(\,f,g) = \| \, f-g \|$ where the norm $\left\| \cdot \right\|$ is defined by $\| \, f \| = \sup_{s \in S} | \, f(s)|.$ Likewise for the bounded functions from $S$ to any normed vector space. Such spaces will figure in our development of real analysis (and in your further study of analysis beyond Math 55).
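As a concrete illustration of the sup metric and the norm it comes from, here is a small Python sketch on a finite index set; the set `S` and the functions `f`, `g` are made up for the example:

```python
import math

# Hypothetical illustration (S, f, g are made up): the sup metric on
# bounded real-valued functions on a set S, and the norm it comes from.
S = range(10)                         # a finite index set standing in for S
f = lambda s: math.sin(s)
g = lambda s: math.sin(s) + 0.25

def sup_norm(h):
    # ||h|| = sup_{s in S} |h(s)|
    return max(abs(h(s)) for s in S)

def sup_metric(f, g):
    # d(f, g) <= r  iff  |f(s) - g(s)| <= r for EVERY s: a "uniform" bound
    return max(abs(f(s) - g(s)) for s in S)

print(sup_metric(f, g))                                     # close to 0.25
print(sup_metric(f, g) == sup_norm(lambda s: f(s) - g(s)))  # d(f,g) = ||f-g||
```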
The “Proposition” on page 3 of the first topology handout can be extended as follows:
iv) For every $p \in X$ there exists a real number $M$ such that $d(p,q) \lt M$ for all $q \in E.$
In other words, for every $p \in X$ there exists an open ball about $p$ that contains $E.$ Do you see why this is equivalent to (i), (ii), and (iii)?
Metric Topology II
Open and closed sets, and related notions

Metric Topology III
Introduction to functions and continuity

Metric Topology IV
Sequences and convergence, etc.

The proof of “uniform limit of continuous is continuous” also shows that “uniform limit of uniformly continuous is uniformly continuous”: if each $f_n$ is uniformly continuous then a single $\delta$ will work for all $x$.
A typical application is the continuity of power series such as $\sum_{k=0}^\infty x^k/k!$ (which by definition is the pointwise limit of the partial sums $f_n(x) = \sum_{k=0}^n x^k/k!$). The limit is not uniform as $x$ ranges over all of $\bf R$ (or $\bf C$), but it is uniform for $x$ in every ball $B_R(0)$, and since every $x$ is in some such ball the limit function is continuous. Likewise for any power series $\sum_{k=0}^\infty a_k x^k$ with $\{a_k\}$ bounded, as long as $x$ is within the open circle of convergence $|x| \lt 1$: the limit is not in general uniform, but it is uniform in the ball $B_r(0)$ for every $r \lt 1$ (why?), which is good enough because $|x|\lt 1$ means that $x$ is contained in such a ball. In either case, each $f_n$ is easily seen to be uniformly continuous on bounded sets, so the argument noted in the previous paragraph shows that the limit function $f$ is also uniformly continuous in every ball on which we showed that it is continuous (that is, arbitrary $B_R(0)$ for $\sum_{k=0}^\infty x^k/k!$, or $B_r(0)$ with $r \lt 1$ for $\sum_{k=0}^\infty a_k x^k$ with $\{a_k\}$ bounded). [NB we need to use open balls $B$ so that we can test continuity at $x \in X$ on a neighborhood in $B$: for small enough $\epsilon$, the $\epsilon$-neighborhood of $x$ in $X$ is contained in $B$.]
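To see the uniformity concretely, here is a short numerical sketch (illustrative, not part of Rudin’s development) estimating $\sup_{|x| \le R} |\,f_n(x) - e^x|$ on a grid for the partial sums of $\sum_{k=0}^\infty x^k/k!$; the choices $R = 2$ and $n = 20$ are arbitrary:

```python
import math

def partial_sum(x, n):
    # f_n(x) = sum_{k=0}^n x^k / k!
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        term *= x / (k + 1)
    return total

# Estimate sup_{|x| <= R} |f_n(x) - e^x| on a grid in the ball B_R(0):
R, n = 2.0, 20
xs = [-R + 2 * R * i / 400 for i in range(401)]
err = max(abs(partial_sum(x, n) - math.exp(x)) for x in xs)
print(err)   # tiny: on B_R(0) the convergence is uniform
```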

Metric Topology V
Compactness and sequential compactness

Metric Topology VI
Cauchy sequences and related notions (completeness, completions, and a third formulation of compactness)

Here is a more direct proof (using sequential compactness) of the theorem that a continuous map $f: X \to Y$ between metric spaces is uniformly continuous if $X$ is compact. Assume not. Then there exists $\epsilon \gt 0$ such that for all $\delta \gt 0$ there are some points $p,q \in X$ such that $d(p,q) \lt \delta$ but $d(\,f(p),f(q)) \geq \epsilon.$ For each $n = 1, 2, 3, \ldots,$ choose $p_n, q_n$ that satisfy those inequalities for $\delta = 1/n.$ Since $X$ is assumed compact, we can extract a subsequence $\{ p_{n_i} \! \}$ of $\{p_n\!\}$ that converges to some $p \in X.$ But then $\{q_{n_i}\!\}$ converges to the same $p$. Hence both $f(p_{n_i})$ and $f(q_{n_i})$ converge to $f(p),$ which contradicts the fact that $d(\,f(p_{n_i}), f(q_{n_i})) \geq \epsilon$ for each $i$.
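For contrast, a quick numerical illustration (my own example, not from the notes) of how such $p_n, q_n$ arise when $X$ is not compact: $f(x) = x^2$ on $\bf R$ is continuous but not uniformly continuous.

```python
# Illustration (my own example): f(x) = x^2 on the noncompact space R is
# continuous but not uniformly continuous. Take p_n = n, q_n = n + 1/n:
# then d(p_n, q_n) = 1/n -> 0 while |f(p_n) - f(q_n)| = 2 + 1/n^2 >= 2.
f = lambda x: x * x
for n in (1, 10, 100, 1000):
    p, q = float(n), n + 1.0 / n
    print(n, q - p, f(q) - f(p))
# On a compact X this cannot happen: {p_n} has a convergent subsequence,
# and the argument above then forces f(p_{n_i}) and f(q_{n_i}) together.
```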

Our next topic is differential calculus of vector-valued functions of one real variable, building on Chapter 5 of Rudin.
You may have already seen “little oh” and “big Oh” notations. For functions $f,g$ on the same space, “$f = O(g)$” means that $g$ is a nonnegative real-valued function, $\,f$ takes values in a normed vector space, and there exists a real constant $M$ such that $\left|\,f(x)\right| \le M g(x)$ for all $x$. The notation “$f = o(g)$” is used in connection with a limit; for instance, “$f(x) = o(g(x))$ as $x$ approaches $x_0$” indicates that $f,g$ are vector- and real-valued functions as above on some neighborhood of $x_0$, and that for each $\epsilon \gt 0$ there is a neighborhood of $x_0$ such that $\left|\,f(x)\right| \le \epsilon g(x)$ for all $x$ in the neighborhood. (Given $g$ and the target of $f\!$, functions $f=O(g)$ form a vector space, which contains functions $o(g)$ as a subspace.) Thus $f'(x_0) = a$ means the same as “$f(x) = f(x_0) + a (x-x_0) + o(\left|x-x_0\right|)$ as $x$ approaches $x_0\!$”, with no need to exclude the case $x = x_0.$ Rudin in effect uses this approach when proving the Chain Rule (5.5).
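Here is a numerical sketch of the little-oh characterization of the derivative (the example function is chosen arbitrarily): the remainder $f(x) - f(x_0) - a(x - x_0)$, divided by $\left|x - x_0\right|$, should tend to $0$ as $x$ approaches $x_0$.

```python
import math

# Numerical sketch (example function chosen arbitrarily): f'(x0) = a means
# f(x) = f(x0) + a(x - x0) + o(|x - x0|), so the remainder divided by
# |x - x0| should tend to 0 as x -> x0.
f, x0 = math.sin, 0.3
a = math.cos(x0)          # the claimed derivative f'(x0)
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    remainder = f(x0 + h) - (f(x0) + a * h)
    print(h, abs(remainder) / h)   # the ratios shrink roughly like h
```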

Apropos the Chain Rule: as far as I can see we don’t need continuity of $f$ at any point except $x$ (though that hypothesis will usually hold in any application). All that’s needed is that $x$ has some relative neighborhood $N$ in $[a,b]$ such that $f(N)$ is contained in $I$. Also, it is necessary that $f$ map $[a,b]$ to $\bf R$, but $g$ can take values in any normed vector space.

The derivative of $f/g$ can be obtained from the product rule, together with the derivative of $1/g$ — which in turn can be obtained from the Chain Rule together with the derivative of the single function $1/x$. [Also, if you forget the quotient-rule formula, you can reconstruct it from the product rule by differentiating both sides of $f = g \cdot (\,f/g)$ and solving for $(\,f/g)';$ but this is not a proof unless you have some other argument to show that the derivative exists in the first place.] Once we do multivariate differential calculus, we’ll see that the derivatives of $f+g$, $f-g$, $fg$, $f/g$ could also be obtained in much the same way that we showed the continuity of those functions, by combining the multivariate Chain Rule with the derivatives of the specific functions $x+y$, $x-y$, $xy$, $x/y$ of two variables $x,y.$
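The reconstruction can be checked numerically; in this illustrative sketch (the example functions are arbitrary) the formula obtained by solving $f' = g'\,(\,f/g) + g\,(\,f/g)'$ is compared with the standard quotient rule:

```python
import math

# Illustrative check (example functions arbitrary): differentiate
# f = g*(f/g) by the product rule, solve for (f/g)', and compare with the
# standard quotient-rule formula.
f, fp = math.sin, math.cos                       # f and f'
g = lambda x: 1 + x * x
gp = lambda x: 2 * x                             # g and g'
x = 0.7
q = f(x) / g(x)
d1 = (fp(x) - gp(x) * q) / g(x)                  # from f' = g'(f/g) + g(f/g)'
d2 = (fp(x) * g(x) - f(x) * gp(x)) / g(x) ** 2   # standard quotient rule
print(d1, d2)   # the two values agree
```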

As Rudin notes at the end of this chapter, differentiation can also be defined for vector-valued functions of one real variable. As Rudin does not note, the vector space can even be infinite-dimensional, provided that it is normed; and the basic algebraic properties of the derivative listed in Thm. 5.3 (p.104) can be adapted to this generality, e.g., the formula $(\,fg)' = f'g + fg'$ still holds if $f,g$ take values in normed vector spaces $U,V$ and multiplication is interpreted as a continuous bilinear map from $U \times V$ to some other normed vector space $W$.
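As a concrete instance (my own example) of the product rule with a bilinear “multiplication”: take $f, g: {\bf R} \to {\bf R}^2$ and let the bilinear map be the dot product ${\bf R}^2 \times {\bf R}^2 \to {\bf R}$; a difference quotient then matches $f'g + fg'$ numerically.

```python
import math

# Sketch (example is mine): the product rule (fg)' = f'g + fg' where the
# "multiplication" is a continuous bilinear map -- here f, g map R to R^2
# and the bilinear map is the dot product R^2 x R^2 -> R.
def f(t):  return [math.cos(t), math.sin(t)]
def fp(t): return [-math.sin(t), math.cos(t)]
def g(t):  return [t, t * t]
def gp(t): return [1.0, 2.0 * t]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))

t, h = 0.5, 1e-6
numeric = (dot(f(t + h), g(t + h)) - dot(f(t), g(t))) / h   # difference quotient
exact = dot(fp(t), g(t)) + dot(f(t), gp(t))                 # f'g + fg'
print(numeric, exact)   # agree to roughly the size of h
```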

“Rolle’s Theorem” is the special case $f(b) = f(a)$ of Rudin’s Theorem 5.10; as you can see, it is in effect the key step in his proof of Theorem 5.9, and thus of 5.10 as well. [As the Wikipedia page explains, the attribution of this result to Michel Rolle (1652–1719) is problematic in several ways, and seems to be a good example of “Stigler’s law of eponymy”.]

We omit 5.12 (continuity of derivatives) and 5.13 (L’Hôpital’s Rule). In 5.12, see p.94 for Rudin’s notion of “simple discontinuity” (or “discontinuity of the first kind”) vs. “discontinuity of the second kind”, but please don’t use those terms in your problem sets or other mathematical writing, since they’re not widely known. In Rudin’s proof of L’Hôpital’s Rule (5.13), why can he assume that $g(x)$ does not vanish for any $x$ in $(a,b)$, and that the denominator $g(x) - g(y)$ in equation (18) is never zero?

NB The norm does not have to come from an inner product structure. Often this does not matter, because we work in finite-dimensional vector spaces, where all norms are equivalent, and changing to an equivalent norm does not affect the definition of the derivative. One exception is Theorem 5.19 (p.113), where one needs the norm exactly rather than up to a constant factor. This theorem still holds for a general norm, but requires an additional argument. The key ingredient of the proof is this: given a nonzero vector $z$ in a vector space $V\!$, we want a continuous functional $w$ on $V\,$ such that $\left\| w \right \| = 1$ and $w(z) = \left| z \right|.$ If $V$ is an inner product space (finite-dimensional or not), the inner product with $z \left/ \left| z \right| \right.$ provides such a functional $w$. But this approach does not work in general. The existence of such $w$ is usually proved as a corollary of the Hahn-Banach theorem. When $V$ is finite-dimensional, $w$ can be constructed by induction on the dimension of $V\!$. To deal with the general case one must also invoke the Axiom of Choice in its usual guise of Zorn’s Lemma.
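Here is a sketch of the inner-product case only (the example vector $z$ is arbitrary): the functional $w(v) = \langle v, z/\left|z\right|\rangle$ satisfies $w(z) = \left|z\right|$, and Cauchy-Schwarz gives $|w(v)| \le \left|v\right|$ with equality at $v = z$, so $\left\|w\right\| = 1$.

```python
import math

# Sketch of the inner-product case only (example vector z is arbitrary):
# for z != 0, the functional w(v) = <v, z/|z|> has w(z) = |z|, and
# Cauchy-Schwarz gives |w(v)| <= |v|, with equality at v = z, so ||w|| = 1.
z = [3.0, 4.0]
nz = math.sqrt(sum(c * c for c in z))            # |z| = 5
u = [c / nz for c in z]                          # the unit vector z/|z|
w = lambda v: sum(a * b for a, b in zip(v, u))   # w = <., z/|z|>
print(w(z))   # close to 5.0 = |z|
print(w(u))   # close to 1.0: the norm ||w|| = 1 is attained at u
```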

We next start on univariate integral calculus, largely following Rudin, chapter 6. The following gives some motivation for the definitions there. (And yes, it’s the same Riemann (1826–1866) who gave number theorists like me the Riemann zeta function and the Riemann Hypothesis.)
The Riemann-sum approach to integration goes back to the “method of exhaustion” of classical Greek geometry, in which the area of a plane figure (or the volume of a region in space) is bounded below and above by finding subsets and supersets that are finite unions of disjoint rectangles (or boxes). The lower and upper Riemann sums adapt this idea to the integrals of functions which may be negative as well as positive (recall that one of the weaknesses of geometric Greek mathematics is that the ancient Greeks had no concept of negative quantities — nor, for that matter, of zero). You may have encountered the quaint technical term “quadrature”, used in some contexts as a synonym for “integration”. This too is an echo of the geometrical origins of integration. “Quadrature” literally means “squaring”, meaning not “multiplying by itself” but “constructing a square of the same size as”; this in turn is equivalent to “finding the area of”, as in the phrase “squaring the circle”. For instance, Greek geometry contains a theorem equivalent to the evaluation of $\int x^2 \, dx,$ a result called the “quadrature of the parabola”. The proof is tantamount to the evaluation of lower and upper Riemann sums for the integral of $x^2 \, dx$.
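A small numerical sketch of that evaluation: lower and upper Riemann sums for $\int_0^1 x^2 \, dx = 1/3$ over the uniform partition into $n$ subintervals.

```python
# Lower and upper Riemann sums for int_0^1 x^2 dx = 1/3 (the "quadrature of
# the parabola"): x^2 is increasing on [0,1], so its inf and sup on each
# subinterval are attained at the left and right endpoints respectively.
def riemann_sums(n):
    h = 1.0 / n
    lower = sum((i * h) ** 2 * h for i in range(n))        # left endpoints
    upper = sum(((i + 1) * h) ** 2 * h for i in range(n))  # right endpoints
    return lower, upper

for n in (10, 100, 1000):
    lo, up = riemann_sums(n)
    print(n, lo, up)   # both approach 1/3; the gap is 1/n
```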

An alternative explanation of the upper and lower Riemann sums, and of “partitions” and “refinements” (Definitions 6.1 and 6.3 in Rudin), is that they arise by repeated application of the following two axioms describing the integral (see for instance L. Gillman’s expository paper in the American Mathematical Monthly (Vol. 100 #1, 16–25)):

i) (Additivity) If $a \lt c \lt b$ then $\int_a^b f(x) \, dx = \int_a^c f(x) \, dx + \int_c^b f(x) \, dx.$
ii) (Bounds) If $m \le f(x) \le M$ for all $x$ in $[a,b]$ then $m(b-a) \le \int_a^b f(x) \, dx \le M(b-a).$

The latter axiom is a consequence of the following two: the integral $\int_a^b K \, dx$ of a constant function $K$ is $K(b-a);$ and if $f(x) \le g(x)$ for all $x$ in the interval $[a,b]$ then $\int_a^b f(x) \, dx \le \int_a^b g(x) \, dx.$ Note that again all these axioms arise naturally from an interpretation of the integral as a “signed area”.

The (Riemann-)Stieltjes integral, with $d\alpha$ in place of $dx$, is then obtained by replacing each $\Delta x = b - a$ by $\Delta\alpha = \alpha(b) - \alpha(a).$
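As a quick sketch (my own example) of such a Stieltjes sum: with $\alpha(x) = x^2$ and $f(x) = x$ on $[0,1]$, the integrator is differentiable, so $\int_0^1 f \, d\alpha = \int_0^1 x \cdot 2x \, dx = 2/3$, and the sums with $\Delta\alpha$ in place of $\Delta x$ converge to that value.

```python
# Sketch of a Riemann-Stieltjes sum (my own example): each Delta x is
# replaced by Delta alpha. With alpha(x) = x^2 and f(x) = x on [0, 1],
# since alpha is differentiable the integral equals
# int_0^1 x * alpha'(x) dx = int_0^1 2x^2 dx = 2/3.
def stieltjes_sum(f, alpha, a, b, n):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x_i, x_next = a + i * h, a + (i + 1) * h
        total += f(x_i) * (alpha(x_next) - alpha(x_i))   # f * Delta alpha
    return total

print(stieltjes_sum(lambda x: x, lambda x: x * x, 0.0, 1.0, 10000))  # near 2/3
```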

Here’s a version of Riemann-Stieltjes integrals that works cleanly for integrating bounded functions from $[a,b]$ to any complete normed vector space.


Problem sets 1 and 2: Metric topology basics

Problem set 3: Metric topology cont’d

Problem set 4: Topology finale; differential-calculus prelude

Problem set 5: More univariate differential calculus; introducing univariate integral calculus