[from a travel journal e-mailed to a number of friends and family members in mid-August 2000; ANTS = Algorithmic Number Theory Symposium. --NDE]

The pre-ANTS week had been quite laid back; only twenty of us fit into a small lecture room for the few scheduled talks, and the rest of the time was free for collaboration, comparing notes, and discussions of various mathematical and non-mathematical topics. But ANTS is a full-fledged conference with a packed schedule. Each of the first two days has an hour-long invited talk and about ten contributed talks of twenty minutes each, with some breaks for tea/coffee or lunch. As usual for these sizable conferences, ANTS also functions as a reunion of the widely dispersed community. Participants came from four continents and almost twenty countries. Most of these countries are The Usual Suspects (U.S./Canada, France/UK/Germany, and a large local contingent from Holland), but several people came from places as far away as Australia and Japan, and as unexpected as Estonia and Egypt. This, again, is common for a math conference of this size. Not so usual for a theoretical math conference, but common for ANTS, is a significant contingent of mathematicians with non-academic jobs. A dozen or so nametags give ``National Security Agency'', or an analogous agency of another government, as the institutional affiliation. There are several representatives of private industry as well.

What are those people doing at a conference devoted to number theory, a discipline so removed from the common applications of mathematics that Hardy could famously boast (in A Mathematician's Apology, 1940) that it has no ``practical'' use at all? Well, a lot has happened since 1940. I got this far in my travel report without saying anything to describe what algorithmic number theory is, but some explanation is needed now, not only to explain the NSA's presence but also to put my own talk and the pi celebration in context. Readers already conversant with number theory and its cryptological applications may want to skim through the next few paragraphs.

To describe algorithmic number theory, I must first say something about number theory, and I could start with the dry definition: it's the study of whole numbers and related mathematical structures -- primes, solving equations in rational numbers, that sort of thing. But number theory is unique among modern mathematical disciplines in having a wealth of problems, some extremely hard, that can be stated in terms comprehensible to the non-specialist. So rather than give technical definitions I'll try to suggest the spirit of the subject by giving examples. The canonical example is the problem suggested by Fermat and known misleadingly as ``Fermat's Last Theorem'', in fact proved only a few years ago -- see this URL for one telling of the story.

Fermat -- the mathematician, not the Theorem -- also provides a good example of the concerns of algorithmic number theory. He asked: which primes p can be written as the sum of two distinct squares? For instance, 13 can be so written (13=4+9, with 4=2^2 and 9=3^2), but 2 and 11 cannot (2=1+1 doesn't count because the two squares are the same). It's not hard to see that p-1 must be a multiple of 4 (e.g. 13 passes this test because 13-1=12=3*4, but 11-1=10 and 2-1=1 are not divisible by 4, so 11 and 2 are ruled out). Remarkably, the converse is true: every prime that is one more than a multiple of 4 can be written as a sum of two distinct squares. As an added bonus, the representation is unique (except for permuting the squares: 13=9+4 is naturally regarded as equivalent to 13=4+9). So, for instance, the first 38 digits of pi happen to yield a prime p = 31415926535897932384626433832795028841 with p-1 a multiple of 4, so p is the sum of two distinct squares. In fact, p = 3684758713859920604^2 + 4223562448517994405^2. How did I find these numbers? For that matter, how do I know that p is prime? [The digits of pi are of no special significance here. I only used that sequence to keep me honest: if I had used a ``random'' p, I could have cheated by starting from the two squares!] Yes, I used the computer for the primality test and the two-square decomposition, but that's not a sufficient explanation: computers are very fast at arithmetic, but not nearly fast enough to check that p is prime by trial division, or to find 3684758713859920604 and 4223562448517994405 by exhaustive search. Such a computation would take thousands of years with present technology, whereas in fact I got both the primality test and the two-square decomposition in a fraction of a second. One part of algorithmic number theory is finding efficient methods (``algorithms'') for computing things whose existence is promised by number theory, like large primes and their two-square decompositions. Another part is computing data for number-theoretical problems for which we do not yet have theorems, in order to surmise patterns that we can then try to prove mathematically and/or relate to known mathematical structures. (Fermat surely did this himself for the two-squares problem before setting out to prove his result; it's hard to imagine embarking on such a project without seeing enough data to be reasonably confident of the truth of the result.) Occasionally an algorithm can itself also yield a theoretical advance; this was already true for the grand-daddy of all algorithms, the ``Euclidean Algorithm'' for finding the greatest common factor of two whole numbers.
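To give a flavor of how such computations go, here is a rough sketch in Python -- a modern illustration, not necessarily the method or software I actually used. It does the primality test by the Miller-Rabin probabilistic method, and the two-squares decomposition by first finding a square root of -1 mod p and then running the very Euclidean algorithm just mentioned; that second trick is classical, going back essentially to Hermite and Cornacchia. Both steps finish in a fraction of a second even for 38-digit numbers (Python 3.8 or later, for math.isqrt):

    import math, random

    def is_probable_prime(n, trials=20):
        # Miller-Rabin test: always right on primes; a composite
        # slips through all rounds with probability < 4^(-trials).
        if n < 2:
            return False
        for q in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % q == 0:
                return n == q
        d, s = n - 1, 0
        while d % 2 == 0:              # write n-1 = d * 2^s, d odd
            d, s = d // 2, s + 1
        for _ in range(trials):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False           # witness found: n is composite
        return True

    def two_squares(p):
        # Write a prime p = 1 (mod 4) as a sum of two squares.
        # Step 1: find x with x^2 = -1 (mod p); if a is a quadratic
        # non-residue (half of all a), x = a^((p-1)/4) mod p works.
        while True:
            x = pow(random.randrange(2, p), (p - 1) // 4, p)
            if x * x % p == p - 1:
                break
        # Step 2: Euclidean algorithm on (p, x), stopping at the
        # first remainder below sqrt(p): that remainder is one of
        # the two numbers we want, and the other follows for free.
        a, b = p, x
        while b * b > p:
            a, b = b, a % b
        return b, math.isqrt(p - b * b)

    p = 31415926535897932384626433832795028841  # first 38 digits of pi
    print(is_probable_prime(p))   # True
    print(two_squares(p))         # the two 19-digit numbers, in some order

The Euclidean step is the punchline: the humble greatest-common-factor algorithm, run on p and a square root of -1 mod p, hands us the two squares directly.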

As can be seen from this example, algorithmic number theory has a pedigree almost as long and distinguished as number theory itself. But it really came into its own in the computer age. This age also saw a third mode added to the classical two: in addition to solving computational challenges and providing experimental data, algorithmic number theory provides the framework for all currently viable forms of ``public-key cryptography'', i.e., of securely coded communication without secret keys. This seems paradoxical: in familiar codes, from the simple substitutions of ``cryptoquotes'' to modern monsters like DES, if you know the ``key'' used to encode the message (A becomes J, B becomes U, etc.) then you can just as easily decode it. [For simple substitution you don't even need the key -- a fact exploited in classic detective stories by Poe and Doyle, and appreciated by anyone who has solved a cryptoquote -- but that's another matter.] But this causes logistical problems: secret keys must be distributed and secured against theft and loss. ``Public key'' means that you announce to the world how to encode messages to send them to you, but anyone seeing both the encoding rule and an encoded message still cannot decipher it -- except you, thanks to some extra information which only you know. Constructing such codes is a tall order, and so far the only methods that have survived extended scrutiny use number theory. Ironically, the simplest of these methods exploits the great difficulty of factoring large numbers, as opposed to the relative ease of finding large prime numbers and multiplying them -- i.e., precisely the kind of beautiful number theory of whose uselessness Hardy boasted! This is the famous public-key algorithm of Rivest, Shamir, and Adleman, known by their initials RSA; see this URL for an extended description of the underlying mathematics, and this one for a discussion of some of the convoluted legal issues arising from R, S, and A's controversial patent on the algorithm (for their side of the story, go here).
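To make the paradox concrete, here is a toy sketch of the RSA idea in Python, with comically small primes; real RSA uses primes of hundreds of digits, plus padding and other safeguards this sketch omits entirely. The point is only that encoding needs just the public pair (n, e), while decoding needs d, which in turn requires knowing the factors of n (Python 3.8 or later, for the modular inverse via pow):

    import math

    p, q = 61, 53                # the secret primes (tiny, for show)
    n = p * q                    # n = 3233: published to the world
    phi = (p - 1) * (q - 1)      # computable only if you can factor n

    e = 17                       # public encryption exponent
    assert math.gcd(e, phi) == 1
    d = pow(e, -1, phi)          # secret decryption exponent:
                                 # the inverse of e modulo phi

    message = 42                 # any message encoded as a number < n
    cipher = pow(message, e, n)  # anyone can compute this...
    assert pow(cipher, d, n) == message   # ...but only d undoes it

Anyone who could factor n = 3233 back into 61 * 53 could recompute d; the security of real RSA rests precisely on the hopelessness of that factoring step when n has hundreds of digits.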

So that's why NSA and various commercial computer-security outfits are interested in large prime numbers, and in even more esoteric number-theoretical artifacts like ``elliptic curves''. In fact there are so many conferences each year devoted to cryptological applications of algorithmic number theory that ANTS organizers make a specific effort to favor presentations on other topics of interest to number theorists that do not have such a wide audience elsewhere -- including my invited talk on Thursday.

I already mentioned the packed schedule of ANTS-IV talks. Besides that daytime schedule, Monday also had the conference banquet, which was a rijsttafel (an Indonesian-style rice banquet) at the Hortus Botanicus, a botanical garden in the old city that also features a traditional Japanese rock garden and some modern Japanese sculpture. It is unusual for the banquet to be held on the first day of a conference, but here there was also to be the evening in honor of Van Ceulen and pi [...]

[Earlier in my Dutch journal I had reported that

Ludolph van Ceulen [is] known to mathematical history buffs as the 17th-century fencing instructor who long held the record for computing pi: 35 digits, the last such record using pre-calculus formulas for that most famous of mathematical constants. A special event scheduled for the middle of ANTS-IV week was a ``pi day'' in honor of pi and Ludolph, with the participation of the Crown Prince of Holland alongside the expected complement of mathematicians, media, and other pi-parazzi.
and promised to explain why this was of interest to number theorists, who study rational or at least algebraic numbers, whereas pi is notoriously transcendental. --NDE]

Why were we celebrating Van Ceulen and his pi computation now, and at a number theory conference? For one thing, pi computation is surprisingly close to algorithmic number theory. Quite a few of the methods and formulas used for large-scale pi computation either originate in algorithmic number theory or find important uses there. More surprisingly, algorithmic number theory is also a major ``consumer'' of very accurate values of pi -- to be sure, not nearly as accurate as the billions of digits known today, but much more accurate than any other use I'm aware of requires. The familiar 3.1415926535 is more than accurate enough for almost any application in the material world; for instance, it would give the circumference of the Earth to within a millimeter, except that the answer would be off by much more than that because the Equator is not quite a circle! Even atomic clocks and other hyper-accurate measurements of fundamental physical constants never need as many as 15 digits. But as a number theorist I've had to use almost 100 digits of pi, and using thousands of digits is not unheard of. For instance, suppose that instead of writing a prime as the sum of squares of two whole numbers (Fermat's problem above), we want a sum of cubes of two rational numbers. Centuries after Fermat, this problem is still only partly understood, and such understanding as we do have involves complicated ``transcendental'' formulas that prominently feature pi. To translate these formulas into algorithms that calculate, for instance, that the prime 571 is the sum of the cubes of (44391008927477 / 5056073480199) and (-23911621134284 / 5056073480199), one must know pi to lots of decimals -- ``only'' 25 for the above example, but more and more as the prime grows. [...]
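For the curious, here is one way to get a hundred digits of pi in a blink, sketched in Python with exact integer arithmetic. It uses Machin's 1706 formula pi/4 = 4*arctan(1/5) - arctan(1/239), a calculus-era successor of the polygon method Van Ceulen labored over (and itself long since superseded for record-scale computations):

    def arctan_inv(x, scale):
        # approximate arctan(1/x) * scale in integer arithmetic, via
        # the alternating series 1/x - 1/(3x^3) + 1/(5x^5) - ...
        power = total = scale // x
        k, sign = 1, -1
        while power:
            power //= x * x
            total += sign * (power // (2 * k + 1))
            k, sign = k + 1, -sign
        return total

    def pi_digits(digits):
        # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
        scale = 10 ** (digits + 10)          # ten guard digits
        pi = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
        return pi // 10 ** 10                # drop the guard digits

    print(pi_digits(100))   # 3141592653589793238462643383279502884197...

The first 38 digits of the output are, of course, exactly the prime p of the two-squares example above.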

[A conjecture dating back almost a century states that any prime that leaves a remainder of 4, 7, or 8 when divided by 9 can be written as a sum of two rational cubes; this is based both on considerable numerical evidence and on various theoretical insights from the theory of elliptic curves. I proved this conjecture for the cases of 4 and 7. See these tables for more information and big numbers. --NDE]
