You are currently browsing the category archive for the ‘Exposition’ category.
Welcome to the 54th Carnival of Mathematics, and Happy Fourth of July to our American readers! Indeed, the carnival should have been hosted yesterday, and I apologize for being a day late.
Trivia: Today, we have the 234th Independence Day celebrations in the US, and ours is the 54th carnival. 2+3+4 = 5+4, see? Boy, do I feel so clever!
Ok, let’s begin, now!
We start off with a post, submitted by Shai Deshe, that presents a collection of YouTube videos explaining different kinds of infinities in set theory, causality vs conditionality in probability and some topology. The videos are the kind of ones that “math people” could use to explain a few mathematical concepts to their friends, family members and colleagues who may not be enamored of math very much but may still possess a lingering interest in it.
Experimental philosophy, according to the Experimental Philosophy Society, “involves the collection of empirical data to shed light on philosophical issues“. As such, a careful quantitative analyses of results of experiments are used to shed light on many philosophical issues/debates. Anthony Chemero wrote a post titled, ‘What Situationist Experiments Show‘, that links to a paper with the same title that he coauthored with John Campbell and Sarah Meerschaert. In the paper, the authors, through quantitative analyses of actual experimental data, argue that virtue ethics has not lost to the siuationist side, whose critiques of virtue theory are far from convincing.
Next, I would like to bring the readers’ attention to two math blogs that came into existence somewhat recently and which I think have a lot of really good mathematical content. They are Annoying Precision and A Portion of the Book. In my opinion, their blog posts contain a wealth of mathematical knowledge, especially for undergraduates (and graduate students too!), who, if inclined toward problem-solving, will enjoy the posts even more. Go ahead and dive into them!
At Annoying Precision, a project aimed at the “Generally Interested Lay Audience” that Qiaochu Yuan started aims “to build up to a discussion of the Polya enumeration theorem without assuming any prerequisites other than a passing familiarity with group theory.” It begins with GILA I: Group Actions and Equivalence Relations, the last post of the series being GILA VI: The cycle index polynomials of the symmetric groups.
Usually, undergrads hardly think integrals have much to do with combinatorics. At A Portion of the Book, Masoud Zargar has a very nice post that deals with the intersection of Integrals, Combinatorics and Geometry.
Tom Escent submitted a link to an article titled, “Introduction to Nerds on Wall Street“, which actually provides a very small snapshot of the book named, Nerds on Wall Street: Math, Machines and Wired Markets whose author is David J. Leinweber. I haven’t read the book yet, but based on generally good reviews, it seems like it chronicles the contribution of Quant guys to Wall Street over the past several decades. Should be interesting to Math and CS majors, I think.
Let’s have a post on philosophy and logic, shall we? At Skeptic’s Play, there is a discussion on Gödel’s modal ontological argument regarding the possibility of existence of God. As someone who has just begun a self-study of modal logic, I will recommend Brian K. Chellas’ excellent introduction to the subject, titled Modal Logic: An Introduction.
Then, there is the Daily Integral, a blog dealing with solving elementary integrals and which I think may be particularly useful for high-school students.
Let me close this carnival by asking the reader, “What do you think is the world’s oldest mathematical artifact?” There are several candidates, and according to The Number Warrior, candidate #1 is The Lebombo Bone, found in the Lebombo Mountains of South Africa and Swaziland, that dates back to 35,000 BC!
That’s all for now! Thanks to everyone who made submissons.
After a long hiatus, I’d like to renew the discussion of axiomatic categorical set theory, more specifically the Elementary Theory of the Category of Sets (ETCS). Last time I blogged about this, I made some initial forays into “internalizing logic” in ETCS, and described in broad brushstrokes how to use that internal logic to derive a certain amount of the structure one associates with a category of sets. Today I’d like to begin applying some of the results obtained there to the problem of constructing colimits in a category satisfying the ETCS axioms (an ETCS category, for short).
(If you’re just joining us now, and you already know some of the jargon, an ETCS category is a well-pointed topos that satisfies the axiom of choice and with a natural numbers object. We are trying to build up some of the elementary theory of such categories from scratch, with a view toward foundations of mathematics.)
But let’s see — where were we? Since it’s been a while, I was tempted to review the philosophy behind this undertaking (why one would go to all the trouble of setting up a categories-based alternative to ZFC, when time-tested ZFC is able to express virtually all of present-day mathematics on the basis of a reasonably short list of axioms?). But in the interest of time and space, I’ll confine myself to a few remarks.
As we said, a chief difference between ZFC and ETCS resides in how ETCS treats the issue of membership. In ZFC, membership is a global binary relation: we can take any two “sets” and ask whether
. Whereas in ETCS, membership is a relation between entities of different sorts: we have “sets” on one side and “elements” on another, and the two are not mixed (e.g., elements are not themselves considered sets).
Further, and far more radical: in ETCS the membership relation is a function, that is, an element
“belongs” to only one set
at a time. We can think of this as “declaring” how we are thinking of an element, that is, declaring which set (or which type) an element is being considered as belonging to. (In the jargon, ETCS is a typed theory.) This reflects a general and useful philosophic principle: that elements in isolation are considered inessential, that what counts are the aggregates or contexts in which elements are organized and interrelated. For instance, the numeral ‘2’ in isolation has no meaning; what counts is the context in which we think of it (qua rational number or qua complex number, etc.). Similarly the set of real numbers has no real sense in isolation; what counts is which category we view it in.
I believe it is reasonable to grant this principle a foundational status, but: rigorous adherence to this principle completely changes the face of what set theory looks like. If elements “belong” to only one set at a time, how then do we even define such basic concepts as subsets and intersections? These are some of these issues we discussed last time.
There are other significant differences between ZFC and ETCS: stylistically, or in terms of presentation, ZFC is more “top-down” and ETCS is more “bottom-up”. For example, in ZFC, one can pretty much define a subset by writing down a first-order formula
in the language; the comprehension (or separation) axiom scheme is a mighty sledgehammer that takes care of the rest. In the axioms of ETCS, there is no such sledgehammer: the closest thing one has to a comprehension scheme in the ETCS axioms is the power set axiom (a single axiom, not an axiom scheme). However, in the formal development of ETCS, one derives a comprehension scheme as one manually constructs the internal logic, in stages, using the simple tools of adjunctions and universal properties. We started doing some of that in our last post. So: with ZFC it’s more as if you can just hop in the car and go; with ETCS you build the car engine from smaller parts with your bare hands, but in the process you become an expert mechanic, and are not so rigidly attached to a particular make and model (e.g., much of the theory is built just on the axioms of a topos, which allows a lot more semantic leeway than one has with ZF).
But, in all fairness, that is perhaps the biggest obstacle to learning ETCS: at the outset, the tools available [mainly, the idea of a universal property] are quite simple but parsimonious, and one has to learn how to build some set-theoretic and logical concepts normally taken as “obvious” from the ground up. (Talk about “foundations”!) On the plus side, by building big logical machines from scratch, one gains a great deal of insight into the inner workings of logic, with a corresponding gain in precision and control and modularity when one would like to use these developments to design, say, automated deduction systems (where there tend to be strong advantages to using type-theoretic frameworks).
Enough philosophy for now; readers may refer to my earlier posts for more. Let’s get to work, shall we? Our last post was about the structure of (and relationships between) posets of subobjects relative to objects
, and now we want to exploit the results there to build some absolute constructions, in particular finite coproducts and coequalizers. In this post we will focus on coproducts.
Note to the experts: Most textbook treatments of the formal development of topos theory (as for example Mac Lane-Moerdijk) are efficient but highly technical, involving for instance the slice theorem for toposes and, in the construction of colimits, recourse to Beck’s theorem in monad theory applied to the double power-set monad [following the elegant construction of Paré]. The very abstract nature of this style of argumentation (which in the application of Beck’s theorem expresses ideas of fourth-order set theory and higher) is no doubt partly responsible for the somewhat fearsome reputation of topos theory.
In these notes I take a much less efficient but much more elementary approach, based on an arrangement of ideas which I hope can be seen as “natural” from the point of view of naive set theory. I learned of this approach from Myles Tierney, who was my PhD supervisor, and who with Bill Lawvere co-founded elementary topos theory, but I am not aware of any place where the details of this approach have been written up before now. I should also mention that the approach taken here is not as “purist” as many topos theorists might want; for example, here and there I take advantage of the strong extensionality axiom of ETCS to simplify some arguments.
The Empty Set and Two-Valued Logic
We begin with the easy observation that a terminal category, i.e., a category with just one object and one morphism (the identity), satisfies all the ETCS axioms. Ditto for any category
equivalent to
(where every object is terminal). Such boring ETCS categories are called degenerate; obviously our interest is in the structure of nondegenerate ETCS categories.
Let be an ETCS category (see here for the ETCS axioms). Objects of
are generally called “sets”, and morphisms are generally called “functions” or “maps”.
Proposition 0: If an ETCS category is a preorder, then
is degenerate.
Proof: Recall that a preorder is a category in which there is at most one morphism for any two objects
. Every morphism in a preorder is vacuously monic. If there is a nonterminal set
, then the monic
to any terminal set defines a subset
distinct from the subset defined by
, thus giving (in an ETCS category) distinct classifying maps
, contradicting the preorder assumption. Therefore all objects
are terminal.
Assume from now on that is a nondegenerate ETCS category.
Proposition 1: There are at least two truth values, i.e., two elements , in
.
Proof: By proposition 0, there exist sets and two distinct functions
. By the axiom of strong extensionality, there exists
such that
. The equalizer
of the pair
is then a proper subset of
, and therefore there are at least two distinct elements
.
Proposition 2: There are at most two truth values ; equivalently, there are at most two subsets of
.
Proof: If are distinct subsets of
, then either
or
, say the former. Then
and
are distinct subsets, with distinct classifying maps
. By strong extensionality, there exists
distinguishing these classifying maps. Because
is terminal, we then infer
and
, so
as subsets of
, and in that case only
can be a proper subset of
.
By propositions 1 and 2, there is a unique proper subset of the terminal object . Let
denote this subset. Its domain may be called an “empty set”; by the preceding proposition, it has no proper subsets. The classifying map
of
is the truth value we call “false”.
Proposition 3: 0 is an initial object, i.e., for any there exists a unique function
.
Proof: Uniqueness: if are maps, then their equalizer
, which is monic, must be an isomorphism since 0 has no proper subsets. Therefore
. Existence: there are monos
where is “global truth” (classifying the subset
) on
and
is the “singleton mapping
” on
, defined as the classifying map of the diagonal map
(last time we saw
is monic). Take their pullback. The component of the pullback parallel to
is a mono
which again is an isomorphism, whence we get a map
using the other component of the pullback.
Remark: For the “purists”, an alternative construction of the initial set 0 that avoids use of the strong extensionality axiom is to define the subset to be “the intersection all subsets of
“. Formally, one takes the extension
of the map
where the first arrow represents the class of all subsets of , and the second is the internal intersection operator defined at the end of our last post. Using formal properties of intersection developed later, this intersection
has no proper subsets, and then the proof of proposition 3 carries over verbatim.
Corollary 1: For any , the set
is initial.
Proof: By cartesian closure, maps are in bijection with maps of the form
, and there is exactly one of these since 0 is initial.
Corollary 2: If there exists , then
is initial.
Proof: The composite of followed by
is
, and
followed by
is also an identity since
is initial by corollary 1. Hence
is isomorphic to an initial object
.
By corollary 2, for any object the arrow
is vacuously monic, hence defines a subset.
Proposition 4: If , then there exists an element
.
Proof: Under the assumption, has at least two distinct subsets:
and
. By strong extensionality, their classifying maps
are distinguished by some element
.
External Unions and Internal Joins
One of the major goals in this post is to construct finite coproducts in an ETCS category. As in ordinary set theory, we will construct these as disjoint unions. This means we need to discuss unions first; as should be expected by now, in ETCS unions are considered locally, i.e., we take unions of subsets of a given set. So, let be subsets.
To define the union , the idea is to take the intersection of all subsets containing
and
. That is, we apply the internal intersection operator (constructed last time),
to the element that represents the set of all subsets of
containing
and
; the resulting element
represents
. The element
corresponds to the intersection of two subsets
Remark: Remember that in ETCS we are using generalized elements:
really means a function
over some domain
, which in turn classifies a subset
. On the other hand, the
here is a subset
. How then do we interpret the condition “
“? We first pull back
over to the domain
; that is, we form the composite
, and consider the condition that this is bounded above by
. (We will write
, thinking of the left side as constant over
.) Externally, in terms of subsets, this corresponds to the condition
.
We need to construct the subsets . In ZFC, we could construct those subsets by applying the comprehension axiom scheme, but the axioms of ETCS have no such blanket axiom scheme. (In fact, as we said earlier, much of the work on “internalizing logic” goes to show that in ETCS, we instead derive a comprehension scheme!) However, one way of defining subsets in ETCS is by taking loci of equations; here, we express the condition
, more pedantically
or
, as the equation
where the right side is the predicate “true over “.
Thus we construct the subset of
via the pullback:
{C: A ≤ C} -------> 1 | | | | t_X V chi_A => - V PX -----------> PX
Let me take a moment to examine what this diagram means exactly. Last time we constructed an internal implication operator
and now, in the pullback diagram above, what we are implicitly doing is lifting this to an operator
The easy and cheap way of doing this is to remember the isomorphism we used last time to uncover the cartesian closed structure, and apply this to
to define . This map classifies a certain subset of
, which I’ll just write down (leaving it as an exercise which involves just chasing the relevant definitions):
Remark: Similarly we can define a meet operator
by exponentiating the internal meet
. It is important to know that the general Heyting algebra identities which we established last time for
lift to the corresponding identities for the operators
on
. Ultimately this rests on the fact that the functor
, being a right adjoint, preserves products, and therefore preserves any algebraic identity which can be expressed as a commutative diagram of operations between such products.
Hence, for the fixed subset (classified by
), the operator
classifies the subset
Finally, in the pullback diagram above, we are pulling back the operator against
. But, from last time, that was exactly the method we used to construct universal quantification. That is, given a subset
we defined to be the pullback of
along
. Putting all this together, the pullback diagram above expresses the definition
that one would expect “naively”.
Now that all the relevant constructions are in place, we show that is the join of
and
in the poset
. There is nothing intrinsically difficult about this, but as we are still in the midst of constructing the internal logic, we will have to hunker down and prove some logic things normally taken for granted or zipped through without much thought. For example, the internal intersection operator was defined with the help of internal universal quantification, and we will need to establish some formal properties of that.
Here is a useful general principle for doing internal logic calculations. Let be the classifying map of a subset
, and let
be a function. Then the composite
classifies the subset
so that one has the general identity . In passing back and forth between the external and internal viewpoints, the general principle is to try to render “complicated” functions
into a form
which one can more easily recognize. For lack of a better term, I’ll call this the “pullback principle”.
Lemma 1: Given a relation and a constant
, there is an inclusion
as subsets of . (In traditional logical syntax, this says that for any element
,
implies
as predicates over elements . This is the type of thing that ordinarily “goes without saying”, but which we actually have to prove here!)
Proof: As we recalled above, was defined to be
, the pullback of global truth
along the classifying map
. Hold that thought.
Let
be the map which classifies the subset . Equivalently, this is the map
under the canonical isomorphisms ,
. Intuitively, this maps
, i.e., plugs an element
into an element
.
Using the adjunction of cartesian closure, the composite
transforms to the composite
so by the pullback principle, classifies
.
Equivalently,
Also, as subsets of , we have the inclusion
[this just says that belongs to the subset classified by
, or equivalently that
is in the subset
]. Applying the pullback operation
to (2), and comparing to (1), lemma 1 follows.
Lemma 2: If as subsets of
, then
.
Proof: From the last post, we have an adjunction:
if and only if
for any subset of . So it suffices to show
. But
where the first inclusion follows from .
Next, recall from the last post that the internal intersection of was defined by interpreting the following formula on the right:
Lemma 3: If , then
.
Proof: classifies the subset
, i.e.,
is identified with the predicate
in the argument
, so by hypothesis
as predicates on
. Internal implication
is contravariant in the argument
[see the following remark], so
Now apply lemma 2 to complete the proof.
Remark: The contravariance of
, that is, the fact that
implies
is a routine exercise using the adjunction [discussed last time]
if and only if
Indeed, we have
where the first inequality follows from the hypothesis
, and the second follows from
. By the adjunction, the inequality (*) implies
.
Theorem 1: For subsets of
, the subset
is an upper bound of
and
, i.e.,
.
Proof: It suffices to prove that , since then we need only apply lemma 3 to the trivially true inclusion
to infer , and similarly
. (Actually, we need only show
. We’ll do that first, and then show full equality.)
The condition we want,
is, by the adjunction , equivalent to
which, by a –
adjunction, is equivalent to
as subsets of . So we just have to prove (1). At this point we recall, from our earlier analysis, that
Using the adjunction , as in the proof of lemma 2, we have
which shows that the left side of (1) is contained in
where the last inclusion uses another –
adjunction. Thus we have established (1) and therefore also the inclusion
Now we prove the opposite inclusion
that is to say
Here we just use lemma 1, applied to the particular element : we see that the left side of (**) is contained in
which collapses to , since
. This completes the proof.
Theorem 2: is the least upper bound of
, i.e., if
is a subset containing both
and
, then
.
Proof: We are required to show that
Again, we just apply lemma 1 to the particular element : the left-hand side of the claimed inclusion is contained in
but since is true by hypothesis (is globally true as a predicate on the implicit variable
), this last subset collapses to
which completes the proof.
Theorems 1 and 2 show that for any set , the external poset
admits joins. One may go on to show (just on the basis of the topos axioms) that as in the case of meets, the global external operation of taking joins is natural in
, so that by the Yoneda principle, it is classified by an internal join operation
namely, the map which classifies the union of the subsets
and this operation satisfies all the expected identities. In short, carries an internal Heyting algebra structure, as does
for any set
.
We will come back to this point later, when we show (as a consequence of strong extensionality) that is actually an internal Boolean algebra.
Construction of Coproducts
Next, we construct coproducts just as we do in ordinary set theory: as disjoint unions. Letting be sets (objects in an ETCS category), a disjoint union of
and
is a pair of monos
whose intersection is empty, and whose union or join in is all of
. We will show that disjoint unions exist and are essentially unique, and that they satisfy the universal property for coproducts. We will use the notation
for a disjoint union.
Theorem 3: A disjoint union of and
exists.
Proof: It’s enough to embed disjointly into some set
, since the union of the two monos in
would then be the requisite
. The idea now is that if a disjoint union or coproduct
exists, then there’s a canonical isomorphism
. Since the singleton map
is monic, one thus expects to be able to embed and
disjointly into
. Since we can easily work out how all this goes in ordinary naive set theory, we just write out the formulas and hope it works out in ETCS.
In detail, define to be
where is the singleton mapping and
classifies
; similarly, define
to be
Clearly and
are monic, so to show disjointness we just have to show that their pullback is empty. But their pullback is isomorphic to the cartesian product of the pullbacks of the diagrams
so it would be enough to show that each (or just one) of these two pullbacks is empty, let’s say the first.
Suppose given a map which makes the square
A -------> 1 | | h | | chi_0 V sigma_X V X -------> PX
commute. Using the pullback principle, the map classifies
which is just the empty subset. This must be the same subset as classified by (where
is the diagonal), which by the pullback principle is
An elementary calculation shows this to be the equalizer of the pair of maps
So this equalizer is empty. But notice that
equalizes this pair of maps. Therefore we have a map
. By corollary 2 above, we infer
. This applies to the case where
is the pullback, so the pullback is empty, as was to be shown.
Theorem 4: Any two disjoint unions of are canonically isomorphic.
Proof: Suppose is a disjoint union. Define a map
where classifies the subset
, and
classifies the subset
. Applying the pullback principle, the composite
classifies
which is easily seen to be the diagonal on . Hence
. On the other hand,
classifies the subset
which is empty because and
are disjoint embeddings, so
. Similar calculations yield
Putting all this together, we conclude that and
, where
and
were defined in the proof of theorem 3.
Next, we show that is monic. If not, then by strong extensionality, there exist distinct elements
for which
; therefore,
and
. By the pullback principle, these equations say (respectively)
If , then both
factor through the mono
. However, since
is monic, this would imply that
, contradiction. Therefore
. By similar reasoning,
. Therefore
where is the negation operator. But then
. And since
is the union
by assumption,
must be the top element
, whence
is the bottom element 0. This contradicts the assumption that the topos is nondegenerate. Thus we have shown that
must be monic.
The argument above shows that is an upper bound of
and
in
. It follows that the join
constructed in theorem 3 is contained in
, and hence can be regarded as the join of
and
in
. But
is their join in
by assumption of being a disjoint union, so the containment
must be an equality. The proof is now complete.
Theorem 5: The inclusions ,
exhibit
as the coproduct of
and
.
Proof: Let ,
be given functions. Then we have monos
Now the operation certainly preserves finite meets, and also preserves finite joins because it is left adjoint to
. Therefore this operation preserves disjoint unions; we infer that the monos
exhibit as a disjoint union of
. Composing the monos of (1) and (2), we have disjoint embeddings of
and
in
. Using theorem 4,
is isomorphic to the join of these embeddings; this means we have an inclusion
whose restriction to yields
and whose restriction to
yields
. Hence
extends
and
. It is the unique extension, for if there were two extensions
, then the equalizer of
and
would be an upper bound of
in
, contradicting the fact that
is the least upper bound. This completes the proof.
I think that’s enough for one day. I will continue to explore the categorical structure and logic of ETCS next time.
This post is a continuation of the discussion of “the elementary theory of the category of sets” [ETCS] which we had begun last time, here and in the comments which followed. My thanks go to all who commented, for some useful feedback and thought-provoking questions.
Today I’ll describe some of the set-theoretic operations and “internal logic” of ETCS. I have a feeling that some people are going to love this, and some are going to hate it. My main worry is that it will leave some readers bewildered or exasperated, thinking that category theory has an amazing ability to make easy things difficult.
- An aside: has anyone out there seen the book Mathematics Made Difficult? It’s probably out of print by now, but I recommend checking it out if you ever run into it — it’s a kind of extended in-joke which pokes fun at category theory and abstract methods generally. Some category theorists I know take a dim view of this book; I for my part found certain passages hilarious, in some cases making me laugh out loud for five minutes straight. There are category-theory-based books and articles out there which cry out for parody!
In an attempt to nip my concerns in the bud, let me remind my readers that there are major differences between the way that standard set theories like ZFC treat membership and the way ETCS treats membership, and that differences at such a fundamental level are bound to propagate throughout the theoretical development, and impart a somewhat different character or feel between the theories. The differences may be summarized as follows:
- Membership in ZFC is a global relation between objects of the same type (sets).
- Membership in ETCS is a local relation between objects of different types (“generalized” elements or functions, and sets).
Part of what we meant by “local” is that an element per se is always considered relative to a particular set to which it belongs; strictly speaking, as per the discussion last time, the same element is never considered as belonging to two different sets. That is, in ETCS, an (ordinary) element of a set is defined to be a morphism
; since the codomain is fixed, the same morphism cannot be an element
of a different set
. This implies in particular that in ETCS, there is no meaningful global intersection operation on sets, which in ZFC is defined by:
Instead, in ETCS, what we have is a local intersection operation on subsets of a set. But even the word “subset” requires care, because of how we are now treating membership. So let’s back up, and lay out some simple but fundamental definitions of terms as we are now using them.
Given two monomorphisms , we write
(
if the monos are understood, or
if we wish to emphasize this is local to
) if there is a morphism
such that
. Since
is monic, there can be at most one such morphism
; since
is monic, such
must be monic as well. We say
define the same subset if this
is an isomorphism. So: subsets of
are defined to be isomorphism classes of monomorphisms into
. As a simple exercise, one may show that monos
into
define the same subset if and only if
and
. The (reflexive, transitive) relation
on monomorphisms thus induces a reflexive, transitive, antisymmetric relation, i.e., a partial order on subsets of
.
Taking some notational liberties, we write to indicate a subset of
(as isomorphism class of monos). If
is a generalized element, let us say
is in a subset
if it factors (evidently uniquely) through any representative mono
, i.e., if there exists
such that
. Now the intersection of two subsets
and
is defined to be the subset
defined by the pullback of any two representative monos
. Following the “Yoneda principle”, it may equivalently be defined up to isomorphism by specifying its generalized elements:
Thus, intersection works essentially the same way as in ZFC, only it’s local to subsets of a given set.
While we’re at it, let’s reformulate the power set axiom in this language: it says simply that for each set there is a set
and a subset
, such that for any relation
, there is a unique “classifying map”
whereby, under
, we have
The equality is an equality between subsets, and the inverse image on the right is defined by a pullback. In categorical set theory notation,
Hence, there are natural bijections
between subsets and classifying maps. The subset corresponding to is denoted
or
, and is called the extension of
.
The set plays a particularly important role; it is called the “subset classifier” because subsets
are in natural bijection with functions
. [Cf. classifying spaces in the theory of fiber bundles.]
In ordinary set theory, the role of is played by the 2-element set
. Here subsets
are classified by their characteristic functions
, defined by
iff
. We thus have
; the elementhood relation
boils down to
. Something similar happens in ETCS set theory:
Lemma 1: The domain of elementhood is terminal.
Proof: A map , that is, a map
which is in
, corresponds exactly to a subset
which contains all of
(i.e., the subobject
). Since the only such subset is
, there is exactly one map
.
Hence elementhood is given by an element
. The power set axiom says that a subset
is retrieved from its classifying map
as the pullback
.
Part of the power of, well, power sets is in a certain dialectic between external operations on subsets and internal operations on ; one can do some rather amazing things with this. The intuitive (and pre-axiomatic) point is that if
has finite products, equalizers, and power objects, then
is a representing object for the functor
which maps an object to the collection of subobjects of
, and which maps a morphism (“function”)
to the “inverse image” function
, that sends a subset
to the subset
given by the pullback of the arrows
,
. By the Yoneda lemma, this representability means that external natural operations on the
correspond to internal operations on the object
. As we will see, one can play off the external and internal points of view against each other to build up a considerable amount of logical structure, enough for just about any mathematical purpose.
- Remark: A category satisfying just the first three axioms of ETCS, namely existence of finite products, equalizers, and power objects, is called an (elementary) topos. Most or perhaps all of this post will use just those axioms, so we are really doing some elementary topos theory. As I was just saying, we can build up a tremendous amount of logic internally within a topos, but there’s a catch: this logic will be in general intuitionistic. One gets classical logic (including law of the excluded middle) if one assumes strong extensionality [where we get the definition of a well-pointed topos]. Topos theory has a somewhat fearsome reputation, unfortunately; I’m hoping these notes will help alleviate some of the sting.
To continue this train of thought: by the Yoneda lemma, the representing isomorphism
is determined by a universal element , i.e., a subset of
, namely the mono
. In that sense,
plays the role of a universal subset. The Yoneda lemma implies that external natural operations on general posets
are completely determined by how they work on the universal subset.
Internal Meets
To illustrate these ideas, let us consider intersection. Externally, the intersection operation is a natural transformation
This corresponds to a natural transformation
which (by Yoneda) is given by a function . Working through the details, this function is obtained by putting
and chasing
through the composite
Let’s analyze this bit by bit. The subset is given by
and the subset is given by
Hence is given by the pullback of the functions
and
, which is just
The map is thus defined to be the classifying map of
.
To go from the internal meet back to the external intersection operation, let
be two subsets, with classifying maps
. By the definition of
, we have that for all generalized elements
if and only if
(where the equality signs are interpreted with the help of equalizers). This holds true iff is in the subset
and is in the subset
, i.e., if and only if
is in the subset
. Thus
is indeed the classifying map of
. In other words,
.
A by-product of the interplay between the internal and external is that the internal intersection operator
is the meet operator of an internal meet-semilattice structure on : it is commutative, associative, and idempotent (because that is true of external intersection). The identity element for
is the element
. In particular,
carries an internal poset structure: given generalized elements
, we may define
if and only if
and this defines a reflexive, symmetric, antisymmetric relation :
equivalently described as the equalizer
of the maps and
. We have that
if and only if
.
Internal Implication
Here we begin to see some of the amazing power of the interplay between internal and external logical operations. We will prove that carries an internal Heyting algebra structure (ignoring joins for the time being).
Let’s recall the notion of Heyting algebra in ordinary naive set-theoretic terms: it’s a lattice that has a material implication operator
such that, for all
,
if and only if
Now: by the universal property of , a putative implication operation
is uniquely determined as the classifying map of its inverse image
, whose collection of generalized elements is
Given , the equality here is equivalent to
(because is maximal), which in turn is equivalent to
This means we should define to be the classifying map of the subset
Theorem 1: admits internal implication.
Proof: We must check that for any three generalized elements , we have
if and only if
Passing to the external picture, let be the corresponding subsets of
. Now: according to how we defined
a generalized element
is in
if and only if
. This applies in particular to any monomorphism
that represents the subset
.
Lemma 2: The composite
is the classifying map of the subset .
Proof: As subsets of ,
where the last equation holds because both sides are the subsets defined as the pullback of two representative monos
,
.
Continuing the proof of theorem 1, we see by lemma 2 that the condition corresponds externally to the condition
and this condition is equivalent to . Passing back to the internal picture, this is equivalent to
, and the proof of theorem 1 is complete.
Cartesian Closed Structure
Next we address a comment made by “James”, that a category satisfying the ETCS axioms is cartesian closed. As everything else in this article, this uses only the fact that such a category is a topos: has finite products, equalizers, and “power sets”.
Proposition 1: If are “sets”, then
represents an exponential
Proof: By the power set axiom, there is a bijection between maps into the power set and relations:
which is natural in . By the same token, there is a natural bijection
Putting these together, we have a natural isomorphism
and this representability means precisely that plays the role of an exponential
.
Corollary 1: .
The universal element of this representation is an evaluation map , which is just the classifying map of the subset
.
Thus, represents the set of all functions
(that is, relations from
to
). This is all we need to continue the discussion of internal logic in this post, but let’s also sketch how we get full cartesian closure. [Warning: for those who are not comfortable with categorical reasoning, this sketch could be rough going in places.]
As per our discussion, we want to internalize the set of such relations which are graphs of functions, i.e., maps where each
is a singleton, in other words which factor as
where is the singleton mapping:
We see from this set-theoretic description that classifies the equality relation
which we can think of as either the equalizer of the pair of maps or, what is the same, the diagonal map
.
Thus, we put , and it is not too hard to show that the singleton mapping
is a monomorphism. As usual, we get this monomorphism as the pullback
of
along its classifying map
.
Now: a right adjoint such as preserves all limits, and in particular pullbacks, so we ought then to have a pullback
B^A ---------------> 1^A | | sigma^A | | t^A V V P(B)^A -------------> P(1)^A (chi_sigma)^A
Of course, we don’t even have yet, but this should give us an idea: define
, and in particular its domain
, by taking the pullback of the right-hand map along the bottom map. In case there is doubt, the map on the bottom is defined Yoneda-wise, applying the isomorphism
to the element in the hom-set (on the left) given by the composite
The map on the right of the pullback is defined similarly. That this recipe really gives a construction of will be left as an exercise for the reader.
Universal Quantification
As further evidence of the power of the internal-external dialectic, we show how to internalize universal quantification.
As we are dealing here now with predicate logic, let’s begin by defining some terms as to be used in ETCS and topos theory:
- An ordinary predicate of type
is a function
. Alternatively, it is an ordinary element
. It corresponds (naturally and bijectively) to a subset
.
- A generalized predicate of type
is a function
. It may be identified with (corresponds naturally and bijectively to) a function
, or to a subset
.
We are trying to define an operator which will take a predicate of the form
[conventionally written
] to a predicate
[conventionally written
]. Externally, this corresponds to a natural operation which takes subsets of
to subsets of
. Internally, it corresponds to an operation of the form
This function is determined by the subset , defined elementwise by
Now, in ordinary logic, is true if and only if
is true for all
, or, in slightly different words, if
is constantly true over all of
:
The expression on the right (global truth over ) corresponds to a function
, indeed a monomorphism since any function with domain
is monic. Thus we are led to define the desired quantification operator
as the classifying map of
.
Let’s check how this works externally. Let be a generalized predicate of type
. Then according to how
has just been defined,
classifies the subset
There is an interesting adjoint relationship between universal quantification and substitution (aka “pulling back”). By “substitution”, we mean that given any predicate on
, we can always pull back to a predicate on
(substituting in a dummy variable
of type
, forming e.g.
) by composing with the projection
. In terms of subsets, substitution along
is the natural external operation
Then, for any predicate , we have the adjoint relationship
if and only if
so that substitution along is left adjoint to universal quantification along
. This is easy to check; I’ll leave that to the reader.
Internal Intersection Operators
Now we put all of the above together, to define an internal intersection operator
which intuitively takes an element (a family
of subsets of
) to its intersection
, as a subset
.
Let’s first write out a logical formula which expresses intersection:
We have all the ingredients to deal with the logical formula on the right: we have an implication operator as part of the internal Heyting algebra structure on
, and we have the quantification operator
. The atomic expressions
and
refer to internal elementhood:
means
is in
, and
means
is in
.
There is a slight catch, in that the predicates “” and “
” (as generalized predicates over
, where
lives) are taken over different domains. The first is of the form
, and the second is of the form
. No matter: we just substitute in some dummy variables. That is, we just pull these maps back to a common domain
, forming the composites
and
Putting all this together, we form the composite
This composite directly expresses the definition of the internal predicate given above. By cartesian closure, this map
induces the desired internal intersection operator,
.
This construction provides an important bridge to getting the rest of the internal logic of ETCS. Since we can can construct the intersection of arbitrary definable families of subsets, the power sets are internal inf-lattices. But inf-lattices are sup-lattices as well; on this basis we will be able to construct the colimits (e.g., finite sums, coequalizers) that we need. Similarly, the intersection operators easily allow us to construct image factorizations: any function
can be factored (in an essentially unique way) as an epi or surjection
to the image, followed by a mono or injection
. The trick is to define the image as the smallest subset of
through which
factors, by taking the intersection of all such subsets. Image factorization leads in turn to the construction of existential quantification.
As remarked above, the internal logic of a topos is generally intuitionistic (the law of excluded middle is not satisfied). But, if we add in the axiom of strong extensionality of ETCS, then we’re back to ordinary classical logic, where the law of excluded middle is satisfied, and where we just have the two truth values “true” and “false”. This means we will be able to reason in ETCS set theory just as we do in ordinary mathematics, taking just a bit of care with how we treat membership. The foregoing discussion gives indication that logical operations in categorical set theory work in ways familiar from naive set theory, and that basic set-theoretic constructions like intersection are well-grounded in ETCS.
One of the rare pleasures of doing mathematics — not necessarily high-powered research-level mathematics, but casual fun stuff too — is finally getting an answer to a question tucked away at the back of one’s mind for years and years, sometimes decades. Let me give an example: ever since I was pretty young (early teens), I’ve loved continued fractions; they are a marvelous way of representing numbers, with all sorts of connections to non-trivial mathematics [analysis, number theory (both algebraic and transcendental!), dynamical systems, knot theory, …]. And ever since I’ve heard of continued fractions, there’s one little factoid which I have frequently seen mentioned but which is hardly ever proved in the classic texts, at least not in the ones I looked at: the beautiful continued fraction representation for .
[Admittedly, most of my past searches were done in the pre-Google era — today it’s not that hard to find proofs online.]
This continued fraction was apparently “proved” by Euler way back when (1731); I once searched for a proof in his Collected Works, but for some reason didn’t find it; perhaps I just got lost in the forest. Sometimes I would ask people for a proof; the responses I got were generally along the lines of “isn’t that trivial?” or “I think I can prove that”. But talk is cheap, and I never did get no satisfaction. That is, until a few (maybe five) years ago, when by accident I spotted a proof buried in Volume 2 of Knuth’s The Art of Computer Programming. Huge rush of relief! So, if any of you have been bothered by this yourselves, maybe this is your lucky day.
I’m sure most of you know what I’m talking about. To get the (regular) continued fraction for a number, just iterate the following steps: write down the integer part, subtract it, take the reciprocal. Lather, rinse, repeat. For example, the sequence of integer parts you get for is 1, 2, 2, 2, … — this means
giving the continued fraction representation for . Ignoring questions of convergence, this equation should be “obvious”, because it says that the continued fraction you get for
equals the reciprocal of the continued fraction for
.
Before launching in on , let me briefly recall a few well-known facts about continued fractions:
- Every rational number has a continued fraction representation of finite length. The continued fraction expresses what happens when one iterates the Euclidean division algorithm.
For example, the integer parts appearing in the continued fraction for 37/14:
duplicate the successive quotients one gets by using the division algorithm to compute :
- A number has an infinite continued fraction if and only if it is irrational. Let
denote the space of irrationals between 0 and 1 (as a subspace of
). The continued fraction representation (mapping an irrational
to the corresponding infinite sequence of integer parts
in its continued fraction representation
) gives a homeomorphism
where
carries a topology as product of countably many copies of the discrete space
.
In particular, the shift map , defined by
, corresponds to the map
defined by
. The behavior of
is a paragon, an exemplary model, of chaos:
- There is a dense set of periodic points of
. These are quadratic surds like
: elements of
that are fixed points of fractional linear transformations
(for integral
and
).
- The transformation
is topologically mixing.
- There is sensitive dependence on initial conditions.
For some reason, I find it fun to observe this sensitive dependence using an ordinary calculator. Try calculating something like the golden mean , and hit it with
over and over and watch the parade of integer parts go by (a long succession of 1’s until the precision of the arithmetic finally breaks down and the behavior looks random, chaotic). For me this activity is about as enjoyable as popping bubble wrap.
- Remark: One can say rather more in addition to the topological mixing property. Specifically, consider the measure
on
, where
. It may be shown that
is a measure-preserving transformation; much more significantly,
is an ergodic transformation on the measure space. It then follows from Birkhoff’s ergodicity theorem that whenever
is integrable, the time averages
approach the space average
for almost all
. Applying this fact to
, it follows that for almost all irrationals
, the geometric mean of the integer parts
approaches a constant, Khinchin’s constant
. A fantastic theorem!
Anyway, I digress. You are probably waiting to hear about the continued fraction representation of , which is
:
Cute little sequence, except for the bump at the beginning where there’s a 2 instead of a 1. One thing I learned from Knuth is that the bump is smoothed away by writing it in a slightly different way,
involving triads , where
.
Anyway, how to prove this fact? I’ll sketch two proofs. The first is the one I found in Knuth (loc. cit., p. 375, exercise 16; see also pp. 650-651), and I imagine it is close in spirit to how Euler found it. The second is from a lovely article of Henry Cohn which appeared in the American Mathematical Monthly (Vol. 116 [2006], pp. 57-62), and is connected with Hermite’s proof of the transcendence of .
PROOF 1 (sketch)
Two functions which Euler must have been very fond of are the tangent function and its cousin the hyperbolic tangent function,
related by the equation . These functions crop up a lot in his investigations. For example, he knew that their Taylor expansions are connected with Bernoulli numbers, e.g.,
The Taylor coefficients where
are integers called tangent numbers; they are the numbers 1, 2, 16, … which appear along the right edge of the triangle
1
0, 1
1, 1, 0
0, 1, 2, 2
5, 5, 4, 2, 0
0, 5, 10, 14, 16, 16
where each row is gotten by taking partial sums from the preceding row, moving alternately left-to-right and right-to-left. The numbers 1, 1, 5, … which appear along the left edge are called secant numbers , the Taylor coefficients of the secant function. Putting
, the secant and tangent numbers
together are called Euler numbers, and enjoy some interesting combinatorics:
counts the number of “zig-zag permutations”
of
, where
. For more on this, see Stanley’s Enumerative Combinatorics (Volume I), p. 149, and also Conway and Guy’s The Book of Numbers, pp. 110-111; I also once gave a brief account of the combinatorics of the
in terms the generating function
, over here.
Euler also discovered a lovely continued fraction representation,
as a by-product of a larger investigation into continued fractions for solutions to the general Riccati equation. Let’s imagine how he might have found this continued fraction. Since both sides of the equation are odd functions, we may as well consider just , where
. Thus the integer part is 0; subtract the integer part and take the reciprocal, and see what happens.
The MacLaurin series for is
; its reciprocal has a pole at 0 of residue 1, so
gives a function which is odd and analytic near 0. Now repeat: reciprocating
, we get a simple pole at 0 of residue 3, and
gives a function which is odd and analytic near 0, and one may check by hand that its MacLaurin series begins as
.
The pattern continues by a simple induction. Recursively define (for )
It turns out (lemma 1 below) that each is odd and analytic near 0, and then it becomes plausible that the continued fraction for
above is correct: we have
Indeed, assuming the fact that is uniformly bounded over
, these expressions converge as
, so that the continued fraction expression for
is correct.
Lemma 1: Each (as recursively defined above) is odd and analytic near 0, and satisfies the differential equation
Proof: By induction. In the case , we have that
is analytic and
Assuming the conditions hold when , and writing
we easily calculate from the differential equation that . It follows that
is indeed analytic in a neighborhood of 0. The verification of the differential equation (as inductive step) for the case is routine and left to the reader.
- Remark: The proof that the continued fraction above indeed converges to
is too involved to give in detail here; I’ll just refer to notes that Knuth gives in the answers to his exercises. Basically, for each
in the range
, he gets a uniform bound
for all
, and notes that as a result convergence of the continued fraction is then easy to prove for such
(good enough for us, as we’ll be taking
). He goes on to say, somewhat telegraphically for my taste, “Careful study of this argument reveals that the power series for
actually converges for
; therefore the singularities of
get farther and farther away from the origin as
grows, and the continued fraction actually represents
throughout the complex plane.” [Emphasis his] Hmm…
Assuming the continued fraction representation for , let’s tackle
. From the continued fraction we get for instance
Taking reciprocals and manipulating,
Theorem 1: .
Proof: By the last displayed equation, it suffices to show
This follows from a recursive algorithm for multiplying a continued fraction by 2, due to Hurwitz (Knuth, loc. cit., p. 375, exercise 14):
Lemma 2: , and
I won’t bother proving this; instead I’ll just run through a few cycles to see how it applies to theorem 1:
and so on. Continuing this procedure, we get , which finishes the proof of theorem 1.
PROOF 2
I turn now to the second proof (by Cohn, loc. cit.), which I find rather more satisfying. It’s based on Padé approximants, which are “best fit” rational function approximations to a given analytic function, much as the rational approximants provide “best fits” to a given real number
. (By “best fit”, I mean a sample theorem like: among all rational numbers whose denominator is bounded above in magnitude by
, the approximant
comes closest to
.)
Definition: Let be a function analytic in a neighborhood of 0. The Padé approximant to
of order
, denoted
, is the (unique) rational function
such that
,
, and the MacLaurin coefficients of
agree with those of
up to degree
.
This agreement of MacLaurin coefficients is equivalent to the condition that the function
is analytic around 0. Here, we will be interested in Padé approximants to .
In general, Padé approximants may be computed by (tedious) linear algebra, but in the present case Hermite found a clever integration trick which gets the job done:
Proposition 1: Let be a polynomial of degree
. Then there are polynomials
of degree at most
such that
Explicitly,
Proof: Integration by parts yields
and the general result follows by induction.
It is clear that the integral of proposition 1 defines a function analytic in . Taking
, this means we can read off the Padé approximant
to
from the formulas for
in proposition 1, provided that the polynomial
[of degree
] is chosen so that
. Looking at these formulas, all we have to do is choose
to have a zero of order
at
, and a zero of order
at
. Therefore
fits the bill.
Notice also we can adjust by any constant multiple; the numerator and denominator
are adjusted by the same constant multiples, which cancel each other in the Padé approximant
.
Taking in proposition 1, we then infer
Notice that this integral is small when are large. This means that
will be close to
(see the following remark), and it turns out that by choosing
appropriately, the values
coincide exactly with rational approximants coming from the continued fraction for
.
- Remark: Note that for the choice
, the values
derived from proposition 1 are manifestly integral, and
. [In particular,
, justifying the claim that
is small if
is.] In fact,
may be much larger than necessary; e.g., they may have a common factor, so that the fraction
is unreduced. This ties in with how we adjust
by a constant factor, as in theorem 2 below.
For , let
denote the
rational approximant arising from the infinite continued fraction
where . From standard theory of continued fractions, we have the following recursive rule for computing the integers
from the
:
,
,
,
, and
Explicitly, and
, so
[Note: and
, so
is infinite, but that won’t matter below.]
Theorem 2: Define, for ,
Then ,
, and
.
Proof: It is easy to see ,
, and
. In view of the recursive relations for the
above, it suffices to show
The last relation is trivial. The first relation follows by integrating both sides of the identity
over the interval . The second relation
follows by integrating both sides of the identity
which we leave to the reader to check. This completes the proof.
Theorem 2 immediately implies that $e$ equals the infinite continued fraction displayed above (equivalently, in standard form, $e = [2; 1, 2, 1, 1, 4, 1, 1, 6, \ldots]$); indeed, the rational approximants $p_i/q_i$ to the right-hand side have the property that $q_i e - p_i$ is one of $\pm A_n, \pm B_n, \pm C_n$ (for $i = 3n, 3n+1, 3n+2$ respectively), and looking at their integral expressions, these quantities approach 0 very rapidly. This in turn means, since the denominators $q_i$ grow rapidly with $i$, that the rational approximants $p_i/q_i$ approach $e$ "insanely" rapidly, and this in itself can be used as the basis of a proof that $e$ is transcendental (Roth's theorem). To give some quantitative inkling of just "how rapidly": Knuth in his notes gives estimates on how close the approximant $r_{n,n}(x)$ is to the function $e^x$. It's something on the order of $\frac{n!\,n!}{(2n)!\,(2n+1)!}$ (loc. cit., p. 651).
- Remark: Quoting Roth's theorem in support of a theorem of Hermite is admittedly anachronistic. However, the Padé approximants and their integral representations used here did play an implicit role in Hermite's proof of the transcendence of $e$; in fact, Padé was a student of Hermite. See Cohn's article for further references to this topic.
[Wow, another long post. I wonder if anyone will read the whole thing!]
[Edits in response to the comment below by Henry Cohn.]
“In mathematics you don’t understand things. You just get used to them.”
— John von Neumann
I had been wanting to write on this topic – no, I am not referring to the above quote by von Neumann – for quite some time but I wasn’t too sure if doing so would contribute anything “useful” to the ongoing discussion on the pedagogical roles of concrete and abstract examples in mathematics, a discussion that’s been going on on various blogs for some time now. In part coaxed by Todd, let me share some of my own observations for whatever they are worth.
First, some background. A few months ago, Scientific American published an article titled In Abstract: Avoid Concrete Examples When Teaching Math (by Nikhil Swaminathan). Some excerpts from that article can be read below:
New research published in Science suggests that attempts by math teachers to make the subject easier to grasp by providing such practical examples may actually have made it tougher to learn.
…
For their study, Kaminski and her colleagues taught 80 undergraduate students—split into four 20-person groups—a new mathematical system (based on several simple arithmetic concepts) in different ways.
One group was taught using generic symbols such as circles and diamonds. The other groups were taught using practical scenarios such as combining liquids in measuring cups.
The researchers then tested their grasp of the concept by seeing how well they could apply it to an unrelated situation, in this case a children’s game. The results: students who learned using symbols on average scored 80 percent; the others scored between 40 and 50 percent, according to Kaminski.
One may read the entire article online to learn a bit more about the study done. Let me add that I do agree with the overall conclusion of the study cited: in mathematics concrete examples (in contradistinction to abstract ones) more often than not obfuscate the underlying concepts behind those examples, thus hindering “real” or complete understanding of those concepts. However, I also feel that such a claim must be somewhat qualified because there is more to it than meets the eye.
Sometimes the line between abstract examples and concrete ones can be quite blurry. What is more, some concrete examples may even be more abstract than other concrete ones. In this post, I will assume (and hope others do too) that the distinction between an abstract example and a concrete one (that I have chosen for this post) is sharp enough for our discussion. Of course, my aim is not to highlight such a distinction but to emphasize the importance of both abstract and concrete examples in mathematical education, for I firmly believe that a “concrete” understanding of concepts isn’t necessarily subsumed under an “abstract” one, even though a concrete example may just be a special case of a more general and abstract one. What is more, and this may sound surprising, abstract examples may sometimes not reveal certain useful principles which, on the other hand, may be clearly revealed by concrete ones!
Let me illustrate what I wrote above by discussing a somewhat well-known problem and its two related solutions, one of which employs an abstract approach and the other a concrete one, if you will. Some time ago, Isabel at God Plays Dice pointed to an online article titled An Intuitive Explanation of Bayesian Reasoning by Eliezer Yudkowsky, and I borrow the problem I am about to discuss in this post from that article.
PROBLEM: 1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?
How may one proceed to solve this problem? Well, first, let us look at an “abstract” solution.
“ABSTRACT” SOLUTION: Here we employ the machinery of set-theoretic probability theory to arrive at our answer. We first note that what we really want to compute is the probability of a woman having breast cancer given that she has tested positive. That is, we want to compute the conditional probability P(A/B), where event A corresponds to that of a woman having breast cancer and event B corresponds to that of a woman testing positive for breast cancer. Now, from Bayes' theorem, we have

$$P(A/B) = \frac{P(B/A)P(A)}{P(B/A)P(A) + P(B/A')P(A')},$$

where $A'$ denotes the complement of $A$. Also, we note that $P(A) = 0.01$, $P(B/A) = 0.8$ and $P(B/A') = 0.096$. Plugging these values into the above formula immediately yields P(A/B) = 7.76%. And, we are done.
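For readers who want to check the arithmetic, here is the computation spelled out (a minimal sketch; the variable names are mine, not from the article):

```python
# Direct check of the "abstract" computation via Bayes' theorem.
p_A = 0.01               # P(A): prevalence of breast cancer
p_B_given_A = 0.80       # P(B/A): true positive rate
p_B_given_notA = 0.096   # P(B/A'): false positive rate

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)   # total probability
p_A_given_B = p_B_given_A * p_A / p_B                  # Bayes' theorem
print(f"P(A/B) = {p_A_given_B:.4f}")                   # -> 0.0776, about 7.76%
```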
A couple of observations.
1. It is not hard to observe that the derivation of Bayes’ formula follows from the definition of conditional probability, viz. P(A/B) = P(AB)/P(B), where P(B) > 0, and the usual set-theoretic rules involving the union and intersection of sets (events). And, this derivation can be carried out through sheer manipulation of symbols under those rules. By that I mean, if a student knows enough set theory as well as the “laws” of set-theoretic probability theory, then the derivation of Bayes’ theorem makes absolutely no (or, almost no) use of the “intuitive” faculty of a student.
2. The abstract method presented above also subsumes the concrete method, as we shall see shortly. What is more, Bayes’ formula can be generalized even further. This means that once we have this particularly useful “abstract” tool at our disposal, we can solve any number of similar problems by repeatedly using this tool in concrete (and even abstract) cases. In addition, Bayes’ theorem can also be thought of as a “black box” to which we apply certain inputs in order to get our output. This should not surprise us, for in mathematics the use of theorems as black boxes is a common one.
Now, the above two observations may lead one to believe that indeed there is almost no need to find a “concrete” solution to the above problem. After all, the abstract case takes care of the concrete cases completely.
However, let us see if we can come up with a concrete (that is, a far less abstract) solution and examine it more closely to see if we can extract some useful ideas/techniques from the same.
“CONCRETE” SOLUTION: Suppose we choose a random sample of 100,000 women of age forty. (We choose that figure for reasons that will be clear soon.) Then, we have two groups of women.
1st group: 1,000 (1%) women who have breast cancer.
2nd group: 99,000 (99%) women who don’t have breast cancer.
Now, in the 1st group, 800 (80% of 1,000) women will test positive, and, in the 2nd group, 9,504 (9.6% of 99,000) women will test positive. So, it is clear that if a woman tests positive, then the probability that she belongs to the 1st group (that is, that she really has cancer) is 800/(800 + 9,504) = 7.76%. And, we are done.
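The same head-count can be done by machine (again just a sketch of the arithmetic above):

```python
# The "concrete" solution as literal head-counting in a sample of 100,000.
sample = 100_000
with_cancer = sample * 0.01                  # 1,000 women
without_cancer = sample - with_cancer        # 99,000 women

positive_with = 0.80 * with_cancer           # 800 true positives
positive_without = 0.096 * without_cancer    # 9,504 false positives

print(positive_with / (positive_with + positive_without))   # -> 0.0776...
```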
Let me quickly point out a very important advantage the above solution has over the abstract one we saw earlier.
Indeed, we finally "see" what's really going on. That is, from an intuitive standpoint, we observe in the above solution that there is a "tree structure" involved in our reasoning. The sample of 100,000 women bifurcates into two distinct samples, one of which has 1,000 women who have breast cancer and the other that has 99,000 women who don't. Next, we observe that each of these two samples in turn bifurcates into two samples, one of which comprises women who test positive and the other that comprises women who don't. This clearly reveals to the student the "tree structure" in the above reasoning. This makes the concrete solution much more appealing and "satisfying" to the average student. In fact, the generalization we talked about earlier in regard to Bayes' theorem can even be carried out in this particular method: we will only need to increase the depth and/or breadth of our "tree" by extending more nodes from existing ones!
Moreover, one may recall that the use of such "trees" in reasoning is quite common in mathematics. For instance, the two most basic combinatorial principles, viz. the Rule of Sum and the Rule of Product, are proved using such "trees". So, this is one instance in which a concrete solution reveals much more clearly a quite fundamental principle/technique (use of "trees" in reasoning) in mathematics that isn't clearly revealed at all in the abstract solution we examined earlier.
In other words, much thought needs to be put in deciding if abstract examples should necessarily be “favored” over concrete ones in mathematics education. From a pedagogical standpoint, sometimes concrete examples are simply much better than abstract ones!
A couple of weeks ago, when Miodrag Milenkovic posed an interesting general problem in connection with POW-7, I was reminded of the "hairy ball theorem" (obviously a phrase invented during a more innocent era!), and a surprisingly easy proof of same that John Baez once told me over beers in an English pub. John is quite a good story-teller, and I wasn't able to guess the punch line of the proof before it came out of his mouth; when it came, I was so surprised that I nearly fell off my stool! Well, I happened to run across the original source of this proof recently, and though it may be "old news" for some, it's such a nice proof that I thought it was worth sharing.
The hairy ball theorem says: every continuous tangent vector field on a sphere of even dimension must vanish somewhere (at some point of the sphere, the tangent vector is zero). In the case of an ordinary 2-dimensional sphere, if you think of a vector at a point as a little “hair” emanating from that point, then the theorem says that you can’t comb the hairs of a sphere so that they all lie flat against the sphere: there will be a cowlick sticking straight out somewhere.
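The evenness hypothesis is essential: on an odd-dimensional sphere $S^{n-1}$ (so $n$ even), pairing up the coordinates gives an explicit nonvanishing tangent field, $v(x_1, x_2, x_3, x_4, \ldots) = (-x_2, x_1, -x_4, x_3, \ldots)$. The following sketch (my own illustration, not part of the proof below) checks tangency and nonvanishing numerically:

```python
# An explicit nonvanishing tangent field on odd-dimensional spheres S^{n-1},
# available exactly when the ambient dimension n is even.
import numpy as np

def tangent_field(x):
    v = np.empty_like(x)
    v[0::2], v[1::2] = -x[1::2], x[0::2]   # rotate each coordinate pair
    return v

rng = np.random.default_rng(0)
for n in (2, 4, 6):                        # ambient dimension must be even
    x = rng.normal(size=n)
    x /= np.linalg.norm(x)                 # a random point on S^{n-1}
    v = tangent_field(x)
    print(n, np.dot(x, v), np.linalg.norm(v))   # dot = 0 (tangent), |v| = 1
```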
The classical proofs usually make some kind of appeal to homology theory: for example, a deep result is that the Euler characteristic of a compact manifold can be computed in terms of any continuously differentiable tangent vector field, by adding up the so-called "indices" of the vector field in the neighborhoods of critical points, where the vector field vanishes (a technical result shows there is no loss of generality in assuming the vector field is continuously differentiable). If the vector field vanishes nowhere, then the Euler characteristic is the empty sum 0; this cannot happen in the case of an even-dimensional sphere, because its Euler characteristic is 2. The hairy ball theorem follows.
Some of these homology-based proofs are quite slick, but generally speaking, homology theory requires some heavy infrastructure; the question is whether a more elementary proof exists. The following “analytic” proof is due to John Milnor and uses very little machinery, basically nothing beyond advanced calculus. I will follow his exposition (American Mathematical Monthly, July 1978, pp. 521-524) pretty closely.
For the first step, suppose that we have a continuously differentiable vector field $v$ defined in a compact region of space $A \subseteq \mathbb{R}^n$. For any real $t$ and for $x \in A$, define a function

$$f_t(x) = x + t\,v(x).$$

The matrix of first partial derivatives of $f_t$ is $I + t\,v'(x)$, where $I$ denotes the $n \times n$ identity matrix and $v'(x)$ the matrix of first partials of $v$. For $t$ sufficiently small, the determinant of this matrix is strictly positive over all of $A$.
Lemma 1: If $t$ is sufficiently small, then $f_t$ is a one-to-one function of $A$ onto its image, and $\mathrm{vol}\,f_t(A)$ is a polynomial function of $t$.

Proof: Since $v$ is continuously differentiable over a compact region, there is [using e.g. the mean value theorem, evidently a red rag for some people 😉 ] a Lipschitz constant $c$ so that

$$|v(x) - v(y)| \leq c|x - y|$$

for all $x, y \in A$. We have $f_t(x) = f_t(y)$ only if $|x - y| = |t||v(x) - v(y)| \leq |t|c|x - y|$; if we assume $|t| < 1/c$, this can happen only if $x = y$. So $f_t$ is one-to-one for such $t$.

The determinant of the matrix of first partials above is of the form

$$\det(I + t\,v'(x)) = 1 + \sigma_1(x)t + \ldots + \sigma_n(x)t^n,$$

where the $\sigma_i$ are continuous functions in $x$. By the first part of the lemma, we may take $t$ so small that $f_t$ is a continuously differentiable embedding, and then by a change-of-variables formula in multivariate calculus we have that

$$\mathrm{vol}\,f_t(A) = \int_A \det(I + t\,v'(x))\,dx = a_0 + a_1 t + \ldots + a_n t^n,$$

where $a_i = \int_A \sigma_i(x)\,dx$. This completes the proof.
Next, suppose that on the unit sphere $S^{n-1} \subseteq \mathbb{R}^n$ we have a continuously differentiable non-vanishing field $v$ of tangent vectors. Applying the continuously differentiable map $v \mapsto v/|v|$, we may assume the vector field $v$ consists of unit tangent vectors. For each $x \in S^{n-1}$, the vector $f_t(x) = x + t\,v(x)$ is thus of length $\sqrt{1+t^2}$, hence $f_t$ maps the unit sphere $S^{n-1}$ to the sphere of radius $\sqrt{1+t^2}$.
Lemma 2: For sufficiently small $t$, the map $f_t$ maps $S^{n-1}$ onto the sphere of radius $\sqrt{1+t^2}$.

Proof: Extend the vector field $v$ on $S^{n-1}$ (and therefore also $f_t$) to the compact region between two concentric spheres,

$$A = \{x \in \mathbb{R}^n : 1/2 \leq |x| \leq 3/2\},$$

by homothety, i.e., put $v(rx) = r\,v(x)$, $f_t(rx) = r\,f_t(x)$, for $1/2 \leq r \leq 3/2$ and $|x| = 1$. There is a Lipschitz constant $c$ such that $|v(x) - v(y)| \leq c|x - y|$ for $x, y \in A$.

Take $|t| < \min(1/4, 1/c)$, and let $u$ be any unit vector. The function

$$F(x) = \sqrt{1+t^2}\,u - t\,v(x)$$

maps the complete metric space $A$ to itself (because $|F(x)| \leq \sqrt{1+t^2} + |t||v(x)| \leq 3/2$ and $|F(x)| \geq \sqrt{1+t^2} - |t||v(x)| \geq 1/2$; just use triangle inequalities), and $F$ satisfies a Lipschitz condition

$$|F(x) - F(y)| \leq k|x - y|$$

where $k = |t|c < 1$. By a classical fixed-point theorem, $F$ has under these conditions a (unique) fixed point $x_0$, so that $f_t(x_0) = x_0 + t\,v(x_0) = \sqrt{1+t^2}\,u$; since $f_t$ multiplies lengths by $\sqrt{1+t^2}$, the point $x_0$ lies on the unit sphere. Rescaling $x_0$ and $u$ by a factor $r$, the statement of lemma 2 follows.
Now we prove the hairy ball theorem. If $v$ is a continuous non-vanishing vector field of tangent vectors on $S^{n-1}$, let $m > 0$ be the absolute minimum of $|v(x)|$. By standard techniques (e.g., using the Stone-Weierstrass approximation theorem), there is a continuously differentiable tangent vector field $w$ such that $|w(x) - v(x)| < m/2$ for all $x$, and then

$$|w(x)| \geq |v(x)| - |v(x) - w(x)| > m/2 > 0,$$

so that $w$ is also non-vanishing. As above, we may then substitute $w/|w|$ for $w$, i.e., assume that $w$ consists of unit vectors.

Given $0 < a < 1 < b$, we extend $w$ to the region $A = \{x : a \leq |x| \leq b\}$ by homothety. For sufficiently small $t$, the map $f_t$ defined above maps a spherical shell $\{x : |x| = r\}$ in this region bijectively onto the shell $\{x : |x| = r\sqrt{1+t^2}\}$. Hence $f_t$ maps $A$ bijectively onto the region $\{x : a\sqrt{1+t^2} \leq |x| \leq b\sqrt{1+t^2}\}$, and we get a dilation factor:

$$\mathrm{vol}\,f_t(A) = (1+t^2)^{n/2}\,\mathrm{vol}(A).$$

By lemma 1, $\mathrm{vol}\,f_t(A)$ is polynomial in $t$; but $(1+t^2)^{n/2}$ is polynomial in $t$ only if $n$ is even. So $n$ must be even; therefore $S^{n-1}$ admits a non-vanishing vector field only if $n-1$ is odd. This gives the hairy ball theorem.
Man, what an awesome proof. That John Milnor is just a master of technique.
Just a quick note on how any of this bears on Milenkovic's problem. He asked whether for any topological embedding of $S^{n-1}$ in $\mathbb{R}^n$ and any point $p$ in the region $R$ interior to the embedding, there exists a hyperplane $H$ through $p$ such that the barycenter of the $(n-1)$-dimensional region $H \cap R$ coincides with $p$.
Under the further simplifying assumption that the barycenter varies continuously with $H$, the answer is 'yes' for even-dimensional spheres. For (taking $p$ to be the origin) we can define a tangent vector field on $S^{n-1}$ whose value at $x$ is the vector from $p$ to the barycenter of $H_x \cap R$, where $H_x$ is the hyperplane through $p$ orthogonal to $x$; this vector is orthogonal to $x$, hence tangent to the sphere at $x$. For $n-1$ even, this vector vanishes for some $x$, hence $p$ coincides with the barycenter for that particular $H_x$.
After this brief (?) categorical interlude, I'd like to pick up the main thread again, and take a closer look at some of the ingredients of baby Stone duality in the context of categorical algebra, specifically through the lens of adjoint functors. By passing a topological light through this lens, we will produce the spectrum of a Boolean algebra: a key construction of full-fledged Stone duality!
Just before the interlude, we were discussing some consequences of baby Stone duality. Taking it from the top, we recalled that there are canonical maps

$$A \to \mathbf{2}^{\mathrm{Bool}(A, \mathbf{2})}, \qquad X \to \mathrm{Bool}(\mathbf{2}^X, \mathbf{2})$$

in the categories of Boolean algebras and sets, respectively. We said these are "natural" maps (even before the notion of naturality had been formally introduced), and recalled our earlier result that these are isomorphisms when $A$ and $X$ are finite (which is manifestly untrue in general; for instance, if $A$ is a free Boolean algebra generated by a countable set, then for simple reasons of cardinality $A$ cannot be a power set).
What we have here is an adjoint pair of functors between the categories $Set$ and $\mathbf{Bool}^{op}$ of sets and Boolean algebras, each given by a hom-functor:

$$\mathbf{2}^{(-)} = \hom_{Set}(-, \mathbf{2}): Set \to \mathbf{Bool}^{op}, \qquad \mathrm{Bool}(-, \mathbf{2}): \mathbf{Bool}^{op} \to Set$$

($\mathbf{2}^{(-)}$ acts the same way on objects and morphisms as $\hom_{Set}(-, \mathbf{2})$, but is regarded as mapping between the opposite categories). This actually says something very simple: that there is a natural bijection between Boolean algebra maps $\phi: A \to \mathbf{2}^X$ and functions $f: X \to \mathrm{Bool}(A, \mathbf{2})$, given by the formula $\phi(a)(x) = f(x)(a)$. [The very simple nature of this formula suggests that it's nothing special to Boolean algebras: a similar adjunction could be defined for any algebraic theory defined by operations and (universally quantified) equations, replacing $\mathbf{2}$ by any model of that theory.] The unit of the adjunction at $X$ is the function $X \to \mathrm{Bool}(\mathbf{2}^X, \mathbf{2})$, and the counit at $A$ is the Boolean algebra map $A \to \mathbf{2}^{\mathrm{Bool}(A, \mathbf{2})}$ (regarded as a morphism mapping the other way in the opposite category $\mathbf{Bool}^{op}$).
The functor $\mathrm{Bool}(-, \mathbf{2})$ is usually described in the language of ultrafilters, as I will now explain.
Earlier, we remarked that an ultrafilter in a Boolean algebra is a maximal filter, dual to a maximal ideal; let's recall what that means. A maximal ideal in a Boolean ring $A$ is the kernel of a (unique) ring map

$$\phi: A \to \mathbf{2},$$

i.e., has the form $I = \phi^{-1}(0)$ for some such map. Being an ideal, it is an additive subgroup $I \subseteq A$ such that $x \in I$ implies $xy \in I$ for all $y \in A$. It follows that if $x, y \in I$, then $x \vee y = x + y + xy \in I$, so $I$ is closed under finite joins (including the empty join $0$). Also, if $x \leq y$ (i.e., $x = xy$) and $y \in I$, then $x \in I$, so that $I$ is "downward-closed".
Conversely, a downward-closed subset $I \subseteq A$ which is closed under finite joins is an ideal in $A$ (exercise!). Finally, if $I$ is a maximal ideal, then under the quotient map

$$\phi: A \to A/I \cong \mathbf{2}$$

we have that for all $x \in A$, either $\phi(x) = 0$ or $\phi(\neg x) = 0$, i.e., that either $x \in I$ or $\neg x \in I$.
Thus we have redefined the notion of maximal ideal in a Boolean algebra in the first-order theory of posets: a downward-closed set $I$ closed under finite joins, such that every element $x$ or its complement $\neg x$ (but never both!) is contained in $I$. [If both $x, \neg x \in I$, then $1 = x \vee \neg x \in I$, whence $y \in I$ for all $y$ (since $y \leq 1$ and $I$ is downward-closed). But then $I$ isn't a maximal (proper) ideal!]
The notion of ultrafilter is dual, so an ultrafilter in a Boolean algebra $A$ is defined to be a subset $F \subseteq A$ which

- Is upward-closed: if $x \in F$ and $x \leq y$, then $y \in F$;

- Is closed under finite meets: if $x, y \in F$, then $x \wedge y \in F$;

- Satisfies dichotomy: for every $x \in A$, exactly one of $x, \neg x$ belongs to $F$.
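Since these axioms are finitary, they can be checked by brute force on a small power set. The following sketch (the choice $X = \{0, 1, 2\}$ is mine, purely for illustration) enumerates all families of subsets of $X$ and finds exactly the three principal ultrafilters, anticipating proposition 1 below:

```python
# Brute-force search for ultrafilters in the Boolean algebra P(X), X = {0,1,2}.
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_ultrafilter(F):
    up_closed = all(T in F for S in F for T in subsets if S <= T)
    meet_closed = all(S & T in F for S in F for T in F)
    dichotomy = all((S in F) != (X - S in F) for S in subsets)
    return up_closed and meet_closed and dichotomy

families = chain.from_iterable(
    combinations(subsets, r) for r in range(len(subsets) + 1))
ultrafilters = [set(F) for F in families if is_ultrafilter(frozenset(F))]
print(len(ultrafilters))   # -> 3: the principal ultrafilters prin(x), x in X
```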
If $I$ is a maximal ideal, then $F = \{\neg x : x \in I\}$ is an ultrafilter, and we have natural bijections between the following concepts:

Boolean algebra maps $A \to \mathbf{2}$ $\leftrightarrow$ maximal ideals $I \subseteq A$ $\leftrightarrow$ ultrafilters $F \subseteq A$,

so that $\mathrm{Bool}(A, \mathbf{2})$ is naturally identified with the set of ultrafilters in $A$.
If $X$ is a set, then an ultrafilter on $X$ is by definition an ultrafilter in the Boolean algebra $\mathbf{2}^X$. Hence $\mathrm{Bool}(\mathbf{2}^X, \mathbf{2})$ is identified with the set of ultrafilters on $X$, usually denoted $\beta(X)$. The unit map $X \to \beta(X)$ maps $x \in X$ to an ultrafilter denoted $\mathrm{prin}(x)$, consisting of all subsets $S \subseteq X$ which contain $x$, and called the principal ultrafilter generated by $x$.
We saw that when $X$ is finite, the function $X \to \beta(X)$ (and therefore also the corresponding Boolean algebra map) is a bijection: there every ultrafilter is principal, as part of baby Stone duality (see Proposition 4 here). Here is a slight generalization:

Proposition 1: If an ultrafilter $F$ on $X$ contains a finite set $S \subseteq X$, then $F$ is principal.

Proof: It is enough to show $F$ contains $\{x\}$ for some $x \in S$. If not, then $F$ contains the complement $\neg\{x\}$ for every $x \in S$ (by dichotomy), and therefore also the finite intersection

$$\neg S = \bigcap_{x \in S} \neg\{x\},$$

which contradicts the fact that $S \in F$.
It follows that nonprincipal ultrafilters can exist only on infinite sets $X$, and that every cofinite subset of $X$ (complement of a finite set) belongs to such an ultrafilter (by dichotomy). The collection of cofinite sets forms a filter, and so the question of existence of nonprincipal ultrafilters is the question of whether the filter of cofinite sets can be extended to an ultrafilter. Under the axiom of choice, the answer is yes:
Proposition 2: Every (proper) filter in a Boolean algebra is contained in some ultrafilter.
Proof: This is dual to the statement that every proper ideal in a Boolean ring is contained in a maximal ideal. Either statement may be proved by appeal to Zorn’s lemma: the collection of filters which contain a given filter has the property that every linear chain of such filters has an upper bound (namely, the union of the chain), and so by Zorn there is a maximal such filter.
As usual, Zorn's lemma is a kind of black box: it guarantees existence without giving a clue to an explicit construction. In fact, nonprincipal ultrafilters on sets $X$, like well-orderings of the reals, are notoriously inexplicit: no one has ever seen one directly, and no one ever will.
That said, one can still develop some intuition for ultrafilters. I think of them as something like "fat nets". Each ultrafilter $F$ on a set $X$ defines a poset (of subsets ordered by inclusion), but I find it more suggestive to consider instead the opposite $F^{op}$, where $S \leq T$ in $F^{op}$ means $T \subseteq S$, so that the further or deeper you go in $F^{op}$, the smaller or more concentrated the element. Since $F$ is closed under finite intersections, $F^{op}$ has finite joins, so that $F^{op}$ is directed (any two elements have an upper bound), just like the elements of a net (or more pedantically, the domain of a net). I call an ultrafilter a "fat net" because its elements, being subsets of $X$, are "fatter" than mere points.
Intuitively speaking, ultrafilters as nets "move in a definite direction", in the sense that given an element $S \in F$, however far in the net, and given a subset $T \subseteq S$, the ultrafilter-as-net sniffs out a direction in which to proceed, "tunneling" either into $T$ if $T \in F$, or into its relative complement $S \setminus T$ if this belongs to $F$. In the case of a principal ultrafilter $\mathrm{prin}(x)$, there is a final element $\{x\}$ of the net; otherwise not (but we can think of a nonprincipal ultrafilter as ending at an "ideal point" of the set $X$ if we want).
Since the intuitive imagery here is already vaguely topological, we may as well make the connection with topology more precise. So, suppose now that $X$ comes equipped with a topology. We say that an ultrafilter $F$ on $X$ converges to a point $x$ if each open set $U$ containing $x$ (or each neighborhood of $x$) belongs to the ultrafilter. In other words, by going deep enough into the ultrafilter-as-net, you get within any chosen neighborhood of the point. We write $F \to x$ to say that $F$ converges to $x$.
General topology can be completely developed in terms of the notion of ultrafilter convergence, often very efficiently. For example, starting with any relation whatsoever between ultrafilters and points,

$$c \subseteq \beta(X) \times X,$$

we can naturally define a topology on $X$ so that $F \to x$ with respect to that topology whenever $(F, x) \in c$.

Let's tackle that in stages: in order for the displayed condition to hold, a neighborhood of $x$ must belong to every ultrafilter $F$ for which $(F, x) \in c$. This suggests that we try defining the filter $N_x$ of neighborhoods of $x$ to be the intersection of ultrafilters

$$N_x = \bigcap_{(F, x) \in c} F.$$

Then define a subset $U \subseteq X$ to be open if it is a neighborhood of all the points it contains. In other words, define $U$ to be open if

$$\forall_{(F, x) \in c}\quad x \in U \Rightarrow U \in F.$$
Proposition 3: This defines a topology, $\mathrm{Top}(c)$.

Proof: Since $X \in F$ for every ultrafilter $F$, it is clear that $X$ is open; also, it is vacuously true that the empty set is open. If $U, V$ are open, then for all $(F, x) \in c$, whenever $x \in U \cap V$, we have $x \in U$ and $x \in V$, so that $U \in F$ and $V \in F$ by openness, whence $U \cap V \in F$ since $F$ is closed under intersections. So $U \cap V$ is also open. Finally, suppose $\{U_i\}$ is a collection of open sets. For all $(F, x) \in c$, if $x \in \bigcup_i U_i$, then $x \in U_i$ for some $i$, so that $U_i \in F$ by openness, whence $\bigcup_i U_i \in F$ since ultrafilters are upward closed. So $\bigcup_i U_i$ is also open.
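On a finite set every ultrafilter is principal, so a convergence relation $c$ amounts to a relation between points, and $\mathrm{Top}(c)$ can be computed by brute force. Here is a small sketch (the set and the relation $c$ are arbitrary choices of mine):

```python
# Computing Top(c) on a finite set, where every ultrafilter is principal.
# Encode (F, x) = (prin(y), x) simply as the pair (y, x).
from itertools import chain, combinations

X = {0, 1, 2}
c = {(0, 0), (1, 1), (2, 2), (1, 0)}   # prin(1) also converges to 0

subsets = [set(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]

def is_open(U):
    # U belongs to prin(y) just when y is in U
    return all(y in U for (y, x) in c if x in U)

print([sorted(U) for U in subsets if is_open(U)])
```

For this particular $c$, the open sets are exactly those $U$ with the property that $0 \in U$ implies $1 \in U$, and one can check directly that they are closed under unions and intersections, as proposition 3 promises.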
Let's recap: starting from a topology $T$ on $X$, we've defined a convergence relation $\mathrm{conv}(T) \subseteq \beta(X) \times X$ (consisting of pairs $(F, x)$ such that $F \to x$ with respect to $T$), and conversely, given any relation $c \subseteq \beta(X) \times X$, we've defined a topology $\mathrm{Top}(c)$ on $X$. What we actually have here is a Galois connection, where

$$c \subseteq \mathrm{conv}(T) \quad \text{if and only if} \quad T \subseteq \mathrm{Top}(c).$$

Of course not every relation $c$ is the convergence relation of a topology, so we don't quite have a Galois correspondence (that is, $\mathrm{conv}$ and $\mathrm{Top}$ are not quite inverse to one another). But, it is true that every topology $T$ is the topology of its ultrafilter convergence relation, i.e., $T = \mathrm{Top}(\mathrm{conv}(T))$. For this, it suffices to show that every neighborhood filter $N_x$ is the intersection of the ultrafilters that contain it. But that is true of any filter:
Theorem 1: If $N$ is a filter in $\mathbf{2}^X$ and $A \notin N$, then there exists an ultrafilter $F$ for which $N \subseteq F$ and $A \notin F$.

Proof: First I claim $\neg A \cap B \neq \emptyset$ for all $B \in N$; otherwise $B \subseteq A$ for some $B \in N$, whence $A \in N$ since filters are upward closed, contradiction. It follows that $N$ can be extended to the (proper) filter

$$\{C \subseteq X : \neg A \cap B \subseteq C \ \text{for some} \ B \in N\},$$

which in turn extends to some ultrafilter $F$, by proposition 2. Since $\neg A \in F$, we have $A \notin F$ (by dichotomy).
Corollary 1: Every filter is the intersection of all the ultrafilters which contain it.
The ultrafilter convergence approach to topology is particularly convenient for studies of compactness:
Theorem 2: A space $X$ is compact if and only if every ultrafilter $F$ converges to at least one point. It is Hausdorff if and only if every ultrafilter converges to at most one point.
Proof: First suppose that $X$ is compact, and (in view of a contradiction) that $F$ converges to no point of $X$. This means that for every $x \in X$ there is a neighborhood $U_x$ which does not belong to $F$, or that (by dichotomy) $\neg U_x \in F$. Finitely many of these $U_x$ cover $X$, by compactness. By De Morgan's law, this means finitely many of the $\neg U_x$ have empty intersection. But this would mean $\emptyset \in F$, since $F$ is closed under finite intersections, contradiction.
In the reverse direction, suppose that every ultrafilter converges. We need to show that if $\{U_i\}_{i \in I}$ is any collection of open subsets of $X$ such that no finite subcollection covers $X$, then the union of the $U_i$ cannot cover $X$. First, because no finite subcollection covers, we may construct a filter generated by the complements:

$$\{A \subseteq X : \neg U_{i_1} \cap \ldots \cap \neg U_{i_k} \subseteq A \ \text{for some finite set of indices} \ i_1, \ldots, i_k\}.$$

Extend this filter to an ultrafilter $F$; then by assumption $F \to x$ for some $x \in X$. If some one of the $U_i$ contained $x$, then $U_i \in F$ by definition of convergence. But we also have $\neg U_i \in F$, and this is a contradiction. So, $x$ lies outside the union of the $U_i$, as was to be shown.
Now let $X$ be Hausdorff, and suppose that $F \to x$ and $F \to y$ for distinct points $x, y$. Let $U, V$ be neighborhoods of $x, y$ respectively with empty intersection. By definition of convergence, we have $U \in F$ and $V \in F$, whence $\emptyset = U \cap V \in F$, contradiction.
Conversely, suppose every ultrafilter converges to at most one point, and let $x, y$ be two distinct points. Unless there are neighborhoods $U, V$ of $x, y$ respectively such that $U \cap V = \emptyset$ (which is what we want), the smallest filter containing the two neighborhood filters $N_x, N_y$ (that is to say, the join $N_x \vee N_y$ in the poset of filters) is proper, and hence extends to an ultrafilter $F$. But then $N_x \subseteq F$ and $N_y \subseteq F$, which is to say $F \to x$ and $F \to y$, contradiction.
Theorem 2 is very useful; among other things it paves the way for a clean and conceptual proof of Tychonoff's theorem (that an arbitrary product of compact spaces is compact). For now we note that it says that a topology $T$ is the topology of a compact Hausdorff space structure on $X$ if and only if the convergence relation $\mathrm{conv}(T) \subseteq \beta(X) \times X$ is a function $\beta(X) \to X$. And in practice, functions $\xi: \beta(X) \to X$ which arise "naturally" tend to be such convergence relations, making $X$ a compact Hausdorff space.
Here is our key example. Let $A$ be a Boolean algebra, and let $X = \mathrm{Bool}(A, \mathbf{2})$, which we have identified with the set of ultrafilters in $A$. Define a map $\xi: \beta(X) \to X$ by

$$\xi = \mathrm{Bool}(\epsilon, \mathbf{2}): \mathrm{Bool}(\mathbf{2}^X, \mathbf{2}) \to \mathrm{Bool}(A, \mathbf{2}),$$

where $\epsilon: A \to \mathbf{2}^X$ was the counit (evaluated at $A$) of the adjunction defined at the top of this post. Unpacking the definitions a bit, the map $\xi$ is the result of applying the hom-functor $\mathrm{Bool}(-, \mathbf{2})$ to $\epsilon$. Chasing this a little further, the map $\xi$ "pulls back" an ultrafilter $G \in \beta(X)$ to the ultrafilter $\epsilon^{-1}(G)$, viewed as an element of $X$. We then topologize $X$ by the topology $\mathrm{Top}(\xi)$.
This construction is about as "abstract nonsense" as it gets, but you have to admit that it's pretty darned canonical! The topological space we get in this way is called the spectrum of the Boolean algebra $A$. If you've seen a bit of algebraic geometry, then you probably know another, somewhat more elementary way of defining the spectrum (of $A$ as a commutative ring), so we may as well make the connection explicit. However you define it, the result is a compact Hausdorff space structure with some other properties which make it very reminiscent of Cantor space.
It is first of all easy to see that $X$ is compact, i.e., that every ultrafilter $G \in \beta(X)$ converges. Indeed, the relation $\xi$ is a function $\beta(X) \to X$, and if you look at the condition for a set $U$ to be open w.r.t. $\mathrm{Top}(\xi)$,

$$\forall_{G \in \beta(X)}\quad \xi(G) \in U \Rightarrow U \in G,$$

you see immediately that $G$ converges to $\xi(G)$.
To get Hausdorffness, take two distinct points $u, v \in X$ (ultrafilters in $A$). Since these are distinct maximal filters, there exists $a \in A$ such that $a$ belongs to $u$ but not to $v$, and then $\neg a$ belongs to $v$ but not to $u$. Define

$$U_a = \{w \in X : a \in w\}.$$

Proposition 4: $U_a$ is open in $\mathrm{Top}(\xi)$.

Proof: We must check that for all ultrafilters $G$ on $X$, that

$$\xi(G) \in U_a \Rightarrow U_a \in G.$$

But $\xi(G) = \epsilon^{-1}(G)$. By definition of $U_a$, we are thus reduced to checking that

$$a \in \epsilon^{-1}(G) \Rightarrow U_a \in G,$$

or that $\epsilon(a) \in G \Rightarrow U_a \in G$. But $\epsilon(a)$ (as a subset of $X$) is $U_a$!
As a result, and
are open sets containing the given points
. They are disjoint since in fact
(indeed, because
preserves negation). This gives Hausdorffness, and also that the
are clopen (closed and open).
We actually get a lot more:
Proposition 5: The collection $\{U_a : a \in A\}$ is a basis for the topology $\mathrm{Top}(\xi)$ on $X$.

Proof: The sets $U_a$ form a basis for some topology $T$, because $U_a \cap U_b = U_{a \wedge b}$ (indeed, $\epsilon$ preserves meets). By the previous proposition, $T \subseteq \mathrm{Top}(\xi)$. So the identity on $X$ gives a continuous comparison map

$$(X, \mathrm{Top}(\xi)) \to (X, T)$$

between the two topologies. But a continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism, so $T = \mathrm{Top}(\xi)$.
- Remark: In particular, the canonical topology on $\beta(X) = \mathrm{Bool}(\mathbf{2}^X, \mathbf{2})$ is compact Hausdorff; this space is called the Stone-Cech compactification of (the discrete space) $X$. The methods exploited in this lecture can be used to show that in fact $\beta(X)$ is the free compact Hausdorff space generated from the set $X$, meaning that the functor $\beta$ is left adjoint to the underlying-set functor from compact Hausdorff spaces to sets. In fact, one can go rather further in this vein: a fascinating result (first proved by Eduardo Manes in his PhD thesis) is that the concept of compact Hausdorff space is algebraic (is monadic with respect to the monad $\beta$): there is an equationally defined theory in which the class of $J$-ary operations (for each cardinal $J$) is coded by the set of ultrafilters $\beta(J)$, and whose models are precisely compact Hausdorff spaces. This goes beyond the scope of these lectures, but for the theory of monads, see the entertaining YouTube lectures by the Catsters!
In our last post on category theory, we continued our exploration of universal properties, showing how they can be used to motivate the concept of natural transformation, the "right" notion of morphism $F \to G$ between functors. In today's post, I want to turn things around, applying the notion of natural transformation to explain generally what we mean by a universal construction. The key concept is the notion of representability, at the center of a circle of ideas which includes the Yoneda lemma, adjoint functors, monads, and other things — it won't be possible to talk about all these things in detail (because I really want to return to Stone duality before long), but perhaps these notes will provide a key of entry into more thorough treatments.
Even for a fanatic like myself, it’s a little hard to see what would drive anyone to study category theory except a pretty serious “need to know” (there is a beauty and conceptual economy to categorical thinking, but I’m not sure that’s compelling enough motivation!). I myself began learning category theory on my own as an undergraduate; at the time I had only the vaguest glimmerings of a vast underlying unity to mathematics, but it was only after discovering the existence of category theory by accident (reading the introductory chapter of Spanier’s Algebraic Topology) that I began to suspect it held the answer to a lot of questions I had. So I got pretty fired-up about it then, and started to read Mac Lane’s Categories for the Working Mathematician. I think that even today this book remains the best serious introduction to the subject — for those who need to know! But category theory should be learned from many sources and in terms of its many applications. Happily, there are now quite a few resources on the Web and a number of blogs which discuss category theory (such as The Unapologetic Mathematician) at the entry level, with widely differing applications in mind. An embarrassment of riches!
Anyway, to return to today’s topic. Way back when, when we were first discussing posets, most of our examples of posets were of a “concrete” nature: sets of subsets of various types, ordered by inclusion. In fact, we went a little further and observed that any poset could be represented as a concrete poset, by means of a “Dedekind embedding” (bearing a familial resemblance to Cayley’s lemma, which says that any group can be represented concretely, as a group of permutations). Such concrete representation theorems are extremely important in mathematics; in fact, this whole series is a trope on the Stone representation theorem, that every Boolean algebra is an algebra of sets! With that, I want to discuss a representation theorem for categories, where every (small) category can be explicitly embedded in a concrete category of “structured sets” (properly interpreted). This is the famous Yoneda embedding.
This requires some preface. First, we need the following fundamental construction: for every category $C$ there is an opposite category $C^{op}$, having the same classes $O, M$ of objects and morphisms as $C$, but with domain and codomain switched ($\mathrm{dom}^{op} = \mathrm{cod}$, and $\mathrm{cod}^{op} = \mathrm{dom}$). The function assigning identities is the same in both cases, but we see that the class of composable pairs of morphisms is modified: $(f, g)$ [is a composable pair in $C^{op}$] if and only if $(g, f)$ [is a composable pair in $C$], and accordingly, we define composition of morphisms in $C^{op}$ in the order opposite to composition in $C$:

$$f \circ^{op} g := g \circ f \quad \text{in} \ C.$$

Observation: The categorical axioms are satisfied in the structure $C^{op}$ if and only if they are in $C$; also, $(C^{op})^{op} = C$.
This observation is the underpinning of a Principle of Duality in the theory of categories (extending the principle of duality in the theory of posets). As the construction of opposite categories suggests, the dual of a sentence expressed in the first-order language of category theory is obtained by reversing the directions of all arrows and the order of composition of morphisms, but otherwise keeping the logical structure the same. Let me give a quick example:
Definition: Let $A, B$ be objects in a category $C$. A coproduct of $A$ and $B$ consists of an object $A + B$ and maps $i_1: A \to A + B$, $i_2: B \to A + B$ (called injection or coprojection maps), satisfying the universal property that given an object $X$ and maps $f: A \to X$, $g: B \to X$, there exists a unique map $h: A + B \to X$ such that $f = h \circ i_1$ and $g = h \circ i_2$.
This notion is dual to the notion of product. (Often, one indicates the dual notion by appending the prefix "co" — except of course if the "co" prefix is already there; then one removes it.) In the category of sets, the coproduct of two sets $A, B$ may be taken to be their disjoint union $A \sqcup B$, where the injections $i_1, i_2$ are the inclusion maps of $A, B$ into $A \sqcup B$ (exercise).
Exercise: Formulate the notion of coequalizer (by dualizing the notion of equalizer). Describe the coequalizer of two functions (in the category of sets) in terms of equivalence classes. Then formulate the notion dual to that of monomorphism (called an epimorphism), and by a process of dualization, show that in any category, coequalizers are epic.
Principle of duality: If a sentence expressed in the first-order theory of categories is provable in the theory, then so is the dual sentence. Proof (sketch): A proof of a sentence proceeds from the axioms of category theory by applying rules of inference. The dualization of that proof proves the dual sentence by applying the same rules of inference but starting from the duals of the categorical axioms. A formal proof of the Observation above shows that collectively, the set of categorical axioms is self-dual, so we are done.
Next, we introduce the all-important hom-functors. We suppose that $C$ is a locally small category, meaning that the class of morphisms $\hom(a, b)$ between any two given objects $a, b$ is small, i.e., is a set as opposed to a proper class. Even for large categories, this condition is just about always satisfied in mathematical practice (although there is the occasional baroque counterexample, like the category of quasitopological spaces).

Let $Set$ denote the category of sets and functions. Then, there is a functor

$$\hom: C^{op} \times C \to Set$$

which, at the level of objects, takes a pair of objects $(a, b)$ to the set $\hom(a, b)$ of morphisms $a \to b$ (in $C$) between them. It takes a morphism $(f, g): (a, b) \to (c, d)$ of $C^{op} \times C$ (that is to say, a pair of morphisms $f: c \to a$, $g: b \to d$ of $C$) to the function

$$\hom(f, g): \hom(a, b) \to \hom(c, d): h \mapsto g \circ h \circ f.$$

Using the associativity and identity axioms in $C$, it is not hard to check that this indeed defines a functor $\hom: C^{op} \times C \to Set$. It generalizes the truth-valued pairing $P^{op} \times P \to \mathbf{2}$ we defined earlier for posets.
Now assume $C$ is small. From last time, there is a bijection between functors

$$C^{op} \times C \to Set \qquad \text{and} \qquad C \to Set^{C^{op}},$$

and by applying this bijection to the hom-functor $\hom: C^{op} \times C \to Set$, we get a functor

$$y: C \to Set^{C^{op}}.$$

This is the famous Yoneda embedding of the category $C$. It takes an object $c$ to the hom-functor $\hom(-, c): C^{op} \to Set$. This hom-functor can be thought of as a structured, disciplined way of considering the totality of morphisms mapping into the object $c$, and has much to do with the Yoneda Principle we stated informally last time (and which we state precisely below).
- Remark: We don't need $C$ to be small to talk about the hom-functor $\hom(-, c)$; local smallness will do. The only place we ask that $C$ be small is when we are considering the totality of all functors $C^{op} \to Set$, as forming a category $Set^{C^{op}}$.
Definition: A functor $F: C^{op} \to Set$ is representable (with representing object $c$) if there is a natural isomorphism $\hom(-, c) \cong F$ of functors.
The concept of representability is key to discussing what is meant by a universal construction in general. To clarify its role, let’s go back to one of our standard examples.
Let $a, b$ be objects in a category $C$, and let $F$ be the functor $\hom(-, a) \times \hom(-, b)$; that is, the functor which takes an object $x$ of $C$ to the set $\hom(x, a) \times \hom(x, b)$. Then a representing object for $F$ is a product $a \times b$ in $C$. Indeed, the isomorphism between sets

$$\hom(x, a \times b) \cong \hom(x, a) \times \hom(x, b)$$

simply recapitulates that we have a bijection between morphisms into the product and pairs of morphisms. But wait, not just an isomorphism: we said a natural isomorphism (between functors in the argument $x$) — how does naturality figure in?
Enter stage left the celebrated
Yoneda Lemma: Given a functor $F: C^{op} \to Set$ and an object $c$ of $C$, natural transformations $\phi: \hom(-, c) \to F$ are in (natural!) bijection with elements $\xi \in F(c)$.

Proof: We apply the "Yoneda trick" introduced last time: probe the representing object with the identity morphism, and see where $\phi$ takes it: put $\xi = \phi_c(1_c)$. Incredibly, this single element $\xi$ determines the rest of the transformation $\phi$: by chasing the element $1_c$ around the diagram

                 phi_c
    hom(c, c) ----------> Fc
        |                  |
        | hom(f, c)        | Ff
        V                  V
    hom(b, c) ----------> Fb
                 phi_b

(which commutes by naturality of $\phi$), we see for any morphism $f: b \to c$ in $C$ that $\phi_b(f) = F(f)(\xi)$. That the bijection $\phi \mapsto \xi$ is natural in the arguments $F, c$ we leave as an exercise.
Returning to our example of the product as representing object, the Yoneda lemma implies that the natural bijection

$$\hom(x, a \times b) \cong \hom(x, a) \times \hom(x, b)$$

is induced by a single element $\xi \in \hom(a \times b, a) \times \hom(a \times b, b)$, and this element is none other than the pair of projection maps

$$\xi = (\pi_1: a \times b \to a, \ \pi_2: a \times b \to b).$$

In summary, the Yoneda lemma guarantees that a hom-representation $\hom(-, c) \cong F$ of a functor is, by the naturality assumption, induced in a uniform way from a single "universal" element $\xi \in F(c)$. All universal constructions fall within this general pattern.
Example: Let $C$ be a category with products, and let $b, c$ be objects. Then a representing object for the functor $\hom(- \times b, c)$ is an exponential $c^b$; the universal element $\xi \in \hom(c^b \times b, c)$ is the evaluation map $c^b \times b \to c$.
Exercise: Let $f, g: a \to b$ be a pair of parallel arrows in a category $C$. Describe a functor $C^{op} \to Set$ which is represented by an equalizer of this pair (assuming one exists).
Exercise: Dualize the Yoneda lemma by considering hom-functors $\hom(c, -): C \to Set$. Express the universal property of the coproduct in terms of representability by such hom-functors.
The Yoneda lemma has a useful corollary: for any (locally small) category $C$, there is a natural isomorphism

$$\mathrm{Nat}(\hom(-, b), \hom(-, c)) \cong \hom(b, c)$$

between natural transformations between hom-functors and morphisms in $C$. Using $C(b, c)$ as alternate notation for the hom-set, the action of the Yoneda embedding functor $y$ on morphisms gives an isomorphism between hom-sets

$$C(b, c) \cong Set^{C^{op}}(y(b), y(c));$$

the functor $y$ is said in that case to be fully faithful (faithful means this action on morphisms is injective for all $b, c$, and full means the action is surjective for all $b, c$). The Yoneda embedding $y$ thus maps $C$ isomorphically onto the category of hom-functors $\hom(-, c)$ valued in the category $Set$.
It is illuminating to work out the meaning of this last statement in special cases. When the category $C$ is a group $G$ (that is, a category with exactly one object $\bullet$ in which every morphism is invertible), then functors $G^{op} \to Set$ are tantamount to sets $X$ equipped with a group homomorphism $G^{op} \to \mathrm{Perm}(X)$, i.e., a left action of $G^{op}$, or a right action of $G$. In particular, $\hom(-, \bullet)$ is the underlying set of $G$, equipped with the canonical right action $\rho: G^{op} \to \mathrm{Perm}(G)$, where $\rho(g)(h) = hg$. Moreover, natural transformations between functors $G^{op} \to Set$ are tantamount to morphisms of right $G$-sets. Now, the Yoneda embedding identifies any abstract group $G$ with a concrete group $y(G)$, i.e., with a group of permutations, namely, exactly those permutations on $G$ which respect the right action of $G$ on itself. This is the sophisticated version of Cayley's theorem in group theory. If on the other hand we take $C$ to be a poset, then the Yoneda embedding is tantamount to the Dedekind embedding we discussed in the first lecture.
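For posets, the embedding is easy to compute with. The following sketch (the divisibility order on $\{1, \ldots, 12\}$ is an arbitrary example of mine) sends each element to its down-set and checks that order is both preserved and reflected, which is exactly the full-faithfulness discussed above:

```python
# The Yoneda (= Dedekind) embedding for a poset: x |-> down-set of x.
# Order is preserved and reflected: x <= y iff down(x) is a subset of down(y).
P = range(1, 13)
leq = lambda x, y: y % x == 0          # x <= y means "x divides y"

down = {x: frozenset(z for z in P if leq(z, x)) for x in P}

assert all((down[x] <= down[y]) == leq(x, y) for x in P for y in P)
print(sorted(down[12]))                # -> [1, 2, 3, 4, 6, 12]
```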
Tying up a loose thread, let us now formulate the "Yoneda principle" precisely. Informally, it says that an object is determined up to isomorphism by the morphisms mapping into it. Using the hom-functor $\hom(-, c)$ to collate the morphisms mapping into $c$, the precise form of the Yoneda principle says that an isomorphism between representables $\hom(-, b) \cong \hom(-, c)$ corresponds to a unique isomorphism $b \cong c$ between objects. This follows easily from the Yoneda lemma.
But far and away, the most profound manifestation of representability is in the notion of an adjoint pair of functors. “Free constructions” give a particularly ubiquitous class of examples; the basic idea will be explained in terms of free groups, but the categorical formulation applies quite generally (e.g., to free monoids, free Boolean algebras, free rings = polynomial algebras, etc., etc.).
If $S$ is a set, the free group (q.v.) generated by $S$ is, informally, the group $F(S)$ whose elements are finite "words" built from "literals" $a, a^{-1}$, where the $a$ are elements of $S$ and the $a^{-1}$ their formal inverses, and where we identify a word with any other gotten by introducing or deleting appearances of consecutive literals $a a^{-1}$ or $a^{-1} a$. Janis Joplin said it best:
Freedom’s just another word for nothin’ left to lose…
— there are no relations between the generators of $F(S)$ beyond the bare minimum required by the group axioms.
Categorically, the free group $F(S)$ is defined by a universal property; loosely speaking, for any group $G$, there is a natural bijection between group homomorphisms $F(S) \to G$ and functions $S \to U(G)$, where $U(G)$ denotes the underlying set of the group $G$. That is, we are free to assign elements of $G$ to elements of $S$ any way we like: any function $f: S \to U(G)$ extends uniquely to a group homomorphism $\hat{f}: F(S) \to G$, sending a word $x_1 x_2 \cdots x_n$ in $F(S)$ to the corresponding product $f(x_1)f(x_2) \cdots f(x_n)$ in $G$.
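Here is a small sketch of this unique extension, with words represented as tuples of (generator, exponent) pairs and the additive group of integers chosen as an illustrative target (all names here are mine, not from the text):

```python
# Unique extension of f: S -> U(G) to a homomorphism F(S) -> G, with G = (Z, +).
def reduce_word(w):
    out = []
    for g, e in w:
        if out and out[-1] == (g, -e):
            out.pop()                  # cancel consecutive a a^{-1} or a^{-1} a
        else:
            out.append((g, e))
    return tuple(out)

def extend(f):
    # sends the word x1 x2 ... xn to f(x1) + f(x2) + ... + f(xn)
    return lambda w: sum(e * f[g] for g, e in reduce_word(w))

f = {'a': 2, 'b': 3}                   # any assignment of generators works
hom = extend(f)
word = (('a', 1), ('b', 1), ('a', -1))  # the word a b a^{-1}
print(hom(word))                        # -> 3, since 2 + 3 - 2 = 3
```

(Since the target here is abelian, the reduction step is not strictly needed; it is included to illustrate the identification of words in the free group.)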
Using the usual Yoneda trick, or the dual of the Yoneda trick, this isomorphism is induced by a universal function $\eta_S: S \to U(F(S))$, gotten by applying the bijection above to the identity map $1_{F(S)}$. Concretely, this function takes an element $a \in S$ to the one-letter word $a$ in the underlying set of the free group. The universal property states that the bijection above is effected by composing with this universal map:

$$\hom_{Grp}(F(S), G) \to \hom_{Set}(U(F(S)), U(G)) \to \hom_{Set}(S, U(G)),$$

where the first arrow refers to the action of the underlying-set or forgetful functor $U: Grp \to Set$, mapping the category of groups to the category of sets ($U$ "forgets" the fact that homomorphisms preserve group structure, and just thinks of them as functions), and the second arrow is precomposition with $\eta_S$.
- Remark: Some people might say this a little less formally: that the original function $f: S \to U(G)$ is retrieved from the extension homomorphism $\hat{f}$ by composing with the canonical injection of the generators $S \to F(S)$. The reason we don't say this is that there's a confusion of categories here: properly speaking, $\hat{f}$ belongs to the category of groups, and $f$ to the category of sets. The underlying-set functor $U$ is a device we apply to eliminate the confusion.
In different words, the universal property of free groups says that the functor $\hom_{Set}(S, U(-)): Grp \to Set$, i.e., the underlying-set functor $U: Grp \to Set$ followed by the hom-functor $\hom_{Set}(S, -): Set \to Set$, is representable by the free group $F(S)$: there is a natural isomorphism of functors from groups to sets:

$$\hom_{Grp}(F(S), -) \cong \hom_{Set}(S, U(-)).$$
Now, the free group $F(S)$ can be constructed for any set $S$. Moreover, the construction is functorial: $S \mapsto F(S)$ defines a functor $F: Set \to Grp$. This is actually a good exercise in working with universal properties. In outline: given a function $f: S \to T$, the homomorphism $F(f): F(S) \to F(T)$ is the one which corresponds bijectively to the function

$$S \xrightarrow{f} T \xrightarrow{\eta_T} U(F(T)),$$

i.e., $F(f)$ is defined to be the unique homomorphism $F(S) \to F(T)$ such that

$$U(F(f)) \circ \eta_S = \eta_T \circ f.$$
Proposition: $F: Set \to Grp$ is functorial (i.e., preserves morphism identities and morphism composition).

Proof: Suppose $f: S \to T$, $g: T \to V$ is a composable pair of morphisms in $Set$. By universality, there is a unique map $h: F(S) \to F(V)$, namely $h = F(g \circ f)$, such that $U(h) \circ \eta_S = \eta_V \circ g \circ f$. But $F(g) \circ F(f)$ also has this property, since

$$U(F(g) \circ F(f)) \circ \eta_S = U(F(g)) \circ U(F(f)) \circ \eta_S = U(F(g)) \circ \eta_T \circ f = \eta_V \circ g \circ f$$

(where we used functoriality of $U$ in the first equation). Hence $F(g \circ f) = F(g) \circ F(f)$. Another universality argument shows that $F$ preserves identities.
Observe that the functor $F$ is rigged so that for all morphisms $f: S \to T$,

$$U(F(f)) \circ \eta_S = \eta_T \circ f.$$

That is to say, there is only one way of defining $F$ so that the universal map $\eta_S$ is (the component at $S$ of) a natural transformation $\eta: 1_{Set} \to UF$!
The underlying-set and free functors $U: Grp \to Set$, $F: Set \to Grp$ are called adjoints, generalizing the notion of adjoint in truth-valued matrix algebra: we have an isomorphism

$$\hom_{Grp}(F(S), G) \cong \hom_{Set}(S, U(G))$$

natural in both arguments $S, G$. We say that $F$ is left adjoint to $U$, or dually, that $U$ is right adjoint to $F$, and write $F \dashv U$. The transformation $\eta: 1_{Set} \to UF$ is called the unit of the adjunction.
Exercise: Define the construction dual to the unit, called the counit, as a transformation $\varepsilon: FU \to 1_{Grp}$. Describe this concretely in the case of the free-underlying adjunction $F \dashv U$ between sets and groups.
What makes the concept of adjoint functors so compelling is that it combines representability with duality: the manifest symmetry of an adjunction $\hom(F(S), G) \cong \hom(S, U(G))$ means that we can equally well think of $F(S)$ as representing the functor $\hom_{Set}(S, U(-))$ as we can think of $U(G)$ as representing the functor $\hom_{Grp}(F(-), G)$. Time is up for today, but we'll be seeing more of adjunctions next time, when we resume our study of Stone duality.
[Tip of the hat to Robert Dawson for the Janis Joplin quip.]
I wish to bring the attention of our readers to the Carnival of Mathematics hosted by Charles at Rigorous Trivialities. I guess most of you already know about it. Among other articles/posts, one of Todd's recent posts, Basic Category Theory I, is part of the carnival. He followed it up with another post titled Basic Category Theory II. There will be a third post on the same topic some time soon. This sub-series of posts on basic category theory, if you recall, is part of the larger series on Stone Duality, which all began with Toward Stone Duality: Posets and Meets. Hope you enjoy the Carnival!
I will write a series of posts as a way of gently introducing category theory to the 'beginner', though I will assume that the beginner has some level of mathematical maturity. This series will be based on the book Conceptual Mathematics: A First Introduction to Categories by Lawvere and Schanuel. So, this won't go into most of the deeper stuff that's covered in, say, Categories for the Working Mathematician by Mac Lane. We shall deal only with sets (as our objects) and functions (as our morphisms). This means we deal only with the Category of Sets! Therefore, the reader is not expected to know about advanced stuff like groups and/or group homomorphisms, vector spaces and/or linear transformations, etc. Also, no knowledge of college level calculus is required. Only knowledge of sets and functions, some familiarity in dealing with mathematical symbols and some knowledge of elementary combinatorics are required. That's all!
Sets, maps and composition
An object (in this category) is a finite set or collection.
A map $f$ (in this category) consists of the following:

i) a set $A$ called the domain of the map;

ii) a set $B$ called the codomain of the map; and

iii) a rule assigning to each element $a \in A$ an element $f(a) \in B$.

We also use 'function', 'transformation', 'operator', 'arrow' and 'morphism' for 'map', depending on the context, as we shall see later.

An endomap is a map that has the same object as domain and codomain, in which case we write $f: A \to A$.

An endomap $f: A \to A$ in which $f(a) = a$ for every $a \in A$ is called an identity map, also denoted by $1_A$.
Composition of maps is a process by which two maps are combined to yield a third map. Composition of maps is really at the heart of category theory, and this will be evident in plenty in the later posts. So, if we have two maps $f: A \to B$ and $g: B \to C$, then $g \circ f: A \to C$ is the third map obtained by composing $f$ and $g$. Note that $g \circ f$ is read '$g$ following $f$'.
Guess what? Those are all the ingredients we need to define our category of sets! Though we shall deal only with sets and functions, the following definition of a category of sets is actually pretty much the same as the general definition of a category.
Definition: A CATEGORY consists of the following:
(1) OBJECTS: these are usually denoted by $A, B, C, \ldots$ etc.

(2) MAPS: these are usually denoted by $f, g, h, \ldots$ etc.

(3) For each map $f$, one object as DOMAIN of $f$ and one object as CODOMAIN of $f$. So, $f: A \to B$ has domain $A$ and codomain $B$. This is also read as '$f$ is a map from $A$ to $B$'.

(4) For each object $A$, there exists an IDENTITY MAP, $1_A$. This is also written as $1_A: A \to A$.

(5) For each pair of maps $f: A \to B$ and $g: B \to C$, there exists a COMPOSITE map, $g \circ f: A \to C$. ('$g$ following $f$'.)
such that the following RULES are satisfied:
(i) (IDENTITY LAWS): If $f: A \to B$, then we have $1_B \circ f = f$ and $f \circ 1_A = f$.

(ii) (ASSOCIATIVE LAW): If $f: A \to B$, $g: B \to C$ and $h: C \to D$, then we have $h \circ (g \circ f) = (h \circ g) \circ f$. ('$h$ following $g$ following $f$'.) Note that in this case we are allowed to write $h \circ g \circ f$ without any ambiguity.
Exercises: Suppose $A$ and $B$ are finite sets with, say, $|A| = 2$ and $|B| = 3$.

(1) How many maps are there from $A$ to $B$?

(2) How many maps are there from $B$ to $A$?

(3) How many maps are there from $A$ to $A$?

(4) How many maps are there from $B$ to $B$?

(5) How many maps $f$ are there from $A$ to $A$ satisfying $f \circ f = f$?

(This is a non-trivial exercise for the general case in which $|A| = n$ for some positive integer $n$.)

(6) How many maps $g$ are there from $B$ to $B$ satisfying $g \circ g = 1_B$?

(7) Can you find a pair of maps $f: A \to B$ and $g: B \to A$ such that $g \circ f = 1_A$? If yes, how many such pairs can you find?

(8) Can you find a pair of maps $f: A \to B$ and $g: B \to A$ such that $f \circ g = 1_B$? If yes, how many such pairs can you find?
Bonus exercise:
How many maps are there from $A$ to $B$ if $A$ is the empty set and $|B| = n$ for some positive integer $n$? What if $|A| = n$ and $B$ is the empty set? What if both $A$ and $B$ are empty?
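If you want to check your answers to the counting exercises, a brute-force enumeration is easy to write. In this sketch I take $|A| = 2$ and $|B| = 3$, matching the example sets above:

```python
# Brute-force versions of the counting exercises.
from itertools import product

A, B = (1, 2), (1, 2, 3)

def maps(dom, cod):
    # a map is a dict dom -> cod; there are |cod| ** |dom| of them
    return [dict(zip(dom, values)) for values in product(cod, repeat=len(dom))]

print(len(maps(A, B)), len(maps(B, A)))   # 9 = 3^2 and 8 = 2^3
print(len(maps(A, A)), len(maps(B, B)))   # 4 and 27

idempotent = [f for f in maps(A, A) if all(f[f[a]] == f[a] for a in A)]
print(len(idempotent))                    # endomaps with f o f = f

sections = [(f, g) for f in maps(A, B) for g in maps(B, A)
            if all(g[f[a]] == a for a in A)]   # pairs with g o f = 1_A
print(len(sections))
```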