In our last post on category theory, we continued our exploration of universal properties, showing how they can be used to motivate the concept of natural transformation, the "right" notion of morphism between functors. In today's post, I want to turn things around, applying the notion of natural transformation to explain generally what we mean by a universal construction. The key concept is the notion of representability, at the center of a circle of ideas which includes the Yoneda lemma, adjoint functors, monads, and other things — it won't be possible to talk about all these things in detail (because I really want to return to Stone duality before long), but perhaps these notes will provide a key of entry into more thorough treatments.
Even for a fanatic like myself, it’s a little hard to see what would drive anyone to study category theory except a pretty serious “need to know” (there is a beauty and conceptual economy to categorical thinking, but I’m not sure that’s compelling enough motivation!). I myself began learning category theory on my own as an undergraduate; at the time I had only the vaguest glimmerings of a vast underlying unity to mathematics, but it was only after discovering the existence of category theory by accident (reading the introductory chapter of Spanier’s Algebraic Topology) that I began to suspect it held the answer to a lot of questions I had. So I got pretty fired-up about it then, and started to read Mac Lane’s Categories for the Working Mathematician. I think that even today this book remains the best serious introduction to the subject — for those who need to know! But category theory should be learned from many sources and in terms of its many applications. Happily, there are now quite a few resources on the Web and a number of blogs which discuss category theory (such as The Unapologetic Mathematician) at the entry level, with widely differing applications in mind. An embarrassment of riches!
Anyway, to return to today's topic. Way back when, when we were first discussing posets, most of our examples of posets were of a "concrete" nature: sets of subsets of various types, ordered by inclusion. In fact, we went a little further and observed that any poset could be represented as a concrete poset, by means of a "Dedekind embedding" (bearing a familial resemblance to Cayley's theorem, which says that any group can be represented concretely, as a group of permutations). Such concrete representation theorems are extremely important in mathematics; in fact, this whole series is a trope on the Stone representation theorem, that every Boolean algebra is an algebra of sets! With that, I want to discuss a representation theorem for categories, where every (small) category can be explicitly embedded in a concrete category of "structured sets" (properly interpreted). This is the famous Yoneda embedding.
This requires some preface. First, we need the following fundamental construction: for every category $\mathcal{C}$ there is an opposite category $\mathcal{C}^{op}$, having the same classes $O$, $M$ of objects and morphisms as $\mathcal{C}$, but with domain and codomain switched ($\mathrm{dom}^{op} = \mathrm{cod}$, and $\mathrm{cod}^{op} = \mathrm{dom}$). The identity-assigning function $\mathrm{id}: O \to M$ is the same in both cases, but we see that the class of composable pairs of morphisms is modified: $(f, g)$ is a composable pair in $\mathcal{C}^{op}$ if and only if $(g, f)$ is a composable pair in $\mathcal{C}$, and accordingly, we define composition of morphisms in $\mathcal{C}^{op}$ in the order opposite to composition in $\mathcal{C}$:

$g \circ^{op} f = f \circ g$ in $\mathcal{C}$.
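To see the reversal of composition concretely, here is a minimal Haskell sketch, with Haskell types and functions standing in for $\mathcal{C}$ (the names `Op`, `idOp`, and `compOp` are just illustrative):

```haskell
-- A morphism a -> b in the opposite category is, by definition,
-- a morphism b -> a in the original category.
newtype Op a b = Op { getOp :: b -> a }

-- Identities are inherited unchanged...
idOp :: Op a a
idOp = Op id

-- ...but composition happens in the opposite order: composing
-- f : a -> b and g : b -> c in Op means composing the underlying
-- functions the other way around.
compOp :: Op b c -> Op a b -> Op a c
compOp (Op g) (Op f) = Op (f . g)
```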
Observation: The categorical axioms are satisfied in the structure $\mathcal{C}^{op}$ if and only if they are satisfied in $\mathcal{C}$; also, $(\mathcal{C}^{op})^{op} = \mathcal{C}$.
This observation is the underpinning of a Principle of Duality in the theory of categories (extending the principle of duality in the theory of posets). As the construction of opposite categories suggests, the dual of a sentence expressed in the first-order language of category theory is obtained by reversing the directions of all arrows and the order of composition of morphisms, but otherwise keeping the logical structure the same. Let me give a quick example:
Definition: Let $a, b$ be objects in a category $\mathcal{C}$. A coproduct of $a$ and $b$ consists of an object $a + b$ and maps $i_1: a \to a + b$, $i_2: b \to a + b$ (called injection or coprojection maps), satisfying the universal property that given an object $c$ and maps $f: a \to c$, $g: b \to c$, there exists a unique map $h: a + b \to c$ such that $h \circ i_1 = f$ and $h \circ i_2 = g$.
This notion is dual to the notion of product. (Often, one indicates the dual notion by appending the prefix "co" — except of course if the "co" prefix is already there; then one removes it.) In the category of sets, the coproduct of two sets $A$ and $B$ may be taken to be their disjoint union $A \sqcup B$, where the injections $i_1, i_2$ are the inclusion maps of $A$ and $B$ into $A \sqcup B$ (exercise).
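In a typed functional language the disjoint-union description of coproducts is built in; here is a small Haskell sketch (`copair` is just the standard Prelude function `either` under an illustrative name):

```haskell
-- Either a b is the coproduct of a and b in the category of Haskell
-- types and functions: Left and Right are the injections, and
-- copair f g is the unique mediating map h with h . Left == f and
-- h . Right == g.
copair :: (a -> x) -> (b -> x) -> Either a b -> x
copair f _ (Left a)  = f a
copair _ g (Right b) = g b
```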
Exercise: Formulate the notion of coequalizer (by dualizing the notion of equalizer). Describe the coequalizer of two functions (in the category of sets) in terms of equivalence classes. Then formulate the notion dual to that of monomorphism (called an epimorphism), and by a process of dualization, show that in any category, coequalizers are epic.
Principle of duality: If a sentence expressed in the first-order theory of categories is provable in the theory, then so is the dual sentence. Proof (sketch): A proof $\pi$ of a sentence proceeds from the axioms of category theory by applying rules of inference. The dualization of $\pi$ proves the dual sentence by applying the same rules of inference but starting from the duals of the categorical axioms. A formal proof of the Observation above shows that collectively, the set of categorical axioms is self-dual, so we are done. $\Box$
Next, we introduce the all-important hom-functors. We suppose that $\mathcal{C}$ is a locally small category, meaning that the class of morphisms $\hom(a, b)$ between any two given objects $a$, $b$ is small, i.e., is a set as opposed to a proper class. Even for large categories, this condition is just about always satisfied in mathematical practice (although there is the occasional baroque counterexample, like the category of quasitopological spaces).
Let $Set$ denote the category of sets and functions. Then, there is a functor

$\hom: \mathcal{C}^{op} \times \mathcal{C} \to Set$

which, at the level of objects, takes a pair of objects $(a, b)$ to the set $\hom(a, b)$ of morphisms $a \to b$ (in $\mathcal{C}$) between them. It takes a morphism $(a, b) \to (c, d)$ of $\mathcal{C}^{op} \times \mathcal{C}$ (that is to say, a pair of morphisms $f: c \to a$, $g: b \to d$ of $\mathcal{C}$) to the function

$\hom(f, g): \hom(a, b) \to \hom(c, d): h \mapsto g \circ h \circ f.$

Using the associativity and identity axioms in $\mathcal{C}$, it is not hard to check that this indeed defines a functor $\hom: \mathcal{C}^{op} \times \mathcal{C} \to Set$. It generalizes the truth-valued pairing $P^{op} \times P \to \mathbf{2}$ we defined earlier for posets.
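The action on morphisms is nothing more than pre- and post-composition; a quick Haskell sketch (the name `homMap` is illustrative):

```haskell
-- hom(f, g) sends h in hom(a, b) to g . h . f in hom(c, d):
-- contravariant (precompose) in the first argument, covariant
-- (postcompose) in the second.
homMap :: (c -> a) -> (b -> d) -> ((a -> b) -> (c -> d))
homMap f g h = g . h . f
```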
Now assume $\mathcal{C}$ is small. From last time, there is a bijection between functors

$\mathcal{C}^{op} \times \mathcal{C} \to Set \qquad \text{and} \qquad \mathcal{C} \to Set^{\mathcal{C}^{op}},$

and by applying this bijection to the hom-functor $\hom: \mathcal{C}^{op} \times \mathcal{C} \to Set$, we get a functor

$y: \mathcal{C} \to Set^{\mathcal{C}^{op}}.$

This is the famous Yoneda embedding of the category $\mathcal{C}$. It takes an object $c$ to the hom-functor $\hom(-, c): \mathcal{C}^{op} \to Set$. This hom-functor can be thought of as a structured, disciplined way of considering the totality of morphisms mapping into the object $c$, and has much to do with the Yoneda Principle we stated informally last time (and which we state precisely below).
- Remark: We don’t need
to be small to talk about
; local smallness will do. The only place we ask that
be small is when we are considering the totality of all functors
, as forming a category
.
Definition: A functor $F: \mathcal{C}^{op} \to Set$ is representable (with representing object $c$) if there is a natural isomorphism $\hom(-, c) \cong F$ of functors.
The concept of representability is key to discussing what is meant by a universal construction in general. To clarify its role, let’s go back to one of our standard examples.
Let $a, b$ be objects in a category $\mathcal{C}$, and let $F: \mathcal{C}^{op} \to Set$ be the functor $\hom(-, a) \times \hom(-, b)$; that is, the functor which takes an object $x$ of $\mathcal{C}$ to the set $\hom(x, a) \times \hom(x, b)$. Then a representing object for $F$ is a product $a \times b$ in $\mathcal{C}$. Indeed, the isomorphism between sets $\hom(x, a \times b) \cong \hom(x, a) \times \hom(x, b)$ simply recapitulates that we have a bijection between morphisms $x \to a \times b$ into the product and pairs of morphisms $(x \to a, x \to b)$. But wait, not just an isomorphism: we said a natural isomorphism (between functors in the argument $x$) — how does naturality figure in?
Enter stage left the celebrated
Yoneda Lemma: Given a functor $F: \mathcal{C}^{op} \to Set$ and an object $c$ of $\mathcal{C}$, natural transformations $\phi: \hom(-, c) \to F$ are in (natural!) bijection with elements $\xi \in F(c)$.
Proof: We apply the "Yoneda trick" introduced last time: probe the representing object with the identity morphism, and see where $\phi$ takes it: put $\xi = \phi_c(1_c)$. Incredibly, this single element $\xi$ determines the rest of the transformation $\phi$: by chasing the element $1_c \in \hom(c, c)$ around the diagram

                  phi_c
      hom(c, c) --------> F c
          |                |
      hom(f, c)           F f
          |                |
          V                V
      hom(b, c) --------> F b
                  phi_b

(which commutes by naturality of $\phi$), we see for any morphism $f: b \to c$ in $\hom(b, c)$ that $\phi_b(f) = (F f)(\xi)$. That the bijection $\phi \mapsto \xi$ is natural in the arguments $F$, $c$ we leave as an exercise. $\Box$
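Both directions of this bijection can be written down in a few lines of Haskell, for contravariant functors on the category of Haskell types (a sketch under that stand-in assumption; `Nat`, `toElement`, and `fromElement` are illustrative names, and `Contravariant` is the class from Data.Functor.Contravariant in base):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Contravariant (Contravariant (..))

-- A natural transformation phi : hom(-, c) -> F.
type Nat c f = forall b. (b -> c) -> f b

-- The Yoneda trick: probe with the identity to get xi = phi_c(1_c).
toElement :: Nat c f -> f c
toElement phi = phi id

-- Conversely, xi determines all of phi: phi_b(f) = F(f)(xi), where
-- F(f) is contramap f.
fromElement :: Contravariant f => f c -> Nat c f
fromElement xi f = contramap f xi
```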
Returning to our example of the product as representing object, the Yoneda lemma implies that the natural bijection $\hom(x, a \times b) \cong \hom(x, a) \times \hom(x, b)$ is induced by a single element $\xi \in \hom(a \times b, a) \times \hom(a \times b, b)$, and this element is none other than the pair of projection maps

$\xi = (\pi_1: a \times b \to a, \ \pi_2: a \times b \to b).$

In summary, the Yoneda lemma guarantees that a hom-representation $\hom(-, c) \cong F$ of a functor is, by the naturality assumption, induced in a uniform way from a single "universal" element $\xi \in F(c)$. All universal constructions fall within this general pattern.
Example: Let $\mathcal{C}$ be a category with products, and let $b, c$ be objects. Then a representing object for the functor $\hom(- \times b, c): \mathcal{C}^{op} \to Set$ is an exponential $c^b$; the universal element $\xi \in \hom(c^b \times b, c)$ is the evaluation map $ev: c^b \times b \to c$.
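In the category of Haskell types this is the familiar currying isomorphism, with evaluation as the universal element (a sketch; `eval`, `toExp`, and `fromExp` are illustrative names):

```haskell
-- The exponential c^b is the function type b -> c; the universal
-- element is evaluation.
eval :: (b -> c, b) -> c
eval (f, x) = f x

-- The representing isomorphism hom(a × b, c) ≅ hom(a, c^b) is
-- currying, with uncurrying as its inverse.
toExp :: ((a, b) -> c) -> (a -> (b -> c))
toExp f a b = f (a, b)

fromExp :: (a -> (b -> c)) -> ((a, b) -> c)
fromExp g (a, b) = g a b
```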
Exercise: Let $f, g: a \to b$ be a pair of parallel arrows in a category $\mathcal{C}$. Describe a functor $F: \mathcal{C}^{op} \to Set$ which is represented by an equalizer of this pair (assuming one exists).
Exercise: Dualize the Yoneda lemma by considering hom-functors $\hom(c, -): \mathcal{C} \to Set$. Express the universal property of the coproduct in terms of representability by such hom-functors.
The Yoneda lemma has a useful corollary: for any (locally small) category $\mathcal{C}$, there is a natural isomorphism

$\mathrm{Nat}(\hom(-, a), \hom(-, b)) \cong \hom(a, b)$

between natural transformations between hom-functors and morphisms in $\mathcal{C}$. Using $[a, b]$ as alternate notation for the hom-set, the action of the Yoneda embedding functor $y$ on morphisms gives an isomorphism between hom-sets

$[a, b] \cong [y a, y b];$

the functor $y$ is said in that case to be fully faithful (faithful means this action on morphisms is injective for all $a, b$, and full means the action is surjective for all $a, b$). The Yoneda embedding $y$ thus maps $\mathcal{C}$ isomorphically onto the category of hom-functors $\hom(-, c)$ valued in the category $Set$.
It is illuminating to work out the meaning of this last statement in special cases. When the category $\mathcal{C}$ is a group $G$ (that is, a category with exactly one object $\bullet$ in which every morphism is invertible), then functors $G^{op} \to Set$ are tantamount to sets $X$ equipped with a group homomorphism $G^{op} \to \hom(X, X)$, i.e., a left action of $G^{op}$, or a right action of $G$. In particular, the hom-functor $\hom(-, \bullet)$ is the underlying set of $G$, equipped with the canonical right action of $G$ on itself, where $x \cdot g = x g$. Moreover, natural transformations between functors $G^{op} \to Set$ are tantamount to morphisms of right $G$-sets. Now, the Yoneda embedding identifies any abstract group $G$ with a concrete group, i.e., with a group of permutations — namely, exactly those permutations on $G$ which respect the right action of $G$ on itself. This is the sophisticated version of Cayley's theorem in group theory. If on the other hand we take $\mathcal{C}$ to be a poset, then the Yoneda embedding is tantamount to the Dedekind embedding we discussed in the first lecture.
Tying up a loose thread, let us now formulate the "Yoneda principle" precisely. Informally, it says that an object is determined up to isomorphism by the morphisms mapping into it. Using the hom-functor $\hom(-, c)$ to collate the morphisms mapping into $c$, the precise form of the Yoneda principle says that an isomorphism between representables $\hom(-, c) \cong \hom(-, d)$ corresponds to a unique isomorphism $c \cong d$ between objects. This follows easily from the Yoneda lemma.
But far and away, the most profound manifestation of representability is in the notion of an adjoint pair of functors. “Free constructions” give a particularly ubiquitous class of examples; the basic idea will be explained in terms of free groups, but the categorical formulation applies quite generally (e.g., to free monoids, free Boolean algebras, free rings = polynomial algebras, etc., etc.).
If $X$ is a set, the free group (q.v.) generated by $X$ is, informally, the group $F X$ whose elements are finite "words" built from "literals" $a, a^{-1}$, where the letters $a$ range over the elements of $X$ and the $a^{-1}$ are their formal inverses, and where we identify a word with any other gotten by introducing or deleting appearances of consecutive literals $a a^{-1}$ or $a^{-1} a$. Janis Joplin said it best:

Freedom's just another word for nothin' left to lose…

— there are no relations between the generators of $F X$ beyond the bare minimum required by the group axioms.
Categorically, the free group $F X$ is defined by a universal property; loosely speaking, for any group $G$, there is a natural bijection between group homomorphisms and functions

$\hom_{Grp}(F X, G) \cong \hom_{Set}(X, U G)$

where $U G$ denotes the underlying set of the group $G$. That is, we are free to assign elements of $G$ to elements of $X$ any way we like: any function $f: X \to U G$ extends uniquely to a group homomorphism $\hat{f}: F X \to G$, sending a word $x_1 x_2 \cdots x_n$ in $F X$ to the element $f(x_1) f(x_2) \cdots f(x_n)$ in $G$.
Using the usual Yoneda trick, or the dual of the Yoneda trick, this isomorphism is induced by a universal function $\eta: X \to U F X$, gotten by applying the bijection above to the identity map $1_{F X}: F X \to F X$. Concretely, this function takes an element $x \in X$ to the one-letter word $x$ in the underlying set of the free group. The universal property states that the bijection above is effected by composing with this universal map:

$\hom_{Grp}(F X, G) \to \hom_{Set}(U F X, U G) \to \hom_{Set}(X, U G)$

where the first arrow refers to the action of the underlying-set or forgetful functor $U: Grp \to Set$, mapping the category of groups to the category of sets ($U$ "forgets" the fact that homomorphisms $\phi: F X \to G$ preserve group structure, and just thinks of them as functions $U \phi: U F X \to U G$), and the second arrow is precomposition with $\eta$.
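Free groups are awkward to code up directly, but the exactly parallel free monoid story is built into Haskell: lists are free monoids, the unit sends an element to a one-letter word, and `foldMap` is the unique homomorphic extension. A sketch under that analogy (the names `eta` and `extend` are illustrative):

```haskell
import Data.Monoid (Sum (..))

-- The universal map eta : X -> U F X, taking x to the one-letter word.
eta :: x -> [x]
eta x = [x]

-- Any function f : X -> U G extends uniquely to a monoid homomorphism
-- F X -> G; for lists, that extension is foldMap f.
extend :: Monoid g => (x -> g) -> ([x] -> g)
extend = foldMap

-- The universal property says extending and then restricting along eta
-- returns the original function:
check :: Bool
check = (extend Sum . eta) (3 :: Int) == Sum 3
```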
- Remark: Some people might say this a little less formally: that the original function $f: X \to U G$ is retrieved from the extension homomorphism $\hat{f}: F X \to G$ by composing with the canonical injection of the generators $X \to F X$. The reason we don't say this is that there's a confusion of categories here: properly speaking, $\hat{f}$ belongs to the category of groups, and $f$ to the category of sets. The underlying-set functor $U: Grp \to Set$ is a device we apply to eliminate the confusion.
In different words, the universal property of free groups says that the functor $\hom_{Set}(X, U -): Grp \to Set$, i.e., the underlying functor $U: Grp \to Set$ followed by the hom-functor $\hom_{Set}(X, -): Set \to Set$, is representable by the free group $F X$: there is a natural isomorphism of functors from groups to sets:

$\hom_{Grp}(F X, -) \cong \hom_{Set}(X, U -).$
Now, the free group $F X$ can be constructed for any set $X$. Moreover, the construction is functorial: $X \mapsto F X$ defines a functor $F: Set \to Grp$. This is actually a good exercise in working with universal properties. In outline: given a function $f: X \to Y$, the homomorphism $F f: F X \to F Y$ is the one which corresponds bijectively to the function

$X \xrightarrow{f} Y \xrightarrow{\eta_Y} U F Y,$

i.e., $F f$ is defined to be the unique homomorphism $F X \to F Y$ such that $U(F f) \circ \eta_X = \eta_Y \circ f$.
Proposition: $F: Set \to Grp$ is functorial (i.e., preserves morphism identities and morphism composition).

Proof: Suppose $f: X \to Y$, $g: Y \to Z$ is a composable pair of morphisms in $Set$. By universality, there is a unique map $h: F X \to F Z$, namely $h = F(g \circ f)$, such that $U h \circ \eta_X = \eta_Z \circ (g \circ f)$. But $F g \circ F f$ also has this property, since

$U(F g \circ F f) \circ \eta_X = U F g \circ U F f \circ \eta_X = U F g \circ \eta_Y \circ f = \eta_Z \circ g \circ f$

(where we used functoriality of $U$ in the first equation). Hence $F(g \circ f) = F g \circ F f$. Another universality argument shows that $F$ preserves identities. $\Box$
Observe that the functor $F$ is rigged so that for all morphisms $f: X \to Y$,

$U F f \circ \eta_X = \eta_Y \circ f.$

That is to say, there is only one way of defining $F$ so that the universal map $\eta_X$ is (the component at $X$ of) a natural transformation $\eta: 1_{Set} \to U F$!
The underlying-set and free functors $U: Grp \to Set$, $F: Set \to Grp$ are called adjoints, generalizing the notion of adjoint in truth-valued matrix algebra: we have an isomorphism

$\hom_{Grp}(F X, G) \cong \hom_{Set}(X, U G)$

natural in both arguments $X$, $G$. We say that $F$ is left adjoint to $U$, or dually, that $U$ is right adjoint to $F$, and write $F \dashv U$. The transformation $\eta: 1_{Set} \to U F$ is called the unit of the adjunction.
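For the free monoid analogue sketched earlier, the two directions of this natural isomorphism are one-liners (illustrative names; we take for granted that the argument of `phi` is a monoid homomorphism):

```haskell
-- hom_Mon(F X, G)  ≅  hom_Set(X, U G)
phi :: Monoid g => ([x] -> g) -> (x -> g)
phi h = h . (: [])     -- restrict along the unit eta

psi :: Monoid g => (x -> g) -> ([x] -> g)
psi = foldMap          -- unique homomorphic extension
```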
Exercise: Define the construction dual to the unit, called the counit, as a transformation $\varepsilon: F U \to 1_{Grp}$. Describe this concretely in the case of the free-underlying adjunction $F \dashv U$ between sets and groups.
What makes the concept of adjoint functors so compelling is that it combines representability with duality: the manifest symmetry of an adjunction $\hom_{Grp}(F X, G) \cong \hom_{Set}(X, U G)$ means that we can equally well think of $F X$ as representing the functor $\hom_{Set}(X, U -)$ as we can think of $U G$ as representing the functor $\hom_{Grp}(F -, G)$. Time is up for today, but we'll be seeing more of adjunctions next time, when we resume our study of Stone duality.
[Tip of the hat to Robert Dawson for the Janis Joplin quip.]
Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice (formalizing the conjunction operator “and”), as a first step on the way to introducing Boolean algebras. Our larger goal in this series will be to discuss Stone duality, where it is shown how Boolean algebras can be represented “concretely”, in terms of the topology of their so-called Stone spaces — a wonderful meeting ground for algebra, topology, logic, geometry, and even analysis!
In this installment we will look at the notion of lattice and various examples of lattice, and barely scratch the surface — lattice theory is a very deep and multi-faceted theory with many unanswered questions. But the idea is simple enough: lattices formalize the notions of “and” and “or” together. Let’s have a look.
Let $X$ be a poset. If $x, y$ are elements of $X$, a join of $x$ and $y$ is an element $j$ with the property that for any $a \in X$,

$j \leq a$ if and only if ($x \leq a$ and $y \leq a$).
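This defining property is directly executable over a finite poset; here is a small Haskell checker (names illustrative; `elts` lists the poset's elements and `leq` is its order):

```haskell
-- j is a join of x and y precisely when, for every element a,
-- j <= a holds exactly when both x <= a and y <= a hold.
isJoin :: (a -> a -> Bool) -> [a] -> a -> a -> a -> Bool
isJoin leq elts x y j =
  all (\a -> (j `leq` a) == ((x `leq` a) && (y `leq` a))) elts
```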
For a first example, consider the poset $P(X)$ of subsets of $X$ ordered by inclusion. The join in that case is given by taking the union, i.e., we have

$A \cup B \subseteq C$ if and only if ($A \subseteq C$ and $B \subseteq C$).
Given the close connection between unions of sets and the disjunction “or”, we can therefore say, roughly, that joins are a reasonable mathematical way to formalize the structure of disjunction. We will say a little more on that later when we discuss mathematical logic.
Notice there is a close formal resemblance between how we defined joins and how we defined meets. Recall that a meet of $x$ and $y$ is an element $m$ such that for all $a \in X$,

$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).
Curiously, the logical structure in the definitions of meet and join is essentially the same; the only difference is that we switched the inequalities (i.e., replaced all instances of $\leq$ by $\geq$). This is an instance of a very important concept. In the theory of posets, the act of modifying a logical formula or theorem by switching all the inequalities but otherwise leaving the logical structure the same is called taking the dual of the formula or theorem. Thus, we would say that the dual of the notion of meet is the notion of join (and vice-versa). This turns out to be a very powerful idea, which in effect will allow us to cut our work in half.
(Just to put in some fine print or boilerplate, let me just say that a formula in the first-order theory of posets is a well-formed expression in first-order logic (involving the usual logical connectives and logical quantifiers and equality over a domain $X$), which can be built up by taking $\leq$ as a primitive binary predicate on $X$. A theorem in the theory of posets is a sentence (a closed formula, meaning that all variables are bound by quantifiers) which can be deduced, following standard rules of inference, from the axioms of reflexivity, transitivity, and antisymmetry. We occasionally also consider formulas and theorems in second-order logic (permitting logical quantification over the power set $P(X)$), and in higher-order logic. If this legalistic language is scary, don't worry — just check the appropriate box in the End User Agreement, and reason the way you normally do.)
The critical item to install before we’re off and running is the following meta-principle:
Principle of Duality: If a logical formula F is a theorem in the theory of posets, then so is its dual F’.
Proof: All we need to do is check that the duals of the axioms in the theory of posets are also theorems; then F' can be proved just by dualizing the entire proof of F. Now the dual of the reflexivity axiom, $x \leq x$, is itself! — and of course an axiom is a theorem. The transitivity axiom, ($x \leq y$ and $y \leq z$) implies $x \leq z$, is also self-dual (when you dualize it, it looks essentially the same except that the variables $x$ and $z$ are switched — and there is a basic convention in logic that two sentences which differ only by renaming the variables are considered syntactically equivalent). Finally, the antisymmetry axiom is also self-dual in this way. Hence we are done. $\Box$
So, for example, by the principle of duality, we know automatically that the join of two elements is unique when it exists — we just dualize our earlier theorem that the meet is unique when it exists. The join of two elements $x$ and $y$ is denoted $x \vee y$.
Be careful, when you dualize, that any shorthand you used to abbreviate an expression in the language of posets is also replaced by its dual. For example, the dual of the notation $x \wedge y$ is $x \vee y$ (and vice-versa of course), and so the dual of the associativity law which we proved for meet is (for all $x, y, z$) $(x \vee y) \vee z = x \vee (y \vee z)$. In fact, we can say
Theorem: The join operation is associative, commutative, and idempotent.
Proof: Just apply the principle of duality to the corresponding theorem for the meet operation.
Just to get used to these ideas, here are some exercises.
- State the dual of the Yoneda principle (as stated here).
- Prove the associativity of join from scratch (from the axioms for posets). If you want, you may invoke the dual of the Yoneda principle in your proof. (Note: in the sequel, we will apply the term “Yoneda principle” to cover both it and its dual.)
To continue: we say a poset is a join-semilattice if it has all finite joins (including the empty join, which is the bottom element $0$ satisfying $0 \leq a$ for all $a$). A lattice is a poset which has all finite meets and finite joins.
Time for some examples.
- The set of natural numbers 0, 1, 2, 3, … under the divisibility order ($a \leq b$ if $a$ divides $b$) is a lattice. (What is the join of two elements? What is the bottom element?)
- The set of natural numbers under the usual order is a join-semilattice (the join of two elements here is their maximum), but not a lattice (because it lacks a top element).
- The set $P(X)$ of subsets of a set $X$ is a lattice. The join of two subsets is their union, and the bottom element is the empty set.
- The set of subspaces of a vector space $V$ is a lattice. The meet of two subspaces is their ordinary intersection; the join of two subspaces $U$, $W$ is the vector space which they jointly generate (i.e., the set of vector sums $u + w$ with $u \in U$, $w \in W$, which is closed under addition and scalar multiplication).
The join in the last example is not the naive set-theoretic union of course (and similar remarks hold for many other concrete lattices, such as the lattice of all subgroups of a group, and the lattice of ideals of a ring), so it might be worth asking if there is a uniform way of describing joins in cases like these. Certainly the idea of taking some sort of closure of the ordinary union seems relevant (e.g., in the vector space example, close up the union of $U$ and $W$ under the vector space operations), and indeed this can be made precise in many cases of interest.
To explain this, let's take a fresh look at the definition of join: the defining property was

$x \vee y \leq a$ if and only if ($x \leq a$ and $y \leq a$).
What this is really saying is that among all the elements $a$ which "contain" both $x$ and $y$, the element $x \vee y$ is the absolute minimum. This suggests a simple idea: why not just take the "intersection" (i.e., meet) of all such elements $a$ to get that absolute minimum? In effect, construct joins as certain kinds of meets! For example, to construct the join of two subgroups $H$, $K$, take the intersection of all subgroups containing both $H$ and $K$ — that intersection is the group-theoretic closure of the union $H \cup K$.
There’s a slight catch: this may involve taking the meet of infinitely many elements. But there is no difficulty in saying what this means:
Definition: Let $X$ be a poset, and suppose $S \subseteq X$. The infimum of $S$, if it exists, is an element $m \in X$ such that for all $a \in X$,

$a \leq m$ if and only if $a \leq s$ for all $s \in S$.
By the usual Yoneda argument, infima are unique when they exist (you might want to write that argument out to make sure it's quite clear). We denote the infimum of $S$ by $\inf S$.
We say that a poset $X$ is an inf-lattice if there is an infimum for every subset. Similarly, the supremum of $S \subseteq X$, if it exists, is an element $m \in X$ such that for all $a \in X$, $m \leq a$ if and only if $s \leq a$ for all $s \in S$. A poset is a sup-lattice if there is a supremum for every subset. [I'll just quickly remark that the notions of inf-lattice and sup-lattice belong to second-order logic, since they involve quantifying over all subsets $S \subseteq X$ (or over all elements of $P(X)$).]
Trivially, every inf-lattice is a meet-semilattice, and every sup-lattice is a join-semilattice. More interestingly, we have the
Theorem: Every inf-lattice is a sup-lattice (!). Dually, every sup-lattice is an inf-lattice.
Proof: Suppose $X$ is an inf-lattice, and let $S \subseteq X$. Let $U$ be the set of upper bounds of $S$. I claim that $\inf U$ ("least upper bound") is the supremum of $S$. Indeed, from $\inf U \leq \inf U$ and the definition of infimum, we know that $\inf U \leq a$ if $a \in U$, i.e., $\inf U \leq a$ if $s \leq a$ for all $s \in S$. On the other hand, we also know that if $s \in S$, then $s \leq a$ for every $a \in U$, and hence $s \leq \inf U$ by the defining property of infimum (i.e., $\inf U$ really is an upper bound of $S$). So, if $\inf U \leq a$, we conclude by transitivity that $s \leq a$ for every $s \in S$. This completes the proof. $\Box$
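The proof is constructive enough to run on any finite poset: collect the upper bounds of $S$, then take their infimum. A Haskell sketch (names illustrative; `elts` lists the poset's elements and `leq` is its order):

```haskell
import Data.List (find)

-- An infimum of s is an element m such that, for every a,
-- a <= m holds exactly when a is a lower bound of s.
infimumOf :: (a -> a -> Bool) -> [a] -> [a] -> Maybe a
infimumOf leq elts s =
  find (\m -> all (\a -> (a `leq` m) == all (a `leq`) s) elts) elts

-- The supremum of s is the infimum of the set of upper bounds of s.
supremumOf :: (a -> a -> Bool) -> [a] -> [a] -> Maybe a
supremumOf leq elts s =
  infimumOf leq elts [u | u <- elts, all (`leq` u) s]
```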
Corollary: Every finite meet-semilattice is a lattice.
Even though every inf-lattice is a sup-lattice and conversely (sometimes people just call them "complete lattices"), there are important distinctions to be made when we consider what is the appropriate notion of homomorphism. The notions are straightforward enough: a morphism of meet-semilattices $f: X \to Y$ is a function which takes finite meets in $X$ to finite meets in $Y$ ($f(x \wedge y) = f(x) \wedge f(y)$, and $f(1) = 1$, where the 1's denote top elements). There is a dual notion of morphism of join-semilattices ($f(x \vee y) = f(x) \vee f(y)$ and $f(0) = 0$, where the 0's denote bottom elements). A morphism of inf-lattices $f: X \to Y$ is a function such that $f(\inf S) = \inf f(S)$ for all subsets $S \subseteq X$, where $f(S)$ denotes the direct image of $S$ under $f$. And there is a dual notion of morphism of sup-lattices: $f(\sup S) = \sup f(S)$. Finally, a morphism of lattices is a function which preserves all finite meets and finite joins, and a morphism of complete lattices is one which preserves all infs and sups.
Despite the theorem above, it is not true that a morphism of inf-lattices must be a morphism of sup-lattices. Nor is it true that a morphism of finite meet-semilattices must be a lattice morphism. Therefore, in contexts where homomorphisms matter (which is just about all the time!), it is important to keep the qualifying prefixes around and keep the distinctions straight.
Exercise: Come up with some examples of morphisms which exhibit these distinctions.
The difference between sets $A$ and $B$, also known as the relative complement of $B$ in $A$, is the set $A - B$ defined by

$A - B = \{ x \in A : x \notin B \}$.
If we assume the existence of a universe, $E$, such that all the sets under discussion are subsets of $E$, then we can considerably simplify our notation. So, for instance, $E - A$ can simply be written as $A'$, which denotes the complement of $A$ in $E$. Similarly, $(A')' = A$, $E' = \emptyset$, and so on. A quick look at a few more facts:

- $\emptyset' = E$,
- $A \cap A' = \emptyset$ and $A \cup A' = E$,
- $A \subseteq B$ if and only if $B' \subseteq A'$.
The last one is proved as follows. We prove the "if" part first. Suppose $B' \subseteq A'$. If $x \in A$, then clearly $x \notin A'$. But, since $B' \subseteq A'$, we have $x \notin B'$, which implies $x \in B$. Hence, $A \subseteq B$. This closes the proof of the "if" part. Now, we prove the "only if" part. So, suppose $A \subseteq B$. Now, if $x \in B'$, then clearly $x \notin B$. But, since $A \subseteq B$, we have $x \notin A$, which implies $x \in A'$. Hence, $B' \subseteq A'$. This closes the proof of the "only if" part, and we are done.
The following are the well-known De Morgan's laws (about complements):

$(A \cup B)' = A' \cap B'$, and $(A \cap B)' = A' \cup B'$.
Let’s quickly prove the first one. Suppose belongs to the left hand side. Then,
, which implies
and
, which implies
and
, which implies
. This proves that the left hand side is a subset of the right hand side. We can similarly prove the right hand side is a subset of the left hand side, and this closes our proof.
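Such identities are easy to spot-check mechanically over a finite universe; a quick Haskell sketch using Data.Set (the name `deMorgan` is illustrative, and complement means difference from the universe `e`):

```haskell
import qualified Data.Set as Set

-- Check both De Morgan laws for subsets a, b of the universe e.
deMorgan :: Ord x => Set.Set x -> Set.Set x -> Set.Set x -> Bool
deMorgan e a b =
     comp (a `Set.union` b) == comp a `Set.intersection` comp b
  && comp (a `Set.intersection` b) == comp a `Set.union` comp b
  where comp = (e `Set.difference`)
```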
Though it isn’t very apparent, but if we look carefully at the couple of problems whose proofs we did above, we note something called the principle of duality for sets. One encounters such dual principles in mathematics quite often. In this case, this dual principle is stated a follows.
Principle of duality (for sets): If in an inclusion or equation involving unions, intersections and complements of subsets of $E$ (the universe) we replace each set by its complement, interchange unions and intersections, and reverse all set-inclusions, the result is another theorem.
Using the above principle, it is easy to "derive" one of De Morgan's laws from the other and vice versa. In addition, De Morgan's laws can be extended to larger collections of sets instead of just pairs.
Here are a few exercises on complementation.

- $A - B = A \cap B'$,
- $A \subseteq B$ if and only if $A - B = \emptyset$,
- $A - (A - B) = A \cap B$,
- $A \cap (B - C) = (A \cap B) - (A \cap C)$,
- $A \cap B \subseteq (A \cap C) \cup (B \cap C')$,
- $(A \cup C) \cap (B \cup C') \subseteq A \cup B$.
We will prove the last one, leaving the rest as exercises for the reader. Suppose $x$ belongs to the left hand side. Then, $x \in A \cup C$ and $x \in B \cup C'$. Now, note that if $x \in C$, then $x \notin C'$, which implies (since $x \in B \cup C'$) that $x \in B$, which implies $x \in A \cup B$. If, on the other hand, $x \notin C$, then, since $x \in A \cup C$, we have $x \in A$, which implies $x \in A \cup B$. Hence, in either case, the left hand side is a subset of $A \cup B$, and we are done.
We now define the symmetric difference (or Boolean sum) of two sets $A$ and $B$ as follows:

$A \mathbin{\triangle} B = (A - B) \cup (B - A)$.

This is basically the set of all elements in $A$ or $B$ but not in $A \cap B$. In other words, $A \mathbin{\triangle} B = (A \cup B) - (A \cap B)$. Again, a few facts (involving symmetric differences) that aren't hard to prove:

- $A \mathbin{\triangle} \emptyset = A$,
- $A \mathbin{\triangle} A = \emptyset$,
- $A \mathbin{\triangle} B = B \mathbin{\triangle} A$ (commutativity),
- $(A \mathbin{\triangle} B) \mathbin{\triangle} C = A \mathbin{\triangle} (B \mathbin{\triangle} C)$ (associativity),
- $A \mathbin{\triangle} E = A'$,
- $A \cap (B \mathbin{\triangle} C) = (A \cap B) \mathbin{\triangle} (A \cap C)$.
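Both descriptions of the symmetric difference translate directly into code, and agree; a brief Haskell sketch (names illustrative):

```haskell
import qualified Data.Set as Set

-- A △ B as (A - B) ∪ (B - A)...
symmDiff :: Ord x => Set.Set x -> Set.Set x -> Set.Set x
symmDiff a b =
  (a `Set.difference` b) `Set.union` (b `Set.difference` a)

-- ...which agrees with (A ∪ B) - (A ∩ B).
symmDiff' :: Ord x => Set.Set x -> Set.Set x -> Set.Set x
symmDiff' a b =
  (a `Set.union` b) `Set.difference` (a `Set.intersection` b)
```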
This brings us now to the axiom of powers, which basically states that if $E$ is a set, then there exists a set that contains all the possible subsets of $E$ as its elements.

Axiom of powers: If $E$ is a set, then there exists a set (collection) $\mathcal{P}$, such that if $X \subseteq E$, then $X \in \mathcal{P}$.
The set $\mathcal{P}$, described above, may be too "comprehensive", i.e., it may contain sets other than the subsets of $E$. Once again, we "fix" this by applying the axiom of specification to form the new set $\mathcal{P} = \{ X : X \subseteq E \}$. The set $\mathcal{P}$ is called the power set of $E$, and the axiom of extension, again, guarantees its uniqueness. We denote $\mathcal{P}$ by $\mathcal{P}(E)$ to show the dependence of $\mathcal{P}$ on $E$. A few illustrative examples:

$\mathcal{P}(\emptyset) = \{ \emptyset \}$, $\mathcal{P}(\{a\}) = \{ \emptyset, \{a\} \}$, $\mathcal{P}(\{a, b\}) = \{ \emptyset, \{a\}, \{b\}, \{a, b\} \}$, and so on.
Note that if $E$ is a finite set, containing $n$ elements, then the power set $\mathcal{P}(E)$ contains $2^n$ elements. The "usual" way to prove this is by either using a simple combinatorial argument or by using some algebra. The combinatorial argument is as follows. An element of $E$ either belongs to a given subset of $E$ or it doesn't: there are thus two choices for each element; since there are $n$ elements in $E$, the number of all possible subsets of $E$ is therefore $2^n$. A more algebraic way of proving the same result is as follows. The number of subsets with $k$ elements is $\binom{n}{k}$. So, the number of subsets of $E$ is $\sum_{k=0}^{n} \binom{n}{k}$. But, from the binomial theorem, we have $(1 + x)^n = \sum_{k=0}^{n} \binom{n}{k} x^k$. Putting $x = 1$, we get $2^n = \sum_{k=0}^{n} \binom{n}{k}$ as our required answer.
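The two-choices-per-element argument is visible in a short recursive construction of the power set; a Haskell sketch (the name `powerSet` is illustrative):

```haskell
-- Each element is either left out or put in, doubling the number of
-- subsets at every step: length (powerSet xs) == 2 ^ length xs.
powerSet :: [x] -> [[x]]
powerSet []       = [[]]
powerSet (x : xs) = rest ++ map (x :) rest
  where rest = powerSet xs
```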
A few elementary facts:

- $\bigcap \mathcal{P}(E) = \emptyset$.
- If $E \subseteq F$, then $\mathcal{P}(E) \subseteq \mathcal{P}(F)$.
- $\bigcup \mathcal{P}(E) = E$.
EXERCISES:

1. Prove that $\mathcal{P}(E) \cap \mathcal{P}(F) = \mathcal{P}(E \cap F)$.
2. Prove that $\mathcal{P}(E) \cup \mathcal{P}(F) \subseteq \mathcal{P}(E \cup F)$.