In our last installment in this series on Stone duality, we introduced the notion of Heyting algebra, which captures the basic relationships between the logical connectives “and”, “or”, and “implies”. Our discussion disclosed a fundamental relationship between distributive laws and the algebra of implication, which we put to work to discover the structure of the “internal Heyting algebra logic” of a topology.

I’d like to pause and reflect on the general technique we used to establish this relationship; like the Yoneda principle and the Principle of Duality, it comes up with striking frequency, and so it will be useful for us to give it a name. As it turns out, this particular proof technique is analogous to the way *adjoints* are used in linear algebra. Such analogies go all the way back to work of C. S. Peirce, who like Boole was a great pioneer in the discovery of relationships between algebra and logic. At a deeper level, similar analogies were later rediscovered in category theory, and are connected with some of the most potent ideas category theory has to offer.

Our proof that meets distribute over sups in the presence of an implication operator is an example of this technique. Here is another example of similar flavor.

**Theorem**: In a Heyting algebra $H$, the operator $a \Rightarrow -: H \to H$ preserves any infs which happen to exist in $H$, for any element $a \in H$. [In particular, this operator is a morphism of meet-semilattices, i.e., $a \Rightarrow (x \wedge y) = (a \Rightarrow x) \wedge (a \Rightarrow y)$, and $a \Rightarrow 1 = 1$.]

**Proof**: Suppose that $B \subseteq H$ has an inf, which here will be denoted $\bigwedge B$. Then for all $c \in H$, we have

$c \leq a \Rightarrow \bigwedge B$

if and only if

$c \wedge a \leq \bigwedge B$

if and only if

(for all $b \in B$, $c \wedge a \leq b$) if and only if

for all $b \in B$, $c \leq a \Rightarrow b$.

By the defining property of inf, these logical equivalences show that $a \Rightarrow \bigwedge B$ is indeed the inf of the subset $\{a \Rightarrow b : b \in B\}$, or in other words that $\bigwedge_{b \in B} (a \Rightarrow b) = a \Rightarrow \bigwedge B$, as desired.

In summary, what we did in this proof is “slide” the operator $a \Rightarrow -$ on the right of the inequality over to the operator $- \wedge a$ on the left, then invoke the defining property of infs, and then slide $- \wedge a$ back to $a \Rightarrow -$ on the right. This sliding trick is analogous to how adjoint mappings work in linear algebra.

In fact, everything we have done so far with posets can be translated in terms of matrix algebra, provided that our matrix entries, instead of being real or complex numbers, are truth values ($1$ for “true”, $0$ for “false”). These truth values are added and multiplied in the way familiar from truth tables, with join playing the role of addition and meet playing the role of multiplication. In fact the lattice $\mathbf{2} = \{0, 1\}$ is a very simple distributive lattice, and so most of the familiar arithmetic properties of addition and multiplication (associativity, commutativity, distributivity) do carry over, which is all we need to carry out the most basic aspects of matrix algebra. However, observe that $1$ has no additive inverse (for here $1 + 1 = 1$) — the type of structure we are dealing with is often called a “rig” (like a ri*n*g, but without assuming *n*egatives). On the other hand, this lattice is, conveniently, a sup-lattice, so we may think of sups as arbitrary sums, whether finitary or infinitary.
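To make the rig $\mathbf{2}$ concrete, here is a small sketch in Python (my own illustration — the function name `bool_matmul` and the sample matrices are invented): matrix multiplication over the truth-value rig, with `max` as join/addition and `min` as meet/multiplication.

```python
# Matrices over the rig of truth values 2 = {0, 1}:
# "addition" is join (max) and "multiplication" is meet (min).
# Note the rig has no negatives: 1 + 1 = max(1, 1) = 1.

def bool_matmul(A, B):
    """Matrix product over 2: the (i, k) entry is the join over j
    of A[i][j] meet B[j][k]."""
    return [[max(min(A[i][j], B[j][k]) for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

R = [[0, 1],
     [0, 0]]
S = [[0, 0],
     [1, 0]]
print(bool_matmul(R, S))  # [[1, 0], [0, 0]]
```

Because entries are only $0$ and $1$ and sums are joins, even infinite index sets would pose no convergence problems; this is the sense in which it matters that $\mathbf{2}$ is a sup-lattice.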

Peirce recognized that a relation can be *classified* by a truth-valued matrix. Take for example a binary relation on a set $X$, i.e., a subset $R \subseteq X \times X$. We can imagine each point $(x, y) \in X \times X$ as a pixel in the plane, and highlight $R$ by lighting up just those pixels which belong to $R$. This is the same as giving an $(X \times X)$-matrix, with rows indexed by elements $x$ and columns by elements $y$, where the $(x, y)$-entry is $1$ (on) if $(x, y)$ is in $R$, and $0$ if not. In a similar way, any relation $R \subseteq X \times Y$ is classified by an $(X \times Y)$-matrix whose entries are truth values.

As an example, the identity matrix has a $1$ at the $(x, y)$-entry if and only if $x = y$. Thus the identity matrix classifies the equality relation.

A poset is a set equipped with a binary relation $\leq$ satisfying the reflexivity, transitivity, and antisymmetry properties. Let us translate these into matrix algebra terms, writing $R$ for the matrix classifying $\leq$. First reflexivity: it says that $x = y$ implies $x \leq y$. In matrix algebra terms, it says that each entry of the identity matrix is bounded above by the corresponding entry of $R$, which we abbreviate in the customary way:

(Reflexivity) $1 \leq R$.

Now let’s look at transitivity. It says

($x \leq y$ and $y \leq z$) implies $x \leq z$.

The “and” here refers to the meet or multiplication in the rig of truth values $\mathbf{2}$, and the existential quantifier $\exists y$ (which appears once we fix the outer pair $(x, z)$ and ask whether *some* intermediate element $y$ links them) can be thought of as a (possibly infinitary) join or sum indexed over elements $y$. Thus, for each pair $(x, z)$, the hypothesis of the implication has truth value

$\bigvee_y R(x, y) \wedge R(y, z)$

which is just the $(x, z)$-entry of the *square* of the matrix $R$. Therefore, transitivity can be very succinctly expressed in matrix algebra terms as the condition

(Transitivity) $R R \leq R$.

**Remark**: More generally, given a relation $R \subseteq X \times Y$ from $X$ to $Y$, and another relation $S \subseteq Y \times Z$ from $Y$ to $Z$, the *relational composite* $S \circ R \subseteq X \times Z$ is defined to be the set of pairs $(x, z)$ for which there exists $y \in Y$ with $(x, y) \in R$ and $(y, z) \in S$. But this just means that its classifying matrix is given by the ordinary matrix product of the matrices classifying $R$ and $S$!

Let’s now look at the antisymmetry condition: ($x \leq y$ and $y \leq x$) implies $x = y$. The clause $y \leq x$ is the flip of $x \leq y$; at the matrix level, this flip corresponds to taking the *transpose*. Thus antisymmetry can be expressed in matrix terms as

(Antisymmetry) $R \wedge R^\top \leq 1$

where $R^\top$ denotes the transpose of $R$, and the matrix meet $\wedge$ means we take the meet at each entry.
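As a sanity check (my own example, not from the text), we can encode the divisibility order on $\{1, 2, 3, 6\}$ as a truth-valued matrix $R$ and verify the three matrix-algebra forms of the axioms: $1 \leq R$, $RR \leq R$, and $R \wedge R^\top \leq 1$.

```python
# Divisibility order on {1, 2, 3, 6} as a truth-valued matrix,
# checked against the matrix forms of the poset axioms.

def matmul2(A, B):
    # matrix product over the rig 2: sum = join (max), product = meet (min)
    return [[max(min(A[i][j], B[j][k]) for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    # entrywise order on truth-valued matrices
    return all(a <= b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

elems = [1, 2, 3, 6]
R = [[1 if y % x == 0 else 0 for y in elems] for x in elems]   # x divides y
I = [[int(i == j) for j in range(len(elems))] for i in range(len(elems))]
T = [list(row) for row in zip(*R)]                              # transpose
meet = [[min(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(R, T)]

print(leq(I, R))              # reflexivity:    1 <= R        -> True
print(leq(matmul2(R, R), R))  # transitivity:   R R <= R      -> True
print(leq(meet, I))           # antisymmetry:   R ∧ R^T <= 1  -> True
```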

**Remark**: From the matrix algebra perspective, the antisymmetry axiom is less well motivated than the reflexivity and transitivity axioms. There’s a moral hiding beneath that story: *from the category-theoretic perspective, the antisymmetry axiom is relatively insignificant*. That is, if we view a poset as a category, then the antisymmetry condition is tantamount to the condition that isomorphic objects are equal (in the parlance, one says the category is “skeletal”) — this extra condition makes no essential difference, because isomorphic objects are essentially the same anyway. So: if we were to simply drop the antisymmetry axiom but keep the reflexivity and transitivity axioms (leading to what are called *preordered* sets, as opposed to partially ordered sets), then the theory of preordered sets develops exactly as the theory of partially ordered sets, except that in places where we conclude “$x$ is equal to $y$” in the theory of posets, we would generally conclude “$x$ is isomorphic to $y$” in the theory of preordered sets.

Preordered sets do occur in nature. For example, the set of sentences in a theory is preordered by the entailment relation ($p \leq q$ if $q$ is derivable from $p$ in the theory). (The way one gets a poset out of this is to pass to a quotient set, by identifying sentences which are logically equivalent in the theory.)

**Exercises**:

- (For those who know some topology) Suppose $X$ is a topological space. Given $x, y \in X$, define $x \leq y$ if $x$ belongs to the closure of $\{y\}$; show this is a preorder. Show this preorder is a poset precisely when $X$ is a $T_0$-space.
- If $X$ carries a group structure, define $x \leq y$ for elements $x, y \in X$ if $y = x^n$ for some integer $n$; show this is a preorder. When is it a poset?

Since posets or preorders are fundamental to everything we’re doing, I’m going to reserve a special *pairing* notation for their classifying matrices: define

$\langle x, y \rangle = 1$ if and only if $x \leq y$.

Many of the concepts we have developed so far for posets can be succinctly expressed in terms of the pairing.

**Example**: The Yoneda principle (together with its dual) is simply the statement that if $X$ is a poset, then $x \leq y$ if and only if $\langle -, x \rangle \leq \langle -, y \rangle$ (as functionals valued in $\mathbf{2}$) if and only if $\langle y, - \rangle \leq \langle x, - \rangle$.

**Example**: A poset mapping from a poset $X$ to a poset $Y$ is a function $f: X \to Y$ such that $\langle x, x' \rangle \leq \langle f(x), f(x') \rangle$ for all $x, x' \in X$.

**Example**: If $X$ is a poset, its dual or opposite $X^{op}$ has the same elements but the opposite order, i.e., $\langle x, y \rangle_{X^{op}} = \langle y, x \rangle_X$. The principle of duality says that the opposite of a poset is a poset. This can be (re)proved by invoking formal properties of matrix transpose, e.g., if $RR \leq R$, then $R^\top R^\top = (RR)^\top \leq R^\top$.

By far the most significant concept that can be expressed in terms of these pairings is that of **adjoint mappings**:

**Definition**: Let $X, Y$ be posets [or preorders], and let $f: X \to Y$, $g: Y \to X$ be poset mappings. We say $(f, g)$ is an *adjoint pair* (with $f$ the *left adjoint* of $g$, and $g$ the *right adjoint* of $f$) if

$\langle f(x), y \rangle = \langle x, g(y) \rangle$ for all $x \in X$, $y \in Y$,

or, in other words, if $f(x) \leq y$ if and only if $x \leq g(y)$. We write $f \dashv g$. Notice that the concept of left adjoint is dual to the concept of right adjoint (N.B.: they are not the same, because clearly the pairing $\langle x, y \rangle$ is not generally symmetric in $x$ and $y$).
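Before the examples, here is a machine-checkable instance of the definition (my own sketch; the set `U` and the names `f`, `g` are invented for illustration): in the Heyting algebra of subsets of a three-element set, meeting with a fixed subset $a$ and the implication $a \Rightarrow - = (U \setminus a) \cup -$ form an adjoint pair.

```python
from itertools import combinations

# In the power-set Heyting algebra of U, check the adjunction
#   f(x) <= y  iff  x <= g(y),
# where f(x) = x ∧ a and g(y) = (a ⇒ y) = (U \ a) ∪ y.

U = frozenset({0, 1, 2})

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(sorted(s), r)]

a = frozenset({0, 1})
f = lambda x: x & a           # left adjoint:  - ∧ a
g = lambda y: (U - a) | y     # right adjoint: a ⇒ -

# frozenset's <= operator is the subset order
assert all((f(x) <= y) == (x <= g(y))
           for x in subsets(U) for y in subsets(U))
print("adjoint pair verified")
```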

Here are some examples which illustrate the ubiquity of this concept:

- Let $X$ be a poset. Let $X \times X$ be the poset where $(x, y) \leq (x', y')$ iff ($x \leq x'$ and $y \leq y'$). There is an obvious poset mapping $\Delta: X \to X \times X$, the *diagonal mapping*, which takes $x$ to $(x, x)$. Then a meet operation $\wedge: X \times X \to X$ is precisely a right adjoint to the diagonal mapping. Indeed, it says that ($c \leq a$ and $c \leq b$) if and only if $c \leq a \wedge b$.
- Dually, a join operation $\vee: X \times X \to X$ is precisely a left adjoint to the diagonal mapping $\Delta$.
- More generally, for any set $S$, there is a diagonal map $\Delta: X \to X^S$ which maps $x$ to the $S$-tuple $(x)_{s \in S}$. Its right adjoint $X^S \to X$, if one exists, sends an $S$-tuple $(x_s)_{s \in S}$ to the inf of the set $\{x_s : s \in S\}$. Its left adjoint would send the tuple to the sup of that set.
- If $X$ is a Heyting algebra, then for each $a \in X$, the conjunction operator $- \wedge a: X \to X$ is left adjoint to the implication operator $a \Rightarrow -: X \to X$.
- If $X$ is a sup-lattice, then the operator $\sup: PX \to X$ which sends a subset $S \subseteq X$ to $\sup S$ is left adjoint to the Dedekind embedding $X \to PX$ which sends $x$ to $\downarrow x = \{y \in X : y \leq x\}$. Indeed, we have $\sup S \leq x$ if and only if (for all $s \in S$, $s \leq x$) if and only if $S \subseteq \downarrow x$.
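The last item can also be checked mechanically on a small example (my own, with invented names): in the sup-lattice of subsets of $\{0, 1\}$, the sup (union) operator on collections of subsets is left adjoint to the principal-downset embedding.

```python
from itertools import combinations

# Sup-lattice PX of subsets of {0, 1}; check  sup(S) <= x  iff  S ⊆ ↓x.

U = frozenset({0, 1})
X = [frozenset(c) for r in range(len(U) + 1)
     for c in combinations(sorted(U), r)]        # elements of the lattice

def sup(S):
    out = frozenset()
    for s in S:
        out |= s                                 # sup in PX is union
    return out

def down(x):
    # Dedekind embedding: x goes to its principal downset {y : y <= x}
    return frozenset(y for y in X if y <= x)

# all collections S ⊆ X (i.e. subsets of the power set)
collections = [frozenset(c) for r in range(len(X) + 1)
               for c in combinations(X, r)]

assert all((sup(S) <= x) == (S <= down(x))
           for S in collections for x in X)
print("sup is left adjoint to the downset embedding")
```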

As items 1, 2, and 4 indicate, the rules for how the propositional connectives operate are governed by adjoint pairs. This gives some evidence for Lawvere’s great insight *that all rules of inference in logic are expressed by interlocking pairs of adjoint mappings*.

**Proposition**: If $f \dashv g$ and $f' \dashv g'$, where $f: X \to Y$ and $f': Y \to Z$ are composable mappings, then $f' f \dashv g g'$.

**Proof**: $\langle f' f(x), z \rangle = \langle f(x), g'(z) \rangle = \langle x, g g'(z) \rangle$. Notice that the statement is analogous to the usual rule $(AB)^* = B^* A^*$, where $*$ refers to taking an adjoint with respect to given inner product forms.

We can use this proposition to give slick proofs of some results we’ve seen. For example, to prove that Heyting algebras are distributive lattices, i.e., that $a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)$, just take left adjoints on both sides of the tautology $\Delta g = (g \times g) \Delta$, where $g = (a \Rightarrow -)$ is right adjoint to $f = (a \wedge -)$. The left adjoint of the left side of the tautology is (by the proposition) $f \vee$ applied to $(b, c)$, i.e., $a \wedge (b \vee c)$. The left adjoint of the right side is $\vee (f \times f)$ applied to $(b, c)$, i.e., $(a \wedge b) \vee (a \wedge c)$. The conclusion follows.

Much more generally, we have the

**Theorem**: Right adjoints $g: Y \to X$ preserve any infs which exist in $Y$. Dually, left adjoints $f: X \to Y$ preserve any sups which exist in $X$.

**Proof**: $\langle x, g(\bigwedge_s y_s) \rangle = \langle f(x), \bigwedge_s y_s \rangle = \bigwedge_s \langle f(x), y_s \rangle = \bigwedge_s \langle x, g(y_s) \rangle$, where the last inf is interpreted in the inf-lattice $\mathbf{2}$. This equals $\langle x, \bigwedge_s g(y_s) \rangle$. This completes the proof of the first statement (why?). The second follows from duality.
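We can watch the theorem at work on a finite example (my own, with invented names), reusing the adjunction $- \wedge a \dashv a \Rightarrow -$ in a power-set Heyting algebra: the left adjoint preserves binary sups (which is exactly the distributivity proved above), and the right adjoint preserves binary infs.

```python
from itertools import combinations

# In the power-set Heyting algebra of U, with f = (- ∧ a) ⊣ g = (a ⇒ -):
# f preserves sups (unions), g preserves infs (intersections).

U = frozenset({0, 1, 2})
all_subsets = [frozenset(c) for r in range(len(U) + 1)
               for c in combinations(sorted(U), r)]
a = frozenset({1, 2})
f = lambda x: x & a           # left adjoint
g = lambda y: (U - a) | y     # right adjoint

# left adjoints preserve sups: a ∧ (x ∨ y) = (a ∧ x) ∨ (a ∧ y)
assert all(f(x | y) == f(x) | f(y) for x in all_subsets for y in all_subsets)
# right adjoints preserve infs: a ⇒ (x ∧ y) = (a ⇒ x) ∧ (a ⇒ y)
assert all(g(x & y) == g(x) & g(y) for x in all_subsets for y in all_subsets)
print("preservation verified")
```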

**Exercise**: If $X$ is a Heyting algebra, then there is a poset mapping $(- \Rightarrow a): X^{op} \to X$ for any element $a \in X$. Describe the left adjoint of this mapping. Conclude that this mapping takes infs in $X^{op}$ (i.e., sups in $X$) to the corresponding infs in $X$.

## 11 comments


April 16, 2008 at 7:17 pm

Urs Schreiber: Hi Todd,

interesting course you are teaching here! :-)

Did you give some kind of course outline, an overview of what the very general idea to be exhibited is, the main points, an indication where all this is headed?

Possibly I missed it. Or possibly there is no better overview than simply following sentence by sentence? But if there is, would you mind giving one? It might help me relate to what is going on, and maybe even get excited about it.

I suppose there are close relationships between what you are discussing and the topos-theoretic discussions we had on the n-café, such as here?

April 16, 2008 at 7:58 pm

Urs Schreiber: I gather that the title of the course is “Stone duality” or the like.

I hope I do remember correctly from what you kept telling me that Stone duality, which is an adjunction between (sober) topological spaces and (spatial, complete) Heyting algebras, is induced by the

ambimorphic object $\mathbf{2} = \{0, 1\}$

which is both a lattice as well as a topological space, in a “compatible” way:

I suppose we get from a topological space to its lattice of open subsets by mapping the space into $\mathbf{2}$ (which yields its subsets; I am not sure I see how the *open* subsets appear). And we get from a lattice to a topological space by mapping it into the lattice $\mathbf{2}$, as explained on that Wikipedia website.

Would it be correct to say that the goal of your lecture is to describe this in full detail?

April 16, 2008 at 11:03 pm

Todd Trimble: Exactly what you described is the spirit of the “course”, where some 2-element space plays the role of ambimorphic object. Classical Stone duality is the restriction of what you described to an ambimorphic adjunction induced by thinking of the 2-element set as a Boolean algebra in the category of Stone spaces (compact Hausdorff totally disconnected spaces), but you are quite right to think of it in a larger context of duality between topology and logic (or geometry and algebra, if you prefer). I should say off the bat that I am by no means an expert on this subject; I just find the general area sort of pretty, and a pleasant meeting ground for a number of fields of mathematics, which could be of general interest to students interested in exploring some nontrivial mathematics.

Perhaps it would be more fair to think of the general subject area as a kind of Gestalt which would include, e.g., Gelfand-Naimark duality for commutative algebras, not restricting just to 2-element ambimorphic objects. Johnstone’s book Stone Spaces is rather a comprehensive treatment of what I have in mind.

April 18, 2008 at 4:27 pm

Urs Schreiber: Thanks, Todd.

Since you mention Gelfand-Naimark: can you comment on how you see the work of Heunen and Spitters arXiv:0709.4364 fit into the big picture?

Despite the title and advertisement, they essentially consider, I think, the theory of commutative $C^*$-algebras and its relation to topological spaces internal to certain topoi, where it effectively becomes the theory of non-commutative $C^*$-algebras and ??? spaces.

This sounds like it should sit nicely somewhere in the big picture of Stone duality.

April 23, 2008 at 2:12 pm

Boolean algebras, Galois connections, and double negation « Vishal Lama’s blog: […] underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a […]

April 24, 2008 at 7:50 pm

Todd Trimble: Urs,

Sorry for the delay in reply. I hadn’t really looked at Heunen-Spitters until you asked about it here. To be honest, I don’t have anything wonderful to say about it yet, but politician-like, I’ll answer a question different from the one you asked, and maybe I’ll even try faking some sort of reply to the actual question!

At one “big picture” level, I am regarding the panoply of dualities which fall under the rubric “Stone duality” as fitting in the niche of Chu space duality. That is, each participating category in a Stone duality (induced by an ambimorphic structure on a 2-element set) embeds in the category Chu, in such a way that the given Stone duality is the restriction of the self-duality of Chu induced by homming into the canonical 2-element Chu space. What I find sort of curious is that Chu is an archetypal *-autonomous category, and the notion of *-autonomous category is itself a categorification of the notion of Boolean algebra (where the self-duality is the negation operator). So Stone duality is in this way a kind of categorification of the self-duality of a Boolean algebra given by taking negation!

I’m still feeling my way into this, so I can’t comment much further on the deep meaning of this — but it’s one of these half-glimpses into the almost terrifyingly vertiginous world of categorified mathematics.

As you mention, the Heunen-Spitters paper internalizes some classical Gelfand-Naimark duality in a topos. There one does not have choice principles (e.g. Boolean prime ideal theorem) satisfied in the internal logic, so the form of the classical dualities changes a little — instead of dealing with classical topological spaces, one has to deal with locales (“pointless topology”). There is an appropriate notion of compact regular locale (as opposed to compact regular space) to which the topos-theoretic version of Gelfand-Naimark applies, as they explain. I myself would be somewhat interested in understanding what sorts of theories are classified by the toposes they study, maybe to better understand the nature of ??? spaces (or ??? locales). Is all this supposed to be a topos-theoretic rendering of non-commutative geometry?

It might be fun to categorify a bit further. Chu spaces are one way of embracing the various categories and their dualities, but another method could be the systematic replacement of a structure (like a Boolean algebra or a Stone space) by an appropriate sheaf topos, and to study them as objects of the 2-category of toposes (or special types of toposes, like so-called etendue). I am presuming that the various categories which come under “Stone duality” all embed faithfully and fully (in the 1-category of toposes and iso classes of geometric morphisms), and the duality might take a nice form in that 2-category. That’s a potential “big picture” in which the Heunen-Spitters paper could be placed — but it’s admittedly murky to me at the moment.

April 25, 2008 at 1:38 am

Urs Schreiber: It was my impression early on that this is what it *should* be regarded as. My impression is that this is how the approach should be advertised, while the relation to quantum theory is much more indirect and secondary. So far.

Could you remind me what that category is?

That sounds interesting.

Hm, so as a Chu space, is $\mathbf{2}$ not a Heyting algebra?

April 25, 2008 at 2:45 am

Todd Trimble: The “classical” notion of Chu space is that it’s a pair of sets $(A, X)$ equipped with a map $r: A \times X \to \mathbf{2}$. That’s it! A map between Chu spaces $(A, X, r)$, $(B, Y, s)$ consists of an “adjoint pair” of maps $f: A \to B$, $g: Y \to X$ — adjoint in the sense that $s(f(a), y) = r(a, g(y))$ for all $a \in A$, $y \in Y$.

(The receiver doesn’t have to be the set of truth values $\mathbf{2}$; in fact there is a Chu construction which takes as input any finitely complete symmetric monoidal closed category $V$ equipped with a chosen object $d$, and produces a *-autonomous category $\mathrm{Chu}(V, d)$ whose objects are triples $(a, x, r: a \otimes x \to d)$ and whose maps are adjoint pairs. But Chu spaces as defined above can be considered the classical case.)
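[The classical definition transcribes directly into code; here is a minimal sketch of my own — the function `is_chu_map` and the finite dictionaries are invented for illustration — that checks the adjointness condition $s(f(a), y) = r(a, g(y))$ on a tiny example:]

```python
# A classical Chu space here is (A, X, r), with r a dict giving the
# pairing A × X -> {0, 1}; a Chu map (A, X, r) -> (B, Y, s) is a pair
# of functions f: A -> B, g: Y -> X satisfying the adjointness condition.

def is_chu_map(f, g, A, X, r, B, Y, s):
    return all(s[(f[a], y)] == r[(a, g[y])] for a in A for y in Y)

# Both Chu spaces are ({0,1}, {0,1}) with pairing "equality".
A = X = B = Y = [0, 1]
r = {(a, x): int(a == x) for a in A for x in X}
s = dict(r)

ident = {0: 0, 1: 1}
swap = {0: 1, 1: 0}
print(is_chu_map(ident, ident, A, X, r, B, Y, s))  # True
print(is_chu_map(swap, swap, A, X, r, B, Y, s))    # True: a = swap(y) iff swap(a) = y
```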

This seems like a very innocent construction, and yet a great many familiar categories embed faithfully and fully in Chu: sets, posets, topological spaces, Boolean algebras, completely distributive lattices, vector spaces over a field — as I say, all the categories which participate in classical Stone-type dualities find a natural home in Chu. Vaughan Pratt and his co-workers have developed this idea extensively, and have studied Chu spaces as a model of linear logic.

The Wikipedia article isn’t bad as a start, and it links to this webpage where there are many resources given. A useful reference is Pratt’s Coimbra notes (although my version of those notes from the 1999 conference there had a lot of errors in it; maybe they’ve been mostly weeded out by now).

I’m not quite sure what you’re asking at the end, but I’m guessing you’re asking whether the categorified “not” (as an object of Chu) is an internal Heyting algebra. (Just to be clear, this object $\neg$ is the Chu space whose pairing map is the obvious isomorphism.) To be honest, I hadn’t thought of it as carrying Heyting algebra structure, and I’m not sure I needed to, but let me get back to you after I have a chance to do some back-of-envelope calculations. (Jim Dolan has just called me for an Internet meeting, so it may be a while before I get to it — it’s an interesting question though, and it may come in handy.)

April 25, 2008 at 4:30 pm

Todd Trimble: Urs,

I’m not sure this was your question, but no, the Chu space I call $\neg$ is *not* a Heyting algebra, and in fact there are conceptual reasons why it can’t carry any interesting algebraic structure (in the sense of Lawvere algebraic theories) whatsoever: the only maps from finite products $\neg \times \neg$, $\neg \times \neg \times \neg$, etc. to $\neg$ are projection maps. The uninterestingness of $\neg$ is actually more interesting than it sounds: if it did carry some non-trivial internal algebraic structure, then that type of structure would get passed on to all the objects of the category Chu, essentially because internally homming into $\neg$ is a contravariant equivalence on Chu.

I can say more about this if you want, but it might be more useful to consider some examples of familiar categories and how they embed in Chu. For example, define a functor $\mathrm{Set} \to \mathrm{Chu}$ by

$X \mapsto (X, \mathbf{2}^X, \mathrm{ev})$, where $\mathrm{ev}$ denotes an evaluation map. Or, define a functor from posets to Chu spaces by

$X \mapsto (UX, \mathrm{Pos}(X, \mathbf{2}), \mathrm{ev})$, where $U$ denotes the underlying set, and $\mathrm{Pos}(X, \mathbf{2})$ denotes the set of poset maps from $X$ to $\mathbf{2}$. Or again, define a functor $\mathrm{Bool} \to \mathrm{Chu}$ by

$B \mapsto (UB, \mathrm{Bool}(B, \mathbf{2}), \mathrm{ev})$, where this time the 2-element set is regarded as a Boolean algebra. And so on. Thus, intuitively, we can think of the 2-element set as an actor which can play many different structural roles, e.g., as set, poset, Boolean algebra, Stone space, algebraic lattice, etc. etc. (with some of these roles paired up when 2 is considered in an ambimorphic context), according to how we restrict from Chu spaces to various full subcategories of structures like $\mathrm{Set}$, $\mathrm{Pos}$, $\mathrm{Bool}$, etc. But since Chu is therefore this big “one-size-fits-all” category, the object $\neg$ *qua* Chu space must correspondingly be as neutral and structureless as possible — maybe that’s a categorified meaning of “not”: the Chu space $\neg$ must be empty or devoid of any particular structure (like Heyting algebra or Boolean algebra structure or whatever).

June 22, 2008 at 4:55 pm

Basic category theory, I « Todd and Vishal’s blog: […] June 22, 2008 in Category Theory, Category Theory for Beginners by Todd Trimble Tags: cartesian product, universal property After a long hiatus (sorry about that!), I would like to resume the series on Stone duality. You may recall I started this series by saying that my own point of view on mathematics is strongly informed by category theory, followed by a little rant about the emotional reactions that category theory seems to excite in many people, and that I wouldn’t be “blathering” about categories unless a strong organic need was felt for it. Well, it’s come to that point: to continue talking sensibly about Stone duality, I really feel some basic concepts of category theory are now in order. So: before we pick up the main thread again, I’ll be talking about categories up to the point of the central concept of adjoint pairs, generalizing what we’ve discussed before in the context of truth-valued matrix algebra. […]

July 12, 2008 at 7:37 am

Basic Category Theory, III: Representability, adjoints, and the Yoneda lemma « Todd and Vishal’s blog: […] not hard to check that this indeed defines a functor. It generalizes the truth-valued pairing we defined earlier for […]