Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice (formalizing the conjunction operator “and”), as a first step on the way to introducing Boolean algebras. Our larger goal in this series will be to discuss Stone duality, where it is shown how Boolean algebras can be represented “concretely”, in terms of the topology of their so-called Stone spaces — a wonderful meeting ground for algebra, topology, logic, geometry, and even analysis!

In this installment we will look at the notion of lattice and various examples of lattice, and barely scratch the surface — lattice theory is a very deep and multi-faceted theory with many unanswered questions. But the idea is simple enough: lattices formalize the notions of “and” and “or” together. Let’s have a look.

Let $X$ be a poset. If $a, b$ are elements of $X$, a *join* of $a$ and $b$ is an element $j$ with the property that for any $x \in X$,

$j \leq x$ if and only if ($a \leq x$ and $b \leq x$).

For a first example, consider the poset $PX$ of subsets of $X$ ordered by inclusion. The join in that case is given by taking the union, i.e., we have

$A \cup B \subseteq C$ if and only if ($A \subseteq C$ and $B \subseteq C$).

Given the close connection between unions of sets and the disjunction “or”, we can therefore say, roughly, that joins are a reasonable mathematical way to formalize the structure of disjunction. We will say a little more on that later when we discuss mathematical logic.
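The union-as-join claim can be checked by brute force. Here is a small sketch (the setup and names below are ours, not part of the post): for a three-element set $X$, the union $A \cup B$ satisfies the defining property of the join in $PX$ against every candidate $C$.

```python
from itertools import chain, combinations

# Powerset of X = {1, 2, 3}, ordered by inclusion (frozenset's <= is subset order).
X = {1, 2, 3}
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

A, B = frozenset({1}), frozenset({2, 3})
join = A | B  # candidate join: the union

# Defining property of the join: A u B <= C  iff  (A <= C and B <= C).
for C in subsets:
    assert (join <= C) == (A <= C and B <= C)
```

The same loop works for any finite $X$; only the size of the powerset changes.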

Notice there is a close formal resemblance between how we defined joins and how we defined meets. Recall that a meet of $a$ and $b$ is an element $m$ such that for all $x$,

$x \leq m$ if and only if ($x \leq a$ and $x \leq b$).

Curiously, the logical structure in the definitions of meet and join is essentially the same; the only difference is that we switched the inequalities (i.e., replaced all instances of $\leq$ by $\geq$). This is an instance of a very important concept. *In the theory of posets, the act of modifying a logical formula or theorem by switching all the inequalities but otherwise leaving the logical structure the same is called taking the dual of the formula or theorem.* Thus, we would say that the dual of the notion of meet is the notion of join (and vice-versa). This turns out to be a very powerful idea, which in effect will allow us to cut our work in half.

(Just to put in some fine print or boilerplate, let me just say that a *formula* in the first-order theory of posets is a well-formed expression in first-order logic (involving the usual logical connectives and logical quantifiers and equality over a domain $X$), which can be built up by taking $\leq$ as a primitive binary predicate on $X$. A *theorem* in the theory of posets is a sentence (a closed formula, meaning that all variables are bound by quantifiers) which can be deduced, following standard rules of inference, from the axioms of reflexivity, transitivity, and antisymmetry. We occasionally also consider formulas and theorems in second-order logic (permitting logical quantification over the power set $PX$), and in higher-order logic. If this legalistic language is scary, don’t worry — just check the appropriate box in the End User Agreement, and reason the way you normally do.)

The critical item to install before we’re off and running is the following meta-principle:

**Principle of Duality**: If a logical formula F is a theorem in the theory of posets, then so is its dual F’.

**Proof**: All we need to do is check that the duals of the axioms in the theory of posets are also theorems; then F’ can be proved just by dualizing the entire proof of F. Now the dual of the reflexivity axiom, $x \leq x$, is itself! — and of course an axiom is a theorem. The transitivity axiom, ($x \leq y$ and $y \leq z$) implies $x \leq z$, is also self-dual (when you dualize it, it looks essentially the same except that the variables $x$ and $z$ are switched — and there is a basic convention in logic that two sentences which differ only by renaming the variables are considered syntactically equivalent). Finally, the antisymmetry axiom, ($x \leq y$ and $y \leq x$) implies $x = y$, is also self-dual in this way. Hence we are done.

So, for example, by the principle of duality, we know automatically that the join of two elements is unique when it exists — we just dualize our earlier theorem that the meet is unique when it exists. The join of two elements $a$ and $b$ is denoted $a \vee b$.

Be careful, when you dualize, that any shorthand you used to abbreviate an expression in the language of posets is also replaced by its dual. For example, the dual of the notation $a \wedge b$ is $a \vee b$ (and vice-versa of course), and so the dual of the associativity law which we proved for meet is (for all $a, b, c$) $(a \vee b) \vee c = a \vee (b \vee c)$. In fact, we can say

**Theorem**: The join operation is associative, commutative, and idempotent.

**Proof**: Just apply the principle of duality to the corresponding theorem for the meet operation.
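As a sanity check on the theorem (not a substitute for the duality argument!), the three laws can be verified by brute force in a concrete poset where joins are least common multiples. A hypothetical Python sketch, with our own helper names:

```python
from math import gcd
from itertools import product

# In the divisibility order on the natural numbers, the join of x and y is lcm(x, y).
def lcm(x, y):
    return x * y // gcd(x, y)

elements = range(1, 13)
for a, b, c in product(elements, repeat=3):
    assert lcm(lcm(a, b), c) == lcm(a, lcm(b, c))  # associativity
    assert lcm(a, b) == lcm(b, a)                  # commutativity
    assert lcm(a, a) == a                          # idempotency
```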

Just to get used to these ideas, here are some exercises.

- State the dual of the Yoneda principle (as stated here).
- Prove the associativity of join from scratch (from the axioms for posets). If you want, you may invoke the dual of the Yoneda principle in your proof. (Note: in the sequel, we will apply the term “Yoneda principle” to cover both it and its dual.)

To continue: we say a poset is a *join-semilattice* if it has all finite joins (including the empty join, which is the bottom element $0$ satisfying $0 \leq x$ for all $x$). A *lattice* is a poset which has all finite meets and finite joins.

Time for some examples.

- The set of natural numbers 0, 1, 2, 3, … under the divisibility order ($x \leq y$ if $x$ divides $y$) is a lattice. (What is the join of two elements? What is the bottom element?)
- The set of natural numbers under the usual order is a join-semilattice (the join of two elements here is their maximum), but not a lattice (because it lacks a top element).
- The set of subsets of a set is a lattice. The join of two subsets is their union, and the bottom element is the empty set.
- The set of subspaces of a vector space is a lattice. The meet of two subspaces is their ordinary intersection; the join of two subspaces $V$, $W$ is the subspace which they *jointly generate* (i.e., the set of vector sums $v + w$ with $v \in V$, $w \in W$, which is closed under addition and scalar multiplication).
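The subspace example can be made concrete over the two-element field, where a subspace of a small space is just a finite set of vectors closed under addition. The following sketch (our own encoding, not from the post) computes the join of two lines in $(\mathbb{Z}/2)^3$ as the set of sums $v + w$, and confirms it is strictly larger than the naive union:

```python
from itertools import product

def add(v, w):
    # vector addition in (Z/2)^3
    return tuple((a + b) % 2 for a, b in zip(v, w))

V = {(0, 0, 0), (1, 0, 0)}   # the line spanned by e1
W = {(0, 0, 0), (0, 1, 0)}   # the line spanned by e2

# Join = subspace jointly generated: all sums v + w with v in V, w in W.
join = {add(v, w) for v, w in product(V, W)}

assert join == {(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)}
assert V | W != join          # the naive union is not the join here
```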

The join in the last example is not the naive set-theoretic union of course (and similar remarks hold for many other concrete lattices, such as the lattice of all subgroups of a group, and the lattice of ideals of a ring), so it might be worth asking if there is a uniform way of describing joins in cases like these. Certainly the idea of taking some sort of *closure* of the ordinary union seems relevant (e.g., in the vector space example, close up the union of and under the vector space operations), and indeed this can be made precise in many cases of interest.

To explain this, let’s take a fresh look at the definition of join: the defining property was

$a \vee b \leq x$ if and only if ($a \leq x$ and $b \leq x$).

What this is really saying is that among all the elements $x$ which “contain” both $a$ and $b$, the element $a \vee b$ is the absolute minimum. This suggests a simple idea: why not just take the “intersection” (i.e., meet) of all such elements to get that absolute minimum? In effect, construct joins as certain kinds of meets! For example, to construct the join of two subgroups $H$, $K$, take the intersection of all subgroups containing both $H$ and $K$ — that intersection is the group-theoretic closure of the union $H \cup K$.
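Here is that recipe carried out for subgroups of $\mathbb{Z}/12$ (a hedged illustration; the helper names and the choice of group are ours): the join of two subgroups is computed as the intersection of all subgroups containing both.

```python
# Subgroups of Z/12 are exactly the cyclic subgroups generated by divisors of 12.
def subgroup(d, n=12):
    return frozenset(d * k % n for k in range(n))

subgroups = [subgroup(d) for d in (1, 2, 3, 4, 6, 12)]

H, K = subgroup(4), subgroup(6)        # {0, 4, 8} and {0, 6}
upper = [G for G in subgroups if H <= G and K <= G]
join = frozenset.intersection(*upper)  # meet of all subgroups containing both

assert join == subgroup(2)  # the group-theoretic closure of H u K
```

Note that the naive union {0, 4, 6, 8} is not a subgroup (it misses 4 + 6 = 10); the closure fills it out to the multiples of 2.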

There’s a slight catch: this may involve taking the meet of infinitely many elements. But there is no difficulty in saying what this means:

**Definition**: Let $X$ be a poset, and suppose $S \subseteq X$. The *infimum* of $S$, if it exists, is an element $a$ such that for all $x \in X$, $x \leq a$ if and only if $x \leq s$ for all $s \in S$.

By the usual Yoneda argument, infima are unique when they exist (you might want to write that argument out to make sure it’s quite clear). We denote the infimum of $S$ by $\inf(S)$.

We say that a poset is an *inf-lattice* if there is an infimum for every subset. Similarly, the *supremum* of $S$, if it exists, is an element $a$ such that for all $x$, $a \leq x$ if and only if $s \leq x$ for all $s \in S$. A poset is a *sup-lattice* if there is a supremum for every subset. [I'll just quickly remark that the notions of inf-lattice and sup-lattice belong to *second-order* logic, since they involve quantifying over all subsets $S \subseteq X$ (i.e., over all elements of $PX$).]

Trivially, every inf-lattice is a meet-semilattice, and every sup-lattice is a join-semilattice. More interestingly, we have the

**Theorem**: Every inf-lattice is a sup-lattice (!). Dually, every sup-lattice is an inf-lattice.

**Proof**: Suppose $X$ is an inf-lattice, and let $S \subseteq X$. Let $U$ be the set of *upper bounds* of $S$. I claim that $a := \inf(U)$ (“least upper bound”) is the supremum of $S$. Indeed, from $a = \inf(U)$ and the definition of infimum, we know that $a \leq x$ if $x \in U$, i.e., if $s \leq x$ for all $s \in S$. On the other hand, we also know that if $s \in S$, then $s \leq u$ for every $u \in U$, and hence $s \leq a$ by the defining property of infimum (i.e., $a$ really is an upper bound of $S$). So, if $a \leq x$, we conclude by transitivity that $s \leq x$ for every $s \in S$. This completes the proof.

**Corollary**: Every *finite* meet-semilattice is a lattice.
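The proof's construction of sups from infs can be run directly in a finite inf-lattice. In this sketch (assuming the powerset of a four-element set as our lattice, with intersection as inf; the names are ours), `sup` is literally "the inf of the set of upper bounds":

```python
from itertools import chain, combinations

# Powerset of {1, 2, 3, 4} under inclusion: a finite inf-lattice, inf = intersection.
X = {1, 2, 3, 4}
P = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def sup(S):
    # The proof's recipe: sup(S) is the inf of the set U of upper bounds of S.
    upper = [u for u in P if all(s <= u for s in S)]
    return frozenset.intersection(*upper)

assert sup([frozenset({1}), frozenset({2, 3})]) == frozenset({1, 2, 3})
assert sup([]) == frozenset()   # the empty sup is the bottom element
```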

Even though every inf-lattice is a sup-lattice and conversely (sometimes people just call them “complete lattices”), there are important distinctions to be made when we consider what is the appropriate notion of *homomorphism*. The notions are straightforward enough: a *morphism of meet-semilattices* is a function $f: X \to Y$ which takes finite meets in $X$ to finite meets in $Y$ ($f(a \wedge b) = f(a) \wedge f(b)$, and $f(1) = 1$, where the 1’s denote top elements). There is a dual notion of morphism of join-semilattices ($f(a \vee b) = f(a) \vee f(b)$ and $f(0) = 0$, where the 0’s denote bottom elements). A *morphism of inf-lattices* is a function $f: X \to Y$ such that $f(\inf(S)) = \inf(f(S))$ for all subsets $S \subseteq X$, where $f(S)$ denotes the direct image of $S$ under $f$. And there is a dual notion of morphism of sup-lattices: $f(\sup(S)) = \sup(f(S))$. Finally, a *morphism of lattices* is a function which preserves all finite meets and finite joins, and a morphism of complete lattices is one which preserves all infs and sups.

Despite the theorem above, it is **not** true that a morphism of inf-lattices must be a morphism of sup-lattices. Nor is it true that a morphism of finite meet-semilattices must be a lattice morphism. Therefore, in contexts where homomorphisms matter (which is just about all the time!), it is important to keep the qualifying prefixes around and keep the distinctions straight.

**Exercise**: Come up with some examples of morphisms which exhibit these distinctions.

## 27 comments


April 4, 2008 at 7:35 pm

Vishal: Solutions to a few exercises.

(1) Dual of the Yoneda principle:

If (for all $x$) $a \leq x$ iff $b \leq x$, then $a = b$.

(2) For all $x$ in the poset,

$(a \vee b) \vee c \leq x$ iff $a \vee b \leq x$ and $c \leq x$

iff $a \leq x$ and $b \leq x$ and $c \leq x$

iff $a \leq x$ and $b \vee c \leq x$

iff $a \vee (b \vee c) \leq x$.

Hence, $(a \vee b) \vee c = a \vee (b \vee c)$ by the dual of Yoneda.

(3) If our poset is the set of natural numbers with divisibility as the order, then $x \vee y = \mathrm{lcm}(x, y)$. Also, bottom ($0$) $= 1$, since 1 divides every natural number.

(4) Uniqueness of $\inf(S)$: Suppose $m, n$ are both infima of $S$. Then, for all $x$ and all $s \in S$, $x \leq m$ iff $x \leq s$ iff $x \leq n$, and so $m = n$ by Yoneda.

July 13, 2011 at 12:10 pm

engy sameer: \begin{deff}(JOIN-SEMILATTICE). A poset $(S,\leq)$ is a join-semilattice if for all elements $x$ and $y$ of $S$, the least upper bound (join) of the set $\{x,y\}$ exists.
\end{deff}

Dually, we define a poset $(S,\leq)$ as a meet-semilattice if for all elements $x$ and $y$ of $S$, the greatest lower bound (meet) of the set $\{x,y\}$ exists.

The join and meet of $x$ and $y$ are denoted by $x\vee y$ and $x\wedge y$, respectively. Clearly, $\vee$ and $\wedge$ define binary operations on semilattices. By induction, it is easy to see that the existence of binary suprema implies the existence of all non-empty finite suprema. The same holds true for infima.

\textbf{Semilattices as algebraic structures}. Consider an algebraic structure given by $(S,\vee)$, $\vee$ being a binary operation.

\begin{deff}(JOIN-SEMILATTICE). An algebra $(S,\vee)$ is a join-semilattice if the following identities hold for all elements $x$, $y$, and $z$ in $S$:

\quad $x\vee(y\vee z)=(x\vee y)\vee z$ (associativity)

\quad $x\vee y=y\vee x$ (commutativity)

\quad $x\vee x=x$ (idempotency)
\end{deff}

In this case, $\vee$ is called join. A meet-semilattice is an algebraic structure $(S,\wedge)$, where the meet $\wedge$ satisfies the same axioms as above, the difference in notation being only for convenience, usually for dual interpretations in specific applications.

In other words, $(S,\vee)$ is a semilattice if $(S,\vee)$ is an idempotent, commutative semigroup.

Sometimes it is desirable to consider join- or meet-semilattices with least (greatest) elements, which are defined as idempotent, commutative monoids. Explicitly, $(S,\vee,0)$ is a join-semilattice with least element if $(S,\vee)$ is a join-semilattice and in addition we have $x\vee 0=x$ for all $x\in S$. Similarly, $(S,\wedge,1)$ is a meet-semilattice with greatest element if $(S,\wedge)$ is a meet-semilattice and $x\wedge 1=x$ for all $x\in S$.

Given two join-semilattices $(S,\vee)$ and $(T,\vee)$, a homomorphism of (join-)semilattices is a function $f:S\longrightarrow T$ with the property that $f(x\vee y)=f(x)\vee f(y)$; that is, $f$ is a homomorphism of the two semigroups. If the join-semilattices are furthermore equipped with a least element 0, then $f$ should also be a morphism of monoids, i.e. one additionally requires that $f(0)=0$. In the order-theoretical formulation, these conditions just state that a homomorphism of join-semilattices is a function that preserves binary joins and, in the latter case, also least elements. The conditions for homomorphisms of meet-semilattices are the obvious duals of these definitions. Note that any homomorphism of (both join- and meet-) semilattices is necessarily monotone with respect to the associated ordering relation, that is, $x\leq y\Longrightarrow f(x)\leq f(y)$, or equivalently $y=x\vee y \Longrightarrow f(y)=f(x)\vee f(y)$. This follows from $f(x)\vee f(y)=f(x\vee y)=f(y)$.

\begin{deff}A join-semilattice $S$ is distributive if for all $a, b, c \in S$ with $a\vee b\geq c$ there are $a'\leq a$, $b'\leq b$ with $a'\vee b'= c$.
\end{deff}

A semilattice $S$ is called a modular semilattice if for all $a, b, c\in S$, $c\leq a \leq b\vee c$ implies the existence of $b_{1}\leq b$ such that $a = b_{1}\vee c$.

\begin{deff}(COMPLETE SEMILATTICE). A semilattice $(S,\vee)$ is complete if for every non-empty subset $S'$ of $S$, $S'$ has a least upper bound in $S$ with respect to $\leq$.\end{deff}

A join-semilattice $S$ is called complete if every $A\subset S$ has a least upper bound $\sup(A)\in S$.

By definition, a map $f : (S,\leq_{S})\longrightarrow (T,\leq_{T})$ of complete join-semilattices is an order-preserving function which also satisfies $f(\sup(A)) = \sup(f(A))$ for all $A\subset S$.

July 13, 2011 at 12:19 pm

Todd Trimble: @engy: to get latex to compile properly here (and in other wordpress sites, such as Terry Tao’s), add the word ‘latex’ immediately after the first dollar sign in math mode.

April 4, 2008 at 7:58 pm

Todd Trimble: Good, Vishal! I’ll have to think up some harder ones. :-)

Just a little point of caution on (4): the quantification over s in S needs to be placed a little differently: a <= m iff (for all s in S: a <= s) iff a <= n.

Okay, here are a couple more:

1. In a lattice, prove that the absorption laws hold:

a /\ (a \/ b) = a = a \/ (a /\ b).

2. In the poset PX, the distributive law holds:

a /\ (b \/ c) = (a /\ b) \/ (a /\ c).

Can you think of a lattice in which the distributive law fails?

April 5, 2008 at 12:07 pm

Vishal: I need a slight clarification on the theorem that states that “every inf-lattice is a sup-lattice.” Let us consider the poset consisting of the natural number 1 and all the prime numbers, with ordinary divisibility as our order. This is perhaps a pathological case, since reflexivity clearly holds whereas transitivity and antisymmetry hold vacuously! Now, if I am correct, for any subset $S$, $\inf(S)$ exists. So, this poset is an inf-lattice. And using the theorem, we conclude that it is also a sup-lattice, which means every subset $S$ has a supremum. But what is $\sup(\{p, q\})$ for distinct primes $p$ and $q$? Does it really exist?

Also, can we consider the set of natural numbers including 0, with ordinary divisibility, to be a poset? After all, zero does not divide itself.

——————-

To prove the absorption laws in a lattice, we note that for all $x$,

$x \leq a \wedge (a \vee b)$ iff ($x \leq a$ and $x \leq a \vee b$)

iff $x \leq a$ … (this follows from the meet property, since $x \leq a$ implies $x \leq a \vee b$)

Hence, by Yoneda, $a \wedge (a \vee b) = a$.

We can now dualize the above law to get the other absorption law $a \vee (a \wedge b) = a$.

April 5, 2008 at 2:56 pm

Todd Trimble: Very good question, Vishal! This is exactly the sort of issue that can be confusing at first.

The resolution in the example you give is that the *empty* subset has no inf! That would be the top element of the poset, which we don’t have. If you adjoin one (call it 0 for this example), then all is well again.

As for your question about the poset $\mathbb{N}$: just to be clear, let’s say “$x$ divides $y$” means there exists $z$ such that $y = xz$. Under this definition, 0 indeed divides 0 — your objection is essentially that there is no *unique* witness $z$, but I am not demanding that here. Good question, though, as the same doubt may have occurred to others.

For the absorption laws — good call in observing that those equations are dual (so you only need to prove one). And yes, that second clause already follows from the first clause as you point out, so you can condense both to just $x \leq a$. Good!

April 6, 2008 at 5:03 am

Vishal: I am not sure why, but I am unable to prove the distributive law directly (from “first principles”)! Am I missing something?

Well, in order to circumvent this problem, I came up with the following. We know from Dedekind’s lemma that the partial order of a poset $X$ is faithfully represented in its power set $PX$, partially ordered by set inclusion. Then the meet of two elements (which are subsets) in $PX$ is simply their intersection and their join is simply their union. So, if we can prove the distributive law is valid in $PX$, can we not claim that the distributive law is also valid in $X$? Isn’t this why Dedekind’s lemma is particularly useful? (And, proving that the distributive law is valid in $PX$ is quite elementary, as we know.)

April 6, 2008 at 12:22 pm

Todd Trimble: Your idea of using Dedekind’s lemma is a very interesting one. The catch is that the embedding $i: X \to PX$, taking $a$ to $\{x : x \leq a\}$, does not preserve joins. For example, $i(a) \cup i(b)$, the set of $x$ for which $x \leq a$ or $x \leq b$, almost never contains the element $a \vee b$ which belongs to $i(a \vee b)$ — I think that holds if and only if $X$ is totally ordered. (And the embedding *never* preserves the empty join!)

The best you can say is that $i(a) \cup i(b) \subseteq i(a \vee b)$ and $i(a \wedge b) = i(a) \cap i(b)$ (and you can easily verify these facts from first principles). The embedding does however preserve any inf which exists in $X$ — and this in itself is a nice exercise.

Anyway, the non-preservation of joins under $i$ means that information about behavior of joins in $PX$, including how they interact with meets, will not reflect back to corresponding behavior in $X$.

Since try as you might you can’t prove the distributive law just on the basis of the lattice axioms, the suspicion naturally arises that it just doesn’t hold generally. By now we have some examples to test things on — check them out! In particular, you came up with a pathological example (capped with a top element to make it a lattice) — this could come in handy for the sake of counterexamples.
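Following Todd's suggestion to test things on examples: the standard five-element "diamond" lattice $M_3$ (a bottom, three pairwise-incomparable atoms, and a top) exhibits a failure of distributivity. A small sketch of its meet and join tables (the encoding and names are ours):

```python
# M3: bottom "o", atoms "a", "b", "c", top "i". Build meet/join tables
# directly from the Hasse diagram.
meet, join = {}, {}
elems = ["o", "a", "b", "c", "i"]
for x in elems:
    for y in elems:
        if x == y:
            meet[x, y] = join[x, y] = x
        elif "o" in (x, y):
            meet[x, y], join[x, y] = "o", (y if x == "o" else x)
        elif "i" in (x, y):
            meet[x, y], join[x, y] = (y if x == "i" else x), "i"
        else:  # two distinct atoms: meet is bottom, join is top
            meet[x, y], join[x, y] = "o", "i"

lhs = meet["a", join["b", "c"]]             # a /\ (b \/ c) = a /\ i = a
rhs = join[meet["a", "b"], meet["a", "c"]]  # (a /\ b) \/ (a /\ c) = o \/ o = o
assert (lhs, rhs) == ("a", "o")             # distributivity fails
```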

April 6, 2008 at 5:06 pm

Vishal: There are a couple of ideas floating around in my mind right now. Let me just throw them out without being very rigorous for now.

Suppose is a set of propositions such that our operation is (“logical implication”), where such that holds and for all . Then, our operation induces a partial order on . In addition, whenever . So, is totally ordered.

Now, define our poset map where each is taken to . Then, it seems that using Dedekind’s lemma, we can conclude that PMI is equivalent to the strong form of PMI!

————————–

Now, we can perhaps categorify things a little bit here (?). (Hope I am doing this right!) We may consider our collection of propositions as a category, where the propositions are the objects and there is a morphism $p \to q$ precisely when $p$ implies $q$. The morphisms in this case are non-invertible. We know identities exist because $p$ implies $p$ for any object $p$. Also, our identity law holds. In addition, the associative law holds for composable morphisms.

April 7, 2008 at 12:43 am

Todd Trimble: Those are some pretty neat ideas, Vishal! I’d never pictured the connection between ordinary induction and strong induction in the light of Dedekind’s lemma, so you’ve taught me something new here.

I think it would be great if you would develop this further and maybe write something on this. I wouldn’t mind discussing it some more with you in email; I had some thoughts on this in connection with another great idea of Dedekind: his cuts for defining the real numbers. (I could talk about it here, and was about to do so, but the comment was getting to be longer than I wanted it.)

Your categorification idea actually touches upon a big topic, some of which we at the n-Category Café have talked about recently. Yes, you’re quite right: posets can be seen as special types of categories, namely ones in which there is at most one morphism for any pair of objects (and where we write $x \leq y$ if there is such a morphism). In fact, the point of view I take as I write these blog entries is precisely that: to think of posets, lattices, etc. as particular sorts of small categorical structures, and to think of meets, joins, etc. as objects in the category which enjoy universal mapping properties, and to think of the various sorts of morphism (such as the Dedekind embedding) as functors, and so on. I think it might be heavy-handed to launch immediately into a detailed description of all these categorical tools, but they are secretly in the back of my mind as I write, and they guide the organization of concepts and calculations. From a pedagogical perspective, posets are thus “simplified categories”, which nevertheless allow one to showcase categorical ideas in action.

But a lot of recent research actually focuses on turning this around. That is, many poset structures can be seen as “pale shadows” of richer categorical structures which lurk in the background, waiting to be discovered — and this is closer to what is ordinarily meant by “categorification” (although this word actually carries a wide variety of meanings or senses, like the word “quantization”). An example would be the insight, made manifest in Linear Logic, that Boolean algebras are but special cases of so-called *-autonomous categories, which are far richer in structure. Similarly, it was traditional in mathematical logic to focus on provability (for given logical formulas, when is there a proof of an entailment?), but this is but the posetal shadow of a far richer discussion where one thinks of formulas as connoting data types, and instead of focusing on mere existence of a proof, actually *distinguishing* between various processes or programs which would lead from one type to another… Thousands of pages can be and have been written on this; for a taste of some of the excitement, I might recommend the recent “Rosetta Stone” paper by John Baez and Mike Stay, which is written for a broad audience [actually, for physicists with no presupposed background in logic].

April 8, 2008 at 9:01 pm

Distributivity, Topology, and Heyting algebras « Vishal Lama’s blog[...] by Todd Trimble Tags: distributive lattice, heyting algebra, intuitionistic logic, topology Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., [...]

April 23, 2008 at 2:09 pm

Boolean algebras, Galois connections, and double negation « Vishal Lama’s blog[...] definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships [...]

April 30, 2008 at 2:13 pm

Boolean algebras, Boolean rings, and baby Stone duality « Vishal Lama’s blog[...] between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting [...]

May 3, 2008 at 6:39 am

Vishal Lama: Todd, sorry about the delay in responding to your posts. But, here I am, now!

Okay, let us see if I’ve got some solutions to the last exercise of this post. (I will post them one by one.)

(i) Morphism of meet-semilattices

Note that X = {0, p, q, 1} is a poset ordered by divisibility, where p, q are prime numbers. Then, 0 is the top element and 1 is the bottom element. Clearly, X is a meet-semilattice. Also, for any x, x’ in X,

x /\ x’ = gcd(x, x’).

Also, note that Y = {{p,q}, {p}, {q}, {}} is a poset ordered by inclusion. Then, {p, q} is the top element and {} is the bottom element. Clearly, Y is also a meet-semilattice. Again, for any y, y’ in Y,

y /\ y’ is the intersection of sets y and y’.

Then, a morphism of meet-semilattices f: X –> Y is given by

f(0) = {p, q}, f(p) = {p}, f(q) = {q} and f(1) = {}.

May 3, 2008 at 6:57 am

Vishal Lama: (ii) Morphism of join-semilattices

We construct a similar example like we did in (i) above.

Note that X = {pq, p, q, 1} (where p, q are prime numbers) ordered by divisibility is a join-semilattice. Here, pq is the top element and 1 is the bottom element. Also, for any x, x’ in X, we have

x \/ x’ = lcm(x, x’).

Also, note that Y = {{p, q}, {p}, {q}, {}} ordered by inclusion is a join-semilattice. Here, {p, q} is the top element and {} is the bottom element. Also, for any y, y’ in Y, we have

y \/ y’ is the union of sets y and y’.

Then, a morphism of join-semilattices f: X –> Y is given by

f(pq) = {p, q}, f(p) = {p}, f(q) = {q} and f(1) = {}.

May 3, 2008 at 2:48 pm

Todd Trimble: Thanks for putting out this effort, Vishal — this could be very useful for anyone trying to follow my notes.

In case I didn’t make myself clear in the last exercise: I want an example of a morphism of inf-lattices which is not a morphism of sup-lattices (or a morphism of meet-semilattices which is not a morphism of join-semilattices). Your examples are morphisms of both because they are poset isomorphisms [see below], and (exercise!) poset isomorphisms f: X –> Y preserve any infs and sups which happen to exist in X.

However, the examples of X you cooked up will serve! Consider in each case the obvious inclusion map from X to the set of all natural numbers 0, 1, 2, … ordered by divisibility. What can you say?

Here’s another one to chew on. Define a poset isomorphism f: P –> Q to be a poset map for which there exists a poset map g: Q –> P such that the composite of f followed by g is the identity on P, and g followed by f is the identity on Q. Give an example of a bijective poset map which is not an isomorphism!

Interestingly, though, bijective morphisms of meet-semilattices *are* isomorphisms of meet-semilattices! This is actually a manifestation of what I was saying earlier, that a meet-semilattice can be defined as a set equipped with a binary operation which is commutative, associative, idempotent, and unital [having an identity] — i.e., we are dealing here with a purely algebraic notion (sets equipped with operations satisfying certain equations), and generally speaking, isomorphisms of algebraic structures of any given type are the same as bijective homomorphisms of structures of that type. But this sort of general abstract nonsense is leading us far afield, isn’t it?

May 3, 2008 at 4:55 pm

Vishal Lama: Todd, I need a slight clarification. You mentioned that the morphism f: X –> Y in example (i) holds for both meet-semilattices and join-semilattices. But X is not a join-semilattice, is it? For instance, p \/ q = lcm(p, q) is not defined in X, which implies f(p \/ q) is not defined as well, but f(p) \/ f(q), which is {p, q}, is clearly defined in Y. I think I am missing something here, perhaps.

May 3, 2008 at 5:35 pm

Todd Trimble: Yes, there are interesting subtleties. The set {0, p, q, 1}, considered in its own right, *is* a lattice! For example, taken over just the elements 0, p, q, 1, the element 0 is the absolute minimum of those elements which are upper bounds of p and q (it’s the only upper bound of those elements, obviously, within that set), and so 0 would have to be reckoned as the join p \/ q in that poset.

However, you’re right that when we consider {0, p, q, 1} as a sub-poset of the natural numbers N ordered by divisibility, the join p \/ q relative to that subset does not match the join relative to N. In other words, the inclusion is a morphism of posets which is not a lattice homomorphism, even though the domain and codomain are lattices.
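Todd's observation can be checked mechanically (a sketch with our own choices p = 2, q = 3): inside the four-element sub-poset the only upper bound of p and q is 0, so the join computed there disagrees with the join lcm(p, q) computed in all of N.

```python
from math import gcd

p, q = 2, 3
sub = [1, p, q, 0]  # the sub-poset {0, p, q, 1} of N under divisibility

def divides(x, y):
    # 0 divides only 0; otherwise the usual divisibility test
    return y % x == 0 if x != 0 else y == 0

# Upper bounds of {p, q} within the sub-poset:
upper = [u for u in sub if divides(p, u) and divides(q, u)]
assert upper == [0]                 # so p \/ q = 0 inside the sub-poset

assert (p * q) // gcd(p, q) == 6    # but p \/ q = lcm(p, q) = pq in N
```

So the inclusion of the sub-poset into N preserves meets (gcd) but not joins, which is exactly the kind of example the exercise asked for.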

July 12, 2008 at 3:35 pm

Basic Category Theory, III: Representability, adjoints, and the Yoneda lemma « Todd and Vishal’s blog[...] is the underpinning of a Principle of Duality in the theory of categories (extending the principle of duality in the theory of posets). As the construction of opposite categories suggests, the dual of a [...]

July 18, 2008 at 9:30 pm

Ultrafilter convergence, compactness, and the spectrum of a Boolean algebra « Todd and Vishal’s blog[...] notion of ultrafilter is dual, so an ultrafilter in a Boolean algebra is defined to be a subset [...]

September 11, 2008 at 12:16 am

ETCS: Internalizing the logic « Todd and Vishal’s blog[...] of arbitrary definable families of subsets, the power sets are internal inf-lattices. But inf-lattices are sup-lattices as well; on this basis we will be able to construct the colimits (e.g., finite sums, coequalizers) [...]

February 5, 2009 at 6:39 pm

Vishal Lama: I hadn’t taken note of this earlier, but noticed just a while ago that in the proof of the Principle of Duality in the theory of posets, we note that all three axioms (reflexivity, transitivity, and antisymmetry) are self-dual. That made me wonder if the following general idea is true: if there are two equivalent sets of axioms, X and Y, that both describe a particular structure (e.g. poset), and the axioms in X are all self-dual, then must all the axioms in Y also be self-dual? I hope that wasn’t such a terrible question to ask!

February 5, 2009 at 7:11 pm

Todd Trimble: Been quiet around here lately! I’ve been clocking in some time over at the newly created nLab attached to the n-Category Cafe, but sitting and working in a private corner of my own out of sight. I hope to break the silence soon.

The answer to your question is a bit fussy, and involves the distinction between a set of axioms and the theory they generate. In short, I’d say, “no, the axioms of Y need not be self-dual themselves, but the theory they generate, being equivalent to the theory for X, would be self-dual”.

By “theory generated by an axiom set”, I just mean the collection of all consequences which are derivable from the axioms using logical rules of inference. To give an example, take the theory of distributive lattices. One axiom set might be {finite meets are associative, commutative, idempotent, and have an identity; finite joins are associative, commutative, idempotent, and have an identity; meets distribute over joins}. That’s nine axioms. The set of the first eight axioms is self-dual [each axiom for finite meets has a dual counterpart for joins, and conversely]. But the dual of the ninth isn’t in there; it would say that joins distribute over meets. If you add that in as a tenth, then the axiom set (call it X, looking at your comment) would of course be self-dual.

Nevertheless, the *theory* generated by the nine axioms is self-dual, because the distributivity of joins over meets is a provable consequence of the nine axioms. So if the set of nine axioms is called Y, then I’d say that X and Y are equivalent axiom sets (that is, they generate the same theory), X is a self-dual axiom set, but Y is not.

February 5, 2009 at 7:15 pm

Todd Trimble: Forgot to mention the two absorption laws in the comment above. Throw those in.

February 5, 2009 at 8:12 pm

Vishal Lama: Thanks, Todd, that makes sense, of course!

Yeah, it’s been quiet here for some time now, but I have been taking this opportunity to go through all of your posts on “Stone Duality”, one by one, and they are all such a joy to read! I think having Steve Awodey’s book, Category Theory, by my side has also greatly helped me in appreciating the ideas contained in your posts.

February 5, 2009 at 8:21 pm

Todd Trimble: Thanks!

I haven’t seen Steve for ages, but I got to know him pretty well when I was living in Chicago. He was the last PhD student of Saunders Mac Lane and he is now a professor at Carnegie Mellon (officially in the philosophy department, I believe), as part of a strong logic group in which Dana Scott is also involved. From what I’ve seen, his book on category theory is well done.

February 5, 2009 at 8:50 pm

Vishal Lama: Ah, now that you’ve mentioned Steve’s association with the logic group (at CMU) in which Dana Scott is (was?) involved, his motivation for presenting an example (in the aforesaid book) of a functor between the category associated with a functional programming language and the category of Scott domains, as being the denotational semantics of the language, is made clearer.