
Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice (formalizing the conjunction operator “and”), as a first step on the way to introducing Boolean algebras. Our larger goal in this series will be to discuss Stone duality, where it is shown how Boolean algebras can be represented “concretely”, in terms of the topology of their so-called Stone spaces — a wonderful meeting ground for algebra, topology, logic, geometry, and even analysis!

In this installment we will look at the notion of lattice and various examples of lattice, and barely scratch the surface — lattice theory is a very deep and multi-faceted theory with many unanswered questions. But the idea is simple enough: lattices formalize the notions of “and” and “or” together. Let’s have a look.

Let $X$ be a poset. If $x, y$ are elements of $X$, a *join* of $x$ and $y$ is an element $j$ with the property that for any $a \in X$,

$j \leq a$ if and only if ($x \leq a$ and $y \leq a$).

For a first example, consider the poset $PX$ of subsets of $X$ ordered by inclusion. The join in that case is given by taking the union, i.e., we have

$S \cup T \subseteq U$ if and only if ($S \subseteq U$ and $T \subseteq U$).

Given the close connection between unions of sets and the disjunction “or”, we can therefore say, roughly, that joins are a reasonable mathematical way to formalize the structure of disjunction. We will say a little more on that later when we discuss mathematical logic.
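As a quick sanity check, here is a minimal Python sketch (all names are my own) that verifies the universal property of the join in the poset of subsets of a small set, where the join is the union:

```python
from itertools import combinations

# A minimal sketch (all names my own): in the poset of subsets of {0, 1, 2}
# ordered by inclusion, the join of two subsets is their union.  We verify
# the defining property:  (x | y) <= a  iff  (x <= a and y <= a).

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

universe = {0, 1, 2}
subsets = powerset(universe)

for x in subsets:
    for y in subsets:
        join = x | y  # candidate join: the union
        for a in subsets:
            # frozenset's <= is the subset relation, i.e. the partial order
            assert (join <= a) == (x <= a and y <= a)

print("the union satisfies the universal property of join")
```

Here `<=` on frozensets is Python's built-in subset test, which conveniently plays the role of the partial order.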

Notice there is a close formal resemblance between how we defined joins and how we defined meets. Recall that a meet of $x$ and $y$ is an element $m$ such that for all $a \in X$,

$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).

Curiously, the logical structure in the definitions of meet and join is essentially the same; the only difference is that we switched the inequalities (i.e., replaced all instances of $\leq$ by $\geq$). This is an instance of a very important concept. *In the theory of posets, the act of modifying a logical formula or theorem by switching all the inequalities but otherwise leaving the logical structure the same is called taking the dual of the formula or theorem.* Thus, we would say that the dual of the notion of meet is the notion of join (and vice versa). This turns out to be a very powerful idea, which in effect will allow us to cut our work in half.

(Just to put in some fine print or boilerplate, let me just say that a *formula* in the first-order theory of posets is a well-formed expression in first-order logic (involving the usual logical connectives, logical quantifiers, and equality over a domain $X$), which can be built up by taking $\leq$ as a primitive binary predicate on $X$. A *theorem* in the theory of posets is a sentence (a closed formula, meaning that all variables are bound by quantifiers) which can be deduced, following standard rules of inference, from the axioms of reflexivity, transitivity, and antisymmetry. We occasionally also consider formulas and theorems in second-order logic (permitting logical quantification over the power set $PX$), and in higher-order logic. If this legalistic language is scary, don’t worry — just check the appropriate box in the End User Agreement, and reason the way you normally do.)

The critical item to install before we’re off and running is the following meta-principle:

**Principle of Duality**: If a logical formula $F$ is a theorem in the theory of posets, then so is its dual $F'$.

**Proof**: All we need to do is check that the duals of the axioms in the theory of posets are also theorems; then $F'$ can be proved just by dualizing the entire proof of $F$. Now the dual of the reflexivity axiom, $x \leq x$, is itself! — and of course an axiom is a theorem. The transitivity axiom, ($x \leq y$ and $y \leq z$) implies $x \leq z$, is also self-dual (when you dualize it, it looks essentially the same except that the variables $x$ and $z$ are switched — and there is a basic convention in logic that two sentences which differ only by renaming the variables are considered syntactically equivalent). Finally, the antisymmetry axiom, ($x \leq y$ and $y \leq x$) implies $x = y$, is also self-dual in this way. Hence we are done.

So, for example, by the principle of duality, we know automatically that the join of two elements is unique when it exists — we just dualize our earlier theorem that the meet is unique when it exists. The join of two elements $x$ and $y$ is denoted $x \vee y$.

Be careful, when you dualize, that any shorthand you used to abbreviate an expression in the language of posets is also replaced by its dual. For example, the dual of the notation $x \wedge y$ is $x \vee y$ (and vice versa, of course), and so the dual of the associativity law which we proved for meet is (for all $x, y, z$) $(x \vee y) \vee z = x \vee (y \vee z)$. In fact, we can say

**Theorem**: The join operation is associative, commutative, and idempotent.

**Proof**: Just apply the principle of duality to the corresponding theorem for the meet operation.

Just to get used to these ideas, here are some exercises.

- State the dual of the Yoneda principle (as stated earlier in this series).
- Prove the associativity of join from scratch (from the axioms for posets). If you want, you may invoke the dual of the Yoneda principle in your proof. (Note: in the sequel, we will apply the term “Yoneda principle” to cover both it and its dual.)

To continue: we say a poset is a *join-semilattice* if it has all finite joins (including the empty join, which is the bottom element $0$ satisfying $0 \leq x$ for all $x$). A *lattice* is a poset which has all finite meets and finite joins.
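To illustrate, here is a small Python sketch (names my own) showing how finite joins, including the empty join, reduce to folding the binary join starting from the bottom element; the example poset is the natural numbers under their usual order, where the join is max and the bottom is 0:

```python
from functools import reduce

# Sketch: in the join-semilattice of natural numbers under the usual order,
# the binary join is max and the empty join is the bottom element 0.
# A finite join is then a fold of the binary join over the list,
# starting from the bottom element.

BOTTOM = 0  # the empty join: 0 <= n for every natural number n

def join(x, y):
    return max(x, y)

def finite_join(elements):
    """Join of any finite (possibly empty) list of naturals."""
    return reduce(join, elements, BOTTOM)

print(finite_join([3, 1, 4]))  # 4
print(finite_join([]))         # 0, the bottom element
```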

Time for some examples.

- The set of natural numbers 0, 1, 2, 3, … under the divisibility order ($x \leq y$ if $x$ divides $y$) is a lattice. (What is the join of two elements? What is the bottom element?)
- The set of natural numbers under the usual order is a join-semilattice (the join of two elements here is their maximum), but not a lattice (because it lacks a top element).
- The set of subsets of a set is a lattice. The join of two subsets is their union, and the bottom element is the empty set.
- The set of subspaces of a vector space is a lattice. The meet of two subspaces is their ordinary intersection; the join of two subspaces $V$, $W$ is the vector space which they *jointly generate* (i.e., the set of vector sums $v + w$ with $v \in V$ and $w \in W$, which is closed under addition and scalar multiplication).

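The first example above can be made concrete in a few lines of Python (a sketch with names of my own; it takes for granted that meet and join under divisibility are gcd and lcm, and verifies the join against the universal property):

```python
import math

# Sketch of the divisibility lattice on the positive integers:
# x <= y means x divides y.  The meet is gcd and the join is lcm;
# the bottom element is 1 (it divides everything) and, if 0 is
# included, the top element is 0 (everything divides 0).

def divides(x, y):
    return y % x == 0

def meet(x, y):
    return math.gcd(x, y)

def join(x, y):
    return x * y // math.gcd(x, y)  # lcm, written to work on older Pythons

# Check the universal property of join on a small range:
# join(x, y) divides a  iff  (x divides a and y divides a).
for x in range(1, 13):
    for y in range(1, 13):
        for a in range(1, 61):
            assert divides(join(x, y), a) == (divides(x, a) and divides(y, a))

print(join(4, 6), meet(4, 6))  # 12 2
```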
The join in the last example is not the naive set-theoretic union of course (and similar remarks hold for many other concrete lattices, such as the lattice of all subgroups of a group, and the lattice of ideals of a ring), so it might be worth asking if there is a uniform way of describing joins in cases like these. Certainly the idea of taking some sort of *closure* of the ordinary union seems relevant (e.g., in the vector space example, close up the union of $V$ and $W$ under the vector space operations), and indeed this can be made precise in many cases of interest.

To explain this, let’s take a fresh look at the definition of join: the defining property was

$x \vee y \leq a$ if and only if ($x \leq a$ and $y \leq a$).

What this is really saying is that among all the elements $a$ which “contain” both $x$ and $y$, the element $x \vee y$ is the absolute minimum. This suggests a simple idea: why not just take the “intersection” (i.e., meet) of all such elements $a$ to get that absolute minimum? In effect, construct joins as certain kinds of meets! For example, to construct the join of two subgroups $H$ and $K$, take the intersection of all subgroups containing both $H$ and $K$ — that intersection is the group-theoretic closure of the union $H \cup K$.
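This recipe is easy to test on a finite example. The following Python sketch (my own construction) computes the join of two subgroups of $\mathbb{Z}/12$ as the intersection of all subgroups containing both, using the fact that the subgroups of $\mathbb{Z}/12$ are exactly the cyclic subgroups generated by the divisors of 12:

```python
# Sketch (a finite example of my own): in the lattice of subgroups of Z/12,
# the join of two subgroups is the intersection of all subgroups that
# contain both -- i.e. the group-theoretic closure of their union.

N = 12

def subgroup(d):
    """The cyclic subgroup of Z/N generated by d: all multiples of d mod N."""
    return frozenset((d * k) % N for k in range(N))

# Every subgroup of Z/12 is generated by a divisor of 12.
all_subgroups = [subgroup(d) for d in (1, 2, 3, 4, 6, 12)]

def join(h, k):
    """Intersection of all subgroups containing both h and k."""
    uppers = [g for g in all_subgroups if h <= g and k <= g]
    result = set(range(N))          # start from the whole group
    for g in uppers:
        result &= g
    return frozenset(result)

h, k = subgroup(4), subgroup(6)     # {0, 4, 8} and {0, 6}
print(sorted(join(h, k)))           # the subgroup generated by gcd(4, 6) = 2
```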

There’s a slight catch: this may involve taking the meet of infinitely many elements. But there is no difficulty in saying what this means:

**Definition**: Let $X$ be a poset, and suppose $S \subseteq X$. The *infimum* of $S$, if it exists, is an element $m \in X$ such that for all $a \in X$, $a \leq m$ if and only if $a \leq s$ for all $s \in S$.

By the usual Yoneda argument, infima are unique when they exist (you might want to write that argument out to make sure it’s quite clear). We denote the infimum of $S$ by $\inf(S)$.

We say that a poset is an *inf-lattice* if there is an infimum for every subset. Similarly, the *supremum* of $S$, if it exists, is an element $j \in X$ such that for all $a \in X$, $j \leq a$ if and only if $s \leq a$ for all $s \in S$; we denote it by $\sup(S)$. A poset is a *sup-lattice* if there is a supremum for every subset. [I'll just quickly remark that the notions of inf-lattice and sup-lattice belong to *second-order* logic, since they involve quantifying over all subsets $S \subseteq X$ (or over all elements of $PX$).]

Trivially, every inf-lattice is a meet-semilattice, and every sup-lattice is a join-semilattice. More interestingly, we have the

**Theorem**: Every inf-lattice is a sup-lattice (!). Dually, every sup-lattice is an inf-lattice.

**Proof**: Suppose $X$ is an inf-lattice, and let $S \subseteq X$. Let $U$ be the set of *upper bounds* of $S$. I claim that $\inf(U)$ (“least upper bound”) is the supremum of $S$. Indeed, from the definition of infimum, we know that $\inf(U) \leq a$ if $a \in U$, i.e., if $s \leq a$ for all $s \in S$. On the other hand, we also know that if $s \in S$, then $s \leq u$ for every $u \in U$, and hence $s \leq \inf(U)$ by the defining property of infimum (i.e., $\inf(U)$ really is an upper bound of $S$). So, if $\inf(U) \leq a$, we conclude by transitivity that $s \leq a$ for every $s \in S$. This completes the proof.
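The proof can be checked mechanically on a small complete lattice. In this Python sketch (names my own), the infimum in the power set of $\{0, 1, 2\}$ is the intersection, and the supremum constructed as the infimum of the upper bounds agrees with the union:

```python
from itertools import combinations

# Sketch verifying the theorem on a small complete lattice: in the power
# set of {0, 1, 2} ordered by inclusion, the infimum of a family is its
# intersection, and the supremum computed as inf(upper bounds) agrees
# with the union.

universe = frozenset({0, 1, 2})
elements = [frozenset(c) for r in range(4) for c in combinations(universe, r)]

def inf(family):
    """Infimum = intersection; the empty inf is the top element."""
    result = universe
    for s in family:
        result &= s
    return result

def sup(family):
    """Supremum constructed as the infimum of the set of upper bounds."""
    uppers = [u for u in elements if all(s <= u for s in family)]
    return inf(uppers)

for a in elements:
    for b in elements:
        assert sup([a, b]) == a | b   # agrees with the union

assert sup([]) == frozenset()         # the empty sup is the bottom element
print("sup = inf of upper bounds agrees with union on P({0, 1, 2})")
```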

**Corollary**: Every *finite* meet-semilattice is a lattice. (Every subset of a finite meet-semilattice is finite, so the poset has all infima and is therefore an inf-lattice; by the theorem it is also a sup-lattice, and in particular a lattice.)

Even though every inf-lattice is a sup-lattice and conversely (sometimes people just call them “complete lattices”), there are important distinctions to be made when we consider what is the appropriate notion of *homomorphism*. The notions are straightforward enough: a *morphism of meet-semilattices* $f: X \to Y$ is a function which takes finite meets in $X$ to finite meets in $Y$ ($f(x \wedge y) = f(x) \wedge f(y)$, and $f(1) = 1$, where the 1’s denote top elements). There is a dual notion of morphism of join-semilattices ($f(x \vee y) = f(x) \vee f(y)$, and $f(0) = 0$, where the 0’s denote bottom elements). A *morphism of inf-lattices* $f: X \to Y$ is a function such that $f(\inf(S)) = \inf(f(S))$ for all subsets $S \subseteq X$, where $f(S)$ denotes the direct image of $S$ under $f$. And there is a dual notion of morphism of sup-lattices: $f(\sup(S)) = \sup(f(S))$. Finally, a *morphism of lattices* is a function which preserves all finite meets and finite joins, and a morphism of complete lattices is one which preserves all infs and sups.

Despite the theorem above, it is **not** true that a morphism of inf-lattices must be a morphism of sup-lattices. It is not true that a morphism of finite meet-semilattices must be a lattice morphism. Therefore, in contexts where homomorphisms matter (which is just about all the time!), it is important to keep the qualifying prefixes around and keep the distinctions straight.

**Exercise**: Come up with some examples of morphisms which exhibit these distinctions.

My name is Todd Trimble. As regular readers of this blog may have noticed by now, I’ve recently been actively commenting on some of the themes introduced by our host Vishal, and he’s now asked whether I’d like to write some posts of my own. Thank you Vishal for the invitation!

As made clear in some of my comments, my own perspective on a lot of mathematics is greatly informed and influenced by category theory — but that’s not what I’m setting out to talk about here, not yet anyway. For reasons not altogether clear to me, the mere mention of category theory often scares people, or elicits other emotional reactions (sneers, chortles, challenges along the lines of “what is this stuff good for, anyway?” — I’ve seen it all).

Anyway, I’d like to try something a little different this time — instead of blathering about categories, I’ll use some of Vishal’s past posts as a springboard to jump into other mathematics which I find interesting, and I won’t need to talk about categories at all unless a strong organic need is felt for it (or if it’s brought back “by popular demand”). But, the spirit if not the letter of categorical thinking will still strongly inform my exposition — those readers who already know categories will often be able to read between the lines and see what I’m up to. Those who do not will still be exposed to what I believe are powerful categorical ways of thinking.

I’d like to start off talking about a very pretty area of mathematics which ties together various topics in algebra, topology, logic, geometry… I’m talking about mathematics in the neighborhood of so-called “Stone duality” (after the great Marshall Stone). I’m hoping to pitch this as though I were teaching an undergraduate course, at roughly a junior or senior level in a typical American university. [*Full disclosure*: I'm no longer a professional academic, although I often play one on the Internet :-) ] At times I will allude to topics which presuppose some outside knowledge, but hey, that’s okay. No one’s being graded here (thank goodness!).

First, I need to discuss some preliminaries which will eventually lead up to the concept of Boolean algebra — the algebra which underlies propositional logic.

A *partial order* on a set $X$ is a binary relation $\leq$ (a subset $R \subseteq X \times X$), where we write $x \leq y$ if $(x, y) \in R$, satisfying the following conditions:

- (Reflexivity) $x \leq x$ for every $x \in X$;
- (Transitivity) For all $x, y, z \in X$, ($x \leq y$ and $y \leq z$) implies $x \leq z$;
- (Antisymmetry) For all $x, y \in X$, ($x \leq y$ and $y \leq x$) implies $x = y$.

A *partially ordered set* (poset for short) is a set equipped with a partial order. Posets occur all over mathematics, and many are likely already familiar to you. Here are just a few examples:

- The set of natural numbers ordered by divisibility ($x \leq y$ if $x$ divides $y$).
- The set $PX$ of subsets of a set $X$ (where $\leq$ is the relation of inclusion of one subset in another).
- The set of subgroups of a group $G$ (where again $\leq$ is the inclusion relation between subgroups).
- The set of ideals in a ring $R$ (ordered by inclusion).

The last three examples clearly follow a similar pattern, and in fact, there is a sense in which *every* poset $P$ can be construed in just this way: as a set of certain types of subset ordered by inclusion. This is proved in a way very reminiscent of the Cayley lemma (that every group can be represented as a group of permutations of a set). You can think of such results as saying “no matter how abstractly a group [or poset] may be presented, it can always be *re*presented in a concrete way, in terms of permutations [or subsets]”.

To make this precise, we need one more notion, parallel to the notion of group homomorphism. If $X$ and $Y$ are posets, a *poset map* from $X$ to $Y$ is a function $f: X \to Y$ which preserves the partial order (that is, if $x \leq y$ in $X$, then $f(x) \leq f(y)$ in $Y$). Here then is our representation result:

**Lemma** (Dedekind): Any poset $X$ may be faithfully represented in its power set $PX$, partially ordered by inclusion. That is, there exists a poset map $\phi: X \to PX$ that is injective (what we mean by “faithful”: the map is one-to-one).

**Proof**: Define $\phi$ to be the function which takes $x \in X$ to the subset $\{a \in X : a \leq x\}$ (which we view as an element of the power set). To check this is a poset map, we must show that if $x \leq y$, then $\phi(x)$ is included in $\phi(y)$. This is easy: if $a$ belongs to $\phi(x)$, *i.e.*, if $a \leq x$, then from $x \leq y$ and the transitivity property, $a \leq y$, hence $a$ belongs to $\phi(y)$.

Finally, we must show that $\phi$ is injective; that is, $\phi(x) = \phi(y)$ implies $x = y$. In other words, we must show that if

$\{a \in X : a \leq x\} = \{a \in X : a \leq y\}$,

then $x = y$. But, by the reflexivity property, we know $x \leq x$; therefore $x$ belongs to the set displayed on the left, and therefore to the set on the right. Thus $x \leq y$. By similar reasoning, $y \leq x$. Then, by the antisymmetry property, $x = y$, as desired.
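Here is the Dedekind embedding in a few lines of Python (a sketch of my own, using the divisors of 12 under divisibility as the ambient poset), checking that $\phi$ preserves the order and is injective:

```python
# Sketch of the Dedekind embedding phi(x) = {a : a <= x}, for the poset
# of divisors of 12 ordered by divisibility.  We check that phi is a
# poset map (preserves the order) and is injective.

P = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0  # a <= b means "a divides b"

def phi(x):
    """Principal downset of x: all elements of P below x."""
    return frozenset(a for a in P if leq(a, x))

for x in P:
    for y in P:
        if leq(x, y):
            assert phi(x) <= phi(y)      # phi preserves the order
        if phi(x) == phi(y):
            assert x == y                # phi is injective

print(sorted(phi(6)))  # [1, 2, 3, 6]
```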

The Dedekind lemma turns out to be extremely useful (it and the Cayley lemma are subsumed under an even more useful result called the *Yoneda lemma* — perhaps more on this later). Before I illustrate its uses, let me rephrase slightly the injectivity property of the Dedekind embedding $\phi$: it says,

If (for all $a$ in $X$, $a \leq x$ iff $a \leq y$), then $x = y$.

This principle will be used over and over again, so I want to give it a name: I’ll call it the Yoneda principle.

Here is a typical use. Given elements $x, y$ in a poset $X$, we say that an element $m$ is a *meet* of $x$ and $y$ if for all $a \in X$,

$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).

**Fact**: there is at most one meet of $x$ and $y$. That is, if $m$ and $n$ are both meets of $x$ and $y$, then $m = n$.

**Proof**: For all $a \in X$, $a \leq m$ if and only if ($a \leq x$ and $a \leq y$) if and only if $a \leq n$. Therefore $m = n$, by the Yoneda principle.

Therefore, we can refer to *the* meet of two elements $x$ and $y$ (if it exists); it is usually denoted $x \wedge y$. Because $x \wedge y \leq x \wedge y$, we have $x \wedge y \leq x$ and $x \wedge y \leq y$.

**Example**: In a concrete poset, like the poset of all subsets of a set or subgroups of a group, the meet of two elements is their intersection.

**Example**: Consider the natural numbers ordered by divisibility. The meet $m = x \wedge y$ satisfies $m \leq x$ and $m \leq y$ (i.e., $m$ divides both $x$ and $y$). At the same time, the meet property says that any number which divides both $x$ and $y$ must also divide $m$. It follows that the meet in this poset is the gcd of $x$ and $y$.
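To see the universal property at work computationally, this Python sketch (my own; the search bound is an arbitrary choice) finds the meet by brute-force search over the defining condition and compares it with the gcd:

```python
import math

# Sketch: in the divisibility order on positive integers, the meet of x
# and y is characterized by:  a divides m  iff  (a divides x and a divides y).
# We search for such an m by brute force and compare it with math.gcd.

def divides(a, b):
    return b % a == 0

def meet(x, y, bound=100):
    """The unique m <= bound satisfying the meet property, if any."""
    for m in range(1, bound + 1):
        if all(divides(a, m) == (divides(a, x) and divides(a, y))
               for a in range(1, bound + 1)):
            return m
    return None

for x, y in [(12, 18), (7, 5), (9, 9)]:
    assert meet(x, y) == math.gcd(x, y)

print(meet(12, 18))  # 6
```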

Here are some more results which can be proved with the help of the Yoneda principle. I’ll just work through one of them, and leave the others as exercises.

- $x \wedge x = x$ (idempotence of meet)
- $x \wedge y = y \wedge x$ (commutativity of meet)
- $(x \wedge y) \wedge z = x \wedge (y \wedge z)$ (associativity of meet)

To prove 3., we can use the Yoneda principle: for all $a$ in the poset, we have

$a \leq (x \wedge y) \wedge z$

iff ($a \leq x \wedge y$ and $a \leq z$)

iff ($a \leq x$ and $a \leq y$ and $a \leq z$)

iff ($a \leq x$ and $a \leq y \wedge z$)

iff $a \leq x \wedge (y \wedge z)$.

Hence $(x \wedge y) \wedge z = x \wedge (y \wedge z)$, by Yoneda.

In fact, we can unambiguously refer to the meet $x_1 \wedge x_2 \wedge \ldots \wedge x_n$ of any finite number of elements, by the evident property:

$a \leq x_1 \wedge x_2 \wedge \ldots \wedge x_n$ iff ($a \leq x_1$ and $a \leq x_2$ and $\ldots$ and $a \leq x_n$)

— this uniquely defines the meet on the left, by Yoneda, and the order in which the $x_i$ appear makes no difference.

But wait — what if the number of elements is zero? That is, what is the *empty* meet? Well, the condition “$a \leq x_1$ and $\ldots$ and $a \leq x_n$” becomes vacuous (there is no $x_i$ for which the condition is *not* satisfied), so whatever the empty meet is, call it $t$, we must have $a \leq t$ for all $a$. So $t$ is just the *top* element of the poset (if one exists). Another name for the top element is “the terminal element”, and another notation for it is ‘$1$’.

**Definition**: A *meet semi-lattice* is a poset which has all finite meets, including the empty one.

**Exercises**:

- Prove that in a meet-semilattice, $x \leq y$ if and only if $x = x \wedge y$, for all $x, y$.
- Is there a top element for the natural numbers ordered by divisibility?
