In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.
Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.
I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:
Definition: A Boolean ring is a commutative ring (with identity $1$) in which every element $x$ is idempotent, i.e., satisfies $x^2 = x$.
Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.
Proposition 1: For every element $x$ in a Boolean ring, $x + x = 0$.
Proof: By idempotence, we have $x + x = (x + x)^2 = x^2 + x^2 + x^2 + x^2 = (x + x) + (x + x)$. Since $x + x = (x + x) + (x + x)$, we may additively cancel $x + x$ in the ring to conclude $0 = x + x$. $\Box$
This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field $\mathbb{Z}_2$ consisting of two elements. I won’t go into details about this, only that it follows readily from the proposition if we define a vector space over a field $k$ to be an abelian group $V$ together with a ring homomorphism $k \to \hom(V, V)$ to the ring of abelian group homomorphisms from $V$ to itself (where such homomorphisms are “multiplied” by composing them; the idea is that this ring homomorphism takes an element $\lambda \in k$ to the scalar-multiplication operation $\lambda \cdot (-)$).
Anyway, the point is that we can now apply some linear algebra to study this $\mathbb{Z}_2$-vector space; in particular, a finite Boolean ring $B$ is a finite-dimensional vector space over $\mathbb{Z}_2$. By choosing a basis, we see that $B$ is vector-space isomorphic to $\mathbb{Z}_2^n$ where $n$ is the dimension. So the cardinality of a finite Boolean ring must be of the form $2^n$. Hold that thought!
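The preceding paragraphs can be checked concretely. Below is a quick Python sketch (the helper names `subsets`, `add`, `mul` are mine, not from the post) verifying, for the power set of a 3-element set under symmetric difference and intersection, the idempotence axiom, Proposition 1, and the predicted cardinality $2^n$:

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

X = {0, 1, 2}
B = subsets(X)                        # underlying set of the Boolean ring P(X)
add = lambda a, b: a ^ b              # ring addition = symmetric difference
mul = lambda a, b: a & b              # ring multiplication = intersection

# cardinality is 2^|X|, as predicted by the Z_2-vector-space argument
assert len(B) == 2 ** len(X)

for a in B:
    assert mul(a, a) == a             # idempotence: a^2 = a
    assert add(a, a) == frozenset()   # Proposition 1: a + a = 0
```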
Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring $B$, we may construct a corresponding Boolean algebra structure on the underlying set of $B$, uniquely determined by the stipulation that the multiplication of the Boolean ring match the meet operation of the Boolean algebra. Conversely, given a Boolean algebra $B$, we may construct a corresponding Boolean ring structure on $B$, and this construction is inverse to the previous one.
In one direction, suppose $B$ is a Boolean ring. We know from before that a binary operation on a set that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of $B$ — can be identified with the meet operation of a meet-semilattice structure on $B$, uniquely specified by taking its partial order to be defined by: $x \leq y$ iff $x = xy$. It immediately follows from this definition that the additive identity $0$ satisfies $0 \leq x$ for all $x$ (is the bottom element), and the multiplicative identity $1$ satisfies $x \leq 1$ for all $x$ (is the top element).
Notice also that $x(1 + x) = x + x^2 = x + x = 0$, by idempotence. This leads one to suspect that $1 + x$ will be the complement of $x$ in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting $1 + (1 + x) = x$, i.e., $x$ is equal to its putative double negation.
Proposition 2: $x \mapsto 1 + x$ is order-reversing.
Proof: Looking at the definition of the order, this says that if $x = xy$, then $1 + y = (1 + y)(1 + x)$. This is immediate: $(1 + y)(1 + x) = 1 + x + y + xy = 1 + x + y + x = 1 + y$. $\Box$
So, $1 + (-): B \to B$ is an order-reversing map (equivalently, an order-preserving map $B \to B^{op}$) which is a bijection (since it is its own inverse). We conclude that $B \to B^{op}: x \mapsto 1 + x$ is a poset isomorphism. Since $B$ has meets and $B \cong B^{op}$, $B^{op}$ also has meets (and the isomorphism preserves them). But meets in $B^{op}$ are joins in $B$. Hence $B$ has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function $1 + (-)$ takes meets in $B$ to joins in $B$; that is,

$1 + xy = (1 + x) \vee (1 + y)$

or, replacing $x$ by $1 + x$ and $y$ by $1 + y$,

$1 + (1 + x)(1 + y) = x \vee y$

whence $x \vee y = 1 + (1 + x)(1 + y) = x + y + xy$, using Proposition 1 above.
Proposition 3: $1 + x$ is the complement of $x$.

Proof: We already saw $x \wedge (1 + x) = x(1 + x) = 0$. Also

$x \vee (1 + x) = x + (1 + x) + x(1 + x) = x + 1 + x + 0 = 1$,

using the formula for join we just computed. This completes the proof. $\Box$
So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have $(x \vee y) \wedge z = (x + y + xy)z = xz + yz + xyz$. On the other hand, $(x \wedge z) \vee (y \wedge z) = xz + yz + (xz)(yz) = xz + yz + xyz$, using idempotence $z^2 = z$ once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.
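As a sanity check on the derived operations (join $x \vee y = x + y + xy$, complement $\neg x = 1 + x$), here is a brute-force verification over the Boolean ring of 3-bit masks, where ring addition is XOR and multiplication is AND; the code is my own illustration, not part of the original post:

```python
N = 3
ONE = (1 << N) - 1                 # top element: all bits set
B = range(1 << N)                  # the Boolean ring Z_2^N, as bitmasks

meet = lambda x, y: x & y                   # x ∧ y = xy
join = lambda x, y: x ^ y ^ (x & y)         # x ∨ y = x + y + xy
neg  = lambda x: ONE ^ x                    # ¬x = 1 + x

for x in B:
    assert meet(x, neg(x)) == 0             # x ∧ ¬x = 0
    assert join(x, neg(x)) == ONE           # x ∨ ¬x = 1
    for y in B:
        for z in B:
            # distributivity: (x ∨ y) ∧ z = (x ∧ z) ∨ (y ∧ z)
            assert meet(join(x, y), z) == join(meet(x, z), meet(y, z))
```

Note that $x + y + xy$ over $\mathbb{Z}_2$ is, bit by bit, just the logical "or", so `join` coincides with bitwise OR.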
Naturally, we want to invert the process: starting with a Boolean algebra structure on a set $B$, construct a corresponding Boolean ring structure on $B$ whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy $x + y + (x \wedge y) = x \vee y$, so that $x + y = x \vee y$ if $x \wedge y = 0$ (i.e., if $x$ and $y$ are disjoint): this gives a partial definition of addition. Continuing this thought, if we express $x \vee y$ as a disjoint sum of some element $z$ and $x \wedge y$, we then conclude $x \vee y = z + (x \wedge y)$, whence $z = x + y$ by cancellation. In the case where the Boolean algebra is a power set $P(X)$, this element $z$ is the symmetric difference of $x$ and $y$. This generalizes: if we define the addition by the symmetric difference formula $x + y := (\neg x \wedge y) \vee (x \wedge \neg y)$, then $x + y$ is disjoint from $x \wedge y$, so that

$(x + y) + (x \wedge y) = (x + y) \vee (x \wedge y) = x \vee y$

after a short calculation using the complementation and distributivity axioms. After more work, one shows that $+$ is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.
Exercise: Verify this last assertion.
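One way to attack the exercise is by brute force on a small power set. The following Python sketch (names mine) checks the abelian group axioms for the symmetric difference formula, and the distributivity of multiplication over addition:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
B = [frozenset(c) for r in range(4) for c in combinations(X, r)]

neg = lambda a: X - a
add = lambda a, b: (neg(a) & b) | (a & neg(b))   # symmetric difference formula
mul = lambda a, b: a & b                          # multiplication = meet

zero = frozenset()
for a in B:
    assert add(a, zero) == a and add(a, a) == zero          # unit and inverses
    for b in B:
        assert add(a, b) == add(b, a)                       # commutativity
        for c in B:
            assert add(add(a, b), c) == add(a, add(b, c))   # associativity
            # multiplication distributes over addition
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
```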
However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.
Whereas here, in asserting Boolean algebras “are” Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism $f: B \to C$ between Boolean rings, it is clear that $f$ preserves the meet $xy$ and join $x + y + xy$ of any two elements $x, y$ [since it preserves multiplication and addition] and of course also the complement $1 + x$ of any $x$; therefore $f$ is a map of the corresponding Boolean algebras. Conversely, a map of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product $x \wedge y$ and sum $(\neg x \wedge y) \vee (x \wedge \neg y)$ in the corresponding Boolean ring. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different ways of presenting the same underlying Lawvere algebraic theory). In summary,
Theorem 1: The above processes define functors $\mathbf{BoolRing} \to \mathbf{BoolAlg}$ and $\mathbf{BoolAlg} \to \mathbf{BoolRing}$, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.
- Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.
Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) has cardinality $2^n$ — the same as the cardinality of the power set Boolean algebra $P(X)$ if $X$ has cardinality $n$. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra $B$ is naturally isomorphic to one of the form $P(X)$; one of our tasks is to describe $X$ in terms of $B$ in a “natural” (or rather, functorial) way. From the Boolean ring perspective, $X$ is a basis of the underlying $\mathbb{Z}_2$-vector space of $P(X)$; to pin it down exactly, we use the full ring structure.
$X$ is naturally a basis of $P(X)$; more precisely, under the embedding $X \to P(X)$ defined by $x \mapsto \{x\}$, every subset $S \subseteq X$ is uniquely a disjoint sum of finitely many elements of $X$: $S = \sum_{x \in X} a_x \{x\}$ where $a_x \in \{0, 1\}$: naturally, $a_x = 1$ iff $x \in S$. For each $S$, we can treat the coefficient $a_x$ as a function of $x$ valued in $\mathbb{Z}_2$. Let $\hom(X, \mathbb{Z}_2)$ denote the set of functions $X \to \mathbb{Z}_2$; this becomes a Boolean ring under the obvious pointwise definitions $(f + g)(x) := f(x) + g(x)$ and $(fg)(x) := f(x)g(x)$. The function $P(X) \to \hom(X, \mathbb{Z}_2)$ which takes $S$ to the coefficient function $x \mapsto a_x$ is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)
Or, we can turn this around: for each $x \in X$, we get a Boolean ring map $\mathrm{ev}_x: P(X) \to \mathbb{Z}_2$ which takes $S$ to $a_x$. Let $\mathrm{Bool}(P(X), \mathbb{Z}_2)$ denote the set of Boolean ring maps $P(X) \to \mathbb{Z}_2$.
Proposition 4: For a finite set $X$, the function $X \to \mathrm{Bool}(P(X), \mathbb{Z}_2)$ that sends $x$ to $\mathrm{ev}_x$ is a bijection (in other words, an isomorphism).
Proof: We must show that for every Boolean ring map $\phi: P(X) \to \mathbb{Z}_2$, there exists a unique $x \in X$ such that $\phi = \mathrm{ev}_x$, i.e., such that $\phi(S) = a_x$ for all $S \subseteq X$. So let $\phi$ be given, and let $T$ be the intersection (or Boolean ring product) of all $S$ for which $\phi(S) = 1$. Then

$\phi(T) = \prod_{S: \phi(S) = 1} \phi(S) = 1$.

I claim that $T$ must be a singleton $\{x\}$ for some (evidently unique) $x$. For $1 = \phi(T) = \sum_{x \in T} \phi(\{x\})$, forcing $\phi(\{x\}) = 1$ for some $x \in T$. But then $T \subseteq \{x\}$ according to how $T$ was defined, and so $T = \{x\}$. To finish, I now claim $\phi(S) = a_x$ for all $S \subseteq X$. But $\phi(S) = 1$ iff $\phi(S \cap \{x\}) = \phi(S)\phi(\{x\}) = 1$ iff $S \cap \{x\} = \{x\}$ iff $x \in S$ iff $a_x = 1$. This completes the proof. $\Box$
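Proposition 4 can be verified mechanically for a small set: enumerate all functions $P(X) \to \mathbb{Z}_2$, filter out the ones preserving $+$, $\cdot$, and $1$, and compare with the evaluation maps. A Python sketch (helper names are mine):

```python
from itertools import combinations, product

X = [0, 1, 2]
PX = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_ring_map(phi):
    """phi: dict from subsets to {0,1}; check it preserves +, *, and 1."""
    if phi[frozenset(X)] != 1:
        return False
    return all(phi[a ^ b] == (phi[a] + phi[b]) % 2 and
               phi[a & b] == phi[a] * phi[b]
               for a in PX for b in PX)

homs = [phi for values in product([0, 1], repeat=len(PX))
        for phi in [dict(zip(PX, values))] if is_ring_map(phi)]

# the evaluation maps ev_x: S |-> [x in S]
evaluations = [{S: int(x in S) for S in PX} for x in X]
assert len(homs) == len(X)                  # exactly |X| ring maps...
assert all(ev in homs for ev in evaluations)  # ...namely the evaluations
```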
This proposition is a vital clue, for if $B$ is to be isomorphic to a power set $P(X)$ (equivalently, to $\hom(X, \mathbb{Z}_2)$), the proposition says that the $X$ in question can be retrieved reciprocally (up to isomorphism) as $X \cong \mathrm{Bool}(B, \mathbb{Z}_2)$.
With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

$B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$

which sends $b \in B$ to the function $\mathrm{ev}_b$ which maps $\phi$ to $\phi(b)$ (i.e., evaluates $\phi$ at $b$). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that $\mathrm{ev}_{b + c}(\phi) = \mathrm{ev}_b(\phi) + \mathrm{ev}_c(\phi)$ for all $\phi$. But by definition, this is the equation $\phi(b + c) = \phi(b) + \phi(c)$, which holds since $\phi$ is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.
Theorem 2: If $B$ is a finite Boolean ring, then the Boolean ring map

$B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2): b \mapsto \mathrm{ev}_b$

is an isomorphism. (So, there is a natural isomorphism $B \cong P(\mathrm{Bool}(B, \mathbb{Z}_2))$.)
Proof: First we prove injectivity: suppose $b$ is nonzero. Then $\neg b = 1 + b \neq 1$, so the ideal generated by $\neg b$ is a proper ideal (if $1 = x \neg b$ for some $x$, then $b = x b \neg b = 0$, contradiction). Let $M$ be a maximal proper ideal containing $\neg b$, so that $B/M$ is both a field and a Boolean ring. Then $B/M \cong \mathbb{Z}_2$ (otherwise any element $x \in B/M$ not equal to $0, 1$ would be a zero divisor on account of $x(1 + x) = 0$). The evident composite

$B \to B/M \cong \mathbb{Z}_2$

yields a homomorphism $\phi$ for which $\phi(\neg b) = 0$, so $\phi(b) = 1$. Therefore $\mathrm{ev}_b$ is nonzero, as desired.
Now we prove surjectivity. A function $g: \mathrm{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2$ is determined by the set of homomorphisms $\phi$ mapping to $1$ under $g$, and each such homomorphism $\phi: B \to \mathbb{Z}_2$, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let $J$ be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if $x, y$ belong to the ideal, then so does $x \vee y = x + y + xy$. Let $j$ be the join of the finitely many elements of $J$; notice $J = (j)$, the principal ideal generated by $j$ (actually, this proves that every ideal of a finite Boolean ring is principal). In fact, writing $j_\psi$ for the unique element such that $\ker(\psi) = (j_\psi)$, we have

$j = \bigwedge_{\psi: g(\psi) = 1} j_\psi$

(certainly $j \leq j_\psi$ for all such $\psi$, since $j \in J \subseteq \ker(\psi)$, but also $\bigwedge_\psi j_\psi$ belongs to the intersection of these kernels and hence to $J$, whence $\bigwedge_\psi j_\psi \leq j$).
Now let $b = \neg j$; I claim that $\mathrm{ev}_b = g$, proving surjectivity. We need to show $g(\phi) = \phi(b)$ for all $\phi \in \mathrm{Bool}(B, \mathbb{Z}_2)$. In one direction, we already know from the above that if $g(\phi) = 1$, then $j$ belongs to the kernel of $\phi$, so $\phi(j) = 0$, whence $\phi(b) = \phi(\neg j) = 1$.
For the other direction, suppose $\phi(b) = 1$, or that $\phi(j) = 0$. Now the kernel of $\phi$ is principal, say $(k)$ for some $k \neq 1$. We have $j \leq k$, so

$k = k \vee j = k \vee \bigwedge_{\psi: g(\psi) = 1} j_\psi = \bigwedge_\psi (k \vee j_\psi)$

(using distributivity), from which it follows that $k \vee j_\psi \neq 1$ for some $\psi$ with $g(\psi) = 1$ (else $k = 1$). But then $(k \vee j_\psi)$ is a proper ideal containing the maximal ideals $(k)$ and $(j_\psi)$; by maximality it follows that $(k) = (k \vee j_\psi) = (j_\psi)$. Since $\phi$ and $\psi$ have the same kernels, they are equal. And therefore $g(\phi) = g(\psi) = 1$. We have now proven both directions of the statement ($g(\phi) = 1$ if and only if $\phi(b) = 1$), and the proof is now complete. $\Box$
- Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements and their negations, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset $I$ closed under joins and such that $x \wedge y \in I$ for $x \in B$, $y \in I$, a filter is (by definition) a subset $F$ closed under meets and such that $x \vee y \in F$ whenever $y \in F$ (this second condition is equivalent to upward-closure: $y \in F$ and $y \leq x$ implies $x \in F$). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if $I$ is an ideal, then the set $\neg I = \{\neg x : x \in I\}$ is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if $f: B \to C$ is a Boolean algebra map, then the inverse image $f^{-1}(1)$ is a filter, just as the inverse image $f^{-1}(0)$ is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.
For those who know some category theory: what is really going on here is that we have a power set functor

$P: \mathbf{FinSet}^{op} \to \mathbf{FinBool}$

(taking a function $f: X \to Y$ between finite sets to the inverse image map $f^{-1}: P(Y) \to P(X)$, which is a map between finite Boolean algebras) and a functor

$Q = \mathrm{Bool}(-, \mathbb{Z}_2): \mathbf{FinBool} \to \mathbf{FinSet}^{op}$

which we could replace by its opposite $Q^{op}: \mathbf{FinBool}^{op} \to \mathbf{FinSet}$, and the canonical maps of proposition 4 and theorem 2,

$X \to \mathrm{Bool}(P(X), \mathbb{Z}_2), \qquad B \to P(\mathrm{Bool}(B, \mathbb{Z}_2)),$

are components (at $X$ and $B$) of the counit and unit for an adjunction $Q \dashv P$. The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories $\mathbf{FinSet}^{op}$ and $\mathbf{FinBool}$. This is the proper categorical statement of Stone duality in the finite case, or what we are calling “baby Stone duality”. I will make some time soon to explain what these terms mean.
In this installment, I will introduce the concept of Boolean algebra, one of the main stars of this series, and relate it to concepts introduced in previous lectures (distributive lattice, Heyting algebra, and so on). Boolean algebra is the algebra of classical propositional calculus, and so has an abstract logical provenance; but one of our eventual goals is to show how any Boolean algebra can also be represented in concrete set-theoretic (or topological) terms, as part of a powerful categorical duality due to Stone.
There are lots of ways to define Boolean algebras. Some definitions were for a long time difficult conjectures (like the Robbins conjecture, established only in the last ten years or so with the help of computers) — testament to the richness of the concept. Here we’ll discuss just a few definitions. The first is a traditional one, and one which is pretty snappy:
A Boolean algebra is a distributive lattice in which every element has a complement.
(If $X$ is a lattice and $x \in X$, a complement of $x$ is an element $y$ such that $x \wedge y = 0$ and $x \vee y = 1$. A lattice is said to be complemented if every element has a complement. Observe that the notions of complement and complemented lattice are manifestly self-dual. Since the notion of distributive lattice is self-dual, so therefore is the notion of Boolean algebra.)
- Example: Probably almost everyone reading this knows the archetypal example of a Boolean algebra: a power set $P(X)$, ordered by subset inclusion. As we know, this is a distributive lattice, and the complement $\neg A := X - A$ of a subset $A \subseteq X$ satisfies $A \wedge \neg A = \emptyset$ and $A \vee \neg A = X$.
- Example: Also well known is that the Boolean algebra axioms mirror the usual interactions between conjunction $\wedge$, disjunction $\vee$, and negation $\neg$ in ordinary classical logic. In particular, given a theory $\mathbb{T}$, there is a preorder whose elements are sentences (closed formulas) $p$ of $\mathbb{T}$, ordered by $p \leq q$ if the entailment $p \to q$ is provable in $\mathbb{T}$ using classical logic. By passing to logical equivalence classes ($p \equiv q$ iff $p \leftrightarrow q$ in $\mathbb{T}$), we get a poset with meets, joins, and complements satisfying the Boolean algebra axioms. This is called the Lindenbaum algebra of the theory $\mathbb{T}$.
Exercise: Give an example of a complemented lattice which is not distributive.
As a possible leading hint for the previous exercise, here is a first order of business:
Proposition: In a distributive lattice, complements of elements are unique when they exist.
Proof: If both $b$ and $c$ are complementary to $a$, then $b = b \wedge 1 = b \wedge (a \vee c) = (b \wedge a) \vee (b \wedge c) = 0 \vee (b \wedge c) = b \wedge c$. Since $b = b \wedge c$, we have $b \leq c$. Similarly $c \leq b$, so $b = c$. $\Box$
The definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a Heyting algebra is a lattice which admits an implication operator satisfying an adjoint relationship with the meet operator: $x \wedge a \leq b$ if and only if $x \leq a \Rightarrow b$).
Theorem: A lattice $X$ is a Boolean algebra if and only if it is a Heyting algebra in which either of the following properties holds (where $\neg a$ denotes $a \Rightarrow 0$):

- $x \wedge a \leq b$ if and only if $x \leq \neg a \vee b$
- $\neg \neg x = x$ for all elements $x$
Proof: First let $X$ be a Boolean algebra, and let $\neg a$ denote the complement of an element $a$. Then I claim that $x \wedge a \leq b$ if and only if $x \leq \neg a \vee b$, proving that $X$ admits an implication $a \Rightarrow b = \neg a \vee b$. Then, taking $b = 0$, it follows that $a \Rightarrow 0 = \neg a \vee 0 = \neg a$, whence 1. follows. Also, since (by definition of complement) $a$ is the complement of $b$ if and only if $b$ is the complement of $a$, we have $\neg \neg a = a$, whence 2. follows.
[Proof of claim: if $x \wedge a \leq b$, then $x = x \wedge (a \vee \neg a) = (x \wedge a) \vee (x \wedge \neg a) \leq b \vee \neg a$. On the other hand, if $x \leq \neg a \vee b$, then $x \wedge a \leq (\neg a \vee b) \wedge a = (\neg a \wedge a) \vee (b \wedge a) = b \wedge a \leq b$. This completes the proof of the claim and of the forward implication.]
In the other direction, given a lattice which satisfies 1., it is automatically a Heyting algebra (with implication $a \Rightarrow b := \neg a \vee b$). In particular, it is distributive. From $\neg a \leq \neg a \vee 0$, we have (from 1.) $\neg a \wedge a \leq 0$; since $0 \leq \neg a \wedge a$ is automatic by definition of $0$ as bottom element, we get $a \wedge \neg a = 0$. From $1 \wedge a \leq a$, we have also (from 1.) that $1 \leq \neg a \vee a$; since $\neg a \vee a \leq 1$ is automatic by definition of $1$ as top element, we have $a \vee \neg a = 1$. Thus under 1., every element $a$ has a complement $\neg a$.
On the other hand, suppose $X$ is a Heyting algebra satisfying 2.: $\neg \neg x = x$. As above, we know $x \wedge \neg x = 0$. By the corollary below, we also know the function $\neg: X \to X$ takes $0$ to $1$ and joins to meets (De Morgan law); since condition 2. says that $\neg$ is its own inverse, it follows that $\neg$ also takes meets to joins. Hence $x \vee \neg x = \neg(\neg x \wedge \neg \neg x) = \neg(\neg x \wedge x) = \neg 0 = 1$. Thus for a Heyting algebra which satisfies 2., every element $x$ has a complement $\neg x$. This completes the proof. $\Box$
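The Boolean case of the implication, $a \Rightarrow b = \neg a \vee b$, together with the adjunction defining a Heyting algebra and property 2., is easy to confirm by brute force on a power set. A Python sketch (my own illustration, with ad hoc names):

```python
from itertools import combinations

X = frozenset({0, 1, 2})
PX = [frozenset(c) for r in range(4) for c in combinations(X, r)]

imp = lambda a, b: (X - a) | b        # Boolean implication: a => b = ¬a ∨ b
neg = lambda a: imp(a, frozenset())   # ¬a := a => 0

for a in PX:
    assert neg(neg(a)) == a           # property 2: double negation is the identity
    for b in PX:
        for x in PX:
            # the adjunction defining implication: x ∧ a ≤ b  iff  x ≤ a => b
            assert ((x & a) <= b) == (x <= imp(a, b))
```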
- Exercise: Show that Boolean algebras can also be characterized as meet-semilattices $X$ equipped with an operation $\neg: X \to X$ for which $a \wedge x \leq b$ if and only if $x \leq \neg(a \wedge \neg b)$.
The proof above invoked the De Morgan law $\neg(x \vee y) = \neg x \wedge \neg y$. The claim is that this De Morgan law (not the other law $\neg(x \wedge y) = \neg x \vee \neg y$!) holds in a general Heyting algebra — the relevant result was actually posed as an exercise from the previous lecture:
Lemma: For any element $c$ of a Heyting algebra $X$, the function $(-) \Rightarrow c: X \to X$ is an order-reversing map (equivalently, an order-preserving map $X^{op} \to X$, or an order-preserving map $X \to X^{op}$). It is adjoint to itself, in the sense that $(-) \Rightarrow c: X^{op} \to X$ is right adjoint to $(-) \Rightarrow c: X \to X^{op}$.
Proof: First, we show that $x \leq y$ in $X$ implies $(y \Rightarrow c) \leq (x \Rightarrow c)$. But this conclusion holds iff $x \wedge (y \Rightarrow c) \leq c$, which is clear from $x \wedge (y \Rightarrow c) \leq y \wedge (y \Rightarrow c) \leq c$. Second, the adjunction holds because

$(x \Rightarrow c) \leq y$ in $X^{op}$ if and only if

$y \leq (x \Rightarrow c)$ in $X$ if and only if

$y \wedge x \leq c$ in $X$ if and only if

$x \wedge y \leq c$ in $X$ if and only if

$x \leq (y \Rightarrow c)$ in $X$. $\Box$
Corollary: $(-) \Rightarrow c: X^{op} \to X$ takes any inf which exists in $X^{op}$ to the corresponding inf in $X$. Equivalently, it takes any sup in $X$ to the corresponding inf in $X$, i.e., $(\bigvee_{s \in S} s) \Rightarrow c = \bigwedge_{s \in S} (s \Rightarrow c)$. (In particular, this applies to finite joins in $X$, and in particular, it applies to the case $c = 0$, where we conclude, e.g., the De Morgan law $\neg(x \vee y) = \neg x \wedge \neg y$.)
- Remark: If we think of sups as sums and infs as products, then we can think of implications $x \Rightarrow y$ as behaving like exponentials $y^x$. Indeed, our earlier result that $x \Rightarrow (-)$ preserves infs can then be recast in exponential notation as saying $(\prod_s y_s)^x = \prod_s (y_s)^x$, and our present corollary that $(-) \Rightarrow c$ takes sups to infs can then be recast as saying $c^{\sum_s x_s} = \prod_s c^{x_s}$. Later we will state another exponential law for implication. It is correct to assume that this is no notational accident!
Let me reprise part of the lemma (in the case $c = 0$), because it illustrates a situation which comes up over and over again in mathematics. In part it asserts that $\neg = (-) \Rightarrow 0$ is order-reversing, and that there is a three-way equivalence:

$x \leq \neg y$ if and only if $x \wedge y = 0$ if and only if $y \leq \neg x$.
This situation is an instance of what is called a “Galois connection” in mathematics. If $X$ and $Y$ are posets (or even preorders), a Galois connection between them consists of two order-reversing functions $F: X \to Y$, $G: Y \to X$ such that for all $x \in X$, $y \in Y$, we have $y \leq F(x)$ if and only if $x \leq G(y)$. (It’s actually an instance of an adjoint pair: if we consider $F$ as an order-preserving map $X \to Y^{op}$ and $G$ an order-preserving map $Y^{op} \to X$, then $F(x) \leq y$ in $Y^{op}$ if and only if $x \leq G(y)$ in $X$.)
Here are some examples:
- The original example arises of course in Galois theory. If $k$ is a field and $k \subseteq E$ is a finite Galois extension with Galois group $G = \mathrm{Gal}(E/k)$ (of field automorphisms $g: E \to E$ which fix the elements belonging to $k$), then there is a Galois connection consisting of maps $\mathrm{Fix}: P(G) \to P(E)$ and $\mathrm{Stab}: P(E) \to P(G)$. This works as follows: to each subset $S \subseteq G$, define $\mathrm{Fix}(S)$ to be $\{x \in E : g(x) = x \text{ for all } g \in S\}$. In the other direction, to each subset $T \subseteq E$, define $\mathrm{Stab}(T)$ to be $\{g \in G : g(x) = x \text{ for all } x \in T\}$. Both $\mathrm{Fix}$ and $\mathrm{Stab}$ are order-reversing (for example, the larger the subset $T \subseteq E$, the more stringent the conditions for an element $g \in G$ to belong to $\mathrm{Stab}(T)$). Moreover, we have

$T \subseteq \mathrm{Fix}(S)$ iff ($g(x) = x$ for all $g \in S$, $x \in T$) iff $S \subseteq \mathrm{Stab}(T)$

so we do get a Galois connection. It is moreover clear that for any $S \subseteq G$, $\mathrm{Fix}(S)$ is an intermediate subfield between $k$ and $E$, and for any $T \subseteq E$, $\mathrm{Stab}(T)$ is a subgroup of $G$. A principal result of Galois theory is that $\mathrm{Fix}$ and $\mathrm{Stab}$ are inverse to one another when restricted to the lattice of subgroups of $G$ and the lattice of fields intermediate between $k$ and $E$. Such a bijective correspondence induced by a Galois connection is called a Galois correspondence.
- Another basic Galois connection arises in algebraic geometry, between subsets $J \subseteq k[x_1, \ldots, x_n]$ (of a polynomial algebra over a field $k$) and subsets $V \subseteq k^n$. Given $J$, define $Z(J)$ (the zero locus of $J$) to be $\{(a_1, \ldots, a_n) \in k^n : p(a_1, \ldots, a_n) = 0 \text{ for all } p \in J\}$. On the other hand, define $I(V)$ (the ideal of $V$) to be $\{p \in k[x_1, \ldots, x_n] : p(a) = 0 \text{ for all } a \in V\}$. As in the case of Galois theory above, we clearly have a three-way equivalence

$V \subseteq Z(J)$ iff ($p(a) = 0$ for all $p \in J$, $a \in V$) iff $J \subseteq I(V)$

so that $Z$, $I$ define a Galois connection between power sets (of the $n$-variable polynomial algebra and of $n$-dimensional affine space $k^n$). One defines an (affine algebraic) variety $V \subseteq k^n$ to be a zero locus of some set. Then, on very general grounds (see below), any variety is the zero locus of its ideal. On the other hand, notice that $I(V)$ is an ideal of the polynomial algebra. Not every ideal of the polynomial algebra is the ideal of its zero locus, but according to the famous Hilbert Nullstellensatz (for $k$ algebraically closed), those ideals equal to their radical are. Thus, $Z$ and $I$ become inverse to one another when restricted to the lattice of varieties and the lattice of radical ideals, by the Nullstellensatz: there is a Galois correspondence between these objects.
- Both of the examples above are particular cases of a very general construction. Let $X$ and $Y$ be sets and let $R \subseteq X \times Y$ be any relation between them. Then $R$ sets up a Galois connection which in one direction takes a subset $S \subseteq X$ to $F(S) := \{y \in Y : (x, y) \in R \text{ for all } x \in S\}$, and in the other takes $T \subseteq Y$ to $G(T) := \{x \in X : (x, y) \in R \text{ for all } y \in T\}$. Once again we have a three-way equivalence

$T \subseteq F(S)$ iff $S \times T \subseteq R$ iff $S \subseteq G(T)$.
There are tons of examples of this flavor.
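The general construction is short enough to write out executably. The sketch below (the names `F`, `G` and the sample divisibility relation are my choices) checks the three-way equivalence, together with the unit inequality $S \subseteq GF(S)$ and the equation $F = FGF$ established in the proposition below:

```python
import itertools

X = range(1, 13)
Y = range(1, 13)
R = {(x, y) for x in X for y in Y if y % x == 0}   # sample relation: "x divides y"

F = lambda S: {y for y in Y if all((x, y) in R for x in S)}   # common multiples
G = lambda T: {x for x in X if all((x, y) in R for y in T)}   # common divisors

# all subsets of size <= 2, enough for a spot check
subsets = lambda U: [set(c) for r in range(3) for c in itertools.combinations(U, r)]

for S in subsets(X):
    assert S <= G(F(S))                  # unit inequality: S ⊆ GF(S)
    assert F(S) == F(G(F(S)))            # F = FGF
    for T in subsets(Y):
        # the three-way equivalence: T ⊆ F(S)  iff  S×T ⊆ R  iff  S ⊆ G(T)
        assert (T <= F(S)) == (S <= G(T))
```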
As indicated above, a Galois connection between posets $X$ and $Y$ is essentially the same thing as an adjoint pair between $X$ and $Y^{op}$ (or between $X^{op}$ and $Y$ if you prefer; Galois connections are after all symmetric in $X$ and $Y$). I would like to record a few basic results about Galois connections/adjoint pairs.
Proposition:
- Given order-reversing maps $F: X \to Y$, $G: Y \to X$ which form a Galois connection, we have $x \leq GF(x)$ for all $x \in X$ and $y \leq FG(y)$ for all $y \in Y$. (Given poset maps $F: X \to Y$, $G: Y \to X$ which form an adjoint pair $F \dashv G$, we have $x \leq GF(x)$ for all $x \in X$ and $FG(y) \leq y$ for all $y \in Y$.)
- Given a Galois connection as above, $F(x) = FGF(x)$ for all $x$ and $G(y) = GFG(y)$ for all $y$. (Given an adjoint pair as above, the same equations hold.) Therefore a Galois connection induces a Galois correspondence between the elements of the form $F(x)$ and the elements of the form $G(y)$.
Proof: (1.) It suffices to prove the statements for adjoint pairs. But under the assumption $F \dashv G$, $x \leq GF(x)$ if and only if $F(x) \leq F(x)$, which is certainly true. The other statement is dual.
(2.) Again it suffices to prove the equations for the adjoint pair. Applying the order-preserving map $F$ to $x \leq GF(x)$ from 1. gives $F(x) \leq FGF(x)$. Applying $FG(y) \leq y$ from 1. to $y = F(x)$ gives $FGF(x) \leq F(x)$. Hence $F(x) = FGF(x)$. The other equation is dual. $\Box$
Incidentally, the equations of 2. show why an algebraic variety $V$ is the zero locus of its ideal (see example 2. above): if $V = Z(J)$ for some set of polynomials $J$, then $V = Z(J) = Z(I(Z(J))) = Z(I(V))$. They also show that for any element $y$ in a Heyting algebra, we have $\neg \neg \neg y = \neg y$, even though $\neg \neg y = y$ is in general false.
Let $F: X \to Y$, $G: Y \to X$ be a Galois connection (or an adjoint pair $F \dashv G$). By the proposition, $c := GF: X \to X$ is an order-preserving map with the following properties:

$x \leq c(x)$ for all $x \in X$;

$c(c(x)) = c(x)$ for all $x \in X$.
Poset maps $c: X \to X$ with these properties are called closure operators. We have earlier discussed examples of closure operators: if for instance $G$ is a group, then the operator which takes a subset $S \subseteq G$ to the subgroup $\langle S \rangle$ generated by $S$ is a closure operator on $P(G)$. Or, if $X$ is a topological space, then the operator which takes a subset $A \subseteq X$ to its topological closure $\bar{A}$ is a closure operator on $P(X)$. Or, if $X$ is a poset, then the operator which takes a subset $S \subseteq X$ to its down-closure $\{a \in X : a \leq s \text{ for some } s \in S\}$ is a closure operator on $P(X)$. Examples like these can be multiplied at will.
One virtue of closure operators is that they give a useful means of constructing new posets from old. Specifically, if $c: X \to X$ is a closure operator, then a fixed point of $c$ (or a $c$-closed element of $X$) is an element $x$ such that $c(x) = x$. The collection $\mathrm{Fix}(c)$ of fixed points is partially ordered by the order in $X$. For example, the lattice of fixed points of the subgroup-closure operator above is the lattice of subgroups of $G$. For any closure operator $c$, notice that $\mathrm{Fix}(c)$ is the same as the image $c(X)$ of $c$.
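To make the subgroup example concrete, here is a Python sketch (my own illustration) of the closure operator sending a subset of the cyclic group $\mathbb{Z}_{12}$ to the subgroup it generates, with brute-force checks of the two closure properties and of the fact that the fixed points are exactly the subgroups:

```python
import itertools

def close(S, n=12):
    """Subgroup of Z_n generated by S: a closure operator on subsets of Z_n."""
    T = {0} | set(S)
    while True:
        T2 = T | {(a + b) % n for a in T for b in T}
        if T2 == T:
            return frozenset(T)
        T = T2

elems = range(12)
small = [set(c) for r in range(3) for c in itertools.combinations(elems, r)]

for S in small:
    assert S <= close(S)                       # x ≤ c(x)
    assert close(close(S)) == close(S)         # c(c(x)) = c(x)
    for T in small:
        if S <= T:
            assert close(S) <= close(T)        # monotonicity

# the fixed points (= image of close) are the subgroups of Z_12,
# one for each divisor of 12
fixed = {close(S) for S in small}
assert all(close(F) == F for F in fixed)
assert len(fixed) == 6
```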
One particular use is that the fixed points of the double negation closure $\neg \neg: X \to X$ on a Heyting algebra $X$ form a Boolean algebra $\mathrm{Fix}(\neg \neg)$, and the map $\neg \neg: X \to \mathrm{Fix}(\neg \neg)$ is a Heyting algebra map. This is not trivial! And it gives a means of constructing some rather exotic Boolean algebras (“atomless Boolean algebras”) which may not be so familiar to many readers.
The following exercises are in view of proving these results. If no one else does, I will probably give solutions next time or sometime soon.
Exercise: If $X$ is a Heyting algebra and $x, y, z \in X$, prove the “exponential law” $((x \wedge y) \Rightarrow z) = (x \Rightarrow (y \Rightarrow z))$. Conclude that $\neg(x \wedge y) = (x \Rightarrow \neg y) = (y \Rightarrow \neg x)$.
Exercise: We have seen that $(x \Rightarrow y) \wedge x \leq y$ in a Heyting algebra. Use this to prove $(x \Rightarrow y) \leq (\neg y \Rightarrow \neg x)$.
Exercise: Show that double negation $\neg \neg: X \to X$ on a Heyting algebra preserves finite meets. (The inequality $\neg \neg (x \wedge y) \leq \neg \neg x \wedge \neg \neg y$ is easy. The reverse inequality takes more work; try using the previous two exercises.)
Exercise: If $c: X \to X$ is a closure operator, show that the inclusion map $i: \mathrm{Fix}(c) \to X$ is right adjoint to the projection $c: X \to \mathrm{Fix}(c)$ to the image of $c$. Conclude that meets of elements in $\mathrm{Fix}(c)$ are calculated as they would be as elements in $X$, and also that $c: X \to \mathrm{Fix}(c)$ preserves joins.
Exercise: Show that the fixed points of the double negation operator $\neg \neg$ on a topology (as Heyting algebra) are the regular open sets, i.e., those open sets equal to the interior of their closure. Give some examples of non-regular open sets. Incidentally, is the lattice you get by taking the opposite of a topology also a Heyting algebra?
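For the last exercise, even a tiny finite topology suffices to produce non-regular open sets. In the sketch below (my own example), the Heyting negation of an open set $U$ is computed as the largest open set disjoint from $U$:

```python
X = frozenset({0, 1, 2})
# a topology on X: closed under unions and intersections, contains {} and X
opens = [frozenset(s) for s in [(), (0,), (0, 1), (0, 2), (0, 1, 2)]]

def neg(U):
    """Heyting negation in the topology: the largest open set disjoint from U."""
    result = frozenset()
    for V in opens:
        if not (V & U):
            result |= V
    return result

regular = [U for U in opens if neg(neg(U)) == U]
# {0} is NOT regular: ¬{0} = {} and ¬{} = X, so ¬¬{0} = X
assert frozenset({0}) not in regular
```

In this particular topology only $\emptyset$ and $X$ survive double negation, so $\mathrm{Fix}(\neg\neg)$ is the two-element Boolean algebra.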
In our last installment in this series on Stone duality, we introduced the notion of Heyting algebra, which captures the basic relationships between the logical connectives “and”, “or”, and “implies”. Our discussion disclosed a fundamental relationship between distributive laws and the algebra of implication, which we put to work to discover the structure of the “internal Heyting algebra logic” of a topology.
I’d like to pause and reflect on the general technique we used to establish this relationship; like the Yoneda principle and the Principle of Duality, it comes up with striking frequency, and so it will be useful for us to give it a name. As it turns out, this particular proof technique is analogous to the way adjoints are used in linear algebra. Such analogies go all the way back to work of C. S. Peirce, who like Boole was a great pioneer in the discovery of relationships between algebra and logic. At a deeper level, similar analogies were later rediscovered in category theory, and are connected with some of the most potent ideas category theory has to offer.
Our proof that meets distribute over sups in the presence of an implication operator is an example of this technique. Here is another example of similar flavor.
Theorem: In a Heyting algebra $X$, the operator $a \Rightarrow (-): X \to X$ preserves any infs which happen to exist in $X$, for any element $a \in X$. [In particular, this operator is a morphism of meet-semilattices, i.e., $a \Rightarrow (x \wedge y) = (a \Rightarrow x) \wedge (a \Rightarrow y)$, and $a \Rightarrow 1 = 1$.]
Proof: Suppose that $S \subseteq X$ has an inf, which here will be denoted $\bigwedge S$. Then for all $y \in X$, we have

$y \leq a \Rightarrow \bigwedge S$ if and only if

$y \wedge a \leq \bigwedge S$ if and only if

(for all $s \in S$, $y \wedge a \leq s$) if and only if

for all $s \in S$, $y \leq a \Rightarrow s$.

By the defining property of inf, these logical equivalences show that $a \Rightarrow \bigwedge S$ is indeed the inf of the subset $\{a \Rightarrow s : s \in S\}$, or in other words that $a \Rightarrow \bigwedge S = \bigwedge_{s \in S} (a \Rightarrow s)$, as desired. $\Box$
In summary, what we did in this proof is “slide” the operator $a \Rightarrow (-)$ on the right of the inequality over to the operator $a \wedge (-)$ on the left, then invoke the defining property of infs, and then slide back to $a \Rightarrow (-)$ on the right. This sliding trick is analogous to how adjoint mappings work in linear algebra.
In fact, everything we have done so far with posets can be translated in terms of matrix algebra, provided that our matrix entries, instead of being real or complex numbers, are truth values ($1$ for “true”, $0$ for “false”). These truth values are added and multiplied in the way familiar from truth tables, with join playing the role of addition and meet playing the role of multiplication. In fact the lattice $\mathbf{2} = \{0, 1\}$ is a very simple distributive lattice, and so most of the familiar arithmetic properties of addition and multiplication (associativity, commutativity, distributivity) do carry over, which is all we need to carry out the most basic aspects of matrix algebra. However, observe that $1$ has no additive inverse (for here $1 + 1 = 1 \vee 1 = 1$) — the type of structure we are dealing with is often called a “rig” (like a ring, but without assuming negatives). On the other hand, this lattice is, conveniently, a sup-lattice, thinking of sups as arbitrary sums, whether finitary or infinitary.
Peirce recognized that a relation can be classified by a truth-valued matrix. Take for example a binary relation on a set $X$, i.e., a subset $R \subseteq X \times X$. We can imagine each point $(x, y) \in X \times X$ as a pixel in the plane, and highlight $R$ by lighting up just those pixels which belong to $R$. This is the same as giving an $X \times X$-matrix, with rows indexed by elements $x$ and columns by elements $y$, where the $(x, y)$-entry is $1$ (on) if $(x, y)$ is in $R$, and $0$ if not. In a similar way, any relation $R \subseteq X \times Y$ is classified by an $X \times Y$-matrix whose entries are truth values.
As an example, the identity matrix has a $1$ at the $(x, y)$-entry if and only if $x = y$. Thus the identity matrix classifies the equality relation.
A poset is a set $X$ equipped with a binary relation $R = (\leq)$ satisfying the reflexive, transitive, and antisymmetry properties. Let us translate these into matrix algebra terms. First reflexivity: it says that $x = y$ implies $x \leq y$. In matrix algebra terms, it says that the identity matrix $1$ is bounded above, entry by entry, by the matrix $R$, which we abbreviate in the customary way:

(Reflexivity) $1 \leq R$.
Now let’s look at transitivity. It says

($x \leq y$ and $y \leq z$) implies $x \leq z$.

The “and” here refers to the meet or multiplication in the rig of truth values $\mathbf{2}$, and the existential quantifier over the middle variable $y$ can be thought of as a (possibly infinitary) join or sum indexed over elements $y$. Thus, for each pair $(x, z)$, the hypothesis of the implication has truth value

$\bigvee_y R(x, y) \wedge R(y, z)$

which is just the $(x, z)$-entry of the square of the matrix $R$. Therefore, transitivity can be very succinctly expressed in matrix algebra terms as the condition

(Transitivity) $R^2 \leq R$.
- Remark: More generally, given a relation $R \subseteq X \times Y$ from $X$ to $Y$, and another relation $S \subseteq Y \times Z$ from $Y$ to $Z$, the relational composite $S \circ R \subseteq X \times Z$ is defined to be the set of pairs $(x, z)$ for which there exists $y \in Y$ with $(x, y) \in R$ and $(y, z) \in S$. But this just means that its classifying matrix is the ordinary matrix product of the classifying matrices of $R$ and $S$!
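The matrix formulation is easy to test mechanically. Here is a Python sketch (my own illustration) that builds the classifying matrix of the divisibility order on $\{1, \ldots, 8\}$, implements matrix multiplication over the rig of truth values, and checks the three poset axioms in their matrix forms:

```python
n = 8
elems = list(range(1, n + 1))

# classifying matrix of the divisibility order: R[i][j] = 1 iff elems[i] divides elems[j]
R = [[1 if y % x == 0 else 0 for y in elems] for x in elems]

def bool_matmul(A, B):
    """Matrix product over the rig of truth values: join of meets."""
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

R2 = bool_matmul(R, R)
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

leq = lambda A, B: all(A[i][j] <= B[i][j] for i in range(n) for j in range(n))
assert leq(I, R)        # reflexivity:   1 ≤ R
assert leq(R2, R)       # transitivity:  R² ≤ R
# antisymmetry: R ∧ Rᵀ ≤ 1 (meet taken entry by entry)
assert all(min(R[i][j], R[j][i]) <= I[i][j] for i in range(n) for j in range(n))
```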
Let’s now look at the antisymmetry condition: ($x \leq y$ and $y \leq x$) implies $x = y$. The clause $y \leq x$ is the flip of $x \leq y$; at the matrix level, this flip corresponds to taking the transpose. Thus antisymmetry can be expressed in matrix terms as

(Antisymmetry) $R \wedge R^T \leq 1$

where $R^T$ denotes the transpose of $R$, $1$ the identity matrix, and the matrix meet $\wedge$ means we take the meet at each entry.
- Remark: From the matrix algebra perspective, the antisymmetry axiom is less well motivated than the reflexivity and transitivity axioms. There’s a moral hiding beneath that story: from the category-theoretic perspective, the antisymmetry axiom is relatively insignificant. That is, if we view a poset as a category, then the antisymmetry condition is tantamount to the condition that isomorphic objects are equal (in the parlance, one says the category is “skeletal”) — this extra condition makes no essential difference, because isomorphic objects are essentially the same anyway. So: if we were to simply drop the antisymmetry axiom but keep the reflexivity and transitivity axioms (leading to what are called preordered sets, as opposed to partially ordered sets), then the theory of preordered sets develops exactly as the theory of partially ordered sets, except that in places where we conclude “$x$ is equal to $y$” in the theory of posets, we would generally conclude “$x$ is isomorphic to $y$” in the theory of preordered sets.
Preordered sets do occur in nature. For example, the set of sentences in a theory is preordered by the entailment relation ($p \leq q$ if $q$ is derivable from $p$ in the theory). (The way one gets a poset out of this is to pass to a quotient set, by identifying sentences which are logically equivalent in the theory.)
Exercises:
- (For those who know some topology) Suppose $X$ is a topological space. Given $x, y \in X$, define $x \leq y$ if $x$ belongs to the closure of $\{y\}$; show this is a preorder. Show this preorder is a poset precisely when $X$ is a $T_0$-space.
- If $X$ carries a group structure, define $x \leq y$ for elements $x, y \in X$ if $y = x^n$ for some integer $n$; show this is a preorder. When is it a poset?
Since posets or preorders are fundamental to everything we’re doing, I’m going to reserve a special pairing notation for their classifying matrices: define
$\langle x, y \rangle = 1$ if and only if $x \leq y$.
Many of the concepts we have developed so far for posets can be succinctly expressed in terms of the pairing.
Example: The Yoneda principle (together with its dual) is simply the statement that if $X$ is a poset, then $x \leq y$ if and only if $\langle -, x \rangle \leq \langle -, y \rangle$ (as functionals valued in $\mathbf{2}$) if and only if $\langle y, - \rangle \leq \langle x, - \rangle$.
Example: A poset mapping from a poset $X$ to a poset $Y$ is a function $f: X \to Y$ such that $\langle x, x' \rangle \leq \langle f(x), f(x') \rangle$.
Example: If $X$ is a poset, its dual or opposite $X^{op}$ has the same elements but the opposite order, i.e., $\langle x, y \rangle_{X^{op}} = \langle y, x \rangle_X$. The principle of duality says that the opposite of a poset is a poset. This can be (re)proved by invoking formal properties of matrix transpose, e.g., if $R^2 \leq R$, then $(R^\top)^2 = (R^2)^\top \leq R^\top$.
By far the most significant concept that can be expressed in terms of these pairings is that of adjoint mappings:
Definition: Let $X, Y$ be posets [or preorders], and let $f: X \to Y$, $g: Y \to X$ be poset mappings. We say $(f, g)$ is an adjoint pair (with $f$ the left adjoint of $g$, and $g$ the right adjoint of $f$) if
$\langle f(x), y \rangle = \langle x, g(y) \rangle$
or, in other words, if $f(x) \leq y$ if and only if $x \leq g(y)$. We write $f \dashv g$. Notice that the concept of left adjoint is dual to the concept of right adjoint (N.B.: they are not the same, because clearly the pairing $\langle -, - \rangle$ is not generally symmetric in its two arguments).
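A small illustration of the definition (my own example, not from the text): on the integers with the usual order, multiplication by 2 is left adjoint to floor division by 2, since $2x \leq y$ if and only if $x \leq \lfloor y/2 \rfloor$. A quick Python check over a finite window of integers:

```python
# Check the adjunction f -| g between f(x) = 2*x and g(y) = y // 2
# on the integers with the usual order: f(x) <= y iff x <= g(y).
def f(x):
    return 2 * x

def g(y):
    return y // 2   # floor division (rounds toward minus infinity)

window = range(-20, 21)
adjoint_pair = all((f(x) <= y) == (x <= g(y))
                   for x in window for y in window)
```

Note that floor division (not truncation toward zero) is what makes the equivalence hold at negative integers; that choice is exactly what being a right adjoint forces.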
Here are some examples which illustrate the ubiquity of this concept:
- Let $X$ be a poset. Let $X \times X$ be the poset where $(x, y) \leq (x', y')$ iff ($x \leq x'$ and $y \leq y'$). There is an obvious poset mapping $\Delta: X \to X \times X$, the diagonal mapping, which takes $x$ to $(x, x)$. Then a meet operation $\wedge: X \times X \to X$ is precisely a right adjoint to the diagonal mapping. Indeed, it says that $\Delta(x) = (x, x) \leq (a, b)$ if and only if $x \leq a \wedge b$.
- Dually, a join operation $\vee: X \times X \to X$ is precisely a left adjoint to the diagonal mapping $\Delta: X \to X \times X$.
- More generally, for any set $S$, there is a diagonal map $\Delta: X \to X^S$ which maps $x$ to the $S$-tuple $(x)_{s \in S}$. Its right adjoint, if one exists, sends an $S$-tuple $(x_s)_{s \in S}$ to the inf of the set $\{x_s : s \in S\}$. Its left adjoint would send the tuple to the sup of that set.
- If $X$ is a Heyting algebra, then for each $a \in X$, the conjunction operator $a \wedge -: X \to X$ is left adjoint to the implication operator $a \Rightarrow -: X \to X$.
- If $X$ is a sup-lattice, then the operator $\sup: PX \to X$ which sends a subset $S \subseteq X$ to $\sup S$ is left adjoint to the Dedekind embedding $i: X \to PX$. Indeed, we have $\sup S \leq y$ if and only if ($s \leq y$ for all $s \in S$) if and only if $S \subseteq i(y)$.
As items 1, 2, and 4 indicate, the rules for how the propositional connectives operate are governed by adjoint pairs. This gives some evidence for Lawvere’s great insight that all rules of inference in logic are expressed by interlocking pairs of adjoint mappings.
Proposition: If $f \dashv g$ and $f' \dashv g'$, where $f: X \to Y$ and $f': Y \to Z$ are composable mappings, then $f' f \dashv g g'$.
Proof: $\langle f' f(x), z \rangle = \langle f(x), g'(z) \rangle = \langle x, g g'(z) \rangle$. Notice that the statement is analogous to the usual rule $(AB)^\dagger = B^\dagger A^\dagger$, where $(-)^\dagger$ refers to taking an adjoint with respect to given inner product forms.
We can use this proposition to give slick proofs of some results we’ve seen. For example, to prove that Heyting algebras are distributive lattices, i.e., that $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$, just take left adjoints on both sides of the tautology $\Delta \circ (x \Rightarrow -) = ((x \Rightarrow -) \times (x \Rightarrow -)) \circ \Delta$, where $x \Rightarrow -$ is right adjoint to $x \wedge -$. The left adjoint of the left side of the tautology is (by the proposition) $(x \wedge -) \circ \vee$ applied to $(y, z)$, i.e., $x \wedge (y \vee z)$. The left adjoint of the right side is $\vee \circ ((x \wedge -) \times (x \wedge -))$ applied to $(y, z)$, i.e., $(x \wedge y) \vee (x \wedge z)$. The conclusion follows.
Much more generally, we have the
Theorem: A right adjoint $g: Y \to X$ preserves any infs which exist in $Y$. Dually, a left adjoint $f: X \to Y$ preserves any sups which exist in $X$.
Proof: $\langle x, g(\inf S) \rangle = \langle f(x), \inf S \rangle = \inf_{s \in S} \langle f(x), s \rangle$, where the last inf is interpreted in the inf-lattice $\mathbf{2}$. This equals $\inf_{s \in S} \langle x, g(s) \rangle$. This completes the proof of the first statement (why?). The second follows from duality.
Exercise: If $X$ is a Heyting algebra, then there is a poset mapping $(- \Rightarrow c): X^{op} \to X$ for any element $c$. Describe the left adjoint of this mapping. Conclude that this mapping takes infs in $X^{op}$ (i.e., sups in $X$) to the corresponding infs in $X$.
Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and sup). Examples include the poset $PX$ of all subsets of a set $X$, and the poset $\mathrm{Sub}(V)$ of all subspaces of a vector space $V$.
I take it that most readers are already familiar with many of the properties of the poset $PX$; there is for example the distributive law $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$, and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset $\mathrm{Sub}(V)$, as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to $\mathrm{Sub}(V)$ is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, classical logic if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).
To get a sense of the weirdness of $\mathrm{Sub}(V)$, take for example a 2-dimensional vector space $V$. The bottom element is the zero space $0$, the top element is $V$, and the rest of the elements of $\mathrm{Sub}(V)$ are 1-dimensional: lines through the origin. For distinct 1-dimensional spaces $x, y$, there is no relation $x \leq y$ unless $x$ and $y$ coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:
        V = 1
       /  |  \
      /   |   \
     x    y    z
      \   |   /
       \  |  /
          0
Observe that for distinct elements $x, y, z$ in the middle level, we have for example $x \wedge y = 0$ (0 is the largest element contained in both $x$ and $y$), and also for example $x \vee y = 1$ (1 is the smallest element containing $x$ and $y$). It follows that $x \wedge (y \vee z) = x \wedge 1 = x$, whereas $(x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0$. The distributive law fails in $\mathrm{Sub}(V)$!
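This failure can be verified computationally on the smallest example available (my own choice, not from the text): the vector space $V = \mathbb{F}_2^2$ over the two-element field has exactly three 1-dimensional subspaces, and its subspace lattice is the one pictured above.

```python
# Subspaces of V = F_2 x F_2, represented as frozensets of vectors.
# The three lines x, y, z through the origin, plus bottom 0 and top V.
zero = frozenset({(0, 0)})
x = frozenset({(0, 0), (1, 0)})
y = frozenset({(0, 0), (0, 1)})
z = frozenset({(0, 0), (1, 1)})
V = frozenset({(0, 0), (1, 0), (0, 1), (1, 1)})

def meet(a, b):
    """Meet of subspaces = their intersection."""
    return a & b

def join(a, b):
    """Join of subspaces = smallest subspace containing both:
    close the union under addition mod 2."""
    span = set(a | b)
    changed = True
    while changed:
        changed = False
        for u in list(span):
            for v in list(span):
                w = ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)
                if w not in span:
                    span.add(w)
                    changed = True
    return frozenset(span)

lhs = meet(x, join(y, z))              # x /\ (y \/ z) = x /\ V = x
rhs = join(meet(x, y), meet(x, z))    # (x /\ y) \/ (x /\ z) = 0 \/ 0 = 0
```

Here `lhs` is the line `x` while `rhs` is the zero subspace, so the two sides of the distributive law really do differ.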
Definition: A lattice is distributive if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$ for all $x, y, z$. That is to say, a lattice $X$ is distributive if the map $x \wedge -: X \to X$, taking an element $y$ to $x \wedge y$, is a morphism of join-semilattices.
- Exercise: Show that in a meet-semilattice, $x \wedge -$ is a poset map. Is it also a morphism of meet-semilattices? If $X$ has a bottom element $0$, show that the map $x \wedge -$ preserves it.
- Exercise: Show that in any lattice, we at least have the inequality $(x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z)$ for all elements $x, y, z$.
Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:
Theorem: The notion of distributive lattice is self-dual.
Proof: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, $x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$, follows from the distributive lattice axioms.
Expand the right side $(x \vee y) \wedge (x \vee z)$ to $((x \vee y) \wedge x) \vee ((x \vee y) \wedge z)$, by distributivity. This reduces to $x \vee ((x \vee y) \wedge z)$, by an absorption law. Expand this again, by distributivity, to $x \vee ((x \wedge z) \vee (y \wedge z))$. This reduces to $x \vee (y \wedge z)$, by the other absorption law. This completes the proof.
Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:
Definition: A frame is a sup-lattice $X$ for which $x \wedge -: X \to X$ is a morphism of sup-lattices, for every $x \in X$. In other words, for every subset $S \subseteq X$, we have $x \wedge \sup S = \sup \{x \wedge s : s \in S\}$, or, as is often written,
$x \wedge \bigvee_{s \in S} s = \bigvee_{s \in S} (x \wedge s).$
Example: A power set $PX$, as always partially ordered by inclusion, is a frame. In this case, the frame condition means that for any subset $A \subseteq X$ and any collection $\{B_i\}_{i \in I}$ of subsets of $X$, we have
$A \cap \bigcup_{i \in I} B_i = \bigcup_{i \in I} (A \cap B_i).$
This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.
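The power-set instance of the law is easy to spot-check directly. A minimal sketch (the particular sets `A` and `Bs` are my own arbitrary choices):

```python
from functools import reduce

# Check A ∩ (∪ B_i) = ∪ (A ∩ B_i) for one concrete family of subsets.
A = {1, 2, 3, 4}
Bs = [{2, 3}, {4, 5}, {6}, set()]

union_all = reduce(set.union, Bs, set())            # ∪ B_i
lhs = A & union_all                                 # A ∩ (∪ B_i)
rhs = reduce(set.union, (A & B for B in Bs), set()) # ∪ (A ∩ B_i)
```

Both sides come out to the same set, as the naive-set-theory argument predicts; the alternative proof promised above will derive this from the adjunction between intersection and implication.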
Example: If $X$ is a set, a topology on $X$ is a subset $T \subseteq PX$ of the power set, partially ordered by inclusion as $PX$ is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element $\emptyset$ and the empty meet or top element $X$ of $PX$ are elements of $T$, and also:
- If $U, V$ are elements of $T$, then so is $U \cap V$.
- If $\{U_i\}_{i \in I}$ is a collection of elements of $T$, then $\bigcup_{i \in I} U_i$ is an element of $T$.
A topological space is a set $X$ which is equipped with a topology $T$; the elements of the topology are called open subsets of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in $PX$ (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.
The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! but just to be on the safe side I’ll now give one example of a topological space, with a promise of more to come later. Let $\mathbb{R}^n$ be the set of $n$-tuples of real numbers. First, define the open ball in $\mathbb{R}^n$ centered at a point $x$ and of radius $r > 0$ to be the set $\{y \in \mathbb{R}^n : ||x - y|| < r\}$. Then, define a subset $U \subseteq \mathbb{R}^n$ to be open if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology $T$ is by definition the collection of open sets.
It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the triangle inequality. (Exercise?)
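Here is a numerical sketch of that lemma in the plane (the specific balls are my own choice): for a point $p$ in the intersection of two overlapping balls, the slack radius $r = \min(r_1 - d(p, c_1),\ r_2 - d(p, c_2))$ is positive, and the triangle inequality guarantees that the ball of radius $r$ about $p$ lies inside both.

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

c1, r1 = (0.0, 0.0), 2.0     # two overlapping open balls in the plane
c2, r2 = (1.5, 0.0), 1.0

random.seed(0)
ok = True
hits = 0
for _ in range(1000):
    # Sample a point in the intersection by rejection sampling.
    p = (random.uniform(-2, 3), random.uniform(-2, 2))
    if not (dist(p, c1) < r1 and dist(p, c2) < r2):
        continue
    hits += 1
    # The slack radius is positive, so a small ball around p fits.
    r = min(r1 - dist(p, c1), r2 - dist(p, c2))
    ok = ok and r > 0
    # Spot-check: points within r of p stay inside both balls --
    # this is exactly the triangle inequality at work.
    for _ in range(20):
        theta = random.uniform(0, 2 * math.pi)
        s = random.uniform(0, r * 0.99)
        q = (p[0] + s * math.cos(theta), p[1] + s * math.sin(theta))
        ok = ok and dist(q, c1) < r1 and dist(q, c2) < r2
```

Of course this sampling is evidence rather than proof; the precise argument is the triangle-inequality computation $d(q, c_1) \leq d(q, p) + d(p, c_1) < r + d(p, c_1) \leq r_1$.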
Topology is a huge field in its own right; much of our interest here will be in its interplay with logic. To that end, I want to bring in, in addition to the connectives “and” and “or” we’ve discussed so far, the implication connective in logic. Most readers probably know that in ordinary logic, the formula $p \Rightarrow q$ (“$p$ implies $q$”) is equivalent to “either not $p$, or $q$” — symbolically, we could define $p \Rightarrow q$ as $\neg p \vee q$. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.
The main property we ask of implication is modus ponens: given $p$ and $p \Rightarrow q$, we may infer $q$. In symbols, writing the inference or entailment relation as $\leq$, this is expressed as $(p \Rightarrow q) \wedge p \leq q$. And, we ask that implication be the weakest possible such assumption, i.e., that material implication $p \Rightarrow q$ be the weakest $a$ whose presence in conjunction with $p$ entails $q$. In other words, for given $p$ and $q$, we now define implication $p \Rightarrow q$ by the property
$a \leq (p \Rightarrow q)$ if and only if $a \wedge p \leq q.$
As a very easy exercise, show by Yoneda that an implication $p \Rightarrow q$ is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity holds:
Theorem:
- (1) If $X$ is a meet-semilattice which admits an implication operator, then for every element $p$, the operator $p \wedge -$ preserves any sups which happen to exist in $X$.
- (2) If $X$ is a frame, then $X$ admits an implication operator.
Proof: (1) Suppose $S \subseteq X$ has a sup in $X$, here denoted $\sup S$. We have
$p \wedge \sup S \leq q$
if and only if
$\sup S \leq (p \Rightarrow q)$
if and only if
$s \leq (p \Rightarrow q)$ for all $s \in S$, if and only if
$p \wedge s \leq q$ for all $s \in S$, if and only if
$\sup_{s \in S} (p \wedge s) \leq q$.
Since this is true for all $q$, the (dual of the) Yoneda principle tells us that $p \wedge \sup S = \sup_{s \in S} (p \wedge s)$, as desired. (We don’t need to add the hypothesis that the sup on the right side exists, for the first four lines after “We have” show that $p \wedge \sup S$ satisfies the defining property of that sup.)
(2) Suppose $p, q$ are elements of a frame $X$. Define $p \Rightarrow q$ to be $\sup \{a \in X : a \wedge p \leq q\}$. By definition, if $a \wedge p \leq q$, then $a \leq (p \Rightarrow q)$. Conversely, if $a \leq (p \Rightarrow q)$, then
$a \wedge p \leq (p \Rightarrow q) \wedge p = \sup \{a' \wedge p : a' \wedge p \leq q\},$
where the equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by $q$ (according to the defining property of sups). Hence $a \wedge p \leq q$, as desired.
Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as $PX$, so long as we trust that $\neg p \vee q$ really does what we ask of implication. We’ll come to that point again later.
Part (2) has some interesting consequences vis-à-vis topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let $U, V$ be open sets in a topology $T$. According to our prescription, we define $U \Rightarrow V$ as the sup (the union) of all open sets $W$ with the property that $W \cap U \subseteq V$. We can think of this inclusion as living in the power set $PX$. Then, assuming our formula $\neg U \cup V$ for implication in the Boolean algebra $PX$ (where $\neg U$ denotes the complement of $U$), we would have $W \cap U \subseteq V$ if and only if $W \subseteq \neg U \cup V$. And thus, our implication in the topology is the union of all open sets contained in the (usually non-open) set $\neg U \cup V$. That is to say, $U \Rightarrow V$ is the largest open contained in $\neg U \cup V$, otherwise known as the interior of $\neg U \cup V$. Hence our formula:
$U \Rightarrow V = \mathrm{int}(\neg U \cup V)$
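We can test this formula on a small toy topology (the space and topology below are my own invented example). The code computes $U \Rightarrow V$ both ways: as the union of all opens $W$ with $W \cap U \subseteq V$, and as the largest open contained in $\neg U \cup V$:

```python
X = frozenset({1, 2, 3})
# A topology on X: closed under finite intersections and arbitrary unions.
T = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 3}), X]

def implies(U, V):
    """U => V as the union of all open W with W ∩ U ⊆ V."""
    result = set()
    for W in T:
        if W & U <= V:          # <= on frozensets is the subset relation
            result |= W
    return frozenset(result)

def interior(S):
    """Largest open subset contained in S (union of all opens inside S)."""
    result = set()
    for W in T:
        if W <= S:
            result |= W
    return frozenset(result)

# Check U => V == int((X \ U) ∪ V) for every pair of opens.
agree = all(implies(U, V) == interior((X - U) | V) for U in T for V in T)
```

For instance $\{1,2\} \Rightarrow \{1\}$ comes out as $\{1,3\}$: the interior of the non-open set $\{3\} \cup \{1\}$.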
Definition: A Heyting algebra is a lattice $X$ which admits an implication $p \Rightarrow q$ for any two elements $p, q \in X$. A complete Heyting algebra is a complete lattice which admits an implication for any two elements.
Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make intensional distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a morphism of frames is a poset map which preserves finite meets and arbitrary sups. A morphism of Heyting algebras preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A morphism of complete Heyting algebras also preserves all structure in sight (sups, infs, and implication).
Heyting algebras are usually not Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We’ll be speaking more about that soon, but for now I’ll remark that Heyting algebra is the algebra which underlies intuitionistic propositional calculus.
Exercise: Show that $p \leq q$ if and only if $p \Rightarrow q = 1$ in a Heyting algebra.
Exercise: (For those who know some general topology.) In a Heyting algebra, we define the negation $\neg p$ to be $p \Rightarrow 0$. For the Heyting algebra given by a topology, what can you say about $\neg \neg U$ when $U$ is open and dense?
Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice (formalizing the conjunction operator “and”), as a first step on the way to introducing Boolean algebras. Our larger goal in this series will be to discuss Stone duality, where it is shown how Boolean algebras can be represented “concretely”, in terms of the topology of their so-called Stone spaces — a wonderful meeting ground for algebra, topology, logic, geometry, and even analysis!
In this installment we will look at the notion of lattice and various examples of lattice, and barely scratch the surface — lattice theory is a very deep and multi-faceted theory with many unanswered questions. But the idea is simple enough: lattices formalize the notions of “and” and “or” together. Let’s have a look.
Let $X$ be a poset. If $x, y$ are elements of $X$, a join of $x$ and $y$ is an element $j$ with the property that for any $a \in X$,
$j \leq a$ if and only if ($x \leq a$ and $y \leq a$).
For a first example, consider the poset $PX$ of subsets of a set $X$ ordered by inclusion. The join in that case is given by taking the union, i.e., we have
$A \cup B \subseteq C$ if and only if ($A \subseteq C$ and $B \subseteq C$).
Given the close connection between unions of sets and the disjunction “or”, we can therefore say, roughly, that joins are a reasonable mathematical way to formalize the structure of disjunction. We will say a little more on that later when we discuss mathematical logic.
Notice there is a close formal resemblance between how we defined joins and how we defined meets. Recall that a meet of $x$ and $y$ is an element $m$ such that for all $a \in X$,
$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).
Curiously, the logical structure in the definitions of meet and join is essentially the same; the only difference is that we switched the inequalities (i.e., replaced all instances of $\leq$ by $\geq$). This is an instance of a very important concept. In the theory of posets, the act of modifying a logical formula or theorem by switching all the inequalities but otherwise leaving the logical structure the same is called taking the dual of the formula or theorem. Thus, we would say that the dual of the notion of meet is the notion of join (and vice-versa). This turns out to be a very powerful idea, which in effect will allow us to cut our work in half.
(Just to put in some fine print or boilerplate, let me just say that a formula in the first-order theory of posets is a well-formed expression in first-order logic (involving the usual logical connectives and logical quantifiers and equality over a domain $X$), which can be built up by taking $\leq$ as a primitive binary predicate on $X$. A theorem in the theory of posets is a sentence (a closed formula, meaning that all variables are bound by quantifiers) which can be deduced, following standard rules of inference, from the axioms of reflexivity, transitivity, and antisymmetry. We occasionally also consider formulas and theorems in second-order logic (permitting logical quantification over the power set $PX$), and in higher-order logic. If this legalistic language is scary, don’t worry — just check the appropriate box in the End User Agreement, and reason the way you normally do.)
The critical item to install before we’re off and running is the following meta-principle:
Principle of Duality: If a logical formula F is a theorem in the theory of posets, then so is its dual F’.
Proof: All we need to do is check that the duals of the axioms in the theory of posets are also theorems; then F’ can be proved just by dualizing the entire proof of F. Now the dual of the reflexivity axiom, $x \leq x$, is itself! — and of course an axiom is a theorem. The transitivity axiom, ($x \leq y$ and $y \leq z$) implies $x \leq z$, is also self-dual (when you dualize it, it looks essentially the same except that the variables $x$ and $z$ are switched — and there is a basic convention in logic that two sentences which differ only by renaming the variables are considered syntactically equivalent). Finally, the antisymmetry axiom is also self-dual in this way. Hence we are done.
So, for example, by the principle of duality, we know automatically that the join of two elements is unique when it exists — we just dualize our earlier theorem that the meet is unique when it exists. The join of two elements $x$ and $y$ is denoted $x \vee y$.
Be careful, when you dualize, that any shorthand you used to abbreviate an expression in the language of posets is also replaced by its dual. For example, the dual of the notation $x \wedge y$ is $x \vee y$ (and vice-versa of course), and so the dual of the associativity law $(x \wedge y) \wedge z = x \wedge (y \wedge z)$ which we proved for meet is (for all $x, y, z$) $(x \vee y) \vee z = x \vee (y \vee z)$. In fact, we can say
Theorem: The join operation is associative, commutative, and idempotent.
Proof: Just apply the principle of duality to the corresponding theorem for the meet operation.
Just to get used to these ideas, here are some exercises.
- State the dual of the Yoneda principle (as stated here).
- Prove the associativity of join from scratch (from the axioms for posets). If you want, you may invoke the dual of the Yoneda principle in your proof. (Note: in the sequel, we will apply the term “Yoneda principle” to cover both it and its dual.)
To continue: we say a poset is a join-semilattice if it has all finite joins (including the empty join, which is the bottom element $0$ satisfying $0 \leq x$ for all $x$). A lattice is a poset which has all finite meets and finite joins.
Time for some examples.
- The set of natural numbers 0, 1, 2, 3, … under the divisibility order ($x \leq y$ if $x$ divides $y$) is a lattice. (What is the join of two elements? What is the bottom element?)
- The set of natural numbers under the usual order is a join-semilattice (the join of two elements here is their maximum), but not a lattice (because it lacks a top element).
- The set $PX$ of subsets of a set $X$ is a lattice. The join of two subsets is their union, and the bottom element is the empty set.
- The set of subspaces of a vector space $V$ is a lattice. The meet of two subspaces is their ordinary intersection; the join of two subspaces $U$, $W$ is the vector space which they jointly generate (i.e., the set of vector sums $u + w$ with $u \in U$ and $w \in W$, which is closed under addition and scalar multiplication).
The join in the last example is not the naive set-theoretic union of course (and similar remarks hold for many other concrete lattices, such as the lattice of all subgroups of a group, and the lattice of ideals of a ring), so it might be worth asking if there is a uniform way of describing joins in cases like these. Certainly the idea of taking some sort of closure of the ordinary union seems relevant (e.g., in the vector space example, close up the union of $U$ and $W$ under the vector space operations), and indeed this can be made precise in many cases of interest.
To explain this, let’s take a fresh look at the definition of join: the defining property was
$x \vee y \leq a$ if and only if ($x \leq a$ and $y \leq a$).
What this is really saying is that among all the elements $a$ which “contain” both $x$ and $y$, the element $x \vee y$ is the absolute minimum. This suggests a simple idea: why not just take the “intersection” (i.e., meet) of all such elements $a$ to get that absolute minimum? In effect, construct joins as certain kinds of meets! For example, to construct the join of two subgroups $H$, $K$, take the intersection of all subgroups containing both $H$ and $K$ — that intersection is the group-theoretic closure of the union $H \cup K$.
There’s a slight catch: this may involve taking the meet of infinitely many elements. But there is no difficulty in saying what this means:
Definition: Let $X$ be a poset, and suppose $S \subseteq X$. The infimum of $S$, if it exists, is an element $m \in X$ such that for all $a \in X$, $a \leq m$ if and only if $a \leq s$ for all $s \in S$.
By the usual Yoneda argument, infima are unique when they exist (you might want to write that argument out to make sure it’s quite clear). We denote the infimum of $S$ by $\inf S$.
We say that a poset $X$ is an inf-lattice if there is an infimum for every subset. Similarly, the supremum of $S$, if it exists, is an element $j \in X$ such that for all $a \in X$, $j \leq a$ if and only if $s \leq a$ for all $s \in S$. A poset is a sup-lattice if there is a supremum for every subset. [I’ll just quickly remark that the notions of inf-lattice and sup-lattice belong to second-order logic, since they involve quantifying over all subsets $S \subseteq X$ (or over all elements of $PX$).]
Trivially, every inf-lattice is a meet-semilattice, and every sup-lattice is a join-semilattice. More interestingly, we have the
Theorem: Every inf-lattice is a sup-lattice (!). Dually, every sup-lattice is an inf-lattice.
Proof: Suppose $X$ is an inf-lattice, and let $S \subseteq X$. Let $U$ be the set of upper bounds of $S$. I claim that $\inf U$ (“least upper bound”) is the supremum of $S$. Indeed, from $\inf U \leq \inf U$ and the definition of infimum, we know that $\inf U \leq a$ if $a \in U$, i.e., if $s \leq a$ for all $s \in S$. On the other hand, we also know that if $s \in S$, then $s \leq a$ for every $a \in U$, and hence $s \leq \inf U$ by the defining property of infimum (i.e., $\inf U$ really is an upper bound of $S$). So, if $\inf U \leq a$, we conclude by transitivity that $s \leq a$ for every $s \in S$. This completes the proof.
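The construction in the proof can be carried out mechanically. A sketch on the divisor lattice of 60 (my own choice of example), where $\leq$ is divisibility and inf = gcd: the sup of a subset comes out as the inf of its set of upper bounds, which is exactly the lcm.

```python
from math import gcd
from functools import reduce

N = 60
divisors = [d for d in range(1, N + 1) if N % d == 0]

def leq(a, b):
    return b % a == 0       # a <= b means "a divides b"

def inf(subset):
    """Infimum in the divisor lattice: gcd, with inf of the
    empty set being the top element N."""
    return reduce(gcd, subset, N)

def sup_via_inf(subset):
    """Supremum constructed as the inf of the set of upper bounds,
    exactly as in the proof above."""
    upper_bounds = [u for u in divisors if all(leq(s, u) for s in subset)]
    return inf(upper_bounds)
```

For example `sup_via_inf([4, 6])` gives 12 (the lcm), and the empty sup comes out as the bottom element 1.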
Corollary: Every finite meet-semilattice is a lattice.
Even though every inf-lattice is a sup-lattice and conversely (sometimes people just call them “complete lattices”), there are important distinctions to be made when we consider what is the appropriate notion of homomorphism. The notions are straightforward enough: a morphism of meet-semilattices $X \to Y$ is a function $f$ which takes finite meets in $X$ to finite meets in $Y$ ($f(x \wedge y) = f(x) \wedge f(y)$, and $f(1) = 1$, where the 1’s denote top elements). There is a dual notion of morphism of join-semilattices ($f(x \vee y) = f(x) \vee f(y)$ and $f(0) = 0$, where the 0’s denote bottom elements). A morphism of inf-lattices is a function $f: X \to Y$ such that $f(\inf S) = \inf f(S)$ for all subsets $S \subseteq X$, where $f(S)$ denotes the direct image of $S$ under $f$. And there is a dual notion of morphism of sup-lattices: $f(\sup S) = \sup f(S)$. Finally, a morphism of lattices is a function which preserves all finite meets and finite joins, and a morphism of complete lattices is one which preserves all infs and sups.
Despite the theorem above, it is not true that a morphism of inf-lattices must be a morphism of sup-lattices. Nor is it true that a morphism of finite meet-semilattices must be a lattice morphism. Therefore, in contexts where homomorphisms matter (which is just about all the time!), it is important to keep the qualifying prefixes around and keep the distinctions straight.
Exercise: Come up with some examples of morphisms which exhibit these distinctions.
My name is Todd Trimble. As regular readers of this blog may have noticed by now, I’ve recently been actively commenting on some of the themes introduced by our host Vishal, and he’s now asked whether I’d like to write some posts of my own. Thank you Vishal for the invitation!
As made clear in some of my comments, my own perspective on a lot of mathematics is greatly informed and influenced by category theory — but that’s not what I’m setting out to talk about here, not yet anyway. For reasons not altogether clear to me, the mere mention of category theory often scares people, or elicits other emotional reactions (sneers, chortles, challenges along the lines of “what is this stuff good for, anyway?” — I’ve seen it all).
Anyway, I’d like to try something a little different this time — instead of blathering about categories, I’ll use some of Vishal’s past posts as a springboard to jump into other mathematics which I find interesting, and I won’t need to talk about categories at all unless a strong organic need is felt for it (or if it’s brought back “by popular demand”). But, the spirit if not the letter of categorical thinking will still strongly inform my exposition — those readers who already know categories will often be able to read between the lines and see what I’m up to. Those who do not will still be exposed to what I believe are powerful categorical ways of thinking.
I’d like to start off talking about a very pretty area of mathematics which ties together various topics in algebra, topology, logic, geometry… I’m talking about mathematics in the neighborhood of so-called “Stone duality” (after the great Marshall Stone). I’m hoping to pitch this as though I were teaching an undergraduate course, at roughly a junior or senior level in a typical American university. [Full disclosure: I’m no longer a professional academic, although I often play one on the Internet. :)] At times I will allude to topics which presuppose some outside knowledge, but hey, that’s okay. No one’s being graded here (thank goodness!).
First, I need to discuss some preliminaries which will eventually lead up to the concept of Boolean algebra — the algebra which underlies propositional logic.
A partial order on a set $X$ is a binary relation (a subset $R \subseteq X \times X$), where we write $x \leq y$ if $(x, y) \in R$, satisfying the following conditions:
- (Reflexivity) $x \leq x$ for every $x \in X$;
- (Transitivity) For all $x, y, z \in X$, ($x \leq y$ and $y \leq z$) implies $x \leq z$.
- (Antisymmetry) For all $x, y \in X$, ($x \leq y$ and $y \leq x$) implies $x = y$.
A partially ordered set (poset for short) is a set equipped with a partial order. Posets occur all over mathematics, and many are likely already familiar to you. Here are just a few examples:
- The set of natural numbers ordered by divisibility ($x \leq y$ if $x$ divides $y$).
- The set $PX$ of subsets of a set $X$ (where $\leq$ is the relation of inclusion of one subset in another).
- The set of subgroups of a group (where again $\leq$ is the inclusion relation between subgroups).
- The set of ideals in a ring (ordered by inclusion).
The last three examples clearly follow a similar pattern, and in fact, there is a sense in which every poset P can be construed in just this way: as a set of certain types of subset ordered by inclusion. This is proved in a way very reminiscent of the Cayley lemma (that every group can be represented as a group of permutations of a set). You can think of such results as saying “no matter how abstractly a group [or poset] may be presented, it can always be represented in a concrete way, in terms of permutations [or subsets]”.
To make this precise, we need one more notion, parallel to the notion of group homomorphism. If $X$ and $Y$ are posets, a poset map from $X$ to $Y$ is a function $f: X \to Y$ which preserves the partial order (that is, if $x \leq y$ in $X$, then $f(x) \leq f(y)$ in $Y$). Here then is our representation result:
Lemma (Dedekind): Any poset $X$ may be faithfully represented in its power set $PX$, partially ordered by inclusion. That is, there exists a poset map $i: X \to PX$ that is injective (what we mean by “faithful”: the map is one-to-one).
Proof: Define $i: X \to PX$ to be the function which takes $x \in X$ to the subset $\{a \in X : a \leq x\}$ (which we view as an element of the power set). To check this is a poset map, we must show that if $x \leq y$, then $i(x)$ is included in $i(y)$. This is easy: if $a$ belongs to $i(x)$, i.e., if $a \leq x$, then from $x \leq y$ and the transitivity property, $a \leq y$, hence $a$ belongs to $i(y)$.
Finally, we must show that $i$ is injective; that is, $i(x) = i(y)$ implies $x = y$. In other words, we must show that if
$\{a \in X : a \leq x\} = \{a \in X : a \leq y\},$
then $x = y$. But, by the reflexivity property, we know $x \leq x$; therefore $x$ belongs to the set displayed on the left, and therefore to the set on the right. Thus $x \leq y$. By similar reasoning, $y \leq x$. Then, by the antisymmetry property, $x = y$, as desired.
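The Dedekind embedding is easy to compute for a finite poset. Here is a sketch using divisibility on $\{1, \ldots, 12\}$ (my own choice of example): each element maps to its set of lower bounds, and the code confirms that the map both reflects and preserves the order, and hence is injective.

```python
elems = range(1, 13)

def leq(a, b):
    return b % a == 0       # the divisibility order

def dedekind(x):
    """i(x) = the set of all elements below x."""
    return frozenset(a for a in elems if leq(a, x))

# x <= y iff i(x) ⊆ i(y): the embedding preserves AND reflects order
# (the "reflects" direction is the Yoneda-style argument in the proof).
order_faithful = all((dedekind(x) <= dedekind(y)) == leq(x, y)
                     for x in elems for y in elems)

# Injective: distinct elements have distinct images.
injective = len({dedekind(x) for x in elems}) == len(list(elems))
```

For instance `dedekind(6)` is the set $\{1, 2, 3, 6\}$ of divisors of 6 in this range.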
The Dedekind lemma turns out to be extremely useful (it and the Cayley lemma are subsumed under an even more useful result called the Yoneda lemma — perhaps more on this later). Before I illustrate its uses, let me rephrase slightly the injectivity property of the Dedekind embedding $i$: it says,
If (for all $a$ in $X$, $a \leq x$ iff $a \leq y$), then $x = y$.
This principle will be used over and over again, so I want to give it a name: I’ll call it the Yoneda principle.
Here is a typical use. Given elements $x, y$ in a poset $X$, we say that an element $m$ is a meet of $x$ and $y$ if for all $a \in X$,
$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).
Fact: there is at most one meet of $x$ and $y$. That is, if $m$ and $n$ are both meets of $x$ and $y$, then $m = n$.
Proof: For all $a$, $a \leq m$ if and only if ($a \leq x$ and $a \leq y$) if and only if $a \leq n$. Therefore $m = n$, by the Yoneda principle.
Therefore, we can refer to the meet of two elements $x$ and $y$ (if it exists); it is usually denoted $x \wedge y$. Because $x \wedge y \leq x \wedge y$, we have $x \wedge y \leq x$ and $x \wedge y \leq y$.
Example: In a concrete poset, like the poset of all subsets of a set or subgroups of a group, the meet of two elements is their intersection.
Example: Consider the natural numbers ordered by divisibility. The meet $x \wedge y$ satisfies $x \wedge y \leq x$ and $x \wedge y \leq y$ (i.e., $x \wedge y$ divides both $x$ and $y$). At the same time, the meet property says that any number $a$ which divides both $x$ and $y$ must also divide $x \wedge y$. It follows that the meet in this poset is the gcd of $x$ and $y$.
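A quick computational check that the gcd really satisfies the defining property of the meet (quantifying the "for all $a$" over a finite range, which is my own simplification):

```python
from math import gcd

def divides(a, b):
    return b % a == 0

# For each pair (x, y), check the defining property of the meet:
# a <= gcd(x, y) iff (a <= x and a <= y), where <= means "divides".
limit = 30
meet_is_gcd = all(
    divides(a, gcd(x, y)) == (divides(a, x) and divides(a, y))
    for x in range(1, limit)
    for y in range(1, limit)
    for a in range(1, limit)
)
```

This is of course just restating the familiar characterization of the gcd: its divisors are exactly the common divisors of $x$ and $y$.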
Here are some more results which can be proved with the help of the Yoneda principle. I’ll just work through one of them, and leave the others as exercises.
- $x \wedge x = x$ (idempotence of meet)
- $x \wedge y = y \wedge x$ (commutativity of meet)
- $(x \wedge y) \wedge z = x \wedge (y \wedge z)$ (associativity of meet)
To prove 3., we can use the Yoneda principle: for all $a$ in the poset, we have
$a \leq (x \wedge y) \wedge z$
iff $a \leq x \wedge y$ and $a \leq z$
iff $a \leq x$ and $a \leq y$ and $a \leq z$
iff $a \leq x$ and $a \leq y \wedge z$
iff $a \leq x \wedge (y \wedge z)$.
Hence $(x \wedge y) \wedge z = x \wedge (y \wedge z)$, by Yoneda.
In fact, we can unambiguously refer to the meet $x_1 \wedge x_2 \wedge \ldots \wedge x_n$ of any finite number of elements by the evident property:
$a \leq x_1 \wedge x_2 \wedge \ldots \wedge x_n$ iff $a \leq x_1$ and $a \leq x_2$ and $\ldots$ and $a \leq x_n$
— this uniquely defines the meet on the left, by Yoneda, and the order in which the $x_i$ appear makes no difference.
But wait — what if the number of elements is zero? That is, what is the empty meet? Well, the condition “$a \leq x_1$ and $\ldots$ and $a \leq x_n$” becomes vacuous (there is no $x_i$ for which the condition is not satisfied), so whatever the empty meet is, call it $t$, we must have $a \leq t$ for all $a$. So $t$ is just the top element of the poset (if one exists). Another name for the top element is “the terminal element”, and another notation for it is ‘$1$’.
Definition: A meet-semilattice is a poset which has all finite meets, including the empty one.
Exercises:
- Prove that in a meet-semilattice, $x \wedge 1 = x$ for all $x$.
- Is there a top element for the natural numbers ordered by divisibility?