In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.

Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.

I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:

Definition: A Boolean ring is a commutative ring (with identity 1) in which every element x is idempotent, i.e., satisfies x^2 = x.

Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.

Proposition 1: For every element x in a Boolean ring, 2x = 0.

Proof: By idempotence, we have x + 1 = (x+1)^2 = x^2 + 2x + 1. Since x = x^2, we may additively cancel in the ring to conclude 0 = 2x. \Box

This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field \mathbb{Z}_2 consisting of two elements. I won’t go into details about this, except to note that it follows readily from the proposition if we define a vector space over \mathbb{Z}_2 to be an abelian group V together with a ring homomorphism \mathbb{Z}_2 \to Hom(V, V) to the ring of abelian group homomorphisms from V to itself (where such homomorphisms are “multiplied” by composing them; the idea is that this ring homomorphism takes an element r = 0, 1 to scalar-multiplication r \cdot (-): V \to V).

Anyway, the point is that we can now apply some linear algebra to study this \mathbb{Z}_2-vector space; in particular, a finite Boolean ring B is a finite-dimensional vector space over \mathbb{Z}_2. By choosing a basis, we see that B is vector-space isomorphic to \mathbb{Z}_{2}^{n} where n is the dimension. So the cardinality of a finite Boolean ring must be of the form 2^n. Hold that thought!

Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring B, we may construct a corresponding Boolean algebra structure on the underlying set of B, uniquely determined by the stipulation that the multiplication x \cdot y of the Boolean ring match the meet operation x \wedge y of the Boolean algebra. Conversely, given a Boolean algebra B, we may construct a corresponding Boolean ring structure on B, and this construction is inverse to the previous one.

In one direction, suppose B is a Boolean ring. We know from before that a binary operation on a set B that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of B — can be identified with the meet operation of a meet-semilattice structure on B, uniquely specified by taking its partial order to be defined by: x \leq y iff x = x \cdot y. It immediately follows from this definition that the additive identity 0 \in B satisfies 0 \leq y for all y (is the bottom element), and the multiplicative identity 1 \in B satisfies x \leq 1 for all x (is the top element).

Notice also that x \wedge (1-x) = x (1-x) = 0, by idempotence. This leads one to suspect that 1-x will be the complement of x in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting x = 1 - (1 - x), i.e., x is equal to its putative double negation.

Proposition 2: x \mapsto 1-x is order-reversing.

Proof: Looking at the definition of the order, this says that if x = x y, then 1-y = (1-x)(1-y). This is immediate: expanding, (1-x)(1-y) = 1 - x - y + xy = 1 - y, since xy = x. \Box

So, x \mapsto 1 - x is an order-reversing map B \to B (an order-preserving map B \to B^{op}) which is a bijection (since it is its own inverse). We conclude that B \to B^{op}: x \mapsto 1-x is a poset isomorphism. Since B has meets and B \cong B^{op}, B^{op} also has meets (and the isomorphism preserves them). But meets in B^{op} are joins in B. Hence B has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function f(x) = 1 - x takes meets in B to joins in B; that is,

f(x \wedge y) = 1 - x y = f(x) \vee f(y) = (1 - x) \vee (1 - y)

or, replacing x by 1-x and y by 1-y,

1 - (1-x)(1-y) = x \vee y

whence x \vee y = x + y - x y = x + y + xy, using Proposition 1 above.

Proposition 3: 1 - x is the complement of x.

Proof: We already saw x \wedge (1-x) = x(1-x) = 0. Also

x \vee (1-x) = x + (1 - x) + x(1-x) = x + (1-x) + 0 = 1,

using the formula for join we just computed. This completes the proof. \Box

So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have (x \vee y) \wedge z = (x + y + xy)z = xz + yz + xyz. On the other hand, (x \wedge z) \vee (y \wedge z) = xz + yz + (xz)(yz) = xz + yz + xyz, using idempotence once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.
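
For readers who like to check such computations by machine, here is a small brute-force sketch in Python (the helper names are mine, chosen just for illustration): it builds the Boolean ring \mathbb{Z}_2^n with pointwise operations and verifies that the dictionary above (meet x y, join x + y + xy, complement 1 + x, and x \leq y iff x = x y) really yields a complemented distributive lattice.

    from itertools import product

    n = 3
    # Elements of the Boolean ring Z_2^n, with pointwise addition and multiplication mod 2.
    elements = list(product((0, 1), repeat=n))

    def add(x, y): return tuple((a + b) % 2 for a, b in zip(x, y))
    def mul(x, y): return tuple(a * b for a, b in zip(x, y))

    one, zero = (1,) * n, (0,) * n

    # The dictionary from the text: meet = xy, join = x + y + xy, complement = 1 + x.
    meet = mul
    def join(x, y): return add(add(x, y), mul(x, y))
    def neg(x): return add(one, x)
    def leq(x, y): return x == mul(x, y)          # x <= y  iff  x = xy

    for x, y, z in product(elements, repeat=3):
        assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))    # distributivity
    for x in elements:
        assert meet(x, neg(x)) == zero and join(x, neg(x)) == one     # complements
        assert leq(zero, x) and leq(x, one)                           # bottom and top

    print("Z_2^%d satisfies the Boolean algebra axioms" % n)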

Naturally, we want to invert the process: starting with a Boolean algebra structure on a set B, construct a corresponding Boolean ring structure on B whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy x \vee y = x + y + x \wedge y, so that x \vee y = x + y if x \wedge y = 0 (i.e., if x and y are disjoint): this gives a partial definition of addition. Continuing this thought, if we express x \vee y = x + y + x \wedge y as a disjoint sum of some element a and x \wedge y, we then conclude x \vee y = a + x \wedge y, whence a = x + y by cancellation. In the case where the Boolean algebra is a power set PX, this element a is the symmetric difference of x and y. This generalizes: if we define the addition by the symmetric difference formula x + y := (\neg x \wedge y) \vee (x \wedge \neg y), then x + y is disjoint from x \wedge y, so that

(x + y) + x \wedge y

= (x + y) \vee (x \wedge y) = (\neg x \wedge y) \vee (x \wedge \neg y) \vee (x \wedge y) = x \vee y

after a short calculation using the complementation and distributivity axioms. After more work, one shows that x + y is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.

Exercise: Verify this last assertion.
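
The exercise is of course meant to be done by hand, but here is a quick machine check, as a Python sketch over a three-element set X (helper names chosen just for this illustration), that symmetric difference and intersection do satisfy the Boolean ring axioms on PX.

    from itertools import combinations, product

    X = {0, 1, 2}
    # The elements of the Boolean ring are the subsets of X.
    subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

    add = lambda s, t: s ^ t      # symmetric difference, i.e. (not-s and t) or (s and not-t)
    mul = lambda s, t: s & t      # intersection, i.e. the meet

    zero, one = frozenset(), frozenset(X)

    for s in subsets:
        assert add(s, zero) == s and mul(s, one) == s       # identities for + and *
        assert add(s, s) == zero                            # 2s = 0
        assert mul(s, s) == s                               # idempotence
    for s, t, u in product(subsets, repeat=3):
        assert add(add(s, t), u) == add(s, add(t, u))           # associativity of +
        assert mul(s, add(t, u)) == add(mul(s, t), mul(s, u))   # * distributes over +

    print("PX with symmetric difference and intersection is a Boolean ring")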

However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.

Whereas here, in asserting Boolean algebras “are” Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism f: B \to C between Boolean rings, it is clear that f preserves the meet x \cdot y and join x + y + x y of any two elements x, y [since it preserves multiplication and addition] and of course also the complement 1 + x of any x; therefore f is a map of the corresponding Boolean algebras. Conversely, a map f: B \to C of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product x \wedge y and sum (\neg x \wedge y) \vee (x \wedge \neg y) in the corresponding Boolean ring. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different ways of presenting the same underlying Lawvere algebraic theory). In summary,

Theorem 1: The above processes define functors F: \mbox{BoolRing} \to \mbox{BoolAlg}, G: \mbox{BoolAlg} \to \mbox{BoolRing}, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.

  • Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.

Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) has cardinality 2^n — the same as the cardinality of the power set Boolean algebra PX if X has cardinality n. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra B is naturally isomorphic to one of the form PX; one of our tasks is to describe X in terms of B in a “natural” (or rather, functorial) way. From the Boolean ring perspective, X is a basis of the underlying \mathbb{Z}_2-vector space of B; to pin it down exactly, we use the full ring structure.

X is naturally a basis of PX; more precisely, under the embedding i: X \to PX defined by x \mapsto \{x\}, every subset S \subseteq X is uniquely a disjoint sum of finitely many elements of i(X): S = \sum_{x \in X} a_x(S) \{x\} where a_x(S) \in \{0, 1\} = \mathbb{Z}_2: naturally, a_x(S) = 1 iff x \in S. For each S, we can treat the coefficient a_x(S) as a function of x valued in \mathbb{Z}_2. Let \hom(X, \mathbb{Z}_2) denote the set of functions X \to \mathbb{Z}_2; this becomes a Boolean ring under the obvious pointwise definitions (f + g)(x) := f(x) + g(x) and (f g)(x) = f(x) g(x). The function PX \to \hom(X, \mathbb{Z}_2) which takes S \in PX to the coefficient function a_{(-)}(S) is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)

Or, we can turn this around: for each x \in X, we get a Boolean ring map PX \to \mathbb{Z}_2 which takes S to a_x(S). Let \mbox{Bool}(PX, \mathbb{Z}_2) denote the set of Boolean ring maps PX \to \mathbb{Z}_2.

Proposition 4: For a finite set X, the function X \to \mbox{Bool}(PX, \mathbb{Z}_2) that sends x to a_x(-) is a bijection (in other words, an isomorphism).

Proof: We must show that for every Boolean ring map \phi: PX \to \mathbb{Z}_2, there exists a unique x \in X such that \phi = a_x(-), i.e., such that \phi(T) = a_x(T) for all T \in PX. So let \phi be given, and let S be the intersection (or Boolean ring product) of all T \in PX for which \phi(T) = 1. Then

\phi(S) = \phi(\prod_{T: \phi(T) = 1} T) = \prod_{T: \phi(T) = 1} \phi(T) = 1.

I claim that S must be a singleton \{x\} for some (evidently unique) x \in X. For 1 = \phi(S) = \phi(\sum_{x \in S} \{x\}) = \sum_{x \in S} \phi(\{x\}), forcing \phi(\{x\}) = 1 for some x \in S. But then S \subseteq \{x\} according to how S was defined, and so S = \{x\}. To finish, I now claim \phi(T) = a_x(T) for all T \in PX. But \phi(T) = 1 iff S \subseteq T iff x \in T iff a_x(T) = 1. This completes the proof. \Box
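
For a small X one can also confirm proposition 4 by brute force: the following Python sketch (with hypothetical helper names) enumerates every function PX \to \mathbb{Z}_2, keeps those that are Boolean ring maps, and checks that they are exactly the evaluation maps a_x(-), one for each point x \in X.

    from itertools import combinations, product

    X = {0, 1, 2}
    subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

    def is_hom(phi):
        """Does phi: PX -> Z_2 preserve 1, addition (symmetric difference), and multiplication?"""
        return (phi[frozenset(X)] == 1 and
                all(phi[s ^ t] == (phi[s] + phi[t]) % 2 and
                    phi[s & t] == phi[s] * phi[t]
                    for s in subsets for t in subsets))

    # Enumerate all 2^(2^|X|) = 256 functions PX -> Z_2 and keep the Boolean ring maps.
    homs = []
    for values in product((0, 1), repeat=len(subsets)):
        phi = dict(zip(subsets, values))
        if is_hom(phi):
            homs.append(phi)

    # Proposition 4 predicts exactly one hom per point x of X, namely S |-> (1 if x in S else 0).
    evaluations = [{s: int(x in s) for s in subsets} for x in X]
    assert len(homs) == len(X)
    assert all(e in homs for e in evaluations)
    print("The Boolean ring maps PX -> Z_2 are exactly the point evaluations")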

This proposition is a vital clue, for if B is to be isomorphic to a power set PX (equivalently, to \hom(X, \mathbb{Z}_2)), the proposition says that the X in question can be retrieved reciprocally (up to isomorphism) as \mbox{Bool}(B, \mathbb{Z}_2).

With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

B \to \hom(\mbox{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)

which sends b \in B to the function eval_b which maps \phi \in \mbox{Bool}(B, \mathbb{Z}_2) to \phi(b) (i.e., evaluates \phi at b). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that eval_{b+c}(\phi) = eval_b(\phi) + eval_c(\phi) for all \phi \in \mbox{Bool}(B, \mathbb{Z}_2). But by definition, this is the equation \phi(b+c) = \phi(b) + \phi(c), which holds since \phi is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.

Theorem 2: If B is a finite Boolean ring, then the Boolean ring map

eval: B \to \hom(\mbox{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)

is an isomorphism. (So, there is a natural isomorphism B \cong P(\mbox{Bool}(B, \mathbb{Z}_2)).)

Proof: First we prove injectivity: suppose b \in B is nonzero. Then \neg b \neq 1, so the ideal (\neg b) = \{a \cdot \neg b: a \in B\} = \{x \in B: x \leq \neg b\} is a proper ideal. Let I be a maximal proper ideal containing \neg b, so that B/I is both a field and a Boolean ring. Then B/I \cong \mathbb{Z}_2 (otherwise any element x \in B/I not equal to 0, 1 \in B/I would be a zero divisor on account of x(1-x) = 0). The evident composite

B \to B/I \cong \mathbb{Z}_2

yields a homomorphism \phi:  B  \to \mathbb{Z}_2 for which \phi(\neg b) = \phi(1-b) = 0, so \phi(b) = eval_b(\phi) = 1. Therefore eval_b is nonzero, as desired.

Now we prove surjectivity. A function g: \mbox{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2 is determined by the set of elements \phi mapping to 1 under g, and each such homomorphism \phi: B \to \mathbb{Z}_2, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let J be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if x, y belong to J, then so does x \vee y = x + y + x y. Let j be the join of the finitely many elements of J; notice J = \{x \in B: x \leq j\} = (j) (actually, this proves that every ideal of a finite Boolean ring B is principal). In fact, writing k_\phi for the unique element such that \ker(\phi) = (k_\phi), we have

j = \bigwedge_{\phi: g(\phi) = 1} k_\phi

(certainly j \leq k_\phi for all such \phi, since J \subseteq \ker(\phi) = \{x \in B: x \leq k_\phi\}, but also \bigwedge_{g(\phi) = 1} k_\phi belongs to the intersection of these kernels and hence to J = \{x \in B: x \leq j\}, whence \bigwedge_{g(\phi) = 1} k_\phi \leq j).

Now let b = 1- j; I claim that g = eval_b, proving surjectivity. We need to show g(\phi) = eval_b(\phi) = \phi(b) for all \phi \in \mbox{Bool}(B, \mathbb{Z}_2). In one direction, we already know from the above that if g(\phi) = 1, then j belongs to the kernel of \phi, so \phi(j) = 0, whence \phi(b) = \phi(1-j) = 1.

For the other direction, suppose \psi(b) = 1, or that \psi(j) = 0. Now the kernel of \psi is principal, say (k) for some k \neq 1. We have j \leq k, so

k = k \vee j = k \vee \bigwedge_{g(\phi) = 1} k_\phi = \bigwedge_{g(\phi) = 1} k \vee k_\phi

from which it follows that k \vee k_\phi \neq 1 for some \phi \in g^{-1}(1). But then (k \vee k_\phi) is a proper ideal containing the maximal ideals (k) and (k_\phi); by maximality it follows that (k) = (k \vee k_\phi) = (k_\phi). Since \psi and \phi have the same kernels, they are equal. And therefore g(\psi) = g(\phi) = 1. We have now proven both directions of the statement (\psi(b) = 1 if and only if g(\psi) = 1), and the proof is now complete. \Box

  • Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements b and their negations, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset I \subseteq B closed under joins and such that x \wedge y \in I for x \in B, y \in I, a filter is (by definition) a subset F closed under meets and such that x \vee y \in F whenever x \in B, y \in F (this second condition is equivalent to upward-closure: y \in F and y \leq x implies x \in F). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if I is an ideal, then the set of negations \{\neg x: x \in I\} is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if f: B \to C is a Boolean algebra map, then the inverse image f^{-1}(1) is a filter, just as the inverse image f^{-1}(0) is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.
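
As a small illustration of the remark, here is a brute-force Python sketch (names chosen just for this example) that lists all ideals and all filters of the four-element Boolean algebra P\{0, 1\} and checks that elementwise negation interchanges them.

    from itertools import chain, combinations

    # The four-element Boolean algebra P{0, 1}.
    B = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
    top = frozenset({0, 1})
    neg = lambda a: top - a

    def powerset(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    def is_ideal(I):
        # nonempty, closed under binary joins, and downward closed
        return (bool(I)
                and all(a | b in I for a in I for b in I)
                and all(a & b in I for a in B for b in I))

    def is_filter(F):
        # nonempty, closed under binary meets, and upward closed
        return (bool(F)
                and all(a & b in F for a in F for b in F)
                and all(a | b in F for a in B for b in F))

    ideals  = [set(s) for s in powerset(B) if is_ideal(set(s))]
    filters = [set(s) for s in powerset(B) if is_filter(set(s))]

    # Negation turns each ideal into a filter and vice versa (De Morgan).
    assert all({neg(a) for a in I} in filters for I in ideals)
    assert all({neg(a) for a in F} in ideals for F in filters)
    print(len(ideals), "ideals,", len(filters), "filters; negation interchanges them")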

For those who know some category theory: what is really going on here is that we have a power set functor

P(-) = \hom(-, \mathbb{Z}_2): \mbox{FinSet}^{op} \to \mbox{FinBool}

(taking a function f: X \to Y between finite sets to the inverse image map f^{-1}: PY \to PX, which is a map between finite Boolean algebras) and a functor

Q(-) = \mbox{Bool}(-, \mathbb{Z}_2): \mbox{FinBool}^{op} \to \mbox{FinSet}

which we could replace by its opposite Q(-)^{op}: \mbox{FinBool} \to \mbox{FinSet}^{op}, and the canonical maps of proposition 4 and theorem 2,

X \to \mbox{Bool}(\hom(X, \mathbb{Z}_2), \mathbb{Z}_2),

B \to \hom(\mbox{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2),

are components (at X and B) of the counit and unit for an adjunction Q(-)^{op} \dashv P(-). The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories \mbox{FinSet}^{op} and \mbox{FinBool}. This is the proper categorical statement of Stone duality in the finite case, or what we are calling “baby Stone duality”. I will make some time soon to explain what these terms mean.

In this installment, I will introduce the concept of Boolean algebra, one of the main stars of this series, and relate it to concepts introduced in previous lectures (distributive lattice, Heyting algebra, and so on). Boolean algebra is the algebra of classical propositional calculus, and so has an abstract logical provenance; but one of our eventual goals is to show how any Boolean algebra can also be represented in concrete set-theoretic (or topological) terms, as part of a powerful categorical duality due to Stone.

There are lots of ways to define Boolean algebras. Some definitions were for a long time difficult conjectures (like the Robbins conjecture, established only in the last ten years or so with the help of computers) — testament to the richness of the concept. Here we’ll discuss just a few definitions. The first is a traditional one, and one which is pretty snappy:

A Boolean algebra is a distributive lattice in which every element has a complement.

(If X is a lattice and x \in X, a complement of x is an element y such that x \wedge y = 0 and x \vee y = 1. A lattice is said to be complemented if every element has a complement. Observe that the notions of complement and complemented lattice are manifestly self-dual. Since the notion of distributive lattice is self-dual, so therefore is the notion of Boolean algebra.)

  • Example: Probably almost everyone reading this knows the archetypal example of a Boolean algebra: a power set PX, ordered by subset inclusion. As we know, this is a distributive lattice, and the complement S^c of a subset S \subseteq X satisfies S \cap S^c = \emptyset and S \cup S^c = X.
  • Example: Also well known is that the Boolean algebra axioms mirror the usual interactions between conjunction \wedge, disjunction \vee, and negation \neg in ordinary classical logic. In particular, given a theory \mathbf{T}, there is a preorder whose elements are sentences (closed formulas) p of \mathbf{T}, ordered by p \leq q if the entailment p \to q is provable in \mathbf{T} using classical logic. By passing to logical equivalence classes (p \equiv q iff p \leftrightarrow q in \mathbf{T}), we get a poset with meets, joins, and complements satisfying the Boolean algebra axioms. This is called the Lindenbaum algebra of the theory \mathbf{T}.
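
To make the Lindenbaum algebra example concrete in a toy case, here is a Python sketch (for the empty theory over two propositional atoms; all names are just for illustration) that identifies each sentence with its truth table over the four valuations, so that the resulting Boolean algebra has 2^4 = 16 elements.

    from itertools import product

    # A sentence in two atoms p, q is determined, up to classical equivalence,
    # by its truth table over the four valuations of (p, q).
    valuations = list(product((0, 1), repeat=2))
    sentences  = list(product((0, 1), repeat=len(valuations)))   # 2^4 = 16 equivalence classes

    entails = lambda s, t: all(a <= b for a, b in zip(s, t))     # s <= t iff s -> t is a tautology
    meet    = lambda s, t: tuple(a & b for a, b in zip(s, t))    # conjunction
    join    = lambda s, t: tuple(a | b for a, b in zip(s, t))    # disjunction
    neg     = lambda s: tuple(1 - a for a in s)                  # negation

    p = tuple(v[0] for v in valuations)      # truth table of the atom p
    q = tuple(v[1] for v in valuations)      # truth table of the atom q

    assert len(sentences) == 16
    assert entails(meet(p, q), join(p, q))            # p and q entails p or q
    assert join(p, neg(p)) == (1, 1, 1, 1)            # excluded middle: the top element
    assert meet(p, neg(p)) == (0, 0, 0, 0)            # contradiction: the bottom element
    print("The Lindenbaum algebra on two atoms has", len(sentences), "elements")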

Exercise: Give an example of a complemented lattice which is not distributive.

As a possible leading hint for the previous exercise, here is a first order of business:

Proposition: In a distributive lattice, complements of elements are unique when they exist.

Proof: If both b and c are complementary to a, then b = b \wedge 1 = b \wedge (a \vee c) = (b \wedge a) \vee (b \wedge c) = 0 \vee (b \wedge c) = b \wedge c. Since b = b \wedge c, we have b \leq c. Similarly c = b \wedge c, so b = c. \Box

The definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a Heyting algebra is a lattice which admits an implication operator satisfying an adjoint relationship with the meet operator).

Theorem: A lattice is a Boolean algebra if and only if it is a Heyting algebra in which either of the following properties holds:

  1. (a \wedge x \leq y) if and only if (a \leq \neg x \vee y)
  2. \neg \neg x = x for all elements x

Proof: First let X be a Boolean algebra, and let x^c denote the complement of an element x \in X. Then I claim that a \wedge x \leq y if and only if a \leq x^c \vee y, proving that X admits an implication x \Rightarrow y = x^c \vee y. Then, taking y = 0, it follows that \neg x := (x \Rightarrow 0) = x^c \vee 0 = x^c, whence 1. follows. Also, since (by definition of complement) x is the complement of y if and only if y is the complement of x, we have x^{c c} = x, whence 2. follows.

[Proof of claim: if a \leq x^c \vee y, then x \wedge a \leq x \wedge (x^c \vee y) = (x \wedge x^c) \vee (x \wedge y) \leq 0 \vee y = y. On the other hand, if x \wedge a \leq y, then a = 1 \wedge a \leq (x^c \vee x) \wedge (x^c \vee a) =  x^c \vee (x \wedge a) \leq x^c \vee y. This completes the proof of the claim and of the forward implication.]

In the other direction, given a lattice which satisfies 1., it is automatically a Heyting algebra (with implication \neg x \vee y). In particular, it is distributive. From \neg x \leq \neg x \vee 0, we have (from 1.) x \wedge \neg x \leq 0; since 0 \leq x \wedge \neg x is automatic by definition of 0 = \bot, we get 0 = x \wedge \neg x. From 1 \wedge x \leq x, we have also (from 1.) that 1 \leq \neg x \vee x; since \neg x \vee x \leq 1 is automatic by definition of 1, we have \neg x \vee x = 1. Thus under 1., every element x has a complement \neg x.

On the other hand, suppose X is a Heyting algebra satisfying 2.: \neg \neg x = x. As above, we know x \wedge \neg x = 0. By the corollary below, we also know the function \neg: X \to X takes 0 to 1 and joins to meets (De Morgan law); since condition 2. is that \neg is its own inverse, it follows that \neg also takes meets to joins. Hence \neg x \vee x = \neg x \vee \neg \neg x = \neg(x \wedge \neg x) = \neg 0 = 1. Thus for a Heyting algebra which satisfies 2., every element x has a complement \neg x. This completes the proof. \Box

  • Exercise: Show that Boolean algebras can also be characterized as meet-semilattices X equipped with an operation \neg: X \to X for which a \wedge x \leq y if and only if a \leq \neg(x \wedge \neg y).

The proof above invoked the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y. The claim is that this De Morgan law (not the other \neg(x \wedge y) = \neg x \vee \neg y!) holds in a general Heyting algebra — the relevant result was actually posed as an exercise from the previous lecture:

Lemma: For any element c of a Heyting algebra X, the function - \Rightarrow c: X \to X is an order-reversing map (equivalently, an order-preserving map X^{op} \to X, or an order-preserving map X \to X^{op}). It is adjoint to itself, in the sense that - \Rightarrow c: X^{op} \to X is right adjoint to - \Rightarrow c: X \to X^{op}.

Proof: First, we show that a \leq b in X (equivalently, b \leq a in X^{op}) implies (b \Rightarrow c) \leq (a \Rightarrow c). But this conclusion holds iff (b \Rightarrow c) \wedge a \leq c, which is clear from (b \Rightarrow c) \wedge a \leq (b \Rightarrow c) \wedge b \leq c. Second, the adjunction holds because

(b \Rightarrow c) \leq  a in X^{op} if and only if

a \leq (b \Rightarrow c) in X if and only if

a \wedge b \leq c in X if and only if

b \wedge a \leq c in X if and only if

b \leq (a \Rightarrow c) in X. \Box

Corollary: - \Rightarrow c: X^{op} \to X takes any inf which exists in X^{op} to the corresponding inf in X. Equivalently, it takes any sup in X to the corresponding inf in X, i.e., (\bigvee_{s \in S} s) \Rightarrow c = \bigwedge_{s \in S} (s \Rightarrow c). (In particular, this applies to finite joins in X, and in particular, it applies to the case c = 0, where we conclude, e.g., the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y.)

  • Remark: If we think of sups as sums and infs as products, then we can think of implications x \Rightarrow y as behaving like exponentials y^x. Indeed, our earlier result that x \Rightarrow (-) preserves infs \bigwedge_{s \in S} y_s can then be recast in exponential notation as saying (\prod_{s \in S} y_s)^x = \prod_{s \in S} (y_s)^x, and our present corollary that (- \Rightarrow y) takes sups to infs can then be recast as saying y^{\sum_{s \in S} x_s} = \prod_{s \in S} y^{x_s}. Later we will state another exponential law for implication. It is correct to assume that this is no notational accident!

Let me reprise part of the lemma (in the case c = 0), because it illustrates a situation which comes up over and over again in mathematics. In part it asserts that \neg = (-)\Rightarrow 0: X \to X is order-reversing, and that there is a three-way equivalence:

a \leq \neg b if and only if a \wedge b = 0 if and only if b \leq \neg a.

This situation is an instance of what is called a “Galois connection” in mathematics. If X and Y are posets (or even preorders), a Galois connection between them consists of two order-reversing functions f: X \to Y, g: Y \to X such that for all x \in X, y \in Y, we have y \leq f(x) if and only if x \leq g(y). (It’s actually an instance of an adjoint pair: if we consider f as an order-preserving map X \to Y^{op} and g an order-preserving map Y^{op} \to X, then f(x) \leq y in Y^{op} if and only if x \leq g(y) in X.)

Here are some examples:

  1. The original example arises of course in Galois theory. If k is a field and k \subseteq E is a finite Galois extension with Galois group G = Gal(E/k) (of field automorphisms g: E \to E which fix the elements belonging to k), then there is a Galois connection consisting of maps Aut_{(-)}(E): PE \to PG and Fix: PG \to PE. This works as follows: to each subset S \subseteq E, define Aut_S(E) to be \{g \in G: g(s) = s \mbox{ for all } s \in S \}. In the other direction, to each subset T \subseteq G, define Fix(T) to be \{x \in E: g(x) = x \mbox{ for all } g \in T\}. Both Aut_{(-)}(E) and Fix(-) are order-reversing (for example, the larger the subset T \subseteq G, the more stringent the conditions for an element x \in E to belong to Fix(T)). Moreover, we have

    S \subseteq Fix(T) iff (g(x) = x for all x \in S, g \in T) iff T \subseteq Aut_S(E)

    so we do get a Galois connection. It is moreover clear that for any T \subseteq G, Fix(T) is an intermediate subfield between k and E, and for any S \subseteq E, Aut_S(E) is a subgroup of G. A principal result of Galois theory is that Fix(-) and Aut_{(-)}(E) are inverse to one another when restricted to the lattice of subgroups of G and the lattice of fields intermediate between k and E. Such a bijective correspondence induced by a Galois connection is called a Galois correspondence.

  2. Another basic Galois connection arises in algebraic geometry, between subsets J \subseteq k[x_1, \ldots, x_n] (of a polynomial algebra over a field k) and subsets V \subseteq k^n. Given J, define Z(J) (the zero locus of J) to be \{(a_1, \ldots, a_n): f(a_1, \ldots, a_n) = 0 \mbox{ for each polynomial } f \in J\}. On the other hand, define I(V) (the ideal of V) to be \{f \in k[x_1, \ldots, x_n]: f(a) = 0 \mbox{ for all } a = (a_1, \ldots, a_n) \in V\}. As in the case of Galois theory above, we clearly have a three-way equivalence

    V \subseteq Z(J) iff (f(a) = 0 for all a \in V, f \in J) iff J \subseteq I(V)

    so that Z(-), I(-) define a Galois connection between power sets (of the n-variable polynomial algebra and of n-dimensional affine space k^n). One defines an (affine algebraic) variety V \subseteq k^n to be a zero locus of some set. Then, on very general grounds (see below), any variety is the zero locus of its ideal. On the other hand, notice that I(V) is an ideal of the polynomial algebra. Not every ideal I of the polynomial algebra is the ideal of its zero locus, but according to the famous Hilbert Nullstellensatz (valid when k is algebraically closed), those ideals I equal to their radical rad(I) = \{f \in k[x_1, \ldots, x_n]: f^m \in I \mbox{ for some } m \geq 1\} are. Thus, Z(-) and I(-) become inverse to one another when restricted to the lattice of varieties and the lattice of radical ideals, by the Nullstellensatz: there is a Galois correspondence between these objects.

  3. Both of the examples above are particular cases of a very general construction. Let X, Y be sets and let R \subseteq X \times Y be any relation between them. Then set up a Galois connection which in one direction takes a subset S \subseteq X to S \backslash R := \{y \in Y: (x, y) \in R \mbox{ for all } x \in S\}, and in the other takes T \subseteq Y to R/T := \{x \in X: (x, y) \in R \mbox{ for all } y \in T\}. Once again we have a three-way equivalence

    S \subseteq R/T iff S \times T \subseteq R iff T \subseteq S \backslash R.

    There are tons of examples of this flavor.
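
Here is example 3 in executable form, as a Python sketch (the particular relation R and the function names are chosen just for illustration); it checks the three-way equivalence over all pairs of subsets.

    from itertools import combinations

    X = {1, 2, 3, 4, 6, 12}
    Y = {2, 3, 5}
    # An arbitrary relation R between X and Y; here "y divides x", purely for illustration.
    R = {(x, y) for x in X for y in Y if x % y == 0}

    def backslash(S):    # S \ R = {y in Y : (x, y) in R for all x in S}
        return {y for y in Y if all((x, y) in R for x in S)}

    def slash(T):        # R / T = {x in X : (x, y) in R for all y in T}
        return {x for x in X if all((x, y) in R for y in T)}

    def subsets(A):
        return [set(c) for r in range(len(A) + 1) for c in combinations(sorted(A), r)]

    # The three-way equivalence:  S <= R/T  iff  S x T <= R  iff  T <= S\R.
    for S in subsets(X):
        for T in subsets(Y):
            assert (S <= slash(T)) == (T <= backslash(S))

    print("slash and backslash form a Galois connection between PX and PY")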

As indicated above, a Galois connection between posets X, Y is essentially the same thing as an adjoint pair between the posets X, Y^{op} (or between X^{op}, Y if you prefer; Galois connections are after all symmetric in X, Y). I would like to record a few basic results about Galois connections/adjoint pairs.

Proposition:

  1. Given order-reversing maps f: X \to Y, g: Y \to X which form a Galois connection, we have x \leq g f(x) for all x \in X and y \leq f g(y) for all y \in Y. (Given poset maps f, g which form an adjoint pair f \dashv g, we have x \leq g f(x) for all x \in X and f g(y) \leq y for all y \in Y.)
  2. Given a Galois connection as above, f(x) = f g f(x) for all x \in X and g(y) = g f g(y) for all y \in Y. (Given an adjoint pair f \dashv g as above, the same equations hold.) Therefore a Galois connection (f, g) induces a Galois correspondence between the elements of the form f(x) and the elements of the form g(y).

Proof: (1.) It suffices to prove the statements for adjoint pairs. But under the assumption f \dashv g, x \leq g f(x) if and only if f(x) \leq f(x), which is certainly true. The other statement is dual.

(2.) Again it suffices to prove the equations for the adjoint pair. Applying the order-preserving map f to x \leq g f(x) from 1. gives f(x) \leq f g f(x). Applying f g(y) \leq y from 1. to y = f(x) gives f g f(x) \leq f(x). Hence f(x) = f g f(x). The other equation is dual. \Box

Incidentally, the equations of 2. show why an algebraic variety V is the zero locus of its ideal (see example 2. above): if V = Z(S) for some set of polynomials S, then V = Z(S) = Z I Z(S) = Z I(V). They also show that for any element x in a Heyting algebra, we have \neg \neg \neg x = \neg x, even though \neg \neg y = y is in general false.

Let (f, g) be a Galois connection (or f \dashv g an adjoint pair). By the proposition, c = gf: X \to X is an order-preserving map with the following properties:

x \leq c(x) for all x \in X

c c(x) = c(x) for all x \in X.

Poset maps c: X \to X with these properties are called closure operators. We have earlier discussed examples of closure operators: if for instance G is a group, then the operator c: PG \to PG which takes a subset S \subseteq G to the subgroup generated by S is a closure operator. Or, if X is a topological space, then the operator c: PX \to PX which takes a subset S \subset X to its topological closure \bar{S} is a closure operator. Or, if X is a poset, then the operator c: PX \to PX which takes S \subseteq X to \{a \in X: a \leq s \mbox{ for some } s \in S\} is a closure operator. Examples like these can be multiplied at will.

One virtue of closure operators is that they give a useful means of constructing new posets from old. Specifically, if c: X \to X is a closure operator, then a fixed point of c (or a c-closed element of X) is an element x such that c(x) = x. The collection Fix(c) of fixed points is partially ordered by the order in X. For example, the lattice of fixed points of the operator c: PG \to PG above is the lattice of subgroups of G. For any closure operator c, notice that Fix(c) is the same as the image c(X) of c.

One particular use is that the fixed points of the double negation closure \neg \neg: X \to X on a Heyting algebra X form a Boolean algebra Fix(\neg\neg), and the map \neg \neg: X \to Fix(\neg \neg) is a Heyting algebra map. This is not trivial! And it gives a means of constructing some rather exotic Boolean algebras (“atomless Boolean algebras”) which may not be so familiar to many readers.

The following exercises are in view of proving these results. If no one else does, I will probably give solutions next time or sometime soon.

Exercise: If X is a Heyting algebra and x, y, z \in X, prove the “exponential law” ((x \wedge y) \Rightarrow z) = (x \Rightarrow (y \Rightarrow z)). Conclude that \neg(x \wedge y) = (y \Rightarrow \neg x) = (x \Rightarrow \neg y).

Exercise: We have seen that (x \Rightarrow y) \wedge x \leq y in a Heyting algebra. Use this to prove (x \Rightarrow y) \wedge (y \Rightarrow z) \leq (x \Rightarrow z).

Exercise: Show that double negation \neg \neg: X \to X on a Heyting algebra preserves finite meets. (The inequality \neg \neg(x \wedge y) \leq \neg \neg x \wedge \neg \neg y is easy. The reverse inequality takes more work; try using the previous two exercises.)

Exercise: If c: X \to X is a closure operator, show that the inclusion map i: Fix(c) \hookrightarrow X is right adjoint to the projection c: X \to Fix(c) to the image of c. Conclude that meets of elements in Fix(\neg \neg) are calculated as they would be as elements in X, and also that \neg \neg: X \to Fix(\neg \neg) preserves joins.

Exercise: Show that the fixed points of the double negation operator on a topology (as Heyting algebra) are the regular open sets, i.e., those open sets equal to the interior of their closure. Give some examples of non-regular open sets. Incidentally, is the lattice you get by taking the opposite of a topology also a Heyting algebra?
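
These exercises are meant to be done by hand, but as a small illustration of the last one, here is a brute-force Python sketch (on one small, arbitrarily chosen finite topology; the names are mine) that computes the double negation operator and lists its fixed points, the regular open sets.

    # One small topology on X = {0, 1, 2}, chosen purely for illustration.
    opens = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2)]]

    def neg(U):
        """Heyting negation in a topology: the largest open set disjoint from U."""
        return frozenset().union(*(W for W in opens if not (W & U)))

    double_neg = lambda U: neg(neg(U))

    # The fixed points of double negation are the regular open sets.
    regular = [U for U in opens if double_neg(U) == U]
    print("open sets:    ", sorted(map(sorted, opens)))
    print("regular opens:", sorted(map(sorted, regular)))   # {0,1} drops out: its double negation is all of X

    # Closure operator laws from the text: U <= not-not-U, and double negation is idempotent.
    for U in opens:
        assert U <= double_neg(U)
        assert double_neg(double_neg(U)) == double_neg(U)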

In our last installment in this series on Stone duality, we introduced the notion of Heyting algebra, which captures the basic relationships between the logical connectives “and”, “or”, and “implies”. Our discussion disclosed a fundamental relationship between distributive laws and the algebra of implication, which we put to work to discover the structure of the “internal Heyting algebra logic” of a topology.

I’d like to pause and reflect on the general technique we used to establish this relationship; like the Yoneda principle and the Principle of Duality, it comes up with striking frequency, and so it will be useful for us to give it a name. As it turns out, this particular proof technique is analogous to the way adjoints are used in linear algebra. Such analogies go all the way back to work of C. S. Peirce, who like Boole was a great pioneer in the discovery of relationships between algebra and logic. At a deeper level, similar analogies were later rediscovered in category theory, and are connected with some of the most potent ideas category theory has to offer.

Our proof that meets distribute over sups in the presence of an implication operator is an example of this technique. Here is another example of similar flavor.

Theorem: In a Heyting algebra X, the operator p \Rightarrow -: X \to X preserves any infs which happen to exist in X, for any element p \in X. [In particular, this operator is a morphism of meet-semilattices, i.e., (p \Rightarrow (q \wedge r)) = ((p \Rightarrow q) \wedge (p \Rightarrow r)), and (p \Rightarrow 1) = 1.]

Proof: Suppose that S \subseteq X has an inf, which here will be denoted \bigwedge_{s \in S} s. Then for all a \in X, we have

a \leq p \Rightarrow (\bigwedge_{s \in S} s) if and only if

a \wedge p \leq \bigwedge_{s \in S} s if and only if

(for all s \in S, a \wedge p \leq s) if and only if

for all s \in S, a \leq p \Rightarrow s.

By the defining property of inf, these logical equivalences show that p \Rightarrow (\bigwedge_{s \in S} s) is indeed the inf of the subset \{p \Rightarrow s: s \in S\}, or in other words that p \Rightarrow (\bigwedge_{s \in S} s) = \bigwedge_{s \in S} (p \Rightarrow s), as desired. \Box

In summary, what we did in this proof is “slide” the operator p \Rightarrow - on the right of the inequality over to the operator - \wedge p on the left, then invoke the defining property of infs, and then slide back to p \Rightarrow - on the right. This sliding trick is analogous to how adjoint mappings work in linear algebra.

In fact, everything we have done so far with posets can be translated in terms of matrix algebra, provided that our matrix entries, instead of being real or complex numbers, are truth values 0, 1 (1 for “true”, 0 for “false”). These truth values are added and multiplied in the way familiar from truth tables, with join playing the role of addition and meet playing the role of multiplication. In fact the lattice \mathbf{2} = \{0, 1\} is a very simple distributive lattice, and so most of the familiar arithmetic properties of addition and multiplication (associativity, commutativity, distributivity) do carry over, which is all we need to carry out the most basic aspects of matrix algebra. However, observe that 1 has no additive inverse (for here 1 + 1 = 1 \vee 1 = 1) — the type of structure we are dealing with is often called a “rig” (like a ring, but without assuming negatives). On the other hand, this lattice is, conveniently, a sup-lattice, thinking of sups as arbitrary sums, whether finitary or infinitary.

Peirce recognized that a relation can be classified by a truth-valued matrix. Take for example a binary relation on a set X, i.e., a subset R \subseteq X \times X. We can imagine each point (x, y) \in X \times X as a pixel in the plane, and highlight R by lighting up just those pixels which belong to R. This is the same as giving an (X \times X)-matrix R(\cdot, \cdot), with rows indexed by elements y and columns by elements x, where the (y, x)-entry R(y, x) is 1 (on) if (x, y) is in R, and 0 (off) if not. In a similar way, any relation R \subseteq X \times Y is classified by a (Y \times X)-matrix whose entries are truth values.

As an example, the identity matrix has a 1 at the (x, y)-entry if and only if x = y. Thus the identity matrix classifies the equality relation.

A poset is a set X equipped with a binary relation R satisfying the reflexivity, transitivity, and antisymmetry properties. Let us translate these into matrix algebra terms. First reflexivity: it says that x = y implies (x, y) \in R. In matrix algebra terms, it says Id(y, x) \leq R(y, x), which we abbreviate in the customary way:

(Reflexivity) Id \leq R.

Now let’s look at transitivity. It says

(\exists_y  (x, y) \in R and (y, z) \in R) implies (x, z) \in R.

The “and” here refers to the meet or multiplication in the rig of truth values \mathbf{2}, and the existential quantifier can be thought of as a (possibly infinitary) join or sum indexed over elements y. Thus, for each pair (x, z), the hypothesis of the implication has truth value

\sum_y R(z, y) \cdot R(y, x)

which is just the (z, x)-entry of the square of the matrix R. Therefore, transitivity can be very succinctly expressed in matrix algebra terms as the condition

(Transitivity) R^2 \leq R.

  • Remark: More generally, given a relation R \subseteq X \times Y from X to Y, and another relation S \subseteq Y \times Z from Y to Z, the relational composite S \circ R \subseteq X \times Z is defined to be the set of pairs (x, z) for which there exists y with (x, y) \in R and (y, z) \in S. But this just means that its classifying matrix is the ordinary matrix product S \cdot R!

Let’s now look at the antisymmetry condition: ((x, y) \in R and (y, x) \in R) implies x = y. The clause (y, x) \in R is the flip of (x, y) \in R; at the matrix level, this flip corresponds to taking the transpose. Thus antisymmetry can be expressed in matrix terms as

(Antisymmetry) R \wedge R^\top \leq Id

where R^\top denotes the transpose of R, and the matrix meet \wedge means we take the meet at each entry.
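
To make the matrix point of view concrete, here is a short Python sketch (no libraries; names chosen just for illustration) that encodes the divisibility order on \{1, 2, 3, 6\} as a 0/1 matrix over the rig \mathbf{2} and checks reflexivity, transitivity, and antisymmetry in exactly the matrix form given above.

    # The divisibility order on {1, 2, 3, 6}, encoded as a 0/1 matrix over the rig 2.
    # Convention from the text: the (y, x)-entry is 1 iff x <= y, i.e. elems[x] divides elems[y].
    elems = [1, 2, 3, 6]
    n = len(elems)
    R  = [[1 if elems[y] % elems[x] == 0 else 0 for x in range(n)] for y in range(n)]
    Id = [[1 if x == y else 0 for x in range(n)] for y in range(n)]

    def mat_mul(S, T):    # matrix product over the rig 2: "sum" is max/or, "product" is min/and
        return [[max(min(S[z][y], T[y][x]) for y in range(n)) for x in range(n)] for z in range(n)]

    def mat_leq(S, T):    # entrywise comparison
        return all(S[y][x] <= T[y][x] for y in range(n) for x in range(n))

    transpose = lambda S: [[S[j][i] for j in range(n)] for i in range(n)]
    mat_meet  = lambda S, T: [[min(S[y][x], T[y][x]) for x in range(n)] for y in range(n)]

    assert mat_leq(Id, R)                              # reflexivity:   Id <= R
    assert mat_leq(mat_mul(R, R), R)                   # transitivity:  R^2 <= R
    assert mat_leq(mat_meet(R, transpose(R)), Id)      # antisymmetry:  R meet R^T <= Id
    print("divisibility on {1, 2, 3, 6} is a partial order")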

  • Remark: From the matrix algebra perspective, the antisymmetry axiom is less well motivated than the reflexivity and transitivity axioms. There’s a moral hiding beneath that story: from the category-theoretic perspective, the antisymmetry axiom is relatively insignificant. That is, if we view a poset as a category, then the antisymmetry condition is tantamount to the condition that isomorphic objects are equal (in the parlance, one says the category is “skeletal”) — this extra condition makes no essential difference, because isomorphic objects are essentially the same anyway. So: if we were to simply drop the antisymmetry axiom but keep the reflexivity and transitivity axioms (leading to what are called preordered sets, as opposed to partially ordered sets), then the theory of preordered sets develops exactly as the theory of partially ordered sets, except that in places where we conclude “x is equal to y” in the theory of posets, we would generally conclude “x is isomorphic to y” in the theory of preordered sets.

Preordered sets do occur in nature. For example, the set of sentences p, q, ... in a theory is preordered by the entailment relation p \vdash q (q is derivable from p in the theory). (The way one gets a poset out of this is to pass to a quotient set, by identifying sentences which are logically equivalent in the theory.)

Exercises:

  1. (For those who know some topology) Suppose X is a topological space. Given x, y \in X, define x \leq y if x belongs to the closure of \{y\}; show this is a preorder. Show this preorder is a poset precisely when X is a T_0-space.
  2. If X carries a group structure, define x \leq y for elements x, y \in X if x = y^n for some integer n; show this is a preorder. When is it a poset?

Since posets or preorders are fundamental to everything we’re doing, I’m going to reserve a special pairing notation for their classifying matrices: define

\langle x, y \rangle = 1 if and only if x \leq y.

Many of the concepts we have developed so far for posets can be succinctly expressed in terms of the pairing.

Example: The Yoneda principle (together with its dual) is simply the statement that if X is a poset, then x = y if and only if \langle -, x \rangle = \langle -, y \rangle (as functionals valued in \mathbf{2}) if and only if \langle x, - \rangle = \langle y, - \rangle.

Example: A mapping from a poset (X, \langle, \rangle_X) to a poset (Y, \langle, \rangle_Y) is a function f: X \to Y such that \langle x, y \rangle_X \leq \langle f(x), f(y) \rangle_Y.

Example: If X is a poset, its dual or opposite X^{op} has the same elements but the opposite order, i.e., \langle x, y \rangle_X = \langle y, x \rangle_{X^{op}}. The principle of duality says that the opposite of a poset is a poset. This can be (re)proved by invoking formal properties of matrix transpose, e.g., if R^2 \leq R, then (R^\top)^2 = (R^2)^\top \leq R^\top.

By far the most significant concept that can be expressed in terms of these pairings is that of adjoint mappings:

Definition: Let X, Y be posets [or preorders], and f: X \to Y, g: Y \to X be poset mappings. We say (f, g) is an adjoint pair (with f the left adjoint of g, and g the right adjoint of f) if

\langle f(x), y \rangle_Y = \langle x, g(y) \rangle_X

or, in other words, if f(x) \leq y if and only if x \leq g(y). We write f \dashv g. Notice that the concept of left adjoint is dual to the concept of right adjoint (N.B.: they are not the same, because clearly the pairing \langle x, y \rangle is not generally symmetric in x and y).

Here are some examples which illustrate the ubiquity of this concept:

  1. Let X be a poset. Let X \times X be the poset where (x, y) \leq (x', y') iff (x \leq x' and y \leq y'). There is an obvious poset mapping \delta: X \to X \times X, the diagonal mapping, which takes x to (x, x). Then a meet operation \wedge: X \times X \to X is precisely a right adjoint to the diagonal mapping. Indeed, it says that (a, a) \leq (x, y) if and only if a \leq x \wedge y.
  2. Dually, a join operation \vee: X \times X \to X is precisely a left adjoint to the diagonal mapping \delta: X \to X \times X.
  3. More generally, for any set S, there is a diagonal map \Delta: X \to X^S which maps x \in X to the S-tuple (\ldots, x, x, x, \ldots). Its right adjoint X^S \to X, if one exists, sends an S-tuple (x_s)_{s \in S} to the inf of the set \{x_s: s \in S\}. Its left adjoint would send the tuple to the sup of that set.
  4. If X is a Heyting algebra, then for each x \in X, the conjunction operator x \wedge -: X \to X is left adjoint to the implication operator x \Rightarrow -: X \to X.
  5. If X is a sup-lattice, then the operator \sup: PX \to X which sends a subset S \subseteq X to \sup(S) \in X is left adjoint to the Dedekind embedding i: X \to PX. Indeed, we have \sup(S) \leq a if and only if (for all s \in S, s \leq a) if and only if S \subseteq \{x \in X: x \leq a\} = i(a).

As items 1, 2, and 4 indicate, the rules for how the propositional connectives \wedge, \vee, \Rightarrow operate are governed by adjoint pairs. This gives some evidence for Lawvere’s great insight that all rules of inference in logic are expressed by interlocking pairs of adjoint mappings.
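
Items 1 and 2 above can be checked mechanically on a small example. The following Python sketch (helper names are just for illustration) uses the lattice of divisors of 12 under divisibility, where meet is gcd and join is lcm, and verifies both adjunctions.

    from itertools import product
    from math import gcd

    divisors = [1, 2, 3, 4, 6, 12]                 # the lattice of divisors of 12
    leq  = lambda x, y: y % x == 0                 # x <= y  iff  x divides y
    meet = gcd                                     # greatest common divisor
    join = lambda x, y: x * y // gcd(x, y)         # least common multiple

    # Item 1: meet is right adjoint to the diagonal, i.e. (a, a) <= (x, y) iff a <= x meet y.
    # Item 2: join is left adjoint to the diagonal, i.e. x join y <= a iff (x, y) <= (a, a).
    for a, x, y in product(divisors, repeat=3):
        assert (leq(a, x) and leq(a, y)) == leq(a, meet(x, y))
        assert leq(join(x, y), a) == (leq(x, a) and leq(y, a))

    print("meet and join are right and left adjoints to the diagonal on the divisors of 12")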

Proposition: If f \dashv g and f' \dashv g' where g: X \to Y and g': Y \to Z are composable mappings, then f \circ f' \dashv g' \circ g.

Proof: \langle f f' z, x \rangle_X = \langle f' z, g x \rangle_Y = \langle z, g' g x \rangle_Z. Notice that the statement is analogous to the usual rule (A B)^\dagger = B^\dagger A^\dagger, where A^\dagger refers to taking an adjoint with respect to given inner product forms.

We can use this proposition to give slick proofs of some results we’ve seen. For example, to prove that Heyting algebras are distributive lattices, i.e., that p \wedge (x \vee y) = (p \wedge x) \vee (p \wedge y), just take left adjoints on both sides of the tautology \delta \circ g = (g \times g) \circ \delta, where g = p \Rightarrow - is right adjoint to p \wedge -. The left adjoint of the left side of the tautology is (by the proposition) p \wedge - applied to \vee. The left adjoint of the right side is \vee applied to (p \wedge -) \times (p \wedge -). The conclusion follows.

Much more generally, we have the

Theorem: Right adjoints g: X \to Y preserve any infs which exist in X. Dually, left adjoints f: Y \to X preserve any sups which exist in Y.

Proof: \langle y, g(\bigwedge_{s \in S} x_s) \rangle_Y = \langle f(y), \bigwedge_{s \in S} x_s \rangle_X = \bigwedge_{s \in S} \langle f(y), x_s \rangle_X where the last inf is interpreted in the inf-lattice \mathbf{2}. This equals \bigwedge_{s \in S} \langle y, g(x_s) \rangle_Y. This completes the proof of the first statement (why?). The second follows from duality.

Exercise: If X is a Heyting algebra, then there is a poset mapping - \Rightarrow c: X^{op} \to X for any element c. Describe the left adjoint of this mapping. Conclude that this mapping takes infs in X^{op} (i.e., sups in X) to the corresponding infs in X.

Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and sup). Examples include the poset PX of all subsets of a set X, and the poset Sub(V) of all subspaces of a vector space V.

I take it that most readers are already familiar with many of the properties of the poset PX; there is for example the distributive law A \cap (B \cup C) = (A \cap B) \cup (A \cap C), and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset Sub(V), as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to Sub(V) is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, classical logic if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).

To get a sense of the weirdness of Sub(V), take for example a 2-dimensional vector space V. The bottom element is the zero space \{0\}, the top element is V, and the rest of the elements of Sub(V) are 1-dimensional: lines through the origin. For 1-dimensional spaces x, y, there is no relation x \leq y unless x and y coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:

       V = 1
       / | \
     /   |   \
    x    y    z
     \   |   /
       \ | /
         0

Observe that for distinct elements x, y, z in the middle level, we have for example x \wedge y = 0 = x \wedge z (0 is the largest element contained in both x and y), and also for example y \vee z = 1 (1 is the smallest element containing y and z). It follows that x \wedge (y \vee z) = x \wedge 1 = x, whereas (x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0. The distributive law fails in Sub(V)!
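
The computation just made can also be packaged as a short machine check. Here is a Python sketch of the five-element lattice pictured above (often called M_3; the string encodings are just for illustration), exhibiting the failure of distributivity.

    from itertools import product

    # Bottom "0", top "1", and three pairwise incomparable middle elements x, y, z.
    elements = ["0", "x", "y", "z", "1"]
    leq = lambda a, b: a == b or a == "0" or b == "1"

    def meet(a, b):
        if leq(a, b): return a
        if leq(b, a): return b
        return "0"                  # incomparable middle elements meet at the bottom

    def join(a, b):
        if leq(a, b): return b
        if leq(b, a): return a
        return "1"                  # incomparable middle elements join at the top

    # Check that meet and join really are greatest lower and least upper bounds.
    for a, b, c in product(elements, repeat=3):
        assert leq(meet(a, b), a) and leq(meet(a, b), b) and leq(a, join(a, b)) and leq(b, join(a, b))
        if leq(c, a) and leq(c, b): assert leq(c, meet(a, b))
        if leq(a, c) and leq(b, c): assert leq(join(a, b), c)

    lhs = meet("x", join("y", "z"))                  # x ∧ (y ∨ z) = x ∧ 1 = x
    rhs = join(meet("x", "y"), meet("x", "z"))       # (x ∧ y) ∨ (x ∧ z) = 0 ∨ 0 = 0
    print("x ∧ (y ∨ z) =", lhs, " but (x ∧ y) ∨ (x ∧ z) =", rhs)
    assert lhs != rhs                                # the distributive law fails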

Definition: A lattice is distributive if x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z) for all x, y, z. That is to say, a lattice X is distributive if the map x \wedge -: X \to X, taking an element y to x \wedge y, is a morphism of join-semilattices.

  1. Exercise: Show that in a meet-semilattice, x \wedge -: X \to X is a poset map. Is it also a morphism of meet-semilattices? If X has a bottom element, show that the map x \wedge - preserves it.
  2. Exercise: Show that in any lattice, we at least have (x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z) for all elements x, y, z.

Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:

Theorem: The notion of distributive lattice is self-dual.

Proof: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z), follows from the distributive lattice axioms.

Expand the right side to ((x \vee y) \wedge x) \vee ((x \vee y) \wedge z), by distributivity. This reduces to x \vee [(x \vee y) \wedge z], by an absorption law. Expand this again, by distributivity, to x \vee (x \wedge z) \vee (y \wedge z). This reduces to x \vee (y \wedge z), by the other absorption law. This completes the proof. \Box

Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:

Definition: A frame is a sup-lattice for which x \wedge -: X \to X is a morphism of sup-lattices, for every x \in X. In other words, for every subset S \subseteq  X, we have \sup(\{x \wedge s: s \in S\}) = x \wedge \sup(S), or, as is often written,

\bigvee_{s \in S} x \wedge s = x \wedge \bigvee_{s \in S} s.

Example: A power set PX, as always partially ordered by inclusion, is a frame. In this case, it means that for any subset A and any collection of subsets \{B_i: i \in I\}, we have

A \cap (\bigcup_{i \in I} B_i) = \bigcup_{i \in I} A \cap B_i

This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.

Example: If X is a set, a topology on X is a subset \mathbf{T} \subseteq PX of the power set, partially ordered by inclusion as PX is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element \emptyset and the empty meet or top element X of PX are elements of \mathbf{T}, and also:

  1. If U, V are elements of \mathbf{T}, then so is U \cap V.
  2. If \{U_i: i \in I\} is a collection of elements of \mathbf{T}, then \bigcup_{i \in I} U_i is an element of \mathbf{T}.

A topological space is a set X which is equipped with a topology \mathbf{T}; the elements of the topology are called open subsets of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in PX (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.

The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! But just to be on the safe side, I’ll now give one example of a topological space, with a promise of more to come later. Let X be the set \mathbb{R}^n of n-tuples of real numbers. First, define the open ball in \mathbb{R}^n centered at a point x \in \mathbb{R}^n and of radius r > 0 to be the set \{y \in \mathbb{R}^n: ||x - y|| < r\}. Then, define a subset U \subseteq \mathbb{R}^n to be open if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology is by definition the collection of open sets.

It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the triangle inequality. (Exercise?)

Topology is a huge field in its own right; much of our interest here will be in its interplay with logic. To that end, I want to bring in, in addition to the connectives “and” and “or” we’ve discussed so far, the implication connective in logic. Most readers probably know that in ordinary logic, the formula p \Rightarrow q (“p implies q”) is equivalent to “either not p or q” — symbolically, we could define p \Rightarrow q as \neg p \vee q. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.

The main property we ask of implication is modus ponens: given p and p \Rightarrow q, we may infer q. In symbols, writing the inference or entailment relation as \leq, this is expressed as p \wedge (p \Rightarrow q) \leq q. And, we ask that implication be the weakest possible such assumption, i.e., that material implication p \Rightarrow q be the weakest element a whose presence in conjunction with p entails q. In other words, for given p and q, we now define implication p \Rightarrow q by the property

(a \wedge p \leq q) if and only if (a \leq p \Rightarrow q).

As a very easy exercise, show by Yoneda that an implication p \Rightarrow q is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity holds:

Theorem:

  • (1) If X is a meet-semilattice which admits an implication operator, then for every element p, the operator p \wedge -: X \to X preserves any sups which happen to exist in X.
  • (2) If X is a frame, then X admits an implication operator.

Proof: (1) Suppose S \subseteq X has a sup in X, here denoted \bigvee_{s \in S} s. We have

(\bigvee_{s \in S} s) \wedge p \leq q if and only if

\bigvee_{s \in S} s \leq p \Rightarrow q if and only if

for all s \in S, (s \leq p \Rightarrow q) if and only if

for all s \in S, (s \wedge p \leq q) if and only if

\bigvee_{s \in S} (s \wedge p) \leq q.

Since this is true for all q, the (dual of the) Yoneda principle tells us that (\bigvee_{s \in S} s) \wedge p = \bigvee_{s \in S} (s \wedge p), as desired. (We don’t need to add the hypothesis that the sup on the right side exists, for the first four lines after “We have” show that (\bigvee_{s \in S} s) \wedge p satisfies the defining property of that sup.)

(2) Suppose p, q are elements of a frame X. Define p \Rightarrow q to be \sup(\{a \in X: a \wedge p \leq q\}). By definition, if a \wedge p \leq q, then a \leq p \Rightarrow q. Conversely, if a \leq p \Rightarrow q, then

a \wedge p \leq \sup\{x: x \wedge p \leq q\} \wedge p = \sup\{x \wedge p: x \wedge p \leq q\},

where the equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by q (according to the defining property of sups). Hence a \wedge p \leq q, as desired. \Box
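
For readers who like to compute, here is a minimal Python sketch of part (2) in a small finite frame. The divisors of 12 under divisibility form a finite distributive lattice (meet is gcd, sups are lcms), hence a frame; the number 12 is just an arbitrary choice for the experiment.

    from math import gcd

    X = [1, 2, 3, 4, 6, 12]          # divisors of 12, ordered by divisibility

    def leq(a, b):
        return b % a == 0            # a <= b means a divides b

    def meet(a, b):
        return gcd(a, b)

    def sup(elems):
        out = 1                      # the empty sup is the bottom element 1
        for e in elems:
            out = out * e // gcd(out, e)
        return out

    def implies(p, q):
        # p => q is defined as the sup of all a with a /\ p <= q
        return sup([a for a in X if leq(meet(a, p), q)])

    # Sanity check of the defining adjunction: a /\ p <= q  iff  a <= (p => q).
    for p in X:
        for q in X:
            for a in X:
                assert leq(meet(a, p), q) == leq(a, implies(p, q))

    print(implies(4, 6))             # prints 6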

Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as PX, so long as we trust that p \Rightarrow q := \neg p \vee q really does what we ask of implication. We’ll come to that point again later.

Part (2) has some interesting consequences vis-à-vis topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let U, V be open sets in a topology. According to our prescription, we define U \Rightarrow V as the sup (the union) of all open sets W with the property that W \cap U \subseteq V. We can think of this inclusion as living in the power set PX. Then, assuming our formula U^c \cup V for implication in the Boolean algebra PX (where U^c denotes the complement of U), we would have W \subseteq U^c \cup V. And thus, our implication U \Rightarrow V in the topology is the union of all open sets W contained in the (usually non-open) set U^c \cup V. That is to say, U \Rightarrow V is the largest open set contained in U^c \cup V, otherwise known as the interior of U^c \cup V. Hence our formula:

U \Rightarrow V = int(U^c \cup V).
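
Here is a small Python check of this formula on a toy topology; the underlying set and the list of open sets are arbitrary choices, just large enough to be non-trivial. The sup of all open W with W ∩ U ⊆ V coincides with the interior of U^c ∪ V.

    X = frozenset({1, 2, 3})
    # the open sets of a small topology on X (an arbitrary example)
    T = [frozenset(s) for s in (set(), {1}, {2}, {1, 2}, {1, 2, 3})]

    def implication(U, V):
        # union of all open W with W ∩ U ⊆ V
        out = frozenset()
        for W in T:
            if (W & U) <= V:
                out = out | W
        return out

    def interior(A):
        # the largest open set contained in A
        out = frozenset()
        for W in T:
            if W <= A:
                out = out | W
        return out

    for U in T:
        for V in T:
            assert implication(U, V) == interior((X - U) | V)
    print("U => V agrees with int(U^c ∪ V) on this topology")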

Definition: A Heyting algebra is a lattice H which admits an implication p \Rightarrow q for any two elements p, q \in H. A complete Heyting algebra is a complete lattice which admits an implication for any two elements.

Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make intensional distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a morphism of frames is a poset map which preserves finite meets and arbitrary sups. A morphism of Heyting algebras preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A morphism of complete Heyting algebras also preserves all structure in sight (sups, infs, and implication).

Heyting algebras are usually not Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We’ll be speaking more about that soon, but for now I’ll remark that Heyting algebra is the algebra which underlies intuitionistic propositional calculus.

Exercise: Show that (0 \Rightarrow 0) = 1 in a Heyting algebra.

Exercise: (For those who know some general topology.) In a Heyting algebra, we define the negation \neg x to be x \Rightarrow 0. For the Heyting algebra given by a topology, what can you say about \neg U when U is open and dense?

Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice (formalizing the conjunction operator “and”), as a first step on the way to introducing Boolean algebras. Our larger goal in this series will be to discuss Stone duality, where it is shown how Boolean algebras can be represented “concretely”, in terms of the topology of their so-called Stone spaces — a wonderful meeting ground for algebra, topology, logic, geometry, and even analysis!

In this installment we will look at the notion of lattice and various examples of lattice, and barely scratch the surface — lattice theory is a very deep and multi-faceted theory with many unanswered questions. But the idea is simple enough: lattices formalize the notions of “and” and “or” together. Let’s have a look.

Let (X, \leq) be a poset. If x, y are elements of X, a join of x and y is an element j with the property that for any a \in X,

j \leq a if and only if (x \leq a and y \leq a).

For a first example, consider the poset PX of subsets of X ordered by inclusion. The join in that case is given by taking the union, i.e., we have

S \cup T \subseteq A if and only if (S \subseteq A and T \subseteq A).

Given the close connection between unions of sets and the disjunction “or”, we can therefore say, roughly, that joins are a reasonable mathematical way to formalize the structure of disjunction. We will say a little more on that later when we discuss mathematical logic.
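
As a quick illustration of that defining property (and nothing more), here is a tiny brute-force Python check over the power set of a three-element set; the base set is an arbitrary choice.

    from itertools import combinations

    base = [1, 2, 3]
    PX = [frozenset(c) for r in range(len(base) + 1)
          for c in combinations(base, r)]

    # the defining property of the join in PX:  S ∪ T ⊆ A  iff  (S ⊆ A and T ⊆ A)
    for S in PX:
        for T in PX:
            for A in PX:
                assert ((S | T) <= A) == (S <= A and T <= A)
    print("union satisfies the defining property of the join")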

Notice there is a close formal resemblance between how we defined joins and how we defined meets. Recall that a meet of x and y is an element m such that for all a \in X,

a \leq m if and only if (a \leq x and a \leq y).

Curiously, the logical structure in the definitions of meet and join is essentially the same; the only difference is that we switched the inequalities (i.e., replaced all instances of x \leq y by y \leq x). This is an instance of a very important concept. In the theory of posets, the act of modifying a logical formula or theorem by switching all the inequalities but otherwise leaving the logical structure the same is called taking the dual of the formula or theorem. Thus, we would say that the dual of the notion of meet is the notion of join (and vice-versa). This turns out to be a very powerful idea, which in effect will allow us to cut our work in half.

(Just to put in some fine print or boilerplate, let me just say that a formula in the first-order theory of posets is a well-formed expression in first-order logic (involving the usual logical connectives and logical quantifiers and equality over a domain X), which can be built up by taking \leq as a primitive binary predicate on X. A theorem in the theory of posets is a sentence (a closed formula, meaning that all variables are bound by quantifiers) which can be deduced, following standard rules of inference, from the axioms of reflexivity, transitivity, and antisymmetry. We occasionally also consider formulas and theorems in second-order logic (permitting logical quantification over the power set PX), and in higher-order logic. If this legalistic language is scary, don’t worry — just check the appropriate box in the End User Agreement, and reason the way you normally do.)

The critical item to install before we’re off and running is the following meta-principle:

Principle of Duality: If a logical formula F is a theorem in the theory of posets, then so is its dual F’.

Proof: All we need to do is check that the duals of the axioms in the theory of posets are also theorems; then F’ can be proved just by dualizing the entire proof of F. Now the dual of the reflexivity axiom, \forall_{x \in X} x \leq x, is itself! — and of course an axiom is a theorem. The transitivity axiom, \forall_{x, y, z \in X} (x \leq y and y \leq z) implies x \leq z, is also self-dual (when you dualize it, it looks essentially the same except that the variables x and z are switched — and there is a basic convention in logic that two sentences which differ only by renaming the variables are considered syntactically equivalent). Finally, the antisymmetry axiom is also self-dual in this way. Hence we are done. \Box

So, for example, by the principle of duality, we know automatically that the join of two elements is unique when it exists — we just dualize our earlier theorem that the meet is unique when it exists. The join of two elements x and y is denoted x \vee y.

Be careful, when you dualize, that any shorthand you used to abbreviate an expression in the language of posets is also replaced by its dual. For example, the dual of the notation x \wedge y is x \vee y (and vice-versa of course), and so the dual of the associativity law which we proved for meet is (for all x, y, z) (x \vee y) \vee z = x \vee (y \vee z). In fact, we can say

Theorem: The join operation \vee is associative, commutative, and idempotent.

Proof: Just apply the principle of duality to the corresponding theorem for the meet operation.

Just to get used to these ideas, here are some exercises.

  • State the dual of the Yoneda principle (as stated here).
  • Prove the associativity of join from scratch (from the axioms for posets). If you want, you may invoke the dual of the Yoneda principle in your proof. (Note: in the sequel, we will apply the term “Yoneda principle” to cover both it and its dual.)

To continue: we say a poset is a join-semilattice if it has all finite joins (including the empty join, which is the bottom element \bot satisfying \bot \leq a for all a). A lattice is a poset which has all finite meets and finite joins.

Time for some examples.

  • The set of natural numbers 0, 1, 2, 3, … under the divisibility order (x \leq y if x divides y) is a lattice. (What is the join of two elements? What is the bottom element?)
  • The set of natural numbers under the usual order is a join-semilattice (the join of two elements here is their maximum), but not a lattice (because it lacks a top element).
  • The set of subsets of a set X is a lattice. The join of two subsets is their union, and the bottom element is the empty set.
  • The set of subspaces of a vector space V is a lattice. The meet of two subspaces is their ordinary intersection; the join of two subspaces U, W is the vector space which they jointly generate (i.e., the set of vector sums u + w with u \in U, w \in W, which is closed under addition and scalar multiplication).

The join in the last example is not the naive set-theoretic union of course (and similar remarks hold for many other concrete lattices, such as the lattice of all subgroups of a group, and the lattice of ideals of a ring), so it might be worth asking if there is a uniform way of describing joins in cases like these. Certainly the idea of taking some sort of closure of the ordinary union seems relevant (e.g., in the vector space example, close up the union of U and W under the vector space operations), and indeed this can be made precise in many cases of interest.

To explain this, let’s take a fresh look at the definition of join: the defining property was

x \vee y \leq a if and only if (x \leq a and y \leq a).

What this is really saying is that among all the elements a which “contain” both x and y, the element x \vee y is the absolute minimum. This suggests a simple idea: why not just take the “intersection” (i.e., meet) of all such elements a to get that absolute minimum? In effect, construct joins as certain kinds of meets! For example, to construct the join of two subgroups H \subseteq G, J \subseteq G, take the intersection of all subgroups containing both H and J — that intersection is the group-theoretic closure of the union H \cup J.
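
Here is a small Python sketch of this recipe for subgroups, using the cyclic group Z_12 as an arbitrary concrete example (so that there are only finitely many subgroups to intersect): the join of two subgroups is the intersection of all subgroups containing both, and it is visibly larger than the naive union.

    n = 12                                    # work inside the cyclic group Z_12

    def subgroup(d):
        # the cyclic subgroup of Z_n generated by d
        return frozenset((d * k) % n for k in range(n))

    # every subgroup of Z_12 is of this form, one for each divisor of 12
    subgroups = [subgroup(d) for d in (1, 2, 3, 4, 6, 12)]

    def join(H, J):
        result = frozenset(range(n))          # start with the whole group
        for K in subgroups:
            if H <= K and J <= K:
                result = result & K           # intersect every subgroup containing both
        return result

    H, J = subgroup(4), subgroup(6)           # {0, 4, 8} and {0, 6}
    print(sorted(join(H, J)))                 # [0, 2, 4, 6, 8, 10], the subgroup generated by 2
    print(sorted(H | J))                      # [0, 4, 6, 8] -- not a subgroup (4 + 6 = 10 is missing)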

There’s a slight catch: this may involve taking the meet of infinitely many elements. But there is no difficulty in saying what this means:

Definition: Let X be a poset, and suppose S \subseteq X. The infimum of S, if it exists, is an element m \in X such that for all a \in X, a \leq m if and only if a \leq s for all s \in S.

By the usual Yoneda argument, infima are unique when they exist (you might want to write that argument out to make sure it’s quite clear). We denote the infimum of S by \inf(S).

We say that a poset X is an inf-lattice if there is an infimum for every subset. Similarly, the supremum of S \subseteq X, if it exists, is an element \sup(S) such that for all a \in X, \sup(S) \leq a if and only if s \leq a for all s \in S. A poset is a sup-lattice if there is a supremum for every subset. [I’ll just quickly remark that the notions of inf-lattice and sup-lattice belong to second-order logic, since they involve quantifying over all subsets S \subseteq X (or over all elements of PX).]

Trivially, every inf-lattice is a meet-semilattice, and every sup-lattice is a join-semilattice. More interestingly, we have the

Theorem: Every inf-lattice is a sup-lattice (!). Dually, every sup-lattice is an inf-lattice.

Proof: Suppose X is an inf-lattice, and let S \subseteq X. Let U = \{u \in X: \forall_{s \in S} s \leq u\} be the set of upper bounds of S. I claim that \inf(U) (“least upper bound”) is the supremum of S. Indeed, from \inf(U) \leq \inf(U) and the definition of infimum, we know that \inf(U) \leq a if a \in U, i.e., \inf(U) \leq a if s \leq a for all s \in S. On the other hand, we also know that if s \in S, then s \leq u for every u \in U, and hence s \leq \inf(U) by the defining property of infimum (i.e., \inf(U) really is an upper bound of S). So, if \inf(U) \leq a, we conclude by transitivity that s \leq a for every s \in S. This completes the proof. \Box
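
The proof is entirely constructive, and for a finite poset one can run it verbatim. Here is a minimal Python sketch using the power set of {1, 2, 3} under inclusion (an arbitrary choice of inf-lattice): the infimum is found by brute force from its defining property, and the supremum is computed, exactly as in the proof, as the infimum of the set of upper bounds.

    from itertools import combinations

    base = [1, 2, 3]
    elements = [frozenset(c) for r in range(len(base) + 1)
                for c in combinations(base, r)]

    def leq(a, b):
        return a <= b                        # inclusion of subsets

    def inf(S):
        S = list(S)
        for m in elements:                   # find the element with the defining property
            if all(leq(a, m) == all(leq(a, s) for s in S) for a in elements):
                return m
        raise ValueError("no infimum found")

    def sup(S):
        # as in the proof: sup(S) = inf of the set of upper bounds of S
        upper_bounds = [u for u in elements if all(leq(s, u) for s in S)]
        return inf(upper_bounds)

    print(sup([frozenset({1}), frozenset({2})]))   # frozenset({1, 2})
    print(sup([]))                                  # frozenset(), the bottom element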

Corollary: Every finite meet-semilattice is a lattice.

Even though every inf-lattice is a sup-lattice and conversely (sometimes people just call them “complete lattices”), there are important distinctions to be made when we consider what is the appropriate notion of homomorphism. The notions are straightforward enough: a morphism of meet-semilattices f: X \to Y is a function which takes finite meets in X to finite meets in Y (f(x \wedge x') = f(x) \wedge f(x'), and f(1) = 1 where the 1’s denote top elements). There is a dual notion of morphism of join-semilattices (f(x \vee x') = f(x) \vee f(x') and f(0) = 0 where the 0’s denote bottom elements). A morphism of inf-lattices f: X \to Y is a function such that f(\inf(S)) = \inf(f(S)) for all subsets S \subseteq X, where f(S) denotes the direct image of S under f. And there is a dual notion of morphism of sup-lattices: f(\sup(S)) = \sup(f(S)). Finally, a morphism of lattices is a function which preserves all finite meets and finite joins, and a morphism of complete lattices is one which preserves all infs and sups.

Despite the theorem above, it is not true that a morphism of inf-lattices must be a morphism of sup-lattices; nor is it true that a morphism of finite meet-semilattices must be a lattice morphism. Therefore, in contexts where homomorphisms matter (which is just about all the time!), it is important to keep the qualifying prefixes around and keep the distinctions straight.

Exercise: Come up with some examples of morphisms which exhibit these distinctions.

My name is Todd Trimble. As regular readers of this blog may have noticed by now, I’ve recently been actively commenting on some of the themes introduced by our host Vishal, and he’s now asked whether I’d like to write some posts of my own. Thank you Vishal for the invitation!

As made clear in some of my comments, my own perspective on a lot of mathematics is greatly informed and influenced by category theory — but that’s not what I’m setting out to talk about here, not yet anyway. For reasons not altogether clear to me, the mere mention of category theory often scares people, or elicits other emotional reactions (sneers, chortles, challenges along the lines “what is this stuff good for, anyway?” — I’ve seen it all).

Anyway, I’d like to try something a little different this time — instead of blathering about categories, I’ll use some of Vishal’s past posts as a springboard to jump into other mathematics which I find interesting, and I won’t need to talk about categories at all unless a strong organic need is felt for it (or if it’s brought back “by popular demand”). But, the spirit if not the letter of categorical thinking will still strongly inform my exposition — those readers who already know categories will often be able to read between the lines and see what I’m up to. Those who do not will still be exposed to what I believe are powerful categorical ways of thinking.

I’d like to start off talking about a very pretty area of mathematics which ties together various topics in algebra, topology, logic, geometry… I’m talking about mathematics in the neighborhood of so-called “Stone duality” (after the great Marshall Stone). I’m hoping to pitch this as though I were teaching an undergraduate course, at roughly a junior or senior level in a typical American university. [Full disclosure: I’m no longer a professional academic, although I often play one on the Internet 🙂 ] At times I will allude to topics which presuppose some outside knowledge, but hey, that’s okay. No one’s being graded here (thank goodness!).

First, I need to discuss some preliminaries which will eventually lead up to the concept of Boolean algebra — the algebra which underlies propositional logic.

A partial order on a set X is a binary relation (a subset R \subseteq X \times X), where we write x \le y if (x, y) \in R, satisfying the following conditions:

  1. (Reflexivity) x \le x for every x \in  X;
  2. (Transitivity) For all x, y, z \in X, (x \le y and y \le z) implies x \le z.
  3. (Antisymmetry) For all x, y \in X, (x \le y and y \le x) implies x = y.

A partially ordered set (poset for short) is a set equipped with a partial order. Posets occur all over mathematics, and many are likely already familiar to you. Here are just a few examples:

  • The set of natural numbers 0, 1, 2,\ldots ordered by divisibility (x \le y if x divides y). (A brute-force check of the axioms for this example appears in the sketch after this list.)
  • The set of subsets of a set X (where \le is the relation of inclusion A \subseteq B of one subset in another).
  • The set of subgroups of a group G (where again \le is the inclusion relation between subgroups).
  • The set of ideals in a ring R (ordered by inclusion).
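
As promised, here is a brute-force Python check that the first example really is a partial order; the range 1 to 20 is just an arbitrary finite window for the test. The strict order < is included as a non-example, since it fails reflexivity.

    def is_partial_order(X, leq):
        X = list(X)
        reflexive = all(leq(x, x) for x in X)
        transitive = all(not (leq(x, y) and leq(y, z)) or leq(x, z)
                         for x in X for y in X for z in X)
        antisymmetric = all(not (leq(x, y) and leq(y, x)) or x == y
                            for x in X for y in X)
        return reflexive and transitive and antisymmetric

    print(is_partial_order(range(1, 21), lambda x, y: y % x == 0))   # True: divisibility
    print(is_partial_order(range(1, 21), lambda x, y: x < y))        # False: not reflexive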

The last three examples clearly follow a similar pattern, and in fact, there is a sense in which every poset P can be construed in just this way: as a set of certain types of subset ordered by inclusion. This is proved in a way very reminiscent of the Cayley lemma (that every group can be represented as a group of permutations of a set). You can think of such results as saying “no matter how abstractly a group [or poset] may be presented, it can always be represented in a concrete way, in terms of permutations [or subsets]”.

To make this precise, we need one more notion, parallel to the notion of group homomorphism. If X and Y are posets, a poset map from X to Y is a function f: X \to  Y which preserves the partial order (that is, if x \le y in X, then f(x) \le f(y) in Y). Here then is our representation result:

Lemma (Dedekind): Any poset X may be faithfully represented in its power set PX, partially ordered by inclusion. That is, there exists a poset map i: X \to PX that is injective (what we mean by “faithful”: the map is one-to-one).

Proof: Define i: X \to PX to be the function which takes x \in X to the subset \{ a \in X : a \le x \} (which we view as an element of the power set). To check this is a poset map, we must show that if x \le y, then i(x) = \{ a \in X : a \le x \} is included in i(y) = \{ b \in X : b \le y \}. This is easy: if a belongs to i(x), i.e., if a \le x, then from x \le y and the transitivity property, a \le y, hence a belongs to i(y).

Finally, we must show that i: X \to PX is injective; that is, i(x) = i(y) implies x = y. In other words, we must show that if

\{ a \in X : a \le x \} = \{ b \in  X: b \le y \},

then x = y. But, by the reflexivity property, we know x \le x; therefore x belongs to the set displayed on the left, and therefore to the set on the right. Thus x \le y. By similar reasoning, y \le  x. Then, by the antisymmetry property, x = y, as desired. \Box
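
The embedding is easy to play with in code. Here is a minimal Python sketch for the divisors of 12 under divisibility (an arbitrary finite example): each element goes to its set of lower bounds, and we check that the map preserves order and is injective.

    X = [1, 2, 3, 4, 6, 12]                  # divisors of 12 under divisibility

    def leq(a, b):
        return b % a == 0

    def i(x):
        # the Dedekind embedding: x goes to { a in X : a <= x }
        return frozenset(a for a in X if leq(a, x))

    # i is a poset map ...
    assert all(not leq(x, y) or i(x) <= i(y) for x in X for y in X)
    # ... and it is injective.
    assert len({i(x) for x in X}) == len(X)

    print(sorted(i(6)))                      # [1, 2, 3, 6]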

The Dedekind lemma turns out to be extremely useful (it and the Cayley lemma are subsumed under an even more useful result called the Yoneda lemma — perhaps more on this later). Before I illustrate its uses, let me rephrase slightly the injectivity property of the Dedekind embedding i: X \to PX: it says,

If (for all a in X, a \le x iff a \le y), then x = y.

This principle will be used over and over again, so I want to give it a name: I’ll call it the Yoneda principle.

Here is a typical use. Given elements x, y in a poset X, we say that an element m is a meet of x and y if for all a \in X,

a \le m if and only if (a \le x and a \le  y).

Fact: there is at most one meet of x and y. That is, if m and n are both meets of x and y, then m = n.

Proof: For all a \in X, a \le m if and only if (a \le x and a \le y) if and only if a \le n. Therefore, m = n by the Yoneda principle. \Box

Therefore, we can refer to the meet of two elements x and y (if it exists); it is usually denoted x \wedge y. Because x \wedge y \le x \wedge y, we have x \wedge y \le x and x \wedge y \le y.

Example: In a concrete poset, like the poset of all subsets of a set or subgroups of a group, the meet of two elements is their intersection.

Example: Consider the natural numbers ordered by divisibility. The meet m = x \wedge y satisfies m \leq x and m \leq y (i.e., m divides both x and y). At the same time, the meet property says that any number which divides both x and y must also divide m. It follows that the meet in this poset is the gcd of x and y.
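
For the skeptical, here is a tiny Python spot-check of this claim; the ranges are arbitrary finite windows, since a computer cannot quantify over all natural numbers.

    from math import gcd

    def divides(a, b):
        return b % a == 0

    def is_meet(m, x, y, test_range=range(1, 60)):
        # the defining property: a <= m  iff  (a <= x and a <= y), for every a tested
        return all(divides(a, m) == (divides(a, x) and divides(a, y))
                   for a in test_range)

    assert all(is_meet(gcd(x, y), x, y)
               for x in range(1, 25) for y in range(1, 25))
    print("gcd passes the defining property of the meet")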

Here are some more results which can be proved with the help of the Yoneda principle. I’ll just work through one of them, and leave the others as exercises.

  1. x \wedge x = x (idempotence of meet)
  2. x \wedge y = y \wedge x (commutativity of meet)
  3. (x \wedge y) \wedge z = x \wedge (y \wedge z) (associativity of meet)

To prove 3., we can use the Yoneda principle: for all a in the poset, we have

a \le ( x \wedge y) \wedge z

iff ( a \le x \wedge y and a \le z )

iff ( a \le x and a \le y and a \le z )

iff ( a \le x and a \le y \wedge z )

iff a \le x \wedge ( y \wedge z ).

Hence, ( x \wedge y) \wedge z =  x \wedge ( y \wedge z ) by Yoneda.

In fact, we can unambiguously refer to the meet of any finite number of elements x_1,  x_2, \ldots, x_n by the evident property:

a \le x_1 \wedge x_2 \wedge \ldots \wedge x_n iff ( a \le x_1 and a \le x_2 and \ldots a \le x_n )

— this uniquely defines the meet on the left, by Yoneda, and the order in which the x_i appear makes no difference.

But wait — what if the number n of elements is zero? That is, what is the empty meet? Well, the condition “a \le x_1 and \ldots a \le x_n” becomes vacuous (there is no a for which the condition is not satisfied), so whatever the empty meet is, call it \top, we must have a \le \top for all a. So \top is just the top element of the poset (if one exists). Another name for the top element is “the terminal element”, and another notation for it is ‘1‘.

Definition: A meet-semilattice is a poset which has all finite meets, including the empty one.

Exercises:

  • Prove that in a meet-semilattice, x \wedge \top  = x for all x.
  • Is there a top element for the natural numbers ordered by divisibility?
