After this brief (?) categorical interlude, I’d like to pick up the main thread again, and take a closer look at some of the ingredients of baby Stone duality in the context of categorical algebra, specifically through the lens of adjoint functors. By passing a topological light through this lens, we will produce the spectrum of a Boolean algebra: a key construction of full-fledged Stone duality!

Just before the interlude, we were discussing some consequences of baby Stone duality. Taking it from the top, we recalled that there are canonical maps

i_B: B \to \hom(\mbox{Bool}(B, \mathbf{2}), \mathbf{2}): b \mapsto (\phi \mapsto \phi(b))

j_X: X \to \mbox{Bool}(\hom(X, \mathbf{2}), \mathbf{2}): x \mapsto (\sigma \mapsto \sigma(x))

in the categories of Boolean algebras B and sets X. We said these are “natural” maps (even before the notion of naturality had been formally introduced), and recalled our earlier result that these are isomorphisms when B and X are finite (this fails in general; for instance, if B is a free Boolean algebra generated by a countable set, then for simple reasons of cardinality B cannot be a power set).

What we have here is an adjoint pair of functors between the categories Set and \mbox{Bool} of sets and Boolean algebras, each given by a hom-functor:

(P^{op} = \hom(-, \mathbf{2})^{op}: Set \to \mbox{Bool}^{op}) \dashv (Q = \mbox{Bool}(-, \mathbf{2}): \mbox{Bool}^{op} \to Set)

(P^{op} acts the same way on objects and morphisms as P: Set^{op} \to \mbox{Bool}, but is regarded as mapping between the opposite categories). This actually says something very simple: that there is a natural bijection between Boolean algebra maps and functions

\displaystyle \frac{\phi: B \to \hom(X, \mathbf{2})}{\hat{\phi}: X \to \mbox{Bool}(B, \mathbf{2})}

given by the formula \hat{\phi}(x)(b) = \phi(b)(x). [The very simple nature of this formula suggests that it’s nothing special to Boolean algebras — a similar adjunction could be defined for any algebraic theory defined by operations and (universally quantified) equations, replacing \mathbf{2} by any model of that theory.] The unit of the adjunction at X is the function j_X: X \to QP^{op}X, and the counit at B is the Boolean algebra map i_B (regarded as a morphism \varepsilon_B: P^{op}QB \to B mapping the other way in the opposite category \mbox{Bool}^{op}).
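
The formula \hat{\phi}(x)(b) = \phi(b)(x) is just currying. For readers who like to see such things executed, here is a minimal Python sketch (the names are mine, purely for illustration), with maps represented as ordinary functions:

    def transpose(phi):
        """Given phi : B -> hom(X, 2), curried so that phi(b)(x) is in {0, 1},
        return phi_hat : X -> Bool(B, 2) with phi_hat(x)(b) = phi(b)(x)."""
        return lambda x: (lambda b: phi(b)(x))

    # Example: X = {0, 1, 2}, B = subsets of X (as frozensets),
    # and phi(b) is the characteristic function of b.
    phi = lambda b: (lambda x: 1 if x in b else 0)
    phi_hat = transpose(phi)
    print(phi_hat(1)(frozenset({0, 1})))   # 1, since 1 lies in {0, 1}
    print(phi_hat(2)(frozenset({0, 1})))   # 0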

The functor QP^{op} is usually described in the language of ultrafilters, as I will now explain.

Earlier, we remarked that an ultrafilter in a Boolean algebra is a maximal filter, dual to a maximal ideal; let’s recall what that means. A maximal ideal I in a Boolean ring B is the kernel of a (unique) ring map

\phi: B \to \mathbf{2}

i.e., has the form I = \phi^{-1}(0) for some such map. Being an ideal, it is an additive subgroup I \subseteq  B such that x \in B, y \in I implies x y = x \wedge y \in I. It follows that if x, y \in I, then x \vee y = x + y + xy \in I, so I is closed under finite joins (including the empty join 0 = \bot). Also, if x \leq y and y \in I, then x = x \wedge y = xy \in I, so that I is “downward-closed”.

Conversely, a downward-closed subset I \subseteq B which is closed under finite joins is an ideal in B (exercise!). Finally, if I is a maximal ideal, then under the quotient map

\phi: B \to B/I \cong \mathbf{2}

we have that for all b \in B, either \phi(b) = 0 or \phi(b) = 1, i.e., that either b \in I or \neg b = 1 - b \in I.

Thus we have redefined the notion of maximal ideal in a Boolean algebra in the first-order theory of posets: a downward-closed set I \subseteq B closed under finite joins, such that every element or its complement (but never both!) is contained in I. [If both x, \neg x \in I, then x \vee \neg x = 1 \in I, whence b \in I for all b \in B (since b \leq 1 and I is downward-closed). But then I isn’t a maximal (proper) ideal!]

The notion of ultrafilter is dual, so an ultrafilter in a Boolean algebra B is defined to be a subset F \subseteq B which

  • Is upward-closed: if x \in F and x \leq y, then y \in F;
  • Is closed under finite meets: if x, y \in F, then x \wedge y \in F;
  • Satisfies dichotomy: for every x \in B, exactly one of x, \neg x belongs to F.
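
These axioms are finitary enough to check by brute force on a small example. Here is a Python sketch (purely illustrative) that enumerates all ultrafilters on a 3-element set; consistent with baby Stone duality, it finds exactly the three principal ones:

    from itertools import chain, combinations

    # Enumerate the ultrafilters on X = {0, 1, 2} by testing every family
    # of subsets against the three axioms above.  (Brute force is fine here:
    # P(X) has 8 elements, so there are only 2^8 candidate families.)
    X = frozenset({0, 1, 2})
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(X), k) for k in range(len(X) + 1))]

    def is_ultrafilter(F):
        upward    = all(T in F for S in F for T in subsets if S <= T)
        meets     = all(S & T in F for S in F for T in F)
        dichotomy = all((S in F) != ((X - S) in F) for S in subsets)
        return upward and meets and dichotomy

    families = chain.from_iterable(combinations(subsets, k)
                                   for k in range(len(subsets) + 1))
    ultras = [set(F) for F in families if is_ultrafilter(set(F))]
    print(len(ultras))   # 3: one principal ultrafilter per point of X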

If I is a maximal ideal, then \neg I = \{\neg x: x \in I\} is an ultrafilter, and we have natural bijections between the following concepts:

Boolean algebra maps B \to \mathbf{2} \leftrightarrow maximal ideals I \subseteq B \leftrightarrow ultrafilters F \subseteq B

so that QB = \mbox{Bool}(B, \mathbf{2}) is naturally identified with the set of ultrafilters in B.

If X is a set, then an ultrafilter on X is by definition an ultrafilter in the Boolean algebra P X. Hence QP^{op}X is identified with the set of ultrafilters on X, usually denoted \beta X. The unit map

j_X: X \to QP^{op}X \cong \beta X

maps x \in X to an ultrafilter denoted \mbox{prin}_X(x) \subseteq PX, consisting of all subsets S \subseteq X which contain x, and called the principal ultrafilter generated by x.

We saw that when X is finite, the function j_X (and therefore also \mbox{prin}_X) is a bijection: in that case every ultrafilter is principal, as part of baby Stone duality (see Proposition 4 here). Here is a slight generalization:

Proposition 1: If an ultrafilter F on X contains a finite set S \subseteq X, then F is principal.

Proof: It is enough to show F contains \{x\} for some x \in S. If not, then F contains the complement \neg\{x\} for every x \in S (by dichotomy), and therefore also the finite intersection

\bigcap_{x \in S} \neg\{x\} = \neg S,

which contradicts the fact that S \in F. \Box

It follows that nonprincipal ultrafilters can exist only on infinite sets X, and that every cofinite subset of X (complement of a finite set) belongs to such an ultrafilter (by dichotomy). The collection of cofinite sets forms a filter, and so the question of existence of nonprincipal ultrafilters is the question of whether the filter of cofinite sets can be extended to an ultrafilter. Under the axiom of choice, the answer is yes:

Proposition 2: Every (proper) filter in a Boolean algebra is contained in some ultrafilter.

Proof: This is dual to the statement that every proper ideal in a Boolean ring is contained in a maximal ideal. Either statement may be proved by appeal to Zorn’s lemma: the collection of filters which contain a given filter has the property that every linear chain of such filters has an upper bound (namely, the union of the chain), and so by Zorn there is a maximal such filter. \Box

As usual, Zorn’s lemma is a kind of black box: it guarantees existence without giving a clue to an explicit construction. In fact, nonprincipal ultrafilters on sets X, like well-orderings of the reals, are notoriously inexplicit: no one has ever seen one directly, and no one ever will.

That said, one can still develop some intuition for ultrafilters. I think of them as something like “fat nets”. Each ultrafilter F on a set X defines a poset (of subsets ordered by inclusion), but I find it more suggestive to consider instead the opposite F^{op}, where U \leq V in F^{op} means V \subseteq U — so that the further or deeper you go in F^{op}, the smaller or more concentrated the element. Since F is closed under finite intersections, F^{op} has finite joins, so that F^{op} is directed (any two elements have an upper bound), just like the elements of a net (or more pedantically, the domain of a net). I call an ultrafilter a “fat net” because its elements, being subsets of X, are “fatter” than mere points.

Intuitively speaking, ultrafilters as nets “move in a definite direction”, in the sense that given an element U \in F, however far in the net, and given a subset T \subseteq U, the ultrafilter-as-net sniffs out a direction in which to proceed, “tunneling” either into T if T \in F, or into its relative complement U \cap \neg T if this belongs to F. In the case of a principal ultrafilter, there is a final element x of the net; otherwise not (but we can think of a nonprincipal ultrafilter as ending at an “ideal point” of the set X if we want).

Since the intuitive imagery here is already vaguely topological, we may as well make the connection with topology more precise. So, suppose now that X comes equipped with a topology. We say that an ultrafilter F on X converges to a point x \in X if each open set U containing x (or each neighborhood of x) belongs to the ultrafilter. In other words, by going deep enough into the ultrafilter-as-net, you get within any chosen neighborhood of the point. We write F \to x to say that F converges to x.

General topology can be completely developed in terms of the notion of ultrafilter convergence, often very efficiently. For example, starting with any relation whatsoever between ultrafilters and points,

c \subseteq \beta(X) \times X,

we can naturally define a topology \mbox{top}(c) on X so that

F \to x with respect to \mbox{top}(c) whenever (F, x) \in c.

Let’s tackle that in stages: in order for the displayed condition to hold, a neighborhood of x must belong to every ultrafilter F for which (F, x) \in c. This suggests that we try defining the filter N_x of neighborhoods of x to be the intersection of ultrafilters

N_x = \bigcap_{F: (F, x) \in c} F.

Then define a subset U \subseteq X to be open if it is a neighborhood of all the points it contains. In other words, define U to be open if

\forall_{(F, x) \in c} x \in U \Rightarrow U \in F.

Proposition 3: This defines a topology, \mbox{top}(c).

Proof: Since X \in F for every ultrafilter F, it is clear that X is open; also, it is vacuously true that the empty set is open. If U, V are open, then for all (F, x) \in c, whenever x \in U \cap V, we have x \in U and x \in V, so that U \in F and V \in F by openness, whence U \cap V \in F since F is closed under intersections. So U \cap V is also open. Finally, suppose U_i is a collection of open sets. For all (F, x) \in c, if x \in \bigcup_i U_i, then x \in U_i for some i, so that U_i \in F by openness, whence \bigcup_i U_i \in F since ultrafilters are upward closed. So \bigcup_i U_i is also open. \Box.
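
To see the construction at work in a finite (hence checkable) case: on a finite set every ultrafilter is principal (Proposition 1), so a relation c \subseteq \beta(X) \times X amounts to a relation r on X itself, where (\mbox{prin}_X(a), x) \in c corresponds to (a, x) \in r. A small Python sketch (illustrative only):

    from itertools import chain, combinations

    # top(c) on a finite set: U is open iff for every (a, x) in r with
    # x in U, we have U in prin(a), i.e. a in U.
    X = {0, 1, 2}
    r = {(0, 1), (1, 2)}   # an arbitrary illustrative relation on X

    def is_open(U):
        return all(a in U for (a, x) in r if x in U)

    subsets = [set(s) for s in chain.from_iterable(
        combinations(sorted(X), k) for k in range(len(X) + 1))]
    print([U for U in subsets if is_open(U)])
    # [set(), {0}, {0, 1}, {0, 1, 2}]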

Let’s recap: starting from a topology \tau on X, we’ve defined a convergence relation \mbox{conv}(\tau) \subseteq \beta(X) \times X (consisting of pairs (F, x) such that F \to x), and conversely, given any relation c \subseteq \beta(X) \times X, we’ve defined a topology \mbox{top}(c) on X. What we actually have here is a Galois connection where

c \subseteq \mbox{conv}(\tau) if and only if \tau \subseteq \mbox{top}(c)

Of course not every relation c \subseteq \beta(X) \times X is the convergence relation of a topology, so we don’t quite have a Galois correspondence (that is, \mbox{conv} and \mbox{top} are not quite inverse to one another). But, it is true that every topology \tau is the topology of its ultrafilter convergence relation, i.e., \tau = \mbox{top}(\mbox{conv}(\tau)). For this, it suffices to show that every neighborhood filter N_x is the intersection of the ultrafilters that contain it. But that is true of any filter:

Theorem 1: If N is a filter in PX and A \notin N, then there exists an ultrafilter F for which N \subseteq F and A \notin F.

Proof: First I claim \neg A \cap B \neq 0 for all B \in N; otherwise \neg A \cap B = 0 for some B \in N, whence B \subseteq \neg \neg A = A, so that A \in N since filters are upward closed, contradiction. It follows that N can be extended to the (proper) filter

\{C \in PX: \exists_{B \in N} \neg A \cap B \subseteq C\}

which in turn extends to some ultrafilter F, by proposition 2. Since \neg A \in F, we have A \notin F. \Box

Corollary 1: Every filter is the intersection of all the ultrafilters which contain it.

The ultrafilter convergence approach to topology is particularly convenient for the study of compactness:

Theorem 2: A space X is compact if and only if every ultrafilter F converges to at least one point. It is Hausdorff if and only if every ultrafilter converges to at most one point.

Proof: First suppose that X is compact, and (aiming for a contradiction) that F converges to no point of X. This means that for every x \in X there is a neighborhood U_x (which we may take to be open) which does not belong to F, or that \neg U_x \in F. Finitely many of these U_x cover X, by compactness. By De Morgan’s law, this means finitely many \neg U_x have empty intersection. But this would mean \emptyset \in F, since F is closed under finite intersections, contradiction.

In the reverse direction, suppose that every ultrafilter converges. We need to show that if C = \{U_i\}_{i \in I} is any collection of open subsets of X such that no finite subcollection covers X, then the union of the U_i cannot cover X. First, because no finite subcollection covers, we may construct a filter generated by the complements:

F = \{A \subseteq X: \bigcap_{j=1}^n \neg U_{i_j} \subseteq A \mbox{ for some } U_{i_1}, \ldots, U_{i_n} \in C\}.

Extend this filter to an ultrafilter G; then by assumption \exists_{x \in X} G \to x. If some one of the U_i contained x, then U_i \in G by definition of convergence. But we also have \neg U_i \in F \subseteq G, and this is a contradiction. So, x lies outside the union of the U_i, as was to be shown.

Now let X be Hausdorff, and suppose that F \to x and F \to y. Let U_x, U_y be neighborhoods of x, y respectively with empty intersection. By definition of convergence, we have U_x, U_y \in F, whence \emptyset = U_x \cap U_y \in F, contradiction.

Conversely, suppose every ultrafilter converges to at most one point, and let x, y be two distinct points. Unless there are neighborhoods U_x, U_y of x, y respectively such that U_x \cap U_y = \emptyset (which is what we want), the smallest filter containing the two neighborhood filters N_x, N_y (that is to say, the join N_x \vee N_y in the poset of filters) is proper, and hence extends to an ultrafilter F. But then N_x \subseteq F and N_y \subseteq F, which is to say F \to x and F \to y, contradiction. \Box

Theorem 2 is very useful; among other things it paves the way for a clean and conceptual proof of Tychonoff’s theorem (that an arbitrary product of compact spaces is compact). For now we note that it says that a topology \tau is the topology of a compact Hausdorff space structure on X if and only if the convergence relation \mbox{conv}(\tau) \subseteq \beta(X) \times X is a function. And in practice, functions c: \beta(X) \to X which arise “naturally” tend to be such convergence relations, making X a compact Hausdorff space.
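
The finite case gives a quick sanity check of Theorem 2: a finite space is automatically compact, and it is Hausdorff precisely when it is discrete, which is exactly when the convergence relation is a function. A Python sketch (illustrative only), using the fact that every ultrafilter on a finite set is principal:

    # prin(x) -> y iff every open set containing y also contains x.
    def converges(x, y, topology):
        return all(x in U for U in topology if y in U)

    X = {0, 1}
    indiscrete = [set(), {0, 1}]
    discrete   = [set(), {0}, {1}, {0, 1}]
    for tau in (indiscrete, discrete):
        print({(x, y) for x in X for y in X if converges(x, y, tau)})
    # indiscrete: {(0,0), (0,1), (1,0), (1,1)} -- conv is not a function
    # discrete:   {(0,0), (1,1)}               -- conv is the identity function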

Here is our key example. Let B be a Boolean algebra, and let X = QB = \mbox{Bool}(B, \mathbf{2}), which we have identified with the set of ultrafilters in B. Define a map c: \beta(X) \to X by

\displaystyle \beta (QB) \cong QP^{op}QB \stackrel{Q\varepsilon_B}{\to} QB

where \varepsilon_B: P^{op}QB \to B was the counit (evaluated at B) of the adjunction P^{op} \dashv Q defined at the top of this post. Unpacking the definitions a bit, the map Q \varepsilon_B is the map \mbox{Bool}(i_B, \mathbf{2}), the result of applying the hom-functor \mbox{Bool}(-, \mathbf{2}) to

i_B: B \to P^{op}QB = \hom(\mbox{Bool}(B, \mathbf{2}), \mathbf{2}): b \mapsto (\phi \mapsto \phi(b))

Chasing this a little further, the map c “pulls back” an ultrafilter F \subseteq P^{op}QB to the ultrafilter i_B^{-1}(F) \subseteq B, viewed as an element of QB. We then topologize QB by the topology \mbox{top}(c).

This construction is about as “abstract nonsense” as it gets, but you have to admit that it’s pretty darned canonical! The topological space QB we get in this way is called the spectrum of the Boolean algebra B. If you’ve seen a bit of algebraic geometry, then you probably know another, somewhat more elementary way of defining the spectrum (of B as a commutative ring), so we may as well make the connection explicit. However you define it, the result is a compact Hausdorff space structure with some other properties which make it very reminiscent of Cantor space.

It is first of all easy to see that X = QB is compact, i.e., that every ultrafilter F converges. Indeed, the relation c is a function \beta (QB) \to QB, and if you look at the condition for a set U to be open w.r.t. \mbox{top}(c),

\forall_{(F, x = c(F))} x \in U \Rightarrow U \in F,

you see immediately that F converges to x = c(F).

To get Hausdorffness, take two distinct points u, v \in QB (ultrafilters in B). Since these are distinct maximal filters, there exists b \in B such that b belongs to u but not to v, and then \neg b belongs to v but not to u. Define

U(b) := \{w \in QB: b \in w\}.

Proposition 4: U(b) is open in \mbox{top}(c).

Proof: We must check, for all ultrafilters F on QB, that

c(F) \in U(b) \Rightarrow U(b) \in F.

But c(F) = i_B^{-1}(F). By definition of U(b), we are thus reduced to checking that

b \in i_B^{-1}(F) \Rightarrow U(b) \in F

or that i_B(b) \in F \Rightarrow U(b) \in F. But i_B(b) \in P^{op}QB (as a subset of QB) is U(b)! \Box

As a result, U(b) and U(\neg b) are open sets containing the given points u, v. They are disjoint since in fact U(\neg b) = \neg U(b) (indeed, because i_B preserves negation). This gives Hausdorffness, and also that the U(b) are clopen (closed and open).

We actually get a lot more:

Proposition 5: The collection \{U(b): b \in B\} is a basis for the topology \mbox{top}(c) on QB.

Proof: The sets U(b) form a basis for some topology \tau, because U(b) \cap U(b') = U(b \wedge b') (indeed, i_B preserves meets). By the previous proposition, \tau \subseteq \mbox{top}(c). So the identity on QB gives a continuous comparison map

QB_{\mbox{top}(c)} \to QB_{\tau}

between the two topologies. But a continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism, so \tau = \mbox{top}(c). \Box
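
In the finite case the spectrum reproduces baby Stone duality: for B = PX with X finite, the points of QB are the principal ultrafilters (one per atom), the basic clopens U(b) separate them, and the topology is discrete. Here is a small Python sketch (illustrative only) of the points and the sets U(b) for B = P(\{0, 1, 2\}):

    from itertools import chain, combinations

    # Points of the spectrum of B = P(X): principal ultrafilters, one per
    # atom {x}.  The basic clopen U(b) collects the ultrafilters containing b.
    X = frozenset({0, 1, 2})
    B = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(X), k) for k in range(len(X) + 1))]
    atoms = [b for b in B if len(b) == 1]

    def ultrafilter(atom):
        return {b for b in B if atom <= b}   # principal ultrafilter above atom

    def U(b):
        return [a for a in atoms if b in ultrafilter(a)]

    b = frozenset({0, 2})
    print(U(b))       # [frozenset({0}), frozenset({2})]
    print(U(X - b))   # [frozenset({1})]: indeed U(not b) = not U(b)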

  • Remark: In particular, the canonical topology on \beta X = QP^{op}X is compact Hausdorff; this space is called the Stone–Čech compactification of (the discrete space) X. The methods exploited in this lecture can be used to show that in fact \beta X is the free compact Hausdorff space generated from the set X, meaning that the functor \beta is left adjoint to the underlying-set functor U: \mbox{CompHaus} \to Set. In fact, one can go rather further in this vein: a fascinating result (first proved by Eduardo Manes in his PhD thesis) is that the concept of compact Hausdorff space is algebraic (is monadic with respect to the monad \beta): there is an equationally defined theory where the class of J-ary operations (for each cardinal J) is coded by the set of ultrafilters \beta J, and whose models are precisely compact Hausdorff spaces. This goes beyond the scope of these lectures, but for the theory of monads, see the entertaining YouTube lectures by the Catsters!

Last time in this series on Stone duality, we observed a perfect duality between finite Boolean algebras and finite sets, which we called “baby Stone duality”:

  1. Every finite Boolean algebra B is obtained from a finite set X by taking its power set (or set of functions \hom(X, \mathbf{2}) from X to \mathbf{2}, with the Boolean algebra structure it inherits “pointwise” from \mathbf{2} = \{0, 1\}). The set X may be defined to be \mbox{Bool}(B, \mathbf{2}), the set of Boolean algebra homomorphisms from B to \mathbf{2}.
  2. Conversely, every finite set X is obtained from the Boolean algebra B = \hom(X, \mathbf{2}) by taking its “hom-set” \mbox{Bool}(B, \mathbf{2}).

More precisely, there are natural isomorphisms

i_B: B \stackrel{\sim}{\to} \hom(\mbox{Bool}(B, \mathbf{2}), \mathbf{2}),

j_X: X \stackrel{\sim}{\to} \mbox{Bool}(\hom(X, \mathbf{2}), \mathbf{2})

in the categories of finite Boolean algebras and of finite sets, respectively. In the language of category theory, this says that these categories are (equivalent to) one another’s opposite — something I’ve been meaning to explain in more detail, and I promise to get to that, soon! In any case, this duality says (among other things) that finite Boolean algebras, no matter how abstractly presented, can be represented concretely as power sets.

Today I’d like to apply this representation to free Boolean algebras (on finitely many generators). What is a free Boolean algebra? Again, the proper context for discussing this is category theory, but we can at least convey the idea: given a finite set S of letters x, y, z, \ldots, consider the Boolean algebra \mathbf{B}(S) whose elements are logical equivalence classes of formulas you can build up from the letters using the Boolean connectives \wedge, \vee, \neg (and the Boolean constants 0, 1), where two formulas \phi, \phi' are defined to be logically equivalent if \phi \leq \phi' and \phi' \leq \phi can be inferred purely on the basis of the Boolean algebra axioms. This is an excellent example of a very abstract description of a Boolean algebra: syntactically, there are infinitely many formulas you can build up, and the logical equivalence classes are also infinite and somewhat hard to visualize, but the mess can be brought under control using Stone duality, as we now show.

First let me cut to the chase, and describe the key property of free Boolean algebras. Let A be any Boolean algebra (it could be a power set, the lattice of regular open sets in a topology, or whatever), and think of a function f: S \to A from the set of letters to A as modeling or interpreting the atomic formulas x, y, z, \ldots as elements f(x), f(y), f(z), \ldots of A. The essential property of the free Boolean algebra is that we can extend this interpretation f in a unique way to a Boolean algebra map \mathbf{B}(S) \to A. The way this works is that we map a formula like (x \wedge \neg y) \vee z to the obvious formula (f(x) \wedge \neg f(y)) \vee f(z). This is well-defined on logical equivalence classes of formulas because if p = q in \mathbf{B}(S), i.e., if the equality is derivable just from the Boolean algebra axioms, then of course f(p) = f(q) holds in A as the Boolean algebra axioms hold in A. Thus, there is a natural bijective correspondence between functions S \to A and Boolean algebra maps \mathbf{B}(S) \to A; to get back from a Boolean algebra map \mathbf{B}(S) \to A to the function S \to A, simply compose the Boolean algebra map with the function S \to \mathbf{B}(S) which interprets elements of S as equivalence classes of atomic formulas in \mathbf{B}(S).

To get a better grip on \mathbf{B}(S), let me pass to the Boolean ring picture (which, as we saw last time, is equivalent to the Boolean algebra picture). Here the primitive operations are addition and multiplication, so in this picture we build up “formulas” from letters using these operations (e.g., (x + y) \cdot z and the like). In other words, the elements of \mathbf{B}(S) can be considered as “polynomials” in the variables x, y, z, \ldots. Actually, there are some simplifying features of this polynomial algebra; for one thing, in Boolean rings we have idempotence. This means that p^n = p for n \geq 1, and so a monomial term like x^3 y^2 reduces to its support x y. Since each letter appears in a support with exponent 0 or 1, it follows that there are 2^{|S|} possible supports or Boolean monomials, where |S| denotes the cardinality of S.

Idempotence also implies, as we saw last time, that b + b = 0 for all elements b \in \mathbf{B}(S), so that our polynomials = \mathbb{Z}-linear combinations of monomials are really \mathbb{Z}_2-linear combinations of Boolean monomials or supports. In other words, each element of \mathbf{B}(S) is uniquely a linear combination

\sum_{\sigma \in \mbox{supp}(S)} a_\sigma  \sigma where a_\sigma \in \{0, 1\},

i.e., the set of supports \mbox{supp}(S) forms a basis of \mathbf{B}(S) as a \mathbb{Z}_2-vector space. Hence the cardinality of the free Boolean ring is 2^{|\mbox{supp}(S)|} = 2^{2^{|S|}}.

  • Remark: This gives an algorithm for checking logical equivalence of two Boolean algebra formulas: convert the formulas into Boolean ring expressions, and using distributivity, idempotence, etc., write out these expressions as Boolean polynomials = \mathbb{Z}_2-linear combinations of supports. The Boolean algebra formulas are equivalent if and only if the corresponding Boolean polynomials are equal.
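
Here is a minimal Python sketch of this algorithm (the names are mine, not from any library): a Boolean polynomial is represented as a set of monomials, each monomial a frozenset of variables (its support), and equality of these sets is exactly logical equivalence:

    def add(p, q):
        return p ^ q                    # coefficients mod 2: symmetric difference

    def mul(p, q):
        out = set()
        for m in p:
            for n in q:
                out ^= {m | n}          # idempotence x^2 = x: multiply supports by union
        return out

    ONE = {frozenset()}                 # the empty support is the monomial 1
    def var(name):  return {frozenset({name})}
    def neg(p):     return add(ONE, p)                  # not p = 1 + p
    def meet(p, q): return mul(p, q)                    # p and q = pq
    def join(p, q): return add(add(p, q), mul(p, q))    # p or q = p + q + pq

    x, y = var('x'), var('y')
    # De Morgan: not(x or y) is logically equivalent to (not x) and (not y)
    print(neg(join(x, y)) == meet(neg(x), neg(y)))      # True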

But there is another way of understanding free Boolean algebras, via baby Stone duality. Namely, we have the power set representation

i: \mathbf{B}(S) \stackrel{\sim}{\to} \hom(\mbox{Bool}(\mathbf{B}(S), \mathbf{2}), \mathbf{2})

where \mbox{Bool}(\mathbf{B}(S), \mathbf{2}) is the set of Boolean algebra maps \mathbf{B}(S) \to \mathbf{2}. However, the freeness property says that these maps are in bijection with functions S \to \mathbf{2}. What are these functions? They are just truth-value assignments for the elements (atomic formulas, or variables) x, y, z, \ldots \in S; there are again 2^{|S|} many of these. This leads to the method of truth tables: each formula b \in \mathbf{B}(S) induces (in one-one fashion) a function

i(b): \mbox{Bool}(\mathbf{B}(S), \mathbf{2}) \to \mathbf{2}

which takes a Boolean algebra map \phi: \mathbf{B}(S) \to \mathbf{2}, aka a truth-value assignment for the variables x, y, z, \ldots, to the element of \{0, 1\} obtained by instantiating the assigned truth values 0, 1 for the variables and evaluating the resulting Boolean expression for b in \mathbf{2}. (In terms of power sets,

\mathbf{B}(S) \cong P(\mbox{Bool}(\mathbf{B}(S), \mathbf{2}))

identifies each equivalence class of formulas b \in \mathbf{B}(S) with the set of truth-value assignments of variables which render the formula b “true” in \{0, 1\}.) The fact that the representation b \mapsto i(b) is injective means precisely that if formulas b, c are inequivalent, then there is a truth-value assignment which renders one of them “true” and the other “false”, hence that they are distinguishable by truth tables.
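
The method of truth tables is likewise easy to mechanize. In the following Python sketch (illustrative only), a formula is modeled as a function of a truth-value assignment, and equivalence is tested by running over all 2^{|S|} assignments:

    from itertools import product

    S = ['x', 'y', 'z']

    def equivalent(f, g):
        """Two formulas are equivalent iff they agree under all 2^|S| assignments."""
        return all(f(dict(zip(S, vals))) == g(dict(zip(S, vals)))
                   for vals in product([0, 1], repeat=len(S)))

    f = lambda v: 1 - v['x'] * (1 - v['y'])   # x => y, written as a polynomial
    g = lambda v: max(1 - v['x'], v['y'])     # (not x) or y
    print(equivalent(f, g))                   # True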

  • Remark: This is an instance of what is known as a completeness theorem in logic. On the syntactic side, we have a notion of provability of formulas (that b is logically equivalent to \top, or b = \top in \mathbf{B}(S) if this is derivable from the Boolean algebra axioms). On the semantic side, each Boolean algebra homomorphism \phi: \mathbf{B}(S) \to \mathbf{2} can be regarded as a model of \mathbf{B}(S) in which each formula becomes true or false under \phi. The method of truth tables then says that there are enough models or truth-value assignments to detect provability of formulas, i.e., b is provable if it is true when interpreted in any model \phi. This is precisely what is meant by a completeness theorem.

There are still other ways of thinking about this. Let \phi: B \to \mathbf{2} be a Boolean algebra map, aka a model of B. This model is completely determined by

  • The maximal ideal \phi^{-1}(0) in the Boolean ring B, or
  • The maximal filter or ultrafilter \phi^{-1}(1) in B.

Now, as we saw last time, in the case of finite Boolean algebras, each (maximal) ideal is principal: is of the form \{x \in B: x \leq b\} for some b \in B. Dually, each (ultra)filter is principal: is of the form \{x \in B: c \leq x\} for some c = \neg b \in B. The maximality of the ultrafilter means that there is no nonzero element of B strictly smaller than c; we say that c is an atom in B (NB: not to be confused with atomic formula!). So, we can also say

  • A model of a finite Boolean algebra B is specified by a unique atom of B.

Thus, baby Stone duality asserts a Boolean algebra isomorphism

i: B \to P(\mbox{Atoms}(B)).

Let’s give an example: consider the free Boolean algebra on three elements x, y, z. If you like, draw a Venn diagram generated by three planar regions labeled by x, y, z. The atoms or smallest nonzero elements of the free Boolean algebra are then represented by the 2^3 = 8 regions demarcated by the Venn diagram. That is, the disjoint regions are labeled by the eight atoms

x \wedge y \wedge z, x \wedge y \wedge \neg z, x \wedge \neg y \wedge z, x \wedge \neg y \wedge \neg z,

\neg x \wedge y \wedge z, \neg x \wedge y \wedge \neg z, \neg x \wedge \neg y \wedge z, \neg x \wedge \neg y \wedge \neg z.

According to baby Stone duality, any element in the free Boolean algebra (with 2^8 = 256 elements) is uniquely expressible as a disjoint union of these atoms. Another way of saying this is that the atoms form a basis (alternative to Boolean monomials) of the free Boolean algebra as \mathbb{Z}_2-vector space. For example, as an exercise one may calculate

(x \Rightarrow y) \wedge z = x \wedge y \wedge z + \neg x \wedge y \wedge z + \neg x \wedge \neg y \wedge z.

The unique expression of an element b \in \mathbf{B}(S) (where b is given by a Boolean formula) as a \mathbb{Z}_2-linear combination of atoms is called the disjunctive normal form of the formula. So yet another way of deciding when two Boolean formulas are logically equivalent is to put them both in disjunctive normal form and check whether the resulting expressions are the same. (It’s basically the same idea as checking equality of Boolean polynomials, except we are using a different vector space basis.)
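
This procedure is also easy to mechanize; here is a Python sketch (illustrative only) that computes the atoms below a formula, recovering the worked example above:

    from itertools import product

    S = ['x', 'y', 'z']

    def atoms_below(f):
        """The truth assignments rendering f true: its disjunctive normal form."""
        return {vals for vals in product([0, 1], repeat=len(S))
                if f(dict(zip(S, vals)))}

    f = lambda v: max(1 - v['x'], v['y']) * v['z']   # (x => y) and z
    print(sorted(atoms_below(f)))
    # [(0, 0, 1), (0, 1, 1), (1, 1, 1)]: the atoms
    # (not x and not y and z), (not x and y and z), (x and y and z)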

All of the above applies not just to free (finite) Boolean algebras, but to general finite Boolean algebras. So, suppose you have a Boolean algebra B which is generated by finitely many elements x_1, x_2, \ldots, x_n \in B. Generated means that every element in B can be expressed as a Boolean combination of the generating elements. In other words, “generated” means that if we consider the inclusion function S = \{x_1, \ldots, x_n\} \hookrightarrow B, then the unique Boolean algebra map \phi: \mathbf{B}(S) \to B which extends the inclusion is a surjection. Thinking of \phi as a Boolean ring map, we have an ideal I = \phi^{-1}(0), and because \phi is a surjection, it induces a ring isomorphism

B \cong \mathbf{B}(S)/I.

The elements of I can be thought of as equivalence classes of formulas which become false in B under the interpretation \phi. Or, we could just as well (and it may be more natural to) consider instead the filter F = \phi^{-1}(1) of formulas in \mathbf{B}(S) which become true under the interpretation \phi. In any event, what we have is a propositional language \mathbf{B}(S) consisting of classes of formulas, and a filter F \subseteq \mathbf{B}(S) consisting of formulas, which can be thought of as theorems of B. Often one may find a filter F described as the smallest filter which contains certain chosen elements, which one could then call axioms of B.

In summary, any propositional theory (which by definition consists of a set S of propositional variables together with a filter F \subseteq \mathbf{B}(S) of the free Boolean algebra, whose elements are called theorems of the theory) yields a Boolean algebra B = \mathbf{B}(S)/F, where dividing out by F means we take equivalence classes of elements of \mathbf{B}(S) under the equivalence relation b \sim c defined by the condition “b \Leftrightarrow c belongs to F“. The partial order on equivalence classes [b] is defined by [b] \leq [c] iff b \Rightarrow c belongs to F. The Boolean algebra B defined in this way is called the Lindenbaum algebra of the propositional theory.

Conversely, any Boolean algebra B with a specified set of generators x_1, \ldots, x_n can be thought of as the Lindenbaum algebra of the propositional theory obtained by taking the x_i as propositional variables, together with the filter \phi^{-1}(1) obtained from the induced Boolean algebra map \phi: \mathbf{B}(S) \to B. A model of the theory should be a Boolean algebra map \mathbf{B}(S) \to \mathbf{2} which interprets the formulas of \mathbf{B}(S) as true or false, but in such a way that the theorems of the theory (the elements of the filter) are all interpreted as “true”. In other words, a model is the same thing as a Boolean algebra map

B \cong \mathbf{B}(S)/F \to \mathbf{2},

i.e., we may identify a model of a propositional theory with a Boolean algebra map f: B \to \mathbf{2} out of its Lindenbaum algebra.

So the set of models is the set \mbox{Bool}(B, \mathbf{2}), and now baby Stone duality, which gives a canonical isomorphism

i: B \cong \hom(\mbox{Bool}(B, \mathbf{2}), \mathbf{2}),

implies the following

Completeness theorem: If a formula of a finite propositional theory is “true” when interpreted under any model \phi of the theory, then the formula is provable (is a theorem of the theory).

Proof: Let B be the Lindenbaum algebra of the theory, and let b = [p] \in B be the class of formulas provably equivalent to a given formula p under the theory. The Boolean algebra isomorphism i takes an element b \in B to the map \phi \mapsto \phi(b). If \phi(b) = 1 for all models \phi, i.e., if i(b) = 1, then b = 1. But then [p] = 1, i.e., p \in F, the filter of provable formulas. \Box

In summary, we have developed a rich vocabulary in which Boolean algebras are essentially the same things as propositional theories, and where models are in natural bijection with maximal ideals in the Boolean ring, or ultrafilters in the Boolean algebra, or [in the finite case] atoms in the Boolean algebra. But as we will soon see, ultrafilters have a significance far beyond their application in the realm of Boolean algebras; in particular, they crop up in general studies of topology and convergence. This is in fact a vital clue; the key point is that the set of models or ultrafilters \mbox{Bool}(B, \mathbf{2}) carries a canonical topology, and the interaction between Boolean algebras and topological spaces is what Stone duality is all about.

In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.

Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.

I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:

Definition: A Boolean ring is a commutative ring (with identity 1) in which every element x is idempotent, i.e., satisfies x^2 = x.

Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.

Proposition 1: For every element x in a Boolean ring, 2x = 0.

Proof: By idempotence, we have x + 1 = (x+1)^2 = x^2 + 2x + 1. Since x = x^2, we may additively cancel in the ring to conclude 0 = 2x. \Box

This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field \mathbb{Z}_2 consisting of two elements. I won’t go into details about this, except to say that it follows readily from the proposition if we define a vector space over \mathbb{Z}_2 to be an abelian group V together with a ring homomorphism \mathbb{Z}_2 \to Hom(V, V) to the ring of abelian group homomorphisms from V to itself (where such homomorphisms are “multiplied” by composing them; the idea is that this ring homomorphism takes an element r = 0, 1 to scalar-multiplication r \cdot (-): V \to V).

Anyway, the point is that we can now apply some linear algebra to study this \mathbb{Z}_2-vector space; in particular, a finite Boolean ring B is a finite-dimensional vector space over \mathbb{Z}_2. By choosing a basis, we see that B is vector-space isomorphic to \mathbb{Z}_{2}^{n} where n is the dimension. So the cardinality of a finite Boolean ring must be of the form 2^n. Hold that thought!

Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring B, we may construct a corresponding Boolean algebra structure on the underlying set of B, uniquely determined by the stipulation that the multiplication x \cdot y of the Boolean ring match the meet operation x \wedge y of the Boolean algebra. Conversely, given a Boolean algebra B, we may construct a corresponding Boolean ring structure on B, and this construction is inverse to the previous one.

In one direction, suppose B is a Boolean ring. We know from before that a binary operation on a set B that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of B — can be identified with the meet operation of a meet-semilattice structure on B, uniquely specified by taking its partial order to be defined by: x \leq y iff x = x \cdot y. It immediately follows from this definition that the additive identity 0 \in B satisfies 0 \leq y for all y (is the bottom element), and the multiplicative identity 1 \in B satisfies x \leq 1 for all x (is the top element).

Notice also that x \wedge (1-x) = x (1-x) = 0, by idempotence. This leads one to suspect that 1-x will be the complement of x in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting x = 1 - (1 - x), i.e., x is equal to its putative double negation.

Proposition 2: x \mapsto 1-x is order-reversing.

Proof: Looking at the definition of the order, this says that if x = x y, then 1-y = (1-x)(1-y). This is immediate. \Box

So, x \mapsto 1 - x is an order-reversing map B \to B (an order-preserving map B \to B^{op}) which is a bijection (since it is its own inverse). We conclude that B \to B^{op}: x \mapsto 1-x is a poset isomorphism. Since B has meets and B \cong B^{op}, B^{op} also has meets (and the isomorphism preserves them). But meets in B^{op} are joins in B. Hence B has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function f(x) = 1 - x takes meets in B to joins in B; that is,

f(x \wedge y) = 1 - x y = f(x) \vee f(y) = (1 - x) \vee (1 - y)

or, replacing x by 1-x and y by 1-y,

1 - (1-x)(1-y) = x \vee y

whence x \vee y = x + y - x y = x + y + xy, using Proposition 1 above.

Proposition 3: 1 - x is the complement of x.

Proof: We already saw x \wedge (1-x) = x(1-x) = 0. Also

x \vee (1-x) = x + (1 - x) + x(1-x) = x + (1-x) + 0 = 1,

using the formula for join we just computed. This completes the proof. \Box

So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have (x \vee y) \wedge z = (x + y + xy)z = xz + yz + xyz. On the other hand, (x \wedge z) \vee (y \wedge z) = xz + yz + (xz)(yz) = xz + yz + xyz, using idempotence once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.

Naturally, we want to invert the process: starting with a Boolean algebra structure on a set B, construct a corresponding Boolean ring structure on B whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy x \vee y = x + y + x \wedge y, so that x \vee y = x + y if x \wedge y = 0 (i.e., if x and y are disjoint): this gives a partial definition of addition. Continuing this thought, if we express x \vee y = x + y + x \wedge y as a disjoint sum of some element a and x \wedge y, we then conclude x \vee y = a + x \wedge y, whence a = x + y by cancellation. In the case where the Boolean algebra is a power set PX, this element a is the symmetric difference of x and y. This generalizes: if we define the addition by the symmetric difference formula x + y := (\neg x \wedge y) \vee (x \wedge \neg y), then x + y is disjoint from x \wedge y, so that

(x + y) + x \wedge y

= (x + y) \vee (x \wedge y) = (\neg x \wedge y) \vee (x \wedge \neg y) \vee (x \wedge y) = x \vee y

after a short calculation using the complementation and distributivity axioms. After more work, one shows that x + y is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.

Exercise: Verify this last assertion.
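
Those who want to test the assertion on a small example before proving it may find the following brute-force Python sketch useful (addition is symmetric difference, multiplication is intersection, on the power set of a 3-element set):

    from itertools import chain, combinations

    X = frozenset({0, 1, 2})
    B = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(X), k) for k in range(len(X) + 1))]
    add = lambda a, b: a ^ b    # symmetric difference
    mul = lambda a, b: a & b    # intersection (meet)

    assert all(add(a, add(b, c)) == add(add(a, b), c)
               for a in B for b in B for c in B)            # + is associative
    assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
               for a in B for b in B for c in B)            # distributivity
    assert all(mul(a, a) == a and add(a, a) == frozenset() for a in B)
    print("P(X) is a Boolean ring under symmetric difference and intersection")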

However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.

Whereas here, in asserting Boolean algebras “are” Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism f: B \to C between Boolean rings, it is clear that f preserves the meet x \cdot y and join x + y + x y of any two elements x, y [since it preserves multiplication and addition] and of course also the complement 1 + x of any x; therefore f is a map of the corresponding Boolean algebras. Conversely, a map f: B \to C of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product x \wedge y and sum (\neg x \wedge y) \vee (x \wedge \neg y) in the corresponding Boolean ring. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different ways of presenting the same underlying Lawvere algebraic theory). In summary,

Theorem 1: The above processes define functors F: \mbox{BoolRing} \to \mbox{BoolAlg}, G: \mbox{BoolAlg} \to \mbox{BoolRing}, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.

  • Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.

Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) has cardinality 2^n — the same as the cardinality of the power set Boolean algebra PX if X has cardinality n. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra B is naturally isomorphic to one of the form PX; one of our tasks is to describe X in terms of B in a “natural” (or rather, functorial) way. From the Boolean ring perspective, X is a basis of the underlying \mathbb{Z}_2-vector space of B; to pin it down exactly, we use the full ring structure.

X is naturally a basis of PX; more precisely, under the embedding i: X \to PX defined by x \mapsto \{x\}, every subset S \subseteq X is uniquely a disjoint sum of finitely many elements of i(X): S = \sum_{x \in X} a_x(S) \{x\} where a_x(S) \in \{0, 1\} = \mathbb{Z}_2: naturally, a_x(S) = 1 iff x \in S. For each S, we can treat the coefficient a_x(S) as a function of x valued in \mathbb{Z}_2. Let \hom(X, \mathbb{Z}_2) denote the set of functions X \to \mathbb{Z}_2; this becomes a Boolean ring under the obvious pointwise definitions (f + g)(x) := f(x) + g(x) and (f g)(x) = f(x) g(x). The function PX \to \hom(X, \mathbb{Z}_2) which takes S \in PX to the coefficient function a_{(-)}(S) is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)

Or, we can turn this around: for each x \in X, we get a Boolean ring map PX \to \mathbb{Z}_2 which takes S to a_x(S). Let \mbox{Bool}(PX, \mathbb{Z}_2) denote the set of Boolean ring maps PX \to \mathbb{Z}_2.

Proposition 4: For a finite set X, the function X \to \mbox{Bool}(PX, \mathbb{Z}_2) that sends x to a_x(-) is a bijection (in other words, an isomorphism).

Proof: We must show that for every Boolean ring map \phi: PX \to \mathbb{Z}_2, there exists a unique x \in X such that \phi = a_x(-), i.e., such that \phi(T) = a_x(T) for all T \in PX. So let \phi be given, and let S be the intersection (or Boolean ring product) of all T \in PX for which \phi(T) = 1. Then

\phi(S) = \phi(\prod_{T: \phi(T) = 1} T) = \prod_{T: \phi(T) = 1} \phi(T) = 1.

I claim that S must be a singleton \{x\} for some (evidently unique) x \in X. For 1 = \phi(S) = \phi(\sum_{x \in S} \{x\}) = \sum_{x \in S} \phi(\{x\}), forcing \phi(\{x\}) = 1 for some x \in S. But then S \subseteq \{x\} according to how S was defined, and so S = \{x\}. To finish, I now claim \phi(T) = a_x(T) for all T \in PX. But \phi(T) = 1 iff S \subseteq T iff x \in T iff a_x(T) = 1. This completes the proof. \Box

This proposition is a vital clue, for if B is to be isomorphic to a power set PX (equivalently, to \hom(X, \mathbb{Z}_2)), the proposition says that the X in question can be retrieved reciprocally (up to isomorphism) as \mbox{Bool}(B, \mathbb{Z}_2).

With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

B \to \hom(\mbox{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)

which sends b \in B to the function eval_b which maps \phi \in \mbox{Bool}(B, \mathbb{Z}_2) to \phi(b) (i.e., evaluates \phi at b). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that eval_{b+c}(\phi) = eval_b(\phi) + eval_c(\phi) for all \phi \in \mbox{Bool}(B, \mathbb{Z}_2). But by definition, this is the equation \phi(b+c) = \phi(b) + \phi(c), which holds since \phi is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.

Theorem 2: If B is a finite Boolean ring, then the Boolean ring map

eval: B \to \hom(\mbox{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)

is an isomorphism. (So, there is a natural isomorphism B \cong P(\mbox{Bool}(B, \mathbb{Z}_2)).)

Proof: First we prove injectivity: suppose b \in B is nonzero. Then \neg b \neq 1, so the ideal (\neg b) = \{a \cdot \neg b: a \in B\} = \{x \in B: x \leq \neg b\} is a proper ideal. Let I be a maximal proper ideal containing \neg b, so that B/I is both a field and a Boolean ring. Then B/I \cong \mathbb{Z}_2 (otherwise any element x \in B/I not equal to 0, 1 \in B/I would be a zero divisor on account of x(1-x) = 0). The evident composite

B \to B/I \cong \mathbb{Z}_2

yields a homomorphism \phi:  B  \to \mathbb{Z}_2 for which \phi(\neg b) = \phi(1-b) = 0, so \phi(b) = eval_b(\phi) = 1. Therefore eval_b is nonzero, as desired.

Now we prove surjectivity. A function g: \mbox{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2 is determined by the set of elements \phi mapping to 1 under g, and each such homomorphism \phi: B \to \mathbb{Z}_2, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let J be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if x, y belong to J, then so does x \vee y = x + y + x y. Let j be the join of the finitely many elements of J; notice J = \{x \in B: x \leq j\} = (j) (actually, this proves that every ideal of a finite Boolean ring B is principal). In fact, writing k_\phi for the unique element such that \ker(\phi) = (k_\phi), we have

j = \bigwedge_{\phi: g(\phi) = 1} k_\phi

(certainly j \leq k_\phi for all such \phi, since J \subseteq \ker(\phi) = \{x \in B: x \leq k_\phi\}, but also \bigwedge_{g(\phi) = 1} k_\phi belongs to the intersection of these kernels and hence to J = \{x \in B: x \leq j\}, whence \bigwedge_{g(\phi) = 1} k_\phi \leq j).

Now let b = 1- j; I claim that g = eval_b, proving surjectivity. We need to show g(\phi) = eval_b(\phi) = \phi(b) for all \phi \in \mbox{Bool}(B, \mathbb{Z}_2). In one direction, we already know from the above that if g(\phi) = 1, then j belongs to the kernel of \phi, so \phi(j) = 0, whence \phi(b) = \phi(1-j) = 1.

For the other direction, suppose \psi(b) = 1, or that \psi(j) = 0. Now the kernel of \psi is principal, say (k) for some k \neq 1. We have j \leq k, so

k = k \vee j = k \vee \bigwedge_{g(\phi) = 1} k_\phi = \bigwedge_{g(\phi) = 1} k \vee k_\phi

from which it follows that k \vee k_\phi \neq 1 for some \phi \in g^{-1}(1). But then (k \vee k_\phi) is a proper ideal containing the maximal ideals (k) and (k_\phi); by maximality it follows that (k) = (k \vee k_\phi) = (k_\phi). Since \psi and \phi have the same kernels, they are equal. And therefore g(\psi) = g(\phi) = 1. We have now proven both directions of the statement (\psi(b) = 1 if and only if g(\psi) = 1), and the proof is now complete. \Box

  • Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements b and their negations, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset I \subseteq B closed under joins and such that x \wedge y \in I for x \in B, y \in I, a filter is (by definition) a subset F closed under meets and such that x \vee y \in F whenever x \in B, y \in F (this second condition is equivalent to upward-closure: y \in F and y \leq x implies x \in F). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if I is an ideal, then the set of negations \{\neg x: x \in I\} is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if f: B \to C is a Boolean algebra map, then the inverse image f^{-1}(1) is a filter, just as the inverse image f^{-1}(0) is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.

For those who know some category theory: what is really going on here is that we have a power set functor

P(-) = \hom(-, \mathbb{Z}_2): \mbox{FinSet}^{op} \to \mbox{FinBool}

(taking a function f: X \to Y between finite sets to the inverse image map f^{-1}: PY \to PX, which is a map between finite Boolean algebras) and a functor

Q(-) = \mbox{Bool}(-, \mathbb{Z}_2): \mbox{FinBool}^{op} \to \mbox{FinSet}

which we could replace by its opposite Q(-)^{op}: \mbox{FinBool} \to \mbox{FinSet}^{op}, and the canonical maps of proposition 4 and theorem 2,

X \to \mbox{Bool}(\hom(X, \mathbb{Z}_2), \mathbb{Z}_2),

B \to \hom(\mbox{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2),

are components (at X and B) of the counit and unit for an adjunction Q(-)^{op} \dashv P(-). The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories \mbox{FinSet}^{op} and \mbox{FinBool}. This is the proper categorical statement of Stone duality in the finite case, or what we are calling “baby Stone duality”. I will make some time soon to explain what these terms mean.

In this installment, I will introduce the concept of Boolean algebra, one of the main stars of this series, and relate it to concepts introduced in previous lectures (distributive lattice, Heyting algebra, and so on). Boolean algebra is the algebra of classical propositional calculus, and so has an abstract logical provenance; but one of our eventual goals is to show how any Boolean algebra can also be represented in concrete set-theoretic (or topological) terms, as part of a powerful categorical duality due to Stone.

There are lots of ways to define Boolean algebras. Some definitions were for a long time difficult conjectures (like the Robbins conjecture, established only in the last ten years or so with the help of computers) — testament to the richness of the concept. Here we’ll discuss just a few definitions. The first is a traditional one, and one which is pretty snappy:

A Boolean algebra is a distributive lattice in which every element has a complement.

(If X is a lattice and x \in X, a complement of x is an element y such that x \wedge y = 0 and x \vee y = 1. A lattice is said to be complemented if every element has a complement. Observe that the notions of complement and complemented lattice are manifestly self-dual. Since the notion of distributive lattice is self-dual, so therefore is the notion of Boolean algebra.)

  • Example: Probably almost everyone reading this knows the archetypal example of a Boolean algebra: a power set PX, ordered by subset inclusion. As we know, this is a distributive lattice, and the complement S^c of a subset S \subseteq X satisfies S \cap S^c = \emptyset and S \cup S^c = X.
  • Example: Also well known is that the Boolean algebra axioms mirror the usual interactions between conjunction \wedge, disjunction \vee, and negation \neg in ordinary classical logic. In particular, given a theory \mathbf{T}, there is a preorder whose elements are sentences (closed formulas) p of \mathbf{T}, ordered by p \leq q if the entailment p \to q is provable in \mathbf{T} using classical logic. By passing to logical equivalence classes (p \equiv q iff p \leftrightarrow q in \mathbf{T}), we get a poset with meets, joins, and complements satisfying the Boolean algebra axioms. This is called the Lindenbaum algebra of the theory \mathbf{T}.

Exercise: Give an example of a complemented lattice which is not distributive.

As a possible leading hint for the previous exercise, here is a first order of business:

Proposition: In a distributive lattice, complements of elements are unique when they exist.

Proof: If both b and c are complementary to a, then b = b \wedge 1 = b \wedge (a \vee c) = (b \wedge a) \vee (b \wedge c) = 0 \vee (b \wedge c) = b \wedge c. Since b = b \wedge c, we have b \leq c. Similarly c = b \wedge c, so b = c. \Box

The definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a Heyting algebra is a lattice which admits an implication operator satisfying an adjoint relationship with the meet operator).

Theorem: A lattice is a Boolean algebra if and only if it is a Heyting algebra in which either of the following properties holds:

  1. (a \wedge x \leq y) if and only if (a \leq \neg x \vee y)
  2. \neg \neg x = x for all elements x

Proof: First let X be a Boolean algebra, and let x^c denote the complement of an element x \in X. Then I claim that a \wedge x \leq y if and only if a \leq x^c \vee y, proving that X admits an implication x \Rightarrow y = x^c \vee y. Then, taking y = 0, it follows that \neg x := (x \Rightarrow 0) = x^c \vee 0 = x^c, whence 1. follows. Also, since (by definition of complement) x is the complement of y if and only if y is the complement of x, we have x^{c c} = x, whence 2. follows.

[Proof of claim: if a \leq x^c \vee y, then x \wedge a \leq x \wedge (x^c \vee y) = (x \wedge x^c) \vee (x \wedge y) \leq 0 \vee y = y. On the other hand, if x \wedge a \leq y, then a = 1 \wedge a \leq (x^c \vee x) \wedge (x^c \vee a) =  x^c \vee (x \wedge a) \leq x^c \vee y. This completes the proof of the claim and of the forward implication.]

In the other direction, given a lattice which satisfies 1., it is automatically a Heyting algebra (with implication \neg x \vee y). In particular, it is distributive. From \neg x \leq \neg x \vee 0, we have (from 1.) x \wedge \neg x \leq 0; since 0 \leq x \wedge \neg x is automatic by definition of 0 = \bot, we get 0 = x \wedge \neg x. From 1 \wedge x \leq x, we have also (from 1.) that 1 \leq \neg x \vee x; since \neg x \vee x \leq 1 is automatic by definition of 1, we have \neg x \vee x = 1. Thus under 1., every element x has a complement \neg x.

On the other hand, suppose X is a Heyting algebra satisfying 2.: \neg \neg x = x. As above, we know x \wedge \neg x = 0. By the corollary below, we also know the function \neg: X \to X takes 0 to 1 and joins to meets (De Morgan law); since condition 2. is that \neg is its own inverse, it follows that \neg also takes meets to joins. Hence \neg x \vee x = \neg x \vee \neg \neg x = \neg(x \wedge \neg x) = \neg 0 = 1. Thus for a Heyting algebra which satisfies 2., every element x has a complement \neg x. This completes the proof. \Box
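
As a computational sanity check of the theorem (a minimal sketch, with the power set standing in for a generic Boolean algebra), one can compute the implication directly from its defining adjunction and compare it with \neg x \vee y:

```python
# A minimal sketch: in PX, the Heyting implication (the largest a with
# a ∧ x ≤ y) agrees with ¬x ∨ y, and double negation is the identity,
# as the theorem predicts.
from itertools import chain, combinations

X = frozenset({1, 2, 3})
PX = [frozenset(s) for s in chain.from_iterable(
      combinations(sorted(X), r) for r in range(len(X) + 1))]

def implies(x, y):
    # the union of all a with a ∩ x ⊆ y; in PX this union is again such an a
    return frozenset().union(*[a for a in PX if a & x <= y])

for x in PX:
    for y in PX:
        assert implies(x, y) == (X - x) | y                  # x => y = ¬x ∨ y
    assert implies(implies(x, frozenset()), frozenset()) == x  # ¬¬x = x
```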

  • Exercise: Show that Boolean algebras can also be characterized as meet-semilattices X equipped with an operation \neg: X \to X for which a \wedge x \leq y if and only if a \leq \neg(x \wedge \neg y).

The proof above invoked the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y. The claim is that this De Morgan law (not the other \neg(x \wedge y) = \neg x \vee \neg y!) holds in a general Heyting algebra — the relevant result was actually posed as an exercise from the previous lecture:

Lemma: For any element c of a Heyting algebra X, the function - \Rightarrow c: X \to X is an order-reversing map (equivalently, an order-preserving map X^{op} \to X, or an order-preserving map X \to X^{op}). It is adjoint to itself, in the sense that - \Rightarrow c: X^{op} \to X is right adjoint to - \Rightarrow c: X \to X^{op}.

Proof: First, we show that a \leq b in X (equivalently, b \leq a in X^{op}) implies (b \Rightarrow c) \leq (a \Rightarrow c). But this conclusion holds iff (b \Rightarrow c) \wedge a \leq c, which is clear from (b \Rightarrow c) \wedge a \leq (b \Rightarrow c) \wedge b \leq c. Second, the adjunction holds because

(b \Rightarrow c) \leq  a in X^{op} if and only if

a \leq (b \Rightarrow c) in X if and only if

a \wedge b \leq c in X if and only if

b \wedge a \leq c in X if and only if

b \leq (a \Rightarrow c) in X. \Box

Corollary: - \Rightarrow c: X^{op} \to X takes any inf which exists in X^{op} to the corresponding inf in X. Equivalently, it takes any sup in X to the corresponding inf in X, i.e., (\bigvee_{s \in S} s) \Rightarrow c = \bigwedge_{s \in S} (s \Rightarrow c). (In particular, this applies to finite joins in X, and in particular, it applies to the case c = 0, where we conclude, e.g., the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y.)
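
To see the corollary at work in a Heyting algebra which is not Boolean, here is a minimal sketch (mine, for illustration) using the chain topology with opens \emptyset, \{1\}, \{1,2\}, \{1,2,3\}: the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y checks out for all opens, while \neg \neg x = x fails.

```python
# A minimal sketch: the opens of a chain topology form a Heyting algebra
# under inclusion, where x => y is the largest open a with a ∧ x ≤ y.
X = frozenset({1, 2, 3})
opens = [frozenset(s) for s in [(), (1,), (1, 2), (1, 2, 3)]]

def implies(x, y):
    return frozenset().union(*[a for a in opens if a & x <= y])

neg = lambda x: implies(x, frozenset())

# De Morgan law ¬(x ∨ y) = ¬x ∧ ¬y holds for all opens ...
assert all(neg(x | y) == neg(x) & neg(y) for x in opens for y in opens)
# ... yet double negation is not the identity: ¬¬{1} = X ≠ {1}
assert neg(neg(frozenset({1}))) == X
```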

  • Remark: If we think of sups as sums and infs as products, then we can think of implications x \Rightarrow y as behaving like exponentials y^x. Indeed, our earlier result that x \Rightarrow (-) preserves infs \bigwedge_{s \in S} y_s can then be recast in exponential notation as saying (\prod_{s \in S} y_s)^x = \prod_{s \in S} (y_s)^x, and our present corollary that (- \Rightarrow y) takes sups to infs can then be recast as saying y^{\sum_{s \in S} x_s} = \prod_{s \in S} y^{x_s}. Later we will state another exponential law for implication. This is no notational accident!

Let me reprise part of the lemma (in the case c = 0), because it illustrates a situation which comes up over and over again in mathematics. In part it asserts that \neg = (-)\Rightarrow 0: X \to X is order-reversing, and that there is a three-way equivalence:

a \leq \neg b if and only if a \wedge b = 0 if and only if b \leq \neg a.

This situation is an instance of what is called a “Galois connection” in mathematics. If X and Y are posets (or even preorders), a Galois connection between them consists of two order-reversing functions f: X \to Y, g: Y \to X such that for all x \in X, y \in Y, we have y \leq f(x) if and only if x \leq g(y). (It’s actually an instance of an adjoint pair: if we consider f as an order-preserving map X \to Y^{op} and g an order-preserving map Y^{op} \to X, then f(x) \leq y in Y^{op} if and only if x \leq g(y) in X.)

Here are some examples:

  1. The original example arises of course in Galois theory. If k is a field and k \subseteq E is a finite Galois extension with Galois group G = Gal(E/k) (of field automorphisms g: E \to E which fix the elements belonging to k), then there is a Galois connection consisting of maps Aut_{(-)}(E): PE \to PG and Fix: PG \to PE. This works as follows: to each subset S \subseteq E, define Aut_S(E) to be \{g \in G: g(s) = s \mbox{ for all } s \in S \}. In the other direction, to each subset T \subseteq G, define Fix(T) to be \{x \in E: g(x) = x \mbox{ for all } g \in T\}. Both Aut_{(-)}(E) and Fix(-) are order-reversing (for example, the larger the subset T \subseteq G, the more stringent the conditions for an element x \in E to belong to Fix(T)). Moreover, we have

    S \subseteq Fix(T) iff (g(x) = x for all x \in S, g \in T) iff T \subseteq Aut_S(E)

    so we do get a Galois connection. It is moreover clear that for any T \subseteq G, Fix(T) is an intermediate subfield between k and E, and for any S \subseteq E, Aut_S(E) is a subgroup of G. A principal result of Galois theory is that Fix(-) and Aut_{(-)}(E) are inverse to one another when restricted to the lattice of subgroups of G and the lattice of fields intermediate between k and E. Such a bijective correspondence induced by a Galois connection is called a Galois correspondence.

  2. Another basic Galois connection arises in algebraic geometry, between subsets J \subseteq k[x_1, \ldots, x_n] (of a polynomial algebra over a field k) and subsets V \subseteq k^n. Given J, define Z(J) (the zero locus of J) to be \{(a_1, \ldots, a_n): f(a_1, \ldots, a_n) = 0 \mbox{ for each polynomial } f \in J\}. On the other hand, define I(V) (the ideal of V) to be \{f \in k[x_1, \ldots, x_n]: f(a) = 0 \mbox{ for all } a = (a_1, \ldots, a_n) \in V\}. As in the case of Galois theory above, we clearly have a three-way equivalence

    V \subseteq Z(J) iff (f(a) = 0 for all a \in V, f \in J) iff J \subseteq I(V)

    so that Z(-), I(-) define a Galois connection between power sets (of the n-variable polynomial algebra and of n-dimensional affine space k^n). One defines an (affine algebraic) variety V \subseteq k^n to be the zero locus of some set of polynomials. Then, on very general grounds (see below), any variety is the zero locus of its ideal. On the other hand, notice that I(V) is an ideal of the polynomial algebra. Not every ideal I of the polynomial algebra is the ideal of its zero locus, but according to the famous Hilbert Nullstellensatz (valid when k is algebraically closed), those ideals I equal to their radical rad(I) = \{f \in k[x_1, \ldots, x_n]: f^m \in I \mbox{ for some } m \geq 1\} are. Thus, Z(-) and I(-) become inverse to one another when restricted to the lattice of varieties and the lattice of radical ideals, by the Nullstellensatz: there is a Galois correspondence between these objects.

  3. Both of the examples above are particular cases of a very general construction. Let X, Y be sets and let R \subseteq X \times Y be any relation between them. Then we can set up a Galois connection which in one direction takes a subset S \subseteq X to S \backslash R := \{y \in Y: (x, y) \in R \mbox{ for all } x \in S\}, and in the other takes T \subseteq Y to R/T := \{x \in X: (x, y) \in R \mbox{ for all } y \in T\}. (A small computational sketch appears just after this list.) Once again we have a three-way equivalence

    S \subseteq R/T iff S \times T \subseteq R iff T \subseteq S \backslash R.

    There are tons of examples of this flavor.
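
Here is the promised sketch of example 3 (the particular relation R, and all the names, are arbitrary choices made for illustration): the three-way equivalence is checked over all pairs of subsets.

```python
# A minimal sketch: any relation R ⊆ X × Y induces a Galois connection
# between the power sets of X and Y.
from itertools import chain, combinations

X = {1, 2, 3}
Y = {'a', 'b'}
R = {(1, 'a'), (2, 'a'), (2, 'b')}   # an arbitrary relation

def right(S):   # S \ R = {y in Y : (x, y) in R for all x in S}
    return {y for y in Y if all((x, y) in R for x in S)}

def left(T):    # R / T = {x in X : (x, y) in R for all y in T}
    return {x for x in X if all((x, y) in R for y in T)}

def subsets(U):
    return [set(s) for s in chain.from_iterable(
        combinations(sorted(U, key=str), r) for r in range(len(U) + 1))]

# three-way equivalence: S ⊆ R/T  iff  S × T ⊆ R  iff  T ⊆ S\R
for S in subsets(X):
    for T in subsets(Y):
        middle = all((x, y) in R for x in S for y in T)
        assert (S <= left(T)) == middle == (T <= right(S))
```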

As indicated above, a Galois connection between posets X, Y is essentially the same thing as an adjoint pair between the posets X, Y^{op} (or between X^{op}, Y if you prefer; Galois connections are after all symmetric in X, Y). I would like to record a few basic results about Galois connections/adjoint pairs.

Proposition:

  1. Given order-reversing maps f: X \to Y, g: Y \to X which form a Galois connection, we have x \leq g f(x) for all x \in X and y \leq f g(y) for all y \in Y. (Given poset maps f, g which form an adjoint pair f \dashv g, we have x \leq g f(x) for all x \in X and f g(y) \leq y for all y \in Y.)
  2. Given a Galois connection as above, f(x) = f g f(x) for all x \in X and g(y) = g f g(y) for all y \in Y. (Given an adjoint pair f \dashv g as above, the same equations hold.) Therefore a Galois connection (f, g) induces a Galois correspondence between the elements of the form f(x) and the elements of the form g(y).

Proof: (1.) It suffices to prove the statements for adjoint pairs. But under the assumption f \dashv g, x \leq g f(x) if and only if f(x) \leq f(x), which is certainly true. The other statement is dual.

(2.) Again it suffices to prove the equations for the adjoint pair. Applying the order-preserving map f to x \leq g f(x) from 1. gives f(x) \leq f g f(x). Instantiating f g(y) \leq y from 1. at y = f(x) gives f g f(x) \leq f(x). Hence f(x) = f g f(x). The other equation is dual. \Box

Incidentally, the equations of 2. show why an algebraic variety V is the zero locus of its ideal (see example 2. above): if V = Z(S) for some set of polynomials S, then V = Z(S) = Z I Z(S) = Z I(V). They also show that for any element x in a Heyting algebra, we have \neg \neg \neg x = \neg x, even though \neg \neg y = y is in general false.
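
For the skeptical reader, here is a quick check of \neg \neg \neg x = \neg x over the chain topology from the earlier sketch:

```python
# A minimal sketch: ¬¬¬x = ¬x over all opens of the chain topology above,
# where ¬x is the largest open disjoint from x.
opens = [frozenset(s) for s in [(), (1,), (1, 2), (1, 2, 3)]]
neg = lambda x: frozenset().union(*[a for a in opens if a & x == frozenset()])
assert all(neg(neg(neg(u))) == neg(u) for u in opens)
```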

Let (f, g) be a Galois connection (or f \dashv g an adjoint pair). By the proposition, c = gf: X \to X is an order-preserving map with the following properties:

x \leq c(x) for all x \in X

c c(x) = c(x) for all x \in X.

Poset maps c: X \to X with these properties are called closure operators. We have earlier discussed examples of closure operators: if for instance G is a group, then the operator c: PG \to PG which takes a subset S \subseteq G to the subgroup generated by S is a closure operator. Or, if X is a topological space, then the operator c: PX \to PX which takes a subset S \subseteq X to its topological closure \bar{S} is a closure operator. Or, if X is a poset, then the operator c: PX \to PX which takes S \subseteq X to \{a \in X: a \leq s \mbox{ for some } s \in S\} is a closure operator. Examples like these can be multiplied at will.
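
The third example is easy to experiment with. The following minimal sketch (the choice of poset, divisors of 12 under divisibility, is purely illustrative) implements down-closure and verifies the two closure laws together with monotonicity.

```python
# A minimal sketch: the down-closure operator on subsets of a finite poset
# (divisors of 12 under divisibility) is a closure operator.
from itertools import chain, combinations

P = [d for d in range(1, 13) if 12 % d == 0]
leq = lambda a, b: b % a == 0                    # a ≤ b iff a divides b

def c(S):
    # down-closure: everything below some element of S
    return frozenset(a for a in P if any(leq(a, s) for s in S))

subs = [frozenset(s) for s in chain.from_iterable(
        combinations(P, r) for r in range(len(P) + 1))]

assert all(S <= c(S) for S in subs)              # x ≤ c(x)
assert all(c(c(S)) == c(S) for S in subs)        # c c(x) = c(x)
assert all(c(S) <= c(T) for S in subs for T in subs if S <= T)  # monotone
```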

One virtue of closure operators is that they give a useful means of constructing new posets from old. Specifically, if c: X \to X is a closure operator, then a fixed point of c (or a c-closed element of X) is an element x such that c(x) = x. The collection Fix(c) of fixed points is partially ordered by the order in X. For example, the lattice of fixed points of the operator c: PG \to PG above is the lattice of subgroups of G. For any closure operator c, notice that Fix(c) is the same as the image c(X) of c (if x = c(y), then c(x) = c c(y) = c(y) = x).

One particular use is that the fixed points of the double negation closure \neg \neg: X \to X on a Heyting algebra X form a Boolean algebra Fix(\neg\neg), and the map \neg \neg: X \to Fix(\neg \neg) is a Heyting algebra map. This is not trivial! And it gives a means of constructing some rather exotic Boolean algebras (“atomless Boolean algebras”) which may not be so familiar to many readers.

The following exercises are given with a view toward proving these results. If no one else does, I will probably give solutions next time or sometime soon.

Exercise: If X is a Heyting algebra and x, y, z \in X, prove the “exponential law” ((x \wedge y) \Rightarrow z) = (x \Rightarrow (y \Rightarrow z)). Conclude that \neg(x \wedge y) = (y \Rightarrow \neg x) = (x \Rightarrow \neg y).

Exercise: We have seen that (x \Rightarrow y) \wedge x \leq y in a Heyting algebra. Use this to prove (x \Rightarrow y) \wedge (y \Rightarrow z) \leq (x \Rightarrow z).

Exercise: Show that double negation \neg \neg: X \to X on a Heyting algebra preserves finite meets. (The inequality \neg \neg(x \wedge y) \leq \neg \neg x \wedge \neg \neg y is easy. The reverse inequality takes more work; try using the previous two exercises.)

Exercise: If c: X \to X is a closure operator, show that the inclusion map i: Fix(c) \hookrightarrow X is right adjoint to the projection c: X \to Fix(c) to the image of c. Conclude that meets of elements in Fix(\neg \neg) are calculated as they would be as elements in X, and also that \neg \neg: X \to Fix(\neg \neg) preserves joins.

Exercise: Show that the fixed points of the double negation operator on a topology (as Heyting algebra) are the regular open sets, i.e., those open sets equal to the interior of their closure. Give some examples of non-regular open sets. Incidentally, is the lattice you get by taking the opposite of a topology also a Heyting algebra?
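
In lieu of a full solution, here is a minimal computational sketch for this last exercise (the five-open topology on \{0,1,2,3\} is an arbitrary illustrative choice): the fixed points of double negation come out to be exactly the regular opens, and \{0,1\} is an example of a non-regular open set.

```python
# A minimal sketch: in a small topology, Fix(¬¬) = the regular open sets
# (opens equal to the interior of their closure).
X = frozenset(range(4))
opens = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2, 3)]]

def interior(S):
    return frozenset().union(*[a for a in opens if a <= S])

def closure(S):
    return X - interior(X - S)

# ¬u is the largest open disjoint from u
neg = lambda u: frozenset().union(*[a for a in opens if a & u == frozenset()])

fixed   = {u for u in opens if neg(neg(u)) == u}
regular = {u for u in opens if interior(closure(u)) == u}
assert fixed == regular
# {0,1} is open but not regular: its closure is all of X, whose interior is X
assert interior(closure(frozenset({0, 1}))) == X
```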
