After this brief (?) categorical interlude, I'd like to pick up the main thread again, and take a closer look at some of the ingredients of baby Stone duality in the context of categorical algebra, specifically through the lens of adjoint functors. By passing a topological light through this lens, we will produce the spectrum of a Boolean algebra: a key construction of full-fledged Stone duality!
Just before the interlude, we were discussing some consequences of baby Stone duality. Taking it from the top, we recalled that there are canonical maps

$\eta_X: X \to \mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2}), \qquad \varepsilon_B: B \to \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2})$

in the categories of sets and Boolean algebras. We said these are "natural" maps (even before the notion of naturality had been formally introduced), and recalled our earlier result that these are isomorphisms when $X$ and $B$ are finite (which is manifestly untrue in general; for instance, if $B$ is a free Boolean algebra generated by a countable set, then for simple reasons of cardinality $B$ cannot be a power set).
What we have here is an adjoint pair of functors between the categories $\mathbf{Set}$ and $\mathbf{Bool}^{op}$ of sets and Boolean algebras, each given by a hom-functor:

$\hom(-, \mathbf{2})^{op}: \mathbf{Set} \to \mathbf{Bool}^{op}, \qquad \mathrm{Bool}(-, \mathbf{2}): \mathbf{Bool}^{op} \to \mathbf{Set}$

($\hom(-, \mathbf{2})^{op}$ acts the same way on objects and morphisms as $\hom(-, \mathbf{2}): \mathbf{Set}^{op} \to \mathbf{Bool}$, but is regarded as mapping between the opposite categories). This actually says something very simple: that there is a natural bijection between Boolean algebra maps $\phi: B \to \hom(X, \mathbf{2})$ and functions $\hat{\phi}: X \to \mathrm{Bool}(B, \mathbf{2})$, given by the formula $\phi(b)(x) = \hat{\phi}(x)(b)$. [The very simple nature of this formula suggests that it's nothing special to Boolean algebras — a similar adjunction could be defined for any algebraic theory defined by operations and (universally quantified) equations, replacing $\mathbf{2}$ by any model of that theory.] The unit of the adjunction at $X$ is the function

$\eta_X: X \to \mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2}),$

and the counit at $B$ is the Boolean algebra map

$\varepsilon_B: B \to \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2})$

(regarded as a morphism mapping the other way in the opposite category $\mathbf{Bool}^{op}$).
The functor $\mathrm{Bool}(-, \mathbf{2}): \mathbf{Bool}^{op} \to \mathbf{Set}$ is usually described in the language of ultrafilters, as I will now explain.
Earlier, we remarked that an ultrafilter in a Boolean algebra $B$ is a maximal filter, dual to a maximal ideal; let's recall what that means. A maximal ideal in a Boolean ring $B$ is the kernel of a (unique) ring map $\phi: B \to \mathbf{2}$, i.e., has the form $I = \phi^{-1}(0)$ for some such map. Being an ideal, it is an additive subgroup $I \subseteq B$ such that $x \in I, b \in B$ implies $b x \in I$. It follows that if $x, y \in I$, then $x \vee y = x + y + x y \in I$, so $I$ is closed under finite joins (including the empty join $0$). Also, if $b \leq x$ and $x \in I$, then $b = b x \in I$, so that $I$ is "downward-closed".
Conversely, a downward-closed subset $I \subseteq B$ which is closed under finite joins is an ideal in $B$ (exercise!). Finally, if $I$ is a maximal ideal, then under the quotient map $\phi: B \to B/I \cong \mathbf{2}$ we have that for all $b \in B$, either $\phi(b) = 0$ or $\phi(b) = 1 = \phi(\neg b) + 1$, i.e., that either $b \in I$ or $\neg b \in I$.
Thus we have redefined the notion of maximal ideal in a Boolean algebra in the first-order theory of posets: a downward-closed set $I$ closed under finite joins, such that every element $b$ or its complement $\neg b$ (but never both!) is contained in $I$. [If both $b, \neg b \in I$, then $1 = b \vee \neg b \in I$, whence $x \in I$ for all $x$ (since $x \leq 1$ and $I$ is downward-closed). But then $I$ isn't a maximal (proper) ideal!]
The notion of ultrafilter is dual, so an ultrafilter in a Boolean algebra $B$ is defined to be a subset $F \subseteq B$ which

- Is upward-closed: if $x \in F$ and $x \leq y$, then $y \in F$;
- Is closed under finite meets: if $x, y \in F$, then $x \wedge y \in F$;
- Satisfies dichotomy: for every $b \in B$, exactly one of $b, \neg b$ belongs to $F$.
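To make the axioms concrete, here is a minimal Python sketch (my own illustration; names like `is_ultrafilter` are mine) that finds all ultrafilters in the power-set Boolean algebra $P X$ of a three-element set by brute force. It also foreshadows Proposition 1 below: every ultrafilter it finds is principal.

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def is_ultrafilter(F, X):
    """Check the three ultrafilter axioms inside P(X)."""
    all_subs = subsets(X)
    upward_closed = all(T in F for S in F for T in all_subs if S <= T)
    meet_closed = all(S & T in F for S in F for T in F)
    dichotomy = all((S in F) != (X - S in F) for S in all_subs)
    return upward_closed and meet_closed and dichotomy

X = frozenset({0, 1, 2})
# An ultrafilter is a family of subsets, i.e. an element of P(P(X)): 256 candidates.
for F in subsets(subsets(X)):
    if is_ultrafilter(F, X):
        print(sorted(sorted(S) for S in F))
# Exactly three ultrafilters are printed: the principal ones at 0, 1, and 2.
```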
If $I$ is a maximal ideal, then the set of negations $F = \{\neg x: x \in I\}$ is an ultrafilter, and we have natural bijections between the following concepts:

Boolean algebra maps $\phi: B \to \mathbf{2}$,
maximal ideals $I = \phi^{-1}(0)$,
ultrafilters $F = \phi^{-1}(1)$,

so that $\mathrm{Bool}(B, \mathbf{2})$ is naturally identified with the set of ultrafilters in $B$.
If $X$ is a set, then an ultrafilter on $X$ is by definition an ultrafilter in the Boolean algebra $P X = \hom(X, \mathbf{2})$. Hence $\mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$ is identified with the set of ultrafilters on $X$, usually denoted $\beta X$. The unit map $\eta_X: X \to \beta X$ maps $x \in X$ to an ultrafilter denoted $\mathrm{prin}(x)$, consisting of all subsets $S \subseteq X$ which contain $x$, and called the principal ultrafilter generated by $x$.
We saw that when $X$ is finite, the function $\eta_X: X \to \beta X$ (and therefore also the counit at $\hom(X, \mathbf{2})$) is a bijection: there every ultrafilter is principal, as part of baby Stone duality (see Proposition 4 here). Here is a slight generalization:
Proposition 1: If an ultrafilter $F$ on $X$ contains a finite set $S \subseteq X$, then $F$ is principal.
Proof: It is enough to show $F$ contains $\{x\}$ for some $x \in S$. If not, then $F$ contains the complement $\neg\{x\} = X - \{x\}$ for every $x \in S$ (by dichotomy), and therefore also the finite intersection

$S \cap \bigcap_{x \in S} \neg\{x\} = \emptyset,$

which contradicts the fact that $\emptyset \notin F$. $\Box$
It follows that nonprincipal ultrafilters can exist only on infinite sets $X$, and that every cofinite subset of $X$ (complement of a finite set) belongs to such an ultrafilter (by dichotomy). The collection of cofinite sets forms a filter, and so the question of existence of nonprincipal ultrafilters is the question of whether the filter of cofinite sets can be extended to an ultrafilter. Under the axiom of choice, the answer is yes:
Proposition 2: Every (proper) filter in a Boolean algebra is contained in some ultrafilter.
Proof: This is dual to the statement that every proper ideal in a Boolean ring is contained in a maximal ideal. Either statement may be proved by appeal to Zorn’s lemma: the collection of filters which contain a given filter has the property that every linear chain of such filters has an upper bound (namely, the union of the chain), and so by Zorn there is a maximal such filter.
As usual, Zorn's lemma is a kind of black box: it guarantees existence without giving a clue to an explicit construction. In fact, nonprincipal ultrafilters on sets $X$, like well-orderings of the reals, are notoriously inexplicit: no one has ever seen one directly, and no one ever will.
That said, one can still develop some intuition for ultrafilters. I think of them as something like "fat nets". Each ultrafilter $F$ on a set $X$ defines a poset (of subsets ordered by inclusion), but I find it more suggestive to consider instead the opposite $F^{op}$, where $S \leq T$ in $F^{op}$ means $T \subseteq S$ — so that the further or deeper you go in $F^{op}$, the smaller or more concentrated the element. Since $F$ is closed under finite intersections, $F^{op}$ has finite joins, so that $F^{op}$ is directed (any two elements have an upper bound), just like the elements of a net (or more pedantically, the domain of a net). I call an ultrafilter a "fat net" because its elements, being subsets of $X$, are "fatter" than mere points.
Intuitively speaking, ultrafilters as nets "move in a definite direction", in the sense that given an element $S \in F$, however far in the net, and given a subset $T \subseteq S$, the ultrafilter-as-net sniffs out a direction in which to proceed, "tunneling" either into $T$ if $T \in F$, or into its relative complement $S - T$ if this belongs to $F$. In the case of a principal ultrafilter, there is a final element $\{x\}$ of the net; otherwise not (but we can think of a nonprincipal ultrafilter as ending at an "ideal point" of the set $X$ if we want).
Since the intuitive imagery here is already vaguely topological, we may as well make the connection with topology more precise. So, suppose now that $X$ comes equipped with a topology. We say that an ultrafilter $F$ on $X$ converges to a point $x \in X$ if each open set $U$ containing $x$ (or each neighborhood of $x$) belongs to the ultrafilter. In other words, by going deep enough into the ultrafilter-as-net, you get within any chosen neighborhood of the point. We write $F \to x$ to say that $F$ converges to $x$.
General topology can be completely developed in terms of the notion of ultrafilter convergence, often very efficiently. For example, starting with any relation $c$ whatsoever between ultrafilters and points,

$c \subseteq \beta X \times X,$

we can naturally define a topology $\tau_c$ on $X$ so that $F \to x$ with respect to $\tau_c$ whenever $(F, x) \in c$.
Let's tackle that in stages: in order for the displayed condition to hold, a neighborhood of $x$ must belong to every ultrafilter $F$ for which $(F, x) \in c$. This suggests that we try defining the filter $N_x$ of neighborhoods of $x$ to be the intersection of ultrafilters

$N_x := \bigcap_{F: (F, x) \in c} F.$

Then define a subset $U \subseteq X$ to be open if it is a neighborhood of all the points it contains. In other words, define $U$ to be open if

$U \in N_x \text{ for all } x \in U.$
Proposition 3: This defines a topology, $\tau_c$.
Proof: Since $X \in F$ for every ultrafilter $F$, it is clear that $X$ is open; also, it is vacuously true that the empty set is open. If $U, V$ are open, then for all $x$, whenever $x \in U \cap V$, we have $x \in U$ and $x \in V$, so that $U \in N_x$ and $V \in N_x$ by openness, whence $U \cap V \in N_x$ since $N_x$ is closed under intersections. So $U \cap V$ is also open. Finally, suppose $\{U_i\}_{i \in I}$ is a collection of open sets. For all $x$, if $x \in \bigcup_i U_i$, then $x \in U_i$ for some $i$, so that $U_i \in N_x$ by openness, whence $\bigcup_i U_i \in N_x$ since ultrafilters (and hence $N_x$, an intersection of ultrafilters) are upward closed. So $\bigcup_i U_i$ is also open. $\Box$
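Since the construction of $\tau_c$ is completely finitary when $X$ is finite (where every ultrafilter is principal and may be identified with its generating point), it can be checked mechanically; here is a small Python sketch of mine doing exactly that.

```python
from itertools import combinations

def powerset(points):
    return [frozenset(s) for r in range(len(points) + 1)
            for s in combinations(points, r)]

def tau_from_convergence(points, c):
    """The topology tau_c on a finite set.

    Ultrafilters on a finite set are principal, so we write prin(x) simply
    as x, and c is a set of pairs (x, y) meaning "prin(x) converges to y".
    U lies in the neighborhood filter N_y iff U belongs to every ultrafilter
    converging to y, i.e. iff x in U whenever (x, y) in c; and U is open
    iff U in N_y for every y in U.
    """
    def in_N(U, y):
        return all(x in U for (x, z) in c if z == y)
    return [U for U in powerset(points) if all(in_N(U, y) for y in U)]

# Decree that prin(1) converges to 0 as well as to 1:
c = {(0, 0), (1, 1), (1, 0)}
print(sorted(sorted(U) for U in tau_from_convergence([0, 1], c)))
# [[], [0, 1], [1]] -- this tau_c is the Sierpinski topology on {0, 1}
```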
Let's recap: starting from a topology $\tau$ on $X$, we've defined a convergence relation $conv(\tau) \subseteq \beta X \times X$ (consisting of pairs $(F, x)$ such that $F \to x$ with respect to $\tau$), and conversely, given any relation $c \subseteq \beta X \times X$, we've defined a topology $\tau_c$ on $X$. What we actually have here is a Galois connection where

$c \subseteq conv(\tau)$ if and only if $\tau \subseteq \tau_c.$

Of course not every relation is the convergence relation of a topology, so we don't quite have a Galois correspondence (that is, $c \mapsto \tau_c$ and $\tau \mapsto conv(\tau)$ are not quite inverse to one another). But, it is true that every topology $\tau$ is the topology of its ultrafilter convergence relation, i.e., $\tau = \tau_{conv(\tau)}$. For this, it suffices to show that every neighborhood filter $N_x$ is the intersection of the ultrafilters that contain it. But that is true of any filter:
Theorem 1: If $N$ is a filter in $P X$ and $A \notin N$, then there exists an ultrafilter $F$ for which $N \subseteq F$ and $A \notin F$.
Proof: First I claim $\neg A \cap S \neq \emptyset$ for all $S \in N$; otherwise $S \subseteq A$ for some $S \in N$, whence $A \in N$ since filters are upward closed, contradiction. It follows that $N$ can be extended to the (proper) filter

$\{C \subseteq X: \neg A \cap S \subseteq C \text{ for some } S \in N\},$

which in turn extends to some ultrafilter $F$, by Proposition 2. Since $\neg A \in F$, we have $A \notin F$ (by dichotomy). $\Box$
Corollary 1: Every filter is the intersection of all the ultrafilters which contain it.
The ultrafilter convergence approach to topology is particularly convenient for studies of compactness:
Theorem 2: A space $X$ is compact if and only if every ultrafilter $F$ on $X$ converges to at least one point. It is Hausdorff if and only if every ultrafilter converges to at most one point.
Proof: First suppose that $X$ is compact, and (in view of a contradiction) that $F$ converges to no point of $X$. This means that for every $x \in X$ there is a neighborhood $U_x$ of $x$ which does not belong to $F$, or that $\neg U_x \in F$ (by dichotomy). Finitely many of these $U_x$ cover $X$, by compactness. By De Morgan's law, this means finitely many of the $\neg U_x$ have empty intersection. But this would mean $\emptyset \in F$, since $F$ is closed under finite intersections, contradiction.
In the reverse direction, suppose that every ultrafilter converges. We need to show that if $\{U_i\}_{i \in I}$ is any collection of open subsets of $X$ such that no finite subcollection covers $X$, then the union of the $U_i$ cannot cover $X$. First, because no finite subcollection covers, we may construct a (proper) filter generated by the complements:

$\{A \subseteq X: \neg U_{i_1} \cap \ldots \cap \neg U_{i_n} \subseteq A \text{ for some finite } \{i_1, \ldots, i_n\} \subseteq I\}.$

Extend this filter to an ultrafilter $F$, by Proposition 2; then by assumption $F \to x$ for some $x$. If some one of the $U_i$ contained $x$, then $U_i \in F$ by definition of convergence. But we also have $\neg U_i \in F$, and this is a contradiction. So, $x$ lies outside the union of the $U_i$, as was to be shown.
Now let $X$ be Hausdorff, and suppose that $F \to x$ and $F \to y$ for distinct points $x, y$. Let $U_x, U_y$ be neighborhoods of $x, y$ respectively with empty intersection. By definition of convergence, we have $U_x \in F$ and $U_y \in F$, whence $\emptyset = U_x \cap U_y \in F$, contradiction.
Conversely, suppose every ultrafilter converges to at most one point, and let $x \neq y$ be two distinct points. Unless there are neighborhoods $U_x, U_y$ of $x, y$ respectively such that $U_x \cap U_y = \emptyset$ (which is what we want), the smallest filter containing the two neighborhood filters $N_x, N_y$ (that is to say, the join $N_x \vee N_y$ in the poset of filters) is proper, and hence extends to an ultrafilter $F$. But then $N_x \subseteq F$ and $N_y \subseteq F$, which is to say $F \to x$ and $F \to y$, contradiction. $\Box$
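On a finite space every ultrafilter is principal and the space is automatically compact, but the Hausdorff half of Theorem 2 can still be seen in miniature. A quick Python check of mine, using the Sierpiński space:

```python
def converges(tau, x, y):
    """Does prin(x) converge to y? Iff every open set containing y contains x."""
    return all(x in U for U in tau if y in U)

# Sierpinski space on {0, 1}: opens are {}, {1}, {0, 1}.
tau = [frozenset(), frozenset({1}), frozenset({0, 1})]
print({x: [y for y in (0, 1) if converges(tau, x, y)] for x in (0, 1)})
# {0: [0], 1: [0, 1]} -- prin(1) has two limits, so the space is not Hausdorff
```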
Theorem 2 is very useful; among other things it paves the way for a clean and conceptual proof of Tychonoff's theorem (that an arbitrary product of compact spaces is compact). For now we note that it says that a topology is the topology of a compact Hausdorff space structure on $X$ if and only if the convergence relation $c \subseteq \beta X \times X$ is a function $\beta X \to X$. And in practice, functions $\xi: \beta X \to X$ which arise "naturally" tend to be such convergence relations, making $X$ a compact Hausdorff space.
Here is our key example. Let $B$ be a Boolean algebra, and let $X = \mathrm{Bool}(B, \mathbf{2})$, which we have identified with the set of ultrafilters in $B$. Define a map $\xi: \beta X \to X$ by

$\beta X = \mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2}) \xrightarrow{\ \mathrm{Bool}(\varepsilon_B, \mathbf{2})\ } \mathrm{Bool}(B, \mathbf{2}) = X,$

where $\varepsilon_B: B \to \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2})$ was the counit (evaluated at $B$) of the adjunction defined at the top of this post. Unpacking the definitions a bit, the map $\xi$ is the map $G \mapsto G \circ \varepsilon_B$, the result of applying the hom-functor $\mathrm{Bool}(-, \mathbf{2})$ to $\varepsilon_B$. Chasing this a little further, the map $\xi$ "pulls back" an ultrafilter $G$ on $X$ to the ultrafilter $\varepsilon_B^{-1}(G)$, viewed as an element of $X$. We then topologize $X$ by the topology $\tau_\xi$.
This construction is about as "abstract nonsense" as it gets, but you have to admit that it's pretty darned canonical! The topological space $(X, \tau_\xi)$ we get in this way is called the spectrum of the Boolean algebra $B$. If you've seen a bit of algebraic geometry, then you probably know another, somewhat more elementary way of defining the spectrum (of $B$ as commutative ring), so we may as well make the connection explicit. However you define it, the result is a compact Hausdorff space structure with some other properties which make it very reminiscent of Cantor space.
It is first of all easy to see that $X = \mathrm{Bool}(B, \mathbf{2})$ is compact, i.e., that every ultrafilter $G \in \beta X$ converges. Indeed, the relation $\xi$ is a function $\beta X \to X$, and if you look at the condition for a set $U$ to be open w.r.t. $\tau_\xi$,

$U \in N_x = \bigcap_{G:\, \xi(G) = x} G \text{ for every } x \in U,$

you see immediately that $G$ converges to $\xi(G)$.
To get Hausdorffness, take two distinct points $u, v \in X$ (ultrafilters in $B$). Since these are distinct maximal filters, there exists $b \in B$ such that $b$ belongs to $u$ but not to $v$, and then $\neg b$ belongs to $v$ but not to $u$. Define

$U(b) := \{w \in X: b \in w\}.$
Proposition 4: $U(b)$ is open in $\tau_\xi$.

Proof: We must check that for all ultrafilters $G$ on $X$, that

$\xi(G) \in U(b) \text{ implies } U(b) \in G.$

But $\xi(G) = \varepsilon_B^{-1}(G)$. By definition of $U(b)$, we are thus reduced to checking that

$b \in \varepsilon_B^{-1}(G) \text{ implies } U(b) \in G,$

or that $\varepsilon_B(b) \in G$ implies $U(b) \in G$. But $\varepsilon_B(b)$ (as a subset of $X$) is $U(b)$! $\Box$
As a result, $U(b)$ and $U(\neg b)$ are open sets containing the given points $u$ and $v$. They are disjoint since in fact $U(\neg b) = \neg U(b)$ (indeed, $U(\neg b) = \varepsilon_B(\neg b) = \neg \varepsilon_B(b) = \neg U(b)$ because $\varepsilon_B$ preserves negation). This gives Hausdorffness, and also that the $U(b)$ are clopen (closed and open).
We actually get a lot more:
Proposition 5: The collection $\{U(b): b \in B\}$ is a basis for the topology $\tau_\xi$ on $X$.
Proof: The sets $U(b)$ form a basis for some topology $\tau$, because $U(b) \cap U(c) = U(b \wedge c)$ (indeed, $\varepsilon_B$ preserves meets). By the previous proposition, $\tau \subseteq \tau_\xi$. So the identity on $X$ gives a continuous comparison map

$(X, \tau_\xi) \to (X, \tau)$

between the two topologies. But a continuous bijection from a compact space to a Hausdorff space is necessarily a homeomorphism, so $\tau = \tau_\xi$. $\Box$
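For a finite Boolean algebra all of this collapses to something we can compute directly: the spectrum is a finite discrete space (the interesting topology only appears for infinite $B$, as with the Cantor space mentioned above). Here is a toy Python sketch of mine realizing $B = P Y$, listing its ultrafilters, and checking the stated properties of the basic clopen sets $U(b)$:

```python
from itertools import combinations

Y = [0, 1, 2]
B = [frozenset(s) for r in range(len(Y) + 1) for s in combinations(Y, r)]
# Ultrafilters in B = P(Y) are all principal: one for each point y of Y.
ultrafilters = {y: frozenset(b for b in B if y in b) for y in Y}

def U(b):
    """The basic clopen set U(b) = {ultrafilters w : b in w} of the spectrum."""
    return frozenset(y for y, w in ultrafilters.items() if b in w)

b, c = frozenset({0, 1}), frozenset({1, 2})
assert U(b & c) == U(b) & U(c)                      # U preserves meets
assert U(frozenset(Y) - b) == frozenset(Y) - U(b)   # U preserves negation
print(sorted(U(b)))  # [0, 1] -- the spectrum of P(Y) is just Y, discretely
```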
- Remark: In particular, the canonical topology on $\beta X = \mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$ is compact Hausdorff; this space is called the Stone-Cech compactification of (the discrete space) $X$. The methods exploited in this lecture can be used to show that in fact $\beta X$ is the free compact Hausdorff space generated from the set $X$, meaning that the functor $\beta: \mathbf{Set} \to \mathbf{CompHaus}$ is left adjoint to the underlying-set functor $\mathbf{CompHaus} \to \mathbf{Set}$. In fact, one can go rather further in this vein: a fascinating result (first proved by Eduardo Manes in his PhD thesis) is that the concept of compact Hausdorff space is algebraic (is monadic with respect to the monad $\beta$): there is an equationally defined theory where the class of $J$-ary operations (for each cardinal $J$) is coded by the set of ultrafilters $\beta J$, and whose models are precisely compact Hausdorff spaces. This goes beyond the scope of these lectures, but for the theory of monads, see the entertaining YouTube lectures by the Catsters!
Last time in this series on Stone duality, we observed a perfect duality between finite Boolean algebras and finite sets, which we called “baby Stone duality”:
- Every finite Boolean algebra $B$ is obtained from a finite set $X$ by taking its power set (or set of functions $\hom(X, \mathbf{2})$ from $X$ to $\mathbf{2} = \{0, 1\}$, with the Boolean algebra structure it inherits "pointwise" from $\mathbf{2}$). The set $X$ may be defined to be $\mathrm{Bool}(B, \mathbf{2})$, the set of Boolean algebra homomorphisms from $B$ to $\mathbf{2}$.
- Conversely, every finite set $X$ is obtained from the Boolean algebra $\hom(X, \mathbf{2})$ by taking its "hom-set" $\mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$.
More precisely, there are natural isomorphisms

$B \cong \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2}), \qquad X \cong \mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$

in the categories of finite Boolean algebras and of finite sets, respectively. In the language of category theory, this says that these categories are (equivalent to) one another's opposite — something I've been meaning to explain in more detail, and I promise to get to that, soon! In any case, this duality says (among other things) that finite Boolean algebras, no matter how abstractly presented, can be represented concretely as power sets.
Today I'd like to apply this representation to free Boolean algebras (on finitely many generators). What is a free Boolean algebra? Again, the proper context for discussing this is category theory, but we can at least convey the idea: given a finite set $S$ of letters $x, y, z, \ldots$, consider the Boolean algebra $\mathbf{B}(S)$ whose elements are logical equivalence classes of formulas you can build up from the letters using the Boolean connectives $\wedge, \vee, \neg$ (and the Boolean constants $0, 1$), where two formulas $p, q$ are defined to be logically equivalent if $p \leq q$ and $q \leq p$ can be inferred purely on the basis of the Boolean algebra axioms. This is an excellent example of a very abstract description of a Boolean algebra: syntactically, there are infinitely many formulas you can build up, and the logical equivalence classes are also infinite and somewhat hard to visualize, but the mess can be brought under control using Stone duality, as we now show.
First let me cut to the chase, and describe the key property of free Boolean algebras. Let $B$ be any Boolean algebra; it could be a power set, the lattice of regular open sets in a topology, or whatever, and think of a function $f: S \to B$ from the set of letters to $B$ as modeling or interpreting the atomic formulas $x \in S$ as elements $f(x)$ of $B$. The essential property of the free Boolean algebra is that we can extend this interpretation $f$ in a unique way to a Boolean algebra map $\hat{f}: \mathbf{B}(S) \to B$. The way this works is that we map a formula like $(x \wedge y) \vee \neg z$ to the obvious formula $(f(x) \wedge f(y)) \vee \neg f(z)$. This is well-defined on logical equivalence classes of formulas because if $p = q$ in $\mathbf{B}(S)$, i.e., if the equality is derivable just from the Boolean algebra axioms, then of course $\hat{f}(p) = \hat{f}(q)$ holds in $B$ as the Boolean algebra axioms hold in $B$. Thus, there is a natural bijective correspondence between functions $S \to B$ and Boolean algebra maps $\mathbf{B}(S) \to B$; to get back from a Boolean algebra map $\phi: \mathbf{B}(S) \to B$ to the function $f: S \to B$, simply compose $\phi$ with the function $S \to \mathbf{B}(S)$ which interprets elements of $S$ as equivalence classes of atomic formulas in $\mathbf{B}(S)$.
To get a better grip on $\mathbf{B}(S)$, let me pass to the Boolean ring picture (which, as we saw last time, is equivalent to the Boolean algebra picture). Here the primitive operations are addition and multiplication, so in this picture we build up "formulas" from letters using these operations (e.g., $x + y z$ and the like). In other words, the elements of $\mathbf{B}(S)$ can be considered as "polynomials" in the variables $x \in S$. Actually, there are some simplifying features of this polynomial algebra; for one thing, in Boolean rings we have idempotence. This means that $x^n = x$ for $n \geq 1$, and so a monomial term like $x^2 y^3 z$ reduces to its support $x y z$. Since each letter appears in a support with exponent 0 or 1, it follows that there are $2^{|S|}$ possible supports or Boolean monomials, where $|S|$ denotes the cardinality of $S$.
Idempotence also implies, as we saw last time, that $x + x = 0$ for all elements $x$, so that our polynomials = $\mathbb{Z}$-linear combinations of monomials are really $\mathbb{Z}_2$-linear combinations of Boolean monomials or supports. In other words, each element of $\mathbf{B}(S)$ is uniquely a linear combination

$\sum_{\sigma} a_\sigma \sigma \qquad \text{where } a_\sigma \in \mathbb{Z}_2$

and $\sigma$ ranges over supports, i.e., the set of supports forms a basis of $\mathbf{B}(S)$ as a $\mathbb{Z}_2$-vector space. Hence the cardinality of the free Boolean ring is $2^{2^{|S|}}$.
- Remark: This gives an algorithm for checking logical equivalence of two Boolean algebra formulas: convert the formulas into Boolean ring expressions, and using distributivity, idempotence, etc., write out these expressions as Boolean polynomials = $\mathbb{Z}_2$-linear combinations of supports. The Boolean algebra formulas are equivalent if and only if the corresponding Boolean polynomials are equal.
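A minimal Python rendering of this algorithm (mine; the representation of a polynomial as a set of supports is an implementation choice):

```python
# A Boolean polynomial is a set of supports, each support a frozenset of
# variable names; {frozenset(), frozenset({'x', 'y'})} represents 1 + xy.
ONE = frozenset({frozenset()})

def add(p, q):
    return frozenset(p) ^ frozenset(q)      # Z_2 coefficients: pairs cancel

def mul(p, q):
    out = frozenset()
    for s in p:
        for t in q:
            out = add(out, {s | t})         # x^2 = x: multiply supports by union
    return out

# Boolean algebra connectives via the ring: meet = product, not x = 1 + x,
# x or y = x + y + xy.
def NOT(p):    return add(ONE, p)
def AND(p, q): return mul(p, q)
def OR(p, q):  return add(add(p, q), mul(p, q))
def var(name): return frozenset({frozenset({name})})

x, y = var('x'), var('y')
# De Morgan: not(x and y) and (not x) or (not y) have equal polynomials:
print(NOT(AND(x, y)) == OR(NOT(x), NOT(y)))   # True
```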
But there is another way of understanding free Boolean algebras, via baby Stone duality. Namely, we have the power set representation

$\mathbf{B}(S) \cong \hom(X, \mathbf{2}),$

where $X = \mathrm{Bool}(\mathbf{B}(S), \mathbf{2})$ is the set of Boolean algebra maps $\mathbf{B}(S) \to \mathbf{2}$. However, the freeness property says that these maps are in bijection with functions $S \to \mathbf{2}$. What are these functions? They are just truth-value assignments for the elements (atomic formulas, or variables) $x \in S$; there are again $2^{|S|}$ many of these. This leads to the method of truth tables: each formula $p$ induces (in one-one fashion) a function

$\hat{p}: \hom(S, \mathbf{2}) \to \mathbf{2}$

which takes a Boolean algebra map $\phi: \mathbf{B}(S) \to \mathbf{2}$, aka a truth-value assignment for the variables, to the element of $\mathbf{2}$ obtained by instantiating the assigned truth values $\phi(x)$ for the variables and evaluating the resulting Boolean expression for $p$ in $\mathbf{2}$. (In terms of power sets, the representation identifies each equivalence class of formulas with the set of truth-value assignments of variables which render the formula $p$ "true" in $\mathbf{2}$.) The fact that the representation $p \mapsto \hat{p}$ is injective means precisely that if formulas $p, q$ are inequivalent, then there is a truth-value assignment which renders one of them "true" and the other "false", hence that they are distinguishable by truth tables.
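In code, the truth-table test is even shorter (again my own sketch): identify a formula with the set of assignments in $\hom(S, \mathbf{2})$ rendering it true, and compare.

```python
from itertools import product

def truth_set(formula, variables):
    """The set of truth-value assignments rendering the formula true."""
    return {vals for vals in product((0, 1), repeat=len(variables))
            if formula(dict(zip(variables, vals)))}

S = ['x', 'y', 'z']
p = lambda v: v['x'] and (v['y'] or v['z'])
q = lambda v: (v['x'] and v['y']) or (v['x'] and v['z'])
print(truth_set(p, S) == truth_set(q, S))   # True: p and q are logically equivalent
```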
- Remark: This is an instance of what is known as a completeness theorem in logic. On the syntactic side, we have a notion of provability of formulas (that $p$ is logically equivalent to $1$, or $p = 1$ in $\mathbf{B}(S)$, if this is derivable from the Boolean algebra axioms). On the semantic side, each Boolean algebra homomorphism $\phi: \mathbf{B}(S) \to \mathbf{2}$ can be regarded as a model of $\mathbf{B}(S)$ in which each formula becomes true or false under $\phi$. The method of truth tables then says that there are enough models or truth-value assignments to detect provability of formulas, i.e., $p$ is provable if it is true when interpreted in any model $\phi$. This is precisely what is meant by a completeness theorem.
There are still other ways of thinking about this. Let $\phi: \mathbf{B}(S) \to \mathbf{2}$ be a Boolean algebra map, aka a model of $\mathbf{B}(S)$. This model is completely determined by

- The maximal ideal $\phi^{-1}(0)$ in the Boolean ring $\mathbf{B}(S)$, or
- The maximal filter or ultrafilter $\phi^{-1}(1)$ in $\mathbf{B}(S)$.

Now, as we saw last time, in the case of finite Boolean algebras, each (maximal) ideal is principal: is of the form $\{y: y \leq x\}$ for some $x$. Dually, each (ultra)filter is principal: is of the form $\{y: x \leq y\}$ for some $x$. The maximality of the ultrafilter means that there is no nonzero element in $\mathbf{B}(S)$ smaller than $x$; we say that $x$ is an atom in $\mathbf{B}(S)$ (NB: not to be confused with atomic formula!). So, we can also say

- A model of a finite Boolean algebra $B$ is specified by a unique atom of $B$.
Thus, baby Stone duality asserts a Boolean algebra isomorphism

$B \cong P(\mathrm{Atoms}(B)).$

Let's give an example: consider the free Boolean algebra on three elements $x, y, z$. If you like, draw a Venn diagram generated by three planar regions labeled by $x, y, z$. The atoms or smallest nonzero elements of the free Boolean algebra are then represented by the $2^3 = 8$ regions demarcated by the Venn diagram. That is, the disjoint regions are labeled by the eight atoms

$x \wedge y \wedge z, \quad x \wedge y \wedge \neg z, \quad x \wedge \neg y \wedge z, \quad \neg x \wedge y \wedge z, \quad x \wedge \neg y \wedge \neg z, \quad \neg x \wedge y \wedge \neg z, \quad \neg x \wedge \neg y \wedge z, \quad \neg x \wedge \neg y \wedge \neg z.$

According to baby Stone duality, any element in the free Boolean algebra (with $2^{2^3} = 256$ elements) is uniquely expressible as a disjoint union of these atoms. Another way of saying this is that the atoms form a basis (alternative to Boolean monomials) of the free Boolean algebra as $\mathbb{Z}_2$-vector space. For example, as an exercise one may calculate

$x \vee z = (x \wedge y \wedge z) \vee (x \wedge y \wedge \neg z) \vee (x \wedge \neg y \wedge z) \vee (x \wedge \neg y \wedge \neg z) \vee (\neg x \wedge y \wedge z) \vee (\neg x \wedge \neg y \wedge z).$

The unique expression of an element $b \in \mathbf{B}(S)$ (where $b$ is given by a Boolean formula) as a $\mathbb{Z}_2$-linear combination of atoms is called the disjunctive normal form of the formula. So yet another way of deciding when two Boolean formulas are logically equivalent is to put them both in disjunctive normal form and check whether the resulting expressions are the same. (It's basically the same idea as checking equality of Boolean polynomials, except we are using a different vector space basis.)
All of the above applies not just to free (finite) Boolean algebras, but to general finite Boolean algebras. So, suppose you have a Boolean algebra $B$ which is generated by finitely many elements $x_1, \ldots, x_n \in B$. Generated means that every element in $B$ can be expressed as a Boolean combination of the generating elements. In other words, "generated" means that if we consider the inclusion function $S = \{x_1, \ldots, x_n\} \hookrightarrow B$, then the unique Boolean algebra map $\phi: \mathbf{B}(S) \to B$ which extends the inclusion is a surjection. Thinking of $\phi$ as a Boolean ring map, we have an ideal $I = \ker(\phi)$, and because $\phi$ is a surjection, it induces a ring isomorphism

$\mathbf{B}(S)/I \cong B.$

The elements of $I$ can be thought of as equivalence classes of formulas which become false in $B$ under the interpretation $\phi$. Or, we could just as well (and it may be more natural to) consider instead the filter $F = \phi^{-1}(1)$ of formulas in $\mathbf{B}(S)$ which become true under the interpretation $\phi$. In any event, what we have is a propositional language $\mathbf{B}(S)$ consisting of classes of formulas, and a filter $F \subseteq \mathbf{B}(S)$ consisting of formulas, which can be thought of as theorems of $B$. Often one may find a filter $F$ described as the smallest filter which contains certain chosen elements, which one could then call axioms of $B$.
In summary, any propositional theory (which by definition consists of a set $S$ of propositional variables together with a filter $F \subseteq \mathbf{B}(S)$ of the free Boolean algebra, whose elements are called theorems of the theory) yields a Boolean algebra $\mathbf{B}(S)/F$, where dividing out by $F$ means we take equivalence classes of elements of $\mathbf{B}(S)$ under the equivalence relation $\sim$ defined by the condition "$p \sim q$ iff $p \Leftrightarrow q$ belongs to $F$" (here $p \Leftrightarrow q$ abbreviates $(p \Rightarrow q) \wedge (q \Rightarrow p)$, with $p \Rightarrow q := \neg p \vee q$). The partial order on equivalence classes $[p]$ is defined by $[p] \leq [q]$ iff $p \Rightarrow q$ belongs to $F$. The Boolean algebra $\mathbf{B}(S)/F$ defined in this way is called the Lindenbaum algebra of the propositional theory.
Conversely, any Boolean algebra $B$ with a specified set of generators $x_1, \ldots, x_n$ can be thought of as the Lindenbaum algebra of the propositional theory obtained by taking the $x_i$ as propositional variables, together with the filter $\phi^{-1}(1)$ obtained from the induced Boolean algebra map $\phi: \mathbf{B}(S) \to B$. A model of the theory should be a Boolean algebra map $\mathbf{B}(S) \to \mathbf{2}$ which interprets the formulas of $\mathbf{B}(S)$ as true or false, but in such a way that the theorems of the theory (the elements of the filter) are all interpreted as "true". In other words, a model is the same thing as a Boolean algebra map

$\mathbf{B}(S)/F \to \mathbf{2},$

i.e., we may identify a model of a propositional theory with a Boolean algebra map out of its Lindenbaum algebra.
So the set of models is the set $\mathrm{Bool}(B, \mathbf{2})$, and now baby Stone duality, which gives a canonical isomorphism

$B \cong \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2}),$

implies the following
Completeness theorem: If a formula of a finite propositional theory is “true” when interpreted under any model of the theory, then the formula is provable (is a theorem of the theory).
Proof: Let $B = \mathbf{B}(S)/F$ be the Lindenbaum algebra of the theory, and let $[p]$ be the class of formulas provably equivalent to a given formula $p$ under the theory. The Boolean algebra isomorphism $B \cong \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2})$ takes an element $b = [p]$ to the map $\phi \mapsto \phi(b)$. If $\phi([p]) = 1$ for all models $\phi$, i.e., if $b$ is taken to the constant map at $1$, then $b = 1$, since the isomorphism matches $1 \in B$ with that constant map. But then $[p] = [1]$, i.e., $p \sim 1$, i.e., $p$ belongs to $F$, the filter of provable formulas. $\Box$
In summary, we have developed a rich vocabulary in which Boolean algebras are essentially the same things as propositional theories, and where models are in natural bijection with maximal ideals in the Boolean ring, or ultrafilters in the Boolean algebra, or [in the finite case] atoms in the Boolean algebra. But as we will soon see, ultrafilters have a significance far beyond their application in the realm of Boolean algebras; in particular, they crop up in general studies of topology and convergence. This is in fact a vital clue; the key point is that the set of models or ultrafilters carries a canonical topology, and the interaction between Boolean algebras and topological spaces is what Stone duality is all about.
In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.
Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.
I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:
Definition: A Boolean ring is a commutative ring (with identity $1$) in which every element $x$ is idempotent, i.e., satisfies $x^2 = x$.
Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.
Proposition 1: For every element $x$ in a Boolean ring, $2x = 0$.

Proof: By idempotence, we have $x + 1 = (x + 1)^2 = x^2 + 2x + 1$. Since $x^2 = x$, we may additively cancel $x + 1$ in the ring to conclude $0 = 2x$. $\Box$
This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field $\mathbb{Z}_2$ consisting of two elements. I won't go into details about this, only that it follows readily from the proposition if we define a vector space over $\mathbb{Z}_2$ to be an abelian group $V$ together with a ring homomorphism $\mathbb{Z}_2 \to \hom(V, V)$ to the ring of abelian group homomorphisms from $V$ to itself (where such homomorphisms are "multiplied" by composing them; the idea is that this ring homomorphism takes an element $r$ to scalar-multiplication $r \cdot (-)$).
Anyway, the point is that we can now apply some linear algebra to study this $\mathbb{Z}_2$-vector space; in particular, a finite Boolean ring $B$ is a finite-dimensional vector space over $\mathbb{Z}_2$. By choosing a basis, we see that $B$ is vector-space isomorphic to $\mathbb{Z}_2^n$ where $n$ is the dimension. So the cardinality of a finite Boolean ring must be of the form $2^n$. Hold that thought!
Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring $B$, we may construct a corresponding Boolean algebra structure on the underlying set of $B$, uniquely determined by the stipulation that the multiplication $x \cdot y$ of the Boolean ring match the meet operation $x \wedge y$ of the Boolean algebra. Conversely, given a Boolean algebra $B$, we may construct a corresponding Boolean ring structure on $B$, and this construction is inverse to the previous one.
In one direction, suppose $B$ is a Boolean ring. We know from before that a binary operation on a set $B$ that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of $B$ — can be identified with the meet operation of a meet-semilattice structure on $B$, uniquely specified by taking its partial order to be defined by: $x \leq y$ iff $x = x y$. It immediately follows from this definition that the additive identity $0$ satisfies $0 \leq x$ for all $x$ (is the bottom element), and the multiplicative identity $1$ satisfies $x \leq 1$ for all $x$ (is the top element).
Notice also that $x(1 + x) = x + x^2 = x + x = 0$, by idempotence. This leads one to suspect that $1 + x$ will be the complement of $x$ in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting $x = 1 + (1 + x)$, i.e., $x$ is equal to its putative double negation.
Proposition 2: $x \mapsto 1 + x$ is order-reversing.

Proof: Looking at the definition of the order, this says that if $x = x y$, then $1 + y = (1 + y)(1 + x)$. This is immediate: $(1 + y)(1 + x) = 1 + x + y + x y = 1 + x + y + x = 1 + y$, using $x y = x$ and $x + x = 0$. $\Box$
So, $\neg := 1 + (-)$ is an order-reversing map $B \to B$ (an order-preserving map $B \to B^{op}$) which is a bijection (since it is its own inverse). We conclude that $\neg: B \to B^{op}$ is a poset isomorphism. Since $B$ has meets and $B \cong B^{op}$, $B^{op}$ also has meets (and the isomorphism preserves them). But meets in $B^{op}$ are joins in $B$. Hence $B$ has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function $\neg$ takes meets in $B$ to joins in $B$; that is,

$1 + x y = (1 + x) \vee (1 + y)$

or, replacing $x$ by $1 + x$ and $y$ by $1 + y$,

$1 + (1 + x)(1 + y) = x \vee y,$

whence $x \vee y = x + y + x y$, using the proposition 1 above.
Proposition 3: $1 + x$ is the complement of $x$.

Proof: We already saw $x \wedge (1 + x) = x(1 + x) = 0$. Also

$x \vee (1 + x) = x + (1 + x) + x(1 + x) = x + 1 + x + 0 = 1,$

using the formula for join we just computed. This completes the proof. $\Box$
So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have $(x \vee y) \wedge z = (x + y + x y)z = x z + y z + x y z$. On the other hand, $(x \wedge z) \vee (y \wedge z) = x z + y z + (x z)(y z) = x z + y z + x y z^2 = x z + y z + x y z$, using idempotence once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.
Naturally, we want to invert the process: starting with a Boolean algebra structure on a set $B$, construct a corresponding Boolean ring structure on $B$ whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy $x \vee y = x + y + (x \wedge y)$, so that $x \vee y = x + y$ if $x \wedge y = 0$ (i.e., if $x$ and $y$ are disjoint): this gives a partial definition of addition. Continuing this thought, if we express $x \vee y$ as a disjoint sum of some element $z$ and $x \wedge y$, we then conclude $x \vee y = z + (x \wedge y)$, whence $z = (x \vee y) + (x \wedge y)$ by cancellation. In the case where the Boolean algebra is a power set $P X$, this element $z$ is the symmetric difference of $x$ and $y$. This generalizes: if we define the addition by the symmetric difference formula $x + y := (x \wedge \neg y) \vee (\neg x \wedge y)$, then $x + y$ is disjoint from $x \wedge y$, so that

$(x + y) + (x \wedge y) = (x + y) \vee (x \wedge y) = x \vee y$

after a short calculation using the complementation and distributivity axioms. After more work, one shows that $+$ is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.
Exercise: Verify this last assertion.
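The power set case of the exercise can at least be spot-checked mechanically. A tiny brute-force verification of mine, with symmetric difference as addition and intersection as multiplication:

```python
from itertools import combinations

X = [0, 1, 2]
PX = [frozenset(s) for r in range(len(X) + 1) for s in combinations(X, r)]
zero, one = frozenset(), frozenset(X)

for a in PX:
    assert a ^ a == zero and a ^ zero == a    # each element is its own additive inverse
    assert a & a == a and a & one == a        # idempotence; 1 is the unit
    for b in PX:
        assert a ^ b == b ^ a and a & b == b & a
        for c in PX:
            assert (a ^ b) ^ c == a ^ (b ^ c)          # associativity of +
            assert a & (b ^ c) == (a & b) ^ (a & c)    # multiplication distributes
print("P(X) is a Boolean ring under symmetric difference and intersection")
```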
However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.
Whereas here, in asserting Boolean algebras "are" Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism $f: B \to C$ between Boolean rings, it is clear that $f$ preserves the meet $x y$ and join $x + y + x y$ of any two elements $x, y$ [since it preserves multiplication and addition] and of course also the complement $1 + x$ of any $x$; therefore $f$ is a map of the corresponding Boolean algebras. Conversely, a map $g: B \to C$ of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product $x \wedge y$ and sum $(x \wedge \neg y) \vee (\neg x \wedge y)$ in the corresponding Boolean ring. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different ways of presenting the same underlying Lawvere algebraic theory). In summary,
Theorem 1: The above processes define functors $\mathbf{BoolRing} \to \mathbf{BoolAlg}$, $\mathbf{BoolAlg} \to \mathbf{BoolRing}$, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.
- Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.
Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) has cardinality $2^n$ — the same as the cardinality of the power set Boolean algebra $P X$ if $X$ has cardinality $n$. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra $B$ is naturally isomorphic to one of the form $P X$; one of our tasks is to describe $X$ in terms of $B$ in a "natural" (or rather, functorial) way. From the Boolean ring perspective, $X$ is a basis of the underlying $\mathbb{Z}_2$-vector space of $B$; to pin it down exactly, we use the full ring structure.
$X$ is naturally a basis of $P X$; more precisely, under the embedding $X \to P X$ defined by $x \mapsto \{x\}$, every subset $S \subseteq X$ is uniquely a disjoint sum of finitely many elements of $X$:

$S = \sum_{x \in X} a_x \{x\}$

where $a_x \in \mathbb{Z}_2$: naturally, $a_x = 1$ iff $x \in S$. For each $S$, we can treat the coefficient $a_x$ as a function of $x$ valued in $\mathbb{Z}_2$. Let $\hom(X, \mathbb{Z}_2)$ denote the set of functions $X \to \mathbb{Z}_2$; this becomes a Boolean ring under the obvious pointwise definitions $(f + g)(x) := f(x) + g(x)$ and $(f g)(x) := f(x) g(x)$. The function

$P X \to \hom(X, \mathbb{Z}_2)$

which takes $S$ to the coefficient function $x \mapsto a_x$ is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)
Or, we can turn this around: for each $x \in X$, we get a Boolean ring map $ev_x: \hom(X, \mathbb{Z}_2) \to \mathbb{Z}_2$ which takes $f$ to $f(x)$. Let $\mathrm{Bool}(\hom(X, \mathbb{Z}_2), \mathbb{Z}_2)$ denote the set of Boolean ring maps $\hom(X, \mathbb{Z}_2) \to \mathbb{Z}_2$.
Proposition 4: For a finite set $X$, the function $X \to \mathrm{Bool}(\hom(X, \mathbb{Z}_2), \mathbb{Z}_2)$ that sends $x$ to $ev_x$ is a bijection (in other words, an isomorphism).
Proof: We must show that for every Boolean ring map $\phi: \hom(X, \mathbb{Z}_2) \to \mathbb{Z}_2$, there exists a unique $x \in X$ such that $\phi = ev_x$, i.e., such that $\phi(f) = f(x)$ for all $f$. So let $\phi$ be given, and let $S$ be the intersection (or Boolean ring product) of all $f$ for which $\phi(f) = 1$ [here we identify $f \in \hom(X, \mathbb{Z}_2)$ with the subset $f^{-1}(1) \subseteq X$]. Then

$\phi(S) = \prod_{f:\, \phi(f) = 1} \phi(f) = 1.$

I claim that $S$ must be a singleton $\{x\}$ for some (evidently unique) $x$. For $\phi(S) = 1 \neq 0 = \phi(\emptyset)$, forcing $x \in S$ for some $x$. For this $x$ we cannot have $\phi(X - \{x\}) = 1$, since then $S \subseteq X - \{x\}$ according to how $S$ was defined; so $\phi(\{x\}) = \phi(1 + (X - \{x\})) = 1$, and so $S \subseteq \{x\}$. To finish, I now claim $\phi(f) = f(x)$ for all $f$. But $\phi(f) = 1$ iff $S \subseteq f$ iff $x \in f$ iff $f(x) = 1$ (for the first "iff": left to right is by construction of $S$; right to left because $S \subseteq f$ gives $1 = \phi(S) = \phi(S)\phi(f)$). This completes the proof. $\Box$
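Proposition 4 is also easy to confirm by brute force for a small $X$ (a throwaway check of mine): enumerate all $2^8$ functions $\phi: \hom(X, \mathbb{Z}_2) \to \mathbb{Z}_2$ for $|X| = 3$, keep the Boolean ring maps, and observe that each one is evaluation at a unique point.

```python
from itertools import combinations, product

X = [0, 1, 2]
homX2 = [frozenset(s) for r in range(len(X) + 1) for s in combinations(X, r)]
# (We identify a function f: X -> Z_2 with the subset f^{-1}(1) of X, so that
# ring addition is symmetric difference and multiplication is intersection.)

def is_ring_map(phi):
    if phi[frozenset(X)] != 1:                        # must preserve 1
        return False
    return all(phi[f ^ g] == (phi[f] + phi[g]) % 2    # must preserve +
               and phi[f & g] == phi[f] * phi[g]      # must preserve *
               for f in homX2 for g in homX2)

for values in product((0, 1), repeat=len(homX2)):
    phi = dict(zip(homX2, values))
    if is_ring_map(phi):
        print([x for x in X
               if all(phi[f] == (1 if x in f else 0) for f in homX2)])
# Each of the three ring maps found prints a singleton [x]: it is ev_x.
```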
This proposition is a vital clue, for if $B$ is to be isomorphic to a power set $P X$ (equivalently, to $\hom(X, \mathbb{Z}_2)$), the proposition says that the $X$ in question can be retrieved reciprocally (up to isomorphism) as $\mathrm{Bool}(B, \mathbb{Z}_2)$.
With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

$ev: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$

which sends $b \in B$ to the function $ev(b)$ which maps $\phi$ to $\phi(b)$ (i.e., evaluates $\phi$ at $b$). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that $ev(b + c)(\phi) = ev(b)(\phi) + ev(c)(\phi)$ for all $\phi$. But by definition, this is the equation $\phi(b + c) = \phi(b) + \phi(c)$, which holds since $\phi$ is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.
Theorem 2: If $B$ is a finite Boolean ring, then the Boolean ring map $ev$ is an isomorphism. (So, there is a natural isomorphism $B \cong \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$.)
Proof: First we prove injectivity: suppose $b$ is nonzero. Then $\neg b = 1 + b \neq 1$, so the ideal $\{x \wedge \neg b: x \in B\}$ is a proper ideal. Let $M$ be a maximal proper ideal containing $\neg b$, so that $B/M$ is both a field and a Boolean ring. Then $B/M \cong \mathbb{Z}_2$ (otherwise any element $x \in B/M$ not equal to $0, 1$ would be a zero divisor on account of $x(1 + x) = 0$). The evident composite

$B \to B/M \cong \mathbb{Z}_2$

yields a homomorphism $\phi$ for which $\phi(\neg b) = 0$, so $\phi(b) = 1$. Therefore $ev(b)$ is nonzero, as desired.
Now we prove surjectivity. A function $g: \mathrm{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2$ is determined by the set of homomorphisms $\phi$ mapping to $1$ under $g$, and each such homomorphism $\phi: B \to \mathbb{Z}_2$, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let $J$ be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if $x, y$ belong to it, then so does $x \vee y = x + y + x y$. Let $j$ be the join of the finitely many elements of $J$; notice that $J = \{x \in B: x \leq j\}$ (certainly $x \leq j$ for all $x \in J$, since $j$ is their join, but also $j$ itself belongs to the intersection $J$ of these kernels, each kernel being closed under joins, whence every $x \leq j$ belongs to the downward-closed set $J$ as well; actually, this proves that every ideal of a finite Boolean ring $B$ is principal). In fact, writing $k$ for the unique element such that $j = \neg k$, we have $J = \{x \in B: x \leq \neg k\}$.
Now consider $ev(k)$; I claim that $g = ev(k)$, proving surjectivity. We need to show $g(\phi) = \phi(k)$ for all $\phi \in \mathrm{Bool}(B, \mathbb{Z}_2)$. In one direction, we already know from the above that if $g(\phi) = 1$, then $j$ belongs to the kernel of $\phi$, so $\phi(j) = 0$, whence $\phi(k) = 1$.
For the other direction, suppose $\phi(k) = 1$, or that $\phi(j) = 0$. Now the kernel of $\phi$ is principal, say $\{x: x \leq i\}$ for some $i$. Likewise, for each $\psi$ with $g(\psi) = 1$ the kernel of $\psi$ is principal, say $\{x: x \leq j_\psi\}$, and since $J$ is the intersection of these kernels we have $j = \bigwedge_\psi j_\psi$. We have

$0 = \phi(j) = \bigwedge_\psi \phi(j_\psi)$

(the meet is finite, and $\phi$ preserves meets since meets are products), from which it follows that $\phi(j_\psi) = 0$ for some $\psi$, i.e., $j_\psi \leq i$. But then the kernel of $\phi$ is a proper ideal containing the maximal ideal $\ker(\psi)$; by maximality it follows that $\ker(\psi) = \ker(\phi)$. Since $\psi$ and $\phi$ have the same kernels, they are equal. And therefore $g(\phi) = g(\psi) = 1$. We have now proven both directions of the statement ($g(\phi) = 1$ if and only if $\phi(k) = 1$), and the proof is now complete. $\Box$
- Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements $b$ and their negations $\neg b$, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset $I \subseteq B$ closed under joins and such that $x \wedge y \in I$ for $x \in I, y \in B$, a filter is (by definition) a subset $F \subseteq B$ closed under meets and such that $x \vee y \in F$ whenever $x \in F, y \in B$ (this second condition is equivalent to upward-closure: $x \in F$ and $x \leq y$ implies $y \in F$). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if $I$ is an ideal, then the set of negations $\{\neg x: x \in I\}$ is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if $\phi: B \to C$ is a Boolean algebra map, then the inverse image $\phi^{-1}(F)$ of a filter $F$ is a filter, just as the inverse image $\phi^{-1}(I)$ of an ideal $I$ is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.
For those who know some category theory: what is really going on here is that we have a power set functor

$P = \hom(-, \mathbb{Z}_2): \mathbf{FinSet}^{op} \to \mathbf{FinBool}$

(taking a function $f: X \to Y$ between finite sets to the inverse image map $f^{-1}: P Y \to P X$, which is a map between finite Boolean algebras) and a functor

$Q = \mathrm{Bool}(-, \mathbb{Z}_2): \mathbf{FinBool} \to \mathbf{FinSet}^{op},$

which we could replace by its opposite $Q^{op}: \mathbf{FinBool}^{op} \to \mathbf{FinSet}$, and the canonical maps of proposition 4 and theorem 2,

$X \to \mathrm{Bool}(\hom(X, \mathbb{Z}_2), \mathbb{Z}_2), \qquad B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2),$

are components (at $X$ and $B$) of the counit and unit for an adjunction $Q \dashv P$. The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories $\mathbf{FinSet}^{op}$ and $\mathbf{FinBool}$. This is the proper categorical statement of Stone duality in the finite case, or what we are calling "baby Stone duality". I will make some time soon to explain what these terms mean.
In this installment, I will introduce the concept of Boolean algebra, one of the main stars of this series, and relate it to concepts introduced in previous lectures (distributive lattice, Heyting algebra, and so on). Boolean algebra is the algebra of classical propositional calculus, and so has an abstract logical provenance; but one of our eventual goals is to show how any Boolean algebra can also be represented in concrete set-theoretic (or topological) terms, as part of a powerful categorical duality due to Stone.
There are lots of ways to define Boolean algebras. Some definitions were for a long time difficult conjectures (like the Robbins conjecture, established only in the last ten years or so with the help of computers) — testament to the richness of the concept. Here we’ll discuss just a few definitions. The first is a traditional one, and one which is pretty snappy:
A Boolean algebra is a distributive lattice in which every element has a complement.
(If $X$ is a lattice and $x \in X$, a complement of $x$ is an element $y$ such that $x \wedge y = 0$ and $x \vee y = 1$. A lattice is said to be complemented if every element has a complement. Observe that the notions of complement and complemented lattice are manifestly self-dual. Since the notion of distributive lattice is self-dual, so therefore is the notion of Boolean algebra.)
- Example: Probably almost everyone reading this knows the archetypal example of a Boolean algebra: a power set $P X$, ordered by subset inclusion. As we know, this is a distributive lattice, and the complement $\neg S := X - S$ of a subset $S \subseteq X$ satisfies $S \wedge \neg S = \emptyset$ and $S \vee \neg S = X$.
- Example: Also well known is that the Boolean algebra axioms mirror the usual interactions between conjunction $\wedge$, disjunction $\vee$, and negation $\neg$ in ordinary classical logic. In particular, given a theory $T$, there is a preorder whose elements are sentences (closed formulas) $p$ of $T$, ordered by $p \leq q$ if the entailment $p \vdash q$ is provable in $T$ using classical logic. By passing to logical equivalence classes ($p \equiv q$ iff $p \vdash q$ and $q \vdash p$ in $T$), we get a poset with meets, joins, and complements satisfying the Boolean algebra axioms. This is called the Lindenbaum algebra of the theory $T$.
Exercise: Give an example of a complemented lattice which is not distributive.
As a possible leading hint for the previous exercise, here is a first order of business:
Proposition: In a distributive lattice, complements of elements are unique when they exist.

Proof: If both $y$ and $z$ are complementary to $x$, then $y = y \wedge 1 = y \wedge (x \vee z) = (y \wedge x) \vee (y \wedge z) = 0 \vee (y \wedge z) = y \wedge z$. Since $y = y \wedge z$, we have $y \leq z$. Similarly $z \leq y$, so $y = z$. $\Box$

The definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a Heyting algebra is a lattice which admits an implication operator $\Rightarrow$ satisfying an adjoint relationship with the meet operator: $x \wedge y \leq z$ if and only if $x \leq (y \Rightarrow z)$).
Theorem: A lattice is a Boolean algebra if and only if it is a Heyting algebra in which either of the following properties holds:

1. $x \wedge y \leq z$ if and only if $y \leq \neg x \vee z$;
2. $\neg \neg x = x$ for all elements $x$.
Proof: First let $B$ be a Boolean algebra, and let $\neg x$ denote the complement of an element $x$. Then I claim that $x \wedge y \leq z$ if and only if $y \leq \neg x \vee z$, proving that $B$ admits an implication $x \Rightarrow z := \neg x \vee z$. Then, taking $z = 0$, it follows that the Heyting negation $x \Rightarrow 0$ coincides with the complement $\neg x$, whence 1. follows. Also, since (by definition of complement) $y$ is the complement of $x$ if and only if $x$ is the complement of $y$, we have $\neg \neg x = x$, whence 2. follows.
[Proof of claim: if $x \wedge y \leq z$, then $y = 1 \wedge y = (\neg x \vee x) \wedge y = (\neg x \wedge y) \vee (x \wedge y) \leq \neg x \vee z$. On the other hand, if $y \leq \neg x \vee z$, then $x \wedge y \leq x \wedge (\neg x \vee z) = (x \wedge \neg x) \vee (x \wedge z) \leq z$. This completes the proof of the claim and of the forward implication.]
In the other direction, given a lattice which satisfies 1., it is automatically a Heyting algebra (with implication $x \Rightarrow z := \neg x \vee z$). In particular, it is distributive. From $\neg x \leq \neg x \vee 0$, we have (from 1.) $x \wedge \neg x \leq 0$; since $0 \leq x \wedge \neg x$ is automatic by definition of the bottom element $0$, we get $x \wedge \neg x = 0$. From $x \wedge 1 \leq x$, we have also (from 1.) that $1 \leq \neg x \vee x$; since $\neg x \vee x \leq 1$ is automatic by definition of the top element $1$, we have $\neg x \vee x = 1$. Thus under 1., every element $x$ has a complement $\neg x$.
On the other hand, suppose $B$ is a Heyting algebra satisfying 2.: $\neg \neg x = x$. As above, we know $x \wedge \neg x = 0$. By the corollary below, we also know the function $\neg = (-) \Rightarrow 0$ takes 0 to 1 and joins to meets (De Morgan law); since condition 2. is that $\neg$ is its own inverse, it follows that $\neg$ also takes meets to joins. Hence $x \vee \neg x = \neg(\neg x \wedge \neg \neg x) = \neg 0 = 1$. Thus for a Heyting algebra which satisfies 2., every element $x$ has a complement $\neg x$. This completes the proof. $\Box$
- Exercise: Show that Boolean algebras can also be characterized as meet-semilattices $B$ equipped with an operation $\neg: B \to B$ for which $x \leq \neg y$ if and only if $x \wedge y = 0$.
The proof above invoked the De Morgan law $\neg(x \vee y) = \neg x \wedge \neg y$. The claim is that this De Morgan law (not the other, $\neg(x \wedge y) = \neg x \vee \neg y$!) holds in a general Heyting algebra — the relevant result was actually posed as an exercise from the previous lecture:
Lemma: For any element $y$ of a Heyting algebra $X$, the function $(-) \Rightarrow y: X \to X$ is an order-reversing map (equivalently, an order-preserving map $X^{op} \to X$, or an order-preserving map $X \to X^{op}$). It is adjoint to itself, in the sense that $(-) \Rightarrow y: X^{op} \to X$ is right adjoint to $(-) \Rightarrow y: X \to X^{op}$.
Proof: First, we show that $x \leq z$ in $X$ (equivalently, $z \leq x$ in $X^{op}$) implies $(z \Rightarrow y) \leq (x \Rightarrow y)$. But this conclusion holds iff $x \wedge (z \Rightarrow y) \leq y$, which is clear from $x \wedge (z \Rightarrow y) \leq z \wedge (z \Rightarrow y) \leq y$. Second, the adjunction holds because

$(w \Rightarrow y) \leq x$ in $X^{op}$

if and only if

$x \leq (w \Rightarrow y)$ in $X$

if and only if

$x \wedge w \leq y$ in $X$

if and only if

$w \wedge x \leq y$ in $X$

if and only if

$w \leq (x \Rightarrow y)$ in $X$. $\Box$
Corollary: $(-) \Rightarrow y$ takes any inf which exists in $X^{op}$ to the corresponding inf in $X$. Equivalently, it takes any sup in $X$ to the corresponding inf in $X$, i.e., $(\sup_i x_i) \Rightarrow y = \inf_i (x_i \Rightarrow y)$. (In particular, this applies to finite joins in $X$, and in particular, it applies to the case $y = 0$, where we conclude, e.g., the De Morgan law $\neg(x \vee x') = \neg x \wedge \neg x'$.)
- Remark: If we think of sups as sums and infs as products, then we can think of implications $x \Rightarrow y$ as behaving like exponentials $y^x$. Indeed, our earlier result that $x \Rightarrow (-)$ preserves infs $\bigwedge_i y_i$ can then be recast in exponential notation as saying $(\prod_i y_i)^x = \prod_i (y_i)^x$, and our present corollary that $(-) \Rightarrow y$ takes sups to infs can then be recast as saying $y^{\sum_i x_i} = \prod_i y^{x_i}$. Later we will state another exponential law for implication. It is correct to assume that this is no notational accident!
Let me reprise part of the lemma (in the case $y = 0$), because it illustrates a situation which comes up over and over again in mathematics. In part it asserts that $\neg = (-) \Rightarrow 0$ is order-reversing, and that there is a three-way equivalence:

$x \leq \neg w$ if and only if $x \wedge w = 0$ if and only if $w \leq \neg x$.
This situation is an instance of what is called a "Galois connection" in mathematics. If $X$ and $Y$ are posets (or even preorders), a Galois connection between them consists of two order-reversing functions $F: X \to Y$, $G: Y \to X$ such that for all $x \in X, y \in Y$, we have $y \leq F(x)$ if and only if $x \leq G(y)$. (It's actually an instance of an adjoint pair: if we consider $F$ as an order-preserving map $X \to Y^{op}$ and $G$ an order-preserving map $Y^{op} \to X$, then $F(x) \leq y$ in $Y^{op}$ if and only if $x \leq G(y)$ in $X$.)
Here are some examples:
- The original example arises of course in Galois theory. If $k$ is a field and $k \subseteq E$ is a finite Galois extension with Galois group $G = \mathrm{Gal}(E/k)$ (of field automorphisms $g: E \to E$ which fix the elements belonging to $k$), then there is a Galois connection consisting of maps $\mathrm{Fix}: P G \to P E$ and $\mathrm{Stab}: P E \to P G$. This works as follows: to each subset $S \subseteq G$, define $\mathrm{Fix}(S)$ to be $\{x \in E: g(x) = x \text{ for all } g \in S\}$. In the other direction, to each subset $T \subseteq E$, define $\mathrm{Stab}(T)$ to be $\{g \in G: g(x) = x \text{ for all } x \in T\}$. Both $\mathrm{Fix}$ and $\mathrm{Stab}$ are order-reversing (for example, the larger the subset $S \subseteq G$, the more stringent the conditions for an element $x \in E$ to belong to $\mathrm{Fix}(S)$). Moreover, we have

$T \subseteq \mathrm{Fix}(S)$ iff ($g(x) = x$ for all $g \in S, x \in T$) iff $S \subseteq \mathrm{Stab}(T)$

so we do get a Galois connection. It is moreover clear that for any $S \subseteq G$, $\mathrm{Fix}(S)$ is an intermediate subfield between $k$ and $E$, and for any $T \subseteq E$, $\mathrm{Stab}(T)$ is a subgroup of $G$. A principal result of Galois theory is that $\mathrm{Fix}$ and $\mathrm{Stab}$ are inverse to one another when restricted to the lattice of subgroups of $G$ and the lattice of fields intermediate between $k$ and $E$. Such a bijective correspondence induced by a Galois connection is called a Galois correspondence.
- Another basic Galois connection arises in algebraic geometry, between subsets $J \subseteq k[x_1, \ldots, x_n]$ (of a polynomial algebra over an algebraically closed field $k$; algebraic closedness is needed for the Nullstellensatz cited below) and subsets $V \subseteq k^n$. Given $J$, define $Z(J)$ (the zero locus of $J$) to be $\{a \in k^n: p(a) = 0 \text{ for all } p \in J\}$. On the other hand, define $I(V)$ (the ideal of $V$) to be $\{p \in k[x_1, \ldots, x_n]: p(a) = 0 \text{ for all } a \in V\}$. As in the case of Galois theory above, we clearly have a three-way equivalence

$V \subseteq Z(J)$ iff ($p(a) = 0$ for all $p \in J, a \in V$) iff $J \subseteq I(V)$

so that $Z$, $I$ define a Galois connection between power sets (of the $n$-variable polynomial algebra and of $n$-dimensional affine space $k^n$). One defines an (affine algebraic) variety $V \subseteq k^n$ to be a zero locus of some set. Then, on very general grounds (see below), any variety is the zero locus of its ideal. On the other hand, notice that $I(V)$ is an ideal of the polynomial algebra. Not every ideal $J$ of the polynomial algebra is the ideal of its zero locus, but according to the famous Hilbert Nullstellensatz, those ideals $J$ equal to their radical $\sqrt{J}$ are. Thus, $Z$ and $I$ become inverse to one another when restricted to the lattice of varieties and the lattice of radical ideals, by the Nullstellensatz: there is a Galois correspondence between these objects.
- Both of the examples above are particular cases of a very general construction. Let $X, Y$ be sets and let $R \subseteq X \times Y$ be any relation between them. Then set up a Galois connection which in one direction takes a subset $S \subseteq X$ to $F(S) := \{y \in Y: (x, y) \in R \text{ for all } x \in S\}$, and in the other takes $T \subseteq Y$ to $G(T) := \{x \in X: (x, y) \in R \text{ for all } y \in T\}$. Once again we have a three-way equivalence

$T \subseteq F(S)$ iff ($(x, y) \in R$ for all $x \in S, y \in T$) iff $S \subseteq G(T)$.
There are tons of examples of this flavor.
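The construction is a two-liner in Python; the sketch below (mine, with divisibility as the sample relation) also verifies the closure inequality $S \subseteq G(F(S))$ that the proposition below establishes in general.

```python
def polar_right(R, Y):
    """S |-> {y in Y : (x, y) in R for all x in S}"""
    return lambda S: frozenset(y for y in Y if all((x, y) in R for x in S))

def polar_left(R, X):
    """T |-> {x in X : (x, y) in R for all y in T}"""
    return lambda T: frozenset(x for x in X if all((x, y) in R for y in T))

X, Y = [2, 3, 4], [6, 8, 12]
R = {(x, y) for x in X for y in Y if y % x == 0}   # divisibility relation
F, G = polar_right(R, Y), polar_left(R, X)

S = frozenset({2, 4})
print(sorted(F(S)))      # [8, 12]: the common multiples of 2 and 4 in Y
print(sorted(G(F(S))))   # [2, 4]: this S happens to be already closed
assert S <= G(F(S))      # the closure inequality of a Galois connection
```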
As indicated above, a Galois connection between posets $X, Y$ is essentially the same thing as an adjoint pair between the posets $X$ and $Y^{op}$ (or between $X^{op}$ and $Y$ if you prefer; Galois connections are after all symmetric in $X, Y$). I would like to record a few basic results about Galois connections/adjoint pairs.
Proposition:

- 1. Given order-reversing maps $F: X \to Y$, $G: Y \to X$ which form a Galois connection, we have $x \leq G F(x)$ for all $x$ and $y \leq F G(y)$ for all $y$. (Given poset maps $F: X \to Y$, $G: Y \to X$ which form an adjoint pair $F \dashv G$, we have $x \leq G F(x)$ for all $x$ and $F G(y) \leq y$ for all $y$.)
- 2. Given a Galois connection as above, $F(x) = F G F(x)$ for all $x$ and $G(y) = G F G(y)$ for all $y$. (Given an adjoint pair $F \dashv G$ as above, the same equations hold.) Therefore a Galois connection induces a Galois correspondence between the elements of the form $F(x)$ and the elements of the form $G(y)$.
Proof: (1.) It suffices to prove the statements for adjoint pairs. But under the assumption $F \dashv G$, $x \leq G F(x)$ if and only if $F(x) \leq F(x)$, which is certainly true. The other statement is dual.

(2.) Again it suffices to prove the equations for the adjoint pair. Applying the order-preserving map $F$ to $x \leq G F(x)$ from 1. gives $F(x) \leq F G F(x)$. Applying $F G(y) \leq y$ from 1. to $y = F(x)$ gives $F G F(x) \leq F(x)$. Hence $F(x) = F G F(x)$. The other equation is dual. $\Box$
Incidentally, the equations of 2. show why an algebraic variety $V$ is the zero locus of its ideal (see example 2. above): if $V = Z(J)$ for some set of polynomials $J$, then $V = Z(J) = Z(I(Z(J))) = Z(I(V))$. They also show that for any element $x$ in a Heyting algebra, we have $\neg \neg \neg x = \neg x$, even though $\neg \neg y = y$ is in general false.
Let $(F, G)$ be a Galois connection (or let $F \dashv G$ be an adjoint pair). By the proposition, $G F: X \to X$ is an order-preserving map with the following properties:

$x \leq G F(x)$ for all $x \in X$;
$G F G F(x) = G F(x)$ for all $x \in X$.
Poset maps $c: X \to X$ with these properties are called closure operators. We have earlier discussed examples of closure operators: if for instance $G$ is a group, then the operator $c: P G \to P G$ which takes a subset $S \subseteq G$ to the subgroup $\langle S \rangle$ generated by $S$ is a closure operator. Or, if $X$ is a topological space, then the operator $c: P X \to P X$ which takes a subset $S \subseteq X$ to its topological closure $\bar{S}$ is a closure operator. Or, if $X$ is a poset, then the operator $c: P X \to P X$ which takes a subset $S \subseteq X$ to its down-closure $\downarrow S = \{x \in X: x \leq s \text{ for some } s \in S\}$ is a closure operator. Examples like these can be multiplied at will.
One virtue of closure operators is that they give a useful means of constructing new posets from old. Specifically, if $c: X \to X$ is a closure operator, then a fixed point of $c$ (or a $c$-closed element of $X$) is an element $x$ such that $c(x) = x$. The collection $\mathrm{Fix}(c)$ of fixed points is partially ordered by the order in $X$. For example, the lattice of fixed points of the operator $c: P G \to P G$ above is the lattice of subgroups of $G$. For any closure operator $c$, notice that $\mathrm{Fix}(c)$ is the same as the image $c(X)$ of $c$.
One particular use is that the fixed points of the double negation closure $\neg \neg$ on a Heyting algebra $X$ form a Boolean algebra $\mathrm{Fix}(\neg \neg)$, and the map $\neg \neg: X \to \mathrm{Fix}(\neg \neg)$ is a Heyting algebra map. This is not trivial! And it gives a means of constructing some rather exotic Boolean algebras ("atomless Boolean algebras") which may not be so familiar to many readers.
The following exercises are in view of proving these results. If no one else does, I will probably give solutions next time or sometime soon.
Exercise: If $X$ is a Heyting algebra and $x, y, z \in X$, prove the "exponential law" $(x \wedge y) \Rightarrow z = x \Rightarrow (y \Rightarrow z)$. Conclude that $\neg(x \wedge y) = x \Rightarrow \neg y$.
Exercise: We have seen that $x \leq \neg \neg x$ in a Heyting algebra. Use this to prove $\neg \neg \neg x = \neg x$.
Exercise: Show that double negation $\neg \neg$ on a Heyting algebra preserves finite meets. (The inequality $\neg \neg (x \wedge y) \leq \neg \neg x \wedge \neg \neg y$ is easy. The reverse inequality takes more work; try using the previous two exercises.)
Exercise: If $c: X \to X$ is a closure operator, show that the inclusion map $i: \mathrm{Fix}(c) \to X$ is right adjoint to the projection $c: X \to \mathrm{Fix}(c)$ to the image of $c$. Conclude that meets of elements in $\mathrm{Fix}(c)$ are calculated as they would be as elements in $X$, and also that $c: X \to \mathrm{Fix}(c)$ preserves joins.
Exercise: Show that the fixed points of the double negation operator on a topology (as Heyting algebra) are the regular open sets, i.e., those open sets equal to the interior of their closure. Give some examples of non-regular open sets. Incidentally, is the lattice you get by taking the opposite of a topology also a Heyting algebra?
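For the last exercise, recall that the Heyting negation of an open set $U$ is the largest open disjoint from $U$, i.e., the interior of its complement, so $\neg \neg U$ is the interior of the closure of $U$. For instance, $(0,1) \cup (1,2)$ is a non-regular open subset of the real line: its double negation is $(0,2)$. On a finite topology, everything can be computed exhaustively; a little Python sketch of mine:

```python
# A five-open topology on {0, 1, 2}.
tau = [frozenset(s) for s in [(), (0,), (2,), (0, 2), (0, 1, 2)]]

def neg(U):
    """Heyting negation in the lattice of opens: largest open disjoint from U."""
    return frozenset().union(*(V for V in tau if not (U & V)))

print(sorted(sorted(U) for U in tau if neg(neg(U)) != U))   # [[0, 2]]
print(sorted(sorted(U) for U in tau if neg(neg(U)) == U))   # [[], [0], [0, 1, 2], [2]]
# Fix(neg neg) = {empty, {0}, {2}, X} is a 4-element Boolean algebra, although
# the topology itself is not Boolean: {0, 2} has no complement among the opens.
```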