I wish to bring the attention of our readers to the Carnival of Mathematics hosted by Charles at Rigorous Trivialities. I guess most of you already know about it. Among other articles/posts, one of Todd's recent posts, Basic Category Theory I, is part of the carnival. He followed it up with another post titled Basic Category Theory II, and there will be a third post on the same topic some time soon. This sub-series of posts on basic category theory, if you recall, is part of the larger series on Stone Duality, which all began with Toward Stone Duality: Posets and Meets. Hope you enjoy the Carnival!
In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.
Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.
I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:
Definition: A Boolean ring is a commutative ring (with identity $1$) in which every element $x$ is idempotent, i.e., satisfies $x^2 = x$.
Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.
Proposition 1: For every element $x$ in a Boolean ring, $x + x = 0$.

Proof: By idempotence, we have $x + x = (x + x)^2 = x^2 + x^2 + x^2 + x^2$. Since $x^2 = x$, this says $x + x = (x + x) + (x + x)$, and we may additively cancel $x + x$ in the ring to conclude $0 = x + x$.
This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field $\mathbb{Z}_2$ consisting of two elements. I won't go into details about this, only that it follows readily from the proposition if we define a vector space over $\mathbb{Z}_2$ to be an abelian group $V$ together with a ring homomorphism $\mathbb{Z}_2 \to \hom(V, V)$ to the ring of abelian group homomorphisms from $V$ to itself (where such homomorphisms are "multiplied" by composing them; the idea is that this ring homomorphism takes an element $\lambda$ to scalar-multiplication $\lambda \cdot - : V \to V$).
Anyway, the point is that we can now apply some linear algebra to study this $\mathbb{Z}_2$-vector space; in particular, a finite Boolean ring $B$ is a finite-dimensional vector space over $\mathbb{Z}_2$. By choosing a basis, we see that $B$ is vector-space isomorphic to $\mathbb{Z}_2^n$ where $n$ is the dimension. So the cardinality of a finite Boolean ring must be of the form $2^n$. Hold that thought!
Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring $B$, we may construct a corresponding Boolean algebra structure on the underlying set of $B$, uniquely determined by the stipulation that the multiplication $xy$ of the Boolean ring match the meet operation $x \wedge y$ of the Boolean algebra. Conversely, given a Boolean algebra $B$, we may construct a corresponding Boolean ring structure on $B$, and this construction is inverse to the previous one.
In one direction, suppose $B$ is a Boolean ring. We know from before that a binary operation on a set $B$ that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of $B$ — can be identified with the meet operation of a meet-semilattice structure on $B$, uniquely specified by taking its partial order to be defined by: $x \leq y$ iff $x = xy$. It immediately follows from this definition that the additive identity $0$ satisfies $0 \leq y$ for all $y$ (is the bottom element), and the multiplicative identity $1$ satisfies $x \leq 1$ for all $x$ (is the top element).
Notice also that $x(1 - x) = x - x^2 = x - x = 0$, by idempotence. This leads one to suspect that $1 - x$ will be the complement of $x$ in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting $1 - (1 - x) = x$, i.e., $x$ is equal to its putative double negation.
Proposition 2: $x \mapsto 1 - x$ is order-reversing.

Proof: Looking at the definition of the order, this says that if $x = xy$, then $1 - y = (1 - y)(1 - x)$. This is immediate: $(1 - y)(1 - x) = 1 - x - y + xy = 1 - x - y + x = 1 - y$.
So, $x \mapsto 1 - x$ is an order-reversing map $B \to B$ (an order-preserving map $B \to B^{op}$) which is a bijection (since it is its own inverse). We conclude that $B \to B^{op} : x \mapsto 1 - x$ is a poset isomorphism. Since $B$ has meets and $B \cong B^{op}$, $B^{op}$ also has meets (and the isomorphism preserves them). But meets in $B^{op}$ are joins in $B$. Hence $B$ has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function $\neg x := 1 - x$ takes meets in $B$ to joins in $B$; that is,

$\neg(x \wedge y) = \neg x \vee \neg y$

or, replacing $x$ by $1 - x$ and $y$ by $1 - y$,

$1 - (1 - x)(1 - y) = x \vee y,$

whence $x \vee y = x + y - xy = x + y + xy$, using Proposition 1 above.
Proposition 3: $1 - x$ is the complement of $x$.

Proof: We already saw $x \wedge (1 - x) = x(1 - x) = 0$. Also $x \vee (1 - x) = x + (1 - x) + x(1 - x) = x + (1 - x) + 0 = 1$, using the formula for join we just computed. This completes the proof.
So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have $(x \vee y) \wedge z = (x + y + xy)z = xz + yz + xyz$. On the other hand, $(x \wedge z) \vee (y \wedge z) = xz + yz + (xz)(yz) = xz + yz + xyz^2 = xz + yz + xyz$, using idempotence once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.
Naturally, we want to invert the process: starting with a Boolean algebra structure on a set $B$, construct a corresponding Boolean ring structure on $B$ whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy $x \vee y = x + y + x \wedge y$, so that $x \vee y = x + y$ if $x \wedge y = 0$ (i.e., if $x$ and $y$ are disjoint): this gives a partial definition of addition. Continuing this thought, if we express $x \vee y$ as a disjoint sum of some element $z$ and $x \wedge y$, we then conclude $x \vee y = z + (x \wedge y)$, whence $z = x + y$ by cancellation. In the case where the Boolean algebra is a power set $P(X)$, this element $z$ is the symmetric difference of $x$ and $y$. This generalizes: if we define the addition by the symmetric difference formula $x + y := (x \wedge \neg y) \vee (\neg x \wedge y)$, then $x \wedge \neg y$ is disjoint from $\neg x \wedge y$, so that

$(x + y) + (x \wedge y) = (x \wedge \neg y) \vee (\neg x \wedge y) \vee (x \wedge y) = x \vee y$

after a short calculation using the complementation and distributivity axioms. After more work, one shows that $+$ is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.
Exercise: Verify this last assertion.
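As a computational warm-up to this exercise (no substitute for the proof, and not part of the original post; all names below are our own), here is a brute-force check that symmetric difference and intersection satisfy the Boolean ring axioms on the power set of a three-element set:

```python
from itertools import chain, combinations

# Spot-check: on P(X), symmetric difference as + and intersection as *
# satisfy the Boolean ring axioms. X and all helper names are ours.
X = {1, 2, 3}
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]
zero, one = frozenset(), frozenset(X)

for a in subsets:
    assert (a & a) == a              # idempotence of multiplication
    assert (a ^ a) == zero           # Proposition 1: a + a = 0
    assert (a & one) == a            # 1 = X is the multiplicative identity
    for b in subsets:
        assert (a ^ b) == (b ^ a)    # commutativity of +
        for c in subsets:
            assert ((a ^ b) ^ c) == (a ^ (b ^ c))        # associativity of +
            assert (a & (b ^ c)) == ((a & b) ^ (a & c))  # distributivity
print("P(X) is a Boolean ring under symmetric difference and intersection")
```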
However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.
Whereas here, in asserting Boolean algebras "are" Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism $f: B \to C$ between Boolean rings, it is clear that $f$ preserves the meet $x \wedge y = xy$ and join $x \vee y = x + y + xy$ of any two elements $x, y$ [since it preserves multiplication and addition] and of course also the complement $\neg x = 1 + x$ of any $x$; therefore $f$ is a map of the corresponding Boolean algebras. Conversely, a map $f: B \to C$ of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product $xy = x \wedge y$ and sum $x + y = (x \wedge \neg y) \vee (\neg x \wedge y)$ in the corresponding Boolean rings. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different ways of presenting the same underlying Lawvere algebraic theory). In summary,
Theorem 1: The above processes define functors $\mathrm{BoolRing} \to \mathrm{BoolAlg}$ and $\mathrm{BoolAlg} \to \mathrm{BoolRing}$, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.
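Theorem 1 is easy to probe computationally. Below is a minimal sketch (our own illustration, not part of the original text) on $B = P(X)$, using Python's set operations: it checks that join and complement are recovered from the ring operations, and the ring addition from the lattice operations, exactly by the formulas derived above.

```python
from itertools import chain, combinations

# On B = P(X), the ring and algebra operations determine each other.
X = {1, 2, 3}
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

for a in subsets:
    # complement from the ring side: not(a) = 1 + a
    assert (X - a) == (frozenset(X) ^ a)
    for b in subsets:
        # join recovered from ring operations: a or b = a + b + ab
        assert (a | b) == ((a ^ b) ^ (a & b))
        # ring addition recovered from algebra operations:
        # a + b = (a and not b) or (not a and b)
        assert (a ^ b) == ((a & (X - b)) | ((X - a) & b))
```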
- Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.
Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) has cardinality $2^n$ — the same as the cardinality of the power set Boolean algebra $P(X)$ if $X$ has cardinality $n$. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra $B$ is naturally isomorphic to one of the form $P(X)$; one of our tasks is to describe $X$ in terms of $B$ in a "natural" (or rather, functorial) way. From the Boolean ring perspective, $X$ is a basis of the underlying $\mathbb{Z}_2$-vector space of $P(X)$; to pin it down exactly, we use the full ring structure.
$X$ is naturally a basis of $P(X)$; more precisely, under the embedding $i: X \to P(X)$ defined by $i(x) = \{x\}$, every subset $S \subseteq X$ is uniquely a disjoint sum of finitely many elements of $i(X)$:

$S = \sum_{x \in X} a_x \{x\}$

where $a_x \in \mathbb{Z}_2$: naturally, $a_x = 1$ iff $x \in S$. For each $S$, we can treat the coefficient $a_x$ as a function of $x$ valued in $\mathbb{Z}_2$. Let $\hom(X, \mathbb{Z}_2)$ denote the set of functions $X \to \mathbb{Z}_2$; this becomes a Boolean ring under the obvious pointwise definitions $(f + g)(x) := f(x) + g(x)$ and $(fg)(x) := f(x)g(x)$. The function $P(X) \to \hom(X, \mathbb{Z}_2)$ which takes $S$ to the coefficient function $x \mapsto a_x$ is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)
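Here is a small computational rendering of this exercise (our own sketch; the helper `coeff` is a name we introduce): it checks that $S \mapsto (a_x)_{x \in X}$ turns symmetric difference and intersection into pointwise operations mod 2, and is a bijection.

```python
from itertools import chain, combinations

X = (1, 2, 3)
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def coeff(S):
    # the coefficient function of S: a_x = 1 iff x is in S
    return tuple(int(x in S) for x in X)

for S in subsets:
    for T in subsets:
        # + (symmetric difference) becomes pointwise addition mod 2
        assert coeff(S ^ T) == tuple((s + t) % 2
                                     for s, t in zip(coeff(S), coeff(T)))
        # * (intersection) becomes pointwise multiplication
        assert coeff(S & T) == tuple(s * t
                                     for s, t in zip(coeff(S), coeff(T)))

# one-to-one and onto: all 2^|X| bit-tuples occur exactly once
assert len({coeff(S) for S in subsets}) == 2 ** len(X)
```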
Or, we can turn this around: for each $x \in X$, we get a Boolean ring map $\mathrm{ev}_x: P(X) \to \mathbb{Z}_2$ which takes $S$ to $a_x$. Let $\mathrm{Bool}(P(X), \mathbb{Z}_2)$ denote the set of Boolean ring maps $P(X) \to \mathbb{Z}_2$.
Proposition 4: For a finite set $X$, the function $X \to \mathrm{Bool}(P(X), \mathbb{Z}_2)$ that sends $x$ to $\mathrm{ev}_x$ is a bijection (in other words, an isomorphism).
Proof: We must show that for every Boolean ring map $\phi: P(X) \to \mathbb{Z}_2$, there exists a unique $x \in X$ such that $\phi = \mathrm{ev}_x$, i.e., such that $\phi(S) = a_x$ for all $S \subseteq X$. So let $\phi$ be given, and let $T$ be the intersection (or Boolean ring product) of all $S$ for which $\phi(S) = 1$. Then

$\phi(T) = \prod_{S : \phi(S) = 1} \phi(S) = 1.$

I claim that $T$ must be a singleton $\{x\}$ for some (evidently unique) $x$. For $1 = \phi(T) = \sum_{y \in T} \phi(\{y\})$, forcing $\phi(\{y\}) = 1$ for some $y \in T$. But then $T \subseteq \{y\}$ according to how $T$ was defined, and so $T = \{y\}$. To finish, I now claim $\phi(S) = a_y$ for all $S \subseteq X$. But since $\phi(\{y\}) = 1$, we have $\phi(S) = \phi(S)\phi(\{y\}) = \phi(S \cap \{y\})$, and this is $1$ iff $S \cap \{y\} = \{y\}$ (note $\phi(\emptyset) = 0$) iff $y \in S$ iff $a_y = 1$. This completes the proof.
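For small $X$, Proposition 4 can be confirmed by sheer enumeration. The following sketch is ours (in particular, `is_ring_map` is a helper we define here, not anything from the post): it lists all maps $P(X) \to \mathbb{Z}_2$ for $|X| = 3$, filters out the Boolean ring maps, and checks that they are exactly the three evaluations $\mathrm{ev}_x$.

```python
from itertools import chain, combinations, product

X = (1, 2, 3)
subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def is_ring_map(phi):
    # phi: dict from subsets to {0, 1}; must preserve 1, +, and *
    return (phi[frozenset(X)] == 1 and
            all(phi[S ^ T] == (phi[S] + phi[T]) % 2 and
                phi[S & T] == phi[S] * phi[T]
                for S in subsets for T in subsets))

ring_maps = [phi for phi in
             (dict(zip(subsets, bits))
              for bits in product((0, 1), repeat=len(subsets)))
             if is_ring_map(phi)]

evaluations = [{S: int(x in S) for S in subsets} for x in X]
assert len(ring_maps) == len(X)                    # exactly |X| of them
assert all(ev in ring_maps for ev in evaluations)  # and they are the ev_x
```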
This proposition is a vital clue, for if $B$ is to be isomorphic to a power set $P(X)$ (equivalently, to $\hom(X, \mathbb{Z}_2)$), the proposition says that the $X$ in question can be retrieved reciprocally (up to isomorphism) as $X \cong \mathrm{Bool}(B, \mathbb{Z}_2)$.
With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

$\mathrm{ev}: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$

which sends $b \in B$ to the function $\mathrm{ev}_b$ which maps $\phi \in \mathrm{Bool}(B, \mathbb{Z}_2)$ to $\phi(b)$ (i.e., evaluates $\phi$ at $b$). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that $\mathrm{ev}_{b+c}(\phi) = \mathrm{ev}_b(\phi) + \mathrm{ev}_c(\phi)$ for all $\phi$. But by definition, this is the equation $\phi(b + c) = \phi(b) + \phi(c)$, which holds since $\phi$ is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.
Theorem 2: If $B$ is a finite Boolean ring, then the Boolean ring map $\mathrm{ev}: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$ is an isomorphism. (So, there is a natural isomorphism $B \cong \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$.)
Proof: First we prove injectivity: suppose $b \in B$ is nonzero. Then $\neg b = 1 - b \neq 1$, so the ideal $(\neg b) = \{x \neg b : x \in B\}$ is a proper ideal (every element of it is $\leq \neg b$, so $1$ does not belong). Let $M$ be a maximal proper ideal containing $\neg b$, so that $B/M$ is both a field and a Boolean ring. Then $B/M \cong \mathbb{Z}_2$ (otherwise any element $x \in B/M$ not equal to $0, 1$ would be a zero divisor on account of $x(1 - x) = 0$). The evident composite

$\phi: B \to B/M \cong \mathbb{Z}_2$

yields a homomorphism for which $\phi(\neg b) = 0$, so $\phi(b) = 1$. Therefore $\mathrm{ev}_b(\phi) = 1$, so $\mathrm{ev}_b$ is nonzero, as desired.
Now we prove surjectivity. A function $g: \mathrm{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2$ is determined by the set of elements $\phi$ mapping to $1$ under $g$, and each such homomorphism $\phi: B \to \mathbb{Z}_2$, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let $J$ be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if $x, y$ belong to $J$, then so does $x \vee y = x + y + xy$. Let $j$ be the join of the finitely many elements of $J$; notice $J = (j)$, since every element of $J$ is $\leq j$ and, conversely, anything $\leq j$ lies in $J$ because $j \in J$ (actually, this proves that every ideal of a finite Boolean ring $B$ is principal). In fact, writing $m_\phi$ for the unique element such that $\ker \phi = (m_\phi)$, we have $j = \bigwedge_{\phi: g(\phi) = 1} m_\phi$ (certainly $j \leq m_\phi$ for all such $\phi$, since $j \in J \subseteq \ker \phi$, but also $\bigwedge_\phi m_\phi$ belongs to the intersection of these kernels and hence to $J$, whence $\bigwedge_\phi m_\phi \leq j$).
Now let $b = \neg j$; I claim that $\mathrm{ev}_b = g$, proving surjectivity. We need to show $g(\phi) = \phi(b)$ for all $\phi \in \mathrm{Bool}(B, \mathbb{Z}_2)$. In one direction, we already know from the above that if $g(\phi) = 1$, then $j$ belongs to the kernel of $\phi$, so $\phi(j) = 0$, whence $\phi(b) = \phi(\neg j) = 1$.
For the other direction, suppose $\phi(b) = 1$, or that $\phi(j) = 0$. Now the kernel of $\phi$ is principal, say $\ker \phi = (m)$ for some $m$. We have $j \leq m$, so $\neg m \leq \neg j = \bigvee_{\psi: g(\psi) = 1} \neg m_\psi$ by De Morgan, from which it follows (distributing $\neg m$ over this finite join, and using $\neg m \neq 0$) that $\neg m \wedge \neg m_\psi \neq 0$ for some $\psi$. But then $(m \vee m_\psi)$ is a proper ideal (proper because $\neg(m \vee m_\psi) = \neg m \wedge \neg m_\psi \neq 0$, so $m \vee m_\psi \neq 1$) containing the maximal ideals $(m)$ and $(m_\psi)$; by maximality it follows that $(m) = (m \vee m_\psi) = (m_\psi)$. Since $\phi$ and $\psi$ have the same kernels, they are equal. And therefore $g(\phi) = g(\psi) = 1$. We have now proven both directions of the statement ($\phi(b) = 1$ if and only if $g(\phi) = 1$), and the proof is now complete.
- Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements $b$ and their negations $\neg b$, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset $I \subseteq B$ closed under joins and such that $x \wedge y \in I$ for $x \in I$, $y \in B$, a filter is (by definition) a subset $F \subseteq B$ closed under meets and such that $x \vee y \in F$ whenever $x \in F$, $y \in B$ (this second condition is equivalent to upward-closure: $x \in F$ and $x \leq y$ implies $y \in F$). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if $I$ is an ideal, then the set of negations $\{\neg x : x \in I\}$ is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if $f: B \to C$ is a Boolean algebra map and $F \subseteq C$ is a filter, then the inverse image $f^{-1}(F)$ is a filter, just as the inverse image of an ideal is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.
For those who know some category theory: what is really going on here is that we have a power set functor

$P: \mathrm{FinSet}^{op} \to \mathrm{FinBool}$

(taking a function $f: X \to Y$ between finite sets to the inverse image map $f^{-1}: P(Y) \to P(X)$, which is a map between finite Boolean algebras) and a functor

$\mathrm{Bool}(-, \mathbb{Z}_2): \mathrm{FinBool} \to \mathrm{FinSet}^{op}$

which we could replace by its opposite $\mathrm{Bool}(-, \mathbb{Z}_2)^{op}: \mathrm{FinBool}^{op} \to \mathrm{FinSet}$, and the canonical maps of proposition 4 and theorem 2,

$X \to \mathrm{Bool}(P(X), \mathbb{Z}_2), \qquad B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2),$

are components (at $X$ and $B$) of the counit and unit for an adjunction $\mathrm{Bool}(-, \mathbb{Z}_2) \dashv P$. The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories $\mathrm{FinSet}^{op}$ and $\mathrm{FinBool}$. This is the proper categorical statement of Stone duality in the finite case, or what we are calling "baby Stone duality". I will make some time soon to explain what these terms mean.
My name is Todd Trimble. As regular readers of this blog may have noticed by now, I’ve recently been actively commenting on some of the themes introduced by our host Vishal, and he’s now asked whether I’d like to write some posts of my own. Thank you Vishal for the invitation!
As made clear in some of my comments, my own perspective on a lot of mathematics is greatly informed and influenced by category theory — but that's not what I'm setting out to talk about here, not yet anyway. For reasons not altogether clear to me, the mere mention of category theory often scares people, or elicits other emotional reactions (sneers, chortles, challenges along the lines of "what is this stuff good for, anyway?" — I've seen it all).
Anyway, I’d like to try something a little different this time — instead of blathering about categories, I’ll use some of Vishal’s past posts as a springboard to jump into other mathematics which I find interesting, and I won’t need to talk about categories at all unless a strong organic need is felt for it (or if it’s brought back “by popular demand”). But, the spirit if not the letter of categorical thinking will still strongly inform my exposition — those readers who already know categories will often be able to read between the lines and see what I’m up to. Those who do not will still be exposed to what I believe are powerful categorical ways of thinking.
I’d like to start off talking about a very pretty area of mathematics which ties together various topics in algebra, topology, logic, geometry… I’m talking about mathematics in the neighborhood of so-called “Stone duality” (after the great Marshall Stone). I’m hoping to pitch this as though I were teaching an undergraduate course, at roughly a junior or senior level in a typical American university. [Full disclosure: I’m no longer a professional academic, although I often play one on the Internet 🙂 ] At times I will allude to topics which presuppose some outside knowledge, but hey, that’s okay. No one’s being graded here (thank goodness!).
First, I need to discuss some preliminaries which will eventually lead up to the concept of Boolean algebra — the algebra which underlies propositional logic.
A partial order on a set $X$ is a binary relation (a subset $R \subseteq X \times X$), where we write $x \leq y$ if $(x, y) \in R$, satisfying the following conditions:

- (Reflexivity) $x \leq x$ for every $x \in X$;
- (Transitivity) For all $x, y, z \in X$, ($x \leq y$ and $y \leq z$) implies $x \leq z$;
- (Antisymmetry) For all $x, y \in X$, ($x \leq y$ and $y \leq x$) implies $x = y$.
A partially ordered set (poset for short) is a set equipped with a partial order. Posets occur all over mathematics, and many are likely already familiar to you. Here are just a few examples:
- The set of natural numbers ordered by divisibility ($x \leq y$ if $x$ divides $y$).
- The set of subsets of a set $X$ (where $\leq$ is the relation of inclusion $A \subseteq B$ of one subset in another).
- The set of subgroups of a group $G$ (where again $\leq$ is the inclusion relation between subgroups).
- The set of ideals in a ring $R$ (ordered by inclusion).
The last three examples clearly follow a similar pattern, and in fact, there is a sense in which every poset P can be construed in just this way: as a set of certain types of subset ordered by inclusion. This is proved in a way very reminiscent of the Cayley lemma (that every group can be represented as a group of permutations of a set). You can think of such results as saying “no matter how abstractly a group [or poset] may be presented, it can always be represented in a concrete way, in terms of permutations [or subsets]”.
To make this precise, we need one more notion, parallel to the notion of group homomorphism. If $X$ and $Y$ are posets, a poset map from $X$ to $Y$ is a function $f: X \to Y$ which preserves the partial order (that is, if $x \leq y$ in $X$, then $f(x) \leq f(y)$ in $Y$). Here then is our representation result:
Lemma (Dedekind): Any poset $X$ may be faithfully represented in its power set $P(X)$, partially ordered by inclusion. That is, there exists a poset map $i: X \to P(X)$ that is injective (what we mean by “faithful”: the map is one-to-one).
Proof: Define $i: X \to P(X)$ to be the function which takes $x \in X$ to the subset $\{a \in X : a \leq x\}$ (which we view as an element of the power set). To check this is a poset map, we must show that if $x \leq y$, then $i(x) = \{a \in X : a \leq x\}$ is included in $i(y) = \{a \in X : a \leq y\}$. This is easy: if $a$ belongs to $i(x)$, i.e., if $a \leq x$, then from $x \leq y$ and the transitivity property, $a \leq y$, hence $a$ belongs to $i(y)$.
Finally, we must show that $i$ is injective; that is, $i(x) = i(y)$ implies $x = y$. In other words, we must show that if

$\{a \in X : a \leq x\} = \{a \in X : a \leq y\},$

then $x = y$. But, by the reflexivity property, we know $x \leq x$; therefore $x$ belongs to the set displayed on the left, and therefore to the set on the right. Thus $x \leq y$. By similar reasoning, $y \leq x$. Then, by the antisymmetry property, $x = y$, as desired.
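Here is the Dedekind embedding rendered concretely (our own sketch, using the divisibility order on $\{1, \ldots, 12\}$, where $i(x)$ is just the set of divisors of $x$):

```python
elements = list(range(1, 13))
leq = lambda a, x: x % a == 0        # a <= x  means  "a divides x"

def i(x):
    # the Dedekind embedding: i(x) = {a : a <= x}
    return frozenset(a for a in elements if leq(a, x))

# i is a poset map: x <= y implies i(x) is included in i(y)
assert all(i(x) <= i(y) for x in elements for y in elements if leq(x, y))
# i is injective: distinct elements have distinct divisor sets
assert len({i(x) for x in elements}) == len(elements)
```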
The Dedekind lemma turns out to be extremely useful (it and the Cayley lemma are subsumed under an even more useful result called the Yoneda lemma — perhaps more on this later). Before I illustrate its uses, let me rephrase slightly the injectivity property of the Dedekind embedding $i$: it says,

If (for all $a$ in $X$, $a \leq x$ iff $a \leq y$), then $x = y$.
This principle will be used over and over again, so I want to give it a name: I’ll call it the Yoneda principle.
Here is a typical use. Given elements $x, y$ in a poset $X$, we say that an element $m$ is a meet of $x$ and $y$ if for all $a \in X$,

$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).
Fact: there is at most one meet of $x$ and $y$. That is, if $m$ and $n$ are both meets of $x$ and $y$, then $m = n$.
Proof: For all $a$, $a \leq m$ if and only if ($a \leq x$ and $a \leq y$) if and only if $a \leq n$. Therefore, $m = n$ by the Yoneda principle.
Therefore, we can refer to the meet of two elements $x$ and $y$ (if it exists); it is usually denoted $x \wedge y$. Because $x \wedge y \leq x \wedge y$, we have $x \wedge y \leq x$ and $x \wedge y \leq y$.
Example: In a concrete poset, like the poset of all subsets of a set or subgroups of a group, the meet of two elements is their intersection.
Example: Consider the natural numbers ordered by divisibility. The meet $x \wedge y$ satisfies $x \wedge y \leq x$ and $x \wedge y \leq y$ (i.e., $x \wedge y$ divides both $x$ and $y$). At the same time, the meet property says that any number which divides both $x$ and $y$ must also divide $x \wedge y$. It follows that the meet in this poset is the gcd of $x$ and $y$.
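This characterization is easy to test computationally (our own check, using Python's `math.gcd`): a number $a$ divides $\gcd(x, y)$ exactly when it divides both $x$ and $y$.

```python
import math

divides = lambda a, b: b % a == 0
for x in range(1, 25):
    for y in range(1, 25):
        m = math.gcd(x, y)
        # the defining property of the meet, with gcd playing the role of m
        assert all(divides(a, m) == (divides(a, x) and divides(a, y))
                   for a in range(1, 25))
```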
Here are some more results which can be proved with the help of the Yoneda principle. I’ll just work through one of them, and leave the others as exercises.
1. $x \wedge x = x$ (idempotence of meet)
2. $x \wedge y = y \wedge x$ (commutativity of meet)
3. $(x \wedge y) \wedge z = x \wedge (y \wedge z)$ (associativity of meet)
To prove 3., we can use the Yoneda principle: for all $a$ in the poset, we have

$a \leq (x \wedge y) \wedge z$
iff ($a \leq x \wedge y$ and $a \leq z$)
iff (($a \leq x$ and $a \leq y$) and $a \leq z$)
iff ($a \leq x$ and ($a \leq y$ and $a \leq z$))
iff ($a \leq x$ and $a \leq y \wedge z$)
iff $a \leq x \wedge (y \wedge z)$.

Hence $(x \wedge y) \wedge z = x \wedge (y \wedge z)$, by Yoneda.
In fact, we can unambiguously refer to the meet $x_1 \wedge x_2 \wedge \ldots \wedge x_n$ of any finite number of elements by the evident property:

$a \leq x_1 \wedge x_2 \wedge \ldots \wedge x_n$ iff ($a \leq x_1$ and $a \leq x_2$ and $\ldots$ and $a \leq x_n$)

— this uniquely defines the meet on the left, by Yoneda, and the order in which the $x_i$ appear makes no difference.
But wait — what if the number of elements is zero? That is, what is the empty meet? Well, the condition “$a \leq x_1$ and $\ldots$ and $a \leq x_n$” becomes vacuous (there is no $x_i$ for which the condition is not satisfied), so whatever the empty meet is, call it $\top$, we must have $a \leq \top$ for all $a$. So $\top$ is just the top element of the poset (if one exists). Another name for the top element is “the terminal element”, and another notation for it is ‘$1$’.
Definition: A meet semi-lattice is a poset which has all finite meets, including the empty one.
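One pleasant consequence of including the empty meet: finite meets can be computed by folding the binary meet, starting from the top element. A small sketch (ours, under the assumption that we work in the poset of divisors of 60 under divisibility, where the meet is gcd and the top element is 60):

```python
import math
from functools import reduce

top = 60   # the top element of the divisors-of-60 poset

def meet(xs):
    # n-ary meet as a fold of the binary meet; meet of [] is the top element
    return reduce(math.gcd, xs, top)

assert meet([]) == 60            # the empty meet is the top element
assert meet([12, 20]) == 4       # gcd(12, 20)
assert meet([12, 20, 30]) == 2
```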
Exercises:
- Prove that in a meet-semilattice, $x \wedge 1 = x$ for all $x$.
- Is there a top element for the natural numbers ordered by divisibility?