In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.
Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.
I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:
Definition: A Boolean ring is a commutative ring (with identity $1$) in which every element $x$ is idempotent, i.e., satisfies $x^2 = x$.
Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.
Proposition 1: For every element $x$ in a Boolean ring, $x + x = 0$.
Proof: By idempotence, we have $x + x = (x + x)^2 = x^2 + x^2 + x^2 + x^2 = (x + x) + (x + x)$. Since $x + x = (x + x) + (x + x)$, we may additively cancel $x + x$ in the ring to conclude $0 = x + x$.
This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field $\mathbb{Z}_2$ consisting of two elements. I won't go into details about this, only that it follows readily from the proposition if we define a vector space over $\mathbb{Z}_2$ to be an abelian group $V$ together with a ring homomorphism $\mathbb{Z}_2 \to \mathrm{End}(V)$ to the ring of abelian group homomorphisms from $V$ to itself (where such homomorphisms are "multiplied" by composing them; the idea is that this ring homomorphism takes an element $r$ to scalar-multiplication $r \cdot (-)$).
Anyway, the point is that we can now apply some linear algebra to study this $\mathbb{Z}_2$-vector space; in particular, a finite Boolean ring $B$ is a finite-dimensional vector space over $\mathbb{Z}_2$. By choosing a basis, we see that $B$ is vector-space isomorphic to $\mathbb{Z}_2^n$ where $n$ is the dimension. So the cardinality of a finite Boolean ring must be of the form $2^n$. Hold that thought!
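Here is a small Python sketch (my own illustration, not part of the post) of the counting argument: the additive group of $\mathbb{Z}_2^n$ under componentwise XOR has $2^n$ elements, and the $n$ standard basis vectors span it over $\mathbb{Z}_2$.

```python
from itertools import product

# Illustration: the underlying additive group of the Boolean ring Z_2^n,
# with componentwise addition mod 2 (XOR).  It has 2^n elements, and the
# n "standard basis" vectors span it over Z_2.
n = 3
vectors = list(product([0, 1], repeat=n))
assert len(vectors) == 2 ** n

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))  # vector addition over Z_2

basis = [tuple(int(i == j) for j in range(n)) for i in range(n)]
for v in vectors:
    # every vector is the Z_2-linear combination of the basis vectors
    # picked out by its own coordinates
    combo = (0,) * n
    for i, bit in enumerate(v):
        if bit:
            combo = add(combo, basis[i])
    assert combo == v
```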
Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring $B$, we may construct a corresponding Boolean algebra structure on the underlying set of $B$, uniquely determined by the stipulation that the multiplication of the Boolean ring match the meet operation of the Boolean algebra. Conversely, given a Boolean algebra $A$, we may construct a corresponding Boolean ring structure on $A$, and this construction is inverse to the previous one.
In one direction, suppose $B$ is a Boolean ring. We know from before that a binary operation on a set that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of $B$ — can be identified with the meet operation of a meet-semilattice structure on $B$, uniquely specified by taking its partial order to be defined by: $x \leq y$ iff $x = xy$. It immediately follows from this definition that the additive identity $0$ satisfies $0 \leq x$ for all $x$ (is the bottom element), and the multiplicative identity $1$ satisfies $x \leq 1$ for all $x$ (is the top element).
Notice also that $x(1 + x) = x + x^2 = x + x = 0$, by idempotence and Proposition 1. This leads one to suspect that $1 + x$ will be the complement of $x$ in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting $x = 1 + (1 + x)$, i.e., $x$ is equal to its putative double negation.
Proposition 2: $x \mapsto 1 + x$ is order-reversing.
Proof: Looking at the definition of the order, this says that if $x = xy$, then $1 + y = (1 + y)(1 + x)$. This is immediate: $(1 + y)(1 + x) = 1 + x + y + xy = 1 + x + y + x = 1 + y$.
So, $\neg := 1 + (-)$ is an order-reversing map $B \to B$ (an order-preserving map $B \to B^{op}$) which is a bijection (since it is its own inverse). We conclude that $B \to B^{op}$ is a poset isomorphism. Since $B$ has meets and $B \cong B^{op}$, $B^{op}$ also has meets (and the isomorphism preserves them). But meets in $B^{op}$ are joins in $B$. Hence $B$ has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function $\neg$ takes meets in $B$ to joins in $B$; that is,
$$\neg(x \wedge y) = \neg x \vee \neg y,$$
or, replacing $x$ by $\neg x$ and $y$ by $\neg y$,
$$x \vee y = \neg(\neg x \wedge \neg y),$$
whence $x \vee y = 1 + (1 + x)(1 + y) = 1 + (1 + x + y + xy) = x + y + xy$, using Proposition 1 above.
Proposition 3: $1 + x$ is the complement of $x$.
Proof: We already saw $x \wedge (1 + x) = x(1 + x) = 0$. Also
$$x \vee (1 + x) = x + (1 + x) + x(1 + x) = x + 1 + x + 0 = 1,$$
using the formula for join we just computed. This completes the proof.
So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have $x \wedge (y \vee z) = x(y + z + yz) = xy + xz + xyz$. On the other hand, $(x \wedge y) \vee (x \wedge z) = xy + xz + (xy)(xz) = xy + xz + x^2 yz = xy + xz + xyz$, using idempotence once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.
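The whole ring-to-algebra dictionary (meet $= xy$, join $= x + y + xy$, complement $= 1 + x$) can be checked exhaustively on a small Boolean ring. A Python sketch of my own, using $B = \mathbb{Z}_2^n$ with componentwise arithmetic mod $2$:

```python
from itertools import product

# Check the ring-to-algebra dictionary exhaustively on B = Z_2^n:
#   meet = ring product, join = x + y + xy, complement = 1 + x.
n = 3
elems = list(product([0, 1], repeat=n))
zero, one = (0,) * n, (1,) * n

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def mul(u, v):
    return tuple(a * b for a, b in zip(u, v))

def join(u, v):
    return add(add(u, v), mul(u, v))  # x + y + xy

def neg(u):
    return add(one, u)  # 1 + x

for x, y, z in product(elems, repeat=3):
    # distributivity: x meet (y join z) = (x meet y) join (x meet z)
    assert mul(x, join(y, z)) == join(mul(x, y), mul(x, z))
for x in elems:
    assert mul(x, neg(x)) == zero   # x meet (1 + x) = 0
    assert join(x, neg(x)) == one   # x join (1 + x) = 1
```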
Naturally, we want to invert the process: starting with a Boolean algebra structure on a set $A$, construct a corresponding Boolean ring structure on $A$ whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy $x \vee y = x + y + x \wedge y$, so that $x + y = x \vee y$ if $x \wedge y = 0$ (i.e., if $x$ and $y$ are disjoint): this gives a partial definition of addition. Continuing this thought, if we express $x \vee y$ as a disjoint sum of some element $z$ and $x \wedge y$, we then conclude $x \vee y = z + (x \wedge y)$, whence $z = x + y$ by cancellation. In the case where the Boolean algebra is a power set $P(X)$, this element $z$ is the symmetric difference of $x$ and $y$. This generalizes: if we define the addition by the symmetric difference formula $x + y := (\neg x \wedge y) \vee (x \wedge \neg y)$, then $\neg x \wedge y$ is disjoint from $x \wedge \neg y$, so that
$$(x + y) + (x \wedge y) = (\neg x \wedge y) \vee (x \wedge \neg y) \vee (x \wedge y) = x \vee y,$$
after a short calculation using the complementation and distributivity axioms. After more work, one shows that $+$ is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.
Exercise: Verify this last assertion.
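For the power-set case, the exercise can be checked by brute force. A Python sketch of my own: starting from the Boolean algebra $P(X)$, take $+$ to be symmetric difference (`^` on Python sets) and multiplication to be intersection (`&`), and verify the Boolean ring axioms exhaustively.

```python
from itertools import combinations, product

# From the Boolean algebra P(X) to a Boolean ring:
# + is symmetric difference, multiplication is intersection.
X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

for a, b, c in product(subsets, repeat=3):
    assert a ^ b == b ^ a                    # + is commutative
    assert (a ^ b) ^ c == a ^ (b ^ c)        # + is associative
    assert a & (b ^ c) == (a & b) ^ (a & c)  # multiplication distributes over +
for a in subsets:
    assert a ^ frozenset() == a              # 0 is the empty set
    assert a ^ a == frozenset()              # x + x = 0
    assert a & X == a                        # 1 is X itself
    assert a & a == a                        # idempotence
```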
However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.
Whereas here, in asserting Boolean algebras "are" Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism $f: B \to B'$ between Boolean rings, it is clear that $f$ preserves the meet $xy$ and join $x + y + xy$ of any two elements [since it preserves multiplication and addition] and of course also the complement $1 + x$ of any $x$; therefore $f$ is a map of the corresponding Boolean algebras. Conversely, a map of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product $x \wedge y$ and sum $(\neg x \wedge y) \vee (x \wedge \neg y)$ in the corresponding Boolean rings. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different presentations of the same underlying Lawvere algebraic theory). In summary,
Theorem 1: The above processes define functors $\mathbf{BoolRing} \to \mathbf{BoolAlg}$, $\mathbf{BoolAlg} \to \mathbf{BoolRing}$, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.
- Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.
Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) has cardinality $2^n$ — the same as the cardinality of the power set Boolean algebra $P(X)$ if $X$ has cardinality $n$. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra $B$ is naturally isomorphic to one of the form $P(X)$; one of our tasks is to describe $X$ in terms of $B$ in a "natural" (or rather, functorial) way. From the Boolean ring perspective, $X$ is a basis of the underlying $\mathbb{Z}_2$-vector space of $B$; to pin it down exactly, we use the full ring structure.
$X$ is naturally a basis of $P(X)$; more precisely, under the embedding $i: X \to P(X)$ defined by $i(x) = \{x\}$, every subset $S \subseteq X$ is uniquely a disjoint sum of finitely many elements of $i(X)$: $S = \sum_{x \in X} a_x \{x\}$ where $a_x \in \mathbb{Z}_2$: naturally, $a_x = 1$ iff $x \in S$. For each $S$, we can treat the coefficient $a_x$ as a function of $x$ valued in $\mathbb{Z}_2$. Let $\hom(X, \mathbb{Z}_2)$ denote the set of functions $X \to \mathbb{Z}_2$; this becomes a Boolean ring under the obvious pointwise definitions $(f + g)(x) = f(x) + g(x)$ and $(fg)(x) = f(x)g(x)$. The function $P(X) \to \hom(X, \mathbb{Z}_2)$ which takes $S$ to the coefficient function $a_{(-)}$ is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)
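The exercise is easy to check mechanically for a small $X$. A Python sketch of my own, where the coefficient function of $S$ is its characteristic function:

```python
from itertools import combinations, product

# S |-> (characteristic function of S) is a Boolean ring isomorphism from
# P(X) (symmetric difference, intersection) to functions X -> Z_2
# (pointwise addition and multiplication mod 2).
X = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def chi(S):
    return tuple(int(x in S) for x in X)  # coefficient function of S

# bijective: distinct subsets give distinct functions, and counts match
assert len({chi(S) for S in subsets}) == len(subsets) == 2 ** len(X)
for S, T in product(subsets, repeat=2):
    assert chi(S ^ T) == tuple((a + b) % 2 for a, b in zip(chi(S), chi(T)))
    assert chi(S & T) == tuple(a * b for a, b in zip(chi(S), chi(T)))
```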
Or, we can turn this around: for each $x \in X$, we get a Boolean ring map $P(X) \to \mathbb{Z}_2$ which takes $S$ to $a_x$, i.e., to $1$ if $x \in S$ and to $0$ otherwise. Let $\mathrm{Bool}(P(X), \mathbb{Z}_2)$ denote the set of Boolean ring maps $P(X) \to \mathbb{Z}_2$.
Proposition 4: For a finite set $X$, the function $X \to \mathrm{Bool}(P(X), \mathbb{Z}_2)$ that sends $x$ to the evaluation map $\mathrm{ev}_x: S \mapsto a_x$ is a bijection (in other words, an isomorphism).
Proof: We must show that for every Boolean ring map $\phi: P(X) \to \mathbb{Z}_2$, there exists a unique $x \in X$ such that $\phi = \mathrm{ev}_x$, i.e., such that $\phi(S) = 1$ iff $x \in S$, for all $S \subseteq X$. So let $\phi$ be given, and let $T$ be the intersection (or Boolean ring product) of all $S$ for which $\phi(S) = 1$. Then

$$\phi(T) = \prod_{S: \phi(S) = 1} \phi(S) = 1.$$

I claim that $T$ must be a singleton $\{x\}$ for some (evidently unique) $x \in X$. For $1 = \phi(T) = \sum_{x \in T} \phi(\{x\})$, forcing $\phi(\{x\}) = 1$ for some $x \in T$. But then $T \subseteq \{x\}$ according to how $T$ was defined, and so $T = \{x\}$. To finish, I now claim $\phi(S) = 1$ iff $x \in S$, for all $S$. But $\phi(S) = 1$ iff $\phi(S \cap \{x\}) = \phi(S)\phi(\{x\}) = 1$ iff $S \cap \{x\} = \{x\}$ iff $x \in S$. This completes the proof.
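Proposition 4 is also amenable to a brute-force check. In this Python sketch (mine, not from the post), we enumerate all functions $P(X) \to \mathbb{Z}_2$ for $X = \{0, 1, 2\}$ and keep those that are Boolean ring maps; exactly the three evaluation maps survive.

```python
from itertools import combinations, product

# Enumerate ALL functions P(X) -> Z_2 and keep the Boolean ring maps
# (those preserving 1, symmetric difference, and intersection).
X = [0, 1, 2]
subsets = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_hom(phi):
    return phi[frozenset(X)] == 1 and all(
        phi[a ^ b] == (phi[a] + phi[b]) % 2 and phi[a & b] == phi[a] * phi[b]
        for a in subsets for b in subsets)

homs = [phi for values in product([0, 1], repeat=len(subsets))
        if is_hom(phi := dict(zip(subsets, values)))]
# the evaluation maps ev_x : S |-> [x in S], one for each point x
evals = [{S: int(x in S) for S in subsets} for x in X]
assert len(homs) == len(X)
assert all(e in homs for e in evals)
```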
This proposition is a vital clue, for if $B$ is to be isomorphic to a power set $P(X)$ (equivalently, to $\hom(X, \mathbb{Z}_2)$), the proposition says that the $X$ in question can be retrieved reciprocally (up to isomorphism) as $\mathrm{Bool}(B, \mathbb{Z}_2)$.
With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

$$\mathrm{ev}: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$$

which sends $b \in B$ to the function $\mathrm{ev}_b$ which maps $\phi \in \mathrm{Bool}(B, \mathbb{Z}_2)$ to $\phi(b)$ (i.e., evaluates $\phi$ at $b$). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that $\mathrm{ev}_{b+c}(\phi) = \mathrm{ev}_b(\phi) + \mathrm{ev}_c(\phi)$ for all $\phi$. But by definition, this is the equation $\phi(b + c) = \phi(b) + \phi(c)$, which holds since $\phi$ is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.
Theorem 2: If $B$ is a finite Boolean ring, then the Boolean ring map

$$\mathrm{ev}: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$$

is an isomorphism. (So, there is a natural isomorphism $B \cong P(\mathrm{Bool}(B, \mathbb{Z}_2))$.)
Proof: First we prove injectivity: suppose $b$ is nonzero. Then $\neg b = 1 + b \neq 1$, so the ideal $(\neg b)$ is a proper ideal (in a Boolean ring, the principal ideal $(c)$ consists of all $x \leq c$, so it is proper iff $c \neq 1$). Let $M$ be a maximal proper ideal containing $\neg b$, so that $B/M$ is both a field and a Boolean ring. Then $B/M \cong \mathbb{Z}_2$ (otherwise any element $x \in B/M$ not equal to $0, 1$ would be a zero divisor on account of $x(1 + x) = 0$). The evident composite

$$B \to B/M \cong \mathbb{Z}_2$$

yields a homomorphism $\phi: B \to \mathbb{Z}_2$ for which $\phi(\neg b) = 0$, so $\phi(b) = 1$. Therefore $\mathrm{ev}_b$ is nonzero, as desired.
Now we prove surjectivity. A function $g: \mathrm{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2$ is determined by the set of elements $\phi$ mapping to $1$ under $g$, and each such homomorphism $\phi: B \to \mathbb{Z}_2$, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let $J$ be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if $x, y$ belong to the ideal, then so does $x \vee y = x + y + xy$. Let $j$ be the join of the finitely many elements of $J$; notice $j \in J$ (actually, this proves that every ideal of a finite Boolean ring is principal). In fact, writing $m_\phi$ for the unique element such that $\ker(\phi) = (m_\phi)$, we have

$$j = \bigwedge_{\phi: g(\phi) = 1} m_\phi$$

(certainly $j \leq m_\phi$ for all such $\phi$, since $j \in J \subseteq \ker(\phi)$, but also $\bigwedge_\phi m_\phi$ belongs to the intersection of these kernels and hence to $J$, whence $\bigwedge_\phi m_\phi \leq j$).
Now let $b = \neg j$; I claim that $\mathrm{ev}_b = g$, proving surjectivity. We need to show $\mathrm{ev}_b(\phi) = g(\phi)$ for all $\phi \in \mathrm{Bool}(B, \mathbb{Z}_2)$. In one direction, we already know from the above that if $g(\phi) = 1$, then $j$ belongs to the kernel of $\phi$, so $\phi(b) = \phi(\neg j) = 1$, whence $\mathrm{ev}_b(\phi) = 1 = g(\phi)$.
For the other direction, suppose $\mathrm{ev}_b(\phi) = 1$, or that $\phi(j) = 0$, i.e., that $j \in \ker(\phi)$. Now the kernel of $\phi$ is principal, say $\ker(\phi) = (m)$ for some $m$. We have $j \leq m$, so

$$\bigwedge_{\psi: g(\psi) = 1} m_\psi = j \leq m,$$

from which it follows that $m_\psi \vee m \neq 1$ for some $\psi$ with $g(\psi) = 1$ (if instead $m_\psi \vee m = 1$, i.e., $\neg m \leq m_\psi$, held for every such $\psi$, we would have $\neg m \leq j \leq m$, forcing $m = 1$, contradicting properness of $(m)$). But then $(m_\psi \vee m)$ is a proper ideal containing the maximal ideals $(m_\psi)$ and $(m)$; by maximality it follows that $(m_\psi) = (m)$. Since $\phi$ and $\psi$ have the same kernels, they are equal. And therefore $g(\phi) = g(\psi) = 1 = \mathrm{ev}_b(\phi)$. We have now proven both directions of the statement ($\mathrm{ev}_b(\phi) = 1$ if and only if $g(\phi) = 1$), and the proof is now complete.
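Theorem 2, too, can be confirmed by brute force in a small case. A Python sketch of my own, with $B = P(X)$ for $X = \{0, 1\}$: the evaluation map $\mathrm{ev}$ sends $b$ to the function $\phi \mapsto \phi(b)$ on Boolean ring maps $B \to \mathbb{Z}_2$, and we verify it is a bijection onto all of $\hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$.

```python
from itertools import combinations, product

# B = P({0, 1}) as a Boolean ring (symmetric difference, intersection).
X = [0, 1]
B = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_hom(phi):
    return phi[frozenset(X)] == 1 and all(
        phi[a ^ b] == (phi[a] + phi[b]) % 2 and phi[a & b] == phi[a] * phi[b]
        for a in B for b in B)

# all Boolean ring maps B -> Z_2
homs = [phi for values in product([0, 1], repeat=len(B))
        if is_hom(phi := dict(zip(B, values)))]
# ev(b), recorded as the tuple of values phi(b) as phi ranges over homs
images = {tuple(phi[b] for phi in homs) for b in B}
# ev is injective (all images distinct) and onto (it hits all 2^|homs| functions)
assert len(images) == len(B) == 2 ** len(homs)
```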
- Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements and their negations, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset $I$ closed under joins and such that $x \wedge y \in I$ whenever $y \in I$, a filter is (by definition) a subset $F$ closed under meets and such that $x \vee y \in F$ whenever $y \in F$ (this second condition is equivalent to upward-closure: $y \in F$ and $y \leq x$ implies $x \in F$). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if $I$ is an ideal, then the set of negations $\{\neg x : x \in I\}$ is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if $f: A \to B$ is a Boolean algebra map, then the inverse image $f^{-1}(1)$ is a filter, just as the inverse image $f^{-1}(0)$ is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.
For those who know some category theory: what is really going on here is that we have a power set functor

$$P: \mathrm{FinSet}^{op} \to \mathrm{FinBool}$$

(taking a function $f: X \to Y$ between finite sets to the inverse image map $f^{-1}: P(Y) \to P(X)$, which is a map between finite Boolean algebras) and a functor

$$Q = \mathrm{Bool}(-, \mathbb{Z}_2): \mathrm{FinBool} \to \mathrm{FinSet}^{op}$$

which we could replace by its opposite $Q^{op}: \mathrm{FinBool}^{op} \to \mathrm{FinSet}$, and the canonical maps of proposition 4 and theorem 2,

$$X \to \mathrm{Bool}(P(X), \mathbb{Z}_2), \qquad B \to P(\mathrm{Bool}(B, \mathbb{Z}_2)),$$

are components (at $X$ and $B$) of the counit and unit for an adjunction $Q \dashv P$. The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories $\mathrm{FinSet}^{op}$ and $\mathrm{FinBool}$. This is the proper categorical statement of Stone duality in the finite case, or what we are calling "baby Stone duality". I will make some time soon to explain what these terms mean.
10 comments
May 9, 2008 at 1:15 am
arcadianfunctor
This is just fantastic! Stone duality, and the Fourier transform in particular, have been a strong motivation in my current work on mass operators. Do you think you could write a post on a ternary (Z3) analogue of the Boolean case? We have been thinking of this in the context of tricategories, because the appearance of mass operators here suggests that the Zn should be associated NOT to set cardinality but rather to categorical dimension……
May 9, 2008 at 12:05 pm
Todd Trimble
That sounds very interesting, Kea (or would you prefer Marni?)! Unfortunately, I don’t know what Z3 or Zn refer to, so it isn’t clear that I could say anything intelligent here — but if you can describe what you have in mind or point me to a reference, I’d be happy to think about it. I should warn that I don’t know too much physics, but I do know a bit about higher-dimensional categories.
May 9, 2008 at 6:57 pm
arcadianfunctor
Either Kea or Marni are fine, thanks. I mean that I have been wondering a bit about axioms for higher multicategorical toposes that have some number theoretic flavour. The role of the two point set as (a) a subobject classifier for Set (b) a basis for Stone duality (c) the 0 and 1 of lattices, should be extended to a ternary analogue (a 1-topos is 2 dimensional in the sense that one needs to work in Cat to describe the duality functors etc) which uses perhaps a triangle of adjunctions between 3 special (multi)cat objects. Multicategories just seem to be the right thing. Kapranov’s NC Fourier transform is another motivation. Yet another motivation comes from the parity cube for tricategories (or tetracategories) which breaks the pentagon relation and forces us to look beyond braided monoidal structures. A ternary topos would use three parity cubes, not one.
Michael Rios says that these correspond to the 27 functions on 0,1,2 which he associates with the 3×3 octonion Jordan algebra (which is physically important). As for the physics, there are plenty of links on my blog. I’m afraid proper references are hard to come by.
September 23, 2009 at 4:36 am
Vishal Lama
I am carefully going through these notes these days, and I might have questions to ask, later!
As an aside, it is interesting to note the following: commutativity of addition in a ring follows if the ring has identity $1$. (Of course, commutativity of addition is already implied from the definition of a ring.) Anyway, using the two distributive laws, we note that $(1 + 1)(a + b) = (a + b) + (a + b) = a + b + a + b$, and that the same expression is also equal to $(1 + 1)a + (1 + 1)b = a + a + b + b$. And so, through additive cancellation, we have $b + a = a + b$ for all $a, b$ in the ring.
September 23, 2009 at 1:06 pm
Todd Trimble
Yes, that’s right! Actually, your observation is similar to (or at least reminds me of) something called the Eckmann-Hilton lemma, a very useful algebraic principle. The “Eckmann-Hilton lemma” is actually a rubric for a collection of results which pop up in algebraic topology (where it originated), category theory, higher category theory, and other contexts where there are units/identities and interchange of algebraic operations.
You used distributivity on both sides to prove your result. Here's an example which shows you had to do that. Let $G$ be a group, with group multiplication denoted by $\cdot$. Define a new operation $\circ$ by $a \circ b = b$. Then we have one-sided distributivity

$$x \cdot (a \circ b) = (x \cdot a) \circ (x \cdot b),$$

and the identity element $e$ of $G$ is also a left identity for $\circ$, but obviously this doesn't force commutativity!
September 23, 2009 at 11:55 pm
Vishal Lama
Ah, very nice example. And, to use it further, suppose we have the other distributive law as well: $(a \cdot b) \circ x = (a \circ x) \cdot (b \circ x)$. Then, putting $a = b = e$ (identity in $G$) and noting that $e$ is the left identity for $\circ$, we have $x = x \cdot x$, which would mean that all the elements of $G$ are idempotent! Is that telling us something more?
On a somewhat different note, I was mulling over a construction called Dorroh's extension, which says that any arbitrary non-unital (or unital) ring can be embedded in a unital ring. Indeed, let $R$ be our arbitrary ring; then $\mathbb{Z} \times R$, together with $(m, r) + (n, s) = (m + n, r + s)$ and $(m, r)(n, s) = (mn, ms + nr + rs)$, is the aforesaid extension (with multiplicative identity $(1, 0)$, in which $0$ is the additive identity of $R$) and with $r \mapsto (0, r)$ being the required embedding. I was wondering if there is anything special about the above construction.
September 24, 2009 at 1:38 am
Todd Trimble
Yep, if every $x$ satisfies $x \cdot x = x$, then every $x$ satisfies $x = e$: $G$ is the trivial group.
It’s called Dorroh’s extension? I never knew that (I know the construction of course; I just never heard the name). I can’t think of much exciting to say about it. Let me note that the formal expression $(n, r)$ is meant to be intuitively thought of as $n \cdot 1 + r$, and that the rules for addition and multiplication follow this intuition straightforwardly. Also, the embedding is given by $r \mapsto (0, r)$.
I will say that while some authors’ convention is that a ring need not have a unit, that seems to be a bit old-fashioned: more and more, the default seems to be that rings have units by definition, and that “rngs” are what we call the structures with no unit assumption. (The term is due to Jacobson; think “ring” without the “i” [the identity].) I think the old-fashioned convention was partly there so that ideals of rings, or kernels of ring homomorphisms, would also be rings, but it’s perhaps more useful nowadays to think of ideals of a ring $R$ instead as submodules of $R$ (considered as a module over itself).
I’m not sure how compelling an argument I can give for why rings “ought to” have units (why that is the better convention), but generally units are extremely convenient for many arguments. The notion of category includes units (identities) for good reason: it makes possible the notion of isomorphism, the argument of the Yoneda lemma, and many other things. Not having units is an inconvenience, and this is just as true for monoids and for rings. Dorroh’s extension shows that we can harmlessly adjoin a unit anyway [and also that every rng arises as an ideal of a ring], so why not assume it’s there? So there are good arguments I think for Jacobson’s convention.
One naturally occurring example of a rng (i.e., where a unit is not naturally available) is the algebra of $L^1$ functions under convolution product; a unit would be a Dirac delta functional (a distribution not given by an $L^1$ function). But usually units are “naturally” part of the picture.
Notice that even if $R$ already has a unit, this construction adjoins a new unit $(1, 0)$. This is in contrast to other situations: for example, considered up to isomorphism, the field of fractions construction (for an integral domain) doesn’t adjoin extra elements if the domain was already a field. The fact that Dorroh’s extension adds something new even if the rng is a ring is connected with the fact that a rng homomorphism between rings need not be a ring homomorphism, because a rng homomorphism need not preserve identities.