My name is Todd Trimble. As regular readers of this blog may have noticed by now, I’ve recently been actively commenting on some of the themes introduced by our host Vishal, and he’s now asked whether I’d like to write some posts of my own. Thank you Vishal for the invitation!
As made clear in some of my comments, my own perspective on a lot of mathematics is greatly informed and influenced by category theory — but that’s not what I’m setting out to talk about here, not yet anyway. For reasons not altogether clear to me, the mere mention of category theory often scares people, or elicits other emotional reactions (sneers, chortles, challenges along the lines of “what is this stuff good for, anyway?” — I’ve seen it all).
Anyway, I’d like to try something a little different this time — instead of blathering about categories, I’ll use some of Vishal’s past posts as a springboard to jump into other mathematics which I find interesting, and I won’t need to talk about categories at all unless a strong organic need is felt for it (or if it’s brought back “by popular demand”). But, the spirit if not the letter of categorical thinking will still strongly inform my exposition — those readers who already know categories will often be able to read between the lines and see what I’m up to. Those who do not will still be exposed to what I believe are powerful categorical ways of thinking.
I’d like to start off talking about a very pretty area of mathematics which ties together various topics in algebra, topology, logic, geometry… I’m talking about mathematics in the neighborhood of so-called “Stone duality” (after the great Marshall Stone). I’m hoping to pitch this as though I were teaching an undergraduate course, at roughly a junior or senior level in a typical American university. [Full disclosure: I’m no longer a professional academic, although I often play one on the Internet 🙂 ] At times I will allude to topics which presuppose some outside knowledge, but hey, that’s okay. No one’s being graded here (thank goodness!).
First, I need to discuss some preliminaries which will eventually lead up to the concept of Boolean algebra — the algebra which underlies propositional logic.
A partial order on a set $X$ is a binary relation $\leq$ (a subset $\leq \subseteq X \times X$), where we write $x \leq y$ if $(x, y) \in \leq$, satisfying the following conditions:
- (Reflexivity) $x \leq x$ for every $x \in X$;
- (Transitivity) For all $x, y, z \in X$, ($x \leq y$ and $y \leq z$) implies $x \leq z$;
- (Antisymmetry) For all $x, y \in X$, ($x \leq y$ and $y \leq x$) implies $x = y$.
A partially ordered set (poset for short) is a set equipped with a partial order. Posets occur all over mathematics, and many are likely already familiar to you. Here are just a few examples:
- The set of natural numbers $\mathbb{N}$ ordered by divisibility ($x \leq y$ if $x$ divides $y$).
- The set $PX$ of subsets of a set $X$ (where $\leq$ is the relation of inclusion $A \subseteq B$ of one subset in another).
- The set of subgroups of a group $G$ (where again $\leq$ is the inclusion relation between subgroups).
- The set of ideals in a ring $R$ (ordered by inclusion).
The last three examples clearly follow a similar pattern, and in fact, there is a sense in which every poset $X$ can be construed in just this way: as a set of certain types of subset ordered by inclusion. This is proved in a way very reminiscent of the Cayley lemma (that every group can be represented as a group of permutations of a set). You can think of such results as saying “no matter how abstractly a group [or poset] may be presented, it can always be represented in a concrete way, in terms of permutations [or subsets]”.
To make this precise, we need one more notion, parallel to the notion of group homomorphism. If $X$ and $Y$ are posets, a poset map from $X$ to $Y$ is a function $f: X \to Y$ which preserves the partial order (that is, if $x \leq y$ in $X$, then $f(x) \leq f(y)$ in $Y$). Here then is our representation result:
Lemma (Dedekind): Any poset $X$ may be faithfully represented in its power set $PX$, partially ordered by inclusion. That is, there exists a poset map $i: X \to PX$ that is injective (what we mean by “faithful”: the map is one-to-one).
Proof: Define $i$ to be the function which takes $x$ to the subset $\{a \in X : a \leq x\}$ (which we view as an element of the power set). To check this is a poset map, we must show that if $x \leq y$, then $i(x)$ is included in $i(y)$. This is easy: if $a$ belongs to $i(x)$, i.e., if $a \leq x$, then from $x \leq y$ and the transitivity property, $a \leq y$, hence $a$ belongs to $i(y)$.
Finally, we must show that $i$ is injective; that is, $i(x) = i(y)$ implies $x = y$. In other words, we must show that if

$\{a \in X : a \leq x\} = \{a \in X : a \leq y\},$

then $x = y$. But, by the reflexivity property, we know $x \leq x$; therefore $x$ belongs to the set displayed on the left, and therefore to the set on the right. Thus $x \leq y$. By similar reasoning, $y \leq x$. Then, by the antisymmetry property, $x = y$, as desired.
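The proof is easy to animate on a small finite example. Below is a sketch (the names `i`, `leq`, and the choice of the divisors of 12 are mine) of the Dedekind embedding for the divisors of 12 under divisibility; the assertions check that `i` preserves order and is injective:

```python
def leq(a, x):
    return x % a == 0  # a divides x

X = [1, 2, 3, 4, 6, 12]  # divisors of 12, ordered by divisibility

def i(x):
    """The Dedekind embedding: x maps to the set {a in X : a <= x}."""
    return frozenset(a for a in X if leq(a, x))

# i is a poset map: x <= y implies i(x) is included in i(y).
assert all(i(x) <= i(y) for x in X for y in X if leq(x, y))

# i is injective: distinct elements have distinct images.
assert len({i(x) for x in X}) == len(X)
```

The `<=` operator on `frozenset` is subset inclusion, so the code mirrors the statement of the lemma quite literally: the abstract order on `X` is represented by inclusion of subsets.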
The Dedekind lemma turns out to be extremely useful (it and the Cayley lemma are subsumed under an even more useful result called the Yoneda lemma — perhaps more on this later). Before I illustrate its uses, let me rephrase slightly the injectivity property of the Dedekind embedding $i$: it says,

If (for all $a$ in $X$, $a \leq x$ iff $a \leq y$), then $x = y$.

This principle will be used over and over again, so I want to give it a name: I’ll call it the Yoneda principle.
Here is a typical use. Given elements $x, y$ in a poset $X$, we say that an element $m$ is a meet of $x$ and $y$ if, for all $a \in X$,

$a \leq m$ if and only if ($a \leq x$ and $a \leq y$).
Fact: there is at most one meet of $x$ and $y$. That is, if $m$ and $n$ are both meets of $x$ and $y$, then $m = n$.
Proof: For all $a$, $a \leq m$ if and only if ($a \leq x$ and $a \leq y$) if and only if $a \leq n$. Therefore, $m = n$ by the Yoneda principle.
Therefore, we can refer to the meet of two elements $x$ and $y$ (if it exists); it is usually denoted $x \wedge y$. Because $x \wedge y \leq x \wedge y$, we have $x \wedge y \leq x$ and $x \wedge y \leq y$.
Example: In a concrete poset, like the poset of all subsets of a set or subgroups of a group, the meet of two elements is their intersection.
Example: Consider the natural numbers ordered by divisibility. The meet satisfies $x \wedge y \leq x$ and $x \wedge y \leq y$ (i.e., $x \wedge y$ divides both $x$ and $y$). At the same time, the meet property says that any number which divides both $x$ and $y$ must also divide $x \wedge y$. It follows that the meet in this poset is the gcd of $x$ and $y$.
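To see this example computationally, here is a brute-force sketch (function names and the finite range are my own) that finds the meet directly from its defining property and confirms it agrees with the gcd:

```python
import math

def leq(a, x):
    return x % a == 0  # a divides x

X = range(1, 37)  # a finite slice of the naturals, ordered by divisibility

def meet(x, y):
    """The unique m in X with: a <= m iff (a <= x and a <= y), for all a."""
    for m in X:
        if all(leq(a, m) == (leq(a, x) and leq(a, y)) for a in X):
            return m
    return None

# The meet computed from the universal property is exactly the gcd.
assert all(meet(x, y) == math.gcd(x, y)
           for x in (8, 12, 18) for y in (6, 15, 30))
```

Nothing in `meet` mentions the gcd; it only searches for the element whose set of lower bounds is exactly the common lower bounds of `x` and `y`, which is the point of defining the meet by its universal property.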
Here are some more results which can be proved with the help of the Yoneda principle. I’ll just work through one of them, and leave the others as exercises.
1. $x \wedge x = x$ (idempotence of meet)
2. $x \wedge y = y \wedge x$ (commutativity of meet)
3. $x \wedge (y \wedge z) = (x \wedge y) \wedge z$ (associativity of meet)
To prove 3., we can use the Yoneda principle: for all $a$ in the poset, we have

$a \leq x \wedge (y \wedge z)$
iff ($a \leq x$ and $a \leq y \wedge z$)
iff ($a \leq x$ and $a \leq y$ and $a \leq z$)
iff ($a \leq x \wedge y$ and $a \leq z$)
iff $a \leq (x \wedge y) \wedge z$.

Hence, $x \wedge (y \wedge z) = (x \wedge y) \wedge z$ by Yoneda.
In fact, we can unambiguously refer to the meet $x_1 \wedge x_2 \wedge \cdots \wedge x_n$ of any finite number of elements by the evident property:

$a \leq x_1 \wedge x_2 \wedge \cdots \wedge x_n$ iff $a \leq x_1$ and $a \leq x_2$ and $\cdots$ and $a \leq x_n$

— this uniquely defines the meet on the left, by Yoneda, and the order in which the $x_i$ appear makes no difference.
But wait — what if the number of elements is zero? That is, what is the empty meet? Well, the condition “$a \leq x_1$ and $\cdots$ and $a \leq x_n$” becomes vacuous (there is no $x_i$ for which the condition is not satisfied), so whatever the empty meet is, call it $\top$, we must have $a \leq \top$ for all $a$. So $\top$ is just the top element of the poset (if one exists). Another name for the top element is “the terminal element”, and another notation for it is ‘$1$‘.
Definition: A meet semi-lattice is a poset which has all finite meets, including the empty one.
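In a meet-semilattice, the meet of any finite list of elements can thus be computed by folding the binary meet, seeded with the top element so that the empty meet comes out right. A sketch in the power-set example (the names `TOP`, `meet`, and `finite_meet` are mine):

```python
from functools import reduce

TOP = frozenset({1, 2, 3})  # top element of the power set of {1, 2, 3}

def meet(a, b):
    return a & b  # in a power set, the binary meet is intersection

def finite_meet(subsets):
    """Meet of a finite family; seeding the fold with TOP makes the
    empty meet equal to the top element, as the definition requires."""
    return reduce(meet, subsets, TOP)

assert finite_meet([]) == TOP  # the empty meet is the top element
```

Because the binary meet is associative and commutative, the result of the fold does not depend on the order of the list, matching the remark above.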
Exercises:
- Prove that in a meet-semilattice, $x \wedge 1 = x$ for all $x$.
- Is there a top element for the natural numbers ordered by divisibility?
22 comments
April 2, 2008 at 6:17 am
Vishal
I hope I am “allowed” to present my solutions to the exercises!
1. For all $a$, we have $a \leq x \wedge 1$ iff ($a \leq x$ and $a \leq 1$) iff $a \leq x$. Therefore, $x \wedge 1 = x$ by Yoneda.
2. Yes: $1 = 0$, since every natural number divides $0$. (Note that the top element $1$ is always unique.)
If our poset is the set of natural numbers ordered by divisibility, then the above two results imply that $\gcd(x, 0) = x$. And, if our poset is the power set $PX$ of a set $X$ ordered by set-inclusion, then $1 = X$, from which it immediately follows that $A \cap X = A$ for all $A \in PX$.
April 2, 2008 at 1:13 pm
Todd Trimble
Yup! Regarding 2., it’s funny — normally one thinks of “m divides n” as implying that m is less than n (in the usual order), but that’s not quite right, since everything divides 0! The element 0 is greatest in the divisibility order.
Or, as it says in the Bible, “The least shall be greatest among you.” 🙂
Okay, here’s another exercise. We said that in a meet-semilattice, the operation that sends (x, y) to x /\ y is associative, commutative, and idempotent (and exercise 1. says it has an identity element, $1$). See if you can turn this around, and show that if you have an operation on a set X with these properties, then there exists a meet-semilattice structure on X for which the operation is the meet.
April 2, 2008 at 2:18 pm
Vishal
To be completely honest, on exercise (2), my gut reaction was that no such top element exists, and for precisely the reason you mentioned! Then, I thought for a while before I realized that there is indeed one. 🙂
A quick question on the above exercise. All we have to do is show that such an operation “induces” a poset structure on $X$ and that $X$ has all finite meets including the empty one, is that right?
April 2, 2008 at 2:55 pm
Todd Trimble
It’s just slightly more than that — you want to get a meet-semilattice out of an idempotent commutative monoid (which is what these structures are called), and show that the operation you started with coincides with the meet of this poset. (That actually forces the issue — just ask yourself how you could define $\leq$ in terms of $\wedge$.)
What you really want to say is that there’s an equivalence between the notion of idempotent commutative monoid and the notion of meet-semilattice. That’s the real point behind this exercise.
April 2, 2008 at 4:25 pm
Vishal
Here’s a partial solution, I think.
Suppose $X$ is a poset with $\leq$ as the ordering relation and $\wedge$ as the meet operation.

Claim: $x \wedge y = x$ iff $x \leq y$.

Proof: So, suppose $x \leq y$. Now, $x \leq x$ by reflexivity. Therefore, $x \leq x \wedge y$ by the meet property. But, again, by the meet property, $x \wedge y \leq x$. So, by antisymmetry, we have $x \wedge y = x$. This proves the “if” part.

Now, we prove the “only if” part. So, suppose $x \wedge y = x$. By the meet property, $x \wedge y \leq y$; therefore, $x \leq y$. And, this completes our proof.

Now, all we need to show is that the given operation on $X$ actually coincides with the meet of $X$. Is that right?
April 2, 2008 at 4:55 pm
Todd Trimble
The claim definitely contains a crucial idea. Let me throw back what you’ve said in a slightly different way: in the exercise, we don’t start off knowing that X is a poset; we just start with this commutative idempotent operation (let me call it $*$, to avoid prejudice). But, in order to have that $*$ coincide with the meet $\wedge$ of a poset $X$, your claim tells us that we must have $x \leq y$ iff $x * y = x$. Therefore, we can use this to define the partial order we are after, i.e., say $x \leq y$ precisely when $x * y = x$.

Then prove from this definition that you actually do get a partial order, i.e., that you get reflexivity, transitivity, and antisymmetry, based on what you assumed of $*$. And then, that $x * y$ is actually the meet of $x$ and $y$, as you say.
April 2, 2008 at 8:17 pm
Vishal
Very cool. Thanks for that clarification! Here is my solution. (I will use a slightly different notation for the operation.)
Let * : X × X → X be our commutative idempotent operation. Then, we have
x*x = x, x*y = y*x, and x*(y*z) = (x*y)*z for all x, y, z in X.
Now, define x <= y whenever x*y = x.
We show that <= is really a partial order.
i) (Reflexivity): x*x = x implies x <= x.
ii) (Transitivity): Suppose x <= y and y <= z. Then, we have x*y = x and y*z = y. Substitute the latter in the former to get x*(y*z) = x. Then, by associativity, (x*y)*z = x, and so, x*z = x. This implies x <= z.
iii) (Antisymmetry): Suppose x <= y and y <= x. Then, we have x*y = x and y*x = y, and therefore by commutativity, x = y.
And this completes our proof.
Now, we prove that x*y really is the meet of x and y, i.e. we prove that for all a,
a <= x*y iff ( a <= x and a <= y ).
We first prove the “if” part. So, suppose a <= x and a <= y. Then, a*x = a and a*y = a, from which we obtain (a*x)*y = a, and by associativity, we have a*(x*y) = a, which implies a <= x*y.
We now prove the “only if” part. So, suppose a <= x*y. Then, a*(x*y) = a. Then, (a*(x*y))*x = a*x, and using associativity, commutativity and the fact that x*x = x, we obtain a*(x*y) = a*x. But we have a*(x*y) = a; hence, a*x = a, which implies a <= x. Similarly, a*y = a and hence a <= y.
And this completes our proof!
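The whole construction above can be checked mechanically on a small example. Below is a sketch (the choice of example is mine: gcd on the divisors of 12, which is a commutative, associative, idempotent operation closed on that set) that derives the order from the operation and verifies both the partial-order axioms and the meet property:

```python
import math

X = [1, 2, 3, 4, 6, 12]  # divisors of 12; gcd is closed on this set
star = math.gcd          # commutative, associative, idempotent

def leq(x, y):
    """The induced order: x <= y precisely when x*y = x."""
    return star(x, y) == x

# The derived relation is a partial order.
assert all(leq(x, x) for x in X)                                     # reflexive
assert all(leq(x, z) for x in X for y in X for z in X
           if leq(x, y) and leq(y, z))                               # transitive
assert all(x == y for x in X for y in X if leq(x, y) and leq(y, x))  # antisymmetric

# And x*y really is the meet: a <= x*y iff (a <= x and a <= y).
assert all(leq(a, star(x, y)) == (leq(a, x) and leq(a, y))
           for a in X for x in X for y in X)
```

Here the induced order is divisibility again (gcd(x, y) = x exactly when x divides y), so the derived meet-semilattice agrees with the earlier gcd example.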
April 2, 2008 at 10:27 pm
Todd Trimble
Excelente, Vishal! Very nice.
Also, under the assumption that the binary operation has an identity element $1$, it’s trivial that this element is the top element with respect to the partial order as defined above.
A small comment on the “only if”: that x*y <= x follows quickly from idempotence. Then, given a <= x*y, it follows that a <= x by transitivity, and similarly a <= y.
This sort of exercise in axiomatics is not to everyone’s taste (I actually rather like this sort of thing myself), but it points to the fact that basic concepts in poset and lattice theory tend to have many equivalent definitions — for example, there must be scores if not hundreds of known definitions of “Boolean algebra” (we’ll see a few soon). I usually prefer to take the partial order as primitive, and define operations like the meet /\ in terms of it — this leads to efficient proofs of the main formal properties of such operations, as indicated in my post (and, I have to say, with this approach the connections with category theory are much clearer).
The other approach is what one might call “purely algebraic” — we start with operations subject to purely equational axioms [like associativity, commutativity, idempotence], and then derive non-algebraic data such as the partial order — but the resultant proofs are often more cumbersome. On the plus side, there are some technical advantages in knowing that a theory can be presented in purely algebraic form, because an awful lot is known about algebraic theories in general.
April 2, 2008 at 11:42 pm
Vishal
Todd,
I couldn’t have done it without that slight push from you. So, thanks!
And, good point on the “only if” part. I clearly see it now! This has been very interesting and enjoyable so far!
April 3, 2008 at 12:03 am
Vishal
As a side note, after having gone through this post, I have this vague feeling now that somewhere down the line the principle of mathematical induction (PMI) is going to come up!
April 3, 2008 at 3:50 pm
Todd Trimble
I didn’t have any specific plans myself for bringing in PMI, but I’d be interested in what you’re thinking. I mean, in some sense you’re probably right since the principle is so fundamental and ubiquitous.
There’s one point in my post which may be puzzling to some readers; I can certainly remember once being puzzled about it myself. When I proved the associativity of meet, I boiled down both of the statements $a \leq x \wedge (y \wedge z)$ and $a \leq (x \wedge y) \wedge z$ to “$a \leq x$ and $a \leq y$ and $a \leq z$“. The point that puzzled me could be expressed as, “Hey, aren’t we implicitly assuming that the word ‘and’ [functioning as a linguistic binary operator] is associative? If so, isn’t there something just a little bit circular about the reasoning here?”

(I was reminded of this when I tried to consider how induction might enter, and remembered that the general law of associativity in a monoid, which allows one to remove parentheses and arrive at expressions $x_1 x_2 \cdots x_n$, is derived from the usual associativity law by an inductive argument. But in fact induction isn’t needed to prove the general associativity of meet, as I will try to explain below.)
A reflex reaction to this apparent circularity (and it’s a slightly cop-out reaction, but I’ll say it anyway) is that the human act of making inferences is not something that is completely formalizable. It’s like the old problem of the dictionary: there is something circular about defining words in terms of other words — the way out of the paradox is to acknowledge that the ultimate referents of words reside in human actions and human experience, and that’s not something perfectly capturable within the language itself. In the present case, it would mean that the way we use the word “and” in ordinary discourse makes it commutative and associative automatically — it’s part of the way we speakers of English handle the word “and” (and presumably something analogous occurs in other human languages, although I’m not sure what the experts would say about, e.g., Pirahã).
But let me analyze this a little further, by indicating that the word ‘and’ can be avoided altogether. The point is that the linguistic construct “p and q and r and … s” is just another way of saying that all the statements in the collection {p, q, r, …, s} hold simultaneously (and there is no linear order imposed on the statements — think of them as floating together in linguistic space). What the category theorists often do is write the defining property of meet like this:
a <= x        a <= y
--------------------
       a <= m
where the dividing line is read as “if and only if”, and the collection of statements on top is just that, a collection of statements which hold simultaneously (in no particular order) if and only if the statement on bottom holds. This should make it crystal-clear (with an assist from Yoneda) that the meet operation just has to be associative and commutative and idempotent!
More on this when I come to discuss inf-lattices and sup-lattices…
April 3, 2008 at 5:02 pm
Vishal
Sometimes, I feel I do need to improve my English quite a bit – I don’t use the correct phrases to express my thoughts! When I wrote “…PMI is going to come up…”, I actually meant that PMI must be lurking somewhere down there. I didn’t mean that you would write about it. Sorry about that incorrect linguistic expression! 🙂 I am trying to collect my thoughts on the possible link between posets and PMI in a way that’s more concrete. I will get back to it soon.
Interestingly, I always assumed that “and” [linguistic binary operator] was associative. I thought it was but “natural” to expect so! But, thanks for clarifying that point, anyway. However, what confused me a bit was this. When we “build” Boolean Algebra later and use it as a basis for propositional logic, then wouldn’t the “and” logical operator be simply the meet of two propositions? And that would involve circularity because we did define meet in terms of “and”. But, your clarification now has erased that confusion.
[I have no idea why WordPress is not displaying the LaTeX code in the comments properly!]
April 4, 2008 at 4:48 pm
Lattices and Duality « Vishal Lama’s blog
[…] Calculus by Todd Trimble Tags: inf-lattices, joins, lattices, principle of duality, sup-lattices Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice […]
April 30, 2008 at 2:32 pm
Boolean algebras, Boolean rings, and baby Stone duality « Vishal Lama’s blog
[…] one direction, suppose is a Boolean ring. We know from before that a binary operation on a set that is commutative, associative, unital [has a unit or identity] […]
June 22, 2008 at 4:38 pm
Basic category theory, I « Todd and Vishal’s blog
[…] hiatus (sorry about that!), I would like to resume the series on Stone duality. You may recall I started this series by saying that my own point of view on mathematics is strongly informed by category theory, […]
June 29, 2008 at 2:12 pm
Basic Category Theory, II « Todd and Vishal’s blog
[…] up to isomorphism by knowing what maps into it look like [they look like pairs of maps ]. In the first lecture in the Stone duality, we stated the Yoneda principle just for posets; now we are generalizing it to […]
July 12, 2008 at 12:23 pm
Basic Category Theory, III: Representability, adjoints, and the Yoneda lemma « Todd and Vishal’s blog
[…] to return to today’s topic. Way back when, when we were first discussing posets, most of our examples of posets were of a […]
September 1, 2008 at 8:03 am
ZFC and ETCS: Elementary Theory of the Category of Sets « Todd and Vishal’s blog
[…] of off-shoot of the general discussion of ultrafilters I started in connection with the series on Stone duality, and because it seems kind of cool. And I will. But this got finished first, and I thought that it […]
September 11, 2008 at 12:10 am
ETCS: Internalizing the logic « Todd and Vishal’s blog
[…] the meet operator of an internal meet-semilattice structure on : it is commutative, associative, and idempotent (because that is true of external […]
November 24, 2009 at 2:52 am
Primes and ideals « Annoying Precision
[…] Dedekind accomplished in much the same way) in that both are a special case of thinking about the Yoneda embedding for posets. To think about the reals in terms of the order properties of one embeds in its power […]
November 23, 2010 at 1:46 am
Boolean rings, ultrafilters, and Stone’s representation theorem « Annoying Precision
[…] like to introduce them from a different perspective. Some of the topics below are also covered in these posts by Todd […]
May 24, 2011 at 4:29 pm
Stephen P. King
Is there any way to define posets within the set of complex numbers such that we can identify the Stone spaces of the posets?