Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and a sup). Examples include the poset $PX$ of all subsets of a set $X$, and the poset $\mathrm{Sub}(V)$ of all subspaces of a vector space $V$.

I take it that most readers are already familiar with many of the properties of the poset $PX$; there is for example the distributive law $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$, and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset $\mathrm{Sub}(V)$, as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to $\mathrm{Sub}(V)$ is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, *classical logic* if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).

To get a sense of the weirdness of $\mathrm{Sub}(V)$, take for example a 2-dimensional vector space $V$. The bottom element is the zero space $0$, the top element is $V$, and the rest of the elements of $\mathrm{Sub}(V)$ are 1-dimensional: lines through the origin. For 1-dimensional spaces $x, y$, there is no relation $x \leq y$ unless $x$ and $y$ coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:

```
      V = 1
     / | \
    /  |  \
   x   y   z
    \  |  /
     \ | /
       0
```

Observe that for distinct elements $x, y$ in the middle level, we have for example $x \wedge y = 0$ (0 is the largest element contained in both $x$ and $y$), and also for example $y \vee z = 1$ (1 is the smallest element containing $y$ and $z$). It follows that $x \wedge (y \vee z) = x \wedge 1 = x$, whereas $(x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0$. The distributive law fails in $\mathrm{Sub}(V)$!
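This failure of distributivity is easy to verify mechanically. A minimal sketch (assuming nothing beyond the order structure just pictured): we encode the five-element lattice above, tabulate meet and join directly from the picture, and compare the two sides.

```python
# The lattice Sub(V) for a 2-dimensional V, abstracted as the five-element
# "diamond" lattice: bottom 0, top 1, and three incomparable lines x, y, z.

ELEMENTS = ["0", "x", "y", "z", "1"]

def meet(a, b):
    if a == b:
        return a
    if a == "0" or b == "0":
        return "0"
    if a == "1":
        return b
    if b == "1":
        return a
    return "0"   # two distinct lines intersect only in the origin

def join(a, b):
    if a == b:
        return a
    if a == "1" or b == "1":
        return "1"
    if a == "0":
        return b
    if b == "0":
        return a
    return "1"   # two distinct lines span the whole plane

# Distributivity fails: x ∧ (y ∨ z) = x, but (x ∧ y) ∨ (x ∧ z) = 0.
lhs = meet("x", join("y", "z"))
rhs = join(meet("x", "y"), meet("x", "z"))
print(lhs, rhs)  # x 0
```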

**Definition**: A lattice is *distributive* if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$ for all $x, y, z$. That is to say, a lattice is distributive if the map $x \wedge -$, taking an element $y$ to $x \wedge y$, is a morphism of join-semilattices.

**Exercise**: Show that in a meet-semilattice, $x \wedge -$ is a poset map. Is it also a morphism of meet-semilattices? If the meet-semilattice has a bottom element $0$, show that the map $x \wedge -$ preserves it.

**Exercise**: Show that in any lattice, we at least have $(x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z)$ for all elements $x, y, z$.

Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:

**Theorem**: The notion of distributive lattice is self-dual.

**Proof**: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, $x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$, follows from the distributive lattice axioms.

Expand the right side $(x \vee y) \wedge (x \vee z)$ to $((x \vee y) \wedge x) \vee ((x \vee y) \wedge z)$, by distributivity. This reduces to $x \vee ((x \vee y) \wedge z)$, by an absorption law. Expand this again, by distributivity, to $x \vee ((x \wedge z) \vee (y \wedge z))$. This reduces to $x \vee (y \wedge z)$, by the other absorption law. This completes the proof. $\Box$
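For readers who like machine checks, here is a hedged sanity check of the theorem: a brute-force verification, in the distributive lattice $PX$ for $X = \{0, 1, 2\}$ (my own choice of small example), that the dual law holds alongside the original one.

```python
from itertools import chain, combinations

# Brute-force check in PX, X = {0,1,2}: distributivity and its dual both hold.
X = {0, 1, 2}
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

for x in subsets:
    for y in subsets:
        for z in subsets:
            assert x & (y | z) == (x & y) | (x & z)   # distributivity
            assert x | (y & z) == (x | y) & (x | z)   # its dual
print("both laws hold in PX")
```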

Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:

**Definition**: A *frame* is a sup-lattice $X$ for which $x \wedge -$ is a morphism of sup-lattices, for every $x \in X$. In other words, for every subset $S \subseteq X$, we have $x \wedge \sup(S) = \sup\{x \wedge s : s \in S\}$, or, as is often written,

$$x \wedge \bigvee_{s \in S} s = \bigvee_{s \in S} (x \wedge s).$$

**Example**: A power set $PX$, as always partially ordered by inclusion, is a frame. In this case, the frame condition means that for any subset $A \subseteq X$ and any collection $\{B_i\}_{i \in I}$ of subsets of $X$, we have

$$A \cap \bigcup_{i \in I} B_i = \bigcup_{i \in I} (A \cap B_i).$$

This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.
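As a quick concrete check of this frame law, here is a small sketch; the particular set $A$ and family $\{B_i\}$ are made up for illustration.

```python
# The frame law in a power set: A ∩ (⋃ᵢ Bᵢ) = ⋃ᵢ (A ∩ Bᵢ).
A = {1, 2, 3, 4}
family = [{2, 5}, {3, 6}, {7}, set()]

union_of_family = set().union(*family)
lhs = A & union_of_family                         # meet with the big sup
rhs = set().union(*(A & B for B in family))       # sup of the meets
assert lhs == rhs == {2, 3}
```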

**Example**: If $X$ is a set, a *topology* on $X$ is a subset $T \subseteq PX$ of the power set, partially ordered by inclusion as $PX$ is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element $\emptyset$ and the empty meet or top element $X$ of $PX$ are elements of $T$, and also:

- If $U, V$ are elements of $T$, then so is $U \cap V$.
- If $\{U_i : i \in I\}$ is a collection of elements of $T$, then $\bigcup_{i \in I} U_i$ is an element of $T$.

A *topological space* is a set $X$ which is equipped with a topology $T$; the elements of the topology are called *open subsets* of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in $PX$ (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.

The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! But just to be on the safe side, I’ll give now one example of a topological space, with a promise of more to come later. Let $\mathbb{R}^n$ be the set of $n$-tuples of real numbers. First, define the open ball in $\mathbb{R}^n$ centered at a point $p \in \mathbb{R}^n$ and of radius $r > 0$ to be the set $\{q \in \mathbb{R}^n : \|q - p\| < r\}$. Then, define a subset $U \subseteq \mathbb{R}^n$ to be *open* if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology $T$ is by definition the collection of open sets.

It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the *triangle inequality*. (Exercise?)
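The lemma can be illustrated numerically. In the sketch below (the centers, radii, and sample point are invented for illustration), we pick a point $p$ in the overlap of two balls, take $r$ to be the smaller of the two slack distances, and check points of the ball around $p$ against both balls, exactly as the triangle-inequality argument predicts:

```python
import math

# If p lies in both B(c1, r1) and B(c2, r2), then the ball around p of radius
# r = min(r1 - d(p, c1), r2 - d(p, c2)) is contained in the intersection.

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

c1, r1 = (0.0, 0.0), 2.0
c2, r2 = (3.0, 0.0), 2.0
p = (1.5, 0.3)  # a point in the overlap

assert dist(p, c1) < r1 and dist(p, c2) < r2
r = min(r1 - dist(p, c1), r2 - dist(p, c2))
assert r > 0

# Triangle inequality: d(q, ci) <= d(q, p) + d(p, ci) < r + d(p, ci) <= ri.
for angle in range(12):
    q = (p[0] + 0.99 * r * math.cos(angle), p[1] + 0.99 * r * math.sin(angle))
    assert dist(q, c1) < r1 and dist(q, c2) < r2
```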

Topology is a huge field in its own right; much of our interest here will be in its interplay with *logic*. To that end, I want to bring in, in addition to the connectives “and” and “or” we’ve discussed so far, the *implication connective* in logic. Most readers probably know that in ordinary logic, the formula $p \Rightarrow q$ (“$p$ implies $q$”) is equivalent to “either not $p$ or $q$” — symbolically, we could define $p \Rightarrow q$ as $\neg p \vee q$. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.

The main property we ask of implication is *modus ponens*: given $p$ and $p \Rightarrow q$, we may infer $q$. In symbols, writing the inference or entailment relation as $\leq$, this is expressed as $(p \Rightarrow q) \wedge p \leq q$. And, **we ask that implication be the weakest possible such assumption**, i.e., that material implication $p \Rightarrow q$ be the weakest $a$ whose presence in conjunction with $p$ entails $q$. In other words, for given $p$ and $q$, we now define implication $p \Rightarrow q$ by the property

$$a \wedge p \leq q \quad \text{if and only if} \quad a \leq p \Rightarrow q.$$

As a very easy exercise, show by Yoneda that an implication $p \Rightarrow q$ is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity hold:

**Theorem**:

- (1) If $X$ is a meet-semilattice which admits an implication operator, then for every element $a$, the operator $a \wedge -$ preserves any sups which happen to exist in $X$.
- (2) If $X$ is a frame, then $X$ admits an implication operator.

**Proof**: (1) Suppose $\{y_i : i \in I\}$ has a sup in $X$, here denoted $\bigvee_i y_i$. We have

$a \wedge \bigvee_i y_i \leq z$

if and only if

$\bigvee_i y_i \leq a \Rightarrow z$

if and only if

$y_i \leq a \Rightarrow z$ for all $i$, if and only if

$a \wedge y_i \leq z$ for all $i$, if and only if

$\bigvee_i (a \wedge y_i) \leq z$.

Since this is true for all $z$, the (dual of the) Yoneda principle tells us that $a \wedge \bigvee_i y_i = \bigvee_i (a \wedge y_i)$, as desired. (We don’t need to add the hypothesis that the sup on the right side exists, for the first four lines after “We have” show that $a \wedge \bigvee_i y_i$ satisfies the defining property of that sup.)

(2) Suppose $x, y$ are elements of a frame $X$. Define $x \Rightarrow y$ to be $\bigvee \{a \in X : a \wedge x \leq y\}$. By definition, if $a \wedge x \leq y$, then $a \leq x \Rightarrow y$. Conversely, if $a \leq x \Rightarrow y$, then

$$a \wedge x \leq (x \Rightarrow y) \wedge x = \left(\bigvee \{b : b \wedge x \leq y\}\right) \wedge x = \bigvee \{b \wedge x : b \wedge x \leq y\},$$

where the second equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by $y$ (according to the defining property of sups). Hence $a \wedge x \leq y$, as desired. $\Box$

Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as $PX$, so long as we trust that $\neg p \vee q$ really does what we ask of implication. We’ll come to that point again later.
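The defining property used in the proof of (2) can be exercised concretely. The sketch below (my own illustration) works in the finite power-set lattice of a three-element set, builds $p \Rightarrow q$ as the sup of all $a$ with $a \wedge p \leq q$, exactly as in part (2), and verifies the adjunction element by element:

```python
from itertools import chain, combinations

# Implication in the finite frame PX for X = {0,1,2}, computed as the union
# of all a with a ∩ p ⊆ q, then checked against the adjunction
#   a ⊆ (p ⇒ q)   iff   a ∩ p ⊆ q.
X = {0, 1, 2}
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def implies(p, q):
    return frozenset().union(*(a for a in subsets if a & p <= q))

p, q = frozenset({0, 1}), frozenset({1, 2})
for a in subsets:
    assert (a <= implies(p, q)) == (a & p <= q)
print(sorted(implies(p, q)))  # the complement of p, union q: [1, 2]
```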

Part (2) has some interesting consequences *vis à vis* topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let $U, V$ be open sets in a topology $T$. According to our prescription, we define $U \Rightarrow V$ as the sup (the union) of all open sets $W$ with the property that $W \cap U \subseteq V$. We can think of this inclusion as living in the power set $PX$. Then, assuming our formula $\neg U \cup V$ for implication in the Boolean algebra $PX$ (where $\neg U$ denotes the complement of $U$), we would have $W \cap U \subseteq V$ if and only if $W \subseteq \neg U \cup V$. And thus, our implication *in the topology* is the union of all open sets $W$ contained in the (usually non-open) set $\neg U \cup V$. That is to say, $U \Rightarrow V$ is the *largest* open contained in $\neg U \cup V$, otherwise known as the *interior* of $\neg U \cup V$. Hence our formula:

$$U \Rightarrow V = \operatorname{int}(\neg U \cup V).$$
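Here is a hedged check of this interior formula in a toy topology on a three-element set (the particular chain of open sets is my own example): implication computed from its defining property agrees with the interior of $\neg U \cup V$ for every pair of opens.

```python
# Checking U ⇒ V = int(¬U ∪ V) in a small topology on X = {1, 2, 3}.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]  # a chain topology

def interior(S):
    # largest open contained in S (the empty set always qualifies)
    return max((W for W in opens if W <= S), key=len)

def implies(U, V):
    # union of all opens W with W ∩ U ⊆ V
    return frozenset().union(*(W for W in opens if W & U <= V))

for U in opens:
    for V in opens:
        assert implies(U, V) == interior((X - U) | V)
```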

**Definition**: A *Heyting algebra* is a lattice which admits an implication $x \Rightarrow y$ for any two elements $x, y$. A *complete Heyting algebra* is a complete lattice which admits an implication for any two elements.

Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make *intensional* distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a *morphism of frames* is a poset map which preserves finite meets and arbitrary sups. A *morphism of Heyting algebras* preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A *morphism of complete Heyting algebras* also preserves all structure in sight (sups, infs, and implication).

Heyting algebras are usually **not** Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We’ll be speaking more about that next time, but for now I’ll remark that Heyting algebra is the algebra which underlies *intuitionistic propositional calculus*.

**Exercise**: Show that in a Heyting algebra.

**Exercise**: (For those who know some general topology.) In a Heyting algebra, we define the negation $\neg x$ to be $x \Rightarrow 0$. For the Heyting algebra given by a topology, what can you say about $\neg U$ when $U$ is open and *dense*?
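As a hint toward this exercise, the phenomenon is easy to observe in a small model. In the toy topology below (my own example), the open set $\{1\}$ is dense (the only closed set containing it is all of $X$), and its negation, computed as the interior of its complement, comes out empty:

```python
# Negation ¬U = U ⇒ ∅ = int(X − U) in a small topology on X = {1, 2, 3}.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(S):
    # largest open contained in S
    return max((W for W in opens if W <= S), key=len)

def negation(U):
    return interior(X - U)

U = frozenset({1})   # dense: the only closed set containing it is X itself
assert negation(U) == frozenset()
```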

## 14 comments


April 9, 2008 at 6:35 pm

**Krishnamurthy Iyer**: I have been a lurker on this blog for some time now, and I must say I really enjoy reading the blog in general, and this series in particular. I hope to see more of this. Thanks a lot.

kris

April 10, 2008 at 11:21 am

**Todd Trimble**: Thanks for saying so, kris! I’ll keep ‘em coming.



May 4, 2008 at 9:06 am

**Vishal Lama**: Before commenting further, let me write down a list of propositions (related to meets/joins in lattices) that can be used in proving many results. These will be particularly helpful in proving the statement in exercise (2). The proofs aren’t too hard, so I won’t write them down.

i) $x \wedge y \leq x$ and $x \leq x \vee y$. (The first statement is the meet property and the second is obtained by dualizing the first.)

ii) $x \leq y$ iff $x \wedge y = x$. (This extremely useful statement can be proved using (i).) Its dual is,

$x \leq y$ iff $x \vee y = y$.

iii) If and , then and .

iv) If , then and .

v) If and , then and .

vi) ($x \leq z$ and $y \leq z$) iff $x \vee y \leq z$. (This is the defining property of joins. Thanks to Todd for pointing this one out!)

Solution to exercise (2):

——————————-

Using the meet property (shown in (i)) and (v), we have

.

Again, using the meet property and (v), we have

.

Combining the above two statements by using (iii), we finally get

.

May 4, 2008 at 12:07 pm

**Todd Trimble**: Okay, looks good, Vishal! I hope some others following along will join in too.

I may offer my own solutions or comments on solutions later, after we get a good batch of them. Maybe I’ll even collect them into a post.



February 15, 2010 at 8:07 am

**Vishal Lama**: Solution to exercise (1):

——————————

Perhaps this is a tad too late, but it wouldn’t hurt if I wrote down the solution now! :)

Suppose $y \leq z$. Then $y \wedge z = y$. Therefore, using the commutative, associative and meet properties of the meet operation, $(x \wedge y) \wedge (x \wedge z) = x \wedge y$. Thus, $x \wedge y \leq x \wedge z$, and so, in a meet-semilattice, the map $x \wedge -$ is a poset map. The latter is also a morphism of meet-semilattices since $x \wedge (y \wedge z) = (x \wedge y) \wedge (x \wedge z)$. Finally, if the meet-semilattice has a bottom element $0$, the map preserves it.

No, wait, the last statement can (should) simply be proved using Yoneda, since a meet-semilattice may not have complements! (updated)

February 15, 2010 at 5:54 pm

**Todd Trimble**: First part is right, certainly! Although idempotence should also be mentioned, because it’s used.

Right, don’t use complements. Actually, I wouldn’t use Yoneda either — a direct proof can be given: give reasons why both $x \wedge 0 \leq 0$ and $0 \leq x \wedge 0$.

February 15, 2010 at 7:05 pm

**Vishal Lama**: Ah, yes, idempotence, too. Thank you for bringing that up.

Ok, $x \wedge 0 \leq 0$ follows directly from the meet property ($x \wedge y \leq y$), and $0 \leq x \wedge 0$ holds because $0$ is the bottom element. So, using the anti-symmetry property, we have $x \wedge 0 = 0$. No doubt, this line of proof (you suggested) is much better!

February 15, 2010 at 8:00 pm

**Vishal Lama**: Solution to “very easy exercise”: Suppose for given $p$ and $q$, $a_1$ and $a_2$ are two implications. Then, $b \leq a_1$ iff ($b \wedge p \leq q$) iff $b \leq a_2$. Therefore, by Yoneda, $a_1 = a_2$.

February 15, 2010 at 11:09 pm

**Vishal Lama**: On the penultimate exercise, wouldn’t it be true that in a Heyting algebra, ?

February 20, 2010 at 12:46 am

**Todd Trimble**: Sorry not to have responded sooner; I’m on vacation and have been checking email only intermittently.

Right on both counts — that is, your solution to the “very easy” exercise is correct and so is your observation in the last comment. Good!