
In this installment, I will introduce the concept of Boolean algebra, one of the main stars of this series, and relate it to concepts introduced in previous lectures (distributive lattice, Heyting algebra, and so on). Boolean algebra is the algebra of classical propositional calculus, and so has an abstract logical provenance; but one of our eventual goals is to show how any Boolean algebra can also be represented in concrete set-theoretic (or topological) terms, as part of a powerful categorical duality due to Stone.

There are lots of ways to define Boolean algebras. Whether some proposed axiomatizations actually define Boolean algebras was for a long time a difficult open question (as with the Robbins conjecture, settled only in the last ten years or so with the help of computers); this is testament to the richness of the concept. Here we’ll discuss just a few definitions. The first is a traditional one, and one which is pretty snappy:

A Boolean algebra is a distributive lattice in which every element has a complement.

(If X is a lattice and x \in X, a complement of x is an element y such that x \wedge y = 0 and x \vee y = 1. A lattice is said to be complemented if every element has a complement. Observe that the notions of complement and complemented lattice are manifestly self-dual. Since the notion of distributive lattice is self-dual, so therefore is the notion of Boolean algebra.)

  • Example: Probably almost everyone reading this knows the archetypal example of a Boolean algebra: a power set PX, ordered by subset inclusion. As we know, this is a distributive lattice, and the complement S^c of a subset S \subseteq X satisfies S \cap S^c = \emptyset and S \cup S^c = X. (A small computational sketch of this example appears just after these two examples.)
  • Example: Also well known is that the Boolean algebra axioms mirror the usual interactions between conjunction \wedge, disjunction \vee, and negation \neg in ordinary classical logic. In particular, given a theory \mathbf{T}, there is a preorder whose elements are sentences (closed formulas) p of \mathbf{T}, ordered by p \leq q if the entailment p \to q is provable in \mathbf{T} using classical logic. By passing to logical equivalence classes (p \equiv q iff p \leftrightarrow q in \mathbf{T}), we get a poset with meets, joins, and complements satisfying the Boolean algebra axioms. This is called the Lindenbaum algebra of the theory \mathbf{T}.
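
To make the power set example concrete, here is a minimal Python sketch (the three-element set X is my own choice for illustration, not from the text): it brute-force checks the complement laws and the distributive law in PX.

    from itertools import chain, combinations

    # The Boolean algebra PX for X = {1, 2, 3}: meet is intersection,
    # join is union, and the complement of S is X - S.
    X = frozenset({1, 2, 3})
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(X), n) for n in range(len(X) + 1))]

    for S in subsets:
        Sc = X - S
        assert S & Sc == frozenset() and S | Sc == X     # complement laws
    for S in subsets:
        for T in subsets:
            for U in subsets:
                assert S & (T | U) == (S & T) | (S & U)  # distributive law
    print("PX is a complemented distributive lattice, i.e. a Boolean algebra")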

Exercise: Give an example of a complemented lattice which is not distributive.

As a possible leading hint for the previous exercise, here is a first order of business:

Proposition: In a distributive lattice, complements of elements are unique when they exist.

Proof: If both b and c are complementary to a, then b = b \wedge 1 = b \wedge (a \vee c) = (b \wedge a) \vee (b \wedge c) = 0 \vee (b \wedge c) = b \wedge c. Since b = b \wedge c, we have b \leq c. Similarly c = b \wedge c, so b = c. \Box

The definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a Heyting algebra is a lattice which admits an implication operator satisfying an adjoint relationship with the meet operator).

Theorem: A lattice is a Boolean algebra if and only if it is a Heyting algebra in which either of the following properties holds:

  1. (a \wedge x \leq y) if and only if (a \leq \neg x \vee y)
  2. \neg \neg x = x for all elements x

Proof: First let X be a Boolean algebra, and let x^c denote the complement of an element x \in X. Then I claim that a \wedge x \leq y if and only if a \leq x^c \vee y, proving that X admits an implication x \Rightarrow y = x^c \vee y. Then, taking y = 0, it follows that \neg x := (x \Rightarrow 0) = x^c \vee 0 = x^c, whence 1. follows. Also, since (by definition of complement) x is the complement of y if and only if y is the complement of x, we have x^{c c} = x, whence 2. follows.

[Proof of claim: if a \leq x^c \vee y, then x \wedge a \leq x \wedge (x^c \vee y) = (x \wedge x^c) \vee (x \wedge y) \leq 0 \vee y = y. On the other hand, if x \wedge a \leq y, then a = 1 \wedge a \leq (x^c \vee x) \wedge (x^c \vee a) =  x^c \vee (x \wedge a) \leq x^c \vee y. This completes the proof of the claim and of the forward implication.]

In the other direction, given a lattice which satisfies 1., it is automatically a Heyting algebra (with implication x \Rightarrow y := \neg x \vee y). In particular, it is distributive. From \neg x \leq \neg x \vee 0, we have (from 1.) x \wedge \neg x \leq 0; since 0 \leq x \wedge \neg x is automatic by definition of 0 = \bot, we get 0 = x \wedge \neg x. From 1 \wedge x \leq x, we have also (from 1.) that 1 \leq \neg x \vee x; since \neg x \vee x \leq 1 is automatic by definition of 1, we have \neg x \vee x = 1. Thus under 1., every element x has a complement \neg x.

On the other hand, suppose X is a Heyting algebra satisfying 2.: \neg \neg x = x. As above, we know x \wedge \neg x = 0. By the corollary below, we also know the function \neg: X \to X takes 0 to 1 and joins to meets (De Morgan law); since condition 2. is that \neg is its own inverse, it follows that \neg also takes meets to joins. Hence \neg x \vee x = \neg x \vee \neg \neg x = \neg(x \wedge \neg x) = \neg 0 = 1. Thus for a Heyting algebra which satisfies 2., every element x has a complement \neg x. This completes the proof. \Box
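
As a sanity check on the theorem, here is a minimal Python sketch of a Heyting algebra which is not Boolean. The five-open topology on \{1,2,3\} below is my own toy example: negation is computed as \neg U = (U \Rightarrow 0), and condition 2. visibly fails.

    # A toy topology on {1,2,3}, a Heyting algebra under inclusion.
    X = frozenset({1, 2, 3})
    opens = [frozenset(s) for s in [(), (1,), (3,), (1, 3), (1, 2, 3)]]

    def implies(U, V):  # U => V: the largest open W with W ∩ U ⊆ V
        return frozenset().union(*[W for W in opens if W & U <= V])

    def neg(U):         # ¬U := U => 0
        return implies(U, frozenset())

    for U in opens:
        print(sorted(U), "¬¬U =", sorted(neg(neg(U))))
    # ¬¬{1,3} = {1,2,3} ≠ {1,3}: condition 2. fails, so this
    # Heyting algebra is not a Boolean algebra.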

  • Exercise: Show that Boolean algebras can also be characterized as meet-semilattices X equipped with an operation \neg: X \to X for which a \wedge x \leq y if and only if a \leq \neg(x \wedge \neg y).

The proof above invoked the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y. The claim is that this De Morgan law (not the other \neg(x \wedge y) = \neg x \vee \neg y!) holds in a general Heyting algebra — the relevant result was actually posed as an exercise from the previous lecture:

Lemma: For any element c of a Heyting algebra X, the function - \Rightarrow c: X \to X is an order-reversing map (equivalently, an order-preserving map X^{op} \to X, or an order-preserving map X \to X^{op}). It is adjoint to itself, in the sense that - \Rightarrow c: X^{op} \to X is right adjoint to - \Rightarrow c: X \to X^{op}.

Proof: First, we show that a \leq b in X (equivalently, b \leq a in X^{op}) implies (b \Rightarrow c) \leq (a \Rightarrow c). But this conclusion holds iff (b \Rightarrow c) \wedge a \leq c, which is clear from (b \Rightarrow c) \wedge a \leq (b \Rightarrow c) \wedge b \leq c. Second, the adjunction holds because

(b \Rightarrow c) \leq  a in X^{op} if and only if

a \leq (b \Rightarrow c) in X if and only if

a \wedge b \leq c in X if and only if

b \wedge a \leq c in X if and only if

b \leq (a \Rightarrow c) in X. \Box

Corollary: - \Rightarrow c: X^{op} \to X takes any inf which exists in X^{op} to the corresponding inf in X. Equivalently, it takes any sup in X to the corresponding inf in X, i.e., (\bigvee_{s \in S} s) \Rightarrow c = \bigwedge_{s \in S} (s \Rightarrow c). (In particular, this applies to finite joins in X, and in particular, it applies to the case c = 0, where we conclude, e.g., the De Morgan law \neg(x \vee y) = \neg x \wedge \neg y.)
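
Continuing the toy topology from the sketch above (again my own example, not from the text), one can verify the corollary's De Morgan law mechanically, and also see that the dual law \neg(x \wedge y) = \neg x \vee \neg y genuinely fails in a Heyting algebra:

    X = frozenset({1, 2, 3})
    opens = [frozenset(s) for s in [(), (1,), (3,), (1, 3), (1, 2, 3)]]

    def implies(U, V):  # U => V: the largest open W with W ∩ U ⊆ V
        return frozenset().union(*[W for W in opens if W & U <= V])

    def neg(U):
        return implies(U, frozenset())

    for U in opens:
        for V in opens:
            assert neg(U | V) == neg(U) & neg(V)  # this De Morgan law holds

    U, V = frozenset({1}), frozenset({3})
    print(sorted(neg(U & V)), sorted(neg(U) | neg(V)))  # [1, 2, 3] vs [1, 3]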

  • Remark: If we think of sups as sums and infs as products, then we can think of implications x \Rightarrow y as behaving like exponentials y^x. Indeed, our earlier result that x \Rightarrow (-) preserves infs \bigwedge_{s \in S} y_s can then be recast in exponential notation as saying (\prod_{s \in S} y_s)^x = \prod_{s \in S} (y_s)^x, and our present corollary that (- \Rightarrow y) takes sups to infs can then be recast as saying y^{\sum_{s \in S} x_s} = \prod_{s \in S} y^{x_s}. Later we will state another exponential law for implication. This is no notational accident!

Let me reprise part of the lemma (in the case c = 0), because it illustrates a situation which comes up over and over again in mathematics. In part it asserts that \neg = (-)\Rightarrow 0: X \to X is order-reversing, and that there is a three-way equivalence:

a \leq \neg b if and only if a \wedge b = 0 if and only if b \leq \neg a.

This situation is an instance of what is called a “Galois connection” in mathematics. If X and Y are posets (or even preorders), a Galois connection between them consists of two order-reversing functions f: X \to Y, g: Y \to X such that for all x \in X, y \in Y, we have y \leq f(x) if and only if x \leq g(y). (It’s actually an instance of an adjoint pair: if we consider f as an order-preserving map X \to Y^{op} and g an order-preserving map Y^{op} \to X, then f(x) \leq y in Y^{op} if and only if x \leq g(y) in X.)

Here are some examples:

  1. The original example arises of course in Galois theory. If k is a field and k \subseteq E is a finite Galois extension with Galois group G = Gal(E/k) (of field automorphisms g: E \to E which fix the elements belonging to k), then there is a Galois connection consisting of maps Aut_{(-)}(E): PE \to PG and Fix: PG \to PE. This works as follows: to each subset S \subseteq E, define Aut_S(E) to be \{g \in G: g(s) = s \mbox{ for all } s \in S \}. In the other direction, to each subset T \subseteq G, define Fix(T) to be \{x \in E: g(x) = x \mbox{ for all } g \in T\}. Both Aut_{(-)}(E) and Fix(-) are order-reversing (for example, the larger the subset T \subseteq G, the more stringent the conditions for an element x \in E to belong to Fix(T)). Moreover, we have

    S \subseteq Fix(T) iff (g(x) = x for all x \in S, g \in T) iff T \subseteq Aut_S(E)

    so we do get a Galois connection. It is moreover clear that for any T \subseteq G, Fix(T) is an intermediate subfield between k and E, and for any S \subseteq E, Aut_S(E) is a subgroup of G. A principal result of Galois theory is that Fix(-) and Aut_{(-)}(E) are inverse to one another when restricted to the lattice of subgroups of G and the lattice of fields intermediate between k and E. Such a bijective correspondence induced by a Galois connection is called a Galois correspondence.

  2. Another basic Galois connection arises in algebraic geometry, between subsets J \subseteq k[x_1, \ldots, x_n] (of a polynomial algebra over a field k) and subsets V \subseteq k^n. Given J, define Z(J) (the zero locus of J) to be \{(a_1, \ldots, a_n): f(a_1, \ldots, a_n) = 0 \mbox{ for each polynomial } f \in J\}. On the other hand, define I(V) (the ideal of V) to be \{f \in k[x_1, \ldots, x_n]: f(a) = 0 \mbox{ for all } a = (a_1, \ldots, a_n) \in V\}. As in the case of Galois theory above, we clearly have a three-way equivalence

    V \subseteq Z(J) iff (f(a) = 0 for all a \in V, f \in J) iff J \subseteq I(V)

    so that Z(-), I(-) define a Galois connection between power sets (of the n-variable polynomial algebra and of n-dimensional affine space k^n). One defines an (affine algebraic) variety V \subseteq k^n to be a zero locus of some set. Then, on very general grounds (see below), any variety is the zero locus of its ideal. On the other hand, notice that I(V) is an ideal of the polynomial algebra. Not every ideal I of the polynomial algebra is the ideal of its zero locus, but according to the famous Hilbert Nullstellensatz (for k algebraically closed), those ideals I equal to their radical rad(I) = \{f \in k[x_1, \ldots, x_n]: f^m \in I \mbox{ for some } m \geq 1\} are. Thus, Z(-) and I(-) become inverse to one another when restricted to the lattice of varieties and the lattice of radical ideals, by the Nullstellensatz: there is a Galois correspondence between these objects.

  3. Both of the examples above are particular cases of a very general construction. Let X, Y be sets and let R \subseteq X \times Y be any relation between them. Then we can set up a Galois connection which in one direction takes a subset S \subseteq X to S \backslash R := \{y \in Y: (x, y) \in R \mbox{ for all } x \in S\}, and in the other takes T \subseteq Y to R/T := \{x \in X: (x, y) \in R \mbox{ for all } y \in T\}. Once again we have a three-way equivalence

    S \subseteq R/T iff S \times T \subseteq R iff T \subseteq S \backslash R.

    There are tons of examples of this flavor.
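
Here is a minimal Python sketch of this general construction, with R taken to be the divisibility relation on \{1, \ldots, 8\} (my own choice of relation); the three-way equivalence is checked for a sample pair S, T:

    # Galois connection induced by a relation R ⊆ X × Y (here: divisibility).
    X, Y = range(1, 9), range(1, 9)
    R = {(x, y) for x in X for y in Y if y % x == 0}

    def under(S):  # S\R = {y in Y : (x, y) in R for all x in S}
        return {y for y in Y if all((x, y) in R for x in S)}

    def over(T):   # R/T = {x in X : (x, y) in R for all y in T}
        return {x for x in X if all((x, y) in R for y in T)}

    S, T = {2, 3}, {6}
    print(S <= over(T),                             # S ⊆ R/T
          all((x, y) in R for x in S for y in T),   # S × T ⊆ R
          T <= under(S))                            # T ⊆ S\R
    # prints: True True True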

As indicated above, a Galois connection between posets X, Y is essentially the same thing as an adjoint pair between the posets X, Y^{op} (or between X^{op}, Y if you prefer; Galois connections are after all symmetric in X, Y). I would like to record a few basic results about Galois connections/adjoint pairs.

Proposition:

  1. Given order-reversing maps f: X \to Y, g: Y \to X which form a Galois connection, we have x \leq g f(x) for all x \in X and y \leq f g(y) for all y \in Y. (Given poset maps f, g which form an adjoint pair f \dashv g, we have x \leq g f(x) for all x \in X and f g(y) \leq y for all y \in Y.)
  2. Given a Galois connection as above, f(x) = f g f(x) for all x \in X and g(y) = g f g(y) for all y \in Y. (Given an adjoint pair f \dashv g as above, the same equations hold.) Therefore a Galois connection (f, g) induces a Galois correspondence between the elements of the form f(x) and the elements of the form g(y).

Proof: (1.) It suffices to prove the statements for adjoint pairs. But under the assumption f \dashv g, x \leq g f(x) if and only if f(x) \leq f(x), which is certainly true. The other statement is dual.

(2.) Again it suffices to prove the equations for the adjoint pair. Applying the order-preserving map f
to x \leq g f(x) from 1. gives f(x) \leq f g f(x). Applying f g(y) \leq y from 1. to y = f(x) gives f g f(x) \leq f(x). Hence f(x) = f g f(x). The other equation is dual. \Box

Incidentally, the equations of 2. show why an algebraic variety V is the zero locus of its ideal (see example 2. above): if V = Z(S) for some set of polynomials S, then V = Z(S) = Z I Z(S) = Z I(V). They also show that for any element x in a Heyting algebra, we have \neg \neg \neg x = \neg x, even though \neg \neg y = y is in general false.

Let (f, g) be a Galois connection (or f \dashv g an adjoint pair). By the proposition, c = gf: X \to X is an order-preserving map with the following properties:

x \leq c(x) for all x \in X

c c(x) = c(x) for all x \in X.

Poset maps c: X \to X with these properties are called closure operators. We have earlier discussed examples of closure operators: if for instance G is a group, then the operator c: PG \to PG which takes a subset S \subseteq G to the subgroup generated by S is a closure operator. Or, if X is a topological space, then the operator c: PX \to PX which takes a subset S \subseteq X to its topological closure \bar{S} is a closure operator. Or, if X is a poset, then the operator c: PX \to PX which takes S \subseteq X to \{a \in X: a \leq s \mbox{ for some } s \in S\} is a closure operator. Examples like these can be multiplied at will.
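
Here is a minimal Python sketch of the last of these examples, taking the poset to be \{1, \ldots, 12\} ordered by divisibility (my own choice of poset); the two closure-operator laws are checked directly:

    # Down-closure c(S) = {a : a ≤ s for some s in S}, where a ≤ b means a | b.
    P = range(1, 13)

    def c(S):
        return {a for a in P if any(s % a == 0 for s in S)}

    S = {8, 9}
    print(S <= c(S))         # x ≤ c(x): True
    print(c(c(S)) == c(S))   # c c(x) = c(x): True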

One virtue of closure operators is that they give a useful means of constructing new posets from old. Specifically, if c: X \to X is a closure operator, then a fixed point of c (or a c-closed element of X) is an element x such that c(x) = x. The collection Fix(c) of fixed points is partially ordered by the order in X. For example, the lattice of fixed points of the operator c: PG \to PG above is the lattice of subgroups of G. For any closure operator c, notice that Fix(c) is the same as the image c(X) of c.

One particular use is that the fixed points of the double negation closure \neg \neg: X \to X on a Heyting algebra X form a Boolean algebra Fix(\neg\neg), and the map \neg \neg: X \to Fix(\neg \neg) is a Heyting algebra map. This is not trivial! And it gives a means of constructing some rather exotic Boolean algebras (“atomless Boolean algebras”) which may not be so familiar to many readers.
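
To see a small instance of this, we can reuse my toy five-open topology on \{1,2,3\} from the earlier sketches: the \neg\neg-fixed opens form a four-element Boolean algebra, even though the ambient Heyting algebra is not Boolean.

    X = frozenset({1, 2, 3})
    opens = [frozenset(s) for s in [(), (1,), (3,), (1, 3), (1, 2, 3)]]

    def implies(U, V):  # U => V: the largest open W with W ∩ U ⊆ V
        return frozenset().union(*[W for W in opens if W & U <= V])

    def neg(U):
        return implies(U, frozenset())

    fixed = [U for U in opens if neg(neg(U)) == U]
    print([sorted(U) for U in fixed])  # [[], [1], [3], [1, 2, 3]]
    # This is a copy of the Boolean algebra 2 x 2: the complement of {1}
    # in Fix(¬¬) is {3}, with join computed as ¬¬({1} ∪ {3}) = {1,2,3}.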

The following exercises are aimed at proving these results. If no one else does, I will probably give solutions next time or sometime soon.

Exercise: If X is a Heyting algebra and x, y, z \in X, prove the “exponential law” ((x \wedge y) \Rightarrow z) = (x \Rightarrow (y \Rightarrow z)). Conclude that \neg(x \wedge y) = (y \Rightarrow \neg x) = (x \Rightarrow \neg y).

Exercise: We have seen that (x \Rightarrow y) \wedge x \leq y in a Heyting algebra. Use this to prove (x \Rightarrow y) \wedge (y \Rightarrow z) \leq (x \Rightarrow z).

Exercise: Show that double negation \neg \neg: X \to X on a Heyting algebra preserves finite meets. (The inequality \neg \neg(x \wedge y) \leq \neg \neg x \wedge \neg \neg y is easy. The reverse inequality takes more work; try using the previous two exercises.)

Exercise: If c: X \to X is a closure operator, show that the inclusion map i: Fix(c) \hookrightarrow X is right adjoint to the projection c: X \to Fix(c) to the image of c. Conclude that meets of elements in Fix(\neg \neg) are calculated as they would be as elements in X, and also that \neg \neg: X \to Fix(\neg \neg) preserves joins.

Exercise: Show that the fixed points of the double negation operator on a topology (as Heyting algebra) are the regular open sets, i.e., those open sets equal to the interior of their closure. Give some examples of non-regular open sets. Incidentally, is the lattice you get by taking the opposite of a topology also a Heyting algebra?

Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and sup). Examples include the poset PX of all subsets of a set X, and the poset Sub(V) of all subspaces of a vector space V.

I take it that most readers are already familiar with many of the properties of the poset PX; there is for example the distributive law A \cap (B \cup C) = (A \cap B) \cup (A \cap C), and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset Sub(V), as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to Sub(V) is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, classical logic if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).

To get a sense of the weirdness of Sub(V), take for example a 2-dimensional vector space V. The bottom element is the zero space \{0\}, the top element is V, and the rest of the elements of Sub(V) are 1-dimensional: lines through the origin. For 1-dimensional spaces x, y, there is no relation x \leq y unless x and y coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:

       V = 1
       / | \
     /   |   \
    x    y    z
     \   |   /
       \ | /
         0

Observe that for distinct elements x, y, z in the middle level, we have for example x \wedge y = 0 = x \wedge z (0 is the largest element contained in both x and y), and also for example y \vee z = 1 (1 is the smallest element containing y and z). It follows that x \wedge (y \vee z) = x \wedge 1 = x, whereas (x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0. The distributive law fails in Sub(V)!
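
This failure is easy to confirm by brute force. Below is a minimal Python sketch of the lattice just pictured (often called M3), with the elements encoded abstractly as strings:

    from itertools import product

    # The subspace lattice pictured above: 0 < x, y, z < 1, with x, y, z
    # pairwise incomparable.
    elems = ["0", "x", "y", "z", "1"]

    def meet(a, b):
        if a == b: return a
        if a == "1": return b
        if b == "1": return a
        return "0"

    def join(a, b):
        if a == b: return a
        if a == "0": return b
        if b == "0": return a
        return "1"

    for a, b, c in product(elems, repeat=3):
        if meet(a, join(b, c)) != join(meet(a, b), meet(a, c)):
            print(a, b, c)  # x y z: x ∧ (y ∨ z) = x, but (x∧y) ∨ (x∧z) = 0
            break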

Definition: A lattice is distributive if x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z) for all x, y, z. That is to say, a lattice X is distributive if the map x \wedge -: X \to X, taking an element y to x \wedge y, is a morphism of join-semilattices.

  1. Exercise: Show that in a meet-semilattice, x \wedge -: X \to X is a poset map. Is it also a morphism of meet-semilattices? If X has a bottom element, show that the map x \wedge - preserves it.
  2. Exercise: Show that in any lattice, we at least have (x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z) for all elements x, y, z.

Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:

Theorem: The notion of distributive lattice is self-dual.

Proof: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z), follows from the distributive lattice axioms.

Expand the right side to ((x \vee y) \wedge x) \vee ((x \vee y) \wedge z), by distributivity. This reduces to x \vee [(x \vee y) \wedge z], by an absorption law. Expand this again, by distributivity, to x \vee (x \wedge z) \vee (y \wedge z). This reduces to x \vee (y \wedge z), by the other absorption law. This completes the proof. \Box

Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:

Definition: A frame is a sup-lattice for which x \wedge -: X \to X is a morphism of sup-lattices, for every x \in X. In other words, for every subset S \subseteq X, we have \sup(\{x \wedge s: s \in S\}) = x \wedge \sup(S), or, as is often written,

\bigvee_{s \in S} x \wedge s = x \wedge \bigvee_{s \in S} s.

Example: A power set PX, as always partially ordered by inclusion, is a frame. In this case, it means that for any subset A and any collection of subsets \{B_i: i \in I\}, we have

A \cap (\bigcup_{i \in I} B_i) = \bigcup_{i \in I} A \cap B_i

This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.
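
In the meantime, here is a direct check of one finite instance in Python (the sample sets are my own, chosen arbitrarily):

    A = {1, 2, 3}
    Bs = [{2, 3}, {3, 4}, {5}]
    lhs = A & set().union(*Bs)               # A ∩ (∪ B_i)
    rhs = set().union(*(A & B for B in Bs))  # ∪ (A ∩ B_i)
    print(lhs == rhs)                        # True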

Example: If X is a set, a topology on X is a subset \mathbf{T} \subseteq PX of the power set, partially ordered by inclusion as PX is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element \emptyset and the empty meet or top element X of PX are elements of \mathbf{T}, and also:

  1. If U, V are elements of \mathbf{T}, then so is U \cap V.
  2. If \{U_i: i \in I\} is a collection of elements of \mathbf{T}, then \bigcup_{i \in I} U_i is an element of \mathbf{T}.

A topological space is a set X which is equipped with a topology \mathbf{T}; the elements of the topology are called open subsets of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in PX (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.

The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! But just to be on the safe side, I’ll now give one example of a topological space, with a promise of more to come later. Let X be the set \mathbb{R}^n of n-tuples of real numbers. First, define the open ball in \mathbb{R}^n centered at a point x \in \mathbb{R}^n and of radius r > 0 to be the set \{y \in \mathbb{R}^n: ||x - y|| < r\}. Then, define a subset U \subseteq \mathbb{R}^n to be open if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology is by definition the collection of open sets.

It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the triangle inequality. (Exercise?)
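
Here is a numeric sketch of that lemma in Python (the centers, radii, and sample point are my own made-up data): around any point p in the overlap of two balls, the ball of radius min(r_1 - d_1, r_2 - d_2) fits inside both, where d_i is the distance from p to the i-th center; this is exactly where the triangle inequality enters.

    import math
    import random

    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    c1, r1 = (0.0, 0.0), 2.0
    c2, r2 = (1.5, 0.0), 1.0
    p = (1.0, 0.3)  # a point in the intersection of the two balls
    assert dist(p, c1) < r1 and dist(p, c2) < r2

    # The triangle inequality guarantees this radius works:
    r = min(r1 - dist(p, c1), r2 - dist(p, c2))

    random.seed(0)
    for _ in range(1000):  # sample points q with ||p - q|| < r
        t, s = random.uniform(0, 2 * math.pi), r * random.random()
        q = (p[0] + s * math.cos(t), p[1] + s * math.sin(t))
        assert dist(q, c1) < r1 and dist(q, c2) < r2  # q lies in both balls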

Topology is a huge field in its own right; much of our interest here will be in its interplay with logic. To that end, I want to bring in, in addition to the connectives “and” and “or” we’ve discussed so far, the implication connective in logic. Most readers probably know that in ordinary logic, the formula p \Rightarrow q (“p implies q”) is equivalent to “either not p or q” — symbolically, we could define p \Rightarrow q as \neg p \vee q. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.

The main property we ask of implication is modus ponens: given p and p \Rightarrow q, we may infer q. In symbols, writing the inference or entailment relation as \leq, this is expressed as p \wedge (p \Rightarrow q) \leq q. And, we ask that implication be the weakest possible such assumption, i.e., that material implication p \Rightarrow q be the weakest element a whose presence in conjunction with p entails q. In other words, for given p and q, we now define implication p \Rightarrow q by the property

(a \wedge p \leq q) if and only if (a \leq p \Rightarrow q).

As a very easy exercise, show by Yoneda that an implication p \Rightarrow q is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity holds:

Theorem:

  • (1) If X is a meet-semilattice which admits an implication operator, then for every element p, the operator p \wedge -: X \to X preserves any sups which happen to exist in X.
  • (2) If X is a frame, then X admits an implication operator.

Proof: (1) Suppose S \subseteq X has a sup in X, here denoted \bigvee_{s \in S} s. We have

(\bigvee_{s \in S} s) \wedge p \leq q if and only if

\bigvee_{s \in S} s \leq p \Rightarrow q if and only if

for all s \in S, (s \leq p \Rightarrow q) if and only if

for all s \in S, (s \wedge p \leq q) if and only if

\bigvee_{s \in S} (s \wedge p) \leq q.

Since this is true for all q, the (dual of the) Yoneda principle tells us that (\bigvee_{s \in S} s) \wedge p = \bigvee_{s \in S} (s \wedge p), as desired. (We don’t need to add the hypothesis that the sup on the right side exists, for the first four lines after “We have” show that (\bigvee_{s \in S} s) \wedge p satisfies the defining property of that sup.)

(2) Suppose p, q are elements of a frame X. Define p \Rightarrow q to be \sup(\{a \in X: a \wedge p \leq q\}). By definition, if a \wedge p \leq q, then a \leq p \Rightarrow q. Conversely, if a \leq p \Rightarrow q, then

a \wedge p \leq \sup\{x: x \wedge p \leq q\} \wedge p = \sup\{x \wedge p: x \wedge p \leq q\},

where the equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by q (according to the defining property of sups). Hence a \wedge p \leq q, as desired. \Box

Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as PX, so long as we trust that p \Rightarrow q := \neg p \vee q really does what we ask of implication. We’ll come to that point again later.

Part (2) has some interesting consequences vis-à-vis topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let U, V be open sets in a topology. According to our prescription, we define U \Rightarrow V as the sup (the union) of all open sets W with the property that W \cap U \subseteq V. We can think of this inclusion as living in the power set PX. Then, assuming our formula U^c \cup V for implication in the Boolean algebra PX (where U^c denotes the complement of U), we would have W \subseteq U^c \cup V. And thus, our implication U \Rightarrow V in the topology is the union of all open sets W contained in the (usually non-open) set U^c \cup V. That is to say, U \Rightarrow V is the largest open set contained in U^c \cup V, otherwise known as the interior of U^c \cup V. Hence our formula:

U \Rightarrow V = int(U^c \cup V).
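
As a quick mechanical check of this formula, here is a Python sketch on my toy five-open topology on \{1,2,3\} used earlier: the sup prescription for U \Rightarrow V and the interior formula agree on all pairs of opens.

    X = frozenset({1, 2, 3})
    opens = [frozenset(s) for s in [(), (1,), (3,), (1, 3), (1, 2, 3)]]

    def implies(U, V):  # sup of all opens W with W ∩ U ⊆ V
        return frozenset().union(*[W for W in opens if W & U <= V])

    def interior(S):    # largest open contained in S
        return frozenset().union(*[W for W in opens if W <= S])

    for U in opens:
        for V in opens:
            assert implies(U, V) == interior((X - U) | V)
    print("U => V = int(U^c ∪ V) verified on this topology")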

Definition: A Heyting algebra is a lattice H which admits an implication p \Rightarrow q for any two elements p, q \in H. A complete Heyting algebra is a complete lattice which admits an implication for any two elements.

Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make intensional distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a morphism of frames is a poset map which preserves finite meets and arbitrary sups. A morphism of Heyting algebras preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A morphism of complete Heyting algebras also preserves all structure in sight (sups, infs, and implication).

Heyting algebras are usually not Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We’ll be speaking more about that soon, but for now I’ll remark that Heyting algebra is the algebra which underlies intuitionistic propositional calculus.

Exercise: Show that (0 \Rightarrow 0) = 1 in a Heyting algebra.

Exercise: (For those who know some general topology.) In a Heyting algebra, we define the negation \neg x to be x \Rightarrow 0. For the Heyting algebra given by a topology, what can you say about \neg U when U is open and dense?
