This post is a continuation of the discussion of “the elementary theory of the category of sets” [ETCS] which we had begun last time, here and in the comments which followed. My thanks go to all who commented, for some useful feedback and thought-provoking questions.
Today I’ll describe some of the set-theoretic operations and “internal logic” of ETCS. I have a feeling that some people are going to love this, and some are going to hate it. My main worry is that it will leave some readers bewildered or exasperated, thinking that category theory has an amazing ability to make easy things difficult.
- An aside: has anyone out there seen the book Mathematics Made Difficult? It’s probably out of print by now, but I recommend checking it out if you ever run into it — it’s a kind of extended in-joke which pokes fun at category theory and abstract methods generally. Some category theorists I know take a dim view of this book; I for my part found certain passages hilarious, in some cases making me laugh out loud for five minutes straight. There are category-theory-based books and articles out there which cry out for parody!
In an attempt to nip my concerns in the bud, let me remind my readers that there are major differences between the way that standard set theories like ZFC treat membership and the way ETCS treats membership, and that differences at such a fundamental level are bound to propagate throughout the theoretical development, and impart a somewhat different character or feel between the theories. The differences may be summarized as follows:
- Membership in ZFC is a global relation between objects of the same type (sets).
- Membership in ETCS is a local relation between objects of different types (“generalized” elements or functions, and sets).
Part of what we meant by “local” is that an element per se is always considered relative to a particular set to which it belongs; strictly speaking, as per the discussion last time, the same element is never considered as belonging to two different sets. That is, in ETCS, an (ordinary) element of a set $S$ is defined to be a morphism $x: 1 \to S$; since the codomain is fixed, the same morphism cannot be an element $x: 1 \to T$ of a different set $T$. This implies in particular that in ETCS, there is no meaningful global intersection operation on sets, which in ZFC is defined by:

$S \cap T = \{x : (x \in S) \wedge (x \in T)\}.$
Instead, in ETCS, what we have is a local intersection operation on subsets of a set. But even the word “subset” requires care, because of how we are now treating membership. So let’s back up, and lay out some simple but fundamental definitions of terms as we are now using them.
Given two monomorphisms $i: A \to X$, $j: B \to X$, we write $i \subseteq j$ ($A \subseteq B$ if the monos are understood, or $A \subseteq_X B$ if we wish to emphasize this is local to $X$) if there is a morphism $k: A \to B$ such that $i = j k$. Since $j$ is monic, there can be at most one such morphism $k$; since $i$ is monic, such $k$ must be monic as well. We say $i, j$ define the same subset if this $k$ is an isomorphism. So: subsets of $X$ are defined to be isomorphism classes of monomorphisms into $X$. As a simple exercise, one may show that monos $i, j$ into $X$ define the same subset if and only if $i \subseteq j$ and $j \subseteq i$. The (reflexive, transitive) relation $\subseteq$ on monomorphisms thus induces a reflexive, transitive, antisymmetric relation, i.e., a partial order, on subsets of $X$.

Taking some notational liberties, we write $A \subseteq X$ to indicate a subset of $X$ (as isomorphism class of monos). If $x: U \to X$ is a generalized element, let us say $x$ is in a subset $A \subseteq X$ if it factors (evidently uniquely) through any representative mono $i: A \to X$, i.e., if there exists $x': U \to A$ such that $x = i x'$. Now the intersection of two subsets $A \subseteq X$ and $B \subseteq X$ is defined to be the subset $A \cap B \subseteq X$ defined by the pullback of any two representative monos $i: A \to X$, $j: B \to X$. Following the “Yoneda principle”, it may equivalently be defined up to isomorphism by specifying its generalized elements:

$A \cap B := \{x \in X : (x \mbox{ is in } A) \wedge (x \mbox{ is in } B)\}.$
Thus, intersection works essentially the same way as in ZFC, only it’s local to subsets of a given set.
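Since the pullback description can feel abstract, here is a minimal finite-set sketch in Python (the helper name and the sample data are ad hoc, invented purely for illustration, not part of any ETCS formalism): two subsets of a set $X$ are presented as injections, and their pullback recovers the usual intersection.

    def pullback(i, j):
        """Pullback of two functions i: A -> X, j: B -> X (given as dicts):
        the set of pairs (a, b) with i(a) == j(b)."""
        return {(a, b) for a in i for b in j if i[a] == j[b]}

    # Two subsets of X = {0,...,5}, presented as injections (monos) into X.
    i = {a: a for a in (0, 1, 2, 3)}   # A = {0,1,2,3} -> X
    j = {b: b for b in (2, 3, 4, 5)}   # B = {2,3,4,5} -> X

    P = pullback(i, j)
    # Composing either projection of P with the given mono lands exactly on A ∩ B.
    print(sorted({i[a] for (a, b) in P}))   # [2, 3]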
While we’re at it, let’s reformulate the power set axiom in this language: it says simply that for each set $B$ there is a set $P(B)$ and a subset $\in_B \subseteq B \times P(B)$, such that for any relation $R \subseteq B \times A$, there is a unique “classifying map” $\chi_R: A \to P(B)$ whereby, under $1_B \times \chi_R: B \times A \to B \times P(B)$, we have

$R = (1_B \times \chi_R)^{-1}(\in_B).$

The equality is an equality between subsets, and the inverse image on the right is defined by a pullback. In categorical set theory notation,

$R = \{(b, a) \in B \times A : b \in_B \chi_R(a)\}.$

Hence, there are natural bijections

$R \subseteq B \times A \quad \longleftrightarrow \quad \chi_R: A \to P(B)$

between subsets and classifying maps. The subset corresponding to $\phi: A \to P(B)$ is denoted $[\phi]$ or $S_{\phi}$, and is called the extension of $\phi$.

The set $P(1)$ plays a particularly important role; it is called the “subset classifier” because subsets $A \subseteq X$ are in natural bijection with functions $\chi_A: X \to P(1)$. [Cf. classifying spaces in the theory of fiber bundles.]

In ordinary set theory, the role of $P(1)$ is played by the 2-element set $\{f, t\}$. Here subsets $A \subseteq X$ are classified by their characteristic functions $\chi_A: X \to \{f, t\}$, defined by $\chi_A(x) := t$ iff $x \in A$. We thus have $P(1) \cong \{f, t\}$; the elementhood relation $\in_1 \subseteq 1 \times P(1)$ boils down to the element $t: 1 \to P(1)$. Something similar happens in ETCS set theory:

Lemma 1: The domain of elementhood $\in_1 \to 1 \times P(1) \cong P(1)$ is terminal.

Proof: A map $X \to \in_1$, that is, a map $\chi: X \to P(1)$ which is in $\in_1$, corresponds exactly to a subset $\chi^{-1}(\in_1) \subseteq X$ which contains all of $X$ (i.e., to the subobject $1_X: X \subseteq X$). Since the only such subset is $1_X$, there is exactly one map $X \to \in_1$. $\Box$

Hence elementhood $\in_1 \subseteq 1 \times P(1)$ is given by an element $t: 1 \to P(1)$. The power set axiom says that a subset $A \subseteq X$ is retrieved from its classifying map $\chi_A: X \to P(1)$ as the pullback $\chi_A^{-1}(t) \subseteq X$.

Part of the power of, well, power sets is in a certain dialectic between external operations on subsets and internal operations on $P(1)$; one can do some rather amazing things with this. The intuitive (and pre-axiomatic) point is that if $\mathbf{C}$ has finite products, equalizers, and power objects, then $P(1)$ is a representing object for the functor

$\mathrm{Sub}: \mathbf{C}^{op} \to Set$

which maps an object $X$ to the collection of subobjects of $X$, and which maps a morphism (“function”) $f: X \to Y$ to the “inverse image” function $f^{-1}: \mathrm{Sub}(Y) \to \mathrm{Sub}(X)$, that sends a subset $j: B \subseteq Y$ to the subset $f^{-1}(B) \subseteq X$ given by the pullback of the arrows $f: X \to Y$, $j: B \to Y$. By the Yoneda lemma, this representability means that external natural operations on the $\mathrm{Sub}(X)$ correspond to internal operations on the object $P(1)$. As we will see, one can play off the external and internal points of view against each other to build up a considerable amount of logical structure, enough for just about any mathematical purpose.
- Remark: A category satisfying just the first three axioms of ETCS, namely existence of finite products, equalizers, and power objects, is called an (elementary) topos. Most or perhaps all of this post will use just those axioms, so we are really doing some elementary topos theory. As I was just saying, we can build up a tremendous amount of logic internally within a topos, but there’s a catch: this logic will be in general intuitionistic. One gets classical logic (including law of the excluded middle) if one assumes strong extensionality [where we get the definition of a well-pointed topos]. Topos theory has a somewhat fearsome reputation, unfortunately; I’m hoping these notes will help alleviate some of the sting.
To continue this train of thought: by the Yoneda lemma, the representing isomorphism

$\theta: \hom(-, P(1)) \overset{\sim}{\to} \mathrm{Sub}(-)$

is determined by a universal element $\theta_{P(1)}(1_{P(1)})$, i.e., a subset of $P(1)$, namely the mono $t: 1 \to P(1)$. In that sense, $t: 1 \to P(1)$ plays the role of a universal subset. The Yoneda lemma implies that external natural operations on general posets $\mathrm{Sub}(X)$ are completely determined by how they work on the universal subset.

Internal Meets

To illustrate these ideas, let us consider intersection. Externally, the intersection operation is a natural transformation

$\cap_X: \mathrm{Sub}(X) \times \mathrm{Sub}(X) \to \mathrm{Sub}(X).$

This corresponds to a natural transformation

$\hom(X, P(1)) \times \hom(X, P(1)) \to \hom(X, P(1))$

which (by Yoneda) is given by a function $\wedge: P(1) \times P(1) \to P(1)$. Working through the details, this function is obtained by putting $X = P(1) \times P(1)$ and chasing the pair of projections $(\pi_1, \pi_2)$ through the composite

$\hom(X, P(1)) \times \hom(X, P(1)) \cong \mathrm{Sub}(X) \times \mathrm{Sub}(X) \overset{\cap_X}{\to} \mathrm{Sub}(X) \cong \hom(X, P(1)).$

Let’s analyze this bit by bit. The subset $[\pi_1] = \pi_1^{-1}(t) \subseteq P(1) \times P(1)$ is given by

$t \times 1: 1 \times P(1) \to P(1) \times P(1),$

and the subset $[\pi_2] = \pi_2^{-1}(t) \subseteq P(1) \times P(1)$ is given by

$1 \times t: P(1) \times 1 \to P(1) \times P(1).$

Hence $[\pi_1] \cap [\pi_2]$ is given by the pullback of the functions $t \times 1$ and $1 \times t$, which is just

$t \times t: 1 \to P(1) \times P(1).$

The map $\wedge: P(1) \times P(1) \to P(1)$ is thus defined to be the classifying map of $t \times t: 1 \subseteq P(1) \times P(1)$.
To go from the internal meet back to the external intersection operation, let $A, B \subseteq X$ be two subsets, with classifying maps $\chi_A, \chi_B: X \to P(1)$. By the definition of $\wedge$, we have that for all generalized elements $x: U \to X$,

$\chi_A(x) \wedge \chi_B(x) = t \ \mbox{ if and only if } \ \chi_A(x) = t \mbox{ and } \chi_B(x) = t$

(where the equality signs are interpreted with the help of equalizers). This holds true iff $x$ is in the subset $A$ and $x$ is in the subset $B$, i.e., if and only if $x$ is in the subset $A \cap B$. Thus $\wedge \circ (\chi_A, \chi_B): X \to P(1)$ is indeed the classifying map of $A \cap B \subseteq X$. In other words, $\chi_{A \cap B} = \chi_A \wedge \chi_B$.
A by-product of the interplay between the internal and external is that the internal intersection operator

$\wedge: P(1) \times P(1) \to P(1)$

is the meet operator of an internal meet-semilattice structure on $P(1)$: it is commutative, associative, and idempotent (because that is true of external intersection). The identity element for $\wedge$ is the element $t: 1 \to P(1)$. In particular, $P(1)$ carries an internal poset structure: given generalized elements $u, v: U \to P(1)$, we may define

$u \leq v \ \mbox{ if and only if } \ u = u \wedge v,$

and this defines a reflexive, transitive, antisymmetric relation $[\leq] \subseteq P(1) \times P(1)$, equivalently described as the equalizer

$[\leq] \to P(1) \times P(1) \rightrightarrows P(1)$

of the maps $\pi_1: P(1) \times P(1) \to P(1)$ and $\wedge: P(1) \times P(1) \to P(1)$. We have that $u \leq v$ if and only if $(u, v): U \to P(1) \times P(1)$ factors through $[\leq]$.
Internal Implication
Here we begin to see some of the amazing power of the interplay between internal and external logical operations. We will prove that $P(1)$ carries an internal Heyting algebra structure (ignoring joins for the time being).

Let’s recall the notion of Heyting algebra in ordinary naive set-theoretic terms: it’s a lattice that has a material implication operator $\Rightarrow$ such that, for all $x, y, z$,

$x \wedge y \leq z \ \mbox{ if and only if } \ x \leq y \Rightarrow z.$

Now: by the universal property of $P(1)$, a putative implication operation $\Rightarrow: P(1) \times P(1) \to P(1)$ is uniquely determined as the classifying map of its inverse image $(\Rightarrow)^{-1}(t) \subseteq P(1) \times P(1)$, whose collection of generalized elements is

$\{(u, v): U \to P(1) \times P(1) \ : \ (u \Rightarrow v) = t\}.$

Given $(u, v): U \to P(1) \times P(1)$, the equality here is equivalent to

$t \leq (u \Rightarrow v)$

(because $t$ is maximal), which in turn is equivalent to

$t \wedge u = u \leq v.$

This means we should define $\Rightarrow: P(1) \times P(1) \to P(1)$ to be the classifying map of the subset

$[\leq] \subseteq P(1) \times P(1).$
Theorem 1: $P(1)$ admits internal implication.

Proof: We must check that for any three generalized elements $u, v, w: U \to P(1)$, we have

$w \leq (u \Rightarrow v) \ \mbox{ if and only if } \ w \wedge u \leq v.$

Passing to the external picture, let $A, B, W \subseteq U$ be the subsets classified by $u, v, w$. Now: according to how we defined $\Rightarrow$, a generalized element

$(u, v): V \to P(1) \times P(1)$

is in $[\Rightarrow] = [\leq]$ if and only if $u \leq v$. This applies in particular to the restriction of $(u, v)$ along any monomorphism $i: W \to U$ that represents the subset $W \subseteq U$: the condition $w \leq (u \Rightarrow v)$ says exactly that $(u i, v i): W \to P(1) \times P(1)$ is in $[\leq]$, i.e., that $u i \leq v i$.

Lemma 2: The composite

$u i = \chi_A \circ i: W \to U \to P(1)$

is the classifying map of the subset $W \cap A \subseteq W$.

Proof: As subsets of $W$,

$(\chi_A i)^{-1}(t) = i^{-1} \chi_A^{-1}(t) = i^{-1}(A) = W \cap A,$

where the last equation holds because both sides are the subsets defined as the pullback of two representative monos $i: W \to U$, $j: A \to U$. $\Box$

Continuing the proof of Theorem 1, we see by Lemma 2 that the condition $u i \leq v i$ corresponds externally to the condition

$W \cap A \subseteq W \cap B,$

and this condition is equivalent to $W \cap A \subseteq B$. Passing back to the internal picture, this is equivalent to $w \wedge u \leq v$, and the proof of Theorem 1 is complete. $\Box$
Cartesian Closed Structure
Next we address a comment made by “James”, that a category satisfying the ETCS axioms is cartesian closed. As with everything else in this article, this uses only the fact that such a category is a topos: it has finite products, equalizers, and “power sets”.

Proposition 1: If $A, B$ are “sets”, then $P(A \times B)$ represents an exponential $P(B)^A$.

Proof: By the power set axiom, there is a bijection between maps into the power set and relations:

$X \to P(A \times B) \quad \longleftrightarrow \quad R \subseteq X \times (A \times B)$

which is natural in $X$. By the same token, there is a natural bijection

$R \subseteq (X \times A) \times B \quad \longleftrightarrow \quad X \times A \to P(B).$

Putting these together, we have a natural isomorphism

$\hom(X, P(A \times B)) \cong \hom(X \times A, P(B)),$

and this representability means precisely that $P(A \times B)$ plays the role of an exponential $P(B)^A$. $\Box$

Corollary 1: $P(A) \cong P(1)^A$.

The universal element of this representation is an evaluation map $A \times P(A \times B) \to P(B)$, which is just the classifying map of the subset $\in_{A \times B} \subseteq (A \times B) \times P(A \times B) \cong B \times (A \times P(A \times B))$.

Thus, $P(A \times B)$ represents the set of all functions $A \to P(B)$ (that is, relations from $A$ to $B$). This is all we need to continue the discussion of internal logic in this post, but let’s also sketch how we get full cartesian closure. [Warning: for those who are not comfortable with categorical reasoning, this sketch could be rough going in places.]

As per our discussion, we want to internalize the set of such relations which are graphs of functions, i.e., maps $A \to P(B)$ where each value is a singleton, in other words maps which factor as

$A \to B \overset{\sigma}{\to} P(B),$

where $\sigma$ is the singleton mapping:

$b \mapsto \{b\} = \{c \in B : b = c\}.$

We see from this set-theoretic description that $\sigma: B \to P(B)$ classifies the equality relation

$\{(b, c) \in B \times B : b = c\} \subseteq B \times B,$

which we can think of as either the equalizer of the pair of maps $\pi_1, \pi_2: B \times B \to B$ or, what is the same, the diagonal map $\delta = (1_B, 1_B): B \to B \times B$.

Thus, we put $\sigma := \chi_{\delta}: B \to P(B)$, and it is not too hard to show that the singleton mapping $\sigma$ is a monomorphism. As usual, we get this monomorphism as the pullback $\chi_{\sigma}^{-1}(t)$ of $t: 1 \to P(1)$ along its classifying map $\chi_{\sigma}: P(B) \to P(1)$.

Now: a right adjoint such as $(-)^A$ preserves all limits, and in particular pullbacks, so we ought then to have a pullback

       B^A ---------------> 1^A
        |                    |
        | sigma^A            | t^A
        V                    V
      P(B)^A -------------> P(1)^A
            (chi_sigma)^A

Of course, we don’t even have $B^A$ yet, but this should give us an idea: define $\sigma^A$, and in particular its domain $B^A$, by taking the pullback of the right-hand map along the bottom map. In case there is doubt, the map on the bottom is defined Yoneda-wise, applying the isomorphism

$\hom(P(B)^A \times A, P(1)) \cong \hom(P(B)^A, P(1)^A)$

to the element in the hom-set (on the left) given by the composite

$P(B)^A \times A \overset{ev}{\to} P(B) \overset{\chi_{\sigma}}{\to} P(1).$

The map on the right of the pullback is defined similarly. That this recipe really gives a construction of $B^A$ will be left as an exercise for the reader.
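For readers who prefer to see the exponential law in the familiar category of sets first, here is a small Python illustration of the currying bijection $\hom(X \times A, B) \cong \hom(X, B^A)$ that Proposition 1 internalizes (the sets and helper names are made up for the example):

    from itertools import product

    X, A, B = {'x0', 'x1'}, {0, 1, 2}, {'p', 'q'}

    def curry(f):
        """Turn f: X x A -> B (a dict keyed by pairs) into X -> (A -> B)."""
        return {x: {a: f[(x, a)] for a in A} for x in X}

    def uncurry(g):
        return {(x, a): g[x][a] for x, a in product(X, A)}

    f = {(x, a): ('p' if a % 2 == 0 else 'q') for x, a in product(X, A)}
    assert uncurry(curry(f)) == f   # the correspondence is bijective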
Universal Quantification
As further evidence of the power of the internal-external dialectic, we show how to internalize universal quantification.
As we are dealing here now with predicate logic, let’s begin by defining some terms as they are used in ETCS and topos theory:

- An ordinary predicate of type $A$ is a function $\phi: A \to P(1)$. Alternatively, it is an ordinary element $\phi': 1 \to P(1)^A \cong P(A)$. It corresponds (naturally and bijectively) to a subset $[\phi] \subseteq A$.
- A generalized predicate of type $A$ is a function $\phi': X \to P(A) \cong P(1)^A$. It may be identified with (corresponds naturally and bijectively to) a function $\phi: X \times A \to P(1)$, or to a subset $[\phi] \subseteq X \times A$.

We are trying to define an operator $\forall_A$ which will take a predicate of the form $\phi: X \times A \to P(1)$ [conventionally written $\phi(x, a)$] to a predicate $\forall_A \phi: X \to P(1)$ [conventionally written $\forall_{a \in A} \phi(x, a)$]. Externally, this corresponds to a natural operation which takes subsets of $X \times A$ to subsets of $X$. Internally, it corresponds to an operation of the form

$\forall_A: P(A) \cong P(1)^A \to P(1).$

This function is determined by the subset $(\forall_A)^{-1}(t) \subseteq P(1)^A$, defined elementwise by

$\{\phi \in P(1)^A : \forall_{a \in A} \ \phi(a) = t\}.$

Now, in ordinary logic, $\forall_{a \in A} \phi(a)$ is true if and only if $\phi(a)$ is true for all $a \in A$, or, in slightly different words, if $\phi$ is constantly true over all of $A$:

$\phi = (A \to 1 \overset{t}{\to} P(1)).$

The expression on the right (global truth over $A$) corresponds to a function $t_A: 1 \to P(1)^A$, indeed a monomorphism since any function with domain $1$ is monic. Thus we are led to define the desired quantification operator

$\forall_A: P(1)^A \to P(1)$

as the classifying map of $t_A: 1 \subseteq P(1)^A$.

Let’s check how this works externally. Let $\phi: X \to P(1)^A$ be a generalized predicate of type $A$. Then according to how $\forall_A$ has just been defined, $\forall_A \phi: X \to P(1)$ classifies the subset

$\{x \in X : \phi(x, -) = t_A\} = \{x \in X : \forall_{a \in A} \ \phi(x, a) = t\}.$
There is an interesting adjoint relationship between universal quantification and substitution (aka “pulling back”). By “substitution”, we mean that given any predicate $\psi: X \to P(1)$ on $X$, we can always pull back to a predicate on $X \times A$ (substituting in a dummy variable $a$ of type $A$, forming e.g. $\psi(x) \wedge [a = a]$) by composing with the projection $\pi: X \times A \to X$. In terms of subsets, substitution along $A$ is the natural external operation

$([\psi] \subseteq X) \mapsto ([\psi] \times A \subseteq X \times A).$

Then, for any predicate $\phi: X \times A \to P(1)$, we have the adjoint relationship

$[\psi] \times A \subseteq [\phi] \ \mbox{ if and only if } \ [\psi] \subseteq [\forall_A \phi],$

so that substitution along $A$ is left adjoint to universal quantification along $A$. This is easy to check; I’ll leave that to the reader.
Internal Intersection Operators
Now we put all of the above together, to define an internal intersection operator

$\bigcap: P P(X) \to P(X)$

which intuitively takes an element $F: 1 \to P P(X)$ (a family $F$ of subsets of $X$) to its intersection $\bigcap F$, as a subset $\bigcap F \subseteq X$.

Let’s first write out a logical formula which expresses intersection:

$x \in \bigcap F \ \mbox{ if and only if } \ \forall_{S \in P(X)} \ (S \in F) \Rightarrow (x \in S).$

We have all the ingredients to deal with the logical formula on the right: we have an implication operator $\Rightarrow$ as part of the internal Heyting algebra structure on $P(1)$, and we have the quantification operator $\forall_{P(X)}: P(1)^{P(X)} \to P(1)$. The atomic expressions $(S \in F)$ and $(x \in S)$ refer to internal elementhood: $(x \in S)$ means the predicate $X \times P(X) \to P(1)$ classifying the subset $\in_X \subseteq X \times P(X)$, and $(S \in F)$ means the predicate $P(X) \times P P(X) \to P(1)$ classifying $\in_{P(X)} \subseteq P(X) \times P P(X)$.

There is a slight catch, in that the predicates “$S \in_{P(X)} F$” and “$x \in_X S$” (as generalized predicates over $P(X)$, where $S$ lives) are taken over different domains. The first is of the form $\phi_1: P P(X) \to P(1)^{P(X)}$, and the second is of the form $\phi_2: X \to P(1)^{P(X)}$. No matter: we just substitute in some dummy variables. That is, we just pull these maps back to a common domain $X \times P P(X)$, forming the composites

$\psi_1: X \times P P(X) \overset{\pi_2}{\to} P P(X) \overset{\phi_1}{\to} P(1)^{P(X)}$

and

$\psi_2: X \times P P(X) \overset{\pi_1}{\to} X \overset{\phi_2}{\to} P(1)^{P(X)}.$

Putting all this together, we form the composite

$X \times P P(X) \overset{(\psi_1, \psi_2)}{\to} P(1)^{P(X)} \times P(1)^{P(X)} \cong (P(1) \times P(1))^{P(X)} \overset{\Rightarrow^{P(X)}}{\to} P(1)^{P(X)} \overset{\forall_{P(X)}}{\to} P(1).$

This composite directly expresses the internal predicate $(S \in F) \Rightarrow (x \in S)$, universally quantified over $S$, given above. By cartesian closure, this map $X \times P P(X) \to P(1)$ induces the desired internal intersection operator, $\bigcap: P P(X) \to P(X)$.
This construction provides an important bridge to getting the rest of the internal logic of ETCS. Since we can construct the intersection of arbitrary definable families of subsets, the power sets are internal inf-lattices. But inf-lattices are sup-lattices as well; on this basis we will be able to construct the colimits (e.g., finite sums, coequalizers) that we need. Similarly, the intersection operators easily allow us to construct image factorizations: any function $f: X \to Y$ can be factored (in an essentially unique way) as an epi or surjection $X \to \mathrm{im}(f)$ to the image, followed by a mono or injection $\mathrm{im}(f) \to Y$. The trick is to define the image as the smallest subset of $Y$ through which $f$ factors, by taking the intersection of all such subsets. Image factorization leads in turn to the construction of existential quantification.
As remarked above, the internal logic of a topos is generally intuitionistic (the law of excluded middle is not satisfied). But, if we add in the axiom of strong extensionality of ETCS, then we’re back to ordinary classical logic, where the law of excluded middle is satisfied, and where we just have the two truth values “true” and “false”. This means we will be able to reason in ETCS set theory just as we do in ordinary mathematics, taking just a bit of care with how we treat membership. The foregoing discussion gives indication that logical operations in categorical set theory work in ways familiar from naive set theory, and that basic set-theoretic constructions like intersection are well-grounded in ETCS.
Last time in this series on Stone duality, we observed a perfect duality between finite Boolean algebras and finite sets, which we called “baby Stone duality”:
- Every finite Boolean algebra $B$ is obtained from a finite set $X$ by taking its power set (or set of functions $\hom(X, \mathbf{2})$ from $X$ to $\mathbf{2} = \{0, 1\}$, with the Boolean algebra structure it inherits “pointwise” from $\mathbf{2}$). The set $X$ may be defined to be $\mathrm{Bool}(B, \mathbf{2})$, the set of Boolean algebra homomorphisms from $B$ to $\mathbf{2}$.
- Conversely, every finite set $X$ is obtained from the Boolean algebra $\hom(X, \mathbf{2})$ by taking its “hom-set” $\mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$.

More precisely, there are natural isomorphisms

$B \cong \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2}), \qquad X \cong \mathrm{Bool}(\hom(X, \mathbf{2}), \mathbf{2})$

in the categories of finite Boolean algebras and of finite sets, respectively. In the language of category theory, this says that these categories are (equivalent to) one another’s opposite — something I’ve been meaning to explain in more detail, and I promise to get to that, soon! In any case, this duality says (among other things) that finite Boolean algebras, no matter how abstractly presented, can be represented concretely as power sets.
Today I’d like to apply this representation to free Boolean algebras (on finitely many generators). What is a free Boolean algebra? Again, the proper context for discussing this is category theory, but we can at least convey the idea: given a finite set $S = \{x_1, \ldots, x_n\}$ of letters, consider the Boolean algebra $\mathbf{B}(S)$ whose elements are logical equivalence classes of formulas you can build up from the letters using the Boolean connectives $\wedge, \vee, \neg$ (and the Boolean constants $0, 1$), where two formulas $p, q$ are defined to be logically equivalent if $p \leq q$ and $q \leq p$ can be inferred purely on the basis of the Boolean algebra axioms. This is an excellent example of a very abstract description of a Boolean algebra: syntactically, there are infinitely many formulas you can build up, and the logical equivalence classes are also infinite and somewhat hard to visualize, but the mess can be brought under control using Stone duality, as we now show.
First let me cut to the chase, and describe the key property of free Boolean algebras. Let $B$ be any Boolean algebra; it could be a power set, the lattice of regular open sets in a topology, or whatever, and think of a function $f: S \to B$ from the set of letters to $B$ as modeling or interpreting the atomic formulas $x_1, \ldots, x_n$ as elements $f(x_1), \ldots, f(x_n)$ of $B$. The essential property of the free Boolean algebra is that we can extend this interpretation in a unique way to a Boolean algebra map $\hat{f}: \mathbf{B}(S) \to B$. The way this works is that we map a formula like $(x_1 \wedge \neg x_2) \vee x_3$ to the obvious formula $(f(x_1) \wedge \neg f(x_2)) \vee f(x_3)$. This is well-defined on logical equivalence classes of formulas because if $p = q$ in $\mathbf{B}(S)$, i.e., if the equality is derivable just from the Boolean algebra axioms, then of course $\hat{f}(p) = \hat{f}(q)$ holds in $B$ as the Boolean algebra axioms hold in $B$. Thus, there is a natural bijective correspondence between functions $S \to B$ and Boolean algebra maps $\mathbf{B}(S) \to B$; to get back from a Boolean algebra map $g: \mathbf{B}(S) \to B$ to the function $f: S \to B$, simply compose the Boolean algebra map with the function $S \to \mathbf{B}(S)$ which interprets elements of $S$ as equivalence classes of atomic formulas in $\mathbf{B}(S)$.
To get a better grip on $\mathbf{B}(S)$, let me pass to the Boolean ring picture (which, as we saw last time, is equivalent to the Boolean algebra picture). Here the primitive operations are addition and multiplication, so in this picture we build up “formulas” from letters using these operations (e.g., $x_1 + x_2 x_3 + x_1 x_2 x_3$ and the like). In other words, the elements of $\mathbf{B}(S)$ can be considered as “polynomials” in the variables $x_1, \ldots, x_n$. Actually, there are some simplifying features of this polynomial algebra; for one thing, in Boolean rings we have idempotence. This means that $x^2 = x$ for $x \in \mathbf{B}(S)$, and so a monomial term like $x_1^3 x_2^2 x_4$ reduces to its support $x_1 x_2 x_4$. Since each letter appears in a support with exponent 0 or 1, it follows that there are $2^{|S|}$ possible supports or Boolean monomials, where $|S|$ denotes the cardinality of $S$.

Idempotence also implies, as we saw last time, that $x + x = 0$ for all elements $x$, so that our polynomials = $\mathbb{Z}$-linear combinations of monomials are really $\mathbb{Z}_2$-linear combinations of Boolean monomials or supports. In other words, each element of $\mathbf{B}(S)$ is uniquely a linear combination

$\sum_{\sigma} a_{\sigma} \sigma$ where each $a_{\sigma} \in \mathbb{Z}_2$ and $\sigma$ ranges over supports,

i.e., the set of supports forms a basis of $\mathbf{B}(S)$ as a $\mathbb{Z}_2$-vector space. Hence the cardinality of the free Boolean ring is $2^{2^{|S|}}$.
- Remark: This gives an algorithm for checking logical equivalence of two Boolean algebra formulas: convert the formulas into Boolean ring expressions, and using distributivity, idempotence, etc., write out these expressions as Boolean polynomials = $\mathbb{Z}_2$-linear combinations of supports. The Boolean algebra formulas are equivalent if and only if the corresponding Boolean polynomials are equal. (A small executable illustration follows below.)
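Here is a short Python sketch of that algorithm; the representation — a Python set of frozensets of variable names standing for a $\mathbb{Z}_2$-linear combination of supports — is my own choice for illustration, not anything canonical.

    def const(b):  return {frozenset()} if b else set()
    def var(x):    return {frozenset([x])}
    def add(p, q): return p ^ q                        # + is symmetric difference (characteristic 2)
    def mul(p, q):                                     # multiply, reducing by idempotence x*x = x
        out = set()
        for s in p:
            for t in q:
                out ^= {s | t}
        return out
    def neg(p):    return add(const(True), p)          # not p  =  1 + p
    def disj(p, q): return add(add(p, q), mul(p, q))   # p or q  =  p + q + pq

    x, y = var('x'), var('y')
    # De Morgan: not(x or y) and (not x) and (not y) give equal Boolean polynomials.
    assert neg(disj(x, y)) == mul(neg(x), neg(y))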
But there is another way of understanding free Boolean algebras, via baby Stone duality. Namely, we have the power set representation

$\mathbf{B}(S) \cong \hom(X, \mathbf{2}),$

where $X$ is the set of Boolean algebra maps $\mathbf{B}(S) \to \mathbf{2}$. However, the freeness property says that these maps are in bijection with functions $S \to \mathbf{2}$. What are these functions? They are just truth-value assignments for the elements (atomic formulas, or variables) $x_1, \ldots, x_n$; there are again $2^{|S|}$ many of these. This leads to the method of truth tables: each formula $p$ induces (in one-one fashion) a function

$\hat{p}: \hom(S, \mathbf{2}) \to \mathbf{2}$

which takes a Boolean algebra map $g: \mathbf{B}(S) \to \mathbf{2}$, aka a truth-value assignment for the variables $x_i$, to the element of $\mathbf{2}$ obtained by instantiating the assigned truth values $g(x_i)$ for the variables and evaluating the resulting Boolean expression for $p$ in $\mathbf{2}$. (In terms of power sets, $\hat{p}$ identifies each equivalence class of formulas with the set of truth-value assignments of variables which render the formula $p$ “true” in $\mathbf{2}$.) The fact that the representation

$\mathbf{B}(S) \to \hom(\hom(S, \mathbf{2}), \mathbf{2}): p \mapsto \hat{p}$

is injective means precisely that if formulas $p, q$ are inequivalent, then there is a truth-value assignment which renders one of them “true” and the other “false”, hence that they are distinguishable by truth tables.
- Remark: This is an instance of what is known as a completeness theorem in logic. On the syntactic side, we have a notion of provability of formulas (that $p$ is logically equivalent to $q$, or $p = q$ in $\mathbf{B}(S)$, if this is derivable from the Boolean algebra axioms). On the semantic side, each Boolean algebra homomorphism $g: \mathbf{B}(S) \to \mathbf{2}$ can be regarded as a model of $\mathbf{B}(S)$ in which each formula becomes true or false under $g$. The method of truth tables then says that there are enough models or truth-value assignments to detect provability of formulas, i.e., $p = q$ is provable if it is true when interpreted in any model $g$. This is precisely what is meant by a completeness theorem.
There are still other ways of thinking about this. Let $g: \mathbf{B}(S) \to \mathbf{2}$ be a Boolean algebra map, aka a model of $\mathbf{B}(S)$. This model is completely determined by either of:

- The maximal ideal $g^{-1}(0)$ in the Boolean ring $\mathbf{B}(S)$, or
- The maximal filter or ultrafilter $g^{-1}(1)$ in $\mathbf{B}(S)$.

Now, as we saw last time, in the case of finite Boolean algebras, each (maximal) ideal is principal: is of the form $\{y : y \leq x\}$ for some $x$. Dually, each (ultra)filter is principal: is of the form $\{y : x \leq y\}$ for some $x$. The maximality of the ultrafilter means that there is no nonzero element in $\mathbf{B}(S)$ smaller than $x$; we say that $x$ is an atom in $\mathbf{B}(S)$ (NB: not to be confused with atomic formula!). So, we can also say

- A model of a finite Boolean algebra $B$ is specified by a unique atom of $B$.

Thus, baby Stone duality asserts a Boolean algebra isomorphism

$B \cong \hom(\mathrm{Atoms}(B), \mathbf{2}).$

Let’s give an example: consider the free Boolean algebra on three elements $x, y, z$. If you like, draw a Venn diagram generated by three planar regions labeled by $x, y, z$. The atoms or smallest nonzero elements of the free Boolean algebra are then represented by the $2^3 = 8$ regions demarcated by the Venn diagram. That is, the disjoint regions are labeled by the eight atoms

$x \wedge y \wedge z, \quad x \wedge y \wedge \neg z, \quad x \wedge \neg y \wedge z, \quad \ldots, \quad \neg x \wedge \neg y \wedge \neg z.$

According to baby Stone duality, any element in the free Boolean algebra (with $2^{2^3} = 256$ elements) is uniquely expressible as a disjoint union of these atoms. Another way of saying this is that the atoms form a basis (alternative to Boolean monomials) of the free Boolean algebra as $\mathbb{Z}_2$-vector space. For example, as an exercise one may calculate

$x \vee y = (x \wedge y \wedge z) \vee (x \wedge y \wedge \neg z) \vee (x \wedge \neg y \wedge z) \vee (x \wedge \neg y \wedge \neg z) \vee (\neg x \wedge y \wedge z) \vee (\neg x \wedge y \wedge \neg z).$

The unique expression of an element (where the element is given by a Boolean formula) as a $\mathbb{Z}_2$-linear combination of atoms is called the disjunctive normal form of the formula. So yet another way of deciding when two Boolean formulas are logically equivalent is to put them both in disjunctive normal form and check whether the resulting expressions are the same. (It’s basically the same idea as checking equality of Boolean polynomials, except we are using a different vector space basis.)
All of the above applies not just to free (finite) Boolean algebras, but to general finite Boolean algebras. So, suppose you have a Boolean algebra $B$ which is generated by finitely many elements $b_1, \ldots, b_n \in B$. Generated means that every element in $B$ can be expressed as a Boolean combination of the generating elements. In other words, “generated” means that if we consider the inclusion function $\{b_1, \ldots, b_n\} \hookrightarrow B$, then the unique Boolean algebra map $\pi: \mathbf{B}(b_1, \ldots, b_n) \to B$ which extends the inclusion is a surjection. Thinking of $\pi$ as a Boolean ring map, we have an ideal $I = \pi^{-1}(0)$, and because $\pi$ is a surjection, it induces a ring isomorphism

$\mathbf{B}(b_1, \ldots, b_n)/I \cong B.$

The elements of $I$ can be thought of as equivalence classes of formulas which become false in $B$ under the interpretation $\pi$. Or, we could just as well (and it may be more natural to) consider instead the filter $F = \pi^{-1}(1)$ of formulas in $\mathbf{B}(b_1, \ldots, b_n)$ which become true under the interpretation $\pi$. In any event, what we have is a propositional language $\mathbf{B}(b_1, \ldots, b_n)$ consisting of classes of formulas, and a filter $F$ consisting of formulas, which can be thought of as theorems of $B$. Often one may find a filter $F$ described as the smallest filter which contains certain chosen elements, which one could then call axioms of $B$.
In summary, any propositional theory (which by definition consists of a set $S$ of propositional variables together with a filter $F \subseteq \mathbf{B}(S)$ of the free Boolean algebra, whose elements are called theorems of the theory) yields a Boolean algebra $\mathbf{B}(S)/F$, where dividing out by $F$ means we take equivalence classes of elements of $\mathbf{B}(S)$ under the equivalence relation $\sim$ defined by the condition “$p \Leftrightarrow q$ belongs to $F$”. The partial order on equivalence classes $[p]$ is defined by $[p] \leq [q]$ iff $p \Rightarrow q$ belongs to $F$. The Boolean algebra $\mathbf{B}(S)/F$ defined in this way is called the Lindenbaum algebra of the propositional theory.

Conversely, any Boolean algebra $B$ with a specified set of generators $b_1, \ldots, b_n$ can be thought of as the Lindenbaum algebra of the propositional theory obtained by taking the $b_i$ as propositional variables, together with the filter $F = \pi^{-1}(1)$ obtained from the induced Boolean algebra map $\pi: \mathbf{B}(b_1, \ldots, b_n) \to B$. A model of the theory should be a Boolean algebra map $\mathbf{B}(b_1, \ldots, b_n) \to \mathbf{2}$ which interprets the formulas as true or false, but in such a way that the theorems of the theory (the elements of the filter) are all interpreted as “true”. In other words, a model is the same thing as a Boolean algebra map

$B \cong \mathbf{B}(b_1, \ldots, b_n)/F \to \mathbf{2},$

i.e., we may identify a model of a propositional theory with a Boolean algebra map out of its Lindenbaum algebra.

So the set of models is the set $X = \mathrm{Bool}(B, \mathbf{2})$, and now baby Stone duality, which gives a canonical isomorphism

$B \cong \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2}),$

implies the following
Completeness theorem: If a formula of a finite propositional theory is “true” when interpreted under any model of the theory, then the formula is provable (is a theorem of the theory).
Proof: Let $B$ be the Lindenbaum algebra of the theory, and let $[p] \in B$ be the class of formulas provably equivalent to a given formula $p$ under the theory. The Boolean algebra isomorphism $B \cong \hom(\mathrm{Bool}(B, \mathbf{2}), \mathbf{2})$ takes an element $[p]$ to the map $g \mapsto g([p])$. If $g([p]) = 1$ for all models $g$, i.e., if $[p]$ is sent to the constantly-true function, then by injectivity of the isomorphism $[p] = 1$. But then $[p] = [1]$, i.e., $p$ belongs to $F$, the filter of provable formulas. $\Box$
In summary, we have developed a rich vocabulary in which Boolean algebras are essentially the same things as propositional theories, and where models are in natural bijection with maximal ideals in the Boolean ring, or ultrafilters in the Boolean algebra, or [in the finite case] atoms in the Boolean algebra. But as we will soon see, ultrafilters have a significance far beyond their application in the realm of Boolean algebras; in particular, they crop up in general studies of topology and convergence. This is in fact a vital clue; the key point is that the set of models or ultrafilters carries a canonical topology, and the interaction between Boolean algebras and topological spaces is what Stone duality is all about.
In this installment, I will introduce the concept of Boolean algebra, one of the main stars of this series, and relate it to concepts introduced in previous lectures (distributive lattice, Heyting algebra, and so on). Boolean algebra is the algebra of classical propositional calculus, and so has an abstract logical provenance; but one of our eventual goals is to show how any Boolean algebra can also be represented in concrete set-theoretic (or topological) terms, as part of a powerful categorical duality due to Stone.
There are lots of ways to define Boolean algebras. Some definitions were for a long time difficult conjectures (like the Robbins conjecture, established only in the last ten years or so with the help of computers) — testament to the richness of the concept. Here we’ll discuss just a few definitions. The first is a traditional one, and one which is pretty snappy:
A Boolean algebra is a distributive lattice in which every element has a complement.
(If $X$ is a lattice and $x \in X$, a complement of $x$ is an element $y$ such that $x \wedge y = 0$ and $x \vee y = 1$. A lattice is said to be complemented if every element has a complement. Observe that the notions of complement and complemented lattice are manifestly self-dual. Since the notion of distributive lattice is self-dual, so therefore is the notion of Boolean algebra.)

- Example: Probably almost everyone reading this knows the archetypal example of a Boolean algebra: a power set $P(X)$, ordered by subset inclusion. As we know, this is a distributive lattice, and the complement $A^c = X \setminus A$ of a subset $A \subseteq X$ satisfies $A \cap A^c = \emptyset$ and $A \cup A^c = X$.
- Example: Also well known is that the Boolean algebra axioms mirror the usual interactions between conjunction $\wedge$, disjunction $\vee$, and negation $\neg$ in ordinary classical logic. In particular, given a theory $T$, there is a preorder whose elements are sentences (closed formulas) $p$ of $T$, ordered by $p \leq q$ if the entailment $p \to q$ is provable in $T$ using classical logic. By passing to logical equivalence classes ($p \equiv q$ iff $p \leftrightarrow q$ is provable in $T$), we get a poset with meets, joins, and complements satisfying the Boolean algebra axioms. This is called the Lindenbaum algebra of the theory $T$.
Exercise: Give an example of a complemented lattice which is not distributive.
As a possible leading hint for the previous exercise, here is a first order of business:
Proposition: In a distributive lattice, complements of elements are unique when they exist.
Proof: If both $b$ and $c$ are complementary to $a$, then $b = b \wedge 1 = b \wedge (a \vee c) = (b \wedge a) \vee (b \wedge c) = 0 \vee (b \wedge c) = b \wedge c$. Since $b = b \wedge c$, we have $b \leq c$. Similarly $c = c \wedge b$, so $c \leq b$, whence $b = c$. $\Box$
The definition of Boolean algebra we have just given underscores its self-dual nature, but we gain more insight by packaging it in a way which stresses adjoint relationships — Boolean algebras are the same things as special types of Heyting algebras (recall that a Heyting algebra is a lattice which admits an implication operator satisfying an adjoint relationship with the meet operator).
Theorem: A lattice is a Boolean algebra if and only if it is a Heyting algebra in which either of the following properties holds:

1. $x \wedge y \leq z$ if and only if $x \leq \neg y \vee z$, for all elements $x, y, z$;
2. $\neg \neg x = x$ for all elements $x$.

Proof: First let $B$ be a Boolean algebra, and let $\bar{y}$ denote the complement of an element $y$. Then I claim that $x \wedge y \leq z$ if and only if $x \leq \bar{y} \vee z$, proving that $B$ admits an implication $y \Rightarrow z := \bar{y} \vee z$. Then, taking $z = 0$, it follows that $\neg y = (y \Rightarrow 0) = \bar{y}$, so the Heyting negation is the complement, whence 1. follows. Also, since (by definition of complement) $x$ is the complement of $y$ if and only if $y$ is the complement of $x$, we have $\neg \neg x = x$, whence 2. follows.

[Proof of claim: if $x \wedge y \leq z$, then $x = x \wedge (\bar{y} \vee y) = (x \wedge \bar{y}) \vee (x \wedge y) \leq \bar{y} \vee z$. On the other hand, if $x \leq \bar{y} \vee z$, then $x \wedge y \leq (\bar{y} \vee z) \wedge y = (\bar{y} \wedge y) \vee (z \wedge y) \leq z$. This completes the proof of the claim and of the forward implication.]
In the other direction, suppose first that we are given a Heyting algebra which satisfies 1. From $1 \wedge y \leq y$, we have (from 1.) $1 \leq \neg y \vee y$; since $\neg y \vee y \leq 1$ is automatic by definition of the top element $1$, we get $\neg y \vee y = 1$. From $\neg y \leq \neg y \vee 0$, we have also (from 1., read from right to left) that $\neg y \wedge y \leq 0$; since $0 \leq \neg y \wedge y$ is automatic by definition of the bottom element $0$, we have $\neg y \wedge y = 0$. Thus under 1., every element $y$ has a complement $\neg y$; since a Heyting algebra is distributive, the lattice is a Boolean algebra.

On the other hand, suppose we are given a Heyting algebra satisfying 2.: $\neg \neg x = x$. As above, we know $\neg x \wedge x = 0$. By the corollary below, we also know the function $\neg = (-) \Rightarrow 0$ takes $0$ to $1$ and joins to meets (De Morgan law); since condition 2. is that $\neg$ is its own inverse, it follows that $\neg$ also takes meets to joins. Hence $x \vee \neg x = \neg \neg x \vee \neg x = \neg(\neg x \wedge x) = \neg 0 = 1$. Thus for a Heyting algebra which satisfies 2., every element $x$ has a complement $\neg x$. This completes the proof. $\Box$
- Exercise: Show that Boolean algebras can also be characterized as meet-semilattices
equipped with an operation
for which
if and only if
.
The proof above invoked the De Morgan law $\neg(x \vee y) = \neg x \wedge \neg y$. The claim is that this De Morgan law (not the other one, $\neg(x \wedge y) = \neg x \vee \neg y$!) holds in a general Heyting algebra — the relevant result was actually posed as an exercise from the previous lecture:

Lemma: For any element $c$ of a Heyting algebra $X$, the function $(-) \Rightarrow c: X \to X$ is an order-reversing map (equivalently, an order-preserving map $X^{op} \to X$, or an order-preserving map $X \to X^{op}$). It is adjoint to itself, in the sense that $(-) \Rightarrow c: X^{op} \to X$ is right adjoint to $(-) \Rightarrow c: X \to X^{op}$.

Proof: First, we show that $x \leq y$ in $X$ (equivalently, $y \leq x$ in $X^{op}$) implies $(y \Rightarrow c) \leq (x \Rightarrow c)$. But this conclusion holds iff $x \wedge (y \Rightarrow c) \leq c$, which is clear from $x \wedge (y \Rightarrow c) \leq y \wedge (y \Rightarrow c) \leq c$. Second, the adjunction holds because

$y \leq (x \Rightarrow c)$ in $X$
if and only if $y \wedge x \leq c$ in $X$
if and only if $x \wedge y \leq c$ in $X$
if and only if $x \leq (y \Rightarrow c)$ in $X$
if and only if $(y \Rightarrow c) \leq x$ in $X^{op}$. $\Box$

Corollary: $(-) \Rightarrow c: X^{op} \to X$ takes any inf which exists in $X^{op}$ to the corresponding inf in $X$. Equivalently, it takes any sup in $X$ to the corresponding inf in $X$, i.e., $(\bigvee_{s \in S} s) \Rightarrow c = \bigwedge_{s \in S} (s \Rightarrow c)$. (In particular, this applies to finite joins in $X$, and in particular, it applies to the case $c = 0$, where we conclude, e.g., the De Morgan law $\neg(x \vee y) = \neg x \wedge \neg y$.)
- Remark: If we think of sups as sums and infs as products, then we can think of implications $y \Rightarrow x$ as behaving like exponentials $x^y$. Indeed, our earlier result that $y \Rightarrow (-)$ preserves infs $\bigwedge_i x_i$ can then be recast in exponential notation as saying $(\prod_i x_i)^y = \prod_i (x_i)^y$, and our present corollary that $(-) \Rightarrow x$ takes sups to infs can then be recast as saying $x^{\sum_i y_i} = \prod_i x^{y_i}$. Later we will state another exponential law for implication. It is correct to assume that this is no notational accident!

Let me reprise part of the lemma (in the case $c = 0$), because it illustrates a situation which comes up over and over again in mathematics. In part it asserts that $\neg = (-) \Rightarrow 0$ is order-reversing, and that there is a three-way equivalence:

$x \leq \neg y$ if and only if $x \wedge y = 0$ if and only if $y \leq \neg x$.

This situation is an instance of what is called a “Galois connection” in mathematics. If $X$ and $Y$ are posets (or even preorders), a Galois connection between them consists of two order-reversing functions $f: X \to Y$, $g: Y \to X$ such that for all $x \in X, y \in Y$, we have

$y \leq f(x)$ if and only if $x \leq g(y)$.

(It’s actually an instance of an adjoint pair: if we consider $f$ as an order-preserving map $X \to Y^{op}$ and $g$ an order-preserving map $Y^{op} \to X$, then $f(x) \leq y$ in $Y^{op}$ if and only if $x \leq g(y)$ in $X$.)
Here are some examples:
- The original example arises of course in Galois theory. If $k$ is a field and $k \subseteq E$ is a finite Galois extension with Galois group $G = Gal(E/k)$ (of field automorphisms $g: E \to E$ which fix the elements belonging to $k$), then there is a Galois connection consisting of maps $\mathrm{Fix}: P(G) \to P(E)$ and $\mathrm{Stab}: P(E) \to P(G)$. This works as follows: to each subset $S \subseteq G$, define $\mathrm{Fix}(S)$ to be $\{x \in E : g(x) = x \mbox{ for all } g \in S\}$. In the other direction, to each subset $T \subseteq E$, define $\mathrm{Stab}(T)$ to be $\{g \in G : g(x) = x \mbox{ for all } x \in T\}$. Both $\mathrm{Fix}$ and $\mathrm{Stab}$ are order-reversing (for example, the larger the subset $S \subseteq G$, the more stringent the conditions for an element $x \in E$ to belong to $\mathrm{Fix}(S)$). Moreover, we have

$T \subseteq \mathrm{Fix}(S)$ iff ($g(x) = x$ for all $g \in S$, $x \in T$) iff $S \subseteq \mathrm{Stab}(T)$,

so we do get a Galois connection. It is moreover clear that for any $S \subseteq G$, $\mathrm{Fix}(S)$ is an intermediate subfield between $k$ and $E$, and for any $T \subseteq E$, $\mathrm{Stab}(T)$ is a subgroup of $G$. A principal result of Galois theory is that $\mathrm{Fix}$ and $\mathrm{Stab}$ are inverse to one another when restricted to the lattice of subgroups of $G$ and the lattice of fields intermediate between $k$ and $E$. Such a bijective correspondence induced by a Galois connection is called a Galois correspondence.
- Another basic Galois connection arises in algebraic geometry, between subsets $J \subseteq k[x_1, \ldots, x_n]$ (of a polynomial algebra over a field $k$) and subsets $V \subseteq k^n$. Given $J$, define $Z(J)$ (the zero locus of $J$) to be $\{a \in k^n : p(a) = 0 \mbox{ for all } p \in J\}$. On the other hand, define $I(V)$ (the ideal of $V$) to be $\{p \in k[x_1, \ldots, x_n] : p(a) = 0 \mbox{ for all } a \in V\}$. As in the case of Galois theory above, we clearly have a three-way equivalence

$V \subseteq Z(J)$ iff ($p(a) = 0$ for all $p \in J$, $a \in V$) iff $J \subseteq I(V)$,

so that $Z$, $I$ define a Galois connection between power sets (of the $n$-variable polynomial algebra and of $n$-dimensional affine space $k^n$). One defines an (affine algebraic) variety $V \subseteq k^n$ to be a zero locus of some set. Then, on very general grounds (see below), any variety is the zero locus of its ideal. On the other hand, notice that $I(V)$ is an ideal of the polynomial algebra. Not every ideal of the polynomial algebra is the ideal of its zero locus, but according to the famous Hilbert Nullstellensatz (for $k$ algebraically closed), those ideals $I$ equal to their radical $\sqrt{I}$ are. Thus, $Z$ and $I$ become inverse to one another when restricted to the lattice of varieties and the lattice of radical ideals, by the Nullstellensatz: there is a Galois correspondence between these objects.
- Both of the examples above are particular cases of a very general construction. Let $X, Y$ be sets and let $R \subseteq X \times Y$ be any relation between them. Then set up a Galois connection which in one direction takes a subset $S \subseteq X$ to $S' := \{y \in Y : x R y \mbox{ for all } x \in S\}$, and in the other takes $T \subseteq Y$ to $T' := \{x \in X : x R y \mbox{ for all } y \in T\}$. Once again we have a three-way equivalence

$T \subseteq S'$ iff ($x R y$ for all $x \in S$, $y \in T$) iff $S \subseteq T'$.
There are tons of examples of this flavor.
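Here is the general construction in executable form — a quick Python sketch (with a divisibility relation chosen arbitrarily as the sample $R$) that verifies the three-way equivalence, and also the fact proved below that composing the two maps gives a closure operator:

    X = {1, 2, 3, 4, 6, 12}
    Y = {2, 3, 5}
    R = {(x, y) for x in X for y in Y if x % y == 0}    # "y divides x"

    def right(S):   # S ⊆ X  |->  {y : x R y for all x in S}
        return {y for y in Y if all((x, y) in R for x in S)}

    def left(T):    # T ⊆ Y  |->  {x : x R y for all y in T}
        return {x for x in X if all((x, y) in R for y in T)}

    S, T = {6, 12}, {2, 3}
    # three-way equivalence:  T ⊆ right(S)  iff  (x R y for all x, y)  iff  S ⊆ left(T)
    assert (T <= right(S)) == (S <= left(T)) == all((x, y) in R for x in S for y in T)
    # and left∘right is a closure operator on subsets of X:
    assert left(right(S)) >= S and left(right(left(right(S)))) == left(right(S))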
As indicated above, a Galois connection between posets $X, Y$ is essentially the same thing as an adjoint pair between the posets $X, Y^{op}$ (or between $X^{op}, Y$ if you prefer; Galois connections are after all symmetric in $X, Y$). I would like to record a few basic results about Galois connections/adjoint pairs.

Proposition:

1. Given order-reversing maps $f: X \to Y$, $g: Y \to X$ which form a Galois connection, we have $x \leq g f(x)$ for all $x$ and $y \leq f g(y)$ for all $y$. (Given poset maps $f: X \to Y$, $g: Y \to X$ which form an adjoint pair $f \dashv g$, we have $x \leq g f(x)$ for all $x$ and $f g(y) \leq y$ for all $y$.)
2. Given a Galois connection as above, $f(x) = f g f(x)$ for all $x$ and $g(y) = g f g(y)$ for all $y$. (Given an adjoint pair $f \dashv g$ as above, the same equations hold.) Therefore a Galois connection $(f, g)$ induces a Galois correspondence between the elements of the form $f(x)$ and the elements of the form $g(y)$.

Proof: (1.) It suffices to prove the statements for adjoint pairs. But under the assumption $f \dashv g$, $x \leq g f(x)$ if and only if $f(x) \leq f(x)$, which is certainly true. The other statement is dual.

(2.) Again it suffices to prove the equations for the adjoint pair. Applying the order-preserving map $f$ to $x \leq g f(x)$ from 1. gives $f(x) \leq f g f(x)$. Applying $f g(y) \leq y$ from 1. to $y = f(x)$ gives $f g f(x) \leq f(x)$. Hence $f g f(x) = f(x)$. The other equation is dual. $\Box$
Incidentally, the equations of 2. show why an algebraic variety $V$ is the zero locus of its ideal (see example 2. above): if $V = Z(J)$ for some set of polynomials $J$, then $V = Z(J) = Z(I(Z(J))) = Z(I(V))$. They also show that for any element $x$ in a Heyting algebra, we have $\neg \neg \neg x = \neg x$, even though $\neg \neg y = y$ is in general false.

Let $f: X \to Y$, $g: Y \to X$ be a Galois connection (or $f \dashv g$ an adjoint pair). By the proposition, $c := g f: X \to X$ is an order-preserving map with the following properties:

$x \leq c(x)$ for all $x$;
$c c(x) = c(x)$ for all $x$.
Poset maps $c: X \to X$ with these properties are called closure operators. We have earlier discussed examples of closure operators: if for instance $G$ is a group, then the operator $c: P(G) \to P(G)$ which takes a subset $S \subseteq G$ to the subgroup generated by $S$ is a closure operator. Or, if $X$ is a topological space, then the operator $c: P(X) \to P(X)$ which takes a subset $S \subseteq X$ to its topological closure $\bar{S}$ is a closure operator. Or, if $X$ is a poset, then the operator $c: P(X) \to P(X)$ which takes a subset $S \subseteq X$ to the down-set $\{a \in X : a \leq s \mbox{ for some } s \in S\}$ it generates is a closure operator. Examples like these can be multiplied at will.

One virtue of closure operators is that they give a useful means of constructing new posets from old. Specifically, if $c: X \to X$ is a closure operator, then a fixed point of $c$ (or a $c$-closed element of $X$) is an element $x$ such that $c(x) = x$. The collection $\mathrm{Fix}(c)$ of fixed points is partially ordered by the order in $X$. For example, the lattice of fixed points of the operator $c: P(G) \to P(G)$ above is the lattice of subgroups of $G$. For any closure operator $c$, notice that $\mathrm{Fix}(c)$ is the same as the image $c(X)$ of $c$.

One particular use is that the fixed points of the double negation closure $\neg \neg: X \to X$ on a Heyting algebra $X$ form a Boolean algebra $\mathrm{Fix}(\neg \neg)$, and the map $\neg \neg: X \to \mathrm{Fix}(\neg \neg)$ is a Heyting algebra map. This is not trivial! And it gives a means of constructing some rather exotic Boolean algebras (“atomless Boolean algebras”) which may not be so familiar to many readers.
The following exercises are in view of proving these results. If no one else does, I will probably give solutions next time or sometime soon.
Exercise: If $X$ is a Heyting algebra and $x, y, z \in X$, prove the “exponential law” $((x \wedge y) \Rightarrow z) = (x \Rightarrow (y \Rightarrow z))$. Conclude that $\neg(x \wedge y) = (x \Rightarrow \neg y)$.
Exercise: We have seen that in a Heyting algebra. Use this to prove
.
Exercise: Show that double negation $\neg\neg$ on a Heyting algebra preserves finite meets. (The inequality $\neg\neg(x \wedge y) \leq \neg\neg x \wedge \neg\neg y$ is easy. The reverse inequality takes more work; try using the previous two exercises.)
Exercise: If $c: X \to X$ is a closure operator, show that the inclusion map $i: \mathrm{Fix}(c) \to X$ is right adjoint to the projection $c: X \to \mathrm{Fix}(c)$ to the image of $c$. Conclude that meets of elements in $\mathrm{Fix}(c)$ are calculated as they would be as elements in $X$, and also that $c: X \to \mathrm{Fix}(c)$ preserves joins.
Exercise: Show that the fixed points of the double negation operator on a topology (as Heyting algebra) are the regular open sets, i.e., those open sets equal to the interior of their closure. Give some examples of non-regular open sets. Incidentally, is the lattice you get by taking the opposite of a topology also a Heyting algebra?
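As a concrete warm-up to these exercises, here is a Python sketch on a tiny made-up topological space (the topology below is assumed just for the example): Heyting negation of an open set is the interior of its complement, and the opens fixed by double negation are exactly the regular opens.

    X = frozenset({1, 2, 3})
    opens = [frozenset(s) for s in [set(), {1}, {3}, {1, 3}, {1, 2, 3}]]   # a tiny topology on X

    def interior(S):                       # largest open set contained in S
        return max((U for U in opens if U <= S), key=len)

    def closure(S):                        # smallest closed set containing S
        return X - interior(X - S)

    def neg(U):                            # Heyting negation of an open set
        return interior(X - U)

    for U in opens:
        regular = (U == interior(closure(U)))            # "regular open"
        assert (neg(neg(U)) == U) == regular

    print([sorted(U) for U in opens if neg(neg(U)) == U])  # the regular opens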
In our last installment in this series on Stone duality, we introduced the notion of Heyting algebra, which captures the basic relationships between the logical connectives “and”, “or”, and “implies”. Our discussion disclosed a fundamental relationship between distributive laws and the algebra of implication, which we put to work to discover the structure of the “internal Heyting algebra logic” of a topology.
I’d like to pause and reflect on the general technique we used to establish this relationship; like the Yoneda principle and the Principle of Duality, it comes up with striking frequency, and so it will be useful for us to give it a name. As it turns out, this particular proof technique is analogous to the way adjoints are used in linear algebra. Such analogies go all the way back to work of C. S. Peirce, who like Boole was a great pioneer in the discovery of relationships between algebra and logic. At a deeper level, similar analogies were later rediscovered in category theory, and are connected with some of the most potent ideas category theory has to offer.
Our proof that meets distribute over sups in the presence of an implication operator is an example of this technique. Here is another example of similar flavor.
Theorem: In a Heyting algebra $X$, the operator $x \Rightarrow (-): X \to X$ preserves any infs which happen to exist in $X$, for any element $x$. [In particular, this operator is a morphism of meet-semilattices, i.e., $x \Rightarrow (y \wedge z) = (x \Rightarrow y) \wedge (x \Rightarrow z)$, and $x \Rightarrow 1 = 1$.]

Proof: Suppose that $S \subseteq X$ has an inf, which here will be denoted $\bigwedge_{s \in S} s$. Then for all $a \in X$, we have

$a \leq x \Rightarrow \bigwedge_{s \in S} s$
if and only if $a \wedge x \leq \bigwedge_{s \in S} s$
if and only if (for all $s \in S$, $a \wedge x \leq s$)
if and only if (for all $s \in S$, $a \leq x \Rightarrow s$).

By the defining property of inf, these logical equivalences show that $x \Rightarrow \bigwedge_{s \in S} s$ is indeed the inf of the subset $\{x \Rightarrow s : s \in S\}$, or in other words that $x \Rightarrow \bigwedge_{s \in S} s = \bigwedge_{s \in S} (x \Rightarrow s)$, as desired. $\Box$

In summary, what we did in this proof is “slide” the operator $x \Rightarrow (-)$ on the right of the inequality over to the operator $x \wedge (-)$ on the left, then invoke the defining property of infs, and then slide back to $x \Rightarrow (-)$ on the right. This sliding trick is analogous to how adjoint mappings work in linear algebra.
In fact, everything we have done so far with posets can be translated in terms of matrix algebra, provided that our matrix entries, instead of being real or complex numbers, are truth values ($1$ for “true”, $0$ for “false”). These truth values are added and multiplied in the way familiar from truth tables, with join playing the role of addition and meet playing the role of multiplication. In fact the lattice $\mathbf{2} = \{0 \leq 1\}$ is a very simple distributive lattice, and so most of the familiar arithmetic properties of addition and multiplication (associativity, commutativity, distributivity) do carry over, which is all we need to carry out the most basic aspects of matrix algebra. However, observe that $1$ has no additive inverse (for here $1 + 1 = 1 \vee 1 = 1$) — the type of structure we are dealing with is often called a “rig” (like a ring, but without assuming negatives). On the other hand, this lattice is, conveniently, a sup-lattice, thinking of sups as arbitrary sums, whether finitary or infinitary.
Peirce recognized that a relation can be classified by a truth-valued matrix. Take for example a binary relation on a set $X$, i.e., a subset $R \subseteq X \times X$. We can imagine each point $(x, y) \in X \times X$ as a pixel in the plane, and highlight $R$ by lighting up just those pixels which belong to $R$. This is the same as giving an $X \times X$-matrix $[R]$, with rows indexed by elements $y$ and columns by elements $x$, where the $(y, x)$-entry $[R](y, x)$ is $1$ (on) if $(x, y)$ is in $R$, and $0$ if not. In a similar way, any relation $R \subseteq X \times Y$ is classified by a $Y \times X$-matrix whose entries are truth values.

As an example, the identity matrix has a $1$ at the $(x, y)$-entry if and only if $x = y$. Thus the identity matrix classifies the equality relation.
A poset is a set $X$ equipped with a binary relation $\leq$ satisfying the reflexive, transitive, and antisymmetry properties. Let us translate these into matrix algebra terms. First reflexivity: it says that $x = y$ implies $x \leq y$. In matrix algebra terms, it says that the identity matrix is bounded above by the matrix $[\leq]$, which we abbreviate in the customary way:

(Reflexivity) $1 \leq [\leq]$.

Now let’s look at transitivity. It says

($x \leq y$ and $y \leq z$) implies $x \leq z$.

The “and” here refers to the meet or multiplication in the rig of truth values $\mathbf{2}$, and the existential quantifier “there exists $y$ such that” can be thought of as a (possibly infinitary) join or sum indexed over elements $y$. Thus, for each pair $(x, z)$, the hypothesis of the implication has truth value

$\bigvee_y (x \leq y) \wedge (y \leq z),$

which is just the $(z, x)$-entry of the square of the matrix $[\leq]$. Therefore, transitivity can be very succinctly expressed in matrix algebra terms as the condition

(Transitivity) $[\leq] [\leq] \leq [\leq]$.
- Remark: More generally, given a relation $R \subseteq X \times Y$ from $X$ to $Y$, and another relation $S \subseteq Y \times Z$ from $Y$ to $Z$, the relational composite $S \circ R \subseteq X \times Z$ is defined to be the set of pairs $(x, z)$ for which there exists $y$ with $(x, y) \in R$ and $(y, z) \in S$. But this just means that its classifying matrix is the ordinary matrix product $[S][R]$!
Let’s now look at the antisymmetry condition: ($x \leq y$ and $y \leq x$) implies $x = y$. The clause $y \leq x$ is the flip of $x \leq y$; at the matrix level, this flip corresponds to taking the transpose. Thus antisymmetry can be expressed in matrix terms as

(Antisymmetry) $[\leq] \wedge [\leq]^t \leq 1,$

where $[\leq]^t$ denotes the transpose of $[\leq]$, $1$ is the identity matrix (classifying equality), and the matrix meet $\wedge$ means we take the meet at each entry.

- Remark: From the matrix algebra perspective, the antisymmetry axiom is less well motivated than the reflexivity and transitivity axioms. There’s a moral hiding beneath that story: from the category-theoretic perspective, the antisymmetry axiom is relatively insignificant. That is, if we view a poset as a category, then the antisymmetry condition is tantamount to the condition that isomorphic objects are equal (in the parlance, one says the category is “skeletal”) — this extra condition makes no essential difference, because isomorphic objects are essentially the same anyway. So: if we were to simply drop the antisymmetry axiom but keep the reflexivity and transitivity axioms (leading to what are called preordered sets, as opposed to partially ordered sets), then the theory of preordered sets develops exactly as the theory of partially ordered sets, except that in places where we conclude “$x$ is equal to $y$” in the theory of posets, we would generally conclude “$x$ is isomorphic to $y$” in the theory of preordered sets.
Preordered sets do occur in nature. For example, the set of sentences in a theory $T$ is preordered by the entailment relation: $p \leq q$ if $q$ is derivable from $p$ in the theory. (The way one gets a poset out of this is to pass to a quotient set, by identifying sentences which are logically equivalent in the theory.)
Exercises:
- (For those who know some topology) Suppose $X$ is a topological space. Given $x, y \in X$, define $x \leq y$ if $x$ belongs to the closure of $\{y\}$; show this is a preorder. Show this preorder is a poset precisely when $X$ is a $T_0$-space.
- If $X$ carries a group structure, define $x \leq y$ for elements $x, y$ if $y = x^n$ for some integer $n$; show this is a preorder. When is it a poset?
Since posets or preorders are fundamental to everything we’re doing, I’m going to reserve a special pairing notation for their classifying matrices: define

$\langle x, y \rangle = 1$ if and only if $x \leq y$.

Many of the concepts we have developed so far for posets can be succinctly expressed in terms of the pairing.

Example: The Yoneda principle (together with its dual) is simply the statement that if $X$ is a poset, then $x \leq y$ if and only if $\langle -, x \rangle \leq \langle -, y \rangle$ (as functionals valued in $\mathbf{2}$) if and only if $\langle y, - \rangle \leq \langle x, - \rangle$.
Example: A mapping from a poset $X$ to a poset $Y$ is a function $f: X \to Y$ such that $\langle x, x' \rangle \leq \langle f(x), f(x') \rangle$ for all $x, x' \in X$.

Example: If $X$ is a poset, its dual or opposite $X^{op}$ has the same elements but the opposite order, i.e., $\langle x, y \rangle_{X^{op}} = \langle y, x \rangle_X$. The principle of duality says that the opposite of a poset is a poset. This can be (re)proved by invoking formal properties of matrix transpose, e.g., if $[\leq] [\leq] \leq [\leq]$, then $[\leq]^t [\leq]^t = ([\leq] [\leq])^t \leq [\leq]^t$.
By far the most significant concept that can be expressed in terms of these pairings is that of adjoint mappings:
Definition: Let $X, Y$ be posets [or preorders], and $f: X \to Y$, $g: Y \to X$ be poset mappings. We say $(f, g)$ is an adjoint pair (with $f$ the left adjoint of $g$, and $g$ the right adjoint of $f$) if

$\langle f(x), y \rangle = \langle x, g(y) \rangle$ for all $x \in X$, $y \in Y$,

or, in other words, if $f(x) \leq y$ if and only if $x \leq g(y)$. We write $f \dashv g$. Notice that the concept of left adjoint is dual to the concept of right adjoint (N.B.: they are not the same, because clearly the pairing $\langle x, y \rangle$ is not generally symmetric in $x$ and $y$).
Here are some examples which illustrate the ubiquity of this concept:
- Let $X$ be a poset. Let $X \times X$ be the poset where $(x, y) \leq (x', y')$ iff ($x \leq x'$ and $y \leq y'$). There is an obvious poset mapping $\delta: X \to X \times X$, the diagonal mapping, which takes $x$ to $(x, x)$. Then a meet operation $\wedge: X \times X \to X$ is precisely a right adjoint to the diagonal mapping. Indeed, it says that $x \leq a \wedge b$ if and only if ($x \leq a$ and $x \leq b$), i.e., if and only if $\delta(x) \leq (a, b)$.
- Dually, a join operation $\vee: X \times X \to X$ is precisely a left adjoint to the diagonal mapping $\delta: X \to X \times X$.
- More generally, for any set $S$, there is a diagonal map $\delta: X \to X^S$ which maps $x$ to the $S$-tuple $(x)_{s \in S}$. Its right adjoint $X^S \to X$, if one exists, sends an $S$-tuple $(x_s)_{s \in S}$ to the inf of the set $\{x_s : s \in S\}$. Its left adjoint would send the tuple to the sup of that set.
- If $X$ is a Heyting algebra, then for each $y \in X$, the conjunction operator $y \wedge (-): X \to X$ is left adjoint to the implication operator $y \Rightarrow (-): X \to X$.
- If $X$ is a sup-lattice, then the operator $\sup: P(X) \to X$ which sends a subset $S \subseteq X$ to $\sup(S)$ is left adjoint to the Dedekind embedding $X \to P(X)$ sending $x$ to $\{a \in X : a \leq x\}$. Indeed, we have $\sup(S) \leq x$ if and only if ($s \leq x$ for all $s \in S$) if and only if $S \subseteq \{a \in X : a \leq x\}$.
As items 1, 2, and 4 indicate, the rules for how the propositional connectives operate are governed by adjoint pairs. This gives some evidence for Lawvere’s great insight that all rules of inference in logic are expressed by interlocking pairs of adjoint mappings.
Proposition: If $f \dashv g$ and $f' \dashv g'$, where $f: X \to Y$ and $f': Y \to Z$ (so that $f' f$ and $g g'$ are composable mappings), then $f' f \dashv g g'$.

Proof: $\langle f' f(x), z \rangle = \langle f(x), g'(z) \rangle = \langle x, g g'(z) \rangle$. Notice that the statement is analogous to the usual rule $(S T)^{\ast} = T^{\ast} S^{\ast}$, where $(-)^{\ast}$ refers to taking an adjoint with respect to given inner product forms. $\Box$
We can use this proposition to give slick proofs of some results we’ve seen. For example, to prove that Heyting algebras are distributive lattices, i.e., that $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$, just take left adjoints on both sides of the tautology

$\delta \circ (x \Rightarrow (-)) = ((x \Rightarrow (-)) \times (x \Rightarrow (-))) \circ \delta,$

where $x \Rightarrow (-)$ is right adjoint to $x \wedge (-)$ and the diagonal $\delta$ is right adjoint to $\vee$. The left adjoint of the left side of the tautology is (by the proposition) $x \wedge (-)$ applied after $\vee$, which sends $(y, z)$ to $x \wedge (y \vee z)$. The left adjoint of the right side is $\vee$ applied after $(x \wedge (-)) \times (x \wedge (-))$, which sends $(y, z)$ to $(x \wedge y) \vee (x \wedge z)$. Since adjoints are unique, the conclusion follows.
Much more generally, we have the
Theorem: Right adjoints $g: Y \to X$ preserve any infs which exist in $Y$. Dually, left adjoints $f: X \to Y$ preserve any sups which exist in $X$.

Proof: $\langle x, g(\bigwedge_{s \in S} s) \rangle = \langle f(x), \bigwedge_{s \in S} s \rangle = \bigwedge_{s \in S} \langle f(x), s \rangle$, where the last inf is interpreted in the inf-lattice $\mathbf{2}$. This equals $\bigwedge_{s \in S} \langle x, g(s) \rangle$. This completes the proof of the first statement (why?). The second follows from duality. $\Box$
Exercise: If $X$ is a Heyting algebra, then there is a poset mapping $(-) \Rightarrow c: X^{op} \to X$ for any element $c$. Describe the left adjoint of this mapping. Conclude that this mapping takes infs in $X^{op}$ (i.e., sups in $X$) to the corresponding infs in $X$.
Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and sup). Examples include the poset $P(X)$ of all subsets of a set $X$, and the poset $Sub(V)$ of all subspaces of a vector space $V$.

I take it that most readers are already familiar with many of the properties of the poset $P(X)$; there is for example the distributive law $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$, and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset $Sub(V)$, as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to $Sub(V)$ is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, classical logic if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).
To get a sense of the weirdness of $Sub(V)$, take for example a 2-dimensional vector space $V$. The bottom element is the zero space $0$, the top element is $V$, and the rest of the elements of $Sub(V)$ are 1-dimensional: lines through the origin. For 1-dimensional spaces $x, y$, there is no relation $x \leq y$ unless $x$ and $y$ coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:

          V = 1
         /  |  \
        /   |   \
       x    y    z
        \   |   /
         \  |  /
            0
Observe that for distinct elements $x, y, z$ in the middle level, we have for example $x \wedge y = 0$ ($0$ is the largest element contained in both $x$ and $y$), and also for example $y \vee z = 1$ ($1$ is the smallest element containing $y$ and $z$). It follows that $x \wedge (y \vee z) = x \wedge 1 = x$, whereas $(x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0$. The distributive law fails in $Sub(V)$!
Definition: A lattice is distributive if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$ for all $x, y, z$. That is to say, a lattice $X$ is distributive if the map $x \wedge (-): X \to X$, taking an element $y$ to $x \wedge y$, is a morphism of join-semilattices.

- Exercise: Show that in a meet-semilattice, $x \wedge (-)$ is a poset map. Is it also a morphism of meet-semilattices? If $X$ has a bottom element, show that the map $x \wedge (-)$ preserves it.
- Exercise: Show that in any lattice, we at least have $(x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z)$ for all elements $x, y, z$.
Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:
Theorem: The notion of distributive lattice is self-dual.
Proof: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, $x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$, follows from the distributive lattice axioms.

Expand the right side to $((x \vee y) \wedge x) \vee ((x \vee y) \wedge z)$, by distributivity. This reduces to $x \vee ((x \vee y) \wedge z)$, by an absorption law. Expand this again, by distributivity, to $x \vee ((x \wedge z) \vee (y \wedge z))$. This reduces to $x \vee (y \wedge z)$, by the other absorption law. This completes the proof. $\Box$
Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:
Definition: A frame is a sup-lattice for which $x \wedge (-)$ is a morphism of sup-lattices, for every $x$. In other words, for every subset $S$, we have $x \wedge \sup(S) = \sup \{x \wedge s : s \in S\}$, or, as is often written,

$x \wedge \bigvee_{s \in S} s = \bigvee_{s \in S} (x \wedge s).$

Example: A power set $P(X)$, as always partially ordered by inclusion, is a frame. In this case, it means that for any subset $A$ and any collection of subsets $\{B_i\}_{i \in I}$, we have

$A \cap \bigcup_{i \in I} B_i = \bigcup_{i \in I} (A \cap B_i).$

This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.
Example: If $X$ is a set, a topology on $X$ is a subset $\Omega \subseteq P(X)$ of the power set, partially ordered by inclusion as $P(X)$ is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element $\emptyset$ and the empty meet or top element $X$ of $P(X)$ are elements of $\Omega$, and also:

- If $U, V$ are elements of $\Omega$, then so is $U \cap V$.
- If $\{U_i\}_{i \in I}$ is a collection of elements of $\Omega$, then $\bigcup_{i \in I} U_i$ is an element of $\Omega$.

A topological space is a set $X$ which is equipped with a topology $\Omega$; the elements of the topology are called open subsets of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in $P(X)$ (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.
The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! but just to be on the safe side I’ll give now one example of a topological space, with a promise of more to come later. Let $X$ be the set $\mathbb{R}^n$ of $n$-tuples of real numbers. First, define the open ball in $\mathbb{R}^n$ centered at a point $x \in \mathbb{R}^n$ and of radius $r > 0$ to be the set $\{y \in \mathbb{R}^n : ||x - y|| < r\}$. Then, define a subset $U \subseteq \mathbb{R}^n$ to be open if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology is by definition the collection of open sets.
It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the triangle inequality. (Exercise?)
Topology is a huge field in its own right; much of our interest here will be in its interplay with logic. To that end, I want to bring in, in addition to the connectives "and" and "or" we've discussed so far, the implication connective in logic. Most readers probably know that in ordinary logic, the formula $p \Rightarrow q$ ("$p$ implies $q$") is equivalent to "either not $p$ or $q$" — symbolically, we could define $p \Rightarrow q$ as $\neg p \vee q$. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.
The main property we ask of implication is modus ponens: given $p$ and $p \Rightarrow q$, we may infer $q$. In symbols, writing the inference or entailment relation as $\leq$, this is expressed as $p \wedge (p \Rightarrow q) \leq q$. And, we ask that implication be the weakest possible such assumption, i.e., that material implication $p \Rightarrow q$ be the weakest $a$ whose presence in conjunction with $p$ entails $q$. In other words, for given $p$ and $q$, we now define implication $p \Rightarrow q$ by the property

$a \wedge p \leq q$ if and only if $a \leq (p \Rightarrow q)$.

As a very easy exercise, show by Yoneda that an implication is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity holds:
Theorem:
- (1) If $X$ is a meet-semilattice which admits an implication operator, then for every element $p$, the operator $p \wedge -$ preserves any sups which happen to exist in $X$.
- (2) If $X$ is a frame, then $X$ admits an implication operator.
Proof: (1) Suppose $S \subseteq X$ has a sup in $X$, here denoted $\bigvee_{s \in S} s$. We have
$p \wedge \bigvee_{s \in S} s \leq q$
if and only if $\bigvee_{s \in S} s \leq (p \Rightarrow q)$
if and only if $s \leq (p \Rightarrow q)$ for all $s \in S$
if and only if $p \wedge s \leq q$ for all $s \in S$
if and only if $\bigvee_{s \in S} (p \wedge s) \leq q$.

Since this is true for all $q$, the (dual of the) Yoneda principle tells us that $p \wedge \bigvee_{s \in S} s = \bigvee_{s \in S} (p \wedge s)$, as desired. (We don't need to add the hypothesis that the sup on the right side exists, for the first four lines after "We have" show that $p \wedge \bigvee_{s \in S} s$ satisfies the defining property of that sup.)
(2) Suppose $p, q$ are elements of a frame $X$. Define $p \Rightarrow q$ to be $\bigvee \{a \in X : a \wedge p \leq q\}$. By definition, if $a \wedge p \leq q$, then $a \leq (p \Rightarrow q)$. Conversely, if $a \leq (p \Rightarrow q)$, then

$a \wedge p \leq (p \Rightarrow q) \wedge p = \bigvee \{a' \wedge p : a' \wedge p \leq q\},$

where the equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by $q$ (according to the defining property of sups). Hence $a \wedge p \leq q$, as desired.
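Since a finite distributive lattice is automatically a frame, the recipe of part (2) can be tried out computationally. Here is a small Python sketch (my own, using the lattice of divisors of 12 under divisibility, where meet is gcd and join is lcm) which builds $p \Rightarrow q$ as the sup of all $a$ with $a \wedge p \leq q$ and then verifies the defining adjunction:

from math import gcd

n = 12
D = [d for d in range(1, n + 1) if n % d == 0]  # divisors of 12, ordered by divisibility

def lcm(a, b):
    return a * b // gcd(a, b)

def implies(p, q):
    # p ⇒ q := sup { a : a ∧ p ≤ q }, i.e. the lcm of all divisors a with gcd(a, p) dividing q
    out = 1
    for a in D:
        if q % gcd(a, p) == 0:
            out = lcm(out, a)
    return out

# The defining property: a ∧ p ≤ q  iff  a ≤ (p ⇒ q).
ok = all((q % gcd(a, p) == 0) == (implies(p, q) % a == 0)
         for a in D for p in D for q in D)
print(ok, implies(4, 6))  # True 6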
Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as $P(X)$, so long as we trust that $\neg p \vee q$ really does what we ask of implication. We'll come to that point again later.
Part (2) has some interesting consequences vis à vis topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let $U, V$ be open sets in a topology on a set $X$. According to our prescription, we define $U \Rightarrow V$ as the sup (the union) of all open sets $W$ with the property that $W \cap U \subseteq V$. We can think of this inclusion as living in the power set $P(X)$. Then, assuming our formula $\neg U \cup V$ for implication in the Boolean algebra $P(X)$ (where $\neg U$ denotes the complement of $U$), we would have $W \cap U \subseteq V$ if and only if $W \subseteq \neg U \cup V$. And thus, our implication $U \Rightarrow V$ in the topology is the union of all open sets $W$ contained in the (usually non-open) set $\neg U \cup V$. That is to say, $U \Rightarrow V$ is the largest open contained in $\neg U \cup V$, otherwise known as the interior of $\neg U \cup V$. Hence our formula:

$U \Rightarrow V = \mathrm{int}(\neg U \cup V).$
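For a toy illustration (my own, not from the post), take the three-point set $X = \{1, 2, 3\}$ with the chain of open sets $\emptyset \subset \{1\} \subset \{1, 2\} \subset X$. The following Python lines compute $U \Rightarrow V$ in both ways (as the union of all opens $W$ with $W \cap U \subseteq V$, and as $\mathrm{int}(\neg U \cup V)$) and confirm they agree:

# A small topology on X = {1, 2, 3}: a chain of open sets.
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(S):
    # largest open set contained in S
    return max((U for U in opens if U <= S), key=len)

def heyting_implies(U, V):
    # U ⇒ V := union of all open W with W ∩ U ⊆ V
    W = frozenset()
    for O in opens:
        if O & U <= V:
            W |= O
    return W

ok = all(heyting_implies(U, V) == interior((X - U) | V) for U in opens for V in opens)
print(ok)  # True
print(heyting_implies(frozenset({1, 2}), frozenset({1})))  # frozenset({1})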
Definition: A Heyting algebra is a lattice $H$ which admits an implication $p \Rightarrow q$ for any two elements $p, q \in H$. A complete Heyting algebra is a complete lattice which admits an implication for any two elements.
Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make intensional distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a morphism of frames is a poset map which preserves finite meets and arbitrary sups. A morphism of Heyting algebras preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A morphism of complete Heyting algebras also preserves all structure in sight (sups, infs, and implication).
Heyting algebras are usually not Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We'll be speaking more about that next time, but for now I'll remark that Heyting algebra is the algebra which underlies intuitionistic propositional calculus.
Exercise: Show that $p \leq q$ if and only if $(p \Rightarrow q) = 1$ in a Heyting algebra.
Exercise: (For those who know some general topology.) In a Heyting algebra, we define the negation $\neg p$ to be $p \Rightarrow 0$. For the Heyting algebra given by a topology, what can you say about $\neg U$ when $U$ is open and dense?
Previously, on “Stone duality”, we introduced the notions of poset and meet-semilattice (formalizing the conjunction operator “and”), as a first step on the way to introducing Boolean algebras. Our larger goal in this series will be to discuss Stone duality, where it is shown how Boolean algebras can be represented “concretely”, in terms of the topology of their so-called Stone spaces — a wonderful meeting ground for algebra, topology, logic, geometry, and even analysis!
In this installment we will look at the notion of lattice and various examples of lattice, and barely scratch the surface — lattice theory is a very deep and multi-faceted theory with many unanswered questions. But the idea is simple enough: lattices formalize the notions of “and” and “or” together. Let’s have a look.
Let $X$ be a poset. If $x, y$ are elements of $X$, a join of $x$ and $y$ is an element $j$ with the property that for any $z \in X$,

$j \leq z$ if and only if ($x \leq z$ and $y \leq z$).
For a first example, consider the poset $P(X)$ of subsets of a set $X$ ordered by inclusion. The join in that case is given by taking the union, i.e., we have

$A \cup B \subseteq C$ if and only if ($A \subseteq C$ and $B \subseteq C$).
Given the close connection between unions of sets and the disjunction “or”, we can therefore say, roughly, that joins are a reasonable mathematical way to formalize the structure of disjunction. We will say a little more on that later when we discuss mathematical logic.
Notice there is a close formal resemblance between how we defined joins and how we defined meets. Recall that a meet of $x$ and $y$ is an element $m$ such that for all $z \in X$,

$z \leq m$ if and only if ($z \leq x$ and $z \leq y$).
Curiously, the logical structure in the definitions of meet and join is essentially the same; the only difference is that we switched the inequalities (i.e., replaced all instances of $\leq$ by $\geq$). This is an instance of a very important concept. In the theory of posets, the act of modifying a logical formula or theorem by switching all the inequalities but otherwise leaving the logical structure the same is called taking the dual of the formula or theorem. Thus, we would say that the dual of the notion of meet is the notion of join (and vice-versa). This turns out to be a very powerful idea, which in effect will allow us to cut our work in half.
(Just to put in some fine print or boilerplate, let me just say that a formula in the first-order theory of posets is a well-formed expression in first-order logic (involving the usual logical connectives and logical quantifiers and equality over a domain $X$), which can be built up by taking $\leq$ as a primitive binary predicate on $X$. A theorem in the theory of posets is a sentence (a closed formula, meaning that all variables are bound by quantifiers) which can be deduced, following standard rules of inference, from the axioms of reflexivity, transitivity, and antisymmetry. We occasionally also consider formulas and theorems in second-order logic (permitting logical quantification over the power set $P(X)$), and in higher-order logic. If this legalistic language is scary, don't worry — just check the appropriate box in the End User Agreement, and reason the way you normally do.)
The critical item to install before we’re off and running is the following meta-principle:
Principle of Duality: If a logical formula F is a theorem in the theory of posets, then so is its dual F’.
Proof: All we need to do is check that the duals of the axioms in the theory of posets are also theorems; then F' can be proved just by dualizing the entire proof of F. Now the dual of the reflexivity axiom, $x \leq x$, is itself! — and of course an axiom is a theorem. The transitivity axiom, $x \leq y$ and $y \leq z$ implies $x \leq z$, is also self-dual (when you dualize it, it looks essentially the same except that the variables $x$ and $z$ are switched — and there is a basic convention in logic that two sentences which differ only by renaming the variables are considered syntactically equivalent). Finally, the antisymmetry axiom is also self-dual in this way. Hence we are done.
So, for example, by the principle of duality, we know automatically that the join of two elements is unique when it exists — we just dualize our earlier theorem that the meet is unique when it exists. The join of two elements $x$ and $y$ is denoted $x \vee y$.
Be careful, when you dualize, that any shorthand you used to abbreviate an expression in the language of posets is also replaced by its dual. For example, the dual of the notation $x \wedge y$ is $x \vee y$ (and vice-versa of course), and so the dual of the associativity law which we proved for meet is (for all $x, y, z$) $(x \vee y) \vee z = x \vee (y \vee z)$. In fact, we can say
Theorem: The join operation is associative, commutative, and idempotent.
Proof: Just apply the principle of duality to the corresponding theorem for the meet operation.
Just to get used to these ideas, here are some exercises.
- State the dual of the Yoneda principle (as stated here).
- Prove the associativity of join from scratch (from the axioms for posets). If you want, you may invoke the dual of the Yoneda principle in your proof. (Note: in the sequel, we will apply the term “Yoneda principle” to cover both it and its dual.)
To continue: we say a poset is a join-semilattice if it has all finite joins (including the empty join, which is the bottom element $0$ satisfying $0 \leq x$ for all $x$). A lattice is a poset which has all finite meets and finite joins.
Time for some examples.
- The set of natural numbers 0, 1, 2, 3, … under the divisibility order ($x \leq y$ if $x$ divides $y$) is a lattice. (What is the join of two elements? What is the bottom element?) A small computational check related to this example appears just after this list.
- The set of natural numbers under the usual order is a join-semilattice (the join of two elements here is their maximum), but not a lattice (because it lacks a top element).
- The set $P(X)$ of subsets of a set $X$ is a lattice. The join of two subsets is their union, and the bottom element is the empty set.
- The set of subspaces of a vector space $V$ is a lattice. The meet of two subspaces is their ordinary intersection; the join of two subspaces $U$, $W$ is the vector space which they jointly generate (i.e., the set of vector sums $u + w$ with $u \in U$, $w \in W$, which is closed under addition and scalar multiplication).
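Here, as promised, is a quick Python check (my own sketch) for the divisibility example: it confirms, over a small range, that the least common multiple has exactly the universal property required of the join (and dually, the greatest common divisor has that of the meet).

from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# In the divisibility order, "x ≤ y" means "x divides y".  The universal property
# of the join says: lcm(x, y) divides z  iff  (x divides z and y divides z).
ok = all((z % lcm(x, y) == 0) == (z % x == 0 and z % y == 0)
         for x in range(1, 30) for y in range(1, 30) for z in range(1, 200))
print(ok)  # True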
The join in the last example is not the naive set-theoretic union of course (and similar remarks hold for many other concrete lattices, such as the lattice of all subgroups of a group, and the lattice of ideals of a ring), so it might be worth asking if there is a uniform way of describing joins in cases like these. Certainly the idea of taking some sort of closure of the ordinary union seems relevant (e.g., in the vector space example, close up the union of $U$ and $W$ under the vector space operations), and indeed this can be made precise in many cases of interest.
To explain this, let's take a fresh look at the definition of join: the defining property was

$x \vee y \leq z$ if and only if ($x \leq z$ and $y \leq z$).
What this is really saying is that among all the elements $z$ which "contain" both $x$ and $y$, the element $x \vee y$ is the absolute minimum. This suggests a simple idea: why not just take the "intersection" (i.e., meet) of all such elements $z$ to get that absolute minimum? In effect, construct joins as certain kinds of meets! For example, to construct the join of two subgroups $H$, $K$ of a group, take the intersection of all subgroups containing both $H$ and $K$ — that intersection is the group-theoretic closure of the union $H \cup K$.
There’s a slight catch: this may involve taking the meet of infinitely many elements. But there is no difficulty in saying what this means:
Definition: Let $X$ be a poset, and suppose $S \subseteq X$. The infimum of $S$, if it exists, is an element $a \in X$ such that for all $x \in X$,

$x \leq a$ if and only if $x \leq s$ for all $s \in S$.
By the usual Yoneda argument, infima are unique when they exist (you might want to write that argument out to make sure it's quite clear). We denote the infimum of $S$ by $\inf(S)$.
We say that a poset $X$ is an inf-lattice if there is an infimum for every subset. Similarly, the supremum of $S \subseteq X$, if it exists, is an element $a \in X$ such that for all $x \in X$, $a \leq x$ if and only if $s \leq x$ for all $s \in S$. A poset is a sup-lattice if there is a supremum for every subset. [I'll just quickly remark that the notions of inf-lattice and sup-lattice belong to second-order logic, since they involve quantifying over all subsets $S \subseteq X$ (or over all elements of $P(X)$).]
Trivially, every inf-lattice is a meet-semilattice, and every sup-lattice is a join-semilattice. More interestingly, we have the
Theorem: Every inf-lattice is a sup-lattice (!). Dually, every sup-lattice is an inf-lattice.
Proof: Suppose $X$ is an inf-lattice, and let $S \subseteq X$. Let $U$ be the set of upper bounds of $S$. I claim that $a := \inf(U)$ ("least upper bound") is the supremum of $S$. Indeed, from $a = \inf(U)$ and the definition of infimum, we know that $a \leq x$ if $x \in U$, i.e., $a \leq x$ if $s \leq x$ for all $s \in S$. On the other hand, we also know that if $s \in S$, then $s \leq u$ for every $u \in U$, and hence $s \leq a$ by the defining property of infimum (i.e., $a$ really is an upper bound of $S$). So, if $a \leq x$, we conclude by transitivity that $s \leq x$ for every $s \in S$. This completes the proof.
Corollary: Every finite meet-semilattice is a lattice.
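The construction in the proof is completely explicit, so it is easy to carry out on a finite example. Below is a small Python sketch (my own illustration, using the divisors of 30 under divisibility) that computes suprema as infima of sets of upper bounds:

# A finite poset: the divisors of 30 under divisibility.
elements = [1, 2, 3, 5, 6, 10, 15, 30]

def leq(x, y):
    return y % x == 0  # "x ≤ y" means "x divides y"

def inf(S):
    # greatest lower bound: the lower bound that every other lower bound is below
    lower = [z for z in elements if all(leq(z, s) for s in S)]
    return next(m for m in lower if all(leq(z, m) for z in lower))

def sup_via_inf(S):
    # as in the theorem: sup(S) = inf of the set of upper bounds of S
    upper = [u for u in elements if all(leq(s, u) for s in S)]
    return inf(upper)

print(sup_via_inf([6, 10]))  # 30, the least common multiple
print(sup_via_inf([]))       # 1, the empty sup (bottom element)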
Even though every inf-lattice is a sup-lattice and conversely (sometimes people just call them "complete lattices"), there are important distinctions to be made when we consider what is the appropriate notion of homomorphism. The notions are straightforward enough: a morphism of meet-semilattices $f: X \to Y$ is a function which takes finite meets in $X$ to finite meets in $Y$ ($f(x \wedge y) = f(x) \wedge f(y)$, and $f(1) = 1$ where the 1's denote top elements). There is a dual notion of morphism of join-semilattices ($f(x \vee y) = f(x) \vee f(y)$ and $f(0) = 0$ where the 0's denote bottom elements). A morphism of inf-lattices $f: X \to Y$ is a function such that $f(\inf(S)) = \inf(f(S))$ for all subsets $S \subseteq X$, where $f(S)$ denotes the direct image of $S$ under $f$. And there is a dual notion of morphism of sup-lattices: $f(\sup(S)) = \sup(f(S))$. Finally, a morphism of lattices is a function which preserves all finite meets and finite joins, and a morphism of complete lattices is one which preserves all infs and sups.
Despite the theorem above, it is not true that a morphism of inf-lattices must be a morphism of sup-lattices, nor that a morphism of finite meet-semilattices must be a lattice morphism. Therefore, in contexts where homomorphisms matter (which is just about all the time!), it is important to keep the qualifying prefixes around and keep the distinctions straight.
Exercise: Come up with some examples of morphisms which exhibit these distinctions.
Let's see if we can build this from the ground up. We first define a statement (or sometimes, a proposition) to be a meaningful assertion that is either true or false. Well, meaningful means we should be able to say for sure if a statement is true or false. So, something like "Hello, there!" is not counted as a statement, but "the sun is made of butter" is. The latter is evidently false, but the former is neither true nor false. Now, it can get quite cumbersome after a while if we keep using statements such as "the sun is made of butter" every time we need to use them. Thus, it is useful to have variables, or to be precise, propositional variables, to denote all statements. We usually prefer to use $p, q, r$, and so on for such variables.
Now, all of this would be rather boring if we had just symbols such as $p, q, r$, etc. to denote statements. Thus, a statement like "Archimedes was a philosopher" is not that interesting in itself. In fact, all the statements (in our formal system) would be "isolated" ones in the sense that we wouldn't be able to logically "connect" one statement to another. We want to be able to express sentences like "$p$ and $q$", "$p$ implies $q$" and so on. So, we add something called logical connectives (also called operator symbols) to the picture. There are four basic ones: $\wedge$ (conjunction), $\vee$ (disjunction), $\rightarrow$ (material implication), which are all of arity 2, and $\neg$ (negation), which is of arity 1. Using these logical connectives, we can now form compound statements such as $p \wedge q$ (i.e. $p$ and $q$), $p \vee q$ (i.e. $p$ or $q$), $\neg p$ (i.e. not $p$), and $p \rightarrow q$ (i.e. $p$ implies $q$.) Note that each of $\wedge$, $\vee$ and $\rightarrow$ requires two propositional variables in order for it to make any sense; this is expressed by saying their arity is 2. On the other hand, $\neg$ has arity 1 since it is applied to exactly one propositional variable.
We also introduce another logical operator called logical equivalence ($\leftrightarrow$), which has arity 2. It is really convenient to have logical equivalence on hand, as we shall see later. We say $p \leftrightarrow q$ if and only if "$(p \rightarrow q) \wedge (q \rightarrow p)$". What this basically means is, if $p$ is true then so is $q$, and if $q$ is true then so is $p$. Another equivalent way of saying this is, if $p$ is true then so is $q$, and if $p$ is false then so is $q$.
Before we proceed further, we make a few observations. First, if $p$ and $q$ are propositional variables, then by definition each of those is either true or false. Formally speaking, the truth value of $p$ or $q$ is either true or false. This is equally true of the compound statements $p \wedge q$, $p \vee q$, $\neg p$ and $p \rightarrow q$. Of course, the truth values of these four compound statements depend on $p$ and $q$. We will delve into this in the next post.
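As a small preview of the next post, here is a Python snippet (my own, not part of the original text) that tabulates the truth values of the four compound statements; note that, classically, $p \rightarrow q$ is false only when $p$ is true and $q$ is false.

from itertools import product

print("p      q      p∧q    p∨q    ¬p     p→q")
for p, q in product([True, False], repeat=2):
    implies = (not p) or q          # classical material implication
    row = [p, q, p and q, p or q, not p, implies]
    print("  ".join(f"{str(v):<6}" for v in row))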
Second, we don't really need all the four basic operators. Two of those, viz. $\neg$ and $\vee$, suffice for all logical purposes. This means all statements involving $\wedge$ and/or $\rightarrow$ can be "converted" to ones that involve only $\neg$ and $\vee$. However, we can also choose the "minimal" set $\{\neg, \wedge\}$, instead, for the purpose for which we chose the minimal set $\{\neg, \vee\}$. In fact, there are lots of other possible combinations of operators that can serve our purpose equally well. Which minimal set of operators we choose depends sometimes on personal taste and at other times on practical considerations. So, for example, while designing circuits in the field of computer hardware, the minimal operator set that is used is $\{\uparrow\}$ (NAND). In fact, all that's really needed is this particular operator set. Here $p \uparrow q = \neg(p \wedge q)$.
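Assuming the NAND reading above (my reconstruction of the missing symbols), functional completeness is easy to verify by brute force over truth values; a quick Python check:

from itertools import product

def nand(p, q):
    return not (p and q)

# Expressing ¬, ∧ and ∨ using NAND alone.
neg_ = lambda p: nand(p, p)
and_ = lambda p, q: nand(nand(p, q), nand(p, q))
or_  = lambda p, q: nand(nand(p, p), nand(q, q))

ok = all(neg_(p) == (not p) and and_(p, q) == (p and q) and or_(p, q) == (p or q)
         for p, q in product([True, False], repeat=2))
print(ok)  # True — every basic connective can be rewritten in terms of NAND alone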
So, what have we got so far? Well, we have a formal notion of a statement (or proposition.) We have access to propositional variables ($p, q, r$, etc.) that may be used to denote statements. We know how to create the negation of a given statement using the $\neg$ logical connective. We also know how to "connect" any two statements using conjunction, disjunction and material implication, which are symbolically represented by the logical connectives $\wedge$, $\vee$ and $\rightarrow$, respectively. And, lastly, given any two statements $p$ and $q$, we have defined what it means for the two to be logically equivalent (which is symbolically represented by $\leftrightarrow$) to each other. Indeed, $p \leftrightarrow q$ if and only if ($(p \rightarrow q) \wedge (q \rightarrow p)$).
We shall see in the later posts that the above “small” formal system (for propositional calculus) we have built thus far is, in fact, quite powerful. We can, indeed, already employ quite a bit of it in “ordinary” mathematics. But, more on this, later!
I wish to use this part of the blog to quickly go through the basic elements of propositional calculus, and then later move on to predicate calculus in another part of the blog, followed by the fundamentals of relational algebra in yet another part. I might then go through the problem of query optimization in RDBMS after that. Let’s see how far this goes.