
This is a post on “foundations of mathematics” (eek!). I was motivated to write it while I’ve been struggling to understand better certain applications of ultrafilters — namely the theory of measurable cardinals — from a point of view and language that I feel comfortable with. My original intent was to blog about that, as a kind of off-shoot of the general discussion of ultrafilters I started in connection with the series on Stone duality, and because it seems kind of cool. And I will. But this got finished first, and I thought that it would be of interest to some who have been following my category theory posts.

A lot of confusion seems to reign around “the categorical approach to foundations” and what it might entail; some seem to think it involves a “doing-away with elements” that we all know and love, or doing away with sets and supplanting them with categories, or something like that. That’s an unfortunate misunderstanding. My own attitude is pragmatic: I’m all in favor of mathematicians using ordinary “naive” (pre-axiomatic) set theory to express their thoughts if that’s the familiar and appropriate conveyance — I mean, obviously I do it myself. It’s our common heritage, learned through years of undergraduate and graduate school experience and beyond. I’m not proposing for a moment to “overthrow” it.

What I do propose to discuss is a formalized set theory which embodies this rich tradition, but which takes advantage of categorical insights honed over the decades, and which I would argue is ‘natural’ in how readily it accepts the formulas of naive set theory and gives them a foundation true to mathematical practice; I also argue that it addresses certain criticisms which I feel could be leveled at that hallowed foundational theory, ZFC. I should own up that this theory is not itself immune to criticism, a main one being that a certain amount of preface and commentary is required to make it accessible (and I don’t think category theorists have done a particularly hot job of that, frankly).

Let’s start by putting down what we want in very simple, pragmatic terms:

• A (formalized) ‘set theory’ worthy of the name ought to realize a conception of sets as “completed collections”, and allow for the existence of enough sets and relations to fulfill the needs of working mathematicians.

This is intentionally vague. The “needs of working mathematicians” fluctuate over time and place and person. Some of the core needs would include the existence of the sets of natural numbers and real numbers, for instance. On the other hand, set theorists may have greater needs than specialists in the theory of several complex variables. For now I’ll ignore some of the deeper needs of set theorists, and try to focus on the basic stuff you’d need to formalize what goes on in your average graduate school text (to put it vaguely, again).

We will discuss two formalizations of set theory: ZFC, and Lawvere’s Elementary Theory of the Category of Sets [ETCS]. The first “needs no introduction”, as they say. The second is an autonomous category-based theory, described in detail below, and advocated by Saunders Mac Lane as an alternative approach to “foundations of mathematics” (see his book with Moerdijk). Either formalization provides fully adequate infrastructure to support the naive set theory of working mathematicians, but there are significant conceptual differences between them, centering precisely on how the notion of membership is handled. I’ll start with the more familiar ZFC.

As everyone knows, ZFC formalizes a conception of “set” as a collection extensionally determined by the members it contains, and the ZFC axioms ensure a rich supply of ways in which to construct new sets from old (pairings, unions, power sets, etc.). Considering how old and well-developed this theory is, and the plenitude of available accounts, I won’t say much here about its inner development. Instead, I want to pose a question and answer which highlight a key ZFC conception, and which we will use to focus our discussion:

Question: “What are the members of sets?”

Answer: “Other sets.”

This may seem innocent enough, but the consequences are quite far-reaching. It says that “membership” is a relation $\in$ from the collection $V$ of all “sets” to itself. (Speaking at a pre-axiomatic level, a relation from a set $X$ to a set $Y$ is a subset $R \subseteq X \times Y$. So a structure for ZFC set theory consists of a “universe of discourse” $V$, together with a collection $\in \ \subseteq V \times V$ of pairs of elements of $V$, called the membership relation.)

Why is this a big deal? A reasonable analogue might be dynamical systems. If $X$ and $Y$ are manifolds, say, then you can study the properties of a given smooth map $f: X \to Y$ and perhaps say interesting things, but in the case $X = Y$, you get the extra bonus that outputs can be fed back in as inputs, and infinite processes are born: you can study periodic orbits, long-term behaviors, and so on, and this leads to some very intricate mathematics, even when $X$ is a simple manifold like a 2-sphere.

My point is that something analogous is happening in ZFC: we have a (binary) relation $\in$ from $V$ to itself, and we get a rich “dynamics” and feedback by iterative relational composition of $\in$ with itself, or by composing other derived binary relations from $V$ to itself. (Perhaps I should recall here, again at a pre-axiomatic level, that the composite $S \circ R$ of a relation $R \subseteq X \times Y$ and $S \subseteq Y \times Z$ is the subset

$\{(x, z): \exists_{y \in Y} (x, y) \in R \wedge (y, z) \in S\} \subseteq X \times Z$.)
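For finite relations, this composite is directly computable. Here is a minimal Python sketch, modeling a relation as a set of pairs (the sample relations are made up purely for illustration):

```python
# A relation from X to Y is modeled as a set of (x, y) pairs.
def compose(R, S):
    """Composite S . R of relations R ⊆ X×Y and S ⊆ Y×Z:
    all (x, z) such that (x, y) ∈ R and (y, z) ∈ S for some y."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

R = {(1, 'a'), (2, 'b')}       # a relation from {1, 2} to {'a', 'b'}
S = {('a', 'p'), ('a', 'q')}   # a relation from {'a', 'b'} to {'p', 'q'}
assert compose(R, S) == {(1, 'p'), (1, 'q')}
```

Iterating `compose` of a relation with itself is exactly the “dynamics” referred to above.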

A “behavior” then would correspond to an iterated membership chain

$(x \in y) \wedge (y \in z) \wedge (z \in w) \wedge \ldots$

and there are certain constraints on behavior provided by things like the axiom of foundation (no infinitely long backward evolutions). The deep meaning of the extensionality axiom is that a “set” $s$ is uniquely specified by the abstract structure of the tree of possible backward evolutions or behaviors starting from the “root set” $s$. This gives some intuitive but honest idea of the world of sets according to the ZFC picture: sets are tree-like constructions. The ZFC axioms are very rich, having to do with incredibly powerful operations on trees, and the combinatorial results are extremely complicated.

There are other formulations of ZFC. One is by posets: given any relation $R \subseteq V \times V$ (never mind one satisfying the ZFC axioms), one can create a reflexive and transitive relation $\leq$, defined by the first-order formula

$a \leq b$ if and only if $\forall_{x \in V} (x, a) \in R \Rightarrow (x, b) \in R$

The “extensionality axiom” for $R$ can then be formulated as the condition that $\leq$ also be antisymmetric, so that it is a partial ordering on $V$. If $R = \in$ is the membership relation for a model of ZFC, then this $\leq$ is of course just the usual “subset relation” between elements of $V$.
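This derived preorder is easy to experiment with on a small membership relation. A Python sketch, modeling $\emptyset$, $\{\emptyset\}$, $\{\emptyset, \{\emptyset\}\}$ as 0, 1, 2 (a toy fragment, of course, not a model of ZFC):

```python
# Membership pairs (member, set): 0 ∈ 1, 0 ∈ 2, 1 ∈ 2.
V = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2)}

def leq(a, b):
    """a ≤ b iff every x with (x, a) in R also has (x, b) in R."""
    return all((x, b) in R for x in V if (x, a) in R)

assert leq(1, 2)        # {∅} is a subset of {∅, {∅}}
assert not leq(2, 1)
# Extensionality shows up as antisymmetry of ≤ :
assert all(a == b for a in V for b in V if leq(a, b) and leq(b, a))
```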

Then, by adding in a suitable “singleton” operator $s: V \to V$ so that

$x \in y$ if and only if $s(x) = \{x\} \leq y$

the rest of the ZFC axioms can be equivalently recast as conditions on the augmented poset structure $(V, \leq, s)$. In fact, Joyal and Moerdijk wrote a slim volume, Algebraic Set Theory, which gives a precise (and for a categorist, attractive) sense in which models of axiomatic frameworks like ZF can be expressed as certain initial algebras [of structure type $(V, \leq, s)$] within an ambient category of classes, effectively capturing the “cumulative hierarchy” conception underlying ZFC in categorical fashion.

The structure of a ZFC poset $(V, \leq)$ is rich and interesting, of course, but in some ways a little odd or inconvenient: e.g., it has a bottom element of course (the “empty set”), but no top (which would run straight into Russell’s paradox). Categorically, there are some cute things to point out about this poset, usually left unsaid; for example, taking “unions” is left adjoint to taking “power sets”:

$\bigcup F \leq X$  if and only if  $F \leq PX$.
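This adjunction can be checked concretely on finite sets, taking sups in the subset order to be unions. A small Python illustration (the particular families are arbitrary):

```python
from itertools import chain, combinations

def union(F):
    """⋃F: the union of a family of sets."""
    return frozenset(chain.from_iterable(F))

def powerset(X):
    """P(X), as a set of frozensets."""
    s = list(X)
    return frozenset(frozenset(c) for r in range(len(s) + 1)
                     for c in combinations(s, r))

X = frozenset({1, 2, 3, 4})
# The adjunction: ⋃F ≤ X  iff  F ≤ P(X), for families F of sets.
for F in [frozenset({frozenset({1}), frozenset({2, 3})}),
          frozenset({frozenset({5})})]:
    assert (union(F) <= X) == (F <= powerset(X))
```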

In summary: ZFC is an axiomatic theory (in the language of first-order logic with equality), with one basic type $V$ and one basic predicate $\in$ of binary type $V \times V$, satisfying a number of axioms. The key philosophic point is that there is no typed distinction between “elements” and “sets”: both are of type $V$, and there is a consequent very complicated dynamical “mixing” which results just on the basis of a short list of axioms: enough in principle to found all of present-day mathematics! I think the fact that one gets such great power, so economically, from apparently such slender initial data, is a source of great pride and pleasure among those who uphold the ZFC conception (or that of close kin like NBG) as a gold standard in foundations.

My own reaction is that ZFC is perhaps way too powerful! For example, the fact that $\in$ is an endo-relation makes possible the kind of feedback which can result in things like Russell’s paradox, if one is not careful. Even if one is free from the paradoxes, though, the point remains that ZFC pumps out not only all of mathematics, but all sorts of dross and weird by-products that are of no conceivable interest or relevance to mathematics. One might think, for example, that to understand a model of ZFC, we have to be able to understand which definable pairs $(a, b)$ satisfy $a \in b$. So, in principle, we can ask ourselves such otherwise meaningless gibberish as “what in our model and implementation is the set-theoretic intersection of the real number $\pi$ and Cantor space?” and expect to get a well-defined answer. When you get right down to it, the idea that everything in mathematics (like say the number $e$) is a “set” is just plain bizarre, and actually very far removed from the way mathematicians normally think. And yet this is how we are encouraged to think, if we are asked to take ZFC seriously as a foundation.

One might argue that all expressions and theorems of normal mathematics are interpretable or realizable in the single theory ZFC, and that’s really all we ever asked for — the details of the actual implementation (like, ‘what is an ordered pair?’) being generally of little genuine interest to mathematicians (which is why the mathematician in the street who says ZFC is so great usually can’t say with much specificity what ZFC is). But this would seem to demote ZFC foundations, for most mathematicians, to a security blanket — nice to know it’s there, maybe, but otherwise fairly irrelevant to their concerns. But if there really is such a disconnect between how a mathematician thinks of her materials at a fundamental level and how it specifically gets coded up as trees in ZFC, with such huge wads of uninteresting or irrelevant stuff in its midst, we might re-examine just how appropriate ZFC is as “foundations” of our subject, or at least ask ourselves how much of it we usefully retain and how we might eliminate the dross.

We turn now to consider a categorical approach, ETCS. This will require retooling the way we think of mathematical membership. There are three respects in which “membership” or “elementhood” differs here from the way it is handled in ZFC:

1. “Elements” and “sets” are entities of different types. (Meaning, elements are not themselves presupposed to be sets.)
2. When we say “element”, we never mean an object considered in isolation; we always consider it relative to the specific “set” it is considered to be a member of. (That is, strictly speaking, the same object is never thought of as “belonging to” two distinct sets — use of such language must be carefully circumscribed.)
3. We consider not just “elements” in the usual sense, but what are sometimes called “generalized elements”. Civilians call them “functions”. Thus, an element of type $X$ over a domain of variation $U$ is fancy terminology for a function $f: U \to X$. We will call them functions or “generalized elements”, depending on the intuition we have in mind. A function $x: 1 \to X$ corresponds to an ordinary element of $X$.

Each of these corresponds to some aspect of normal practice, but taken together they are sufficiently different in how they treat “membership” that they might need some getting used to. The first corresponds to a decision to treat elements of a “set” like $\mathbb{R}$ as ‘urelements’: they are not considered to have elements themselves and are not considered as having any internal structure; they are just atoms. What counts in a mathematical structure then is not what the constituents are ‘like’ themselves, but only how they are interrelated among themselves qua the structure they are considered being part of.

This brings us right to the second point. It corresponds e.g. to a decision never to consider a number like ‘3’ in isolation or as some Platonic essence, but always with respect to an ambient system to which it is bound, as in ‘3 qua natural number’, ‘3 qua rational number’, etc. It is a firm resolve to always honor context-dependence. Naturally, we can in a sense transport ‘3’ from one context to another via a specified function like $\mathbb{N} \to \mathbb{Q}$, but strictly speaking we’ve then changed the element. This raises interesting questions, like “what if anything plays the role of extensionality?”, or “what does it mean to take the intersection of sets?”. (Globally speaking, in ETCS we don’t — but we can, with a bit of care, speak of the intersection of two “subsets” of a given set. For any real mathematical purpose, this is good enough.)

My own sense is that it may be this second precept which takes the most getting used to — it certainly gives the lie to sometimes-heard accusations that categorical set theory is just a “slavish translation of ZFC into categorical terms”. Clearly, we are witnessing here radical departure from how membership is treated in ZFC. Such unbending commitment to the principle of context-dependence might even be felt to be overkill, a perhaps pedantic exercise in austerity or purity, or that it robs us of some freedom in how we want to manipulate sets. A few quick answers: no, we don’t lose any essential freedoms. Yes, the formal language may seem slightly awkward or stilted at certain points, but the bridges between the naive and formal are mercifully fairly obvious and easily navigated. Lastly, by treating membership not as a global endo-relation on sets, but as local and relative, we effectively eliminate all the extraneous dreck and driftwood which one rightly ignores when examining the mathematics of ZFC.

The third precept is familiar from the way category theorists and logicians have used generalized elements to extend set-theoretic notation, e.g., to chase diagrams in abelian categories, or to describe sheaf semantics of intuitionistic set theory, or to flesh out the Curry-Howard isomorphism. It is a technical move in some sense, but one which is easy to grow accustomed to, and very convenient. In ETCS, there is a strong “extensionality principle” (technically, the requirement that the terminal object is a generator) which guarantees enough “ordinary” elements $x: 1 \to X$ to make any distinctions that can sensibly be made, but experience with topos theory suggests that for many applications, it is often convenient to drop or significantly modify that principle. If anything in ETCS is a nod to traditional set theory, it is such a strong extensionality principle. [The Yoneda principle, which deals with generalized elements, is also an extensionality principle: it says that a set is determined uniquely (to within uniquely specified isomorphism) by its generalized elements.]

Okay, it is probably time to lay out the axioms of ETCS. The basic data are just those of a category; here, we are going to think of objects as “sets”, and morphisms $x: a \to b$ as functions or equivalently as “elements of a set $b$ over a domain of variation $a$“. The latter is a mouthful, and it is sometimes convenient to suppress explicit mention of the domain $a$, so that “$x \in b$” just means some morphism $x$ with codomain $b$. More on this below. The axioms of ETCS are the axioms of category theory, plus existence axioms which guarantee enough structure to express and support naive set theory (under the strictures imposed by precepts 1-3 above). For those who speak the lingo, the axioms below are those of a well-pointed topos with natural number object and axiom of choice. (This can be augmented later with a replacement axiom, so as to achieve bi-interpretability with full ZFC.)

Remark: As ETCS breaks the “dynamical” aspects of ZFC, and additionally treats issues of membership in a perhaps unaccustomed manner, its axioms do take longer to state. This should come as no surprise. Actually, we’ve discussed some of them already in other posts on category theory; we will repeat ourselves but make some minor adjustments to reflect normal notational practice of naive set theory, and build bridges between the naive and formal.

Axiom of products. For any two sets $a, b$, there is a set $c$ and functions $p_1: c \to a$, $p_2: c \to b$, such that given two elements $x \in a, y \in b$ over the same domain, there exists a unique element $\langle x, y \rangle \in c$ over that domain for which

$p_1\langle x, y \rangle = x \qquad p_2\langle x, y \rangle = y$

A choice of product $c$ is usually denoted $a \times b$. To make a bridge with naive set-theory notation, we suggestively write

$a \times b :=_i \{\langle x, y \rangle: x \in a, y \in b\}$

where the funny equality sign and bracketing notation on the right simply mean that the cartesian product is uniquely defined up to isomorphism by its collection of (generalized) elements, which correspond to pairs of elements, in accordance with the Yoneda principle as explained in the discussion here.

We also assume the existence of an “empty product” or terminal object 1: this is a set with a unique element $x \in 1$ over any domain.

Axiom of equalizers. For any two functions $f, g: a \rightrightarrows b$, there exists a function $i: e \to a$ such that

1. $f i = g i$,
2. Given $x \in a$ over some domain such that $f x = g x$, there exists a unique $x' \in e$ over the same domain such that $i x' = x$.

An equalizer $i: e \to a$ is again defined up to isomorphism by its collection of generalized elements, denoted $e :=_i \{x \in a: f(x) = g(x)\} \hookrightarrow a$, again in accordance with the Yoneda principle.

Using the last two axioms, we can form pullbacks: given functions $f: a \to c, g: b \to c$, we can form the set denoted

$\{\langle x, y \rangle \in a \times b: f x = g y\}$

using the product and equalizer indicated by this notation.

Before stating the next axiom, a few important remarks. We recall that a function $i: a \to b$ is injective if for every $x, y \in a$ over the same domain, $i x = i y$ implies $x = y$. In that case we think of $i: a \to b$ as defining a “subset” $a$ of $b$, whose (generalized) elements correspond to those elements $z \in b$ which factor (evidently uniquely) through $i$. It is in that sense that we say $z \in b$ also “belongs to” a subset $a$ (cf. precept 2). A relation from $a$ to $b$ is an injective function or subset $r \hookrightarrow a \times b$.

Axiom of power sets. For every set $b$ there is a choice of power set $P(b)$ and a relation $\in_b \hookrightarrow b \times P(b)$, so that for every relation $r \hookrightarrow a \times b$, there exists a unique function $\chi_r: a \to P(b)$ such that $r$ is obtained up to isomorphism as the pullback

$\{\langle x, y \rangle \in a \times b: y \in_b \chi_r(x)\}$

In other words, $\langle x, y \rangle$ belongs to $r$ if and only if $\langle y, \chi_r(x) \rangle$ belongs to $\in_b$.
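For finite sets this correspondence between relations $r \hookrightarrow a \times b$ and functions $a \to P(b)$ is easy to exhibit. A Python sketch with made-up data (the name `chi` is ours, for $\chi_r$):

```python
def chi(r, a):
    """χ_r: a → P(b), sending x to {y : (x, y) ∈ r}."""
    return {x: frozenset(y for (x2, y) in r if x2 == x) for x in a}

a, b = {1, 2}, {'p', 'q'}
r = {(1, 'p'), (1, 'q'), (2, 'q')}
f = chi(r, a)
# ⟨x, y⟩ belongs to r  iff  y ∈_b χ_r(x):
assert all(((x, y) in r) == (y in f[x]) for x in a for y in b)
```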

Axiom of strong extensionality. For functions $\displaystyle f, g: a \to b$, we have $f = g$ if and only if $f x = g x$ for all “ordinary” elements $x: 1 \to a$.

Axiom of natural number object. There is a set $\mathbf{N}$, together with an element $0: 1 \to \mathbf{N}$ and a function $s: \mathbf{N} \to \mathbf{N}$, which is initial among sets equipped with such data. That is, given a set $a$ together with an element $x: 1 \to a$ and a function $f: a \to a$, there exists a unique function $h: \mathbf{N} \to a$ such that

$h(0) = x; \qquad h s = f h$

Or, in elementwise notation, $h(n+1) = f h(n)$ for every (generalized) element $n \in \mathbf{N}$, where $n+1$ means $s(n)$. Under strong extensionality, we may drop the qualifier “generalized”.
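The universal property is exactly definition by primitive recursion. A minimal Python rendering, where `rec` plays the role of the unique $h$, computed by iterating $f$ (a sketch, not anything canonical):

```python
def rec(x, f):
    """The unique h: N → a with h(0) = x and h(n + 1) = f(h(n))."""
    def h(n):
        out = x
        for _ in range(n):
            out = f(out)
        return out
    return h

double = rec(1, lambda k: 2 * k)   # here h(n) = 2**n
assert [double(n) for n in range(5)] == [1, 2, 4, 8, 16]
```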

Before stating the last axiom, we formulate a notion of “surjective” function: $f: a \to b$ is surjective if for any two functions $g, h: b \to c$, we have $g = h$ if and only if $g f = h f$. This is dual to the notion of being injective, and under the axiom of strong extensionality, is equivalent to the familiar notion: that $f$ is surjective if for every element $y: 1 \to b$, there exists an element $x: 1 \to a$ such that $f x = y$.

Axiom of choice. Every surjective function $s: a \to b$ admits a section, i.e., a function $i: b \to a$ such that $s i = 1_b$, the identity function.

This completes the list of axioms for ETCS. I have been at pains to try to describe them in notation which is natural from the standpoint of naive set theory, with the clear implication that any formula of naive set theory is readily translated into the theory ETCS (provided we pay appropriate attention to our precepts governing membership), and that this theory provides a rigorous foundation for mainstream mathematics.

To make good on this claim, further discussion is called for. First, I have not discussed how classical first-order logic is internalized in this setting (which we would need to do justice to a comprehension or separation scheme), nor have I discussed the existence or construction of colimits. I plan to take this up later, provided I have the energy for it. Again, the plan would be to stick as closely as possible to naive set-theoretic reasoning. (This might actually be useful: the categorical treatments found in many texts tend to be technical, often involving things like monad theory and Beck’s theorem, which makes it hard for those not expert in category theory to get into. I want to show this need not be the case.)

Also, some sort of justification for the claim that ETCS “founds” mainstream mathematics is called for. Minimally, one should sketch how the reals are constructed, for instance, and one should include enough “definability theory” to make it plausible that almost all constructions in ordinary mathematics find a natural home in ETCS. What is excluded? Mainly certain parts of set theory, and parts of category theory (ha!) which involve certain parts of set theory, but this is handled by strengthening the theory with more axioms; I particularly have in mind a discussion of the replacement axiom, and perhaps large cardinal axioms. More to come!

Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and sup). Examples include the poset $PX$ of all subsets of a set $X$, and the poset $Sub(V)$ of all subspaces of a vector space $V$.

I take it that most readers are already familiar with many of the properties of the poset $PX$; there is for example the distributive law $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$, and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset $Sub(V)$, as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to $Sub(V)$ is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, classical logic if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).

To get a sense of the weirdness of $Sub(V)$, take for example a 2-dimensional vector space $V$. The bottom element is the zero space $\{0\}$, the top element is $V$, and the rest of the elements of $Sub(V)$ are 1-dimensional: lines through the origin. For 1-dimensional spaces $x, y$, there is no relation $x \leq y$ unless $x$ and $y$ coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:

       V = 1
      /  |  \
     x   y   z
      \  |  /
         0

Observe that for distinct elements $x, y, z$ in the middle level, we have for example $x \wedge y = 0 = x \wedge z$ (0 is the largest element contained in both $x$ and $y$), and also for example $y \vee z = 1$ (1 is the smallest element containing $y$ and $z$). It follows that $x \wedge (y \vee z) = x \wedge 1 = x$, whereas $(x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0$. The distributive law fails in $Sub(V)$!
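This failure can be verified mechanically. Here is a Python sketch of the five-element lattice pictured above, with meet and join computed from the order (the representation via `below`-sets is our own choice):

```python
# The five-element lattice 0 < {x, y, z} < 1, middle level incomparable.
elems = ['0', 'x', 'y', 'z', '1']
below = {'0': {'0'}, 'x': {'0', 'x'}, 'y': {'0', 'y'},
         'z': {'0', 'z'}, '1': {'0', 'x', 'y', 'z', '1'}}

def leq(a, b):
    return a in below[b]

def meet(a, b):
    """Largest element below both a and b."""
    return max((c for c in elems if leq(c, a) and leq(c, b)),
               key=lambda c: len(below[c]))

def join(a, b):
    """Smallest element above both a and b."""
    return min((c for c in elems if leq(a, c) and leq(b, c)),
               key=lambda c: len(below[c]))

lhs = meet('x', join('y', 'z'))             # x ∧ (y ∨ z) = x ∧ 1 = x
rhs = join(meet('x', 'y'), meet('x', 'z'))  # 0 ∨ 0 = 0
assert (lhs, rhs) == ('x', '0')             # distributivity fails!
```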

Definition: A lattice is distributive if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$ for all $x, y, z$. That is to say, a lattice $X$ is distributive if the map $x \wedge -: X \to X$, taking an element $y$ to $x \wedge y$, is a morphism of join-semilattices.

1. Exercise: Show that in a meet-semilattice, $x \wedge -: X \to X$ is a poset map. Is it also a morphism of meet-semilattices? If $X$ has a bottom element, show that the map $x \wedge -$ preserves it.
2. Exercise: Show that in any lattice, we at least have $(x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z)$ for all elements $x, y, z$.

Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:

Theorem: The notion of distributive lattice is self-dual.

Proof: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, $x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$, follows from the distributive lattice axioms.

Expand the right side to $((x \vee y) \wedge x) \vee ((x \vee y) \wedge z)$, by distributivity. This reduces to $x \vee [(x \vee y) \wedge z]$, by an absorption law. Expand this again, by distributivity, to $x \vee (x \wedge z) \vee (y \wedge z)$. This reduces to $x \vee (y \wedge z)$, by the other absorption law. This completes the proof. $\Box$
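For a concrete sanity check (not a proof, of course), one can brute-force both distributive laws in a familiar distributive lattice, say the divisors of 30 under gcd (meet) and lcm (join):

```python
from math import gcd

# Divisors of 30, a distributive lattice under gcd and lcm.
elems = [d for d in range(1, 31) if 30 % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

# Distributivity and its dual both hold, as the theorem predicts.
for x in elems:
    for y in elems:
        for z in elems:
            assert gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))
            assert lcm(x, gcd(y, z)) == gcd(lcm(x, y), lcm(x, z))
```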

Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:

Definition: A frame is a sup-lattice for which $x \wedge -: X \to X$ is a morphism of sup-lattices, for every $x \in X$. In other words, for every subset $S \subseteq X$, we have $\sup(\{x \wedge s: s \in S\}) = x \wedge \sup(S)$, or, as is often written,

$\bigvee_{s \in S} x \wedge s = x \wedge \bigvee_{s \in S} s.$

Example: A power set $PX$, as always partially ordered by inclusion, is a frame. In this case, it means that for any subset $A$ and any collection of subsets $\{B_i: i \in I\}$, we have

$A \cap (\bigcup_{i \in I} B_i) = \bigcup_{i \in I} A \cap B_i$

This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.

Example: If $X$ is a set, a topology on $X$ is a subset $\mathbf{T} \subseteq PX$ of the power set, partially ordered by inclusion as $PX$ is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element $\emptyset$ and the empty meet or top element $X$ of $PX$ are elements of $\mathbf{T}$, and also:

1. If $U, V$ are elements of $\mathbf{T}$, then so is $U \cap V$.
2. If $\{U_i: i \in I\}$ is a collection of elements of $\mathbf{T}$, then $\bigcup_{i \in I} U_i$ is an element of $\mathbf{T}$.

A topological space is a set $X$ which is equipped with a topology $\mathbf{T}$; the elements of the topology are called open subsets of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in $PX$ (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.

The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! But just to be on the safe side I’ll give now one example of a topological space, with a promise of more to come later. Let $X$ be the set $\mathbb{R}^n$ of $n$-tuples of real numbers. First, define the open ball in $\mathbb{R}^n$ centered at a point $x \in \mathbb{R}^n$ and of radius $r > 0$ to be the set $\{y \in \mathbb{R}^n: \|x - y\| < r\}$. Then, define a subset $U \subseteq \mathbb{R}^n$ to be open if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology is by definition the collection of open sets.

It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the triangle inequality. (Exercise?)

Topology is a huge field in its own right; much of our interest here will be in its interplay with logic. To that end, I want to bring in, in addition to the connectives “and” and “or” we’ve discussed so far, the implication connective in logic. Most readers probably know that in ordinary logic, the formula $p \Rightarrow q$ (“$p$ implies $q$“) is equivalent to “either not $p$ or $q$” — symbolically, we could define $p \Rightarrow q$ as $\neg p \vee q$. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.

The main property we ask of implication is modus ponens: given $p$ and $p \Rightarrow q$, we may infer $q$. In symbols, writing the inference or entailment relation as $\leq$, this is expressed as $p \wedge (p \Rightarrow q) \leq q$. And, we ask that implication be the weakest possible such assumption, i.e., that material implication $p \Rightarrow q$ be the weakest $a$ whose presence in conjunction with $p$ entails $q$. In other words, for given $p$ and $q$, we now define implication $p \Rightarrow q$ by the property

$(a \wedge p \leq q)$ if and only if $(a \leq p \Rightarrow q).$
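In a finite frame this defining sup is directly computable. A Python sketch in the power-set frame $PX$, which also confirms that there the implication agrees with the classical formula $\neg p \vee q$ (the helper names are ours):

```python
from itertools import chain, combinations

def subsets(X):
    """All subsets of X, as frozensets."""
    s = list(X)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def implies(p, q, X):
    """p ⇒ q := sup{a : a ∧ p ≤ q} in the frame PX (sup = union)."""
    return frozenset(chain.from_iterable(
        a for a in subsets(X) if a & p <= q))

X = frozenset({1, 2, 3})
p, q = frozenset({1, 2}), frozenset({2})
r = implies(p, q, X)
assert r == (X - p) | q                              # here ⇒ is ¬p ∨ q
# The defining adjunction: a ∧ p ≤ q  iff  a ≤ p ⇒ q.
assert all((a & p <= q) == (a <= r) for a in subsets(X))
```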

As a very easy exercise, show by Yoneda that an implication $p \Rightarrow q$ is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity holds:

Theorem:

• (1) If $X$ is a meet-semilattice which admits an implication operator, then for every element $p$, the operator $p \wedge -: X \to X$ preserves any sups which happen to exist in $X$.
• (2) If $X$ is a frame, then $X$ admits an implication operator.

Proof: (1) Suppose $S \subseteq X$ has a sup in $X$, here denoted $\bigvee_{s \in S} s$. We have

$(\bigvee_{s \in S} s) \wedge p \leq q$ if and only if

$\bigvee_{s \in S} s \leq p \Rightarrow q$ if and only if

for all $s \in S, (s \leq p \Rightarrow q)$ if and only if

for all $s \in S, (s \wedge p \leq q)$ if and only if

$\bigvee_{s \in S} (s \wedge p) \leq q$.

Since this is true for all $q$, the (dual of the) Yoneda principle tells us that $(\bigvee_{s \in S} s) \wedge p = \bigvee_{s \in S} (s \wedge p)$, as desired. (We don’t need to add the hypothesis that the sup on the right side exists, for the first four lines after “We have” show that $(\bigvee_{s \in S} s) \wedge p$ satisfies the defining property of that sup.)

(2) Suppose $p, q$ are elements of a frame $X$. Define $p \Rightarrow q$ to be $\sup(\{a \in X: a \wedge p \leq q\})$. By definition, if $a \wedge p \leq q$, then $a \leq p \Rightarrow q$. Conversely, if $a \leq p \Rightarrow q$, then

$a \wedge p \leq \sup\{x: x \wedge p \leq q\} \wedge p = \sup\{x \wedge p: x \wedge p \leq q\},$

where the equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by $q$ (according to the defining property of sups). Hence $a \wedge p \leq q$, as desired. $\Box$

Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as $PX$, so long as we trust that $p \Rightarrow q := \neg p \vee q$ really does what we ask of implication. We’ll come to that point again later.
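
As a sanity check (not part of the original argument), the construction in part (2) can be verified mechanically in the small Boolean frame $PX$; the following Python sketch, with names and the choice of $X$ entirely my own, confirms both the defining adjunction and the Boolean formula $\neg p \vee q$:

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def implies(p, q):
    """p => q in PX: the sup (union) of all a with a & p <= q."""
    return frozenset().union(*(a for a in subsets(X) if a & p <= q))

for p in subsets(X):
    for q in subsets(X):
        # In the Boolean algebra PX, p => q agrees with "not p or q" ...
        assert implies(p, q) == (X - p) | q
        # ... and satisfies the defining adjunction of implication:
        for a in subsets(X):
            assert (a & p <= q) == (a <= implies(p, q))
```

Here `&` and `<=` on frozensets are meet and the entailment order, so the inner assertion is literally the adjunction $(a \wedge p \leq q) \Leftrightarrow (a \leq p \Rightarrow q)$.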

Part (2) has some interesting consequences vis-à-vis topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let $U, V$ be open sets in a topology. According to our prescription, we define $U \Rightarrow V$ as the sup (the union) of all open sets $W$ with the property that $W \cap U \subseteq V$. We can think of this inclusion as living in the power set $PX$. Then, assuming our formula $U^c \cup V$ for implication in the Boolean algebra $PX$ (where $U^c$ denotes the complement of $U$), we would have $W \subseteq U^c \cup V$. And thus, our implication $U \Rightarrow V$ in the topology is the union of all open sets $W$ contained in the (usually non-open) set $U^c \cup V$. That is to say, $U \Rightarrow V$ is the largest open set contained in $U^c \cup V$, otherwise known as the interior of $U^c \cup V$. Hence our formula:

$U \Rightarrow V$ = int$(U^c \cup V).$
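
For a concrete check, here is a small Python sketch verifying the interior formula on a hypothetical three-point topology; the particular topology (a chain of opens) and all names are my own choices:

```python
X = frozenset({1, 2, 3})
# A small topology on X: a chain of open sets, chosen for illustration.
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(S):
    """The largest open set contained in S: the union of all opens inside S."""
    return frozenset().union(*(W for W in tau if W <= S))

def heyting_implies(U, V):
    """U => V in the frame tau: the union of all opens W with W & U <= V."""
    return frozenset().union(*(W for W in tau if W & U <= V))

for U in tau:
    for V in tau:
        # the formula U => V = int(U^c ∪ V)
        assert heyting_implies(U, V) == interior((X - U) | V)
```

Note that this frame is not Boolean: for $U = \{1\}$, the "classical" candidate $U^c \cup V$ need not be open, and taking the interior genuinely changes it.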

Definition: A Heyting algebra is a lattice $H$ which admits an implication $p \Rightarrow q$ for any two elements $p, q \in H$. A complete Heyting algebra is a complete lattice which admits an implication for any two elements.

Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make intensional distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a morphism of frames is a poset map which preserves finite meets and arbitrary sups. A morphism of Heyting algebras preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A morphism of complete Heyting algebras also preserves all structure in sight (sups, infs, and implication).

Heyting algebras are usually not Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We’ll say more about that next time, but for now I’ll remark that Heyting algebra is the algebra which underlies intuitionistic propositional calculus.

Exercise: Show that $(0 \Rightarrow 0) = 1$ in a Heyting algebra.

Exercise: (For those who know some general topology.) In a Heyting algebra, we define the negation $\neg x$ to be $x \Rightarrow 0$. For the Heyting algebra given by a topology, what can you say about $\neg U$ when $U$ is open and dense?

The difference between sets $A$ and $B$, also known as the relative complement of $B$ in $A$, is the set $A - B$ defined by

$A - B = \{ x: x \in A \mbox{ and } x \not\in B \}$.

If we assume the existence of a universe, $U$, such that all the sets under discussion are subsets of $U$, then we can considerably simplify our notation. So, for instance, $U - B$ can simply be written as $B'$, which denotes the complement of $B$ in $U$. Similarly, $U' = U - U = \emptyset \,$, $\emptyset ' = U - \emptyset = U$, and so on. A quick look at a few more facts:

• $(A')' = A$,
• $A \cap A' = \emptyset$,
• $A \cup A' = U$,
• $A \subset B$ if and only if $B' \subset A'$.

The last one is proved as follows. We prove the “if” part first. Suppose $B' \subset A'$. If $x \in A$, then clearly $x \not\in A'$. But, since $B' \subset A'$, we have $x \not\in B'$, which implies $x \in B$. Hence, $A \subset B$. This closes the proof of the “if” part. Now, we prove the “only if” part. So, suppose $A \subset B$. Now, if $x \in B'$, then clearly $x \not\in B$. But, since $A \subset B$, we have $x \not\in A$, which implies $x \in A'$. Hence, $B' \subset A'$. This closes the proof of the “only if” part, and, we are done.

The following are the well-known DeMorgan’s laws (about complements):

$(A \cup B)' = A' \cap B'$, and $(A \cap B)' = A' \cup B'$.

Let’s quickly prove the first one. Suppose $x$ belongs to the left hand side. Then, $x \not\in A \cup B$, which implies $x \not\in A$ and $x \not\in B$, which implies $x \in A'$ and $x \in B'$, which implies $x \in A' \cap B'$. This proves that the left hand side is a subset of the right hand side. We can similarly prove the right hand side is a subset of the left hand side, and this closes our proof.

Though it isn’t very apparent at first, if we look carefully at the two proofs above, we notice something called the principle of duality for sets. One encounters such dual principles in mathematics quite often. In this case, the principle is stated as follows.

Principle of duality (for sets): If in an inclusion or equation involving unions, intersections and complements of subsets of $U$ (the universe) we replace each set by its complement, interchange unions and intersections, and reverse all set-inclusions, the result is another theorem.

Using the above principle, it is easy to “derive” one of the DeMorgan’s laws from the other and vice versa. In addition, DeMorgan’s laws can be extended to larger collections of sets instead of just pairs.
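
Both laws, including their extension to arbitrary (nonempty) collections, are easy to check mechanically; here is a short Python sketch, with the universe and the sets chosen arbitrarily by me:

```python
U = set(range(8))          # a finite universe
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

c = lambda S: U - S        # complement relative to U

# DeMorgan's laws for pairs:
assert c(A | B) == c(A) & c(B)
assert c(A & B) == c(A) | c(B)

# The laws extend from pairs to arbitrary (nonempty) collections:
C = [{0, 1}, {1, 2, 3}, {1, 5}]
assert c(set().union(*C)) == set.intersection(*(c(S) for S in C))
assert c(set.intersection(*C)) == set().union(*(c(S) for S in C))
```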

Here are a few exercises on complementation.

1. $A - B = A \cap B'$,
2. $A \subset B$ if and only if $A - B = \emptyset$,
3. $A - (A - B) = A \cap B$,
4. $A \cap (B - C) = (A \cap B) - (A \cap C)$,
5. $A \cap B \subset (A \cap B) \cup (A \cap C)$,
6. $(A \cup C) \cap (B \cup C') \subset A \cup B$.

We will prove the last one, leaving the remaining as exercises to the reader. Suppose $x$ belongs to the left hand side. Then, $x \in A \cup C$ and $x \in B \cup C'$. Now, note that if $x \in C$, then $x \not\in C'$, which implies $x \in B$, which implies $x \in A \cup B$. If, on the other hand, $x \in C'$, then $x \not\in C$, which implies $x \in A$, which implies $x \in A \cup B$. Hence, in either case, the left hand side is a subset of $A \cup B$, and we are done.
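
The six identities can also be spot-checked exhaustively over a small universe; a Python sketch (the four-element universe is an arbitrary choice of mine):

```python
from itertools import chain, combinations

U = set(range(4))
c = lambda S: U - S        # complement relative to U
subsets = [set(s) for s in
           chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

for A in subsets:
    for B in subsets:
        assert A - B == A & c(B)                          # 1
        assert (A <= B) == (A - B == set())               # 2
        assert A - (A - B) == A & B                       # 3
        for C in subsets:
            assert A & (B - C) == (A & B) - (A & C)       # 4
            assert A & B <= (A & B) | (A & C)             # 5
            assert (A | C) & (B | c(C)) <= A | B          # 6
```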

We now define the symmetric difference (or Boolean sum) of two sets $A$ and $B$ as follows:

$A \triangle B = (A-B) \cup (B-A)$.

This is basically the set of all elements in $A$ or $B$ but not in $A \cap B$. In other words, $A \triangle B = (A \cup B) - (A \cap B)$. Again, a few facts (involving symmetric differences) that aren’t hard to prove:

1. $A \triangle \emptyset = A$,
2. $A \triangle A = \emptyset$,
3. $A \triangle B = B \triangle A$ (commutativity),
4. $(A \triangle B) \triangle C = A \triangle (B \triangle C)$ (associativity),
5. $(A \triangle B) \triangle (B \triangle C) = A \triangle C$,
6. $A \cap (B \triangle C) = (A \cap B) \triangle (A \cap C)$.

This brings us now to the axiom of powers, which states that if $E$ is a set, then there exists a set that contains all the possible subsets of $E$ as its elements.

Axiom of powers: If $E$ is a set, then there exists a set (collection) $P$, such that if $X \subset E$, then $X \in P$.

The set $P$, described above, may be too “comprehensive”, i.e., it may contain sets other than the subsets of $E$. Once again, we “fix” this by applying the axiom of specification to form the new set $P = \{ X: X \subset E \}$. The set $P$ is called the power set of $E$, and the axiom of extension, again, guarantees its uniqueness. We denote $P$ by $P(E)$ to show the dependence of $P$ on $E$. A few illustrative examples: $P(\emptyset) = \{ \emptyset \}$, $P(\{ a \}) = \{ \emptyset, \{ a \} \}$, $P(\{ a, b \}) = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \} \}$, and so on.

Note that if $E$ is a finite set, containing $n$ elements, then the power set $P(E)$ contains $2^n$ elements. The “usual” way to prove this is by either using a simple combinatorial argument or by using some algebra. The combinatorial argument is as follows. In forming a subset of $E$, each element of $E$ is either included or it isn’t: there are thus two choices for each element; since there are $n$ elements in $E$, the number of all possible subsets of $E$ is therefore $2^n$. A more algebraic way of proving the same result is as follows. The number of subsets with $k$ elements is $\binom{n}{k}$. So, the number of subsets of $E$ is $\sum_{k=0}^{n}\binom{n}{k}$. But, from the binomial theorem, we have $\displaystyle (1 + x)^n = \sum_{k=0}^{n}\binom{n}{k}x^k$. Putting $x = 1$, we get $2^n$ as our required answer.
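
The counting argument can be illustrated computationally; here is a hypothetical `power_set` helper of my own, built on the standard library’s `itertools`:

```python
from itertools import chain, combinations

def power_set(E):
    """All subsets of E, as a set of frozensets."""
    E = list(E)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(E, r)
                                         for r in range(len(E) + 1))}

E = {'a', 'b', 'c'}
P = power_set(E)
assert len(P) == 2 ** len(E)                 # the 2^n count from above
assert frozenset() in P and frozenset(E) in P
```

The comprehension mirrors the algebraic proof: for each $k$ from $0$ to $n$, `combinations` yields the $\binom{n}{k}$ subsets of size $k$.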

A few elementary facts:

1. $\bigcap_{X \in P(E)} X = \emptyset$.
2. If $E \subset F$, then $P(E) \subset P(F)$.
3. $E = \bigcup P(E)$.

EXERCISES:

1. Prove that $P(E) \bigcap P(F) = P(E \bigcap F)$.

2. Prove that $P(E) \bigcup P(F) \subset P(E \bigcup F)$.
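Both exercises can be checked on small examples; a Python sketch (the `power_set` helper from the earlier snippet is repeated here so this one is self-contained, and the sets are my own choices):

```python
from itertools import chain, combinations

def power_set(E):
    """All subsets of E, as a set of frozensets."""
    E = list(E)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(E, r)
                                         for r in range(len(E) + 1))}

E = {1, 2, 3}
F = {2, 3, 4}

# Exercise 1: a set is a subset of both E and F exactly when it is a
# subset of their intersection.
assert power_set(E) & power_set(F) == power_set(E & F)

# Exercise 2: the inclusion is proper in general; e.g. {1, 4} is a
# subset of E ∪ F but of neither E nor F alone.
assert power_set(E) | power_set(F) <= power_set(E | F)
assert power_set(E) | power_set(F) != power_set(E | F)
```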

We are now ready to discuss a couple of familiar set theoretic operations: unions and intersections. Given two sets $A$ and $B$, it would be “nice” to have a set $U$ that contains all the elements that belong to at least one of $A$ or $B$. In fact, it would be nicer to generalize this to a collection of sets instead of just two, though we must be careful about using words like “two”, “three” and so on, since we haven’t really defined what numbers are so far. We don’t want to run the risk of inadvertently introducing circularity in our arguments! Anyway, this brings us to the following axiom.

Axiom of unions: For every collection of sets, there exists a set that contains all the elements that belong to at least one set of the given collection.

In other words, for every collection $C$, there exists a set $U$ such that if $x \in X$ for some $X$ in $C$, then $x \in U$. Now, the set $U$ may contain “extra” elements that may not belong to any $X$ in $C$. This can be easily fixed by invoking the axiom of specification to form the set $\{ x \in U: x \in X \mbox{ for some } X \mbox{ in } C \}$. This set is called the union of the collection $C$ of sets. Its uniqueness is guaranteed by the axiom of extension.

Generally, if $C$ is a collection of sets, then the union is denoted by $\bigcup \{ X: X \in C \}$, or $\bigcup_{X \in C} X$.

A quick look at a couple of simple facts.

1) $\bigcup \{ X: X \in \emptyset \} = \emptyset$, and

2) $\bigcup \{ X: X \in \{ A \} \} = A$.

We finally arrive at the definition of the union of two sets, $A$ and $B$. $A \cup B = \{ x: x \in A \mbox{ or } x \in B \}$.

Below is a list of a few facts about unions of pairs:

• $A \cup \emptyset = A$,
• $A \cup B = B \cup A$ (commutativity),
• $A \cup ( B \cup C) = (A \cup B) \cup C$ (associativity),
• $A \cup A = A$ (idempotence),
• $A \subset B$ if and only if $A \cup B = B$.

Now, we define the intersection of two sets, $A$ and $B$ as follows.

$A \cap B = \{ x: x \in A \mbox{ and } x \in B\}$.

Once again, a few facts about intersections of pairs (analogous to the ones involving unions):

• $A \cap \emptyset = \emptyset$,
• $A \cap B = B \cap A$ ,
• $A \cap ( B \cap C) = (A \cap B) \cap C$,
• $A \cap A = A$,
• $A \subset B$ if and only if $A \cap B = A$.

Also, if $A \cap B = \emptyset$, then the sets $A$ and $B$ are called disjoint sets.

Two useful distributive laws involving unions and intersections:

• $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$,
• $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$.

We prove the first one of the above. The proof of the second one is left as an exercise to the reader. The proof relies on the idea that we show each side is a subset of the other. So, suppose $x$ belongs to the left hand side; then $x \in A$ and $x \in B \cup C$, which implies $x \in A$ and $x \in B$ or $C$, which implies $x \in A \cap B$ or $x \in A \cap C$, which implies $x \in (A \cap B) \cup (A \cap C)$; hence $x$ belongs to the right hand side. This proves that the left hand side is a subset of the right hand side. A similar argument shows that the right hand side is a subset of the left hand side. And, we are done.
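A quick mechanical check of both distributive laws, on sets I have chosen arbitrarily:

```python
A = {1, 2, 3}
B = {3, 4}
C = {1, 4, 5}

# intersection distributes over union ...
assert A & (B | C) == (A & B) | (A & C)
# ... and union distributes over intersection
assert A | (B & C) == (A | B) & (A | C)
```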

The operation of the intersection of sets from a collection, $C$, is similar to that of the union of sets from $C$. However, the definition will require that we prohibit $C$ from being empty, and we will see why in the next section. So, for each collection, $C$, there exists a set $V$ such that $x \in V$ if and only if $x \in X$ for every $X$ in $C$. To construct such a set $V$, we choose any set $A$ in $C$ – this is possible because $C \not= \emptyset$ – and write $\{ x \in A: x \in X \mbox{ for all } X \mbox{ in } C\}$.

Note that the above construction is only used to prove that $V$ exists. The set $V$ itself doesn’t depend on the arbitrary choice of $A$ from the collection $C$. We can, in fact, write

$\{ x: x \in X \mbox{ for all } X \mbox{ in } C\}$.

The set $V$ is called the intersection of the collection $C$ of sets. The axiom of extension, once again, guarantees its uniqueness. The usual notation for such a set $V$ is $\bigcap \{ X: X \in C \}$ or $\bigcap_{X \in C} X$.
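
In Python, the union and intersection of a collection can be computed directly, and the nonemptiness requirement for intersections shows up naturally; a sketch with an arbitrarily chosen collection:

```python
C = [{0, 1}, {1, 2, 3}, {1, 5}]

# Union of the collection: set().union copes even with an empty collection,
# since the union over the empty collection is the empty set.
union = set().union(*C)
assert union == {0, 1, 2, 3, 5}

# Intersection of the collection: C must be nonempty, exactly as in the
# text; set.intersection() with no sets raises a TypeError, because there
# is no set A to start the construction from.
intersection = set.intersection(*C)
assert intersection == {1}
```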

EXERCISE: $(A \cap B) \cup C = A \cap (B \cup C)$ if and only if $C \subset A$.

SOLUTION: We first prove the “if” part. So, suppose $C \subset A$. Now, if $x \in (A \cap B) \cup C$, then either $x \in A \cap B$ or $x \in C$. In the first case, $x \in A$ and $x \in B$, which implies $x \in A$ and $x \in B \cup C$. In the second case, we again have $x \in A$ (since $C \subset A$), which implies $x \in A$ and $x \in B \cup C$. In either case, we have $x \in A \cap (B \cup C)$. Hence, $(A \cap B) \cup C$ is a subset of $A \cap (B \cup C)$.

Similarly, if $x \in A \cap (B \cup C)$, then $x \in A$ and $x \in B \cup C$. Now, if $x \in B$, then $x \in A \cap B$, which implies $x \in (A \cap B) \cup C$. And, if $x \in C$, then once again $x \in (A \cap B) \cup C$. Thus, in either case, $x \in (A \cap B) \cup C$. Hence, $A \cap (B \cup C)$ is a subset of $(A \cap B) \cup C$. We, thus, proved $(A \cap B) \cup C = A \cap (B \cup C)$. This concludes the proof of the “if” part.

Now, we prove the “only if” part. So, suppose $(A \cap B) \cup C = A \cap (B \cup C)$. If $x \in C$, then $x$ belongs to the left hand side of the equality, which implies $x$ belongs to the right hand side. This implies $x \in A$ (and $x \in B \cup C$.) Hence, $C \subset A$. And, we are done.

After postulating a couple of important axioms in the previous two sections, we now arrive at a couple of important results.

1. There exists an empty set. (In fact, there exists exactly one!)

2. The empty set is a subset of every set.

Indeed, to prove the first result, suppose $A$ is some set. Then, the set $\{ x \in A: x \not= x \}$ is clearly an empty set, i.e. it doesn’t contain any elements. To “picture” this, imagine an empty box with nothing inside it. In fact, we can apply the axiom of specification to $A$ with any universally false sentence to create an empty set. The empty set is denoted by $\emptyset$. The axiom of extension, on the other hand, guarantees there can be only one empty set.

Now, how do we argue that $\emptyset \subset A$, for any arbitrary set $A$? Well, the reasoning is an indirect one, and for most beginners, it doesn’t seem like a complete one. There is something in the argument that doesn’t feel quite “right!” However, there is nothing “incomplete” about the argument, and here it is anyway.

Suppose, for the sake of contradiction, that the empty set, $\emptyset$, is not a subset of $A$. Then, there exists an element in $\emptyset$ that doesn’t belong to $A$. But, the empty set is empty, and hence, no such element exists! This means our initial hypothesis is false. Hence, we conclude (maybe, still somewhat reluctantly) $\emptyset \subset A$.

Now, the set theory we have developed thus far isn’t a very rich one; after all, we have only shown that there is just one set and that it is empty! Can we do better? Can we come up with an axiom that can help us construct new sets? Well, it turns out, there is one.

Axiom of pairing: If $a$ and $b$ are two sets, then there exists a set $A$ such that $a \in A$ and $b \in A$.

The above axiom guarantees that if there are two sets, $a$ and $b$, then there exists another one, $A$, that contains both of these. However, $A$ may contain elements other than $a$ and $b$. So, can we guarantee there is a set that contains exactly $a$ and $b$ and nothing else? Indeed, we can. We just apply the axiom of specification to $A$ with the sentence “$x = a$ or $x = b$.” Thus, the set $\{ x \in A: x = a \mbox{ or } x = b\}$ is the required one.

The above construction of a particular set illustrates one important fact: all the remaining principles of set construction are pseudo-special cases of the axiom of specification. Indeed, if it were given that there exists a set containing some particular elements, then the existence of a set containing exactly those elements (and nothing else) would follow as a special case of the axiom of specification.

Now, observe that if $a$ is a set, then the axiom of pairing implies the existence of the set $\{ a, a \}$, which is the same as the set $\{ a \}$ and is called the singleton of $a$. Also, note that $\emptyset$ and $\{ \emptyset \}$ are different sets; the first has no elements at all, whereas the second has exactly one element, viz. the empty set. In fact, there is a minimalist (inductive) way of constructing the set of natural numbers, $N$ (due to von Neumann), using the axiom of infinity as follows.

$0 = \emptyset, 1 = \{ 0 \}, 2 = \{ 0, 1 \}, \ldots$

But, more on this later.
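
The first few steps of this construction can be sketched in Python using `frozenset` (the helper name is mine, and this is only an illustration of the inductive idea, not a formal construction):

```python
def von_neumann(n):
    """The von Neumann natural n = {0, 1, ..., n-1}, built up from the empty set."""
    k = frozenset()                 # 0 = ∅
    for _ in range(n):
        k = k | {k}                 # successor: k + 1 = k ∪ {k}
    return k

zero, one, two = von_neumann(0), von_neumann(1), von_neumann(2)
assert zero == frozenset()
assert one == {zero}                # 1 = {0}
assert two == {zero, one}           # 2 = {0, 1}
assert len(von_neumann(5)) == 5     # the natural n has exactly n elements
```

Note the distinction from the text in action: `zero` has no elements, while `one = {zero}` has exactly one, namely the empty set.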

The axiom of extension (discussed in Section 1) is unique in the sense that it postulates the existence of a relation between belonging and equality. All the other axioms of set theory, on the other hand, are designed to create new sets out of old ones!

The axiom of specification, loosely speaking, states that given some arbitrary (but well-defined) set $U$ (our universe), if we can make some “intelligent” assertion about the elements of $U$, then we specify or characterize a subset of $U$. An intelligent assertion about the elements of $U$ could, for example, specify a property that is shared by some elements of $U$ and not shared by the other elements. In the end, we will take up an example about an assertion that is tied to the famous Russell’s paradox.

For now, let us discuss a simple example. Suppose $A$ is the set of all (living) women. If we use $x$ to denote an arbitrary element of $A$, then the sentence “$x$ is married” is true for some of the elements of $A$ and false for others. Thus, we could generate a subset of $A$ using such a sentence. So, the subset of all the women who are married is denoted by $\{ x \in A: x \mbox{ is married}\}$. To take another example, if $N$ is the set of natural numbers, then $\{x \in N: 1 < x \leq 5 \} = \{ 2, 3, 4, 5 \}$. Now, note that the subset $\{ 2 \}$ of $N$ is not the same as the number $2.$ Loosely speaking, a box containing a hat is not the same thing as the hat itself.

Now, we only need to define what a sentence is before we can precisely formulate our axiom of specification. The following rules would be a formal way to (recursively) define a sentence:

“$x \in A$” is a sentence.

“$A = B$” is a sentence.

If $X$ is a sentence, then $(\mbox{not}(X))$ is a sentence.

If $X, Y$ are sentences, then $(X \mbox{ and } Y)$ is a sentence.

If $X, Y$ are sentences, then $(X \mbox{ or } Y)$ is a sentence.

If $X, Y$ are sentences, then $(\mbox{if } X \mbox{ then } Y)$ is a sentence.

If $X, Y$ are sentences, then $(X \mbox{ if and only if } Y)$ is a sentence.

If $X$ is a sentence, then $(\mbox {for some }y (X) )$ is a sentence.

If $X$ is a sentence, then $(\mbox {for all }y (X))$ is a sentence.

Note that the two types of sentences, “$x \in A$” and “$A = B$“, stated in the first two rules, are what we would call atomic sentences, while the rest of the rules specify (valid) ways of generating (infinitely) many sentences from those two atomic sentences using the usual logical operators. Also, note that some of the rules above are redundant, because a sentence built using one set of logical operators can often be converted to an equivalent sentence built using a different set. For example, “$X \mbox{ or } Y$” can be written as “$\mbox{not ( not(X) and not(Y))}$“, and so on. Anyway, we are digressing too far from our objective.

Having defined what a sentence is, we can now formulate the major principle of set theory, often referred to by its German name Aussonderungsaxiom.

Axiom of specification: To every set $A$ and to every condition $S(x)$, there corresponds a set $B$ whose elements are exactly those elements $x$ of $A$ for which $S(x)$ holds.

A “condition” here just means a sentence. The letter $x$ is free in the sentence $S(x)$, meaning $x$ occurs in $S(x)$ at least once without occurring in the phrases “for some $x$” or “for all $x$“. Now, the axiom of extension guarantees us that the axiom of specification determines the set $B$ uniquely, and we usually write $B = \{ x \in A: S(x) \}$.

This finally brings us to the example we mentioned in the beginning of this section. Let us define $S(x) := x \not\in x$. Suppose $A$ is some arbitrary set. Let $B = \{ x \in A: x \not\in x \}$. Then for all $y$,

$(*) \,\, y \in B \mbox{ if and only if } (y \in A \mbox{ and } y \not\in y)$.

Can we have $B \in A$? The answer is no, and here’s why. Suppose, for the sake of contradiction, $B \in A$. Then, we have either $B \in B$, or $B \not\in B$. If $B \in B$, then using $(*)$, we have $B \not\in B$, a contradiction. And, if $B \not\in B$, then using $(*)$ again, the assumption $B \in A$ yields $B \in B$, a contradiction. This proves that $B \in A$ is false, and hence we conclude $B \not\in A$. Note that our set $A$ was an arbitrary one, and we just showed that there is something (viz. $B$) that does not belong to $A$. We have, thus, essentially proved that

there is no universe.

Here, “universe” means “universe of discourse”, a set that contains all the objects that enter into that discussion.

It was mentioned earlier that the above example has something to do with Russell’s paradox. We shall see why. In the earlier pre-axiomatic approaches to set theory, the existence of a universe was taken for granted. Now, in the above example, we showed that $B \not\in A$ for an arbitrary set $A$, which implies the non-existence of a universe. So, if we assume that a universe $A$ exists, then every set belongs to it; in particular, $B \in A$, but we have already shown that this leads to a contradiction! And this was exactly the content of Russell’s paradox. In Halmos’ own words:

The moral is that it is impossible, especially in mathematics, to get something for nothing. To specify a set, it is not enough to pronounce some magic words (which may form a sentence such as “$x \not\in x$“); it is necessary also to have at hand a set to whose elements the magic words apply.

We encounter sets, or if we prefer, collections of objects, every day in our lives. A herd of cattle, a group of women, or a bunch of yahoos are all instances of sets of living beings. “The mathematical concept of a set can be used as the foundation for all known mathematics.” The purpose here is to develop the basic properties of sets. As a slight digression, I wouldn’t consider myself a Platonist; hence, I don’t believe there are some abstract properties of sets “out there” and that the chief purpose of mathematics is to discover those abstract things, so to speak. Even though the idea of a set is ubiquitous and it seems like the very concept of a set is “external” to us, I still think that we must build, or rather postulate, the existence of the fundamental properties of sets. (I think I am more of a quasi-empiricist.)

Now, we won’t define what a set is, just as we don’t define what points or lines are in the familiar axiomatic approach to elementary geometry. So, we somewhat rely on our intuition to develop a definition of sets. Of course, our intuition may go wrong once in a while, but one of the very purposes of our exposition is to reason very clearly about our intuitive ideas, so that we can correct them any time if we discover they are wrong.

Now, a very reasonable thing to “expect” from a set is it should have elements or members. So, for example, Einstein was a member of the set of all the people who lived in the past. In mathematics, a line has points as its members, and a plane has lines as its members. The last example is a particularly important one for it underscores the idea that sets can be members of other sets!

So, a way to formalize the above notion is by developing the concept of belonging. This is a primitive (undefined) concept in axiomatic set theory. If $x$ is a member of $A$ ($x$ is contained in $A$, or $x$ is an element of $A$), we write $x \in A$. (The symbol $\in$ is derived from the Greek letter epsilon, $\epsilon$, introduced by Peano in 1888.) If $x$ is not an element of $A$, we write $x \not\in A$. Note that we generally reserve lowercase letters ($x, y$, etc.) for members or elements of a set, and we use uppercase letters to denote sets.

A possible relation between sets, more elementary than belonging, is equality. If two sets $A$ and $B$ are equal, we write $A = B.$ If two sets $A$ and $B$ are not equal, we write $A \not= B.$

Now, the most basic property of belonging is its relation to equality, which brings us to the following formulation of our first axiom of set theory.

Axiom of extension: Two sets are equal if and only if they have the same elements.

Let us examine the relation between equality and belonging a little more deeply. Suppose we consider human beings instead of sets, and change our definition of belonging a little. If $x$ and $A$ are human beings, we write $x \in A$ whenever $x$ is an ancestor of $A$. Then our new (or analogous) axiom of extension would say if two human beings $A$ and $B$ are equal then they have the same ancestors (this is the “only if” part, and it is certainly true), and also that if $A$ and $B$ have the same ancestors, then they are equal (this is the “if” part, and it certainly is false.) $A$ and $B$ could be two sisters, in which case they have the same ancestors but they are certainly not the same person.

Conclusion: The axiom of extension is not just a logically necessary property of equality but a non-trivial statement about belonging.

Also, note that the two sets $A = \{ x, y \}$ and $B = \{ x, x, y, y, y \}$ have the same elements, and hence, by the axiom of extension, $A = B$, even though it seems like $A$ has just two elements while $B$ has five! It is due to this that we drop duplicates while writing down the elements of a set. So, in the above example, it is customary to simply write $B = \{ x,y \}$.
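
This behavior is mirrored exactly by Python’s built-in set type, where duplicates collapse on construction:

```python
A = {'x', 'y'}
B = {'x', 'x', 'y', 'y', 'y'}   # duplicates collapse on construction

# extensionality in action: same elements, hence equal sets
assert A == B
assert len(B) == 2
```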

Now, we come to the definition of a subset. Suppose $A$ and $B$ are sets. If every member of $A$ is a member of $B$, then we say $A$ is a subset of $B$, or $B$ includes $A$, and write $A \subset B$ or $B \supset A$. This definition, clearly, implies that every set $A$ is a subset of itself, i.e. $A \subset A$, which demonstrates the reflexive property of set inclusion. (Of course, equality also satisfies the reflexive property, i.e. $A = A$.) We say $A$ is a proper subset of $B$ whenever $A \subset B$ but $A \not= B$. Now, if $A \subset B$ and $B \subset C$, then $A \subset C$, which demonstrates the transitive property of set inclusion. (Once again, equality also satisfies this property, i.e. if $A = B$ and $B = C$, then $A = C$.) However, we note that set inclusion doesn’t satisfy the symmetric property: $A \subset B$ doesn’t necessarily imply $B \subset A$. (On the other hand, equality satisfies the symmetric property, i.e. if $A = B$, then $B = A$.)

But, set inclusion does satisfy one very important property: the antisymmetric one. If we have $A \subset B$ and $B \subset A$, then $A$ and $B$ have the same elements, and therefore, by the axiom of extension, $A = B$. In fact, we can reformulate the axiom of extension as follows:

Axiom of extension(another version): Two sets $A$ and $B$ are equal if and only if $A \subset B$ and $B \subset A$.

In mathematics, the above is almost always used whenever we are required to prove that two sets $A$ and $B$ are equal. All we do is show that $A \subset B$ and $B \subset A$, and invoke the (above version of) axiom of extension to conclude that $A = B$.

Before we conclude, we note that conceptually belonging ($\in$) and set inclusion ($\subset$) are two different things. $A \subset A$ always holds, but $A \in A$ is “false”; at least, it isn’t true of any reasonable set that anyone has ever constructed! This means, unlike set inclusion, belonging does not satisfy the reflexive property. Again, unlike set inclusion, belonging does not satisfy the transitive property. For example, a person could be considered a member of a country, and a country could be considered a member of the United Nations Organization (UNO); however, a person is not a member of the UNO.

I have just started reading Paul R. Halmos’ classic text Naive Set Theory, and I intend to blog on each section of the book. The purpose is mainly to internalize all the material presented in the book and at the same time provide the gist of each section, so that I can always come back and read whenever I feel like doing so. The actual text, divided into 25 sections (or chapters, if you will), comprises 102 pages. Halmos’ original intention was “to tell the beginning student of advanced mathematics the basic set theoretic facts of life, and to do so with the minimum of philosophical discourse and logical formalism… The style is usually informal to the point of being conversational.”

The reader is warned that “the expert specialist will find nothing new here.” Halmos recommends Hausdorff’s Set Theory and Axiomatic Set Theory by Suppes for a more extensive treatment of the subject. Nevertheless, the treatment by Halmos is not trivial at all. I personally feel his exposition is impeccable!

Almost all the ideas presented in the following posts belong to the author of the book, and I make absolutely no claims to originality in the exposition.
