In this post, I’d like to move from abstract, general considerations of Boolean algebras to more concrete ones, by analyzing what happens in the finite case. A rather thorough analysis can be performed, and we will get our first taste of a simple categorical duality, the finite case of Stone duality which we call “baby Stone duality”.
Since I have just mentioned the “c-word” (categories), I should say that a strong need for some very basic category theory makes itself felt right about now. It is true that Marshall Stone stated his results before the language of categories was invented, but it’s also true (as Stone himself recognized, after categories were invented) that the most concise and compelling and convenient way of stating them is in the language of categories, and it would be crazy to deny ourselves that luxury.
I’ll begin with a relatively elementary but very useful fact discovered by Stone himself — in retrospect, it seems incredible that it was found only after decades of study of Boolean algebras. It says that Boolean algebras are essentially the same things as what are called Boolean rings:
Definition: A Boolean ring is a commutative ring (with identity $1$) in which every element $x$ is idempotent, i.e., satisfies $x^2 = x$.
Before I explain the equivalence between Boolean algebras and Boolean rings, let me tease out a few consequences of this definition.
Proposition 1: For every element $x$ in a Boolean ring, $x + x = 0$.

Proof: By idempotence, we have $x + x = (x + x)^2 = x^2 + x^2 + x^2 + x^2 = x + x + x + x$. Since $x + x = (x + x) + (x + x)$, we may additively cancel $x + x$ in the ring to conclude $0 = x + x$.
This proposition implies that the underlying additive group of a Boolean ring is a vector space over the field $\mathbb{Z}_2$ consisting of two elements. I won’t go into details about this, only that it follows readily from the proposition if we define a vector space over $\mathbb{Z}_2$ to be an abelian group $V$ together with a ring homomorphism $\mathbb{Z}_2 \to \hom(V, V)$ to the ring of abelian group homomorphisms from $V$ to itself (where such homomorphisms are “multiplied” by composing them; the idea is that this ring homomorphism takes an element $r$ to scalar-multiplication $r \cdot (-)$).

Anyway, the point is that we can now apply some linear algebra to study this $\mathbb{Z}_2$-vector space; in particular, a finite Boolean ring $B$ is a finite-dimensional vector space over $\mathbb{Z}_2$. By choosing a basis, we see that $B$ is vector-space isomorphic to $\mathbb{Z}_2^n$ where $n$ is the dimension. So the cardinality of a finite Boolean ring must be of the form $2^n$. Hold that thought!
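As a quick sanity check (my own illustration, not from the original post; the choice of a 3-element base set is arbitrary), here is a brute-force verification in Python that the power set of a finite set, with symmetric difference as addition and intersection as multiplication, is a Boolean ring of cardinality $2^n$:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# The power set of a 3-element set, with symmetric difference as
# addition and intersection as multiplication, is a Boolean ring.
S = {1, 2, 3}
R = powerset(S)

add = lambda x, y: x ^ y    # symmetric difference plays the role of +
mul = lambda x, y: x & y    # intersection plays the role of *

assert all(mul(x, x) == x for x in R)            # idempotence: x*x = x
assert all(add(x, x) == frozenset() for x in R)  # proposition 1: x + x = 0
assert len(R) == 2 ** len(S)                     # cardinality 2^n
```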
Now, the claim is that Boolean algebras and Boolean rings are essentially the same objects. Let me make this more precise: given a Boolean ring $B$, we may construct a corresponding Boolean algebra structure on the underlying set of $B$, uniquely determined by the stipulation that the multiplication of the Boolean ring match the meet operation of the Boolean algebra. Conversely, given a Boolean algebra $B$, we may construct a corresponding Boolean ring structure on $B$, and this construction is inverse to the previous one.

In one direction, suppose $B$ is a Boolean ring. We know from before that a binary operation on a set that is commutative, associative, unital [has a unit or identity] and idempotent — here, the multiplication of $B$ — can be identified with the meet operation of a meet-semilattice structure on $B$, uniquely specified by taking its partial order to be defined by: $x \leq y$ iff $x = xy$. It immediately follows from this definition that the additive identity $0$ satisfies $0 \leq x$ for all $x$ (is the bottom element), and the multiplicative identity $1$ satisfies $x \leq 1$ for all $x$ (is the top element).

Notice also that $x(1 + x) = x + x^2 = x + x = 0$, by idempotence and proposition 1. This leads one to suspect that $1 + x$ will be the complement of $x$ in the Boolean algebra we are trying to construct; we are partly encouraged in this by noting $1 + (1 + x) = x$, i.e., $x$ is equal to its putative double negation.
Proposition 2: $\neg := (x \mapsto 1 + x)$ is order-reversing.

Proof: Looking at the definition of the order, this says that if $x = xy$, then $1 + y = (1 + y)(1 + x)$. This is immediate: $(1 + y)(1 + x) = 1 + x + y + xy = 1 + x + y + x = 1 + y$, using $xy = x$ and $x + x = 0$.

So, $\neg$ is an order-reversing map $B \to B$ (an order-preserving map $B \to B^{op}$) which is a bijection (since it is its own inverse). We conclude that $\neg: B \to B^{op}$ is a poset isomorphism. Since $B$ has meets and $B \cong B^{op}$, $B^{op}$ also has meets (and the isomorphism preserves them). But meets in $B^{op}$ are joins in $B$. Hence $B$ has both meets and joins, i.e., is a lattice. More exactly, we are saying that the function $\neg: B \to B$ takes meets in $B$ to joins in $B$; that is,

$\neg(x \wedge y) = \neg x \vee \neg y,$

or, replacing $x$ by $\neg x$ and $y$ by $\neg y$,

$\neg(\neg x \wedge \neg y) = x \vee y,$

whence $x \vee y = 1 + (1 + x)(1 + y) = x + y + xy$, using proposition 1 above.
Proposition 3: $1 + x$ is the complement of $x$.

Proof: We already saw $x \wedge (1 + x) = x(1 + x) = 0$. Also

$x \vee (1 + x) = x + (1 + x) + x(1 + x) = x + 1 + x + 0 = 1,$

using the formula for join we just computed. This completes the proof.

So the lattice is complemented; the only thing left to check is distributivity. Following the definitions, we have $(x \vee y) \wedge z = (x + y + xy)z = xz + yz + xyz$. On the other hand, $(x \wedge z) \vee (y \wedge z) = xz + yz + (xz)(yz) = xz + yz + xyz^2 = xz + yz + xyz$, using idempotence once again. So the distributive law for the lattice is satisfied, and therefore we get a Boolean algebra from a Boolean ring.
Naturally, we want to invert the process: starting with a Boolean algebra structure on a set $B$, construct a corresponding Boolean ring structure on $B$ whose multiplication is the meet of the Boolean algebra (and also show the two processes are inverse to one another). One has to construct an appropriate addition operation for the ring. The calculations above indicate that the addition should satisfy $x + y = x \vee y$ if $x \wedge y = 0$ (i.e., if $x$ and $y$ are disjoint): this gives a partial definition of addition. Continuing this thought, if we express $x \vee y$ as a disjoint sum of some element $z$ and $x \wedge y$, we then conclude $x \vee y = z + (x \wedge y)$, whence $z = (x \vee y) + (x \wedge y)$ by cancellation. In the case where the Boolean algebra is a power set $P(X)$, this element $z$ is the symmetric difference of $x$ and $y$. This generalizes: if we define the addition by the symmetric difference formula $x + y := (\neg x \wedge y) \vee (x \wedge \neg y)$, then $\neg x \wedge y$ is disjoint from $x \wedge \neg y$, so that

$(\neg x \wedge y) \vee (x \wedge \neg y) = (\neg x \wedge y) + (x \wedge \neg y)$

after a short calculation using the complementation and distributivity axioms. After more work, one shows that $+$ is the addition operation for an abelian group, and that multiplication distributes over addition, so that one gets a Boolean ring.
Exercise: Verify this last assertion.
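The exercise can also be checked by brute force in Python (a sketch of my own, not from the original post; the 3-element base set is an arbitrary small case). Starting purely from the Boolean algebra operations on a power set, the addition defined by the symmetric difference formula satisfies the abelian group axioms, and meet distributes over it:

```python
from itertools import chain, combinations, product

S = {1, 2, 3}
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

neg = lambda x: frozenset(S) - x    # complement in the Boolean algebra
meet = lambda x, y: x & y
join = lambda x, y: x | y

# Ring addition defined purely from the Boolean algebra operations:
plus = lambda x, y: join(meet(neg(x), y), meet(x, neg(y)))

zero = frozenset()
for x, y, z in product(subsets, repeat=3):
    assert plus(x, y) == x ^ y                          # it is symmetric difference
    assert plus(x, y) == plus(y, x)                     # commutativity
    assert plus(plus(x, y), z) == plus(x, plus(y, z))   # associativity
    assert plus(x, zero) == x                           # additive identity
    assert plus(x, x) == zero                           # x + x = 0
    assert meet(x, plus(y, z)) == plus(meet(x, y), meet(x, z))  # distributivity
```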
However, the assertion of equivalence between Boolean rings and Boolean algebras has a little more to it: recall for example our earlier result that sup-lattices “are” inf-lattices, or that frames “are” complete Heyting algebras. Those results came with caveats: that while e.g. sup-lattices are extensionally the same as inf-lattices, their morphisms (i.e., structure-preserving maps) are different. That is to say, the category of sup-lattices cannot be considered “the same as” or equivalent to the category of inf-lattices, even if they have the same objects.
Whereas here, in asserting Boolean algebras “are” Boolean rings, we are making the stronger statement that the category of Boolean rings is the same as (is isomorphic to) the category of Boolean algebras. In one direction, given a ring homomorphism $f: B \to C$ between Boolean rings, it is clear that $f$ preserves the meet $xy$ and join $x + y + xy$ of any two elements $x, y$ [since it preserves multiplication and addition] and of course also the complement $1 + x$ of any $x$; therefore $f$ is a map of the corresponding Boolean algebras. Conversely, a map $f: B \to C$ of Boolean algebras preserves meet, join, and complementation (or negation), and therefore preserves the product $x \wedge y$ and sum $(\neg x \wedge y) \vee (x \wedge \neg y)$ in the corresponding Boolean ring. In short, the operations of Boolean rings and Boolean algebras are equationally interdefinable (in the official parlance, they are simply different ways of presenting the same underlying Lawvere algebraic theory). In summary,

Theorem 1: The above processes define functors $\mathrm{BoolRing} \to \mathrm{BoolAlg}$, $\mathrm{BoolAlg} \to \mathrm{BoolRing}$, which are mutually inverse, between the category of Boolean rings and the category of Boolean algebras.
- Remark: I am taking some liberties here in assuming that the reader is already familiar with, or is willing to read up on, the basic notion of category, and of functor (= structure-preserving map between categories, preserving identity morphisms and composites of morphisms). I will be introducing other categorical concepts piece by piece as the need arises, in a sort of apprentice-like fashion.
Let us put this theorem to work. We have already observed that a finite Boolean ring (or Boolean algebra) $B$ has cardinality $2^n$ — the same as the cardinality of the power set Boolean algebra $P(S)$ if $S$ has cardinality $n$. The suspicion arises that all finite Boolean algebras arise in just this way: as power sets of finite sets. That is indeed a theorem: every finite Boolean algebra $B$ is naturally isomorphic to one of the form $P(S)$; one of our tasks is to describe $S$ in terms of $B$ in a “natural” (or rather, functorial) way. From the Boolean ring perspective, $S$ is a basis of the underlying $\mathbb{Z}_2$-vector space of $P(S)$; to pin it down exactly, we use the full ring structure.

$S$ is naturally a basis of $P(S)$; more precisely, under the embedding $S \to P(S)$ defined by $s \mapsto \{s\}$, every subset $A \subseteq S$ is uniquely a disjoint sum of finitely many elements of $S$: $A = \sum_{s \in S} a_s \{s\}$ where $a_s \in \mathbb{Z}_2$: naturally, $a_s = 1$ iff $s \in A$. For each $A$, we can treat the coefficient $a_s$ as a function of $s$ valued in $\mathbb{Z}_2$. Let $\hom(S, \mathbb{Z}_2)$ denote the set of functions $S \to \mathbb{Z}_2$; this becomes a Boolean ring under the obvious pointwise definitions $(f + g)(s) := f(s) + g(s)$ and $(fg)(s) := f(s)g(s)$. The function $P(S) \to \hom(S, \mathbb{Z}_2)$ which takes $A$ to the coefficient function $(s \mapsto a_s)$ is a Boolean ring map which is one-to-one and onto, i.e., is a Boolean ring isomorphism. (Exercise: verify this fact.)
Or, we can turn this around: for each $s \in S$, we get a Boolean ring map $P(S) \to \mathbb{Z}_2$ which takes $A$ to $a_s$. Let $\mathrm{Bool}(P(S), \mathbb{Z}_2)$ denote the set of Boolean ring maps $P(S) \to \mathbb{Z}_2$.

Proposition 4: For a finite set $S$, the function $S \to \mathrm{Bool}(P(S), \mathbb{Z}_2)$ that sends $s$ to $(A \mapsto a_s)$ is a bijection (in other words, an isomorphism).
Proof: We must show that for every Boolean ring map $\phi: P(S) \to \mathbb{Z}_2$, there exists a unique $s \in S$ such that $\phi = (A \mapsto a_s)$, i.e., such that $\phi(A) = 1$ iff $s \in A$. So let $\phi$ be given, and let $A_\phi$ be the intersection (or Boolean ring product) of all $A$ for which $\phi(A) = 1$. Then

$\phi(A_\phi) = \prod_{A: \phi(A) = 1} \phi(A) = 1.$

I claim that $A_\phi$ must be a singleton $\{s\}$ for some (evidently unique) $s$. For $1 = \phi(A_\phi) = \sum_{s \in A_\phi} \phi(\{s\})$, forcing $\phi(\{s\}) = 1$ for some $s \in A_\phi$. But then $A_\phi \subseteq \{s\}$ according to how $A_\phi$ was defined, and so $A_\phi = \{s\}$. To finish, I now claim $\phi(A) = a_s$ for all $A$. But $\phi(A) = \phi(A)\phi(\{s\}) = \phi(A \cap \{s\})$, and $\phi(A \cap \{s\}) = 1$ iff $A \cap \{s\} = \{s\}$ (since $\phi(\emptyset) = 0$) iff $s \in A$. This completes the proof.
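Proposition 4 can be verified computationally for a small case (my own check, not part of the original argument; the 3-element set is arbitrary): enumerate every function $P(S) \to \mathbb{Z}_2$, keep the Boolean ring maps, and observe that each one is evaluation at a unique point of $S$.

```python
from itertools import chain, combinations, product

S = [0, 1, 2]
PS = [frozenset(c) for c in chain.from_iterable(
    combinations(S, r) for r in range(len(S) + 1))]

def is_ring_map(phi):
    """Does phi : P(S) -> Z/2 preserve 1, +, and * ?"""
    return (phi[frozenset(S)] == 1 and
            all(phi[A ^ B] == (phi[A] + phi[B]) % 2 and
                phi[A & B] == phi[A] * phi[B]
                for A in PS for B in PS))

ring_maps = []
for vals in product([0, 1], repeat=len(PS)):
    phi = dict(zip(PS, vals))
    if is_ring_map(phi):
        ring_maps.append(phi)

# Exactly |S| ring maps, each of the form "evaluate at a point s":
assert len(ring_maps) == len(S)
for phi in ring_maps:
    points = [s for s in S if phi[frozenset({s})] == 1]
    assert len(points) == 1
    s = points[0]
    assert all(phi[A] == (1 if s in A else 0) for A in PS)
```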
This proposition is a vital clue, for if $B$ is to be isomorphic to a power set $P(S)$ (equivalently, to $\hom(S, \mathbb{Z}_2)$), the proposition says that the $S$ in question can be retrieved reciprocally (up to isomorphism) as $S \cong \mathrm{Bool}(B, \mathbb{Z}_2)$.

With this in mind, our first claim is that there is a canonical Boolean ring homomorphism

$\mathrm{ev}: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$

which sends $b \in B$ to the function $\mathrm{ev}_b$ which maps $\phi$ to $\phi(b)$ (i.e., evaluates $\phi$ at $b$). That this is a Boolean ring map is almost a tautology; for instance, that it preserves addition amounts to the claim that $\mathrm{ev}_{b+c}(\phi) = \mathrm{ev}_b(\phi) + \mathrm{ev}_c(\phi)$ for all $\phi$. But by definition, this is the equation $\phi(b + c) = \phi(b) + \phi(c)$, which holds since $\phi$ is a Boolean ring map. Preservation of multiplication is proved in exactly the same manner.
Theorem 2: If $B$ is a finite Boolean ring, then the Boolean ring map

$\mathrm{ev}: B \to \hom(\mathrm{Bool}(B, \mathbb{Z}_2), \mathbb{Z}_2)$

is an isomorphism. (So, there is a natural isomorphism $B \cong P(\mathrm{Bool}(B, \mathbb{Z}_2))$.)

Proof: First we prove injectivity: suppose $b$ is nonzero. Then $\neg b \neq 1$, so the ideal $(\neg b)$ is a proper ideal. Let $M$ be a maximal proper ideal containing $\neg b$, so that $B/M$ is both a field and a Boolean ring. Then $B/M \cong \mathbb{Z}_2$ (otherwise any element $x \in B/M$ not equal to $0, 1$ would be a zero divisor on account of $x(1 + x) = 0$). The evident composite

$B \to B/M \cong \mathbb{Z}_2$

yields a homomorphism $\phi$ for which $\phi(\neg b) = 0$, so $\phi(b) = 1$. Therefore $\mathrm{ev}_b$ is nonzero, as desired.
Now we prove surjectivity. A function $g: \mathrm{Bool}(B, \mathbb{Z}_2) \to \mathbb{Z}_2$ is determined by the set of elements $\phi$ mapping to $1$ under $g$, and each such homomorphism $\phi: B \to \mathbb{Z}_2$, being surjective, is uniquely determined by its kernel, which is a maximal ideal. Let $J$ be the intersection of these maximal ideals; it is an ideal. Notice that an ideal is closed under joins in the Boolean algebra, since if $x, y$ belong to $J$, then so does $x \vee y = x + y + xy$. Let $j$ be the join of the finitely many elements of $J$; notice $J = (j)$ (actually, this proves that every ideal of a finite Boolean ring is principal). In fact, writing $j_\phi$ for the unique element such that $\ker(\phi) = (j_\phi)$, we have

$j = \bigwedge_{\phi: g(\phi) = 1} j_\phi$

(certainly $j \leq j_\phi$ for all such $\phi$, since $j \in J \subseteq \ker(\phi)$, but also $\bigwedge_\phi j_\phi$ belongs to the intersection of these kernels and hence to $J$, whence $\bigwedge_\phi j_\phi \leq j$).

Now let $b = \neg j$; I claim that $\mathrm{ev}_b = g$, proving surjectivity. We need to show $\phi(b) = g(\phi)$ for all $\phi$. In one direction, we already know from the above that if $g(\phi) = 1$, then $j$ belongs to the kernel of $\phi$, so $\phi(j) = 0$, whence $\phi(b) = \phi(\neg j) = 1$.

For the other direction, suppose $\phi(b) = 1$, or that $\phi(j) = 0$. Now the kernel of $\phi$ is principal, say $(k)$ for some $k$. We have $j \leq k$, so

$k = k \vee j = k \vee \bigwedge_{\psi: g(\psi) = 1} j_\psi = \bigwedge_{\psi: g(\psi) = 1} (k \vee j_\psi),$

from which it follows that $\phi(k \vee j_\psi) = 0$, i.e., $k \vee j_\psi \leq k$ and hence $j_\psi \leq k$, for some $\psi$ with $g(\psi) = 1$. But then $(k) = \ker(\phi)$ is a proper ideal containing the maximal ideal $(j_\psi) = \ker(\psi)$; by maximality it follows that $\ker(\psi) = \ker(\phi)$. Since $\phi$ and $\psi$ have the same kernels, they are equal. And therefore $g(\phi) = g(\psi) = 1$. We have now proven both directions of the statement ($\phi(b) = 1$ if and only if $g(\phi) = 1$), and the proof is now complete.
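Theorem 2 can likewise be spot-checked for a small case (my own sketch, not from the original post; the ring $(\mathbb{Z}_2)^3$ with componentwise operations is an arbitrary small Boolean ring): enumerate all homomorphisms to $\mathbb{Z}_2$, then check that evaluation identifies the ring with the full power set of the set of homomorphisms.

```python
from itertools import product

n = 3
B = list(product([0, 1], repeat=n))  # the Boolean ring (Z/2)^n, componentwise ops

add = lambda a, b: tuple((u + v) % 2 for u, v in zip(a, b))
mul = lambda a, b: tuple(u * v for u, v in zip(a, b))
one = (1,) * n

# Enumerate all ring homomorphisms B -> Z/2.
homs = []
for vals in product([0, 1], repeat=len(B)):
    phi = dict(zip(B, vals))
    if phi[one] == 1 and all(
            phi[add(a, b)] == (phi[a] + phi[b]) % 2 and
            phi[mul(a, b)] == phi[a] * phi[b]
            for a in B for b in B):
        homs.append(phi)

# The evaluation map b |-> (phi(b) for each hom phi) hits all 2^n
# functions from the set of homs to Z/2, so it is a bijection.
images = {tuple(phi[b] for phi in homs) for b in B}
assert len(homs) == n
assert len(images) == len(B) == 2 ** n
```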
- Remark: In proving both injectivity and surjectivity, we had in each case to pass back and forth between certain elements and their negations, in order to take advantage of some ring theory (kernels, principal ideals, etc.). In the usual treatments of Boolean algebra theory, one circumvents this passage back-and-forth by introducing the notion of a filter of a Boolean algebra, dual to the notion of ideal. Thus, whereas an ideal is a subset $I$ closed under joins and such that $x \wedge y \in I$ for $x \in I$, a filter is (by definition) a subset $F$ closed under meets and such that $x \vee y \in F$ whenever $x \in F$ (this second condition is equivalent to upward-closure: $x \in F$ and $x \leq y$ implies $y \in F$). There are also notions of principal filter and maximal filter, or ultrafilter as it is usually called. Notice that if $I$ is an ideal, then the set of negations $\{\neg x : x \in I\}$ is a filter, by the De Morgan laws, and vice-versa. So via negation, there is a bijective correspondence between ideals and filters, and between maximal ideals and ultrafilters. Also, if $f: B \to C$ is a Boolean algebra map, then the inverse image $f^{-1}(1)$ is a filter, just as the inverse image $f^{-1}(0)$ is an ideal. Anyway, the point is that had we already had the language of filters, the proof of theorem 2 could have been written entirely in that language by straightforward dualization (and would have saved us a little time by not going back and forth with negation). In the sequel we will feel free to use the language of filters, when desired.
For those who know some category theory: what is really going on here is that we have a power set functor

$P: \mathrm{FinSet}^{op} \to \mathrm{FinBool}$

(taking a function $f: S \to T$ between finite sets to the inverse image map $f^{-1}: P(T) \to P(S)$, which is a map between finite Boolean algebras) and a functor

$\mathrm{Bool}(-, \mathbb{Z}_2): \mathrm{FinBool} \to \mathrm{FinSet}^{op},$

which we could replace by its opposite $\mathrm{Bool}(-, \mathbb{Z}_2)^{op}: \mathrm{FinBool}^{op} \to \mathrm{FinSet}$, and the canonical maps of proposition 4 and theorem 2,

$S \to \mathrm{Bool}(P(S), \mathbb{Z}_2), \qquad B \to P(\mathrm{Bool}(B, \mathbb{Z}_2)),$

are components (at $S$ and $B$) of the counit and unit for an adjunction $\mathrm{Bool}(-, \mathbb{Z}_2) \dashv P$. The actual statements of proposition 4 and theorem 2 imply that the counit and unit are natural isomorphisms, and therefore we have defined an adjoint equivalence between the categories $\mathrm{FinSet}^{op}$ and $\mathrm{FinBool}$. This is the proper categorical statement of Stone duality in the finite case, or what we are calling “baby Stone duality”. I will make some time soon to explain what these terms mean.
[Update: Look for another (slicker) solution I found after coming up with the first one.]
My friend, John, asked me today if I had a solution to the following (well-known) problem which may be found, among other sources, in Chapter Zero (!) of the very famous book, Mathematical Circles (Russian Experience).
Three tablespoons of milk from a glass of milk are poured into a glass of tea, and the liquid is thoroughly mixed. Then three tablespoons of this mixture are poured back into the glass of milk. Which is greater now: the percentage of milk in the tea or the percentage of tea in the milk?
Note that there is nothing special about transferring three tablespoons of milk and/or tea from one glass to another – the problem doesn’t really change if we transfer one tablespoon of milk/tea instead, and that there is nothing special about transferring “volumes” – we could instead keep a count of, say, the number of molecules transferred. We may, therefore, pose ourselves the following analogous “discrete” problem whose solution provides more “insight” into what’s really going on.
Jar W contains $n$ white objects (and no other objects) and jar B contains $n$ black objects (and no other objects.) We transfer $k$ objects from jar W to jar B. We then thoroughly mix – in fact, we don’t have to – the contents of jar B, following which we transfer $k$ objects, this time, from jar B to jar W. Which is greater now: the percentage of black objects in jar W or the percentage of white objects in jar B?
Solution 1: Let us keep track of the number of black and white objects in both the jars before and after the transfers of objects from one jar to another. So, initially, in jar W,

# of white objects = $n$, and # of black objects = $0$.

Also, in jar B,

# of white objects = $0$, and # of black objects = $n$.

Now, we transfer $k$ objects from jar W to jar B. So, in jar W,

# of white objects = $n - k$, and # of black objects = $0$.

Also, in jar B,

# of white objects = $k$, and # of black objects = $n$.

Finally, we transfer $k$ objects from jar B to jar W. Let the number of white objects out of those $k$ objects be $w$. Then, the number of black objects transferred equals $k - w$. Therefore, now, in jar W,

# of white objects = $(n - k) + w$, and # of black objects = $k - w$.

Also, in jar B,

# of white objects = $k - w$, and # of black objects = $n - (k - w)$.

From here, it is easy to see that the percentage of black objects in jar W, namely $\frac{k - w}{n}$, is the same as the percentage of white objects in jar B! And, we are done.
Solution 2: (I think this is a slicker one, and I found it after pondering a little over the first solution I wrote above!) This one uses the idea of invariants, and there are, in fact, two of ‘em in this problem! Note that at any given time,

# of white objects = # of black objects = $n$.

The above is the first invariant.

Also, note that after we transfer $k$ objects from jar W to jar B and then $k$ objects from jar B to jar W, the number of objects in each jar is also $n$. This is the second invariant. And, now the problem is almost solved!

Suppose, after we do the transfers of $k$ objects from jar W to jar B and then from jar B to jar W, the # of white objects in jar W is $x$. Then it is easy to see that the # of black objects in jar W is $n - x$ (using the second invariant mentioned above.) Similarly, using the first invariant, the # of white objects in jar B = $n - x$. Therefore, using the second invariant again, the # of black objects in jar B = $n - (n - x) = x$. And, from this the conclusion immediately follows!
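Both solutions can be confirmed by simulation (a sketch of my own; the function name and parameter values are mine, and the random selection stands in for the "thorough mixing"):

```python
from fractions import Fraction
import random

def transfer_experiment(n=100, k=7, seed=0):
    """n white objects in jar W, n black objects in jar B; move k
    objects W -> B, mix, then move k randomly chosen objects B -> W."""
    rng = random.Random(seed)
    W = ['w'] * n
    B = ['b'] * n
    # Move k white objects from W to B.
    B += [W.pop() for _ in range(k)]
    # Mix thoroughly, then move k objects back.
    rng.shuffle(B)
    W += [B.pop() for _ in range(k)]
    black_in_W = Fraction(W.count('b'), len(W))
    white_in_B = Fraction(B.count('w'), len(B))
    return black_in_W, white_in_B

# The two percentages coincide no matter how the mixing turns out.
for seed in range(10):
    b_in_W, w_in_B = transfer_experiment(seed=seed)
    assert b_in_W == w_in_B
```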
Last time in this series on Stone duality, we introduced the concept of lattice and various cousins (e.g., inf-lattice, sup-lattice). We said a lattice is a poset with finite meets and joins, and that inf-lattices and sup-lattices have arbitrary meets and joins (meaning that every subset, not just every finite one, has an inf and sup). Examples include the poset $P(X)$ of all subsets of a set $X$, and the poset $\mathrm{Sub}(V)$ of all subspaces of a vector space $V$.

I take it that most readers are already familiar with many of the properties of the poset $P(X)$; there is for example the distributive law $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$, and De Morgan laws, and so on — we’ll be exploring more of that in depth soon. The poset $\mathrm{Sub}(V)$, as a lattice, is a much different animal: if we think of meets and joins as modeling the logical operations “and” and “or”, then the logic internal to $\mathrm{Sub}(V)$ is a weird one — it’s actually much closer to what is sometimes called “quantum logic”, as developed by von Neumann, Mackey, and many others. Our primary interest in this series will be in the direction of more familiar forms of logic, classical logic if you will (where “classical” here is meant more in a physicist’s sense than a logician’s).

To get a sense of the weirdness of $\mathrm{Sub}(V)$, take for example a 2-dimensional vector space $V$. The bottom element is the zero space $\{0\}$, the top element is $V$, and the rest of the elements of $\mathrm{Sub}(V)$ are 1-dimensional: lines through the origin. For 1-dimensional spaces $x, y$, there is no relation $x \leq y$ unless $x$ and $y$ coincide. So we can picture the lattice as having three levels according to dimension, with lines drawn to indicate the partial order:
        V = 1
       /  |  \
      /   |   \
     x    y    z
      \   |   /
       \  |  /
         0
Observe that for distinct elements $x, y, z$ in the middle level, we have for example $x \wedge y = 0$ ($0$ is the largest element contained in both $x$ and $y$), and also for example $y \vee z = 1$ ($1$ is the smallest element containing $y$ and $z$). It follows that $x \wedge (y \vee z) = x \wedge 1 = x$, whereas $(x \wedge y) \vee (x \wedge z) = 0 \vee 0 = 0$. The distributive law fails in $\mathrm{Sub}(V)$!
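For a concrete instance (my own check, not from the original post), take $V = (\mathbb{Z}_2)^2$: its subspace lattice is exactly the five-element lattice pictured above, with three lines $x, y, z$, and the failure of distributivity can be computed directly:

```python
# Subspaces of V = (Z/2)^2, each represented as a frozenset of vectors.
zero = frozenset({(0, 0)})
x = frozenset({(0, 0), (1, 0)})
y = frozenset({(0, 0), (0, 1)})
z = frozenset({(0, 0), (1, 1)})
V = frozenset({(0, 0), (1, 0), (0, 1), (1, 1)})

meet = lambda a, b: a & b   # meet of subspaces = intersection

def join(a, b):
    """Smallest subspace containing a and b: all sums u + v (mod 2)."""
    return frozenset(((u0 + v0) % 2, (u1 + v1) % 2)
                     for (u0, u1) in a for (v0, v1) in b)

lhs = meet(x, join(y, z))              # x ∧ (y ∨ z)
rhs = join(meet(x, y), meet(x, z))     # (x ∧ y) ∨ (x ∧ z)
assert join(y, z) == V                 # y ∨ z = 1
assert lhs == x                        # x ∧ 1 = x
assert rhs == zero                     # 0 ∨ 0 = 0
assert lhs != rhs                      # distributivity fails!
```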
Definition: A lattice is distributive if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$ for all $x, y, z$. That is to say, a lattice $X$ is distributive if the map $x \wedge -: X \to X$, taking an element $y$ to $x \wedge y$, is a morphism of join-semilattices.

- Exercise: Show that in a meet-semilattice, $x \wedge -$ is a poset map. Is it also a morphism of meet-semilattices? If $X$ has a bottom element, show that the map $x \wedge -$ preserves it.

- Exercise: Show that in any lattice, we at least have $(x \wedge y) \vee (x \wedge z) \leq x \wedge (y \vee z)$ for all elements $x, y, z$.
Here is an interesting theorem, which illustrates some of the properties of lattices we’ve developed so far:
Theorem: The notion of distributive lattice is self-dual.
Proof: The notion of lattice is self-dual, so all we have to do is show that the dual of the distributivity axiom, $x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$, follows from the distributive lattice axioms.

Expand the right side to $((x \vee y) \wedge x) \vee ((x \vee y) \wedge z)$, by distributivity. This reduces to $x \vee ((x \vee y) \wedge z)$, by an absorption law. Expand this again, by distributivity, to $x \vee ((x \wedge z) \vee (y \wedge z))$. This reduces to $x \vee (y \wedge z)$, by the other absorption law. This completes the proof.
Distributive lattices are important, but perhaps even more important in mathematics are lattices where we have not just finitary, but infinitary distributivity as well:
Definition: A frame is a sup-lattice for which $x \wedge -$ is a morphism of sup-lattices, for every $x$. In other words, for every subset $\{y_i\}_{i \in I}$, we have $x \wedge \sup_i y_i = \sup_i (x \wedge y_i)$, or, as is often written,

$x \wedge \bigvee_{i \in I} y_i = \bigvee_{i \in I} (x \wedge y_i).$

Example: A power set $P(X)$, as always partially ordered by inclusion, is a frame. In this case, it means that for any subset $A$ and any collection of subsets $\{B_i\}_{i \in I}$, we have

$A \cap \bigcup_{i \in I} B_i = \bigcup_{i \in I} (A \cap B_i).$

This is a well-known fact from naive set theory, but soon we will see an alternative proof, thematically closer to the point of view of these notes.
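Here is a randomized spot-check of this law in a small power set (my own illustration; the base set and the sample sizes are arbitrary):

```python
from itertools import chain, combinations
import random

S = {1, 2, 3, 4}
subsets = [set(c) for c in chain.from_iterable(
    combinations(sorted(S), r) for r in range(len(S) + 1))]

rng = random.Random(1)
for _ in range(100):
    A = rng.choice(subsets)
    family = rng.sample(subsets, k=5)   # a random collection {B_i}
    lhs = A & set().union(*family)                 # A ∩ (∪ B_i)
    rhs = set().union(*(A & B for B in family))    # ∪ (A ∩ B_i)
    assert lhs == rhs
```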
Example: If $X$ is a set, a topology on $X$ is a subset $T \subseteq P(X)$ of the power set, partially ordered by inclusion as $P(X)$ is, which is closed under finite meets and arbitrary sups. This means the empty sup or bottom element $\emptyset$ and the empty meet or top element $X$ of $P(X)$ are elements of $T$, and also:

- If $U, V$ are elements of $T$, then so is $U \cap V$.

- If $\{U_i\}_{i \in I}$ is a collection of elements of $T$, then $\bigcup_{i \in I} U_i$ is an element of $T$.
A topological space is a set $X$ equipped with a topology $T$; the elements of the topology are called open subsets of the space. Topologies provide a primary source of examples of frames; because the sups and meets in a topology are constructed the same way as in $P(X)$ (unions and finite intersections), it is clear that the requisite infinite distributivity law holds in a topology.

The concept of topology was originally rooted in analysis, where it arose by contemplating very generally what one means by a “continuous function”. I imagine many readers who come to a blog titled “Topological Musings” will already have had a course in general topology! but just to be on the safe side I’ll give now one example of a topological space, with a promise of more to come later. Let $X = \mathbb{R}^n$ be the set of $n$-tuples of real numbers. First, define the open ball in $\mathbb{R}^n$ centered at a point $x \in \mathbb{R}^n$ and of radius $r > 0$ to be the set $B(x, r) = \{y \in \mathbb{R}^n : \|x - y\| < r\}$. Then, define a subset $U \subseteq \mathbb{R}^n$ to be open if it can be expressed as the union of a collection, finite or infinite, of (possibly overlapping) open balls; the topology on $\mathbb{R}^n$ is by definition the collection of open sets.
It’s clear from the definition that the collection of open sets is indeed closed under arbitrary unions. To see it is closed under finite intersections, the crucial lemma needed is that the intersection of two overlapping open balls is itself a union of smaller open balls. A precise proof makes essential use of the triangle inequality. (Exercise?)
Topology is a huge field in its own right; much of our interest here will be in its interplay with logic. To that end, I want to bring in, in addition to the connectives “and” and “or” we’ve discussed so far, the implication connective in logic. Most readers probably know that in ordinary logic, the formula $a \Rightarrow b$ (“$a$ implies $b$”) is equivalent to “either not $a$ or $b$” — symbolically, we could define $a \Rightarrow b$ as $\neg a \vee b$. That much is true — in ordinary Boolean logic. But instead of committing ourselves to this reductionistic habit of defining implication in this way, or otherwise relying on Boolean algebra as a crutch, I want to take a fresh look at material implication and what we really ask of it.

The main property we ask of implication is modus ponens: given $a$ and $a \Rightarrow b$, we may infer $b$. In symbols, writing the inference or entailment relation as $\leq$, this is expressed as $a \wedge (a \Rightarrow b) \leq b$. And, we ask that implication be the weakest possible such assumption, i.e., that material implication $a \Rightarrow b$ be the weakest $x$ whose presence in conjunction with $a$ entails $b$. In other words, for given $a$ and $b$, we now define implication $a \Rightarrow b$ by the property

$x \wedge a \leq b$ if and only if $x \leq a \Rightarrow b.$

As a very easy exercise, show by Yoneda that an implication $a \Rightarrow b$ is uniquely determined when it exists. As the next theorem shows, not all lattices admit an implication operator; in order to have one, it is necessary that distributivity holds:
- (1) If $X$ is a meet-semilattice which admits an implication operator $\Rightarrow$, then for every element $a$, the operator $a \wedge -$ preserves any sups which happen to exist in $X$.

- (2) If $X$ is a frame, then $X$ admits an implication operator $\Rightarrow$.
Proof: (1) Suppose $\{y_i\}_{i \in I}$ has a sup in $X$, here denoted $\bigvee_i y_i$. We have

$a \wedge \bigvee_i y_i \leq x$

if and only if

$\bigvee_i y_i \leq a \Rightarrow x$

if and only if

$y_i \leq a \Rightarrow x$

for all $i$, if and only if

$a \wedge y_i \leq x$

for all $i$, if and only if

$\bigvee_i (a \wedge y_i) \leq x.$

Since this is true for all $x$, the (dual of the) Yoneda principle tells us that $a \wedge \bigvee_i y_i = \bigvee_i (a \wedge y_i)$, as desired. (We don’t need to add the hypothesis that the sup on the right side exists, for the first four lines after “We have” show that $a \wedge \bigvee_i y_i$ satisfies the defining property of that sup.)
(2) Suppose $a, b$ are elements of a frame $X$. Define $a \Rightarrow b$ to be $\bigvee \{x \in X : x \wedge a \leq b\}$. By definition, if $x \wedge a \leq b$, then $x \leq a \Rightarrow b$. Conversely, if $x \leq a \Rightarrow b$, then

$x \wedge a \leq (a \Rightarrow b) \wedge a = \left( \bigvee_{y : \, y \wedge a \leq b} y \right) \wedge a = \bigvee_{y : \, y \wedge a \leq b} (y \wedge a),$

where the equality holds because of the infinitary distributive law in a frame, and this last sup is clearly bounded above by $b$ (according to the defining property of sups). Hence $x \wedge a \leq b$, as desired.
Incidentally, part (1) of this theorem gives an alternative proof of the infinitary distributive law for Boolean algebras such as $P(X)$, so long as we trust that $\neg a \vee b$ really does what we ask of implication. We’ll come to that point again later.

Part (2) has some interesting consequences vis à vis topologies: we know that topologies provide examples of frames; therefore by part (2) they admit implication operators. It is instructive to work out exactly what these implication operators look like. So, let $U, V$ be open sets in a topology. According to our prescription, we define $U \Rightarrow V$ as the sup (the union) of all open sets $W$ with the property that $W \cap U \subseteq V$. We can think of this inclusion as living in the power set $P(X)$. Then, assuming our formula $\neg U \cup V$ for implication in the Boolean algebra $P(X)$ (where $\neg U$ denotes the complement of $U$), we would have $W \cap U \subseteq V$ iff $W \subseteq \neg U \cup V$. And thus, our implication $U \Rightarrow V$ in the topology is the union of all open sets $W$ contained in the (usually non-open) set $\neg U \cup V$. That is to say, $U \Rightarrow V$ is the largest open contained in $\neg U \cup V$, otherwise known as the interior of $\neg U \cup V$. Hence our formula:

$U \Rightarrow V = \mathrm{int}(\neg U \cup V).$
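Here is a small sanity check of this formula (my own illustration; the four-open topology on a 3-point set is an arbitrary choice): the union of all opens $W$ with $W \cap U \subseteq V$ agrees with the interior of $\neg U \cup V$, and the adjunction property holds.

```python
X = frozenset({1, 2, 3})
# A topology on X (closed under unions and finite intersections):
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def implies(U, V):
    """Heyting implication: union of all opens W with W ∩ U ⊆ V."""
    return frozenset().union(*(W for W in opens if W & U <= V))

def interior(A):
    """Largest open set contained in A."""
    return frozenset().union(*(W for W in opens if W <= A))

for U in opens:
    for V in opens:
        imp = implies(U, V)
        # The formula: U ⇒ V is the interior of (complement of U) ∪ V.
        assert imp == interior((X - U) | V)
        # The adjunction: W ∩ U ⊆ V iff W ⊆ (U ⇒ V), for all opens W.
        assert all((W & U <= V) == (W <= imp) for W in opens)
```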
Definition: A Heyting algebra is a lattice $H$ which admits an implication $a \Rightarrow b$ for any two elements $a, b \in H$. A complete Heyting algebra is a complete lattice which admits an implication for any two elements.
Again, our theorem above says that frames are (extensionally) the same thing as complete Heyting algebras. But, as in the case of inf-lattices and sup-lattices, we make intensional distinctions when we consider the appropriate notions of morphism for these concepts. In particular, a morphism of frames is a poset map which preserves finite meets and arbitrary sups. A morphism of Heyting algebras preserves all structure in sight (i.e., all implied in the definition of Heyting algebra — meets, joins, and implication). A morphism of complete Heyting algebras also preserves all structure in sight (sups, infs, and implication).
Heyting algebras are usually not Boolean algebras. For example, it is rare that a topology is a Boolean lattice. We’ll be speaking more about that soon, but for now I’ll remark that Heyting algebra is the algebra which underlies intuitionistic propositional calculus.
Exercise: Show that $a \wedge (a \Rightarrow b) = a \wedge b$ in a Heyting algebra.
Exercise: (For those who know some general topology.) In a Heyting algebra, we define the negation $\neg a$ to be $a \Rightarrow 0$. For the Heyting algebra given by a topology, what can you say about $\neg U$ when $U$ is open and dense?
I have been wanting to respond to some of the comments I’ve received over the past week but haven’t had enough time to do so. I will certainly respond some time soon.
For now, I just wish to point out to folks who use Firefox 2 that Firefox 3 Beta 4 has been released. Well, it is only meant for developers and testers for now, but having installed and used it over the past several days, I can say it’s a much better version of the Firefox browser. The biggest problem (I’ve had) with the current version of Firefox is it uses an enormous amount of memory, mostly due to memory leaks. Keep it running for several days – I hardly log off – and the browser takes up as much as 700-800Mb of memory, sometimes even consuming a gigabyte, which is plain insane! Due to this, I had seriously considered switching to Flock or IE altogether. But, now I am very satisfied with the new beta version. It mostly uses around 150Mb of memory while never exceeding 250Mb, which really shows that the Firefox team has been working hard on the new version.
Go ahead and download/install the new beta version and test it yourself. I haven’t had any problems thus far. The only downside of using the beta version is your add-ons will not work, but that is hardly an issue, at least to me, for now.
This post is non-mathematical in nature, and yet, I felt I should write about a very important essay written by Kant, “An Answer to the Question: What Is Enlightenment?”, for the rational thought expressed in the essay is very close to my heart!

He answers the question posed in the title in the first paragraph of his essay:
Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another. Sapere Aude! [dare to know] “Have courage to use your own understanding!”–that is the motto of enlightenment.
And, the part that I really like is contained in the second paragraph, and it reads,
If I have a book to serve as my understanding, a pastor to serve as my conscience, a physician to determine my diet for me, and so on, I need not exert myself at all. I need not think, if only I can pay: others will readily undertake the irksome work for me. The guardians who have so benevolently taken over the supervision of men have carefully seen to it that the far greatest part of them (including the entire fair sex) regard taking the step to maturity as very dangerous, not to mention difficult. Having first made their domestic livestock dumb, and having carefully made sure that these docile creatures will not take a single step without the go-cart to which they are harnessed, these guardians then show them the danger that threatens them, should they attempt to walk alone.
When the mathematician proposed to his girlfriend, why did she turn him down?
Because he offered her a ring but she wanted a field!
Less than a couple of months ago, we heard of the (untimely) death of Bobby Fischer, arguably the greatest chess player who ever lived. For a lot of people, in his later years he was a raving arrogant “lunatic.” But few people knew/know about his human side, which was brought out in a moving eulogy on Fischer, by Dick Cavett, titled Was It Only a Game?, written for The New York Times. The accompanying video in that article shows how “normal” Fischer was, just like you and me.
Here is a wonderful video of Fischer as a 15-yr old kid appearing in a game show I’ve Got a Secret.
The following is a casual interview in which Fischer smiles and laughs as never seen before.
And here is a short documentary on Fischer’s world championship match with Boris Spassky in 1972 and how he beat the gargantuan Soviet chess machine.
There is immense joy and thrill in discovering that one of the world’s greatest mathematicians (and probably many more like him) got interested in mathematics, just as I did as a kid, after reading one of Yakov I. Perelman‘s popular science books. Physics for Entertainment is the book and Grisha Perelman is the mathematician I am referring to!
Yakov I. Perelman’s books on physics, mathematics and astronomy were written in a style that brought out many of the aspects of the aforesaid subjects in the most enjoyable way. He breathed life into every page of his books and made mathematics and physics accessible to any kid in a way that brought sheer joy to the soul! At the same time, his writings provided a glimpse of the amazing way that physics helps us understand and study the nature around us. Physics Can Be Fun, Mathematics Can Be Fun and Astronomy for Entertainment are the titles of some of his other books that were immensely popular among young students who read them. The credit for my long-lasting interest/passion in physics and mathematics goes solely to him.

I think all twelve-year-olds should be given copies of his books as birthday gifts instead of toys! Surprisingly, there is an online copy of Physics For Entertainment here. I am not sure if any copyright laws will be violated if you download the online version, but it seems that the site is a “genuine” one.
Yakov Perelman, sadly, died during the Siege of Leningrad in 1942.