# Digest: Thynges that are thynges, that ARE other thynges


# Introduction

It's March, which means it's time for my annual category theory adjacent shitpost from Warsaw. Just for the occasion, I've re-entered my art hoe era. I'm bundled up; there's a hole-in-the-wall coffee shop near my hotel where I've set up base while I wait to check in. The coffee shop is playing Phoebe Bridgers (some customs are universal): all signs are pointing towards cigarette.

The subject of this post is Eric Hehner's (now-dated) manifesto on Unified Algebra, which I've been sitting on for a few months for this very occasion since I categorize it vaguely in the same vein of humor as the elder-sister post: *Things that are things, but not other things*.

Whereas last year's Poland-post derived the taxonomy of various type classes, this time around I want to expand on Hehner's paper *From Boolean Algebra to Unified Algebra*, which argues for a sort of reduced instruction set for mathematics to help illustrate the similarities between elementary and higher orders of mathematics – a new unified set of operators whose syntax reflects the transcendental laws which hold from boolean logic, to arithmetic, and further.

Writing almost a century after Alfred Whitehead and Bertrand Russell attempted to unify mathematics with their *Principia Mathematica*, Hehner –who is perhaps a student of Marshall McLuhan, judging by the verbose fixation on symbols and representation– offers his own unification of mathematics with an entertaining fixation on notation in which he re-discovers several fundaments and motivations of Category Theory. His paper argues the benefits of unifying notation and concepts from boolean algebra to broader number systems, projecting those relationships onto other domains in a way that is novel and ostensibly pragmatic.

Formal logic has developed a complicated terminology that its students are forced to learn. There are terms, which are said to have values. There are formulas, also known as propositions or sentences, which are said not to have values, but instead to be true or false. Operators $(+, -)$ join terms, while connectives $(∧, ∨)$ join formulas. Some terms are boolean, and they have the value $true$ or $false$, but that's different from being true or false. It is difficult to find a definition of predicate, but it seems that a boolean term like $x=y$ stops being a boolean term and mysteriously starts being a predicate when we admit the possibility of using quantifiers $(∃, ∀)$. Does $x+y$ stop being a number term if we admit the possibility of using summation and product $(Σ, Π)$? There are at least three different equal signs: $=$ for terms, and $\iff$ and $≡$ for formulas and predicates, with one of them carrying an implicit universal quantification. We can even find a peculiar mixture in some textbooks...

... the words $true$ and $false$ confuse the algebra with an application ... We sometimes say that there are 0s and 1s in a computer's memory, or that there are $true$s and $false$s. Of course that's nonsense; there are neither 0s and 1s nor $true$s and $false$s in there; there are low and high voltages. We need symbols that can represent truth values and voltages equally well.

Hehner asserts that "logic is a tool, like a knife," with the threatening analogy stemming from a historical mistrust of algebra. Throughout the paper –whose page count is zero-indexed btw, giga-chad– Hehner leverages records of dialectics from 16th century barterers who squabbled over the correct way to divide poultry:

"When the chekyns of two gentle menne are counted, we may count first the chekyns of the gentylman having fewer chekyns, and after the chekyns of the gentylman having the greater portion. If the nomber of the greater portion be counted first, and then that of the lesser portion, the denomination so determined shall be the same."

As you can imagine, the distance from $2x + 3 = 3x + 2$ to $x = 1$ was likely to be several pages. The reason for all the discussion in between formulas was that algebra was not yet fully trusted. Algebra replaces meaning with symbol manipulation; the loss of meaning is not easy to accept. The author constantly had to reassure those readers who had not yet freed themselves from thinking about objects represented by numbers and variables.

Whether intentional or not, he rhetorically primes the reader for swaths of abstraction by mocking the feudal naive.

## Binary Logic

Building on the fundaments of binary logic which are commonly (though, he notes, arbitrarily) denoted as $true$ and $false$, Hehner makes the case for replacing these representations of the binary elements with the symbols $\top$ and $\bot$ in order to better contextualize their projections onto higher orders of mathematics. He points out that, while $true$ and $false$ are very intuitive for the binary case, properties we commonly understand as "boolean behavior" hold in general for any set of two elements which are the inverses of each other, and are closed under negation, conjunction, disjunction, and implication.

These symbols, read as "top" and "bottom", come from the set theory idea of **partially ordered sets**, where the "top" element of a set is larger than all other elements in the set for some definition of ordering. Likewise, "bottom" is smaller than all other elements in the set. Hehner begins his syntactical crusade by calling attention to the fact that $\top$ and $\bot$ are visual duals, reflecting their relation which transcends mere boolean logic / set theory.

### Binary Relations

To make the first leap from the confines of a lower domain of boolean logic, Hehner lays out some familiar binary relations and shows their arithmetic parallels. Considering first the binary `AND` operator, denoted $\land$, and its truth table:

$\pmb{ \land }$ | $\bot$ | $\top$ |
---|---|---|
$\bot$ | $\bot$ | $\bot$ |
$\top$ | $\bot$ | $\top$ |

Hehner urges the reader (his poor, poor students who I think are actually subjected to these glyphs in the classroom) to view these values not simply as $true$ and $false$, but as *ordered* elements related by the axiom $\bot < \top$, which allows us to interpret logical `AND` as the `min` operator such that its value is $\top$ only when both operands are $\top$. Similarly, logical `OR`, commonly denoted $\lor$, can be interpreted as the `max` operator:

$\pmb{\lor}$ | $\bot$ | $\top$ |
---|---|---|
$\bot$ | $\bot$ | $\top$ |
$\top$ | $\top$ | $\top$ |
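As a sanity check, we can map $\bot, \top$ to $\mp\infty$ and confirm that `min`/`max` reproduce both truth tables above. A quick sketch of mine, not code from the paper:

```python
import math
from itertools import product

BOT, TOP = -math.inf, math.inf  # ⊥ and ⊤ as the extremes of the number line

def AND(p, q):
    """Hehner's reading of ∧: the min operator."""
    return min(p, q)

def OR(p, q):
    """Hehner's reading of ∨: the max operator."""
    return max(p, q)

# Reproduce both truth tables above from min/max alone:
for p, q in product([BOT, TOP], repeat=2):
    assert AND(p, q) == (TOP if (p == TOP and q == TOP) else BOT)
    assert OR(p, q) == (TOP if (p == TOP or q == TOP) else BOT)
```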

Hehner notes again how the classical conjunctive and disjunctive symbols are vertical duals and also symmetrical about the horizontal axis, which "beautifully" hints at their commutativity, e.g. $x \land y \equiv y \land x$ and $x \lor y \equiv y \lor x$ –commutativity which, under the min/max interpretations, holds for numeric values of $x,y$ as well as the germane binary elements.

Logical implication, denoted $\implies$, and conventionally read as "implies" in the discrete context, or when used infix: "if $p$, then $q$", takes on the following truth table:

$\implies$ | $\bot$ | $\top$ |
---|---|---|
$\bot$ | $\top$ | $\color{red}\top$ |
$\top$ | $\bot$ | $\top$ |

This table can be understood as "if the first input is false, then the second input is irrelevant".

Making the case for $\bot$ and $\top$, Hehner points out how many students get hung up on the cell highlighted in red, known as the counter-factual case, which seems to indicate that something false can imply something true. Students would do well to study Soissons, or maybe Philo the Dialectician, and –as Hehner correctly notes– to avoid the hubris of Machine Learning researchers who fall victim to the counter-factual and introduce a new, third boolean value of e.g. $unknown$ for these cases:

This is argued to be more intuitive. I believe this proposal betrays a serious misunderstanding of logic.

If, however, we divorce ourselves from the bondage of truthiness in favor of the splendidly symmetric partially-ordered sentinels, we can observe that logical implication behaves isomorphically to the relation $\leq$, and the truth table then reflects that axiomatic relationship between $\top$ and $\bot$, reinforcing the projection of set theory onto binary logic, and further onto arithmetic!
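Python happens to order its booleans with `False < True`, so the implication-as-$\leq$ claim can be checked directly. A quick sketch of my own:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Standard material implication."""
    return (not p) or q

# Python already orders its booleans False < True, so
# implication coincides with ≤ on every input pair:
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == (p <= q)
```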

If we take a concrete example of how these might be used, such as the plainly truthful expression $(x < 4) \implies (x < 6)$, which is commonly how logical implication would be used in formal proofs, we understand that fewer values of $x$ satisfy the sinistral side of the expression than the dextral side. Thus $x < 4$ is closer to the falsest statement, which is represented by $\bot$, and likewise, the looser constraint of $x < 6$ is closer to $x < \top$, which is itself simply $\top$.

I have tried using the standard words "stronger" and "weaker", saying $x < 4$ is stronger than $x < 6$; but I find that some of my students have an ethical objection to saying that falsity is stronger than truth.

Therefore, we have a partial ordering of the quality, or more correctly, the strictness of proofs. Expressions equivalent to $\top$ are called **theorems**. Some expressions evaluate to $\top$ regardless of the values of the variables that may appear within them; these are awarded the even higher status of **Law**, e.g. $q \lor \lnot q \equiv \top$, which is known as the Law of the Excluded Middle.

This relationship between $\lt$ and $\gt$ in the context of a formal proof helps illustrate that $\top$ and $\bot$ are appropriate models for the extrema of whatever domain they appear in, be they the "truest" or "falsest" values in a logical set, or numeric values...

## Arithmetic

Now that we're somewhat comfortable with the arithmetic applications of binary relations via Hehner's notation, we can examine possible interpretations of $\top$ and $\bot$ for various number systems. It's commonplace for $false$ and $true$ to take on values of $0$ and $1$ in computer science; however, Hehner argues that binary logic is naturally embedded in the real number system via $\pm \infty$. (St. Thomas More is pogging).

If we take, for example, two values $a,b$ on the number line:

We can negate the arithmetic interpretation of the logical conjunction of the values to observe how this relation behaves:

Thus we arrive at an arithmetic derivation of De Morgan's Law(s):

$$-x \downarrow -y = -(x \uparrow y)$$

that is, the minimum of the inverses of two values is equivalent to the inverse of their maximum! Similarly,

$$-x \uparrow -y = -(x \downarrow y)$$
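The min/max reading of De Morgan can be spot-checked numerically for arbitrary reals, not just the $\pm\infty$ sentinels. My own sketch:

```python
import random

# Spot-check De Morgan under the min/max reading:
# the min of the negations equals the negation of the max, and vice versa.
for _ in range(1_000):
    x, y = random.uniform(-100, 100), random.uniform(-100, 100)
    assert min(-x, -y) == -max(x, y)
    assert max(-x, -y) == -min(x, y)
```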

Many other –but not all, as we'll see shortly– logical operations generalize to arithmetic systems under Hehner's unified algebra:

boolean law | number law |
---|---|
$\top \oplus \bot$ | $\infty \neq - \infty$ |
$a \equiv \lnot \lnot a$ | $x = - - x$ |
$a \lor \top \equiv \top$ | $x \uparrow \infty = \infty$ |
$a \land \bot \equiv \bot$ | $x \downarrow -\infty = -\infty$ |
$a \lor \bot \equiv a$ | $x \uparrow -\infty = x$ |
$a \land \top \equiv a$ | $x \downarrow \infty = x$ |
$a \implies \top$ | $x \leq \infty$ |
$\bot \implies a$ | $-\infty \leq x$ |
$a \lor (b \land c) \equiv (a \lor b) \land (a \lor c)$ | $x \uparrow (y \downarrow z) = (x \uparrow y) \downarrow (x \uparrow z)$ |
$a \land (b \lor c) \equiv (a \land b) \lor (a \land c)$ | $x \downarrow (y \uparrow z) = (x \downarrow y) \uparrow (x \downarrow z)$ |
$a \lor b \equiv \lnot (\lnot a \land \lnot b)$ | $x \uparrow y = -(-x \downarrow -y)$ |
$a \land b \equiv \lnot (\lnot a \lor \lnot b)$ | $x \downarrow y = -(-x \uparrow -y)$ |
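Most rows of the number-law column can be verified mechanically with `math.inf` standing in for $\top$. A sketch of mine over a few arbitrary sample reals:

```python
import math

inf = math.inf
up, down = max, min          # ↑ (disjunction) and ↓ (conjunction)
x, y, z = 3.0, -7.5, 42.0    # arbitrary sample reals

assert up(x, inf) == inf        # a ∨ ⊤ ≡ ⊤
assert down(x, -inf) == -inf    # a ∧ ⊥ ≡ ⊥
assert up(x, -inf) == x         # a ∨ ⊥ ≡ a
assert down(x, inf) == x        # a ∧ ⊤ ≡ a
assert x <= inf and -inf <= x   # a ⟹ ⊤ and ⊥ ⟹ a

# distributivity, in both directions:
assert up(x, down(y, z)) == down(up(x, y), up(x, z))
assert down(x, up(y, z)) == up(down(x, y), down(x, z))
```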

### Not All Laws!

Now we say: what a pity that $x = -x$ has no solution; let's give it one. The new solution is denoted $0$. While gaining a solution to some boolean expressions, we have lost some laws such as the law of the excluded middle $x \uparrow -x$.

Previously, we saw that the Excluded Middle $q \lor \lnot q \equiv \top$ holds for any binary values of its input variable. However, $q \uparrow (-q) = |q|$ is *not* equivalent to/does not evaluate to $\top$ anymore (unless $q = \pm\infty$). Therefore, since the truthiness of this statement is now dependent on the value of $q$, the Excluded Middle is no longer a Law nor a theorem when evaluated numerically. This makes sense in general for extended number systems –hence the importance of qualifiers like "... closed under the set ..."– and is true for many other laws. E.g. when extending fundamental operations from the Reals to the Complex numbers, to the quaternions, octonions, etc., we begin to lose properties commonly taken for granted like ordering (which breaks for complex numbers), commutativity (which we lose for quaternions), and associativity (for the octonions).^{1}
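A minimal check of the broken law, in a quick sketch of mine:

```python
import math

# Over {⊥, ⊤} = {−∞, ∞}, the Excluded Middle q ∨ ¬q ≡ ⊤ still holds:
for q in (-math.inf, math.inf):
    assert max(q, -q) == math.inf

# ...but over the reals, q ↑ −q is |q|, whose value depends on q,
# so it is no longer a Law:
for q in (-3.0, 0.0, 2.5):
    assert max(q, -q) == abs(q)
    assert max(q, -q) != math.inf
```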

Now we have an algebra of three values $\top, 0, \bot$. In one application they can be used to represent "yes", "maybe", and "no"; in another they can be used to represent "large", "medium", and "small". This algebra has 27 one-operand operators, such as $-$, which is now defined by $-\top = \bot$, $\; -0 = 0$, and $\; -\bot = \top$.

It has 19,683 two-operand operators for which we'd need many new symbols in order to enumerate the relationships between possible *products* such as $\top \top$, $0 \top$, and $\top\bot$. We might continue our development with a four-valued algebra and five-valued algebra, but at this point it's just as well to fill the space between the three primary values with all the integers, then the rationals, the reals, and the complex numbers as usual.
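Those operator counts fall out of a simple combinatorial argument: a one-operand operator picks one of $n$ outputs for each of $n$ inputs, and a two-operand operator fills every cell of an $n \times n$ table. A sketch:

```python
# Operator counts for an n-valued algebra: a one-operand operator picks one
# of n outputs for each of n inputs; a two-operand operator fills an n×n table.
def unary_ops(n: int) -> int:
    return n ** n

def binary_ops(n: int) -> int:
    return n ** (n * n)

assert unary_ops(2) == 4 and binary_ops(2) == 16       # classical booleans
assert unary_ops(3) == 27 and binary_ops(3) == 19683   # the {⊤, 0, ⊥} algebra
```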

At the same time as we gain solutions, we lose laws, since the laws and unsatisfiable expressions are each other's negation. For example, when we gain a solution to $x^2 = 2$, we lose the law $x^2 \neq 2$.

boolean | unified | arithmetic |
---|---|---|
top | $\top$ | infinity |
bottom | $\bot$ | minus infinity |
negation | $-$ | negation |
conjunction | $\downarrow$ | minimum |
disjunction | $\uparrow$ | maximum |
"nand" | $⤓$ | negation of minimum^{2} |
"nor" | $⤒$ | negation of maximum |
implication | $\leq$ | order |
reverse implication | $\geq$ | reverse order |
strict implication | $<$ | strict order |
strict reverse implication | $>$ | strict reverse order |
equivalence | $=$ | equality |
exclusive or | $ǂ$ | inequality |

with the rationale for these choices being:

In choosing infix symbols, there is a simple principle that really helps our ability to calculate: we should choose symmetric symbols for symmetric operators, and asymmetric symbols for asymmetric operators, and choose the reverse of an asymmetric symbol for the reverse operator. The benefit is that a lot of laws become visual: we can write an expression backwards and get an equivalent expression. For example, $x + y < z$ is equivalent to $z > y + x$. By this principle, the arithmetic symbols $+ \; \times \; < \; > \; =$ are well chosen, but $- \; \neq$ are not. The boolean symbols $\land \; \lor \; \implies \; \impliedby \; \equiv \; \oplus$ are well chosen, but $\not \equiv$ is not.

There are some other troublesome and vestigial operators as well, such as multiplication which is not self-dual, but $\times$ is unfortunately vertically symmetric.

## Functions

A rote definition of a function might be a mapping from one set to another. Conventionally, we express functions as, e.g., $f(x) = x + 1$, where the *scope* of the variable $x$ is limited to its usage in the definition of $f$. We (computer programmers/algebra II graduates) intuitively know that the $x$ referenced by some other function, e.g. $g(x) = x + 2$, refers to an entirely different argument.

This becomes confusing to the uninitiated when we introduce function composition, and therefore Hehner posits that a more expressive function notation solidifies the scope (Hehner's functions *are* scopes) and lends itself to broader composability:

$$\langle x : \R \rightarrow x + 1 \rangle$$

For trivial $f$, this might seem like overkill, but this representation contains more information and generalizes more nicely than $f: \R \rightarrow \R$, which gets fugly fast when we start currying or dealing with higher-order dimensions, or tensors, or anything at all outside of what Carathéodory had in mind when he decided to get cute with his notation.

Consider the sigma notation used for summations:

$$\sum_{i=1}^{10} i^2$$

Here, we understand that $i$ is local to the summation, but only because we were taught that this is the case. It's by no means intuitively communicated by the notation.^{3}

Hehner's alternative notation captures summation with a prefix operation over which the result of the scope is folded:

$$+\langle i : 1..10 \rightarrow i^2 \rangle$$

We can apply this form to set comprehension, integrals, and really to any collection with a binary operator (that is, *any semigroup*). The daunting product notation such as $\prod^{10}_{1} i^2$ becomes simply:

$$\times\langle i : 1..10 \rightarrow i^2 \rangle$$

and less-obvious comprehension compositions such as taking the maximum or minimum of a list are as compact as

$$\uparrow\langle i : 1..10 \rightarrow i^2 \rangle \qquad \downarrow\langle i : 1..10 \rightarrow i^2 \rangle$$

If $f$ is also a boolean-valued function (called a **predicate**), such as $\sf{is\_prime}(x)$, then we can write expressions read as "the smallest or largest (respectively) prime in the input list".
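The "prefix operator over a scope" reading is just an ordinary fold: apply a binary operator across the values a function takes on a domain. A sketch, with the helper name `over` being mine, not Hehner's:

```python
from functools import reduce
import operator

# Fold a binary operator over the values a function takes on a domain —
# Hehner's prefix-operator notation, read as an ordinary reduce.
def over(op, domain, f=lambda x: x):
    return reduce(op, (f(x) for x in domain))

assert over(operator.add, range(1, 11), lambda i: i ** 2) == 385           # Σ i²
assert over(operator.mul, range(1, 11), lambda i: i ** 2) == 3628800 ** 2  # Π i²
assert over(max, [3, 1, 4, 1, 5]) == 5  # ↑ over a list
assert over(min, [3, 1, 4, 1, 5]) == 1  # ↓ over a list
```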

This isn't a novel realization, though. Hehner notes that this is merely a projection of Set Theory's existential and universal quantifiers.

## Set Theory I

Poo-pooing the needless introduction of new symbols to communicate a concept which might be more easily grokked by students if axiomatically derived from top/bottom, Hehner notes that the application of logical AND/OR to a collection is isomorphic to the universal and existential quantifiers: folding $\land$ over a collection yields $\forall$, and folding $\lor$ yields $\exists$.
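Python's `all`/`any` are exactly these folds, which makes the point easy to demonstrate (my sketch):

```python
from functools import reduce

xs = [True, True, False]

# Folding ∧ over a collection is ∀; folding ∨ over it is ∃.
forall = reduce(lambda a, b: a and b, xs)
exists = reduce(lambda a, b: a or b, xs)

# Python's all/any are precisely these folds:
assert forall == all(xs)
assert exists == any(xs)
assert not forall and exists
```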

## Compactness

Returning to the toy example of how we might express a function $f$'s domain, Hehner notes that we can still use variables to refer to whole functions with similar bindings as we might see elsewhere in calculus:

This yields increasingly compact expressions such as:

De Morgan's laws can be further simplified (at the expense of readability) by omitting the implicit $f$ entirely (since it's a Law, and by definition holds regardless of its inputs):

## Set Theory II

I migrated to another coffee shop btw, if u even care. This one's a bit less cutesy – underground at Warszawa Centralna station while I wait for the train to Krakow (not to be confused with Centralna st which is about 4 miles away from here. Only a fool would make that mistake and miss their train because of it...)

This is where Hehner gets extra zany. A set is a collection of elements denoted $\{0, 1, 2, ... \}$. Hehner points out that this notation relies on a lot of assumptions which we take for granted. Without deviating from the existing conventions of comma separated values in braces, he explicitly defines the comma as an **idempotent** binary operator which joins two elements into a **bunch**. A bunch is just a collection of "free-floating" elements which have not yet been placed into a set. The enclosing braces $\{ \}$ then are a *unary operator* which takes a bunch to a set. The empty set is just a box $\{ \}$ with nothing else inside, and the empty bunch is denoted:

gotcha! It's literally nothing!

This clever use of syntax as operators rather than mere punctuation helps reinforce the properties of these domains which we take for granted in terms of the familiar behaviors we've derived from boolean logic.

- $,\;$ is commutative, since sets are not ordered
- $,\;$ is idempotent, since all elements of a set are unique, so $a,a = a$

In other words, $\{ 1, 2, 2, 3\}$ *is* $\{ 1, 2, 3\}$ since $2,2 = 2$.
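Python's set literals already behave this way, so the two properties above can be stated as one-liners (my illustration):

```python
# Python set literals behave like brace-applied bunches:
# the comma join is idempotent and commutative.
assert {1, 2, 2, 3} == {1, 2, 3}   # 2,2 = 2 (idempotence)
assert {3, 1, 2} == {1, 2, 3}      # order is irrelevant (commutativity)
```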

Conventionally, Set theory is far more verbose than Hehner in communicating set membership, introducing yet another symbol: $\in$. Hehner prefers to standardize use of the colon (and also drops the esoteric symbols representing common domains such as the natural numbers s.t. $n \in \N$ becomes $n : nat$). However, in Hehner's unified system, $nat$ is a *bunch*, not a set. His semantics prefer bunches to sets when specifying domains because this means that the comma operator can be applied to the domains to refine them (whereas a comma joining two free-floating sets like so $\{...\},\{...\}$ is undefined in set theory, we now recognize this as a bunch of sets).

Bunch theory tells us about aggregation; set theory tells us about packaging. The two are independent.

This gives us intuitive flexibility in defining domains like the following, which would otherwise require verbose or bespoke notation to express:

or

Thus by inserting a teensie weensie bit of specification into Set Theory by means of bunching, we get a lot more operability (namely distributivity over things we'd like to treat as sets) for free.

## Statistics

Hehner notes that Boole's seminal work on algebra targeted both logic *and* probability. The standard convention assigns a probability value of 0 to an event that cannot happen, 1/2 to an event that is equally likely to happen as not, and 1 to a certain event. From these axioms, for a set of events in which exactly one event must happen, the probability distribution function over that set must sum to 1. A natural mapping to Hehner's unified algebra takes on the properties:

conventional | unified |
---|---|
$P(X) = 0$ | $\bot$ |
$P(X) = 1$ | $\top$ |
$P(X) = 1/2$ | $0$ |
$\int p(x)\,dx = 1$ | $\int p(x)\,dx = 0$ |

Hehner notes that there is likely some further exploration to be done into the logarithmic relation between these two models of statistics, but he's less interested in fleshing that out.
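He doesn't flesh it out, but one natural candidate for that logarithmic relation –my own guess, not Hehner's– is the log-odds (logit) map, which sends $0, 1/2, 1$ to exactly $\bot, 0, \top$:

```python
import math

def logit(p: float) -> float:
    """Log-odds: maps probabilities in [0, 1] onto the extended reals."""
    if p == 0:
        return -math.inf
    if p == 1:
        return math.inf
    return math.log(p / (1 - p))

assert logit(0) == -math.inf   # impossible → ⊥
assert logit(1) == math.inf    # certain    → ⊤
assert logit(0.5) == 0.0       # even odds  → 0
```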

## Metalogic

At a more advanced level, when we want a formalism to study formalisms, we will need an operator that applies to the form of an expression. For that purpose, we do not need any new kind or level of operator. Rather, we need to do exactly what Gödel did when he encoded expressions, but we can use a better encoding. We need to do exactly what programmers do: distinguish program from data.^{4}

Rather than borrowing the logician's symbol $\vdash$ used to identify a theorem, Hehner points out that we already have a symbol for expressions which evaluate to true and that's $\top$ dammit!

Hehner proposes a new symbol (symmetric, mind you) $\textmd{I}$ called *interpreter*. $\textmd{I}\; s$ is said to be a theorem iff $s$ represents a theorem, and an antitheorem iff $s$ represents the negation of a theorem:

defined for all boolean expressions as follows:

such that $\textmd{I}$ unquotes, and distributes over its operands.
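To make "unquote and distribute" concrete, here's a toy interpreter over quoted boolean expressions. This grammar and implementation are entirely my own illustration; Hehner defines $\textmd{I}$ over his own expression language, not strings like these:

```python
# A toy interpreter I over *quoted* boolean expressions — program as data.
def I(s: str) -> bool:
    s = s.strip()
    if s == "⊤":
        return True
    if s == "⊥":
        return False
    if s.startswith("-"):            # I "-x"    =  -(I "x")
        return not I(s[1:])
    left, op, right = s.split()      # I "x ∧ y" =  (I "x") ∧ (I "y")
    if op == "∧":
        return I(left) and I(right)
    if op == "∨":
        return I(left) or I(right)
    raise ValueError(f"unknown operator {op!r}")

assert I("⊤") is True
assert I("-⊥") is True
assert I("⊥ ∨ ⊤") is True
assert I("⊤ ∧ -⊤") is False
```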

### Tarskian Consistency and Gödel Incompleteness

Gödel's incompleteness proof is just three lines under $\textmd{I}$. Suppose that every boolean expression is either a theorem or an antitheorem, and define $Q$ by

$$Q \;=\; ``-\textmd{I}\; Q"$$

which proves that a boolean expression is equal to its negation, which is inconsistent. Our logic containing an interpreter must be either inconsistent or incomplete. We favor consistency and choose to disallow the replacement of an expression with its equal, preventing us from defining a complete interpreter (in particular, $\textmd{I}$ cannot interpret/unquote $``-\textmd{I}\; Q"$ ).

## The Terminal Section Header

While Hehner's crusade ends with a meager attempt at unifying the symbology of metalogic, the arguments he makes throughout the paper mirror the core principles of Category Theory. The most rhetorically motivating aspect of this paper is the way in which he makes several subtle arguments in favor of the adoption of his proposed system without explicitly drawing the conclusion. Between the lines, he's like "see, you already have everything you need ..."

Number algebra is used by scientists and engineers everywhere. It is used by economists and architects. It is taught first to 6-year olds, without a metanotation, concretely as addition and subtraction of numbers. Then variables and equations are introduced, and always the applications are emphasized. As a result of that early and long education, scientists and engineers and mathematicians are comfortable with it... People have looked at it from every angle; they've described how it works at great length; now it's time to pick it up and use it. To use logic well, one must learn it early, and practice a lot. Fancy versions of logic, such as modal logic and metalogic, can be left to university study, but there is a simple basic algebra that can be taught early and used widely.

For better or worse, though, I've not seen his notation adopted anywhere outside of this publication and his own website.

But we shouldn't let the complexities of this application of boolean algebra complicate the algebra, any more than we let the complexities of the banking industry complicate the definition of arithmetic.

Oh, salient Eric, if only you could see me now

## Footnotes


1. It's a Complex Situation, which I'm bringing out of the drafts for its discussion of quaternions and octonions ↩
2. In the original paper, Hehner uses a symbol similar to $\nrightarrow$, except rotated to point in a vertical direction, and with a perpendicular strikethrough. While the rationale is clear, perhaps this is a poor choice of symbols since they're too bespoke to even have native LaTeX macros as it is, and it is *impossible* to create them with MathJax. ↩
3. See here for interesting discussion of "intuitive" notation via the Triangle of Power ↩
4. von Neumann wept ↩