Librarians, Luhn, and Lizard Brain


'Cause a fear of disrepair can grow overblown and trite

And surely less sustainable than some ailment more defined

So give me tribulations, so give me trials

And errors, and errors, and errors, causes, and "whys?"

What in the...

I returned to an old textbook on Error Correcting Codes1 due to some bafflement caused by one of the explanations for the simple ISBN-10 algorithm for error detection and tripped and fell down a rabbit hole.

Pragmatically, the motivation for the algorithms discussed in this post centers around error detecting codes (but not error correcting codes, unlike the Galois code covered in this post2, which goes into more depth on the nature of both types of codes and some of the requisite Group Theory basics I needed to grok the Galois).

I stumbled across an arcane piece of code at work that employed a related algorithm to validate a debit card number before passing it along to a third party payment processing network since it's both faster and more affordable to attempt this on the "client" side before passing it along to e.g. Visa who could munge it over for hundreds of milliseconds before returning an error and an invoice t'boot.

In this post I'll briefly summarize:

  • The ISBN-10 code
  • The aforementioned Luhn algorithm widely used for computing a check-digit for debit cards
  • A superior code developed (independently) by both Jacobus Verhoeff3 and Peter Gumm4 which detects more errors than Luhn's imperfect-but-simpler method
    • This is where I'll spend most of the words (and symbols) allotted for this post, providing an alternate geometric interpretation of the proof(s) provided by the original authors


Cumulative Sum of Digits

As mentioned, Todd Moon1 describes the International Standard Book Number as a sequence of digits† whose sum-of-the-sum of digits must be equal to $0 \mod 11$, and provides the following tabular example of the confusing computation:

$$\underbrace{0}_{\text{country}} - \underbrace{20}_{\text{publisher}} - \underbrace{1-36186}_{\text{book \#}} - \underbrace{8}_{\text{check}}$$
| digit | 0 | 2 | 0 | 1 | 3 | 6 | 1 | 8 | 6 | 8 |
|---|---|---|---|---|---|---|---|---|---|---|
| cum. sum | 0 | 2 | 2 | 3 | 6 | 12 | 13 | 21 | 27 | 35 |
| cum. sum of sum | 0 | 2 | 4 | 7 | 13 | 25 | 38 | 59 | 86 | 121 |

The final sum is $121 \equiv 0 \mod 11$.

† not restricted to strictly numerical digits, since summation $\mod 11$ requires a digit to represent $10$; this code therefore also employs the digit $X$. The same method can be generalized to other bases as well, such as alphanumeric sequences.

Weighted Sum of Digits

Conventional implementations of ISBN-10 describe a different process relying on a weighted single sum of the digits, assigning descending weights to each index:

$$\begin{aligned} ISBN_{10}(d) &= \sum_{i=2}^{10} i \cdot d_i + c \\ &= (10 \cdot 0) + (9 \cdot 2) + \dots + (2 \cdot 6) + c \\ &= 113 + c \end{aligned}$$

for some minimum positive value of $c$ constrained by:

$$ISBN_{10}(d) \mod 11 = 0$$

yielding an expression that indicates a valid check-digit is $c = 8$:

$$\begin{aligned} ISBN_{10}(d) &= 113 + c \\ &= 113 + 8 = 121 \equiv 0 \mod 11 \;\checkmark \end{aligned}$$
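To make the weighted-sum version concrete, here's a minimal sketch in Python (function name and structure mine, not from any ISBN library):

```python
def isbn10_valid(isbn: str) -> bool:
    """Weighted ISBN-10 check: weights 10 down to 1, with 'X' standing in for 10."""
    digits = [10 if c == 'X' else int(c) for c in isbn if c != '-']
    # pair each digit with its descending weight and sum
    weighted = sum(w * d for w, d in zip(range(10, 0, -1), digits))
    return weighted % 11 == 0

# 113 + 8 = 121, and 121 mod 11 = 0
assert isbn10_valid("0-201-36186-8")
```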

Curse the Senior Engineer and their Lizard Brains 🧠

It wasn't intuitively obvious at first how weighted modular arithmetic was equivalent to Moon's sum of a sum of digits, but we can observe that in the former cumulative summation, the first digit is accumulated in the sum 10 times, the 2nd digit 9 times, and so on – effectively weighting each digit according to its index in the same way the straightforward weighted summation does. Here's a sample implementation of the cumulative summation which neatly illustrated this property for me, courtesy of the homie Will whose brain is sufficiently wrinkled to see the matrix:

def psychoDoubleCumSum(isbn: String): Int = {
    isbn.filter(_ != '-')
        .map(c => if (c == 'X') 10 else c.asDigit)
        .foldLeft((0, 0)) { case ((sum, sumOfSums), digit) =>
            // each digit joins the running sum, and the running sum joins the
            // sum-of-sums – weighting digit i by its index, per the above
            (sum + digit, sumOfSums + sum + digit)
        }
        ._2 % 11 // a valid ISBN-10 reduces to 0
}

So, charged with the primitive ISBN-10 check-digit algorithm, we can approach the self-checkout at the library, and if a book (or any item) is scanned or otherwise processed for some arbitrary end, and it doesn't validate according to this check, we know that something's gone wrong. In the exempli bibliothecae, we're kind of just screwed (what're you gonna do, scan the book again 5head?), but there are myriad scenarios where error detection proves useful and where we might have more agency.

The Luhn Algorithm

Luhn's Algorithm is another basic modulo check-digit error detecting code, developed by Hans Peter Luhn for IBM in 1960.5 At a high level, this algorithm works on a numerical sequence $S$ of length $|S|$ to generate an $(|S|+1)$th check-digit by doubling every other digit, summing over the residual digits, and adding the smallest positive value $c$ such that the resultant residual sum $\mod 10$ is $0$:

$$(10 - \sum_i S_i') \mod 10 = c$$

It is widely used to validate credit/debit card numbers (a scenario where a user might misread or fatfinger a digit).

For example, given a hypothetical credit card with a Primary Account Number (PAN) of $S = 7992739871$, we compute the following $S'$:

| $S_i$ | 7 | 9 | 9 | 2 | 7 | 3 | 9 | 8 | 7 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|
| $S_i'$ | 7 | $\color{red}{18}$ | 9 | $\color{red}4$ | 7 | $\color{red}6$ | 9 | $\color{red}{16}$ | 7 | $\color{red}2$ |
| $S_i'$ (split) | 7 | $\color{red}{1 + 8}$ | 9 | 4 | 7 | 6 | 9 | $\color{red}{1 + 6}$ | 7 | 2 |

Doubling every other digit, splitting those digits greater than 9 into two additional digits, and summing modulo ten we get:

$$\begin{aligned} (10 - \sum_i {S_i}') \mod 10 &= c \\ \sum_i {S_i}' &= 67 \\ (10 - 67) \mod 10 &= c \\ 3 &= c \end{aligned}$$

It's worth noting that the implementation of modulo for negative numbers varies from language to language 🙃,6 so be careful and leverage associative operations to avoid the possibility of mistakes. For example, we can instead prefer

$$10 - (\sum_i {S_i}' \mod 10) = c$$

performing the modular summation before subtraction to guarantee that we avoid any zaniness (fairly assuming $S_i \in \N_0 \;\forall S_i \in S$). Associating operations to avoid calamity in this way will prove important later.
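Putting the pieces together, here's a hedged sketch of the whole Luhn check-digit computation (names mine), using the mod-before-subtract ordering advocated above:

```python
def luhn_check_digit(payload: str) -> int:
    """Compute the Luhn check-digit for a numeric payload string."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:      # double every other digit, starting from the rightmost
            d *= 2
            if d > 9:       # split two-digit results: 16 -> 1 + 6, same as 16 - 9
                d -= 9
        total += d
    # modular summation before subtraction, so the result is never negative
    return (10 - (total % 10)) % 10

assert luhn_check_digit("7992739871") == 3   # the worked example above
```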

Shortcomings of the Luhn Algorithm

I once again refer back to my lengthier post about Error Correcting Codes which details more properties of such codes, but to summarize – Luhn's code detects:

  • All single digit errors
  • Most –but not all– adjacent transpositions: the lone exception is swapping the adjacent pair 0, 9 (i.e. 09 ↔ 90)
    • This "most" here is the crux of this post. Verhoeff set out to obliterate all missed adjacent transposition errors


As we can see, the Luhn code is susceptible to missing transpositions involving a 0-digit:

Let $S = x\;y\;9\;0\;z$, and let an erroneous transposition of $S$ be $S_{bad} = x\;y\;\color{red}0\;9\color{black}\;z$. The every-other-digit-doubled versions of these strings will be:

$$\begin{aligned} S &= x\;y\;9\;0\;z \\ Luhn(S) &= \sum_i S_i' \\ S' &\leftarrow x + (2 \cdot y) + 9 + (2 \cdot 0) + z \\ &= x + 2y + 9 + z \\ \\ S_{bad} &= x\;y\;\color{red}0\;9\;\color{black}z \\ Luhn(S_{bad}) &= \sum_i S_{{bad}_i}' \\ S_{bad}' &\leftarrow x + (2 \cdot y) + \color{red}0\color{black} + (2 \cdot \color{red}9\color{black}) + z \\ &= x + 2y + \color{red}1\color{black} + \color{red}8\color{black} + z \\ &= x + 2y + \color{red}9\color{black} + z \\ &\implies Luhn(S) = Luhn(S_{bad}) \end{aligned}$$

which means that a payment request in the common example of credit card validation will fail after hundreds/thousands of milliseconds rather than a dozen or so for the non-edge cases of transposition errors. Good thing "0" and "9" are nowhere near on the keyboard 🙃.7
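We can watch the blind spot happen numerically with a throwaway Luhn implementation (digits $x = 1, y = 2, z = 2$ in the scheme above; here the doubling happens to land on the 9 rather than the 0, but the 90 ↔ 09 swap slips through either way):

```python
def luhn_check_digit(payload: str) -> int:
    """Throwaway Luhn check-digit, for demonstration only."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:                       # every other digit, from the right
            d = d * 2 - 9 if d > 4 else d * 2  # double, splitting e.g. 16 -> 7... no: 1+6
        total += d
    return (10 - total % 10) % 10

# "...90..." vs the transposed "...09...": identical check-digits
assert luhn_check_digit("12902") == luhn_check_digit("12092")
```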

In his own derivation of a superior check-digit generation technique, Gumm notes that:

Various check-digit methods have been designed for the decimal number system. Each method is able to detect all "single-error mistakes," but they fail to detect all transposition errors. At least one (and often not more than one) erroneous transposition is undetectable with those methods.


To motivate his own publication (which is really just a peer-reviewed blog post gatekept by the establishment, if you think about it), Verhoeff researched the frequency and category of all the types of errors which can occur in a code. He identified the following types of errors:

  • transcription: a straight up typo
    • i.e. "Peter Gumm" -> "🅱️eter Gumm"
  • transposition: two or more digits swapping positions
    • i.e. "Peter Gumm" -> "Geter Pumm"
  • twin transcription errors: a double fatfinger
    • i.e. "Peter Gumm" -> "Peter Gunn"
  • jump transpositions: a transposition of non-adjacent letters
    • i.e. "Peter Gumm" -> "Peter muGm"
  • phonetic errors: using a prohibited character
    • i.e. "Peter Gumm" -> "Peter Gu44" (assuming a strictly alphabetic domain)
  • jump twin errors: non-adjacent double typos
    • i.e. "Peter Gumm" -> "Patar Gumm"

As well as their observed frequencies in real life scenarios:

| digits in error | error class | frequency (%) |
|---|---|---|
| 2 | other adjacent | 1.92 |
| 2 | jump transpositions | 0.82 |
| 2 | jump twins | 0.29 |
| 2 | other jumps | 0.36 |
| 3 | (errors of the dyslexic variety) | 1.40 |

From this table, we can see how Gumm is able to make the at-first lofty-seeming claim that every single-error check-digit code immediately detects about 90% of all format errors, since about 90% of common errors are either single-digit transcriptions (~80%) or mundane transpositions (~10%), for which at least one of the two digits involved will necessarily be detected.

Relying on Verhoeff's research which showed that the vast majority of real errors are effectively detected by Luhn's algorithm, Gumm also notes that –despite how tempting it may be to just tack on additional check-digits to capture the remaining 2% of transposition errors–

The use of two check-digits ... is not advised since the previously mentioned studies have also shown that the absolute number of errors that occur roughly doubles when the number of digits increases by two. ... We therefore concentrate on methods to detect single-digit errors and transposition errors.

That is – each additional check-digit increases the literal and statistical surface area for a fatfingered input.

Gumm asserts that all error detecting codes of the form $\mod 2k$ are flawed for this reason.

Criteria for an Improved Error Detecting Code

Gumm introduces the following prerequisites for an improved check-digit code. Borrowing his notation (which I'll discard shortly), he defines a check-digit method base $r$ as a set $D$ with $|D| = r$ together with a family $F \overset{\Delta}{=} (f_1, f_2, ..., f_n, ...)$ of operations $f_n : D^n \rightarrow D$. An operation $f_k$ on the elements $d_k, ..., d_1 \in D$ is interpreted as the check-digit for a number with decimal representation $d_k, ..., d_1$ in base $r$. To improve on $\mod 2k$ methods, the following axioms must be satisfied for all $n \in \N$:

Axiom 1: Typos are Detected

$$\begin{aligned} f_n(x_n, ..., x_i, ..., x_1) &= f_n(x_n, ..., \color{red}{x_i'}\color{black}, ..., x_1) \\ &\implies x_i = \color{red}{x_i'}\color{black} \end{aligned}$$

If the check-digit calculation is identical for two sequences differing only at $x_i, \color{red}{x_i'}\color{black}$, then the two "differing" digits must not actually be different. This axiom guarantees single-error detection.

Axiom 2: Transpositions are Detected

$$\begin{aligned} f_n(x_n, ..., x_i, \color{red}{x_{i-1}}\color{black}, ..., x_1) &= f_n(x_n, ..., \color{red}{x_{i-1}}\color{black}, x_i, ..., x_1) \\ &\implies \color{red}{x_{i-1}}\color{black} = x_i \end{aligned}$$

If two digits are transposed, and the check-digit remains equivalent, then the two digits must be identical.

Axiom 3: Transpositions Involving the Check-Digit and its Left Neighbor Are Still Detected

$$f_n(x_n, ..., x_2, f_n(x_n, ..., x_1)) = x_1 \implies f_n(x_n, ..., x_1) = x_1$$

If the sequence still validates after transposing the final digit $x_1$ with its check-digit, then the two must be identical.

The Verhoeff-Gumm Algorithm

With these criteria in mind, Verhoeff and Gumm both set out to develop a better code which could detect 100% of all single-digit and transposition errors. Verhoeff proved the existence of such a code, and Gumm provided a concrete permutation of digits in the Dihedral Group of order 10 ($D_5$) which satisfies these criteria. Let's take a look at how we might go about trying to do the same. We'll use a simple 4-digit example throughout: $S = 3529$, and determine a means of generating a 5th digit $c$ such that the whole code is impervious to the two aforementioned classes of errors.

Looking at the arithmetic toolbox, we might reach for simple operations such as addition, subtraction, multiplication, or division. However, by themselves, each of these operations fails for either single-digit or transposition errors, or both (lookin' at you division):

| operation / error | code | detected? |
|---|---|---|
| $+$ | $3 + 5 + 9 + 2 = 19$ | |
| single-digit error | $3 + \color{red}7\color{black} + 9 + 2 = \color{red}21\color{black}$ | ✅ |
| transposition | $3 + \color{red}9\color{black} + \color{red}5\color{black} + 2 = 19$ | ❌ addition is commutative |
| $\times$ | $3 \times 5 \times 9 \times 2 = 270$ | |
| single-digit error | $3 \times \color{red}7\color{black} \times 9 \times 2 = \color{red}378\color{black}$ | ✅ |
| transposition | $3 \times \color{red}9\color{black} \times \color{red}5\color{black} \times 2 = 270$ | ❌ multiplication is commutative |
| $-$ | $3 - 5 - 9 - 2 = -13$ | |
| single-digit error | $3 - \color{red}7\color{black} - 9 - 2 = -15$ | ✅ |
| transposition | $\color{red}5\color{black} - \color{red}3\color{black} - 9 - 2 = \color{orange}-9\color{black}$ | ❌† not all transpositions are caught |
| $\div$ | $3 / 5 / 9 / 2 = 0.0\overline{33}$ | |
| single-digit error | $3 / \color{red}7\color{black} / 9 / 2 = 0.0238...$ | ✅ but bad in general since we'd like $c \in \N_0$ |
| transposition | not even gonna bother | |

† subtraction detects a transposition only when the swapped pair includes the leading digit (the only digit whose sign is never negated); swapping a distinct, adjacent pair that excludes it leaves the difference unchanged: e.g.

$$\begin{aligned} 3 - 5 - 9 - 2 &= -13 \\ \color{red}5\color{black} - \color{red}3\color{black} - 9 - 2 &= \color{red}-9\color{black} \\ 3 - 5 - \color{red}2\color{black} - \color{red}9\color{black} &= -13 \end{aligned}$$

which is a false negative!

Consider: the Square

So, we'll have to plumb the depths of our operations for something less trivial. Let's reach for geometry. Consider the square and the things we can do to it.

If we take a square, we can rotate it counter clockwise about its center by 90º yielding a new square:

$$S_0 \quad \overset{90º}{\circlearrowleft} \quad S_1$$

we can also flip it along its vertical axis yielding yet another new square:

$$S_0 \quad \overset{90º}{\circlearrowleft} \quad S_1 \quad \leftrightarrows \quad S_7$$

What happens if we flip first, then rotate?

$$S_0 \quad \leftrightarrows \quad S_4 \quad \overset{90º}{\circlearrowleft} \quad S_5$$

BAHA yet another new square – like a caveman discovering fire, we've defined two non-commutative, primitive operations on our square: $Rotate$ and $Flip$. From these, we can get 8 distinct squares:

  • $S_0$: our arbitrarily chosen progenitor
  • $S_1$: one 90º rotation from $S_0$
  • $S_2$: two 90º rotations from $S_0$
  • $S_3$: three 90º rotations from $S_0$ (note that four 90º rotations from $S_0$ is $S_0$ again)
  • $S_4$: $S_0$ flipped about its vertical axis
  • $S_5$: $S_1$ flipped about its vertical axis
  • $S_6$: $S_2$ flipped about its vertical axis
  • $S_7$: $S_3$ flipped about its vertical axis

Each square can also be thought of as an operation rather than just an element, with $S_0$ being the identity element (a 0º rotation with no flips leaves any square unchanged), $S_1$ being a single rotation (I'll drop the directional notation since a -90º rotation is equivalent to three +90º rotations, so if we ever wanted to perform, say, an $R_{-1}$ we'd instead do $R_3$), and each of the flips $S_4$ through $S_7$ being just some number of rotations before flipping about the axis (implicitly the vertical axis, since we can make the horizontal axis the vertical axis via rotation).
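If you'd rather poke at the squares in code, here's one minimal encoding (the tuple representation is my own assumption here): a square is a pair $(f, r)$ where $f = -1$ iff flipped and $r$ counts 90º rotations mod 4:

```python
def compose(a, b):
    """a ∘ b: apply b first, then a (right-to-left, like function composition)."""
    fa, ra = a
    fb, rb = b
    return (fa * fb, (fa * rb + ra) % 4)

R  = (1, 1)    # Rotate 90º
F  = (-1, 0)   # Flip about the vertical axis
S0 = (1, 0)    # the progenitor / identity

# flip-then-rotate lands on a different square than rotate-then-flip
assert compose(R, F) != compose(F, R)
# ...and four rotations bring us home
assert compose(R, compose(R, compose(R, R))) == S0
```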

Using our Square, we can potentially map 8 distinct digits to our arrangements of the Square which are formed by non-commutative primitive operations. However, modern society (since the 7th century Indian Vedic mathematicians pioneered the base 10 system commonly used today) has embraced a decimal system rather than an octal system (and even then, sometimes 10 digits still ain't enough, see ISBN-10, and written language, and the languid existence of culture), so we're going to need a couple more squares...

Consider: The Pentagon

Luckily, we can also generalize our same primitive operations to the Pentagon, which has 5 distinct rotations of 72º each, and 5 axes of symmetry, which conveniently gives us 10 arrangements (See the Appendix on $D_5$ for more details).

Define the elements of our set of Pentagons (which I'll refer to as $D_5$ for no reason in particular 😈) to be the following:

  • $P_0$: an arbitrarily chosen, promethean identity element from which we'll anchor the definitions of all our other Pentagons
  • $P_1$: the 1st rotation, by 72º
  • $P_2$: the 2nd rotation, by 144º
  • $P_3$: the 3rd rotation, by 216º
  • $P_4$: the 4th rotation, by 288º
  • $P_5$: the 1st flip, along the blue/red axis (axis 1) of $P_0$
  • $P_6$: the 2nd flip, along the yellow/green axis (axis 2) of $P_0$
  • $P_7$: the 3rd flip, along the red/orange axis (axis 3) of $P_0$
  • $P_8$: the 4th flip, along the green/blue axis (axis 4) of $P_0$
  • $P_9$: the 5th flip, along the orange/yellow axis (axis 5) of $P_0$

We can interpret each pentagon as a digit according to its index subscript.

How to use it tho?

Returning to our example string of $S = 3529$, for which we still desperately need a check-digit, and interpreting it as a sequence of pentagons, we get

$$S = \; P_3 \quad ? \quad P_5 \quad ? \quad P_2 \quad ? \quad P_9$$

But even armed with the knowledge that our Shapes themselves can be interpreted as operations, how are we to combine them with either operation?


Composition babyyyy! We can treat each Pentagon as a simple algebraic composition of functions akin to $f(g(x)) = f \circ g$:

$$S = \; P_3 \circ P_5 \circ P_2 \circ P_9$$

In general, Group Theoretic operations compose from right to left. The Pentagonal analog of a Luhnean check-digit is whichever Pentagon we start the chain of operations with (so, still the rightmost digit) such that the result is $P_0$.

So what Pentagon can we tack onto the right of the composition – the first operation applied – such that our result is $P_0$?

Computing a Pentagonal Check-Digit

Though our family of operations $(R, F)$ isn't commutative, it is associative, so we can group the operations with parentheses however we please. Knowing that a sequence of Pentagons is equivalent to the composition of operations on a starting Pentagon, and that each operation is itself a Pentagon, we can group and reduce our previous equation:

$$\begin{aligned} S \| c = P_0 &= P_3 \circ P_5 \circ P_2 \circ P_9 \circ P_c \\ &= R_3(F_1(R_2(P_9)))(P_c) \\ &= R_3(F_1(P_6))(P_c) \\ &= R_3(P_4)(P_c) \\ &= P_2 \circ P_c \end{aligned}$$

We introduce the inverse of $P_2$ to find the Pentagon corresponding to our check-digit:

$$\begin{aligned} P_0 &= R_2(P_c) \\ R_2^{-1}(P_0) &= P_c \\ R_3(P_0) &= P_c \\ P_3 &= P_c \end{aligned}$$

Intuitively, the inverse of $R_2$ is $R_3$ since $2 + 3 = 5$ rotations $=$ 0 rotations, so our check-digit is $P_3$.

We can verify this mechanism is resistant to transposition by swapping operations to see that the result of composing $S \| P_c$ is no longer $P_0$:

$$S \| \color{blue}P_c\color{black} = \; P_3 \circ P_5 \circ P_2 \circ P_9 \circ \color{blue}P_3\color{black} \; = \; P_0$$

$$\begin{aligned} &\phantom{=}\; P_3 \circ P_5 \circ P_2 \circ P_9 \circ \color{blue}P_3\color{black} = P_0 \\ &\overset{?}{=} P_3 \circ P_5 \circ \color{red}P_9\color{black} \circ \color{red}P_2\color{black} \circ P_3 = P_0 \\ &\neq P_3 \circ P_5 \circ P_9 \circ P_2 \circ P_3 = \color{red}P_4\color{black} \end{aligned}$$
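The whole pentagonal check-digit computation fits in a few lines. This sketch (encoding and names mine) uses the conventional tuple representation of $D_5$: digit $d$ maps to $(f, r)$ with $f = 1$ for rotations $P_0$–$P_4$ and $f = -1$ for flips $P_5$–$P_9$:

```python
from functools import reduce

def to_p(d):
    """Digit -> pentagon tuple (f, r)."""
    return (1, d) if d < 5 else (-1, d - 5)

def compose(a, b):
    """a ∘ b: apply b first, then a."""
    fa, ra = a
    fb, rb = b
    return (fa * fb, (fa * rb + ra) % 5)

def check_digit(digits):
    """Find c such that P_d1 ∘ ... ∘ P_dn ∘ P_c = P_0."""
    left = reduce(compose, map(to_p, digits))
    return next(c for c in range(10) if compose(left, to_p(c)) == (1, 0))

assert check_digit([3, 5, 2, 9]) == 3   # matches the derivation above

# the transposed 3,5,9,2 (with check-digit 3 appended) no longer reduces to P_0
transposed = reduce(compose, map(to_p, [3, 5, 9, 2, 3]))
assert transposed == (1, 4)             # lands on P_4 instead
```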

Can we prove this works in general, though? Or are there edge cases (like the adjacent 0, 9 Pentagons that would cause trouble as in Luhn's algorithm) for which we might accidentally wind up with a green-flag $P_0$?

Proving Pentagonal Correctness

We can verify with a tabular illustration of all mappings of Pentagons to their corresponding elements under each operation – this is known as a Cayley Table.

| $\circ$ | $P_0$ | $P_1$ | $P_2$ | $P_3$ | $P_4$ | $P_5$ | $P_6$ | $P_7$ | $P_8$ | $P_9$ |
|---|---|---|---|---|---|---|---|---|---|---|
| $P_0$ | $P_0$ | $P_1$ | $P_2$ | $P_3$ | $P_4$ | $P_5$ | $P_6$ | $P_7$ | $P_8$ | $P_9$ |
| $P_1$ | $P_1$ | $P_2$ | $P_3$ | $P_4$ | $P_0$ | $P_6$ | $P_7$ | $P_8$ | $P_9$ | $P_5$ |
| $P_2$ | $P_2$ | $P_3$ | $P_4$ | $P_0$ | $P_1$ | $P_7$ | $P_8$ | $P_9$ | $P_5$ | $P_6$ |
| $P_3$ | $P_3$ | $P_4$ | $P_0$ | $P_1$ | $P_2$ | $P_8$ | $P_9$ | $P_5$ | $P_6$ | $P_7$ |
| $P_4$ | $P_4$ | $P_0$ | $P_1$ | $P_2$ | $P_3$ | $P_9$ | $P_5$ | $P_6$ | $P_7$ | $P_8$ |
| $P_5$ | $P_5$ | $P_9$ | $P_8$ | $P_7$ | $P_6$ | $P_0$ | $P_4$ | $P_3$ | $P_2$ | $P_1$ |
| $P_6$ | $P_6$ | $P_5$ | $P_9$ | $P_8$ | $P_7$ | $P_1$ | $P_0$ | $P_4$ | $P_3$ | $P_2$ |
| $P_7$ | $P_7$ | $P_6$ | $P_5$ | $P_9$ | $P_8$ | $P_2$ | $P_1$ | $P_0$ | $P_4$ | $P_3$ |
| $P_8$ | $P_8$ | $P_7$ | $P_6$ | $P_5$ | $P_9$ | $P_3$ | $P_2$ | $P_1$ | $P_0$ | $P_4$ |
| $P_9$ | $P_9$ | $P_8$ | $P_7$ | $P_6$ | $P_5$ | $P_4$ | $P_3$ | $P_2$ | $P_1$ | $P_0$ |

which looks like troglodytic vomit. A heatmap is easier to digest:


Cayley Tables are useful since they remove the need for guess-and-checking (or thinking whatsoever) about inverse operations: cells with $P_0$ entries indicate inverses. Additionally, we can detect transpositions by noting that e.g.

$$\begin{aligned} R_1 \circ F_1 &\neq F_1 \circ R_1 \\ P_1 \circ P_5 &\neq P_5 \circ P_1 \\ P_6 &\neq P_9 \end{aligned}$$

If we fold the whole table along the major diagonal, we can note that most of the table has asymmetric results (highlighted in green), indicating that transpositions produce different $P_c$ check-digits (you're gonna have to flip your phone sideways for this one – sorry not sorry):

Each cell holds the pair $P_{\text{row}} \circ P_{\text{col}} \;/\; P_{\text{col}} \circ P_{\text{row}}$:

| $\circ$ | $P_1$ | $P_2$ | $P_3$ | $P_4$ | $P_5$ | $P_6$ | $P_7$ | $P_8$ | $P_9$ |
|---|---|---|---|---|---|---|---|---|---|
| $P_0$ | $P_1/P_1$ | $P_2/P_2$ | $P_3/P_3$ | $P_4/P_4$ | $P_5/P_5$ | $P_6/P_6$ | $P_7/P_7$ | $P_8/P_8$ | $P_9/P_9$ |
| $P_1$ | | $P_3/P_3$ | $P_4/P_4$ | $P_0/P_0$ | $P_6/P_9$ | $P_7/P_5$ | $P_8/P_6$ | $P_9/P_7$ | $P_5/P_8$ |
| $P_2$ | | | $P_0/P_0$ | $P_1/P_1$ | $P_7/P_8$ | $P_8/P_9$ | $P_9/P_5$ | $P_5/P_6$ | $P_6/P_7$ |
| $P_3$ | | | | $P_2/P_2$ | $P_8/P_7$ | $P_9/P_8$ | $P_5/P_9$ | $P_6/P_5$ | $P_7/P_6$ |
| $P_4$ | | | | | $P_9/P_6$ | $P_5/P_7$ | $P_6/P_8$ | $P_7/P_9$ | $P_8/P_5$ |
| $P_5$ | | | | | | $P_4/P_1$ | $P_3/P_2$ | $P_2/P_3$ | $P_1/P_4$ |
| $P_6$ | | | | | | | $P_4/P_1$ | $P_3/P_2$ | $P_2/P_3$ |
| $P_7$ | | | | | | | | $P_4/P_1$ | $P_3/P_2$ |
| $P_8$ | | | | | | | | | $P_4/P_1$ |

Alternatively, here's an image for mobile plebeians.

The non-green cells are symmetric entries indicating undetected transpositions, e.g. $P_3 \circ P_4 = P_4 \circ P_3 = P_2$. In general, any combination of two rotations, or any operation with the identity $P_0$, has an unsecured symmetry.

Blasted Symmetries

We can circumvent these blind spots by adding a pre-processing step to introduce further asymmetry across the table, just like how the Luhn algorithm doubles every other digit before summation, and ISBN utilizes index weights. We'll call our asymmetric "activation function" $\sigma$.

We'll apply $\sigma$ to pairs of Pentagons that are about to be combined. We need to define our $\sigma$ s.t. for pairs $P_i, P_j$ appearing in a sequence:

$$\begin{aligned} \cdots \circ P_i &\circ P_j \circ \cdots \\ \sigma(P_i) \circ P_j &\neq \sigma(P_j) \circ P_i, \text{ unless } i = j \end{aligned}$$

implying that $\sigma$ should produce a different check-digit for two sequences differing by a transposition.

To prove this property, we have to deviate from our colorful geometric fantasy land of Pentagons, and instead model our elements as germane integer tuples (this is how $D_5$ is conventionally treated). Each Pentagon can be uniquely described by a tuple of the form

$$P = (f, r) \begin{cases} f \in \{-1, 1\} \\ r \in \{0, 1, 2, 3, 4\} \end{cases}$$

where negative ff indicates a flip, and rr indicates how many rotations have been applied to PP. Our new exhaustive list of Pentagons is:

  • $P_0 = (1, 0)$
  • $P_1 = (1, 1)$
  • $P_2 = (1, 2)$
  • $P_3 = (1, 3)$
  • $P_4 = (1, 4)$
  • $P_5 = (-1, 0)$
  • $P_6 = (-1, 1)$
  • $P_7 = (-1, 2)$
  • $P_8 = (-1, 3)$
  • $P_9 = (-1, 4)$

We combine tupled Pentagons according to the formula:

$$\begin{aligned} P_i \circ P_j &= (\color{blue}f_i\color{black}, \color{blue}r_i\color{black}) \circ (\color{red}f_j\color{black}, \color{red}r_j\color{black}) \\ &= (\color{blue}f_i\color{black}\color{red}f_j\color{black}, \color{blue}f_i\color{black}\color{red}r_j\color{black} + \color{blue}r_i\color{black}) \mod 5 \\ P_2 \circ P_8 &= (f_2, r_2) \circ (f_8, r_8) \\ &= (1, 2) \circ (-1, 3) \\ &= (1 \cdot -1, 1 \cdot 3 + 2) \\ &= (-1, 5) = (-1, 0) \\ &= P_5 \end{aligned}$$

which checks out according to our Cayley Table! Note that we constrain our arithmetic $\mod 5$ to bound the number of rotations to the number of symmetrical axes in our polygon (whence $D_5$ gets its name). With our new (gross) algebraic model of our Pentagons, we can devise a $\sigma$.
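A quick sanity check of the tuple arithmetic against the worked example (encoding as in the list above, function name mine):

```python
def compose(a, b):
    """(f_i, r_i) ∘ (f_j, r_j) = (f_i * f_j, f_i * r_j + r_i) mod 5."""
    fa, ra = a
    fb, rb = b
    return (fa * fb, (fa * rb + ra) % 5)

P2, P8 = (1, 2), (-1, 3)
assert compose(P2, P8) == (-1, 0)   # i.e. P_5, just as the Cayley Table says
```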

Bad Sigma

Intuitively, we might just try adding some constant $b$ and see if that works, or at least how it behaves:

$$\sigma(P) = \sigma((f, r)) = (f, r + b)$$

To verify, we just have to show that this sigma function detects transpositions:

$$\begin{aligned} \cdots \circ P_i &\circ P_j \circ \cdots \\ \sigma(P_i) \circ P_j &\neq \sigma(P_j) \circ P_i \quad (\forall P_i, P_j \in S;\; i \neq j) \\ \sigma((\color{blue}f_i\color{black}, \color{blue}r_i\color{black})) \circ (\color{red}f_j\color{black}, \color{red}r_j\color{black}) &\neq \sigma((\color{red}f_j\color{black}, \color{red}r_j\color{black})) \circ (\color{blue}f_i\color{black}, \color{blue}r_i\color{black}) \\ (\color{blue}f_i\color{black}\color{red}f_j\color{black}, \color{blue}f_i\color{black}\color{red}r_j\color{black} + \color{blue}r_i\color{black} + b) &\neq (\color{red}f_j\color{black}\color{blue}f_i\color{black}, \color{red}f_j\color{black}\color{blue}r_i\color{black} + \color{red}r_j\color{black} + b) \end{aligned}$$

We know that $\color{blue}f_i\color{black}\color{red}f_j\color{black} = \color{red}f_j\color{black}\color{blue}f_i\color{black}$ for all values since arithmetic multiplication is commutative, so that will cancel the first element from both sides, leaving us to examine the second:

$$\begin{aligned} (\cancel{\color{blue}f_i\color{black}\color{red}f_j\color{black}}, \color{blue}f_i\color{black}\color{red}r_j\color{black} + \color{blue}r_i\color{black} + b) &\neq (\cancel{\color{red}f_j\color{black}\color{blue}f_i\color{black}}, \color{red}f_j\color{black}\color{blue}r_i\color{black} + \color{red}r_j\color{black} + b) \\ \color{blue}f_i\color{black}\color{red}r_j\color{black} + \color{blue}r_i\color{black} + \cancel{b} &\neq \color{red}f_j\color{black}\color{blue}r_i\color{black} + \color{red}r_j\color{black} + \cancel{b} \\ \color{blue}f_i\color{black}\color{red}r_j\color{black} + \color{blue}r_i\color{black} &\neq \color{red}f_j\color{black}\color{blue}r_i\color{black} + \color{red}r_j\color{black} \end{aligned}$$