# It's a Complex Situation


## So, you got scooped by Hamas...

It is October 7th, and I'm putting the final touches on a markdown file named something cheeky like "The Game Theory of Apartheid" (title pending). I take a break from forging diagrams for the lengthy deep dive – rebuking figures and claims from an article on the divisive subject – and open Twitter X.com.

And then, lo and behold, I scroll past cell phone footage of an agent of Hamas paragliding into Sderot. In the coming days, my commentary on the dynamics of apartheid and its mathematical origins goes from novel and humorous to decidedly un-chic. I've shelved the critique for a later date. Though not particularly inflammatory itself, it still doesn't seem very couth to publish – instead I'll make snide, off-color jokes in the section headings of this post.

For the still-curious, the original paper from last month's would-be post can be found here. I'd suggest questioning why there are different distributions for the modest and greedy populations if their preferences are symmetric. That's all.

## It's a Complex Situation

A month later [footnote], with no signs of the situation letting up, I instead turn my attention to the conveniently less-political subject of Geometric Algebra. After last month's post, I did some additional reading about Cayley Tables, their other uses, their eponymous author, and some of his other work such as the Cayley–Dickson construction of algebras, from which I had to work backwards to solidify the question(s) motivating this post:

• Why is the cross product only well-defined in 3 and 7 dimensions?
• Why is there no canonical 'just' product between two vectors? We've got the dot product, the cross product, the seemingly nonsensical wedge and Hadamard products, and other esoteric ways to combine two collections of numbers, but no single analogue for arithmetic multiplication. Why is that?

That's the rabbit hole, let's jump in.

## Vectors, a refresher

To work our way up to the definitions of the dot, cross, and wedge products, their respective properties and pitfalls in the search for a canonical vector product, we first examine the taxonomy of a vector.

Taking some vector $\mathbf v = (v_x, v_y, v_z )$ we immediately presume a basis of our coordinate system. A basis is the set of linearly independent vectors that generates all elements of our vector space. Which of course begs the question, what is linear independence, again?

### Linear Independence

A set of vectors $\{ \mathbf v_1, \mathbf v_2, ..., \mathbf v_k \}$ is said to be linearly independent if the equation

$x_1\mathbf v_1 + x_2\mathbf v_2 + \cdots + x_k\mathbf v_k = \mathbf 0$

has only the trivial solution: $x_1 = x_2 = \cdots = x_k = 0$.

We can verify the linear independence of a set of vectors like so. Given a set of vectors:

$\{ \mathbf u, \mathbf v, \mathbf w \} = \Biggr \{ \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}, \begin{bmatrix} 3 \\ 1 \\ 4 \end{bmatrix} \Biggr \}$

we can check if the following homogeneous equation¹ has a non-trivial² solution:

\begin{aligned} x \mathbf{u} + y \mathbf{v} + z \mathbf{w} &= \mathbf 0\\ x \begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix} + y \begin{bmatrix} 1 \\ -1 \\ 2 \\ \end{bmatrix} + z \begin{bmatrix} 3 \\ 1 \\ 4 \\ \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \\ 0 \\ \end{bmatrix} \\ \begin{bmatrix} 1 & 1 & 3 \\ 1 & -1 & 1 \\ 1 & 2 & 4 \\ \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ \end{bmatrix} &= \begin{bmatrix} 0 \\ 0 \\ 0 \\ \end{bmatrix} \\ \implies \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \\ \end{bmatrix} &\begin{matrix} \implies x = -2 z \\ \\ \implies y = -z \end{matrix} \\ \end{aligned}

which are nontrivial solutions (e.g. $z = 1 \implies x = -2,\ y = -1$), so our vectors are actually linearly dependent – indeed, $\mathbf w = 2\mathbf u + \mathbf v$.
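We can double-check the row reduction numerically. A minimal sketch using NumPy (this post otherwise ships no code, so take the tooling as an assumption): the matrix whose columns are our three vectors has rank 2, below full rank, confirming the nontrivial solutions found above.

```python
import numpy as np

# Columns are the vectors u, v, w from the example above.
A = np.array([[1,  1, 3],
              [1, -1, 1],
              [1,  2, 4]])

rank = np.linalg.matrix_rank(A)
print(rank)                 # 2
print(rank == A.shape[1])   # False -> nontrivial null space exists
```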

### Properties of Linear Independence

• A set of two vectors is linearly dependent iff one vector $\mathbf u$ is collinear with the other $\mathbf v$, i.e. $\mathbf u$ is a scalar multiple of $\mathbf v$ (for larger sets, dependence more generally means some vector is a linear combination of the others),
• Any set containing the zero vector $\mathbf 0$ is linearly dependent, since the zero vector is collinear with (a zero multiple of) every other vector,
• If a subset of the vectors we're concerned with is linearly dependent, then the whole set is linearly dependent (hence it is sufficient to show that two vectors in a set are collinear to prove linear dependence of the whole set).

With this in mind, the conventional basis used to describe 3-dimensional space is $\{\mathbf{x, y, z} \}$, a familiar variable for each axis, where all of the basis vectors are orthonormal to one another:

\begin{aligned} \mathbf x &\perp \mathbf y, \; \Vert \mathbf x\Vert = 1\\ \mathbf y &\perp \mathbf z, \; \Vert \mathbf y\Vert = 1\\ \mathbf x &\perp \mathbf z, \; \Vert \mathbf z\Vert = 1\\ \end{aligned}

(Orthonormality meaning: each vector is orthogonal to the others, and has a unit length).

## Vector Products

If you're still reading, then chances are that at least two things come to mind when you read the phrase "Vector Product": the dot product and the cross product.

### The Dot Product

Also known as the "inner" product, this operation aptly denoted with a '$\cdot$' can be interpreted in a Euclidean context as the projection of one vector $\mathbf u$ onto another $\mathbf v$, or the amount that $\mathbf u$ is pointing in the same direction as $\mathbf v$.

It is computed via summation over the element-wise products:

$\color{red} \mathbf u \color{black} \cdot \color{blue}\mathbf v \color{black} = \color{red}u\color{black}_x\color{blue}v\color{black}_x + \color{red}u\color{black}_y\color{blue}v\color{black}_y + \color{red}u\color{black}_z\color{blue}v\color{black}_z$

and its general form is:

$\color{red} \mathbf u \color{black} \cdot \color{blue}\mathbf v \color{black} = \sum_{i=1}^n \color{red} u \color{black}_i \color{blue} v \color{black}_i, \quad \color{red}\mathbf u \color{black}, \color{blue}\mathbf v \color{black} \in \R^n$
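As a sanity check, the summation translates directly to code (a sketch; the helper `dot` is my own, not a library import):

```python
def dot(u, v):
    """Sum of element-wise products of two same-dimension vectors."""
    assert len(u) == len(v), "vectors must share a dimension"
    return sum(ui * vi for ui, vi in zip(u, v))

print(dot((1, 2, 3), (4, 5, 6)))  # 32
print(dot((1, 0, 0), (0, 1, 0)))  # 0: orthogonal vectors
```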

### The Cross Product

Also known as the "vector" product (and closely related to the "exterior" product we'll meet later), and deceptively denoted with an innocuous '$\times$', the cross product cannot be intuitively interpreted from a purely mathematical standpoint (unless you're really weird). It makes more sense in the context of physics, but is also naturally derived from basic axioms of linear algebra, as we'll see shortly.

\begin{aligned} \color{red} \mathbf u \color{black} \times \color{blue}\mathbf v \color{black} = \begin{bmatrix} \color{red}u\color{black}_y\color{blue}v\color{black}_z - \color{red}u\color{black}_z\color{blue}v\color{black}_y \\ \color{red}u\color{black}_z\color{blue}v\color{black}_x - \color{red}u\color{black}_x\color{blue}v\color{black}_z \\ \color{red}u\color{black}_x\color{blue}v\color{black}_y - \color{red}u\color{black}_y\color{blue}v\color{black}_x \\ \end{bmatrix} \end{aligned}

and there is no general definition of the cross product! It only exists in 3 and, sort of (for reasons we'll see later), 7 dimensions. In two dimensions, it can be thought of as the scalar $\color{red}u\color{black}_x\color{blue}v\color{black}_y - \color{red}u\color{black}_y\color{blue}v\color{black}_x$ which, though useful for our intuition, is not really a cross product, since the essence of the cross product is directionality/orientation (hence why it produces a vector itself).
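The 3-dimensional component formula above translates directly to code (again a hypothetical `cross` helper of my own):

```python
def cross(u, v):
    """3D cross product, using the component formula above."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# The standard basis behaves as expected: x cross y = z.
print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
# Swapping the arguments flips the sign (anticommutativity):
print(cross((0, 1, 0), (1, 0, 0)))  # (0, 0, -1)
```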

There's also the Hadamard product which is the intuitive method of element-wise multiplication for matrices (and thus vectors as well):

\begin{aligned} \color{red} \mathbf u \color{black} \circ \color{blue}\mathbf v \color{black} = \begin{bmatrix} \color{red}u\color{black}_x\color{blue}v\color{black}_x \\ \color{red}u\color{black}_y\color{blue}v\color{black}_y \\ \color{red}u\color{black}_z\color{blue}v\color{black}_z \\ \end{bmatrix} \end{aligned}

but it doesn't see much use outside of like,,, vector graphics (used in JPEG), the Julia programming language, and the penetrating face product (whatever that is 😬).
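For completeness, element-wise multiplication is a one-liner (the helper name `hadamard` is my own illustration, not any library's API):

```python
def hadamard(u, v):
    """Element-wise product of two equal-length vectors."""
    return tuple(ui * vi for ui, vi in zip(u, v))

print(hadamard((1, 2, 3), (4, 5, 6)))  # (4, 10, 18)
# Note: summing the Hadamard product recovers the dot product.
print(sum(hadamard((1, 2, 3), (4, 5, 6))))  # 32
```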

## Not quite...

But what about a canonical, vanilla $\color{red}\mathbf u\color{blue}\mathbf v$ product? Why isn't that universally recognized to be the Hadamard product, when every other standard arithmetic operation works intuitively on vectors – addition, subtraction, division, ... – but not multiplication? Why? To see, let's consider the "lower" families of numbers and the operations that work on them:

| family / operation | $+$ | $\times$ | $-$ | $/$ | $\text{\textasciicircum}$ |
| --- | --- | --- | --- | --- | --- |
| $\N : 0,1,2,3,...$ | $\N$ | $\N$ | $\Z$ | $\mathbb Q$ | $\N$ |

Starting first with the Natural numbers: addition and multiplication are closed (adding or multiplying any two natural numbers produces another natural number). Subtraction of natural numbers is closed under the set of integers (since e.g. $0 - 1 = -1 \notin \N$), similarly division is closed under the rationals, and exponentiation of the naturals always produces another natural number.

| family / operation | $+$ | $\times$ | $-$ | $/$ | $\text{\textasciicircum}$ |
| --- | --- | --- | --- | --- | --- |
| $\N : 0,1,2,3,...$ | $\N$ | $\N$ | $\Z$ | $\mathbb Q$ | $\N$ |
| $\Z : ...,-2,-1,0,1,2,...$ | $\Z$ | $\Z$ | $\Z$ | $\mathbb Q$ | $\mathbb Q$ |

Taking a step up, the Integers behave similarly except for exponentiation, because an integer raised to a negative integer power is a fraction.

| family / operation | $+$ | $\times$ | $-$ | $/$ | $\text{\textasciicircum}$ |
| --- | --- | --- | --- | --- | --- |
| $\N : 0,1,2,3,...$ | $\N$ | $\N$ | $\Z$ | $\mathbb Q$ | $\N$ |
| $\Z : ...,-2,-1,0,1,2,...$ | $\Z$ | $\Z$ | $\Z$ | $\mathbb Q$ | $\mathbb Q$ |
| $\mathbb Q: \frac{0}{1}, \frac{1}{1}, \frac{1}{2}, \frac{1}{3}, ...$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\R$ |

Rational numbers start to cause some ripples: when we raise a number to a fractional power we introduce square roots (and other roots), which yield irrational numbers – so we have to expand our horizons twice more to account for roots of positive numbers, and also roots of negative numbers:

| family / operation | $+$ | $\times$ | $-$ | $/$ | $\text{\textasciicircum}$ |
| --- | --- | --- | --- | --- | --- |
| $\N : 0,1,2,3,...$ | $\N$ | $\N$ | $\Z$ | $\mathbb Q$ | $\N$ |
| $\Z : ...,-2,-1,0,1,2,...$ | $\Z$ | $\Z$ | $\Z$ | $\mathbb Q$ | $\mathbb Q$ |
| $\mathbb Q: \frac{0}{1}, \frac{1}{1}, \frac{1}{2}, \frac{1}{3}, ...$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb C$ |
| $\mathbb R: \sqrt 2$ | $\R$ | $\R$ | $\R$ | $\R$ | $\mathbb C$ |
| $\mathbb C: \sqrt{-1}$ | ? | ? | ? | ? | ? |

## Complex Numbers

How do the Complex numbers behave under these operations, especially multiplication? First, we must understand what complex numbers are. The Complex numbers are built on the defining property of the "imaginary" unit:

$\color{purple}i\color{black}^2 = -1$

We can think of complex numbers as vectors with Real and Imaginary components with a basis of $\{1, \color{purple}\mathbf i \color{black} \}$. E.g. $\mathbf z = \begin{bmatrix}z_r \\ z_i \end{bmatrix}$.

Crucially, we can express complex numbers as the sum (linear combination) of their components – $a + b \color{purple}\mathbf i \color{black}$ – and do algebra on them! Multiplication of these vector-like quantities might give us insight into vector multiplication. For example, for some $\color{red} \mathbf a \color{black} = \color{red}a\color{black}_r + \color{red}a\color{black}_i \color{purple}\mathbf i \color{black}$ and $\color{blue}\mathbf b \color{black} = \color{blue}b\color{black}_r + \color{blue}b\color{black}_i \color{purple}\mathbf i \color{black}$ in $\mathbb C$:

\begin{aligned} \color{red}\mathbf a\color{black} \color{blue}\mathbf b\color{black} &= (\color{red}a\color{black}_r + \color{red}a\color{black}_i \color{purple}\mathbf i \color{black})( \color{blue}b\color{black}_r + \color{blue}b\color{black}_i \color{purple}\mathbf i \color{black}) \\ &= \color{red}a\color{black}_r \color{blue}b\color{black}_r + \color{red}a\color{black}_i \color{blue}b\color{black}_i \color{purple}\mathbf {ii} \color{black} + \color{red}a\color{black}_r\color{blue}b\color{black}_i \color{purple}\mathbf i \color {black} + \color{red}a\color{black}_i \color{blue}b\color{black}_r \color{purple}\mathbf i \color {black} \\ &= \color{red}a\color{black}_r \color{blue}b\color{black}_r - \color{red}a\color{black}_i \color{blue}b\color{black}_i + \color{purple}\mathbf i \color{black} (\color{red}a\color{black}_r\color{blue}b\color{black}_i + \color{red}a\color{black}_i \color{blue}b\color{black}_r) \end{aligned}

which is itself a complex number. So $\mathbb C$ is closed under multiplication. And, since it can be expressed strictly in terms of the coefficients of the real and imaginary components, we never actually need to explicitly model the basis variable $\color{purple}\mathbf i \color {black}$ in our computations. In the reduced form of an algebraic expression, it's just used for notation. The rest of the arithmetic operations we've been concerned with in our table so far are also closed under the complex numbers:
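Python's built-in `complex` type implements exactly this coefficient formula, so we can spot-check the expansion (illustrative only):

```python
# Manually apply (ar*br - ai*bi) + (ar*bi + ai*br)i and compare
# against the language's built-in complex multiplication.
a, b = 1 + 2j, 3 + 4j

manual = complex(a.real * b.real - a.imag * b.imag,
                 a.real * b.imag + a.imag * b.real)

print(a * b)            # (-5+10j)
print(a * b == manual)  # True
```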

| family / operation | $+$ | $\times$ | $-$ | $/$ | $\text{\textasciicircum}$ |
| --- | --- | --- | --- | --- | --- |
| $\N : 0,1,2,3,...$ | $\N$ | $\N$ | $\Z$ | $\mathbb Q$ | $\N$ |
| $\Z : ...,-2,-1,0,1,2,...$ | $\Z$ | $\Z$ | $\Z$ | $\mathbb Q$ | $\mathbb Q$ |
| $\mathbb Q: \frac{0}{1}, \frac{1}{1}, \frac{1}{2}, \frac{1}{3}, ...$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb C$ |
| $\mathbb R: \sqrt 2$ | $\R$ | $\R$ | $\R$ | $\R$ | $\mathbb C$ |
| $\mathbb C: \color{purple}\mathbf i, \mathbf i^2 \color {black}$ | $\color{purple} \mathbb C$ | $\color{purple} \mathbb C$ | $\color{purple}\mathbb C$ | $\color{purple}\mathbb C$ | $\color{purple}\mathbb C$ |

Worth noting, the Complex numbers form the only "complete" algebra in this table, since they are closed under all 5 of these operations. Can we generalize anything useful from the interpretation of the Complex numbers as vectors in the 2-dimensional plane to $\R^3$?

## From $\mathbb C$ to $\R^3$

Armed with the knowledge that

$\mathbf {z} = (z_r, z_i) = z_r + z_i \color{purple} \mathbf i \color{black} \equiv v_x\color{red}\mathbf{x}\color{black} + v_y \color{green}\mathbf{y}\color{black} = (v_x, v_y) = \mathbf v$

That is, a complex number is equivalently expressed as the sum of vector components, each multiplied by a symbol representing its corresponding basis vector or axis – so we can use the same expression of a vector as a linear combination for higher-dimensional vectors:

$\mathbf v = (v_x, v_y, v_z) = v_x\color{red}\mathbf{x}\color{black} + v_y \color{green}\mathbf{y}\color{black} + v_z \color{blue}\mathbf{z}\color{black}$

And, jumping through similar hoops as before, we can convince ourselves that addition and subtraction are closed under $\R^3$, as are scalar multiplication and division, but what about vector multiplication...

\begin{aligned} \color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black} = &(\color{red}u\color{black}_x\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_y\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_z\color{blue}\mathbf{z}\color{black}) (\color{blue}v\color{black}_x\color{red}\mathbf{x}\color{black} + \color{blue}v\color{black}_y\color{green}\mathbf{y}\color{black} + \color{blue}v\color{black}_z\color{blue}\mathbf{z}\color{black}) \\ = &\color{red}u\color{black}_x\color{blue}v\color{black}_x\color{red}\mathbf{xx}\color{black} + \color{red}u\color{black}_x\color{blue}v\color{black}_y\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{red}\mathbf{x}\color{blue}\mathbf{z}\color{black} + \\ &\color{red}u\color{black}_y\color{blue}v\color{black}_x\color{green}\mathbf{y}\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_y\color{blue}v\color{black}_y\color{green}\mathbf{yy}\color{black} + \color{red}u\color{black}_y\color{blue}v\color{black}_z\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black} + \\ &\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{blue}\mathbf{z}\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_z\color{blue}v\color{black}_z\color{blue}\mathbf{zz}\color{black} \\ \end{aligned}

but this isn't really a vector anymore... We can't combine like terms to express the result in terms of just the coefficients of the basis vectors $\{ \color{red}\mathbf{x}\color{black}, \color{green}\mathbf{y}\color{black}, \color{blue}\mathbf{z}\color{black} \}$. We've got those scrungly terms involving two basis vectors at once, which have no analogue in $\mathbb C$. So we can conclude that $\R^3$ isn't closed under vector multiplication as we'd like to define it – so are we just done then?

## Quaternions

No, lol, of course not! Returning to axiomatic truths about vectors, another useful definition for vector exponentiation gives us some outs:

$\mathbf v^2 = \Vert \mathbf v \Vert^2$

A vector times itself is equal to its magnitude squared. Does this hold for arbitrary vectors? For example

\begin{aligned} \mathbf u &= (1,2,3) \\ \mathbf u^2 &= (1,2,3)^2 \\ &= (\sqrt{1^2 + 2^2 + 3^2})^2 \\ &= 1^2 + 2^2 + 3^2 = 14 \\ \end{aligned}
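The arithmetic above is quick to verify (a throwaway sketch):

```python
import math

u = (1, 2, 3)
norm_sq = sum(ui * ui for ui in u)   # u dotted with itself
norm = math.sqrt(norm_sq)            # ||u|| = sqrt(14)

print(norm_sq)                           # 14
print(math.isclose(norm ** 2, norm_sq))  # True: squaring undoes the root
```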

works for an element of $\R^3$, but what about our basis vectors for $\R^3$ itself? By definition these basis vectors have unit length (1), so squaring them yields unit length as well:

\begin{aligned} \R^3 &= ( \color{red}\mathbf{x}\color{black}, \color{green}\mathbf{y}\color{black}, \color{blue}\mathbf{z}\color{black} ),\\ \color{red}\mathbf{xx}\color{black} &= \color{red}\mathbf{x}\color{black}^2 = 1^2 = 1 \\ \color{green}\mathbf{yy}\color{black} &= \color{green}\mathbf{y}\color{black}^2 = 1^2 = 1 \\ \color{blue}\mathbf{zz}\color{black} &= \color{blue}\mathbf{z}\color{black}^2 = 1^2 = 1 \\ \end{aligned}

which allows us to eliminate some terms from our vector product $\color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black}$ in $\R^3$ from earlier:

\begin{aligned} \color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black} = &(\color{red}u\color{black}_x\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_y\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_z\color{blue}\mathbf{z}\color{black}) (\color{blue}v\color{black}_x\color{red}\mathbf{x}\color{black} + \color{blue}v\color{black}_y\color{green}\mathbf{y}\color{black} + \color{blue}v\color{black}_z\color{blue}\mathbf{z}\color{black}) \\ = &\color{red}u\color{black}_x\color{blue}v\color{black}_x\color{red} \cancel{\color{red}\mathbf{xx}\color{black}} + \color{red}u\color{black}_x\color{blue}v\color{black}_y\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{red}\mathbf{x}\color{blue}\mathbf{z}\color{black} + \\ &\color{red}u\color{black}_y\color{blue}v\color{black}_x\color{green}\mathbf{y}\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_y\color{blue}v\color{black}_y\color{green}\color{red} \cancel{\color{green}\mathbf{yy}\color{black}} + \color{red}u\color{black}_y\color{blue}v\color{black}_z\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black} +\\ &\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{blue}\mathbf{z}\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_z\color{blue}v\color{black}_z\color{blue}\color{red} \cancel{\color{blue}\mathbf{zz}\color{black}} \\ \\ = &\color{red}u\color{black}_x\color{blue}v\color{black}_x + \color{red}u\color{black}_y\color{blue}v\color{black}_y + \color{red}u\color{black}_z\color{blue}v\color{black}_z + \\ &\color{red}u\color{black}_y\color{blue}v\color{black}_z\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black} + \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{blue}\mathbf{z}\color{green}\mathbf{y}\color{black} + \\ 
&\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{red}\mathbf{x}\color{blue}\mathbf{z}\color{black} + \\ & \color{red}u\color{black}_x\color{blue}v\color{black}_y\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_y\color{blue}v\color{black}_x\color{green}\mathbf{y}\color{red}\mathbf{x}\color{black} \\ \end{aligned}

which is marginally better, but we need to squeeze a little more juice out of our orthonormal basis vectors to clean up the rest of the riff raff.

### little juice, little squeeze

If we consider just two axes $\{ \color{red}\mathbf{x}\color{black}, \color{green}\mathbf{y}\color{black} \}$, we know from the Pythagorean theorem that their diagonal sum $\color{red}\mathbf{x}\color{black} + \color{green}\mathbf{y}\color{black}$ has a length of $\sqrt 2$:

\begin{aligned} \Vert \color{red}\mathbf{x}\color{black} + \color{green}\mathbf{y}\color{black} \Vert &= \sqrt{1^2 + 1^2} = \sqrt 2 \\ \mathbf v^2 &= \Vert \mathbf v \Vert^2 \\ (\color{red}\mathbf{x}\color{black} + \color{green}\mathbf{y}\color{black})^2 &= (\sqrt 2)^2 \\ \color{red}\mathbf {xx} \color{black} + \color{red}\mathbf {x}\color{green}\mathbf {y} \color{black} + \color{green}\mathbf {y}\color{red}\mathbf {x} \color{black} + \color{green}\mathbf {yy}\color{black} &= 2\\ 1 + \color{red}\mathbf {x}\color{green}\mathbf {y} \color{black} + \color{green}\mathbf {y}\color{red}\mathbf {x} \color{black} + 1 &= 2 \\ \color{red}\mathbf {x}\color{green}\mathbf {y} \color{black} + \color{green}\mathbf {y}\color{red}\mathbf {x} \color{black} &= 0 \\ \color{red}\mathbf {x}\color{green}\mathbf {y} \color{black} &= - \color{green}\mathbf {y}\color{red}\mathbf {x} \color{black} \\ \end{aligned}

we can reduce pretty handily down to this useful corollary. Notice that we didn't combine the middle terms in the expansion, since we have yet to prove that vector multiplication is commutative (it is not).

We can return to our mountain of terms and use this identity to make it more tractable. We can take each term in our "right column" and substitute it with its swapped and negated counterpart:

\begin{aligned} \color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black} = &\color{red}u\color{black}_x\color{blue}v\color{black}_x + \color{red}u\color{black}_y\color{blue}v\color{black}_y + \color{red}u\color{black}_z\color{blue}v\color{black}_z + \\ &\color{red}u\color{black}_y\color{blue}v\color{black}_z\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black} + \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{blue}\mathbf{z}\color{green}\mathbf{y}\color{black} + \\ &\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black} + \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{red}\mathbf{x}\color{blue}\mathbf{z}\color{black} + \\ & \color{red}u\color{black}_x\color{blue}v\color{black}_y\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} + \color{red}u\color{black}_y\color{blue}v\color{black}_x\color{green}\mathbf{y}\color{red}\mathbf{x}\color{black} \\ \\ \implies & \color{red}u\color{black}_x\color{blue}v\color{black}_x + \color{red}u\color{black}_y\color{blue}v\color{black}_y + \color{red}u\color{black}_z\color{blue}v\color{black}_z + \\ &\color{red}u\color{black}_y\color{blue}v\color{black}_z\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black} - \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black} + \\ &\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black} - \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black} + \\ & \color{red}u\color{black}_x\color{blue}v\color{black}_y\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} - \color{red}u\color{black}_y\color{blue}v\color{black}_x\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} \\ \\ \implies & \color{red}u\color{black}_x\color{blue}v\color{black}_x + \color{red}u\color{black}_y\color{blue}v\color{black}_y + 
\color{red}u\color{black}_z\color{blue}v\color{black}_z + \\ &\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black}(\color{red}u\color{black}_y\color{blue}v\color{black}_z - \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{black}) + \\ &\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black}(\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{black} - \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{black}) + \\ & \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black}(\color{red}u\color{black}_x\color{blue}v\color{black}_y\color{black} - \color{red}u\color{black}_y\color{blue}v\color{black}_x) \\ \end{aligned}

which is (drum roll) the dot product added to the cross product? "Somehow" emerging out of nowhere. Well, not quite nowhere. And we still have these nebulous basis products that don't make any sense on their own either...
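In code, the surviving terms are just a scalar part plus three bivector coefficients – and those coefficients happen to match the cross product's components exactly. A sketch (the name `geometric_product` is my own, not standard API):

```python
def geometric_product(u, v):
    """Return (scalar part, (yz, zx, xy) bivector coefficients)
    of the product uv in R^3, per the expansion above."""
    scalar = u[0] * v[0] + u[1] * v[1] + u[2] * v[2]   # the dot product
    yz = u[1] * v[2] - u[2] * v[1]                     # cross product
    zx = u[2] * v[0] - u[0] * v[2]                     # components,
    xy = u[0] * v[1] - u[1] * v[0]                     # term for term
    return scalar, (yz, zx, xy)

print(geometric_product((1, 2, 3), (4, 5, 6)))  # (32, (-3, 6, -3))
```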

### actually Quaternions

To "fix/eliminate" these terms, what if we just repeat what's been working for us so far:

\begin{aligned} (\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black})^2 &= \color{red}\mathbf{x}\color{black}\underbrace{\color{green}\mathbf{y}\color{red}\mathbf{x}}_{\color{black}-\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black}}\color{green}\mathbf{y}\\ &= -\color{red}\mathbf{xx}\color{green}\mathbf{yy}\color{red} \\ &= -1 \cdot 1 \\ &= -1 \end{aligned}

(we can only make this substitution with orthonormal vectors – not least because we're guaranteed that they're non-zero). But hmmmm, we have a thing we can square which gives us $-1$, you say???? That's just $\color{purple} \mathbf i$, and the same holds for the other combinations of our basis vectors too:

$(\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black})^2 = (\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black})^2 = (\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black})^2 = (\color{red}\mathbf{x}\color{green}\mathbf{y}\color{black})(\color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black})(\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black}) = -1$

which does now look vaguely familiar as long-suppressed physics lectures come surging to memory. These basis vector combinations behave identically to

$\mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = \mathbf{ijk} = -1$

which is the definition of quaternions.
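We can sanity-check the defining relation with a bare-bones Hamilton product, representing quaternions as `(w, i, j, k)` coefficient tuples (a hypothetical sketch of my own, not a library API):

```python
def qmul(p, q):
    """Hamilton product of two quaternions given as (w, i, j, k) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

print(qmul(i, i) == minus_one)           # True: i^2 = -1
print(qmul(qmul(i, j), k) == minus_one)  # True: ijk = -1
```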

### Anotha one

Returning to the tabular classification of algebras, we can answer the question of which family of numbers $\R^3$ multiplication is closed under. The answer is not one I would have guessed in a vacuum (pun intended): it's quaternions, denoted with a blackboard-bold $\mathbb H$ for "Hamilton":

| family / operation | $+$ | $\times$ | $-$ | $/$ | $\text{\textasciicircum}$ |
| --- | --- | --- | --- | --- | --- |
| $\N : 0,1,2,3,...$ | $\N$ | $\N$ | $\Z$ | $\mathbb Q$ | $\N$ |
| $\Z : ...,-2,-1,0,1,2,...$ | $\Z$ | $\Z$ | $\Z$ | $\mathbb Q$ | $\mathbb Q$ |
| $\mathbb Q: \frac{0}{1}, \frac{1}{1}, \frac{1}{2}, \frac{1}{3}, ...$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb Q$ | $\mathbb C$ |
| $\mathbb R: \sqrt 2$ | $\R$ | $\R$ | $\R$ | $\R$ | $\mathbb C$ |
| $\mathbb C: \color{purple}\mathbf i, \mathbf i^2 \color {black}$ | $\color{purple} \mathbb C$ | $\color{purple} \mathbb C$ | $\color{purple}\mathbb C$ | $\color{purple}\mathbb C$ | $\color{purple}\mathbb C$ |
| $\R^3: (1.5, 2, -3)$ | $\R^3$ | $\mathbb H$ | $\R^3$ | $\R^3$ | $\mathbb H$ |

So 3D vector multiplication in this manner yields a quaternion $\color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black} \in \mathbb H$ with a basis of $\{1, \color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black}, \color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black}, \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} \}$

### Bivectors

What about two dimensions? Vector multiplication in two dimensions is equivalent to the same in $\R^3$, just with the $\color{blue}\mathbf z$ components set to 0. Trying this, we get:

\begin{aligned} \color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black} = & \color{red}u\color{black}_x\color{blue}v\color{black}_x + \color{red}u\color{black}_y\color{blue}v\color{black}_y + \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black}(\color{red}u\color{black}_x\color{blue}v\color{black}_y\color{black} - \color{red}u\color{black}_y\color{blue}v\color{black}_x) \\ \end{aligned}

which has a basis of $\{1, \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} \}$. In order to understand what these terms mean, and why we see artifacts of the cross product in our "just" product, we can take a closer look at the properties of the cross product in two dimensions. The cross product is commonly understood to be the vector that is normal to the plane formed by its inputs, and its magnitude will be the area of the parallelogram they form.

Consider the planes formed by these terms in our basis:

In the basis for our 2D model, $\{1, \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} \}$, the janky second term describes a unit plane with area 1. In the three-dimensional model, the three planar terms define a unit cube. These planar terms are called bivectors – or, in our particular unit-sized case, basis bivectors – each having an area of 1 and being mutually orthogonal.

So what the heck is a bivector?

Just a vector that did some experimenting in college?

To improve our geometric understanding, we can consider an oriented shape (e.g. spinning about some axis) in $\R^3$. This shape can be described / modeled as the shadow that it casts onto the planes defined by the basis bivectors:

this like roughly captures the image in my mind (which may not be correct at all tbqh) but google images for "bivector in 3D space" is a frickin warzone of confusing diagrams and memes about how confusing the diagrams are. There's also this runescape-tier illustration of the somatic components of the fireball spell (shoutout to EuclidanSpace.org):

Because our closed shape casts a shadow, and because it's oriented, the area of that shadow can be negative. So even though the vector-like object

$\begin{bmatrix} \color{blue}0.21 \\ \color{red}0.24 \\ \color{green}-0.14 \end{bmatrix}$

looks identical to a regular vector in $\R^3$, we actually interpret these numbers as the area of the projection of our oriented surface onto the three bivectors defining our space. Algebraically, this is a bit more obvious because of those formerly janky-seeming basis variables that must be included/expressed in the linear combinations of bivectors: $\{\color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black}, \color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black}, \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black} \}$ which are pretty hard to mistake for just $\{ \color{red}\mathbf{x}\color{black}, \color{green}\mathbf{y}\color{black},\color{blue}\mathbf{z}\color{black} \}$.

### The Cross Product, revisited

Taking another look at the cross product:

$\color{red}\mathbf{u}\color{black} \times \color{blue}\mathbf{v}\color{black} = \color{red}\mathbf{x}\color{black}(\color{red}u\color{black}_y \color{blue}v\color{black}_z - \color{red}u\color{black}_z\color{blue}v\color{black}_y) + \color{green}\mathbf{y}\color{black}(\color{red}u\color{black}_z\color{blue}v\color{black}_x - \color{red}u\color{black}_x\color{blue}v\color{black}_z) + \color{blue}\mathbf{z}\color{black}(\color{red}u\color{black}_x\color{blue}v\color{black}_y - \color{red}u\color{black}_y\color{blue}v\color{black}_x)$

This is a pseudo-vector: not quite a fully-fledged vector, because it doesn't retain all of the transformational properties we'd expect it to [//TODO: link]. Additionally, this resultant vector only "works" in 3 and 7 dimensions – or rather, the properties we expect of a cross product are only all true in three dimensions; some of them hold in 7 dimensions, but there is also no unique cross product in that many dimensions, so the notion of orthogonality becomes less precise.

Specifically, we'd like a higher-dimensional cross product to retain the desired properties of:

1. If $\mathbf c = \mathbf a \times \mathbf b$, then $\mathbf c \cdot \mathbf a = \mathbf c \cdot \mathbf b = 0$
2. $\mathbf a \times \mathbf b = - \mathbf b \times \mathbf a$
3. $\Vert \mathbf a \times \mathbf b \Vert$ is the area of the parallelogram spanned by $\mathbf a$ and $\mathbf b$
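All three properties are easy to sanity-check numerically in 3D, using the component formula from earlier; the area in the third property comes out via the Lagrange identity $\Vert \mathbf a \times \mathbf b \Vert^2 = \Vert \mathbf a \Vert^2 \Vert \mathbf b \Vert^2 - (\mathbf a \cdot \mathbf b)^2$ (a quick sketch):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1, 2, 3), (4, 5, 6)
c = cross(a, b)

print(dot(c, a), dot(c, b))                 # 0 0  (property 1)
print(cross(b, a) == tuple(-x for x in c))  # True (property 2)
area_sq = dot(a, a) * dot(b, b) - dot(a, b) ** 2  # Lagrange identity
print(math.isclose(dot(c, c), area_sq))     # True (property 3)
```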

In higher dimensions, there will be more than one vector which satisfies most of these requirements (though not the Jacobi identity), so which one do we choose? In 7 dimensions, there are 480 cross products with the feature that $e_i \times e_j = \pm e_k,\ i \neq j$ for basis vector indices $1 \le i, j, k \le 7$.

The dot and the cross product are (up to sign) the real and imaginary components of a quaternion product. For these operations, we can't keep "going up" (though damnit if Cayley didn't try) indefinitely. Mathematical objects like the real numbers, complex numbers, quaternions, and octonions (the next step "up") are called division algebras. At each higher dimension, the algebra loses properties.

> There are exactly four normed division algebras: the real numbers ($\R$), complex numbers ($\mathbb C$), quaternions ($\mathbb H$), and octonions ($\mathbb O$). The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on. The complex numbers are a slightly flashier but still respectable younger brother: not ordered, but algebraically complete. The quaternions, being noncommutative, are the eccentric cousin who is shunned at important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic: they are nonassociative.

which is a wonderful quote from the first page of Baez's survey of the Octonions. I lasted approximately 5 pages before he delved entirely too deep into occult diagrams and $\LaTeX$ symbols I've never even seen before. Thus ends my larp as a mathematician.

The Wikipedia page for the seven-dimensional cross product was in fact my initial entrypoint into this whole blog post. As mentioned, I was doing additional reading about Cayley after some exposure to his lizard brain discussed in my last post. Turns out he was rather involved, as was Conway.

### u ever tried putting with a wedgie?

The product we've been exploring up to this point, which produces a bivector, is called the wedge product:

\begin{aligned} \color{red}\mathbf{u}\color{black} \wedge \color{blue}\mathbf{v}\color{black} = \color{green}\mathbf{y}\color{blue}\mathbf{z}\color{black}(\color{red}u\color{black}_y\color{blue}v\color{black}_z - \color{red}u\color{black}_z\color{blue}v\color{black}_y\color{black}) + \color{blue}\mathbf{z}\color{red}\mathbf{x}\color{black}(\color{red}u\color{black}_z\color{blue}v\color{black}_x\color{black} - \color{red}u\color{black}_x\color{blue}v\color{black}_z\color{black}) + \color{red}\mathbf{x}\color{green}\mathbf{y}\color{black}(\color{red}u\color{black}_x\color{blue}v\color{black}_y\color{black} - \color{red}u\color{black}_y\color{blue}v\color{black}_x) \\ \end{aligned}

It shares the same coefficients as the cross product, but has a categorically different meaning because of the basis! The wedge product generalizes to any dimension as well, not just 3 and 7. The general form of vector multiplication we've been striving towards, therefore, is not the combination of the dot and the cross products, but rather the dot and the wedge:

$\color{red}\mathbf{u}\color{blue}\mathbf{v}\color{black} = \color{red}\mathbf{u}\color{black} \cdot \color{blue}\mathbf{v}\color{black} + \color{red}\mathbf{u}\color{black} \wedge \color{blue}\mathbf{v}\color{black}$
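Unlike the cross product, this decomposition computes in any dimension: the wedge part is just one coefficient per basis plane $(i, j)$. A hedged sketch (the helper `wedge` is my own invention):

```python
from itertools import combinations

def wedge(u, v):
    """Bivector coefficients of u ^ v in any dimension:
    one coefficient u_i*v_j - u_j*v_i per basis plane (i, j), i < j."""
    n = len(u)
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i, j in combinations(range(n), 2)}

# In 4D there are C(4,2) = 6 basis bivectors -- no cross product needed.
u, v = (1, 0, 2, -1), (0, 3, 1, 1)
print(wedge(u, v))
print(wedge(v, u)[(0, 1)])  # sign flips: the wedge is anticommutative
```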

## Bivectors hoo hah, what are they good for?

Despite an existential quip seen in a caustic rebuttal on Math Stack Exchange – that the only naturally occurring example of quaternions is in their definition – the wedge product has numerous practical uses, including measuring curvature, spacetime fields, linear dependence in higher dimensions, etc., that are better detailed in a fully-fledged Geometric Algebra course or the wiki subsections.

## Footnotes

1. There's too many nested definitions at play already; welcome to a footnote. A homogeneous system of linear equations is one where all constant terms are zero.

2. Usually, this means a solution other than the zero vector $\mathbf 0 = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \end{bmatrix}$