Notes on Information Theory
Notes on Information Theory from various sources
Lectures
1 | Introduction to Coding Theory
- Shannon's seminal paper "A Mathematical Theory of Communication" (1948) measured the information content in the output of a random source in terms of its entropy
- In it, he presented the Noiseless Coding Theorem: given $n$ independent and identically distributed random variables, each with entropy $H(X)$,
  - they can be compressed into $n(H(X) + \epsilon)$ bits with negligible loss of information as $n \to \infty$
  - or alternatively compressed into $n(H(X) - \epsilon)$ bits entailing certain loss of information
- Given a noisy communication channel whose behavior is given by a stochastic channel law, and considering a message of $k$ bits output by a source coder, the theorem states that every such channel has a precisely defined real-valued capacity $C$ that quantifies the maximum rate of reliable communication
- Given a noisy channel with capacity $C$, if information is transmitted at rate $R$ (meaning $k = nR$ message bits are transmitted in $n$ uses of the channel), then if $R < C$, there exist coding schemes (encoding and decoding pairs) that guarantee negligible probability of miscommunication, whereas if $R > C$, then, regardless of the coding scheme, the probability of an error at the receiver is bounded below by some constant (which increases with $R$)
- In fact, when $R > C$, the probability of miscommunication goes exponentially to 1 in the block length $n$
1 - A Simple Code
Suppose we need to store 64-bit words in such a way that they can be recovered even if a single bit per word gets flipped
- We could duplicate each bit three times, thus storing only about 21 bits of information per 64-bit word, limiting ourselves to roughly 1/3 of the capacity. The payoff is that we can correct any single bit flip, since the majority value of the three copies gives the correct value of the bit if one of them is flipped.
In 1949, Hamming proffered his code to correct 1-bit errors using fewer redundant bits. Chunks of 4 information bits $x_1, x_2, x_3, x_4$ get mapped (encoded) to a codeword of 7 bits as

$$x_1,\; x_2,\; x_3,\; x_4,\; x_2 \oplus x_3 \oplus x_4,\; x_1 \oplus x_3 \oplus x_4,\; x_1 \oplus x_2 \oplus x_4$$

which can be equivalently represented as a mapping from $\{0,1\}^4$ to $\{0,1\}^7$ (operations done modulo 2) of the form $x \mapsto Gx$, where $x$ is the column vector $(x_1, x_2, x_3, x_4)^T$ and $G$ is the matrix

$$G = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \end{bmatrix}$$

- Two distinct 4-bit vectors $x$ and $y$ get mapped to codewords $Gx$ and $Gy$ which differ in at least 3 bits (define $w = x - y \neq 0$ and check that, for each non-zero $w$, $Gw$ has at least 3 bits which are 1)
- This code can correct all single bit flips since, for any 7-bit vector, there is at most 1 codeword which can be obtained by a single bit flip
- For any $y \in \{0,1\}^7$ which is not a codeword, there is always a codeword which can be obtained by a single bit flip
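To make this concrete, here's a minimal Python sketch of the $[7,4]$ encoding map above (pure Python, arithmetic mod 2, using the generator matrix $G$ just given), along with a brute-force check that distinct codewords differ in at least 3 bits:

```python
from itertools import product

# Generator matrix of the [7,4] Hamming code (rows of G, as given above).
G = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
]

def encode(x):
    """Map a 4-bit message x to its 7-bit codeword Gx (mod 2)."""
    return [sum(g * xi for g, xi in zip(row, x)) % 2 for row in G]

codewords = [encode(x) for x in product([0, 1], repeat=4)]
min_dist = min(
    sum(a != b for a, b in zip(c1, c2))
    for i, c1 in enumerate(codewords)
    for c2 in codewords[i + 1:]
)
print(min_dist)  # -> 3
```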
Basic Definitions
Hamming Distance
The Hamming distance between two strings $x$ and $y$ of the same length over the alphabet $\Sigma$ is denoted $\Delta(x, y)$ and defined as the number of positions at which $x$ and $y$ differ:

$$\Delta(x, y) = |\{i \mid x_i \neq y_i\}|$$

The fractional or relative Hamming distance between $x, y \in \Sigma^n$ is given by

$$\delta(x, y) = \frac{\Delta(x, y)}{n}$$
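These definitions translate directly into a couple of one-liners; a small Python sketch:

```python
def hamming_distance(x, y):
    """Number of positions at which two equal-length strings differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def relative_distance(x, y):
    """Hamming distance normalized by the string length."""
    return hamming_distance(x, y) / len(x)

print(hamming_distance("10110", "10011"))   # -> 2
print(relative_distance("10110", "10011"))  # -> 0.4
```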
Hamming Weight
The Hamming weight of a string $x$ over alphabet $\Sigma$ (with a distinguished zero symbol) is the number of non-zero symbols:

$$wt(x) = |\{i \mid x_i \neq 0\}| = \Delta(x, \mathbf{0})$$

Hamming Ball
A Hamming ball of radius $r$ around a string $x \in \Sigma^n$ is the set

$$B(x, r) = \{y \in \Sigma^n \mid \Delta(x, y) \leq r\}$$
Code
An error correcting code $C$ of block length $n$ over a finite alphabet $\Sigma$ is a subset of $\Sigma^n$.
- Elements of $C$ are called codewords, and the collection of all the codewords is sometimes called a codebook.
- The alphabet of $C$ is $\Sigma$, and if $|\Sigma| = q$, we say that $C$ is $q$-ary (binary for $q = 2$, ternary for $q = 3$, etc.)
- The length $n$ of codewords in $C$ is called the block length of $C$
- Associated with a code is also an encoding map $E : \mathcal{M} \to \Sigma^n$ which maps the message set $\mathcal{M}$, identified in some canonical way with $\{1, 2, \dots, |C|\}$, to codewords in $\Sigma^n$
- The code $C$ is then the image of the encoding map $E$
Rate
The rate of a code measures the amount of redundancy it introduces:

$$R(C) = \frac{\log |C|}{n \log |\Sigma|}$$

which is the amount of non-redundant information per bit in codewords of $C$.
- The dimension of $C$ is given by $\frac{\log |C|}{\log |\Sigma|}$
- A $q$-ary code of dimension $k$ has $q^k$ codewords
Distance
- The distance of an error correcting code is a measure of its error resilience, quantified in terms of how many errors need to be introduced to confuse one codeword with another
- The minimum distance of a code $C$, denoted $\Delta(C)$, is the minimum Hamming distance between two distinct codewords of $C$:

$$\Delta(C) = \min_{\substack{c_1, c_2 \in C \\ c_1 \neq c_2}} \Delta(c_1, c_2)$$

- For every pair of distinct codewords in $C$, the Hamming distance is at least $\Delta(C)$
- The relative distance of $C$, denoted $\delta(C)$, is the normalized quantity $\frac{\Delta(C)}{n}$, where $n$ is the block length of $C$. Thus, any two codewords of $C$ differ in at least a fraction $\delta(C)$ of positions
- For example, the parity check code, which maps $k$ bits to $k + 1$ bits by appending the parity of the message bits, is an example of a distance 2 code with rate $\frac{k}{k+1}$
Properties of Codes
- $C$ has a minimum distance of $2t + 1$
- $C$ can be used to correct all $\leq t$ symbol errors
- $C$ can be used to detect all $\leq 2t$ symbol errors
- $C$ can be used to correct all $\leq 2t$ symbol erasures – that is, some symbols are erased and we can determine the locations of all erasures and fill them in using the remaining filled positions and the redundancy of the code
An Aside about Galois Fields
- A finite or Galois field is a finite set on which addition, subtraction, multiplication, and division are defined and behave analogously to the same operations on the rationals and reals
- A field is denoted as $\mathbb{F}_q$ or $GF(q)$, where $q$ is the order or size of the field, given by the number of elements within
- A finite field of order $q$ exists if and only if $q$ is a prime power $p^k$, where $p$ is prime and $k$ is a positive integer
- In a field of order $p^k$, adding $p$ copies of any element always results in zero, meaning that the characteristic of the field is $p$
- All fields of a given order are isomorphic, so we can unambiguously refer to them as $\mathbb{F}_q$
- $\mathbb{F}_2$, then, is the unique field with the least number of elements, whose additive and multiplicative identities are denoted 0 and 1 respectively
- This should look familiar from bitwise logic, as these identities correspond to modulo 2 arithmetic and the boolean XOR (addition) and AND (multiplication) operations:
| XOR | 0 | 1 |
|---|---|---|
| **0** | 0 | 1 |
| **1** | 1 | 0 |

| AND | 0 | 1 |
|---|---|---|
| **0** | 0 | 0 |
| **1** | 0 | 1 |
- Other properties of binary fields
- Every element $x$ of $\mathbb{F}_2$ satisfies $x + x = 0$
- Every element $x$ of $\mathbb{F}_2$ satisfies $x^2 = x$
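A quick sanity check of these identities in Python, treating XOR as addition and AND as multiplication over $\mathbb{F}_2$:

```python
# GF(2) arithmetic: addition is XOR, multiplication is AND.
def add(a, b):
    return a ^ b

def mul(a, b):
    return a & b

for x in (0, 1):
    assert add(x, x) == 0  # every element is its own additive inverse
    assert mul(x, x) == x  # x^2 = x
print("GF(2) identities hold")
```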
3 - Linear Codes
General codes may have no structure, but of particular interest to us is a subclass of codes with additional structure, called linear codes, defined over alphabets $\Sigma$ which are finite fields $\mathbb{F}_q$. The simplest case is $q$ prime, in which case $\mathbb{F}_q = \{0, 1, \dots, q-1\}$ with addition and multiplication defined modulo $q$.
- Formally, if $\Sigma$ is a field and $C \subseteq \Sigma^n$ is a subspace of $\Sigma^n$, then $C$ is said to be a linear code
- As $C$ is a subspace, there exists a basis $c_1, c_2, \dots, c_k$, where $k$ is the dimension of the subspace. Any codeword can be expressed as a linear combination of these basis vectors. We can write these vectors in matrix form as the columns of an $n \times k$ matrix called a generator matrix
Generator Matrices and Encoding
- Let $C \subseteq \mathbb{F}_q^n$ be a linear code of dimension $k$. A matrix $G \in \mathbb{F}_q^{n \times k}$ is said to be a generator matrix for $C$ if its $k$ columns span $C$
- It provides a way to encode a message $x \in \mathbb{F}_q^k$ (a column vector) as the codeword $Gx \in C$, a linear transformation from $\mathbb{F}_q^k$ to $\mathbb{F}_q^n$
- Note that a linear code admits many different generator matrices, corresponding to the different choices of basis for the code as a vector space
- A $q$-ary linear code of block length $n$ and dimension $k$ will be referred to as an $[n, k]_q$ code
- If the code additionally has distance $d$, it may be expressed as $[n, k, d]_q$; when the alphabet size is obvious or irrelevant, the subscript may be omitted
Aside about Parity-Bit Check Codes
A simple parity-bit check code uses an additional bit denoting the even or odd parity of a message's Hamming weight, which is appended to the message to serve as a mechanism for error detection (but not correction, since it provides no means of determining the position of the error).
- Such methods are inadequate for multi-bit errors of matching parity, e.g., 2 bit flips might still pass a parity check despite a corrupted message
| data | Hamming weight | code (even parity) | code (odd parity) |
|---|---|---|---|
| 0000000 | 0 | 00000000 | 00000001 |
| 1011000 | 3 | 10110001 | 10110000 |
| 1111000 | 4 | 11110000 | 11110001 |
| 1111111 | 7 | 11111111 | 11111110 |
Application
- Suppose Alice wants to send a message $m$
- She computes the parity of her message: $p = \bigoplus_i m_i$
- And appends it to her message: $m' = m \| p$
- And transmits it to Bob
- Bob receives $m'$ with questionable integrity and computes the parity for himself: $\bigoplus_i m'_i = 0$, observing the expected even-parity result
- This same process works under an odd-parity scheme as well
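A small Python sketch of the scheme (the message bits here are arbitrary, chosen for illustration):

```python
def append_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = 0
    for b in bits:
        parity ^= b
    return bits + [parity]

def passes_even_parity(bits):
    """True when the received word has an even number of 1s."""
    total = 0
    for b in bits:
        total ^= b
    return total == 0

sent = append_even_parity([1, 0, 0, 1, 1, 0, 1])
print(passes_even_parity(sent))       # -> True
sent[2] ^= 1                          # one bit flip: detected
print(passes_even_parity(sent))       # -> False
sent[5] ^= 1                          # a second flip restores parity: undetected
print(passes_even_parity(sent))       # -> True
```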
Examples of Codes
- $[n, n-1, 2]_2$ corresponds to the binary parity check code, consisting of all vectors in $\mathbb{F}_2^n$ of even Hamming weight
- $[n, 1, n]_2$ is a binary repetition code, consisting of the two vectors $0^n$ and $1^n$
- $[7, 4, 3]_2$ is the linear Hamming code discussed above
- If $H$ is the parity check matrix of a linear code $C$, then $\Delta(C)$ is the minimum number of columns of $H$ that are linearly dependent
Hamming Codes Revisited
- Hamming codes can be understood by the structure of their parity check matrices, which allows us to generalize them to codes of larger lengths
- $C_{Ham} = [7, 4, 3]_2$ is a Hamming code with the generator matrix $G$ given above and parity check matrix

$$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}$$

and one can verify that $HG = 0$, so $Hc = \mathbf{0}$ for every codeword $c = Gx$.
Correcting single errors with Hamming codes
- Suppose that $y$ is a corrupted version of some unknown codeword $c \in C$, with a single error (bit flip)
- We know that by the distance property of $C_{Ham}$, $c$ is uniquely determined by $y$. We could determine $c$ by flipping each bit of $y$ and checking if the resulting vector is in the null space of $H$
- Recall that the null space of a matrix $H$ is the set of all $n$-dimensional column vectors $v$ such that $Hv = 0$
- However, a more efficient correction can be performed. We know that $y = c + e_i$, where $e_i$ is the column vector of all zeros except for a single 1 in the $i$th position. Note that $Hy = H(c + e_i) = Hc + He_i = He_i$, the $i$th column of $H$, which is the binary representation of $i$; this recovers the location $i$ of the error
- Syndrome: We say that $Hy$ is the syndrome of $y$
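Here's a minimal Python sketch of syndrome decoding for the $[7,4,3]_2$ code, using the parity check matrix $H$ above (columns indexed from 1):

```python
# Column i of H (1-indexed) is the binary representation of i.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def correct_single_error(y):
    """Read the syndrome Hy as the 1-indexed error position and flip that bit."""
    syndrome = [sum(h * yj for h, yj in zip(row, y)) % 2 for row in H]
    i = syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2]
    y = y[:]
    if i:  # a zero syndrome means y is already a codeword
        y[i - 1] ^= 1
    return y

c = [0, 1, 1, 0, 0, 1, 1]            # a codeword: Hc = 0
y = c[:]
y[4] ^= 1                            # corrupt bit 5
print(correct_single_error(y) == c)  # -> True
```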
Generalized Hamming Codes
We define $H_r$ to be the $r \times (2^r - 1)$ matrix where column $i$ of $H_r$ is the binary representation of $i$. This matrix must contain the columns $e_1$ through $e_r$ (the binary representations of the powers of 2 from $1$ to $2^{r-1}$) and thus has full row rank $r$
- The $r$th generalized Hamming code, then, is given by

$$C_{Ham}^{(r)} = \{c \in \{0,1\}^{2^r - 1} \mid H_r c = 0\}$$

and is the binary code with the parity check matrix $H_r$
- Note that $C_{Ham}^{(r)}$ is a $[2^r - 1,\ 2^r - 1 - r,\ 3]_2$ code
- Proof:
- Since $H_r$ has rank $r$, the dimension is $2^r - 1 - r$; we must simply check that no two columns of $H_r$ are linearly dependent, and that there do exist three linearly dependent columns in $H_r$
- The former follows since all the columns are distinct, and the latter since the first three columns of $H_r$, being the binary representations of 1, 2, and 3, add up to 0.
If $C$ is a binary code of block length $n$ with minimum distance 3, then

$$|C| \leq \frac{2^n}{n + 1}$$

and it follows that the Hamming code $C_{Ham}^{(r)}$ has the maximum possible number of codewords (and thus, an optimal rate) amongst all binary codes of block length $2^r - 1$ and a minimum distance of at least 3.
- Proof:
- For each $c \in C$, define its neighborhood $N(c) = \{y \mid \Delta(y, c) \leq 1\}$ of all strings that differ in at most one bit from $c$, so that $|N(c)| = n + 1$.
- Since $C$ has distance 3, by the triangle inequality the neighborhoods are pairwise disjoint, and we have

$$2^n \geq \left| \bigcup_{c \in C} N(c) \right| = \sum_{c \in C} |N(c)| = |C| \cdot (n + 1)$$

yielding the desired upper bound on $|C|$ per this "Hamming volume" upper bound on the size of codes.
- Note that $C_{Ham}^{(r)}$ has a dimension of $2^r - 1 - r$ and therefore a size of $2^{2^r - 1 - r} = \frac{2^{2^r - 1}}{2^r}$, which meets the upper bound for block length $n = 2^r - 1$
- The general definition of the Hamming or volume bound for binary codes $C$ of block length $n$ and distance $d$ is given by

$$|C| \leq \frac{2^n}{\sum_{i=0}^{\lfloor (d-1)/2 \rfloor} \binom{n}{i}} \tag{2}$$

For equality to hold in (2), Hamming balls of radius $\lfloor (d-1)/2 \rfloor$ around the codewords must cover each point in $\{0,1\}^n$ exactly once, giving a perfect packing of non-overlapping Hamming spheres that cover the full space.
- Codes which meet the bound in (2) are therefore called perfect codes
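As a quick numeric check, the $[7,4,3]_2$ Hamming code meets this bound exactly:

```python
from math import comb

n, d = 7, 3
radius = (d - 1) // 2
ball_volume = sum(comb(n, i) for i in range(radius + 1))  # |B(x, 1)| = 1 + 7 = 8
print(2**n // ball_volume)  # -> 16 = 2^4, exactly the number of codewords
```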
Perfect Codes
There are few perfect binary codes:
- The Hamming codes $C_{Ham}^{(r)}$
- The Golay code $Gol_{23}$
- The trivial codes consisting of just 1 codeword, or the whole space, or the repetition code $\{0^n, 1^n\}$ for odd $n$
The Golay Code
Like the Hamming codes, the Golay code can be constructed in a number of ways. $Gol_{23}$ consists of the vectors $(c_0, c_1, \dots, c_{22}) \in \{0,1\}^{23}$ for which the polynomial $c(X) = \sum_{i=0}^{22} c_i X^i$ is divisible by $g(X) = X^{11} + X^{10} + X^6 + X^5 + X^4 + X^2 + 1$
in the ring $\mathbb{F}_2[X]/(X^{23} - 1)$.
3.3 Dual of a Code
Since a linear code is a subspace of $\mathbb{F}_q^n$, we can define its dual as the orthogonal subspace (a co-code, if you will)
- Dual code: If $C \subseteq \mathbb{F}_q^n$ is a linear code, then its dual, denoted $C^\perp$, is a linear code over $\mathbb{F}_q$ given by

$$C^\perp = \{z \in \mathbb{F}_q^n \mid \langle z, c \rangle = 0 \text{ for all } c \in C\}$$

where $\langle z, c \rangle = \sum_{i=1}^{n} z_i c_i$ is the dot product over $\mathbb{F}_q$ of vectors $z$ and $c$
- Unlike vector spaces over $\mathbb{R}$, where the dual (or orthogonal complement) $W^\perp$ of a subspace $W$ satisfies $W \cap W^\perp = \{0\}$ (and $W + W^\perp = \mathbb{R}^n$), for subspaces of $\mathbb{F}_q^n$, $C$ and $C^\perp$ can intersect non-trivially
- In fact, we can devise $C \subseteq C^\perp$ (a self-orthogonal code) or even $C = C^\perp$, a self-dual code
3.4 Dual of the Hamming Code
The dual of the Hamming code $C_{Ham}^{(r)}$ has a generator matrix $G_r = (H_r)^T$, a $(2^r - 1) \times r$ matrix whose rows are all non-zero bit vectors of length $r$. This yields a $[2^r - 1, r]_2$ code called the simplex code
The closely related Hadamard code $Had_r$ is obtained by adding the all-zeros row to $G_r$
$Had_r$ is a $[2^r, r]_2$ linear code whose $2^r \times r$ generator matrix has all $r$-bit vectors as its rows.
- Its encoding map encodes $x \in \{0,1\}^r$ by a string in $\{0,1\}^{2^r}$ consisting of the dot product $\langle x, a \rangle$ for every $a \in \{0,1\}^r$
- The Hadamard code can also be defined over $\mathbb{F}_q$, by encoding a message in $\mathbb{F}_q^r$ with its dot product with every other vector in $\mathbb{F}_q^r$
- The Hadamard code is the most redundant linear code in which no two codeword symbols are equal in every codeword, implying a robust distance property:
- The Hadamard and simplex codes have a minimum distance of $2^{r-1}$. The $q$-ary Hadamard code of dimension $r$ has a distance of $(1 - \frac{1}{q}) q^r$
- Proof: For non-zero $x$, $\langle x, a \rangle = 1$ for exactly $2^{r-1}$ (i.e. half) of the elements $a \in \{0,1\}^r$.
- Assume for definiteness that $x_1 = 1$.
- Then, for every $a$, $\langle x, a \rangle + \langle x, a + e_1 \rangle = x_1 = 1$, and therefore exactly one of $\langle x, a \rangle$ and $\langle x, a + e_1 \rangle$ equals 1. The proof for the $q$-ary case is similar.
- Binary codes cannot have a relative distance of more than $\frac{1}{2}$, unless they only have a fixed number of codewords. Thus the relative distance of Hadamard codes is optimal, but their rate is necessarily poor
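A brief Python sketch of the binary Hadamard encoding map (the message is arbitrary; its codeword has length $2^r$ and, as proved above, weight exactly $2^{r-1}$):

```python
from itertools import product

def hadamard_encode(x):
    """Encode an r-bit message as its dot products (mod 2) with all r-bit vectors."""
    return [sum(xi * ai for xi, ai in zip(x, a)) % 2
            for a in product([0, 1], repeat=len(x))]

codeword = hadamard_encode([1, 0, 1])
print(len(codeword))  # -> 8 = 2^3
print(sum(codeword))  # -> 4 = 2^2 ones: half the positions
```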
Code Families and Asymptotically Good Codes
- The Hamming and Hadamard codes exhibit two extreme trade-offs between rate and distance
- The Hamming codes' rates approach 1 (optimal), but their distance is only 3
- Conversely, the Hadamard codes have an optimal relative distance of $\frac{1}{2}$, but their rate approaches 0
- The natural question that follows is: is there a class of codes which have both good rate and good relative distance, such that neither approaches 0 for large block lengths?
- To answer this question, we consider the asymptotic behavior of code families:
- Define a code family $\mathcal{C}$ to be an infinite collection $\{C_i \mid i \in \mathbb{N}\}$, where $C_i$ is a $q_i$-ary code of block length $n_i$, with $n_i > n_{i-1}$ and $q_i \geq q_{i-1}$
- Many constructions of codes will belong to an infinite family of codes that share a general structure and properties, whose asymptotic behavior guides their construction
- We also need to extend the notions of rate and relative distance to these infinite families:

$$R(\mathcal{C}) = \liminf_{i} \frac{k_i}{n_i} \qquad \delta(\mathcal{C}) = \liminf_{i} \frac{\Delta(C_i)}{n_i}$$

- A $q$-ary family of codes is said to be asymptotically good if its rate and relative distance are bounded away from zero, i.e. there exist constants $R_0 > 0$ and $\delta_0 > 0$ such that $R(\mathcal{C}) \geq R_0$ and $\delta(\mathcal{C}) \geq \delta_0$
While a good deal is known about the existence (and even explicit construction) of good codes, as well as their limitations, the best possible asymptotic trade-off between rate and relative distance for binary codes remains a fundamental open question, which has seen no improvement since the late 70s, in part due to the fundamental trade-off between large rate and large relative distance
Error Correction Coding: Mathematical Methods and Algorithms - Todd Moon
1 | A Context for Error Correction Coding
1.2 - Where Are Codes
- Error control codes comprise error correction and error detection
Da heck is an ISBN
- For a valid ISBN, the sum-of-the-sum of the digits must be equal to $0 \bmod 11$
digit | cum. sum | cum. sum of sum |
---|---|---|
0 | 0 | 0 |
2 | 2 | 2 |
0 | 2 | 4 |
1 | 3 | 7 |
3 | 6 | 13 |
6 | 12 | 25 |
1 | 13 | 38 |
8 | 21 | 59 |
6 | 27 | 86 |
8 | 35 | 121 |
The final sum is $121 = 11 \cdot 11 \equiv 0 \pmod{11}$, so this ISBN (0-201-36186-8) checks out.
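A small Python sketch of this running-sum check, using the digits from the table:

```python
from itertools import accumulate

def isbn10_valid(digits):
    """ISBN-10 check: the cumulative sum of the cumulative sums must be 0 mod 11."""
    sums = list(accumulate(digits))        # cum. sum
    sums_of_sums = list(accumulate(sums))  # cum. sum of sums
    return sums_of_sums[-1] % 11 == 0

print(isbn10_valid([0, 2, 0, 1, 3, 6, 1, 8, 6, 8]))  # -> True (final sum 121)
```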
1.3 - The Communication System
Information theory treats information as a pseudo-physical quantity which can be measured, transformed, stored, and moved from place to place. A fundamental concept of information theory is that information is conveyed by the resolution of uncertainty.
Probabilities are used to describe uncertainty. For a discrete random variable $X$, the information conveyed by observing an outcome $x$ is $-\log_2 P(X = x)$ bits.
- If $P(X = 1) = 1$, the outcome of 1 is certain, then observing $X = 1$ yields $-\log_2(1) = 0$ bits of information.
- Conversely, observing $X = 0$ in this case yields $-\log_2(0) = \infty$; a total surprise at observing an impossible outcome
Entropy is the average information. For our above example with a discrete, binary random variable $X$ taking the value 1 with probability $p$, the entropy of the source (a function of the outcome probabilities) is given by

$$H_2(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)$$

- For a source $X$ having $m$ outcomes with probabilities $p_1, p_2, \dots, p_m$, the entropy is

$$H(X) = -\sum_{i=1}^{m} p_i \log_2 p_i$$
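This definition drops straight into Python; a minimal sketch:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a distribution given by its outcome probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)  # 0*log(0) taken as 0

print(entropy([0.5, 0.5]))  # -> 1.0 bit: a fair coin is maximally uncertain
print(entropy([1.0]))       # -> 0.0 bits: a certain outcome conveys nothing
print(entropy([0.9, 0.1]))  # -> ~0.47 bits: a biased coin conveys less on average
```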
Source-Channel Separation Theorem: treating the problems of data compression and error correction separately, rather than seeking a jointly optimal source/channel coding solution, is asymptotically optimal in the block size
Because of the redundancy introduced by the channel coder, there must be more symbols at the output of the coder than at the input. A channel coder operates by accepting a block of $k$ input symbols and outputting a block of $n$ symbols.
The rate of a channel coder is

$$R = \frac{k}{n}, \quad R < 1$$

Channel Coding Theorem: provided that the rate $R$ of transmission is less than the channel capacity $C$, there exists a code such that the probability of error can be made arbitrarily small
1.4 - Binary Phase-Shift Keying
- Process of mapping a binary sequence to a waveform for transmission over a channel
- Given a binary sequence $\{b_n\}$, we map the set $\{0, 1\}$ to $\{-1, 1\}$ via $\tilde{b}_n = 2b_n - 1$
- We also have $a_n = \sqrt{E_b}\,\tilde{b}_n = \sqrt{E_b}(2b_n - 1)$, a mapping of a bit $b_n = 1$ (or $b_n = 0$) to a signal amplitude which multiplies a waveform $\varphi_1(t)$, a unit-energy signal

$$\int_{-\infty}^{\infty} \varphi_1(t)^2 \, dt = 1$$

over $[0, T)$. A bit $b_n$ arriving at time $nT$ can be represented as the signal $a_n \varphi_1(t - nT)$, where the energy required to transmit the bit is given by

$$\int_{-\infty}^{\infty} (a_n \varphi_1(t))^2 \, dt = a_n^2 = E_b$$

- The transmitted signals $\pm\sqrt{E_b}\,\varphi_1(t)$ are points in a (for our purposes: one-) dimensional signal space on the unit-energy $\varphi_1$ axis,
such that a sequence of bits can be expressed as a waveform:

$$s(t) = \sum_n a_n \varphi_1(t - nT)$$

We can expand this representation to two dimensions by introducing another unit-energy signal $\varphi_2(t)$ orthogonal to $\varphi_1(t)$ and modulating the two to establish a signal constellation. Let $M = 2^m$, for some integer $m$, be the number of points in a constellation. $M$-ary transmission is obtained by placing the $M$ points $(a_{1k}, a_{2k})$, $k = 0, 1, \dots, M - 1$, in this signal space and assigning each point a unique pattern of $m$ bits.
- Note that the assignment of the bits to constellation points is such that adjacent points differ by only 1 bit. Such an assignment is called Gray code order. Since it is most probable that errors will move a point to an adjacent point, this reduces the probability of bit error
Associated with each signal (stream of bits) $s_k(t)$ and signal constellation point $(a_{1k}, a_{2k})$ is a signal energy

$$E_k = \int_{-\infty}^{\infty} s_k(t)^2 \, dt = a_{1k}^2 + a_{2k}^2$$

The average signal energy $E_s$ is the average of all the signal energies, assuming that each point is used with equal probability

$$E_s = \frac{1}{M} \sum_{k=0}^{M-1} E_k$$

and can be related to the average energy per bit

$$E_b = \frac{E_s}{m} = \frac{E_s}{\log_2 M}$$
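A small Python sketch of the BPSK mapping and its energy bookkeeping (`E_b` is a free parameter, set to 1 here for illustration):

```python
import math

E_b = 1.0  # energy per bit

def bpsk_amplitudes(bits):
    """Map bits {0,1} to amplitudes -sqrt(Eb)/+sqrt(Eb) via 2b - 1."""
    return [math.sqrt(E_b) * (2 * b - 1) for b in bits]

amps = bpsk_amplitudes([1, 1, 0, 1, 0])
print(amps)                                 # [1.0, 1.0, -1.0, 1.0, -1.0]
print(sum(a**2 for a in amps) / len(amps))  # average energy per bit -> E_b
```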
2 | Groups and Vector Spaces
2.1 - Introduction
Linear block codes form a group and a vector space, which is why we develop both structures here
2.2 - Groups
A Group formalizes some basic rules of arithmetic necessary for cancellation and solution of simple equations
A binary operation $*$ on a set is a rule that assigns to each ordered pair of elements of the set $(a, b)$ some element $a * b$ of the set
Since the operation is defined to return an element of the set, the operation is closed
We can therefore assume that all binary operations are closed
Examples include addition and multiplication on the integers, and addition modulo $n$ on $\{0, 1, \dots, n-1\}$
A Group $\langle G, * \rangle$ is a set $G$ with a binary operation $*$ on $G$ such that
- The operator is associative: $(a * b) * c = a * (b * c)$ for any $a, b, c \in G$
- There is an identity element $e \in G$ such that $a * e = e * a = a$ for all $a \in G$
- For all $a \in G$ there exists $a^{-1} \in G$, which is the inverse of $a$, such that $a * a^{-1} = a^{-1} * a = e$. The inverse can be denoted $a^{-1}$ or $-a$ for multiplication-like and addition-like operations, respectively
When $*$ is obvious from context, the group may be referred to as just the set $G$.
If $G$ has a finite number of elements, it is said to be a finite group with order $|G|$, the number of elements.
A group is commutative if $a * b = b * a$ for all $a, b \in G$.
$\langle \mathbb{Z}, + \rangle$, the integers under addition, forms a group with the identity $0$ since $a + 0 = 0 + a = a$
- The inverse of $a$ is $-a$, since $a + (-a) = 0$. The group is also commutative.
- Commutative groups (typically written with an addition-like operator) are called Abelian
$\langle \mathbb{Z}, \cdot \rangle$, the integers under multiplication, does not form a group since there does not exist a multiplicative inverse for every element of the integers.
$\langle \mathbb{Q} \setminus \{0\}, \cdot \rangle$, the rationals without zero under multiplication, with the identity $1$ and the inverse $a^{-1} = 1/a$, forms a group
These rules are strong enough to introduce the notion of cancellation. In a group $G$, if $a * b = a * c$, then $b = c$ by left cancellation via

$$a^{-1} * (a * b) = a^{-1} * (a * c) \implies (a^{-1} * a) * b = (a^{-1} * a) * c \implies b = c$$

and similarly for right-hand cancellation via the properties of associativity and identity.
We can also identify that solutions to linear equations of the form $a * x = b$ are unique.
- Immediately, we get $x = a^{-1} * b$. If $x_1, x_2$ are two solutions such that

$$a * x_1 = b = a * x_2$$

then by cancellation, we get $x_1 = x_2$.
For example, let $+$ be addition modulo 5 on the set $\mathbb{Z}_5 = \{0, 1, 2, 3, 4\}$ with the identity $0$:

| + | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| 0 | 0 | 1 | 2 | 3 | 4 |
| 1 | 1 | 2 | 3 | 4 | 0 |
| 2 | 2 | 3 | 4 | 0 | 1 |
| 3 | 3 | 4 | 0 | 1 | 2 |
| 4 | 4 | 0 | 1 | 2 | 3 |

- Since $0$ appears in each row and column, every element has an inverse.
- By uniqueness of solution, we must have every element appearing in every row and column, which we do. So $\langle \mathbb{Z}_5, + \rangle$ is an Abelian group.
One way to form groups is via the Cartesian product. Given groups $\langle G_1, * \rangle, \langle G_2, * \rangle, \dots, \langle G_r, * \rangle$, the direct product group $G_1 \times G_2 \times \cdots \times G_r$ has elements $(a_1, a_2, \dots, a_r)$ where $a_i \in G_i$. The operation is performed elementwise such that if

$$a = (a_1, a_2, \dots, a_r), \quad b = (b_1, b_2, \dots, b_r)$$

then

$$a * b = (a_1 * b_1, a_2 * b_2, \dots, a_r * b_r)$$
Example
The group $\mathbb{Z}_2 \times \mathbb{Z}_2$ consists of the tuples $\{(0,0), (0,1), (1,0), (1,1)\}$ under elementwise addition modulo 2.
This is called the Klein 4-group. It also introduces the idea, developed next, of permutations as elements of a group, with a composable function as our operation as opposed to simple arithmetic.
Example
A permutation of a set $A$ is a bijection (one-to-one and onto) of $A$ onto itself. Let

$$A = \{1, 2, 3, 4\}$$

then a permutation (with an arbitrary choice of images, for illustration)

$$p_1 = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 3 & 1 & 4 & 2 \end{pmatrix}$$

meaning $1 \mapsto 3$, $2 \mapsto 1$, $3 \mapsto 4$, $4 \mapsto 2$.
There are $n!$ different permutations on $n$ distinct elements. We can think of a permutation $p$ as a prefix operation: $p(x)$. If

$$p_2 = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 1 & 2 \end{pmatrix}$$

then

$$p_2 \circ p_1 = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 4 & 2 & 3 \end{pmatrix}$$

is the composition of permutations, which is closed over the set of permutations, with the identity permutation and with inverses obtained by reading the two-row notation bottom-up
Composition of permutations is associative: for permutations $p_1, p_2, p_3$,

$$(p_1 \circ p_2) \circ p_3 = p_1 \circ (p_2 \circ p_3)$$

and thus the set of all permutations on $n$ elements forms a group under composition.
- This is known as the symmetric group on $n$ letters, $S_n$.
- Note that composition is not commutative in general, since $p_1 \circ p_2 \neq p_2 \circ p_1$, so $S_n$ is a non-commutative group
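A quick Python sketch of permutation composition (permutations stored as tuples over $\{0, \dots, n-1\}$, with `p[i]` the image of `i`; the example permutations are arbitrary):

```python
def compose(p2, p1):
    """Return the permutation 'apply p1 first, then p2'."""
    return tuple(p2[p1[i]] for i in range(len(p1)))

p1 = (2, 0, 3, 1)
p2 = (3, 2, 0, 1)
print(compose(p2, p1))                     # -> (0, 3, 1, 2): closed under composition
print(compose(p2, p1) == compose(p1, p2))  # -> False: not commutative
```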
2.2.1 Subgroups
A subgroup $\langle H, * \rangle$ of a group $\langle G, * \rangle$ is a group formed from a subset of elements of $G$, where $H \subseteq G$; if $H \neq G$, then $H$ is said to be a proper subgroup.
The subgroups $\{e\}$ and $G$ itself are the trivial subgroups.
Example
Let $G = \langle \mathbb{Z}_6, + \rangle$ and $H = \{0, 2, 4\}$, both under addition modulo 6. We can show that $H$ is a group: it is closed, contains the identity $0$, and each element has an inverse ($2 + 4 \equiv 0$).
Similarly, $K = \{0, 3\}$ forms a subgroup of $G$.
Example
Consider the group of permutations on $\{1, 2, 3\}$, which has a subgroup formed by the closed compositions of the identity and the two cyclic shifts (rotations).
2.2.2 Cyclic Groups and the Order of an Element
In a group with a multiplication-like operation, we use $a^n$ to indicate

$$a^n = \underbrace{a * a * \cdots * a}_{n \text{ times}}$$

with $a^0 = e$ being the identity element for $*$.
For a group with an additive operation, $na$ is used to indicate

$$na = \underbrace{a + a + \cdots + a}_{n \text{ times}}$$

If $a \in G$, any subgroup containing $a$ must also include $a^2, a^3$, etc.
It must also adhere to the group axioms, so it contains $e = a^0$ and the inverses $a^{-1}, a^{-2}, \dots$
For any $a \in G$, the set $\{a^n \mid n \in \mathbb{Z}\}$ generates a subgroup of $G$ called the cyclic subgroup. $a$ is said to be the generator of the subgroup, and the resultant group is denoted $\langle a \rangle$.
If every element of a group can be generated from a single element, the group is said to be cyclic.
Example
$\langle \mathbb{Z}_5, + \rangle$ is cyclic, since every element can be generated by $a = 1$.
In a group $G$ with $a \in G$, the smallest $n$ such that $a^n$ is equal to the identity in $G$ is said to be the order of the element $a$.
- If no such $n$ exists, $a$ is of infinite order.
- In $\langle \mathbb{Z}_5, + \rangle$, 2 is of order 5, as are all non-zero elements of the set.
Example
If $a = m \in \mathbb{Z}_n$, then

$$\langle m \rangle = \{0,\ m,\ 2m \bmod n,\ 3m \bmod n,\ \dots\}$$

Therefore $m$ is a generator for the group $\langle \mathbb{Z}_n, + \rangle$ if and only if $m$ and $n$ are relatively prime
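A short Python check of this fact (the modulus $n = 6$ is arbitrary):

```python
from math import gcd

def generates(m, n):
    """True when m generates all of Z_n under addition mod n."""
    return len({(k * m) % n for k in range(n)}) == n

n = 6
for m in range(1, n):
    assert generates(m, n) == (gcd(m, n) == 1)
print([m for m in range(1, n) if generates(m, n)])  # -> [1, 5]
```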
2.2.3 - Cosets
The left coset of a subgroup $H$ in $G$ ($G$ not necessarily being commutative) is the set

$$a * H = \{a * h \mid h \in H\}$$

and the right coset $H * a$ is given by the symmetric relation. In a commutative group, they are obviously equal.
Let $G$ be a group, $H$ a subgroup of $G$, and $a * H$ the left coset of $a$. Then $b \in a * H$ if and only if $a^{-1} * b \in H$: writing $b = a * h$, by cancellation we have $h = a^{-1} * b$.
To determine if $a$ and $b$ are in the same (left) coset, we simply check the above condition.
Example
Let $G = \langle \mathbb{Z}, + \rangle$ and $H = 3\mathbb{Z} = \{\dots, -3, 0, 3, 6, \dots\}$ such that $H$ is a subgroup of $G$. Now, form the cosets:

$$1 + 3\mathbb{Z} = \{\dots, -2, 1, 4, 7, \dots\}$$
$$2 + 3\mathbb{Z} = \{\dots, -1, 2, 5, 8, \dots\}$$

and observe that neither is a subgroup, as they do not have an identity, but the union of all three covers the original group: $G = 3\mathbb{Z} \cup (1 + 3\mathbb{Z}) \cup (2 + 3\mathbb{Z})$.
We can determine whether or not two numbers are in the same coset of $3\mathbb{Z}$ by checking whether their difference is divisible by 3
2.2.4 - Lagrange's Theorem
Basically, it's a prescription of subgroup size relative to its group.
- Every coset of $H$ in a group $G$ has the same number of elements
- Let $a * h_1$ and $a * h_2$ be two elements in the coset $a * H$. If they are equal, then by cancellation $h_1 = h_2$; therefore the elements of a coset are uniquely identified by the elements of $H$
It follows, then, that we have the following properties of cosets:
- reflexivity: An element $a$ is in the same coset as itself
- symmetry: If $a$ and $b$ are in the same coset, then $b$ and $a$ are in the same coset
- transitivity: If $a$ and $b$ are in the same coset, and $b$ and $c$ are in the same coset, then $a$ and $c$ are in the same coset as well
Therefore, the relation of "being in the same coset" is an equivalence relation, and thus, every equivalence relation partitions its elements into disjoint sets.
- It follows then that the distinct cosets of $H$ in a group $G$ are disjoint
With these properties and lemmas, Lagrange's Theorem states: if $G$ is a group of finite order and $H$ is a subgroup of $G$, then the order of $H$ divides the order of $G$; that is, $|H| \mid |G|$. We say "$|H|$ divides $|G|$" without remainder
- If $h = |H|$ and $g = |G|$, then $g = hm$, where the integer $m$ is the number of distinct cosets of $H$ in $G$
Lagrange's Theorem implies that every group of prime order is cyclic.
- Let $G$ be of prime order $p$, with $a \in G$ and $a \neq e$. Let $H = \langle a \rangle$, the cyclic subgroup generated by $a$. Then $|H| > 1$. But by Lagrange's Theorem, the order of $H$ must ("cleanly") divide the order of $G$. Since $p$ is prime, we must have $|H| = p$, thus $a$ generates $G$, so $G$ must be cyclic.
2.2.5 - Induced Operations; Isomorphism
Returning to the three cosets of $3\mathbb{Z}$ we defined earlier, write $S_0 = 3\mathbb{Z}$, $S_1 = 1 + 3\mathbb{Z}$, $S_2 = 2 + 3\mathbb{Z}$, and define addition on $\{S_0, S_1, S_2\}$ as follows: for $A, B, C \in \{S_0, S_1, S_2\}$,

$$A + B = C$$

if and only if there exist $a \in A$, $b \in B$, and $c \in C$ such that $a + b = c$. That is, addition of the sets is defined by representatives in the sets. The operation is said to be the induced operation on the cosets.
Taking the representatives $1 \in S_1$ and $2 \in S_2$ and noting that $1 + 2 = 3 \in S_0$, we get $S_1 + S_2 = S_0$. Similarly, $S_1 + S_1 = S_2$ via $1 + 1 = 2$, taking the representatives to be $1$ and $1$. Based on this induced operation, we can obtain the following table

| + | $S_0$ | $S_1$ | $S_2$ |
|---|---|---|---|
| $S_0$ | $S_0$ | $S_1$ | $S_2$ |
| $S_1$ | $S_1$ | $S_2$ | $S_0$ |
| $S_2$ | $S_2$ | $S_0$ | $S_1$ |

which we say is isomorphic to the addition table for $\langle \mathbb{Z}_3, + \rangle$.
Two groups $\langle G, * \rangle$ and $\langle \mathcal{G}, \diamond \rangle$ are said to be (group) isomorphic if there exists a one-to-one, onto function $\phi : G \to \mathcal{G}$ called the isomorphism such that

$$\phi(a * b) = \phi(a) \diamond \phi(b)$$

We denote isomorphism via $G \cong \mathcal{G}$. Isomorphic groups are essentially "the same thing"
Let $G$ be a group, $H$ a subgroup, and $G/H$ the set of cosets of $H$ in $G$. The induced operation between cosets is defined by $(a * H) * (b * H) = (a * b) * H$ if and only if this holds for any choice of representatives $a$ and $b$, i.e. provided that the operation is well-defined (no ambiguity for those operands).
For any commutative group, the induced operation is always well-defined; otherwise, it is well-defined only for normal subgroups
- A subgroup $H$ of $G$ is normal if $g * H = H * g$ for every $g \in G$. All subgroups of Abelian groups are normal.
If $H$ is a subgroup of a commutative group $G$, the induced operation on the set of cosets of $H$ satisfies

$$(a * H) * (b * H) = (a * b) * H$$

The group formed by the cosets of $H$ in a commutative group $G$ (or of a normal subgroup $H$ in general) with the induced operation is said to be the factor group of $G$ modulo $H$, denoted $G/H$, and the cosets are called the residue classes of $H$
- We could express the earlier example as $\mathbb{Z}/3\mathbb{Z} \cong \mathbb{Z}_3$, and in general $\mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}_n$.
A lattice is formed by taking all possible integer linear combinations of a set of basis vectors. That is, if $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_m$ are a set of linearly independent vectors, a lattice is then formed from the basis by

$$\Lambda = \{a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_m \mathbf{v}_m \mid a_i \in \mathbb{Z}\}$$

e.g. the lattice formed by $\mathbf{v}_1 = (1, 0)$ and $\mathbf{v}_2 = (0, 1)$ is the set of points with integer coordinates in the plane, denoted $\mathbb{Z}^2$.
For the lattice $\Lambda = \mathbb{Z}^2$, a sublattice such as $\Lambda' = 2\mathbb{Z} \times 2\mathbb{Z}$ is a subgroup, with cosets given by its shifted copies, e.g.

$$(1, 0) + \Lambda' \quad \text{and} \quad (1, 1) + \Lambda'$$

which together with $(0, 1) + \Lambda'$ and $\Lambda'$ itself partition $\mathbb{Z}^2$.
2.2.6 - Homomorphism
A homomorphism is a weaker condition than the structural and relational equivalence defined by an isomorphism. A homomorphism exists when two sets have the same algebraic structure, but they might have a different number of elements.
The groups $\langle G, * \rangle$ and $\langle \mathcal{G}, \diamond \rangle$ are said to be homomorphic if there exists a function $\phi : G \to \mathcal{G}$ (not necessarily one-to-one) called the homomorphism such that:

$$\phi(a * b) = \phi(a) \diamond \phi(b)$$
Example
Let $G = \langle \mathbb{Z}, + \rangle$ and $\mathcal{G} = \langle \mathbb{Z}_n, + \bmod n \rangle$. For $\phi(a) = a \bmod n$, we have

$$\phi(a + b) = (a + b) \bmod n = (a \bmod n + b \bmod n) \bmod n = \phi(a) + \phi(b) \bmod n$$

These are homomorphic, though they clearly have different orders (one infinite, one finite).
Let $G$ be a commutative group and $H$ a subgroup, so that $G/H$ is the factor group. Let $\phi : G \to G/H$ be defined by $\phi(a) = a * H$. Then $\phi$ is a homomorphism, known as the natural or canonical homomorphism.
The kernel of a homomorphism $\phi$ of a group $G$ into a group $\mathcal{G}$ is the set of all elements of $G$ which are mapped to the identity of $\mathcal{G}$ by $\phi$.
2.3 - Fields
A field is a set of objects on which addition, multiplication, and their inverse operations apply arithmetically (analogously to their behavior on the reals). The operations of a field must satisfy the following conditions:
- Closure under addition: for all $a, b \in F$, $a + b \in F$
- There is an additive identity: there exists $0 \in F$ such that $a + 0 = a$ for all $a \in F$