
16.6 Exercises

1.

Why is the following encoding scheme not acceptable?

\begin{equation*} \begin{array}{c|ccccccccc} \text{Information} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \text{Codeword} & 000 & 001 & 010 & 011 & 101 & 110 & 111 & 000 & 001 \end{array} \end{equation*}
2.

Without doing any addition, explain why the following set of \(4\)-tuples in \({\mathbb Z}_2^4\) cannot be a group code.

\begin{equation*} (0110) \quad (1001) \quad (1010) \quad (1100) \end{equation*}
Hint.

This cannot be a group code since \((0000) \notin C\text{.}\)

3.

Compute the Hamming distances between the following pairs of \(n\)-tuples.

  1. \(\displaystyle (011010), (011100)\)

  2. \(\displaystyle (11110101), (01010100)\)

  3. \(\displaystyle (00110), (01111)\)

  4. \(\displaystyle (1001), (0111)\)

Hint.

(a) \(2\text{;}\) (c) \(2\text{.}\)

4.

Compute the weights of the following \(n\)-tuples.

  1. \(\displaystyle (011010)\)

  2. \(\displaystyle (11110101)\)

  3. \(\displaystyle (01111)\)

  4. \(\displaystyle (1011)\)

Hint.

(a) \(3\text{;}\) (c) \(4\text{.}\)
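
Both of these quantities are easy to check by machine. A minimal Python sketch (the helper names hamming_distance and weight are ours, not from the text), assuming the tuples are written as bit strings:

```python
# Hamming distance: the number of positions in which two n-tuples differ.
def hamming_distance(x, y):
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

# Weight: the number of nonzero entries, i.e. the distance from the zero tuple.
def weight(x):
    return hamming_distance(x, "0" * len(x))

print(hamming_distance("011010", "011100"))  # 2, as in the hint to Exercise 3(a)
print(weight("011010"))                      # 3, as in the hint to Exercise 4(a)
```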

5.

Suppose that a linear code \(C\) has a minimum weight of \(7\text{.}\) What are the error-detection and error-correction capabilities of \(C\text{?}\)

6.

In each of the following codes, what is the minimum distance for the code? What is the best situation we might hope for in connection with error detection and error correction?

  1. \(\displaystyle (011010) \; (011100) \; (110111) \; (110000)\)

  2. \(\displaystyle (011100) \; (011011) \; (111011) \; (100011) \\ (000000) \; (010101) \; (110100) \; (110011)\)

  3. \(\displaystyle (000000) \; (011100) \; (110101) \; (110001)\)

  4. \(\displaystyle (0110110) \; (0111100) \; (1110000) \; (1111111) \\ (1001001) \; (1000011) \; (0001111) \; (0000000)\)

Hint.

(a) \(d_{\min} = 2\text{;}\) (c) \(d_{\min} = 1\text{.}\)
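
The minimum distance of codes this small can be found by comparing every pair of codewords. A brute-force sketch in the same style as above:

```python
from itertools import combinations

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    # Smallest Hamming distance over all pairs of distinct codewords.
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

code_a = ["011010", "011100", "110111", "110000"]
print(minimum_distance(code_a))  # 2, as in the hint
```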

7.

Compute the null space of each of the following matrices. What type of \((n,k)\)-block codes are the null spaces? Can you find a matrix (not necessarily a standard generator matrix) that generates each code? Are your generator matrices unique?

  1. \begin{equation*} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
  2. \begin{equation*} \begin{pmatrix} 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
  3. \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 \end{pmatrix} \end{equation*}
  4. \begin{equation*} \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \end{pmatrix} \end{equation*}
Hint.
  1. \((00000), (00101), (10011), (10110)\)

    \begin{equation*} G = \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix} \end{equation*}
  2. \((000000), (010111), (101101), (111010)\)

    \begin{equation*} G = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \\ 0 & 1 \\ 1 & 1 \end{pmatrix} \end{equation*}
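
For matrices of this size the null space over \({\mathbb Z}_2\) can be found by brute force: test each binary \(n\)-tuple \({\mathbf x}\) and keep those with \(H{\mathbf x} = {\mathbf 0}\text{.}\) A sketch (the function name null_space is ours), assuming each matrix is entered as a list of rows:

```python
from itertools import product

def null_space(H):
    # All binary n-tuples x with Hx = 0 over Z_2.
    n = len(H[0])
    return [x for x in product([0, 1], repeat=n)
            if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]

H_a = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 1],
       [1, 0, 0, 1, 0]]
for codeword in null_space(H_a):
    print("".join(map(str, codeword)))  # 00000, 00101, 10011, 10110
```
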
8.

Construct a \((5,2)\)-block code. Discuss both the error-detection and error-correction capabilities of your code.

9.

Let \(C\) be the code obtained from the null space of the matrix

\begin{equation*} H = \begin{pmatrix} 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 & 1 \end{pmatrix}\text{.} \end{equation*}

Decode the message

\begin{equation*} 01111 \quad 10101 \quad 01110 \quad 00011 \end{equation*}

if possible.

Hint.

Multiple errors occur in one of the received words.
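
One systematic way to attack this problem is to compute the syndrome \(H{\mathbf x}\) of each received word: a zero syndrome means the word is a codeword, a syndrome equal to a single column of \(H\) points to the bit assumed to be in error, and any other syndrome signals more than one error. A minimal sketch (the function name is ours):

```python
def syndrome(H, word):
    # Hx over Z_2 for a received word written as a bit string.
    x = [int(b) for b in word]
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

H = [[0, 1, 0, 0, 1],
     [1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1]]
for word in ["01111", "10101", "01110", "00011"]:
    print(word, syndrome(H, word))
```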

10.

Suppose that a \(1000\)-bit binary message is transmitted. Assume that the probability of a single error is \(p\) and that the errors occurring in different bits are independent of one another. If \(p = 0.01\text{,}\) what is the probability of more than one error occurring? What is the probability of exactly two errors occurring? Repeat this problem for \(p = 0.0001\text{.}\)
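
The error count here is binomial: assuming independent bit errors, the probability of exactly \(k\) errors in \(1000\) bits is \(\binom{1000}{k} p^k (1 - p)^{1000 - k}\text{,}\) and the probability of more than one error is \(1\) minus the probabilities of zero errors and of exactly one error. A quick numerical check in Python:

```python
from math import comb

def prob_exactly(n, k, p):
    # Binomial probability of exactly k errors among n independent bits.
    return comb(n, k) * p**k * (1 - p)**(n - k)

for p in (0.01, 0.0001):
    more_than_one = 1 - prob_exactly(1000, 0, p) - prob_exactly(1000, 1, p)
    exactly_two = prob_exactly(1000, 2, p)
    print(p, more_than_one, exactly_two)
```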

11.

Which matrices are canonical parity-check matrices? For those matrices that are canonical parity-check matrices, what are the corresponding standard generator matrices? What are the error-detection and error-correction capabilities of the code generated by each of these matrices?

  1. \begin{equation*} \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
  2. \begin{equation*} \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
  3. \begin{equation*} \begin{pmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
  4. \begin{equation*} \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
Hint.

(a) A canonical parity-check matrix with standard generator matrix

\begin{equation*} G = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}\text{.} \end{equation*}

(c) A canonical parity-check matrix with standard generator matrix

\begin{equation*} G = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \\ 1 & 0 \end{pmatrix}\text{.} \end{equation*}
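
Whether a matrix is a canonical parity-check matrix can be tested directly: an \(m \times n\) matrix is canonical if its last \(m\) columns form the identity \(I_m\text{,}\) and in that case writing \(H = (A \mid I_m)\) gives the standard generator matrix with \(I_{n-m}\) on top of \(A\text{.}\) A sketch (helper names ours):

```python
def is_canonical(H):
    # Canonical parity-check matrix: the last m columns form the m x m identity.
    m, n = len(H), len(H[0])
    return all(H[i][n - m + j] == (1 if i == j else 0)
               for i in range(m) for j in range(m))

def standard_generator(H):
    # For H = (A | I_m), return G with I_k on top of A (k = n - m), as a list of rows.
    m, n = len(H), len(H[0])
    k = n - m
    A = [row[:k] for row in H]
    I_k = [[1 if i == j else 0 for j in range(k)] for i in range(k)]
    return I_k + A

H_a = [[1, 1, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 1, 0],
       [1, 0, 0, 0, 1]]
print(is_canonical(H_a))        # True
print(standard_generator(H_a))  # rows of the matrix G given in the hint for (a)
```
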
12.

List all possible syndromes for the codes generated by each of the matrices in Exercise 16.6.11.

Hint.

(a) All possible syndromes occur.

13.

Let

\begin{equation*} H = \begin{pmatrix} 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 \end{pmatrix}\text{.} \end{equation*}

Compute the syndrome caused by each of the following transmission errors.

  1. An error in the first bit.

  2. An error in the third bit.

  3. An error in the last bit.

  4. Errors in the third and fourth bits.
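
Since the syndrome of an error pattern \({\mathbf e}\) is \(H{\mathbf e}\text{,}\) a single error in bit \(i\) produces the \(i\)th column of \(H\) (compare Exercise 24), and errors in several bits produce the sum of the corresponding columns. A brute-force check of these parts:

```python
H = [[0, 1, 1, 1, 1],
     [0, 0, 0, 1, 1],
     [1, 0, 1, 0, 1]]

def syndrome(H, e):
    # He over Z_2 for an error pattern e.
    return tuple(sum(h * ei for h, ei in zip(row, e)) % 2 for row in H)

print(syndrome(H, [1, 0, 0, 0, 0]))  # error in the first bit: column 1 of H
print(syndrome(H, [0, 0, 1, 1, 0]))  # errors in bits 3 and 4: column 3 plus column 4
```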

14.

Let \(C\) be the group code in \({\mathbb Z}_2^3\) defined by the codewords \((000)\) and \((111)\text{.}\) Compute the cosets of \(C\) in \({\mathbb Z}_2^3\text{.}\) Why was there no need to specify right or left cosets? Give the single transmission error, if any, to which each coset corresponds.

15.

For each of the following matrices, find the cosets of the corresponding code \(C\text{.}\) Give a decoding table for each code if possible.

  1. \begin{equation*} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
  2. \begin{equation*} \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{pmatrix} \end{equation*}
  3. \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 \end{pmatrix} \end{equation*}
  4. \begin{equation*} \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \end{pmatrix} \end{equation*}
Hint.

(a) \(C\text{,}\) \((10000) + C\text{,}\) \((01000) + C\text{,}\) \((00100) + C\text{,}\) \((00010) + C\text{,}\) \((11000) + C\text{,}\) \((01100) + C\text{,}\) \((01010) + C\text{.}\) A decoding table does not exist for \(C\) since this is only a single error-detecting code.
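
The cosets can be listed mechanically: compute \(C\) as the null space of the matrix, then sweep through \({\mathbb Z}_2^n\) and start a new coset \({\mathbf x} + C\) whenever a tuple has not yet appeared. A sketch reusing the brute-force null_space idea from Exercise 7 (names ours):

```python
from itertools import product

def null_space(H):
    n = len(H[0])
    return [x for x in product([0, 1], repeat=n)
            if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)]

def cosets(H):
    # Partition Z_2^n into the cosets x + C of the code C = null space of H.
    C = null_space(H)
    n = len(H[0])
    seen, result = set(), []
    for x in product([0, 1], repeat=n):
        if x not in seen:
            coset = sorted(tuple((a + b) % 2 for a, b in zip(x, c)) for c in C)
            seen.update(coset)
            result.append(coset)
    return result

H_a = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 1],
       [1, 0, 0, 1, 0]]
for coset in cosets(H_a):
    print(coset)
```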

16.

Let \({\mathbf x}\text{,}\) \({\mathbf y}\text{,}\) and \({\mathbf z}\) be binary \(n\)-tuples. Prove each of the following statements.

  1. \(\displaystyle w({\mathbf x}) = d( {\mathbf x}, {\mathbf 0})\)

  2. \(\displaystyle d( {\mathbf x}, {\mathbf y}) = d( {\mathbf x} + {\mathbf z}, {\mathbf y} + {\mathbf z} )\)

  3. \(\displaystyle d({\mathbf x}, {\mathbf y}) = w({\mathbf x}- {\mathbf y})\)

17.

A metric on a set \(X\) is a map \(d: X \times X \rightarrow {\mathbb R}\) satisfying the following conditions.

  1. \(d( {\mathbf x}, {\mathbf y}) \geq 0\) for all \({\mathbf x}, {\mathbf y} \in X\text{;}\)

  2. \(d( {\mathbf x}, {\mathbf y}) = 0\) exactly when \({\mathbf x} = {\mathbf y}\text{;}\)

  3. \(d( {\mathbf x}, {\mathbf y})= d( {\mathbf y}, {\mathbf x})\text{;}\)

  4. \(d( {\mathbf x}, {\mathbf y}) \leq d( {\mathbf x}, {\mathbf z}) + d( {\mathbf z}, {\mathbf y})\text{.}\)

In other words, a metric is simply a generalization of the notion of distance. Prove that Hamming distance is a metric on \({\mathbb Z}_2^n\text{.}\) Decoding a received word then reduces to deciding which codeword is closest to it with respect to this metric.

18.

Let \(C\) be a linear code. Show that either the \(i\)th coordinates in the codewords of \(C\) are all zeros or exactly half of them are zeros.

19.

Let \(C\) be a linear code. Show that either every codeword has even weight or exactly half of the codewords have even weight.

Hint.

Let \({\mathbf x} \in C\) have odd weight and define a map from the set of codewords of odd weight to the set of codewords of even weight by \({\mathbf y} \mapsto {\mathbf x} + {\mathbf y}\text{.}\) Show that this map is a bijection.

20.

Show that the codewords of even weight in a linear code \(C\) are also a linear code.

21.

If we are to use an error-correcting linear code to transmit the 128 ASCII characters, what size matrix must be used? What size matrix must be used to transmit the extended ASCII character set of 256 characters? What if we require only error detection in both cases?

22.

Find the canonical parity-check matrix that gives the even parity check bit code with three information positions. What is the matrix for seven information positions? What are the corresponding standard generator matrices?

23.

How many check positions are needed for a single error-correcting code with 20 information positions? With 32 information positions?

Hint.

For \(20\) information positions, at least 6 check bits are needed to ensure an error-correcting code.

24.

Let \({\mathbf e}_i\) be the binary \(n\)-tuple with a \(1\) in the \(i\)th coordinate and \(0\)'s elsewhere and suppose that \(H \in {\mathbb M}_{m \times n}({\mathbb Z}_2)\text{.}\) Show that \(H{\mathbf e}_i\) is the \(i\)th column of the matrix \(H\text{.}\)

25.

Let \(C\) be an \((n,k)\)-linear code. Define the dual or orthogonal code of \(C\) to be

\begin{equation*} C^\perp = \{ {\mathbf x} \in {\mathbb Z}_2^n : {\mathbf x} \cdot {\mathbf y} = 0 \text{ for all } {\mathbf y} \in C \}\text{.} \end{equation*}
  1. Find the dual code of the linear code \(C\) where \(C\) is given by the matrix

    \begin{equation*} \begin{pmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 & 0 \end{pmatrix}\text{.} \end{equation*}
  2. Show that \(C^\perp\) is an \((n, n-k)\)-linear code.

  3. Find the standard generator and parity-check matrices of \(C\) and \(C^\perp\text{.}\) What happens in general? Prove your conjecture.
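
Part (a) can be checked by brute force once the codewords of \(C\) have been listed: the dual consists of every \(5\)-tuple orthogonal to all of \(C\text{.}\) A sketch under one reading of "given by the matrix", namely that the rows of the matrix span \(C\) (the null-space reading is handled the same way); the helper names are ours:

```python
from itertools import product

def dual_code(C, n):
    # All n-tuples whose mod-2 dot product with every codeword of C is zero.
    return [x for x in product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for y in C)]

# C taken to be spanned by the rows of the matrix in part (a):
# all mod-2 combinations of the rows.
rows = [(1, 1, 1, 0, 0), (0, 0, 1, 0, 1), (1, 0, 0, 1, 0)]
C = [tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2 for i in range(5))
     for coeffs in product([0, 1], repeat=3)]
for x in dual_code(C, 5):
    print("".join(map(str, x)))
```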

26.

Let \(H\) be an \(m \times n\) matrix over \({\mathbb Z}_2\text{,}\) where the \(i\)th column is the number \(i\) written in binary with \(m\) bits. The null space of such a matrix is called a Hamming code.

  1. Show that the matrix

    \begin{equation*} H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} \end{equation*}

    generates a Hamming code. What are the error-correcting properties of a Hamming code?

  2. The column corresponding to the syndrome also marks the bit that was in error; that is, the \(i\)th column of the matrix is \(i\) written as a binary number, and the syndrome immediately tells us which bit is in error. If the received word is \((101011)\text{,}\) compute the syndrome. In which bit did the error occur in this case, and what codeword was originally transmitted?

  3. Give a binary matrix \(H\) for the Hamming code with six information positions and four check positions. What are the check positions and what are the information positions? Encode the messages \((101101)\) and \((001001)\text{.}\) Decode the received words \((0010000101)\) and \((0000101100)\text{.}\) What are the possible syndromes for this code?

  4. What is the number of check bits and the number of information bits in an \((m,n)\)-block Hamming code? Give both an upper and a lower bound on the number of information bits in terms of the number of check bits. Hamming codes having the maximum possible number of information bits with \(k\) check bits are called perfect. Every possible syndrome except \({\mathbf 0}\) occurs as a column. If the number of information bits is less than the maximum, then the code is called shortened. In this case, give an example showing that some syndromes can represent multiple errors.
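
For part (b) and similar computations, the defining property of a Hamming code makes decoding mechanical: the syndrome, read as a binary number, names the bit in error (a zero syndrome means the received word is already a codeword, and for a shortened code a value larger than \(n\) signals more than one error). A sketch using the \(3 \times 6\) matrix from part (a):

```python
H = [[0, 0, 0, 1, 1, 1],
     [0, 1, 1, 0, 0, 1],
     [1, 0, 1, 0, 1, 0]]

def decode(H, word):
    # Compute the syndrome Hx, read it as a binary number, and flip that bit.
    x = [int(b) for b in word]
    s = [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]
    i = int("".join(map(str, s)), 2)   # the syndrome as a binary number
    if 0 < i <= len(x):
        x[i - 1] ^= 1                  # single-error assumption: flip bit i
    return "".join(map(str, x)), i

print(decode(H, "101011"))  # corrected word and the position of the error
```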