Introduction to Coding Theory (Fall 2014-2015)

Error-correcting codes are combinatorial objects that allow messages to be sent and recovered over noisy channels. They have many applications in computer science, both in theory and in practice. This class gives an introduction to coding theory with a theoretical and algorithmic focus.

Requirements:

There will be an exam; the exam grade is the final grade.

Course material:

The course does not have a textbook; however, we will sometimes use the following sources.

-        Essential Coding Theory, by Venkatesan Guruswami, Atri Rudra, and Madhu Sudan. A book draft, available at http://www.cse.buffalo.edu/~atri/courses/coding-theory/book/index.html.

-        A course by Venkatesan Guruswami.  http://www.cs.cmu.edu/~venkatg/teaching/codingtheory/

-        A course by Madhu Sudan. http://people.csail.mit.edu/madhu/ST13/

A concise review text on linear algebra by Madhu Sudan can be found here.

 

·       Lecture 1: My lectures 1+2 are roughly contained in Guruswami's lectures 1+2.

o   Motivation: communication over noisy channels.

o   Definitions: Codes, distance, rate.

o   Distance d=2t+1 suffices for correcting t errors with "maximum likelihood decoding".

o   Definition: Linear codes. Generating matrix, parity check matrix.

o   The distance of a linear code is the minimum Hamming weight of a nonzero codeword.

o   The distance of a linear code is the minimum number of linearly dependent columns of its parity check matrix.

o   Hamming codes (a code sketch follows this lecture's topics).

o   Hamming bound and optimality of Hamming codes.

o   Hat puzzle.
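
For concreteness, here is a small Python sketch of the [7,4,3] Hamming code (the particular generator matrix is one standard choice, used only for illustration): encoding is multiplication by G over GF(2), and the distance is computed as the minimum Hamming weight of a nonzero codeword.

```python
import itertools

# Generator matrix of the [7,4,3] Hamming code in systematic form [I | A].
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # Codeword = msg * G over GF(2).
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

# Distance of a linear code = minimum Hamming weight of a nonzero codeword.
distance = min(sum(encode(m))
               for m in itertools.product([0, 1], repeat=4) if any(m))
print(distance)  # 3, so t = 1 error is correctable (d = 2t + 1)
```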

·       Lecture 2: My lectures 1+2 are roughly contained in Guruswami's lectures 1+2.

o   Syndrome decoding.

o   Dual codes.

o   Hadamard code.

o   Family of codes and asymptotically good codes.

o   Gilbert-Varshamov bound, via a greedy algorithm (see the sketch below).
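
The greedy argument can be run directly (a brute-force Python sketch, exponential time, so tiny parameters only): every word rejected by the greedy rule lies in a ball of radius d-1 around some chosen codeword, which is exactly the volume computation in the Gilbert-Varshamov bound.

```python
import itertools

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def greedy_code(n, d):
    # Add any word at distance >= d from everything chosen so far.
    code = []
    for w in itertools.product([0, 1], repeat=n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

# GV only promises 2^7 / Vol(7, 2) = 128/29, i.e. at least 5 codewords;
# the greedy typically finds more.
print(len(greedy_code(7, 3)))
```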

·       Lecture 3: A crash course on information theory. Mostly contained in http://www.cs.cmu.edu/~venkatg/teaching/ITCS-spr2013/notes/15359-2009-lecture25.pdf

o   Entropy as surprise, compression, and Hamming balls (a numeric sketch follows below).
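
The estimate used repeatedly in what follows, that a Hamming ball of radius pn has volume roughly 2^{h(p)n}, is easy to check numerically (a Python sketch; the parameters are illustrative):

```python
from math import comb, log2

def h(p):
    # Binary entropy function.
    return -p * log2(p) - (1 - p) * log2(1 - p)

def ball_volume(n, r):
    # Number of words within Hamming distance r of a fixed word.
    return sum(comb(n, i) for i in range(r + 1))

n, p = 1000, 0.1
print(log2(ball_volume(n, int(p * n))) / n)  # approaches h(p) as n grows
print(h(p))                                  # ~0.469
```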

·       Lecture 4+5: Mostly covered in Guruswami's lectures 3+4.

o   Shannon channels.

o   Capacity of a channel and the general theorem.

o   The binary symmetric channel, and the statement for this special case.

o   Proof that random linear codes achieve R=1-h(p) (a toy experiment follows this lecture's topics).

o   Proof that random codes can give rate R=1-h(p) after ``corrections''.

o   Intuition for why 1-h(p) is best possible.
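
The achievability direction can be watched in action with a toy experiment (a Python sketch; the parameters are illustrative, and small block lengths are far from the asymptotic guarantee): pick a uniformly random code of rate below 1-h(p), transmit over BSC(p), and decode to the nearest codeword.

```python
import random

def bsc(word, p):
    # BSC(p): flip each bit independently with probability p.
    return [b ^ (random.random() < p) for b in word]

def nearest(code, word):
    # On the BSC, maximum likelihood decoding = nearest-codeword decoding.
    return min(code, key=lambda c: sum(a != b for a, b in zip(c, word)))

n, k, p, trials = 20, 6, 0.05, 200   # rate 0.3 < 1 - h(0.05) ~ 0.71
code = [[random.randint(0, 1) for _ in range(n)] for _ in range(2 ** k)]
failures = sum(nearest(code, bsc(c, p)) != c
               for c in random.choices(code, k=trials))
print(failures / trials)  # small, with high probability over the code choice
```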

·       Remainder of Lecture 5:

o   Random linear codes attain the GV bound.

o   Singleton bound.

o   Plotkin bound (finished in Lecture 6). A numeric comparison of these bounds follows below.
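
A quick numeric comparison of the bounds for binary codes of length n and distance d (a Python sketch with illustrative parameters; the Plotkin bound is omitted since it only applies for d > n/2):

```python
from math import comb

def vol(n, r):
    return sum(comb(n, i) for i in range(r + 1))

n, d = 15, 5
gv        = 2 ** n / vol(n, d - 1)          # GV: a code this large exists
hamming   = 2 ** n / vol(n, (d - 1) // 2)   # Hamming: no code is larger
singleton = 2 ** (n - d + 1)                # Singleton: no code is larger
print(gv, hamming, singleton)               # ~16.9, ~270.8, 2048
```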

·       Lecture 6+7: Algebraic codes

o   Review of finite fields and polynomials.

o   Reed-Solomon codes (an encoder sketch follows this lecture's topics).

o   BCH codes

o   Application: k-wise independent distributions

o   Reed-Muller codes

o   The Schwartz-Zippel lemma.

o   The case of q=2 (or more generally q < total degree).
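
A minimal Reed-Solomon encoder over a small prime field (a Python sketch; q = 17 and the evaluation points 0, ..., n-1 are illustrative choices): the message is the coefficient vector of a polynomial of degree less than k, the codeword is its evaluations at n distinct points, and since two distinct such polynomials agree on fewer than k points, the distance is n - k + 1.

```python
q, n, k = 17, 17, 5   # a Reed-Solomon code over F_17

def rs_encode(msg):
    # msg = coefficients of a polynomial of degree < k over F_q;
    # the codeword is its evaluation at the n points 0, 1, ..., n-1.
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q
            for x in range(n)]

c1, c2 = rs_encode([1, 2, 3, 4, 5]), rs_encode([1, 2, 3, 4, 6])
print(sum(a != b for a, b in zip(c1, c2)))  # >= n - k + 1 = 13
```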

·       Lecture 8: Concatenated codes

o   Code concatenation (a small sketch follows below).

o   Wozencraft's ensemble and Justesen codes
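
A small sketch of concatenation (Python; the outer parameters and the toy inner code are illustrative assumptions, not the Justesen construction): each field symbol of the outer Reed-Solomon codeword is encoded separately by an inner binary code, and the distances multiply.

```python
q, n, k = 17, 17, 5

def rs_encode(msg):
    # Outer code: Reed-Solomon over F_17, as in the sketch above.
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q
            for x in range(n)]

def inner_encode(sym):
    # Toy inner code: 5-bit binary representation with each bit repeated
    # 3 times, so distinct symbols map to words at distance >= 3.
    return [(sym >> i) & 1 for i in range(5) for _ in range(3)]

codeword = [b for s in rs_encode([1, 2, 3, 4, 5]) for b in inner_encode(s)]
print(len(codeword))  # 17 * 15 = 255 bits; distance >= 13 * 3 = 39
```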

·       Lecture 9: Expander codes

o   The notion of unique neighbors.

o   Factor graphs with the unique neighbor property give distance.

o   Better distance via expansion

o   Efficient decoding of expander codes (a flip-decoding sketch follows below).
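
The decoding rule itself is simple (a Python sketch): repeatedly flip a bit that appears in the largest number of unsatisfied parity checks. The guarantee that this corrects a constant fraction of errors needs the factor graph to be a sufficiently good expander; the tiny [7,4] Hamming graph below is only meant to show the mechanics on a single-bit error.

```python
H = [[1, 0, 1, 0, 1, 0, 1],   # parity check matrix, read as a factor graph
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def flip_decode(word, rounds=10):
    w = word[:]
    for _ in range(rounds):
        unsat = [row for row in H
                 if sum(r * b for r, b in zip(row, w)) % 2]
        if not unsat:
            return w                       # all checks satisfied
        counts = [sum(row[j] for row in unsat) for j in range(len(w))]
        w[counts.index(max(counts))] ^= 1  # flip the worst variable
    return w

received = [0, 0, 0, 0, 0, 0, 1]  # the zero codeword with one bit flipped
print(flip_decode(received))      # [0, 0, 0, 0, 0, 0, 0]
```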

·       Lecture 10: Efficient decoding

o   Decoding RS codes from errors.

o   The Welch-Berlekamp algorithm (a sketch in code follows this lecture's topics).

o   Decoding concatenated codes up to 1/4 of the distance

o   GMD decoding
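
A compact Python sketch of the Welch-Berlekamp algorithm over F_17 (the parameters are illustrative; a real implementation would use a proper finite-field library): find a monic error locator E of degree e and a polynomial Q of degree less than e + k with Q(x) = y * E(x) at every received point; when the number of errors is at most e = (n-k)/2, any solution of this linear system satisfies Q = f * E, so the message polynomial f is recovered by division.

```python
q, n, k = 17, 17, 5
e = (n - k) // 2                           # number of correctable errors

def gauss_mod(A, b):
    # Solve A z = b over F_q by Gaussian elimination; free variables -> 0.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(len(A[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c]), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], -1, q)
        M[r] = [v * inv % q for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [(v - M[i][c] * w) % q for v, w in zip(M[i], M[r])]
        pivots.append((r, c))
        r += 1
    z = [0] * len(A[0])
    for pr, pc in pivots:
        z[pc] = M[pr][-1]
    return z

def welch_berlekamp(points):
    # Unknowns: the e + k coefficients of Q, then the e low coefficients
    # of the monic E. Equation per point: Q(x) - y*E_low(x) = y * x^e.
    A = [[pow(x, j, q) for j in range(e + k)] +
         [-y * pow(x, j, q) % q for j in range(e)] for x, y in points]
    b = [y * pow(x, e, q) % q for x, y in points]
    z = gauss_mod(A, b)
    Q, E = z[:e + k], z[e + k:] + [1]      # append E's leading coefficient
    f, rem = [0] * k, Q[:]
    for i in range(k - 1, -1, -1):         # long division: f = Q / E
        f[i] = rem[i + e]
        for j in range(e + 1):
            rem[i + j] = (rem[i + j] - f[i] * E[j]) % q
    return f

msg = [1, 2, 3, 4, 5]
cw = [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q for x in range(n)]
for pos in (2, 5, 8, 11, 13, 16):          # inject e = 6 errors
    cw[pos] = (cw[pos] + 1) % q
print(welch_berlekamp(list(zip(range(n), cw))))  # recovers [1, 2, 3, 4, 5]
```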

·       Lecture 11: List decoding

o   The notion of list-decoding

o   List-decoding capacity R=1-h(\rho).

o   Johnson bound (computed numerically below).
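
A quick numeric look at the binary Johnson radius against the unique-decoding radius (a Python sketch):

```python
from math import sqrt

def johnson(delta):
    # Binary Johnson radius: any code of relative distance delta is
    # list-decodable up to this fraction of errors with small lists.
    return (1 - sqrt(1 - 2 * delta)) / 2

for delta in (0.1, 0.3, 0.4999):
    print(delta / 2, johnson(delta))  # unique-decoding vs Johnson radius
```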

·       Lecture 12: Randomness extractors

o   Motivation and definitions (min-entropy, statistical distance, extractors)

o   Codes with distance 1/2-\eps can be used to construct extractors with m=d+1 (a sketch follows this lecture's topics).

o   Collision probability and statistical distance

o   Proof of the Johnson bound using extractors.

o   Unique decoding via list decoding plus an additional O(log L) error-free bits.
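
A sketch of the code-to-extractor construction (Python; the Hadamard code, whose relative distance is exactly 1/2, stands in for the distance-(1/2-\eps) code, and the parameters are illustrative): the extractor outputs the seed y together with the single codeword bit C(x)_y, so the output length is m = d + 1 where d is the seed length.

```python
import random

def hadamard_bit(x, y):
    # Hadamard code: C(x)_y = <x, y> over GF(2), with x and y packed
    # into integers bitwise.
    return bin(x & y).count("1") % 2

def ext(x, y):
    # Seed plus one codeword bit: output length m = (seed length) + 1.
    return (y, hadamard_bit(x, y))

n = 8
x = random.getrandbits(n)  # a sample from the weak source
y = random.getrandbits(n)  # d = n uniformly random seed bits
print(ext(x, y))
```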