Error-correcting codes, finite fields, algebraic curves

[ambient page updated 27 Dec 13 10:10] ... [ garrett@math.umn.edu ]

Course text "The Mathematics of Coding: Information, Compression, Error Correction, and Finite Fields" (formerly published by Prentice-Hall).

(Just in case anyone wonders: My contract with the publisher states that when they cease making the book available, upon my written notice to them, the copyright reverts to me. As of Fall 2013, the publisher's rep said the book is no longer available. I sent the rep and the main office an email saying that I do reclaim the copyright.)

Solutions: [s01] ...[s02] ...[s03] ...[s04] ...[s05] ...[s06] ...[s07] ...[s08] ...[s09] ...[s10] ...[s11]

This course is an introduction to basic ideas of information theory, compression, and error correction. The necessary mathematics developed along the way includes a bit of probability, number theory, and abstract algebra. After a brief introduction to probability and information, Shannon's Noiseless Coding Theorem and the Kraft-McMillan inequality can be discussed, along with Huffman coding and other efficient coding schemes.

The larger part of the course starts with Shannon's Noisy Coding Theorem, and aims at constructing good error-correcting codes. The historical development of error-correcting codes starts with Hamming codes, and continues with other linear codes such as Reed-Solomon, Bose-Chaudhuri-Hocquenghem (BCH), and Goppa codes. Construction of these codes (not to mention efficient encoding/decoding algorithms) requires that we develop basic facts about finite fields and linear algebra over them. We'll also look at limits on the best-possible behavior of codes: the Hamming (sphere-packing) bound, the Gilbert-Varshamov bound, the Singleton bound, etc. If time permits, we'll introduce geometric Goppa codes attached to algebraic curves (among the best codes known), turbo codes, and other special topics.
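To give a taste of the linear-algebra-over-finite-fields flavor of the course, here is a minimal sketch (not from the course text) of the classic [7,4] Hamming code over GF(2), using the standard systematic generator matrix G = [I | A] and parity-check matrix H = [A^T | I]; the particular matrices and function names below are illustrative choices, not the book's notation.

```python
# [7,4] Hamming code over GF(2): encodes 4 message bits into 7 bits
# and corrects any single bit error.  Matrices are the standard
# systematic choice G = [I | A], H = [A^T | I] (an assumption here,
# not taken from the course text).
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(msg):
    # codeword = msg * G, with arithmetic mod 2
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def decode(word):
    # syndrome = H * word^T mod 2; zero syndrome means no detected error
    syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H]
    if any(syndrome):
        # a nonzero syndrome equals the column of H at the error position
        i = list(zip(*H)).index(tuple(syndrome))
        word = word[:i] + [1 - word[i]] + word[i + 1:]
    return word[:4]  # systematic form: first 4 bits carry the message

msg = [1, 0, 1, 1]
codeword = encode(msg)           # [1, 0, 1, 1, 0, 1, 0]
corrupted = list(codeword)
corrupted[2] ^= 1                # flip one bit in transit
assert decode(corrupted) == msg  # the single error is corrected

# This code is "perfect": it meets the Hamming (sphere-packing) bound
# with equality, since 2^4 * (1 + 7) = 2^7.
assert 2**4 * (1 + 7) == 2**7
```

The syndrome-lookup step above is exactly the sphere-packing picture: the 16 codewords, each surrounded by its Hamming ball of radius 1, tile all 128 vectors of GF(2)^7 with no overlap.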