How sounds are represented digitally

Microphones record sound waves by converting changes in amplitude to changes in electrical voltage. Samples of the voltage are taken at a regular rate and held steady through a process known as sample and hold; then, using a device called an analog-to-digital converter, these voltage samples are converted into numerical representations of the amplitude, a process known as quantization. To reproduce the sound, these numerical representations can be passed through a digital-to-analog converter, a low-pass filter, an amplifier, and finally a speaker. The low-pass filter removes any frequencies above a certain threshold, because the computer can only accurately represent frequencies up to half of the sampling rate, which places a limit on what sounds a computer can reproduce. This was proved by a theorist named Harry Nyquist, and the result is known as the Nyquist theorem.
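
To make the sampling limit concrete, here is a minimal sketch in Python (the 8000 Hz rate and the sample_sine helper are illustrative assumptions, not part of any audio API). A 1000 Hz tone lies below the Nyquist limit of 4000 Hz and is captured faithfully, while a 7000 Hz tone lies above it and yields exactly the samples of an inverted 1000 Hz tone, so the two are indistinguishable once sampled:

    import math

    SAMPLE_RATE = 8000           # samples per second (an assumed rate for illustration)
    NYQUIST = SAMPLE_RATE / 2    # highest frequency the samples can represent

    def sample_sine(freq_hz, n_samples, rate=SAMPLE_RATE):
        """Take amplitude samples of a sine wave at a regular rate."""
        return [math.sin(2 * math.pi * freq_hz * n / rate) for n in range(n_samples)]

    print(f"Nyquist limit: {NYQUIST:.0f} Hz")
    below = sample_sine(1000, 8)   # below the limit: captured faithfully
    above = sample_sine(7000, 8)   # above the limit: aliases to an inverted 1000 Hz tone
    for b, a in zip(below, above):
        print(f"{b:+.4f}  {a:+.4f}")   # every pair matches except for sign

This ambiguity, called aliasing, is why frequencies above half the sampling rate must be filtered out rather than recorded.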


 * Quantization converts voltage samples into numerical representations, produced by an analog-to-digital converter (ADC) and stored in the computer's memory as strings of binary digits. To be read (or rather 'heard'), the binary strings are processed by a digital-to-analog converter (DAC). Samples processed by the DAC must be played back at the same rate at which they were recorded, or the sound will be reproduced at the wrong pitch and speed.
 * Furthermore, sound quality, or resolution, is affected by the number of bits the computer uses to represent each sample. Fewer bits mean a lower sound resolution, and vice versa. Because each sample must be rounded to the nearest available level, the stored values can only approximate the original signal to a certain degree; using more bits provides finer levels and reduces this quantization error, as the sketch after this list demonstrates.
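
The quantization step and its error can be sketched in a few lines of Python as well (amplitudes are assumed to be normalized to the range -1.0 to 1.0, and quantize/dequantize are illustrative stand-ins for what an ADC and a DAC do, not real device interfaces):

    import math

    def quantize(sample, bits):
        """Round an amplitude in [-1.0, 1.0] to one of 2**bits levels,
        as an analog-to-digital converter (ADC) would."""
        levels = 2 ** (bits - 1)          # signed codes run from -levels to levels - 1
        return max(-levels, min(levels - 1, round(sample * levels)))

    def dequantize(code, bits):
        """Map a stored integer code back to an amplitude, as a DAC would."""
        return code / 2 ** (bits - 1)

    # Samples of a 1000 Hz sine wave taken at an assumed 8000 Hz rate.
    signal = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(64)]

    # Fewer bits mean coarser levels and larger error, and vice versa.
    for bits in (4, 8, 16):
        worst = max(abs(s - dequantize(quantize(s, bits), bits)) for s in signal)
        print(f"{bits:2d} bits: worst-case quantization error = {worst:.6f}")

Each additional bit doubles the number of available levels and roughly halves the worst-case quantization error, which is why higher bit depths sound cleaner.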