Starting in the late 70s and early 80s we spent a lot of time doing MOS (mean opinion score) testing of analog-to-digital speech conversion techniques. 64 kbit/s PCM (pulse code modulation) was the standard format used by AT&T and GTE (now Verizon) at the time. It samples the voice circuit 8,000 times per second and generates an 8-bit digital sample for each interval, thus 64 kbit/s.
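The arithmetic is simple enough to show in a few lines. This is just an illustration of the numbers above; the 1 kHz test tone and the uniform quantizer are stand-ins I picked, since real telephony PCM actually uses logarithmic mu-law or A-law companding rather than uniform steps:

```python
import math

SAMPLE_RATE = 8000   # voice circuit sampled 8,000 times per second
BITS_PER_SAMPLE = 8  # one 8-bit sample per interval

# Quantize one second of a 1 kHz test tone to 8-bit samples.
# (Uniform quantization here for simplicity; real telephony PCM
# uses mu-law or A-law companding.)
samples = [
    round(127 * math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

bitrate = SAMPLE_RATE * BITS_PER_SAMPLE
print(bitrate)  # 64000 bit/s, i.e. 64 kbit/s
```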
A 32 kbit/s delta mod codec was also popular with some transmission providers. This sampled voice 32,000 times per second and produced a simple 0 or 1 depending upon whether the waveform was increasing or decreasing in amplitude during that sample interval. Some scaling was done in the encoder and decoder for longer runs of consecutive 1s or 0s, as these represented signals with a higher slope. One system that I worked on would not transmit data during gaps in speech and, on the receiving end, would fill the gaps with background noise. Over a number of circuits, the savings from not transmitting the dead space between words allowed more circuits to share the same aggregate data link. A form of compression.
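The adaptive delta mod idea can be sketched in a few lines. This is a hypothetical toy version, not the actual codec from those systems; the growth factor and step bounds are made-up parameters, but it shows the 1-bit-per-sample stream and the step scaling on runs of identical bits:

```python
def delta_mod_encode(samples, step=1.0, growth=1.5, min_step=1.0):
    # One bit per sample: is the waveform above or below the running
    # estimate?  Runs of identical bits mean we are chasing a steep
    # slope, so the step size grows; alternating bits shrink it back.
    bits = []
    estimate = 0.0
    for x in samples:
        bit = 1 if x > estimate else 0
        bits.append(bit)
        if len(bits) >= 2 and bits[-1] == bits[-2]:
            step *= growth                      # steep slope: scale up
        else:
            step = max(min_step, step / growth) # tracking: scale down
        estimate += step if bit else -step
    return bits

def delta_mod_decode(bits, step=1.0, growth=1.5, min_step=1.0):
    # The decoder rebuilds the same estimate/step state purely from
    # the bit stream, so no side information is needed.
    out = []
    estimate = 0.0
    for i, bit in enumerate(bits):
        if i >= 1 and bits[i] == bits[i - 1]:
            step *= growth
        else:
            step = max(min_step, step / growth)
        estimate += step if bit else -step
        out.append(estimate)
    return out
```

Because encoder and decoder apply the identical adaptation rule, the decoder's reconstruction tracks the encoder's internal estimate exactly.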
These were straight encoders/decoders and made no attempt to compress the speech itself, but they still had pretty good MOS scores.
http://en.wikipedia.org/wiki/Mean_opinion_score
The trouble is that they required more bandwidth than desired to get the signal across, so speech compression algorithms came into use.
G.728 and G.729 were two that were pretty common. G.728 is a low-delay code-excited linear prediction (LD-CELP) algorithm that needs 16 kbit/s but has little processing delay. G.729 is also a code-excited linear prediction algorithm, with higher complexity and higher delay, but it only needs 8 kbit/s.
Unlike 64 kbit/s PCM or the 32 kbit/s delta mod codecs, the G.728 and G.729 speech compression algorithms don't pass arbitrary waveforms very well. In particular they don't work well with dial-up modem or fax modulations, so they require a helper chip designed to demodulate/remodulate the modem signals and bypass the vocoder.
A lot of voice over IP systems use these.
Digital cell phones needed even further compression and things went downhill from there in terms of MOS. Likely not helped by the speakers in the phones themselves.
As someone who spent a lot of time engineering the transmission facilities that carry these signals, I find the shift to voice over IP (VoIP) both interesting and frustrating. When a lot of work has gone into squeezing every last bit/s out of a transmission facility or vocoder, it hurts to see the huge amount of overhead consumed by the IP, UDP, and RTP headers and the framing of the voice into packets. For example, G.729, which requires only 8 kbit/s by itself, is bloated up to 31.2 kbit/s on the wire. But this can be reduced through header compression.
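The 31.2 kbit/s figure works out if you assume 20 ms of speech per packet carried over Ethernet. A back-of-the-envelope sketch, where the packetization interval and the Ethernet framing overhead are my assumptions rather than anything stated above:

```python
# Back-of-the-envelope VoIP overhead for G.729 (assumed figures).
PAYLOAD_BYTES = 20              # 20 ms of G.729 speech at 8 kbit/s
IP_UDP_RTP_BYTES = 20 + 8 + 12  # IPv4 + UDP + RTP headers
ETHERNET_BYTES = 18             # Ethernet II header + FCS
PACKETS_PER_SECOND = 50         # one packet every 20 ms

frame_bytes = PAYLOAD_BYTES + IP_UDP_RTP_BYTES + ETHERNET_BYTES
wire_kbps = frame_bytes * 8 * PACKETS_PER_SECOND / 1000
print(wire_kbps)  # 31.2 kbit/s to carry an 8 kbit/s codec

# cRTP header compression can shrink the 40 IP/UDP/RTP bytes to ~2:
crtp_kbps = (PAYLOAD_BYTES + 2 + ETHERNET_BYTES) * 8 * PACKETS_PER_SECOND / 1000
print(crtp_kbps)  # 16.0 kbit/s
```

Note the overhead is fixed per packet, so longer packetization intervals also cut the rate, at the cost of added delay.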
I moved a year ago to a new house and finally had to break down and give up my old copper POTS service for VoIP over cable. It's working pretty well, but even with my aging hearing I can tell the difference, though mostly from the occasional dropped packet and less from the voice quality itself.