nvpowerdoc said:
To quote "http://www.hdmi.org/learningcenter/faq.aspx":
One of the greatest advantages of Digital signals over Analog signals with regards to transmissions over long distances and losses due to those distances is the ability of CRC error checking. If all the 1's and 0's don't arrive the way they were sent, Cyclical Redundancy Coding embedded in data sent allows the receiver to "re-formulate" the original content. This is the TRUE BEAUTY of digital signals.
Sorry, much of this is quite off-topic for an NEC forum, but wrong and misunderstood information was swirling in earlier posts.
Evidently whoever wrote that FAQ has a less than complete understanding of digital error detection and correction.
Firstly, a Cyclic Redundancy Check (CRC) doesn't allow the receiver to "re-formulate the original content". A CRC is only useful to tell you whether the data has become corrupted along the way; it's even written right there as "CRC error checking." If I sent you the numbers 3, 8, 17, 22, 55, I could also tell you that they should add up to 105. If they don't, then some part of the message must have been garbled.
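To make that concrete, here's a toy Python sketch (using the stock zlib.crc32 and a made-up payload, not the coding any particular link actually uses):

```python
import zlib

# Sender computes a CRC over the payload and transmits both.
payload = bytes([3, 8, 17, 22, 55])
crc_sent = zlib.crc32(payload)

# Simulate a single corrupted byte in transit.
garbled = bytearray(payload)
garbled[2] ^= 0x10          # the 17 arrives as a 1 (one bit flipped)

# Receiver recomputes the CRC over what actually arrived.
if zlib.crc32(bytes(garbled)) != crc_sent:
    # The CRC tells you THAT the data is bad, not WHAT it was.
    print("corruption detected -- request a resend; nothing to 're-formulate'")
```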
Error correction is possible for digital signals, the simplest building block being parity bits. HDMI does in fact use parity bits at the physical layer to minimize TMDS transmission errors, very similar to the method used between your hard drive and motherboard (especially SCSI). However, this error correction is minimal at best, which is why digital usually either works or it doesn't: it can catch a bit here and there, but once you pass a certain error threshold it's just hopeless.
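Here's a sketch of how fragile a bare parity bit is (this is the generic textbook scheme, not HDMI's exact TMDS coding): one flipped bit is caught, but two flipped bits cancel out and sail right through.

```python
def parity(byte: int) -> int:
    """Even-parity bit: 1 if the byte has an odd number of set bits."""
    return bin(byte).count("1") & 1

word = 0b10110010
p = parity(word)                  # sent alongside the data

one_flip = word ^ 0b00000100      # single-bit error
print(parity(one_flip) != p)      # True: mismatch, error detected

two_flips = word ^ 0b00010100     # double-bit error
print(parity(two_flips) != p)     # False: parity matches, error missed
```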
Digital signals prone to data corruption employ more complicated error correction, and generally speaking more of the bandwidth is consumed by the error-correction data. Let's take DishNetwork/ExpressVu for example, which run 20-40% error-correction overhead:
Each transponder on an Echostar satellite uses a standard Symbol Rate of 20 MS/sec, which works out to 40 Mbit/sec using QPSK at 2 bits per symbol. (The 8PSK used for HD carries 3 bits per symbol at an SR of 21.5 MS/sec, which works out to 64.5 Mbit/sec.)
So... given a raw data stream of 40 Mbit/sec, we subtract all the bandwidth required for Forward Error Correction. Originally they used an FEC rate of 3/4 (meaning 3/4 of the bandwidth was usable), but in the quest to cram more channels into the available bandwidth they've weakened it to 5/6, and ExpressVu even had the balls to push it to 7/8. (And people up here wonder why they have rain fade.)
Next we have to subtract the bandwidth consumed by (204,188) Reed-Solomon error correction: 16 of every 204 bytes are parity, so multiply by 188/204.
Of that original 40 Mbit/sec we are left with:
27.647 Mbit/sec per transponder (FEC 3/4, SR 20.00, QPSK)
30.719 Mbit/sec per transponder (FEC 5/6, SR 20.00, QPSK)
32.255 Mbit/sec per transponder (FEC 7/8, SR 20.00, QPSK)
and of the 64.5 Mbit/sec we have:
39.627 Mbit/sec per transponder (FEC 2/3, SR 21.50, 8PSK)
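If you want to check my math, here's the whole chain (symbol rate x bits per symbol x FEC rate x 188/204) in a few lines of Python:

```python
RS = 188 / 204   # DVB-S Reed-Solomon (204,188) leaves 188/204 of the bits usable

def payload_mbps(symbol_rate_ms: float, bits_per_symbol: int, fec: float) -> float:
    """Usable Mbit/sec after FEC and Reed-Solomon overhead."""
    return symbol_rate_ms * bits_per_symbol * fec * RS

for sr, bps, fec, label in [
    (20.0, 2, 3/4, "QPSK FEC 3/4"),
    (20.0, 2, 5/6, "QPSK FEC 5/6"),
    (20.0, 2, 7/8, "QPSK FEC 7/8"),
    (21.5, 3, 2/3, "8PSK FEC 2/3"),
]:
    print(f"{label}: {payload_mbps(sr, bps, fec):.3f} Mbit/sec per transponder")
```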
Now HDMI v1.3 is pumping 3400 Mbit/sec through each of its three color channels. Do you think they want to waste 20-40% of that bandwidth on error correction?
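For scale, here's the same back-of-the-envelope math run against HDMI 1.3's 340 MHz maximum TMDS clock (10 bits per TMDS character, three data channels):

```python
tmds_clock_mhz = 340                        # HDMI 1.3 maximum TMDS clock
per_channel = tmds_clock_mhz * 10 / 1000    # 10 bits per character -> 3.4 Gbit/sec
total = per_channel * 3                     # three TMDS data channels = 10.2 Gbit/sec

for overhead in (0.20, 0.40):
    print(f"{overhead:.0%} FEC overhead would cost {total * overhead:.2f} Gbit/sec")
```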