It's another topic that comes after the concept of digital transmission (which, again, is either serial or parallel).
No, it’s the fundamental question that comes BEFORE the concept of digital transmission. Before we can consider transmission, we obviously have to know what we are going to transmit. Once we know that, we can design a transmission system that meets (or exceeds) that requirement. This is what Shannon’s theory is all about: what and how much data do we need in order to guarantee communication of information, i.e. what “Channel Capacity” is required. Only then can we decide how best to transmit it.
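To put a number on “Channel Capacity”: Shannon’s (Shannon-Hartley) theorem gives the maximum error-free bit rate of a noisy channel as C = B·log2(1 + S/N). A minimal sketch, with purely illustrative bandwidth and SNR figures:

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: maximum error-free bit rate over a
    noisy channel of a given bandwidth and signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures only: a 1 MHz channel with a 30 dB SNR (1000:1).
snr = 10 ** (30 / 10)
print(f"Capacity: {channel_capacity(1e6, snr) / 1e6:.2f} Mbit/s")  # ~9.97 Mbit/s
```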
The need, type, and amount of error correction can also differ depending on the application.
Absolutely, and that is down to the designers of the protocol used to transfer the data, but again, this will be based on Shannon’s theory. Additionally, there must be some way of detecting errors in the first place, regardless of whether any error correction is applied.
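As a toy example of that detection step (not any particular protocol’s scheme), here’s the usual pattern of appending a CRC so the receiver can tell a frame has been corrupted; what it then does about it, retransmit, correct or conceal, is the protocol designer’s choice:

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 trailer so the receiver can detect corruption."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

frame = frame_with_crc(b"PCM audio block")
assert check_frame(frame)                       # intact frame passes
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert not check_frame(corrupted)               # a single flipped bit is detected
```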
Scratches on a CD being handled in the player, digital transmission over a digital audio cable being handled by the receiving component (with data sent in one stream), or digital transmission over a network cable (with data sent in packets).
Fundamentally, there is no difference. With a CD player, for example, the data is read by the laser and transmitted to a buffer, where errors are detected (and corrected up to a defined tolerance of consecutive errors). The same principle applies to a synchronous stream or an asynchronous/packetised stream.
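A full CIRC (cross-interleaved Reed-Solomon) implementation is well beyond a forum post, but this sketch shows the interleaving idea behind that “tolerance of consecutive errors”: a burst of damage on disc is spread across many codewords, leaving each with few enough errors for the decoder to correct:

```python
def interleave(data: bytes, cols: int) -> bytes:
    """Lay the data out in rows of `cols` bytes, then read column by column,
    so consecutive bytes on disc come from widely separated codewords."""
    assert len(data) % cols == 0
    rows = len(data) // cols
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def deinterleave(data: bytes, cols: int) -> bytes:
    """Inverse mapping: the interleaved stream holds `rows` bytes per column."""
    rows = len(data) // cols
    return bytes(data[c * rows + r] for r in range(rows) for c in range(cols))

original = bytes(range(24))               # six 4-byte codewords
stream = interleave(original, cols=4)     # what goes "on disc"

damaged = bytearray(stream)               # a scratch wipes out 5 consecutive bytes
for i in range(8, 13):
    damaged[i] = 0xFF

recovered = deinterleave(bytes(damaged), cols=4)
for r in range(6):
    word = recovered[r * 4:(r + 1) * 4]
    errs = sum(a != b for a, b in zip(word, original[r * 4:(r + 1) * 4]))
    print(f"codeword {r}: {errs} error(s)")   # never more than 1 per codeword
```

Each codeword ends up with at most one damaged byte, which a Reed-Solomon code can then correct.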
Now that network transmissions are in the gigabits per second and are sent in packets, the need for error correction for audio is greatly diminished.
No, it’s the same. In theory, the faster the transmission, the more errors are likely to occur and therefore the more error correction is required; in practice, faster transmission protocols incorporate specifications that mitigate the higher likelihood of errors.
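Back-of-envelope arithmetic (assumed figures) for why faster links specify stronger coding: at a fixed raw bit error rate, the expected number of errors per second scales linearly with the bit rate, which is why, for example, 10GBASE-T builds LDPC forward error correction into the spec:

```python
# Assumed raw BER on the same physical medium, just to show the scaling.
ber = 1e-10
for rate_bps, name in [(100e6, "100 Mb/s"), (1e9, "1 Gb/s"), (10e9, "10 Gb/s")]:
    print(f"{name}: ~{rate_bps * ber:.2f} expected raw bit errors/second")
```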
With streaming video over the internet, I've noticed the program might start with a fuzzy or pixelated picture until there's enough of a buffer for the full HD or UHD image to reach optimal quality.
As far as I’m aware, this occurs because the receiver and transmitter “negotiate” an image quality based on connection speed, the transmitter/server initially supplying a low-res image until a sufficient “Channel Capacity” is established/confirmed. In many cases this can be re-negotiated; for example, the system may detect a fall in “channel capacity” and reduce the image resolution. TBH, I’m not entirely certain of the techniques employed by the various streaming services; it depends on whether they think their consumers would rather see a low-res image or see nothing at all and wait.
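I can’t speak for any specific service, but adaptive-bitrate streaming (DASH/HLS) broadly works like this hypothetical sketch, where the rendition labels and bitrates are made up for illustration: the player measures throughput over recent segments and picks the highest rendition the measured capacity can sustain, re-negotiating as conditions change:

```python
# Hypothetical adaptive-bitrate (ABR) ladder; bitrates are illustrative.
LADDER = [                  # (label, required throughput in bits/second)
    ("240p",     700_000),
    ("720p",   3_000_000),
    ("1080p",  6_000_000),
    ("2160p", 16_000_000),
]

def pick_rendition(measured_bps: float, headroom: float = 0.8) -> str:
    """Choose the highest rendition that fits within a safety margin
    of the throughput measured over recently downloaded segments."""
    budget = measured_bps * headroom
    best = LADDER[0][0]                  # always fall back to the lowest rung
    for label, required in LADDER:
        if required <= budget:
            best = label
    return best

print(pick_rendition(2_000_000))     # slow initial estimate -> "240p"
print(pick_rendition(25_000_000))    # capacity confirmed -> "2160p"
```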
The DD+ audio is clear immediately.
That’s because DD+ uses an already highly optimised codec, which can’t easily be reduced in size and doesn’t require much bandwidth/channel capacity compared to HD or UHD video anyway.
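For a sense of scale, with assumed typical published bitrates (not exact for any one service):

```python
# Rough, assumed typical bitrates just to show the scale difference.
ddplus_bps = 768_000          # Dolby Digital Plus at a high-end streaming rate
uhd_video_bps = 15_000_000    # typical UHD video stream
print(f"Audio is ~{100 * ddplus_bps / uhd_video_bps:.0f}% of the video rate")
# -> Audio is ~5% of the video rate
```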
G