Digital Transport Quality (and why it *may* matter).
Apr 21, 2023 at 5:09 AM Post #91 of 135
CDs sound better if you put them in the freezer!
I have actually tried that! I have bought a lot of CDs used at a low price, and sometimes these CDs have scratches severe enough to cause playback issues. Sometimes the "toothpaste treatment" is enough to fix the issue, but not always. So I once tried freezing a disc to see if it does anything, but it did not.
 
Apr 21, 2023 at 5:26 AM Post #92 of 135
I am super sensitive to playing back 44 kHz at 'non-even sample' multiples (like 48 kHz).
You are sensitive to low-quality sample rate conversion. Arbitrary sample rate conversions (e.g. from 44100 Hz to 47133.91 Hz) can be done totally transparently to human ears; I have myself written MATLAB code for this, using sums of sinc functions.
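For illustration, a minimal Python/NumPy sketch of the same idea (this is not the MATLAB code mentioned above, just the Whittaker-Shannon sum of sinc functions; the rates and test tone are arbitrary examples):

import numpy as np

def resample_sinc(x, fs_in, fs_out):
    # Evaluate the band-limited reconstruction x(t) = sum_n x[n]*sinc(t*fs_in - n)
    # at the output sample instants. O(N*M), fine for short demo signals;
    # for downsampling, a low-pass filter would be needed first.
    n = np.arange(len(x))
    t_out = np.arange(int(len(x) * fs_out / fs_in)) / fs_out
    return np.array([np.sum(x * np.sinc(t * fs_in - n)) for t in t_out])

# Convert 10 ms of a 1 kHz tone from 44100 Hz to an arbitrary rate.
fs_in, fs_out = 44100, 47133.91
x = np.sin(2 * np.pi * 1000 * np.arange(441) / fs_in)
y = resample_sinc(x, fs_in, fs_out)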
 
Apr 21, 2023 at 7:50 AM Post #93 of 135
It's another topic that's after the concept of digital transmission (which again, is either serial or parallel).
No, it’s the fundamental question that’s BEFORE the concept of digital transmission. Before we can consider transmission, we obviously have to know what we are going to transmit. Once we know that, then we can design a transmission system that meets (or exceeds) that requirement. This is what Shannon’s theory is all about: what and how much data do we need in order to guarantee communication of information, e.g. what “Channel Capacity” is required. Only then can we address the transmission itself.
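As a minimal illustration of “Channel Capacity” (Python, with made-up example numbers), the Shannon-Hartley theorem gives the maximum rate at which information can be communicated reliably over a noisy channel:

import math

def capacity_bps(bandwidth_hz, snr_db):
    # Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear ratio.
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

# A hypothetical 1 MHz channel at 30 dB SNR: roughly 9.97 Mbit/s capacity.
print(capacity_bps(1e6, 30))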
The need, type, and amount of error correction can also be different depending on application.
Absolutely, and that is down to the designers of the protocol used to transfer the data but again, this will be based on Shannon’s theory. Additionally, there must be some way of detecting errors in the first place, regardless of the error correction (or lack of it).
Scratches on a CD are handled in the player; the digital transmission of a digital audio cable is handled by the receiving component (with data sent in one stream); and the digital transmission of a network cable has data sent in packets.
Fundamentally, there is no difference. With a CD player, for example, the data is read by the reader/laser and transmitted to a buffer, where errors are detected (and corrected up to a defined tolerance of consecutive errors). Same principle with a synchronous stream or an asynchronous/packetised stream.
Now that network transmissions are in the gigabits, and are sent in packets, the need for error correction for audio is greatly diminished.
No, it’s the same. In theory, the faster the transmission the more errors are likely to occur, and therefore the more error correction is required; in practice, faster transmission protocols incorporate specifications to mitigate the higher likelihood of errors.
With streaming video over internet, I've noticed the program might start with a fuzzy or pixelated picture until there's enough of a buffer for the full HD or UHD image to get to optimal quality.
As far as I’m aware, this occurs as the receiver and transmitter “negotiate” an image quality based on connection speed: the transmitter/server initially supplies a low-res image until a sufficient “Channel Capacity” is established/confirmed. In many cases this can be re-negotiated, for example the system may detect a fall in “channel capacity” and reduce the image resolution. TBH, I’m not entirely certain of the techniques employed by the various streaming services; it depends on whether they think their consumers would rather see a low-res image or see nothing at all and wait.
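To make that negotiation concrete, a toy Python sketch of adaptive bitrate selection (the rendition ladder and safety margin are hypothetical, not any particular service's):

LADDER = [(426_000, "240p"), (1_500_000, "480p"),
          (5_000_000, "1080p"), (16_000_000, "2160p")]

def pick_rendition(measured_bps, safety=0.8):
    # Choose the highest rendition whose bitrate fits the measured throughput,
    # keeping a safety margin; the player re-runs this as throughput changes.
    best = LADDER[0][1]
    for bitrate, name in LADDER:
        if bitrate <= measured_bps * safety:
            best = name
    return best

print(pick_rendition(2_000_000))    # -> "480p"
print(pick_rendition(20_000_000))   # -> "2160p"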
The DD+ audio is clear immediately.
That’s because DD+ uses an already highly optimised codec which can’t easily be reduced in size and doesn’t require much bandwidth/channel capacity compared to HD or UHD video anyway.

G
 
Apr 21, 2023 at 8:23 AM Post #94 of 135
I have actually tried that! I have bought a lot of CDs used at a low price, and sometimes these CDs have scratches severe enough to cause playback issues. Sometimes the "toothpaste treatment" is enough to fix the issue, but not always. So I once tried freezing a disc to see if it does anything, but it did not.
A PC or laptop with a decent CD-ROM drive and good ripping software can recover all but the worst damaged CDs.
With my server, most CDs will rip in a few minutes; a few may take longer as the software slows the drive from higher speeds, and the odd one or two will sit with the drive working away for 10 minutes or more as it slows down to 1x in its attempts to re-read.
As a last resort, some 2000-grit or finer wet-and-dry paper, used wet on a sheet of glass, can yield a flat but slightly duller surface while removing the slightly raised edges of severe scratches. I managed to save a few discs destined for the bin that way, when what's on the CD is worth the effort.
 
Apr 21, 2023 at 9:48 AM Post #95 of 135
No, it’s the fundamental question that’s BEFORE the concept of digital transmission. Before we can consider transmission, we obviously have to know what we are going to transmit. Once we know that, then we can design a transmission system that meets (or exceeds) that requirement. This is what Shannon’s theory is all about: what and how much data do we need in order to guarantee communication of information, e.g. what “Channel Capacity” is required. Only then can we address the transmission itself.
In practice, with a device that is receiving a digital transmission, its error correction happens after it receives the data. I was pointing out that transmission and error correction are different concepts. Granted, error correction schemes have the transmitting device send code for the receiver to detect errors, but this is part of the data, not a component of the cable. There are also separate error correction systems during the reading stage of a CD player, or in some computer memory chips.
Fundamentally, there is no difference. With a CD player for example, the data is read by the reader/laser transmitted to a buffer where errors are detected (and corrected up to a defined tolerance of consecutive errors). Same principle with a synchronous stream or an asynchronous/packetised stream.
Fundamentally, there are different error correction schemes, using different error-correcting algorithms. The error correction in a CD player uses Cross-interleaved Reed-Solomon Code. HDMI is asynchronous like a network cable and uses BCH error correction. Forward error correction is a standard error correction scheme.
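As a concrete (if much simpler) illustration of forward error correction, here is a Hamming(7,4) round trip in Python: 4 data bits gain 3 parity bits before transmission, and the receiver corrects any single flipped bit. CIRC and BCH are far more powerful codes, but the principle is the same:

import numpy as np

G = np.array([[1,0,0,0,1,1,0],    # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],    # parity-check matrix
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2           # parity embedded before transmission
codeword[2] ^= 1                  # the channel flips one bit

syndrome = H @ codeword % 2       # receiver computes the syndrome...
if syndrome.any():                # ...which points at the flipped bit
    pos = np.where((H.T == syndrome).all(axis=1))[0][0]
    codeword[pos] ^= 1
print(codeword[:4])               # -> [1 0 1 1], the original data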
As far as I’m aware, this occurs as the receiver and transmitter “negotiate” an image quality based on connection speed. The transmitter/server initially supplying a low res image quality until a sufficient “Channel Capacity” is established/confirmed. In many cases this can be re-negotiated, for example the system may detect a fall in “channel capacity” and reduce image resolution. TBH, I’m not entirely certain of the techniques employed by various streaming services. It depends on whether they think their consumers would rather see a low res image or see nothing at all and wait.

That’s because DD+ uses an already highly optimised codec which can’t easily be reduced in size and doesn’t require much bandwidth/channel capacity compared to HD or UHD video any way.

G
You don't think h.264 and h.265 video streams are highly optimised codecs for video? When it comes to the availability of video content based on specs, that's decided by the service's application. Netflix will make 4K Dolby Vision and Atmos content available on my Apple TV because my Apple TV is set up for it. 4K content used to be unavailable in internet browsers, but eventually arrived (when their HTML5 players began supporting HDR/Atmos codecs). The reason video takes longer to buffer (and why the initial packets off the internet may show noticeable artifacts) is that a video stream is inherently much larger than an audio stream. My TV still says it's starting HDR or Dolby Vision the moment the video begins, so it's not first setting the video to SDR 1080p. I believe the "handshaking" of stream resolution you're describing is what happens when you play a YouTube video with "auto" resolution. You can still override it if you want: there's a setting to force a particular resolution, and if your internet speed isn't good enough you'll see artifacts. If your computer isn't fast enough at decoding and processing the video, you'll get a diminished frame rate.
 
Apr 21, 2023 at 11:09 AM Post #96 of 135
In practice, for the device that is receiving a digital transmission, error correction happens after it receives the transmission.
If error correction is required then of course it must occur in the receiving device, as the transmitting device obviously cannot know if there will be any downstream errors after it has transmitted the data. The data being transmitted does have to be prepared for error correction before it leaves the transmitting device though, so that the receiving device can detect errors; where necessary it also includes error code data to enable the receiver to carry out its own error correction.
I was pointing out that transmission and error correction are different concepts.
And Shannon pointed out that they aren’t; they’re integral.
The error correction in a CD player uses Cross-interleaved Reed-Solomon Code.
Good example. RS code allows both error detection and correction in the receiving device but is embedded into the data before transmission. A CD carries audio data at the rate of 1,411,200 bps (16 x 44,100 x 2), but if we design a transmitter with only that “channel capacity” it will fail, because in addition to the audio data there needs to be “redundancy”: the Reed-Solomon code to detect and correct errors. The “Shannon Limit” dictates the minimum “channel capacity” depending on the level of redundancy required, e.g. the expected worst-case amount of noise/interference.
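Working those numbers through (standard Red Book figures, shown here as Python arithmetic):

audio_bps = 16 * 44_100 * 2            # 1,411,200 bit/s of audio data
frame_audio_bytes = 24                 # audio bytes per CD frame
frame_parity_bytes = 8                 # CIRC (Reed-Solomon) parity bytes per frame
circ_rate = frame_audio_bytes / (frame_audio_bytes + frame_parity_bytes)
print(circ_rate)                       # 0.75: a quarter of the coded stream is redundancy
print(int(audio_bps / circ_rate))      # 1,881,600 bit/s minimum, before subcode,
                                       # sync and EFM raise the raw channel rate further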
Forward error correction is a standard error correction scheme.
Very true, but aren’t you arguing against yourself? Forward error correction is where code is embedded before transmission, such as your Reed-Solomon example. And even where FEC isn’t employed, code still has to be embedded (before transmission) for error detection, a parity bit and/or CRC for example. While I don’t know the protocols used in every device, say between the RAM and CPU cache, all the audio protocols I’m aware of include at least error detection code.
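A toy example of such embedded error-detection code (Python; CRC-8 with the common 0x07 polynomial, purely illustrative):

def crc8(data: bytes, poly: int = 0x07) -> int:
    # Bitwise CRC-8, most significant bit first.
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

msg = b"PCM frame"
sent = msg + bytes([crc8(msg)])                    # sender embeds the check value
received = bytearray(sent)
received[0] ^= 0x01                                # a single-bit transmission error
print(crc8(bytes(received[:-1])) == received[-1])  # False: error detected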
You don't think h.264 and h.265 video streams are highly optimised codecs for video?
Of course, but a streaming service can easily switch between, say, an h.265 stream carrying a 4K image and one carrying a 720p image; you can’t do that with DD+.

G
 
Apr 21, 2023 at 11:56 AM Post #97 of 135
And Shannon pointed out that they aren’t; they’re integral.
It seems you are taking me out of context here. There is no error correction happening in the cable itself. I don't know how many times I have to restate that a data stream is binary. With a serial connection it's one binary stream; a parallel connection has more than one lane. Error correction is fundamentally different from the data connection, as it deals with the actual data. They may both be used in telecommunication, but they are not always integral to each other and they are not the same thing (for example, parts of a data bus are data connections without any error correction).
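To make that distinction concrete, a purely illustrative Python sketch of one byte sent serially (one lane, eight ticks) versus in parallel (eight lanes, one tick); note that neither involves any error correction:

byte = 0xA5
serial_stream = [(byte >> i) & 1 for i in range(7, -1, -1)]    # one lane, MSB first
print(serial_stream)        # [1, 0, 1, 0, 0, 1, 0, 1]

parallel_lanes = {f"D{i}": (byte >> i) & 1 for i in range(8)}  # eight lanes at once
print(parallel_lanes)       # {'D0': 1, 'D1': 0, ..., 'D7': 1}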
Very true, but aren’t you arguing against yourself? Forward error correction is where code is embedded before transmission, such as your Reed-Solomon example. And even where FEC isn’t employed, code still has to be embedded (before transmission) for error detection, a parity bit and/or CRC for example. While I don’t know the protocols used in every device, say between the RAM and CPU cache, all the audio protocols I’m aware of include at least error detection code.
No, it seems you want to argue semantics. I maintained that there are different error correction schemes and that they happen in different places. The error correction happening in a CD player is for potential data loss on the optical disc. That's separate from what happens when sending data through a digital cable, and separate again from the receiving device (which is itself receiving data and forming a "bit perfect" data stream). If you want to continue arguing that error correction schemes also need a "handshake" with the transmitting device... that's putting the cart before the horse, and pointless to argue.

As for computers, most consumer RAM does not have error correction (ECC); it is pretty standard with servers, though.

Of course, but a streaming service can easily switch between, say, an h.265 stream carrying a 4K image and one carrying a 720p image; you can’t do that with DD+.

G
Again, that's not what's happening in my example. h.265 is a recent codec created for 4K HDR, which requires quite a bit more data than 1080p video. The signal you're getting is 4K (as I indicated, the video starts in HDR). With video protocols there can be artifacts in the picture when the full data rate hasn't been reached yet: it's not that the image is downgraded to 1080p but, as with progressive JPEG, the whole image is formed progressively, and where data is missing, blocks of pixels appear. The handshake that has already happened says "send the 4K video stream instead of the 1080p one". Again, the one main example I can think of where the resolution itself changes is the "auto" setting on YouTube.
 
Apr 21, 2023 at 12:34 PM Post #98 of 135
It seems you are taking me out of context here.
As I was attempting to explain Shannon’s theory, how it provides immunity from transmission noise/interference and as that’s what you quoted when you responded to me, I naturally assumed that was the context, have you changed the context?
There is no error correction happening in the cable itself.
Where did I say there was?
I don't know how many times I have to restate that a data stream is binary.
No idea why you would feel the need to do that; in fact, I don’t know why you would state it even once to me, seeing as I’m the one referencing the “Mathematical Theory of Communication”! Maybe you haven’t read it, don’t understand its implications or don’t realise it’s the fundamental basis of digital communications?

This doesn’t seem to be going anywhere. I’ve explained, and Shannon (and Kotelnikov) proved, immunity from noise; you can carry on arguing against this on your own.

G
 
Apr 21, 2023 at 12:53 PM Post #99 of 135
As I was attempting to explain Shannon’s theory, how it provides immunity from transmission noise/interference and as that’s what you quoted when you responded to me, I naturally assumed that was the context, have you changed the context?
Refer back to post #88: I explained what a digital signal is. It's a binary stream. You have continued to conflate it with error correction, which is an entirely separate topic that deals with the data that is in a binary stream. They may both be subjects used in telecommunications, but they are not the same thing!
This doesn’t seem to be going anywhere. I’ve explained, and Shannon (and Kotelnikov) proved, immunity from noise; you can carry on arguing against this on your own.

G
You're right, this is going nowhere, as you have falsely maintained that error correction is always integral to a digital connection. Refer back to my example of consumer RAM not having ECC while server RAM does.
 
Apr 21, 2023 at 1:32 PM Post #100 of 135
Refer back to post #88: I explained what a digital signal is. It's a binary stream. You have continued to conflate it with error correction, which is an entirely separate topic that deals with the data that is in a binary stream.
That makes no sense. Are you talking about a binary stream that’s just random zeros and ones which do not represent information? If so, what’s the point of such a stream? If that binary stream does represent information then it has to follow a transmission protocol, which in turn guarantees immunity from noise/interference as per Shannon’s theory. I don’t know all the protocols out there, but certainly all the audio protocols require at least error detection.
You're right, this is going nowhere, as you have falsely maintained that error correction is always integral to a digital connection.
It is going to go nowhere if you’re just going to make up falsehoods about what I “falsely maintained”! Clearly you haven’t read Shannon’s theory. There are cases, even with audio protocols, which do not need error correction, and there might be cases that don’t even need error detection, but I don’t know of any such protocols; all the audio protocols I’m aware of contain data in the bitstream to detect errors.

G
 
Apr 21, 2023 at 1:43 PM Post #101 of 135
That makes no sense. Are you talking about a binary stream that’s just random zeros and ones which do not represent information? If so, what’s the point of such a stream? If that binary stream does represent information then it has to follow a transmission protocol, which in turn guarantees immunity from noise/interference as per Shannon’s theory. I don’t know all the protocols out there, but certainly all the audio protocols require at least error detection.
Why can't you understand that a digital connection is an electronic connection? It does not do any coding of digital information by itself. Go on and accuse me of not understanding Shannon's theories, of conflating different topics, and of making gross generalizations that may not be accurate (like saying decreased visual quality in streaming video comes from a reduced stream resolution).
 
Apr 21, 2023 at 2:05 PM Post #102 of 135
Why can't you understand that a digital connection is an electronic connection?
Is an optical connection “electronic”? Why can’t you understand that it doesn’t matter, as long as it complies with the specifications of the protocol?
It does not do any coding of digital information.
Again, where did I say a cable/connection codes digital information?
Go on and accuse me of not understanding Shannon's theories and conflating different topics.
Go on and falsely accuse me for a 3rd time of claiming cables/connectors code digital information, or that the information being transmitted isn’t binary, while failing to consider the cables/connectors together with the protocol.

G
 
Apr 21, 2023 at 2:18 PM Post #103 of 135
Is an optical connection “electronic”? Why can’t you understand that it doesn’t matter, as long as it complies with the specifications of the protocol?
Is an optical connection not a binary stream of data??
Again, where did I say a cable/connection codes digital information?
You keep conflating it with error correction schemes.
Go on and falsely accuse me for a 3rd time of claiming cables/connectors code digital information, or that the information being transmitted isn't binary, while failing to consider the cables/connectors together with the protocol.

G
Did either of us say that digital information isn't binary? You have previously stated that the binary stream and an error correction system are always integral. They are not, which is why there's a fundamental difference between data transmission (a serial or parallel connection) and the interpretation of the data. Since you refuse to acknowledge these differences, it is pointless to engage further.

https://www.cdnetworks.com/enterpri...Data Transmission?,part of a wireless network.

"As we know, data transmission methods can refer to both analog and digital data but in this guide, we will be focusing on digital modulation. This modulation technique focuses on the encoding and decoding of digital signals via two main methods parallel and serial transmission."
 
Apr 21, 2023 at 3:03 PM Post #104 of 135
Is an optical connection not a binary stream of data??
Why ask? What protocols does a binary stream of digital audio data through an optical connection have to comply with?
You keep conflating it with error correction schemes.
You keep failing to consider cables/connectors together with the digital protocol specifications.
Did either of us say that digital information isn't binary?
Then why did you just ask if optical is binary? What’s the obvious answer to this question?
You have previously stated that the binary stream and an error correction system are always integral.
No, I stated that the binary stream always contains error correction or detection code and therefore obviously it is “integral”. There is a condition where it may not be, Shannon’s “Noiseless Channel” but I’m unaware of any circumstances where that is applicable in practice and it’s certainly not applicable to digital audio data transmission.
They are not, which is why there's a fundamental difference between data transmission (a serial or parallel connection) and the interpretation of the data.
How is it data transmission if it’s the wrong data? If the “interpretation of data” is incorrect then you have not transmitted the data/information! That’s the whole point of digital transmission in the first place.

G
 
Apr 21, 2023 at 3:17 PM Post #105 of 135
Why ask? What protocols does a binary stream of digital audio data through an optical connection have to comply with?
Why ask? Because you continue to conflate data transmission (whether serial or parallel, synchronous or asynchronous) with the actual data stream itself.

Here's another link documenting types of data transmission:
https://ecomputernotes.com/computernetworkingnotes/communication-networks/data-transmission

You keep failing to consider cables/connectors together with the digital protocol specifications.
No, I'm not failing; it is you who are conflating the connection cable with the protocols of the system.
No, I stated that the binary stream always contains error correction or detection code and therefore obviously it is “integral”. There is a condition where it may not be, Shannon’s “Noiseless Channel” but I’m unaware of any circumstances where that is applicable in practice and it’s certainly not applicable to digital audio data transmission.
No again, it's not necessarily in a binary data stream. It may be part of the audio protocols for a TOSLINK cable, but not all digital connections employ error correction. I previously cited ECC vs non-ECC computer memory, and USB has transfer modes that don't use error correction.
 
