castleofargh
Sound Science Forum Moderator
> imo much of this "under audible threshold" could be compared to high-hertz/high-FPS monitors... people said 30 fps (24, actually) is enough, but people are still seeing differences between 120 vs 240 fps, maybe even between 240 fps and higher.
>
> kinda wondering if there were a lot of studies suggesting "24 fps is all we need" (probably, to be honest), which would definitely show that "studies" don't state "facts" like you guys like to suggest.

No, it could not. The only parallel is that all sensory systems have limits and constraints. It's a can of worms anyway: the FPS of the video and the refresh rate of the screen are two independent variables, and fast refresh rates usually insert non-image frames between pictures (the dumbest way being a gray frame to reduce ghosting). Some systems will interpolate pictures between frames (effectively changing the FPS, but also creating frames that never existed in the recorded video), while others will just repeat the same frame until it's time for the next one (both sketched below). Other variables come into play as a result of all those differences in implementation, like luminosity and dynamic range changing as the refresh rate increases. Thinking we can draw conclusions about the impact of a higher refresh rate (or higher FPS) based on how we happen to feel while experiencing two settings, or two entirely different monitors, is just a case of ignoring all the variables until we can correlate what we want to correlate. TBH, a lot of audio beliefs about the audibility of certain variables rely on that very same logical tunnel vision and cherry-picking of whichever variable we want to believe is relevant to our experience.
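To make the repeat-versus-interpolate distinction concrete, here's a minimal toy sketch (not how any real display pipeline is implemented) mapping 24 fps content onto a 120 Hz panel, with plain numbers standing in for whole images:

```python
def repeat_frames(frames, source_fps=24, refresh_hz=120):
    """Show each source frame until the next one is due: no new images are created."""
    ticks = len(frames) * refresh_hz // source_fps
    return [frames[tick * source_fps // refresh_hz] for tick in range(ticks)]

def interpolate_frames(frames, source_fps=24, refresh_hz=120):
    """Blend neighboring frames: raises the effective FPS, but the blended
    images never existed in the recorded video."""
    ticks = (len(frames) - 1) * refresh_hz // source_fps
    out = []
    for tick in range(ticks):
        i = tick * source_fps // refresh_hz        # index of the earlier source frame
        frac = tick * source_fps / refresh_hz - i  # how far we are toward the next one
        out.append((1 - frac) * frames[i] + frac * frames[i + 1])
    return out

video = [0.0, 1.0, 2.0]           # three "frames" of 24 fps content
print(repeat_frames(video))       # each image held for 5 refreshes: same images, same FPS
print(interpolate_frames(video))  # in-between values appear: fabricated frames, higher effective FPS
```

Both pipelines drive the same 120 Hz panel, yet they put different images on it; comparing two monitors "by feel" compares all of that at once, not refresh rate in isolation.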
And of course, eyes are not ears. ^_^
Jitter doesn't have one strict finite value for audibility because different types of jitter will stop being noticeable at different levels under different conditions, but of course there is a limit in time and amplitude for any signal and any listener within a given set of testing conditions. Thinking otherwise only makes someone wrong.
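As a rough feel for the magnitudes involved: in the textbook case of sinusoidal jitter applied to a pure tone, the timing error turns into a pair of sidebands sitting about 20·log10(π·f·Δt) dB below the tone, where Δt is the peak timing error (small-angle approximation; real-world jitter spectra are messier than a single sinusoid, so treat this as arithmetic, not a listening prediction):

```python
import math

def jitter_sideband_db(signal_hz, jitter_s):
    """Level of each first-order sideband relative to the carrier (dB),
    for sinusoidal jitter of peak amplitude jitter_s on a pure tone.
    Small-angle approximation: sideband ~ beta/2, with beta = 2*pi*f*dt."""
    beta = 2 * math.pi * signal_hz * jitter_s  # peak phase deviation in radians
    return 20 * math.log10(beta / 2)

# 1 ns of sinusoidal jitter on a 10 kHz tone:
print(f"{jitter_sideband_db(10_000, 1e-9):.1f} dB")  # about -90 dB below the tone
# The same 1 ns on a 1 kHz tone sits another 20 dB lower:
print(f"{jitter_sideband_db(1_000, 1e-9):.1f} dB")   # about -110 dB
```

The artifact level scales with the signal frequency, which is why people go looking for jitter in the treble in the first place.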
By nature, jitter tends to produce louder artifacts at higher signal frequencies, so people look for it there, but our hearing also loses sensitivity as we get into the treble. What we can barely notice at 2 kHz might need to be some 20 dB louder at 15 kHz before we barely notice it there. Then there is all the masking caused by other signals; because of how our ears work, lower frequencies mask a wider range of nearby higher frequencies than the reverse. That is of course an issue if we try to use music to detect something. For those reasons, and because some things did get tested properly, we do know, for example, that detection thresholds with music are a good deal worse than those obtained under the very best test conditions designed specifically to expose jitter by ear. That trend holds for a vast range of variables and hearing thresholds: music is almost never the best tool for detecting small amplitudes or small timing errors.
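To put a number on the sensitivity loss, here's a small sketch using Terhardt's well-known approximation of the threshold of hearing in quiet (an illustrative model, not a clinical standard; individual thresholds vary a lot, especially in the top octave):

```python
import math

def threshold_in_quiet_db_spl(freq_hz):
    """Terhardt's approximation of the absolute threshold of hearing (dB SPL).
    Illustrative model only; real listeners differ widely."""
    f = freq_hz / 1000.0  # frequency in kHz
    return (3.64 * f ** -0.8
            - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

for f in (2_000, 8_000, 15_000):
    print(f"{f:>6} Hz: {threshold_in_quiet_db_spl(f):6.1f} dB SPL")
# 2 kHz sits around 0 dB SPL while 15 kHz is tens of dB higher,
# so an artifact that is detectable at 2 kHz can be completely
# inaudible at 15 kHz, before masking by the music is even considered.
```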
From all that work, and from measurements of popular devices, it is fair to expect that jitter will rarely be a noticeable issue. Of course, I'm sure we can find some "audiophile" products with antiquated designs and horrible jitter numbers. Don't purchase those; problem solved.
You don't have to accept empty claims about something being inaudible (or audible), even less so when no clear statement of magnitude or supporting evidence is presented, but the notion of a threshold is a fact. Nothing has infinite sensitivity, be it in amplitude or time, and certainly not human sensors.