Recording Impulse Responses for Speaker Virtualization
Jul 22, 2019 at 10:20 AM Post #31 of 1,817
So I tried it out - flat EQ vs headphone compensation. Headphone compensation renders a more realistic sound than EQing to flat, although it's very close.

I also have some extra info on the IEM for HRTF from oratory that I'll post on a GitHub thread.

What I'm finding so strange is how your brain works. I did a bunch of testing on my laptop in a cafe, comparing the Dolby Headphone Cinema room with a flat EQ against Impulcifer. My tests there made me conclude that I actually preferred the sound signature of Dolby Headphone, and it had a crazy realistic center channel for an artificial HRIR. Impulcifer did have more accuracy in the rear channels, but it just didn't sound right. That was on Friday.

Today, sitting in my actual theater room, where I took the measurements, it's the complete opposite. I don't like the sound of the Dolby Headphone room, and the center channel just isn't right. But Impulcifer sounds amazing, like my actual speakers. I'm even enjoying music on it, which did not sound right in the cafe.

Psychoacoustics is weird. I suspect my brain knows what my theater room sounds like and is correcting for it. But when I take that same sound into an unfamiliar environment, it knows that the center channel should be coming from a distance away too - whereas in the cafe I was on a laptop, so Dolby's HRIR was sufficient for that.

It makes it really hard to do traditional A vs B testing and figure out which you prefer. I just didn't appreciate how important the room was, and even the visual cues from the speakers in the room.
Vision is our dominant sense - it uses a bigger area of our brain. And then maybe there's also habit: how your room sounds, how you're used to that and now consider it to be how it should sound. So it's very likely that you have enough reference of that room to simply want something as close as possible to it instead of other fancy simulations.

I've told that anecdote a few times: I used a laptop on the side feeding a bigger screen, and I was of course sitting in front of the screen using an external keyboard for a few months. I would often just use the tweeters of the laptop when watching a YouTube video or some web radio as background while browsing or ruining my pictures with unskilled post processing. At some point, my brain started to place the sound at the screen, where the guy in the video was appearing - because, I assume, my brain still had more confidence in what I was seeing than in what I was hearing. And there have been so many such examples of brain plasticity, including the guys learning to live with glasses that inverted the image. It's nothing new. What's IMO very interesting is that if I decided to put on my headphone, I never even for a second felt that the sound was off center. My brain somehow had a clear understanding that the laptop sound was a one-off weirdness, and that using the headphone was another system with other audio rules.
All that to say that something as basic as being in a given room, or having real speakers in your field of view, can and probably does play a part in what you hear. Then there may also be something about the reverb in the room - at a cafe I'm guessing you could perceive enough outside noise to capture a sense of the room's acoustics. It's both amazing and super frustrating to me TBH ^_^.
 
Jul 22, 2019 at 10:40 AM Post #32 of 1,817
It's also very loud in a cafe - I was using ANC with good tips, but maybe all the room information in the HRIR is very difficult for the brain to compute in loud environments. The Dolby synthesised HRIR sounds much more like a headphone in a quiet environment.

I've read that neural plasticity is huge in visual and aural acuity. There are those experiments where the shape of the pinnae was artificially changed, which made median-plane localisation very difficult, but after time the brain adapts. But I've never had it demonstrated to my own senses so quickly. I was convinced after hours of testing that I'd found the optimal HRIR. But nope!

I did something kinda similar with my 120" front projector and my 65" OLED TV. Because my TV is in a dedicated, black-velvet-curtain-laden theater room and an electric screen rolls out over it, I can sit 1m away from it and replicate the viewing angle of a real IMAX screen on my TV. After some time, because my brain has no visual cues that I'm watching a TV - it's surrounded by black velvet in a pitch-black room - the TV appears as large as the projection screen. The math works out: a 70 degree viewing angle on a 65" TV is at 1m; on the projector it's at 2m. But in an ordinary living room you see all sorts of cues that you're really just watching a small screen up close. Take those away, and the brain is fooled.

Another similar trick is closing one eye while watching a 2D image in a dark room. Eventually it looks like a 3D movie.

But it's funny - I was very much in the camp of objective measurements, so I always tried to A vs B and, where I could, ABX. But this has really made me realise it's not so simple.

Ultimately I think I'm going to have to go for some HRIRs based on seating distance from the screen for movies - like one where the speaker is monitor/laptop distance away, and so on.

And with the A16 Realiser finally shipping, I think many owners are going to experience a similar effect.

Vision's always been a big thing in audio, but I thought objective measurements saved me. When I got my first nice set of speakers (B&W 803s) I'd listen to them with the grilles off and just admire the look. They sounded better because they looked nice! No matter the measurements.
 
Jul 22, 2019 at 11:26 AM Post #33 of 1,817
Yup, removing some cues sometimes ruins an effect, but sometimes it makes the remaining cues the only thing that matters and boosts their impact. What's a little annoying (or let's call it impracticable) is how some people apparently have a different ranking in their mind for various cues and how important each one is for them. Like some apparently can just never feel a mono sound at a reasonable distance if they don't have a visual cue of the sound source.

About the A16, I expect that most, if not all, of them are on summer holidays right now. But one day... when they finally deliver a product, indeed people might be surprised by how much some apparently trivial non-audio stuff can affect the experience.

About the objective approach to sound, I don't see a problem. On one hand, if we're trying to figure out what is happening to the sound, then that's something 100% related to objective reality. A sound wave isn't going to bend the other way without a proper and predictable physical reason.
On the other hand, we humans don't experience anything objectively, so of course a completely objective approach will usually fail to translate into what we feel. Plus, we suck at trying to separate our senses in our head. If anything, experiences like those we have witnessed only reinforce my opinion that if we want to know about sound, or about what a given device does, we need to properly control everything we can, the scientific way. And after that, if I'm in my room and the color of my cable makes me enjoy music more (for whatever reason), I'll happily use that color for my cables and enjoy the subjective benefits. I don't think there is a conflict. I will just not be coming on the forum telling others to get the same color, claiming that it changes the sound a certain way because I feel like it does. And to be able to refrain from doing that, or simply to know better, I do need my controlled experiments. Different things, different purposes.
I'm honestly fine with the concepts of objective and subjective reality; I only wish I had a better understanding of myself and humans in general. But that's curiosity - or the realization that if I understood something better, I might be able to get even more of a kick out of my favorite music ^_^.
 
Jul 26, 2019 at 8:58 AM Post #34 of 1,817
Your points are all true - I think I'm going to research more about human preferences.

I've been tinkering some more with the tool and running my own room corrections. Like any home theater setup, I have a crossover problem, and I think the below will solve it with the HRIR I created.

The problem with my real room is that I experience crossover suck-out. I've got a dual-sub and KEF R300 setup that I cross over at 60 Hz. Around the crossover, due to cancellation, there's a deep dip in the frequency response. It's been known for a while that there's a distance trick you can run with Audyssey room correction to fill this gap in: it involves delaying the subwoofer output marginally so it arrives later than the suck-out. That has never sounded right to me, so it's something I live with in my real room.

The frequency response graphs from Impulcifer identified it - look at the huge dip at around 50 Hz.

[Image: R300 FL.PNG]


Because it's in the bass range, it's actually present on most of the speakers, and it's identical between ears. So I used WebPlotDigitizer to convert the image of the graph into a frequency response CSV that I could import into REW. REW's EQ calculator then generated an EQ to flatten the suck-out. Export to text and import into Peace.

[Image: Correction.jpg]

Now I think I have much cleaner bass.
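For anyone who wants to script that dip-filling step instead of going through REW's GUI, here's a minimal sketch of the core arithmetic. It assumes a two-column frequency/dB CSV as exported from WebPlotDigitizer; the function name and the boost clamp are my own, not part of any of these tools.

```python
# Hypothetical sketch: read a frequency,dB CSV (as digitized from the
# Impulcifer graph) and compute the gain needed to pull the response
# back to a flat target, clamping the maximum boost for safety.
import csv

def correction_from_csv(path, target_db=0.0, max_boost_db=12.0):
    """Return (frequency_hz, gain_db) pairs that flatten the measured dip."""
    points = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            freq, level = row
            gain = target_db - float(level)  # boost where the response dips
            gain = max(min(gain, max_boost_db), -max_boost_db)  # clamp
            points.append((float(freq), gain))
    return points
```

The clamp matters because an image-digitized dip can look deeper than it really is, and an unclamped inverse would ask for an enormous boost at a single frequency.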

The power of virtual room correction is going to be huge - we should be able to get a properly flat frequency response across the entire range. I'm not smart enough to figure it out for the treble range, where it varies depending on which ear you're looking at, but in the bass range, where it's clear, this is a good way to fix it.

I'm very excited at the prospect of having this all automagically done by Impulcifer.
 
Jul 26, 2019 at 2:04 PM Post #35 of 1,817
There's no reason you need to delay the sub output unless it's actually arriving earlier than your mains. Rather, it sounds like you need to invert the phase. You can see the actual timing of various frequencies in REW by going to the spectrogram view and activating the "Plot the peak energy curve" option.
 
Jul 27, 2019 at 5:45 AM Post #36 of 1,817
It's not a phase issue - I've got a great sub-only response with the dual subs. The issue appears when you run the fronts in conjunction with the sub. I've tried playing around with various phase angles to no avail. If you watch the vid it describes it in more detail, or there are a ton of threads on AVS Forum - like this one - that go into detail. I did eventually hit a config that prevented the suck-out, but I've since re-arranged my room and haven't had time to tinker with it.

I think it's going to be a problem in any loudspeaker measurement in a room, unless the system is truly full range and has been meticulously set up. Virtual room correction can easily take care of it - and the sub-80 Hz bass could even be artificially generated or pasted in from a perfect measurement, since it has no impact on HRTF or localisation.
 
Jul 27, 2019 at 5:59 AM Post #37 of 1,817
When I get the room correction implemented, there should not be a need for full range speakers or subwoofers. The bass frequencies can simply be boosted back in, and this is what I'm doing myself with a GraphicEQ filter in EqualizerAPO. Even small bookshelf speakers will reproduce sub-bass frequencies; they are just rolled off heavily. But since the frequencies exist in the measurement, they can be equalized to the correct level. Now, for example, a 40 dB bass boost might sound like a bad idea, but keep in mind that the impulse response is what would reduce the headphone's bass reproduction to the level of bookshelf speakers if not corrected, so it is quite safe to negate that effect, and the final result will have only the bass boost which is required by the headphones. Of course this will affect the signal-to-noise ratio in the sub-bass range, but I'm not too worried about that. Another option, of course, is to generate the bass frequencies in the sweep recording, but that might create a disconnect in the reverb.
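The dB bookkeeping behind the "40 dB boost is safe" point can be made concrete. This is my reading of the reasoning, not Impulcifer's actual code, and the roll-off values are made up for illustration:

```python
# The HRIR already carries the bookshelf speaker's bass roll-off, so a
# GraphicEQ boost that mirrors the roll-off only cancels it: the net
# response at the ear is flat, not boosted by 40 dB.
speaker_rolloff_db = {20: -40.0, 30: -28.0, 40: -16.0, 60: 0.0}

# EqualizerAPO-style boost: simply the negation of the measured roll-off
eq_boost_db = {f: -db for f, db in speaker_rolloff_db.items()}

# What reaches the ear: HRIR (with roll-off) + EQ boost, per frequency
net_db = {f: speaker_rolloff_db[f] + eq_boost_db[f] for f in speaker_rolloff_db}
```

Everything in `net_db` comes out to 0.0 dB, which is the sense in which the scary-looking boost is only restoring what the measurement took away.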

By the way @johnn29 you can actually use room correction DSP while measuring the HRIR, just remember to turn it off while measuring the headphones.
 
Jul 27, 2019 at 6:06 AM Post #38 of 1,817
Another idea I've been toying with is virtual room correction without room measurements. It should be possible to correct the HRIR frequency response at least up to 1000 Hz or so, because below that point the HRTF has very minimal impact on the frequency response - or at least it should be quite consistent across individuals. Maybe I could extend this frequency limit by inspecting Ircam or other HRTF measurements and trying to find a good enough average HRTF frequency response. I know this won't take it past 5000 Hz in any case, because the individual variance is so large at high frequencies, but it might be very good for bass and decent for mids.
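A minimal sketch of that idea, under the assumption that a bank of published HRTF magnitude responses (e.g. from Ircam) has already been loaded as dB arrays on a shared frequency grid - the function and cutoff here are illustrative, not anything Impulcifer ships:

```python
# Measurement-free correction: average a bank of HRTF magnitude responses
# and pull the recorded HRIR toward that average, but only below a cutoff
# where individual HRTF variance is small.
import numpy as np

def generic_correction(freqs, measured_db, hrtf_bank_db, cutoff_hz=1000.0):
    """Correction (dB) toward the bank average, zeroed above the cutoff."""
    avg_db = np.mean(hrtf_bank_db, axis=0)  # average response across subjects
    corr = avg_db - measured_db             # difference to the average
    corr[freqs > cutoff_hz] = 0.0           # trust the individual ears up high
    return corr
```

Above the cutoff the correction is forced to zero, which encodes the post's point that individual variance makes a generic target useless past a few kHz.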
 
Jul 27, 2019 at 10:25 AM Post #39 of 1,817
It's a phase issue between your subs and your mains. If they are cancelling each other out at the said frequencies, inverting either the subs or the mains will make them sum instead. Use a phase switch rather than a phase knob: a phase knob set to 180 does something quite different (and much more unpredictable) than a phase switch set to reverse. If only a phase knob is available, try to use software to invert the sub.
 
Jul 27, 2019 at 11:34 AM Post #40 of 1,817
I did another test of IEM usage, but this time using the Harman over-ear 2018 target for the IEM instead of the Harman in-ear 2017-1 target. The results are exceptionally good, despite the fact that the Custom Art FIBAE 3 has that big gap between 5 and 7 kHz which shall not be compensated for. I honestly did not expect results this realistic. If I had to use only my FIBAE 3 for speaker virtualization from now on, I wouldn't even be disappointed. I definitely need to add this as an option when running Impulcifer. I suspect the problem previously was the steep drop after 8 kHz in the Harman in-ear target.
 
Jul 28, 2019 at 3:29 AM Post #41 of 1,817
Could you talk me through why you'd EQ to a Harman-type target? I just don't get why - I'd have thought all the HRTF-related frequency changes would already be there due to the HRIR, so flat would make the most sense. I'm getting great results using a flat EQ, but I'd just like to understand it a bit more.

I also find the headphone compensation has a hard time with treble-sibilant headphones like the DT990. I found the sound to be better using no headphone compensation, instead relying on oratory's measurements and a flat EQ. With the headphone compensation the DT990s still have that nasty treble that just isn't there in the room with the loudspeaker. Of course, that also comes down to headphone choice, but it's good to know it can be fixed.

Something of relevance from oratory over on reddit

Me: Btw - Jaakko mentioned this "I'm fairly sure equalizing an ear simulator measured frequency response flat is not the way to go for HRTF. Ear simulator measurements include the 3 kHz peak which is caused by ear canal resonance and should always be there"

Oratory:
That's true, but the 3 kHz peak is already there in the HRTF depending on how it was measured - some institutions measure HRTF at the EEP instead of at/near the DRP, meaning a transfer function accounting for the ear canal will have to be added. I've recently had my HRTF measured at the Austrian Academy of Sciences, and they did exactly that: place microphones at the EEP and then add the transfer function for the ear canal.

So EQing it flat and then adding the HRTF back in accounts for that.
The reason for removing and then adding back is that the peak caused by the coupler/ear simulator will not perfectly match that of your own ear - so you remove the coupler's peak and then add one that matches your ear.
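The remove-then-add-back logic oratory describes can be written out as simple dB bookkeeping. The peak values below are made up for illustration; only the structure of the arithmetic is from the post:

```python
# What arrives at the eardrum: headphone response + EQ + (DRP-referenced) HRTF.
def at_eardrum_db(headphone_db, eq_db, hrtf_drp_db):
    return headphone_db + eq_db + hrtf_drp_db

coupler_peak_3k = 11.0    # hypothetical coupler/ear-simulator resonance at ~3 kHz
own_canal_peak_3k = 9.0   # hypothetical resonance of your own ear canal

headphone_at_3k = 4.0 + coupler_peak_3k  # raw measurement includes the coupler peak
eq_at_3k = -headphone_at_3k              # "EQ to flat" on the coupler removes it

# Net result at 3 kHz: only your own ear's resonance remains
result = at_eardrum_db(headphone_at_3k, eq_at_3k, own_canal_peak_3k)
```

The coupler's 11 dB peak is cancelled by the flat EQ, and the 9 dB that survives is the one contributed by the listener's own canal via the HRTF, which is exactly the resonance that should be there.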
 
Jul 28, 2019 at 4:16 AM Post #42 of 1,817

Perhaps I left some details out when I wrote about equalizing to the Harman target. I'm not actually equalizing the IEMs to the Harman target; I'm using the Harman target as the mutually shared reference point when generating equalization settings to make the IEMs sound like the over-ear headphones. This is how the frequency response "morphing" or "transfer" between two headphones works in AutoEQ. I use the error curve of the HD 800 as the error target for the FIBAE 3. The error of the HD 800 is the difference between its raw frequency response and the Harman target; similarly, the error of the FIBAE 3 is the difference between its raw FR and the Harman target. Previously I used the Harman in-ear target as the reference for the FIBAE 3, but this caused problems that were fixed by using the Harman over-ear target. AutoEQ has a parameter called --sound_signature for this, and it should point to an existing equalization result CSV file which has error data included. See here for more details: https://github.com/jaakkopasanen/AutoEq#using-sound-signatures
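The morphing arithmetic above reduces to a few lines. This is my paraphrase of the described scheme (the dB values are made up), not AutoEQ's actual implementation:

```python
# Express both headphones as an error relative to the Harman target, then
# equalize the IEM so that its error matches the over-ear's error.
def morph_eq_db(raw_iem, raw_overear, harman):
    error_overear = raw_overear - harman  # over-ear's deviation from Harman
    target_iem = harman + error_overear   # make the IEM deviate the same way
    return target_iem - raw_iem           # EQ gain to apply to the IEM

# At some frequency: HD 800 sits 3 dB above Harman, FIBAE 3 sits 2 dB below,
# so the IEM needs a 5 dB boost there to match the over-ear's signature.
gain = morph_eq_db(raw_iem=-2.0, raw_overear=3.0, harman=0.0)
```

Note that the Harman term cancels algebraically, which is why it works as a shared reference: the result only depends on the difference between the two headphones.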

Could you share the Impulcifer headphone compensation graphs for your DT990? I'd like to take a look at what is going on there. Maybe there's something I can do to improve the algorithm.

When HRTF is measured at the EEP (ear canal entrance), it doesn't include the ear canal transfer function. There are two options: the first is to add it to the HRTF and equalize the headphones flat at the DRP (eardrum); the second is to leave it out of the HRTF and equalize the headphones flat at the EEP (this is how Impulcifer works). The second option is not ideal because it assumes (falsely) that over-ear headphones don't affect the ear canal transfer function. The first option is not feasible for most people because it requires specialized gear to measure the ear canal transfer function. Impulcifer is meant to be an easy tool which can be used by normal people in their homes, so building support for ear canal transfer function measurements would not fit very well with that goal.
 
Jul 28, 2019 at 6:09 AM Post #43 of 1,817
Ah ok - I get it. It's the suggestion you made to me to morph the IEMs to sound like my already-corrected over-ears. I'll give that a shot tomorrow too.

I don't have the DT990 graphs now. I've started saving every pass I run with notes, so I know which headphones and setup it was from - before, I was just blindly overwriting everything. I'm going to do some more measurements tomorrow, along with raising the rear speakers, and will submit GitHub issues after confirming them.
 
Aug 15, 2019 at 12:09 PM Post #44 of 1,817
Some more thoughts after using headphones as my main sound source for the last couple of weeks, as I'm in an Airbnb.

- Taking two measurements was a good idea. I took one from my normal listening/watching position, about 1.8-2m away from my speakers, and another 1m away (quasi-anechoic?). The one further away is so much better for music - I guess I love those side wall reflections; it opens up the soundstage dramatically. That's the kind of thing you just don't process in real life, because there's no way to do an A vs B that quickly. The close ones have little reverb and pinpoint locations. That's good for Atmos movies that are properly mixed in 7 channels (you gain 7.1 with Atmos even though Windows can't do height). But for regular TV shows that don't have great surround mixes, the reverb is welcome.

- Speaker virtualisation is a game changer for personal VR theaters. I have a Goovis Cinego - it simulates a large screen at 20m distance. The synthesised HRTFs just feel so wrong when the screen is that far away. With my 2m measurement I could swear the sound was coming from an acoustically transparent screen. It's really made me enjoy watching movies on it now.

- The real headphone compensation is superior to a flat EQ based on GRAS measurements. It just gets the treble right.
 
Aug 15, 2019 at 12:21 PM Post #45 of 1,817
You know this sh!t is deep when you've been working on the same stuff for like 3 years and still can't follow @jaakkopasanen 's version of the idea :sweat_smile:
 