Mesh2HRTF
Apr 15, 2022 at 8:52 AM Post #61 of 93
I was gonna ask that. Is there a benefit to having so many sources for 7.1 movies and games, or am I just wasting time trying to get it to work? Is there any benefit over Impulcifer?
For stereo and multichannel, such a huge number of speaker positions within a radius of 1.2 meters is neither required nor useful. What do you expect from it?
If the game doesn't let you use your own HRTF and doesn't support object-based sound like Dolby Atmos, I can't see any benefit from hundreds or thousands of speaker positions either. Useless overkill.

The advantages of an HRTF are the uncolored sound, the choice of different speaker positions and distances, the option to add reverb for room simulation (if we can find the right program), no mic capsule variances, etc.
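To illustrate the reverb idea, here is a crude MATLAB sketch: it convolves a dry HRIR file with exponentially decaying noise as a stand-in room response and mixes that under the direct sound. The file names, tail length and mix level are placeholders, and decaying noise is only a rough substitute for a real (binaural) room impulse response.

% crude room simulation: add a decaying-noise "reverb tail" to a dry HRIR
[hrir, Fs] = audioread('hrir.wav');                % dry HRIR, any channel count (placeholder name)
t   = (0:round(0.3*Fs)-1)'/Fs;                     % 300 ms tail
rir = randn(numel(t),1).*exp(-t/0.05);             % decaying noise, ~50 ms time constant
rir = 0.1*rir/max(abs(rir));                       % keep the tail well below the direct sound
out = [hrir; zeros(numel(rir)-1, size(hrir,2))];   % dry path, zero-padded to the convolved length
for ch = 1:size(hrir,2)
    out(:,ch) = out(:,ch) + conv(hrir(:,ch), rir); % add the reverberant path
end
out = out/max(abs(out(:)));                        % normalize to avoid clipping
audiowrite('hrir_with_room.wav', out, Fs, 'BitsPerSample', 32);

A proper room simulation would use a different binaural room response per speaker position; this only adds one shared diffuse tail.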
 
Apr 15, 2022 at 9:47 AM Post #62 of 93
What do you expect from it?
I was hoping for clearer, more distinct sounds. And maybe to be able to get up to 9.1.4 surround sound (height and ground channels) in movies/games.

The 1.2 meters used is probably because most measurements like these have the speakers at around that distance. Like this:
[attached image]
 
Apr 15, 2022 at 10:27 AM Post #63 of 93
For more than 7.1 we stick to 3D formats like Dolby Atmos, DTS:X and Auro-3D, which require licensed decoders.
Especially in games or movies, a greater distance than 1.2 meters can be useful to give the illusion of far field or a cinema theater.
 
Apr 15, 2022 at 10:43 AM Post #64 of 93
Change the distance - it is possible to simulate HRTFs for different sound source distances (different simulation grids) and pick the most suitable option. (Currently there is almost no software that can dynamically use multiple sound source distances for the same angle, so the choice must be made by the user and stays constant for a given HRTF.)
I understand that you used 1.2 m as the simulated distance, not as a deliberate choice but because it was the default setting?
And the result was that it sounded much closer to your head than 1.2 m, right? This is probably partly because of not having a virtual room, and possibly partly because of imperfections in the Mesh2HRTF process.
I am curious what the result would be if you chose a very large distance. Maybe it would get the sound a bit further outside your head, even if not as far as the chosen distance.
 
Apr 15, 2022 at 11:06 AM Post #65 of 93
And the result was that it sounded much closer to your head than 1.2 m, right? This is probably partly because of not having a virtual room.
That's the main problem with HRTFs, I guess: hearing without the smallest amount of reverb/echo is quite unnatural and lacks a sense of distance.
I recommend reading the experiences described in the "Aural ID" thread; something similar is to be expected from Mesh2HRTF. https://www.head-fi.org/threads/genelec-aural-id.904475/page-2
 
Apr 15, 2022 at 12:01 PM Post #66 of 93
I understand that you used 1.2 m as the simulated distance, not as a deliberate choice but because it was the default setting?
And the result was that it sounded much closer to your head than 1.2 m, right? This is probably partly because of not having a virtual room, and possibly partly because of imperfections in the Mesh2HRTF process.
I am curious what the result would be if you chose a very large distance. Maybe it would get the sound a bit further outside your head, even if not as far as the chosen distance.

It was the default setting. There isn't an option to choose the distance. Maybe it can be done in the script, but when I asked, he told me I should try using reverb to get the room effect. He also mentioned that speakers can be placed and adjusted in terms of height, distance and width.
 
Apr 16, 2022 at 8:01 AM Post #67 of 93
@musicreo I'm having a hard time getting that script to convert my latest .sofa to .wav. If you have the time, could you do this one for me? It's done using better scans and merging, and also better mic placement.

https://mega.nz/folder/kZsSyAzB#zMmVjQLzIAQT7Y8pRkcsOA


>> hrtf = SOFAload('HRIR_ARI_48000.sofa');
>> %% find the channels for 7.1
>> CH_L = find(hrtf.SourcePosition:),2)==0 & hrtf.SourcePosition:),1)==30 );
CH_L = find(hrtf.SourcePosition:),2)==0 & hrtf.SourcePosition:),1)==30 );

Invalid expression. When calling a function or indexing a variable, use
parentheses. Otherwise, check for mismatched delimiters.
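For what it's worth, that "Invalid expression" error is just a missing opening parenthesis in the indexing (possibly eaten by the forum's emoticon parser): MATLAB needs the ( before the colon.

% the column indexing needs an opening parenthesis before the colon:
CH_L = find(hrtf.SourcePosition(:,2)==0 & hrtf.SourcePosition(:,1)==30);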

You can download the file here: one drive link

How do you run the code? With MATLAB? When I have time I will look again into the Python SOFA API and do the same with Python, but that has to wait for now.

I made a small change in the code as the angles are sometimes not exact.
% Start SOFA
SOFAstart;

% Load your impulse response into a struct
folder = 'C:\Users\HRIR_ARI_48000.sofa';
hrtf = SOFAload(folder);

%% spherical coordinates phi, theta, r
% hrtf.SourcePosition               % position of loudspeaker
% hrtf.SourcePosition(:, phi=1, theta=2, r=3)

% find the correct indices for the 7.0 channel configuration
% (round() because the stored angles are sometimes not exact)
theta = 0;
CH_L  = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==30);
CH_R  = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==360-30);
CH_C  = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==0);
CH_LS = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==110);
CH_RS = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==360-110);
CH_LB = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==135);
CH_RB = find(round(hrtf.SourcePosition(:,2))==theta & round(hrtf.SourcePosition(:,1))==360-135);

%% collect the impulse responses (the center IR appears twice:
%  the second copy serves as the LFE channel)
audioch = [hrtf.Data.IR(CH_L,:,:),  hrtf.Data.IR(CH_R,:,:),  hrtf.Data.IR(CH_C,:,:), ...
           hrtf.Data.IR(CH_C,:,:),  hrtf.Data.IR(CH_LS,:,:), hrtf.Data.IR(CH_RS,:,:), ...
           hrtf.Data.IR(CH_LB,:,:), hrtf.Data.IR(CH_RB,:,:)];
audioch = squeeze(audioch)';
audioch_hesuvi = audioch(:,[1 2 9 10 13 14 5 4 3 12 11 16 15 6]);   % normal to HeSuVi track order

outfolder = 'C:\Users\SOFA_myHRTF_project_merged2\';
outname   = 'HRIR_ARI_48000(L-R-C-LFE-LS-RS-LB-RB).wav';
outname2  = 'HRIR_ARI_48000_hesuvi.wav';
Fs = 48000;
audiowrite([outfolder,outname],  audioch,        Fs, 'BitsPerSample', 32);
audiowrite([outfolder,outname2], audioch_hesuvi, Fs, 'BitsPerSample', 32);
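If any of the find() calls come back empty, it helps to check which directions the SOFA file actually contains before picking channels, e.g.:

% list the distinct (rounded) directions stored in the file
pos = round(hrtf.SourcePosition(:,1:2));   % [azimuth, elevation] in degrees
unique(pos, 'rows')                        % one row per available direction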
 
Apr 17, 2022 at 7:11 AM Post #72 of 93
I'm planning to do so, but I need to get an iPhone and figure out the options to mirror the screen to a TV or a non-Apple display, have never used MATLAB, etc.
Many used iPhones are refurbished, and therefore Face ID may not work; functioning Face ID has to be confirmed before purchase.

I'm questioning why capturing the ear canal shouldn't be feasible if the infrared laser depth sensor has a resolution of 1 mm.

And how to capture the area between pinna and head, and how to build up this strange setup for the head that helps the sensor detect the right reference points for measurement.
 
Apr 17, 2022 at 7:52 AM Post #73 of 93
I cannot remember or find who mentioned it first, or in which thread, but it seems to me that the "missing link" is:

https://info.natus.com/otoscan-3d-digital-ear-scanning-solution

Scan the outside of the head and ears with the Apple device, let an audiologist who has an Otoscan scan your ear canals, and insert the ear canal models into the outside model (filling the holes in the outside model with the ear canal models, so to speak). Then with Mesh2HRTF we can see what happens at the eardrums.

(And what we would really want is to also model the headphones on our ears, and see how the headphone sound arrives at the eardrums. Then we could finally find a proper headphone compensation that fits the above Mesh2HRTF results, without needing additional EQ to account for measuring at the entrance of a blocked ear canal.)
 
Apr 17, 2022 at 8:31 AM Post #74 of 93
I'm planning to do so, but I need to get an iPhone and figure out the options to mirror the screen to a TV or a non-Apple display, have never used MATLAB, etc.
Many used iPhones are refurbished, and therefore Face ID may not work; functioning Face ID has to be confirmed before purchase.

I'm questioning why capturing the ear canal shouldn't be feasible if the infrared laser depth sensor has a resolution of 1 mm.

And how to capture the area between pinna and head, and how to build up this strange setup for the head that helps the sensor detect the right reference points for measurement.
MATLAB isn't needed. You just need the free programs Blender and Meshmixer, plus the £7 app Heges.

The depth sensor on the iPhone goes down to 0.5 mm, and the canal is a hole that you can fill in Meshmixer. You can make it deep or shallow for your measurements; it's up to you.

Just one thing to note: on my i7 7700K with 16 GB DDR4 RAM it took 12.5 hours. So you need decent RAM.
 
Apr 17, 2022 at 8:33 AM Post #75 of 93
I cannot remember or find who mentioned it first, or in which thread, but it seems to me that the "missing link" is:

https://info.natus.com/otoscan-3d-digital-ear-scanning-solution

Scan the outside of the head and ears with the Apple device, let an audiologist who has an Otoscan scan your ear canals, and insert the ear canal models into the outside model (filling the holes in the outside model with the ear canal models, so to speak). Then with Mesh2HRTF we can see what happens at the eardrums.

(And what we would really want is to also model the headphones on our ears, and see how the headphone sound arrives at the eardrums. Then we could finally find a proper headphone compensation that fits the above Mesh2HRTF results, without needing additional EQ to account for measuring at the entrance of a blocked ear canal.)
What I have done is use the headphone compensation from Impulcifer and apply it in the virtualization in HeSuVi.
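If anyone would rather bake the compensation directly into the HeSuVi file instead, here is a rough MATLAB sketch. It assumes the standard 14-track HeSuVi layout (odd tracks = left-ear responses, even tracks = right-ear responses, which matches the reordering in the script above) and a stereo compensation impulse response; both file names are placeholders, so check them against your own files.

% bake a stereo headphone-compensation IR into a 14-track HeSuVi wav
[hesuvi, Fs] = audioread('hesuvi.wav');          % 14-track HeSuVi file (placeholder name)
[hp, Fs2]    = audioread('headphone_comp.wav');  % stereo compensation IR (placeholder name)
assert(Fs == Fs2, 'sample rates must match');
n   = size(hesuvi,1) + size(hp,1) - 1;           % length after convolution
out = zeros(n, size(hesuvi,2));
for ch = 1:size(hesuvi,2)
    ear = 2 - mod(ch,2);                         % odd track -> column 1 (left), even -> 2 (right)
    out(:,ch) = conv(hesuvi(:,ch), hp(:,ear));   % apply the matching ear's compensation
end
out = out/max(abs(out(:)));                      % normalize to avoid clipping
audiowrite('hesuvi_compensated.wav', out, Fs, 'BitsPerSample', 32);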
 
