Comments


reality_boy t1_jdvjm6d wrote

Yes, your ears can’t hear quieter sounds that are near the frequency of a loud sound; this is called masking, and lower frequencies can mask higher-frequency sounds as well. This is the reason that lossy audio compression works: you can throw away everything outside of your ability to hear, and that is a significant amount of data that can be tossed.

In addition, audio is captured at a certain bit depth (8 bit, 16 bit, 32 bit, 96 bit), as well as at a certain sample rate (44.1 kHz, etc.). Modern audio interfaces can sample audio well above your ears’ ability to distinguish the individual changes or frequencies.

We often use this to our advantage when capturing audio. For example, capturing at 96 bits is similar to a modern HDR camera: you can capture both the quietest and the loudest sounds we can detect, at the same time, even if both together would never end up in the same final mix. This lets us set and forget our mic gains without worrying about blowing out the sound.
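
The throwing-away step can be sketched as a toy example (hypothetical numbers; real codecs like MP3 use full psychoacoustic masking models, not a single fixed threshold): synthesize a loud 1 kHz tone plus a much quieter tone at 1.1 kHz, then keep only the spectral bins within 40 dB of the loudest peak.

```python
import numpy as np

# One second of audio: a loud 1 kHz tone plus a very quiet 1.1 kHz tone.
fs = 44100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 1100 * t)

spectrum = np.fft.rfft(signal)
magnitude = np.abs(spectrum)

# Keep only bins within 40 dB of the loudest bin; zero out everything else.
threshold = magnitude.max() / 10 ** (40 / 20)
kept = magnitude > threshold
compressed = np.where(kept, spectrum, 0)

print(f"bins kept: {kept.sum()} of {len(spectrum)}")
```

The quiet 1.1 kHz tone falls below the threshold and gets discarded entirely, which is the basic idea behind "throw away what you can't hear."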

68

Garo5 t1_je1k4cv wrote

You must be mixing up 96 kHz and 96 bit? 24-bit audio already gives you a dynamic range of 144 dB, so "96 bit" audio must be a mistake. If I'm wrong I'd be really happy to learn of a use case for 96-bit audio! :)
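
For reference, the theoretical dynamic range of linear PCM works out to roughly 6 dB per bit, i.e. 20·log10(2^bits); a quick sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: 20 * log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 32):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
```

16-bit lands near 96 dB and 24-bit near 144 dB, which is already beyond the roughly 120 dB span between the threshold of hearing and the threshold of pain.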

6

VeryVeryNiceKitty t1_je4659t wrote

96-bit audio might have some scientific applications? The requirements for measurements in experiments like the LHC are quite extreme.

2

[deleted] t1_jdviys8 wrote

[removed]

18

tdgros t1_jdvo1j7 wrote

We don't go up to 96 kHz because listeners can perceive it, but because it allows for the design of better filters, better processing, etc., which does result in better quality. It's completely fine to then downsample to 48 kHz right before sending to the speakers.
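
A minimal sketch of that workflow (a short windowed-sinc filter is assumed here for brevity; real resamplers use far longer filters): carry the signal at 96 kHz, then low-pass at the new Nyquist frequency and decimate to 48 kHz.

```python
import numpy as np

def downsample_by_2(x: np.ndarray) -> np.ndarray:
    """Halve the sample rate: crude FIR low-pass, then take every 2nd sample."""
    # Windowed-sinc low-pass with cutoff at the new Nyquist (old fs / 4).
    n = np.arange(-32, 33)
    h = np.sinc(n / 2) * np.hamming(len(n))
    h /= h.sum()                               # unity gain at DC
    filtered = np.convolve(x, h, mode="same")  # anti-alias before decimating
    return filtered[::2]

fs = 96000
t = np.arange(fs) / fs
audio_96k = np.sin(2 * np.pi * 440 * t)  # one second of A440 at 96 kHz
audio_48k = downsample_by_2(audio_96k)
print(len(audio_96k), len(audio_48k))    # 96000 48000
```

The filtering happens at 96 kHz, where there is plenty of headroom above the audible band, and only the final output is reduced to 48 kHz.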

26

ins0ma_ t1_jdvv8j3 wrote

Exactly. 96k allows finer resolution and filtering when it comes to stuff like EQ and dynamics.

6

_Jam_Solo_ t1_jdwecct wrote

It helps with aliasing and time stretching, but idk much more than that. I personally don't go over 48k because the benefits aren't worth the disk space.

6

GuybrushBeeblebrox t1_jdw54cu wrote

As far as I know, and this is more from experience, higher sample rates allow for better results from DSPs, filtering, or any post-processing.

5

mrxexon t1_jdwh9lm wrote

"Hear" needs to be defined. It's not just the physical mechanism.

Provided your hearing is perfect, you are still bottlenecked in the brain. The reason is that your brain sorts out what you're focused on and excludes everything else to the best of its ability. This is what allows you to concentrate on something in a noisy office, etc. But there is a limit.

The limit is in your consciousness. You only have so much of it, and it doesn't divide very well. Each sound would require its own attention, and humans just aren't wired that way. In theory you could train yourself to some degree, but it's still an uphill battle.

14

ZaneJayMusic t1_jdwkmft wrote

It’s called the “cocktail party effect” if anyone is curious.

Also, a broader term for how our brain perceives and uses sound is “psychoacoustics”.

9

stvmjv2012 t1_jdx3f78 wrote

I’ve noticed that when on LSD I was able to hear the individual instruments in a song quite clearly. I could also hear better in general.

8

Brain_Hawk t1_jdxheiv wrote

Separate sounds don't necessarily require their own attention. We can still subtly differentiate numerous different sounds simultaneously without necessarily attending to the different sources or channels, but there's still a degree to which the complexity of that sound is being processed.

Although I guess that comes back to your first point that it depends on how you define "hears", and I may just be defining it a bit differently than you. Maybe you're defining it as a sound being specifically identified, and I'm defining it as the full total complexity of the sound information, regardless of whether specific things are processed. But, to be fair to that perspective, sometimes we can think back on a sound we heard recently and reevaluate it, drawing attention to the memory trace of different aspects of that sound.

The endpoint limit of any sound system is one that equates to being in the environment. But now that I've said that, I realize the limit of that is in fact the neuronal limit of our processing capacity, because the fidelity of real life is effectively infinite. The maximum precision of sound in the universe is whatever the Planck-scale limit is, which is effectively infinitely small. Sort of. Almost.

−1

[deleted] t1_jdw6y5t wrote

[removed]

6

Ninjewdi t1_jdwbtmn wrote

Is it just a limit to the ear's capabilities to pick up the sounds, or are there also limits to the number of audio inputs our brains can process simultaneously?

Like, even if the ear picks up all the noise, can our brain only really recognize and parse x number of them at a time?

1

Wilm_Roget t1_jdxveb1 wrote

Well, there are only so many hairs in the inner ear to transmit sound.

https://en.wikipedia.org/wiki/Hair_cell

"The human cochlea contains on the order of 3,500 inner hair cells and 12,000 outer hair cells at birth" Since those hairs are necessary to transmit sound to the nerve cells - that does create a limit to number of audio imputs that can be transmitted to the brain.

I didn't find anything to clarify how many hairs must react to register a sound at all ( hairs to decibel for example). But given that the number of hairs is much higher than the maximum number of decibels the ear can process . . . So there is a limit on the number of audio inputs that can reach the neurons.

Then,from there, how much of that data can the brain process? It gets complicated, because not only does the brain recognize pitch, it process elements like rhythm and volume. Our brains are limited, so short answer is yes, there is a limit to how much of any kind of input the brain can process.

I didn't find anything that gave a number for audio input. But, I did find this very detailed explanation of how neurons connect to the hairs that transmit sound.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3078955/

2

_Jam_Solo_ t1_jdwdz64 wrote

I would say the comparison with resolution for audio is how high in pitch we can hear, how low, and to what degree of precision we can tell how loud something is: if you raise the volume of a sound in increments, what's the smallest increment you could perceive? For audio this basically comes down to bit depth and sample rate, and we maxed those out a long time ago.

As for the number of sounds, it's just that after a while it becomes a mess.

Like, imagine waves in a pool. If you have one wave, you can see it easily. You could mix a number of waves and still be able to tell which is which. But after a while, the water would just be a mess of noise.

When you listen to a record, every element sounds distinct and clear, because the engineers mixing the music made sure it did. Even just a few elements can really start clouding things over and making them unintelligible.

Just like if you have one person talking, that's easy; with 2 people talking you could go back and forth and you know they are distinct voices. After a certain number of voices, it becomes noise. If you have enough noise, you won't notice an extra voice being added.
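
The pool analogy is literal superposition: a mix is just the sum of its waves, and the air (and your eardrum) only ever carries that one sum. A small sketch with three hypothetical "instruments" as pure tones:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs

# Three "instruments" as pure tones at 220, 330, and 440 Hz.
voices = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440)]
mix = sum(voices)  # a single waveform is all that ever reaches the ear

# Each tone is still recoverable from the mix as a spectral peak.
spectrum = np.abs(np.fft.rfft(mix))
peaks = np.argsort(spectrum)[-3:]  # indices of the three largest bins
print(sorted(int(p) for p in peaks))
```

With only three well-separated tones the components fall right back out of the spectrum; with dozens of overlapping sources the peaks smear together, which is the "mess of noise" in the analogy.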

2

Environmental_Ad5451 t1_jdx7qh9 wrote

I'd say the answer is yes, because our ears cannot respond to anything faster than a frequency, or tone, of about 20 kHz-ish. That is kind of like a sampling rate, in some regard. And then there is only so much information (so many different sounds) you can pack into a roughly 20 kHz bandwidth (there's an awful lot to unpack here, and I've not done it well), which is similar in some sense to bit depth, if you give some latitude for pushing a digital domain onto an analogue system. In fact, because of that limit on packing information into how fast we can hear, most of what we listen to sits in the 50 Hz to about 8 kHz range. It's mostly music that routinely takes us to our limit. Lots of noise will do it too.
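
The ~20 kHz ceiling really does behave like a sampling limit: in digital audio, any tone above half the sample rate folds back (aliases) to a lower frequency. A small sketch of the folding rule:

```python
def alias_frequency(f: float, fs: float) -> float:
    """Frequency at which a tone f appears when sampled at rate fs.

    Anything above the Nyquist frequency (fs / 2) folds back down.
    """
    f = f % fs
    return f if f <= fs / 2 else fs - f

# A 30 kHz tone sampled at 44.1 kHz shows up at 14.1 kHz:
print(alias_frequency(30000, 44100))  # 14100
```

This is why digital systems low-pass filter before sampling: content above the Nyquist frequency doesn't disappear, it comes back disguised as audible garbage.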

Fundamentally, we can't hear sounds that have tones that are too high pitched, and we can't resolve separate sounds that are too close together, even if they're in our hearing range individually. So it's kind of like eyes, but our ears are much faster than our eyes, largely because they're much simpler.

Last thing: I've ignored amplitude, or loudness. Others have explained it well. If sounds are loud they can reduce your effective bandwidth at any given moment, so information (sound) in a loud place can get lost even if you could otherwise hear it.

1

Ausoge t1_jdydlul wrote

Ultimately, once all processing is finished, the end result that comes out of the speaker can be thought of as a single sound wave. During the mixing process, all the individual waves from all the different instruments are compiled together into effectively a single, complex waveform.

There's more to it, of course: most speakers have more than one driver (one for highs, one for lows, or more), and then you are of course dealing with multiple sound sources all coming together at the point where your ears are. And each ear has only one membrane (the tympanic membrane) that oscillates back and forth, so again this feed to your nerves and brain can be considered a single "source".

Ultimately, the amount of detail you can perceive depends on the relative loudness and position of different external sources, and the quality of the audio mix.

1

[deleted] t1_jdwloa3 wrote

[removed]

0

[deleted] t1_jdwrxai wrote

[removed]

−2

[deleted] t1_jdwu7v8 wrote

[removed]

2

[deleted] t1_jdwwcbt wrote

[removed]

0

[deleted] t1_jdwx25x wrote

[removed]

1

[deleted] t1_jdwq10u wrote

[removed]

0

StableGenius304 t1_jdxxik3 wrote

This is not correct, because the decibel scale is not linear. A thousand 1 dB sounds would be about 31 dB, unless I am missing something.
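
For anyone checking the math: incoherent sources combine by summing powers, not decibel values, so a thousand 1 dB sources land at about 31 dB. A quick sketch:

```python
import math

def combine_db(levels):
    """Total level of incoherent sources: sum the powers, not the decibels."""
    total_power = sum(10 ** (db / 10) for db in levels)
    return 10 * math.log10(total_power)

# A thousand 1 dB sounds combine to about 31 dB, not 1000 dB:
print(round(combine_db([1] * 1000), 1))  # 31.0
```

Each factor-of-10 increase in the number of equal sources adds only 10 dB, which is why the total grows so slowly.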

3

Wilm_Roget t1_jdynhb1 wrote

I'll correct the math tomorrow. Thanks for the heads up. The principle remains the same though.

1

[deleted] t1_jdxgpvz wrote

[removed]

0

Wilm_Roget t1_jdxro33 wrote

"This doesn't necessarily relate to how many different sounds you hear, "

Thanks for noticing that your reply doesn't address the question.

1

[deleted] t1_jdwcegg wrote

[removed]

−1

tdgros t1_jdx0f2y wrote

>Human vision is about 576 megapixels

It's really not; we don't even have that many photoreceptor cells per retina. You can get this figure if you extrapolate the density in the fovea to the entire field of view, but in reality the density of color-sensitive cells drops off sharply outside the fovea, which covers only a few degrees of FOV.

Try the test: focus your eyes on one word of text. Without moving your eyes, how far out can you still read the surrounding words? Our vision is really, really blurry outside the center; we just don't realize it.

3