
NaturalisticDualism t1_j8ewd9h wrote

I'm not sure. The study is on epileptics. I'd say the language faculty and the amygdala are quite different: from my (limited) understanding, phonological representations in some models are entirely learned, with few or no primitives and none of the pruning you speak of.

However, neonates already have some facial recognition software. My understanding of innate structure leads me to hypothesize some deep homology among primates here, appearing early in development.

It's well known that the amygdala can be affected by ACEs (adverse childhood experiences), but beyond that I don't know and would rather not guess. A nonconscious 100-millisecond process might be hard to retrain, though. I'm doing a bunch of guesswork here. Nevertheless, what you say is interesting and food for thought. I'm pretty unsure.

3

Songoffireandice t1_j8gs2x6 wrote

I meant it was a similar mechanism in the broader sense that reducing unnecessary sensory stimulation is likely advantageous; I used auditory development as a token reference because it was a specific example I was certain of.

Retraining may not even be a good idea. I can say from firsthand experience that being hyper-reactive to subtle emotional visual cues does not generally make functioning in society any easier. It does make you potentially better at reading people you are familiar with, though.

I like your angle of a vestigial function, and after finding the posted study I think it's better supported than my speculation. What really interests me is the specificity, especially the lack of reaction to happiness, though that's not entirely unexpected. Happiness, sadness, anger, disgust, contempt, fear, and surprise are all universal to our expressions. That said, I would have liked to see anger tested in addition, since the amygdala is currently thought to process fear and anger specifically.

2