vernes1978 t1_iwphtbo wrote
So even though it's not reconstructing the image from your fMRI data alone, it is comparing your fMRI data with other people's fMRI data and the images associated with that data.
Does that mean we all have the same brain areas associated with abstract concepts?
-ZeroRelevance- t1_iwpjvxf wrote
Yeah, it’s been experimentally proven a few times. The example I remember is that even for speakers of different languages, the word for ‘apple’ in their language lights up the same part of the brain.
It makes sense to be honest. If our brains weren’t almost entirely determined by our genetics, there’s no way we’d all be as smart as we are.
vernes1978 t1_iwpni0x wrote
> be as smart as we are.
For a certain definition of "smart", of course.
"Takes a big bite of rainforest-killing, soybean-fed cow meat filled with microplastics"
But that means a person is a dataset applied to a "generally" identical neural net.
Ok, that statement might not be strictly true, but here's my question:
What would happen if we could measure all the synaptic weights/values of brain model A, belonging to ZeroRelevance, and just use those values to adjust the neurons in brain model B (belonging to vernes1978)?
How differently would brain model B react compared to ZeroRelevance?
How big would the difference be?
-ZeroRelevance- t1_iwpo1pn wrote
You’re asking what would happen if all the neurons in your brain were rewired to be the same as mine? In a purely theoretical case, you would react exactly the same as I would, but in practice, the differences in the rest of our bodies would:
- mean that there may be some issues in sensing the world and controlling the body
- have that variation in stimulus lead to differences in the responses
On the other hand, if you had a brain in a vat that was wired to be identical to mine, and also put my own brain in a vat, any given stimulus to either brain should give identical responses, since there would be no fundamental difference between them.
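In artificial-neural-network terms, the thought experiment looks roughly like the sketch below (a toy PyTorch example; the two "brains", their layer sizes, and the stimulus are made up purely for illustration): copy A's parameters into an identically wired B, and the same stimulus then produces the same response from either network.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two "brains" with identical wiring (same architecture),
# but independently initialised parameters.
def make_brain():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))

brain_a = make_brain()  # stand-in for ZeroRelevance
brain_b = make_brain()  # stand-in for vernes1978

# "Rewire" B to be identical to A by copying all of A's weights/values.
brain_b.load_state_dict(brain_a.state_dict())

# The same stimulus now gets the same response from either brain.
stimulus = torch.randn(1, 10)
with torch.no_grad():
    print(torch.allclose(brain_a(stimulus), brain_b(stimulus)))  # True
```

The analogy of course ignores everything outside the network, which is exactly the body/sensing caveat above.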
vernes1978 t1_iwqlhw5 wrote
I wasn't aware that the wiring (connectome) was the data.
I kinda assumed there was an electro-chemical factor involved, where each neuron has different trigger conditions that are the result of a learning process.
I was imagining that these factors could be transferred to a brain with a different connectome.
Since this image prediction was possible using fMRI data, I was wondering whether our connectomes could be similar enough that transferring this (assumed) electro-chemical state of the neurons would result in a personality similar enough to represent the person whose electro-chemical state you transferred to a different brain (connectome-wise).
Although this is science-fiction stuff, it would be an interesting question whether or not you could clone yourself into a standardized artificial brain by copying these electro-chemical variations.
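In the neural-net analogy, the catch would be that learned parameters only slot cleanly into identical wiring. A toy PyTorch sketch of that (again, the "brains" and their sizes are purely illustrative): copying A's parameters into a differently wired B simply fails, so a "standardized artificial brain" would need a connectome close enough for the transferred state to fit.

```python
import torch.nn as nn

# Same kind of parts, but differently wired "brains" (different connectomes):
# B's hidden layer has a different number of connections than A's.
brain_a = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))
brain_b = nn.Sequential(nn.Linear(10, 24), nn.ReLU(), nn.Linear(24, 4))

try:
    # Transferring A's learned state ("electro-chemical variations")
    # into B's mismatched wiring raises a size-mismatch error.
    brain_b.load_state_dict(brain_a.state_dict())
except RuntimeError as err:
    print(f"Transfer failed: {err}")
```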
-ZeroRelevance- t1_iws8y1g wrote
I’ll admit I didn’t really consider the actual neurons themselves as separate from the wiring in my answer. Since neurons are created based on genetic code, every person’s neurons would likely react slightly differently, leading to a different end result. If you also consider the activation conditions to be distinct from the wiring, that would obviously lead to pretty big differences too, because the activation conditions are just as important as the wiring.
I just kind of lumped both of those together in my previous answer, which is why I concluded that there would be no differences. If you transferred solely the wiring, though, then there would likely still be big differences.
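As a toy illustration of that last point (same PyTorch-style analogy as above, not a claim about real neurons): two networks with identical wiring and identical weights, but different "neuron behaviour" (different activation functions standing in for different activation conditions), already respond differently to the same stimulus.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Identical wiring and identical weights, but the "neurons" themselves
# behave differently (different activation conditions).
def make_brain(activation):
    return nn.Sequential(nn.Linear(10, 32), activation, nn.Linear(32, 4))

brain_a = make_brain(nn.ReLU())
brain_b = make_brain(nn.Tanh())
brain_b.load_state_dict(brain_a.state_dict())  # activations carry no parameters

stimulus = torch.randn(1, 10)
with torch.no_grad():
    print(torch.allclose(brain_a(stimulus), brain_b(stimulus)))  # False
```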
Keep in mind though that I’m far from an expert in anything to do with brains, just an enthusiast, and all of this is just my speculation based on what I know about brains and AI.