peregrinkm t1_j81dqsx wrote
Reply to comment by pepperoniwassabi in A Different Kind of Ark — How we can sequence and store our DNA to be encoded into a future simulation and why this may have already happened by I_HaveA_Theory
They can already do that by combining deepfakes with AI. Encoding DNA into the simulation would just help it to simulate protein and cell growth. That would be extremely high resolution.
The question is: could it ever be conscious?
CaseyTS t1_j81f7wb wrote
>could it ever be conscious?
With actual computers, that is a hard question and I'm not sure how to answer it. But if you could simulate every cell in the human brain, you could definitely produce something that behaves exactly like a person we'd call conscious - inside and out. There's no fundamental rule that says the matter we build machines out of cannot be conscious. I see consciousness as purely emergent, not primal like a dualist's idea of a soul. As such, I think of it as more of an information phenomenon than a material phenomenon (though, obviously, humans use physics to operate).
peregrinkm t1_j81fyfh wrote
But would it be aware of itself as a conscious entity, rather than merely mimic the patterns of something that is conscious?
CaseyTS t1_j829j70 wrote
In the same way that a human is, sure. Consciousness is a product of the behavior of a brain. If the simulation allows the brain to make whatever choices a human would (it would have to have virtual senses or something), then I would say it's the same as human consciousness. I don't see a reason otherwise.
peregrinkm t1_j82axzr wrote
Clearly there’s something within you that registers sight as an image interpreted by consciousness, but is that any reason why someone should “see” what they see? You experience consciousness, meaning something experiences the sensory stimuli. What is the nature of experience itself?
Dozygrizly t1_j81l7iy wrote
It's actually pretty hotly debated whether this would be possible; if you're interested, check out the debates around things like the Blue Brain Project.
Your brain has on the order of a hundred trillion synapses. The information being relayed at synapses is not binary: different neurotransmitters have excitatory or inhibitory effects of varying strength, and signalling can even propagate backwards across a synapse. The effect of the entire nervous system would need to be modelled, as well as the gut microbiome, since these all influence the brain significantly.
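To make the "not binary" point concrete, here's a toy sketch (my own illustration, not biologically faithful): a rate-based unit that leakily integrates graded excitatory and inhibitory drive instead of 0/1 signals.

```python
# Toy sketch: a rate-based neuron whose synaptic inputs are graded
# (continuous) excitatory/inhibitory drives, not binary signals.
import numpy as np

rng = np.random.default_rng(0)

n_exc, n_inh = 80, 20                    # excitatory and inhibitory synapses
w_exc = rng.uniform(0.0, 0.1, n_exc)     # graded synaptic strengths
w_inh = rng.uniform(0.0, 0.3, n_inh)

dt, tau = 1.0, 20.0                      # timestep and membrane time constant (ms)
v = 0.0                                  # continuous "activation", not a spike bit

for _ in range(200):
    exc_rates = rng.random(n_exc)        # presynaptic firing rates in [0, 1]
    inh_rates = rng.random(n_inh)
    drive = w_exc @ exc_rates - w_inh @ inh_rates
    v += dt / tau * (-v + drive)         # leaky integration of a continuous drive

print(f"final activation: {v:.3f}")      # a real number, not a 0/1 state
```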
On top of that, say you manage to create a perfect simulation: you essentially just have a brain in a jar. You now need to simulate an external environment and a lifetime's worth of experiences so the simulated brain can develop plastically in response to input (otherwise it's essentially an inert lump of meat). Your simulated brain will not respond accurately without this plastic development.
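And a similarly hand-wavy sketch of the "plastic development in response to input" point, assuming a bare-bones Hebbian rule (again my own toy example, nothing from the Blue Brain work):

```python
# Toy sketch: synaptic weights only become structured if the simulated
# brain is actually fed an "environment" (input patterns) over time.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 50, 10
W = rng.normal(0.0, 0.01, (n_out, n_in))   # initially unstructured synapses
eta = 0.01                                  # learning rate

for _ in range(1000):
    x = rng.random(n_in)                    # stand-in for a lifetime of sensory input
    y = W @ x                               # postsynaptic activity
    W += eta * np.outer(y, x)               # Hebbian update: fire together, wire together
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights from blowing up

# Strip out the input loop and W never organises: the inert-lump-of-meat problem.
```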
To simulate a consciousness accurately, you would essentially need to simulate someone's entire life.
I agree that consciousness is emergent, but I don't think we could simulate consciousness as we know it. I believe we could get to some form of consciousness though.
CaseyTS t1_j829r9r wrote
Sure, but we don't have to use a binary computer to simulate it. We could use an analogue computer or whatever else. That said, I agree that this is outside of any practical application; it's science fiction. But I think that, in principle, there is no difference between a machine brain and a human brain if they do the same things. Of course, any consciousness would have to have an appropriate environment, artificial or not.
Dozygrizly t1_j8bh5ot wrote
Yeah, in my mind it's more a case of: if we get to the point where we can simulate one brain properly in this fashion, the exercise is essentially useless.
We can simulate the brain of a Lithuanian man who has been addicted to 2CB his entire life. Great. Now that we have done that, we have essentially just recreated a brain that already exists.
What does this fake brain tell us that we don't already know, after studying this man for his entire life to determine all the inputs required to simulate his brain?
We now have one brain. That is useless in an inferential sense for any kind of research - a sample of N = 1, a case study of a brain we had already fully mapped before expending the resources to simulate it.
Once we have the capability to simulate a human brain properly, we won't need to do it (or learn anything substantive from it). This is the argument I most agree with anyway.
I wouldn't be so quick to equate a biological brain and a computerised one; they exist on such different planes that (in my opinion) such broad statements are bold to say the least. I do appreciate your point though.
OvermoderatedNet t1_j81xqaf wrote
> debates around things like the blue brain project
It would really suck if at the end of the day it turned out there were tasks that silicon and computers literally cannot do and that anything more complex than a slow-motion self-driving delivery bot requires organic brain cells.