Submitted by talkingtoai t3_y0zmd1 in MachineLearning
Cogwheel t1_irwdfyl wrote
Reply to comment by mixelydian in [D] Is it possible for an artificial neural network to become sentient? by talkingtoai
I think the fundamental difference you're pointing out is that a brain's weights change over time, and those changes are influenced by factors beyond the structure and function of the neurons themselves. Maybe this kind of thing is necessary for consciousness, but I don't think it really changes the argument.
We don't normally think of the weights of a deployed neural net as changing over time, but that's exactly what happens during training. Perhaps future sentient AIs will have some sort of ongoing feedback/backpropagation during operation, so the weights keep adapting while the system runs.
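A minimal sketch of what that might look like: a toy single-neuron "network" whose weights are nudged by gradient descent on every interaction, so it is a slightly different function at each step rather than a frozen one. The target signal and learning rate here are hypothetical, just to make the drift visible.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)            # the network's "weights", changing over time

def forward(x):
    return w @ x                  # a single linear neuron, for illustration only

# Online learning: every input nudges the weights, so the "same"
# network keeps becoming a slightly different function.
for step in range(200):
    x = rng.normal(size=3)
    target = 2.0 * x[0] - x[1]    # hypothetical signal the system adapts to
    y = forward(x)
    grad = (y - target) * x       # gradient of squared error w.r.t. w
    w -= 0.1 * grad               # weight update during "operation"

print(w)                          # drifts toward [2, -1, 0]
```

The point isn't the learning rule itself, just that "training" and "running" don't have to be separate phases.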
And because of the space/time duality for computation, we can also imagine implementing these changes over time as just a very large sequence of static elements that differ over space.
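To illustrate the duality: an iterative update applied repeatedly over time computes exactly the same thing as a spatial stack of identical static stages, one per time step. The recurrent weights below are made up for the example.

```python
import numpy as np

W = np.array([[0.5, 0.2],
              [0.1, 0.7]])        # hypothetical fixed recurrent weights

def step(h):
    return np.tanh(W @ h)         # one update applied over time

h0 = np.array([1.0, -1.0])

# Change over time: apply the same update 4 times in sequence.
state = h0
for _ in range(4):
    state = step(state)

# Unrolled in space: a chain of 4 identical static stages.
stages = [step] * 4
unrolled = h0
for stage in stages:
    unrolled = stage(unrolled)

assert np.allclose(state, unrolled)   # same computation, time traded for space
```

The same trick is how recurrent nets get unrolled for backpropagation through time.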
So I still don't see how this refutes the idea that the operations in the brain can be represented by math we already understand, or that brains are described by biochemical processes.
Edit: removed redundant words that could be removed for redundancy