khrisrino t1_j71rnmw wrote
I agree. It seems reasonable to think of the human brain as an exceedingly complex byproduct of billions of years of evolution, with no central algorithm "in there" to mimic, unlike the laws of physics. You can predict where a comet will go by observing a tiny fraction of its path, since its movement is mostly governed by a few simple physical laws. But if there is no central algorithm in the brain, an AI cannot emulate it by observe-and-mimic, because the problem is always underspecified. However, an AI does not need to match the entirety of the brain's functions to be useful. It just needs to model some very narrow domains and perform to our specification of what's correct.
ReExperienceUrSenses OP t1_j71w0p8 wrote
Absolutely correct. We can decompose parts of our thinking and still do useful things, and speed up the things we already do. I simply argue that going further, to a programmed "intelligence" or mind as independently capable as ours, especially for accomplishing unstructured, unformalizable tasks in the unbounded environment of reality, is a tall ask.
The practical, useful AIs, even if they continue to progress, are still ladders to the moon.