Submitted by Shaboda t3_zl0m8l in Futurology
fwubglubbel t1_j05zkoc wrote
Reply to comment by [deleted] in Why do so many people assume malevolent AI won’t be an issue until future AI controlled robots and drones come into play? What if malevolent AI has already been in play, covertly, via social media or other distributed/connected platforms? -if this post gets deleted by a bot, we might have the answer by Shaboda
>machines will have human level intelligence then exponentially increase from there.
There is absolutely NO evidence for this. Intelligence is not a continuum like the speed of a car.
ArcaneOverride t1_j06cpud wrote
No, it's more like a wide, multi-axis field. There are many kinds of intelligence, and many aspects of each kind; so many that we might never classify them all.
When we create a mind that is as intelligent as us in the ways that matter for inventing and improving technology, it probably won't be anything like us.
But the mere fact that humans have the levels of intelligence we do proves that those levels are possible. We are living proof that "human level" intelligences can exist.
Now you might postulate that we are the pinnacle and that further gains in intelligence aren't possible. But some people are better at inventing and improving technology (the relevant kind(s) of intelligence) than others, so intelligence clearly varies even among humans. It is at least possible for a machine intelligence to match the greatest human inventors and scientists of all time. And then it could also think faster, with perfect memory, and with as many copies of itself collaborating as its hardware can support.
A million Turings, Lovelaces, Einsteins, Newtons, Curies, Da Vincis, Babbages, etc., all collaborating, with perfect memories and knowledge of each other's thoughts, operating at 1000 times the speed of human minds. All acting as one.
Is that not a mind more intelligent than any single human mind?
Would that not mean that the previous postulate is incorrect? That we are not the pinnacle?
Now consider that they need only to make one small improvement to themselves and then they are very slightly better at improving themselves.
Could enough small incremental improvements not eventually render them smart enough to start making larger improvements?
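To make that compounding idea concrete, here is a toy sketch (purely illustrative numbers, not a claim about any real system): each "generation" the system gets a little better, and being better makes the next improvement a little bigger.

```python
# Toy compounding-improvement model. All numbers are made up for illustration.
capability = 1.0   # arbitrary starting capability
gain = 0.01        # initial 1% improvement per generation

for generation in range(1, 101):
    capability *= (1 + gain)  # apply the current improvement
    gain *= 1.05              # being slightly smarter makes the next improvement slightly bigger

print(f"Capability after 100 generations: {capability:.1f}x the starting level")
```

Even with tiny starting numbers, the feedback between "capability" and "rate of improvement" is what drives the curve upward; that is the whole point of the argument above.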
[deleted] t1_j070jbw wrote
We are definitely not the pinnacle. Even looking at machine learning now, it's clear that it is already better at certain tasks than any individual human could ever be, and in much less time from input to output.
The interesting thing comes when those narrow ML capabilities all get packaged into one mechanical "being" that is smarter than the entire human species...
Scary to think about (as in unknown-scary, not monster-movie scary), but it's coming; we should be there in less than a decade.
ArcaneOverride t1_j08sbkt wrote
Yeah, I was using that premise to try to disprove it by contradiction. I know some people believe that minds significantly smarter than us aren't possible, so I wanted to address that belief before someone replied claiming it.