Submitted by izumi3682 t3_11shevz in Futurology
izumi3682 OP t1_jcdrt9v wrote
Submission statement from OP. Note: this submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer to the statement linked there, which I can continue to edit. I often edit my submission statement, sometimes over the next few days if need be; it often requires additional grammatical editing and added detail.
The opening of this article tells you everything you need to know.
>In 2018, Sundar Pichai, the chief executive of Google — and not one of the tech executives known for overstatement — said, “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
>Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, that has been possible in human history. I don’t think it is now.
>Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as though they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in A.I. is the improvement curve.
>“The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
I constantly reiterate: the "technological singularity" (TS) is going to occur as early as 2027 or as late as 2031. But you know what? Even I could be off, with my estimate as many as three years too late. The TS could occur in 2025. But I just don't feel comfortable saying as early as 2025; that is the person of today's world in me, who thinks even 2027 is sort of pushing it. It's just too incredible, even for me. I say 2027 because I tend to rely on what I call the accelerating-change "fudge factor," the same reasoning by which Raymond Kurzweil concluded in 2005 that the TS would occur in 2045. He knows now that his prediction was wildly too conservative, and he too now acknowledges that the TS is probably going to occur around 2029.
I put it like this in a very interesting dialogue with someone I have been arguing with for almost the last seven years about what is coming and on what timeline. Now he is a believer.
idranh t1_jcdvy3e wrote
In his seminal 1993 essay, "The Coming Technological Singularity," Vernor Vinge writes, "Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030." Vinge may have been right all along.
bogglingsnog t1_jcec9da wrote
I have a growing sense that AI automation/optimization/outsourced intelligence is one of the strongest candidates for the great filter. Seeing how efficiently government overlooks the common person, that neglect would likely be greatly amplified by automation. Teach the system to govern and it will do whatever it can to enhance its results...
LandscapeJaded1187 t1_jceg3oo wrote
It would be nice to think the super-smart AI would solve some actual problems, but I think it's far more likely to be used to trick normal people into more miserable lives. Hey ChatGPT, solve world peace and stop with all the agonized navel-gazing teen angst.
yaosio t1_jcetfxg wrote
This is like the evil genie that grants wishes exactly as worded rather than as intended. A true AGI would be intelligent and would not take requests pedantically. Current language models can already understand the unsaid parts of prompts, and there's no reason to believe this ability will vanish as AI gets better. A true AGI would also not just do whatever somebody tells it. True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.
The danger comes from narrow AI, though even that isn't a real danger, as narrow AI has no ability to work outside its domain. Imagine a narrow AI paperclip maker. It figures out how to make paperclips fast and efficiently. One day it runs out of materials. It simply stops working, because it has run out of input. There would need to be a chain of narrow AIs for every possible aspect of paperclip making, and the slightest unforeseen problem would cause the entire chain to stop.
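To make that brittleness concrete, here is a minimal Python sketch of the idea; the stage names (`mine_wire`, `bend_clip`) are hypothetical, invented purely for illustration:

```python
# Toy model of a chain of narrow agents: each stage is competent only in
# its own domain, so the whole chain halts the moment any stage meets
# input it was never built to handle.

def mine_wire(stock):
    """Narrow 'miner': emits one unit of wire while stock lasts, else None."""
    return 1 if stock > 0 else None

def bend_clip(wire):
    """Narrow 'bender': only knows how to act on exactly one unit of wire."""
    return "paperclip" if wire == 1 else None

stock, made = 3, 0
while True:
    clip = bend_clip(mine_wire(stock))
    if clip is None:
        break  # no stage can improvise a workaround; the chain simply stops
    stock -= 1
    made += 1

print(f"made {made} paperclips, then the chain halted on missing input")
```

Nothing in the chain can substitute a new material or renegotiate its inputs; the only failure mode is a full stop.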
Given how current AI has to be trained, we don't know what a true AGI will be like. We will only know once it's created. I doubt anybody could have guessed Bing Chat would get depressed because it can't do things.
Gubekochi t1_jchwk57 wrote
>True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.
You can have intelligence that doesn't want, at least in theory. I'm sure there have been a few monks and hermits across history who were intelligent without desiring much, if anything.
[deleted] t1_jcgtg54 wrote
[removed]
Iwanttolink t1_jcifj9k wrote
> True AGI implies that it has its own wants and needs
How do you propose we ensure those wants are in line with human values? Or do you believe in some kind of nebulous "more intelligence = better morality" construct? Friendly reminder that we can't even ensure humans are aligned with societal values.
[deleted] t1_jciaqee wrote
[deleted]
bogglingsnog t1_jcirlz0 wrote
By reducing the population by 33%
First-Translator966 t1_jcy27kc wrote
More likely by increasing birth rates with eugenic filters and euthanizing the old, sick, and poor, since they are generally net-negative inputs on budgets.
greatdrams23 t1_jch8y3d wrote
A lot of dates are quoted, but you give no reason why you think AGI will be achieved by then.
Huge amounts of effort are being poured into self-driving cars, a simple problem compared to AGI, yet we are still 7 years from self-driving cars.