Supernova_444 t1_je6kc7b wrote
Reply to The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
I think the only real constraints on an ASI would be the laws of physics. We don't have any reason to believe there's a limit to machine intelligence other than hardware. (And then we just get it to design better hardware.) Things we already know to be physically possible, like nuclear fusion and FDVR, would be trivial; it's only the more speculative stuff, like FTL travel, that might turn out to be impossible.
But even a machine that's "only" 100x smarter than us would be a massive game changer. Even with narrow AI designed by humans, we were able to more or less solve the protein folding problem. Imagine what something that thinks at the speed of light would be able to do.
Supernova_444 t1_jdaxn5j wrote
Reply to comment by Spreadwarnotlove in Offbeat A.I. Utopian / Doomsday Scenarios by gaudiocomplex
That... that is completely insane, I'm sorry. Do you actually believe that?
Supernova_444 t1_jcnguqe wrote
Reply to comment by a4mula in Offbeat A.I. Utopian / Doomsday Scenarios by gaudiocomplex
I'll bite. Why would an AGI/ASI decide, without being instructed to, to emulate human behavior? And why would it choose to emulate cruelty and brutality out of every human trait? The way you've phrased it makes it sound like you believe mindless sadism is the core defining trait of humanity, which is an extremely dubious assertion. Even the "voluntary extinction" people aren't that misanthropic. Most people who engage in sadistic or violent behavior do so because of anger, indoctrination, trauma, etc. People who truly enjoy making others suffer for its own sake are usually the product of rare, untreated neurological disorders. An AI might just as easily choose to emulate autism or bipolar disorder.
I think scenarios like this are useful as thought experiments to show that the power of AI isn't something to take lightly. But it's one of the least likely outcomes, and I don't think you actually consider it the most likely one either, given that you haven't committed suicide.
Supernova_444 t1_jeavg8v wrote
Reply to comment by CertainMiddle2382 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Maybe slowing down isn't the solution, but do you actually believe that speeding up is a good idea? What would going faster achieve, aside from increasing the risks involved? And what reasoning is that based on?