Submitted by basafish t3_zpibf2 in Futurology
JoeBookish t1_j0t8a4i wrote
I don't think this is really an option. A) Programs that improve themselves already exist (see the toy sketch below). B) Any random a*hole with a computer can make one that, given access to a broad enough knowledge base, is smarter than a human, so it's only a matter of time until someone does and it goes wild.
I think we just have to keep our hands on the power supplies and monitor their behavior, but sooner or later the algorithms are gonna do whatever they want with us. It stands to reason that killing a person isn't useful to them (though I'm sure cops will argue differently in like two years, after they kill somebody with a robot dog).
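To make claim A concrete, here's a toy sketch (my own illustration, not anything from this thread): a (1+1) hill climber that mutates its own step size, so the search procedure improves along with the answer. Every name and number here is made up.

```python
import random

# Toy "program that improves itself": a (1+1) hill climber whose
# mutation step size is itself subject to mutation and selection.
def fitness(x):
    return -(x - 3.0) ** 2  # toy objective, peak at x = 3

x, step = 0.0, 1.0
for _ in range(200):
    cand_step = step * random.choice([0.8, 1.25])  # mutate the strategy too
    cand_x = x + random.gauss(0, cand_step)        # mutate the solution
    if fitness(cand_x) > fitness(x):
        x, step = cand_x, cand_step  # keep improvements to both
print(round(x, 3))  # converges near 3.0
```

It's nowhere near "smarter than a human," but it is, literally, a program that improves its own behavior, and anyone can write one.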
norbertus t1_j0tpsaq wrote
> any random a*hole with a computer can make one with access to a broad enough knowledge base to be smarter than a human
I've been training machine learning models for four years. Without a big lab and an atom bomb's worth of energy, it's hard.
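For a sense of the scale, here's a rough back-of-envelope using the common ~6·N·D rule of thumb for training FLOPs. All the numbers are assumed round figures for illustration, not measurements of any particular run:

```python
# Back-of-envelope training cost for a GPT-3-scale model.
# All values below are rough assumptions, not measured figures.
params = 175e9                 # model parameters
tokens = 300e9                 # training tokens
flops = 6 * params * tokens    # ~6*N*D rule of thumb -> ~3e23 FLOPs

gpu_flops = 100e12             # assumed sustained FLOP/s per GPU
gpu_watts = 400                # assumed draw per GPU incl. overhead
seconds = flops / gpu_flops    # single-GPU-equivalent wall time
energy_mwh = seconds * gpu_watts / 3.6e9   # 1 MWh = 3.6e9 J
print(f"{flops:.2e} FLOPs, ~{energy_mwh:.0f} MWh")
```

That comes out to hundreds of MWh of GPU time even under generous assumptions. "Atom bomb" is hyperbole, but it's a datacenter budget, not something any random a*hole with a computer has.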
JoeBookish t1_j0udwps wrote
It's oversimplified, but computers aren't rare, and it's not hard for software to outperform a human at most tasks we consider essential, like building controls, math, driving, etc. The broad point is that anybody can program.
norbertus t1_j0uhkjg wrote
> The broad point is that anybody can program
LOL a lot of young people struggle with folders