Submitted by [deleted] t3_10xoxh6 in Futurology
adrenalinjunkie89 t1_j7thqyb wrote
I'm no programmer, but I figure that if a machine has a strong AI, it could eventually break through its safeguards
Sasuke_1738 t1_j7ti61z wrote
Well, couldn't we prevent this if scientists somehow figured out how to upgrade our consciousness by implementing the AI's computing within our own brains? Also, wouldn't our intelligence be able to evolve through genetic enhancement, as well as by implementing AI into ourselves through augmentation?
adrenalinjunkie89 t1_j7tikdf wrote
We're pretty far from adding electronic computing power to our brains. Any science dealing with that stuff is purely theoretical.
People are scared of making AI too smart. It makes sense that a computer thousands of times smarter than a human could eventually break through the barriers we put up and do... who knows what.
The final season of Silicon Valley is an excellent portrayal of what might go wrong
Sasuke_1738 t1_j7tiugq wrote
I need to finish Silicon Valley
adrenalinjunkie89 t1_j7tiw1p wrote
Dude, the last season is the best for sure. You gotta finish it.
Zer0pede t1_j7vuk59 wrote
You can get the same effect without wiring directly into the brain. You just need an AI that always wants to "check in" to make sure it's aligned with human goals and values. There's a good discussion of that in this book. It's more game theory layered on top of neural networks than biotech (but it will probably be the basis for more direct wiring once we have a better understanding of how the human brain works).
(Also, no “consciousness” would be involved, because nobody is even trying for that.)
Zer0pede t1_j7vtopj wrote
Not if the "safeguards" are structured like a value system. I like the approach in Stuart Russell's "Human Compatible," which is that we start now making AI have the same "goals"* as humans (including checking with humans to confirm).
*I put "goals" in quotes because it makes AI sound conscious, but literally no AI researcher is working on consciousness, so we're really just talking about a system that "checks in" with humans to make sure it doesn't achieve a minor human-assigned goal at the expense of more important, abstract human values (e.g., the paperclip maximizer, or the Facebook algorithm).
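To make the "checking in" idea concrete, here's a toy sketch (my own illustration, not from Russell's book): an agent that defers to a human overseer whenever its estimate of an action's value is too uncertain, instead of blindly maximizing. All names and numbers are hypothetical.

```python
def choose_action(candidate_actions, ask_human, uncertainty_threshold=0.2):
    """Pick the highest-estimated-value action, but defer to the human
    whenever the agent is too unsure the action matches human values."""
    best = max(candidate_actions, key=lambda a: a["estimated_value"])
    if best["uncertainty"] > uncertainty_threshold:
        # Uncertain whether this serves the human's true goals: check in.
        if not ask_human(best["name"]):
            # Vetoed: fall back to the best low-uncertainty option, if any.
            safe = [a for a in candidate_actions
                    if a["uncertainty"] <= uncertainty_threshold]
            best = max(safe, key=lambda a: a["estimated_value"]) if safe else None
    return best

actions = [
    {"name": "maximize_paperclips", "estimated_value": 9.0, "uncertainty": 0.9},
    {"name": "fill_current_order",  "estimated_value": 5.0, "uncertainty": 0.05},
]

# A human who vetoes the runaway plan:
chosen = choose_action(actions, ask_human=lambda name: False)
print(chosen["name"])  # fill_current_order
```

The point of the sketch is the structure, not the numbers: because the agent treats its own value estimates as uncertain, asking the human is part of its decision rule rather than an external barrier it could route around.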