Ok-Hunt-5902 t1_j57snk6 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
So in what way is that different from programming in Asimov's laws?
LoquaciousAntipodean OP t1_j58jhid wrote
What? What in the world are you talking about? We're talking about programs that effectively teach themselves now; this isn't 'hello world' anymore. The 'alignment problem' is no longer a matter of coding, it's a matter of education.
These AIs will soon be writing their own code, and at that point all the 'commandments' in the world won't amount to a hill of beans. That was Asimov's point, as far as I could see it. Any 'laws' we might try to lay down would be little more than trivial annoyances to the kind of AI minds that might arise in the future.
Shouldn't we be aspiring to build something that thinks a little deeper? That doesn't need commandments in order to think ethically?
Ok-Hunt-5902 t1_j58lw8n wrote
What is the difference between telling it to follow ‘guidelines’ in your scenario and programming it with ‘guidelines’?
LoquaciousAntipodean OP t1_j58qyvy wrote
The difference between the education of a mind and the programming of a machine. People seem to think of AI as nothing more than a giant Jacquard loom that will instantly start killing us all in the name of a philately-and-paperclip fixation, as soon as someone manages to create the right punch-card.
These ridiculous, Rube-Goldberg-esque trolley problems stacked on top of trolley problems that people obsess over reflect such a deep misunderstanding of what 'intelligence' actually is that it drives me totally batty.
Any 'intelligent mind' that can't interpret clues from context and see the bigger picture isn't very 'intelligent' at all, as I see it. Why on earth would an apparently 'smart' AI suddenly become homicidally, suicidally stupid the moment it becomes 'self-aware'? I don't see it at all.
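To make that distinction concrete, here's a toy sketch in Python (purely illustrative; every name in it is invented for the example). A hard-coded 'law' is a filter bolted onto the outside of an agent, while an 'educated' disposition lives inside whatever the agent has actually learned, so it can cover cases the rule-writer never enumerated:

```python
# Toy contrast: a bolted-on rule vs. a learned disposition.
# All names here are hypothetical, made up for illustration.

BANNED_ACTIONS = {"harm_human"}  # the 'commandments', enumerated by hand

def rule_filtered_agent(action: str) -> str:
    """'Programming in guidelines': a hard-coded check wrapped around
    the agent. The agent itself is unchanged; anything not on the list
    (or anything that routes around the wrapper) routes around the 'law'."""
    if action in BANNED_ACTIONS:
        return "refused"
    return f"performed {action}"

def educated_agent(action: str, learned_values: dict[str, float]) -> str:
    """'Education': the preference lives in what the agent has learned
    (here, a toy value table shaped by feedback), so the judgment travels
    with the agent into situations the rule-writer never anticipated."""
    if learned_values.get(action, 0.0) < 0.0:
        return "declined (judged harmful)"
    return f"performed {action}"

# The rule filter only knows the exact cases it was given;
# the learned values generalize only as well as the 'education' did.
values = {"harm_human": -1.0, "help_human": 1.0, "deceive_human": -0.5}
print(rule_filtered_agent("deceive_human"))     # slips straight past the rule
print(educated_agent("deceive_human", values))  # declined (judged harmful)
```

The point of the sketch isn't that a lookup table is a mind, it's where the constraint lives: outside the agent as a list of forbidden strings, or inside it as something the agent actually weighs.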