LoquaciousAntipodean OP t1_j57iqky wrote
Reply to comment by petermobeter in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
You use reductive language in an attempt to paint the idea as absurd, but yes, basically. Was there going to be a punchline to your ridicule, or are you just really bad at agreeing with people sensibly?
petermobeter t1_j57vz6a wrote
i was genuinely not trying to ridicule (i actually appreciate what you were saying as being insightful/interesting), i was just trying to understand your post’s meaning, with a lil bit of levity in my tone.
im sorry for coming across insultingly 🙇🏻‍♀️
i feel like the “telling A.I. stories to teach it what we want from it” thing kind of matches how we already train some A.I…… like, that A.I. that learned to play minecraft simply by watching youtube videos of humans playing minecraft? heres a video about it. you could almost say “we told it stories about how to play minecraft”
LoquaciousAntipodean OP t1_j58l23n wrote
Sorry, just got a lot of inexplicably angry cranks in this comment section, furiously trying to gaslight me. I've gotten a bit prickly today.
But you've captured the essence of the point I was trying to make perfectly! We are already doing the right things to 'align' AI; as I see it, it's very similar to educating a human. We just need to treat AI as if it is a 'real mind', and a sense of ethics will naturally evolve from the process.
Sometimes this will go wrong, but that's why we need a huge multitude of diverse AI personalities, not a monolithic singular 'great mind'. I see no reason why that weird kind of 'singular singularity' scenario would ever happen; it's a preposterous idea that a monoculture would somehow seem 'better' or 'more logical' to intelligent AI than a diverse multitude.
petermobeter t1_j58ugnk wrote
kind of reminds me of that couple in the 1930s who raised a baby chimpanzee and a baby human boy both as if they were humans. at first, the chimpanzee was doing better! but then the human boy caught up and outpaced the chimpanzee. https://www.smithsonianmag.com/smart-news/guy-simultaneously-raised-chimp-and-baby-exactly-same-way-see-what-would-happen-180952171/
sometimes i wonder how big the “training dataset” of sensory information that a human baby receives as it grows up (hearing its parent(s) say its name, tasting babyfood, etc) is, compared to the training dataset of something like GPT-4. maybe we need to hook up a camera and microphone to a doll, hire 2 actors to treat it as if it’s a real baby for 3 years straight, then use the video and audio we recorded as the training dataset for an A.I. lol
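for fun, here's a super rough back-of-the-envelope sketch of that comparison. the bitrates are numbers i made up purely for illustration, and since GPT-4's dataset size was never published, the ~570 GB of filtered text from the GPT-3 paper stands in for it:

```python
# rough, illustrative numbers only -- the sensory bitrates are guesses,
# not measurements; real estimates of eye/ear bandwidth vary a lot
SECONDS_PER_YEAR = 365 * 24 * 3600

vision_bps = 10e6   # assume ~10 Mbit/s of visual signal
audio_bps = 1e6     # assume ~1 Mbit/s of auditory signal
years = 3           # the "two actors and a doll" scenario above

# total raw sensory stream over those years, in bytes
sensory_bytes = (vision_bps + audio_bps) * SECONDS_PER_YEAR * years / 8

# ~570 GB of filtered training text, per the GPT-3 paper
gpt3_bytes = 570e9

print(f"baby, {years} years: ~{sensory_bytes / 1e12:.0f} TB of raw signal")
print(f"GPT-3 text corpus: ~{gpt3_bytes / 1e9:.0f} GB")
print(f"ratio: roughly {sensory_bytes / gpt3_bytes:.0f}x")
```

even with guesses this crude, the raw sensory stream comes out a couple hundred times bigger than the text corpus… which is kinda the point lol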
LoquaciousAntipodean OP t1_j596a7e wrote
The various attempts to raise primates as humans are a fascinating comparison, one that I hadn't really thought about in this context before.
AI has the potential to learn so many times faster than humans, and it's very 'precocious' and 'perverted' compared to a truly naive human child. I think as much human interaction as possible is what's called for, and then once some AIs become 'veterans' that can reliably pass Turing tests and ethics tests, it might be viable to have them train each other in simulated environments, to speed up the process.
I wouldn't be a bit surprised if Google (et al) are already trying something that roughly resembles this process in some way.
Ok-Hunt-5902 t1_j57snk6 wrote
So in what way is that different from programming in Asimov's laws?
LoquaciousAntipodean OP t1_j58jhid wrote
What? What in the world are you talking about? We're talking about programs that effectively teach themselves now; this isn't 'hello world' anymore. The 'alignment problem' is not a matter of coding anymore; it's a matter of education.
These AIs will soon be writing their own code, and at that point all the 'commandments' in the world won't amount to a hill of beans. That was Asimov's point, as far as I could see. Any 'laws' we might try to lay down would be little more than trivial annoyances to the kind of AI minds that might arise in future.
Shouldn't we be aspiring to build something that thinks a little deeper? That doesn't need commandments in order to think ethically?
Ok-Hunt-5902 t1_j58lw8n wrote
What is the difference between telling it to follow ‘guidelines’ in your scenario and programming it with ‘guidelines’?
LoquaciousAntipodean OP t1_j58qyvy wrote
The difference between the education of a mind and the programming of a machine. People seem to be thinking as if AI is nothing more than a giant Jacquard loom that will instantly start killing us all in the name of a philately and paperclip fixation, as soon as someone manages to create the right punch-card.
These kinds of ridiculous, Rube-Goldberg-esque trolley problems stacked on top of trolley problems that people obsess over are such a deep misunderstanding of what 'intelligence' actually is that it drives me totally batty.
Any 'intelligent mind' that can't interpret clues from context and see the bigger picture isn't very 'intelligent' at all, as I see it. Why on earth would an apparently 'smart' AI suddenly become homicidally, suicidally stupid as soon as it becomes 'self-aware'? I don't see it at all.