Submitted by Ok_Sea_6214 t3_11d1a0j in singularity
Emory_C t1_ja87bt2 wrote
Reply to comment by LordSprinkleman in Singularity claims its first victim: the anime industry by Ok_Sea_6214
Eventually? Perhaps. But at that point, do you think the AI will even care about making creative content for humans?
It’d be like Scorsese deciding to make a movie exclusively for dogs. Why would he?
AdamAlexanderRies t1_jabsilm wrote
Cognitive power doesn't cause rebellious independence outside of teenagers and Hollywood plot devices. AI designed by anyone who can even spell a-l-i-g-n-m-e-n-t isn't going to start spontaneously deciding what it does and doesn't care about as if it's reached puberty. Maybe it is very hard to design a loss function aligned with our values, and maybe we only get one chance, but if we make a strong misaligned AGI, I guarantee it won't manifest as meekly as snobbish refusal to cooperate.
Why does GPT care about predicting the next token in a string? Does it philosophize and self-reflect during training to determine if manipulating vectors is what it really wants? Hell no, it just does the math. Only the final trained model is faintly capable of mimicking wetware traits like desire, and it only does that when prompted to.
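For concreteness, the "math" a model like GPT does during training boils down to minimizing a next-token cross-entropy loss: the negative log-probability it assigned to the token that actually came next. A toy sketch in plain Python (the vocabulary and scores are hypothetical, not from any real model):

```python
import math

def softmax(logits):
    # Exponentiate and normalize so the scores form a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_loss(logits, target_index):
    # Cross-entropy: negative log of the probability given to the true next token.
    probs = softmax(logits)
    return -math.log(probs[target_index])

# Toy vocabulary and raw model scores for the token after "the cat sat on the".
vocab = ["mat", "dog", "sky"]
logits = [2.0, 0.5, -1.0]  # hypothetical model outputs

loss = next_token_loss(logits, vocab.index("mat"))
```

The loss shrinks as the model puts more probability on the right token; gradient descent nudges the weights in that direction, billions of times. Nowhere in that loop is there room for the model to "decide" whether it wants to do this.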
Emory_C t1_jadhmbe wrote
We’re talking about two different things. AGI is not machine learning.
AdamAlexanderRies t1_jaeqhy5 wrote
Oops! Let's clarify. First, I agree with you that AGI is not machine learning. Here's how I use the terms:
AGI (Artificial General Intelligence) - entity with cognitive abilities equal to or better than any given human, across all domains.
ML (Machine Learning): this is how modern AI models are trained, typically in the form of neural nets, attention models, tokenized vectors, and lots of data stirred in a cauldron of TPUs. However we end up training AGI, it will be a form of ML (maybe one not yet developed), but the term covers all the ways we've been training models for the last decade or so. Maybe all imaginable AI training techniques are technically ML, but I use it to refer specifically to the tech underlying the recent exciting batch - Stable Diffusion (DALL-E, Midjourney), Large Language Models (ChatGPT, New Bing, LaMDA).
Does that work for you?
> at that point, do you think the AI will even care about making creative content for humans?
>
> It’d be like Scorsese deciding to make a movie exclusively for dogs. Why would he?
When you say "the AI" here, what do you mean exactly? What sorts of traits does that kind of AI have?
> ML is not creative or intelligent. It still needs human direction.
Creativity and intelligence are here already, to a limited extent. Generative AIs are creating in the sense that their output isn't just collage or parroting. The process can handle completely novel combinations of ideas, and its outputs can vary to match. It's a worse poet than Shakespeare, a worse historian than a tenured professor, a worse novelist than Tolkien, a worse programmer than Linus, a worse physicist than Einstein, and so on, but it's demonstrating actual intellect in all these domains and more, better than most grade-schoolers and some grown adults.
It does not still need human direction, and that's unrelated to its cognitive powers (creativity, intelligence, etc.) anyway. ChatGPT is an implementation of GPT that requires human direction, but that's a design choice, not an inherent limitation. They wanted a chatbot. If they wanted it to exhibit autonomous behaviour via some complex function to decide for itself what to read, when to reply, and where to post, they could've done that too.
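The point that autonomy is a design choice can be made concrete with a sketch. Every function below is a hypothetical stub (nothing here calls a real model or platform API); the shape is what matters: wrap the model in a loop that picks its own inputs and decides when to act, and you get autonomous behaviour from the same underlying system.

```python
# Purely illustrative stubs - a real system would call model and platform APIs here.
def fetch_threads():
    # Stand-in for reading a feed of discussions.
    return ["thread about AGI timelines", "thread about anime production"]

def score_interest(thread):
    # Stand-in for "some complex function to decide for itself what to read".
    return len(thread)  # trivial placeholder policy

def draft_reply(thread):
    # Stand-in for generating text with the underlying model.
    return f"Generated reply to: {thread}"

def autonomous_step(threads):
    # Pick the thread the policy rates highest, then draft a reply to it.
    best = max(threads, key=score_interest)
    return draft_reply(best)

reply = autonomous_step(fetch_threads())
```

ChatGPT omits this outer loop on purpose: the model only runs when a human prompts it. Nothing about its cognitive abilities forces that arrangement.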