Submitted by [deleted] t3_115ez2r in MachineLearning
gwern t1_j91ozq3 wrote
Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]
> Some people would do it on purpose, and it can happen by accident.
Forget 'can': it would happen by accident, if it ever happens at all. I mean, bro, we can't even 'design an AI' that learns the 'tl;dr:' summarization prompt; that just happens when you train a Transformer on Reddit comments, and we only discover it afterwards by poking at what GPT-2 can do. You think we'd be designing 'consciousness'?
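
A minimal sketch of what that discovery looks like, assuming the Hugging Face `transformers` library: GPT-2 was never trained on a summarization objective, yet appending "TL;DR:" to a passage often elicits a summary, because that pattern shows up in its Reddit-heavy training data. The prompt text below is just an illustrative example.

```python
# Hedged sketch: emergent "TL;DR:" summarization in GPT-2, found after training,
# not designed in. Assumes `pip install transformers` and a downloadable gpt2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "Researchers trained a large Transformer language model on web text and later "
    "found it could perform tasks, such as summarization, that it was never "
    "explicitly designed or trained to do."
)

# The "TL;DR:" suffix is the discovered prompt; the capability was only noticed afterwards.
prompt = article + "\nTL;DR:"
output = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)
print(output[0]["generated_text"])
```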
Sphere343 t1_j92y4se wrote
An AI could, in theory, go from not being sentient to being sentient if it gains enough information in a certain way. As for the specific way? No clue, because it hasn't been found yet. But through data gathering and self-improvement an AI could become sentient if the creators didn't put limits on it, or if the creators programmed the self-improvement in a certain way.
Would it truly be sentient? Unknown. But what is certain is that even if the AI isn't sentient, once it has gained enough information to respond in any circumstance, it will seem as if it is. Except for true creative skill, of course; you kind of have to be truly sentient to create genuinely new, detailed ideas.
TheRealSerdra t1_j944f39 wrote
What defines sentience? If I ask ChatGPT "what are you", it'll say it's ChatGPT, an LLM trained by OpenAI, or something to that effect. Does that count as sentience or self-awareness?
Sphere343 t1_j94dx4y wrote
Uh, because the programmers literally added that in. It's an obvious question, so no, of course not.
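
To illustrate the point that the self-description is supplied by the developers rather than discovered by the model, here is a hedged sketch assuming the `openai` Python client (v1+) and an API key in the environment; the model name and system message are illustrative assumptions, not the actual ChatGPT configuration.

```python
# Hedged illustration: the "I am ChatGPT, an LLM trained by OpenAI" style answer
# can simply be injected by a developer-written system message, so the reply
# reflects instructions, not introspection.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # The self-description is written here by the developer, not learned as "self-awareness".
        {"role": "system", "content": "You are ChatGPT, a large language model trained by OpenAI."},
        {"role": "user", "content": "What are you?"},
    ],
)
print(response.choices[0].message.content)
```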