Submitted by Baturinsky t3_104u1ll in MachineLearning
Baturinsky OP t1_j3bh9kb wrote
Reply to comment by LanchestersLaw in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Thanks.
I think people vastly underestimate the possibilities of ChatGPT-like models. If it has learned from the entire(-ish) internet scraped, it's not just a language model, it's a model of all the human knowledge available on the internet, neatly documented and cross-referenced for very easy use by algorithms. Currently it's used by quite simple algorithms, but what if it were used by algorithms that try to use that data to rewrite themselves? Or something else we don't foresee yet.
And I don't even know how it's possible to contain the danger now, as the algorithm for "pickling" the internet like that is already widely known, so it could easily be done by anyone with a budget and internet access. So, one of the necessary measures could be switching off the internet...
LanchestersLaw t1_j3dh4ws wrote
The key words to use for better answers are "control problem" and "AI safety". In my personal opinion, ChatGPT/GPT-3.5 is an inflection point. GPT-3.5 can understand programming code well and do a passable job generating it. This includes its own code. One of the beginner tutorials is using GPT to program its own API.
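That tutorial pattern is less magic than it sounds: you just ask the model, via its chat-completions endpoint, to write the code that calls that same endpoint. A minimal sketch of the request side, assuming the publicly documented endpoint URL and model name (the helper function here is hypothetical, and nothing is actually sent over the network):

```python
import json

# Documented OpenAI chat completions endpoint (request is built, never sent).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_codegen_request(task: str, model: str = "gpt-3.5-turbo") -> str:
    """Assemble the JSON body asking the model to write code for `task`.

    This is the whole trick behind "GPT programs its own API" tutorials:
    the `task` just happens to describe calling this same endpoint.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": f"Write Python code that {task}"},
        ],
    }
    return json.dumps(payload)

body = build_codegen_request("calls the OpenAI chat completions API")
print(json.loads(body)["model"])  # → gpt-3.5-turbo
```

Sending `body` as a POST to `API_URL` (with an API key header) returns generated code, which is why "the model writing its own client" is a beginner exercise rather than a research problem.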
That said, GPT-3.5 has many limitations. It isn't a threat. Future versions of GPT have the potential to be very disruptive.