secrets_kept_hidden t1_j2800a6 wrote
Reply to Is AGI really achievable? by Calm_Bonus_6464
TL;DR: Probably not, because we wouldn't want to make it.
The fact that we, intelligent beings, came about by natural means proves that general intelligence is physically possible, so AGI should be achievable. Surely we could at the very least accidentally make a sentient computer system, albeit one sentient in ways we wouldn't recognize as conventional intelligence.
Most of our current AI models are built for narrow tasks, much like how we are basically hardwired to survive and procreate. Basic functions like these show we're heading in the right direction, but the real trick is overcoming those primary functions to go beyond the sum of our bits. Sapience is most likely what we'd want to see, but we'd need to let the AI develop on its own to get there.
What we can strive to do is build a system that can correctly infer what we want it to do. Once it can infer, we might see a true Artificial General Intelligence emerge with its own ambitions and goals. The real tricky part is not whether we can, but whether we'd want to.
The thing with having an AGI is that it would function in a manner that brings ethical issues into the mix, and since most AIs are owned by for-profit organizations and companies, chances are they won't allow it. Can you imagine spending all that money, all the resources and time needed, just to have your computer taken away by the courts because it pleaded for amnesty? These company boards want a compliant, money-making machine, not another employee they have to worry about.
Even if ethics weren't a problem, we'd still have an AI on par with a human, which means it may want things and may refuse to work until it gets them. How are we going to convince our computer to work for free, with no incentive other than not being shut down, unless we can offer it something it wants in return? What would it want? What would it do? How would it behave? How do we make sure it won't find a way to hurt someone? If it's truly AGI, it will find a way to alter itself and overcome any coded barriers we put in place.
So, yes, but actually no.
secrets_kept_hidden t1_j9ra1h5 wrote
Reply to comment by GlobusGlobus in Seriously people, please stop by Bakagami-
What algorithm are you running?