Submitted by gaudiocomplex t3_zxnskd in Futurology
jackl24000 t1_j24sglg wrote
Reply to comment by CoolioMcCool in What, exactly, are we supposed to do until AGI gets here? by gaudiocomplex
Yeah, but try to imagine any foreseeable future in which you'd turn it loose on, e.g., customer-facing tasks involving potentially disputed or ambiguous issues like warranty eligibility, spouting nonsensical corporate gobbledygook at your good customers, who are infuriated by the time it gets kicked to a human.
Or any other high value or mission critical interaction with other humans?
How do systems meant to replace most human interactions with AGI deal with black swan events that aren't in their training sets, like natural disasters, pandemics, etc.?
CoolioMcCool t1_j267f0m wrote
Ok, so the narrow AIs that are coming in the next several years will only be able to do the job 95% of the time. It'll still take a lot of jobs. What do we do with all of the people it replaces?
Honestly, a lot of these replies read like people are threatened and being defensive: "there's no way it could do MY job".
Cool. It will be able to do a lot of stuff and massively reduce the number of jobs that require people is my point. What do we do about all of the unemployment?
jackl24000 t1_j26as9o wrote
Try reading it less from a worried worker bee's perspective and more from that of their manager or line executive, worried about having to clean up messes caused by a possibly wrong cost-saving calculus. It's just like today, having to backstop your more incompetent employees' mistakes or omissions.
And maybe we'll also figure out the other piece of the AGI puzzle: Universal Basic Income, so everyone shares in this productivity boon if it happens, instead of it just creating a few more billionaires.
CoolioMcCool t1_j26ij1g wrote
As you hinted at, incompetent employees already make expensive mistakes. Once AI gets to a point where it makes less expensive mistakes, employers would be incentivised to replace the people with machines.
Driving is an easy example: humans crash, and AI will still get involved in crashes, but if it's involved in significantly fewer of them, then it would seem almost irresponsible to keep humans driving.
I think ultimately it just comes down to me having higher expectations of AI ability than others.
Have you played around with ChatGPT? I'd highly recommend it. It's pretty incredible, and a lot of its limitations are ones that have been intentionally placed on it, e.g. it doesn't have access to information from the last year or two, and there are certain topics it has been restricted from talking about (e.g. race issues and religion).