The real future of AI lawyers is doing research and writing drafts for human lawyers. Small firms will be able to provide better service, and Big Law will save millions on staffing.
Somebody mentioned public defenders, and it will help there too, I'm sure. But government will also use it as an excuse to keep underfunding that service.
I'm really not looking forward to the inevitable: once AI starts doing a lot of this low-level, foundational work, people are going to treat it as a convenient scapegoat for anything that goes wrong in their projects, and malicious actors will use it to shift liability. "The AI got the analysis wrong" or "The AI missed this parameter" or whatnot. At some point there will be a court case where real people were harmed, the AI gets blamed, and the people actually responsible are let off the hook.
AI needs accountability, and that is woefully missing right now. We already have half the country wanting AIs to be racist and tell horrible jokes on command, and getting mad when the AI refuses. I don't see how our society can handle the AI implementations we're witnessing. It's not going to go well.
There are mistakes now, and low-level staffers get blamed for them. If the mistake is bad enough, the staffer gets fired and replaced by someone of (probably) equal ability.
So the real question will be: Are the AI mistakes worse and/or more numerous than the human mistakes?