RollingTater t1_j74h8q5 wrote

Eventually AI will be able to build a better legal defense than a human can, and in that case it would be unethical to give people human defense teams.

However, that day is not today. ChatGPT has no hard internal logic. You can trick it into doing bad math for example, or sometimes it writes code that is just wrong.

I'm no lawyer but I'm assuming legal defense requires some sort of presentation of factual evidence, logic, and verification of that evidence. Right now you can't guarantee the AI hasn't just spat out a huge document of gibberish that looks right but has a hidden logical flaw.
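
To make that concrete (a toy illustration, not actual ChatGPT output): here's a leap-year check that reads as perfectly reasonable and passes casual spot checks, but silently misses the century rule. This is the kind of hidden flaw that's hard to catch in a huge generated document:

```python
# Plausible-looking but subtly wrong: it forgets that years
# divisible by 400 (like 2000) ARE leap years.
def is_leap_year_plausible(year: int) -> bool:
    return year % 4 == 0 and year % 100 != 0

# The actual Gregorian rule for comparison.
def is_leap_year_correct(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Spot checks look fine...
print(is_leap_year_plausible(2024))  # True  (correct)
print(is_leap_year_plausible(1900))  # False (correct)
# ...until they don't:
print(is_leap_year_plausible(2000))  # False, but 2000 WAS a leap year
```

Every sample you happen to test can come back right while the logic is still broken, which is exactly the verification problem.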

4

henningknows t1_j74hll3 wrote

What makes you think an AI can make a better legal defense? You understand winning a court case is about persuading a jury just as much as having the law on your side.

5

RollingTater t1_j74hz4u wrote

Persuasion is the one thing ChatGPT can do really well. That's something that doesn't require any hard logic. And it's also why this tech is dangerously deceptive: it will be persuasively correct right up until it's not.

4

VectorB t1_j74o49s wrote

Wonderful, so our system is based not on rules or fairness, but on the quality of the charisma rolls your lawyer makes.

2

lycheedorito t1_j755rt6 wrote

Also, AI is trained on existing work by humans. It's not going to do better than what it was trained on.

1