
DoktoroKiu t1_j24lxlt wrote

It may have passed the test, but I wouldn't take that as evidence it could represent you in court. Unless it is fundamentally different from the other large language models, it will confidently make things up: it is only really "motivated" to produce probable continuations of a given prompt.
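
To make the "probable continuations" point concrete, here's a minimal sketch (my own illustration, assuming the Hugging Face `transformers` library and the small GPT-2 checkpoint, neither of which is from the comment): the model only ranks candidate next tokens by probability, so a plausible-looking but nonexistent citation is just as easy for it to emit as a real one.

```python
# Minimal sketch: a language model scores next tokens by probability.
# There is no truth check anywhere in this loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The landmark 2015 study on this topic was authored by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every vocabulary token

# Probability distribution over the *next* token only.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

# Whatever comes out is just a high-probability continuation; the model
# will happily "cite" an author or paper that does not exist.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")
```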

The model Meta trained only on research papers (Galactica) was pulled within days of release after it started producing detailed fabrications, citing studies that sounded plausible but don't exist.

Now, this is by no means an unsolvable problem, but it's not one we can just assume will be solved. AI alignment is not an easy problem.
