A fresh set of benchmarks could help specialists better understand artificial intelligence.
Artificial intelligence (AI) models can match human performance on law exams when answering multiple-choice, short-answer, and essay questions [1], yet they struggle with real-world legal tasks. Some lawyers have learned this the hard way: they have been fined for filing AI-generated court briefs that misrepresented principles of law and cited non-existent cases.
Author: Chaudhri, principal scientist at Knowledge Systems Research in Sunnyvale, California.