Latest ChatGPT Model Passes the Lawyers’ Bar Exam, Scoring Among the Best

The latest model of OpenAI’s ChatGPT tool has passed the Bar Exam – a notoriously difficult test that lawyers must take as one of the prerequisites to practicing law.

Code-named “GPT-4,” the AI model, released yesterday and available to ChatGPT Plus subscribers, was put to the test – with no specific training – on simulated recent editions of the 2022-2023 Bar Exam, where it scored 298/400, placing it around the top 10% of candidates.


Unlike its predecessor, the wildly popular “GPT-3.5,” which scored among the bottom 10% and failed both the multiple-choice and essay portions of the Bar Exam, “GPT-4” was impressive in acing both of them.

“GPT-4 leaps past the power of earlier language models. The model’s ability not just to generate text, but to interpret it, heralds nothing short of a new age in the practice of law,”

legal scholar Pablo Arredondo, co-founder of Casetext, which collaborated with OpenAI to test GPT-4 on the Bar Exam, was quoted as saying by LawNext.

It is important to note that the model was also subjected to other standardized exams – such as the LSAT, SAT Evidence-Based Reading and Writing, SAT Math, and the Graduate Record Examination, among others – and it passed the majority of them.

According to OpenAI, the differences between “GPT-4” and “GPT-3.5” can be subtle but become clearer as the complexity of the tasks required of the system increases.

“GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5,”

the Microsoft-backed startup said.

Also, unlike the previous model, “GPT-4” can take and interpret image inputs, including screenshots – not just text.


Despite all these significant improvements, OpenAI admits there are still gaps in the integrity of its tool, especially in its accuracy.

It confirmed earlier criticisms from lawyers that ChatGPT “hallucinates” – in other words, makes up facts – and that its outputs are still biased, necessitating careful use and strict human supervision.

“GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021) and does not learn from its experience.

It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user.

And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.”

OpenAI said in part, warning:

“Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use case.”


Benjamin Ahikiiriza is a legal writer and digital communications and marketing specialist focusing on lawyers, law firms, and the wider legal sector.

Benjamin currently works as the Director of Content and Business Development at LegalReports.



