The integration of artificial intelligence into various sectors has mostly been met with optimism, but for rapper Pras Michél of the Fugees, the technology may have played a part in his undoing in the courtroom. Michél is challenging his April conviction on conspiracy charges, arguing that his former attorney’s use of AI to draft the closing argument compromised his defense.
The case offers an early look at AI’s largely untested role in the legal system. Even as more firms move to adopt the technology, questions about its ethical implications, its accuracy and the potential for misuse abound.
Michél, who was convicted in federal court in Washington of conspiracy to defraud the U.S., filed a motion for a new trial through his current legal team. Politico first reported that the rapper’s former lawyer, David Kenner, had used EyeLevel.AI’s generative AI platform to produce his closing argument. Kenner had previously called the technology an “absolute game changer for complex litigation.”
The motion contends that Kenner not only relied on inexperienced lawyers for trial preparation but also failed to acquaint himself with the specifics of the statutes at issue in the case. EyeLevel.AI specializes in adding GPT (generative pre-trained transformer) capabilities to customer service and legal applications. Yet Michél’s team argues that this use of AI resulted in “frivolous arguments, conflated schemes and a failure to underline the government’s case weaknesses.”
While the American Bar Association does not yet have guidelines for using AI in legal practice, experts are scrutinizing the risks. Lawyers have already been sanctioned after AI tools like ChatGPT fabricated legal cases that were then cited in court filings. Such errors demonstrate the potential pitfalls of trusting machine-generated content in sensitive, high-stakes settings like a criminal trial.
Legal analysts say this case could set an important precedent. “AI is making quick inroads into the legal profession. However, the question of reliability and ethics remains. Firms have to decide if speed and efficiency are worth the potential risk,” said Sharon Nelson, president of Sensei Enterprises, a tech consultancy specializing in digital forensics and cybersecurity.
Michél was accused of taking approximately $88 million to introduce Malaysian financier Jho Low to former U.S. presidents Barack Obama and Donald Trump. While awaiting sentencing, he is out on bond and recently performed with the Fugees during Lauryn Hill’s anniversary tour for “The Miseducation of Lauryn Hill.”
In a press release, Neil Katz, COO of EyeLevel.AI, disputed the notion that Kenner had a financial stake in the company. Katz said the technology was intended to help human lawyers craft arguments, not to replace them. “The idea is not to take what the computer outputs and walk it straight into a courtroom. That’s not what happened here,” he said.
Michél’s new lawyer, Peter Zeidenberg, disagrees, saying that both the AI program and Kenner failed to serve the defense. “The closing argument was deficient, unhelpful and a missed opportunity that prejudiced the defense,” Zeidenberg said.
John Villasenor, a professor of engineering and public policy at UCLA, added a word of caution: “Even as products improve, attorneys who use AI should make sure they very carefully fact-check anything they are going to use.”
As for Michél, the outcome of his motion could not only affect his future but also set a precedent for the broader use, or misuse, of AI in the legal landscape. It is an unfolding drama that spans the entertainment and political spheres and has sparked a discussion that could reshape the future of legal practice in the U.S.
One thing is clear: as AI’s capabilities grow, so do the complexities and controversies surrounding its application.
The Justice Department declined to comment, and it is not yet clear when a judge will rule on Michél’s motion for a new trial. The debate over AI in legal settings is far from settled, but the need for thorough vetting, and perhaps regulation, is not. Lawyers should not blindly trust AI; they should treat it as a tool that can aid, but not replace, human expertise. As the Michél case demonstrates, the consequences of doing otherwise can be dire.