A New York judge has scolded a law firm for citing ChatGPT to support its application for “excessive” attorneys’ fees of up to $600 an hour.
The Cuddy Law Firm had invoked the predictive artificial intelligence tool in a declaration to the court over a case it won against the city’s education department. It said it had done so “to provide context to what a parent — having ChatGPT-4 open and available to them — might take away in researching whether to hire an attorney and who to accept or reject”.
When asked what would be a “reasonable hourly rate” to expect for an associate attorney with up to three years’ experience in a hearing over disabilities education, the large language model said it could “range anywhere from $200 to $500 an hour”, an attorney at the firm wrote.
He also pointed out that ChatGPT concluded that “lawyers who specialise in a certain type of law (such as special education law, in this case) may command higher rates” and that an attorney with “25 years of experience” might command an hourly rate of up to $1,200 “or even more”.
Judge Paul Engelmayer, who ultimately cut the fees to be awarded to Cuddy’s lawyers by more than half, called the firm’s reliance on the AI program “utterly and unusually unpersuasive”, adding that “barring a paradigm shift in the reliability of this tool, the [firm] is well advised to excise references to ChatGPT from future fee applications”.
He added that “as the firm should have appreciated, treating ChatGPT’s conclusions as a useful gauge of the reasonable billing rate for the work of a lawyer with a particular background carrying out a bespoke assignment for a client in a niche practice area was misbegotten at the jump”.
Engelmayer also found the number of hours billed by Cuddy to be “excessive or inadequately explained”.
A lawyer at Cuddy said “the underlying assertion was not about ChatGPT’s correctness on rates, but rather, what parents would expect as consumer[s]” and pointed to the difficulties of gathering evidence on fee structures for attorneys working on similar claims, which are rare.
Engelmayer’s rebuke is the latest in a series of admonishments to lawyers by US courts over the use of generative AI. Last year, a judge in the same Manhattan court sanctioned lawyers for submitting a brief, compiled with the help of the tool, that cited fictitious cases, while another attorney is under investigation for citing a “non-existent” authority generated by ChatGPT.
A lawyer for Michael Cohen, a former attorney for Donald Trump, apologised to a federal judge last month for citing three fictitious cases in a brief, blaming “a series of unfortunate mistakes”.
An appellate court in Texas is seeking to change its rules to force lawyers to declare whether a generative AI program was used in the drafting of a document and, if so, whether it “has been reviewed for accuracy and approved by a human”.