AI can be a helpful tool for the legal profession if used responsibly

By Tony Poland, LegalMatters Staff • The launch of ChatGPT last November presents new possibilities to the legal profession, but also raises serious concerns for lawyers who fail to exercise due diligence when incorporating artificial intelligence (AI) into their practices, says Toronto employment lawyer Ellen Low.

ChatGPT is a tool that allows users to have humanlike conversations with a chatbot. It can respond to questions and compose written content such as emails, social media posts, blog posts and even legal submissions.

“I recently spoke at the Ontario Bar Association’s 21st Annual Current Issues In Employment Law conference about the rise and use of AI in employment law,” says Low, principal of Ellen Low & Co. “There is an abundance of interesting ways in which it might be usable in a legal sense, such as interfacing with potential clients, researching citations, or reviewing websites. I see it as a helpful tool, but I can’t see it replacing lawyers actually drafting and reviewing their own pleadings or anything else that is being submitted to a court or tribunal.”

Of course, AI is not new to the legal profession, she says, adding that legal research can be time-consuming and newer tools can analyze data in a fraction of the time it would take the average person.

Trying to assess the limitations and liabilities

“I use a really cool predictive analytics tool to help me do case law research on potential notice periods, for example,” Low tells LegalMattersCanada.ca. “But obviously, with the launch of ChatGPT, we are entering an entirely new realm of possibility. All sorts of different organizations, including legal firms, are now grappling with how to use AI while trying to assess the limitations and liabilities of doing so.”

Many agree that technology such as ChatGPT can help improve access to justice by providing answers for the average person seeking legal advice. However, Low says caution is needed, pointing to recent concerns raised by government regulators.

“We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else,” Lina Khan, chair of the United States Federal Trade Commission, told Congress. “We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about.”

While AI can produce a well-written argument filled with citations, there is a real danger that it could be a work of fiction. Last month, a U.S. judge sanctioned two New York lawyers who submitted a legal brief that included six fictitious case citations generated by ChatGPT.

“AI created submissions using citations to cases that didn’t exist,” Low says. “That is really problematic because in a common law system such as we have in Canada, part of our law is based on this idea that past rulings influence present or future decisions.”

‘Abandoned their responsibilities’

According to AP News, Judge P. Kevin Castel said the lawyers “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT.”

“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” he wrote. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

Low says the sanction should not come as a surprise.

“In a profession that relies on the exercise of good judgment and discretion, I can see why the New York court took these lawyers and their firm to task for effectively neglecting that entire duty,” she says. “Citing nonexistent cases is obviously extremely problematic.

“It is possible that someone can refer to a case that other lawyers just simply cannot find. It may be a resource they don’t have or it is behind a paywall. But it is possible that it exists. Here it was discovered that the citations were fabricated,” Low adds. “Of course, the danger becomes what if false citations go unchallenged?”

Courts have started to react

The courts have also started to react in Canada, she says. In Manitoba, lawyers must now disclose when they use artificial intelligence to prepare court documents in the Court of King’s Bench.

CBC News reported that Manitoba Chief Justice Glenn Joyal issued a practice direction in June, acknowledging that AI might be used in future court submissions. 

“While it is impossible at this time to completely and accurately predict how artificial intelligence may develop or how to exactly define the responsible use of artificial intelligence in court cases, there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence,” the order states. 

Low says it shouldn’t be long before other jurisdictions issue similar practice directions.

“I suspect Ontario will follow suit relatively quickly. The Ontario Superior Court comes out with practice directions pretty regularly,” she says. “With Manitoba taking the lead, it will be interesting to see what actually has to be disclosed about the use of AI in court filings.”

Danger not limited to court documents

The potential danger is not limited to documents presented in court, Low says. In employment law, for example, a lawyer could use AI to research notice periods in a given situation and conclude that an employee is entitled to, say, 24 months of common law notice. If the citations behind that conclusion are fake, she says, potential clients are misled and left with unreasonable expectations.

She says AI has a great upside “especially on the business side of the legal profession.”

“But I am concerned about issues such as security and confidentiality,” Low says. “Personally, I would be reluctant to put confidential information or even very specific facts into any sort of ChatGPT.”

Technology is constantly evolving and can be a boon to the legal profession but caution is needed, she says.

“Explore and use AI but absolutely check those sources to ensure that they actually exist and are verifiable,” Low says. “It all comes back to the rules of professional conduct. As a lawyer, you are responsible for all work delegated to employees. You must directly and adequately supervise them. That includes anything created using artificial intelligence.

“The lawyer is responsible for any delegated work, including a thorough review of any work generated with AI,” she adds. “At the end of the day, the lawyer is the ultimate decision maker and the one who holds the bag with respect to any submissions that are made.”

She cautions all lawyers using AI to make sure those submissions are accurate and real.