A Summary of The Use of Artificial Intelligence Products and Services as Reflected in the NJSBA Task Force on Artificial Intelligence Report.

Artificial intelligence (AI) is technology that replicates human intelligence and problem-solving capabilities. In the last year, the availability of AI products has increased significantly, necessitating action to protect legal professionals and our clients. In late 2023, the NJSBA created a Task Force on Artificial Intelligence and the Law, composed of 27 attorneys and industry experts, to provide practical guidance to legal professionals. In May 2024, the Task Force released its 36-page report focusing on the applications and implications of AI within the legal profession. The report is divided into substantive sections that correspond to the workgroups asked to deliver recommendations: Artificial Intelligence and Social Justice Concerns, Artificial Intelligence Products and Services, Education and CLE Programming, and Ethics and Regulatory Issues.

There are AI tools tailored to the legal profession and those available for public use (e.g., ChatGPT, Claude or Gemini). Lawyers should use only AI tools designed for the practice of law when working with client data, discovery or other sensitive information (with public tools, never use client-specific data, only hypotheticals). Public-use tools generally gather data from the internet and other sources and pool it together to produce a response to a prompt. In addition to the security concerns, because the technology is designed to always produce a result, its output is subject to “hallucinations.” Hallucinations occur when the system produces a response that appears plausible but has no factual basis. These inaccuracies carry not only practical consequences but also potential ethical consequences. The Task Force suggests that public tools are best used for inspiration and non-legal tasks.

AI tools that are specifically created for legal professionals typically offer enhanced privacy controls; however, evaluating their privacy effectiveness remains challenging due to the lack of standardized regulations. Many of these tools require the attorney to upload client information or provide the company with access to it. Before committing to a service, the Task Force suggests determining the company’s reputation and longevity by asking at least the following questions:

  • Does the company have a proven track record of providing reliable, secure and compliant solutions specifically tailored to the legal sector?
  • Are there any case studies, testimonials or references from other legal professionals who have successfully used the tool? Reaching out to colleagues or requesting references from the provider can yield valuable insights.
  • If the company is relatively unknown, who are its owners, and are there any potential ethical concerns related to the ownership structure? Researching the company’s website, press releases and financial disclosures can help uncover this information.

Once these questions are satisfactorily addressed, the Task Force provides ten additional areas to consider, including data localization, privacy standards and data sources. These “Selection Criteria” allow lawyers to make informed decisions about whether an AI product is fit for their firm and clientele. The Task Force provides practical, actionable guidance for legal professionals on integrating AI into their practices. By employing a structured evaluation framework, legal professionals can ensure that ethical standards and privacy concerns are prioritized, enabling the safe and effective use of AI as the tools continue to evolve.


