Incorporating AI Tools Into Your Legal Practice

Image Generated by ChatGPT

I published Advice for Incorporating AI Tools Into Your Legal Practice along with Celia Bigoness and Robert MacKenzie in the National Law Review. It reads,

We have been speaking with many lawyers and law students about using generative artificial intelligence (AI) tools in their legal practice. We are struck by the fact that many of them have not been experimenting much, if at all, with the tools that are available to them – although many acknowledge that their clients are increasingly integrating generative AI into their businesses. We have been integrating a lot of these tools into our own professional lives, and here are some tips to help lawyers and law students get comfortable with AI tools that can help them, in big ways and small, with their jobs.

Put it on Your Home Screen

Put your preferred AI app (ChatGPT, Claude, etc.) onto your phone’s home screen and be sure to allow it to access your phone’s microphone. You will be surprised by how often you get the urge to ask the app slightly complex questions that a basic web search would not answer. (Hat tip to one of our kids for this idea.)

Start with the Familiar

Trusting the output of an AI tool without having the ability to verify its accuracy is okay if you are choosing a movie to stream tonight. It is not okay if you are using it to provide legal advice to a client. To get comfortable with AI tools, start by using them for tasks that you have experience executing and reviewing. One simple way to start: explain a familiar task to the AI tool and ask it for guidance on how you can use it to complete the task.

As you use AI tools in newer areas, you want to review the cited sources in the AI output to confirm that you agree with the AI model’s interpretation of them. Sometimes the citations are plainly wrong, sometimes the model misinterprets the documents it cites, and sometimes those documents are out of date.

When the stakes are greater than your personal entertainment, you need to do a lot of due diligence before you adopt an AI tool’s findings.

Use Multiple Tools

Different AI models are built on different training documents and have different algorithms that they apply to those documents. There is nothing more edifying than running the same queries through a few general AI models and a few specialized ones (like those geared specifically to lawyers). You will see a range of answers, from non-answers to highly specialized and accurate ones. You will start to become a more sophisticated consumer of the different models, understanding each of their strengths and limitations.

Tell It Your Needs

Most AI tools will tailor their responses to your preferences. In some cases, we created a prompt to instruct the AI tool that responses should be of the type that a lawyer would like to receive—providing sources, explaining its analytical steps, and noting what it did and did not consider. The AI tool responded that it would be precise, answer “above a lay level,” and “be candid about uncertainty.” This has improved its answers and had the side effect of reducing sycophantic language (“That is a very good question!”).
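For readers comfortable with a little scripting, standing instructions like these can also be supplied programmatically as a “system” message that is prepended to every query. The sketch below assumes Python and a chat-style API that accepts role-tagged messages; the instruction text is illustrative, not the exact prompt we used.

```python
# Sketch: lawyer-oriented standing instructions as a "system" message.
# The wording below is illustrative, not the exact prompt we used.
LAWYER_INSTRUCTIONS = (
    "Respond as you would to a lawyer: cite your sources, explain your "
    "analytical steps, note what you did and did not consider, and be "
    "candid about uncertainty. Avoid flattery."
)

def build_messages(question: str) -> list[dict]:
    """Prepend the standing instructions to a user's question."""
    return [
        {"role": "system", "content": LAWYER_INSTRUCTIONS},
        {"role": "user", "content": question},
    ]

# The returned list would then be passed to a chat-completion API call.
```

Most consumer AI apps expose the same idea through a settings page (often labeled “custom instructions”), so no scripting is required to get the benefit.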

Use it for Your Pain Points

We all have some routine tasks that we find irritating. They are usually the ones we procrastinate on. For some, it is preparing slide decks. For others, it is drafting certain kinds of emails (unpaid bills, anyone?). Just getting a first draft from the AI tool often helps you finish the work. And for some tasks, like preparing presentation slide decks, it can save you hours and hours of your time.

We have experimented with both general AI tools and those that specialize in slide deck preparation. Each has pros and cons, but they are generally very helpful. In all these cases, the time savings come largely from the fact that the tool is speeding up a task you are capable of doing yourself: you can quickly verify and edit the output.

However, if you are asking the tool to analyze a topic with which you are unfamiliar, or to perform a task that you’ve never done before—if you’re learning from scratch—you will still need to go through the painstaking process of checking sources and confirming the output.

Play in Vaults

One game-changing use of AI tools is to upload documents to a secure location in the cloud (sometimes referred to as a “vault”) and hone the tool’s focus on only those documents. A transactional lawyer can upload hundreds of documents and quickly identify commonly appearing terms for comparison or inconsistencies among them. A litigator can upload thousands of pages of litigation documents and create a draft chronology of events. Again, the output cannot be taken at face value due to the functional limitations of these tools, but it can provide an extraordinary first draft that can then be verified and edited to the form you prefer. Just be sure you have verified the vault’s security in advance.

Use it as a Second Set of Eyes

This is a great and scalable tip for those who are skeptical of AI tools. After you have completed a written task, ask an AI tool to critique it for clarity, coherence, and accuracy. Even an experienced attorney will get at least a couple of suggestions that ring true. And of course, you can reject any suggestions you disagree with. It is an easy way to see whether an AI tool can provide you with real value with very little investment of your time.

Along the same lines, for more advanced experimentation, you can use the AI tool to issue-spot and offer counterarguments to your work to complement your own analysis. Again, this is very low stakes because you can reject anything you find wrong-headed or irrelevant. Of course, you need to be careful about sharing privileged information (see vault security above).

Preserve Confidentiality

We have spent more time than many of you would like looking at the Terms of Use of the AI tools we have used. Except for certain tools that are developed for legal work in particular, we believe that the attorney-client privilege can be compromised when using many AI tools because of how the tools use your input information.

We have had students and clients who wanted to use AI transcription tools to compile meeting notes. We have advised them that confidential information can be compromised by such tools and that we do not use them in our practice, at least at this time.

If you begin to use a tool with client-identifying information, be sure to confirm that you are complying with your professional responsibilities to preserve client confidences.

Don’t Get Lazy!

We all read the headlines about lawyers who use AI to draft legal documents and do not check to confirm that the work product is correct. Those lawyers rightfully face professional discipline and reputational consequences. We can all say that we would never do that, but a new term has arisen to describe an unthinking reliance on AI: “cognitive offloading.” This offloading occurs when we reduce our own deep research and thinking because of an unhealthy reliance on AI tools.

Every time we complete a substantive task with AI, we need to ask if we have thought through the task as fully as we would have if we did it without the tool. If the answer is no, we need to dig into it again. Cognitive offloading is a particular concern for law students and younger generations of lawyers, who have grown up with technology and tend to be more comfortable using AI tools – and therefore more susceptible to this unthinking reliance.

Conclusion

From our discussions with lawyers in private practice, it is clear that AI tools are being used in the ways we have mentioned above. No doubt, more specialized tools are in development. It’s clear that AI will transform the practice of law in the coming years. Those who are new to AI can use these pointers to begin exploring how AI works. We think they can amplify their effectiveness to the benefit of their clients and themselves, so long as the risks that AI tools pose are thoughtfully addressed.


Using AI in Transactional Law Practice

© Romain Vignes CC BY-NC-SA 3.0

Celia Bigoness and I published a column in Law360, What 2 Profs Noticed As Transactional Law Students Used AI (behind a paywall). It reads,

We teach entrepreneurship law clinics in which our students do transactional work on a wide range of matters, including business formation, contracts, intellectual property protection and regulatory compliance.

This past semester, we had access to generative artificial intelligence tools from Lexis, Westlaw and Bloomberg Law, as well as those that are more broadly available to the general public, including ChatGPT and Perplexity.

While we have not done a rigorous study of these tools, we have some early observations about how AI is changing how transactional lawyers do their jobs, particularly new transactional lawyers. Our own experience has been mostly positive, when these tools are used responsibly. But there are many caveats that experienced and new practitioners should be aware of.

Potential Applications

For a transactional lawyer, one tempting potential use case for legal AI tools is to provide first drafts of transactional documents, such as contracts or company bylaws. Most lawyers love to start with a draft — any draft — rather than starting from scratch.

In our experience, though, using an AI-generated draft provides, at best, only an incremental benefit over starting with a precedent and modifying it oneself. Asking an AI tool to come up with a first draft is more like having a junior colleague take a stab at drafting the document, given the extensive review and editing that the draft will require.

There may be some value to this approach in the rare circumstance in which the lawyer does not have access to any relevant precedents, but the lawyer will need to be extremely diligent in reviewing the AI-produced draft.

One AI query that we have found to be more helpful has been to ask whether an existing draft or standard form is missing any important provisions. The AI tool may generate a list of a half-dozen suggested clauses to consider adding to the draft. For instance, it might suggest adding a force majeure clause if your draft does not contain one.

Again, this is not like waving a magic wand over your document: You need to understand what a force majeure clause is, whether it makes sense in your draft and what type of force majeure clause makes the most sense in it.

Also, the suggestions can range from unhelpful or redundant to downright useful. But it generally doesn’t take long to parse through them, and the process can be an efficient way of testing the strength of a document.

Bloomberg Law’s Clause Adviser tool has the very useful ability to evaluate whether a particular clause favors one side in a transaction — e.g., pro-buyer or seller, or pro-tenant or landlord — drawing from thousands of real-life examples that can be found on the U.S. Securities and Exchange Commission’s Electronic Data Gathering, Analysis and Retrieval database.

A transactional lawyer can find comparable market analysis elsewhere — for example, Lexis’ and Westlaw’s annotated forms will often indicate provisions that lean in favor of one party or another — but Bloomberg’s tool is unique in that it is based on actual, negotiated transaction documents on EDGAR.

Similarly, the legal databases’ AI tools can review whether a draft contract or set of bylaws complies with the relevant laws of state, federal and foreign jurisdictions. Again, this is helpful, but Lexis’ and Westlaw’s annotated forms already provide much of the same guidance.

One excellent use of legal AI tools is to summarize and compare documents. This feature is helpful when you are summarizing a single document, but it can be really useful across many documents at once — for example, pulling all of the assignment clauses out of a set of agreements to understand how they differ from each other.

We used to do this in a more labor-intensive way — hours and hours of reading and cross-referencing — and getting almost instantaneous results can feel like AI magic. But again, junior lawyers need to understand that they are responsible for checking the AI work product for accuracy. So we’d consider any summary or comparison to be merely a starting point for the lawyer’s own analysis.

Based on our experience so far, we believe the current suite of legal AI tools may be most useful to transactional lawyers in developing general skills, like contract drafting and analysis. For example, we can design exercises for our law students in which we give the students a few precedents of a particular contract, and ask them to compare the precedents and identify what each is missing.

Using both legal AI tools and conventional research, this type of exercise could help the students learn about how the particular provisions of a contract fit together. But we would be much more hesitant about using these AI tools to draft documents from scratch.

Challenges

Given these potential use cases and their limitations, in our view, the biggest challenge is to train junior transactional lawyers to approach these AI tools with a healthy skepticism.

The law students we work with are increasingly comfortable outsourcing aspects of their daily lives to ChatGPT — our students regularly ask ChatGPT to draft or summarize emails, or even to take on more nuanced tasks, such as proposing an itinerary for a post-bar exam trip. They understand that ChatGPT’s output can be a mixed bag when it comes to quality, and they seem to spend a fair amount of time double-checking the results.

But when a law student or junior lawyer is given an AI tool branded by a trusted source such as Bloomberg, Lexis or Westlaw — let alone a tool funded and hosted by that individual’s own law firm — they can become overly confident about that tool’s capabilities. We’ve seen that our students, unless specifically instructed by us, can be too deferential to the drafting and analysis produced by a legal AI tool.

So, whether in a law clinic or a law firm setting, transactional lawyers will face the dual task of staying up to date on potential applications for these tools without abdicating their professional responsibilities to their clients.

A related concern presented by these AI tools — and particularly by how law students and junior lawyers use them — is the disclosure of confidential client information.

Any law student who has taken a professional responsibility course or spent a semester representing clients in a law clinic understands that a lawyer cannot disclose confidential client information without getting the client’s informed consent. But that same law student may not realize that putting client information into a ChatGPT prompt, for example, may constitute disclosure.

The American Bar Association noted in July 2024 that the extent of this disclosure, and the corresponding requirement to obtain the client’s informed consent, will vary from one AI tool to the next, depending on each tool’s policies and practices.

Client Relationships

While we and our students were using AI this past year, so were our clients. Save for a few technology companies, most of our clients have no particular AI expertise. Accordingly, their AI usage is fairly representative of how small businesses around the U.S. are using AI.

The biggest challenge that we are encountering with our clients’ use of AI is the potential for interference with the attorney-client relationship. As business advisers, we build long-term relationships with clients, and the advice we provide is customized and iterative. For law students who are learning how to represent business clients, one key learning outcome of the clinic is the skill to curate legal advice for a client’s particular circumstances.

For example, at the start of the semester, a new startup client founded by a team of graduate students might ask our team to advise on the appropriate equity allocations for the founding team. We may have several conversations with the clients, learning more about each founder’s role within the company and about the company’s future plans. We might learn that one founder is planning to leave the company after graduation, but the others are planning to stay. This fact would necessarily influence our recommendations about the founders’ equity allocations.

This past year, for the first time, we found that a few clients were — without telling us — feeding legal advice that we had provided to them into AI tools and responding to us, again without telling us, with the AI-generated content.

To the law students’ frustration — and ours — the responses generated by the AI tools invariably took no account of the clients’ particular factual circumstances. So when our clients reacted to our advice, their reactions were completely disconnected from the relationship we had built up with them, and were often incongruous with the conversations we’d had before rendering our advice.

One question is whether this dynamic is unique to, or at least particularly acute in, a context where clients are receiving pro bono legal services. If our clients were paying for legal advice, would they invest more time in digesting and responding to that advice?

Perhaps. But with all of the recent discussion about how generative AI will change how lawyers work, we believe there has been insufficient attention paid to how generative AI is going to affect the lawyer-client relationship in the coming years.

Takeaways

This article only scratches the surface of our use of AI in the clinic, and of the opportunities and challenges it presents to transactional lawyers — and to new transactional lawyers, in particular.

Our main takeaway after a semester is that legal AI tools are an incremental improvement on the sophisticated tools already available to lawyers. While some uses may be transformative, many just speed up legal tasks, reduce mistakes and provide a second set of virtual eyes to the drafting process. No doubt there are many uses we have not yet considered, but these early experiences may be illuminating.