Integrating AI Tools Into Law School Teaching

Robert MacKenzie and I have an article in Bloomberg Law, Law Schools Should Teach How to Integrate AI Tools Into Practice. It reads,

Now that artificial intelligence tools for lawyers are widely available, we decided to integrate them for a semester in our Entrepreneurship Clinic. We have some important takeaways for legal education in general and the transactional practice of law in particular.

First, employers and educators need to account for law students who are already using AI tools in their legal work and guide new lawyers on how to use such tools appropriately.

Second, different AI products lead to wildly different results. Just demonstrating this to law students is very valuable, as it dispels the notion that AI responses can replace their independent judgment.

Third, AI’s greatest value may lie in refining legal judgment in ways that help new and experienced lawyers alike.

Legal AI Prep

As we planned our syllabus over the summer, we decided to provide formal training in AI tools designed for lawyers. Early in the semester, a librarian gave us an overview of products from Bloomberg Law, Lexis, and Westlaw.

Before the training, we asked students how they were using AI in their legal work. Their responses ranged from “not at all” to “I start all of my case law research on ChatGPT.”

We were confident that our students would be better off operating somewhere between those extremes. Over the semester, we demonstrated how AI could enhance the speed and quality of legal work, as well as the dangers of outsourcing research and judgment to an AI tool.

AI Tool Differences

Perhaps the training’s most valuable takeaway was that each tool drew on different databases of materials and operated under different constraints. We designed simulations that required groups of students to complete the same transactional tasks (drafting, researching, benchmarking market terms, and crafting effective client emails) using various AI tools.

In one exercise, students acted as counsel to a small business owner. The “client” emailed them asking about standard-form contracts relevant to their industry and what pricing mechanics such contracts use.

For the research stage of the task, all teams located a standard-form construction contract, but only half of them found the industry-accepted standard form we had in mind. The other teams located that form later by modifying their search approach. The exercise helped demonstrate some limitations of AI tools.

For the client communication stage, some teams failed to answer the “client’s” questions. This isn’t something the AI tool could address on its own, and it reminded students to constantly refocus on the big picture in addition to individual tasks.

We found that AI tools built on widely available AI platforms such as ChatGPT produced the most responsive outputs and were the most forgiving of haphazard prompting. Certain specialized legal AI tools, by contrast, often failed to answer the prompt at all.

This is a double-edged sword. Although the generally available tools were more likely to generate an answer, they also were more prone to providing unreliable outputs. By contrast, the specialized tools hallucinated much less frequently but regularly stopped short of fulfilling a request if it required work beyond their guardrails.

Delegating Work

Our final takeaway was that AI was surprisingly good at issue-spotting and double-checking a lawyer’s work product. These uses can help both new and experienced lawyers.

We used the idea of delegation to make this point to our students. AI is fast, adaptable, and always available, so it’s a great resource. But you should only delegate work to it when you can verify its output.

In one exercise, students had to issue-spot risks and approaches after a “client” described a business opportunity. Students brainstormed in small groups. There was a lot of overlap, but some groups thought of items that others had not. We added the items to a collective list, relying on our years of practice to guide the students through gaps that remained.

Once we had a strong collective list of items, a team asked an AI product to issue-spot the same scenario. It generated most of the items in our list, some that weren’t relevant, and—most importantly—a couple that no one had raised.

This was a valuable lesson: AI had something to add to our analysis, but we had to exercise independent judgment to determine whether its contributions merited further thought.

Important Takeaways

We asked students for feedback on our use of AI throughout the semester. The most valuable feedback was that they wanted to develop their own legal judgment and learn how and why certain tasks are completed before relying on AI.

This echoes the transition from book-based legal research to electronic legal research. There was some value in searching the law reports in the library, but electronic legal research won out because it was so much more efficient. Yet even with this enhanced efficiency, a responsible lawyer must understand how to build a strong research plan and actually read the cases they cite.

In the clinic, our goal is student learning. For that reason, we liked to deploy the AI tools at the end of our exercises: You do the work and then interrogate it with the AI tools of your choice.

Such an approach ensures law students get the benefit of struggling through first repetitions of new tasks while allowing them to generate superior work product with fewer drafts. This process requires discipline. Legal education and legal employers need to clarify the line between AI as a tool versus AI as a crutch.

We learned a lot about how AI tools can help law students develop into good lawyers. As those tools are integrated into legal practice, lawyers of all experience levels should take a deliberate, self-aware approach to using them.

Using AI in Transactional Law Practice

Celia Bigoness and I published a column in Law360, What 2 Profs Noticed As Transactional Law Students Used AI (behind a paywall). It reads,

We teach entrepreneurship law clinics in which our students do transactional work on a wide range of matters, including business formation, contracts, intellectual property protection and regulatory compliance.

This past semester, we had access to generative artificial intelligence tools from Lexis, Westlaw and Bloomberg Law, as well as those that are more broadly available to the general public, including ChatGPT and Perplexity.

While we have not done a rigorous study of these tools, we have some early observations about how AI is changing how transactional lawyers do their jobs, particularly new transactional lawyers. Our own experience has been mostly positive, when these tools are used responsibly. But there are many caveats that experienced and new practitioners should be aware of.

Potential Applications

For a transactional lawyer, one tempting use case for legal AI tools is generating first drafts of transactional documents, such as contracts or company bylaws. Most lawyers love to start with a draft — any draft — rather than starting from scratch.

In our experience, though, using an AI-generated draft provides, at best, only an incremental benefit over starting with a precedent and modifying it oneself. Asking an AI tool to come up with a first draft is more like having a junior colleague take a stab at drafting the document, given the extensive review and editing that the draft will require.

There may be some value to this approach in the rare circumstance in which the lawyer does not have access to any relevant precedents, but the lawyer will need to be extremely diligent in reviewing the AI-produced draft.

One AI query that we have found to be more helpful has been to ask whether an existing draft or standard form is missing any important provisions. The AI tool may generate a list of a half-dozen suggested clauses to consider adding to the draft. For instance, it might suggest adding a force majeure clause if your draft does not contain one.

Again, this is not like waving a magic wand over your document: You need to understand what a force majeure clause is, whether it makes sense in your draft and what type of force majeure clause makes the most sense in it.

Also, the suggestions can range from unhelpful or redundant to downright useful. But it generally doesn’t take long to parse through them, and the process can be an efficient way of testing the strength of a document.

Bloomberg Law’s Clause Adviser tool has the very useful ability to evaluate whether a particular clause favors one side in a transaction — e.g., pro-buyer or pro-seller, pro-tenant or pro-landlord — drawing from thousands of real-life examples that can be found on the U.S. Securities and Exchange Commission’s Electronic Data Gathering, Analysis and Retrieval database.

A transactional lawyer can find comparable market analysis through other means — for example, Lexis’ and Westlaw’s annotated forms will often indicate provisions that may sway in favor of one party or another — but Bloomberg’s tool is unique in that it is based on actual, negotiated transaction documents on EDGAR.

Similarly, the legal databases’ AI tools can review whether a draft contract or set of bylaws complies with the relevant laws of state, federal and foreign jurisdictions. Again, this is helpful, but Lexis’ and Westlaw’s annotated forms already provide a lot of the same guidance.

One excellent use of legal AI tools is to summarize and compare documents. The feature is helpful when you are summarizing a single document, but it is especially useful across many documents, such as pulling all of the assignment clauses out of a set of agreements to understand how they differ from one another.

We used to do this in a more labor-intensive way — hours and hours of reading and cross-referencing — and getting almost instantaneous results can feel like AI magic. But again, junior lawyers need to understand that they are responsible for checking the AI work product for accuracy. So we’d consider any summary or comparison to be merely a starting point for the lawyer’s own analysis.

Based on our experience so far, we believe the current suite of legal AI tools may be most useful to transactional lawyers in developing general skills, like contract drafting and analysis. For example, we can design exercises for our law students in which we give the students a few precedents of a particular contract and ask them to compare the precedents and figure out what each one is missing.

Using both legal AI tools and conventional research, this type of exercise could help the students learn about how the particular provisions of a contract fit together. But we would be much more hesitant about using these AI tools to draft documents from scratch.

Challenges

Given these potential use cases and their limitations, in our view, the biggest challenge is to train junior transactional lawyers to approach these AI tools with a healthy skepticism.

The law students we work with are increasingly comfortable outsourcing aspects of their daily lives to ChatGPT — our students regularly ask ChatGPT to draft or summarize emails, or even to take on more nuanced tasks, such as proposing an itinerary for a post-bar exam trip. They understand that ChatGPT’s output can be a mixed bag when it comes to quality, and they seem to spend a fair amount of time double-checking the results.

But when a law student or junior lawyer is given an AI tool branded by a trusted source such as Bloomberg, Lexis or Westlaw — let alone a tool funded and hosted by that individual’s own law firm — they can become overly confident about that tool’s capabilities. We’ve seen that our students, unless specifically instructed by us, can be too deferential to the drafting and analysis produced by a legal AI tool.

So, whether in a law clinic or a law firm setting, transactional lawyers face the dual task of staying up to date on potential applications for these tools while not abdicating their professional responsibilities to their clients.

A related concern presented by these AI tools — and particularly by how law students and junior lawyers use them — is the disclosure of confidential client information.

Any law student who has taken a professional responsibility course or spent a semester representing clients in a law clinic understands that a lawyer cannot disclose confidential client information without getting the client’s informed consent. But that same law student may not realize that putting client information into a ChatGPT prompt, for example, may constitute disclosure.

The American Bar Association noted in July 2024 that the extent of this disclosure, and the corresponding requirement to obtain the client’s informed consent, will vary from one AI tool to the next, depending on each tool’s policies and practices.

Client Relationships

While we and our students were using AI this past year, so were our clients. Save for a few technology companies, most of our clients have no particular AI expertise. Accordingly, their AI usage is fairly representative of how small businesses around the U.S. are using AI.

The biggest challenge that we are encountering with our clients’ use of AI is the potential for interference with the attorney-client relationship. As business advisers, we build long-term relationships with clients, and the advice we provide is customized and iterative. For law students who are learning how to represent business clients, one key learning outcome of the clinic is the ability to tailor legal advice to a client’s particular circumstances.

For example, at the start of the semester, a new startup client founded by a team of graduate students might ask our team to advise on the appropriate equity allocations for the founding team. We may have several conversations with the clients, learning more about each founder’s role within the company and about the company’s future plans. We might learn that one founder is planning to leave the company after graduation, but the others are planning to stay. This fact would necessarily influence our recommendations about the founders’ equity allocations.

This past year, for the first time, we found that a few clients were, without telling us, feeding the legal advice we had provided into AI tools and responding to us with the AI-generated content.

To the law students’ frustration — and ours — the responses generated by the AI tools invariably took no account of the clients’ particular factual circumstances. So when our clients reacted to our advice, their reactions were completely disconnected from the relationship we had built up with them, and were often incongruous with the conversations we’d had before rendering our advice.

One question is whether this dynamic is unique to, or at least particularly acute in, a context where clients are receiving pro bono legal services. If our clients were paying for legal advice, would they invest more time in digesting and responding to that advice?

Perhaps. But with all of the recent discussion about how generative AI will change how lawyers work, we believe there has been insufficient attention paid to how generative AI is going to affect the lawyer-client relationship in the coming years.

Takeaways

This article only scratches the surface of our use of AI in the clinic, and of the opportunities and challenges it presents to transactional lawyers — and new transactional lawyers, in particular.

Our main takeaway after a semester is that legal AI tools are an incremental improvement on the sophisticated tools already available to lawyers. While some uses may prove transformative, many simply speed up legal tasks, reduce mistakes and provide a second set of virtual eyes on the drafting process. No doubt there are many uses we have not yet considered, but these early experiences may be illuminating.