Using AI in Transactional Law Practice

The Role of AI in Legal Decision-Making: Opportunities and Ethical Concerns

Celia Bigoness and I published a column in Law360, "What 2 Profs Noticed As Transactional Law Students Used AI" (behind a paywall). It reads:

We teach entrepreneurship law clinics in which our students do transactional work on a wide range of matters, including business formation, contracts, intellectual property protection and regulatory compliance.

This past semester, we had access to generative artificial intelligence tools from Lexis, Westlaw and Bloomberg Law, as well as those that are more broadly available to the general public, including ChatGPT and Perplexity.

While we have not done a rigorous study of these tools, we have some early observations about how AI is changing how transactional lawyers do their jobs, particularly new transactional lawyers. Our own experience has been mostly positive, when these tools are used responsibly. But there are many caveats that experienced and new practitioners should be aware of.

Potential Applications

For a transactional lawyer, one tempting potential use case for legal AI tools is to provide first drafts of transactional documents, such as contracts or company bylaws. Most lawyers love to start with a draft — any draft — rather than starting from scratch.

In our experience, though, using an AI-generated draft provides, at best, only an incremental benefit over starting with a precedent and modifying it oneself. Asking an AI tool to come up with a first draft is more like having a junior colleague take a stab at drafting the document, given the extensive review and editing that the draft will require.

There may be some value to this approach in the rare circumstance in which the lawyer does not have access to any relevant precedents, but the lawyer will need to be extremely diligent in reviewing the AI-produced draft.

One AI query that we have found to be more helpful has been to ask whether an existing draft or standard form is missing any important provisions. The AI tool may generate a list of a half-dozen suggested clauses to consider adding to the draft. For instance, it might suggest adding a force majeure clause if your draft does not contain one.

Again, this is not like waving a magic wand over your document: You need to understand what a force majeure clause is, whether it makes sense in your draft and what type of force majeure clause makes the most sense in it.

Also, the suggestions can range from unhelpful to redundant to genuinely useful. But it generally doesn't take long to parse through them, and the process can be an efficient way of testing the strength of a document.

Bloomberg Law's Clause Adviser tool has the very useful ability to evaluate whether a particular clause favors one side in a transaction — e.g., pro-buyer or pro-seller, or pro-tenant or pro-landlord — drawing from thousands of real-life examples found in the U.S. Securities and Exchange Commission's Electronic Data Gathering, Analysis and Retrieval database.

A transactional lawyer can find comparable market analysis elsewhere — for example, Lexis' and Westlaw's annotated forms will often flag provisions that lean in favor of one party or the other — but Bloomberg's tool is unique in that it is based on actual, negotiated transaction documents on EDGAR.

Similarly, the legal databases' AI tools can review whether a draft contract or set of bylaws complies with relevant law in state, federal and foreign jurisdictions. Again, this is helpful, but Lexis' and Westlaw's annotated forms already provide much of the same guidance.

One excellent use of legal AI tools is to summarize and compare documents. This feature is helpful when summarizing a single document, but it can be even more useful across a set of documents, such as pulling all of the assignment clauses out of a group of agreements to understand how they differ from each other.

We used to do this in a more labor-intensive way — hours and hours of reading and cross-referencing — and getting almost instantaneous results can feel like AI magic. But again, junior lawyers need to understand that they are responsible for checking the AI work product for accuracy. So we’d consider any summary or comparison to be merely a starting point for the lawyer’s own analysis.

Based on our experience so far, we believe the current suite of legal AI tools may be most useful to transactional lawyers in developing general skills, such as contract drafting and analysis. For example, we can design exercises for our law students in which we give the students a few precedents of a particular contract, and ask them to compare the precedents and figure out what each one is missing.

Using both legal AI tools and conventional research, this type of exercise could help the students learn about how the particular provisions of a contract fit together. But we would be much more hesitant about using these AI tools to draft documents from scratch.

Challenges

Given these potential use cases and their limitations, in our view, the biggest challenge is to train junior transactional lawyers to approach these AI tools with a healthy skepticism.

The law students we work with are increasingly comfortable outsourcing aspects of their daily lives to ChatGPT — our students regularly ask ChatGPT to draft or summarize emails, or even to take on more nuanced tasks, such as proposing an itinerary for a post-bar exam trip. They understand that ChatGPT’s output can be a mixed bag when it comes to quality, and they seem to spend a fair amount of time double-checking the results.

But when a law student or junior lawyer is given an AI tool branded by a trusted source such as Bloomberg, Lexis or Westlaw — let alone a tool funded and hosted by that individual’s own law firm — they can become overly confident about that tool’s capabilities. We’ve seen that our students, unless specifically instructed by us, can be too deferential to the drafting and analysis produced by a legal AI tool.

So, whether in a law clinic or a law firm setting, transactional lawyers will face the twin tasks of staying up to date on potential applications for these tools and upholding their professional responsibilities to their clients.

A related concern presented by these AI tools — and particularly by how law students and junior lawyers use them — is the disclosure of confidential client information.

Any law student who has taken a professional responsibility course or spent a semester representing clients in a law clinic understands that a lawyer cannot disclose confidential client information without getting the client’s informed consent. But that same law student may not realize that putting client information into a ChatGPT prompt, for example, may constitute disclosure.

The American Bar Association noted in July 2024 that the extent of this disclosure, and the corresponding requirement to obtain the client’s informed consent, will vary from one AI tool to the next, depending on each tool’s policies and practices.

Client Relationships

While we and our students were using AI this past year, so were our clients. Save for a few technology companies, most of our clients have no particular AI expertise. Accordingly, their AI usage is fairly representative of how small businesses around the U.S. are using AI.

The biggest challenge that we are encountering with our clients' use of AI is the potential for interference with the attorney-client relationship. As business advisers, we build long-term relationships with clients, and the advice we provide is customized and iterative. For law students who are learning how to represent business clients, one key learning outcome of the clinic is the ability to tailor legal advice to a client's particular circumstances.

For example, at the start of the semester, a new startup client founded by a team of graduate students might ask our team to advise on the appropriate equity allocations for the founding team. We may have several conversations with the clients, learning more about each founder’s role within the company and about the company’s future plans. We might learn that one founder is planning to leave the company after graduation, but the others are planning to stay. This fact would necessarily influence our recommendations about the founders’ equity allocations.

This past year, for the first time, we found that a few clients were — without telling us — feeding legal advice that we had provided to them into AI tools and responding to us, again without telling us, with the AI-generated content.

To the law students’ frustration — and ours — the responses generated by the AI tools invariably took no account of the clients’ particular factual circumstances. So when our clients reacted to our advice, their reactions were completely disconnected from the relationship we had built up with them, and were often incongruous with the conversations we’d had before rendering our advice.

One question is whether this dynamic is unique to, or at least particularly acute in, a context where clients are receiving pro bono legal services. If our clients were paying for legal advice, would they invest more time in digesting and responding to that advice?

Perhaps. But with all of the recent discussion about how generative AI will change how lawyers work, we believe there has been insufficient attention paid to how generative AI is going to affect the lawyer-client relationship in the coming years.

Takeaways

This article only scratches the surface of our use of AI in the clinic, and of the opportunities and challenges it presents to transactional lawyers — and new transactional lawyers, in particular.

Our main takeaway after a semester is that legal AI tools are an incremental improvement on the sophisticated tools already available to lawyers. While some uses may prove transformative, many simply speed up legal tasks, reduce mistakes and provide a second set of virtual eyes during the drafting process. No doubt there are many uses we have not yet considered, but these early experiences may be illuminating.

Regulating Rationally for Consumers

Alan Schwartz has posted "Regulating for Rationality" to SSRN. The abstract reads:

Traditional consumer protection law responds with various forms of disclosure to market imperfections that are the consequence of consumers being imperfectly informed or unsophisticated. This regulation assumes that consumers can rationally act on the information that it is disclosure’s goal to produce. Experimental results in psychology and behavioral economics question this rationality premise. The numerous reasoning defects consumers exhibit in the experiments would vitiate disclosure solutions if those defects also presented in markets. To assume that consumers behave as badly in markets as they do in the lab implies new regulatory responses. This Essay sets out the novel and difficult challenges that such “regulating for rationality” — intervening to cure or to overcome cognitive error — poses for regulators. Much of the novelty exists because the contracting choices of rational and irrational consumers often are observationally equivalent: both consumer types prefer the same contracts. Hence, the regulator seldom can infer from contract terms themselves that reasoning errors produced those terms. Rather, the regulator needs a theory of cognitive function that would permit him to predict when actual consumers would make the mistakes that laboratory subjects make: that is, to know which fraction of observed contracts are the product of bias rather than rational choice.

The difficulties exist because the psychologists lack such a theory. Hence, cognitively based regulatory interventions often are poorly grounded. A particular concern is that consumers suffer from numerous biases, and not every consumer suffers from the same ones. Current theory cannot tell how these biases interact within the person and how markets aggregate differing biased consumer preferences. The Essay then makes three further claims. First, regulating for rationality should be more evidence-based than regulating for traditional market imperfections: in the absence of a theory, the regulator needs to see what actual people do. Second, when the facts are unobtainable or ambiguous, regulators should assume that bias did not affect the consumer’s contracting choice because the assumption is autonomy-preserving, administrable and coherent. Third, disclosure regulation can ameliorate some reasoning errors. Hence, abandoning disclosure strategies in favor of substantive regulation sometimes would be premature.

This essay adds to a growing literature that challenges the ability of regulators to effectively incorporate the lessons of behavioral economics into consumer protection regimes. I take no position at this time on the particular claims of this essay, but I certainly think that the Consumer Financial Protection Bureau should grapple with this growing body of literature. The only thing worse than no consumer protection regime at all would be one that was designed all wrong.