I Got Hooked On Midpage’s New Legal Research & Drafting Agent
I wanted to find out whether AI can help litigators draft briefs...turns out, it can.
I wanted to find out whether AI can help litigators draft briefs. To test it out, I turned a finished mock appellee’s brief that I had written for a class into an unfinished, excerpted version of itself, and then recreated part of my research and drafting process using Midpage.
Not only was I pleasantly surprised by how quickly Midpage helped me draft a coherent, and in some ways improved, version of my brief, but I was also hooked: with Midpage as my AI collaborator, each iteration made the argument smoother, the structure clearer, and the tone more compelling. I easily spotted improvements I had missed before.
1. Background & Setup
Last year, for a law school class, I wrote a mock appellee’s brief for a case before the Supreme Court of Ohio. The case centered on how far to extend an exception to the “American Rule,” the longstanding principle that each party ordinarily pays their own attorney fees. My position was that, while the punitive-damages exception to the American Rule permits a jury to award trial-level attorney’s fees after finding that a defendant acted maliciously, it does not allow a court to award additional appellate-level fees after a reasonable appeal.
I cut a key section out of my brief and used Midpage to help me reconstruct it. To start, I gave Midpage a lengthy prompt, excerpted below:
“I’m drafting an appellee’s brief for filing in the Supreme Court of Ohio. This case centers on how far to extend an exception to the ‘American Rule’ . . . I want to add an argument explaining that the defendant’s pre-trial malicious conduct is the ‘wrong’ which justifies punitive damages and trial-level attorney’s fees, but a reasonable appeal is not malicious conduct—so appellate fee awards aren’t justified.”
Midpage’s research agent pulled relevant precedent (cases like Phoenix Lighting, Grodek, Finney, and Neal-Pettit v. Lahman), and its drafting agent then assembled a draft argument from those authorities.
The result was surprisingly coherent: a full, well-cited section articulating my thesis that reasonable appeals are protected exercises of legal rights, not the kind of malicious conduct that defines the punitive-damages exception and justifies fee shifting.
2. Iterating (And Getting Hooked)
After every substantial edit, Midpage reviews the draft and suggests improvements. I declined some edits (e.g., a policy argument that was already covered elsewhere in my brief), but others felt like real improvements (e.g., refining transitions and adding a definition of “malice”). I instructed Midpage to implement the suggestions I liked and ignored the ones I didn’t.
Not only did this process become strangely addictive, but it also allowed me to step back and look at the draft through a more critical, editorial lens. As a result, I noticed that my own instructions were becoming sharper. For example, I realized that calling my earlier arguments “technical limitations” made them sound weak, so I told Midpage to call them “threshold hurdles that preclude appellate fee-shifting based on the punitive-damages exception.” This is the kind of insight I missed when I drafted the brief the old-fashioned way, but it became obvious with this new workflow.
By this point, Midpage felt like a junior associate who worked fast and didn’t mind endless redlines. The tool didn’t replace the intellectual work—it accelerated it.
3. Comparing Results
Finally, I asked ChatGPT to compare Midpage’s version of the new section with the version I had drafted manually last year. For this comparison, I used Midpage’s legal research GPT so ChatGPT could pull the cited cases and verify accuracy.
ChatGPT found both versions accurate and fair but noted that Midpage’s was more detailed and more persuasive to a new reader, while mine was leaner and more efficient. The difference likely arose because Midpage drafted its section as a standalone argument, sometimes overlooking the broader context of the brief, whereas my original version was written to fit within a larger document that already supplied background, definitions, and framing. With a few more rounds of edits, especially if I had instructed Midpage to tighten and condense its language, our versions likely would have converged in length and style.
4. What I Learned
Three takeaways:
AI Is Useful, But With Caveats: Midpage and similar tools take a bit of massaging to produce passable work product. For instance, a quote may be attributed to the wrong case. The nice thing about Midpage is that it verifies quotes against their citations; when something doesn’t match, Midpage underlines the quote in red so I can fix it.
A New Way Of Thinking: AI doesn’t replace critical thinking or legal reasoning, but it does give you a new lens through which to examine your own work product. Prompting forces you to think deeply about your argument’s logic and purpose, and reviewing AI-generated drafts requires you to reason from first principles. Both exercises crystallize your understanding of your own work. That alone is a good reason to use AI.
Future Work: Where AI use is permitted, I can see Midpage saving hours of preliminary research and first-draft writing. That extra time can go toward improving clarity, strengthening analysis, and cite-checking every quote and case with the diligence real-world lawyering already demands.
This mock brief was written for a law school course. The Midpage exercises described here were conducted solely for educational purposes; no legal advice was given or received.
Midpage does not provide legal advice and can make mistakes. Always verify important information.