AI and Legal Research: It Still Begins With One Question
We often think of prompts as the secret to getting better answers from AI. But in legal research, asking the right questions isn't a trick; it's a skill. One that's grounded in practice, discipline, and the ongoing responsibility to understand not just what information says, but where it comes from.
In a recent exchange with ChatGPT, I found myself typing a question I’d picked up during a professional development course on working with generative AI:
“How do you account for the discrepancies in the initial numbers you provided me?”
At the time, I didn’t think much of it. But as soon as I hit enter, I recognized the question for what it was. Not just a clever prompt, but a reflection of how I’ve been trained to approach research: ask clearly, look for logic, and expect transparency.
A Quick Example
In a conversation with ChatGPT, I asked it to count a list of projects organized by jurisdiction. I already knew the total number of projects; I just didn't want to manually tally them by region. It seemed like the kind of task AI could manage easily: objective, structured, and straightforward.
The initial response came back quickly, breaking down the projects across various provinces. But something didn't sit right. The numbers didn't quite add up, and the breakdown included jurisdictions I knew weren't part of the list. So I asked again. The response changed: the counts were different, and once again no clear explanation was offered.
That’s when I asked the question that shifted the tone of the interaction: “How do you account for the discrepancies in the initial numbers you provided me?”
What followed was one of the more useful responses I’ve had from an AI. Not because it was accurate, but because it was transparent. The AI walked through what had gone wrong:
Initial estimates were based on memory and a scan of the content. It didn't actually count anything; it guessed based on what it remembered from scanning the page.
It relied on prior examples from similar (but different) programs. It assumed Alberta was included because it had been in other rounds of funding, even though that wasn’t the case here.
It only produced accurate numbers after being asked to double-check. Once prompted, it counted each item manually and corrected its mistakes.
What stood out to me wasn't the correction; it was everything that came before it. The confident wrong answers. The hallucinated jurisdictions. The fabricated data based on assumptions from similar (but not identical) contexts. The moment only became meaningful because I pressed for an explanation. Not a rephrased answer, but a breakdown of the process. A professional instinct baked into legal research:
How did you get here, and can I trust it?
What AI Needs and What Legal Research Still Requires
This is where research discipline comes in. It's not enough to get an answer that sounds right, particularly in law. We need answers we can trust, trace, and cite. Tools like ChatGPT can be helpful early on. They can surface terminology, organize themes, or offer a jumping-off point for analysis. But without verification, they're just placeholders, approximations dressed up in fluency. The profession of legal research is built on knowing the difference. That's why even the best AI tools still need someone who knows how to ask: Where did this come from? Can I find the source? Does it hold up?
Reflections, Not Warnings
This post isn't a warning against using AI, nor a rejection of its value. I use it. I explore its capabilities. I appreciate its potential. But what that exchange reminded me, and what I've known throughout my career, is that strong research isn't about getting fast answers. It's about asking the right questions, interpreting the response, and following the trail back to its origin. AI doesn't replace that. It relies on it.
If You’re Exploring AI in Your Practice
Use it to generate ideas, not verify facts.
Ask it to explain itself and then check that logic.
Treat its outputs as drafts, not deliverables.
Legal research isn't just about getting to an answer. It's about knowing whether you can stand behind it. That hasn't changed, and no tool, however advanced, should change that standard.