Generation AI?
- Christopher Dias

Training Tomorrow's Lawyers in the Age of AI
My daughter does not value work produced with AI assistance. She is at the start of her career, ambitious, and wants to earn her expertise the hard way. I was surprised by this. If anything, I had expected the opposite: that the youngest entrants to the profession would be the most enthusiastic adopters. But from talking to people across the sector, it seems that AI is being embraced most readily by mid-career practitioners: those with enough experience to direct it, and to call it out when it veers off track. New entrants, with some exceptions, are more ambivalent. And the more I think about it, the more sense that makes.
She is not being precious about merit for its own sake. She is afraid of being robbed of her formative years. She is afraid that her generation will be defined by AI. And that fear is more legitimate than I initially gave her credit for.
I have built AI-powered tools for immigration lawyers and their clients: pre-assessment tools for visa applications; compliance planners for sponsor licence holders; frameworks for understanding proposed changes to settlement law. I have done this carefully. Enterprise-grade AI with proper data protection assurances. User-facing disclaimers on every tool. Explicit referral pathways to qualified lawyers for anything the tool identifies as complex. Not because I am timid about technology, but because I understand what is at stake when you put a vulnerable person in front of something that looks like advice but is not.
My honest assessment after all of this is that AI has made me significantly better at my job. Not because it knows more than me, but because it removes the friction between what I know and what I can produce. The judgment, the pattern recognition, the instinct for what matters in a particular set of facts: those remain entirely mine. What AI does is amplify them.
I have written about this elsewhere in more detail. The short version, what I have come to call the Superlawyer hypothesis, is that AI does not replace lawyers but completes them, amplifying strengths and compensating for weaknesses so that each type of practitioner can operate closer to their full potential. If you want the longer argument, it is here. The point for today's article is narrower but follows directly from it: the Superlawyer hypothesis only works if you are already a fully formed lawyer. It assumes the formation has already happened, and that the AI is now augmenting that formation.
What happens to the junior who never goes through that formation process because AI is doing it for them?
There is something almost sacramental about the way expertise is built in law. You read the bad cases before you understand why they are bad. You draft the clumsy letter before you know what elegant looks like. You sit across from a client whose life is genuinely at stake and you feel the full discomfort of not knowing enough, and that discomfort is what drives you to go and find out. The struggle is not incidental to the formation. In many respects it is the formation.
Let's put it another way. A Jedi padawan is not trained by a droid. The droid knows more: faster recall, no gaps, every language and culture catalogued and retrievable in milliseconds. But the droid has never felt fear before a battle, never had to find stillness in the middle of chaos, never made a decision that cost someone something and had to live with that regret. Each padawan is assigned a Jedi master precisely to give them that exposure to hardened experience. The knowledge transfer only works because of that relationship. And this is the same relationship that has been the foundation of legal education in the best law firms.
My daughter's fear is that AI will short-circuit this. That juniors will be pushed by default into using tools they do not understand, producing polished outputs without the formation behind them. That the profession will sleepwalk into producing a generation of operators rather than lawyers. She wants to earn it. She wants the struggle. And she is right to want it, even if the answer is not to pretend the tools do not exist.
UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC) is being read by most practitioners as a cautionary tale about AI. I think it is more accurately read as a cautionary tale about supervision, and about what happens when juniors are left to their own devices with tools they do not understand and lack the weight of experience to tame and direct, in the way an experienced rider would a steed.
The case joined two separate matters before the Upper Tribunal. In the first, a solicitor included a non-existent case in grounds of appeal, probably through inadvertent use of Google's AI search function without verifying the result. He self-reported immediately to both his regulators. The tribunal took no further action. The lesson there is straightforward: verify every citation against a primary source before it leaves your office, regardless of how you found it.
The second case is the more instructive one for this argument. Judicial review grounds containing multiple false or incorrect citations had been drafted without appropriate oversight by a very junior caseworker, who turned out not to be a trainee solicitor at all but the supervising solicitor's brother. The supervising solicitor signed off without checking. When the tribunal pressed him on his firm's AI policy, he said there was no mechanism by which staff could use AI. The tribunal's response was pointed: anyone with access to Google has access to AI. He had given his junior no understanding of the tools they were already using every day, and no framework for using them responsibly.
The tribunal put it plainly at paragraph 37 of the judgment:
It matters not how such citation errors come about. Whether they are inserted by a hapless trainee or by ChatGPT is really neither here nor there; the point is that the qualified legal professional with conduct of the matter is expected to ensure that such documents are checked, that errors are identified, and that only accurate documents are sent to the tribunal.
This is a supervision failure dressed in an AI costume. A junior left alone with AI and no guidance is not categorically different from a junior left alone with an old precedent and a prayer. The tool changes. The supervisory obligation does not. And in neither case does the junior learn anything of lasting value.
It is also worth noting what the tribunal said at paragraph 18, because it tends to get lost in the headlines:
We do not suggest for a moment that the use of legal AI programmes by properly trained professionals is anything other than a step forward in legal practice.
This is not a judgment condemning AI. It is a judgment condemning the absence of any serious engagement with it.
The answer to the formation problem is not to ban AI from junior practice. That is both unenforceable and, frankly, dishonest. As the tribunal noted, Google is already an AI tool whether your firm has licensed one or not. Telling juniors not to use AI while leaving them to find their own way around a search engine that serves AI-generated results at the top of every query is not a policy. It is wilful blindness.
The answer is for firms to treat AI the way they should treat any powerful resource in the hands of a junior: with explicit, structured, supervised engagement. Not as a default that bypasses thinking. Not as a secret resource used quietly and never discussed. But as a tool whose use is visible, understood, and woven into the training framework of the firm.
In practice this means several things. Juniors should be required to do tasks manually before being shown how AI can assist with them; you cannot evaluate an output you have no independent basis for assessing. They should verify AI outputs against primary sources as a matter of routine. They should understand the difference between a tool that accelerates their own thinking and a tool that substitutes for it. And senior practitioners should be having explicit conversations about where AI adds value and where it does not, rather than leaving juniors to discover the boundaries by falling over them in front of a tribunal.
Transparency matters here too. If a junior produces work with AI assistance, that should be discussable, visible, and subject to the same review as any other delegated work. The question a supervising partner should be asking is not "did you use AI" as though it were an accusation, but "walk me through your thinking" as a genuine test of whether the formation is happening. AI-assisted work that a junior can explain, defend, and build on is formation. AI-assisted work that a junior cannot explain is not, regardless of how polished it looks.
There is something that rarely gets said about law, perhaps because it sounds unlikely: it is a creative discipline. Not in spite of the rules and regulations but because of them. The legislation sets the walls. The client enters the room, and the judges set the ceiling. And within that space it is the lawyer's job to interpret the language of the law in a way that serves the client, or serves a particular view of what justice requires. That interpretation is never purely mechanical. It involves judgment, instinct, and argument. It is why courts and tribunals exist at all; if the law simply applied itself, we would not need lawyers. The creative act is finding what is possible within the constraints, and then persuading someone else that you are right. That act requires a voice. It requires a human being who has thought something through and is prepared to stand behind it.
I was speaking recently to a colleague about a piece of work a junior had produced. The concern was not that it was wrong. It got the basic elements right. The client would probably not have noticed. But there was no part of that junior lawyer on the page. No personal imprint. No sense that a human being with a point of view had looked at this particular problem and brought something of themselves to it. It read, my colleague said, like a ChatGPT answer.
Finding your voice as a lawyer is not a luxury or an affectation. It is how you develop judgment. It is how clients learn to trust you specifically, not just the firm. It is how you become, over time, someone whose opinion means something. A junior who produces clean, competent, characterless work has not found their voice. They have borrowed one.
Part of our job as senior practitioners is to make sure that does not become a habit; to create the conditions in which junior lawyers find out who they are on the page, not just what the answer is. That means supervising properly, engaging openly with the tools they are already using, and being honest about the difference between work that has been thought through and work that has been generated.
The generation now entering law deserves both things: the tools that will make them formidable, and the formation and development time that will make those tools mean something. AI certainly has a place as a transformative tool for every lawyer. However, Generation AI do not want a droid for a teacher. It is our job to make sure they do not get one.
Chris Dias is a solicitor and founder of Lawyery, an immigration law firm based in Holborn. He is the founder of Legal Artificial Intelligence Development Ltd and teaches advanced immigration law for Free Movement.