
Soul Power

  • Writer: Christopher Dias

Artificial intelligence is an extraordinary tool. It can process vast amounts of information simultaneously, identify patterns across thousands of cases that might take a human practitioner years to notice, and make connections between areas of law that a busy solicitor, juggling a full caseload, might never have the bandwidth to reach. It does not get tired. It does not have an off day. It does not miss the citation because it was thinking about something else on the train. On cognitive capacity and raw processing power, it has capabilities that comfortably exceed our own in many respects.


But AI has never felt the sun on its face. It has never tasted the wind. It has never sat across a table from a person whose entire future was dissolving in front of them and felt, in its own body, the weight of that. It has never made a mistake that cost someone something and lain awake afterwards carrying the memory of it. It has never loved anyone, lost anyone, been afraid, been ashamed, been redeemed. It processes the world. It does not inhabit it.


This is not a weakness in the engineering. It is not a problem that the next model will solve. It is the nature of what AI is: a tool of extraordinary, almost incomprehensible power, that has no soul. And in law, of all the disciplines where this distinction matters, it matters most.

I have written about the Superlawyer hypothesis: that AI completes lawyers rather than replacing them, amplifying each practitioner's natural strengths and compensating for their weaknesses. I stand by that. I have also written about Generation AI: the risk that juniors reaching for the tool before they have the formation behind them will become operators rather than lawyers, borrowing a voice rather than finding one. Both of those arguments were building to this one. Because the reason AI completes rather than replaces, the reason formation matters, the reason the voice has to be found rather than borrowed, is the same reason in each case. It comes back to soul. It comes back to the irreducible human thing that makes legal practice more than pattern recognition and document assembly.

I remember, early in my career, being assigned to a partner who would dictate whole letters to clients and expect me to transcribe them word for word. He was twenty-five years my senior; a font of knowledge, technically formidable, someone I genuinely learned from in many respects. But he never gave me the room to find my own voice. Every letter that left under my name was his letter. Every turn of phrase, his turn of phrase. I hated it, not out of arrogance, but because I could feel something being withheld; the opportunity to work out who I was on the page, to develop the instinct that only comes from being allowed to try and fail on your own terms. He was not a bad mentor. He was, by the standards of the time, a perfectly normal one. But the effect was to keep me as an extension of his practice rather than the beginning of my own.


AI, if we are not careful, does exactly the same thing to any lawyer at any stage of their career. Not just the junior finding their feet; the senior partner, the experienced solicitor, the practitioner who has been doing this for twenty years and still reaches for the output without pushing back, without fighting for their own reading of the facts, without insisting that their voice, their judgment, their instinct, appears somewhere in the work. It is impressive, it is capable, it is often right, and it will dominate if you let it. The lawyer who takes the polished output and sends it without interrogating it, who defers to the model's confidence because it is easier than forming their own view, who does this once, twice, a hundred times until there is nothing of themselves left in the work; that lawyer is my dictating partner's nightmare made digital. The output may be better. The lawyer will be worse. And a lawyer who has been hollowed out by a tool that was supposed to serve them is not a superlawyer. They are a conduit. They are, in the most literal sense, soulless.

Soul is what gives a lawyer connection to real life. It is what allows you to sit with a client's fear and understand it not as data but as experience, because you have been afraid too. It is what allows you to read a set of facts and feel, before you can articulate why, that something is wrong; that the official version does not quite hold; that the person in front of you is telling the truth even though the evidence is thin. It is what allows you to carry responsibility. Not just to acknowledge it intellectually, but to feel its weight, to understand what it means for a real human being if you get this wrong, and to let that understanding shape every decision you make.


We are not machines. We never were. We are natural creatures; creatures of the earth, of blood and breath and instinct, woven into the same fabric as everything that lives and dies and feels. The sun on your face is not an irrelevance to your work. The grief you carry from a loss, the joy that catches you off guard on an ordinary Tuesday, the anger that rises when you see something unjust; these are not interruptions to your humanity. They are your humanity. They are the reason you can look at a client's situation and understand it at a level that no amount of processing power can reach, because you are made of the same stuff they are made of, and you know, in your body as much as your mind, what is at stake.

There is a reason the Catholic tradition places confession at the centre of moral life rather than at its edges. It is not merely about admission. It is about the full and conscious ownership of what you have done; standing before something greater than yourself, naming the act, understanding its weight, and accepting what follows. You cannot confess what you will not acknowledge. You cannot be absolved of what you refuse to name. And the thing left unconfessed does not dissolve; it accumulates, it hardens, and when it is eventually drawn into the light, as it always is, the failure to have named it sooner becomes part of the reckoning itself.


Professional responsibility works the same way. The lawyer who signs off on work they have not truly supervised has not confessed. The firm that has decided not to ask whether its people are using AI because asking means owning has not confessed. The managing partner who points to a software licence when a regulator asks for evidence of oversight is not in a state of grace; they are in a state of attrition, sorry only because the question has been asked, not because they have understood the weight of what they failed to do.

AI cannot confess. It cannot be contrite or merely attrite. It cannot stand before a regulatory body and account for itself as a moral agent, because it is not one. It cannot bear the consequence of getting it wrong in any sense that matters. That is not a limitation of its intelligence. It is a limitation of its nature. And it is precisely why the soul in the room, the human being who can be held to account, who can suffer the consequence, who can carry the weight of what they did and be changed by it, is not optional. It is the whole point.

The courts in 2025 said as much, though not in these terms. And what they revealed, across three cases that should be read together rather than separately, was not just that lawyers were using AI carelessly. It was that when they were found out, they did not confess. They reached, instead, for the same instincts that had got them into trouble in the first place: minimisation, defiance, and the comfort of common practice. In each case the error was serious. In each case the response made it worse.


R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 (Admin) was the first and the most instructive. A pupil barrister submitted grounds for judicial review citing five legal authorities. The authorities did not exist. When this was pointed out, the response was to call them "minor citation errors" and "cosmetic errors"; a phrase that must have seemed, in the moment, like a way of keeping the door open. It was not. It was the response of someone who had been found out and was hoping that minimisation might serve where honesty would not. The Divisional Court was not in a cosmetic mood. The barrister was referred to the Bar Standards Board; the solicitors to the SRA. There is a difference, in the confessional and in the regulatory tribunal, between the person who comes forward because they have understood and the person who minimises because they have been cornered. The courts know the difference. The SRA knows the difference. And we; we know the difference.


MS (Professional conduct; AI generated documents) Bangladesh [2025] UKUT 305 (IAC) brought the problem into my own jurisdiction and took the failure of contrition to a different level entirely. An immigration barrister used ChatGPT to draft grounds of appeal citing a case called Y (China). Y (China) does not exist. When the Upper Tribunal judge challenged him during the hearing he was given time over lunch and a copy of Ayinde so that he would understand the gravity of what was in front of him. He came back and maintained, with a commitment that would be remarkable in any other context, that the case was genuine because the AI had told him so. This was not minimisation. This was defiance; a doubling down in the face of direct judicial challenge that left no room for the tribunal to treat the matter as anything other than what it was. He was referred to the Bar Standards Board. The tribunal was unambiguous: taking unprofessional short cuts which will very likely mislead the tribunal is never acceptable. No excuses.


Mazur v Charles Russell Speechlys [2025] EWHC 2341 (KB) is the most interesting of the three because its defence was the most human and the most understandable, and therefore in some ways the most revealing. This was not an AI case at all. It was about whether unqualified staff can conduct litigation under the supervision of a qualified solicitor, and Mr Justice Sheldon's answer in September 2025 was unequivocal: no. Supervision does not confer authorisation. The person conducting the work must themselves be authorised to conduct it. The profession has not yet fully comprehended what that means for AI, but it should, because the logic is identical. If a qualified solicitor is providing light-touch sign-off on work that was substantively generated by a machine, the machine is conducting the work. The machine is not authorised. The supervision question and the AI question are not separate problems waiting to be addressed separately. They are the same problem, already decided, and the firms still falling back on common practice as their answer are making the same mistake Mazur made. We now have a judgment on the books that tells us exactly where that leads.


And yet the profession's dominant response remains silence. The firms that have invested in enterprise-grade AI solutions have convinced themselves that the investment is the answer. The platform is serious. The vendor is reputable. The data processing agreements are watertight. Surely the tool carries some of the regulatory burden. It does not. No platform, however sophisticated, takes professional responsibility for the work that leaves your office under your name and your SRA number. The platform does not supervise. It certainly does not have soul. And a polished, confident, beautifully formatted output is in some respects more dangerous than a rough one, because the polish invites exactly the deference that every one of these cases is warning against. A software licence is not a compliance strategy. It is another way of not confessing.


One supervising solicitor told a tribunal his firm had no mechanism by which staff could use AI. The tribunal's response was pointed and precise: anyone with access to Google has access to AI. The compliance officer who has decided not to ask the question is not protected by not asking it. Any criminal lawyer will tell you that inferences can be drawn from silence; and regulators, insurers, and courts are no different. When a regulator asks how you supervised AI use on a file and you have nothing to show, the inference is not that you were discreet. The inference is that you did not supervise at all.


AI has extraordinary capacity. But capacity without conscience is just processing. The law has always understood, even when it struggled to articulate it, that what makes legal practice legitimate is not the quality of the output. It is the accountability of the person behind it. The weight they carry. The consequences they can bear. The soul they bring to work that affects real people living real lives. That, my friend, is soul power.

 
 
 


© 2025 lawyery.co

Lawyery Limited is a Limited Company (Company number 13141708) and we are a Law Firm practising English law with offices in London. Lawyery Limited is authorised and regulated by the Solicitors Regulation Authority (SRA number 8001894).

Lawyery Limited also trades under the names 'Dias Solicitors' and 'MigrationLaw.' A full list of Directors is available at our registered office at: 3 Waterhouse Square, 138-142 Holborn, London, EC1N 2SW.
