Get beyond the buzzwords to drive innovation in your legal practice: a practical primer for barristers by Raquel Vazquez
Artificial intelligence is bringing a new wave of jargon. To use AI tools effectively and understand the regulations shaping them, lawyers must first grasp the fundamental concepts underpinning the technology. This article demystifies some of the buzzwords and provides an overview of the practical limitations that matter most to the legal profession.
To start, it is essential to distinguish between the two types of AI. Predictive AI analyses existing data to find patterns and predict outcomes, and has been used for years in applications like e-discovery. In contrast, Generative AI is designed to produce something new, such as a brief, a podcast, a short video or an infographic. You have likely heard the term Large Language Model (LLM) used as a synonym for Generative AI, but the technology has evolved. Most models nowadays are multimodal, meaning they can handle not just text but also images, video, charts, code and audio.
These all-encompassing AI models are often referred to as foundation models because they serve as the base upon which developers build various applications. The most advanced, state-of-the-art versions are known as frontier AI. These complex models utilise vast computational power, which is why you may have heard regulators discuss compute. Legislators, needing a yardstick for these processing resources, have focused on FLOPs (Floating-Point Operations) as their primary measure. You can think of it as the total number of calculations required to build the model. This metric has been adopted as a threshold for scoping in models subject to the closest regulatory scrutiny.
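To make the compute threshold concrete, the following sketch uses a commonly cited rule of thumb – training FLOPs are roughly six times the number of model parameters multiplied by the number of training tokens. The rule of thumb, the hypothetical model size and the token count are illustrative assumptions, not an official regulatory formula; only the 10^25 FLOPs figure comes from the EU AI Act's systemic-risk presumption.

```python
# Rough illustration of how a compute threshold scopes models in or out.
# ASSUMPTION: the '6 * parameters * training tokens' heuristic is a widely
# used approximation of training compute, not a legally defined formula.

def training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total floating-point operations needed to train a model."""
    return 6 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
flops = training_flops(70e9, 2e12)

# The EU AI Act presumes 'systemic risk' above 10^25 FLOPs of training compute.
EU_AI_ACT_THRESHOLD = 1e25

print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Within systemic-risk scope:", flops >= EU_AI_ACT_THRESHOLD)
```

On these illustrative numbers the model lands well below the threshold, which is precisely the gap the next paragraph explores: a model can be small in compute terms yet still risky in what it is allowed to do.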
This legislative approach rests on the assumption that ‘more calculations equals more capabilities which equals more risk’. Although generally true today, this premise is becoming insufficient for measuring risk as technical advancements help build models that are ‘smarter’ without getting much ‘bigger’. For instance, you may have heard about agents or that AI is becoming agentic. This means a model does not just answer questions: it can execute tools to complete workflows. Imagine a situation in which a smaller model is efficient enough to run locally on a laptop, but capable enough to navigate the necessary websites to register a company, check for conflicts, download and populate the IN01 form, draft the Articles, calculate any registration fees and present you with a final package or validation screen. Such a model could fall below strict regulatory compute thresholds, yet these functionalities can carry risk. This shows that risk is a function of autonomy and application, not just computational resources.
Now, it is vital to make an important distinction: when we speak about AI, we are often talking about two different things in the same breath. On one hand, you have the underlying model itself – the powerful computational resource we have described – and on the other, you have the system or application layer. The system refers to the application built around the model, which serves as the user interface or the specific tool you are directly interacting with. A helpful way to conceptualise this is to think of the model as the engine and the application or system as the car. For example, your chambers or law firm might use a specific legal research tool (this would be the ‘system’) that leverages a well-known AI model such as Google’s Gemini or Anthropic’s Claude.
Beyond the theory, legal professionals will face three practical issues when using generative AI tools. The first and most widely known challenge is reliability. Unreliable output can arise for several reasons, but you have probably heard of hallucinations, where an AI tool confidently invents facts, cases and citations. A different concept, less familiar to legal professionals, is the knowledge cut-off date. This date marks the end of the model’s internal knowledge base, meaning the model is unaware of events or information from after that date – akin to using an early edition of a jurisprudence textbook. Knowing the cut-off date helps you understand a tool’s potential limitations when using it for recent topics.
Technical methods exist to address these challenges. For instance, a model may have access to real-time information via tools like web search. Using Retrieval-Augmented Generation (RAG), a tool can connect to an external knowledge base and use that information as context to produce a more accurate answer. Anchoring an answer to information outside the model’s own knowledge base is called grounding, and a legal database or your organisation’s own documents can serve this purpose. Many systems also provide a citation back to the source on which their answer is grounded. A prominent example in practice is the tool ‘NotebookLM’, which grounds its output specifically on the sources supplied by the user so that the analysis remains within the provided context. For lawyers these concepts are valuable because RAG and grounding can compensate for outdated knowledge. However, it is important to remember that a model may still misinterpret the source text it is grounded in, make up citations or attribute real quotes to the wrong sources.
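For readers curious about the mechanics, the sketch below shows the RAG pattern in miniature: retrieve the most relevant source, then place it in front of the model’s question as context. Real systems use semantic (embedding) search and a live model; here the ‘knowledge base’ is three invented snippets and retrieval is a toy keyword-overlap ranking, used purely to make the two-step shape visible.

```python
# Minimal RAG sketch: (1) retrieve relevant sources, (2) assemble a grounded
# prompt for the model. The snippets and the retriever are illustrative only.

KNOWLEDGE_BASE = [
    "IN01 is the Companies House form used to incorporate a company.",
    "A skeleton argument summarises a party's case before a hearing.",
    "The standard fee for online incorporation is payable on filing.",
]

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Toy retriever: rank documents by how many question words they share."""
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Assemble what a RAG system would actually send to the model."""
    sources = retrieve(question, KNOWLEDGE_BASE)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (f"Answer using ONLY the numbered sources below, citing them.\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

prompt = build_grounded_prompt("Which form is used to incorporate a company?")
print(prompt)
```

Note that the model never ‘learns’ the retrieved text; it simply receives it as part of the prompt, which is why grounding helps with outdated knowledge but cannot stop the model misreading the source it is handed.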
A second practical element is system instructions. These instructions act as the AI’s ‘practice direction’, defining the rules that guide the model’s behaviour, its role and its style of response. Developers or firms can use these instructions to implement safety guardrails or enforce specific guidelines – for example, instructing the AI to ‘always adopt a formal tone’ or ‘never express a legal opinion’.
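In practice, a system instruction is simply a hidden first message that accompanies every request the user makes. The sketch below mirrors the message format common to chat-model APIs; the instruction text and the chambers scenario are invented for illustration.

```python
# Sketch of how a system instruction travels with each request.
# ASSUMPTION: the role-based message list mirrors common chat-model APIs;
# the instruction wording is a hypothetical example, not a vendor default.

SYSTEM_INSTRUCTION = (
    "You are a research assistant for a barristers' chambers. "
    "Always adopt a formal tone. Never express a legal opinion; "
    "instead, summarise the relevant authorities."
)

def build_messages(user_question: str) -> list[dict]:
    """The system message is sent first, before the user's turn, every time."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Is this restrictive covenant enforceable?")
for m in messages:
    print(m["role"], "->", m["content"][:60])
```

Because the instruction sits outside the visible conversation, users of a firm’s tool may never see it, yet it shapes every answer they receive.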
Finally, every lawyer must understand the context window. This is simply the AI’s active memory for a single conversation. If you are working on a complex case and your bundle is larger than this context window, the AI will start forgetting the beginning of the conversation before it reaches the end. Importantly, everything in the conversation counts toward this limit: the documents you upload, your questions and the AI’s answers.
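The context window can be pictured as a running token budget that every part of the conversation draws down. The sketch below makes that arithmetic visible; the 128,000-token limit is just an example figure, and the word-to-token conversion is a crude approximation (real tokenizers behave differently).

```python
# Illustration of the context window as a shared budget for the whole
# conversation. ASSUMPTIONS: the 128,000-token limit is an example figure,
# and tokens are crudely estimated as 4 tokens per 3 words of English text.

CONTEXT_WINDOW = 128_000  # example limit, in tokens

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 tokens for every 3 words."""
    return (len(text.split()) * 4) // 3

def remaining_budget(conversation: list[str]) -> int:
    """Everything counts: uploaded documents, questions and answers alike."""
    used = sum(estimate_tokens(turn) for turn in conversation)
    return CONTEXT_WINDOW - used

conversation = [
    "word " * 90_000,                      # a large uploaded bundle
    "Please summarise the chronology.",    # your question
    "Here is the chronology...",           # the AI's answer
]
print("Tokens remaining:", remaining_budget(conversation))
```

Once the budget is exhausted, most tools quietly drop the oldest material first, which is why a long session over a large bundle can ‘forget’ the instructions or documents you gave it at the start.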
We have established that these models can be powerful engines, but a car still requires steering. This primer has focused on core technical concepts underpinning AI and how it works – demystifying terms across a gradient of complexity, from ‘multimodality’ to ‘RAG’ and ‘system instructions’ – to provide an approachable technical baseline. Many other substantive areas merit attention for our profession, but they cannot be properly addressed without some foundational knowledge.
Understanding some of the basic technical terms will increasingly be a sign of competence, but more importantly it can empower lawyers not to fall into the alluring trap of passive reliance. An AI system can generate a skeleton argument, but only you understand the strategy, the client’s risk appetite and the nuance of the law. The tool may build the draft, but your judgement builds the case.