Why relying on AI for your tribunal claim is a bad idea
Artificial intelligence (AI) is becoming more common in legal settings, including employment tribunals, helping with tasks like drafting legal documents and conducting research. While AI offers speed and convenience, it's important to remember that it isn't perfect. Relying too much on AI without human review can cause serious legal mistakes, a lesson self-represented litigants (known as Litigants in Person, or LiPs) are learning the hard way.
How AI is being used in legal cases
AI tools are now widely available, offering services such as:
- Legal research platforms that quickly find case law and statutes.
- Document drafting software to help prepare legal arguments and forms.
- Case law analysis that identifies relevant precedents.
For those representing themselves, these tools may seem like a way to level the playing field against lawyers. However, while AI can be helpful, it should not replace human legal expertise.
A costly mistake: Relying too much on AI
Since ChatGPT and other similar generative AI tools became popular, self-represented litigants have been using them to help support their employment tribunal cases. However, there are now many case studies where relying heavily on AI to draft legal arguments and research case law has backfired, due to a few key issues:
Imaginary Laws and Case Law
Generative AI isn't actually intelligent: to answer your prompt or question, it uses vast amounts of data to predict what comes next in a sentence. While this often works as intended, there are many instances where ChatGPT and similar tools have cited laws that do not exist and referred to case law for which there is no record at all.
Misunderstanding of Case Law
Using AI to find legal precedents to support an argument may seem like a great idea at first. However, AI doesn’t truly understand the law — it works by identifying patterns in data. As a result, AI may suggest case law that initially appears helpful but, upon closer review, actually weakens the litigant’s position. In one case study where this happened, the opposing legal team pointed out these errors to the judge, damaging the claimant’s credibility.
Outdated Information
AI relies on datasets fed to it in advance; it does not look up new data in real time. This means the data the AI is relying on may be many years old and therefore out of date. This is especially important to be aware of if you are relying on laws (which change regularly), case law (which may have since been superseded) or events that are no longer relevant.
Incorrect Legal References
While AI-generated documents can help structure legal submissions, they can also include incorrect references to court procedures. Trusting the AI's output without double-checking is a trap that has led many litigants to unknowingly submit flawed documents to the tribunal, causing confusion, delaying proceedings, and adding extra legal costs.
Worried about not submitting a powerful and legally correct claim?
Need help but are concerned AI will give you incorrect information? Read our ET1 Submission Guide!
The Danger of Confirmation Bias
One major issue in using AI for employment tribunal claims is confirmation bias — the tendency to focus on information that supports what you already believe. A litigant may ask the AI leading questions, pushing it to produce answers that match their desired outcome instead of neutrally assessing the case's strengths and weaknesses. This creates a dangerous echo chamber, reinforcing poor legal strategies.
The Consequences of Overconfidence
There have been cases where a litigant, believing their AI-generated arguments were foolproof, pursued legal action with little merit. This overconfidence led to repeated, baseless applications, which the court eventually viewed as unreasonable. If this happens within an employment tribunal, the judge may make a Costs Order, meaning the Claimant could be held responsible for the Respondent's costs.
How to Use AI Effectively in Legal Matters
To make the most of AI while avoiding its pitfalls, both self-represented litigants and legal professionals should follow these steps:
Verify all AI-generated content: Double-check AI suggestions against trusted legal sources and official case law databases.
Seek professional review: If possible, ask a qualified legal professional to review AI-generated documents or research.
Understand legal strategy: Use AI as a support tool, but ensure your legal strategy is rooted in sound legal reasoning, not just AI predictions.
AI as an efficiency tool, not a replacement: Let AI handle time-consuming tasks like document drafting or research, but keep human judgment at the core of legal decisions.
Invest in human expert written content: The content within SueMyEmployer.co.uk has been written by human experts with first-hand knowledge and experience of the UK employment tribunal system. This will provide reliable and relevant information.
AI may be smart, but it still needs a human touch to ensure accuracy and fairness in legal cases.
Conclusion: The risks of over-reliance on AI at an Employment Tribunal
Overusing AI without proper human oversight can lead to serious issues, including:
- Fabricated Case Law Citations: AI may generate case law citations that sound convincing but don't actually exist, risking severe damage to your credibility.
- Contextual Misinterpretation: AI might misinterpret legal concepts, suggesting precedents or arguments that are out of context.
- Procedural Inaccuracy: AI tools may not always reflect the most recent procedural rules, leading to errors in document submission or court processes.
- Inflexibility in Strategy: AI lacks the human ability to adjust strategies based on courtroom dynamics or unexpected developments.