
Dispute Resolution in the Age of Artificial Intelligence

Artificial Intelligence has moved firmly into our daily lives. It drafts emails, schedules meetings, and, possibly more controversially, assists in resolving disputes. For Alternative Dispute Resolution Practitioners, this technology promises exciting opportunities for efficiency. But it also opens the door to a new class of risks.

The appeal is obvious. AI can transcribe hearings, generate draft rulings, and search case law in seconds. AI tools are already reshaping how practitioners write, and how parties prepare for hearings, arbitrations and mediations. But the very ease of use is what makes AI dangerous. It is not just a tool; it is a system that responds to prompts, whether helpful or harmful.

South African courts have already drawn a line. In Parker v Forsyth NNO (2023)[1], Mavundla v MEC (2025)[2], and Northbound Processing v SA Diamond Regulator (2025)[3], judges rebuked legal professionals for relying on AI-generated citations that turned out to be fiction. The message is clear: unchecked AI use is not just sloppy – it’s negligent.

Bias is another lurking threat. Algorithms trained on flawed data can reinforce discrimination rather than dismantle it. And confidentiality? That’s a minefield. Uploading sensitive case details to public platforms may violate both the Constitution and POPIA.[4] Section 14 of the Constitution guarantees privacy. Section 9 demands equality. Section 10 protects dignity. AI doesn’t get a pass.

The ethical response is not to reject AI, but to use it responsibly. Short (2025)[5] outlines five principles for responsible AI use: fairness, transparency, accountability, privacy and security. These are not just corporate buzzwords; they are the scaffolding of credible dispute resolution. If AI is used, it must be disclosed. If outputs are generated, they must be verified. And if decisions are made, they must remain human.

Globally, the regulatory tide is rising. The EU's AI Act (2024/1689) imposes strict rules on "high-risk" systems, including those used in employment. Chatbots must disclose that they are AI. AI-generated content, such as deepfakes, must be clearly labelled. The Council of Europe has adopted a treaty on AI and human rights, promoting responsible innovation. The G7 has issued a code of conduct to promote the safety of AI systems. South Africa, which has not yet regulated AI, will need to align.

So what does this mean for dispute resolution? It means that dispute resolution practitioners must resist the temptation to outsource judgment. AI can help with volume, polish and accessibility – especially as online platforms grow. But it cannot replace impartiality and human judgement. At least not yet. The credibility of dispute resolution rests on trust in the humanity of fairness, not machine logic.

Used responsibly, AI can be a powerful ally. Used carelessly, it can undermine the very principles ADR is built on. The risk is not just technical; it is ethical. And the solution is accountability.

[1] Parker v Forsyth NNO and Others (1585/20) [2023] ZAGPRD 1 (29 June 2023)
[2] Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KZN & Others (7940/2024P) [2025] ZAKZPHC 2; 2025 (3) SA 534 (KZP) (8 January 2025)
[3] Northbound Processing (Pty) Ltd v South African Diamond & Precious Metals Regulator & Others (2025/072038) [2025] ZAGPJHC 661 (30 June 2025)
[4] Protection of Personal Information Act 4 of 2013.
[5] Short, L. Building a Responsible AI Framework: 5 Key Principles for Organizations. 26 June 2025. Retrieved on 18 September 2025 from https://professional.dce.harvard.edu/blog/building-a-responsible-ai-framework-5-key-principles-for-organizations/#Building-a-Responsible-AI-Strategy-Best-Practices.
The image for this article was created with ChatGPT 5 on 30 September 2025.
