SOUTH AFRICAN COURTS WEIGH IN ON THE ETHICAL USE OF ARTIFICIAL INTELLIGENCE IN LEGAL PRACTICE
August 6, 2025

Background:

Artificial Intelligence (“AI”) is increasingly being adopted in South African legal practice, offering efficiency and convenience in tasks such as legal research, drafting and document analysis. However, the courts have made it clear that the benefits of AI must not come at the cost of ethical and professional standards. In recent judgments, South African courts have cautioned against the uncritical use of generative AI tools, especially when used to conduct legal research or produce court documents.

While South Africa’s Code of Conduct for all Legal Practitioners, Candidate Legal Practitioners and Juristic Entities (“Code of Conduct”) and the Legal Practice Act No. 28 of 2014 currently provide no express regulation on AI use, recent judgments are laying the groundwork for ethical expectations in this domain.

Three cases, in particular, have been central to date:

  1. Parker v Forsyth N.O. and Others (Johannesburg Regional Court, unreported case no 1585/20, 29 June 2023, per Magistrate Chaitram) (“Parker”);
  2. Mavundla v MEC: Department of Co-Operative Governance and Traditional Affairs KwaZulu-Natal and Others (KZP, unreported case no 7940/2024P, 8 January 2025, per Bezuidenhout J) (“Mavundla”); and
  3. Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others (Gauteng Division, Johannesburg, case no 2025-072038, 30 June 2025, per Smit AJ) (“Northbound”).

From these judgments, four guiding principles have emerged for the ethical use of AI in South African legal practice.

Principle 1: AI has its uses – but it is not yet reliable

In Mavundla, the judge concluded that ChatGPT, and by implication similar AI tools, are currently unreliable as a source of information and legal research. More directly, the court held that relying on AI for legal research is “irresponsible and downright unprofessional.” This reflects a clear judicial stance that legal practitioners are expected to independently verify legal authorities and not delegate this critical responsibility to AI.

This finding was echoed in Northbound, where the court discovered that multiple case citations in the applicant’s heads of argument were fictitious and traced back to an AI tool known as “Legal Genius”. Although counsel admitted that the incorrect citations were due to time pressure and oversight, not bad faith, the court nonetheless stressed that AI-generated legal research must always be verified, and that “coherent and plausible” outputs are not sufficient if they are false.

Principle 2: AI is no substitute for professional judgment and diligence

While AI can increase efficiency, it does not and cannot replace the legal practitioner’s ethical and professional obligations. In Parker, the court noted that “…the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading”. Legal training, critical analysis and professional reasoning cannot be outsourced to algorithms. South African courts expect practitioners to exercise independent legal judgment, particularly when dealing with novel or complex matters, and not to rely blindly on AI-generated outputs.

In Northbound, the court emphasised that written heads of argument carry the same ethical weight as oral submissions and cannot include references to non-existent authorities, regardless of how the error occurred. The responsibility to apply legal reasoning with care remains constant, with or without the assistance of AI.

Principle 3: The duty to verify information remains with the legal practitioner

The Code of Conduct imposes a duty on legal practitioners to supervise the work of candidate legal practitioners and support staff. In Mavundla, the court extended this obligation to include the verification of all information generated by AI tools. Bezuidenhout J stated that the supervisory role “…include the verification of the accuracy and correctness of any information sourced from generative AI systems and other technologies and databases by staff, including candidate legal practitioners, in the legal practitioner’s employ”. Legal practitioners remain fully accountable for any content presented to court or clients, regardless of its origin.

Similarly, in Northbound, the court found that senior counsel had not independently verified the AI-generated citations in the heads of argument, assuming they were accurate because they had been prepared by the drafting team. The court reiterated that the professional duty to verify lies with the practitioner whose name appears on the document, regardless of internal delegation.

Principle 4: Integrity and honesty cannot be compromised by AI use

The Code of Conduct obliges legal practitioners to avoid misleading the court, directly or indirectly. Presenting AI-generated content that contains false case law or fabricated legal principles breaches this rule, whether through dishonesty or negligence. In Mavundla, the court reaffirmed that misleading the court can occur either deliberately or through ignorance. The court added that a legal practitioner who cites authorities is tacitly representing that those authorities “…do actually exist”. Misuse of AI-generated legal references thus undermines a core ethical obligation: to present an honest and accurate account of the law.

This view was reinforced in Northbound, where the court concluded that the fact that the fictitious cases were not cited orally did not mitigate the misconduct. Heads of argument are relied upon by the court as much as oral argument, if not more. Accordingly, the matter was referred to the Legal Practice Council for investigation, as in Mavundla, despite the absence of intentional deception.

Looking ahead: The need for regulation and training

Given the growing use of AI in South African legal practice, it is increasingly urgent for regulators, legal education bodies and law firms to provide guidance on its ethical use. This could include:

  1. amending the Code of Conduct to address AI and emerging technologies explicitly;
  2. developing firm policies and risk management protocols; and
  3. providing ongoing professional training on AI tools and their limitations.

Until formal regulations are in place, judgments such as Parker, Mavundla and Northbound will continue to serve as the primary source of guidance on what constitutes ethical AI use.

Concluding remarks

South African courts have begun to articulate a clear message: AI tools may assist in legal practice, but they do not absolve practitioners of their ethical, supervisory and professional obligations. As these technologies continue to evolve, so too must the legal profession’s approach to using them responsibly. The judiciary’s early interventions provide essential guidance, but more structured, proactive governance is needed to ensure that AI use enhances, rather than undermines, the administration of justice.

VDMA’s team of experts is at your disposal for any company law assistance that you or your business may require.