Introduction
Artificial intelligence (AI) is rapidly transforming modern workplaces, enabling unprecedented efficiencies, automation, and insight generation. From data analysis to content generation, AI tools, particularly language models, have become a part of daily operations across industries. Yet, the widespread adoption of AI presents complex compliance and privacy challenges under South Africa’s Protection of Personal Information Act 4 of 2013 (POPIA).
AI platforms, most notably freely accessible and publicly hosted ones, may operate outside the jurisdiction of South African law and are neither necessarily required to comply, nor automatically compliant, with POPIA's requirements for the processing of data. Organisations operating within South Africa that make use of such platforms nevertheless remain fully accountable under POPIA.
Despite its established reputation, POPIA remains poorly understood in many organisations, especially in the context of emerging technologies like AI. At its core, POPIA establishes statutory protection regarding the collection, processing, storage, and dissemination of personal information.
In essence, POPIA prescribes what information may be collected from a data subject (for most organisations, a client) and how the organisation may process that information.
The Requirements of POPIA
The aforementioned may still seem too broad to establish its application and the relevant compliance measures in relation to AI. A few vital provisions under POPIA are, however, critical for businesses seeking to understand and comply with the Act when considering the integration of AI solutions:
Section 10: Minimality
POPIA requires that the collection of personal information be adequate, relevant, and not excessive in relation to the purpose for which it is processed.
This principle of minimality requires that organisations collect from clients only what is necessary to provide their services, mitigating the risk of overexposure and potential breaches.
The Act defines “processing” broadly, encompassing various interactions with personal information, including the:
- collection, recording, organisation, storage, retrieval, updating, modification, and use;
- dissemination via transmission, distribution, or making information publicly available; or
- merging, linking, restriction, degradation, erasure, or destruction of data.
This broad definition becomes particularly relevant in the context of AI platforms used within a workplace. For example, when an employee inputs client data into an AI tool, multiple processing activities may be engaged, depending on the platform’s operation:
- The data is collected, recorded, and used to generate outputs.
- It may be stored or retrieved for session continuity or future training purposes.
- If the platform shares information externally or generates outputs accessible to others, dissemination occurs.
- Internal mechanisms such as linking or combining data with other inputs could involve merging or linking, and retention policies may engage restriction, degradation, or erasure processes.
Thus, even routine use of AI by employees can trigger several processing types as defined under POPIA, highlighting the need for careful oversight and adherence to compliance requirements.
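One practical way to reduce the processing footprint when employees do use such tools is to strip obvious personal identifiers from a prompt before it leaves the organisation. The sketch below is purely illustrative (the patterns, placeholder labels, and function name are assumptions, not a prescribed standard, and real deployments would need far broader coverage and human review); the 13-digit pattern targets South African identity numbers.

```python
import re

# Illustrative patterns only: a production policy would require wider
# coverage, testing, and review. Dict order determines redaction order.
PII_PATTERNS = {
    "za_id_number": re.compile(r"\b\d{13}\b"),                 # SA ID numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),      # email addresses
    "phone": re.compile(r"(?<!\w)(?:\+27|0)\d{9}\b"),          # local phone formats
}

def redact(text: str) -> str:
    """Replace recognised personal identifiers with typed placeholders
    before the text is submitted to an external AI platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Client John Doe (ID 8001015009087, jdoe@example.com, +27821234567) asked about a claim."
print(redact(prompt))
```

A step of this kind does not by itself achieve POPIA compliance, but it narrows what is collected, stored, and potentially disseminated by the platform.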
Section 11: Consent, Justification, and Objection
POPIA permits the processing of personal information, but only in specific circumstances. Section 11 outlines six lawful bases for processing personal information:
- Consent – The data subject or a competent person (if the data subject is a child) must consent to processing. Consent must be informed, voluntary, and specific, giving individuals meaningful control over their personal data. Organisations bear the burden of proof to demonstrate that valid consent was obtained.
- Contractual Necessity – Processing is permitted when necessary to perform or conclude a contract to which the data subject is a party.
- Legal Obligation – Data may be processed to comply with statutory obligations imposed on the responsible party.
- Legitimate Interest of the Data Subject – Processing may protect interests directly benefiting the individual.
- Public Law Duties – Public bodies may process personal information to fulfil statutory duties.
- Legitimate Interests of the Responsible Party or Third Parties – In the absence of explicit consent, processing may occur if it advances legitimate interests, provided these do not infringe the rights of data subjects.
What is important to note about section 11 of POPIA is that it does not in any way prohibit the use of AI in processing information; it requires any such use to be in accordance with the Act. It is therefore important, from an organisational or business perspective, to ensure that the use of such AI tools is compliant with the Act.
From a corporate compliance perspective, this use of AI tools by employees, as well as the processing performed by the specific platform, introduces several risks:
- Unintended Disclosure – Employees may inadvertently input sensitive client, employee, or internal data into AI platforms, potentially violating POPIA provisions.
- Data Retention Risk – AI systems may store information longer than necessary, contravening the principle of minimality.
- Cross-Border Exposure – Cloud-based AI services often operate internationally, complicating jurisdictional accountability and increasing vulnerability to regulatory enforcement abroad.
Solution?
The most effective strategy is to ensure the implementation of POPIA compliance policies and procedures internally. In order to achieve this, organisations should consider the following measures:
- Comprehensive AI Usage Policy
Develop a formal policy defining permissible AI tools, outlining data input restrictions, and specifying measures such as data redaction, anonymisation, or consent documentation.
- Employee Training and Awareness
Educate staff on their legal obligations under POPIA, the risks of AI misuse, and practical steps to ensure sufficient protection of personal and sensitive information, to the extent required by the Act.
- Monitoring and Auditing
Regularly audit AI interactions and usage by employees to identify non-compliant practices, detect vulnerabilities, and ensure adherence to organisational policies.
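The monitoring measure above can itself be designed with minimality in mind: an audit trail can record that an interaction took place without retaining the personal information it contained. The sketch below is a hypothetical illustration (the logger name, field names, and wrapper function are assumptions, not any standard or library API); only a hash and length of the prompt are kept.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit wrapper: "ai_usage_audit" and all field names below
# are illustrative choices, not part of any standard.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_usage_audit")

def log_ai_interaction(user_id: str, tool: str, prompt: str) -> dict:
    """Record that an AI interaction occurred without retaining the prompt
    itself: only a SHA-256 hash and the length are kept, consistent with
    the minimality principle."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    audit_log.info(json.dumps(entry))
    return entry

log_ai_interaction("emp-042", "example-llm", "Summarise this client contract")
```

Because the log contains no prompt content, it can be reviewed and retained for compliance purposes without itself becoming a repository of personal information.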
Conclusion
AI presents transformative opportunities but also exposes organisations to substantial compliance and privacy risks. Because AI platforms may not be legally bound by South African legislation, organisations bear full responsibility for ensuring internal compliance with POPIA. By implementing such measures internally, companies can exercise operational control over AI usage and adopt AI while complying fully with POPIA.
For further assistance, consult an attorney at SchoemanLaw Inc.