LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE

INTRODUCTION

There has been, both locally and abroad, an increasing drive towards incorporating artificial intelligence (“AI”) and machine learning into businesses and products. Among the main benefits associated with AI are the ability to streamline operations, efficiently analyse user behaviour and better predict the purchasing behaviour of consumers. However, as in most industries which have seen rapid technological advances, law and policy makers have battled to keep up and stay ahead of the technological curve when it comes to AI. This raises the question: what is AI and how should it be regulated in South Africa?

WHAT IS AI?

Whilst there is no universally agreed definition of AI, it can generally be described as the science and engineering of making intelligent machines, especially intelligent computer programs, which are able to perform tasks normally requiring human intelligence. It is related to the task of using computers to understand human intelligence, whereby machines learn from experience, adjust to new inputs and perform or simulate human-like behaviour or tasks. AI has characteristics that enable machines to operate independently of human intervention.

Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision, whilst AI programming generally focuses on three cognitive skills: learning, reasoning and self-correction.

Many organisations and industries are using AI to improve consumer experiences. This includes the insurance industry, where chatbots are being developed to answer consumers’ questions regarding financial products, and the banking sector, where AI is being used to decide whether to grant a loan or finance a vehicle.

THE LEGAL REGULATION OF AI

Whilst laws like the General Data Protection Regulation (EU) 2016/679 and the Protection of Personal Information Act 4 of 2013 (“POPI”) regulate the automated processing of data, there are currently no policy documents or pieces of legislation which specifically regulate AI in South Africa. With the upward trend of AI globally, numerous jurisdictions have now adopted AI strategies. In adopting its own AI regulation strategy, South Africa can therefore look further afield to better understand how other jurisdictions are regulating AI.

In Canada, the federal government has launched a Pan-Canadian AI Strategy with a strong focus on funding AI research rather than developing regulations and governance structures from the outset. Kenya too has adopted an AI strategy, with the Kenyan government creating a Blockchain and AI task force whose goal is to provide the government with recommendations on how to harness emerging technologies (such as AI) and apply them to various sectors.

A positive step towards formalising the regulation of AI in South Africa has arrived in the form of the President establishing the Presidential Commission on the Fourth Industrial Revolution (“4IR”), with the aim of prioritising interventions to take advantage of rapid technological change. The 4IR manifests itself through technological innovations across all levels of society and has necessitated the development of new policies, strategies and innovation plans to enable an inclusive approach, with government playing a leadership role.

With the advent of AI across various sectors, numerous questions are posed: does the AI or its creator own the intellectual property created by the AI? If the AI creates new content outside of its predefined code, to whom does it belong? And how will autonomous motor vehicles be governed and regulated?

This raises the further question of whether existing legislation can adequately cater for AI. Are the provisions of POPI sufficient to cater for situations where the personal information of a data subject is obtained through AI? Do the existing intellectual property laws regulate scenarios where intellectual property is created through AI? And will existing motor vehicle and traffic laws be able to deal adequately with autonomous vehicles?

In order to answer these questions and to develop the legal regulation of AI in South Africa, it would be key for policy and law makers to fully and comprehensively understand AI, which can be achieved by establishing strategies and task forces such as those in Canada and Kenya. Perhaps it is best for regulators to be fully informed about AI before rushing to create legislation governing it. It would also be imperative for government to regulate AI according to the way it will actually be used by entities and individuals; there is therefore no one-size-fits-all approach which can be adopted.

When regulating AI, policy and law makers should be careful not to stifle innovation through over-regulation and should ensure that the emerging policy discussion is framed in a way that allows the technology to thrive and provide competitive advantages for entities implementing AI, without introducing new risks.

A further issue which arises in the regulation of AI is that innovation always risks outpacing regulation. A regulatory system may stagnate whilst technological innovation accelerates at a fast rate, and regulators may lack expertise in the technological field, complicating their ability to manage the relationship between existing regulations and new technology.

It is clear that the question of how and to what extent AI should be regulated in South Africa is difficult to grasp given its fast-paced development. When a new innovation such as AI is introduced, its immediate regulation risks being counter-productive, since little is known about AI’s potential impact on society. However, if regulators take the opposite approach and wait in order to reduce uncertainty about the impact of AI, it may by then be more difficult to regulate AI effectively, as the technology would have matured and may have become entrenched in society.

An ex-post system of regulating AI after an AI system causes harm is problematic because of AI’s characteristic autonomy, which creates issues of foreseeability, causation and control. An ex-post approach is therefore poorly suited to minimising public risk, particularly when those risks are as significant as those posed by AI development. On the other hand, ex-ante regulation, being the regulation of AI before it is fully developed, is also challenging: it may be difficult to identify where AI development is occurring, as it can be conducted with little visible infrastructure, and regulators may not be able to reverse-engineer an AI system to understand the public risks it poses.

These unique challenges in regulating AI suggest that South African regulators should not rely on traditional regulatory approaches to effectively minimise the risk from AI development. Policy and law makers should therefore aim to strike a balance between protection and innovation and develop relevant safety standards that anticipate both ex-ante and ex-post challenges. In doing so, it may be greatly beneficial to include the developers of AI in the drafting process in order to adequately address and understand the technical nuances of AI.

 

Published: 31 March 2020