
Why Do We Need Ex Ante Regulation of AI? Is Self-Regulation Not Enough?

Jul 21, 2023

3 min read



Algorithmic regulation should be rooted in ethics and be legally enforceable.


As human creations, algorithms are controlled by us; we must acknowledge that even human behaviour and decision-making are circumscribed by the law in civilised societies.


We are supposed to be social animals capable of judging right from wrong, yet we hurt and harm our fellow human beings. How can we let a self-learning machine make potentially harmful decisions without taking proactive preventive measures or holding the designers, coders and the firms they work for responsible and accountable for such decisions?


The state and multilateral organisations would be abdicating their basic responsibility if we were to rely solely on code as a means of regulation, or hope that ethical principles alone will guide business and strategic decisions on the use of AI.


Algorithmic decision-making is built on superior computing power. It can factor in many more variables than a human can, making it potentially capable of greater harm than any perpetrated by its human creators. This can happen unwittingly, thanks to the ‘black box’ problem or to scarcity, inaccuracy or inherent biases in the underlying training data or design teams. Regardless, since the outcomes can translate into terrible consequences for individuals and society, there has to be both ex ante regulation to prevent harm and ex post regulation to enforce liability and accountability and ensure redress and deterrence.
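
To make the training-data problem concrete, here is a toy sketch in Python. Everything in it is hypothetical: the lending scenario, the group labels and the approval rates are invented purely to illustrate how bias baked into historical records resurfaces, unchanged, in an algorithm's decisions.

```python
# Toy sketch with made-up numbers: bias in the training data
# resurfaces unchanged in the algorithm's decisions.
import random

random.seed(0)

# Hypothetical lending history: applicants in groups "A" and "B" are
# equally creditworthy, but past (human) decisions approved group "B"
# far less often.
history = [("A", random.random() < 0.8) for _ in range(1000)] \
        + [("B", random.random() < 0.3) for _ in range(1000)]

def learn_approval_rates(records):
    """A naive 'model' that simply reproduces historical approval rates."""
    rates = {}
    for group in ("A", "B"):
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = learn_approval_rates(history)

# Two equally qualified new applicants now face very different odds.
for group in ("A", "B"):
    print(f"group {group}: predicted approval rate {model[group]:.2f}")
# group A: predicted approval rate ~0.80
# group B: predicted approval rate ~0.30
```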


Competition law practitioners understand how independent algorithmic pricing decisions can produce tacit collusion without competitors ever meeting. The adverse outcome is facilitated by the ready, dynamic availability of data points on competitor prices in the digital environment, and by the fact that firms would naturally design algorithms to maximise profits. In an oligopolistic market, profit maximisation can entail collusion between independent algorithmic decision-making agents. When certain behaviour or decisions are illegal for humans, how can we allow algorithms to harm the economy through the same conduct? The law must consider the harm, not whether it occurred via a machine, no matter that the latter is inanimate and cannot explain itself. Justice demands accountability and liability and, consequently, laws.
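
The mechanism can be illustrated with a deliberately simplified sketch. The prices, costs and the repricing rule below are hypothetical, not any real firm's algorithm: two pricing bots, each acting purely in self-interest, mirror the rival's last price and probe upward whenever prices are matched, and ratchet their way to the monopoly price without ever communicating.

```python
# Stylised sketch of tacit algorithmic collusion (hypothetical numbers).
# Each firm's bot acts purely in self-interest: mirror the rival's last
# price, and probe a small increase whenever prices are matched.

COST = 2.0             # marginal cost: never price below this
MONOPOLY_PRICE = 10.0  # joint-profit-maximising price

def next_price(my_price, rival_price):
    if rival_price != my_price:
        # Mirror the rival: match undercuts, follow increases.
        return max(rival_price, COST)
    if my_price < MONOPOLY_PRICE:
        # Prices are matched: probe upward; profitable if the rival's
        # algorithm happens to follow the same logic.
        return min(my_price + 0.5, MONOPOLY_PRICE)
    return my_price

p1, p2 = 3.0, 4.0  # start near the competitive level
for step in range(15):
    p1 = next_price(p1, p2)  # firm 1 reprices first
    p2 = next_price(p2, p1)  # firm 2 reacts to firm 1's new price
    print(f"round {step:2d}: firm1={p1:.2f}  firm2={p2:.2f}")
# Both bots converge to 10.00: a collusive outcome reached without any
# agreement, merely from each algorithm maximising its own profit.
```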


The decision not to use algorithms in high-risk areas, where the consequences could outweigh any potential benefits, is ethical but must have the backing of the law. To state that people will still break the law is insufficient justification for not making something potentially harmful illegal. When algorithmic decision-making is used in areas such as medicine, arms and warfare, education, employment and the environment, or runs the risk of violating human rights, international consensus would be needed to protect the world from harm, place human dignity and autonomy above all else, safeguard human rights, protect the especially vulnerable, ensure equity and inclusion, and preserve the environment. This consensus must be based on ethics (‘what is desirable’) but translated into international and national laws (‘what is not permissible’).


As algorithms have many positive applications that benefit society, many stakeholders, including software experts, ethicists, privacy and consumer protection specialists, and lawyers, need to work together to ensure that legal frameworks promote benign innovation and disincentivise toxic innovation.


The irony is that states have been veering towards delegating their law-making powers to coders by transferring responsibility to the owners of vast troves of data, computing capacity and patented algorithms. Ex ante laws are needed to hold firms responsible for ethical design and outcomes when deploying AI.
