Rx Digital Regulation

The Gryffindor, Hufflepuff, Ravenclaw and Slytherin of Digital Regulation?

Harry Potter and the Sorting Hat, by Kirileonard

Getting Regulation Right: Regulating AI

Harry Potter struggled between Gryffindor and Slytherin before choosing.
However, to get regulation right, we must wear all four hats. Our lodestar has to be consumer protection, and it should not be surprising that consumer protection implies that every other stakeholder is satisfied too. As a digital policy and regulation professor who has been a practitioner for over thirty-three years, I have worn many hats: policymaker, regulator, lawyer, corporate executive, and academic. I encourage students to ‘sort out’ their career choices by trying on hats and discarding what doesn’t fit, but to understand digital markets and digital regulation they must wear all four stakeholders’ hats. By doing so, you respect each stakeholder’s perspective, objectives, worth, and value system. Policy and regulation must build trust between all stakeholders so that we reap the potential of digital technologies and use them responsibly.

Regulation and AI: A Fascinating Debate

When it comes to artificial intelligence, who doubts that it will transform human existence, even in its current fledgling avatar? Yet when it comes to regulating AI, debates are often heated, and in my class we examine these diverse viewpoints with relish.

Other Exciting Discoveries and Inventions 

Let us go back in time to humanity’s discovery of fire. Once fire’s potential to keep us warm and safe and to transform the human gastronomic experience had been understood, humans must have sensed that there was much more to come. As there were no laws, we roasted enemies and other annoying people. Yet we can be grateful that humanity managed not to set fire to all the forests on Earth. If so, it may be because, over the ensuing centuries, laws evolved around the irresponsible and unethical deployment of this life-changing discovery and the inventions that followed. Otherwise, for all you know, Flint may have been banished and fire banned altogether, depriving humanity of transformative progress.

Why the European approach is praiseworthy

We can take various approaches to regulating AI: the EU’s risk-based, human-rights-centric approach; China’s state-centric approach; or the US approach, which tries to balance multiple interests. I appreciate the EU for its thorough approach to regulation. They rock the art of regulation: they research, consult, explain and articulate, implement, modify based on feedback, and replace or refurbish what is not working. A policymaker-regulator’s dream recipe. Above all, they are not afraid to regulate.

The European Parliament has now approved the AI Act, which aims to:

  • Safeguard general-purpose artificial intelligence  
  • Limit the use of biometric identification systems by law enforcement  
  • Ban social scoring and AI used to manipulate or exploit user vulnerabilities  
  • Give consumers the right to launch complaints and receive meaningful explanations  

The gist, reproduced verbatim from the linked article, is as follows:

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation being linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

“Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups to develop and train innovative AI before its placement on the market.”

Some criticise regulation as anti-innovation and believe that digital innovation has thrived thanks to regulatory gaps and laissez-faire policies, an argument that may even hold an element of truth.
However, consider our continuing, and often losing, battle with online privacy, data protection, and content regulation. Where do we draw the line between the excitement and profits of some and the genocides, suicides, and lesser everyday harms suffered by others? Our experience of these ongoing struggles should inform our choices regarding AI regulation. Moreover, regulatory certainty at these embryonic stages of the AI journey is better than trying to put the AI cat back in the bag later!

Final thoughts

Regulation is not bad; it just needs to be timed right, appropriate, proportionate, and respectful of the value each stakeholder brings to the table. But the decision on how to balance the conflicting interests of various parties must put us humans, as citizens and consumers, centre stage. We need to use technology thoughtfully and responsibly. Arson is fun only for the perpetrators while it lasts; its consequences are not fun for anyone, not even them.