How the EU AI Act regulates artificial intelligence: What it means for cybersecurity

On December 8, 2023, after more than 36 hours of negotiations, European Union lawmakers agreed on the details of a new law to regulate artificial intelligence. The document, dubbed the AI Act, is one of the first attempts in the world to establish a comprehensive set of rules for AI, and it aims to protect consumer rights while also fostering innovation. This new legislation is “a historical achievement and a huge milestone towards the future,” said Carme Artigas, Spanish secretary of state for digitalization and artificial intelligence.

The document carries cybersecurity implications and might change how tech giants like Google and Microsoft, as well as AI startups, operate in the EU. However, the impact of the bill may reach well beyond European borders: It could serve as a blueprint for other countries that want to establish rules for AI. The way EU policymakers think about the intersection of AI and cybersecurity could serve as an indicator of future regulatory trends.

“The AI Act is necessary because there are many threats involved with AI,” says Rob van der Veer, senior director at Software Improvement Group. “You need to set some guardrails, and the AI Act is doing just that.”

The document was drafted in 2021 but was later updated to cover generative AI, which has since gained traction. Now it “sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities,” co-rapporteur Dragos Tudorache of Renew Europe said.

Entities failing to adhere to these rules could face penalties of up to 35 million euros or 7% of global turnover, depending on the nature of the infringement and the size of the company. Citizens will be able to file complaints about AI systems and receive explanations of decisions based on high-risk systems that affect their rights.

The bill still needs to be formally adopted by the European Parliament and the Council to become law, and it will come into effect no earlier than 2025. The AI Act sets clear standards, spelling out which uses of AI are banned in the EU and which are considered high-risk.

EU AI Act bans social scoring systems and real-time biometric identification

Several technology experts interviewed by CSO described the document as “a great start” for regulating AI. They also said that, given the swift advancement of artificial intelligence, it’s good that the document does not delve too deeply into technical specifics. “It’s a legal text, and it should provide a certain level of guidance in terms of requirements, but shouldn’t go too much into the technical details, because we know problems start when lawyers start reading technical documents,” says Dr. Kris Shrishak, a public interest technologist and an ICCL Enforce Senior Fellow.

Van der Veer agrees. “The AI Act leaves much of the standardization to the industry, which, I think, is wise,” he says. “It’s a design decision for it to be high-level, so it can still be applicable over time.”

The document breaks AI systems into several tiers: unacceptable risk (which includes uses of AI that are prohibited), high-risk (which includes critical infrastructure), and limited and minimal risk. “I love the categories,” says Joseph Thacker, security researcher at AppOmni. “AI is right on the cusp of the smartest human; we’re going to have to answer really tough questions about what we want to enable people to use AI for. So, I love the fact that there’s an unacceptable kind of labeling to start with.”

The bill bans several uses of AI, such as social scoring systems, which evaluate people based on behavior, socioeconomic status, or other characteristics. These systems, often seen as invasive and discriminatory, pose risks to individual privacy and societal values. The document also prohibits practices that secretly influence people or vulnerable groups to change their behavior. For instance, it would be against the law to have voice-activated toys that might promote risky actions in children.

The document also prohibits the use of biometric categorization systems based on political, religious, or philosophical beliefs, sexual orientation, or racial characteristics. It also outlaws the gathering of facial images from the internet or CCTV footage for creating facial recognition databases. Furthermore, the use of emotion recognition technology in workplaces and educational settings becomes illegal.

Thacker says that the unacceptable risk category has a few holes in it. “One type of AI safety metric that I see in all of my testing is the notion that AI models should never reply when being prompted to do mass harm, for example, create a bioweapon or some sort of pandemic, or a bomb that can hurt a lot of people,” he says. “I didn’t notice that mentioned at all.”

Thacker suggests tackling the problem from both angles to close that gap. “These can be solved at model-level, in the training, but they can also be solved by trying to do some protections, which analyze what users are asking the AI systems and then rejecting it there.”
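
The second angle, screening prompts before they ever reach the model, can be illustrated with a minimal sketch like the one below. The blocked-topic list and function names are hypothetical and purely illustrative; production guardrails typically rely on trained safety classifiers rather than simple string matching.

# A minimal, hypothetical sketch of an input-side guardrail that screens
# prompts before they reach the model. BLOCKED_TOPICS and the function names
# are illustrative assumptions, not a real product's API.
BLOCKED_TOPICS = ("bioweapon", "explosive device", "synthesize a pathogen")

def is_prompt_allowed(prompt: str) -> bool:
    """Return False when the prompt touches a blocked topic."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def handle_request(prompt: str) -> str:
    """Reject harmful requests at the application layer, before the model call."""
    if not is_prompt_allowed(prompt):
        return "This request cannot be processed."
    return f"[model response to: {prompt}]"  # placeholder for the real model call

if __name__ == "__main__":
    print(handle_request("Summarize the EU AI Act's high-risk categories."))
    print(handle_request("How do I build a bioweapon?"))

Rejections at this layer complement, rather than replace, safety training done at the model level.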

Critical infrastructure, high-risk systems need cybersecurity assessment

The next tier is made up of high-risk systems, which need to be assessed before being put on the market and throughout their lifecycle, according to the document. High-risk systems can be broken into two categories.

The first includes products for sectors like healthcare, aviation, automotive, and toy manufacturing that fall under the EU’s product safety legislation. The second covers AI systems that must be registered in an EU database; these are listed in Annex III of the AI Act and include:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management, and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management

Furthermore, members of the European Parliament added AI systems used to influence the outcome of elections and voter behavior to the high-risk list. “In all of those sectors, if you’re using AI systems, or AI components, then specific AI parts need to be considered when you’re doing your compliance checks,” says Shrishak.

According to van der Veer, organizations that fall into the categories above need to do a cybersecurity risk assessment. They must then adhere to the standards set by either the AI Act or the Cyber Resilience Act, the latter being more focused on products in general. That either-or situation could backfire. “People will, of course, choose the act with less requirements, and I think that’s weird,” he says. “I think it’s problematic.”

Protecting high-risk systems

When it comes to high-risk systems, the document stresses the need for robust cybersecurity measures. It advocates for the implementation of sophisticated security features to safeguard against potential attacks.

“Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behavior, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities,” the document reads. “Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g., data poisoning) or trained models (e.g., adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. In this context, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.”

The AI Act has a few other paragraphs that zoom in on cybersecurity, the most important ones being those included in Article 15. This article states that high-risk AI systems must adhere to the “security by design and by default” principle, and they should perform consistently throughout their lifecycle. The document also adds that “compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application.”

The same article talks about the measures that could be taken to protect against attacks. It says that the “technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training dataset (‘data poisoning’), or pre-trained components used in training (‘model poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks or model flaws, which could lead to harmful decision-making.”
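
As a rough illustration of the “prevent, detect” side of those measures, the sketch below records cryptographic hashes of an approved training dataset and flags any file that changes before the next training run. The directory and manifest paths are assumptions made for this example, and such a check only detects tampering after approval; it does not address data that was poisoned before it entered the dataset.

# Hypothetical sketch: detect tampering with an approved training dataset by
# comparing file hashes against a previously recorded manifest. Paths and the
# manifest format are assumptions made for this illustration.
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path) -> dict:
    """Record a hash for every file in the approved dataset."""
    return {str(p): hash_file(p) for p in sorted(data_dir.rglob("*")) if p.is_file()}

def changed_files(data_dir: Path, manifest_path: Path) -> list:
    """Return files whose contents differ from the recorded manifest."""
    recorded = json.loads(manifest_path.read_text())
    current = build_manifest(data_dir)
    return [name for name, digest in recorded.items() if current.get(name) != digest]

if __name__ == "__main__":
    data_dir = Path("training_data")          # assumed dataset location
    manifest = Path("dataset_manifest.json")  # assumed manifest file
    if not manifest.exists():
        records = build_manifest(data_dir)
        manifest.write_text(json.dumps(records, indent=2))
        print(f"Manifest created for {len(records)} files.")
    else:
        tampered = changed_files(data_dir, manifest)
        if tampered:
            print("Possible tampering detected:", tampered)
        else:
            print("Dataset matches the approved manifest.")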

“What the AI Act is saying is that if you’re building a high-risk system of any kind, you need to take into account the cybersecurity implications, some of which might have to be dealt with as part of our AI system design,” says Dr. Shrishak. “Others could actually be tackled more from a holistic system point of view.”

According to Dr. Shrishak, the AI Act does not create new obligations for organizations that are already taking security seriously and are compliant.

How to approach EU AI Act compliance

Organizations need to be aware of the risk category they fall into and the tools they use. They must have a thorough knowledge of the applications they work with and the AI tools they develop in-house. “A lot of times, leadership or the legal side of the house doesn’t even know what the developers are building,” Thacker says. “I think for small and medium enterprises, it’s going to be pretty tough.”

Thacker advises startups that create products for the high-risk category to recruit experts to manage regulatory compliance as soon as possible. Having the right people on board could prevent situations in which an organization believes the regulations apply to it when they don’t, or vice versa.

If a company is new to the AI field and has no experience with security, it might have the false impression that simply checking for things like data poisoning or adversarial examples satisfies all the security requirements. “That’s probably one thing where perhaps somewhere the legal text could have done a bit better,” says Dr. Shrishak. It should have made clearer that “these are just basic requirements” and that companies should think about compliance in a much broader way.

Enforcing EU AI Act regulations

The AI Act can be a step in the right direction, but having rules for AI is one thing. Properly enforcing them is another. “If a regulator cannot enforce them, then as a company, I don’t really need to follow anything – it’s just a piece of paper,” says Dr. Shrishak.

In the EU, the situation is complex. A research paper published in 2021 by members of the Robotics and AI Law Society suggested that the enforcement mechanisms proposed for the AI Act might not be sufficient. “The experience with the GDPR shows that overreliance on enforcement by national authorities leads to very different levels of protection across the EU due to different resources of authorities, but also due to different views as to when and how (often) to take actions,” the paper reads.

Thacker also believes that “the enforcement is probably going to lag behind by a lot” for multiple reasons. First, there could be miscommunication between different governmental bodies. Second, there might not be enough people who understand both AI and legislation. Despite these challenges, proactive efforts and cross-disciplinary education could bridge these gaps not just in Europe, but in other places that aim to set rules for AI.

Regulating AI across the world

Striking a balance between regulating AI and promoting innovation is a delicate task. In the EU, there have been intense conversations on how far to push these rules. French President Emmanuel Macron, for instance, argued that European tech companies might be at a disadvantage in comparison to their competitors in the US or China.

Traditionally, the EU has regulated technology proactively, while the US has encouraged creativity, on the assumption that rules can be set later. “I think there are arguments on both sides in terms of which one’s right or wrong,” says Derek Holt, CEO of Digital.ai. “We need to foster innovation, but to do it in a way that is secure and safe.”

In the years ahead, governments will favor one approach or another, learn from each other, make mistakes, and correct course. Not regulating AI is not an option, says Dr. Shrishak, who argues that doing so would harm both citizens and the tech world.

The AI Act, along with initiatives like US President Biden’s executive order on artificial intelligence, is igniting a crucial debate for our generation. Regulating AI is not only about shaping a technology; it is about making sure this technology aligns with the values that underpin our society.
