Grant Gross
Senior Writer

What IT leaders need to know about the EU AI Act

Feature
Apr 30, 2024 | 7 mins
Artificial Intelligence | Compliance | Government

As the legislation nears final passage, organizations developing and deploying AI will face new transparency and risk-assessment requirements, even though many of the final rules have yet to be written.

The European Parliament voted in mid-March to approve the EU AI Act, the world’s first major piece of legislation regulating the development and use of artificial intelligence applications.

The vote isn’t final passage, but it signals that many CIOs at organizations using AI tools will have new regulations to comply with, as the law will apply both to organizations developing AI systems and to those simply deploying them. The law will also extend beyond the borders of EU member nations: any company whose AI interacts with EU residents will be subject to the regulations.

AI legislation has been years in the making; the European Commission first proposed the AI Act in April 2021. Many leading voices, Elon Musk and OpenAI’s Sam Altman among them, have called for some type of AI regulation, but the EU AI Act also has its detractors.

The law will create new mandates for organizations to validate, monitor, and audit the entire AI lifecycle, says Kjell Carlsson, head of AI strategy at Domino Data Lab, a data science and AI company.

“With the passing of the EU AI Act, the scariest thing about AI is now, unequivocally, AI regulation itself,” Carlsson says. “Between the astronomical fines, sweeping scope, and unclear definitions, every organization operating in the EU now runs a potentially lethal risk in their AI-, ML-, and analytics-driven activities.”

Carlsson fears the law will have a “profound cooling effect” on AI research and adoption. The multimillion-euro fines in the legislation will translate directly into fewer AI-based products and services, he predicts. Fines can reach €35 million (US$37.4 million) or 7% of a company’s annual revenue, whichever amount is greater; for a company with €1 billion in annual revenue, for example, that cap works out to €70 million.

Still, organizations can’t just ignore the AI revolution to avoid the regulations, Carlsson adds. “Using these technologies is not optional, and every organization must increase their use of AI in order to survive and thrive,” he says.

What’s in the legislation?

The EU AI Act is broad, comprising 458 pages, but it covers three major areas:

Banned uses of AI: The regulations ban AI applications that threaten human rights, including biometric categorization systems based on sensitive characteristics. The untargeted scraping of facial images from the internet or security footage to create facial recognition databases is also prohibited.

The law would also ban AI systems that monitor employee or student emotions, conduct social scoring, or engage in predictive policing based on a person’s profile or characteristics. Also prohibited are AI systems that manipulate human behavior or exploit people’s vulnerabilities.

Obligations for high-risk AI systems: Organizations using AI tools that pose significant potential harm to health, safety, human rights, the environment, democracy, and the rule of law are also regulated. They must conduct risk assessments, take steps to reduce risk, maintain use logs, comply with transparency requirements, and ensure human oversight. EU residents will have the right to submit complaints about high-risk AI systems and to receive explanations of the decisions they produce.

Examples of high-risk systems include AIs used in critical infrastructure, education and vocational training, employment decisions, healthcare, banking, and those that could influence elections. Some law enforcement and border control agency uses of AI will be regulated.

Transparency requirements: General-purpose AI systems, and the AI models they are based on, must comply with transparency requirements, such as publishing detailed summaries of the content used for training. The most powerful general-purpose AIs will face additional regulations, and they must perform model evaluations, assess and mitigate risks, and report on incidents.

In addition, deepfakes (artificial or manipulated image, audio, and video content) will need to be clearly labeled.
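
The law doesn’t say how those labels must be applied; the technical specifics are among the details still to be written. Purely as an illustrative sketch, assuming a PNG workflow, the open-source Pillow imaging library, and invented metadata key names, one minimal form of machine-readable disclosure is a label embedded in the image file itself:

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(in_path, out_path, generator):
    # Re-save a PNG with metadata disclosing that it is synthetic content.
    # The keys "ai_generated" and "generator" are invented for illustration;
    # the EU AI Act does not specify a labeling format.
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # machine-readable disclosure flag
    meta.add_text("generator", generator)   # which model or tool produced it
    img.save(out_path, pnginfo=meta)

def is_labeled_ai_generated(path):
    # Check whether a PNG carries the disclosure flag.
    return Image.open(path).text.get("ai_generated") == "true"

In practice, organizations would more likely adopt an interoperable provenance standard such as C2PA’s Content Credentials than ad hoc metadata, but the principle of attaching a machine-readable disclosure to the asset itself is the same.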

Transparency and rulemaking concerns

Lawyers and other observers of the EU AI Act point to a couple of major issues that could trip up CIOs.

First, the transparency rules could be difficult to comply with, particularly for organizations that don’t have extensive documentation about their AI tools or don’t have a good handle on their internal data. The requirements to monitor AI development and use will add governance obligations for companies using both high-risk and general-purpose AIs.

Second, although parts of the EU AI Act won’t go into effect until two years after final passage, many of the implementing details have yet to be written. In some cases, regulators don’t have to finalize the rules until six months before the law goes into effect.

The transparency and monitoring requirements will be a new experience for some organizations, Domino Data Lab’s Carlsson says.

“Most companies today face a learning curve when it comes to capabilities for governing, monitoring, and managing the AI lifecycle,” he says. “Except for the most advanced AI companies or in heavily regulated industries like financial services and pharma, governance often stops with the data.”

The law will require high-risk AI systems to provide extensive documentation about their AI operations and use of data, adds Julie Myers Wood, CEO of Guidepost Solutions, a compliance and cybersecurity vendor. Many companies will need to increase their investments in data management and application development processes, and the law may even require the redesign of some AI systems to make them more interpretable and explainable, she adds.

“Compliance could be particularly challenging for companies that rely heavily on AI models that inherently lack transparency, such as deep neural networks, if these issues were not carefully addressed during the development or acquisition lifecycle,” she says.
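
What that documentation should look like is still undefined, since the implementing rules haven’t been written. Purely as a hedged sketch, with every field name hypothetical, here is the kind of structured, per-decision record in Python that a deployer of a high-risk system might keep to support the use-log, human-oversight, and explanation obligations described above:

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class AIDecisionRecord:
    # One entry in an append-only use log for a high-risk AI system.
    # Every field name here is hypothetical; the EU AI Act's implementing
    # rules will define the actual documentation requirements.
    system_name: str               # which AI system produced the decision
    model_version: str             # exact model version, for reproducibility
    purpose: str                   # the system's documented intended use
    inputs_summary: str            # description (not a raw copy) of the inputs
    decision: str                  # the output or recommendation produced
    explanation: str               # human-readable rationale, where available
    human_reviewer: Optional[str]  # who exercised oversight, if anyone
    overridden: bool = False       # whether a human changed the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self):
        # Serialize for writing to an append-only audit log.
        return json.dumps(asdict(self))

record = AIDecisionRecord(
    system_name="loan-screening",  # hypothetical high-risk use case
    model_version="2024.03-rc1",
    purpose="pre-screen consumer credit applications",
    inputs_summary="applicant financials, excluding protected attributes",
    decision="refer to human underwriter",
    explanation="debt-to-income ratio above policy threshold",
    human_reviewer="j.doe",
)
print(record.to_json())

The value of such a record lies less in the exact schema than in the discipline: decisions logged with model versions and reviewers as they happen can be exported on demand rather than reconstructed after the fact.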

While the law isn’t technically retroactive, the transparency rules will apply to AI systems that have already been developed or deployed, notes Nichole Sterling, a partner at law firm BakerHostetler who focuses on data privacy and cross-border legal issues.

Companies using AI should begin documenting their processes and AI data use and examine their data management practices now, adds James Sherer, a partner at BakerHostetler and co-leader of its emerging technology and AI teams.

“If you have very good practices, you probably have the basics of a lot of these things in place,” he says. “If you don’t, there’s going to be a big documentation push, and you’re probably not going to be able to prove half of it.”

The unknown unknowns

Meanwhile, EU regulators have up to 18 months from final passage to write many of the specific definitions and rules in the law. The law contains a lot of requirements, and some of them are likely to be nuanced, Sherer says.

“There’s a lot of moving parts, and there’s a lot of boxes to be checked,” Sherer says. “There are a lot of holes that need to be filled in by regulatory input.”

Observers of the law are waiting on guidelines for assessing high-risk AI systems, examples of use cases, and possible codes of conduct, Sterling says. “That would be really helpful to have,” she says.

Finally, the legislation focuses more on the effect of AI systems than on the systems themselves, which could make compliance difficult, given the rapid advancements in AI and its unpredictability, Sherer says.

“You may have an idea of what a system is going to do,” he says. “If its effects start to change, then you’ve got a lot of these requirements that would trail behind that.”

Grant Gross, a senior writer at CIO, is a long-time technology journalist. He previously served as Washington correspondent and later senior editor at IDG News Service. Earlier in his career, he was managing editor at Linux.com and news editor at tech careers site Techies.com. In the distant past, he worked as a reporter and editor at newspapers in Minnesota and the Dakotas.
