Peter Sayer
Executive Editor, News

The Rome Call for AI Ethics: Should CIOs heed it?

News Analysis
Feb 28, 2023 | 5 mins
Artificial Intelligence

Microsoft and IBM recently renewed support for the Rome Call for AI Ethics, which outlines six short principles to guide the development and use of AI in the enterprise.

As enterprises increasingly look to artificial intelligence (AI) to support, speed up, or even supplant human decision-making, calls have rung out for AI’s use and development to be subject to a higher power: our collective sense of right and wrong.

One such entity weighing in on the need for AI ethics is the Vatican, which exactly three years ago, on Feb. 28, 2020, brought together representatives from Microsoft and IBM to first sign the Rome Call for AI Ethics, a commitment to develop AI that serves humanity as a whole.

This ethical commitment, which brings together high-tech and religious leadership, as well as universities and government entities, was renewed in January 2023, with representatives of the Muslim and Jewish faiths joining alongside the Vatican.

In many ways, the Rome Call is symbolic, reinforcing principles that many IT vendors and enterprises have already adopted around AI’s use and development. But it also raises the profile of an emerging issue with real impact on people around the globe, one that CIOs must weigh in their approaches to AI.

Laying the groundwork

IBM and others in the IT industry had been thinking about the ethics of AI since long before signing the Rome Call, says Christina Montgomery, the company’s chief privacy officer and chair of its AI ethics board.

“It’s essentially a reiteration of principles that we had adopted internally, that Microsoft had adopted internally, and that a number of companies were adopting or thinking about at the time,” she says.

It’s natural for IBM, a company that traces its origins back over a century, to take a more holistic view of its technology, she says. “We’re very different culturally from a lot of new technology companies, and we think deeply about the technology that we’re putting into the world.”

IBM is encouraging deep thought about the ethics of AI in other ways, too, supporting the development of a network of universities that will incorporate the principles of the Rome Call for AI Ethics into their curricula. The aim is a new generation of graduates better equipped to consider such questions.

The six principles

The Rome Call itself consists of a preamble and six succinct principles that supporters commit to. In their entirety, they are:

  1. Transparency: AI systems must be understandable to all.
  2. Inclusion: These systems must not discriminate against anyone because every human being has equal dignity.
  3. Responsibility: There must always be someone who takes responsibility for what a machine does.
  4. Impartiality: AI systems must not follow or create biases.
  5. Reliability: AI must be reliable.
  6. Security and privacy: These systems must be secure and respect the privacy of users.

While software vendors Microsoft and IBM were the first two enterprises to support the Rome Call, its ethos is aimed more broadly at any organization using the technology, whether in business, government, or civil society.

It will be easier for enterprises to comply with some of these principles than with others. Reliability and security can be taken into account at every level, but CIOs may need to bake inclusion and impartiality into project requirements at an early stage.
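What baking impartiality into early requirements can look like in practice is a measurable fairness check in the delivery pipeline. The following is a minimal sketch rather than anything prescribed by the Rome Call: it assumes Python with scikit-learn, synthetic data, a hypothetical 0/1 demographic flag, and the common but illustrative “four-fifths” screening threshold.

    # Illustrative impartiality check: compare a model's positive-outcome
    # rates across a protected group before deployment. The data, model,
    # and 0.8 threshold are all assumptions for this sketch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(1000, 2))            # synthetic applicant features
    group = rng.integers(0, 2, size=1000)     # hypothetical 0/1 demographic flag
    y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    approved = model.predict(X)

    # Demographic parity: ratio of the lower approval rate to the higher one.
    rates = [approved[group == g].mean() for g in (0, 1)]
    parity_ratio = min(rates) / max(rates)
    print(f"approval rates by group: {rates}, parity ratio: {parity_ratio:.2f}")

    # The "four-fifths rule" is a rough but widely used screening threshold.
    if parity_ratio < 0.8:
        print("Potential disparate impact -- flag for review before release.")

Even a check this simple turns the impartiality principle into something a build can fail on; a real project would apply it to held-out data and more than one fairness metric.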

The principle of responsibility will require broader buy-in, as it requires a cultural shift to avoid blaming unwelcome decisions on an algorithm, whether AI-based or not.

Transparency, though, is a whole other matter.

Hurdles to answering the call

Shlomit Yanisky-Ravid, a visiting professor at Fordham University’s School of Law, says that unless we understand what an AI is really doing, we won’t be able to think about the ethical issues around it. “That’s where I see a lot of gaps and conflicts between the industry and the ethical and legal demands,” she says.

The EU’s General Data Protection Regulation (GDPR) already includes provisions that some academics construe as a right to explainability of software in general. Articles 13-15 give those who are subject to the effects of automated decisions a right to “meaningful information about the logic involved.”
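For simple models, that kind of “meaningful information” is within reach. Below is a minimal sketch, with assumed feature names and synthetic data rather than anything mandated by GDPR, showing how a logistic regression’s per-feature contributions can explain a single automated decision. The difficulty the experts describe begins where models stop being this transparent.

    # Illustrative explanation of one automated decision: a logistic
    # regression's per-feature contributions to the log-odds. Feature
    # names and data are assumptions for this sketch.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=1)
    feature_names = ["income", "debt_ratio", "years_employed"]

    X = rng.normal(size=(500, 3))             # synthetic applicant records
    y = (X @ np.array([1.0, -1.5, 0.5]) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    applicant = X[0]
    decision = model.predict(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * applicant  # per-feature log-odds terms

    print("decision:", "approved" if decision else "declined")
    for name, c in sorted(zip(feature_names, contributions),
                          key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f} log-odds")
    # This decomposition is only possible because the model is linear;
    # deep or third-party models rarely admit so direct an explanation.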

For IBM’s Montgomery, it’s clear: “Using AI models in your operations that aren’t explainable, that aren’t transparent, could have unintended consequences.”

But there’s a problem, says Yanisky-Ravid: “We can speak about transparency, we can speak about explainability, but we cannot really make it happen — at least for now.”

Her specialty is intellectual property law, where the opacity of AI systems is making for interesting cases involving the moral right of AIs to be recognized as inventors or creators.

Some of those cases involve Stephen Thaler, creator of an AI tool called Dabus that he used to design a novel food container. His initial attempts to credit Dabus as co-inventor in patent filings around the world were rejected, with patent authorities insisting only a human could be responsible for the process of invention. However, Thaler later won one case on appeal: IP Australia, the government agency, has recognized Dabus as an inventor. Other appeals are ongoing.

Some may be put off by the fact that the first signatories of the call included a representative of the Pontifical Academy for Life, an ethics think tank run by the Catholic Church, but IBM’s Montgomery says it was never intended to be just a religious call. “The goal is to extend it as much as possible.”

Whatever their beliefs, CIOs should be engaging with the ethical questions around AI right now, she says: “If you wait, it’s too late.”