Thornton May
Columnist

AI value begins with managing the C-suite conversation

Opinion
Mar 03, 2023 | 5 mins
Artificial Intelligence

CIOs should know that AI has captured the imagination of the public, including their business colleagues. Dialogue is key to remediating misconceptions and steering the enterprise toward value creation.


Every futurist and forecaster I have talked to is convinced the transformative technology of the next seven years is artificial intelligence. Everyone seems to be talking about AI. Unfortunately, most of these conversations do not lead to value creation or greater understanding. And, as an IT leader, you can bet these same conversations are reverberating throughout your organization — in particular, in the C-suite.

CIOs need to jump into the conversational maelstrom, figure out which stakeholders are talking about AI, inventory what they are saying, remediate toxic misconceptions, and guide the discussion toward value-creating projects and processes.

A brief history of AI hype and impact

AI has been part of the IT conversation since the term was coined by Stanford University computer scientist John McCarthy in 1956. Conversations around AI have generally tracked alongside multiple waves of enthusiasm and valleys of disappointment for the technology. In 1983, the prevalent conversation regarding AI was “It’s coming, it’s coming!” thanks in part to Edward Feigenbaum and Pamela McCorduck’s The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World. And then just a year later, in 1984, a subset of AI startup companies in Silicon Valley collapsed spectacularly, ushering in a period known as “the AI winter.” At that point, AI conversations, when they occurred, typically concluded with the determination “not yet.”

Around the turn of the century we — most of us unknowingly — entered the age of artificial narrow intelligence (ANI), sometimes referred to as “weak AI.” ANI is AI that specializes in one area. John Zerilli, writing in A Citizen’s Guide to Artificial Intelligence, contends, “Every major AI in existence today is domain-specific” — i.e., ANI.

The general pattern for ANI has been that it moves into a given domain, and seven to 10 years later it becomes impossible to compete in, or even perform, that particular task or activity without AI. Executives need to have tactical conversations about which domains and activity areas — in AI-speak, which definable problems and measurable goals — should be targeted with which ANI resources.

By 2009 we were surrounded by invisible ANI, in the form of purchase, viewing, and listening recommendations; medical diagnostics; university admissions; job placement; and more. Today ANI is ubiquitous, invisible, and fundamentally misunderstood. Ray Kurzweil, computer scientist, futurist, and director of engineering at Google, keeps telling people that if AI systems went on strike, “our civilization would be crippled.”

Today the general population is not talking substantively about AI, despite the fact that ahead-of-the-curve high performers have concluded that one can never outcompete those who use AI effectively.

In The Age of AI: And Our Human Future, Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher tell us that “AI will usher in a world in which decisions are made in three primary ways: by humans [which is familiar]; by machines [which is becoming familiar]; and by collaboration between humans and machines.” Organizations need to have conversations detailing how critical decisions will be made.

Taking practical steps

Organizations need to have conversations with every employee to determine what kind of AI assistance they need to maximize their performance and engagement.

One of the most important conversations about AI that is not happening enough today is how it should be regulated. In his still-relevant mega-best-seller Future Shock, my former boss Alvin Toffler correctly prophesied a technology-intensive future and counseled the need for a technology ombudsman, “a public agency charged with receiving, investigating, and acting on complaints having to do with irresponsible application of technology.”

Fast forward to 2017, when legal scholar Andrew Tutt wrote “An FDA for Algorithms” in the Administrative Law Review, explaining the need for “critical thought about how best to prevent, deter, and compensate for the harms that they [algorithms] cause,” along with a government agency specifically tailored for that purpose.

One of the conversations that each and every one of us has to have is with our elected representatives: What is their position on AI, and what is their understanding of its impacts and potential harms?

Demis Hassabis, CEO of DeepMind Technologies, the company acquired by Google that created AlphaGo, the program that beat the world Go champion in 2016, cautions that AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization.

Elon Musk, Martin Rees — Astronomer Royal, astrophysicist, and author of On The Future: Prospects for Humanity — and the late Stephen Hawking have each warned about misusing, misunderstanding, mismanaging, and under-regulating AI.

John Brockman, who has served as literary agent for most of the seminal thinkers in the AI space and is editor of Possible Minds: Twenty-Five Ways of Looking at AI, argues that “AI is too big for any one perspective.” The best way to expand one’s understanding of this incredibly important topic is to engage in conversations. And that includes within the walls of your business.

Don’t let your organization lead itself astray with an overeager approach to AI.


Thornton May is a futurist. He has designed and delivered executive education programs at UCLA, UC Berkeley, Babson, Hong Kong University of Science and Technology, The Ohio State University (where he co-founded and directs the Digital Solutions Gallery program), and the University of Kentucky. His book, The New Know: Innovation Powered by Analytics, examines the intersection of the analytic and executive tribes.
