‘The Age of AI and Our Human Future’ Review: Where Is My Thinking Machine?


The Wall Street Journal December 1, 2021

The next step in artificial intelligence remains largely unknown. Predicting its future implications means spinning out ‘what if’ scenarios.

Amazement and alarm have been persistent features, the yin and yang, of discourse about the power of computers since the earliest days of electronic “thinking machines.”

In 1950, a year before the world’s first commercial computer was delivered, the pioneering computer scientist Alan Turing published an essay asking “Can machines think?” A few years later another pioneer, John McCarthy, coined the term “artificial intelligence”—AI. That’s when the trouble began.

The sloppy but clever term “artificial intelligence” has fueled misdirection and misunderstanding: Imagine, in the 1920s, calling the car an artificial horse, or the airplane an artificial bird, or the electric motor an artificial waterwheel.

“Computing” and even “supercomputing” are clear, even familiar terms. But the arrival of today’s machines that do more than calculate at blazing speeds—that instead engage in inference to recognize a face or answer a natural language question—constitutes a distinction with a difference, and raises practical and theoretical questions about the future uses of AI.

The amount of money chasing AI alone suggests something big is happening: annual global venture investing in AI startups has soared from $3 billion a decade ago to $80 billion in 2021. Over half is with U.S. firms, about one-fourth with Chinese. More than six dozen AI startups have valuations north of $1 billion, chasing applications in every corner of the economy from the Holy Grail of self-driving cars to healthcare and shopping, and from shipping to security.

Consider another metric of AI’s ascendance: An Amazon search for “artificial intelligence” yields 50,000 book titles. Joining that legion comes “The Age of AI and Our Human Future.” Its multiple authors—Henry Kissinger, the former secretary of state; Eric Schmidt, the former CEO of Google; and Daniel Huttenlocher, dean of MIT’s Schwarzman College of Computing—say their book is a distillation of group discussions about how AI will “soon affect nearly every field of human endeavor.”

“The Age of AI” doesn’t tackle how society has come to such a turning point. It poses more questions than it answers, and focuses, in the main, on news and disinformation, healthcare and national security, with a smattering about “human identity.” That most AI applications are nascent is evidenced by the authors’ need to fall back frequently on “what if” in an echo of similar discussions nearly everywhere.

The authors are animated by the sense many share that we’re on the cusp of a broad technological tipping point of deep economic and geopolitical consequence. “The Age of AI” makes the indisputable case that AI will add features to products and services that were heretofore impossible.

In an important nod to reality, the authors highlight “AI’s limits,” the problems with adequate and useful datasets and the “brittleness,” especially the “narrowness,” of AI—an AI that can drive a car can’t read a pathology slide, and vice versa.

However, AI’s narrowness is a feature, not a bug. This explains why hundreds of AI startups target narrow applications within a specific business domain. It explains why apps have been so astonishingly successful (consumers have access to more than 200 million apps, most of which are not games). Many are powered by hyper-narrow AI and enabled by a network—the Cloud—of unprecedented reach.

The book’s discursive nature illuminates, if unintentionally, a common challenge in forecasting AI: It’s far easier today than in Turing’s time to drift into the hypothetical. While narrow AI can be awesome, there’s yet a very long road to the computational horsepower needed to go from an AI “engine” that can infer “if you liked y, you’ll probably like x,” to broad existential questions of AI’s threat to “perceptions of reality” and humanity.

The authors’ occasional hyperbole (they’re hardly alone) emerges from the assumption that artificial general intelligence, AGI, is imminent. Even though they do note that “scientists and philosophers disagree about whether true AGI is even possible,” they still ask what will become of human identity as “machines increasingly perform tasks only humans used to be capable of.” But machines have inspired that question since, well, the dawn of machines.

Such AGI imaginings are redolent of last decade’s (discredited) trope of a “singularity” wherein computing power supersedes the brain. One might note that science remains a very long way from fully explaining human intelligence, memory and wisdom. More practically, Facebook, to take a famous example, uses AI to help flag “inappropriate” content but still employs 15,000 human “content moderators.” And neither can easily conquer the infamous “I know it when I see it” problem of pornography.

While “The Age of AI” briefly cautions about the astounding computational power needed to approach AGI, the technical literature today is replete with analyses of the staggering power-needs of even extant, narrow AI. Researchers point out that the learning phase for a single AI application can consume more energy than 10,000 cars do in a day. Overall, the AI rush has driven a 300,000-fold aggregate growth in computer power dedicated to training models over the past half-dozen years.

If there are challenges to discerning the future, there are also unexamined lessons from the recent past. The authors’ exploration of global competition for AI dominance is reminiscent of the early 1980s, when President Reagan ignored exhortations to expand government spending on “next-generation” computing to counter the programs of Japan, that era’s “Asian tiger.” Relevant today, if unexplored in “The Age of AI,” is exactly how America’s innovators went on to dominate the subsequent digital decades.

Despite the authors’ declared optimism, they are captivated by the notion that AI “is changing human thought.” History will likely record that human thought and identity both endure and prosper.

This piece originally appeared in The Wall Street Journal

______________________

Mark P. Mills is a senior fellow at the Manhattan Institute and a partner in Cottonwood Venture Partners, an energy-tech venture fund.

Photo by Peshkova/iStock
