A note about this series:
A half-century after the terms “machine learning” and “artificial intelligence” were first coined, the age of AI is now in sight, emerging from three symbiotic silicon infrastructures: ubiquitous sensors generating massive data, high-performance communication networks, and the post-Moore’s Law era in which Huang’s Law propels exascale computing. It’s no coincidence that this emulates the human architecture: a distributed sensory apparatus, a labyrinthine nervous system, and a brain. And, as with human intelligence, AI energy use is a feature, not a bug. (Part 1, Part 2, Part 3, and Part 4.)
Will future bureaucrats impose CAFE-like fuel efficiency standards on the engines of artificial intelligence (AI)?
After all, computing necessarily uses energy. And we know that the AI computers under the hood of a useful robocar won’t have extension cords. If you do the math, when the all-robocar future does arrive, the energy used just by all those automotive AI ‘brains’ will itself exceed the fuel used today by all the cars on California roads. That is, as they say, not nothing.
Or, put differently, the energy needed by the silicon sensors and logic that navigate a car will degrade the vehicle’s propulsion fuel mileage by at least 10%, likely more. Measured in Detroit rather than Silicon Valley terms, that means a robocar’s brain will burn fuel at an equivalent rate of about 150 mpg. That may sound impressive, but it’s at least 1,000 times less efficient than the fuel use of the average natural, if addled, brain.
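The 150-mpg and 10% figures can be sanity-checked with a quick back-of-envelope calculation. The inputs below, a continuous onboard-compute draw of roughly 6.7 kW and an average driving speed of 30 mph, are illustrative assumptions consistent with reported early autonomous-vehicle prototypes, not figures stated in this article:

```python
# Back-of-envelope check of the mpg-equivalent claim for a robocar's AI 'brain'.
# All inputs are assumptions for illustration.

GASOLINE_KWH_PER_GAL = 33.7   # U.S. EPA gasoline-gallon-equivalent energy content
brain_power_kw = 6.7          # assumed continuous draw: sensors + AI compute
avg_speed_mph = 30.0          # assumed average driving speed

# Energy the 'brain' consumes per mile driven, in watt-hours.
wh_per_mile = brain_power_kw * 1000 / avg_speed_mph

# Express that energy rate as a gasoline fuel economy.
mpg_equivalent = GASOLINE_KWH_PER_GAL * 1000 / wh_per_mile
print(f"AI 'brain' fuel economy: {mpg_equivalent:.0f} mpg-equivalent")

# Fuel penalty relative to a typical ~25 mpg car: the brain's gallons-per-mile
# as a share of the propulsion gallons-per-mile.
car_mpg = 25.0
penalty = (1 / mpg_equivalent) / (1 / car_mpg)
print(f"Added fuel burden: {penalty:.0%} of propulsion fuel")
```

Under these assumptions the brain comes out at roughly 150 mpg-equivalent, and on a 25-mpg car that is an added fuel burden of about 17%, in line with the “at least 10%” claim above.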
Mark P. Mills is a senior fellow at the Manhattan Institute and a faculty fellow at Northwestern University’s McCormick School of Engineering. In 2016, he was named “Energy Writer of the Year” by the American Energy Society.