Speaking at CES 2018, NVIDIA Chief Executive Jen-Hsun Huang described the computing challenge involved in enabling autonomous driving as the greatest of its kind. The company estimates the computing demands of a driverless vehicle are between 50 and 100 times more intensive than those placed on the most advanced cars available today.
“The number of challenges necessary to solve in order for the industry to bring autonomous vehicles to the world is utterly daunting,” he said. “We’ve built PCs, laptops, consoles and supercomputers, but autonomous vehicles represent a level of complexity the world has never known – it’s on all the time, monitoring multiple sensors, and because lives are at stake, its decisions must always be the right ones, made using software no one has ever known how to write.”
How, then, does the industry plan to bridge a gap of this magnitude? Many are now adamant that artificial intelligence (AI) will be indispensable in answering the questions autonomous vehicles raise. “A typical modern vehicle has up to 100 Electronic Control Units (ECUs),” explains Danny Shapiro, NVIDIA’s Senior Director of Automotive, “but whilst these are performing important functions that have societal benefits, they’re running set algorithms, meaning they perform fixed tasks. And there’s no way that computer vision using fixed algorithms can handle the diversity of things that happen on the road.”
Bence Varga, Head of European Sales at AI software company AImotive, agrees, arguing that any discussion of autonomous vehicles must have occupant and road-user protection at its core. “From a safety perspective,” he says, “traditional computer vision algorithms working on a decision-tree basis have severe limitations. Due to the diminishing returns inherent in the development of those solutions, it becomes increasingly difficult to draw up algorithms which, for example, can recognise a car from different angles, or one that’s partially occluded.” AImotive has developed aiDrive, a software suite that uses AI to enable self-driving capabilities.
Changing weather, varied surfaces, road closures, diverse driving cultures – the sheer number of variables on the road is enormous, but humans, with an advanced set of interconnected senses and decision-making abilities, are well placed to adapt. The industry has therefore turned to deep learning technology which, as drive.ai co-founder Tao Wang explains, involves concepts originally inspired by findings in neuroscience. Modern deep learning typically uses an algorithm called back-propagation, which adjusts the parameters of an artificial neural network to minimise the difference between its actual output and the desired output, the latter drawn from huge amounts of real data which, in effect, train the system.
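The training loop Wang describes can be sketched in a few lines. The example below is a deliberately minimal illustration, not anything resembling a production driving system: a single sigmoid unit (the degenerate case of a neural network) learns the logical-OR function by repeatedly nudging each parameter against its error gradient – the essence of back-propagation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data (hypothetical stand-in for "huge amounts of real data"):
# the logical-OR function, written as ((input1, input2), desired_output).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights (the network's adjustable parameters)
b = 0.0          # bias
lr = 0.5         # learning rate

for epoch in range(10000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)   # forward pass: actual output
        delta = (y - target) * y * (1 - y)       # error gradient at the output
        w[0] -= lr * delta * x1                  # backward pass: adjust each
        w[1] -= lr * delta * x2                  # parameter to shrink the gap
        b    -= lr * delta                       # between actual and desired

def predict(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b)
```

After training, `predict` returns values near 0 for (0, 0) and near 1 for the other inputs. Real driving networks apply the same adjust-against-the-gradient idea across millions of parameters and layers.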
The introduction of deep learning into the vehicle, says Wang, represents the automotive industry taking the next step with a technology it already has some experience of. “AI has already made its way into the automotive industry,” he suggests. “Adaptive cruise control, for example, is a form of AI on its own, albeit with limited operational domains. Features like ACC provide additional value to end customers. The automotive sector is undergoing drastic transformation with the rise of electric vehicles and ride-sharing, and AI can help the sector catch up with the pace of the new era and stay relevant.”
But what are the challenges for suppliers looking to enter the market and work with the automotive industry? The view across the board is plain to see – meeting the industry’s stringent safety standards. For NVIDIA, says Shapiro, this has been a decade-long process since the company first turned its attention to products beyond graphics processors. New manufacturing facilities were sought, and changes were made to ensure that products could operate across extreme temperature ranges and in harsh conditions such as vibration, shock, dirt and dust.
The goal is ISO 26262 certification, an automotive-specific standard with a focus on safety-critical systems. NVIDIA’s scalable Xavier processor, capable of 30 trillion operations per second and now being used to develop platforms by the likes of self-driving developer Aurora, meets the mark. Work continues for companies like AImotive to make the grade.
Other concerns include the potential ‘opacity’ of reasoning when AI decision-making systems are in play, given the sheer complexity of the technology involved. “There are currently some opaque areas where the reasoning behind AI decision-making is concerned,” says Varga. “Ongoing research is looking in detail at this question, and new answers will continue to arise. To ensure safety we conduct, together with our partners, in-depth inspections and benchmarks of all systems. Our neural networks are trained on annotated test data from all around the world to ensure they generalise properly in all situations.” Simulations are key in this regard, with millions of scenarios digitally rehearsed before being tested on the road.
Drive.ai’s Wang believes such concerns are applicable to any emerging technology, and that acceptance will only come with proof of safety. “Deep learning is not exactly opaque,” he suggests, “as there are means to visualise what the neural network is ‘thinking’, and informing people of the new kind of knowledge will be crucial in making them comfortable with a new technology.”
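One common family of techniques behind Wang’s point is gradient-based saliency: asking how sensitive the network’s output is to each input, which highlights what the model is ‘paying attention to’. The sketch below is a hypothetical, simplified illustration for a single sigmoid unit with made-up weights, where the gradient can be written out by hand; real systems compute the same quantity over image pixels.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for a single trained sigmoid unit with two inputs.
# The first weight is much larger, so the unit leans heavily on input 0.
w = [4.0, 0.5]
b = -2.0

def saliency(x):
    """Sensitivity of the output to each input: |dy/dx_i| = y(1-y)|w_i|."""
    y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [abs(y * (1 - y) * wi) for wi in w]

s = saliency([1.0, 1.0])
# s[0] exceeds s[1]: the first input dominates this unit's decision.
```

Visualising such sensitivities over an input image is one way developers inspect what a perception network has actually learned to respond to.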
But whilst it will be important for OEMs to understand AI, says Varga, of equal importance will be for AI to understand humans. “This means the system will have to predict what other actors on the road are going to do, and plan accordingly,” he says. “The sensor setup we are creating, including cameras supported by other sensors, won’t have blind spots. That, coupled with AI’s capabilities to understand and generalise its environment, will lead to a superhuman driver that’s safe to share the road with.”
Of course, not all the headlines about AI are positive. Notable figures including theoretical physicist Stephen Hawking and Tesla Chief Executive Elon Musk have repeatedly expressed concern. “The real risk with AI isn’t malice, but competence,” said Hawking in 2015. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.” With the automotive industry likely to play an increased role in AI development, how closely should OEMs pay attention to the prospect of new, potentially lethal safety concerns?
The good news, says Varga, is that an apocalyptic situation is unlikely. Current AI, he suggests, is a far cry from the Artificial General Intelligence envisaged in discussions of the ‘singularity’. “Our AI is trained only to drive,” he says. “It cannot modify itself, nor reprogram itself. As far as we’re concerned, the idea of an autonomous vehicle rebellion is pure and utter science fiction.”
Shapiro agrees: “Some people’s view of AI seems to be based on the Terminator movies,” he says, adding that AI systems already outperform human beings in terms of detection, tracking and understanding distance and speed. “What’s more, they don’t get distracted,” he says, “nor do they experience road rage, nor anger, and they don’t get drunk.” Far from posing a risk, AI already has the potential to deliver huge safety benefits.
Moving forward, he says, NVIDIA is confident the checks, balances and tools are in place to ensure that AI won’t create problems. “We can test and validate to see where applications work and where they fail,” he concludes. “And then, we can go back, analyse, and adjust. We have great ways of analysing and diagnosing the neural networks, and understanding how they perform, and they will only continue to improve.” AI, it seems, will prove an ongoing project which, all going well, will perfect itself over time, for the potential benefit of billions.
This article appeared in the Q1 2018 issue of Automotive Megatrends Magazine.