Danny Shapiro is senior director of automotive at NVIDIA, a company at the forefront of driverless car computing. He spoke to Transport Network about the central debates around connected and autonomous vehicles (CAV).
Developing driverless cars is one of the ‘hardest computer challenges ever’ but it can and will succeed, Mr Shapiro (pictured) told Transport Network.
He said: ‘The complexity is enormous, one of the hardest computer challenges ever. [But] if we take human-driven cars off the road we could solve the problem today.’
When asked whether a business model had been developed yet for driverless cars, Mr Shapiro insisted that there were CAV markets and said NVIDIA was working with around 400 companies on various projects.
‘The attempts to roll out the technology are in more contained spaces. We are working with Volvo Trucks and in mining and quarry work where you don’t have drivers. Some people are looking at college campuses; there is a San Francisco start-up working on driverless postal deliveries, and we will see more and more robot taxi trials happening.’
The firm is also working with British AI start-up Academy of Robotics, which has developed Europe’s first roadworthy autonomous delivery vehicle, and is already involved in a partnership with infrastructure giant Eurovia to support highways maintenance.
Mr Shapiro also argued that in the next year or so the latest class of vehicles will leverage more and more Level 4 CAV technology: the ability to operate on a predetermined route with a driver in the vehicle who can take over if necessary.
Level 5 would be full autonomy, capable of regular road use and interaction with other drivers, which is something Mr Shapiro believes is possible and the company is working towards.
Divide and conquer
NVIDIA is taking a ‘divide and conquer’ approach to the problem by identifying and tackling ‘one critical piece of functionality at a time’, each one being dubbed a NVIDIA Drive AV ‘mission’.
There are scores of such missions underpinned by deep neural networks (DNN) – a complex system of gathering and processing data designed to mimic the human brain.
A neural network has an input and an output layer of computational nodes; a network with more than one ‘hidden layer’ – a set of processing stages between the input and output layers – is generally referred to as a DNN.
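The layered structure described above can be sketched in a few lines of Python. This is a toy illustration with random placeholder weights – real networks learn their weights from data, and this is in no way NVIDIA’s code:

```python
import numpy as np

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0.0, x)

# Toy DNN: 4 inputs -> two hidden layers of 8 nodes -> 2 outputs.
# Weights here are random placeholders; training would set them.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 8)), np.zeros(8)),   # the 'hidden' layers
    (rng.standard_normal((8, 2)), np.zeros(2)),
]

def forward(x, layers):
    """Pass an input through each layer in turn."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:  # hidden layers use the non-linearity
            x = relu(x)
    return x

out = forward(np.ones(4), layers)
print(out.shape)  # (2,)
```

Because there is more than one hidden layer between input and output, this counts as ‘deep’ in the sense used above.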
Among NVIDIA’s long list of DNNs and driverless missions, it has developed a WaitNet DNN that is able to detect intersections without using a map; a ClearSightNet DNN trained to evaluate cameras’ ability to see clearly; and a LaneNet DNN, which ‘increases lane detection range, lane edge recall, and lane detection robustness with pixel-level precision’.
It also uses recurrent neural networks for tasks like predicting the future – or at least the future position and velocity of dynamic objects such as cars and pedestrians. This uses computational methods and sensor data, such as a sequence of images, to figure out how an object is moving in time.
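A recurrent network is far more sophisticated, but the underlying idea – extrapolating an object’s motion from a sequence of observations – can be shown with a simple constant-velocity estimate (an illustrative sketch with made-up numbers, not NVIDIA’s method):

```python
import numpy as np

# Observed positions of a pedestrian over recent frames (x, y in metres),
# sampled at 10 Hz. The values are illustrative only.
dt = 0.1
track = np.array([
    [0.0, 0.0],
    [0.3, 0.1],
    [0.6, 0.2],
    [0.9, 0.3],
])

# Estimate velocity from the most recent displacement between frames
velocity = (track[-1] - track[-2]) / dt

def predict(position, velocity, seconds):
    """Extrapolate a future position assuming constant velocity."""
    return position + velocity * seconds

future = predict(track[-1], velocity, 0.5)
print(velocity)  # [3. 1.]  -> 3 m/s east, 1 m/s north
print(future)    # [2.4 0.8] -> predicted position 0.5 s ahead
```

In practice the learned network conditions on many frames and many object types, which is why sensor data variety matters so much, as Mr Shapiro notes below.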
NVIDIA is a world leader in this type of machine learning and high-speed computing, particularly when it comes to interactive 3D graphics. It originally made its name in computer gaming after revolutionising the industry with its GPU (graphics processing unit) and advancing ‘parallel computing’, which resulted in much faster and more complex computing power.
The computer processor giant, which has a multi-billion pound annual turnover, went on to use these computing systems to help advance artificial intelligence and deep learning.
This software can take in data that it is trained to identify – such as certain objects or patterns – and develop inference processes, where the software can identify patterns it has never seen before through its existing neural network base.
Key to this is the amount and variety of data and simulations you can use to build up the software’s neural knowledge.
Mr Shapiro points out: ‘It’s not important how many miles a CAV has driven. What is important is the variety - covering an area of different driving behaviours so all these varieties are factored in.’
Another mission NVIDIA has developed is the ‘Safety Force Field’ (SFF). This ‘analyses and predicts the dynamics of the surrounding environment by taking in sensor data and determining a set of actions to protect the vehicle and other road users’.
‘Backed by robust calculations, SFF makes it possible for vehicles to achieve safety based on mathematical zero-collisions verification,’ NVIDIA says.
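The core intuition – that collisions are impossible if every vehicle keeps clear of the space it would need to stop – can be illustrated with a one-dimensional following-distance check. This is our own simplification of the stated principle, with assumed deceleration and reaction-time figures, not NVIDIA’s implementation:

```python
def stopping_distance(speed_mps: float, decel_mps2: float, reaction_s: float) -> float:
    """Distance covered during reaction time plus braking to a halt."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def safe_following_gap(follower_speed: float, leader_speed: float,
                       decel: float = 4.0, reaction: float = 0.5) -> float:
    """Gap needed so the follower can stop even if the leader brakes hard.

    If every vehicle maintains at least this gap, the 'claimed' stopping
    spaces never overlap, so (in this simplified model) a rear-end
    collision cannot occur.
    """
    leader_stopping = leader_speed ** 2 / (2.0 * decel)
    return max(0.0, stopping_distance(follower_speed, decel, reaction) - leader_stopping)

gap = safe_following_gap(20.0, 15.0)
print(round(gap, 1))  # 31.9 metres at 20 m/s behind a 15 m/s vehicle
```

The real SFF reasons in two dimensions over steering as well as braking, but the ‘mathematically provable’ claim rests on this kind of worst-case geometric argument rather than on statistical testing alone.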
The idea is that if every vehicle was connected to such a force field it would solve not only driver safety but the philosopher’s ‘trolley problem' – what happens when a driverless car has to swerve to avoid a crash, but thereby puts other drivers or pedestrians in danger? Who does it save?
Mr Shapiro tells Transport Network: ‘The car is not going to run them over. A driverless car will never get to the trolley problem because of the safety force field. If every vehicle had a safety force field there would be no collisions, this is mathematically provable.’
The NVIDIA press statement regarding the SFF carries a list of disclaimers, although NVIDIA points out that these are standard on any product launch and common procedure for any company operating in the US, as required by the US Securities and Exchange Commission (SEC).
Even Mr Shapiro concedes that in the early days of driverless cars there may still be crashes as the technology develops, but he maintains it is undoubtedly the direction we must take towards greater road safety.
It is hard to argue against this: as Mr Shapiro points out, human drivers are basically…well, awful, and despite all our improvements in recent years we still kill around 1.2 million people a year worldwide on the roads.
Another key issue, raised by the law commissions of England and Wales, and of Scotland, no less, is how to avoid pedestrians shutting down driverless car lanes by jumping out in front of a queue of cars and simply standing there. How do driverless cars navigate busy pedestrian areas?
The law commissions point out that the idea of nudging a car forward in the same way a human driver might to encourage people to move is a highly difficult thing to programme for, and no one is entirely sure it should be programmed for.
Mr Shapiro does not have much time for such pessimism. ‘The driverless car can be trained to drive like a human drives. It can mimic human behaviour and will be able to understand how to drive in such situations. It’s not hard coding, it’s using deep learning to recognise hazards and unusual issues and anomalies.
‘Even if it has not been told what to do, it knows what approach to take and it knows what outcomes to achieve.’
His argument is that there is little that cannot be discovered, and therefore modelled for, on the roads. We just need to discover the unknown unknowns.
Transport Network suggests that to some the situation might seem like a type of infinity model, like monkeys jumping up and down on keyboards trying to write Shakespeare.
‘That is random, this is much more structured. We run the simulations and we do thousands of permutations. Models can be written; it’s just exhaustive testing that is necessary.’
Model by model, mission by mission, NVIDIA is wearing down the driverless car mountain; it just takes time and…maybe just a few caveats.