To Save Lives With Automation, Look To The Human Brain

The former head of autonomous and HMI strategy at JLR explains how medical research into cognition can make automated driving—and our roads—safer.

by Carl Pickering

[Editor's note: Carl Pickering spent 20 years working on research and technology strategy at Jaguar Land Rover, eventually leading the firm's autonomous technology strategy and global human-machine interface efforts. He recently relocated to Portland, Oregon, where he now serves as CEO of a new startup called ADAM CogTec. In this guest post, Carl explains why automation will creep into our cars rather than arriving all at once, and how understanding the human brain can solve the thorny problems that arise from this growing human-machine interaction.]

The media frenzy surrounding increasing levels of vehicle automation is setting false expectations of imminent, significant improvements in road safety, when in fact those improvements are years away. While I believe these efforts will eventually succeed, they are distracting attention from an option that could be available right now and would significantly increase road safety in every single vehicle on the road today. Such a system would not be automation alone; it would be built on insight into human cognition as much as on artificial intelligence, improving human performance in automated and manually driven vehicles alike.

This system is what I am working on today, and I call it the "Cognition Layer." Before explaining what it is and how it works, it's important to understand why the rise of automation makes understanding and measuring human cognition more important than ever.

Most car and truck manufacturers are re-evaluating their introduction dates for increasing levels of automation, and the pattern across the industry is consistent: those dates keep being pushed further into the future. Vehicle automation is going to take much longer than originally anticipated, and OEMs that publicly declared ambitious launch dates are becoming more measured in their statements about the introduction of conditionally and highly automated vehicles. The reason is that cost, legislation and liability, the lack of global design and test standards, and the technical resolution of many edge cases are all proving more difficult than originally anticipated.

Some industry experts predict that there will be approximately 8 million highly automated vehicles on our roads by 2025. That is a tiny fraction of the existing vehicle population, which currently exceeds 1 billion, so the safety impact will be very low until automated vehicles are deployed far more widely. While we wait for the "autonomous revolution," automation will steadily creep into cars as use cases and conditions allow.

The first axis of introduction is the scope of usage: automation will naturally appear first in the conditions most conducive to it. The most likely early deployments are automated parking, where the vehicle moves slowly under reasonably controlled conditions, and automated highway driving, where all traffic flows in the same direction. As confidence in the technology increases, this scope will extend to the more complex, unpredictable environments of urban and rural driving.

The second axis of incremental introduction is the operating domain, which dictates when and where these increased levels of automation can be used; it, too, will expand incrementally. Initially, automation will be usable in bright, dry daylight conditions. Driving on ice and snow in sub-zero temperatures, or in the black of night on small country roads, will require levels of sophistication well beyond today's technology. There may also be a geographical dimension to these incremental introductions: Western roads with good infrastructure and consistent conditions will be the first target, while developing countries such as China, with mixed infrastructure and road conditions that vary from one region to the next, may need to limit the use of automation systems.
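To make the idea of an operating domain concrete, here is a minimal sketch in Python of what an early, deliberately conservative operating-domain check might look like. The field names and thresholds are purely illustrative assumptions of mine, not drawn from any production system.

```python
from dataclasses import dataclass


@dataclass
class Conditions:
    """Hypothetical snapshot of the current driving environment."""
    daylight: bool
    precipitation: str        # "none", "rain", "snow", "ice"
    temperature_c: float
    road_type: str            # "highway", "urban", "rural"


def within_operating_domain(c: Conditions) -> bool:
    """An early, deliberately narrow operating domain (illustrative only):
    dry daylight highway driving above freezing."""
    return (c.daylight
            and c.precipitation == "none"
            and c.temperature_c > 0.0
            and c.road_type == "highway")


print(within_operating_domain(Conditions(True, "none", 18.0, "highway")))  # True
print(within_operating_domain(Conditions(False, "snow", -4.0, "rural")))   # False
```

As the technology matures, the check above simply becomes less restrictive; the structure of the decision stays the same.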

As this incremental expansion of scope of use and operating domain continues, the automated system and the driver will increasingly work together; we are entering the era of human-machine cooperation. There will be frequent controlled handovers from the vehicle to the driver when the vehicle identifies conditions outside its scope of use or operating domain, and drivers will keep trying to transfer control to the vehicle wherever possible. In addition, system failures are inevitable, and any system failure will also require the vehicle to transfer control to the driver. In all of these situations the system must hand control back to the driver safely, and to do that it needs to know the driver's cognitive state.
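As a rough illustration of that handover decision, the sketch below (again Python, with entirely hypothetical names and time budgets) gates the transfer of control on the driver's measured cognitive state; if the driver is not demonstrably alert, the vehicle should fall back to a safe stop instead. This is my own simplification, not a description of any shipping system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class CognitiveState(Enum):
    """Hypothetical driver cognitive state categories."""
    ALERT = auto()
    DEGRADED = auto()      # e.g. fatigue, mild distraction
    IMPAIRED = auto()      # e.g. alcohol, drugs, severe drowsiness
    UNKNOWN = auto()


@dataclass
class HandoverRequest:
    reason: str               # "outside operating domain", "system failure", ...
    seconds_available: float  # time budget before a safe stop is required


def can_hand_over(state: CognitiveState, request: HandoverRequest,
                  min_seconds: float = 8.0) -> bool:
    """Only transfer control if the driver is alert and there is enough time
    for a controlled transition (the 8 s default is illustrative)."""
    return state is CognitiveState.ALERT and request.seconds_available >= min_seconds


# Example: a system failure with 12 seconds of margin, but a degraded driver.
request = HandoverRequest(reason="system failure", seconds_available=12.0)
print(can_hand_over(CognitiveState.DEGRADED, request))  # False -> fall back to a safe stop
```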

This tricky gap between human and machine is where the "Cognition Layer" comes in. The "Cognition Layer" is a software layer that can be installed on top of any existing Driver Monitoring System (DMS); it is a critical addition to the DMS rather than a replacement for it. The DMS is an essential element that monitors the driver's "physical layer": driver out of position, driver identification, hands-on-wheel detection, and essential head and eye tracking. Using imperceptible signals and the response-tracking techniques used to measure cognition in coma patients, the "Cognition Layer" goes a step beyond the driver's physical state and measures their actual mental state.
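To show how such a layer might sit on top of an existing DMS, here is a minimal Python sketch. The class and field names are hypothetical and the scoring is a toy placeholder; the point is only that the cognition layer consumes the physical-layer signals the DMS already produces, adds its own probe-and-response measurements, and outputs an estimate of mental state.

```python
from dataclasses import dataclass


@dataclass
class DMSFrame:
    """Physical-layer signals a typical DMS already produces (hypothetical fields)."""
    driver_present: bool
    hands_on_wheel: bool
    head_pose: tuple[float, float, float]   # yaw, pitch, roll in degrees
    gaze_direction: tuple[float, float]     # azimuth, elevation in degrees
    eyelid_opening: float                   # 0.0 (closed) .. 1.0 (fully open)


class CognitionLayer:
    """Sits on top of the DMS; estimates mental state rather than posture."""

    def __init__(self) -> None:
        self._response_latencies: list[float] = []

    def record_probe_response(self, latency_s: float) -> None:
        """Record the driver's response latency to an imperceptible probe."""
        self._response_latencies.append(latency_s)

    def cognitive_score(self, frame: DMSFrame) -> float:
        """Toy score in [0, 1]: 1.0 = fully alert. A real system would fuse
        many more signals; this simply combines eyelid opening with how
        quickly the driver responds to probes."""
        if not self._response_latencies:
            return frame.eyelid_opening
        avg_latency = sum(self._response_latencies) / len(self._response_latencies)
        responsiveness = max(0.0, 1.0 - avg_latency / 2.0)  # 2 s latency -> score 0
        return 0.5 * frame.eyelid_opening + 0.5 * responsiveness


# Example: one DMS frame plus two probe responses.
layer = CognitionLayer()
layer.record_probe_response(0.4)
layer.record_probe_response(0.6)
frame = DMSFrame(True, True, (0.0, -5.0, 0.0), (0.0, 0.0), 0.8)
print(round(layer.cognitive_score(frame), 2))  # ~0.78
```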

The "Cognition Layer" comprises three elements: cognitive driver state, cognitive driver workload, and cognitive driver performance enhancement. All three are required to increase driver safety and facilitate human-machine cooperation, both of which will save lives and will be required for many years to come, for the reasons explained above. Our product addressing all three elements is called the ADAM Platform Attention and Awareness Management System.

The first element, measurement of the driver's current cognitive state, is needed because driving performance deteriorates significantly when drivers are under the influence of alcohol or drugs, sleep-deprived, or experiencing extreme stress or anxiety. This deterioration leads to human error. The cognition layer is required to prevent that error by warning when a driver is cognitively impaired and should not drive. Typical uses include parents who want to know whether their child should be driving, truck fleet operators who want to know whether their drivers' cognitive state is normal and healthy, and automation systems that need to hand back control to the driver; in that last case it is critical that the system knows the driver is in a safe cognitive state to resume control.

The second element, cognitive driver workload, is important because it helps quantify the level of risk at any given time. For example, when the driver's cognitive state is healthy and normal, and the workload imposed by road conditions, traffic levels, and weather is low, the risk of driver error is also low. A cognitively impaired driver under high workload conditions, such as busy night-time traffic in poor weather, is at much higher risk and therefore more prone to error.
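A toy model makes the compounding effect clear. In the Python sketch below, the function and its weighting are assumptions of mine rather than the ADAM Platform's actual model: risk stays low when an alert driver faces a light workload, and climbs sharply when impairment and workload combine.

```python
def crash_risk(cognitive_score: float, workload: float) -> float:
    """Toy risk model: risk grows as cognitive capacity drops and workload rises.

    cognitive_score: 1.0 = fully alert, 0.0 = severely impaired.
    workload:        0.0 = empty dry daytime road,
                     1.0 = busy night-time traffic in poor weather.
    """
    impairment = 1.0 - cognitive_score
    # Impairment and workload compound: an impaired driver in heavy traffic
    # is far riskier than either factor alone.
    return min(1.0, impairment + workload * (0.5 + impairment))


print(crash_risk(cognitive_score=0.9, workload=0.2))  # alert driver, light traffic -> ~0.22
print(crash_risk(cognitive_score=0.4, workload=0.8))  # impaired driver, heavy night traffic -> 1.0
```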

The third element, driver cognitive performance enhancement, is vital because knowing the driver's state and workload allows systems to actively manage driver attention and awareness. We have patented such a system, which substantially increases driver attention and awareness by providing visual cues that steer the driver's gaze toward potential hazards that have not yet been seen. Our experimental lab results show a consistent reduction in driver errors and an improvement in driving performance. I am convinced that this system will make ALL drivers better and that it works in ALL existing vehicles.
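The triggering logic for such cues can be sketched very simply: if the driver's gaze is not already near a detected hazard, present a visual cue that draws their gaze toward it. The geometry and threshold below are illustrative assumptions of mine, not the patented method itself.

```python
import math


def angle_between(gaze_deg: tuple[float, float],
                  hazard_deg: tuple[float, float]) -> float:
    """Approximate angular separation (degrees) between the gaze direction
    and the bearing of a detected hazard."""
    d_az = gaze_deg[0] - hazard_deg[0]
    d_el = gaze_deg[1] - hazard_deg[1]
    return math.hypot(d_az, d_el)


def should_cue(gaze_deg: tuple[float, float],
               hazard_deg: tuple[float, float],
               attention_threshold_deg: float = 10.0) -> bool:
    """Trigger a visual cue toward the hazard if the driver is not already
    looking near it (the 10-degree threshold is illustrative)."""
    return angle_between(gaze_deg, hazard_deg) > attention_threshold_deg


# Driver looking straight ahead while a pedestrian approaches from the right.
print(should_cue(gaze_deg=(0.0, 0.0), hazard_deg=(35.0, -5.0)))  # True -> cue the driver's gaze
```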

Unlike increased levels of automation, the "Cognition Layer" is available now, not tomorrow. It is low cost, so it is not reserved for a wealthy few, and it provides the human-machine cooperation that is the critical, underestimated enabler for the successful introduction of automated driving. I believe our roads can be made safer by deploying the "Cognition Layer" today in every vehicle with a steering wheel, and I urge a parallel approach: introduce the "Cognition Layer" now while continuing the incremental roll-out of automated vehicles, to achieve substantial safety improvements immediately.
