Bridging the Gap: Enhancing Autonomous Driving Safety with Avara
by Xinyao Ma on 2024-07-30
Autonomous driving, VR, human perception
Thanks to recent advances in machine learning (ML), Autonomous Driving (AD) has made significant breakthroughs. The potential of AD technology is vast: safer roads, reduced traffic congestion, and greater mobility for all. Alongside these advancements, however, comes a critical challenge: the susceptibility of ML models to adversarial evasion attacks. These attacks pose a severe threat, undermining the reliability and safety of autonomous driving systems.
The Adversarial Threat in Autonomous Driving
Adversarial evasion attacks apply subtle, often imperceptible manipulations to input data that cause ML models to make incorrect decisions. In autonomous driving, this could mean misreading a traffic sign, failing to recognize an obstacle, or making an unsafe maneuver. Despite concerted mitigation efforts by researchers, a significant gap remains in understanding these attacks, especially from a human driver's perspective.
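To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest evasion attacks: it nudges every pixel of an input image in the direction that increases the classifier's loss, keeping the change small enough to be hard for a human to notice. The PyTorch code, the `sign_classifier` model, and the `epsilon` budget are illustrative assumptions and are not part of Avara.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """One-step FGSM evasion attack (sketch).

    Shifts each pixel by +/- epsilon along the sign of the loss gradient,
    so the change is visually subtle but can flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # loss w.r.t. the true label
    loss.backward()                              # gradient of loss w.r.t. pixels
    adv = image + epsilon * image.grad.sign()    # step that *increases* the loss
    return adv.clamp(0.0, 1.0).detach()          # keep pixel values valid

# Hypothetical usage: `sign_classifier` stands in for a traffic-sign model.
# adv_batch = fgsm_perturb(sign_classifier, stop_sign_batch, stop_sign_labels)
```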
Introducing Avara: A Unified Evaluation Platform
To bridge this critical gap, we propose Avara, the first unified evaluation platform for assessing how perceptible adversarial attacks are to human drivers in AD contexts. Avara combines cutting-edge Virtual Reality (VR) with eye-tracking technology to capture multi-modal driver-awareness data, providing detailed assessments of driver perception.
Multi-Modal Awareness Evaluation Metrics
Avara integrates three distinct sources of multi-modal awareness evaluation metrics:
- Visual Attention Data: Using eye-tracking technology, Avara captures where, and for how long, a driver focuses on specific elements of the driving environment (see the gaze-processing sketch after this list).
- VR Simulations: Immersive VR simulations create realistic driving scenarios, allowing researchers to introduce adversarial attacks in a controlled and safe environment.
- Behavioral Feedback: Participants provide feedback on their perception of, and response to, adversarial evasion attacks, offering valuable insights into the human experience of these threats.
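As one example of how visual attention data might be turned into a metric, the sketch below accumulates dwell time per area of interest (AOI) from a stream of timestamped gaze samples. The sample format, the AOI rectangles, and the function names are assumptions for illustration; Avara's actual eye-tracking pipeline may differ.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # normalized scene coordinates in [0, 1]
    y: float

def dwell_time_per_aoi(samples, aois):
    """Sum the time gaze spends inside each area of interest.

    `aois` maps a label (e.g. "stop_sign") to an (x0, y0, x1, y1) rectangle.
    Each inter-sample interval is credited to the AOI containing the sample.
    """
    totals = {label: 0.0 for label in aois}
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t
        for label, (x0, y0, x1, y1) in aois.items():
            if x0 <= prev.x <= x1 and y0 <= prev.y <= y1:
                totals[label] += dt
                break
    return totals

# Hypothetical usage with a perturbed stop sign as the AOI:
samples = [GazeSample(0.00, 0.48, 0.52), GazeSample(0.02, 0.49, 0.51),
           GazeSample(0.04, 0.80, 0.20)]
print(dwell_time_per_aoi(samples, {"stop_sign": (0.4, 0.4, 0.6, 0.6)}))
```

A metric like this lets researchers compare how long drivers fixate on a manipulated object versus its benign counterpart, which is exactly the kind of perceptibility signal Avara is designed to surface.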