A premier forum uniting academic, industry, and standards communities to explore advances in Foundation Models and 3D Perception in Cooperative Autonomous Driving (CAD).
The 5th edition of the full-day DriveX workshop explores advances in Foundation Models and 3D Perception in Cooperative Autonomous Driving. It brings together leading researchers and practitioners to discuss cutting-edge developments in large language models (LLMs), vision-language models (VLMs), and vision-language-action models (VLAs), and their applications to autonomous driving systems. Topics include 3D object detection, semantic segmentation, sensor fusion, V2X communication, cooperative perception, and real-world applications.
We explore methods to enhance scene understanding, perception accuracy, dataset curation, and novelty detection. By uniting experts across the perception, V2X, and foundation model domains, the workshop aims to foster innovation in cooperative autonomous driving and intelligent transportation systems, addressing critical challenges in multi-modal sensor fusion and vehicle-infrastructure coordination that leverage both onboard and roadside sensing.
This year, we expand our scope to include V2X applications, exploring real-world vehicle-to-infrastructure connectivity that extends beyond collaborative perception. The workshop provides a platform for discussing V2X for localization, tolling, road safety, monitoring, and data analytics, bridging the gap between theoretical advances and practical deployment. Through keynote presentations, panel discussions, paper presentations, and challenge tracks, DriveX 2026 creates a comprehensive forum for advancing the state of the art in foundation-model-driven cooperative autonomous driving.
| Start | End | Program | Speaker | Affiliation |
|---|---|---|---|---|
| 08:00 | 08:10 | Introduction | | |
| 08:10 | 08:30 | Keynote Presentation 1 | Dr. Jim Misener | Qualcomm |
| 08:30 | 08:50 | Keynote Presentation 2 | Prof. Jiaqi Ma | UCLA |
| 08:50 | 09:10 | Keynote Presentation 3 | Prof. Ignacio Alvarez | THI & Intel Labs |
| 09:10 | 09:30 | Keynote Presentation 4 | Prof. Cathy Wu | MIT |
| 09:30 | 10:00 | Coffee Break | | |
| 10:00 | 10:20 | Keynote Presentation 5 | Prof. Ziran Wang | Purdue Uni. |
| 10:20 | 10:40 | Keynote Presentation 6 | Prof. Vincent Fremont | École Centrale de Nantes |
| 10:40 | 11:00 | Keynote Presentation 7 | Prof. Bassam Alrifaee | Uni. of the German Federal Armed Forces Munich |
| 11:00 | 11:20 | Keynote Presentation 8 | Prof. Ayesha Choudhary | Jawaharlal Nehru Uni. |
| 11:20 | 12:00 | Panel Discussion I | | |
| 12:00 | 13:00 | Lunch | | |
| 13:00 | 13:20 | Keynote Presentation 9 | Prof. Thomas Bräunl | Uni. of Western Australia |
| 13:20 | 13:40 | Keynote Presentation 10 | Prof. Johannes Betz | TUM |
| 13:40 | 14:00 | Keynote Presentation 11 | Prof. Valentina Donzella | Queen Mary Uni. of London |
| 14:00 | 14:20 | Keynote Presentation 12 | Prof. Felix Heide | Princeton Uni. & Torc Robotics |
| 14:20 | 14:40 | Coffee Break | | |
| 14:40 | 15:00 | Keynote Presentation 13 | Prof. Jiachen Li | Uni. of California Riverside |
| 15:00 | 15:20 | Keynote Presentation 14 | Prof. Fawad Ahmad | Rochester Institute of Technology |
| 15:20 | 15:40 | Keynote Presentation 15 | Dr. Ran Tian | NVIDIA |
| 15:40 | 16:00 | Keynote Presentation 16 | Zhenzhen Liu | Cornell Uni. |
| 16:00 | 16:30 | Panel Discussion II | | |
| 16:30 | 17:00 | Coffee Break | | |
| 17:00 | 17:10 | Paper Oral 1 | | |
| 17:10 | 17:20 | Paper Oral 2 | | |
| 17:20 | 17:30 | Paper Oral 3 | | |
| 17:30 | 17:40 | Paper Oral 4 | | |
| 17:40 | 17:50 | Paper Oral 5 | | |
| 17:50 | 18:00 | Award Ceremony & Closing | | |
Final schedule, room allocation, and speaker order will be announced closer to the workshop date.
DriveX 2026 invites high-quality contributions on foundation models, V2X-based cooperative perception, large driving models, 3D perception, and related topics outlined above.
We welcome contributions in these and related areas. Submissions must follow the official IEEE IV 2026 style guidelines; detailed submission instructions will be provided.
The DriveX Challenge fosters rigorous, reproducible benchmarking of cooperative perception and planning on real-world V2X datasets. Tracks are designed in close collaboration with dataset creators and industry partners.
V2I-Based Cooperative Perception
Infrastructure-vehicle fusion using TUMTraf V2X CP. The track focuses on cooperative 3D detection and tracking with infrastructure-mounted LiDAR, radar, and cameras, emphasizing occlusion handling, long-range awareness, and reliability under real-world conditions.
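In this cooperative setting, roadside units can see around occlusions that block the vehicle's own sensors, and their detections must be brought into a common frame before fusion. The NumPy sketch below is an illustration only, not an official baseline: the detection format, the transform, and the 2 m merge radius are assumptions. It shows a minimal late-fusion step, transforming infrastructure detections into the vehicle frame and removing duplicates with a score-ordered distance check.

```python
# Illustrative late-fusion sketch (not an official baseline): merging
# infrastructure and vehicle 3D detections in a shared vehicle frame.
# Detection format (x, y, z, score) and 2 m merge radius are assumptions.
import numpy as np

def to_vehicle_frame(dets_infra, T_infra_to_vehicle):
    """Transform infrastructure detection centers (N, 4: x, y, z, score)
    into the vehicle frame using a 4x4 homogeneous transform."""
    xyz = dets_infra[:, :3]
    xyz_h = np.hstack([xyz, np.ones((len(xyz), 1))])          # homogeneous coordinates
    xyz_veh = (T_infra_to_vehicle @ xyz_h.T).T[:, :3]
    return np.hstack([xyz_veh, dets_infra[:, 3:4]])

def late_fusion(dets_vehicle, dets_infra_veh, merge_radius=2.0):
    """Keep the higher-scoring detection when two centers fall within
    merge_radius; otherwise keep both (simple greedy deduplication)."""
    all_dets = np.vstack([dets_vehicle, dets_infra_veh])
    order = np.argsort(-all_dets[:, 3])                        # descending score
    kept = []
    for idx in order:
        c = all_dets[idx, :3]
        if all(np.linalg.norm(c - all_dets[k, :3]) > merge_radius for k in kept):
            kept.append(idx)
    return all_dets[kept]

# Toy example: one object seen by both sides, one occluded object
# visible only to the roadside unit.
T = np.eye(4); T[0, 3] = 50.0                                  # infra origin 50 m ahead
veh = np.array([[10.0, 0.0, 0.0, 0.9]])
infra = np.array([[-40.0, 0.2, 0.0, 0.8],                      # same object as above
                  [ 15.0, 3.0, 0.0, 0.7]])                     # occluded for the vehicle
fused = late_fusion(veh, to_vehicle_frame(infra, T))
print(fused)                                                    # 2 detections after fusion
```

Early- or intermediate-fusion approaches that exchange raw points or features instead of boxes are equally in scope; the late-fusion form is shown only because it is the simplest to illustrate.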
Accident Scene Understanding & Safety Reasoning
Built upon TUMTraf Accid3nD. Participants design models for high-risk scenarios, proactive risk assessment, and early accident prediction using cooperative perception signals to support Vision-Zero mobility.
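One common proactive risk signal that cooperative perception enables is time-to-collision (TTC) between tracked road users, including those the ego vehicle cannot observe directly. The sketch below is a hedged illustration under a constant-velocity assumption; it is not the track's required metric, and the 2 m collision radius is an arbitrary choice for the example.

```python
# Hypothetical illustration of a proactive risk signal (not the track's
# required metric): constant-velocity time-to-collision (TTC) between
# two tracked road users from cooperative perception.
import numpy as np

def time_to_collision(p_a, v_a, p_b, v_b, collision_radius=2.0):
    """Return the earliest time (s) at which the two agents come within
    collision_radius under constant-velocity extrapolation, or None."""
    dp = np.asarray(p_b, float) - np.asarray(p_a, float)   # relative position
    dv = np.asarray(v_b, float) - np.asarray(v_a, float)   # relative velocity
    a = dv @ dv
    if a < 1e-9:                                            # no relative motion
        return 0.0 if np.linalg.norm(dp) <= collision_radius else None
    b = 2 * (dp @ dv)
    c = dp @ dp - collision_radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                                         # paths never get that close
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None                            # only future conflicts

# Ego at the origin heading east; crossing vehicle approaching from the north-east.
ttc = time_to_collision(p_a=[0, 0], v_a=[10, 0], p_b=[30, 40], v_b=[0, -13])
print(f"TTC: {ttc:.2f} s" if ttc is not None else "no predicted conflict")
```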
End-to-End Multi-Agent Autonomous Driving
Using V2XPnP and V2V4Real, teams explore end-to-end policies and trajectory planning with single-vehicle, multi-vehicle, and vehicle-infrastructure inputs. The track highlights how cooperative intelligence improves policy learning, coordination, and safety.
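Planned or predicted trajectories in tracks of this kind are commonly compared against ground-truth motion with displacement-based metrics such as average and final displacement error (ADE/FDE). The sketch below computes both; it is illustrative only, and the official evaluation protocol will be defined in the challenge rules.

```python
# Illustrative sketch (not the official evaluation protocol): average and
# final displacement error (ADE/FDE), two standard metrics for comparing
# planned or predicted trajectories against ground truth.
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: arrays of shape (T, 2) with (x, y) waypoints over T steps."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    errors = np.linalg.norm(pred - gt, axis=1)   # per-timestep Euclidean error
    return errors.mean(), errors[-1]             # ADE over all steps, FDE at the last

# Toy example: a planned trajectory that drifts laterally over 5 steps.
gt = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], dtype=float)
pred = gt + np.array([[0, 0.0], [0, 0.1], [0, 0.2], [0, 0.3], [0, 0.5]])
ade, fde = ade_fde(pred, gt)
print(f"ADE = {ade:.2f} m, FDE = {fde:.2f} m")   # ADE = 0.22 m, FDE = 0.50 m
```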
Competition Timeline
Top-performing teams will be invited to present at the workshop. Detailed rules, baselines, and submission instructions will be released on the official challenge page.
University of California, Riverside
Technical University of Munich
University of California, Los Angeles
The University of Hong Kong
The University of Sydney
The University of Sydney
The University of Sydney
Tsinghua University
University of California, Los Angeles
DriveX 2026 welcomes sponsorship from industry, startups, and institutions interested in foundation models, cooperative perception, simulation, and large-scale autonomous driving systems.
For sponsorship opportunities, please contact: walter.zimmer@cs.tum.edu.