
End-to-End Autonomous Driving: Perception, Prediction, Planning and Simulation @CVPR2023

A diverse set of computer vision capabilities is critical to building industry-grade autonomous driving systems, spanning 2D and 3D perception, prediction, planning, and scene simulation. This has inspired a surge of research, with increasingly accurate and efficient methods (e.g., BEV-based 3D detection, HDMapNet, NeRF) emerging at a fast pace. Far more than a simple combination of independently developed methods, autonomous driving requires the synergistic integration of these functions into a coherent whole. This stands in contrast to the current situation, in which researchers in the sub-fields of perception, planning, and simulation exchange ideas only to a limited extent. This calls for a system-level perspective on the advancement of autonomous driving. This workshop aims to provide a platform where researchers from different sub-fields can exchange frontier ideas across boundaries, leading to holistic, system-aware understanding and more systematic research in the future. Suggested topics include, but are not limited to:

  • 3D object detection
  • Traffic lane detection and HD map construction
  • End-to-end perception, prediction and planning
  • Autonomous driving environment simulation

Submission

Instructions

Paper submissions must follow the CVPR 2023 format. All papers will be reviewed under a double-blind policy. Submission is through CMT.

CMT Submission

Important Dates

Action                      Date
Paper submission deadline   March 20th, 2023
Notification to authors     March 29th, 2023
Camera ready deadline       April 5th, 2023

Schedule

Time         Title
12:30-12:35  Welcome and introduction

Perception

Time         Speaker                         Title
12:35-13:10  Philipp Krähenbühl (UT Austin)  TBD
13:10-13:45  Pei Sun (Waymo)                 Scaling 3D Object Detection towards E2E Driving
13:45-14:20  Jia Deng (Princeton)            TBD

Prediction & Planning

Time         Speaker                       Title
14:20-14:55  Jamie Shotton (Wayve)         Frontiers in Embodied AI for Autonomous Driving
14:55-15:30  Nikolai Smolyanskiy (NVIDIA)  PredictionNet: Real-Time Traffic Prediction for Autonomous Vehicles
15:30-16:05  Raquel Urtasun (Waabi)        Next Generation Autonomy for the Safe Development and Deployment of Self-Driving Technology
16:05-16:40  Daniel Cremers (Munich)       Dynamic 3D Scene Understanding for Autonomous Vehicles

Simulation

Time         Speaker                     Title
16:40-17:15  Angjoo Kanazawa (UCB)       Perceiving 4D People in the World; Progress on Human Mesh Recovery
17:15-17:50  Matthias Niessner (Munich)  3D Semantic Scene Understanding
17:50-18:25  Bolei Zhou (UCLA)           Building an Open-source Research Platform for AI and Autonomous Driving Research

Closing Remarks

Time         Title
18:25-18:30  Closing remarks

Speakers

Organizers