Low-Light Enhancement
Autonomous Driving
Diffusion Model
Image Quality
AI Safety
LightDiff: A Diffusion Framework for Low-Light Driving Conditions

Cameras play an essential role in autonomous driving, but their performance drops significantly at night. LightDiff addresses this with a multi-condition controlled diffusion model that enhances low-light images. Rather than relying on human-collected paired day/night data, it generates training pairs through a dynamic data degradation process.

Highlights:

  • Low-light image enhancement tailored to autonomous driving.
  • A multi-condition adapter that adaptively weights inputs from multiple modalities, including depth maps, RGB images, and text.
  • Context-consistent scene illumination that preserves visual integrity.
  • Reinforcement learning with perception-specific rewards to align enhanced images with downstream detection models.
  • Demonstrated efficacy on the nuScenes dataset, with robust detection improvements in night-time conditions.
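The adaptive multimodal weighting above can be sketched as follows. The paper describes the multi-condition adapter only at a high level, so the class name, the per-modality projections, and the softmax weighting here are illustrative assumptions, not LightDiff's actual implementation:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

class MultiConditionAdapter:
    """Hypothetical sketch of a multi-condition adapter: fuse depth,
    RGB, and text features into one conditioning vector using learned
    adaptive weights. Shapes and layer structure are assumptions."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # One learnable logit per modality; softmax turns these into
        # adaptive weights that sum to 1.
        self.logits = rng.normal(size=3)
        # Per-modality linear projections into a shared space.
        self.proj = {m: rng.normal(scale=0.1, size=(dim, dim))
                     for m in ("depth", "rgb", "text")}

    def __call__(self, depth, rgb, text):
        w = softmax(self.logits)
        feats = [self.proj["depth"] @ depth,
                 self.proj["rgb"] @ rgb,
                 self.proj["text"] @ text]
        # Weighted sum: modalities the model trusts more contribute more.
        return sum(wi * f for wi, f in zip(w, feats))

adapter = MultiConditionAdapter(dim=8)
cond = adapter(np.ones(8), np.ones(8), np.ones(8))
print(cond.shape)  # (8,)
```

In a full diffusion pipeline, a vector like `cond` would steer the denoising network; here the point is only how adaptive weighting lets the model lean on whichever modality is most informative for a given scene.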

LightDiff is a notable step toward safer autonomous driving at night. It underscores the potential of AI-driven enhancement to mitigate environmental constraints on vision-centric systems, and it is particularly promising for models that require reliable perception under diverse lighting conditions.

Personalized AI news from scientific papers.