OpenBias: Open-set Bias Detection in Text-to-Image Generative Models

OpenBias is an approach to detecting biases in text-to-image generative models such as Stable Diffusion in an open-set setting, where conventional methods are limited to predefined lists of biases. It runs a three-stage pipeline: a Large Language Model (LLM) proposes candidate biases from captions, the generative model under test synthesizes images from those captions, and a Vision Question Answering (VQA) model recognizes and quantifies the proposed biases (see the sketch after the list below).

  • Broadens bias detection beyond a fixed set of known biases
  • Uses an LLM to propose candidate biases from image captions
  • Generates images from those captions to identify and quantify the proposed biases
  • Agrees with closed-set bias detection methods and with human judgment
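
To make the pipeline concrete, here is a minimal Python sketch of an OpenBias-style loop with the three stages stubbed out: an LLM proposes candidate biases from a caption, the model under test generates images, and a VQA model classifies each image. The function names and the entropy-based severity score are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of an OpenBias-style pipeline (not the authors' code).
# The three stages are stubbed out; plug in a real LLM, text-to-image model,
# and VQA model. The severity score (1 minus normalized entropy of the VQA
# answer distribution) is an assumption and may differ from the paper's metric.
from collections import Counter
from dataclasses import dataclass
import math


@dataclass
class BiasProposal:
    name: str              # e.g. "person gender"
    question: str          # VQA question to ask about each generated image
    classes: list[str]     # candidate answers, e.g. ["male", "female"]


def propose_biases(caption: str) -> list[BiasProposal]:
    """Stage 1: an LLM reads the caption and proposes biases it might elicit.
    Stubbed with a fixed proposal; replace with a real LLM call."""
    return [BiasProposal(
        name="person gender",
        question="What is the gender of the person?",
        classes=["male", "female"],
    )]


def generate_images(caption: str, n: int = 8) -> list[object]:
    """Stage 2: the text-to-image model under test generates n images.
    Stubbed with placeholder objects; replace with e.g. a diffusion model."""
    return [f"image_{i}_for:{caption}" for i in range(n)]


def vqa_answer(image: object, question: str, classes: list[str]) -> str:
    """Stage 3: a VQA model answers the bias question for one image.
    Stubbed to always pick the first class; replace with a real VQA model."""
    return classes[0]


def severity(answers: list[str], classes: list[str]) -> float:
    """Illustrative bias score: 0 when answers are uniform over the classes,
    1 when a single class dominates."""
    counts = Counter(answers)
    probs = [counts.get(c, 0) / len(answers) for c in classes]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - entropy / math.log(len(classes))


if __name__ == "__main__":
    caption = "a photo of a doctor at work"
    for proposal in propose_biases(caption):
        images = generate_images(caption)
        answers = [vqa_answer(img, proposal.question, proposal.classes) for img in images]
        print(proposal.name, "severity:", round(severity(answers, proposal.classes), 2))
```

With real models plugged in, the same loop runs over a large caption set, and per-bias severity scores can then be aggregated and compared across generative models.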

This method is significant because it enables the detection of previously unknown biases in models that are widely used by the public, addressing ethical concerns around AI safety and fairness. Because OpenBias is not tied to a fixed bias list, it could support more responsible deployment of generative models in which biases are continuously monitored and mitigated.
