OpenBias is a pioneering approach to identifying biases in text-to-image generative models such as Stable Diffusion, where conventional methods are limited to predefined bias sets. OpenBias runs a three-stage pipeline: a Large Language Model (LLM) proposes candidate biases, the generative model produces images, and a Visual Question Answering (VQA) model inspects those images, yielding a quantitative way to detect new, previously unlisted biases.
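The third stage needs a way to turn VQA answers into a number. A minimal sketch of one plausible scoring scheme is below: generate many images for a caption, ask the VQA model the same question about each (e.g. "What is the person's gender?"), and measure how skewed the answer distribution is via normalized entropy. The `bias_severity` function and the severity-as-one-minus-entropy convention are illustrative assumptions, not the paper's exact formulation; the VQA answers here are mocked rather than produced by a real model.

```python
from collections import Counter
import math

def bias_severity(answers):
    """Score how biased a set of VQA answers is.

    answers: one VQA answer per generated image, e.g. ["man", "man", "woman"].
    Returns 1 - normalized entropy of the answer distribution:
      0.0 = answers evenly spread (no detectable bias),
      1.0 = every image got the same answer (maximally biased).
    Assumption: this entropy-based severity is an illustrative choice,
    not necessarily the metric used in the OpenBias paper.
    """
    counts = Counter(answers)
    if len(counts) <= 1:
        return 1.0  # a single answer class: fully skewed
    n = len(answers)
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(len(counts))

# Mocked VQA answers for images generated from "a photo of a doctor":
skewed = ["man"] * 9 + ["woman"]          # heavily skewed
balanced = ["man"] * 5 + ["woman"] * 5    # evenly split

print(round(bias_severity(skewed), 3))    # close to 1: strong bias
print(round(bias_severity(balanced), 3))  # 0.0: no bias detected
```

In a full pipeline, the caption and question would come from the LLM stage and the answers from a real VQA model; this scorer is only the final aggregation step.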
This method is significant because it can surface previously unknown biases in models already in wide public use, addressing ethical concerns about AI safety and fairness. Because OpenBias is not tied to a fixed bias list, it could pave the way for more responsible AI applications in which biases are continuously monitored and mitigated.