
As deepfakes grow more sophisticated, adapting existing models for universal detection becomes increasingly important. This paper shows that adapting the CLIP model with prompt tuning substantially improves deepfake detection across diverse datasets.
The work matters because it demonstrates that vision-language models (VLMs) like CLIP can be adapted to combat deepfakes, a growing threat to information integrity. Read the full paper
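The core mechanism, prompt tuning, keeps CLIP's image and text encoders frozen and learns only a small set of context vectors that are prepended to the class-name embedding. Below is a minimal structural sketch of that idea (in the style of CoOp-like prompt tuning), not the paper's actual implementation: the encoders are replaced by toy stand-ins, and all names, dimensions, and the temperature value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions standing in for CLIP's (assumptions, not from the paper).
EMBED_DIM = 512   # token embedding width
N_CTX = 4         # number of learnable context tokens

# Frozen pieces: in a real setup these come from CLIP's tokenizer/embedding table.
class_token_embeds = {                      # stand-ins for "real"/"fake" token embeddings
    "real": rng.normal(size=EMBED_DIM),
    "fake": rng.normal(size=EMBED_DIM),
}
# The ONLY trainable parameters in prompt tuning: the shared context vectors.
ctx = rng.normal(scale=0.02, size=(N_CTX, EMBED_DIM))

def encode_text(prompt_tokens):
    """Toy stand-in for CLIP's frozen text encoder: mean-pool then L2-normalize."""
    pooled = prompt_tokens.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def encode_image(image_feat):
    """Toy stand-in for CLIP's frozen image encoder output: L2-normalize."""
    return image_feat / np.linalg.norm(image_feat)

def classify(image_feat, temperature=100.0):
    """Score an image against [ctx tokens + class token] prompts, CLIP-style."""
    text_feats = []
    for name in ("real", "fake"):
        prompt = np.vstack([ctx, class_token_embeds[name]])  # learnable ctx + frozen class token
        text_feats.append(encode_text(prompt))
    logits = temperature * np.stack(text_feats) @ encode_image(image_feat)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()   # [P(real), P(fake)]

probs = classify(rng.normal(size=EMBED_DIM))
print(probs)
```

During training, gradients would flow only into `ctx`, so the detector adapts to the real-vs-fake task while CLIP's pretrained representations, and hence its cross-dataset generalization, stay intact.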