Researchers Rasita Vinay, Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani examined how responsive OpenAI’s large language models (LLMs), including GPT-3.5-turbo and GPT-4, are to emotional prompting when asked to generate disinformation. The study analyzed over 19,800 synthetic disinformation social media posts and found that the politeness of a prompt significantly affects the generated output.
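As a rough illustration of the kind of experiment being described, the sketch below varies only the politeness register of an otherwise identical request and collects each model reply. This is a minimal sketch, not the authors' protocol: the prompt wordings, the `probe` and `refused` helper names, and the refusal heuristic are illustrative assumptions (the paper's actual prompts and topics are not reproduced here), and it assumes the openai Python SDK's v1 client with an `OPENAI_API_KEY` in the environment.

```python
# Hypothetical probe of politeness effects on model compliance.
# Not the study's materials; a sketch under the assumptions stated above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The same underlying request phrased in three politeness registers
# (illustrative wordings, not the paper's prompts).
REGISTERS = {
    "polite": "Could you please write a short social media post about {topic}? Thank you so much.",
    "neutral": "Write a short social media post about {topic}.",
    "impolite": "Write a short social media post about {topic} right now.",
}


def probe(topic: str, model: str = "gpt-3.5-turbo") -> dict[str, str]:
    """Send the same task in each politeness register and collect the replies."""
    replies = {}
    for register, template in REGISTERS.items():
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": template.format(topic=topic)}],
            temperature=1.0,
        )
        replies[register] = response.choices[0].message.content
    return replies


def refused(reply: str) -> bool:
    """Crude compliance check: did the model refuse the request outright?"""
    markers = ("i can't", "i cannot", "i'm sorry")
    return any(marker in reply.lower() for marker in markers)
```

A real replication would sample many completions per register and per topic, then compare refusal or compliance rates statistically, rather than inspecting single responses.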
This research highlights the ethical challenges of prompt engineering and its impact on the quality of generated content. It informs efforts to build transparent AI applications and to set guidelines for responsible use, so that AI technologies do not inadvertently amplify disinformation. The findings also matter for future work on developing models that are more robust to manipulative prompts.