This research investigates whether LLMs can autonomously improve one another in a negotiation game through self-play, reflection, and AI-generated feedback. In the experiment, two models, GPT and Claude, acted as negotiators while a third model served as a critic, providing feedback after each round.
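The loop described above, alternating negotiation turns, then folding critic feedback into the next round, can be sketched as follows. All function names are hypothetical, and the model calls are stubbed with placeholder strings; a real implementation would replace them with API calls to the two negotiator models and the critic.

```python
# Minimal sketch of the self-play negotiation loop (stubbed, not the
# authors' actual implementation).

def seller_reply(history):
    # Stub: a real version would query one negotiator LLM with the history.
    return "I can offer it for $18."

def buyer_reply(history, feedback):
    # Stub: the buyer conditions on prior critic feedback (in-context learning).
    if feedback:
        return "I'll pay $8, that's my final offer."
    return "I'll pay $10."

def critic_feedback(transcript):
    # Stub: a real critic model would suggest concrete negotiation tactics.
    return "Anchor lower and cite a budget limit before conceding."

def run_round(feedback):
    """Play one negotiation round and return the dialogue transcript."""
    history = []
    for _ in range(2):  # two alternating turn pairs per round
        history.append(("seller", seller_reply(history)))
        history.append(("buyer", buyer_reply(history, feedback)))
    return history

def self_play(n_rounds=3):
    """Iterate: play a round, collect critic feedback, feed it forward."""
    feedback, transcripts = [], []
    for _ in range(n_rounds):
        transcript = run_round(feedback)
        transcripts.append(transcript)
        feedback.append(critic_feedback(transcript))
    return transcripts, feedback

transcripts, feedback = self_play()
```

The key design point is that feedback accumulates across rounds, so later rounds are conditioned on the full critique history rather than on a single hint.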
The key finding is that the models' ability to self-improve through structured feedback suggests substantial potential for developing highly competent AI agents with less dependence on human intervention. Continued exploration in this direction could lead to more sophisticated AI systems capable of handling complex negotiations independently.