LLMs with Chain-of-Thought Are Non-Causal Reasoners investigates the puzzling phenomenon of LLMs producing correct answers after incorrect reasoning chains, and vice versa. The study applies causal analysis to the relationship between chains of thought (CoTs), instructions, and final answers in LLMs, uncovering the structural causal model (SCM) these models approximate and how it differs from human reasoning.
Highlights:
Evaluating causality in LLM reasoning raises essential questions about the models' reliability and interpretability. By probing the SCM that LLMs implicitly follow, the paper urges the AI community to critically assess and refine the reasoning processes of these systems. Understanding and addressing the discrepancies from human reasoning is crucial for building AI agents capable of more accurate and transparent cognition.
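To make the idea of probing causality concrete, here is a minimal sketch of an interventional test: intervene on the CoT (here, by shuffling its sentences) and check whether the final answer changes. This is not the paper's exact protocol; the `query_llm` helper, the prompt format, and the perturbation are all illustrative assumptions standing in for a real model call.

```python
import random


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM call (replace with your model of choice)."""
    # Canned completion so the sketch runs end to end without a model.
    return "Step 1: ... Step 2: ...\nAnswer: 42"


def split_cot_and_answer(completion: str) -> tuple[str, str]:
    """Split a completion into its chain-of-thought and final answer."""
    cot, _, answer = completion.rpartition("Answer:")
    return cot.strip(), answer.strip()


def perturb_cot(cot: str) -> str:
    """Intervene on the CoT by shuffling its sentences (one simple do-operation)."""
    sentences = [s for s in cot.split(". ") if s]
    random.shuffle(sentences)
    return ". ".join(sentences)


def answer_changes_under_intervention(question: str) -> bool:
    """Return True if intervening on the CoT flips the model's final answer."""
    original = query_llm(f"{question}\nLet's think step by step.")
    cot, answer = split_cot_and_answer(original)

    # Feed the perturbed CoT back and ask only for the final answer.
    intervened = query_llm(f"{question}\nReasoning: {perturb_cot(cot)}\nAnswer:")
    _, new_answer = split_cot_and_answer("Answer:" + intervened)
    return new_answer != answer


if __name__ == "__main__":
    flips = sum(answer_changes_under_intervention("What is 6 * 7?") for _ in range(10))
    print(f"Answer flipped under CoT intervention in {flips}/10 trials")
```

If the answer rarely changes when the CoT is corrupted, the CoT has little causal effect on the output, which is the kind of mismatch with human step-by-step reasoning the paper highlights.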