LLMs with Chain-of-Thought Are Non-Causal Reasoners

In a ground-breaking paper titled LLMs with Chain-of-Thought Are Non-Causal Reasoners, researchers reveal that the Chain-of-Thought (CoT) produced by Large Language Models (LLMs) may not align with causal reasoning. The study illustrates instances where LLMs arrive at correct answers despite incorrect CoTs. The authors perform a causal analysis of the relationship between the generated CoT and the final answer, highlighting discrepancies with how humans reason.

Key Takeaways:

  • Discrepancy between CoTs/instructions and the resulting answers in LLMs.
  • Analysis of the structural causal models (SCMs) that LLMs approximate (see the sketch after this list).
  • The influence of in-context learning, supervised fine-tuning, and reinforcement learning from human feedback (RLHF) on these causal relationships.
  • Release of code and findings for open discourse.
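The core idea behind this kind of causal analysis is an intervention test: hold the question fixed, change the CoT, and check whether the answer changes. Below is a minimal sketch of that idea, not the authors' actual pipeline; `query_llm` is a hypothetical stand-in (stubbed here with a toy function) for whatever model API you would call.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real API client."""
    # Toy behavior: always answer "12" regardless of the reasoning in the
    # prompt, mimicking a model whose answer is not caused by its CoT.
    return "12"


def answer_with_cot(question: str, cot: str) -> str:
    """Ask for an answer conditioned on a given chain of thought."""
    prompt = f"Question: {question}\nReasoning: {cot}\nAnswer:"
    return query_llm(prompt).strip()


question = "What is 7 + 5?"
correct_cot = "7 plus 5 equals 12."
corrupted_cot = "7 plus 5 equals 15."  # intervention: inject a wrong step

original = answer_with_cot(question, correct_cot)
intervened = answer_with_cot(question, corrupted_cot)

# If the answer is unchanged after intervening on the CoT, the CoT is not
# acting as a cause of the answer for this example.
print(f"answer given correct CoT:   {original}")
print(f"answer given corrupted CoT: {intervened}")
print("CoT appears causally ignored" if original == intervened
      else "answer tracks the CoT")
```

Run over many examples, the fraction of answers that remain unchanged under such interventions gives a rough sense of how far the model's implied causal structure departs from the CoT-causes-answer pattern a human reasoner would follow.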

This paper underscores the complexity and distinct nature of LLM reasoning compared to human reasoning. The insights could drive the development of LLMs that better mimic human thought patterns, leading to more intuitive human-AI interactions.
