Richard Sutton Declares LLMs a Dead End
Best AI papers explained - A podcast by Enoch H. Kang

Once again, instead of introducing new research, we discuss Richard Sutton, a pioneer of reinforcement learning (RL), who controversially argues that large language models (LLMs) are a "dead end" on the path to general intelligence. Sutton contends that LLMs, which rely on imitation learning from vast datasets of human text, cannot learn continuously "from experience" and lack a meaningful goal or ground truth, features he sees as core to RL and to animal intelligence. The conversation, hosted by Dwarkesh Patel, explores the philosophical divide between the LLM paradigm and the experiential paradigm championed by Sutton, touching on continual learning, the role of the Bitter Lesson in AI history, and a future transition to digital intelligences or AGI. Sutton maintains that a fundamentally new architecture capable of on-the-job learning is necessary and will eventually supersede the current LLM approach.