Interconnects

A podcast by Nathan Lambert

96 Episodes

  1. State of play of AI progress (and related brakes on an intelligence explosion)

    Published: 4/30/2025
  2. Transparency and (shifting) priority stacks

    Published: 4/28/2025
  3. OpenAI's o3: Over-optimization is back and weirder than ever

    Published: 4/19/2025
  4. OpenAI's GPT-4.1 and separating the API from ChatGPT

    Published: 4/14/2025
  5. Llama 4: Did Meta just push the panic button?

    Published: 4/7/2025
  6. RL backlog: OpenAI's many RLs, clarifying distillation, and latent reasoning

    Published: 4/5/2025
  7. Gemini 2.5 Pro and Google's second chance with AI

    Published: 3/26/2025
  8. Managing frontier model training organizations (or teams)

    Published: 3/19/2025
  9. Gemma 3, OLMo 2 32B, and the growing potential of open-source AI

    Published: 3/13/2025
  10. Interviewing Eugene Vinitsky on self-play for self-driving and what else people do with RL

    Published: 3/12/2025
  11. Elicitation, the simplest way to understand post-training

    Published: 3/10/2025
  12. Where inference-time scaling pushes the market for AI companies

    Published: 3/5/2025
  13. GPT-4.5: "Not a frontier model"?

    Published: 2/28/2025
  14. Character training: Understanding and crafting a language model's personality

    Published: 2/26/2025
  15. Claude 3.7 thonks and what's next for inference-time scaling

    Published: 2/24/2025
  16. Grok 3 and an accelerating AI roadmap

    Published: 2/18/2025
  17. An unexpected RL Renaissance

    Published: 2/13/2025
  18. Deep Research, information vs. insight, and the nature of science

    Published: 2/12/2025
  19. Making the U.S. the home for open-source AI

    Published: 2/5/2025
  20. Why reasoning models will generalize

    Published: 1/28/2025

Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai