Interconnects

A podcast by Nathan Lambert

96 Episodes

  1. Model commoditization and product moats

    Published: 3/13/2024
  2. The koan of an open-source LLM

    Published: 3/6/2024
  3. Interviewing Louis Castricato of Synth Labs and Eleuther AI on RLHF, Gemini Drama, DPO, founding Carper AI, preference data, reward models, and everything in between

    Published: 3/4/2024
  4. How to cultivate a high-signal AI feed

    Published: 2/28/2024
  5. Google ships it: Gemma open LLMs and Gemini backlash

    Published: 2/22/2024
  6. 10 Sora and Gemini 1.5 follow-ups: code-base in context, deepfakes, pixel-peeping, inference costs, and more

    Published: 2/20/2024
  7. Releases! OpenAI’s Sora for video, Gemini 1.5's infinite context, and a secret Mistral model

    Published: 2/16/2024
  8. Why reward models are still key to understanding alignment

    Published: 2/14/2024
  9. Alignment-as-a-Service: Scale AI vs. the new guys

    Published: 2/7/2024
  10. Open Language Models (OLMos) and the LLM landscape

    Published: 2/1/2024
  11. Model merging lessons in The Waifu Research Department

    Published: 1/29/2024
  12. Local LLMs, some facts some fiction

    Published: 1/24/2024
  13. Multimodal blogging: My AI tools to expand your audience

    Published: 1/17/2024
  14. Multimodal LM roundup: Unified IO 2, inputs and outputs, Gemini, LLaVA-RLHF, and RLHF questions

    Published: 1/10/2024
  15. Where 2024’s “open GPT4” can’t match OpenAI’s

    Published: 1/5/2024
  16. Interviewing Tri Dao and Michael Poli of Together AI on the future of LLM architectures

    Published: 12/21/2023
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai