Why AI Needs Culture (Not Just Data) [Sponsored] (Sara Saab, Enzo Blindow - Prolific)

Machine Learning Street Talk (MLST)

We sat down with Sara Saab (VP of Product at Prolific) and Enzo Blindow (VP of Data and AI at Prolific) to explore the critical role of human evaluation in AI development and the challenges of aligning AI systems with human values. Prolific is a human annotation and orchestration platform for AI used by many of the major AI labs. This is a sponsored show in partnership with Prolific.

**SPONSOR MESSAGES**
cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy.
Oct SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!) + OAI, Anthropic, NVDA, ++
Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst
Submit investment deck: https://cyber.fund/contact?utm_source=mlst

While technologists want to remove humans from the loop for speed and efficiency, these non-deterministic AI systems actually require more human oversight than ever before. Prolific's approach is to put "well-treated, verified, diversely demographic humans behind an API", making human feedback as accessible as any other infrastructure service.

When AI models like Grok 4 achieve top scores on technical benchmarks but feel awkward or problematic to use in practice, it exposes the limitations of our current evaluation methods. The guests argue that optimizing for benchmarks may actually weaken model performance in other crucial areas, such as cultural sensitivity or natural conversation.

We also discuss Anthropic's research showing that frontier AI models, when given goals and access to information, independently arrived at solutions involving blackmail, without any prompting toward unethical behavior. Even more concerning, the more sophisticated the model, the more susceptible it was to this "agentic misalignment."

Enzo and Sara present Prolific's "Humane" leaderboard as an alternative to existing benchmarking systems. By stratifying evaluations across diverse demographic groups, they reveal that different populations have vastly different experiences with the same AI models (a toy illustration of this stratification idea appears below).

Looking ahead, the guests imagine a world where humans take on coaching and teaching roles for AI systems, similar to how we might correct a child or review code. This also raises important questions about working conditions and the evolution of labor in an AI-augmented world. Rather than replacing humans entirely, we may be moving toward more sophisticated forms of human-AI collaboration.

As AI technology becomes more powerful and general-purpose, the quality of human evaluation becomes more critical, not less. We need more representative evaluation frameworks that capture the messy reality of human values and cultural diversity.
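The stratification idea can be illustrated with a minimal sketch. This is not Prolific's code or the leaderboard's methodology; the group labels, ratings, and function names are entirely hypothetical. It only shows why averaging human ratings per demographic stratum, rather than pooling everything, keeps divergent experiences visible.

```python
# Toy sketch (hypothetical data, not Prolific's platform or leaderboard code):
# average human ratings per (demographic group, model) stratum instead of pooling.
from collections import defaultdict
from statistics import mean

# Hypothetical ratings: (annotator group, model, score on a 1-5 scale)
ratings = [
    ("US, 18-29", "model-a", 4), ("US, 18-29", "model-a", 5),
    ("UK, 60+",   "model-a", 2), ("UK, 60+",   "model-a", 3),
    ("US, 18-29", "model-b", 3), ("UK, 60+",   "model-b", 4),
]

def stratified_scores(ratings):
    """Mean rating for each (demographic group, model) stratum."""
    buckets = defaultdict(list)
    for group, model, score in ratings:
        buckets[(group, model)].append(score)
    return {key: mean(scores) for key, scores in buckets.items()}

for (group, model), avg in sorted(stratified_scores(ratings).items()):
    print(f"{model:8s} {group:12s} mean rating: {avg:.2f}")
```

In this made-up data, a pooled average would hide the fact that one group rates model-a far lower than the other; the per-stratum view surfaces that gap.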
Visit Prolific: https://www.prolific.com/
Sara Saab (VP Product): https://uk.linkedin.com/in/sarasaab
Enzo Blindow (VP Data & AI): https://uk.linkedin.com/in/enzoblindow

TRANSCRIPT:
https://app.rescript.info/public/share/xZ31-0kJJ_xp4zFSC-bunC8-hJNkHpbm7Lg88RFcuLE

TOC:
[00:00:00] Intro & Background
[00:03:16] Human-in-the-Loop Challenges
[00:17:19] Can AIs Understand?
[00:32:02] Benchmarking & Vibes
[00:51:00] Agentic Misalignment Study
[01:03:00] Data Quality vs Quantity
[01:16:00] Future of AI Oversight

REFS:
Agentic Misalignment (Anthropic)
https://www.anthropic.com/research/agentic-misalignment

Value Compass
https://arxiv.org/pdf/2409.09586

Reasoning Models Don’t Always Say What They Think (Anthropic)
https://www.anthropic.com/research/reasoning-models-dont-say-think
https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf

We Need a Science of Evals (Apollo Research blog post)
https://www.apolloresearch.ai/blog/we-need-a-science-of-evals

The Leaderboard Illusion [2025], Shivalika Singh et al.
https://arxiv.org/abs/2504.20879

The Leaderboard Illusion (MLST video)
https://www.youtube.com/watch?v=9W_OhS38rIE

(Truncated, full list on YT)