AI Snake Oil distinguishes between artificial intelligence applications that genuinely work and those that cannot deliver on their promises. Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton University, argue that while generative AI represents real technological achievement, predictive AI for human behavior—forecasting who will commit crimes, default on loans, or succeed at jobs—is fundamentally unreliable. Drawing on extensive research into the limitations of machine learning applied to human prediction, they show why these systems consistently fail even as their vendors claim success.

The authors trace how flawed studies, misleading metrics, and inadequate scrutiny have allowed ineffective AI products to proliferate across criminal justice, hiring, healthcare, and education. They examine specific failures: recidivism prediction tools no more accurate than simple statistical baselines or untrained humans, hiring algorithms that simply replicate existing biases, and content moderation systems that cannot reliably identify harmful speech. Rather than condemning all AI, Narayanan and Kapoor provide tools for distinguishing genuine capability from snake oil, helping readers evaluate the claims made about AI products. For policymakers, journalists, and citizens navigating an AI-saturated world, this book offers an essential critical perspective.