80,000 Hours Podcast

by The 80,000 Hours team

Education · Technology

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi.

Episodes (40 episodes)

Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health
What does it really take to lift millions out of poverty and prevent needless deaths? In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors — share their most powerful and actionable insights from the front lines of global health and development. You’ll hear about the critical need to boost agricultural productivity in sub-Saharan Africa, the staggering impact of lead poisoning on children in low-income countries, and the social forces that contribute to high neonatal mortality rates in India. What’s so striking is how some of the most effective interventions sound al...
Published: Apr 7, 2026 · Duration: 4h 6m 50s
What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.
When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its supporters are some combination of 'hypocritical', 'naive', and 'anti-democratic'. Rob Wiblin dissects each claim, finding that all three are mediocre arguments dressed up as hard truths. (Though the 'naive' one is at least interesting.) Watch on YouTube: What Everyone is Missing about Anthropic vs The Pentagon. Plus, from 13:43: Leaked documents from Meta revealed that 10% of the company's total revenue — around $16 billion a year — came from...
Published: Apr 3, 2026 · Duration: 20m 37s
Could a biologist armed with AI kill a billion people? | Dr Richard Moulange
Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from that family. That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach. For years, experts have reassured us that 'tacit knowledge' — the hand...
Published: Mar 31, 2026 · Duration: 3h 7m 51s
#240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war
Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways. That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. He forecasts instead a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses. Samuel’s case isn’t th...
Published: Mar 24, 2026 · Duration: 1h 12m 6s
#239 – Rose Hadshar on why automating human labour will break our political system
The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all. That’s the view of Rose Hadshar, a researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment. She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist o...
Published: Mar 17, 2026 · Duration: 2h 14m 8s
#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)
How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that could define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country gained those capabilities first could wield unprecedented coercive power. Today’s...
Published: Mar 10, 2026 · Duration: 1h 11m 3s
Using AI to enhance societal decision making (article by Zershaaneh Qureshi)
The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead. This article is narrated by the author, Zershaaneh Qureshi. It explores why AI decision...
Published: Mar 6, 2026 · Duration: 31m 28s
#237 – Robert Long on how we're not ready for AI consciousness
Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with that? Robert Long founded Eleos AI to explore questions like these, on the basis that AI may one day be capable of suffering — or already is. In today’s episode, Robert and host Luisa Rodriguez explore the many ways in which AI consciousness may be very different from anything we’re used to. Things get strange fast: If AI is conscious, where does tha...
Published: Mar 3, 2026 · Duration: 3h 25m 40s
#236 – Max Harms on why teaching AI right from wrong could get everyone killed
Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and is completely indifferent to being shut down — a strategy no AI company is working on at all. In Max’s view, any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and be the seeds that gro...
Published: Feb 24, 2026 · Duration: 2h 41m 20s
#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’
Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going. She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last...
Published: Feb 17, 2026 · Duration: 2h 54m 37s
What the hell happened with AGI timelines in 2025?
In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of 2025, sentiment swung all the way back in the other direction, with people's forecasts for when AI might really shake up the world blowing out even further than they had been before reasoning models came along. What the hell happened? Was it just swings in vibes and mood? Confusion? A series of fundamentally unexpected and unpredictable research results? Host Rob Wiblin has been trying to make...
Published: Feb 10, 2026 · Duration: 25m 34s
#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety
Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain. From an evolutionary perspective, that’s to be expected, right? If your heart or...
Published: Feb 3, 2026 · Duration: 2h 51m 17s
#234 – David Duvenaud on why 'aligned AI' would still kill democracy
Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of humanity. For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution. Today's guest, David Duvenaud, used to lead the 'alignment evals' team at Anthropic, is a professor of computer science at the University of Toronto, and recently co-authored 'Gradual disempowerment.' Links...
Published: Jan 27, 2026 · Duration: 2h 31m 48s
#145 Classic episode – Christopher Brown on why slavery abolition wasn't inevitable
In many ways, humanity seems to have become more humane and inclusive over time. While there’s still a lot of progress to be made, campaigns to give people of different genders, races, sexualities, ethnicities, beliefs, and abilities equal treatment and rights have had significant success. It’s tempting to believe this was inevitable — that the arc of history “bends toward justice,” and that as humans get richer, we’ll make even more moral progress. But today's guest Christopher Brown — a professor of history at Columbia University and specialist in the abolitionist movement and the British Empire...
Published: Jan 20, 2026 · Duration: 2h 56m 17s
#233 – James Smith on how to prevent a mirror life catastrophe
When James Smith first heard about mirror bacteria, he was sceptical. But within two weeks, he’d dropped everything to work on it full time, considering it the worst biothreat that he’d seen described. What convinced him? Mirror bacteria would be constructed entirely from molecules that are the mirror images of their naturally occurring counterparts. This seemingly trivial difference creates a fundamental break in the tree of life. For billions of years, the mechanisms underlying immune systems and keeping natural populations of microorganisms in check have evolved to recognise threats by their molecular shape — like a hand f...
Published: Jan 13, 2026 · Duration: 2h 9m 40s
#144 Classic episode – Athena Aktipis on why cancer is a fundamental universal phenomenon
What’s the opposite of cancer? If you answered “cure,” “antidote,” or “antivenom” — you’ve obviously been reading the antonym section at www.merriam-webster.com/thesaurus/cancer. But today’s guest Athena Aktipis says that the opposite of cancer is us: a functional multicellular body whose cells cooperate effectively to keep the whole working. If, like us, you found her answer far more satisfying than the dictionary, maybe you could consider closing your dozens of merriam-webster.com tabs and listening to this podcast instead. Rebroadcast: this episode was originally release...
Published: Jan 9, 2026 · Duration: 3h 30m 30s
#142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language
John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, he's written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column. Rebroadcast: this episode was originally released in December 2022. YouTube video version: https://youtu.be/MEd7TT_nMJE Links to learn more, video, and full transcript: https://80k.link/JM...
Published: Jan 6, 2026 · Duration: 1h 35m 5s
2025 Highlight-o-thon: Oops! All Bests
It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

- Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
- Ian Dunt on why the unelected House of Lords is by far the best part of the British government
- Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
- Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as...
Published: Dec 29, 2025 · Duration: 1h 40m 17s
#232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings
Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration. Links to learn more and full transcript: https://80k.info/am25 For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral c...
Published: Dec 19, 2025 · Duration: 2h 37m 15s
#231 – Paul Scharre on how AI-controlled robots will and won't change war
In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” Protocol demanded he report it to superiors, which would very likely trigger a retaliatory nuclear strike. Petrov didn’t. He reasoned that if the US were actually attacking, they wouldn’t fire just five missiles — they’d empty the silos. He bet the fate of the world on a hunch that his machine was broken. He was right. Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, has a question...
Published: Dec 17, 2025 · Duration: 2h 45m 17s