Future of Life Institute Podcast


The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Recent Episodes
  • How Will We Cooperate with AIs? (with Allison Duettmann)
    Apr 11, 2025 – 01:36:02
  • Brain-like AGI and why it's Dangerous (with Steven Byrnes)
    Apr 4, 2025 – 01:13:13
  • How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
    Mar 28, 2025 – 01:34:33
  • Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
    Mar 21, 2025 – 02:23:12
  • Keep the Future Human (with Anthony Aguirre)
    Mar 13, 2025 – 01:21:03
  • We Created AI. Why Don't We Understand It? (with Samir Varma)
    Mar 6, 2025 – 01:16:15
  • Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)
    Feb 27, 2025 – 01:22:33
  • Ann Pace on using Biobanking and Genomic Sequencing to Conserve Biodiversity
    Feb 14, 2025 – 46:09
  • Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective
    Jan 24, 2025 – 01:25:56
  • David Dalrymple on Safeguarded, Transformative AI
    Jan 9, 2025 – 01:40:06
  • Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters
    Dec 19, 2024 – 01:09:26
  • Nathan Labenz on the State of AI and Progress since GPT-4
    Dec 5, 2024 – 03:20:04
  • Connor Leahy on Why Humanity Risks Extinction from AGI
    Nov 22, 2024 – 01:58:50
  • Suzy Shepherd on Imagining Superintelligence and "Writing Doom"
    Nov 8, 2024 – 01:03:08
  • Andrea Miotti on a Narrow Path to Safe, Transformative AI
    Oct 25, 2024 – 01:28:09
  • Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents
    Oct 11, 2024 – 01:30:29
  • Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI
    Sep 27, 2024 – 02:08:44
  • Tom Barnes on How to Build a Resilient World
    Sep 12, 2024 – 01:19:41
  • Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond
    Aug 22, 2024 – 02:16:11
  • Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal
    Aug 9, 2024 – 01:03:10
  • Mary Robinson (Former President of Ireland) on Long-View Leadership
    Jul 25, 2024 – 30:01
  • Emilia Javorsky on how AI Concentrates Power
    Jul 11, 2024 – 01:03:35
  • Anton Korinek on Automating Work and the Economics of an Intelligence Explosion
    Jun 21, 2024 – 01:32:24
  • Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light
    Jun 7, 2024 – 01:36:01
  • Christian Nunes on Deepfakes (with Max Tegmark)
    May 24, 2024 – 37:12
  • Dan Faggella on the Race to AGI
    May 3, 2024 – 01:45:20
  • Liron Shapira on Superintelligence Goals
    Apr 19, 2024 – 01:26:30
  • Annie Jacobsen on Nuclear War - a Second by Second Timeline
    Apr 5, 2024 – 01:26:28
  • Katja Grace on the Largest Survey of AI Researchers
    Mar 14, 2024 – 01:08:00
  • Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
    Feb 29, 2024 – 01:36:05
  • Sneha Revanur on the Social Effects of AI
    Feb 16, 2024 – 57:48
  • Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
    Feb 2, 2024 – 01:31:13
  • Special: Flo Crivello on AI as a New Form of Life
    Jan 19, 2024 – 47:39
  • Carl Robichaud on Preventing Nuclear War
    Jan 6, 2024 – 01:39:03
  • Frank Sauer on Autonomous Weapon Systems
    Dec 14, 2023 – 01:42:40
  • Darren McKee on Uncontrollable Superintelligence
    Dec 1, 2023 – 01:40:37
  • Mark Brakel on the UK AI Summit and the Future of AI Policy
    Nov 17, 2023 – 01:48:36
  • Dan Hendrycks on Catastrophic AI Risks
    Nov 3, 2023 – 02:07:24
  • Samuel Hammond on AGI and Institutional Disruption
    Oct 20, 2023 – 02:14:51
  • Imagine A World: What if AI advisors helped us make better decisions?
    Oct 17, 2023 – 59:44
  • Imagine A World: What if narrow AI fractured our shared reality?
    Oct 10, 2023 – 50:36
  • Steve Omohundro on Provably Safe AGI
    Oct 5, 2023 – 02:02:32
  • Imagine A World: What if AI enabled us to communicate with animals?
    Oct 3, 2023 – 01:04:07
  • Imagine A World: What if some people could live forever?
    Sep 26, 2023 – 58:53
  • Johannes Ackva on Managing Climate Change
    Sep 21, 2023 – 01:40:13
  • Imagine A World: What if we had digital nations untethered to geography?
    Sep 19, 2023 – 55:38
  • Imagine A World: What if global challenges led to more centralization?
    Sep 12, 2023 – 01:00:28
  • Tom Davidson on How Quickly AI Could Automate the Economy
    Sep 8, 2023 – 01:56:22
  • Imagine A World: What if we designed and built AI in an inclusive way?
    Sep 5, 2023 – 52:51
  • Imagine A World: What if new governance mechanisms helped us coordinate?
    Sep 5, 2023 – 01:02:35
Recent Reviews
  • 457/26777633
    Science-smart interviewer asks very good questions!
    Great, in depth interviews.
  • VV7425795
    Fantastic contribution to mankind! Thanks!
    🤗👍🏻
  • malfoxley
    Great show!
    Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can’t miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone that listens!
  • Peterpaul1925
    Amazing Podcast !
    People need to know about this excellent podcast (and the Future of Life Institute) focusing on the most important issues facing the world. The topics are big, current, and supremely important; the guests are luminaries in their fields; and the interviewer, Lucas Perry, brings it all forth in such a compelling way. He is so well informed on a wide range of issues and makes the conversations stimulating and thought-provoking. After each episode I have listened to so far, I found myself telling other people about what was discussed; it's that valuable. After one episode, I started contributing to FLI. What a find. Thank you, FLI and Lucas.
  • jingalli89
    Great podcast on initiatives that are critical for our future.
    Lucas/FLI do an excellent job of conducting in-depth interviews with incredible people whose work stands to radically impact humanity’s future. It’s a badly missing and needed resource in today’s world, is always high-quality, and I'm able to learn something new/unique/valuable each time. Great job to Lucas and team!
  • V. Antimonov
    Really enjoyed Max’s conversation with Yuval
    Keep up the great work!
  • emp_wri
    Informative podcast
    A serious discussion of issues from a scientific and learned perspective without the noise of propaganda or fear mongering. An intelligent podcast for a curious audience.
  • McFarland1911
    Very Useful
    This podcast was an amazing source paired with the “LAW” article on the FLI website for my researched argument