Mar 20, 2025 · 50 mins
Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.
Alex holds a Master of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business. He has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.
This debate was recorded in August 2023.
00:00 Intro and Alex’s Background
05:29 Alex’s Views on AI and Technology
06:45 Alex’s Non-Doomer Position
11:20 Goal-to-Action Mapping
15:20 Outcome Pump Thought Experiment
21:07 Liron’s Doom Argument
29:10 The Dangers of Goal-to-Action Mappers
34:39 The China Argument and Existential Risks
45:18 Ideological Turing Test
48:38 Final Thoughts
Show Notes
Alexander Campbell’s Twitter: https://x.com/abcampbell
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com