Does AI Competition = AI Alignment? Debate with Gil Mark


Feb 09 2025 · 77 mins

My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.

I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.

00:00 Introduction

02:36 Gil & Liron’s Early Doom Days

04:58 AIs : Humans :: Humans : Ants

08:02 The Convergence of AI Goals

15:19 What’s Your P(Doom)™

19:23 Multiple AIs and Human Welfare

24:42 Gil’s Alignment Claim

42:31 Cheaters and Frankensteins

55:55 Superintelligent Game Theory

01:01:16 Slower Takeoff via Resource Competition

01:07:57 Recapping the Disagreement

01:15:39 Post-Debate Banter

Show Notes

Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/

Gil’s Twitter: https://x.com/gmfromgm

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com