In the episode "A.I. News: Open Letter To Pause, Existential Risks to Humanity, Its Supporters & Deniers (S5, E8)," I discuss the recent Open Letter to Pause Giant AI Experiments composed by the Future of Life Institute and present the arguments for taking its risk analysis more seriously. With signers of the letter including Elon Musk, Emad Mostaque, Steve Wozniak, Max Tegmark, Tristan Harris, and Aza Raskin, I share some of their positions, drawing on recent articles, podcasts, and videos discussing the dilemma.
#deeplearning #AIrevolution #humanextinction #generativeAI #blackbox #dontlookup #LivBoeree #Danielschmachtenberger #tristanharris #azaraskin #maxtegmark #eliezeryudkowsky #centerforhumanetechnology #lexfridman #Moloch #machinelearning #AGI
Pause Giant AI Experiments: An Open Letter
The A.I. Dilemma – March 9, 2023 by Center for Humane Technology
Meditations On Moloch By Scott Alexander
Misalignment, AI & Moloch | Daniel Schmachtenberger and Liv Boeree https://youtu.be/KCSsKV5F4xc
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Live: Eliezer Yudkowsky – Is Artificial General Intelligence too Dangerous to Build?
The 'Don't Look Up' Thinking That Could Doom Us With AI
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
Please visit my website at: http://www.notascrazyasyouthink.com/
Don't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJen
Instagram: @jengaita
LinkedIn: @jensiciliano
Twitter: @jsiciliano