In the episode "A.I. News: Open Letter To Pause, Existential Risks to Humanity, Its Supporters & Deniers" (S5, E8), I discuss the Open Letter to Pause Giant AI Experiments recently published by the Future of Life Institute and present the arguments for taking the risk analysis more seriously. The letter's signers include Elon Musk, Emad Mostaque, Steve Wozniak, Max Tegmark, Tristan Harris, and Aza Raskin, and I share some of their positions through recent articles, podcasts, and videos discussing the dilemma.
#deeplearning #AIrevolution #humanextinction #generativeAI #blackbox #dontlookup #LivBoeree #DanielSchmachtenberger #TristanHarris #AzaRaskin #MaxTegmark #EliezerYudkowsky #CenterForHumaneTechnology #LexFridman #Moloch #machinelearning #AGI
References:
Pause Giant AI Experiments: An Open Letter
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
The A.I. Dilemma – March 9, 2023 by Center for Humane Technology
https://youtu.be/xoVJKj8lcNQ
Meditations On Moloch By Scott Alexander
https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Misalignment, AI & Moloch | Daniel Schmachtenberger and Liv Boeree
https://youtu.be/KCSsKV5F4xc
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Live: Eliezer Yudkowsky – Is Artificial General Intelligence too Dangerous to Build?
https://www.youtube.com/live/3_YX6AgxxYw?feature=share
The 'Don't Look Up' Thinking That Could Doom Us With AI
https://time.com/6273743/thinking-that-could-doom-us-with-ai/
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
https://youtu.be/VcVfceTsD0A
Please visit my website at: http://www.notascrazyasyouthink.com/
Don't forget to subscribe to the Not As Crazy As You Think YouTube channel @SicilianoJen.
Connect:
Instagram: @jengaita
LinkedIn: @jensiciliano
Twitter: @jsiciliano