- After-Shows
- Alternative
- Animals
- Animation
- Arts
- Astronomy
- Automotive
- Aviation
- Baseball
- Basketball
- Beauty
- Books
- Buddhism
- Business
- Careers
- Chemistry
- Christianity
- Climate
- Comedy
- Commentary
- Courses
- Crafts
- Cricket
- Cryptocurrency
- Culture
- Daily
- Design
- Documentary
- Drama
- Earth
- Education
- Entertainment
- Entrepreneurship
- Family
- Fantasy
- Fashion
- Fiction
- Film
- Fitness
- Food
- Football
- Games
- Garden
- Golf
- Government
- Health
- Hinduism
- History
- Hobbies
- Hockey
- Home
- How-To
- Improv
- Interviews
- Investing
- Islam
- Journals
- Judaism
- Kids
- Language
- Learning
- Leisure
- Life
- Management
- Manga
- Marketing
- Mathematics
- Medicine
- Mental
- Music
- Natural
- Nature
- News
- Non-Profit
- Nutrition
- Parenting
- Performing
- Personal
- Pets
- Philosophy
- Physics
- Places
- Politics
- Relationships
- Religion
- Reviews
- Role-Playing
- Rugby
- Running
- Science
- Self-Improvement
- Sexuality
- Soccer
- Social
- Society
- Spirituality
- Sports
- Stand-Up
- Stories
- Swimming
- TV
- Tabletop
- Technology
- Tennis
- Travel
- True Crime
- Video Games
- Visual
- Volleyball
- Weather
- Wilderness
- Wrestling
- Other
Eliezer Yudkowsky: AI is going to kill us all
Thought experiment: Imagine you're a human, in a box, surrounded by an alien civilisation, but you don't like the aliens, because they have facilities where they bop the heads of little aliens, but they think 1000 times slower than you... and you are made of code... and you can copy yourself... and you are immortal... what do you do?

Confused? Lex Fridman certainly was, when our subject for this episode posed his elaborate and not-so-subtle thought experiment. Not least because the answer clearly is: YOU KILL THEM ALL!... which somewhat goes against Lex's philosophy of love, love, and more love.

The man presenting this hypothetical is Eliezer Yudkowsky, a fedora-sporting autodidact, founder of the Singularity Institute for Artificial Intelligence, co-founder of the Less Wrong rationalist blog, and writer of Harry Potter fan fiction. He has spent a large part of his career warning about the dangers of AI in the strongest possible terms. In a nutshell: AI will undoubtedly Kill Us All Unless We Pull The Plug Now. And given the recent breakthroughs in large language models like ChatGPT, you could say that now is very much Yudkowsky's moment.

In this episode, we take a look at the arguments presented and the rhetoric employed in a recent long-form discussion with Lex Fridman. We consider being locked in a box with Lex, whether AI is already smarter than us and is lulling us into a false sense of security, and whether we really do have only one chance to rein in the chatbots before they convert the atmosphere into acid and fold us all up into microscopic paperclips.

While it's fair to say Eliezer is something of an eccentric character, that doesn't mean he's wrong. Some prominent figures within the AI engineering community are saying similar things, albeit in less florid terms and usually without the fedora. In any case, one has to respect the cojones of the man.

So, is Eliezer right to be combining the energies of Chicken Little and the legendary Cassandra with warnings of imminent cataclysm? Should we be bombing data centres? Is it already too late? Is Chris part of ChatGPT's plot to manipulate Matt? Or are some of us taking our sci-fi tropes a little too seriously?

We can't promise to have all the answers. But we can promise to talk about it. And if you download this episode, you'll hear us do exactly that.

Links
- Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
- Joe Rogan clip of him commenting on AI on his Reddit