Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
Podcast: Dwarkesh Podcast (formerly The Lunar Society)
Episode: Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
Pub date: 2023-04-06
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com