Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
Podcast: Dwarkesh Podcast (formerly The Lunar Society)
Pub date: 2023-03-27
I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:

- time to AGI
- leaks and spies
- what's after generative models
- post-AGI futures
- working with Microsoft and competing with Google
- the difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00) - Time to AGI
(05:57) - What's after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post-AGI future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future breakthroughs

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com