Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & AGI in 2 years

Podcast: Dwarkesh Podcast (formerly Lunar Society)
Episode: Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & AGI in 2 years
Pub date: 2023-08-08



Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

I'm running an experiment on this episode. I'm not doing an ad. Instead, I'm just going to ask you to pay for whatever value you feel you personally got out of this conversation. Pay here: https://bit.ly/3ONINtp

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:02:03) - Scaling
(00:16:49) - Language
(00:24:01) - Economic Usefulness
(00:39:08) - Bioterrorism
(00:44:38) - Cybersecurity
(00:48:22) - Alignment & mechanistic interpretability
(00:58:46) - Does alignment research require scale?
(01:06:33) - Misuse vs misalignment
(01:10:09) - What if AI goes well?
(01:12:08) - China
(01:16:14) - How to think about alignment
(01:30:21) - Manhattan Project
(01:32:34) - Is modern security good enough?
(01:37:12) - Inefficiencies in training
(01:46:56) - Anthropic's Long Term Benefit Trust
(01:52:21) - Is Claude conscious?
(01:57:17) - Keeping a low profile

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

The podcast and artwork embedded on this page are from Dwarkesh Patel, which is the property of its owner and not affiliated with or endorsed by Listen Notes, Inc.
