Duet, Co-Pilot, And A Code Whisperer Walk Into A Bar In San Francisco

tcp.fm · 09/08/23

Welcome to episode 226 of The Cloud Pod podcast - where the forecast is always cloudy! This week Justin, Matt, and Ryan chat about all the news and announcements from Google Next, including - surprise, surprise - the hot topic of AI, GKE Enterprise, Duet, Copilot, CodeWhisperer, and more! There's even some non-Next news thrown into the episode. So whether you're interested in BART or Bard, we've got the news from SF just for you.

Titles we almost went with this week:
🎙️The Cloud Pod sings a duet, guess who was singing
🤖You get AI, you get AI, Everyone Gets AI
🔍Does a Mandiant Hunt, Or does a Hunter mandiant?
🌨️The Cloud Pod goes into ROM Mode
🔎Does a mandalorian Hunt, Or does a Hunter a mandalorian?

A big thanks to this week's sponsor: Foghorn Consulting provides top-notch cloud and DevOps engineers to the world's most innovative companies. Initiatives stalled because you have trouble hiring? Foghorn can be burning down your DevOps and Cloud backlogs as soon as next week.

📰General News this Week:📰

01:23 Introducing Code Llama, a state-of-the-art large language model for coding

So you know GitHub Copilot, Duet AI, and CodeWhisperer… but do you know Code Llama? (Meta, you better get good stickers on this.)
Meta has released the source code for the Llama 2-based, code-specialized LLM in three sizes: 7B, 13B, and 34B parameters.
Each model is trained with 500B tokens of code and code-related data.
The 7B and 13B base and instruct models have also been trained with fill-in-the-middle capability, allowing them to insert code into existing code.
The 7B model can run on a single GPU; the 34B model, however, returns the best results and is the best for coding assistance, while the 7B and 13B are great for real-time code completion.
Training recipes for Code Llama are available in the GitHub repository.
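To make the fill-in-the-middle idea concrete, here is a minimal sketch of how an infilling prompt is assembled for the 7B/13B models, following the sentinel-token format described in the Code Llama release. The helper names are illustrative, and the actual model call (e.g. via Hugging Face `transformers`) is stubbed out; only the prompt assembly and code splicing are shown.

```python
# Sketch of Code Llama fill-in-the-middle (FIM) prompting.
# The <PRE>/<SUF>/<MID> sentinels follow the format published with the
# Code Llama release; the model generates the code that belongs between
# the prefix and the suffix. A real inference call is NOT made here.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble an infilling prompt; the model completes after <MID>."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

def splice(prefix: str, middle: str, suffix: str) -> str:
    """Insert the generated middle back into the surrounding code."""
    return prefix + middle + suffix

# Example: ask the model to fill in a function body.
prefix = "def add(a, b):\n    "
suffix = "\n\nprint(add(1, 2))\n"
prompt = build_fim_prompt(prefix, suffix)

# A real model call would go here; suppose it returned this completion:
middle = "return a + b"
print(splice(prefix, middle, suffix))
```

The point is that the model only ever sees one flat prompt string; the tooling around it is responsible for stitching the generated middle back into the file, which is what editor integrations do for real-time completion.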
04:08 📢 Matthew - "It's interesting; if you go deep into the article there, they start to digress into, like, 'Hey, this 7 and the 13 billion are better for near-real-time response back,' and the 34 billion… is better for fine-tuning for yourself. So they really go into a little bit more detail of how to do it. And, you know, I think they also put out some code snippets if you kind of dive into it a little bit more, which I thought was very nice."

05:32 OpenTF Announces Fork of Terraform

Remember when we talked about OpenTF's manifesto begging HashiCorp to backtrack on adopting a BSL license? Well, guess what? HashiCorp didn't listen. Insert sad sound effect.
In response, OpenTF has officially forked Terraform. They hope to have the repository available to you within the next 1-2 weeks, with the goal of an OpenTF 1.6 release.
Want to keep up with their progress? They've created a public repository where you can track it. Check that out here.

06:37



