The 1000x Developer - a16z Podcast

## Metadata
- Author: **a16z Podcast**
- Full Title: The 1000x Developer
- Category: #podcasts
- URL: https://share.snipd.com/episode/2e54814b-6b6c-43d3-a0fd-527fc42fe672
## Highlights
- AI Revolution: The Future of Programming
Key takeaways:
• The AI is going to be running in the cloud, and this will make it easier to implement suggestions and features.
• Traditional IDEs are going to become less important as the AI develops.
Transcript:
Speaker 1
Like a gradient descent. Sort of, how do you train an AI, actually? So yeah, I 100% agree with that. I think it humanizes the process, as opposed to this mechanistic sort of trial-and-error thing. This is also going to let us iterate a lot faster on tools. Traditional IDEs are very hard to build. They're super complex classic algorithms. Some of these IDEs are super big. You download IntelliJ, that's like a couple of gigabytes of stuff. And they're clunky, they take up a ton of RAM; try starting Xcode, you know, just to confuse your computer. The cool thing about the AI revolution is that the AI is going to be running in the cloud. You're going to give it a prompt, like, you know, what you're talking about, what your code is about. And it'll be able to implement suggestions and implement features and tools without this heavy, algorithmic, hard-to-maintain piece of software. So I'm really excited about that. ([Time 0:18:22](https://share.snipd.com/snip/6903aa0f-1b8f-4e17-a50c-09939101a0e7))
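The gradient-descent aside refers to the standard loop behind training any of these models: repeatedly nudge parameters downhill on a loss. A minimal, self-contained sketch of that loop (illustrative only, not from the episode; the loss and learning rate are made up):

```python
# Minimal gradient descent on a 1-D loss: the basic loop behind model training.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient of the loss."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Example loss f(x) = (x - 3)^2 has gradient 2*(x - 3); its minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges toward 3.0
```

Real training is the same idea over millions of parameters, with the gradient computed by backpropagation instead of by hand.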
- How Ghostwriter Was Built
Key takeaways:
• Ghostwriter was built on Replit's own models, unlike Copilot, which is built on top of GPT-3.
• A main reason for that decision is UX: the user experience is inseparable from the infrastructure behind it, so owning the models enables a better product.
Transcript:
Speaker 2
Well, something that I've heard you talk about is also a decision that you made, which was to build Ghostwriter on top of your own models. Something like Copilot is built on top of GPT-3, to my knowledge, and that's a decision to build on another platform, but you went a different route. So can you speak a little more to how you made that decision, and what kind of inputs led to that output?
Speaker 1
Well, first of all, how crazy is it that Microsoft went with another company's models, whereas Replit built its own thing? There are multiple ways to answer this. One of them is UX. UX is inherently inseparable from the infrastructure for how a product works. I think most people think they're separate things. But if you're serious about making products... there's a famous Alan Kay quote: he told Steve Jobs, if you're serious about making software, you have to make hardware. ([Time 0:19:37](https://share.snipd.com/snip/6c209a48-a35f-4858-b5d7-79ec7372264c))
- Why Replit Brought Its AI Models In-House
Key takeaways:
• For Replit, it was important to control and optimize the latency of a core platform interaction in order to create a great user experience; a third-party API doesn't allow that.
• They also felt they had a data advantage that compounds over time, allowing them to train more advanced AI.
Transcript:
Speaker 1
And so I think for us, it was like, if this is going to be a core interaction with our platform, we have to be able to optimize it. And we have to get the latency down to the point that we feel it's going to be a really great user experience. And we weren't able to really get that when we were hitting something over an API, because the latency would be all over the place. We couldn't get the caching right. We couldn't get the location right. We didn't have control over any of these things. That's a huge downside of being a consumer of an API. And then the other part is a strategic part, which is: if you believe that this is a primary platform shift, and this is going to be a core part of your technology, then you have to build it. If you call yourself a technology company, that means you build technology, right? It doesn't mean you're just building glue code on top of existing technology. Finally, we think that we have a bit of a data advantage, and that data advantage will compound over time. And so it will allow us to train more advanced AIs over time. So all three of these reasons just made it make sense for us to bring at least part of it in-house. I should say that we still use OpenAI for a lot of the bigger workloads that require running large models. ([Time 0:20:41](https://share.snipd.com/snip/d5a7c150-d917-4737-aca6-fdc56013754f))
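The speaker's complaint about API latency being "all over the place" is about tail latency: averages hide the occasional slow request that ruins an interactive experience. A small sketch of how you'd quantify that with percentiles (the latency samples here are invented for illustration; in practice they'd come from real request timings):

```python
# Tail latency is the worry: averages hide the spikes that break interactivity.
def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) of the samples."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Hypothetical request latencies in milliseconds, with one slow outlier.
latencies_ms = [120, 110, 115, 130, 125, 118, 900, 122, 117, 121]

print(percentile(latencies_ms, 50))  # 120 -- the median looks fine
print(percentile(latencies_ms, 99))  # 900 -- the tail is what users feel
```

Owning the serving stack lets you attack that p99 directly (caching, placement, batching), which is exactly the control the speaker says an external API denies you.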
- How Ghostwriter Got to Its Unique Interface
Key takeaways:
• Copilot was originally conceived as a chatbot, but was later pivoted to a "robot on your shoulder" that only speaks up when it has confidence.
• Ghostwriter's UX, what people see and experience, is linked directly to the back-end models as they are built.
Transcript:
Speaker 2
Something that I found really interesting was that Daniel Gross and Nat Friedman were on the Stratechery podcast, and they talked about how Copilot ultimately got to the interface that it now has (they're investors, not the creators of Copilot). Originally, they actually wanted to create it as a chatbot. They thought, oh, people will run into an error, they're going to want to talk to someone and ask, hey, how do I fix this? And they're going to get a response and implement it. But ultimately, they ended up pivoting to what you might imagine as a robot on your shoulder that only speaks up when it has confidence. And so I know it's the early days of Ghostwriter, but any thoughts on how you got to the specific interface that is Ghostwriter today, and how you, as you said, linked the UX, what people see, to what's happening in the back end as you're building these models?
Speaker 1
So I think there are two modalities. One is pull and one is push, right? Pull is: the human knows what they want, and they're going to ask for it. You write a prompt, you wait a little bit, and you get it. ([Time 0:22:01](https://share.snipd.com/snip/d9ec8821-56c3-4f2d-8854-f37741d6b39b))
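The pull/push distinction maps cleanly onto a confidence gate: pull always answers because the user asked; push volunteers a suggestion only when the model is confident enough to interrupt (the "robot on your shoulder"). A minimal sketch; the threshold value and the stand-in model are invented for illustration:

```python
# "Pull": user-initiated, always answer. "Push": model-initiated, stay silent
# unless confidence clears a threshold (the robot that only speaks up when sure).
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, not from the episode

def pull(model, prompt):
    """User asked: always return the model's suggestion."""
    suggestion, _confidence = model(prompt)
    return suggestion

def push(model, context):
    """Model volunteers: return a suggestion only when confident, else None."""
    suggestion, confidence = model(context)
    return suggestion if confidence >= CONFIDENCE_THRESHOLD else None

# A stand-in "model" returning (suggestion, confidence).
fake_model = lambda text: ("add a null check", 0.9 if "error" in text else 0.3)

print(push(fake_model, "error: NullPointerException"))  # add a null check
print(push(fake_model, "refactoring notes"))            # None
```

Tuning that threshold is the UX decision: too low and the assistant is noisy, too high and it never helps unprompted.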
- The Future of Development
Key takeaways:
• The "middle end" of the coding stack is likely to disappear under pressure from both sides, as front-end engineers gain access to powerful platforms and AI tooling.
• Back-end, low-level platform engineers will become dramatically more productive: today's 10x developers become 1000x developers by delegating to AI.
Transcript:
Speaker 1
And so the middle end, I think, will probably disappear because of that, because of pressure from both sides. The front-end engineer is just going to get way, way more powerful. The front-end engineer will be able to build full-stack products just because they have access to all these really powerful platforms. And they're going to be able to produce a lot more, to use AI in every part of the coding process, whether it's testing, CI/CD, design; everything is going to be powered by AI and just made a lot better, including quality control, by the way. So that's the front-end side. And then on the back-end, low-level, platform-engineering side, I think those people are just going to get a lot more powerful. Like, imagine John Carmack, right? John Carmack is what we'd call a 10x developer today. Giving him an army of AI developers that he could delegate work to, that he could ask questions of, is just going to make him 100x, 1000x more productive. And so you can have maybe fewer of those low-level 10x engineers, but they're going to be 1000x engineers. And so maybe a single company would need two or three elite engineers, and then maybe dozens of front-end engineers building all these products and maintaining them from the customer's seat. ([Time 0:27:20](https://share.snipd.com/snip/8c08d286-4491-4b55-8dcf-83b82543348f))