This week we focus on the latest in AI and developer productivity, covering tools like Phind, GitHub Copilot, and Continue.
Conclusion:
AI is rapidly democratizing innovation in the developer community. With the right skills and mindset, developers can leverage these tools to boost their productivity and creativity.
We've been recording ever since you stopped talking about proprietary information. Oh. Sean, Jacob, Brandon, these three guys are the best in the business here. And in this specific room. In this room. I'm definitely top three in this room. And welcome back to the Big Cheese AI Podcast. I'm your 28th best moderator, Andrej Herakles. I'm joined by Sean Hize and Jacob Wise, two tech leaders in Indianapolis. And last but not least, Brandon Corbin, one of the brightest AI minds in the Midwest. Guys, this is episode three. How are we feeling about the podcast? Yeah, I'm having a good time. Well, before we start diving into developer productivity, which is the topic of the podcast today, we had some stuff come over from the Biden administration. So does anyone have any kind of recap or something we can tell the audience about what this means for AI and the development of it over time from the president? So basically what they're trying to do. So there's the executive order that came out for AI in America. There's also the AI council that's happening in the UK. And all these governments are basically trying to come together to figure out how do we get our hands wrapped around this potentially human-destroying technology? It almost feels like we're being misled, right? Like AI is going to potentially be destructive to humanity and all this stuff. When in reality, we have stupid LLMs that require 128 gigabytes of memory to be able to do anything. It's not like they're going to break out all of a sudden and be like the Terminator and all this nonsense when in reality, the real thing that's concerning me is quantum computers, right? Quantum computers are a bigger threat to humanity than AI is, at least right now. Now when we combine AI and quantum computers, then yeah, then it's the end of the world. But right now? The quantum computer. So I have a stat that I was looking up specifically for this. So right now, encryption.
So your blockchain, your HTTPS, like everything that's kind of secured, most of it's going through 2048-bit RSA, which is just an encryption scheme. And so right now, a classical computer would take a billion years to be able to break that RSA encryption. Okay, so that's how we know it's safe, because it takes a billion years and no one's going to spend a billion years. With a quantum computer, it's 100 seconds. 100 seconds. And now all of a sudden, within 100 seconds, you could potentially break any wallet, any crypto wallet, any secure transmission between two different computers, anything that's stored encrypted on your device. 100 seconds, it can be broken. Why are we not talking about that versus these stupid LLMs? Again, they appear intelligent, but calling them artificial intelligence, the only reason we're doing that is because it's what people understand. So that makes a lot of sense. And a lot of people are looking at it like, oh, AI is going to take over. That's the clickbait thing. Right. But if you look at the executive order, it's more aligned with how the day-to-day average person is mentally modeling AI. What is it? This is a threat to me. So a lot of the provisions in the executive order have to do with protecting workers' rights and protecting the citizens of the United States, not from impending doom from an AI takeover, but more from a, you know, we need to make sure that people are protected. So there are eight principles in the executive order, and six out of eight of them have to do in some way with risk management. Right. Right. Two of them have to do with advancing things. And in this executive order, which you had to put into an LLM just to even understand it, because it's so long. Claude, thank you. Right. The word investment is written five times. Right. So this isn't about investing in AI.
Now, there are provisions in this that talk about investing in AI. And they actually have, I didn't even know this, a tech modernization fund at the federal government level. Yeah. I actually found how much money they have. It looks like they've spent 750 million and they have like 1.25, but it's basically a blank check. So to parlay onto that, there's the fellowship. They have basically an AI fellowship that you can go sign up for. And if you do get accepted, I think it's like a 12 to 24 month run where they actually are going to put you into a government program to basically try to figure out how they can leverage AI to improve it. And I think the pay for it is like 150. Yeah. But you need to be in DC, and you're going to be in DC for those 12 to 24 months. There's a lot of specific line items in this that have to do with basically asking government agencies to start implementing AI, which I thought was the more interesting thing about this. Because you don't just get the high-level gobbledygook. Right. You get these specific things, like, within 360 days, this must happen. A lot of it is, hey, we should probably start looking at AI use in government from a government operations perspective. You never hear the government talk about making their operations more efficient. But at the end of the day, when you read it, it's mostly about trying to protect American citizens from the, you know, the threats of AI, which really have to do with what we've been talking about from the beginning, which is, am I going to have a job next week? Right. Yeah. Also the bias, right? Like, oh, the bias is in there. That's one of the things. Yep. It's in there. They want to make sure that, but at the same time. They just basically said they don't like that. Yeah. They didn't really say. Yeah. We don't want your AIs to be biased against, you know, people who don't have white skin.
But the reality is, it's kind of like, well, isn't it like the credit score? I mean, the credit score is, you know, suspect. And so there are so many different, so I don't know. They're basically making sure they get it out there that they want to mitigate risk and they want to mitigate threats. Those aren't any different than the things we've already seen. There are a lot of provisions that potentially will spend money on things. A lot of those things seem to be projects related to research and identifying opportunities, but not necessarily investing in a new thing that's going to, you know, get us to the moon faster or advance our society. Well, I'm just looking at the roadmap here that they just released yesterday. And Azure and Microsoft, obviously, are deeply connected with the government. And they've got, in Q1 of 24, they're going to be using OpenAI in Azure Government. And they're going to be doing Copilot, Microsoft 365 Copilot, it looks like summer of 2024. So it's coming, like absolutely coming. The investment is here. They're going to be implementing AI into government institutions. And this is just all a part of the greater conversation of how do we protect against it? How do we make things more efficient and do it in a responsible way? So, yeah, so there's basically nothing to fear. The Biden administration has come out with an executive order, because AI is the biggest thing since sliced bread right now, to basically assure the nation and people who are working regular jobs that, hey, there are provisions within our government system that are going to protect you from some revolutionary technology. Basically is what we're saying. It's not too dangerous. It's obviously someone spent a lot of time on writing this. Sam Altman from OpenAI is highly motivated to have regulations be added, right? Because they're primed for it. They're basically saying, yeah, here's how you should do it.
It just happens to align with how their business model is set up. Right. And the government seems like a new customer. Exactly. And it seems like right now, the regulation will come in when your models hit a certain size, right? Like, I forget what the exact parameters are, but a model that hits this size needs to be regulated a little bit differently. And it just happens to be just on the other side of, I think, GPT-4. Right? Like once it passes GPT-4 in the size of the data set and how much hardware it takes to train these models, they need to go through some sort of a check, if you will. But even then, that seems short-sighted, I guess would be the right word. It's almost like when the iPhone first came out. So we're talking 2007, right? 2007, the iPhone comes out and we realize that, oh, people are really obsessed with their phones. We need to implement a bunch of things to kind of regulate and control it. And the first version of the iPhone was nothing. So the moment that these large language models start coming, the next generation, they're all going to basically fall within the purview of whatever regulation they want. I don't know. It just seems like they need some talking points. Yeah, well, I definitely feel like we can all agree that Sam Altman is an esoteric founder, and you should be like 25 percent as good as him and you'll be extremely successful. Yeah. Well, the point of this podcast is using AI for developers and developer extensions and tools and things like that. So the first question we have is: how transformative are tools like Copilot and AI Genie for the modern developer's workflow?
Yeah, I've been using Copilot since, you know, early, early release. And really these tools are a lot like the generative AI tools, where at first you get kind of the wow factor of like, oh my God, this is magic. I can do anything with them. I'm never going to have to code again. But the reality is they're great supplementary tools. It's like a calculator, right? Think about long division: doing it on paper takes forever, but you still kind of have to understand what's going on there. When you get a calculator, I can punch that in and I can do that math a lot faster. Right. So it's the same conversation we've been having about the other set of tools: the people who understand how to leverage the tools are going to get a lot further, a lot faster. Now, I use these tools every day, and I'm having trouble keeping up with even the tools that are coming out. Right. It's just incredible the velocity at which things are picking up. Just the other day, I think I installed Continue.dev. VS Code, by the way, is the code editor that I use. A lot of these things are built as extensions into that tool, and then they're kind of sitting alongside your editor. I mean, I'm still blown away at some of the things that you can do. And we can talk about a lot of different specific tools. But in general, it should elevate, just like the other generative tools, everybody's skill set. The baseline is getting higher. The main product that's sold in the marketplace is Copilot, which is owned by GitHub, which is owned by Microsoft, which is partnered with OpenAI. Right. Right. OK. So is Copilot using... Well, that's so... So let's just talk about Copilot for a second, and then I have some questions. So Copilot is most... The way that I use it is it's just an installed extension on VS Code, which is the most common text editor development tool. I used to use Sublime. Yeah. Yeah.
And then VS Code came and I was like, "Whoa, this is way better. And wow, it's VS Code." But is that how you use it? Yeah. I mean... Like while you're writing code, basically, in the code editor, this is augmenting what you do? Yeah, absolutely. Yeah. Copilot is the... I would say the most brain-dead use case as far as using LLMs: you can literally type a comment that says, "I want a function that does X, Y, and Z," and then tab or enter, and it will start to suggest what that code should be. I would say most of the time the code needs at least a good amount of editing. But the cool part is it can help you get unstuck, right? So those moments where you're just like... I mean, I find it for dummy data a lot. It's very cool. I'm trying to mock up data. I don't have an API ready yet. I'll go and I'll stub out my component, and then I'll stub out an object at the top, and I'll just start hitting tab and it'll just populate fake data for me, which is cool. It's really cool. That's just one use case, obviously. But yeah, the easiest way and the most intuitive way to use it is just the autocomplete inline in the actual editor. And it's $10 a month per user, right? Yeah. Yes. So the pricing is... They made it very affordable. So $10 a month. It's not token-based. Yeah. So you can just do unlimited. I mean, so you're getting at this... Who's the product? Yeah. You better train your models. We know we're the product. We know we're the product. But this is... So for all you developers out there, right, that are writing code, that are building these products, that have the opportunity to use modern tools... I know that some of you might be on like NetBeans or actually have to use Xcode or some terrible thing, but now you can use Xcode, right? Xcode. Anyways, long story short: this is a must-have tool. This is a baseline use of AI to be productive as a developer.
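The comment-driven flow described above looks roughly like this. Everything here is illustrative rather than a transcript of an actual Copilot session: you type the comment (and maybe a first line), hit tab, and Copilot drafts the rest, including mock data like the dummy-data use case mentioned.

```javascript
// The kind of inline completion Copilot suggests from a comment.
// All names and values below are made up for illustration.

// generate an array of mock users for the UI while the API isn't ready
const mockUsers = [
  { id: 1, name: "Ada Lovelace", email: "ada@example.com", active: true },
  { id: 2, name: "Alan Turing", email: "alan@example.com", active: false },
  { id: 3, name: "Grace Hopper", email: "grace@example.com", active: true },
];

// "I want a function that returns only the active users, sorted by name" --
// a comment like this is enough for the assistant to draft the body:
function activeUsersByName(users) {
  return users
    .filter((u) => u.active)
    .sort((a, b) => a.name.localeCompare(b.name));
}
```

As the speakers note, the suggestion usually needs editing before it's production-ready; the win is getting unstuck, not getting finished code.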
And for us as a company, we tried it at first and we did not let our junior developers... I hate that word. Junior developers use the tool. I think there's some detriment potentially there. We also were testing out for a long period of time because you're talking about something that's impacting your code base because you can start typing and it'll suggest something. You press tab, that's in your code base. So, but I think the thing about Copilot is it's the go-to. It's literally the Gmail right now of code assistance. And you should, for $10 a month, you probably want to check that tool out. Well, Jacob, so you're not just developing in a vacuum, like you guys own a firm. And so you're having to collaborate with other developers. So platforms like Continue, could you talk a little bit more about what that enables you in regards to collaboration through AI? Yeah. Yeah, yeah. So Continue is one I haven't used as much, but we were talking about Ollama as a local LLM last week or a couple of weeks ago. And it's really cool because it can actually add context to your code base. So VS Code, I'm sure, or Copilot, I'm sure has some level of context, but this Continue allows you to pick specific models and then load up as many files as you want, whatever code base you want, and then ask it questions about said code base. Like, can you please refactor this component to do X, Y, and Z? Or can you help me find bugs or efficiencies in this file? That helps as far as like from a code review perspective. Oftentimes what I'll do is I'll take a developer's code base and I will just poke at it a little bit and ask a few questions and see what just what bubbles up from that. So it's a good tool for quickly assessing or just like basically refactoring. The other day I wanted to write something in Next.js that someone had written in Vue. I just said, please rewrite this in Next.js and it did it pretty good. What's the impact of that? So it's the Gmail. 
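Picking a specific model in Continue, as described here, happens through its configuration file. Continue's config format has changed across versions, so treat this as a sketch of the general shape rather than a copy-paste recipe; the provider and model tag assume a local Ollama install running Code Llama:

```json
{
  "models": [
    {
      "title": "Code Llama (local)",
      "provider": "ollama",
      "model": "codellama:7b"
    }
  ]
}
```

With something like this in place, you can select files or functions in the editor and ask questions against the local model, with no per-token billing involved.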
How much time is it actually saving you guys, having this kind of tool? Yeah, personally, what I found it to do is not save as much time as make code quality better. I still spend about the same amount of time as it would have taken me to review or to do a task, but I'm noticing the outputs are a lot better. I think for people that are in a flow state, which for me never happens because I'm in and out of 20,000 things... Jacob's pretty similar. He probably gets there more than I do. But I think for someone that's in a flow state, it could probably make you faster for sure. I like the idea about increasing code quality and reviewing pull requests. I think those tools will get augmented into those platforms. But just from a pure developer productivity perspective, there's definitely gains for me. I don't have to go Google all the different array methods in JavaScript every two days anymore. It's just kind of in there. You're just talking about productivity. Every percent matters. And we talk a lot about getting unstuck, doing the things you don't like doing. Yep. Right? Yeah. Unit tests. Right. Because we talked about that a lot. Writing code faster is one thing. Right. But the other side of developer productivity is doing the things that take a lot of time: builds and tests. So for you, how is AI helping? Because I know that you're all in on that. Those are the two tasks that, when you look at GitHub surveys and data, are really killing developers when it comes to productivity. So how are you using AI to help you with that? So I use Continue, but I use a local LLM because I'm too cheap. Right? Like I don't want to pay $10 a month. He doesn't even want my Google Workspace, folks. Yeah. No, seriously. Every time that something comes up, it's like a per month. I'm like, I already have like a thousand of these damn things. I don't need another. So I'll always try to find ways to run it locally or to do it for free.
So I'm extremely cheap when it comes to that stuff. So I use Continue. I'll use Continue with Code Llama, just because Code Llama seems to be the one that runs. I'd like to use Phind's model, but it's like 39 gig or something. So it's a fat boy. Yeah. And it hasn't been quantized. We said Continue. What is Continue? Continue.dev is the extension, and it's just the domain as well, continue.dev. And it's just an extension that lets you basically pick which large language model you'd like to use. And so it's kind of Copilot-esque, but the difference is that it's only ever happening in the left pane of VS Code. So it's not actually inline as you're typing and all that stuff. I saw that the other day. Yeah. And so I'll use it for, I'll select a whole function and just be like, try to find any potential bugs. Right. And it'll go through and try to find them. Comment this code is another one that I use all the time. Just select it, say, comment this code, and then it just goes through and starts to do it. So something that developers basically never do. Exactly. You're supposed to comment every single line of code you write to tell what it does, so that if someone else comes in, you know. Or you in six months come back and say, what did I write? Or you can write yourself a little note, like, please refactor this. Yeah. And you don't have a deadline. Oh no. So yeah. I mean, comments. What about for tests, though? Because no one wants to write tests. Like, I'm not writing... So unit tests, all the time. I select the whole function. So I'm doing it all with Deno and Oak. Right. And so then I just select that function and say, hey, write me a Deno-based test. Deno's a Node replacement. What's Oak? Oak is a RESTful framework built for Deno. Okay. And so you basically just include it.
Again, I love Deno. I really do. The more I use it, I'm just like, this is native TypeScript support, right? Just out of the gate. You don't need to do any transpiling. You don't need to do any of this magic. It just works. Nice. And it's got its own test suite. So it's got its own, you know, imports for assertions and whatever. But yeah, then it just goes through and it writes out these test scripts. And it's just like, wow. Now, again, with Code Llama, I'm using the 7-billion-parameter one. Half of the time I need to go in, because it starts just dumping out, "Okay, here's what I'm going to do first," and then it gets to the code. So I've got to go delete the "okay, here's what I'm doing first" out of the code. And then it's like, end code. I've got to go clean that up. And every once in a while, I find a few hangnails. And I'm just like, what the hell? Where did this even come from? Yeah. I think I sent some code to Continue last week that had some weird character inside it and it broke everything. But you know, something that came up this week was Phind's Code Llama model outperforming GPT-4. So Phind, let's talk about it. Just tell everybody what Phind is. Because it's something that feels like it's been out there. PHIND.com. It's really great. And it's underutilized. This is another one that Sean and I caught on to pretty early. And we love it because it's less of, you know, give it code and it tells you what to do or how to do it. It's more of a search engine pair programmer for coding. And I actually used it this morning. I used it today. Well, I wanted to mess around with LinkedIn's API and I wanted to build out these services for it. And I was like, okay, I'm going to go to the docs and do my usual, go to the docs, try to figure it out. And, you know, I'm skilled enough at that.
I could have done that, but I used it and it helped me get from start to... I mean, I was OAuthing into LinkedIn within an hour and grabbing connections. The only hangup was the scope for the token wasn't good, but, you know, whatever, LinkedIn blocks everything now. But anyway, the point is, Phind.com is a great tool as a kind of collaborator: ask a general question. I roughly know what I'm trying to accomplish. I don't know all the specifics. If I did, I would just do it. Right. And then it can give you some suggestions on how to accomplish that goal. I use it for database architecture design. A question might be like, I'm trying to build a messaging system. What's a good database architecture for that? The traditional path would be: go to Google, read a couple of Stack Overflow threads. People say, it's this, or you could use this schema, and these are the advantages and disadvantages. It kind of does that work for you, with sources, with code examples. And it's just like every other tool. It's not the answer, but it helps me get from zero to the finish line, or start to finish, a lot faster. I use it for testing if a framework has a capability. So I'm like, hey, I want to do this in this framework, and I see what it comes up with. If that part of the function comes back blank, you're like, okay, yeah, it doesn't support that. Right. And then the other ancillary thing that I love about Phind, which I think is an interesting thing, is that it actually provides links to the web pages it referenced as sources. So somehow, when they built their model... I love that, because they're basically crediting the websites, and it's a lot of Stack Overflow links, I'm sure. I hope they're throwing them some bones. But it gives you: okay, here's the answer, but here's where I got that answer, and it's on the web. Yeah.
Well, it makes sense to me, because that's the general flow for me as well, which is: in the past, I would look up a problem, Google it, click on the top three or four links, try to understand what each link is talking about, and then use my brain to condense that into what the commonalities were between all those links. It's doing that work for me. So now I'm jumping in on the second step, which is: oh, here's all the information as one message, and here are the links to the originals if you want to check that out, and here's what you could do with that. So I absolutely love it. So Phind, actually, though, there is some controversy around it. I wouldn't be surprised. Yes. And so this is the best. Basically, someone on Reddit was asking, like, what's up with Phind? Why are people all up in arms? And so I'll just read you what the top comment is right now. So this is their current business model strategy. Release an open source model that performs well. Let the community embrace it and get hyped. Find an investor and sell the story of the next OpenAI. Get a huge valuation and raise a ton of cash. Go closed source due to competitive reasons and cash out some founder's equity on the next round. Right, because I'm using it for free right now. Yeah. Yeah. So apparently, version seven is going to be proprietary. And so they're kind of doing the OpenAI rug pull, which is a reality, right? OpenAI was originally built... they had the foresight to say, we need to make this open source, so it's not one proprietary company that basically controls all of AI. Right. They didn't want to have a repeat of what Google... It's called OpenAI. Right, exactly. So that's when Elon came on. Yeah, nonprofit. And Elon said, OK, yeah, I'll put billions of dollars into this because it's the right thing to do. And then they all of a sudden go, oh, this is too powerful for the public. We're going closed source.
But isn't that just the thing now? Like, if you really want to create a billion-dollar company, you give your crap away for free for way too long, because you have so much runway. Exactly. They pull it right out from under you. Just be like, thank you. I'm glad you brought that up, because I use it all the time, and I'm sitting here thinking to myself, how are they affording to do this for free? Yeah. And I mean, of course, I was like, they've got investors and there is going to be eventually a way to monetize all this. But yeah, that was always a burning question for me. And it is. I mean, so again, but people love it. But yeah, I think it's going to be a similar thing. And apparently the WizardLM crew are saying that Phind basically trained all their stuff on their data set. So there's a bunch of internal... Until further notice, you can go to their website. It will help you. Right. Yeah. Just don't get too comfortable with it, because you might end up paying $24 a month for it. Yeah. Well, and here's the other interesting part that I found during my research: ChatGPT-4 versus Phind's Code Llama. Phind's Code Llama is performing better at programming tasks, but ChatGPT-4 is way better at understanding vague instructions. So it's still the winner in my book, where it's like, I am a programmer that's trying to figure out what I'm trying to do. The code output may not be as good as if I knew what it was. And eventually, I think there's going to be a hybrid model, where you send it through the LLM that understands human language better, and then you send it to the one that understands how to write programs better. So almost like a "create my prompt." Like DALL·E. Like DALL·E is doing with ChatGPT. For the developer productivity exercise, I went and said, I want to look at, on Ollama, one of the more code-focused models. And there's a model out there called SQL Coder. So you can install SQL Coder.
Well, SQL is a querying language for relational databases. Right? Well, you know, writing some of those queries can be pretty heavy. And so, you know, it's not a ChatGPT-type thing. Right? Right. But what you can do, and what I did, is you can give it your database schema, your Postgres database schema. You create a model file locally, and then you create a model. They call it a model, but you create a model that is prompted based off of your database schema. Right. And then you load that thing up, and you can just say, hey, show me all my top-spending customers. Right. Yeah. So then it just generates the SQL based on the model. I tested it and it works. That's the thing that I've been really excited about. Like, you know, I've used ChatGPT-4 and Phind and all these things for a while now, and they're really, really good. But, and it's just like every other use case, once you start adding context of what you're actually working on, it gets way more powerful. I don't know how many times I've used it for SQL and it says, assuming your table name is this, assuming the table structure is this. But this thing knows your table structure. Which is great. But here's the thing that, in my review of this for this week's episode, I noticed: this ain't no ChatGPT. Like, if you don't prompt it perfectly, it starts telling you about John Smith's database for 1984. You know what I mean? I mean, seriously. And I think that for developers that are out there trying to do things outside of this easy, happy path... I mean, one of the reasons why ChatGPT is so popular is it's so usable. You just throw anything at it and it works. But for developers: don't get discouraged, like me, and keep pushing, because I think a lot of it comes down to the prompt. Yeah. Well, and people don't understand that ChatGPT is a full product, right?
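The "model file" workflow described here maps to an Ollama Modelfile. The `FROM` and `SYSTEM` directives are real Modelfile syntax, but the schema and the model tag below are illustrative assumptions, not the speaker's actual setup:

```
# Modelfile -- a sketch of prompting SQL Coder with your own schema
FROM sqlcoder

SYSTEM """
You translate questions into PostgreSQL queries for this schema:

CREATE TABLE customers (id serial PRIMARY KEY, name text);
CREATE TABLE orders (
  id serial PRIMARY KEY,
  customer_id int REFERENCES customers(id),
  total numeric,
  created_at timestamptz
);

Return only SQL.
"""
```

You'd then build and query it with something like `ollama create my-sqlcoder -f Modelfile` followed by `ollama run my-sqlcoder "show me all my top spending customers"`. Because the schema is baked into the system prompt, the model no longer has to say "assuming your table name is..." — though, as noted in the conversation, a loosely worded question can still send it off the rails.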
So when you're sending your prompt in, there's a lot of magic going on before it even gets to the large language model. Right. And even the output: after the output's done, they're sitting there scanning it, making sure it's not trying to say anything nefarious or whatever. Then when you go and just run Code Llama, you're like, what is this? Like, you can get some screwed-up stuff. Totally. Especially when you're training it on your own. Right. Okay. Well, Brandon, break this down for me then. Yeah. So as a person who's non-technical, I go to ChatGPT-4 and I go to put in my prompt. Right. Tell me a little bit more about how I use it the right way, how to really structure it. And literally, I want to know, from the moment I click enter, you know, how is it coming back and understanding what I'm putting in? So maybe I can actually get better at it. Right. So we know that when a prompt comes in, they're doing some pre-processing. It's then getting sent to the large language model. And then as the output's getting streamed back, they're also doing some post-processing stuff, because every once in a while you'll be kind of halfway through and then it just stops and throws their little error thing. Right. So that kind of tells us that it's monitoring the output as well, to say, hey, here are our rules, here are our rules of engagement, and if it's breaking them, we just need to stop it. But now, for ChatGPT specifically, if you're looking for ways to ensure the best prompt, it's kind of like: I want you to act like this, to do this, to give me output like that. Right. Those are the three steps that you really want for any kind of prompting. I need you to act like a marketer from Apple, to write me headlines for my website, that resemble something like this. Right.
And you give it a couple examples, with those three kind of rules, with ChatGPT specifically. And the others probably mimic it, just because ChatGPT is, you know, what everybody's copying. That would be the best way to kind of get your output. But as far as when we put the input in until we get the output out, there's a lot of magic happening beyond just a straight-up large language model processing the tokens. And so that's where I think a lot of people are like, oh, I can just run Mistral, or I can run these on my local machines. And then you're just getting these results that are like, this kind of sucks. It's because you've got to put a lot of work into it, you know, let alone that you've got millions of people trying to hit this endpoint at the same time, and we're just running this one local model and it's still kind of flaky. So there's just so much magic that goes on in what OpenAI and Claude and Bard and all those guys have done to be able to scale this. When I take a step back, I'm sitting at a table full of 10x developers. Sean, Jacob, Brandon, these three guys are the best in the business here. And in this specific room. In this room. I'm definitely top three in this room. Yeah, seriously, these are 10x developers who have built successful companies and deployed a ton of products and have been researching this stuff for years, on the bleeding edge of a lot of things. Well, as a developer, you guys are talking at like this level. I think what might be a little beneficial is to go around the square here really quick. And just: if you're just getting started as a developer, what makes a good developer? What things do you focus on? And how, as a developer that's kind of up and coming and getting into the space with AI, how would you look at it if you started there with the knowledge you have now? Yeah. Well, I think time and a relentless pursuit of figuring out whatever you were trying to figure out. You just can't stop.
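The role/task/output-format structure described above (plus a couple of examples) can be sketched as a small prompt builder. The helper and its field names are hypothetical, purely to make the three parts concrete; this isn't any real library's API.

```javascript
// Assemble a prompt from the three parts discussed: a role ("act like..."),
// a task, and an output format, with optional few-shot examples appended.
function buildPrompt({ role, task, outputFormat, examples = [] }) {
  const exampleText = examples
    .map((ex, i) => `Example ${i + 1}:\n${ex}`)
    .join("\n\n");
  return [
    `Act as ${role}.`,
    `Task: ${task}`,
    `Output format: ${outputFormat}`,
    exampleText,
  ]
    .filter(Boolean) // drop the examples section when none are given
    .join("\n\n");
}

const prompt = buildPrompt({
  role: "a marketer from Apple",
  task: "write me five headlines for my website",
  outputFormat: "a numbered list, each headline under eight words",
  examples: ["Think different.", "Privacy. That's iPhone."],
});
```

The same structure works whether the text is pasted into ChatGPT or sent to a local model, though, as the speakers note, a local model gets none of the pre- and post-processing a hosted product wraps around it.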
You can't give up. You can never stop. For me, I beat video games when I was a kid. I was the kid that wouldn't stop until I beat the game. The only damn thing I could never beat was 8-2 in Super Mario Bros., for some reason. I'm still pissed about it. My dad was the same way. He beat GoldenEye on 00 Agent and unlocked all the secret levels, and beat Metal Gear. We would sit there and beat video games. When I was a kid, that unlocked my ADHD thing, which is: I'm never going to stop. And so when I was in college, I found out there were some kids that could make websites and I couldn't, and it really made me mad. So I just went, and I never stopped. I think there's this mountain you have to climb as a new developer where you don't know anything yet, and you have to beat the levels. And knowing which levels to play is one thing. For me, it was learning web development, so I learned HTML, CSS, and PHP, weirdly in that order, because I was trying to do this whole thing. Trying to do too much. But for me it wasn't any one specific thing. It was literally understanding that you are going to have to spend a lot of time on this, and thinking about it as an exercise you have to win. You can't lose. Once you get stuck, you have to figure out a way to get unstuck. And for a lot of people, once you get a job, figuring out how to get unstuck revolves around not internalizing everything and making it your problem when you have people who can help you. I think that's the thing. But for me, I was on my own. No one helped me. So it's just being relentless.

So I have a little perspective on this, because my son is taking an intro to web development class or something in college, and he hates it. He hates programming. And I'm always just like, no, you know, you should be...
He's like, I have no interest in coding, but I'm in this class. So I told him: use ChatGPT. Abuse it. Go in and say, here's what I have to do, and it's going to tell you the code you can write. And he's just copying and pasting code straight from ChatGPT into his project, and then asking me (hopefully the teacher doesn't listen to this), now tell me, where does this feel like it's a junior, or a newbie, trying to program? And I'll look at the code and say, okay, I see what you're trying to do, blah, blah, blah.

I think the one you shared today, the haunted Halloween Angry Birds (Angry Pumpkins, yeah, that's what it is), is a brilliant example of this. If you can have an idea of "I want to build this," that's the most important thing for anybody who wants to start getting into development: have something that you want to build. Because that's what's going to motivate you to figure out which levels to play. Figure out what you want to build, then start asking ChatGPT, how do I even start? And it's going to lead you down that path. If you have something you want to build, you're going to have the fortitude to stay with it, because it's so easy to be like, oh, fuck this, I'm done. I don't want to deal with this anymore. The motivator for me, every time I got into development, was that I just didn't have any money to pay a developer. So I'm like, ah, screw it, I'm just going to figure it out on my own. And then, you know, 20 years later, now I can develop.

Yeah, I love that. And from my own personal experience, I was banging my head against the wall doing tutorials when I was first getting into programming.
I must have watched a thousand tutorials on authentication and authorization. It never made sense to me. But what Brandon just said, and what Sean is saying, the relentlessness and picking a thing to build and building it, ChatGPT can help you with that. I go to a meetup in town where I meet a lot of junior developers, and they always ask me the same question: what programming language should I learn to get a job? Well, the answer is it doesn't matter, really. To a certain extent. As long as it's JavaScript. Yeah, as long as it's JavaScript, you'll get a job. No, you could do .NET, you could do anything. What's more important, to piggyback off of what Brandon just said, is to pick something to build. Matter of fact, if it already exists, it's even easier, because you don't need to sit there and pontificate over specifications and requirements. Go build Angry Birds again and use ChatGPT, which is what they were alluding to. Sean shared a link. I forget who posted it, but it was one of the Vercel guys. It was Guillermo. Yeah. Somebody went to ChatGPT and said, "I am trying to build Angry Birds, show me how to do it," and it went step by step. It was like 400 prompts. It took him 15 hours. 12 hours. Whatever, 12 hours, 400 messages. A browser JavaScript game that works, in 12 hours. Pretty freaking cool, right? And my nephew just messaged me the other day: "I want to start building games. How do I do that?" I have not built a lot of games, so I can't walk him step by step through how to build one. But I sent him that thread and said, "You have an idea. Pick an idea, build that idea, use ChatGPT to get you unstuck, and then you can always ask me and I'll try to help." This is an unprecedented time for junior developers, because they can get from zero to a working product.
It may not be perfect, but this is the ultimate get-unstuck tool. So what makes a good developer? Asking good questions. Relentless drive. Picking a problem and solving it, and then doing that over and over again. Eventually you wake up one day and you're like, "Wow, I actually know how to do this." But I think part of this goes to the pitfall I'm seeing with people adopting AI: there's this fear that they're going to miss out on something, or that by getting into this they're dipping their toes into some sinful exercise. I hear a lot of people say, "Oh my God, I learned all the basics of HTML and CSS. What a waste of time. I shouldn't have done that." And it's not true.