In this episode we discuss the opportunities and challenges of integrating AI into your product.
Topics Covered:
Relevant Quotes:
Key Takeaways:
It's hard to say, but OpenAI killed our startup. It's on the blockchain! It's a better way! And welcome back to the Big Cheese AI Podcast. I am the world's 27th best moderator. I'm joined by Sean Hise and Jacob Wise, two tech leaders in Indianapolis. And last but not least, Brandon Corbin, one of the brightest AI minds in the Midwest. The topic for today is adding AI into your product. But before we dive into that, we've got some big news coming out of OpenAI. And this just killed a lot of startups, including ours. Including Big Cheese, but that's all right. So the first question we have: what are your guys' thoughts on the new GPT-4 Turbo and its expanded context capabilities? The biggest announcement is GPT-4 Turbo, which comes in with a 128,000-token context window. If you've ever used Claude, Claude has a 100,000-token context window. That's basically 300 pages of a book. So it's a big book that you could hypothetically select all of and paste into ChatGPT and start having conversations with it. Now again, you're going to pay for a lot of those tokens. And there's only about 4,000 tokens out, which basically means you can't say, hey, rewrite this entire 300-page book, because only 4,000 of those tokens are coming out. But it's a lot faster. So I've been playing with it. A lot of people on Reddit are complaining that it's stupider. That's what I was going to say. With the added context, you're giving it a lot of data to look at. Have you seen that? If you throw a bunch at it, it's not as good? I haven't yet. So what I'm actually waiting for is-- historically, what I do is we'll generate the transcript from the pod here. And then I go to Claude. And I say, hey, Claude, here's our whole transcript. Go ahead and just generate our show notes. So if you've ever seen our show notes, where it goes through and it's like, here's your topics. Here's some relevant quotes. Key takeaways. Yeah, key takeaways. Claude is doing all of that.
And it's reading through our hour-long transcript. So I'm actually going to do that this time with GPT-4 Turbo and just see how well it works. And I have been using it for a couple of the different products that I'm working on now. And I mean, it's definitely way faster. So for all the laymen out there, when you are interacting with a chatbot, the chatbot isn't saving each of those entries to a database. No, it's not. Yeah. It literally has to-- It's a one-shot. --send the entire conversation back every time. So every single time you ask a question, that amount of information is larger. You can kind of cut it down. But no. So now, another thing that they rolled out is threading. And I think it might be off the Assistants API. I'm not 100% sure, because I haven't done this yet. But they are supporting threads. So instead of sending over the entire chat history-- all of the previous messages-- you're going to basically send over a thread ID. So they are going to start-- So they are saving stuff. They are. They are handling the state management from the API itself. And that could actually be great. Because then they are worrying about how they truncate it, how they figure out the optimal thing. Because when you're interacting with ChatGPT just in the interface, it's actually pretty damn smart. And it can remember things way above. So they have a lot of black magic going on behind the scenes for them anyway. But yeah, so now I think they're just rolling that out to make it available to the API. So yeah, chat applications won't necessarily have to worry about that. So even though it has a larger context window, you don't actually really need it now. Right. And now, Jacob was mentioning a few weeks back the cost of AI, managing that cost, and how it might hinder businesses adopting AI in general. Well, with GPT-4 Turbo, now they have reduced the cost, right? Yeah.
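The stateless pattern described here-- the chat API doesn't remember your conversation, so the client resends the full message history on every turn-- can be sketched in a few lines. No real API call is made below; `fake_reply` is a stand-in for the model.

```python
# Minimal sketch of the "send the entire conversation back every time"
# pattern. The client keeps the history, and the payload grows each turn.

def fake_reply(messages):
    # A real client would POST `messages` to the chat endpoint and return
    # the assistant's answer; every prior turn rides along in the payload.
    return f"(answer to: {messages[-1]['content']})"

history = [{"role": "system", "content": "You are a helpful assistant."}]

for question in ["What is a context window?", "What limits the output?"]:
    history.append({"role": "user", "content": question})
    answer = fake_reply(history)  # the entire history goes out each turn
    history.append({"role": "assistant", "content": answer})

# After just two turns the payload already carries five messages.
print(len(history))  # 5
```

A threads-style API moves that `history` list server-side, which is exactly the state-management handoff being described.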
Basically, it's half the price of what you would pay with GPT-4, which is good. Because Big Cheese's chat interface was actually just relying on 3.5 Turbo, because it's way faster and it's way cheaper, right? I mean, GPT-4, the OG version, was like $0.06 per 1,000 tokens. So that just gets way out of price range. And now we're looking at $0.03 per 1,000 tokens. Or I think it's $0.01 for the-- I can't remember if it's the input or the output. So just to back up: OpenAI takes over when? Earlier this year. When did it actually go viral? I don't even remember. I think it's only been like eight months. It's been eight months. I don't know. I specifically remember it just took over TikTok. And I spent like two hours going down this wormhole, thinking the world was either coming to an end or restarting. The next thing I knew, Andre was sending me massive Slack posts that were written by, obviously, AI. And they were wonderful. But what model was that? What model were they using at that point? 3.5. That was just 3.5. Yeah, probably just 3.5. So 3.5 came out, and we know now that there's an application layer behind this all that's making it all work. Yes. And now, to add to the confusion of GPT being hard to say, now we've got GPTs. And we have a big announcement to make on the Big Cheese AI podcast today. It's hard to say, but OpenAI killed our startup. Brandon, can you talk about how OpenAI killed our startup? So the whole concept for Big Cheese was the ability to go hire AI employees. Which you can do at bigcheese.ai on the internet. Exactly. Right now. Check it out. You can go do it right now. And you can sign up. And you can go basically hire a CTO. You can go hire a content writer. You can go hire all of these different kinds of workers that are all based on ChatGPT. But they all have their own personas.
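Taking the numbers above at face value-- GPT-4 Turbo's launch pricing was $0.01 per 1K input tokens and $0.03 per 1K output tokens, though rates change, so check current pricing before budgeting-- the per-request math works out like this:

```python
# Back-of-the-envelope token cost for one request, using GPT-4 Turbo's
# launch pricing discussed above ($0.01 per 1K input tokens, $0.03 per
# 1K output tokens). Verify current rates before relying on these numbers.

def estimate_cost(input_tokens, output_tokens, in_rate=0.01, out_rate=0.03):
    """Estimated USD cost for a single prompt/completion pair."""
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# A 100K-token prompt (roughly a 300-page book) with a 4K-token answer:
print(round(estimate_cost(100_000, 4_000), 2))  # 1.12
```

The asymmetry matters: pasting in the "300-page book" dominates the bill, while the capped output is comparatively cheap.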
They all have their own rules in the way that they interact with things. And there are other platforms out there doing a very similar thing. And it really comes down to: how well can you prompt these? How well can you add in the additional tools to go get and read content and do those types of things? All that being said, these new GPTs, which is what OpenAI is calling them, are basically custom agents that you can go build and then sell. So I can go right now and create a GPT that has its own personality, its own engagement rules, its own documentation-- so they do have a RAG architecture, which we'll talk about a little later-- where I can go upload a bunch of PDFs. So just as an example: I've been working in NestJS for this one project. So I'm going to go and create a GPT called the Nest Smart Guy, I don't know, whatever. And I'm going to grab a bunch of PDFs that have all of the documentation of Nest. I'm going to upload them to this new assistant. I'm going to say, hey, you are a Nest expert. People can ask you any questions about Nest, and you're going to give answers, and blah, blah, blah. And I want you to run functions. So hypothetically, you could write a bunch of custom functions that would write code for people, or whatever. And now you just have this one source that you can go and have a conversation with. And I can charge you for it. So that's the most exciting part: they're going to make it so you can publish these and actually charge people to use them. So what I want to do now is take a lot of the agents that we have in Big Cheese and just move them over as pure GPTs within ChatGPT, and even reinforce them with PDFs or whatever kind of content we'd want them to have. So these agents are going to turn into almost like an app store.
That's what I was just going to say. I mean, you think about a guy like Sam and his play on things. This guy's thinking big. And what was the biggest thing that took Apple from being just this phone company to making them the first trillion-dollar organization ever on the face of the planet? It was the App Store. And we talked all last episode-- and in general, over the Big Cheese podcast-- about the evolution: where is this going in the world of AI when it comes to scaling? I just saw this LinkedIn post where this guy was like, traditional B2B go-to-market strategies don't work anymore. And I really think that maybe it's these GPTs. That marketplace becomes the next app store. And you can hit them with an API, too. So I can have an API. I can write a thing and be like, I just want to use this GPT for this feature within our website. And then if somebody needs to go update documentation, you just go over to ChatGPT and upload it. So you should be able to hit these all with the API. My understanding is the way the GPT marketplace is going to work-- it doesn't really exist yet, or not everyone has access to it yet-- is you should be able to opt into a GPT. So let's just say Mailchimp makes a GPT. And I say I want to use Mailchimp because I have a Mailchimp account. Do you think they're going to enable-- well, let me back up. It should be able to understand what you're trying to ask it. So if I went into ChatGPT and said, hey, I want to create a new email campaign, it should know that I've enabled Mailchimp. From what I read from their press release, there's the potential for that to happen. Yeah, I thought I read that in their release, or they were implying it. We don't have it yet. So we won't know until we know. But I thought that was really cool.
What's stopping me, though-- so you're saying, OK, every company has their documentation out there. What's stopping me from going to Supabase and scraping all of their docs and creating one that's for them? Nothing? Yeah. Yeah, nothing. No, I mean, well, hell-- so Andrew Huberman, if you guys follow him at all, he's a neuroscientist. Someone just released a Huberman GPT where they took all of that guy's information and basically set one up that you can go and have a chat with. So here's a question. He was already in the core model, right? Oh, yeah. Right. Well, oh, that's-- yeah, sorry. Go ahead. You know what I'm saying? So obviously Supabase's detailed documentation or Mailchimp's API spec and all the other things they can throw at it aren't going to be in there. But I've been asked this question multiple times on the pod: you have this general knowledge model, and then you make it special. If they make that Huberman GPT, is it just more specialized? Does it just know that it's supposed to look at that section of the data? How does it actually-- That's going to be more of the RAG architecture: retrieval-augmented generation. So when you have a bunch of PDFs and Word docs and text docs that you upload, they get chunked. And when I ask a question, we take that question and try to find all of the chunks that are relevant to that question, or to that prompt. And then we include those along with the question. So we say: here's all the data that we know, and here's our user's prompt. And then we just let the large language model try to figure it out from there. So we're basically giving that large language model very concrete information that we want it to go on. It might exist in the model itself. So maybe that would make it more efficient. 100%. Yeah. It makes it more efficient, and it just knows, oh, well, I don't need to go anywhere-- it's all right here.
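The chunk-retrieve-prompt flow Brandon describes can be shown as a toy example. Real systems use embeddings and a vector index; here, simple word overlap stands in for similarity, and the "documents" are invented.

```python
# Toy retrieval-augmented generation: documents get chunked, the chunks
# most relevant to the question are retrieved, and the winners are
# prepended to the prompt that goes to the large language model.

def chunk(text, size=8):
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def overlap(question, passage):
    """Count shared lowercase words between question and passage."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

docs = (
    "The study measured sleep quality in adults over six weeks. "
    "Participants who exercised daily reported better sleep quality. "
    "Funding was provided by a national research grant."
)
chunks = chunk(docs)
question = "Did daily exercise improve sleep quality"

# Retrieve the two best chunks and build the augmented prompt.
top = sorted(chunks, key=lambda p: overlap(question, p), reverse=True)[:2]
prompt = "Context:\n" + "\n".join(top) + f"\n\nQuestion: {question}"
print(prompt)
```

The model then answers from the retrieved context first, which is what makes a general model behave like a specialized one.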
So I can go and try to figure out what you're saying. To me, this means a couple of things. I think it means more power back to content creators and people who have domain-specific knowledge. So if I'm trying to make a GPT about WordPress SEO, well, Yoast is probably in a pretty good position to do that. He has all the content. Now, a lot of it's published, but he can create more content and continue to own that. So cue the lawsuits of people getting their content harvested and resold as GPTs. Well, and so that's another thing that OpenAI is now rolling out-- like we talked about in the last episode, where Amazon-- or, sorry, Adobe and Google have their own legal teams that are willing to go to court and help you cover costs. OpenAI mentioned that, too, at Dev Day. And so, yeah, there are probably going to be all sorts of crazy lawsuits and shit. So this is the situation: if someone goes in and copy-pastes all of Andrew Huberman's content from the internet, creates a GPT, goes to monetize it, and then gets sued by Andrew Huberman, next thing you know, you'll be in court getting supported by OpenAI so you don't lose the suit. What I think about a Sam Altman type is that they're going to build this and let the market work it out. They're not going to-- obviously, things that change the fundamental nature of intellectual property probably aren't going to happen. I hope not. GPT-4 Turbo: go check it out. This is the biggest news. But the topic of today's conversation is kind of on the opposite side of that, and kind of leans into the point we made about AI killing our startup. Well, hey, there are still opportunities to add AI to your products. And that is the topic of today's conversation.
We're going to get really deep into not just general concepts of what it means to put AI into your product, but the actual tactical steps and the different ways you can be thinking about it. So the first question: in what ways do you think AI products differ fundamentally from traditional products that leverage AI? Sean. Well, the high-level statement is: there are AI products, and there are products that leverage AI. And we built an AI product. And I think there's a huge question in the marketplace-- I equate it to what happened with NFTs and crypto-- which is, in the last year or two, there are people that are just seeing dollar signs. They're like, oh, AI. And then we talk to our data scientist friends and they're like, what are you even talking about? What even is this? Are you talking about machine learning? All these things that are-- ah, that doesn't matter. It's AI. People will buy it. But at the end of the day, you have to build a product that gets product-market fit, that scales, that gets customers, and all these things. And we've talked to people that purchase tech for companies-- IT people-- and we've talked about that in previous podcasts. But I think at the end of the day, if your product is an AI offering, it's going to be constantly at risk of being consumed by OpenAI, Microsoft, Google, and all these different companies. So what we're really focusing our energy on is helping companies that exist and offer a product-- it doesn't matter what it is-- figure out how they can add AI to their product. Yeah, because I mean, that's really the question. Because as we've been talking about this week before the podcast, that's where my mind's been. OK, look, you can have a ChatGPT wrapper that creates these video clips for you.
And then all of a sudden, YouTube comes out with a feature where you can just upload your entire YouTube clip, and shorts get automatically created for you. Now Opus Clip doesn't exist. The real moat for startups that exist right now is that in the past seven years, all the dry powder and the billions of dollars were invested in all these B2B SaaS companies-- all these people have real moats because they have users. And so it's layering AI as a feature into these existing products, because the real moat is the users. The real moat is the actual product that's not just based off an LLM. It's based off an actual use case where you're solving a problem for somebody. So I think that's why you're running into a case where we're throwing a ton of money at AI startups, because one of them is bound to hit. But then on the other side, a lot of them get killed by ChatGPT, because they have $30 billion from Microsoft to-- And the other thing that I think is important to talk about is what layer you're at. If you're an AI company, there are different places to live. Are you in the AI infrastructure layer, like OpenAI? Are you on the application side? Are you helping people build products, like the Vercel side of things? Or are you literally a niche AI product that transcribes videos that's going to get consumed by-- If the majority of your product is just consuming the OpenAI API, you're going to have a problem. Yeah, Jacob shared that article earlier this week. And I was like, yeah. I think it's totally true. But now let's actually start to dive into it. OK, so I'm a startup founder. I'm a technology business owner. I'm looking to add AI into my existing product with my existing user base. When considering adding AI to a product, what are the key factors you need to take into account when you first approach it, Brandon? You know, kind of like what we talked about yesterday, AI can obviously encompass just a ton, right?
There's so much different stuff that it could encompass. So it's really stepping back and saying: OK, what are the features that we want to have in our product? What areas do we think could give a better user experience, and could we embrace AI there? So one client that we're talking to right now, that we could potentially help incorporate it-- I won't get into the details, but they've got a bunch of users. And there's a lot of information in advanced studies that are going on, all these public papers that are out there, right? But if you've ever tried to read a scientific paper, you know it's actually really hard to understand. So they would like us to basically take these papers and turn them into something that someone with a high school degree can actually go and read and comprehend, and make that part of their platform. That's a perfect example of what these large language models can do, right? It's something that we can do behind the scenes. You're not necessarily opening it up so every single one of your users is interacting with some large language model while some guy out there is trying to figure out how to hack it. This is going on behind the scenes, in the infrastructure of the product: all right, we're taking all these scientific papers, we're turning them into something that's actually consumable by normal people, and that's going into our database. Those types of things, I think, are gravy for any product that's out there. The moment that you want your end users interacting with a large language model, that's where you need to slow down just a little bit and go, OK, we need to start thinking about what this could potentially cost us, right?
Is somebody going to-- and then really do your security audits, because somebody is going to figure out how to make your AI do something that you never anticipated. All you have to do is go back to the original chatbot Microsoft released on Twitter-- Tay-- where all of a sudden everybody got it talking about Hitler or whatever the hell it was. So I think those are going to be the kinds of things that people are going to have to really figure out. Right now I'm very bullish on figuring out what you can do with large language models behind the scenes, with your databases that already exist, some service that's running over here to do it. Those are the ones that I think people really need to be thinking about right now. Yeah. And what I would just reiterate is what you're saying: add value, right? Whatever new tool comes out-- NFTs, I think, were a good cautionary tale, where everybody had to get an NFT into their app, right? And I still get questions about that. And apologies if you have one in your app. But I was always thinking: what value is this providing the user? There is some surface-level stuff going, OK-- It's on the blockchain! Damn! It's a better way! But yeah, you have to start with-- Hello. --what value is this providing to my user? Is this making their life easier in any way? And LLMs are very good at that, especially in the case of condensing down a lot of information into something that's more legible or consumable. So just start there. Just always start with: where can I make my user experience better? Yeah, and I'm sorry, Dre. I was just going to say that from what I'm seeing in the market, there's an issue, which is a lot of startups are getting blocked from funding. Almost in the ideation phase, they're hearing: well, I love your idea, but what are you going to do about AI?
And if you don't know that, go learn the fundamentals and have a-- Have an answer. --an answer to that question. Say: we're going to do this. We're going to offer this augmentation to this service, and it's going to use an LLM, and it's going to make a lot of things better. It doesn't change the fact that you're solving a problem. AI and LLMs are not everything, right? But people want to know what your plan is. I think it's OK to have a plan, but I also think, from a founder's perspective-- if you're going to an investor to raise capital, and you've got your idea, you've spent time, you've got some early client feedback, maybe you even have some LOIs, you've deployed some sort of website, things are looking OK, maybe you have a few customers, and you're going out to raise a first round of funding: if that investor isn't focused on your key vision and mission and what you're really trying to solve for your customer, and they just want to be investing in a feature, then that's probably not the right investor for you. There are investors that are going to invest in what you're actually generating. And then hopefully, along the way, like Jacob mentioned, there are layers where you can put AI into your product that might not even be front-facing. Your customers and users might not even know that it's AI that's generating these nutrition facts, like for my athletes on All In. They might not even know that, but that's valuable to get to product-market fit. And that's obviously the number one goal. AI is not product-market fit. Your solution to a problem is product-market fit. And I think that's probably the secret sauce, or next step. Everyone uses ChatGPT and thinks: OK, well, I need to create some form of ChatGPT in my app.
Well, I think where it's going is the back-end services: how do I normalize this data, or process this user input, to make things easier for users so they don't have to do everything so precisely? So there are a bunch of applications that are hidden that the user may never even see. And a lot of companies are already doing this. But it's becoming more approachable for the startups. I think it is. And I think the fundamentals will always come back. The same thing that's made software companies grow from the beginning, through the NFTs, through AI: it's still going to come back down to user experience, user retention, value added compared to the cost you're actually charging-- whether that's AI on the front page or AI working in the back end to make your product even stickier. That's what's going to win. Well, Sean, can you discuss a little bit about the importance of infrastructure and data strategy when integrating AI into a product? We talked about the strategy in terms of what this is: OK, you could, or need to, add AI to your product or your business. And we're talking about things that have unique value propositions outside of AI. But let's actually discuss how you would go about doing this-- the process, high level. What is the infrastructure and the data strategy? What's the product strategy, which we talked a lot about? How are we going to implement this? And then my favorite topic: what are the financial and cost implications? But for me, let's focus on the infrastructure, because this is the question I get asked all the time: I have a database. I have a product. Yeah. All right, Brandon, I've got a startup. It's the latest, greatest web app. I've got my native app. I've got my database. Right? And now I need to build AI into my product. From an infrastructure perspective, what do I need to do?
All right, so I think that, again, I always tend to start in the back office first. Right? Let's get our bearings. Let's understand what we're dealing with. We can control costs better, because again, the moment that it's available to the public, we lose some control. Right? The biggest thing that I saw-- and I'm going to give credit to the crew at XRAI Glass-- so XRAI Glass was a startup that I was part of a little bit earlier. There are two CTOs, and they're both brilliant. They actually built a tool using Discord and the logs of the platform-- it was a product manager's wet dream. I could go in and be like, tell me how many users are using English and Spanish as their primary languages for the glasses. And XRAI is this great product, basically for people who are deaf or hard of hearing, that lets them see subtitles as we're having this conversation. So this entire conversation would be subtitled in my glasses. Right? Very cool. But I can go to Discord, and I can basically ask it questions. So think about how much time you guys might have spent building out the analytics dashboard for All In. Right? Now, you don't necessarily need that. You have this natural language interface where I can just be like: hey, what's going on for this week? How many users did I get? And it tries to figure out what queries to run against the database. Then it queries the database, gets the results back, and uses those plus your question to formulate the answer. Now, truthfully, ours worked maybe half the time. Right? But again, this was like seven months ago. Oh, but Brandon, you underestimate me. We used a semantic layer, baby. We actually implemented it in a no-code, low-code-- Did you? Yes. Oh, yeah. We used cube.dev. It attaches onto the Postgres database. It builds out the-- yeah, baby. Come on, man.
So you can basically ask it these questions. Actually, he goes in and asks it questions in a GUI, and then it generates the SQL for us. And so-- but it's not executing the SQL? It's not AI. It's not-- No. Oh, it is. Oh, yes. Oh, yes. So wait, there's no AI? There's no AI. Well, how the hell are you doing it? It's just a really good platform. cube.dev. It's a semantic layer. But it's a good example of what people might consider AI. You know what I mean? But anyway, getting back to the point-- Now, hold on. I've got to go look this up. So Cube would be-- OK, so this would be another example, though, that they-- I mean, surely they're-- They're going to have-- That would be a great product to add AI to. Right. Exactly. That's the thing you're seeing. So yes, the step we took was: all right, we're not going to build out an entire analytics dashboard layer. We're going to put it into a platform that abstracts that. And we can have our user, Andre, go in and make queries on the fly. The traditional way of doing it, you're right, is so cumbersome. Someone says, build me a dashboard that generates this report. And then that report changes slightly, so you have to go in and change it. And it's super expensive. There was zero back-end code written to generate the analytics dashboard. That's really cool. And I think what you're saying, too, by the way-- Eat it! But you're paying for it. Yeah. No! No, it's free for a while. Oh, so someone's paying for it. Yeah. There's a free tier. Yeah, as your data's being sold on the black market. Yeah, yeah. I guess you know how many nutrition logs and-- It's nice, though. I will say, I do like these. That's a nice little animation that they've got going on. And they've got their little AI stuff going on down there. But back to your point about starting with back-end services. OK, let's just define that, for someone that has a SaaS product, because they might not understand it.
Your admin side. Yeah, your admin side. The stuff that only your admin users are going to use. It's not customer-facing. It's the stuff that people spend a lot of time building that doesn't really return a lot of value until you get to a certain scale. These things are valuable-- you do need to know what's happening-- but not until you're processing millions and millions of transactions does it really constitute the value there. So yes, incremental is great. Because like you said this week, you get the setup. You get the base layer of: all right, now we are chatting with whatever thing we're chatting with. We have the keys all set up. We have the infrastructure. We know how to actually interact with it. You can control costs. You can play with it and figure out what works and what doesn't. And then you can start to explore what is actually possible before giving it to a customer and saying, have at it. I think I'm going more toward: what do I need to do? So if I'm a product manager and I don't have AI in my product, I go to my tech team and I say, hey, how can we wet our beak? So you look at it from All In. We said, you know what? It would be great to send this weekly summary. OK, great. So what do you actually need to do? If you have a traditional database, your data is saved in a very non-AI-friendly format. I mean, you're saving your stuff in basically glorified Excel spreadsheets that are normalized and have relationships. But what you can do now-- and this hasn't been available in production-ready databases for very long-- is vectorize your data. On Postgres, which is the most common production database out there now, you can use pgvector. And basically what you're doing is saving your data as vectorized data. And what that allows you to do is nearest-neighbor searches.
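As a rough sketch of what that enables: pgvector adds a vector column type and distance operators (`<->` for Euclidean distance, `<=>` for cosine distance), so a nearest-neighbor search is just an `ORDER BY` on distance. The idea in plain Python, with invented three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
# What a pgvector nearest-neighbor query does conceptually: rank stored
# rows by the distance between their embedding and the query embedding.
# In SQL this is roughly:
#   SELECT label FROM items ORDER BY embedding <-> '[0.85,0.15,0.05]' LIMIT 1;

import math

def l2(a, b):
    """Euclidean (L2) distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

rows = {
    "weekly summary": [0.9, 0.1, 0.0],
    "billing export": [0.1, 0.8, 0.3],
    "user report":    [0.7, 0.3, 0.1],
}
query = [0.85, 0.15, 0.05]

nearest = sorted(rows, key=lambda label: l2(rows[label], query))
print(nearest[0])  # weekly summary
```

Storing embeddings next to your normal relational data is what lets a feature like a weekly summary pull "similar" records without a separate search system.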
And so a first step to me, if I were a product manager, is to say: how can we dip our toes into this and maybe start vectorizing-- start storing our data in a way that an AI application or feature can take advantage of? Well, who was playing with SQLCoder? I was. OK, so with SQLCoder, the large language model for SQL, you should be able to hypothetically take your schema, give it to SQLCoder, and basically say, I want you to tell me how many users write with their left hand-- whatever the data structure is, right? Which is awesome, by the way. From a back-office perspective-- I mean, rest in peace, all of the analytics guys and data architects. Because the expectation for them is going to be sky high, and they're going to have to implement these tools. Now, I like Power BI. I like these semantic tools like Cube. But people are going to basically be like, I just want to text it: how many customers did I get last-- Totally. I want to chat with my database. You know what I mean? I need to chat with my database. The real value is in this text-to-SQL-- as long as it's back office and trusted people and you have the right parameters; lots of stipulations there. But the real value-- I was messing around with a bunch of NBA data. I'm an NBA fan, but I'm not a stats guy. We have one of the best NBA analytics websites, which Jacob and one of our other coders on our team, Daniel Cavanaugh, work on. It's called craftedNBA.com. It gets over 35,000 organic Google visits per month. CraftedNBA.com? Yeah. We don't play around. It's badass. It's badass. So Cavi's the brains behind that. I was curious: what's an indicator of success in the NBA? So I have a bunch of historical data. I uploaded it-- I was using Ollama, so this was a local LLM-- and gave it a CSV. And I said, help me understand how to find an indicator for success. I'm not a stats guy. Remember that.
Remember that fact. It gave me step by step how it was doing it. And then it gave me why and what the outcome was. And it ended up being that point differential and offensive efficiency were the top indicators. But that would have taken me a long time to figure out through Google and then writing the queries and then all those things. So these are the things that excite me. I know I have questions. I have questions. I have data. This is the bridge between those two things. And it can even tell you, for clarification-- it said, knowing this value, this is what we assume you mean by success. Because it was a very open-ended question. What are the indicators for success? And it even said, you don't have any playoff data in here. So that's how we would probably best understand success. And I was like, whoa. Dude, is this thing-- I assume this is all Next.js. It's actually Vue, baby. Is it Vue? We have Vue, too. We do Vue Next. It's quick. It's very quick. We just moved everything to Supabase last week. Is that launched? It's launched. It's launched, baby. I'm on it right now. I'm on it. Well, that has existed. The UI has existed for a while. Probably needs a refresh. Yeah, but we let one of our guys just go loose with it. Dude, I mean, it's like a ton of data. Oh, yeah. That's nuts. So there's people tweeting about Crafted NBA every day. So we went viral the other day because Dan-- he's a really good point guard, basketball expert-- he tweeted the-- what was it? The gunner? The gunner list. Yeah, yeah, that shower thought. It was a new statistic that no one's ever created before. And it was called the gunner stat. And it was basically players that don't pass. It was like a ball-hog stat. But the list that it came up with was super legit. Like Cam Johnson from the Nets was number one. And that kind of went viral because everyone was like, oh, that's the biggest gunner in the NBA.
But anyways, we've got to get back to the promise here, the customer promise, which is people don't want to talk about back office forever. Right? Sure. So we've got to get AI into our product so that we can sell it to our customers. So how do I-- if I'm thinking about building AI into my product, what are the types of features that I could implement with my own data? Let's not even talk about sending embeddings to ChatGPT and the risk there or the mitigation. Let's keep it to just features and value. What are the types of things that I could do in my traditional SaaS platform and leverage AI and maybe get some more customers? The biggest one that people could start doing, where the consumer is now involved, is in your help and documentation. Right? So historically, when you have documentation, a lot of times you either have docs that are kind of within some small section of your website, or maybe you have your own dedicated website that's just for the documentation. You have to go as a user and try to find the information that you want, and then try to parse their hard-coded documentation into whatever it is that you need. Now with things like RAG architectures and vector databases and these large language models, you can hypothetically upload all of your documentation to a RAG database. Right? So it's all chunked and whatnot. And now I can ask a question, and now the documentation can kind of morph to what I need. So it's this kind of shift in the way that you actually think through how the documentation works. So a good example is-- I think it's LangChain that uses Mendable. Mendable. Mendable.ai is a good example. So Mendable can take-- it basically allows you to have search within your documentation, and it goes and it does all the vectorization and it stores it. But I can go and be like, all right, how do I do this with LangChain? I want it in JavaScript, and I want it to be working with AWS's Bedrock.
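The chunk-and-retrieve flow being described can be sketched end to end in a few lines. The `embed()` function below is a toy stand-in for a real embedding model (it just counts word overlap against a tiny vocabulary), purely to show the shape of the pipeline: chunk the docs, embed each chunk, retrieve the closest chunks for a question.

```python
# Minimal RAG retrieval sketch. A real system would use a proper embedding
# model and a vector database (e.g. pgvector); embed() here is a toy.
def embed(text: str) -> list[float]:
    vocab = ["install", "deploy", "auth", "billing"]
    words = text.lower().split()
    return [float(sum(w == v for w in words)) for v in vocab]

def chunk(doc: str, size: int = 40) -> list[str]:
    # Naive fixed-width chunking; real pipelines split on structure.
    return [doc[i:i + size] for i in range(0, len(doc), size)]

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    q = embed(question)
    def score(c):
        return sum(a * b for a, b in zip(q, embed(c)))
    return sorted(chunks, key=score, reverse=True)[:k]

docs = "install the CLI with npm. deploy with one command. auth uses API keys."
top = retrieve("how do I install this?", chunk(docs))
print(top)
```

The retrieved chunks would then be pasted into the LLM prompt as context, which is what lets the documentation "morph" to the question.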
And it's going to be like, here's how you do it. Because now the documentation can morph to what you need. So I think that that's a very big way that people can start doing this. And there's tools that are out there right now where you can literally just go and be like, here's our website, and we want you to be our documentation. And they'll just go start scraping all of your content on your website and turning that into a kind of stored documentation AI. So for me, a low-ish hanging fruit aspect of adding AI to your product has to be-- and this would be something that you don't have to store unique data for, necessarily-- just doing what a lot of these products are doing: hey, take an input, give it an idea, and get some content back. Like Notion's enhancing their documents and all their applications. There's even an icon that's persisting across the internet, which is that magic wand. Yeah. So I mean, I think that everybody that's building an AI product, if you're building anything that's generating information and then publishing it, putting it somewhere-- a communication tool, any type of tool-- I think the generative AI and the generative enhancement tool and the magic wand icon kind of concept-- I mean, that'd be on my roadmap. I know it absolutely should be on the roadmap. But it's one of those ones where I think people-- you can't just be like, oh, we're just going to do this, and we're going to throw it out there. So OK, well, then fix that. How do you-- what would you do differently, or how would you approach that? Because from a product manager's standpoint, I'm sure there's tons of product managers out there. They're like, I want the magic wand. Yeah. And have the magic wand. It's going to be so much more about-- you really need to make sure, from a security standpoint, that people can't just go hog wild-- they're just going to take your endpoint.
They're going to load it up in Insomnia or Postman or whatever and just sit there all day long just being like, brrrrrrr, just generating a ton of shit. Well, back the truck up. Well, I guess we need to delay that conversation for a few minutes, because I want to hear what Jacob thinks about these products and adding AI to your product. Right. Yeah, yeah. No, I agree. I think you have to be thinking about the front end and where to add those things. I think there's a big value add in input data ingestion. Creation, of course, is fun, and there's totally-- that's how I use AI today. That's the most value I get out of it. But I think there's a ton of value in helping users get data into a system in a better, easier format. And that could be prompting questions, understanding where they're at in the process, that kind of stuff. But data ingestion is so important for a lot of applications. The integrity of the data, the completeness of the data-- all the things that are very valuable. And if you can make that easier for your user, if you can make that a better user experience, you not only make their life easier, but you add a ton of value to your app. That's actually something I was just talking with Aaron about, one of my colleagues. So basically, he met with his insurance broker, who's working with people from 11 different states. And he was looking for an AI product for his intake form. Because this guy knows everything there is to know about different insurance policies, because it differs whether or not you travel out of state and go hiking, or you live in Indianapolis as opposed to Denver, and whatnot. So getting the information on a specific client to be able to give them the right package on an intake form is actually a lot more difficult and cumbersome than you would imagine.
And so being able to leverage the input of just basic data from a user or from a potential insurance client-- So are we talking about the death of forms because of AI? I hope. I mean, obviously, you hope that, right? I think there's a huge opportunity in the form. So Jacob and Dre are onto something. I mean, it's the ingestion and the synthesis of the inputs, versus just taking an input and trying to enhance it. Yeah. I mean, I have to give a lot of credit to Dre, because we talked about this earlier. And I was always thinking there's a data normalization aspect, but even just the assistance aspect, because it's the worst part about the internet. I mean, I'll think of another example, too. So-- It also opens up a huge risk point. Yeah. Just the-- What model are you using? Right. Where is it? Yeah. Exactly. I'm even thinking-- so one of the main things we focused on with All In from the user onboarding standpoint-- it reduced user churn and also increased activation-- was creating this self-guided onboarding flow for new users on the All In app. So a coach signs up for our application, and we had them basically fill out a little survey. I'm joining All In. I coach basketball, and I'd like to develop my athletes and help their mental wellness and their nutrition. And I also want to use this for team communication. And the whole effort there, the whole survey idea, was a best practice we took from monday.com, where you have a user self-identify what they want to use your product for. And also, in this case, since most people joining our application might not have looked at our website, we're also educating our user while they sign up for our platform. So I can think of a direct thing from a user onboarding perspective. You do all this work on your marketing site and in your product and these self-guided onboarding tours for the base goal of helping to educate your user on how your product could best serve them. Right.
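One way the survey idea sketched here could feed an AI step: turn the new user's answers into a prompt for an LLM that knows the product. The field names and survey shape below are hypothetical, not All In's actual data model.

```python
# Hypothetical prompt builder for AI-assisted onboarding: survey answers in,
# a personalized "first steps" request out. The LLM call itself is omitted;
# this just shows how the self-identified goals would reach the model.
def onboarding_prompt(survey: dict) -> str:
    return (
        f"A new {survey['role']} just signed up. "
        f"Their goals: {', '.join(survey['goals'])}. "
        "Based on our product documentation, suggest the first three steps "
        "they should take in the app."
    )

survey = {
    "role": "basketball coach",
    "goals": ["athlete development", "mental wellness", "team communication"],
}
prompt = onboarding_prompt(survey)
print(prompt)
```

The payoff is that the same survey that educates the user during signup also becomes structured input the model can personalize against.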
And that's where an AI solution would be amazing for an onboarding tool. If you're signing up for a platform, and you briefly go through and put in some information on what your pain point is, then based off everything that this AI or this LLM understands about your software, it will give back to that new user signing up exactly how they should leverage your platform. And also maybe one, two, three steps on getting started based on their unique use case. Now that would be pretty powerful from a product perspective. So ingestion, generative AI, back office integrations and enhancements. The thing that you also need to consider in this-- and this is my rant, because I'm pissed and I want to talk about it-- is the finance and the cost considerations of implementing AI in your product. This is something that, you know, for the product manager that's out there that's like, "Hey, I just want the magic wand." Right? Yeah. Well, there's also a CFO right there. Right? And there's also-- I mean, at the end of the day, there is money involved, and assets equal liabilities plus owner's equity, whatever. Like, there's a balance sheet. Okay? It all adds up. Things cost money. And you want to be in the green, not the red. Okay? And that's something that has always really, really frustrated me about the startup world: the economics don't apply. Right? Because it's all based off growth. It costs at a transactional level, unlike anything you do. Well, unlike anything you do that you don't already think about as a service. Well, to the degree of which it costs, it's very different than other services. So here's an example. And this is insane to me. GitHub Copilot. Okay? We've talked about this a bunch. So GitHub Copilot is the coding assistant tool that I use, that Jacob uses, that our team uses. And what we're paying-- you can pay $10 a month. And most people are on the personal plans. Okay? So people-- The personal 10? Yeah. The personal 10 is just a flat cost.
And what we talked about the other day was, "Wow, it's just a flat cost." Yeah. Right? It's not token-based. Right? Yeah. Right? And so people out there, they're creating this. And this is what happens with new products, all the time: you put it out there and say, "Oh, it's going to cost $10." So they've amassed 1.5 million active users on this platform. Yeah. Okay? It's costing them $20 a month per user to run it. They're running this at a $30 million a month loss. Hmm. Yeah. Yeah. Yeah. Well, my thought on that is they could probably go talk to Verizon Wireless. Because there's a lot of overlap. Yeah. Absolutely. It's literally the business they've been in. They started with minutes, then they went to data consumption, and they put caps on shit and they throttled you. And this is a model that's existed in other industries for a while. But here's the rant part, which is: how many times are we going to buy a product, and then they get you and they sink you in, and they get you with all this data, and they keep adding all these features, and they create it so you're never going to quit, and then jack your prices up? Yeah. Because that's exactly what's going to fucking happen. It's going to continue. Okay. And good on Adobe, because guess what? I mean, how long has Firefly been around? They already throttle you after you go over. So they implemented the throttling. And to me, it's like, if you're going to build that-- okay, Microsoft, whatever, just put the token thing out there. Because some of these customers, they said, are costing them over $100 a month. Oh yeah, that wouldn't be surprising. My only other thought on that is, because OpenAI does have access to Microsoft, they're going to be investing heavily in their infrastructure and they can probably drive down those costs. But you're absolutely right. It's literally in this article. It says, "This is exactly why Microsoft is working to develop its own AI chips." But hey, there are others where...
There's chips? That's crazy. But this is why Nvidia's stock is up. We're burning nuclear power plants' worth of energy with all this AI stuff. And the issue that I have is that the costs are out of control. Eventually businesses have to make money. And so don't just chase this-- be careful, right? Because there's costs to it, and you could run up a bill. The way that we set it up for the Big Cheese chat is that the whole token economics is built into the product. So if you're going to have anything where users are going to be able to interact with these large language models or whatever AI you might have, you need to have that token economics built into the product. Somebody signs up, you give them a thousand tokens, and as they're going and chatting with the models, you're decrementing that or... Decrementing? Yeah, you're decrementing. That's basically how it's going to work. So for users and customers and consumers that are using AI products, get ready to get metered. Get ready to be... Everybody gets a little free chunk, and then you're going to run out, and then you're going to have to pay for it. And here's the other thing: Copilot. So people that are listening to this are probably confused, because Copilot is also the name of Microsoft's Office AI product. That is... Go ahead. Yeah, well, I don't know if you're going to say this, hopefully. Come on, hopefully. But they just announced they're going to increase the price on that, and Google's doing the same thing, because they understand that, "Oh shit, people use this product." You're using the Google one. Oh, I use Bard all the time. So wait, are you guys paying for Bard? Bard's free. But with the workplace? Yeah, it's free right now. But as we know, what are they going to do?
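The give-them-a-thousand-tokens-and-decrement setup described here can be sketched as a simple in-memory ledger. A real system would persist balances and decrement by the actual token counts reported in the API response; this just shows the shape of it.

```python
# Minimal token-economics ledger: every user starts with a balance, every
# model call spends from it, and calls are blocked once the balance runs out.
class TokenLedger:
    def __init__(self, starting_balance: int = 1000):
        self.starting_balance = starting_balance
        self.balances: dict[str, int] = {}

    def balance(self, user: str) -> int:
        return self.balances.setdefault(user, self.starting_balance)

    def spend(self, user: str, tokens: int) -> bool:
        # Returns False (block the call) when the user can't afford it.
        if self.balance(user) < tokens:
            return False
        self.balances[user] -= tokens
        return True

ledger = TokenLedger()
ledger.spend("coach_42", 300)  # one chat turn costing 300 tokens
ledger.spend("coach_42", 300)
print(ledger.balance("coach_42"))  # -> 400 left of the initial 1000
```

Gate every LLM call behind `spend()` and the "free chunk, then pay" model falls out naturally: when it returns False, you show the upgrade prompt instead of making the API call.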
Yeah, they're going to get you in, they're going to suck you in, they're going to start figuring out what you're doing, they're going to build more services, and then they're going to charge you for it. They're either going to charge you for it or they're going to figure out how to give you better ads based on the content. They shot me an ad. Well, both things. They shot me an ad in my Google Gmail account today that said, "Sign up for"-- it's gone now because I exited it and it's not re-showing, but it said, "Sign up today for," basically, the product within-- I forget what they call it now, shit. But anyway, it was a $30 a month thing. And I'm like, "Why would I pay for this?" Because they have Bard, but it's the integration. I'm sure there's more to it. I need to actually look into that. But yeah, that's where it's going: they got me interested, I use it a bunch, and now the next thing is, how can I sell them? Yeah. I mean, at the end of the day, you're looking at, probably per capita, people looking at their monthly costs. The internet's now a utility, right? That's another $100. I mean, your cell phone's a utility. Now AI eventually becomes this utility. You want a price per person. Well, I'm paying $20 a month just for ChatGPT Plus, right? Like, yeah. I mean, I'm-- And the only way to fix this and to lower the cost is to build better chips, build better technology, and continue to iterate. That will happen at a level. Maybe the smart people in the room are just like, "Yeah, economic opportunity breeds innovation." And so it's going to fix itself. We can just rely on that. Maybe. But I'm not in that group. Well, look at Bitcoin, man. That never fixed itself. Yeah. Well, I think potentially a better correlation was AWS and just server costs in general-- how expensive those used to be. And then you have platforms like AWS come out and, oh, it's cheaper. And then over time, it's become progressively cheaper.
Like the $1,500 bill that I got from AWS because I had Kendra running by accident. Oopsie doodle. That wasn't cheap. Yeah, you definitely don't want to have a platform where your team can just start adding on AI services to everything. They do it on purpose. I'm convinced of it. Amazon, or the whole AWS thing, is just like-- I guarantee 70% of the revenue is from people screwing up. I was at Rally, which is a conference, a tech conference, cool conference-- Elevate, good job. Yeah, it was awesome. We'll shout out to you guys in the pod. But well, we are shouting out to you in the pod. But there was a founder who was asking all about AI. And I was like, you have to consider the cost. And they were like, you know, I've talked to 30 different people about AI this week, and nobody mentioned that. Nobody mentioned that. They were all, you know, this and that. But nobody mentioned the cost. And for me, it's like, yeah, maybe I'm just risk averse. But at the end of the day, cost considerations are real. You know, there's ways to mitigate that. The most profitable companies that are used to selling their software for a profit-- i.e. Adobe, you know, the Vercels of the world-- those companies are going to meter you. They are going to do token-based pricing. So think about that when you're implementing AI in your product. Don't just give the farm away for free and just hope. It's probably not going to work. I mean, there's a scale that that works at. Google, Microsoft-- those guys can do that. They've got enough cash in the coffers. But your startup should probably not be doing that. Yeah. And so as we're taking a step back and talking about adding AI into your product: look, it's probably not going to be... It's a feature to layer into the current value offering you're giving to your users. It can exist in the onboarding. It can exist in enhancing a particular feature that has just information. Well, now you can sum it up for a person in a better way.
There's ways that you can implement it, but you're going to need to work with a good team that knows exactly what you need to look out for in regards to cost, in regards to the back-end structure, everything you need to be looking at. You need the right team. And the Big Cheese team is a good team to work with. So to wrap this up: AI clients. You guys have clients. We're working with clients building AI into their products, mine included. How are you guys going about this? What different outcomes have you observed in the things you've been working on? So much of this is-- we like to think that AI is some new magical thing, but it's just another way of achieving outcomes. It does it very efficiently. It does it very effectively, where we don't necessarily need a bunch of developer resources to do these things. So the one that I'm-- and again, I'm just kind of hyperfixating on it, just because we had the call with them right before the podcast. But really the idea of how can we-- going back to what you were saying-- how can we make the user experience better just because of AI? Right? Like, we can take burdens off the users. We can make onboarding better. We can make the marketing better. We can send... Newsletters could be better. There's just a lot of things that we can do that can ultimately improve the user experience just using AI. But just to say, "We need AI"? No, you need to figure out what that actually looks like for your organization. So that would be a good example of, "Let's talk. Let's figure out... Let's get on the horn. Let's talk about different ideas." Kind of like what we did with All In, where we go through and we say, "Okay, what is your offering? Where are those areas for opportunity?" And we find five or six of them. We say, "Okay, let's go pick some of the lower-hanging fruit to go in and say, 'Yeah, we can actually build this out.
This is something we could test and iterate through very quickly to see what kind of results we're gonna get.'" Because the reality is that when we're doing development, we know that one plus one is always gonna equal two. It's always gonna equal two when we're coding. When we're doing these large language models? Meh, maybe. There's just things that we don't actually know about how the large language models are gonna work until you do it. And depending on how many resources are going on or what's going on with OpenAI at the time, you might get radically different answers. So there's a lot more kind of testing and planning and iterating to figure out what's actually gonna work from an AI model. Because now we're kind of in this weird fuzzy coding with natural language, versus hard A plus B equals C. Yeah. And I would even say this, Brandon, from a business perspective: I think one of the unique things about Big Cheese is that AI can be implemented not just on the product front, but also on the marketing and the go-to-market as well. The integration with AI doesn't just live at the product level. It lives throughout the entire company. And the more you can leverage that, you can automate systems, you can create better systems, you can ultimately get more users and more money. And so that's the opportunity. I think one of our biggest takeaways today might be: if you're a company and you haven't implemented AI in your service offering to your customers-- well, if you haven't even implemented it in your site, in your company at all, you might wanna start there. Yeah, totally. No, and a product is way more than the code that you've shipped. It's the help docs, it's the chat experience, it's the people you're talking to within the organization. There are so many ways to implement AI to help your product, your company, other than creating the magic wand-- which is still cool. You still should do that. It's pretty cool.
But no, there's just so many other ways to impact your users, and that's what we're focused on at Crafted. And that's kind of what I think everyone should be looking at. Yeah, holistic AI implementation into your company. Exactly. And if you're an AI startup and you're looking to figure out how you go to market, we should talk too, because we've got some pretty good ideas on how to help you there. Well, everyone, this was the fourth episode of the Big Cheese AI Podcast. I am Andre Herakles, joined by Sean Hise, Jacob Wise, and Brandon Corbin. And High Noon. We'll see you guys next week. See you guys next week.