In this episode we break down complex AI concepts into digestible information for a general audience, discussing the basics of AI, machine learning, and deep learning, and how these technologies are evolving.
Topics Covered: the history of AI, machine learning basics, neural networks, temperature and next-word prediction, vector databases and embeddings, RAG (retrieval-augmented generation), and practical applications of AI for emails, documents, and social media.
Guests: none this episode; hosted by Andre Herakos, Sean Hise, Jacob Wise, and Brandon Corbin.
Remember to tune in for the next episode and stay updated with upcoming guests and topics. If you have suggestions, you are invited to email pod@bigcheeseai.com.
So if you guys have enjoyed the Big Cheese AI... [laughter] Let me do the outro real quick. Rekt 'em! Damn near killed 'em! [music] And welcome back to the Big Cheese AI podcast. I am Andre Herakos, the world's 26th best moderator, joined by Sean Hise and Jacob Wise, two tech leaders in Indianapolis. And last but not least, Brandon Corbin, one of the brightest AI minds in the Midwest. Today, we're taking a step back, and we're going to start diving into the history of AI, demystifying AI, and just making it more palatable for people who aren't as technical to dip their toe into understanding what exactly it is: what is AI, what are LLMs, what are vector databases, what are these things we've been talking about for four episodes? So to kick it off, the first question we have is: what is artificial intelligence, and how does it differ from traditional computing models? Let's preface this. Really what we want to do with the podcast is make it a way for non-nerds to understand and leverage AI. That's really our focus here: to empower people to go and understand what we're talking about. So everything we do here, we want to describe in ways where I can explain it like I'm five. Yes, and so if you want to explain AI to a five-year-old, you want to think of AI as this magic treehouse. And what that equates to is that AI is a container of other technologies. This isn't one specific thing. If you talk about achieving artificial intelligence, there are two types: the weak and the strong, which we'll get into. But AI is a parent concept, right? And under that, you have machine learning, deep learning, all these concepts, these things that help you achieve AI, right? And so-- And there are people out there right now who are like, "We shouldn't even be calling large language models AI." Right. And we got into a lot of arguments; there was a startup I was involved with eight or nine years ago called TrackAhead, and it was actually using IBM Watson. The chief technology officer hated that I called it AI. I'm like, "But that's what the normies will understand it as, right?" Like, yes, it's machine learning. Yes, it's not really true AI, but that's how people understand it. So we are in this kind of weird spot. Is it really intelligent? These large language models that we're interacting with, they're not really intelligent. They're really good at predicting what the next word should be, but they have no concept of what they're actually talking about. Yeah, I want to expand on the prediction of the next word. So when you're using a large language model, it's got this huge data set, this huge decision tree: you give it an input, and it goes and tries to predict what the next word, given your input, should be, using-- and we'll talk about this later-- vector databases and things called embeddings as well. And it's not magic. There's a thing called temperature. We were talking about this the other day. That is, I think, a harder concept to understand. But can you explain-- so I ask ChatGPT or some large language model: what's the meaning of the universe?
How-- if I've got the temperature-- so first of all, talk about temperature a little bit, and then talk about what's actually happening at a high level under the hood once I give it that prompt. So temperature is a parameter, and you won't see it in ChatGPT. If you're interacting with ChatGPT, you won't see it. But if you actually go to the OpenAI Playground-- and I think it's like playground.openai.com-- that's where you'll get some of the additional parameters you can tweak on these models, and one of them is the temperature. And temperature basically means the higher it is, the more "creative" the model is going to be. So you do have these parameters-- there's top K and top P, and we won't waste time getting into what those are-- but basically that temperature controls how creative we're going to allow this large language model to get with the next words it predicts. Yeah, and for me that was kind of an "aha" moment when I was digging into this more: if your temperature is set to zero and you give the same large language model the same prompt, the output will be the same every single time. So I think about it-- and this may not be exactly correct-- but I think about it as a big decision tree. So for word number one, if it's going to be "the" or "a," and 80% of the time it should be "the" and 20% of the time it should be "a," with temperature at zero, it's always going to choose the most likely. If you turn that temperature up, sometimes it goes to the other decision-- the other branch in that decision tree. That's a good segue into literally how this all begins, because you talk about machine learning, you talk about artificial intelligence, and really at the end of the day what we're doing is mathematical work. You go back and you think about where AI came from. You talk about Alan Turing, you talk about the history of AI. Where this came from, and where it's headed, is really about how to simulate how the brain works and how to get more advanced decision-making capabilities out of an artificial entity. And obviously that problem has existed for a long time. And you mentioned that this stuff's been around since the '50s. Yeah, so the first AI program, called Logic Theorist, was a groundbreaking project from 1955 to 1956, developed by Allen Newell, J. C. Shaw, and Herbert A. Simon. There's not a lot of Herberts anymore. People don't name their kids Herbert. There's going to be a rise in Herberts in the next 90 minutes. But it was 1955, and the whole concept was basically being able to generate an algorithm that could prove different mathematical theorems, and it was all based on Principia Mathematica. But yeah, so that was their first model. And Alan Turing-- the Turing test, which you may or may not have heard of-- Turing's paper came out and basically described: here's how we could do a Turing test. You can tell the guy was a wee bit egotistical when he called the test after himself. Have you seen the movie? I actually haven't seen the movie. It's a good analogy for all this stuff. I mean, they had to solve an unsolvable problem, and they had to build a machine to do it. Like multiple times? Well, ultimately, yeah. I mean, ultimately, yeah, he didn't exactly gain financially from this, and he kind of saved the day. But yeah, it's a good movie.
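To make the temperature idea concrete, here's a minimal Python sketch of temperature-based sampling over a toy next-word distribution, using the 80/20 "the"-versus-"a" example from the conversation. The two-word vocabulary, probabilities, and function name are made up for illustration; real models sample from tens of thousands of tokens, and this is only the sampling step, not an actual language model.

```python
import math
import random

def sample_next_word(probs, temperature):
    """Pick the next word from a {word: probability} dict.

    temperature == 0 always takes the most likely word; higher values
    flatten the distribution so less likely branches get picked more often.
    """
    if temperature == 0:
        return max(probs, key=probs.get)  # deterministic: same prompt, same output
    # Rescale by temperature (p ** (1/T)), then sample from the renormalized weights.
    scaled = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    r = random.uniform(0, sum(scaled.values()))
    cumulative = 0.0
    for word, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # fallback for floating-point edge cases

# The 80/20 "the" vs. "a" decision-tree branch described above:
probs = {"the": 0.8, "a": 0.2}
print(sample_next_word(probs, temperature=0))    # always "the"
print(sample_next_word(probs, temperature=1.0))  # "a" roughly 20% of the time
print(sample_next_word(probs, temperature=2.0))  # flatter: closer to a coin flip
```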
It kind of is like a more real-world example of them having to solve one of these big problems where you just need computing power, and you need to run through a lot of permutations of different solutions in order to get to the end goal and the output. And that's kind of a lot of this AI stuff. Yeah, but it was not -- I mean, think about this. It was 1950, 1950, that Alan Turing wrote his paper about basically the Turing test. I mean, it was 1950. That's just blowing my mind. Well, I mean, he had to, like, what, save the world, right? So there was a business need. For all the millennials out there, that was only 50 years ago. So if you're thinking about AI, you're thinking about an input that's generating an output, right? And you're talking about, well, how do you get there? So there's this concept that you have to understand if you're getting into this and you want to learn more, and that's really the machine learning aspects. So, like, machine learning is a core concept of AI. And so basically you have, let's say, an input, and you're kind of converting that based off of some attributes of that data into numbers. You're taking some of this input. And the great thing about AI, and that's why you're seeing so much cool stuff with images, is it could be any type of data, right? Any type of data that can be kind of converted somehow to mathematics based off some attribute, it'll run through this supervised model, and you'll say, "Okay, I'm going to show this computer a picture of a flower, right? And I'm going to give it 10 flowers, and I'm going to identify attributes on those 10 flowers, and I'm going to send it through, but at the end I'm going to be like, 'Hey, it's a flower.'" But actually it knows the whole time, it's a flower, right? And then you're going to send it, and it's going to see the data. It's going to start to recognize the patterns in that data so that the next time when you test it with a piece of data that it wasn't told if it was a flower, it's going to know that it's a flower. Yeah. Right? And at some pretty--you know, depending on the training. And the point is it's still a machine, so it's still a computer. There's nothing magical happening there. It's simply saying, "Here's an image with data points, and we are telling you it's a flower." And it gets enough of those data points that are associated with a flower, and it starts to understand that pattern, which is why--I forget the example, but it was like x-rays were falsely detecting cancer, I think, because every x-ray had maybe the doctor's hand in it or something. I forget what it was. It was like one of those things where they were training it over and over and over again, and they had something in the image of all images that had cancer. And it was like, "Okay, well, anything with that in it is now a cancer." So it's a false positive. But it just goes to show you it's not understanding what the image is. It's simply understanding the pattern, the data points underneath it, and looking for similarity. And then there's another concept, which is the unsupervised, which is basically the exact same thing except for you don't tell it that anything's a flower. It's just taking in that data and then it's just going to kind of categorize it, which has a bunch of different use cases that are more abstract and like categorization type stuff. 
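As a rough sketch of the supervised flow just described-- show the computer labeled flowers, then test it on flowers it was never told about-- here's what that can look like with scikit-learn's bundled iris flower measurements. The dataset and the decision-tree classifier are stand-ins chosen for illustration, not what any particular production system uses.

```python
# Supervised learning sketch: train on labeled flowers, test on unseen ones.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # attributes (petal/sepal sizes) and flower labels

# Hold some flowers back so the test uses data the model was never told about.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Here are flowers with attributes, and we're telling you what each one is."
model = DecisionTreeClassifier().fit(X_train, y_train)

# Now show it flowers it wasn't told about and see if it recognized the pattern.
print("accuracy on unseen flowers:", model.score(X_test, y_test))
```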
But I believe ChatGPT is a mix of all three different types of machine learning: the supervised, the unsupervised, and the third, reinforcement learning-- which is like the retraining, and a huge aspect of artificial intelligence, because as more inputs flow through and outputs are derived, it can go back and relearn and update. So at the end of the day, you're giving it a bunch of data that's converted to math, which eventually tries to generate some output based off of that. And you can create a model from that. And I think that if you look at where we've gone in the last couple of years with ChatGPT, they just created a huge freaking model trained on a bunch of data. And that must have been the first time that was done at that scale. That's one of the reasons why it's so useful. >> It's where they realized that, oh, hey, if we just download the entire freaking Internet, right, we can train these models on it. And that's pretty much what they did: they just had such a huge corpus of copy. >> Yeah. And I guess that brings up the point-- we've talked a lot about different models, like which model is better for specific use cases. And this would be why the SQL model that you've been using is better at creating or writing SQL code. But we've also talked about GPT-4 being the best at understanding general language. And then you could maybe boil it down and send that request to something that's more specifically fine-tuned. So that's what we're talking about: the model is derived from the data it was trained on. >> And I think that-- so if you look at, like, okay, I'm going to show it a picture of a flower, and it's going to go through this thing-- if you're just getting into this and you're learning, there's an innovation that happened in order for this to really function. And it's the concept of the neural network. >> So it's crazy. They literally figured out how to make a computer operate like the human brain. So they literally went and studied neurons, right? And they built an algorithm and a computing system that behaves like our brain. Because every neuron has an input, there's some sort of internal processing layer, and then an output to the next neuron. So there's-- and we have tons of them. >> We've got a lot. >> How many, you ask? >> I mean, think about it. So right now, all this stuff's flowing through, and unconsciously I'm just speaking. I bet a billion neurons just generated this sentence, right? And so they were like, well, how does that work? And so they studied the best computer ever built. And that's really what-- honestly, going through this episode and the research, that's what's starting to freak me out. Because if you think about it-- >> 86 billion neurons. >> In my brain. >> In just your brain specifically. And ChatGPT actually mentioned your brain specifically, which is kind of funny. Which is about 30% lower than average. >> Okay, so my brain has around 56 billion neurons. Brandon's is pushing 100. >> Yeah, but 86 billion neurons is what the average human brain would have. >> Pre-college. >> It's scary to me, but it also demonstrates why we're at where we're at. Because this is a totally different type of mathematics and technology: they literally went and studied the brain, figured out how it worked, and then tried to replicate that process.
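For a rough picture of the neuron structure described above-- inputs, an internal processing step, and an output passed to the next neuron-- here's a toy forward pass in Python. Every weight here is an invented number chosen for illustration; in a real network, those weights are what training learns from data.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted inputs, internal processing, one output."""
    z = np.dot(inputs, weights) + bias  # combine the incoming signals
    return 1 / (1 + np.exp(-z))        # sigmoid "firing" strength between 0 and 1

# A tiny pass through the layers: inputs -> hidden layer of two neurons -> output.
x = np.array([0.5, 0.9])  # made-up input signals
hidden = np.array([
    neuron(x, np.array([0.4, -0.6]), 0.1),
    neuron(x, np.array([0.7, 0.2]), -0.3),
])
output = neuron(hidden, np.array([1.2, -0.8]), 0.05)
print(output)  # the next "neuron" downstream would receive this value
```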
And it's basically these three layers that are talking, then talking back to each other, giving feedback to the beginning, and going back and back and back. >> Dude, and it started in 1943. >> Neural networks? >> Neural networks. >> Yeah. >> 1943. >> So machine learning-- from what I understand as a non-technical person, to put into perspective how AI came to be and how we can use it now-- is that before, computers weren't big enough to actually run any of these very complicated models to generate what machine learning does. So machine learning basically allows a computer to intake a lot of data and information and then create reasons for what it is-- definitions. So instead of it just being a random yellow pixel that's attached to a flower, it now knows that it's a flower because it's got other data to support it. >> Well, it's going to break up that flower into a bunch of different bits and send those as inputs, right? And then it's going to come back and say, "Collectively, based off of all these pieces, I think that is a flower." >> Right. >> Yep, and so that's where machine learning comes into play. And then you have the neural networks, which then allow there to be basically a human brain that can understand what that-- >> Yeah, the neural network is literally the processing power, the processing of that. >> So the way that it processes machine learning's output is similar to a human, and that's what makes these artificial intelligence-- >> If it's designed after the way the human brain works, does it actually work that way? Some doctor-- someone's probably going to-- I'm guessing that our brains are more sophisticated than these algorithms that they came up with. >> Yeah, and I think that's kind of a big piece: we think this is how our brains function. And so we're building technology that kind of mimics, you know, what it's like having a neocortex on top of a reptilian brain or whatever. And maybe it's right, maybe it's not. We really don't know. We don't even understand how consciousness truly works. We know when we give somebody a bunch of medicine that they disappear for five or six hours during surgery or whatever. So, yeah. >> And I think it's important to say at this point that AI is probably a bad term, because it's artificial intelligence, and that's a very generalized term. But what we're looking at today is still this concept of weak AI. These systems are doing what they've been instructed to do. Everybody's waiting for them to get sentience or self-awareness. But these models and this technology that we've been building-- even ChatGPT-- it's still a weak AI system. It's going to do what it's been instructed to do. >> It's not conscious. >> It's not conscious. >> It's not doing its own kind of thinking. It's literally just predicting what the next word is going to be. That's it. Now, it does it very well. >> And we're starting to be able to mimic consciousness by adding other APIs and layers to decide what the next step should be. So you can give it an open-ended task, like, hey, figure out how to sell this thing on the Internet. Well, it can go to the Internet-- it can go to the greatest tool ever created and say, how would I do this?
Reads the information, it gives back the prediction, and then it can prompt the next thing, which kind of looks like somebody who is aware. >> Yeah, they're replicating-- the big advancement right now is probably AI implementations replicating human tasks, which is really what's happening. >> Yeah, and I just want to reiterate: it doesn't actually know anything that it's doing. >> But Ray Kurzweil, a futurist, is predicting that AI will become fully self-aware by 2045. >> Yeah, but he's-- so Ray's been making all sorts of predictions for quite a while. I don't mean to say the guy's crazy, but he also takes, like, a fistful of pills every day just so he can live forever, right? Like, so anybody-- >> Oh, this is the PayPal guy, right? >> Yeah, well, so he was at Google. Is he still with Google? >> I don't know. >> I wonder. You know, again, anybody that wants to live forever, in my opinion, is suspect, because I'm 47 at this point, and, ah, you know, if my time gets called tomorrow, I've had a good life. But he wants to live to be 1,000 years old, and he's taking, you know, bags of horse pills. So, yeah, I don't know. >> I think it's just important to give the perspective of where this stuff's at. Like, we're not at Skynet yet. >> Right. >> It will happen, but we're not there yet. >> Yeah. >> Yeah, and there's no reason that you should be scared of using AI. >> No, no. >> Like, look, basically the way these things were created is that with machine learning and the neural network, you can now complete a task a little bit faster with a little bit better information and get from A to B faster, but you as a human know what A and B both are. Machine learning tools can help you once you start that journey. >> Yes. >> But there's nothing you should be scared of when you're using AI. >> Yeah, totally. >> It's just a resource to your advantage. Don't see it as something where you're cheating. You're not. You're just taking advantage of and leveraging a software-- not the internet, but a very advanced software-- that can help you do your job better, in almost all aspects of your working life, for sure. >> Yeah, 100%. And you made a really good point. We are the creative ones. I use AI a lot, but it's generally a get-unstuck tool, right? It's a fill-in-the-gaps tool. It's "I have a problem"-- and we'll get into more real-world use cases later-- but don't be afraid to use it, first of all. That's kind of the point of this whole thing: definitely try it out, because once you do, there will be light bulbs going off. And also, it's still only as valuable as your creativity and your input. That's the power of it: it's there to enable you, the creator. >> And there are a lot of misconceptions about AI as well. So the next question we have: what are common myths about AI being sentient or conscious, and why are they inaccurate? >> Because it's not. >> Yeah, it's just not. Even the best-- ChatGPT, OpenAI-- it's not conscious of anything. It's a weak AI. It's just really good at mimicking. >> Intelligence. >> Mimicking. >> That's really what it's doing: mimicking intelligence. >> Yeah. And they're creating tools specifically to trick us-- you talk about the black magic underneath the hood of ChatGPT's actual interface.
Like, they're doing things under the hood that aren't even AI, right? It's just, like, an API that goes and tries to sort out or filter what you're trying to do. All of this is in an effort to make it look like it's magic, right, and to make it look like it's understanding things. Can you explain vectorization at a very, very high level for people? >> So this is going to get us to our RAG conversation. >> Yeah, that's fine. >> So basically the way you can think of a word vector is that we can take a string of text and turn it into a-- the best example, and I'll credit Aaron, the CEO of Prompt Privacy, who talks about how it's more like a lat-long. So I can take a string and generate a geolocation for that text, and that text is now somewhere over here. And then I can take another piece of text and generate another lat-long, and that's over here. So if this sentence is "I like cats" and this other sentence is "I like kittens and cakes," they're pretty closely related, right? So that's really what we're doing when we take text and vectorize it: we're giving this text a geolocation, a point in space, to be able to say, yeah, that's where this is. >> And if "cats" gets geocoded once, it's going to have a similar geocode the next time. >> Exactly. So "cat" will always be geocoded toward this location, and "kitten" will always be geocoded to that location. They're always going to be very similar. And so when we talk about RAG architecture, or even just when you're using ChatGPT and uploading a PDF to it, what it's basically doing is taking that PDF and converting the different chunks-- we won't get into the details-- but basically saying, we'll take different parts of this document. >> It has to break it up into different parts, yeah. >> Right. And we're going to turn those into these numeric values, these location points. And then we can say, hey, when somebody asks a question like, "Tell me everything you know about cats," it's going to take that user's prompt, generate a new geolocation, if you will, and then try to find everything that's just around it. >> The nearest neighbor search. >> The nearest neighbor, right? And then it's like, oh, here are all the nearest neighbors. Now we're going to take those results, and we're going to also feed them to our large language model. So now: tell me everything you know about cats, and here's everything we've pulled from the database that's a nearest neighbor to cats. >> So this isn't really new technology. >> No. >> This is a transformation in storage architecture. >> Yes. >> Because everything we talked about was the same thing with machine learning. You're converting these data points into numbers based off of something, and it's going through and making a decision about it. >> Right. >> But what we're talking about is doing this with a bunch of text and language data. >> Yes, yeah. And, like, we build a lot of databases-- those are structured data sets. This is unstructured data. A sentence doesn't have, you know, a beginning, middle, and end; it's just a bunch of words strung together. Vectorization is the process of helping you find the relatives, or closest siblings, of that text string. >> Yeah. >> And the machine learning uses it. That's how it understands it. That's what it does. >> Exactly. >> Okay.
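To ground the "geolocation" analogy, here's a small Python sketch of embedding similarity and nearest-neighbor lookup-- the retrieval half of RAG. The vectors are hand-made two-dimensional stand-ins chosen for illustration; real embeddings come from an embedding model and have hundreds or thousands of dimensions.

```python
import numpy as np

# Toy, hand-made "geolocations" standing in for real embedding vectors.
vectors = {
    "I like cats":              np.array([0.90, 0.80]),
    "I like kittens and cakes": np.array([0.85, 0.75]),
    "quarterly tax filings":    np.array([-0.70, 0.20]),
}

def similarity(a, b):
    """Cosine similarity: how close two 'points' are, regardless of length."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query_vec, k=2):
    """Rank stored chunks by closeness to the query's 'geolocation'."""
    ranked = sorted(vectors.items(),
                    key=lambda kv: similarity(query_vec, kv[1]),
                    reverse=True)
    return ranked[:k]

# Pretend this is the embedded question "tell me everything you know about cats".
query = np.array([0.88, 0.79])
for text, vec in nearest_neighbors(query):
    print(text, round(similarity(query, vec), 3))
# In a RAG setup, the retrieved chunks would then be pasted into the
# large language model's prompt alongside the original question.
```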
>> Yeah, yeah, and that's really-- so when you're doing anything where you're dealing with RAG, which is retrieval-augmented generation-- >> Yeah. >> Right, it's basically just taking text and turning it into a geolocation lookup, because you can say: given these two geopoints, how far apart are they? There's a mathematical algorithm to calculate how far apart two geopoints are. The same exact thing happens with large language models when you're uploading a bunch of data. >> Yeah. >> So when you're creating a GPT and you upload a bunch of documents for that GPT-- which I think stops at, like, 10 or 20 documents that you're allowed to have-- same exact concept. They're basically just embedding all of the different chunks. And so when you're having the conversation, it takes your answer-- or your question-- converts it into a geolocation, and asks how similar other things are that are related to it. >> I think people are getting so excited about RAG architectures because it's allowing companies, and even people just with their own data-- I mean, I know you do this with your personal data set-- it's allowing people to interact with information, unstructured data, documents, PDFs, or whatever organizational data you have, but with human language. I can't even tell you how many times I give up looking in our Google Drive for a specific document because we don't have them tagged very well, or the title is not really what I was thinking the title should be. This is hopefully-- it's not the holy grail, but it's a big step forward in retrieving data faster. >> You're not just retrieving the document. You're retrieving the content from the document. And it might be able to harness some other pieces that are close neighbors to it. Right? So let's say I'm looking for a document, and that document has specific information on the treatment of a disease. It's going to find, inside of that model, a bunch of neighbors to that. So it might find the document you uploaded, but it also might find something else that's nearest to it. >> Yeah, I mean, I think this is going to create a new job, which is just your data curator guy or gal. Someone whose only function-- especially in larger organizations, they'll probably have entire groups of people. But data is going to become so valuable, especially your domain-specific proprietary information. It's going to be so valuable for other people in your organization to be on the same page. Like, how much time is already spent in organizations trying to figure out what my process should be for whatever steps? If you can make it much more approachable for people to retrieve that information, I mean, hello, you're making everyone's life a lot easier, spending more time on the actual work and less time trying to find the right information. >> Yeah, we've got two more sections to go through, guys. These are going to be kind of quick hitters, because I want to go through each question, question by question, because I think they're really good. So: the practical applications of AI. Question number one, can you give examples of how AI is currently used for writing better emails or creating business documents? Jacob. >> Yeah, I mean, there are a ton of platforms out there specifically for writing emails. I think nowadays I'll just use ChatGPT with GPT-4, and I will give it, you know, just a very basic prompt.
You don't have to be a prompt engineer to do this kind of stuff. I would challenge you: go to Bard, go to ChatGPT, and just say, hey, I want to write an email about the following things, and then list out whatever bullet points you want to make, right? I do it all the time. Then, you know, press enter, and out comes a pretty dang good email that always starts with a really bad sentence, so just delete the first sentence. >> Always, always, right? >> I forget what it is. It's like, "hope all is well" or something like that. >> Wait, I do that all the time, and I'm a human. >> No, it's not that. Whatever it is. I'll have to get tested. But, yeah, that's the best general use case. Every single day you're going to write an email, right? And if you're like me and lazy, you can just use that as your starting point. I never just copy and paste it in, by the way. It's always: oh, cool, this is kind of what I wanted to talk about; now I'll adjust it. >> The other thing that I'll say, too, on writing the emails: once you've gone in and you've got the first version of that email, let's say it sounds super professional, because a lot of them will come out sounding like, I don't know, an AI wrote it. You can say, hey, I'm actually a 25-year-old CEO; here's a link to my LinkedIn profile; write it in the way of me. >> Are you 25? >> I'm 25 years old. >> My daughter is as old as he is. I could be your dad. >> That's possible. That is how that works. >> You guys look a lot alike. >> Sorry, my mom is white. I don't know how that all-- really tan. All right, next question. How do AI tools like Notion AI assist in creating education and blog post outlines? >> Notion's AI is a newer thing. It's basically their answer to RAG, right? If anyone's ever used Notion before-- every platform is going to come out with this. Every document management or resource management platform like this is going to have some sort of general "ask me a question about my documents," right? And that's all this is. And not all-- I'm sure it's very complicated. I'm sure their engineers spent lots of time on it. >> Yeah. I mean, the one thing that I would flag about those tools, especially Notion, is I'm pretty sure that shit gets super expensive. That's one thing, I think, for people that are starting to wonder, how am I going to get into AI? I still think ChatGPT right now is your best bang for your buck, because they're the source of that infrastructure, they're selling it directly to you, and the tools keep getting better. >> Or Claude. Claude's another good one. >> Claude, Bard, or OpenAI are going to be the tools. If you're looking for practical applications-- writing emails, or just searching documents-- those will be great. The other question we have is: how do you leverage AI for social media posts? I'm currently doing that a lot. You can upload an entire YouTube clip to a series of platforms. The one that I use is called Opus Clip. All the shorts that we post come directly from Opus Clip. Basically, it will take a, let's say, 50-minute podcast video. You upload it, and you click on it about 20 minutes later-- instead of five, like a couple of episodes ago. >> Yeah. It's a little long. >> Yeah, and it'll create everything custom, already ready for you. >> How many does it generate? So when you upload the podcast to Opus Clip, how many shorts does it generate for you? >> Yeah, if it's 50 minutes, it'll generate probably 18 to 25 shorts.
Now, when I uploaded Sean's, which was only like two minutes long, it created two. >> Right, right. So then when it generates all of those, do you just basically say-- is there a customization that you can do to them, or is it pretty much just like, yep, those are all the ones? >> You can just download them, but I don't. There's an edit function. So you can go in and edit where a clip starts and where it ends. And they have this new update now where you can click to add images, and it'll flash images while the video is going that correlate with the conversation we're having in the podcast, which I think is pretty cool. But yeah, creating effective social media posts-- seriously, if you want to create engaging social media posts for your company and you want a practical application for AI, 100%, you should be using Opus Clip to create very engaging shorts. They put the text and the transcript on there. Our videos get pretty good views, and we've been at it for about 18 days, so it's been great. So if you guys have enjoyed the Big Cheese AI podcast and have some ideas for future podcasts and topics you want us to cover, email pod@bigcheeseai.com. And the other thing is that at episode 10, we're going to start inviting guests to join us on the pod. I think there's a gentleman from Prompt Privacy, the CEO, that might be on the show. If you're interested in coming, we have High Noons and other 100-calorie drinks so you can stay healthy and also get a little buzz on while you're here at the podcast. I'm Andre. We've got Sean Hise, Jacob Wise, and Brandon Corbin. Thanks for tuning in to this episode of the Big Cheese Pod, and see you guys next week.