In this episode of the BigCheese AI Podcast, our regular hosts Brandon Corbin, Sean Hise, Jacob Wise, and DeAndre Harakas dive deep into the pressing concerns surrounding the ethics of artificial intelligence. They tackle the question of why ethics in AI is a significant point of discourse, its real-world implications, and the viewpoints of key players in the tech industry.
And welcome back to the Big Cheese AI podcast. I'm the world's second-best moderator, joined by Sean Hise, Jacob Wise, and Brandon Corbin, the usual suspects here at the Big Cheese AI podcast. Today we're diving into the ethics of AI. Now, in our pre-podcast meeting that we have before every single podcast-- that may or may not happen every single time-- we started talking about the ethics of AI and why we're talking about it at all. What makes it important? What's the end game? Why are people taking this stuff into consideration, and why is it such a hot-button topic? We'll cover the big news from the last couple of weeks around ethics in AI. But in general, what is the end game? Why are we concerned about this? What's the point of talking about ethics in AI, Sean?

I think it's pretty simple: there's an overall question people are trying to settle, which is, is AI good, or is AI bad? And people have placed a lot of emphasis on this concept as a potential existential threat to humanity. The biggest billionaires in the world, like Elon Musk and Sam Altman, when they founded OpenAI, for example-- which is really viewed as the leader in democratizing AI for everybody-- they looked at it as, hey, this could possibly be a really big change in how the world works. And so we're going to implement this with ethics and morality in mind. Which, you go back and you say, wow, is this technology so important that they're really questioning whether it should exist at all?

Right. And these are the richest people in the world sitting back and saying, OK, there is an opportunity to make a ton of money. Obviously, these are money people. Musk wants to make money. Sam Altman wanted to make money. These people are money people. And they make it a nonprofit.

Yeah, they made it a nonprofit because they are either really, really good at marketing and really, really smart, or they understood the risks. We'll get into that more. But I think it's a question that you have to address and you have to answer. And maybe you look back at history, or at your own experience, and think, how should I approach thinking about this?

Yeah, so my stance being-- with Gen Xers, it was always: no one cares about your feelings. No one cares about you, even. Your parents don't even care about you. That's why you're a latchkey kid. And you just grow up knowing that no one's going to be out there to protect you. So any time a company or a product comes out and says, we're going to implement these things to protect you, it's a huge red flag for me. And I assume it probably is for a lot of people around my age range, which is the 45-year-olds. So when you threw out the idea of, let's do ethics and AI, my gut reaction was: it's all bullshit. There's no need for it. You don't need technology that's self-aware of its ethics or trying to overcorrect itself to make sure it doesn't offend anybody. And so during my lunch before the podcast-- I go to this place called Arnie's, and I have this gluten-free pizza and maybe a drink or two-- I went through all of this. This time I went to ChatGPT, and I asked: what other technologies were launched that required us to have ethics as a conversation point, thinking that there were none?
I'm like, AI is the first one where we've had to worry about ethics. And the very first one it came back with was nuclear technology. And that one was just like, oh yeah, that makes sense, right? I mean, out of any technology advancement, nuclear was really the one where it was just like, this could potentially destroy the entire-- they literally thought-- I don't know if you guys watched Oppenheimer.

Yeah, it's so good.

But they literally thought there was a chance, when they ignited this thing, that it would pretty much ignite every atom on the planet and cause the whole fucking fabric of space-time to disintegrate. So there were obviously some ethical implications that they had to think through. And we launched it anyway. And we did it.

But the other ones that were applicable were biotechnology and genetics-- so CRISPR, where now you have the ability to hypothetically do designer babies. That's a huge implication: what is going to happen? And as soon as everybody starts doing designer babies, does not everybody have to do designer babies, right? Where you can go in and be like, you know, I want my baby to have blue eyes. I want him to have blonde hair. I want him to be 6 foot 4 when he's fully grown, built like an Adonis.

Isn't that kind of the premise of Gattaca? The naturally born versus the lab-grown?

That's where we will get with CRISPR, because eventually what we'll do is we'll first take on just fixing genetic problems. Like, if this child has Down syndrome, we can go in and correct it before it's born.

I mean, I have friends that have had that gene. And they selected their embryos based on which ones didn't have it. Those are children I know who were born without deficiencies that they were at risk of if they had been conceived naturally.

It's interesting to me that that's where it always starts, right? You think about the Neuralink technology, and where do they go first? People with disabilities. That's where they start. And you think about this revolutionary CRISPR technology, right? At first, we go after those kinds of things. But so quickly, it's like, we're starting off on this side, but we're really trying to make a billion dollars on the other side once it's widely adopted. It's altruistic to be like, we want to help the people who are paraplegic. And so if that's the case, I mean, the question is, right? Is that what OpenAI did? We come out as a nonprofit. We come out wanting to serve people with disabilities, very similar in context. And very quickly, as soon as we've kind of gotten past that point, we flip it on its head and go make the money. Jacob?

I read that article that Sean sent, and it was kind of their response to Elon. I think we'll get more into that here in a little bit. But essentially, they stated a few good things that they were doing, helping people in third-world countries or less privileged people. And they were like, OK, to advance this technology, we need money. And we can't get enough from donors for this nonprofit-- because why would you, just out of the goodness of your heart? So capitalism comes in to save them. Like, well, we'll create a for-profit company that is going to help us do all the good work.

That's where I come to a full stop. What is the reasoning that they thought they're so important that they needed to be the company that raised the capital?

Well, that's the scary part, right?
They feel like they're the only ones responsible enough in the whole world. You know what I mean? And they are the sole proprietors of that technology. They knew that this had the potential to make a bunch of money. And they didn't want to be the ones that didn't make the money. So their mission at that point quickly became non-altruistic when they saw the dollar signs. Because otherwise, they would have gone-- they would have approached Microsoft or Google and said, hey, you need to acquire our company, because you have the capital.

And that's essentially what they did. $10 billion or whatever.

Right. I mean, honestly, they could have stayed a nonprofit. Nothing would have changed. The only capital that's going to get invested is capital that's going to be for-profit. It's not NASA. We're not going to the fucking moon. That's the only altruistic thing that we've ever done, right? We had to prove to people that we could go to the moon. Have you ever seen JFK's moon speech? There was no economic reason to go to the moon whatsoever.

Yeah. It was to inspire people.

Well, and that's a good point. It's like, can capitalism ever fund altruistic ideas? And I would think you can probably say no, because it always goes to, OK, well, how are we going to turn this into a profit center, right? The government may not be perfect, but at least it can throw money at something like NASA. So there's really no gain. Like, we're not going to-- you know what I mean? Our lives didn't get better directly because we landed on the moon. Now we have all these technologies that derived from that.

Yeah, GoPro. Yeah.

But that's how you have to fund those-- anyway, that's not this podcast. At the end of the day, there can be no consortium that isn't really a-- it would have to come from a government or something. That's the purpose of government: to regulate, right? It was never going to come from a bunch of former Silicon Valley billionaires. That was never going to happen. They like to toot that horn and look like they're playing the part of the saviors of the world. But at the end of the day, they're trying to make money. And I don't fault them for that, but it's like, yeah, you're doing it under the guise of, oh, we're here to save the world. And there are good things that have already derived from it, but is that their goal, really?

I would agree with that completely. We're all OK if you want to go make money, but stop with the, well, we're doing it for the right reasons. It's like, you do it for the right reasons until you have the ability to do it for the reasons you actually wanted.

And we talked about it in the Real Housewives podcast, but let's go back to that and review what happened, right? Their main guy, their main technology guy, came to the board and was really upset about what was happening, because they were changing directions and going straight for the cash. And this was at the peak of their user base. And what are they really doing right now? They're serving productivity needs. They're serving business users. They're serving people that are using it to make their lives more convenient-- just like the Yellowstone episode, buying the refrigerator, buying the washing machine. That's what people are using this technology for. They're not necessarily using it to grow crops in Africa. They're using it just for business purposes.
That clip-- which I just realized is actually a good analogy-- well, now they're also trying to build the hardware that drives it and powers it. Well, that makes sense, because in the clip he showed, GE, right, was producing the power. And then they were selling the appliances that consumed the power. Or they were renting them-- even better, they were renting them. So you build-- and even if it was based off of a good idea, like, this is going to help people-- you build the thing that they can't live without, or you convince them they can't live without it. And then you build the underlying technology that is required to run the thing, and you profit off of that as well.

So it's been a series of never-ending cycles of this. So where do you start? OK, you start in 1923. You've got the refrigerator. You've got your car. You've got your washing machine, right? The next thing you know-- I have one car for the house. Now I have two cars for my house, right? And then the next thing you know, they invent the phone. And then they invent the mobile phone. And all of these things you're renting, all of them. Oh, I have cable. Now I have streaming services, right? And oh wait, let's get super greedy. I don't want to pay Comcast. I'm going to make everybody pay for each of my pieces of content separately. And what have we done but layer on-- oh, fast food, right? Just the food supply in general being basically convenient, candified. There's a lot of stuff that's happened that unfortunately is just profiting off of people's tendency to buy things that make their lives easier and more convenient. And so we've layered on-- it's like a layer cake, and it's got like 10 things in it, right? And you wake up every morning in debt, because it takes $5,000 a month just to live this really, really easy lifestyle.

Yeah, I do think we're kind of nearing the end of that cycle. Because even younger people today care more about experiences, more about quality of life-- like, go to the gym. But they don't need all the things. I think we were in a time where it was like, OK, consumerism is first, and conveniences and all those things. I do think we're trending out.

Oh, I mean-- but can you ever really free yourself from that? Like, the machine of capitalism is just going to come up with every possible little addiction. It's so hard. I mean, think about how much different your life would be if you didn't have a cell phone, if you didn't have a car. Well, first of all, you can't live without a car. Like, they built the system. The suburb was created to trap us out there.

In a food desert, right? Like, you know what I mean? You can't walk five minutes to go get food. You have to drive.

Yeah, you literally have to. Now, why is that?

Now, again, keep in mind, we are in Indiana. We're in the Midwest.

That's true.

Where every city is literally designed for cars first and foremost, and people who walk are an afterthought.

Now, I'm sorry-- sorry to interrupt you. Here's the kicker: I think Jacob was a little bit onto something here, because believe it or not, what's the number one city in Indianapolis? And everyone would agree that it's the best city.

The best city in Indianapolis? The best city in Indiana? Or the best little suburb in Indianapolis, part of--

Carmel, Indiana.
And Carmel, Indiana is regarded as not just one of the top places in Indiana but one of the top places in the country, because it was fundamentally designed a lot like Europe, with walkable communities.

And no black people. Yeah. Right? [LAUGHTER]

And that leads us to the next subject. So what's going on? There's this whole thing going on with lower-income communities versus better-off communities. Obviously, socioeconomically, there tends to be a difference in the kind of people that are there. I'm from over there. Other people aren't. So that's the case. But what's going on, actually, Brandon? [LAUGHTER]

Come on. I want you to figure that one out real quick.

I might be the last person to ask that question. No, so again--

This is back to the ethics piece, right?

Well, full disclosure, I was raised in Carmel, Indiana. Now, we were the poor part of Carmel. But nevertheless, the poor part of Carmel is still richer than 90% of the population out there. And Carmel-- at least when I was growing up there-- was very much this insular place. It was white through and through. I mean, we might have had one or two people who were not white at our school at the time. So again, I don't even know what you're asking me. But I do know that Carmel was--

Well, we were talking about police cars.

Oh, we probably had our conversation before the power. I think what we're getting at is the fact that one of the ethical dilemmas on the list is the perpetuation of existing bias. And right now, we know that's being done with generative content, potentially. But even more penalizing is potentially the continued widening of the overall gap. And maybe law enforcement, which we haven't talked about, is something that is going to explode in terms of its overall capability to perpetuate bias. Because there are things they could do that basically would catch every single criminal. Instead of just, oh, hey, we picked him up, he's got a warrant-- it could get to the point where there's no crime because everybody's already in jail. You know what I mean?

So what's funny about this is that I have very strong opinions on the difference between the technology that police use and the technology that non-police use. Non-police technology, in my humble opinion, should not be nerfed. Like, if I want to hear a racist joke coming out of a large language model, even if it's making fun of a white dude, I want it to be able to do that. And then I can decide, do I want to take this as something I believe or not? However, as we were talking about before the show, when it comes to police and the technology they have, that shit needs to be locked down-- I mean, so insanely perfect that they probably won't even be able to have access to that technology for the next 10 years, because it just doesn't exist yet. Because the fact is, when it comes to the police, who can remove your freedom, then it really matters to me that the technology they are using is grounded-- no racist tropes, no bullshit, right? But when it comes to me chatting with a large language model, no, I want you to tell a racist joke if I ask you for a racist joke, and not tell me, oh, I'm sorry, I'm a large language model, I'm not allowed to do this, right? That part I don't agree with.
But when it comes to the government and the police, 100%-- anytime someone's individual liberties are at risk, then we need to say, you can't use this until it's well-grounded. Yeah, basically, that's my stance.

I think that gets into a conversation around-- there's a difference between-- OK, and maybe there isn't. There's a major moral and ethical dilemma around what large language models have done in terms of consuming other people's information and then repurposing it and selling it, right? That's a copyright thing. That's an intellectual property thing. But there's also this untested side-- and that's where I think, when they were talking about, should we even do this-- there are some unintended consequences from some of this stuff. And if you put this in the hands of somebody that can take away your freedom, and you don't know what could potentially happen, that's scary.

We talked about this a couple episodes ago. They had technology where, given an image, police could run it through facial recognition and get a person back. Some of the concerns with that were, it misidentifies people of color more often because of the data that it was trained on, which is a problem [a sketch of how that disparity gets measured follows this exchange]. But that's inherent bias in the data set, which can probably get--

That's a really good, concrete example.

Yeah, that can probably get corrected over time. But that is a problem. So we can't deploy those technologies until we're certain that the bias is out. Because then, what do you do? Well, now you're going and arresting more people. And that person who wasn't a criminal, who got arrested because of the misidentification, is now probably more likely to distrust the police. So it's a whole cycle. But the other part of that that was scary was that it was a private company altering the results based off of what they thought was right. A New York Times reporter was asking questions, so they took her out of the database, so the police couldn't find her anymore. I'm like, OK, we definitely need to know if that kind of shit's happening as a private citizen-- and the police probably don't have any clue that this is the reality of these situations. But yeah, that's where I have some major concerns.

I also think that there's a concept of literally walking on eggshells in the future as a human being in this country. Yeah. You know what I mean? What's the precog? We talk about Minority Report all the time. OK, so this is where that stuff starts to creep in, where it's like, we're going to predict, using machine learning. Think about if you're walking down the street and you cuss. Yeah, yeah. Or if you make an aggressive movement towards somebody or something. And then you get tagged or cataloged as an aggressive human being. I think it's probably not unrealistic to think that's exactly what it's going to look like. Our society has progressively moved in this direction. Because if you do get called out for an aggressive movement, then what does that potentially prevent? It prevents someone maybe getting hurt. So then that protects someone. If that protects someone-- It's good.

But here's the other part, too, that goes back to your point. OK, great, we want the police-- I was on board, but I thought there might have been a problem with the original idea. Which is, OK, I agree 100% with it, but what's to stop a police officer from just logging in on his own personal account and doing the same thing?
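[Editor's note: to put the misidentification point above in concrete terms, audits such as NIST's face recognition vendor tests surface this kind of disparity by comparing false match rates across demographic groups on a labeled test set. Below is a minimal Python sketch of that calculation; the records, group names, and resulting rates are invented for illustration and don't come from any real system.]

```python
from collections import defaultdict

# Hypothetical audit records: each entry is one face-match attempt,
# with the subject's demographic group, whether the system declared
# a match, and whether that declared match was actually correct.
results = [
    {"group": "A", "matched": True,  "correct": True},
    {"group": "A", "matched": True,  "correct": True},
    {"group": "A", "matched": False, "correct": None},
    {"group": "B", "matched": True,  "correct": False},
    {"group": "B", "matched": True,  "correct": True},
    {"group": "B", "matched": True,  "correct": False},
]

# False match rate per group: of the times the system declared a
# match for someone in this group, how often was it wrong?
declared = defaultdict(int)
wrong = defaultdict(int)
for r in results:
    if r["matched"]:
        declared[r["group"]] += 1
        if not r["correct"]:
            wrong[r["group"]] += 1

for group in sorted(declared):
    rate = wrong[group] / declared[group]
    print(f"group {group}: false match rate {rate:.0%}")
# A large gap between groups, rather than any single wrong match,
# is the disparity this kind of audit flags.
```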
But I understand-- it would be the software. Then going to the software: are there going to be private companies developing the software? Exactly. Well, then how do we prevent those private companies, which are not under the jurisdiction that the police departments and the government are under-- how do we control the biases of the companies that actually supply the police?

So then you might see corruption. Maybe that private company has ownership in prisons. And they're like, well, shit, we'll just-- Exactly. If I were in the prison business, I would.

This is the issue. This is the issue. If you have AI systems trying to report fact-- they're large language models. They are not the source of truth for facts. Some of the data that they're trained on may or may not be fact.

Say that one more time for the people in the back.

AI systems are not historical systems of record. They're not. Most of the time, they're not tagged with the appropriate sourcing that a scholarly document would be created and published with. And not every single thing that gets created is a scholarly document. But basically, in the future, what's going to happen is you're going to have people using AI interfaces as system-of-record interfaces. And that is putting an abstraction layer between you and the facts that could be manipulated by anyone.

Yeah, it is a black box. And even the people who build them don't understand the input and output. They don't understand how it gets to that. They can't know that. It's just making a decision based off of the data.

So real quick, to that point, a quick story. And this might all be bullshit, so take it with a grain of salt, because I can't actually claim that I know the individuals who were involved. But the objective was to make an AI that could generate a new type of encryption scheme. So the researchers gave the AI-- OK, we have three AIs: A, B, and C. A and B, I want you to be able to communicate back and forth and make it so C cannot understand what you're talking about. And C, your objective is to try to intercept the communications between A and B and figure out what they're saying. So it runs a million iterations or whatever it is. And sure enough, it comes up with some scheme so that A and B are able to communicate and C can't. And the researchers were like, and we have no idea how it works. It just works [a sketch of this setup follows below]. That's the point we're eventually going to get to.

Yeah, they didn't divulge the secret.

No, it's like, well, there's an encryption thing happening here. We don't actually understand how it's happening. But we do know that A and B can encrypt and communicate securely, and C cannot intercept it. And we don't even understand the communication mechanism they're using to do this. And this is the Wild West that we're in now.

That's the ethical thing. That's the ethical thing. And this is similar to me-- I don't want to minimize that there are multiple ethical dilemmas, and I don't want to minimize any of them. But if you're asking, does Sam Altman care about ethical bias when it comes to genders and other stereotypes-- do you think that's one of the reasons why he made it an open-source nonprofit? No. They did it for the bigger risk. And I'm not saying bias isn't a big risk. It's a huge risk. It's terrible. It could have terrible consequences. But you know what's terrible? The world ending. Yeah. Right?
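[Editor's note: the A-B-C story sounds like Google Brain's 2016 adversarial neural cryptography experiment (Abadi and Andersen), where three networks-- conventionally named Alice, Bob, and Eve-- are trained against each other, though we can't confirm that's the study being described. Here is a minimal PyTorch sketch of that game; the architecture, loss terms, and training settings are invented for illustration rather than taken from the paper.]

```python
import torch
import torch.nn as nn

# Toy version of the three-party game described above. A (Alice)
# encrypts a message with a shared key, B (Bob) decrypts with the
# same key, and C (Eve) eavesdrops on the ciphertext without the key.
# All sizes, losses, and hyperparameters here are arbitrary choices.

N = 16  # bits per message/key, encoded as values in [-1, 1]

def net(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                         nn.Linear(64, N), nn.Tanh())

alice, bob, eve = net(2 * N), net(2 * N), net(N)
opt_ab = torch.optim.Adam([*alice.parameters(), *bob.parameters()], lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(5000):
    msg = torch.randint(0, 2, (256, N)).float() * 2 - 1
    key = torch.randint(0, 2, (256, N)).float() * 2 - 1

    # Eve trains to reconstruct the message from the ciphertext alone.
    cipher = alice(torch.cat([msg, key], 1)).detach()
    opt_e.zero_grad()
    eve_loss = mse(eve(cipher), msg)
    eve_loss.backward()
    opt_e.step()

    # Alice and Bob train so Bob decodes well AND Eve decodes badly
    # (the hinge term only penalizes Eve doing better than chance).
    cipher = alice(torch.cat([msg, key], 1))
    bob_loss = mse(bob(torch.cat([cipher, key], 1)), msg)
    ab_loss = bob_loss + torch.relu(1.0 - mse(eve(cipher), msg))
    opt_ab.zero_grad()
    ab_loss.backward()
    opt_ab.step()

print(f"Bob error {bob_loss.item():.3f}, Eve error {eve_loss.item():.3f}")
# If training succeeds, Bob's error falls while Eve's stays high --
# and, as in the story, nothing in the learned weights tells you
# *how* the resulting scheme works.
```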
Stuff that is beyond scary. And I think that's where you don't want to get too caught up in just one little thing. You need to look at all the aspects. So for me it's like, what is it for you? If it's racial bias or gender bias-- for you, what's the thing that gets you, I guess maybe not the most scared, but that you would focus on, that's not necessarily in the news?

I mean, the thing that I would focus on is actually exactly what Brandon just said. When we kicked off the podcast, we talked about the end game if these ethics aren't managed or talked about. And based off everything I just heard, especially with your thing at the end, there's a late stage where you're walking down the street-- this is late stage-- and you go to jail for whatever reason. And no one can tell you exactly why you went to jail. But they know you absolutely should, based off this model that no one understands. That is the precrime thing. Well, yeah, that is terrifying to me. And maybe that's never the reality. But we have all these systems that could have their biases or their opinions in play. And we're talking about financial decisions, right? Like, can I get a loan or not? It used to be you'd go to the bank, your local bank, and they would be like, well, Jacob, you don't have a house. You don't have any money in your bank. I'm not going to give you $100,000, right? But at least it was based off of a relationship with this guy. And I knew why I wasn't going to get that money, right? Capital One, right now-- I can go and request a bigger line of credit on my credit card, and they're just going to ask me a few questions. They're going to be like, how much do you make? What's your profession? Whatever data points they need. Their algorithm is going to come back instantly and tell me, can I have money or not, right? Which is good. That's cool. I like that. I didn't have to go through a bunch of bullshit. I didn't have to call anybody. It didn't take much time. But you can see that working conversely. Like, what if you don't get it? What if they're like, actually, you're a risk, we're going to cancel your credit card-- and you needed that to make ends meet this next month? That's not the end of the world when it happens to one person. But that happens to a lot of people. Then you create this section of our society that is unsustainable. And it's only a matter of time before it grows to the point where it's like, we're back to the French Revolution. It's like, well, shit, I don't want to be a part of that world. We've got to take the power back. Take the power. Fuck them.

Because I think the AI doesn't take into consideration-- and Sean and I have been talking about this since the Super Bowl-- the incredible power of human innovation and hustle. That's the biggest loss right now, in my opinion, with the AI revolution: the lack of focus on what humans are able to do on their own.

Right. That's a good point.

And the Super Bowl commercials were a great example, because literally, those were some of the coolest things that I've ever seen. And they were commercials, which is funny. But they were created by humans, through writing and through imagination. And so I go back to what Jacob said earlier, which is, there's probably at some point-- it's only been-- 1923 was 100 years ago, right?
That little clip from that show-- if you haven't seen it, check out the refrigerator clip from 1923, that Taylor Sheridan show, on FX or whatever.

That's one old man ago, by the way.

But there's going to be some point where people just kind of put it all down and say, hey, we're going to create-- we're not going Amish, right? But it's going to be something like, hey, let's live without some of these conveniences and live a human lifestyle. And I think that's a good way to balance some of these things out. And my biggest fear with all this is that the credit system you're talking about, the place you go get a hamburger, the judicial system, are all running on these super hyper-efficient models where that's where you check in, that's how you pay, that's where the decision is made. And so the ability to make a decision outside of that model doesn't exist. That's scary.

That's like the cashless society, right? Yeah. You know what I mean? You're just going to have people yelling into bots, like, customer service! So I think we can--

Please state your identification number.

I think what you're actually saying is that the biggest realistic existential threat of AI is lack of freedom. Yes. Yeah. And under the guise-- it was presented to us like, oh, it's going to take away all this cognitive load and make our lives so much easier. But then you wake up and it's like, well, shit.

It doesn't matter until it affects you, right? Once you get affected by not getting a loan, or you get a cop knocking on your door because you showed up in a video and you look like the guy who did the thing, that's when people will start to be like, oh, shit.

So, as a white middle-class man who is completely unfazed by a lot of the bullshit that happens to people who aren't white middle-class men: as I was chatting with ChatGPT, having my debate about the ethics of AI, I'm like, you keep saying systemic bias, and you're using that as an argument. Have there been any real-world instances of this? Or is this more of a moral boogeyman to justify the overreaching ethics of AI? is what I asked it, right? Which just shows you how white I am, right, in this idea. And then it goes, yeah, dummy, here's a list of shit that actually happens in the real world. So, recruitment tool bias, right? There have been multiple AI tools that people have built for recruitment. And that's like black names versus white names. Right. Yeah. Facial recognition, right? Black people are way more likely to be picked out, even if they're not doing anything. Health care allocation disparities, where white people are getting a lot more resources. Credit, loans, and services. Criminal justice and sentencing. And it just systematically ripped my ass apart, where it was just like, OK, yeah.

But the problem-- and I can empathize with your thought pattern there, because to us, it doesn't appear to be that big of a problem. Exactly. Because we are human, and we are good at recognizing patterns. But I mean, how many people do I know that are affected by this versus aren't? For me, the pattern doesn't look that systemic. Yes. Yeah. But we know that it is, because we can look at the data, and we can see that it is.

That's the problem, though, right? When you tell someone about global warming, and that in 20 or 30 years it's going to be a problem, they're like, I don't give a fuck. I don't care about that.
Today, I'm fine. The weather's fine today. It doesn't bother me. So it's not a real problem.

The thing that I'll say about the whole ethics-and-AI question, from my perspective, is that it's just compounding things that already are problems. So why? The question is, OK, yeah, it blew your mind. But why did it? Well, what's an AI model trained on? If you've watched our podcast, you should know all this by now. [LAUGHTER] What are models trained on? Data. Where are they pulling data from? The actual jail systems. Who has been the primary demographic in jail systems for the past 50, 75 years? Black people. So what is it going to identify more? Black people. Why? Because there are more photos of black people and black names. That's the only reason it's doing that. It's probably less to do with these--

But haven't we already done that as humans, though? What do we depict? What have black television shows and movies depicted over the last 20 years? Yeah. I mean, that was done by us. And now the AI is doing it, too. So the perpetuation of bias is real.

Yeah, like a self-fulfilling prophecy, right? There's no denying that. If there was racism before, there's racism after. It's now in our AI. It's now worse. It's just become-- you know what's actually funny, or not funny but sad, I guess? It's becoming more nuanced. Because before, you could see someone being racist, right? And then it became, oh, now it's in the criminal justice system. Now it's in the decisions that are happening behind closed doors. Eventually, it's just out of sight, out of mind. And now I can sleep better at night.

To me, though, there's absolutely hope. Because every single day, I pick up my daughter at a school in Lawrence Township where they speak Spanish all day. And you walk in, and it is the most diverse situation, the most diverse set of teachers. And I mean, you walk in, and they're literally speaking a different language. There are little African-American boys speaking Spanish. There are white kids sitting next to them, and they don't see color. And they don't now. They actually got thrown off a little bit by Black History Month, because some of the things about it kind of perpetuate that bias a little bit. I don't know if you've seen the video about Morgan-- I think that's old. You saw that Morgan Freeman video where he's like, basically, fuck Black History Month. It's just history.

So I want to actually talk about this. It has nothing to do with AI, but it is something that I've struggled with.

Can we pause so I can pee really quick? Yeah, please. Yeah, let's pause it. I'll go pee. I'll go with you, bud. [LAUGHTER] Come on. Come on. Go pee with me. Hold it for me. Dylan, you going to put that in there?

First-mover advantage: Apple rarely takes the first move. They weren't the first to make the smartphone. They're the classic second. Their stock is getting beat down right now because they don't have a competitive AI product in the market. They didn't have a fucking watch either. Right. OK, just wait. Yeah. Just wait. AJAX-- so they're calling it AJAX, apparently. I know, which is just a horrible name. But apparently JAX, J-A-X, is an acronym or something within the AI space. I won't even pretend to know what it is. But yeah, their internal model is apparently called AJAX.
And the rumor has it that we'll see it coming out in iOS 18, which should be the next version that comes out this year. And we should hopefully have a smarter Siri.

Well, and the thing about that is that you're probably going to decentralize the computing model officially, which is, all the models will be running decentralized on our-- On the device. On device.

I think it was just so sweet of you, Brandon. The last, I don't know, 20 episodes, Brandon has just gotten on here and really just attacked Siri so hard. And you can just tell that you're a little beat down about it. You've lost your enthusiasm. You're like, I really just hope that they fix Siri.

No, dude, Siri's got to be better. Again, the moment that I open my mouth and say, hey, Siri, you see my wife just go white, because she knows I'm going to start screaming at it.

Because my dad does the same thing, where he's just like, you're an idiot.

I tried to do something with it last night, and it didn't work. And I just said, never mind. I just pulled my phone out and did it manually. I'm like, just never mind.

Well, he can tell you if you open your phone. Let's skip the "my kids go to Forrest Glenn" conversation, because I don't even remember what the fuck we're talking about. Yeah, that's all right.

Well, in general, guys, we've talked about ethics and AI, and we've come to some level of conclusion. Final thoughts?

I think that there are multiple ethical and moral dilemmas. For me, the potential loss of understanding of fact and historical information, and the ability to manipulate reality, is really hitting home. Because if everything's in a model, and the model's based off of whatever gets input to it, and it's so abstracted that you don't really know where it came from, and blind trust is placed in it, then you've placed a lot of power in the hands of very few people. And that's probably why they love it so much, right? Why is this such a big opportunity? Well, why did Elon Musk ever pay for Twitter, or X, or whatever? Because he probably thought, this is a platform that I can actually use--

He got the world's biggest microphone.

Yeah, he can manipulate things, right, toward his vision. I mean, shit, if you can do that-- but now you're talking about something that nobody questions. It's like, oh, it's just truth. I got the thing. It's a machine. It's perfect. It's not like you take the output-- the poem it wrote you, or the paper, or the opinion-- and run it through a compiler that says, oh yeah, that's actually factual. It may work for some applications, like writing code, where you can go test that code and make sure it's what you thought it was going to be. But you're talking about potentially making life-and-death decisions in the health care system, or in the judicial system or law enforcement. That is really scary and has to be met with the utmost risk management. And it's like Ellie Sattler in Jurassic Park. She's like, you need to shut this down. He's like, the only one on my side is the blood-sucking lawyer. Seriously. It's like, the smarter you are and the more you know-- and that's probably why those are the ones-- we said, oh, they're billionaires, but they're really smart. They can think it all the way through, and they know there's probably some sort of existential threat. And I don't necessarily think it's sentience.
I'm thinking more that it's entrapment-- the possibility of us being sort of trapped. Yeah. And that was the death-- I was alluding to the death of capitalism earlier, because if you automate everything, well, then we have to create a new system, because the people aren't generating-- they're not the ones producing anymore. So what are they going to do? They can't just give people money, right? I mean, we could. I think the next generation, these kids that are growing up, are going to be the ones that figure out the best way to think about this. I hope so. Because they're going to grow up with it.

Did you guys hear about the GOODY-2 LLM? We'll wrap it up on that point. What is it? So it's called GOODY-2, billed as the most groundbreaking AI model prioritizing safety and ethics. And it will not answer a single fucking question. You can ask it anything, and it will be like, I'm sorry, as an AI agent, I'm not able to get into that-- no matter what the question is. It was built as a goof, but it's very funny, because it is true. That's exactly what it does. It won't answer a single question you ask it, no matter how safe the question is. Yeah.

And I actually agreed with your point earlier: I don't give a shit if a large language model gives me something that's offensive. If I ask it to tell me a dirty joke, it tells me a dirty joke. I'm an adult. I think we should be able to discern those things. We don't have the proper regulations. We don't have the proper processes for someone to go say, oh, well, I'm going to police this on my app. And it's a futile effort, because, OK, well, now who's policing the police? And who gets to decide? Everyone's always going to have these motivations. But the reality is, as long as we understand-- I think it'd be safer, because we would understand that they are flawed, that they are these imperfect things. If we try to bolster this image that they're infallible and perfect, it's like, no. Just talk about it like it is. With these things, we don't know what's happening.

In 30 years, are we going to look back and say google.com was the greatest thing that ever existed on the internet? Because it pointed to sources. It didn't give you the answer. You absolutely will have large language models that, when you're interacting with them, will give-- Gemini does a decent job where it's trying to give you the sources. But there's almost a lack of incentive in the future to create web content. Yeah. Oh, yeah. Do you know what I mean? For sure.

I would say this to wrap it all up. Let's wrap it up. Let's wrap it all up, guys. There might be a future where we all look back and we're like, remember when Google was just a search engine? Right. How cool was that? Remember when people did websites? How cool was that? The next generation, your kids, and especially your kids' kids, are going to be the people that define that. And in the meantime, we get to sit around as a group of the four of us, with an iterating background week over week, creating a better podcast for all of you, and talk all the bullshit as it continues to unfold. The ethics of AI-- this is the Big Cheese AI Podcast. I'm DeAndre Harakas, joined by Sean Hise, Brandon Corbin, and Jacob Wise. Follow us on Instagram, TikTok, YouTube, and LinkedIn. And don't forget X. We posted one time on X, but you should still fucking follow us.
We constantly keep getting notifications for Elon's posts on our videos. Every time Elon posts now, every single X user is basically getting notified. And the last thing that I'll say is, I know Elon says Tesla doesn't market, but the most brilliant marketing idea was to just buy the entire social media platform. And then your stock skyrockets. So he is the genius after all.