BigCheese AI Podcast Show Notes
Episode: The Rise of Machines and Personal Health
Hosted by: DeAndre Harakas Guests: Sean Hise, Jacob Wise, Brandon Corbin
The podcast discussed the current state of AI in healthcare, focusing on the use (and often the misuse) of AI in wearables and personal health apps. The team shared their concerns about the accuracy of health metrics provided by wearables, the ethical considerations of AI diagnosis without human oversight, and the privacy policies of health apps that may share sensitive data. Brandon Corbin announced the release of the BigCheese AI Daily, an AI-generated podcast providing daily updates on AI news and products.
And welcome back to the Big Cheese AI podcast. I am DeAndre Harakas, the world's second best moderator, joined by Sean Hise, Jacob Wise, and Brandon Corbin. We are the Big Cheese AI team, usual suspects here on the pod. Today, we're talking about the rise of machines and personal health, going through wearables in tech and how AI is being integrated into those, personal health, the data privacy when it comes to these things, and how to navigate that. Seems kind of interesting. I looked at a couple of notes, and there's wild news stories out there right now with UnitedHealth and things like that. Yeah, so we'll kick it off with the worst one. So with UnitedHealth-- and I'm going to butcher the company that they were using-- they basically brought in an AI provider to try to help identify when patients could be released. Because when the beds free up is a very important metric within any hospital, because that's how they have to plan and account for all of it. So they end up bringing in this technology. It's something like-- it's two letters health. It's like NT Health or MP Health or something like that. But the problem was it was wrong 90% of the time. It was inaccurate in predicting when these people should be leaving. And apparently, decisions were made based on this AI about how they're going to pay things through Medicaid and whatnot. So if these people were staying longer, they were getting declined when they shouldn't have been. But apparently, the AI just didn't take into consideration comorbidity and all these other attributes of the patients that are in the hospital. So it was just always wrong. And I think a really important thing to consider is that right now, anything that's going on in AI is in beta. And I think we have to just accept that this is all in beta. We have these ideas. We're like, yeah, we think if we run it through this, this, and this, we're going to get these results.
And it should be good. Then you run it through the real world, just like you guys know. When you release a product for the first time, everybody can go test it at the company. And everybody tests it. It's like, yep, we're all good. Oh, yeah, it's great. And then you release it. Then all of a sudden, oh, well, single sign-on doesn't work for Microsoft people, right? Because, oh, well, we didn't test that one because we're just Google or whatever it is. The same thing's happening with a lot of these AI platforms, where these companies are releasing it. And then the next thing they know, they're like, oh. So when it comes to the health stuff-- and that's why I'm kind of excited to talk about this. And the majority of the things that are going to come from me, I think, will probably be "don't use it yet." Because it's just not there. And when it comes to health and the rules and the regulations and the FDA and all of the laws that are in place, there's a lot of things you have to take into consideration before just running into this. And right now, we've got a handful of news articles about these young bucks who are building these products. They're like, yeah, we're going to be your AI therapist. And they have no idea what the laws or the regulations are or anything. They're just releasing it because they can. And again, I like that. But at the same time, it's going to bite you in the ass when all of a sudden, someone's like, well, my AI told me I had AIDS. And I went, and I don't. And now I'm going to go sue the hell out of them. Yeah, I think the draw for AI in health care is the personalized medicine. Yes. So not having either the bias from a doctor, where they need to sell you these certain medications, or I wear a wearable like Jacob does, and they can look at all my data. And now it's specific to me. Right. So Jacob, I think you were talking a little bit about how it integrates with Copilot. You were doing some testing and things like that.
Yeah, yeah. So I have a Whoop. Shout out to Whoop. And Brandon has an Apple Watch, which has some similar functionality. But they have a partnership with OpenAI. And basically, they put a chatbot inside of the app. And I can ask it questions. I don't use it very often because it seemed a little novel to me. It's just not quite there yet. But you asked your Apple Watch, how did I sleep this week? And what was the response? It was pretty concise. We'll do this. So this is the beginning of Apple's teasing of AI. So right now, it only does it with your sleep. But you can go, how was my sleep this week? About nine hours and four minutes a night. Nine hours and four minutes a night, which, by the way, is ridiculous for me. So you don't have Siri on the Australian accent like I do. No. [LAUGHTER] No. No. You've got to go into your Siri settings and set the accent. Because you can set it to basically any accent you want. That's all. So yours is Australian. Yeah. And everyone's like, oh, man, your Siri sounds awesome. I'm like, yeah, it's an Australian Siri. But-- so it's there. So they're beginning to. And I think in the next release, we'll probably see a lot more. I tried to do the same thing with my resting heart rate. And it didn't do it. But sleep is the one that's kind of teasing it out. But with the Whoop-- Yeah, no. So every morning, I fill out a journal. And I say everything that I want to track, like did I take melatonin last night? And did I work out yesterday? Or stretch, or those kinds of things. Where it's going is more personalized feedback to me. It already tells me, hey, your strain is at this level, or your recovery is at this level, we think you could exert this much strain, which is just their measurement of how hard you can actually work that day. But the sleep thing was interesting because it was very verbose. They're using ChatGPT from OpenAI under the hood. And it was like, on Monday, you got seven hours.
On Tuesday-- so basically, the chart in the app is better than the output that I got. But I could have asked it, summarize my sleep for the week, and then what can I do to sleep better, which I thought was cool. Does it proactively reach out? Or is it-- Yeah. It does. So yeah, they have the chatbot. But they also have all of the notifications every morning that are like, you met this target. And it's super, super helpful of like, OK, I need to work harder at the gym today. Or tonight, I need to make sure I get more sleep. Yesterday, I drank beers. And it was like, you are a piece of shit. So it's great. It's great. How much is it? That's a good question. I think-- It's like $20. Yeah, it's around $20 a month, depending on if you get it for 12 or 18 months or whatever their deal is. And how long have you been doing it? Almost four years now. Oh, wow. Yeah. So there was a study that looked at wearables. And I got this from the show notes, kind of a rabbit hole. It said that the heart rate monitors in wearables can be off by as much as 20%. And the caloric metrics can be off by as much as 100%. And the sleep metrics were found to be significantly off as well. And I think the issue that I have is the variance in the accuracy of the wearables. And specifically-- this isn't in the data, which wearables are which-- but the Apple Watch is not very good at measuring this telemetry. My dad has the Garmin. And it's closer to the Whoop in terms of-- it does a battery system. So it tells you how much you recharged your battery. It has a lot more metrics. It's very, very health-focused. But I have a significant issue if somebody's putting out a device that's being used as input to a model that's telling you what to do, that could be off by as much as 100% for exercise and 20% for heart rate. Yeah. And that's a good point.
So I have a Garmin Instinct. And also, my girlfriend has an Apple Watch that I've tested. And out of the three of them, I just did the old-fashioned method of taking my heart rate. The Whoop was by far the closest to my actual heart rate. But the Garmin was off depending on-- I think it said I had a heart rate of 110 one time. And I was at 70. It was either lagging or-- it's just not quite there yet. But to your point, it is interesting, because how accurate are these things? And they're now telling me what I should be doing. That's the problem I have. So you can go ahead, Brandon. So I've been tracking things now for about nine years. And what I've come to the conclusion of is that I don't necessarily care if it's wrong, as long as it's consistently wrong. So I can see that, oh, here's my-- my charts are going up and down. As long as it's consistently wrong, then that at least gives me a mental model that I can work around. Ideally, it would not be wrong. But if it's inconsistently wrong, that's the worst-case scenario that you could possibly have. Well, I think there's two things that come to mind. One is when you're measuring all this stuff and you're chasing it as a person, I think there's a mental health issue potentially where-- and I actually just polled the office before I left today. And I started asking about wearables. And a significant number of people said they knew someone who has some sort of anxiety or mental health-related issue from chasing the metrics on their wearables. Basically, if their heart rate gets too out of whack or if they don't burn enough calories, they're chasing this number. Yeah, I agree with that. And I think-- I've been wearing this for four years. In the beginning, 100% was trying to hit all my metrics. That's the only thing that mattered to me. Now it's more about-- I switched my goals up. It's just about overall health.
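Brandon's "consistently wrong is workable" point can be made concrete: if a wearable's error is a stable bias rather than random noise, a handful of manual spot-checks is enough to calibrate it. A minimal sketch; the readings, the pairing with a manually counted pulse, and the linear-bias assumption are all illustrative, not from the episode:

```python
# Sketch: correcting a wearable whose heart-rate readings are
# consistently biased, using a few manual pulse counts as ground truth.

def fit_linear_bias(device, manual):
    """Least-squares fit of manual ~= a * device + b."""
    n = len(device)
    mean_d = sum(device) / n
    mean_m = sum(manual) / n
    cov = sum((d - mean_d) * (m - mean_m) for d, m in zip(device, manual))
    var = sum((d - mean_d) ** 2 for d in device)
    a = cov / var          # slope of the bias
    b = mean_m - a * mean_d  # offset of the bias
    return a, b

# Hypothetical paired samples: wearable reading vs. manually counted pulse
device_bpm = [110, 95, 130, 70]
manual_bpm = [70, 61, 82, 46]

a, b = fit_linear_bias(device_bpm, manual_bpm)
corrected = [round(a * d + b) for d in device_bpm]
```

If the fitted line changes every time you recalibrate, the error is inconsistent rather than consistent, which is exactly the worst case described above: no stable correction exists.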
And I much-- I look at the overall targets that I'm trying to hit, resting heart rate, for example. I want to get my resting heart rate down to around 50. It's around 59 right now. So that's just something that I'm watching over months. I don't wake up every morning and think, OK, what's it at today? It's more about-- Well, and we were talking about this the other day, that everybody's resting heart rate was really different, even though they weren't necessarily-- Yeah, definitely don't compare against other people. Compare against yourself. Right, exactly. Right, and I just don't think the context is being provided there. And then Brandon brings up a good point. And I think what you guys are both kind of saying is that after using this stuff for a really long time, you're normalizing it inside of yourself. But I think that not everyone is the same. There's a little bit of worry there, maybe not as much as not having any sort of measurement whatsoever, because I think preventative health management is basically the way to go. But I'm sitting here and I'm going, there is definitely some sort of implication here. When we turn the AI on that's actually reacting to the metrics on your behalf-- Yeah, and I think to your point, just like everything with AI that we're seeing is, it's not the end all, be all. It's just another input to make a decision with. I think the first thing you jump to, even if it's not AI, some new input in your life, you're like, OK, well now this is the thing that I must listen to. It's fact. It's data driven, right? But again, it's data driven, but it's not fact. It's just an input that you need to figure out yourself how valuable is that to you personally. Yeah, and it kind of brings up the whole-- exactly what you were just saying. So it's bad data supporting a large language model that puts out more bad data, which then opens up a whole can of worms in terms of how far along are these actual wearables? 
Are these things-- there's billions of dollars invested in this industry, and we're still off that much? I think that there should absolutely be a nutrition facts label on the back of these wearables that says your heart rate monitor will be up to 80% accurate. It's not about-- Your sleep scores are going to be up to 95% accurate. Because otherwise, you're basically saying, you know-- Fuck it. She burned 300 calories, we think. Yeah. From a regulatory perspective, you would hope they would implement that. But from a company perspective, they want nothing to do with that data. I mean, they want you to think that this is accurate information. So why would you be paying for it in the first place? Well, it's-- go ahead. Sorry, that nutrition facts label is a really interesting idea for these types of things. Because I do think that if you're going to be putting out a device that says, I'm going to measure these various attributes of your person, then we should have a reliability score for its accuracy. And well, OK, because think of a couple of things. A Whoop is not prescribed by a doctor or monitored as a treatment from a physician. OK? So I'm guessing that a Whoop, for example, or a Garmin, or even Apple, is not-- is it treated as PHI? Because is PHI only data that's related to an actual treatment that is being provided by a doctor inside the health care system? Your Apple Health data would be PHI. So it's health information, but the measurement of it isn't regulated like other procedures. Yeah. Like, it's not a procedure. But is it a diagnostic? Yeah. Because people are sharing it with their physicians. Correct. They're exporting it, giving it to their doctors and whatnot. I share it with my brother and other people in the community. Everyone will know how I worked out today and how I slept. You just join communities. And I'm public. I'm just like, here, take my information. Oh, so Whoop. Yeah, through Whoop. Yeah, yeah, yeah.
And I mean, that's actually-- there's nothing inherently wrong with it. But I think the issue is being transparent with people about what is real and what is not. Because that's my biggest issue with AI. And you guys know I say that every single podcast. This is kind of interesting. So to your guys' point, I looked up whether the health information provided by an Apple Watch, whatever, is protected by HIPAA. And it says, it's important to note that HIPAA rules generally do not protect the privacy or security of your health information when it is accessed through or stored on your personal cell phones or tablets. And so your health information is not actually-- Where's that from? So I actually searched this on Microsoft Edge. And then Copilot gave me the answer. Oh, so the AI is telling us. [LAUGHTER] See, but this is just another case in point of my main issue with AI, which is AI cannot be a fact. It is a large language model. I'm guessing that that's probably true. You know what I mean? But when you put the LLM in front of the truth, and you don't source the truth, and you're given information that is, let's say, to be used in a court of law to prosecute someone, or to make a fucking procedure code linked to a diagnosis on a chart, that stuff matters. And with that said, how many times in the past year have I sat and listened to someone that's going through an issue with their health? And they'll go to one doctor. They'll say something. They go to the next doctor. The doctor says something completely different. Yeah. Again, I think it's a good point, what you're saying. Don't treat it like the absolute truth. And the exciting part for me is the same thing I'm excited about AI for everything else, which is: go to my doctor. He has all my records, or her. She has all my records. We're equal opportunity. There's female doctors. There's a lot of them. Anyway, cut that. Cut it. [LAUGHTER] You're getting so-- there are lots of doctors. OK. Moving on.
But yeah, I want my-- What if it's not a him or a her? Oh my god. I want my doctor to be able-- wouldn't it be cool if they are the domain-level expert? Just type in, this is what Jacob has. It's chlamydia again. And-- [LAUGHTER] And like-- Dig that hole. They can see all of my data, or all the things that I've had before, or whatever, and then make a better, more informed decision of like, oh, this is what I might prescribe him knowing x, y, and z. There was that big push for all the health data to be consolidated. Well, the next step would be-- first of all, did that work? I have no idea. My doctor never seems to know. But if it were consolidated in a place where they could actually leverage it, then they could ask appropriate questions using these large language models that are attuned to that diagnostic tool, and then use their professional opinion to do it. You bring up probably the best point of anything, which is, who should own your health information? Probably you. Yeah, I think you absolutely should. Honestly, for me, if they could just figure out how to make it more accurate, I'd rather trust that to my device and Apple than I would to UnitedHealthcare. Oh, 100%. Are you kidding me? I worked in the health insurance industry for three years. It is the worst technology in America. OK? You're talking about claim systems that are still running on mainframes. Yeah, 100%. Right? They're not even relational. Yeah, there's a little-- yes. So anyways, I think that's pretty cool. I mean, yeah, overall, you've got to think-- you've been tracking your health data for four years, nine years. I've been doing it for about half a year. I held off on an Apple Watch for the reason of not wanting to chase metrics and constantly be connected. Now, I don't anymore. So once I decided that I'm shutting down Nomi, or that I'm passing Nomi on to another maintainer, I pretty much don't track anything. Now, I mean, I track it, but I don't reference it.
My sleep's tracked through a Withings mat in my bed. So obviously, Apple's tracking everything. But I don't really go back. But before, I would. I mean, I was completely anal-retentive about my data. I'd wake up. I'd be delving in. Where's my mood? What's going on? I had this full coverage of everything that was happening in my life. And a lot of users would report that, too, where they would almost get overwhelmed, where they would get to a point where they're like, I want to track everything. And then they get burnt out trying to track everything. And there's not-- tracking every time you pee, it's fine. It shows you where you are from a dehydration standpoint or what's going on. But it's kind of like, do you really need it? Are you actually going to do it? So there's always a lot of conversation on our subreddit about basically trying to find that right balance of how much you're tracking, what's really important, what's relevant to you that can actually help your life. You think from like a-- sorry, go ahead. So stupid. You know those cameras, the 360 cameras that they have now? The problem I have with tracking stuff is it's not automatic. And I just want one of those cameras that's like-- Just follow you around. It's like, oh, he's peeing. He's peeing again. Yeah, that was-- But-- oh, shit. Oh! Brandon hits things, episode 27. But so where I-- everybody, when they started using Nomi, a lot of them were like, I want to track automatically. I want to track automatically. And I always had this kind of guttural reaction to it where I'm like, first, I think you need to not track automatically. Because once it's set up automatically, you're not referring to it. And in this case, Nomi didn't necessarily have some of that proactive reaching out and being like, hey, you're-- so it was more of like, just get into the rhythm of tracking. And so I would always say, just track every time you pee.
Because that helps you get into the habit of just-- you open the app, you hit the pee button, you close it, you're on your way. That's it. That's all it was. But then you get into the habit-- Hey, Siri, I'm currently peeing. No, so you do-- so I did. I would track. I could literally be like, hey, track value. And it'll be like, what do you want to track? And I'll be like, water. Stop. Water. And then it'll ask how much, and I could put how much water I had. That's cool. Right. I think you bring up a good point as we move on to talking about the mental health side of things, too, mental wellness. Typically, most apps have to frame it that way, because you don't want to be regulated by any actual health standards. You just want to call it wellness and then be able to do whatever you want and click a box. We've been there. And that-- what was I going to say? I forgot what I was going to say. Mental health. So this is when I walked in and I was freaking out. This is what I was thinking about. And I think that we have a problem, big problem. Oh, yeah. Actually, I just remembered what I was going to say. OK, go ahead. To be continued. What I was going to say was, for my technology and building an actual platform that works for mental wellness, my mind kept going to-- before we started the podcast-- OK, we need to make it automated. I was like, why do we keep-- I'm looking at all these apps and different apps that are made, and you have to go in and log it. I'm thinking, how do we automate that? But I think the opposite is actually true, and that's why I agree with what you just said, in that what's the actual end result we want for the user? The end result we want for the user is for them to be cognizant of what's going on in their mind. They've got to be mindful of their body. The vehicle for that is the mobile application and the repetitive action of thinking about that whenever you click the button. So automating it completely removes the whole point.
It does make it easier, but then it doesn't happen. And so I had some that were automated. Again, the sleep's automated. That gets automatically pulled into Nomi every day. So I can kind of see some of the automated versus not. But for mood and some of those things, again, you're just not going to have an AI that's going to be able to tell you what your internal thoughts are. And journaling is important. Actually, Whoop does that too, where a lot of things are automated, because I don't want to manually track my heart rate. It's been two minutes. What's your heart rate? OK, hold on. [LAUGHTER] But every morning, I do a journal entry, and it is a good mindful activity. It's like, oh, yeah, yesterday I did stretch, or I did this, or whatever. Yeah, mindfulness is huge. I think just being conscious of things is very important. But back to your [INAUDIBLE] Oh, you ready for the juice? So health care, OK. So sticking on health care real quick: there's a very dark future in all this because of the way the American health care system works. The American health care system is based off of-- and I've already mentioned this in this episode-- diagnosis and procedure. So you match a diagnosis to a procedure. Those procedures have to link up in a database that says, oh, yeah, you can bill insurance for that. And the other side of our health care system is that-- so basically, you pay for volume of service. The more the doctor does, the more procedures that are provided, the more they get reimbursed from insurance. So there's an incentive to provide services. Well, the other problem that we have in the health care system in the United States is that if you want to be a doctor, you have to go to school for, what, eight, nine years and rack up hundreds and hundreds of thousands of dollars in debt. It's really hard. And it's not accessible.
So you have a disincentive to create doctors and nurses-- nurses probably to a lesser extent. And so the supply of health care providers is low. And the demand from a procedure perspective and from our culture-- and maybe these wellness devices will make that better. And they probably have, because we're doing more proactive wellness stuff. But there's not enough doctors. And the insurance engines, these are Fortune 30 companies that need to make money. What are they going to do? Well, you know what they're going to fucking do. They're going to hire AI and start billing insurance with AI. That's what's going to happen. They're going to start servicing actual prescription work, meaning: this has a need, I'm going to send this through insurance, and the provider is going to be artificial intelligence. And the first place it's going to come into play is mental health. You're going to start seeing companies bill insurance for mental health services and other services that are not being provided by humans. And there's going to be some sort of oversight or review of that from a human in the background, so they can turn one doctor into 10. Well, I wonder, though, if the companies that do that, or a platform that you can charge against insurance-- I wonder if those are going to have to go through an FDA approval. Sure. And so we can get to a point where software can be run through an FDA process. But once it's done through that process, it can't change. It can't be drifting. You can't retrain it. I mean, it's solidified. And the moment that you would modify anything about it, you're going to have to go back through that FDA process. So I bet we will start seeing the rise of software that has gone through the process, that does have the efficacy, that says, yes, this works. We've got the data to back it up. But I hope-- That's the doom and gloom. I think the reality might be somewhere in the middle.
And we talked about this yesterday at the event, which was augmentation of really smart people with AI. So I think that regardless, what you're going to have is data that's input. Let's say it's all the readings that come out. And instead of the doctor going through the chart going, dun, dun, dun, dun, you're going, probable fracture of tibia with hematoma-- you know what I mean, like a suggested diagnosis. That was that click-baity article we saw the other day that said an AI nurse, I guess, was 70% better than real human nurses. And they cherry-picked basically all the analytical stuff, like reading a chart and comparing dosages and prescriptions. Is there a conflict with another prescription? Well, yeah, of course a computer is going to be better at that. But that's a good use case for that. And that's what we have to talk about with AI, which is: AI is good at summarizing information. Yeah, and you know what the nurse is good at. They're good at bedside manner and making sure you understand and are comfortable with the information, and they can communicate the actual knowledge, because that's what they're experts in. And they can communicate that to people so that they can actually take that information in, rather than, like-- I don't want an LLM to tell me what to do next. It brings up the ethical side of it, right? Because it's the doom and gloom. But then you also think, well, that opens up an entire industry, where now you can do a one-to-many health care plan kind of thing. So let's imagine there's an app that is a mental wellness app for schools, and it allows every student to log their mental wellness. And if they hit a certain threshold, they could actually get prescribed through a doctor. But you could have five doctors that were in charge of a 20,000-kid school. Wouldn't that reduce the cost dramatically? Well, it also would be-- Is that what you were saying? It would be. Would that enable that possibility?
Yeah, because think about it from a business perspective. Why does the insurance company want to-- this is why it's probably going to happen. The insurance company wants to make money. If they can create an employee they don't have to pay that's doing the procedure, then they're making a fuck ton of money, right? Because they're going to bill insurance for the procedure at the price that the insurance company and the hospital have agreed to, right? They're going to bill that price. And maybe they go, oh, we'll give you 15% off for AI, but they're still going to get their money. They're going to get all that money. Yeah. Right? I mean, sure, they're going to have software licensing fees for whoever they're paying for the AI. But they're going to be able to make more money. And as we know, the way that Wall Street works once you go public is that they're figuring out a way for you to make more profit. And a lot of times, the way to make more profit is to spend less money and to cut things, right? And if you can cut spend and add revenue, that is going to get more approval. So all you insurance companies, you can just steal my idea and go make a billion dollars. But I'm sure that's in the works. Yeah, this is a little off topic, but we were at a conference yesterday and there was a chart on productivity. And during COVID, productivity went down because no one worked. But it went way up once everybody had to work remote, right? And then it stayed up pretty much. It leveled out a little bit. And then with the advent of AI it went up again, and it's leveled out, right? So productivity is going up. You know what I mean? We're making more money per person. It's just-- I want a little cut of that, if you wouldn't mind. Yeah, no, that-- Well, no, slow your roll there, buddy. We've got expenses. No, there's no doubt. You cannot argue: the number one use case for AI is to increase productivity of individual contributors. That's the best way to use AI right now.
I mean, all the different things that you can do in your day-to-day to do more with less. That's AI. Yeah, and of course, you have to make sure that the higher-ups don't look at AI as, oh, I can replace everybody. I mean, they have to be looking at-- they probably won't initially, and that'll burn some bridges, and people will get pissed off and switch careers. But they have to look at it as, OK, how can I make my best employees better and my mid employees less mid, and continue down that path of getting more for less. But I do think-- so all companies have attrition, right? And they're always looking to get rid of that bottom 20. Now, before, you'd get rid of that bottom 20, and hopefully, you would bring in a couple top-line people to bring it-- No, I agree. I know, it's tough. I think we're going to get to a point where, again, we're going to start setting up more and more autonomous agents that are just running on standard operating procedures, for the people who just sit there literally checking boxes on their thing. Those people-- Productivity and augmenting-- --are going to get fired. --and enhancing productivity of existing worker bees is also a potential Trojan horse in the long term. Oh, yeah, just keep-- new feature, new feature. All right, now, what is there left for me to do here? But I go back to-- yeah, I go back to: if you're a highly skilled, highly capable person, there's always going to be a place for you. 100%. Yeah, so go build the automation. But if you're not-- If you're not, I'm sorry. And that's why my kid's going to be a plumber. He's going to be welding. Yeah, you just simply have to be able to compete. The days are probably gone, in the next x amount of years, in the near future, where you can just do a job kind of stagnant.
Yeah, and just rest on those laurels and be like-- If you're not growing, you're dying in a world where AI can just literally be the-- Replace you. Yeah. If you're just going to stand still, it will be better than you. If you're growing and learning, you will grow and learn better than it. Because it is a byproduct of what you're learning and growing in. And the people that delve into it realize what it can do for you. It should hopefully be able to augment you so you do have more like superpowers. But at the same time, if you're just a schlub and you're sitting there just like, be, be, be, be, be, you know, I think it's going to be real easy for those folks to get fired. So did you guys see the UNA thing in the show notes? The app? Which one was it? I saw you post it. I'm going to pull it up. So UNA-- Yeah. Oh, our TV is gone. So understanding UNA, a science-backed therapeutic method. Meet UNA, the innovative coaching tool that makes use of cognitive behavioral therapy patterns in its advanced AI system. It's designed to provide robust support for your personal growth journey, aiding you in achieving life goals and boosting self-confidence. It's an AI you talk to that coaches you, mentally coaches you. Did you sign up for it? No, but you literally talk to an AI, and it tells you what to do. That's awesome. I'm looking at these companies, right? And I'm thinking, OK, this kind of product-- it just seems like, and this keeps kind of being confirmed, but I'm curious about your guys' perspective-- anyone can make these kinds of products. It seems like with the advent of AI and all these software products coming out, there could be 12 UNAs. What separates them? If they're all leveraging AI, I mean, is there any-- So-- Where's the moat in these things? With UNA-- and I think UNA did it where they have-- they actually have doctors, right? So they can ground, hypothetically, their AI on professionals that have access, that have knowledge. 
But that's an example of scaling out a health care workforce with AI agents. Yeah. What's that therapist platform? Therapeanie? BetterHelp? No. TheraProof. TheraProof. That's what that is. That's who gave me this shirt. And congratulations to TheraProof, by the way. Yeah, they're now a portfolio company of Elevate, right? Yeah, that's right. And they won that-- Crossroads pitch. That's amazing. Tiffany's just crushing it over there. Doing a good job. No, I was thinking about the one that helps you. They have a really good UI/UX designer. Oh, yes. Yeah, I heard he's pretty good. I build rough ideas, is my new slogan. Yeah. It was just so rough. No, you're good. I can't remember the name of it. But anyway, you go on and you look for a therapist, or a life coach, or something like that. And-- BetterHelp? BetterHelp. Yeah. Yeah. Yeah, that's all over the radio. Certainly, they've-- we should be sponsored by them. I feel like every podcast is, or whatever. But can we get on that? We could, except for I'm going to constantly tell them how bad their privacy policy is. OK. They probably won't want to be part of it. Well, OK. Well, then my-- If you want to read the most horrifying privacy policy, go read BetterHelp's privacy policy. And you're going to realize that everything that you do there and involve in-- They just did it. --very well could be sold to third parties for advertising and marketing. I saw that recently in a privacy policy, where they were like, you can opt out of us selling your shit. I'm like, no. Send an email to them. Yeah, and if you say no, they're still going to sell your shit. Yeah. So before you delve into it, anybody who's thinking about using a product for monitoring, managing, tracking any of your health care stuff, I know you don't want to do it. Go read the frigging privacy policy. Copy and paste it into ChatGPT, and tell it to-- Tell me anything that's concerning about this privacy policy. Paste it in. 
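Brandon's tip — paste the policy into a chat model and ask what's concerning — really just comes down to a prompt. Here's a minimal sketch of building that prompt; the exact wording and the list of red-flag areas are my own additions, and you'd send the resulting messages to whichever chat API you actually use:

```python
def build_policy_review_prompt(policy_text: str) -> list[dict]:
    """Build chat messages asking an LLM to flag concerning privacy-policy terms.

    The focus areas below are illustrative, not an official checklist.
    """
    instruction = (
        "Tell me anything that is concerning about this privacy policy, "
        "especially data selling, sharing with third parties, advertising use, "
        "retention after account deletion, and how to opt out."
    )
    return [
        {"role": "system", "content": "You are a careful privacy-policy reviewer."},
        {"role": "user", "content": f"{instruction}\n\n---\n{policy_text}"},
    ]
```

You'd hand these messages to ChatGPT, Claude, or any chat endpoint that takes role/content messages; for very long policies, a large-context model (Brandon mentions using Claude for exactly this reason) saves you from chunking the text.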
I use Claude for that, because a lot of them are pretty big. But nevertheless-- We got our Claude drop of the day, baby. Shout out Claude. Yeah. So I was using Gemini the other day. So I did see a clickbait headline that said, Claude 3 passes GPT-4. Yeah, Opus, like Claude 3 Opus, not Sonnet. Because Sonnet's good. What about the Fleer Metal version? Which one? That's for all you 1990s basketball card collectors. Yeah. No, what I was going to get to is, the BetterHelps of the world are the ones who will probably really, really shine with these types of products, because it'd be their funnel to like, OK, now we have a service that we can provide you. So they sell you the onboarding. You're paying for the onboarding experience that collects your data over time. That answers Andre's question. Because that was my-- What separates the AI companies from the other AI companies in a world where they're all using the same LLM is-- Can they service it. --the positioning that they came from. Yeah. Why is Grammarly running commercials during the NCAA tournament? Why? Well, Grammarly is already in your browser. And already helping you write emails. Well, now they're going to be the AI email writer. Well, duh. You already had it installed. They already had a huge customer base and the amount of data that they could either train on their own or just say, you know what, I'm just going to bolt on this. Some winners are coming out of this AI race just because of where they already started. Oh, yeah. Yeah, that's what-- I think we said this a couple of times. I agree with you. I think you either needed to have an existing customer base to then add AI on top of, or you take a page out of the book of the Grammarlys and the BetterHelps: it's distribution. The thing that makes you win from a startup perspective in mid-stage to growth stage is, are you distributing your product better than the other people? Or else you're no different than everybody else. 
So I do think, though, that where everybody's going and playing with the standard-- we're using ChatGPT, or we're using Claude, we're using Gemini, we're using whatever-- that I think we will get to a point where some of these companies are going to release models that are highly, highly specific. And these general models are-- I mean, they're fine. They're general models. They can tell you a story about a dog and a cat, whatever. But they're just not as good when it really comes down to some of that fine-tuned detail. Well, that goes back to-- we talked about Builder.io, which they are using a general model somewhere in their pipeline, but they're using some very, very nitty-gritty specific models. He always talks about that. He always talks about using specific models and how that's more beneficial. Yeah. Who's he? Builder is basically this one savant developer type. Oh, really? Yeah. Builder.io is just one nerd that just-- I mean, I think that there is a company behind it, but every single YouTube video, every single commit-- It's a pretty big company, but yes, the founder is super, super technical, and he's in the weeds with building their shit. But no, yeah, something you said I was going to-- oh, oh, yeah. I think it was just the specific models. And that's how these companies are going to win in the long run, is they build those specific models and they break down the task or the problem into much smaller tasks. So today I heard-- and I won't tell you who or whatever-- but this idea that we will eventually have some models that do get FDA approved, and that those models will be able to be directly connected to specific medicines. They will be highly trained on those medicines, on its issues, and that when you get this medicine-- If they can prove it with science, that they can beat a human at their task. You embed that micro ML, whatever you want to call it, into the product itself, just like you could do RFID or whatever. 
And then it can say, what questions do you have about this specific thing? So there was a good tweet the other day. Oh, by the way, that's going to happen. Oh, it was from the guy who built Clearbit and is now building Reflect. Oh, yeah, Jonathan. Alex McCaw. Alex-- Jonathan. Jonathan Alex. One of those guys. One of those guys. Clearbit? Was it in Fishers, right? No. Clearbit is-- I'm thinking a clear object. Yeah. So Alex McCaw. I've always wanted to use Clearbit. He's a technical co-founder. He's a technical founder, Silicon Valley savant boy. Lives in New Zealand now, I think. But I always follow him. He's one of those people that-- he's like if Jeb Banner had started five $1 billion companies. OK. So really just a cool guy. Cool dude, smart, great developer. I mean, Jeb wasn't a developer, but very smart with product and marketing. But quits the company and goes and starts another one right when it gets to the point where they go to-- HubSpot. Yeah, when they get to that level where they're just churn, churn, churn, customers, customers, customers. But he was talking about fine-tuned models. And I guess there's companies that are being created that are literally just creating fine-tuned models, and that's their product. And he was like, why would you invest in that company? It's going to get eaten by the bigger companies. And based off what you just said, I was like, well, maybe the play isn't go invest in a company that's building a fine-tuned model. Invest in the potential for them to create a product that's using that fine-tuned model. And that could actually be something that they can maybe even keep proprietary, sell into more specialized areas. Right, yeah, 100%. Yeah, that's where we're at. There were a couple, though, that we had through the-- there's two things that we need to cover. First will be the news that we've got. But then also we should talk about the daily. See if we can get a couple subscribers. It's super rough at this first pass. 
We'll get to that in a second. We've got some insane news dropping on this podcast, and you are not going to believe it. So just stay tuned. So leaplife.app was one that literally came out today that was on the weekly news, the Big Cheese Weekly. And this one's actually-- because, again, I have very strong opinions on anybody who's building anything of life tracking where the privacy is a matter. Right? And I had multiple people offer to buy Nomi from me just so they could get access to the data. And I'm like, nope, no interest. Right? None. You can't have this information because the data is just way too valuable. So leaplife.app has kind of a similar approach. They take a chat interface where they're asking you questions, and you're just kind of having this conversation about what's going on. But they're highly-- it's all private. So it's first stored locally on your device. Then if it is ever synced to their cloud, it's encrypted locally first. So it's end-to-end encryption. They can't actually read it. This one is one that-- and I just started playing around with it today for a little bit. And it's interesting. Right? It does seem like it does a pretty good job. But for me, the absolute priority is always going to be the privacy and the information that you put up there. Another one that we had was a DoctorGPT, which was-- I saw that. --which is a GPT, like literally a GPT that somebody built. They threw a bunch of stuff in there. So as you're going and chatting with this, just understand that you're going to be sending all of your information to OpenAI. And like-- So is it just an inevitability that no matter what's going on, people are just going to start using AI tools on their own to do whatever they want that may or may not be-- you know, they're going to say, oh, I don't need a doctor. I'm going to use a GPT. Oh, I-- no, absolutely. I don't need to-- I don't need to-- Doctors are going to hate it, because they're going to come in. 
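The design Brandon describes for leaplife.app — entries encrypted on the device before they ever sync, so the server only stores ciphertext it can't read — is the standard end-to-end pattern. A toy sketch of the idea, using a SHA-256 keystream with an HMAC integrity tag (this is illustrative only; a real app would use a vetted primitive like AES-GCM or libsodium, and the key would be derived on-device and never uploaded):

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key and a per-message nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt locally; only this blob would ever be synced to the cloud."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # detect tampering
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then recover the plaintext on the device."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext was tampered with")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

The point of the pattern is exactly what Brandon says: because encryption happens before sync and the key stays with the user, the provider "can't actually read it" even if their servers are breached or their data is sold.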
And I'm going to be like, well, listen, this GPT told me that I've got gonorrhea, because it burns when I pee. Well, so it's like the WebMD self-- my mom does that all the time. Any time she's sick, she's like, well, I spent all night researching my disease that I have. Which is the worst-- the absolute worst thing you can do. Well, here's the thing. You can-- I've got 30 minutes to live. You-- yeah, yeah. You should use these tools to be better informed when you go to the doctor. But you have to understand that they're still the expert. I had a guy-- I won't call him out specifically. He's in IT, and we were going back and forth on how to do something. I was coding it. He was implementing it. And he was like, well, I went to ChatGPT, and here's where I think your bug is. I was like, oh, thank you for that. It's wrong. But I do appreciate that you sent that to me so confidently. And he was insisting that I try this. I'm like, all right, asshole. That is the 2024 version of fuck you. Yeah. What did you say his name was again? I forget. No comment. No comment. All right, well, we got to tell everybody what you did, Brandon. Yeah. So Dylan with Maverick Marketing, who's been so kind in helping us record all of our stuff here, last week when we had our podcast was like, hey, Manchester United has a completely AI-generated daily podcast that goes out. I think that that would be a really interesting concept. And so of course, this weekend, I'm just like, all the ants dancing around in my brain. I'm like, I can't stop thinking about this. So I'm like, I think we could do this. I think it would be fairly easy to generate a daily podcast that goes through the latest news, goes through the latest product launches that the weekly is pulling in. And yep, it's out there. So now you can search for the Big Cheese AI Daily on Apple and Spotify and some other place that I put it. So did you automate the publishing process too? 100%. So the whole thing. 
So eventually, I won't even have to do anything. You are a straight G, bro. So now every morning, I wake up, and I just have to run one script. And it goes through, and it gets all the headlines. It then just starts running through, and it does the intro. It does the news intro. It does the news. It does the product intro. Does the product listing, and then it does an outro. Yeah. First of all, it's an amazing thing that you did in a weekend. I know you put a lot of time into it. Manically, just like-- If you have any interest in automating information and things like that, go check it out. And just for the fact that it's so good that-- and this could be applied to any industry too. We just did it because you had already built a feed that was like-- We officially automated ourselves. So this was to be our last show. So at the end, though, it does say-- so at the beginning and at the end, it does say, hey, listen. This is AI generated. This isn't an endorsement by Big Cheese. It's just randomly picked news. Because again, when the AI is doing it, it can be like, this is the greatest thing since sliced bread. And it's going to talk about some shitty AI platform. But that we have this-- oh, and then at the very end, it's like, if you'd like more of a human touch, make sure to check out our weekly podcast at bigcheese.com-- or bigcheese.ai, or follow us on YouTube. So I do think that it could be a very useful tool for anybody who just needs to have content continually pushed out there that hopefully drives them to other places. There's a podcast called Snacks by Robinhood. I don't know if you ever listened to it. But there's a-- I'd say a large percentage of maybe more nuanced content consumers that really like the daily concept. That's why those newsletters that people put out, like TLDR and the plethora of AI newsletters that are out there, are really important. Snacks also has a newsletter. 
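For anyone tempted to try the same trick, the shape of Brandon's pipeline — pull the headlines, generate segmented narration (intro, news, products, outro, with the AI-generated disclaimer at both ends), then hand it to text-to-speech and publish — can be sketched. The function names and exact wording are my own guesses at the structure he describes, not his actual script; the URL-stripping pass reflects his fix for the voice trying to read "HTTPS" aloud:

```python
import re

def clean_for_tts(text: str) -> str:
    """Drop raw URLs so the text-to-speech voice doesn't try to read them out."""
    return re.sub(r"https?://\S+", "", text).strip()

def build_daily_script(news: list[str], products: list[str]) -> list[str]:
    """Assemble narration segments for an AI-generated daily episode."""
    segments = [
        "Heads up: this episode is AI generated and is not an endorsement by Big Cheese."
    ]
    segments.append("Welcome to the Big Cheese AI Daily. Here is today's news.")
    segments += [clean_for_tts(h) for h in news]       # news segment
    segments.append("Now for the latest product launches.")
    segments += [clean_for_tts(p) for p in products]   # product segment
    segments.append(
        "If you'd like more of a human touch, check out our weekly "
        "podcast at bigcheese.ai."
    )
    return segments
```

Each segment would then be run through a TTS voice, concatenated into one audio file, and pushed to the podcast feed — the part of the chain Brandon says he automated end to end.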
But it's just that ability to get a quick daily rundown of what's going on, and get links to stuff, and get ideas for stuff. But that's what I like about this daily podcast. 11 to 15 minutes is usually about the right length. It's something you can listen to on the way to work and get what you need. I listened to the episode you posted this weekend. And I was out and about with my kids. And I wanted to sit there and keep listening to it. It was just good info. And as you roll these things out, there's things you're going to stumble onto that you're like, oh, yeah. The city was like, it's in New York, and why? I did see that. Or it would try to say-- so before, for products, I'm like, I want you to-- if you need to mention more information about the product, here's the URL. And I just output the URL. And it's like, and you can learn more at https://blahblahblahblah.com. It was trying to say the HTTPS. So I ended up having to go and remove that. So there's just this whole fine-tuning that you've got to go through. But the real test is we need Dylan from Maverick Marketing to test our podcast versus the Manchester podcast and see-- Except he did start it, but he listens to it at like four times speed. I couldn't even tell. It was like Chip and Dale. Like, blahblahblahblahblah. Yeah. There we go. But that's it. Big Cheese AI Daily. Check it out. Sick. Thank you so much for tuning in. See you guys next week.