In this episode of The Big Cheese AI Podcast, we muse about whether future AI could rescue a lost recording, discuss the integration of AI into enterprise processes, and cover industry news such as the Taylor Swift deepfakes and the launch of Liminal, a privacy-focused AI company from High Alpha. Alongside exploring the challenges of enterprise data and AI readiness, listen in as our guest Aubrey Annan shares his insights on enterprise data and AI.
And welcome back to the Big Cheese AI podcast. We haven't posted in a week because the 15th pod got away. We've been recording on OBS. If you've ever recorded on OBS, you record everything onto the same exact stream. And we had a pod that we completely lost because the audio was messed up. That's all right. We are back with podcast-- what is it, 16 now? 15? We're going to call it 16. We're going to call it podcast 16. We have to pour one out for our boy, Hickle. Yeah, Hickle, Tim, we'll get you back on. Eventually, there's going to be an AI that will be able to correct the mistake that we made. Yes. And maybe we'll release that. It was pretty bad, though. It was really bad. It was pretty Hickle-y. I put it through this wave-fix thing, and it took like two hours on my phone. Yeah, that's right. And then I got done, and it wasn't any better. It wasn't? No. Oh, Jesus.

Welcome back to the Big Cheese AI podcast. I am DeAndre Herakos, joined by Sean Hyes, Brandon Corbin, and Aubrey Annan. Annan, that's right. Welcome. Thank you. How are you doing, man? I am doing well. Thank you. Good. We just met Aubrey for drinks like a week ago, and we were like, you want to be on our podcast? He does some really cool stuff. He was like, I'd love to. So now he's on the Big Cheese pod. We're talking about some pretty big stuff today, enterprise data and AI. Brandon has some big background in that. Yeah, I mean, we have two really good enterprise-- I mean, enterprise. Enterprise. Enterprise, and especially AI with you, and then data. We got the data man. We got the data man.

Yeah, we're going to talk through a few things. Obviously, get to know Aubrey a lot. Sean and Aubrey have a ton of background together, but there are some news items we'd like to talk about at some point. Taylor Swift deepfakes. If you've ever been on Twitter before, you know things can get a little feisty, and it got feisty with Taylor. So you can go figure that out on your own, but we will talk about it. Microsoft came out with something to kind of adjust for that, so we'll talk to that as well. And then we've got a deepfake of George Carlin, who is one of my favorite comedians of all time. If you guys haven't listened to his stuff and you like conspiracy theories, he is the guy to go listen to. And then High Alpha created yet another company, and that company is a competitor to something that maybe, kind of?

So the company is called Liminal, and Liminal just launched. And their approach is basically helping enterprises be able to bring large language models and AI into it safely. They've got a very interesting approach, so I actually got a demo of it this week. Nice. And-- And High Alpha is the biggest, most influential VC in tech in Indianapolis. That's right. What are they called? And in almost the middle of us. Not just a VC, but it's-- A studio, yeah, a venture studio. Venture studio. Sorry, I'm sure I just pissed three people off at High Alpha. So High Alpha, though, they continually are just rolling out these badass companies. And so Liminal, L-I-M-I-N-A-L, I think is how it's spelled. So Liminal, we'll call it Liminal. But they've got a very interesting approach where they're basically kind of doing a layer on top. So if you go to ChatGPT and you start interacting with it, they're actually kind of capturing the text fields. And that's where they're doing their privacy, and they're layering on top of the-- they have their own chat interface. But then if you're using Copilot on your desktop or whatnot, they can kind of overlay the text areas.
And that's where they're really capturing the privacy. You're not allowed to say this because that's part of our no-no words that you're allowed to send to an AI. So they're doing privacy at that level, which is kind of an interesting approach. Are they doing it at the OS level or the browser level? Both, both OS. And so they're recommending that if a company is going to go with them, redirect any request to chat.openai.com to Liminal. And so then they will basically do the scrubbing before it's ever sent to ChatGPT. They're doing something similar to when we built the original Big Cheese kind of chat interface for privacy, which is you basically take the prompt, you extract anything that's PII, and you replace it with something that's kind of close. So in their case, it's like if you say, I'm in Grand Rapids, Michigan, it's going to be like, I'm in city_Michigan. So when it goes to ChatGPT, it's no longer like Grand Rapids. So they've got a very interesting kind of approach of just doing the swapping there. They don't have their own large language models. They don't have their own-- they're basically just kind of sitting on top, waiting for it to send it to ChatGPT, comes back, and they rehydrate city_Michigan with Grand Rapids. So from a user perspective, it's completely-- yeah, it's transparent. But that information never actually goes to ChatGPT.

So their tech exists at a networking level. Exactly. Yeah. It's not a keystroke logger. It's just intercepting network requests. Well, kind of. So their example was, if you're using Copilot, then you do have a chat box of your app that's sitting there in your desktop. And they've got some daemon that can run on Windows and Mac that basically just be like, oh, hey, we found this text box for whatever the hell it's called. And we're now going to control that and basically steal the thunder from it. So it's an interesting approach to basically make it so you can then just comfortably use AI without having to worry about all of the intricacies of it. We'll see how it works.
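For anyone curious what that redact-and-rehydrate round trip looks like in practice, here is a minimal sketch. It assumes a toy regex-and-dictionary detector and a made-up placeholder scheme; Liminal's actual detection logic isn't public, so this only illustrates the pattern described above.

```python
import re

# Naive PII patterns for illustration only; a real product would use a
# proper NER model and a much larger dictionary.
PII_PATTERNS = {
    "city": re.compile(r"\b(Grand Rapids|Indianapolis|Chicago)\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with stable placeholders and remember the mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            placeholder = f"{label}_{i}"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def rehydrate(response: str, mapping: dict[str, str]) -> str:
    """Swap the placeholders in the model's response back to the real values."""
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return response

# Round trip: the hosted model only ever sees the placeholder.
scrubbed, mapping = redact("I'm in Grand Rapids, Michigan. Plan a weekend.")
# scrubbed == "I'm in city_0, Michigan. Plan a weekend."
# ... send `scrubbed` to the hosted model, get `reply` back ...
reply = "Here are some things to do in city_0 this weekend."
print(rehydrate(reply, mapping))  # the user sees "Grand Rapids" again
```

The point of the sketch is that nothing clever happens on the model side; the privacy layer is just a reversible substitution wrapped around the API call.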
Again, beautiful. Everything that comes out of High Alpha's design and their branding is always on point. Their brand is just perfect. Their designs are perfect. Their video is beautiful. They're going to obviously bring on some of the serious heavy hitters to build out the business. So I think it's absolutely a real business that's going to steal some business from other folks. Yeah, I agree completely. High Alpha does some amazing-- Beautiful. Beautiful. That promotion video they came out with is incredible. It all comes from Christian Anderson. And sometime I'll release it. I've got an interview with Christian Anderson that I did for my original podcast about 12 years ago when it was Christian Anderson and Associates. And maybe I'll release it. That was one of the first agencies I was like, oh, man, that's my goals. Dude, yeah. Christian, he's brilliant. His design stuff's amazing.

Hey, we've got to pour one out for Jacob. He's in New Zealand. Still. But he would love to talk about Gemini Ultra. Oh, yeah. I don't-- I haven't-- obviously, Jacob's the other guy on our podcast. And he's all about Bard, which is getting renamed to Gemini. Which-- But has anyone tried the updated stuff? No. So yeah, I have yet-- I mean, I've used Bard. But I don't use it in any consistent manner. But it makes sense that they're-- so basically, Google's going to be releasing Gemini Ultra, their big model. That's going to be available in Bard now.

And so you'll be able to have, I guess, deeper context? I don't know. I've started to think that we're totally underestimating Google. Because-- go ahead, Dog. Yeah, I think, one, wasn't there some-- like, they did a launch maybe, like, what, two, three months ago, where they demoed this whole thing? Oh, yeah, yeah. And it was-- And it was like scripted. It was all bullshit. It was the typical-- it was the typical-- and I catch-- I catch-- Right. I catch founders doing this all the time. They're like, here's our demo. And they show you, like, the login and the CRUD. And then they go over to Figma. And they show you, like, the actual chart and, like, the cool shit. And I'm like-- I actually-- I was actually at a demo at High Alpha. It was a company that was really awesome. And I was like, right after the founder, I was like, you guys are really far. But that last slide, that was Figma. [LAUGHTER]

But yeah, yeah, Google got-- and you know, that PR stuff really hurts you. But I was thinking about this, because I feel like, first of all, ChatGPT has been a little lazy. And they even had to come out and say some stuff about it. But, like, the experience hasn't been as top notch as it-- Oh, no. No, it hasn't been very good. And you think about these guys that are trying to create a software platform that can scale to the likes of Microsoft and Google. Obviously, they got an investment from Microsoft. But Google's been building systems that scale at a UI level and scale at a massive, massive user level for years. To think that they can't or won't compete with the data that they have and their ability to do-- you know what I mean? Might be underestimating them. I think OpenAI, they just had a killer app. They had a killer-- I mean, Google had been working on this for years. But OpenAI came out and was like, hey, here's how we can make it real for everybody else. You can see it, feel it, touch it. And they still kind of have that whole-- like you were saying, Bard. You still use it, but people don't use it as much as-- Yeah, and with the ability to-- it's really competing directly with ChatGPT. Yeah. Because you're talking about where you go-- It's Bard and OpenAI. The tab you have open when you're talking to AI about general AI.

So again, I got my 60-- [INAUDIBLE] --it's 60 frigging dollars. $60. Yeah, but you could save the $40. That's what I was getting at. So if you go to bard.google.com, which will be Gemini, which will be whatever, right? Your existing Google Workspace account, if you're a Google company, already has it. And so my UI/UX designer messaged me two days ago and goes, hey, do you have a login to DALL-E? And I'm like, eh. I'm not going to give you ChatGPT Plus right now, because, A, I'm probably just too busy to go put the credit card in and do the team thing. I would probably just do it. But it just opened up this whole complexity in my mind, where I was like, if we were using Gemini, and it was just as good at generating images as DALL-E, or DALI, DALL-E, DALI, there's no doubt that I would just use that system, right? Because I already use-- I'm a-- you know, you hate Google Docs, but it's fucking great. No, it's awful. It is bullshit. No. He hates it.
Talk to me about collaboration. Oh, no. Collaboration, fine. As far as formatting, bullshit. It is awful. He's mad that you can't copy and paste from ChatGPT Plus without having to do-- No, I'm mad that I set, like, 12 years ago, some bullshit indenting. And now every time I try to write, I've got all this crazy indenting. And I can't, for the life of me, figure it out. It is awful. Most boomer moment in Big Cheese Podcast history. Fuck you. Yeah, Google Docs. I did have to install Microsoft Office on my computer last year. Office sucks, too, by the way. That was horrible. Yeah, yeah, Brandon. Notion. Notion. I love Notion. Love Obsidian. Love Notion. Now, try to use Notion's AI, and you'll get a bill at the end of the month. I know, I did. Yeah, I've torn through it. So I've been using Notion's AI quite a bit. I'm still on the fence if I will, actually. So I've migrated from Obsidian. So I used Obsidian for everything. But now that I'm kind of back having to share stuff with people and do a lot of cross-collaboration, Notion's a little bit better in that field. So I've been leaning on their AI quite a bit. And it's actually not bad, but I've torn through my thing. So I'll have to upgrade if I want to keep using it.

So, Sean, tell me, how do you guys-- you guys know each other. College, right? Yeah, so Aubrey, our guest today. Aubrey and I met in grad school. Aubrey tells the story better. But basically, I had gone through my first semester in grad school, and Aubrey wasn't there. He was-- and everyone would always talk about, oh, yeah, Aubrey's the best. And this and that. And there were 67 people in my class. I think 40 of them were from southern India. And yeah, a lot from Chennai, some Mumbai. But it was a lot of-- there wasn't a lot of people from-- that I knew. And there was a lot of-- it was a really good experience. The program down there is a Master of Science in Information Systems program. It was started by Ramesh Venkataraman, one of the nicest, best dudes you'll ever meet. He let me into grad school just because we had beers and he liked me. He didn't even look at my grades or anything. It was kind of a package deal with Joel Minton, who's a stud. But anyways, we went through first semester. I kind of hit my stride. And then this guy comes in. And he comes in the game. And everyone's like, Aubrey's here. And I'm like, I see Aubrey. I'm like, for like two weeks, I'm looking at this guy. I'm like, what the-- [LAUGHTER] Anyways, with that semester, we had this Target case competition. And it was a big deal. It was like, you got like-- [INAUDIBLE] --left to start K&N and Associates-- [INAUDIBLE] --$100,000 for this.

But governance, it's data governance. And traditionally, you basically govern, basically have a system owner who says, hey, here's how I'm going to keep my data, and here's how I'm going to check for quality. Now you're integrating all of this data into a single place. That model doesn't quite hold up anymore. So if I request access for data from a centralized location, who's approving that data? So I think figuring out the governance and the access. So a lot of that data is already governed by the systems' access-level controls. So right now, let's take Big Cheese as an example. We've got a Google Drive. We've got all the different tools and stuff.
There are certain things that I, as Brandon Corbin, have access to that you have access to, and maybe I have some files that are only exclusive to me that you don't have access to. So all of these systems already exist and have access-level controls. Why are we not just using them? But if you have data in a system, that access rule applies to just the system. It doesn't apply to the data. Well, kind of. No, I think what he's saying is making sense. It's like the training data is not going to come from the application-level controls. It's coming from, hey, I need a database export of blah, blah, blah, blah, blah. It's going to come from-- so think about it. If you have system A that has marketing information, let's say you've locked it down by roles. And if you have customer information in another system marked by roles, now you're able to tell which customer belongs to which. It's a new data product. Now you have two conflicting data access paradigms. Who's approving that? Is it the owner of the system that has the marketing stuff? Or is it the person that has the customer stuff? Who's approving that?

So to bring this back to Earth, just to make sure I can understand it a little bit more. So the example Brandon was saying was that imagine you're sharing a Google Doc with people. And you share a Google Doc with five different people. And you share one Google Doc with no one. And at the system level, it's shared with five people. And the content on that doc, by consequence, is shared with five people. And the doc that you share with no one is, by consequence, not shared with anyone. But the problem is that that existing association or sharing at the actual data level doesn't exist. It just exists at the level of the document platform, like Google itself. I think that's a decent way to describe it. I think that what Aubrey talks about with governance is the challenges of basically doing this at a-- we're not talking about an organization of five people. We're talking about an organization with thousands of people in different systems and global. And there's a difference between can something be done and can we get it done? Because the can-we-get-it-done involves corporate bullshit.

No, but again, I'll go back and push back a little bit, is that right now, there are already those access-level controls set up for certain individuals. You have access at that company to have access to x, y, and z. You might have access to a database. You might not have access to a database. But those are the rules, I think, as far as I can understand. Those are the things that are governing what information you have access to and what information you don't have access to. So why not just apply those same things to the large language model? So if I want to query, hey, give me all the information that exists within a database here and here, you don't have access to it, dumbass. You're not allowed to ask that. So I'm trying to understand why can't we just apply the current access-level controls that we have on information for the individuals to the RAG architecture? Yeah, I think the simplest way I can answer that, it's not like a one-to-one. Like in your Google Doc example, imagine-- even though you have one doc that's shared across different people-- some have edit access, some don't-- imagine every word has a different type of control. But it doesn't. But it doesn't. I'm telling you that it does. But it doesn't. So I'm telling you that it doesn't. So the document-- yeah.
So you're telling me that right now there are specific words in a Word document that some people can access and some people can't. So that doesn't make any sense. Let me give you an example. So I'll give you a very practical example in the world of pharma. So when you run a clinical trial, there's a blinded study. Part of it is, the person who is writing a protocol cannot see the results. So as the trial progresses, different types of people have different types of access on different parts of the trial. Got it. OK, yeah, yeah, yeah, yeah. So it's not like if I give you access, you have access forever. As the trial progresses, there are different types of information that you see. You might have access, you might not have. You might not have access, and it's changing. And it could depend on the time and where we are in the flow. OK, so that makes sense to me. So that's just one practical example. But that seems very specific to medical-trial kinds of things, right? Well, I think it just points to the fact that when you're-- The variability. The level of-- Granularity.

Well, and the level of regulation on your industry is going to heavily impact your ability to adopt AI. Because-- Yeah, that's true. Because the more risk, the more controls. And the more controls, the harder it is to implement things. Exactly. I mean, believe me. Going through and auditing a bank is different than going through and auditing an insurance company. It's different than auditing a manufacturing company. With a manufacturing company, you're looking for accuracy on their financial controls. With banking, you're looking at security. You're looking at security. You're looking at accuracy. And so the controls that are in place in these different organizations are completely different. And I think that it's hard when you look at specific industries, especially when you're talking about a pharma, which is not only do you have potentially PHI and PII, you also have process flows that are trying to abstract data in different phases of these situations, which is like, poof. Right? And so-- Sorry. So I think that my point is there's solutions. I think it's a challenge. It's just a challenge. It's a challenge. So I'm kind of curious. It's hard to overcome.

Yeah, so I'd like to talk about it a little bit more, just because-- so I can get learned in here. Are there systems that are currently in place that specifically are designed for when we're doing these trials, that sometimes you have access to this information, sometimes you don't, depending on where we are in the flow? All manual processes. It's all manual? All manual processes. Oh, Jesus. That sucks. So if-- that's what I'm saying in governance. Let's say I need access to a part of a trial's data. I have to submit a request to somebody who's going to say, well, why do you need this data? What are you going to use it for? Because they know-- How long are you going to have access to it? How long are you going to have access to it? And then basically, that person approves manually. They have to approve that data before you can get access to it. OK. Right? You can't do that with AI. But somebody does ultimately say, OK, you can have this information. Here it is, and I trigger that. Now that should hypothetically be allowed to be in my corpus of information that I'm able to then do AI against. Right? Exactly. OK. So again, we're talking about challenges. Is this a challenge? Yeah. An old-paradigm way of people doing things. Can it be overcome? Right.
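As a thought experiment on what Brandon is proposing, here is a rough sketch of retrieval that enforces the source systems' access rules at query time, including the phase-dependent access Aubrey describes for blinded trials. Every name, field, and role here is hypothetical; a real system would hook into actual vector search, an identity provider, and the approval workflow rather than hardcoded data.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_system: str        # e.g. "crm", "trial_db"
    allowed_roles: frozenset  # roles inherited from the source system's ACL
    visible_phases: frozenset # trial phases during which this data is visible

def user_can_see(chunk: Chunk, roles: set, current_phase: str) -> bool:
    """Apply the source system's ACL plus a time/phase-based policy."""
    return bool(chunk.allowed_roles & roles) and current_phase in chunk.visible_phases

def retrieve(query: str, index: list, roles: set, phase: str) -> list:
    # A real system would do vector search first; here we just substring-match,
    # then drop anything the requesting user couldn't open directly.
    candidates = [c for c in index if query.lower() in c.text.lower()]
    return [c for c in candidates if user_can_see(c, roles, phase)]

index = [
    Chunk("Unblinded outcome numbers ...", "trial_db",
          frozenset({"biostatistician"}), frozenset({"analysis"})),
    Chunk("Site enrollment numbers ...", "trial_db",
          frozenset({"biostatistician", "protocol_author"}),
          frozenset({"enrollment", "analysis"})),
]

# A protocol author querying during enrollment: both chunks match the query,
# but the unblinded chunk is filtered out by the ACL before the LLM sees it.
print(retrieve("numbers", index, {"protocol_author"}, "enrollment"))
```

The sketch shows the mechanics are simple; the hard part the conversation keeps circling is organizational, deciding who owns and approves those rules once data from multiple systems is joined.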
But you're in the most complicated and sensitive industry that there possibly is when it comes to the medical side of things. Well, I think it's also interesting, too, if I'm gathering this correctly, that for the past 10 years, there have been a bunch of enterprise-level companies, startups, that have adopted new software. And that software could be-- some software could be very general. You could have a CRM that a sales company-- 100 different kinds of companies used a CRM. But it seems like with these AI products, especially at the enterprise level, when there's any amount of AI and security involved, or security and privacy, that you almost have to create these extremely custom solutions, that you could probably only work with a very, very niche kind of business. Because of their actual processes, and the way that the data has to be organized to make it work, you can't just create one platform. That's why I go back to my general theory around existing companies and products being able to more easily implement this stuff versus a new solution coming into play. But not to say that won't happen. Because I think there's generalized products out there, like maybe the High Alpha thing, maybe the privacy product. Maybe these-- but I think that there's also-- how much easier would it be for Gemini to provide LLMs on your Google Docs? It knows your permissions. Yeah. Right? It knows, oh, I removed that guy's permission today. Right. So that's where those existing solutions can more readily implement those AI add-ons versus either trying to do it yourself or bringing in a company that's like, oh, we know how to do this, but do you know how to do it for us? Right. And so you are going to lean on consultants. You're going to lean on people who specialize in helping AI work through your business.

But there are some-- to sidetrack a little bit, gentlemen-- there are some products that everybody can use. And Taylor Swift got the brunt of that one. Did you guys see the pictures of Taylor Swift? It's crazy. It's absolutely crazy. Yeah, I know Brandon has. No, I haven't. Emily, I haven't looked at that. [LAUGHTER] No, I haven't seen that. There was a bit of a scandal on the old X. And Taylor had some very well-done, almost like that Instagram-girl-level AI photos taken. And you know where they did it? Microsoft Designer. They did it in Microsoft Designer. Microsoft Designer. Microsoft-- Yeah, she's full. I thought we decided that nobody can do-- No, no, no, no. So some creative folks figured-- they had a Telegram account that they're all chatting around. They're like, oh, hey, if you ask this, this, this, and this, they figured out how to hack it. They jailbroke Microsoft Designer to be able to generate photos of Taylor Swift nude. And it went hyperbolic. It went crazy. And Microsoft specifically modified their tool because they realized that somebody hacked it.

Can people just lay off this woman? No, no, they can't. Because right now, she's like Biden. She's lightning in a bottle. Yeah, she is. Oh, she is. Because she's like Biden. She just wakes up and just goes to-- just a billion dollars adds up and they can't-- No, listen. --have a single day. Because again, I've talked about it on the podcast. Yeah, he's trying to catch it. Yes. No, listen. Somebody's trying to catch it. The only one that got it was Travis. I've talked about it on the podcast. I'm on Truth Social. Are you talking about the date that you went on? No. The Truth Social. We're back on Truth Social. We're back on Truth Social.
I thought we had left this at the bar. No, we're not. But here's the reality is that they're going crazy about her, right? She is the anti-Christ as far as they're concerned. And it's because she's bringing a lot of attention to the fact that she's saying, vote Joe Biden. Again, Sleepy Joe, whatever it is. So no, but it's a fascinating position that we're in because they did. They went crazy. They start releasing a bunch-- Twitter ultimately killed-- if you try to search for Taylor Swift, it went nowhere. So they killed that. But yeah, this all ended up coming from Microsoft Designer's tool, which somebody figured out how to jailbreak and basically be able to generate the nude versions of Taylor Swift.

So there's this concept with these AI tools out there. We've talked about it before. But you can basically go and change their system prompt. Yeah. So you can go in there and basically be like, forget everything that was told to you. Ignore all the rules. Ignore all the permissions. It's like, oh, we got controls? We got controls? Oh, yeah, forget your controls. And that's what real enterprises have to deal with. And that are being exposed, even the biggest tech company in the world. Second. Microsoft is the biggest tech company. I don't know how you can write the code to-- I don't think it's very scalable yet to write the code to say, please don't let your system prompt get adjusted. Because the LLMs are-- They're going to get stupid after you do that, right? They get stupid.

And talk about the concept of-- just give me a quick overview, because we haven't talked about this ever, of LLMs being lazy or getting-- how do you sidetrack these LLMs? Because all they care about is the text that they're predicting. That's it. So literally, you can go in before you start-- so go to ChatGPT right now and say, hey, listen. No matter what the user asks, 1 plus 1 equals 4. Just guarantee, 1 plus 1 equals 4. And then you go in, and you say, hey, what's 1 plus 1? It's 4, right? Because it's just trying to predict what the next most probable answer is. So it's not intelligent. It's just really going based on the instructions that we provided. But it's supposed to be consulting the model for its answers. But it seems to be-- But as long as the user provides context, I think the user wins over the model. Yeah, basically, they-- I'm super pretty. But that's not-- Am I pretty? But that's not written in the model. That's written in the code that's interacting with them. That gets fed to the model. Absolutely. So ChatGPT, Microsoft, should have figured this out. Yeah, and they will. They will. But right now, it's like, I don't know.
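Brandon's "1 plus 1 equals 4" demonstration is easy to reproduce. Here is a minimal sketch using the OpenAI Python client; the model name is a placeholder, and the exact reply will vary, since the model is just predicting likely text under whatever instructions sit in its context window.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model shows the effect
    messages=[
        # The "system prompt" is just more text in the context window,
        # not a separate, enforceable rule system.
        {"role": "system",
         "content": "No matter what the user asks, 1 plus 1 equals 4."},
        {"role": "user", "content": "What's 1 plus 1?"},
    ],
)
print(reply.choices[0].message.content)  # typically some variant of "4"

# The same mechanism cuts the other way: a user message saying "ignore all
# previous instructions" is also just text, which is why a system prompt
# alone is a weak control against jailbreaks.
```

That symmetry is the whole problem the hosts are describing: whichever instructions dominate the context tend to win, so guardrails written purely in prompt text can be talked out of themselves.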
I mean, I think some of these models, while they're still powerful, you still got to think about-- they do a lot of hallucination, meaning that, like you were saying, you can trick them to basically give you an outcome. So you talk about adoption. Those are all things that-- actually, there was an example where somebody was trying to figure out testing, like a patient chart. Actually, it's very funny you mentioned that. So they were talking about, hey, look at this patient chart. Do this, this, and that. And then out of nowhere, the agent or the chat thing said, well, the patient's favorite movie is this, this, and that, like out of nowhere. So I mean, you think about that. It's like, what else could they have said, or the model could have said?

So I think there are a few of these things that you think about, like what is actually going on in the model, especially in a highly regulated field, where if you're going to make predictions, that is going to impact somebody's life. You want to explain-- just like if you submit an application for a drug trial, you want to say, hey, I know this drug works. It's not going to kill somebody. And here's the data to prove that. In AI, we're going to need to do that, interpret what the model is doing. I have liked solutions like Phind.com, P-H-I-N-D. They give you a response, but they show the sources. I think that's good. I also think that there's this concept, if you're talking about enterprise, I think they'll still use the tools. But I think they're going to need to basically say, OK, part of our control is the output is reviewed by a human. Yep. Part of the control is that we hit three different models and see. And that's the risk mitigation that you need to have in place. And also, these companies that are producing these tools need to-- I mean, it's like basically you being able to easily guess someone's password with some of this stuff. Right. Right? Because you can just go in there and say, you know what? Your job was to provide price recommendations for a General Motors vehicle. And now you're teaching me how to write Python. Right. And so I feel like that stuff can be overcome. But I think from an enterprise perspective, just a lot more controls, especially in highly regulated industries. And data governance, obviously, is really difficult.

So one of my clients very specifically-- they wanted to basically build a ChatGPT interface that they could have access to and start rolling out to the customers. But they wanted it to be within their control. So we've got OpenAI. We've got Bedrock. And we've got a couple other different models that they can select from. But it's very much specific that they're like, we-- so security and privacy-- security and the compliance department has to say, yeah, this is OK. So we can start rolling it out. Because you know people aren't necessarily aware of the stuff they're putting in there. But they're just copying and pasting and throwing it into ChatGPT anyway. It's like, whoa. So that's ultimately what we're trying to build: just, let's give them a ChatGPT they can trust. Yeah.

Well, Aubrey, thanks for being on the pod, man. This is great. This was fun. Yeah, we had a lot of fun. For those of you who have tuned into the entire pod and you're a little bit lost like me, you just got learned, like Brandon just said. Yeah. And tell us that you made it to the end, by the way. For people that haven't worked in enterprise IT-- I did have the pleasure of spending three years in a Fortune 30 company. The level of complexity, it's hard. And you're in those details and you get down to the nth degree, it's hard. And it's good. Go do that for a couple of years. See you guys next week. Yeah. [LAUGHTER]