The rise of large language models like ChatGPT has sparked excitement but also concerns around user privacy. In this episode of the BigCheese AI Podcast, the hosts dive into the impact these new AI tools are having on privacy. They discuss emerging solutions that aim to better protect user data, while also exploring the potential of decentralized, personalized LLMs that could live on your own device. The hosts debate whether tech giants or startups are best positioned to balance privacy and progress in AI, and ultimately predict that Apple may be poised to deliver private, on-device LLMs customized to individual users.
We're recording? Okay, nice. All right. Well, you can get a good look at a T-bone by sticking your head up a bull's ass, but you'd rather take the butcher's word for it. I am Andre Harakas, your 30th best moderator, and I am joined by Sean Hise, Jacob Wise, and Brandon Corbin.

Hi, I'm Jacob Wise, one of the co-founders of Crafted. I've been doing some sort of startup for a long time in my life, I'm a general technologist, and I work with Sean.

My name is Sean. I am a co-founder with Jacob at Crafted. Andre is one of our clients, he's the man, and we've been working on All-In for a long time. We work with startups, corporations, established businesses, and we do all things digital, design and development. In the last year and a half we've gotten into the AI space, working with Brandon, and I'm just really excited to be here.

I'm Brandon. I am the founder of Happy Data Studios, which is an agency here in Indianapolis that focuses on creating MVPs and proofs of concept. Sean and I had started talking, and we realized there's a lot of potential here, so that's kind of where we formed this alliance that is now Big Cheese. I'm just a lifelong creator. I love the art of the start, creating new things, and I'm not very good at the after-effects, which is, once I've got something built, I'm like, that was cool, let's move on to something else. That's why it's nice to have some other people around to say, no, we've got to go forward with it.
So, the Big Cheese AI podcast is something I'm super excited about. You know, you guys didn't give yourselves enough credit. Jacob Wise and Sean Hise, again, I've been working with these guys for the past almost two years now, and it's been an absolutely phenomenal experience, both on the professional level and on the personal level. They helped me take my startup from being a hundred-user platform to now, where we've registered over 12,000 users, and we're working on some activation things. But seriously, these guys are brilliant. I recently came into contact with Brandon through this Big Cheese initiative, and he is, for all intents and purposes, one of the smartest AI minds in the Midwest. So on the Big Cheese AI podcast we dive deep into AI, we dive deep into the foundations and how it evolves, and today we're going to be talking about privacy in the new LLM world. The first question I have is: how has the advent of LLMs impacted the privacy of user data, Brandon?
Well, people don't really care about privacy. If we're just being honest, the majority of people aren't really thinking about data privacy. You're not thinking about what you're posting on Facebook, you're not thinking about what you're posting in general. My previous life was spent building an application called Nomie that let you track all the good and bad things in your life, and that was my first foray into realizing that people don't care about privacy. That's when I realized, oh my God, the world needs a sin tracker, first of all, but it's got to be completely private, because I saw affairs, I saw drug use, I saw every bad thing you could think of. I now had access to that in my database. It was the most valuable data I've ever seen, and it was horrifying at the same time. But no one really cared. I knew bad things about people who were using my app.

And then all of a sudden LLMs come out, and now I can go to ChatGPT and ask it, hey, what does this bump on my neck look like? All of a sudden people feel like, oh, I can go have a chat conversation with this because it's completely private. It's not, right? The data that you're putting into ChatGPT, into Claude, into Bard, all of that information can and potentially will be used for training their models. Now, that's probably going to change eventually. I think governments are going to come out and say you can't do that, and they're going to have to make some adjustments. But it's absolutely horrifying, the amount of data people are putting in. The greatest example was Samsung, where Samsung's people were like, oh hey, we could use this to potentially optimize our chips. So they put a bunch of data about their chips into ChatGPT, asked it questions, interacted with it, and then they realized later on that they could go ask about their chips and actually get answers about them. So it was being trained on their data. This was earlier on at OpenAI, so I'm not sure if they're still doing it, but that's what ultimately spurred Samsung to say, okay, nobody here can use ChatGPT. And as a user, I have no ability to go to ChatGPT and say, hey, I don't want that information in there.

Right, and that's a big part of GDPR, the right to be forgotten.

Absolutely. I think people need to be way more concerned about the privacy aspect and the information they're putting into ChatGPT and Bard and Claude, because it's out there.
Well, it's just like the typical struggle between capitalism and progress on one side and regulation on the other. You always see that play out in the EU first, because they have stricter regulations, whereas the UK aligns more with the United States. They're going to say, well, hey, if we crack down on this, are we going to fall behind? Are our enterprises and startups going to suffer? And what usually wins is the stuff that makes money. From a ChatGPT perspective, there's more risk, because you're going to be more forthcoming when you're talking to these prompts, and that is unlike social media, where the things you put out on Facebook you know your social network is going to see.

To that point, I think there are two parts. There's the education piece, where people just don't know what they don't know, and we can overcome that. But then there's the side of it where we just kind of inherently either trust big tech or assume we're not important enough. I know I've had this thought, where it's like, who gives a [ __ ] if I talk about my recipes? And you almost feel like it's just so complicated and a little overwhelming, so you kind of throw caution to the wind when you use these tools.
Well, I remember the first time. I came home from a golf round and I'm talking to my wife, and a certain golf brand had come up in conversation, and that was the first time I realized that they're listening to our conversations. And now there are videos on the internet showing how to literally go into your Google settings and turn off them listening to you. So talk about normalizing something: a year or two ago it was, oh my God, they're listening to us, and now it's, oh, there's a setting to turn off the listening-to-you-without-your-permission. Have you seen your account page on Google, where they show all the ad targeting? It's horrifying. That's a whole other thing, but I'm like, okay, they really know the things I'm searching for and talking about right now.
So, yeah, the question I really have is: LLMs, this stuff's brand new. I mean, five months ago we weren't talking about this stuff, and now all of a sudden it's taken the entire world by storm. And there is, like Sean mentioned, a bit of a competing belief between regulation, really trying to strap it down, and allowing entrepreneurs to go out, build the industry, and make things that are really amazing, and along that journey obviously you're going to have some risks. You think about Opaque Prompts and Prompt Privacy, where you kind of have to balance functionality and privacy together. So what are your thoughts there? Let's talk about the risk. We know the risk is there, but now let's take it to the enterprise, to a company that wants to use ChatGPT. What are the tools out there that help mitigate the risk, and what does that market look like?
Right. The two that come top of mind, and full disclosure, Prompt Privacy is a client of mine now. I've been working with them through another client, and I'm going to be helping them come up with their marketing and product-marketing stuff. So, Prompt Privacy came out with the entire premise of basically allowing organizations to set a certain threshold for how much private information they want to be sent to large language models. Their thing is a traditional chat interface. You go in and say, I want to chat with ChatGPT or PaLM 2 or Llama 2; they've got a bunch of different models. If you've ever used ChatGPT, it's a very similar kind of user experience, except you can actually pick different models beyond just what OpenAI offers. And you can say, hey, if anything in this prompt is considered PII or private information according to our organization, I don't want it to ever go to ChatGPT; I only want it to go to PaLM 2, which is running in their own private, internet-isolated hardware environment. Those are great because they also offer a bunch of different user accounts, so I can go in and say everybody on our team has access to this, and I can see exactly who's using what, how many tokens they're using, what kind of tools they're using, and how many people are trying to pump in a bunch of information that's not allowed; that gets kicked back out.
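To make that routing idea concrete, here is a minimal sketch of the pattern being described: scan the prompt for anything that looks like PII, and if it trips the organization's policy, send it to a self-hosted model instead of a public API. The endpoints, function names, and the simple regex policy below are illustrative assumptions, not Prompt Privacy's actual implementation.

```typescript
// Sketch of PII-aware routing: prompts that trip the policy go to a
// self-hosted model; everything else may go to a public API.
// The endpoints and the regex policy below are illustrative assumptions only.

const PII_PATTERNS: RegExp[] = [
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/,   // email addresses
  /\b\d{3}-\d{2}-\d{4}\b/,         // US Social Security numbers
  /\b(?:\d[ -]?){13,16}\b/,        // credit-card-like digit runs
];

function looksLikePii(prompt: string): boolean {
  return PII_PATTERNS.some((re) => re.test(prompt));
}

async function routePrompt(prompt: string): Promise<string> {
  // Hypothetical endpoints: a self-hosted model inside the network
  // versus a public cloud API.
  const target = looksLikePii(prompt)
    ? { url: "http://internal-llm.example.local/v1/chat", label: "private model" }
    : { url: "https://public-llm.example.com/v1/chat", label: "public API" };

  console.log(`Routing to ${target.label}`);
  const res = await fetch(target.url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = (await res.json()) as { text: string };
  return data.text;
}
```

The per-user token and tool reporting described above would hang off the same chokepoint: every prompt passes through one place that can log who sent what and where it went.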
So those, I think, are where companies need to be focusing: tools like Prompt Privacy, or even ChatGPT Enterprise, which I still haven't gotten any pricing for. Every big client I work with has submitted information to try to get access to the Enterprise version of ChatGPT, which supposedly is going to be isolated, running in its own environment, with no cross-contamination of data; it's always just going to be isolated there. But then you also have Azure with its ChatGPT interface.

I was just going to say, their main enterprise offering is their deal with Microsoft.

Well, that's what's kind of weird. It seems like there's overlap, and I'm not sure how they're going to figure this out from a business-relationship standpoint, because that's pretty much what Azure's chat offering was: again, isolated to your own environment, running the models there. So it seems like there's overlap between what ChatGPT Enterprise offers and what Microsoft is offering, and I'm still not sure. Azure is just a per-token price, right? ChatGPT Enterprise, I assume they're going to want $20,000 a year or whatever it's going to be; it's got to be something expensive. But those are the types of things you need, because if you're a big company that's basically saying, okay, you guys can go use ChatGPT, just don't put any information into it that's proprietary or private, that is not a viable solution to trying to protect your company's IP.
You mentioned Prompt Privacy and a couple of key feature sets there, and it also sounds like there are a couple of different levels of protection. You've got audit logs, it sounds like; you've got granular control over who can do what; you can completely box off or isolate the data, for more sensitive things that you just absolutely can't have going anywhere; and then you've got scrubbing, and it scrubs it on the way back too, if you want it to. How does that work?
Yeah, so it's a redaction process. Opaque Prompts has that; that's kind of what their entire model is. They sit in between. They're much more developer-focused, where Prompt Privacy wants to be more for enterprise, non-technical people, where anybody can just log in and start chatting and building things out. But Opaque Prompts sits pretty much between your application and your LLM, and in this case I think it's primarily OpenAI. When a request comes in, they redact all the PII, but they replace it with a tag, like ADDRESS_1 and EMAIL_1 and EMAIL_2. Then when you say, hey, what's the second email in this list, OpenAI gets that request and just sees EMAIL_2, so it replies that EMAIL_2 is the second email in the list, and when it's coming back, Opaque Prompts just replaces EMAIL_2 with whatever it redacted initially.

How does Opaque Prompts determine what PII is? I'm assuming there's...

I think it's pretty much just your standard stuff. So they do names, and I'm not sure exactly how they do names reliably, because a name is obviously one of the harder ones to catch with a traditional regular-expression lookup. But email, phone number, Social Security number, driver's license, credit card, maybe birth dates: they just replace all of those. It's not a 100%, nothing-could-ever-get-through solution, but it's going to capture most of your basic use cases and probably some pretty unique ones.
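As a rough illustration of that redact-and-restore round trip, here is a minimal sketch: detect email addresses, swap them for numbered placeholders, send the redacted prompt to the model, then map the placeholders back into the response. This is an assumption about the general pattern, not Opaque Prompts' actual code, and a real tool covers far more PII types than this.

```typescript
// Minimal redact-and-restore round trip for email addresses only.
// Real tools cover many more PII types (names, phone numbers, SSNs, etc.).

const EMAIL_RE = /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g;

function redact(prompt: string): { redacted: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let i = 0;
  const redacted = prompt.replace(EMAIL_RE, (match) => {
    i += 1;
    const tag = `EMAIL_${i}`;
    map.set(tag, match);   // remember what each placeholder stands for
    return tag;
  });
  return { redacted, map };
}

function restore(text: string, map: Map<string, string>): string {
  let out = text;
  for (const [tag, original] of map) {
    out = out.split(tag).join(original);
  }
  return out;
}

// Example: the model only ever sees EMAIL_1 and EMAIL_2.
const { redacted, map } = redact(
  "Compare alice@example.com and bob@example.com and tell me the second email."
);
// ...send `redacted` to the LLM, get `reply` back...
const reply = "The second email in the list is EMAIL_2.";
console.log(restore(reply, map)); // "The second email in the list is bob@example.com."
```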
Prompt Privacy does a similar thing. They have a full block, and then they have a redacted mode; I'm not sure if the redacted one is fully released yet. And they've gotten it down to where it takes five milliseconds to do the scan and the extraction of the PII, which is important, because every time you send these things through another step, it's just more time. Speed has always been an issue with these systems, and with large language models, if you're not doing streaming, where you're getting the text back instantly, those things could take 10 to 20 seconds to get a full response back.
But yeah, Prompt Privacy is really interesting, and the other thing they're incorporating: with them you kind of have almost an AI operating system. They have a RAG architecture, meaning you can upload a bunch of documents and have different vaults. So you could have access to all of our technical documentation, you could have access to all the marketing information, and you could just chat with those without having to know anything about what RAG is, or vector databases, or anything like that. You're just uploading your documentation, then you're picking, hey, I want to have a chat, and here's the vault I want to chat with, and now I can have a full conversation about our data. So that's another thing that's really fascinating about what they're doing.
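For anyone wondering what RAG actually does under the hood, here is a heavily simplified sketch of the pattern: embed the document chunks once, embed the question, pull the closest chunks by cosine similarity, and stuff them into the prompt. A "vault" would just be a named collection of chunks. The embeddings endpoint and model name are assumptions based on OpenAI's public API; any embedding model would do.

```typescript
// Simplified retrieval-augmented generation: embed chunks, retrieve by
// cosine similarity, and prepend the best matches to the prompt.
// Assumes an OpenAI-style embeddings endpoint; swap in whatever you use.

async function embed(texts: string[]): Promise<number[][]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-ada-002", input: texts }),
  });
  const data = (await res.json()) as { data: { embedding: number[] }[] };
  return data.data.map((d) => d.embedding);
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function buildRagPrompt(question: string, vault: string[]): Promise<string> {
  const [qVec, ...docVecs] = await embed([question, ...vault]);
  const ranked = vault
    .map((chunk, i) => ({ chunk, score: cosine(qVec, docVecs[i]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3); // keep the top 3 chunks

  // The retrieved context plus the question becomes the prompt sent to the chat model.
  return [
    "Answer using only the context below.",
    ...ranked.map((r) => `- ${r.chunk}`),
    `Question: ${question}`,
  ].join("\n");
}
```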
Well, that's interesting too. I mean, I think that opens up a whole world of value. Right now I feel like lots of companies talk a big talk about documentation, but it's really hard to implement and really hard to extract the value out of those documents; you have to have start-to-finish, really exhaustive procedures and processes in place. So I think this is the promise: one, keep the data private; two, make better use of it, make it more valuable, because people have this information. And that's what I'm most excited about with these kinds of offerings: exposing more data to the people that need it, and not to the people who don't belong there, and adding that value.
So another one that's kind of interesting, it's been popping up a lot on Reddit, at least in the ads that I see, is Salesforce. And I'm not exactly sure what Salesforce is doing.

Neither am I. Matthew McConaughey keeps telling me I need to see more out of AI in random places on Earth, but I don't actually know what's being done.

Neither do I. But I think what it feels like they're trying to do is this: they've got a fairly impressive AI research team over at Salesforce, and I think they are actively working on building their own large language model that will be specific to your entire Salesforce implementation. So everything: CRM, landing pages, marketing, whatever you're using Salesforce for. And I think that's really what we're going to see happen here. Right now AI is this kind of standalone thing, and once we get past it just being a standalone chat interface, it's going to be incorporated into all the tools that we're currently using. We've seen it with Google; they've already incorporated it. For 30 bucks a month or whatever it is, I can get access to Bard within Docs, within Sheets, within email, and I can have those conversations there. So I think we'll just start seeing AI turn into a feature that gets incorporated into all of our existing products and services, and most of the companies who are trying to build the standalone "this for AI" or "that for AI" ultimately have no moat, and they're going to get squashed.
Well, Photoshop, right? Photoshop's done it, where you've got Generative Fill now, where you can basically take your photo, drag it out, and say, fill this in. Illustrator now has the ability to take, hey, I want a donkey jumping over the moon, and it comes out as a vector, which is the craziest one, because there haven't been any good vector image generators yet; they've all just generated a PNG or a JPEG or whatever, but now they're actually generating vector images within Illustrator. We're going to see it in everything, and Microsoft is obviously going to add it to all their stuff. So it's going to be a very precarious position for companies who are trying to build something that is "AI for this." No, it needs to be "this, with AI."
It's privacy, because you're extending the tools you already use, and you're already putting your company's data there. So that's where all that data is going to get embedded and indexed, and you already trust it.

But it's also the accessibility, right? That's why these big players are going to be the ones that end up really winning: they have all your data, and it's in a cloud that you already trust. Even though it's in the cloud, it's yours, and you have some sort of control over it. If you're getting audited, that's part of the process you're already doing; those companies have SAS 70s, they go through external audits. There's a huge moat there for the big players when it comes to that, because that's where your data is already hosted. And that's the accessibility piece; those two are tied together. So you talk about privacy, and really that's a leading indicator for who's going to succeed in AI in the enterprise. We just talked yesterday, or the other day, about the immense cost to process these transactions, and we'll talk later about token economics, but it's not easy to make money reselling very expensive transactions, a thousand tokens for six cents. The more we talk about it, the more it makes sense, because they've already got these people's credit cards, they already have a revenue stream from them, they've got their data, they've got the cash to fund this operation, to buy those customers and provide the value, and at some level they've already got their trust; their data is already there. So privacy should kind of come and follow from that.
And it's not to say there aren't going to be companies who... take music generation, right? Right now there's not really anything, but then that's not entirely true, because you have Adobe, which has some music production, or you have Audacity, or you have GarageBand. The music-generation piece will just get incorporated directly into those products. And that's really where I see an opportunity, at least for Big Cheese and what we're trying to do here: we want to help those companies actually incorporate this functionality into their products, and we can do that from an enterprise standpoint, in ways that are private and not just a generic wrapper around ChatGPT.
That's happening in more places than one. I don't know if you guys saw the huge Vercel drama over the weekend?

No.

So Vercel is a company that is huge right now. They've raised a bunch of money, and they're basically a platform that developers use to build and deploy their applications. They've become super, super popular; they're like the go-to solution. Well, one of the things they have in their user license agreement is that any app you deploy on their platform can be taken down by them at any time, for any reason. And what they've done in certain instances is that certain people high up at the company have decided they really like a certain app, and they'll basically release a version of that app on their own and effectively hijack that application. So there's this ongoing conversation where you build these developer-friendly platforms, and that's one of OpenAI's... this is probably a conversation for a different day, but you're building a developer-friendly environment, and then you'd better read the fine print, because what you put in there may not be yours, right?
Well, think about Facebook, right? When Facebook rolled out their app platform, everybody went and started building entire businesses on the Facebook app platform, and then all of a sudden Facebook either says, no, we're just turning that thing off, and everybody's left sitting there holding their bags, or, oh, we're actually just going to build that functionality into the application itself. Same with Google. I mean, Google has had how many platforms? Google's classic for just launching things, and then if they don't get a billion users they're like, all right, we're just killing it.

What was their streaming gaming platform? Stadia. I loved it, I thought that idea was very cool. Obviously it had some latency issues, but you had a bunch of people who bought the hardware, and then they just don't care; they're just like, yeah, it's not making a billion dollars, so scrap it.

I'm still waiting to get back into Waze... or not Waze, Wave. I use Waze all the time. Wave was cool. Yeah, I'm glad they haven't killed Waze.

No, it did bum me out when Google actually bought Waze, because Waze was, is, a brilliant platform. But then Google buys it, and all of a sudden you launch the app and now it wants access to your location all the time, even if you're not using the app. Today in my memories on Facebook was me saying, well, I guess I'm getting rid of Waze because of what they're now asking for; I think that was four or five years ago or whatever. They've since switched that back, so you don't have to have it on all the time. But that's just another example of the privacy thing, where people just don't think about it. They're like, yeah, okay, whatever. Why would an application need to know my location when I'm not using it? They don't, unless they're trying to figure out patterns and behavior.
and behavior well I think that is a
great segue into decentralized llms and
as llms become more compact Around The
Edge Where Do We foresee the impact in
use cases obviously we've outlined a few
but I'm super curious what you guys have
to say so decentralized llms in terms of
the way it mean that I take it and my
first excitement was when I actually
downloaded a model onto my machine now
my machine is you know silicon you know
an M1 it's an M1 it's an M1 but it's
it's the first second generation M1 um
so it does take quite a bit of computing
peer what what it Bas essentially means
is that instead of relying on a a cloud
service that's running these models on
you know x super Advanced Cloud Hardware
you can actually pull these in locally
and you have no external dependency to
run these models and um you know that
that takes care of a lot of different
issues that you might have one of them
being cost because a lot of the models
that you can run in these situations are
open source so not all models are chat
GPT 3.5 or you know all these
proprietary models there are versions uh
like Facebook implemented uh an open
source version so Facebook's llama llama
which is open source yeah right um and
you can
I kind of open sourcy well you can pull
it down on your computer yeah you can
run it on your own a lot of these have
like weird like you can't use them for a
commercial cuz I think some of the
model there's some weird licensing
issues but there's definitely models
that are out there that are completely
open source that you can use for all
sorts of crazy things and and long story
short is that just opens up a lot of
different use cases um and opportunities
for for you to to utilize this stuff and
I think what's happening what you see
there is that yeah the privacy and
security dependency might be released
but it's also kind of cool because you
can test out a lot of different types of
technology and some of them are more
specialized and the specialization I
think is is one of the most the the
coolest Parts about it so uh mistol is
one that's super popular right now keep
hearing mistol all time mistr all over
the place but then somebody created
Samantha mistol which is a model that is
specifically trained on Mental Health
relationships conversations and like
data about just your overall well-being
so you can have conversations about like
how do I how do I communicate this to my
wife without her getting M I've had this
I've had this question, and I have to ask it. For the layman, explain: is it fine-tuning, or is it just the data? Because I've gone through and made embeddings of my own data with ChatGPT's APIs, and when you ask questions it responds in the context of that data. But for some of these other things, like the example you just mentioned, how does a model that's been trained on mental wellness compare, on the same questions, to a general-knowledge model, and why is that important?

What it's ultimately doing is injecting additional weights into the model. So the question becomes, is Samantha-Mistral as good at answering questions that aren't tied to relationships and behavioral stuff as the standard Mistral? I would assume probably not, because I think the weights end up getting shifted toward the specific direction the model is being fine-tuned on, so it would probably lose some context on some of the other things it was originally trained on.

So I understand that part, but what about the flip side? If you create this model, Samantha, around mental wellness, will it answer those things better than the general model?

Yes, specifically for the things it was fine-tuned on. So you can use fine-tuning to basically add additional knowledge to these large language models.
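To make the fine-tuning versus embeddings distinction a little more concrete: fine-tuning means showing the model many example conversations and letting training nudge the weights toward them, whereas the embeddings approach pastes retrieved documents into the prompt at question time. Below is a hedged sketch of what a chat-style fine-tuning dataset tends to look like; this is the general JSONL shape used by OpenAI's fine-tuning API and many open-source trainers, and the content is invented purely for illustration.

```typescript
import { writeFileSync } from "node:fs";

// A fine-tuning dataset is just many example conversations, one JSON
// object per line (JSONL). Training on these shifts the model's weights
// toward this domain, which is also why it can drift on other topics.
const examples = [
  {
    messages: [
      { role: "system", content: "You are a supportive relationship coach." },
      { role: "user", content: "How do I bring up a hard topic with my wife?" },
      { role: "assistant", content: "Pick a calm moment, lead with how you feel rather than what she did, and ask for her view before proposing anything." },
    ],
  },
  // ...hundreds or thousands more examples like this...
];

writeFileSync(
  "training.jsonl",
  examples.map((e) => JSON.stringify(e)).join("\n")
);
```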
And the other thing about specialization is size, right? If you have a model that's trained on a specific activity, like the models out there that are really good at writing Python code, you might be able to run that locally, because it doesn't need to know what happened in 1532 and the Spanish Inquisition or something, you know what I mean? Specialization, like you said: you've got code ones, you've got behavioral ones, and hell, you can train whatever you want.

Well, and I think the interesting thing too is, okay, mental health and relationship advice, these are very private matters, right? And if you can make a specialized model that can be delivered and run on my computer, I can turn off my network, run a prompt, get my answer, and know that nothing is leaving my box. That's what I think is really exciting about the decentralization we're talking about. And there are a few other parts of that too: there's getting it onto your machine, and then why don't we talk a little bit about the other ways people are delivering these models and giving us access to them on our computers in a way that provides a little more privacy. Maybe even living in the browser, like WebLLM or something like that, right?

Yeah. So you talked about the WebLLM piece, what was that all about? It was literally running the whole thing in the browser?

Yeah, pretty much.
So there's a thing called WebGPU which, if you're running Chrome, you might already have, or it might be behind an experimental flag you have to enable. It basically allows the browser to access the native GPU power on your computer, and so people have been taking these models and making them run purely in your browser. Now, keep in mind, when you run some of these, your browser might have to download a gig or so to have that model running in the browser itself, so you need a modern computer; you're not going to be running this on some Dell that's ten years old with half a gig of RAM. But there's Transformers.js, which is a library you can pull in through a CDN, or even through npm, and you basically say, here's the model I need, and when it runs it will asynchronously download the whole model and then execute your function. I used its classifier to do a comparison against some other classifiers, and it was great; it was 80 megs, so you did have a little hitch when the thing first loads up and needs to download the model.
language models into this as well but
those I you haven't seen like it's not
good right now it's just not good um you
know it's kind of gibberish and
sometimes it puts out stuff that you're
just like I have no idea what this is
but it's absolutely getting us to that
point where you will hypothetically just
be able to run this purely in your
browser and as more and more people
start figuring out how to compress these
models because right now they're all
just super fat right like a good uh a
seven a 7 billion parameter model is
probably going to be about 3 gig um a 13
billion parameter model is going to be
about 7 Gig um and then the 70 billion
parameters are you know like 30 or
something like that 30 gig um and so but
people are figuring out different
techniques to basically compress these
models to where they're a lot smaller
without losing like you can you can
scrunch this thing down to 20% of its
size without losing 80% of its
functionality you might lose like 20% of
its functionality yeah using quanti
quantization I think I'm pronouncing it
right where you basically you're taking
the weights and you're you're converting
the weights to be less precise but
they're still a number and so it can
still do some of its math but the two
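Quantization, in the simplest possible terms: store the 32-bit floating-point weights as small integers plus a scale factor, so the file shrinks dramatically while the math stays approximately right. Here is a toy sketch of symmetric 8-bit quantization; real schemes, like the 4-bit quantized model files commonly served by these local runners, are more sophisticated, so treat this purely as illustration.

```typescript
// Toy symmetric 8-bit quantization: 32-bit floats -> int8 plus one scale factor.
// Storage drops roughly 4x; each weight is recovered only approximately.

function quantize(weights: number[]): { q: Int8Array; scale: number } {
  const maxAbs = Math.max(...weights.map(Math.abs));
  const scale = maxAbs / 127; // map the largest weight to +/-127
  const q = Int8Array.from(weights.map((w) => Math.round(w / scale)));
  return { q, scale };
}

function dequantize(q: Int8Array, scale: number): number[] {
  return Array.from(q, (v) => v * scale);
}

const original = [0.0213, -0.874, 0.402, -0.0009, 0.655];
const { q, scale } = quantize(original);
console.log(dequantize(q, scale));
// Close to the originals but slightly off: that small loss of precision is
// the "lose maybe 20% of functionality" trade-off being described.
```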
But the two that are kind of leading the way right now... the first is Ollama, that's ollama.ai. Windows isn't available yet, but Linux and Mac are, and it basically runs locally, and the only interface you have for it is your terminal. So you say, ollama run llama2, and if you don't have the model downloaded it's going to download it, and then it just drops you into a chat interface. I mean, these are really simple delivery mechanisms, and you're seeing a community form around it. Any time I see a community around something, you know they're on to something.

Right. I'm on their Discord, I was on Ollama's Discord the other day, and people are talking about all kinds of different models and requirements; people are going bananas for this stuff. And you scratch the surface, like, all right, what are they doing? Long story short, there's a use case for building these delivery mechanisms to get these things onto your computer, to get these things into your hands, and not just to say, okay, the only things that are going to exist in this space are ChatGPT 3.5, 4.0, Dolly. You're going to see a huge ecosystem of these models and of the delivery networks associated with them.
So there are more and more apps coming out that are open-source chat interfaces, similar to OpenAI's, that you just run on your local machine. LibreChat is one that's really impressive; whoever the maintainer is, he's put a lot of work into it. You can basically add your ChatGPT API key; I think it supports PaLM 2, but you need to have that, and it supports Claude, but you need an Anthropic key, which I don't know if anybody's got right now. But you'll eventually see, no question, those things just adding Ollama as another option. LibreChat doesn't have it yet, but they absolutely have to be working on it, and if they're listening to this and they're not, they should be, because that, in my world, is the perfect scenario: you've got a great interface for interacting with your chats, you've got access to plugins, you've got access to all of this, but then your LLM is not a cloud provider, it's running entirely locally.

Is this a desktop app you're talking about?

It's a React, web-based app right now. They might have one for Electron, I'm not sure, but yeah, it's just running in your browser.

That was going to be my question. I got Ollama the other day, and of course I'm very comfortable on the command line, but it doesn't seem like that far of a leap to package that into a little Electron app or some other desktop app.

The REST API on your local machine, exactly, that's the key. As long as we can hit localhost on port 1473, everybody knows how to do that, right? And thinking about what I'd like to do with the Big Cheese app too: right now it obviously uses ChatGPT for almost everything, but why not just be able to say, you've got your own API, you've got Llama running, just click here and we'll hit that instead. The only question I have is that I ran into an issue with Chrome: I was running an app on a normal domain, and it didn't want to let me hit localhost because it's insecure, so I didn't like that cross-domain security issue.
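That "hit localhost" idea is exactly how Ollama works: once a model has been pulled, for example with ollama run llama2, a local REST API is listening (Ollama's documented default port is 11434), and any app can post prompts to it without anything leaving the machine. A minimal sketch:

```typescript
// Talk to a locally running Ollama server; nothing leaves the machine.
// Ollama's default port is 11434, and stream:false returns one JSON object.

async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama2", prompt, stream: false }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}

askLocalModel("In one sentence, why do on-device models help with privacy?")
  .then(console.log);
```

A page served from a normal web domain can still be blocked from calling localhost by the browser's security rules, which is the cross-domain issue just mentioned; a desktop wrapper such as an Electron app sidesteps that.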
But of course, the other one is GPT4All, and that was the first one that I played around with. It's an application you just download, it's got its own GUI, and it's got a bunch of different models that you can just go and download. So it's a very similar kind of thing, but it's got its own desktop application that you can run, and it's available for Windows, Linux, and Mac as well. I think that's another really good one to potentially play with, because if you're not using AI right now, you're at a disadvantage.
Yeah. I was just talking with some friends over the weekend, and none of them are in the technology sector, and I asked them, are you using Bard or ChatGPT? Their answer was no, and I was surprised by that. So I started talking to them about use cases: hey man, you can just throw in a couple of bullet points of a note you want to send somebody and say, format this into a professional response to so-and-so. Every day I'm probably saving an hour of brain power on things that I'm not very good at, which is formatting text into a human sentence. You know, I'm struggling with it right now; I've responded to so many emails...

Well, the other thing, I'll use it to write test cases, to write tests. It's phenomenal for writing tests. Here's my code, or here's my module, now go write my unit tests for this, and it goes and it will cover it.

The deal, though, for a lot of the people who are really utilizing AI in their daily work, is that it's mostly to augment the mundane stuff that you don't want to do, so that you can focus on the really creative stuff that makes you a human and not a robot.

Absolutely. And the corporations right now are looking at this and they're seeing dollar signs, but they're asking, how do we do this in a secure, private manner that's scalable, and how do we fit it into our mechanisms, our audit trails, and our security procedures?
Absolutely. I talked to a CIO last week, had lunch, talked about AI, talked about all kinds of different things. He was laughing half the time about the way he thinks certain companies are going about trying to sell their products to the enterprise, because his argument was, they're not going to buy this, they're not going to buy from you. It's like the scene from Tommy Boy: you can get a good look at a T-bone by sticking your head up a bull's ass, but you'd rather take the butcher's word for it.

Great line.

Completely unrelated, but the bottom line is that these companies have been through the trials and tribulations of shadow IT. They've seen people try to put all these different apps into their ecosystems, and they're just now getting a handle on that; they're using tools like Zylo to audit their financials and find all the things people have bought and all this data flow, and now people are sending their trade secrets to AI platforms? Good luck. So the idea is that a CIO who's in a Microsoft shop, or in a cloud that they're very comfortable with, is going to be buying from those big players. And for the little guys, the startups, you need to go through those exercises of how your customers buy your products, or how they're going to buy your product, and figure out where that fits in from a technology and architecture perspective. As contributors, we're trying to figure out the best way to leverage AI for our daily work, and I think it's an interesting blend, but there are many sides to this. Selling SaaS is a conversation for a different day, but at the end of the day I think the first-mover advantage, and being really big, has helped companies like Microsoft and OpenAI, and I think it always helps the big companies. Whenever something big happens, they're going to adopt it. You know, Google has the new AI software where you can take 20 different photos and now we're all smiling; that's an entire company in itself that just got shut down. Or the Twitter posts that go up every other day: Google just killed a hundred different startups today.
No, I think this has been great. The thing that I'm picking up from it, correct me if I'm wrong, is that as we look at privacy in the new LLM world, there are these major players, ChatGPT and the different ones you've mentioned, and they own the data. Whatever you put into it, they get to keep; even if you're getting prosecuted, they can actually ask ChatGPT for it. By the way, everybody should look into that. It's just another box they can check. But it seems to me that the decentralization in platforms like Ollama is what's going to make this stuff much more accessible, and make it possible for maybe the more average Joe to have privacy.

Yeah. I think you're going to see personal LLMs. I think we will all have our own LLM that's specifically built around us, and my guess is that Apple is going to be a part of that, because what's going to be the best LLM? One that's fine-tuned on you.

Exactly. And who do you always trust with your data?

Yeah, and especially if it can run on device. Since they now have their own silicon, they have their own models, they have their Neural Engine in there, and they can really fine-tune these models to be hyper-specific to their hardware, which is what their special sauce has always been. And then you don't need as much horsepower: an Android might need 32 gigs of RAM where the iPhone can run on 8 gigs of RAM and have comparable performance, because it's so specialized to their hardware. I think we'll see the same thing here.
Ajax is what they're calling it, which... I hate that name, I can't believe they called it that. They haven't been doing web dev very long, have they?

Yeah, their internal names, I guess, have always been awful. But Ajax is their internal name for this, and right now everything I'm seeing says it is still cloud-based AI. With the release of the new Apple Watch, though, they started to incorporate certain things that happen on device only, so I can actually ask Siri questions about my health and it never leaves the device. So they do have some sort of AI there.

They've been doing AI. You can call anything AI. Like changing someone's face: is that AI, or is that just really good software? I don't know. You can go to your phone right now; any time I want to show someone the last fish I caught, I just go to Photos and search "fish." Is that AI?

If you had just built that app, and it took all your photos and you could search for fish and it would find fish, you'd call that AI.

It's been in the Photos app for years.

That's true, okay. It still can't tell the difference between my kids, though, because they all look so similar; all my kids are basically twins, even though they're years apart. You can't tell the difference. So yeah, if Apple can just tell me what I should cook for dinner tonight, I will give them a lot of money.
Well, everyone, this was the Big Cheese AI Podcast. We came to a great conclusion: that of course Apple will be the one that saves us all. I'm Andre Harakas, and we've got Sean Hise, Jacob Wise, and Brandon Corbin. We are the Big Cheese AI team. Stay tuned for the next episode. Thank you.