Reviewing the AI Battlefront | The Brainstorm EP 38
Summary
TL;DR: In episode 38 of The Brainstorm podcast, the hosts discuss the transformative impact of AI on enterprise productivity, using Klarna's AI software rollout as a case study. They explore how AI can multiply knowledge worker productivity, leading to significant cost savings and improved customer experiences. The conversation also touches on the challenges faced by tech giants like Google in developing competitive AI models and the strategic implications for companies considering AI adoption. The hosts emphasize the rapid pace of AI innovation and the need for organizations to adapt to stay relevant.
Takeaways
- 🚀 AI software is seen as a productivity multiplier for enterprises, with potential to boost knowledge worker productivity significantly.
- 📈 By 2030, AI software spend is projected to reach $14 trillion, a substantial increase from current annual spend of roughly $4-5 trillion.
- 🤖 Klarna's AI implementation in its customer service operations has led to a roughly eightfold increase in productivity, saving the company about $40 million annually.
- 💡 AI adoption in enterprises is not about replacing employees but enhancing their bandwidth and efficiency.
- 📊 The success of AI implementations like Klarna's is expected to drive more customer service interactions due to improved efficiency.
- 🔍 Companies that aggressively deploy AI are likely to see better margins, reduced churn, and improved competitiveness.
- 📝 The deployment time for AI models can vary, but the impact on business operations can be rapid and transformative.
- 🌐 Large tech companies like Google and Apple are expected to eventually integrate AI advancements into their consumer-facing services.
- 🔄 The balance between AI model performance and distribution capabilities is crucial for both consumer and enterprise adoption.
- 🔄 Open-source AI models may become a strategic play for companies like Meta, offering a backend infrastructure for enterprises.
Q & A
What is the main theme of The Brainstorm episode 38?
-The main theme of the episode is the impact and potential of AI within enterprises, specifically focusing on AI software as a productivity multiplier for knowledge workers.
How much value is AI expected to deliver to enterprises by 2030?
-By 2030, AI software spend is expected to reach $14 trillion, which is a significant increase from the current annual spend of four to five trillion dollars.
What was the press release from Klarna about their AI software implementation?
-Klarna's press release highlighted that they are saving approximately $40 million annually by using AI software in their customer service operations, which has improved customer interaction times and reduced the need for human agents.
How has Klarna's AI software improved customer service efficiency?
-Klarna's AI software has reduced the time to provide an answer to a customer from 11 minutes to 2 minutes per interaction and decreased the likelihood of needing to follow up with a customer agent by 25%.
What is the estimated productivity improvement from Klarna's AI software?
-The AI software has resulted in an estimated eightfold productivity improvement in terms of time to provide an answer to a customer.
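The roughly eightfold figure can be reproduced from the numbers cited in the episode. A back-of-the-envelope sketch; the assumption that the 25% reduction in follow-ups is folded in as expected extra interaction time is ours, not stated explicitly by the hosts:

```python
# Back-of-the-envelope check of the ~8x productivity figure.
# Assumption (ours): the 25% drop in repeat contacts is modeled
# as expected extra minutes per resolved issue.
old_minutes = 11           # human-handled interaction
new_minutes = 2            # AI-handled interaction
followup_reduction = 0.25  # 25% fewer repeat contacts

raw_speedup = old_minutes / new_minutes                  # 5.5x on interaction time alone
old_effective = old_minutes * (1 + followup_reduction)   # 13.75 min; the hosts round to ~15
effective_speedup = old_effective / new_minutes          # ~6.9x; "2 minutes versus 15" gives 7.5x

print(raw_speedup, old_effective, round(effective_speedup, 2))
```

Depending on how the follow-up reduction is counted, the multiple lands between roughly 5.5x and 7.5x, which the hosts round to "roughly eight times."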
How does the deployment of AI in customer service impact the overall customer experience?
-The deployment of AI in customer service is expected to make interactions more productive and pleasant, potentially increasing the likelihood of customers choosing to interact with companies for their needs.
What are the potential implications for companies that do not adopt AI technology?
-Companies that do not adopt AI technology may fall behind in competitive landscapes, as AI can significantly improve efficiency, customer service, and overall business performance.
What is the significance of the performance of AI models in the enterprise context?
-In the enterprise context, even marginal performance improvements in AI models can be extremely valuable, as they can lead to significant cost savings and efficiency gains in operations.
How does the distribution of AI models affect their adoption and impact on businesses?
-The distribution of AI models is crucial as it allows for broader adoption and more data collection, which in turn can improve the model's performance and adaptability to various business needs.
What is the current state of AI development at Google compared to other companies?
-Google appears to be lagging behind in AI development, as their recent release of the Gemini Ultra model is not as performant as state-of-the-art models from a year ago, despite their vast resources and data.
Outlines
🤖 AI Software as a Productivity Multiplier
The discussion begins with the potential of AI software as a significant productivity multiplier for enterprises, particularly for knowledge workers. By 2030, it's estimated that AI could boost productivity by up to nine times, with enterprises likely to invest $14 trillion in AI software. The example of Klarna's AI software implementation in their customer service operations, which reduced interaction time and increased efficiency, is highlighted as a case study. The conversation emphasizes that AI is not about replacing human workers but enhancing their capabilities, leading to better customer experiences and significant cost savings.
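The back-of-the-envelope spend model the hosts describe can be written out explicitly. This is a sketch of their stated assumptions as given in the episode, not an independent forecast:

```python
# Sketch of the AI software spend model described in the episode.
# All inputs are the hosts' stated 2030 assumptions.
knowledge_wages_2030 = 30e12   # ~$30 trillion in knowledge wages paid
max_productivity_boost = 9     # up to 9x productivity per worker
adoption_rate = 0.5            # ~50% of enterprises/workers adopt
capture_rate = 0.10            # enterprises pay ~10% of the boost as software spend

avg_boost = max_productivity_boost * adoption_rate          # 4.5x average boost
ai_software_spend = knowledge_wages_2030 * avg_boost * capture_rate

print(f"${ai_software_spend / 1e12:.1f} trillion")  # $13.5 trillion, ~$14T as cited
```

The product of wages, average boost, and capture rate gives $13.5 trillion, which the hosts round to the $14 trillion headline figure.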
🚀 Rapid AI Deployment and Its Impact
The conversation delves into the rapid deployment of AI models and their impact on customer service interactions. It's noted that AI can handle a higher volume of customer interactions more efficiently, leading to a shift in how companies manage customer service. The deployment timeline for Klarna's AI model is discussed, with speculation about the use of prompt engineering and retrieval-augmented generation. The potential for AI to revolutionize customer interactions and the competitive landscape for enterprises is also explored.
📈 AI's Role in Revenue and Sales Growth
The discussion highlights AI's potential to accelerate revenue and sales growth by improving customer service and support. AI's ability to provide instant information and assistance can shorten sales cycles and improve conversion rates. The idea of AI chatbots being able to understand and cater to specific customer needs is also brought up, suggesting a future where AI can provide personalized solutions and execute transactions on behalf of customers.
🌐 Google's Struggles with AI Innovation
The conversation addresses Google's challenges in AI innovation, particularly with their Gemini Ultra model, which has not met expectations in terms of performance. The discussion contrasts Google's struggles with the impressive performance of Anthropic's Claude 3, which outperforms Google's model in several areas. The conversation also touches on the potential for AI to disrupt Google's business model and the company's need for a significant organizational shakeup to stay competitive in the AI space.
🔄 Balancing AI Performance and Distribution
The discussion explores the balance between AI model performance and the ability to distribute the model effectively to consumers. It's noted that while performance gains are important, the ability to reach a wide audience is also crucial. The conversation also considers the impact of AI on enterprise decision-making, with the importance of choosing the right AI model for business operations. The potential risks associated with releasing AI products and the challenges faced by large companies with broad distribution platforms are also discussed.
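One reason marginal performance differences matter more than they first appear, raised later in the transcript, is that per-step error rates compound in multi-step (agentic) workflows. A toy illustration of that arithmetic:

```python
# Per-step success rates compound over a multi-step workflow:
# 0.99 vs 0.999 looks marginal per step but diverges over 20 steps.
steps = 20
for per_step in (0.99, 0.999):
    end_to_end = per_step ** steps
    print(f"{per_step}: {end_to_end:.3f}")
# 0.99^20  ≈ 0.818 (about one in five workflows fails somewhere)
# 0.999^20 ≈ 0.980
```

A 0.9-point gain in per-step reliability turns into a roughly 16-point gain in end-to-end success over twenty steps, which is why the hosts argue performance differentials grow in importance as models become more agentic.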
📊 AI Market Dynamics and Company Strategies
The conversation concludes with a discussion on the dynamics of the AI market and the strategies of various companies. The potential for open-source AI models and the distribution power of companies like Meta are explored. The conversation also touches on Apple's approach to AI and their potential strategy, as well as the risks associated with releasing AI products and the challenges faced by incumbents in the AI space.
Keywords
💡AI software
💡Productivity multiplier
💡Enterprise
💡Knowledge workers
💡Call center agents
💡AI deployment
💡OpenAI
💡ROI (Return on Investment)
💡Innovator's Dilemma
💡Chatbots
Highlights
AI software within the Enterprise should be a productivity multiplier for knowledge workers.
By 2030, there will be roughly $30 trillion in knowledge wages paid.
AI has the potential to boost knowledge worker productivity by nine times.
Enterprises are expected to pay a small percentage of the productivity boost for AI software.
Klarna's AI software is saving them $40 million annually by handling two-thirds of their customer service chat volume.
AI can reduce customer interaction time from 11 minutes to 2 minutes per interaction.
Klarna's AI implementation resulted in a roughly eightfold productivity improvement.
AI is not about replacing employees but increasing bandwidth capacity.
AI can lead to more productive and pleasant customer interactions.
AI software can drive more customer service interaction volume.
The deployment of AI models can be rapid, with some companies starting in January and launching soon after.
AI models can be fine-tuned with prompt engineering and retrieval augmented generation.
AI adoption can lead to better margins and reduced churn in businesses.
AI can accelerate revenue and sales growth by better serving customers with their questions.
AI models can reduce sales cycles in complex industries by providing instant information.
Google's Gemini Ultra model is not as performant as OpenAI's GPT-4 and has faced controversy over its approach to bias.
Anthropic's Claude 3 model benchmarks favorably against GPT-4, showing impressive performance gains.
The importance of AI model distribution versus marginal performance gains is a key consideration for consumer and enterprise adoption.
Enterprises may prioritize stability and backing over marginal performance improvements when choosing AI models.
Meta's strategy of providing open-source AI models as backend infrastructure for Enterprises is a credible approach.
The pace of AI innovation is rapid, with models improving significantly every few months.
Google's late release of an AI model that is not as performant as state-of-the-art reflects organizational challenges.
Apple's strategy of entering markets late and with a strong product could be applied to AI, as they wait for the field to mature.
AI products carry risks due to their unpredictable capabilities, which can only be fully understood through release and user interaction.
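The "prompt engineering plus retrieval-augmented generation" setup the hosts speculate Klarna used can be sketched as follows. This is a generic illustration, not Klarna's actual implementation: the document store, the relevance scorer, and the idea of passing the prompt to a hosted model are all hypothetical placeholders.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for a support bot.
# Hypothetical throughout: the doc store and scorer are toys, not any
# vendor's actual API.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant corporate docs for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prompt engineering: pin behavior up front, then ground with docs."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "You are a customer service agent. Answer ONLY from the "
        "reference documents below; escalate to a human otherwise.\n"
        f"Reference documents:\n{context}\n\n"
        f"Customer question: {query}"
    )

docs = [
    "Refunds are issued within 14 days of a return being received.",
    "Shipping to the EU takes 3-5 business days.",
    "Accounts can be closed from the settings page.",
]
prompt = build_prompt("How long do refunds take?", docs)
# In a real deployment, `prompt` would be sent to a hosted model's API.
```

The point the hosts make is that this grounding layer, not custom model training, is likely most of the work, which is why such systems can be deployed in weeks.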
Transcripts
Welcome to The Brainstorm, episode 38. Today we're talking AI. There's just so much happening all the time. I feel like this is a good first concrete example of all of the work we've been doing, and all of the forecasts saying it can be this impactful. It's easy to put on a PowerPoint presentation; it's different to see it in practice. But Brett, what is the news with Klarna that's making people really appreciate the power here?

Yeah. Well, first, actually, let me describe how we've been thinking about AI software and how much value it can deliver. It's basically that AI software within the enterprise should be a productivity multiplier for the enterprise, for all knowledge workers. And that's a big category, because by 2030 there's going to be roughly $30 trillion in knowledge wages paid. The way we had modeled it, and still model it, is that you have the opportunity to boost knowledge worker productivity by nine times, but maybe 50% of enterprises or knowledge workers will have done that, so it averages out to a four-and-a-half-times knowledge worker productivity boost. Then enterprises are going to pay for that software, not the full four and a half times, but some small percentage of it; we model roughly 10% of that productivity boost. You do all the math and you say, hey, there should be $14 trillion in AI software spend by 2030, which is a very, very large number, because across all of software there's only four to five trillion dollars being spent per year.

And so then Klarna, in preparing to, you know, spit-shine their IPO, put out a press release saying that they are saving roughly $40 million annualized by launching AI software against their call center agents. More than, I think it's two-thirds, of their call center volume, or really not call center, it's chat, is being handled by AI, and it can do it in two minutes per customer interaction versus eleven minutes. And it actually reduces the rate at which people, having chatted with the customer agent, have to come back and chat with it again by 25%. So it's roughly an eight-times productivity improvement in terms of time to providing an answer to a customer. And if you do the math on how much they're saving in salaries versus how much they're spending on OpenAI, you can derive that it's maybe $50,000 per month on OpenAI versus something on the order of forty-fold of that that they were previously paying for customer service agents. So there are a lot of puts and takes there, but net it's a huge savings for Klarna at a better experience for end customers. And it roughly conforms to the way we're mapping this overall space, where enterprises pay a fraction of the amount for the productivity boost they get, they end up providing better service to customers, and they get a big ROI on that AI software
spend.

And this wasn't a firing of employees either, right? I thought that was another interesting point. It's not that they're replacing people; it's increasing the bandwidth capacity of everyone. It's a growing-pie, not a shrinking-pie, type of environment.

Yeah, and to me that's almost the most interesting thing about this. Think about how often you've contacted a company to try to get something fixed. You never want to do that, because it's always such a terrible experience. If you call them, you're on a call center tree, and you have to say "no, operator, operator" to get to a person. If you end up in one of those chatbots, the person doesn't really help you, their responsiveness is slow, you get distracted by something, and then they time out, because they can't have a human agent just waiting for you to respond to some question they asked that you're only half paying attention to. So what do you do instead? You go to Google to Google your problem; you try to solve it yourself without interacting with the company, because the company interaction is so unpleasant. Well, actually, this should result in those company interactions being much more productive and pleasant, so people will do it a lot more. And so at least a base-level way to model it is: if this is eight times more productive, if you're only spending two minutes instead of eleven minutes plus a 25% chance that you have to call back or type back again, so two minutes versus fifteen, you're probably going to be something like eight times more likely to go through that channel. And so you're going to drive a lot more customer service interaction volume than you had before, which then, for specialist cases, will rely on an actual human agent dealing with the harder problems at the back end of that
sequence.

Do we know how long the deployment was here, as in how long it took to adapt this OpenAI model to Klarna's data to get to this point? I mean, OpenAI's ChatGPT hasn't been around for a long time, so we're only working with a few years here, or even less. Did they give that figure? I'm curious.

Not to my knowledge, and I'm not even sure it's a custom, fine-tuned GPT-4 model. I'm fairly certain they're probably doing some prompt engineering up front as they set up the agent, and they may be doing retrieval-augmented generation — and they almost certainly are — where they have some corporate docs that the system can refer to in providing answers to people. It's not quite as simple as developing your own GPT on OpenAI, but it may be almost as simple as doing that and then experimenting with it and making it more and more complex. But they just started at the beginning of this year, or really the end of January, to launch it. And GPT-4, or at least GPT-4 Turbo, hasn't been out for that long. And I think you can extend it to every customer-facing, sales-agent-type role: they should and will have an implementation like this that people will
like.

Right, well, to what extent do you think this is just a polishing-up before a potential transaction? Or is it that they are particularly good? Because, as you said, this should be applicable quite broadly, and in theory there should be press releases like this every day for the next three years. So is it that they were good and executed well, like, wow, Klarna did something good here? Or do you think it's more broad-based and others will do the same? It seems odd that it's just one company saying this. I guess that's what I'm trying to get at.

I think if you imagine you're a company trying to sell yourself to the public, and you have a set of backward-looking financials, and you're wanting to make a really tangible case — hey, here's how I'm going to save $40 million off the SG&A line, or I wonder if that's actually in the gross margin line, but probably SG&A — then you do the press release specifically to point to that, to say: yes, we have this pathway to margin expansion. And I agree with you: it probably won't be accompanied by a press release, but over time you should see companies that are aggressively deploying this stuff have both better margins and actually reduced churn, which in a lot of business models is the more important driver of growth. And so I think there is going to be a real sorting of enterprises based upon their eagerness and ambition in deploying AI against internal systems that right now are operating fine but, in a competitive-landscape sense, are going to fall behind very quickly. I also think this press release probably catalyzes companies that have had operations officers saying, "well, I'm not sure there's a there there, and this is not something we can do because it poses all kinds of risks," who are now getting calls from their CEOs saying, "what you're telling me does not conform to what this company is presenting, so we really need to look at this." And it's not just the opex savings, not just the cost savings: the responsiveness of the organization to inbound sales, outbound sales, customer service — that's where the real competitive differentiation
is going to happen.

Yeah, and I think this should, to your point, Brett, accelerate revenue and sales growth as well, because if you can better serve customers when they have questions, that helps conversions too. Or just supporting agents, you know, sales agents — I think that'll end up accelerating revenues across the board. It probably will even reduce sales cycles in more complex industries: if you're able to grab all the information you need instantaneously from a chatbot, that helps those inbound, first-touch points with new potential customers.

A hundred percent. I could use that just for — I was looking at cellular plans, right, and even within the same company it's impossible to figure out which of their plans is actually what I need. And then you're like, okay, if I sign up with four other people, that really changes the calculus. Then you go cross-company. Get me a chatbot for that, right?

Well, it'll be interesting to maybe have, like in your example, Sam, a chatbot sit on top across multiple companies, so it's a way to compare and contrast different commodity-type products out there in the market.

Yeah, and I think actually, if companies don't aggressively adopt this, that's what they're going to be subject to. As in, companies should really want you to directly interact with them, because otherwise there's going to be a great white shark basically negotiating on behalf of the customer to get the best product, that then commoditizes an entire sector, using an AI one level above them. And it really cuts against Google's business model. Because at first it's an answer agent: what's the best cellular plan for me, given I have a wife and two kids, and one of the kids is going to get a smartphone, or whatever your particular situation is, and I'm going to travel internationally — and you can question-and-answer it. It becomes a whole other level of interestingness and aggressiveness if it also then executes and signs you up for the service, and maybe signs you up for a hybrid of two different services that particularly meet your needs. That'll be really interesting.

And to your point, Brett, it becomes even more compelling when it has already understood all of those points you just brought up — that you have a wife, that you have two kids, that you are planning to go to Europe — and it can just direct you to the best solution without you even having to feed it that information. It's already aware of what, contextually, you would be interested in, given all the information it's been collecting on you. I think that's where it becomes something that sits atop the current market structure we're used to today for marketplaces and some of these other industries. And maybe, on the topic of
cutting against Google: the Claude 3 announcement comes out, and Google — I don't know, there are a lot of videos of Sergey speaking, and it seems like people are hoping he'll right the ship after what happened with Gemini. What do you think is going on with Google? And then we can talk about Claude 3 and its, I would say, pretty impressive performance from what we've seen thus far.

Yeah. So Claude is Anthropic's — call it a large language model company's — name for its language model, and they just released, or announced, Claude 3, and you can play with it starting today. It benchmarks very favorably versus GPT-4. In fact, given the benchmarks they present, it looks like it's better than GPT-4 in a number of areas, including coding, which previously was an area where Claude was weak relative to GPT-4. So, very impressive results. And another small company you may have heard of also released a large language model that was supposed to be competitive with GPT-4, say a month or so ago, and this is Google and its Gemini Ultra model. A whole controversy has emerged around Gemini, because the way they've draped reinforcement learning over it and really tried to guide the model to prevent it from producing biased information has resulted in a kind of, not whitewashing, but color-washing of history in a way that's extremely embarrassing: if you ask it to generate an image of four German soldiers from 1935, they will be of all different kinds of ethnicities, many of which were at that time about to be targeted for extermination by Germany. But beyond that, the model they released is also just worse than GPT-4 in a number of different areas, and they did all kinds of PR spin to present it as better. It just is not as performant, and it's being released a year later. And so, given the resources Google has, the data they have, the talent they have, that they can't produce a model that is even competitive with the state of the art from a year ago just, I think, shows how much wasted energy is going on inside the organization at a very high level. I think that's the best way to interpret it: there's lots of stuff happening there that is not contributing to the end product. You can interpret it in all kinds of ways; my chosen interpretation is that they're in the thick thorns of the innovator's dilemma. Search itself is a cash flow geyser, and people don't want to do anything to disrupt that cash flow geyser. Unfortunately, language models, by their nature, actually disrupt it, and so even if the organization recognizes that's the future, all of the people inside it are going to be fighting against jumping into that future. What do you all think?
Well, Brett, I actually have a follow-up question to this, and I don't know that I've asked you before, but it's more on the marginal performance gains we're seeing across the board on LLMs versus distribution, and how you weigh the importance of a model performing, call it, 10-15x better than another model that is a bit out of date, versus being able to distribute that model to the end consumer. Because I think of Google with Gemini: if they had gotten it right, they do have that distribution to billions of consumers and internet users. And we're still waiting to hear from some of these mega-cap tech companies — Apple, Amazon — that have assistants that were kind of the precursor to how we've been introduced to large language models. So I'm curious how you balance the performance gains we're seeing versus being able to get that model into a consumer's hands. Even Claude versus OpenAI and ChatGPT: ChatGPT has a significant head start — they're already off and running with distribution. Claude, I don't know that many listeners on this Brainstorm would have even known about if we didn't bring it up a few times
before.

Yeah, I think it's a fair question, and there's probably a different answer on the degree of importance on the consumer side versus the enterprise side. You know, Siri is a joke right now; Apple is obviously way behind in AI — it literally is woefully incompetent relative to even talking to GPT via the ChatGPT app. And when they upgrade Siri, they're instantaneously going to be hitting all their iPhone users with the default voice mode being much, much more powerful — assuming they ever update Siri — much more powerful than anything else. And more important even than distribution, or related to it: given that distribution footprint, conceptually they should also be able to get the product on an improvement trajectory, because they're getting all that feedback data from users. In some ways that's the key question about the consumer-facing enterprises: Google also should be able to push these products to all of their office-suite users and all of their Android phone users — hey, here's the default; if you don't change anything, this is the voice interface you get — and you, the consumer, don't even know it's worse than GPT-4. You think it's basically state of the art, and over time it becomes state of the art because of the feedback you're providing. I think that's essentially a valid argument to make.

On the enterprise side, an enterprise buyer is saying: hey, I'm going to wrap an API around this; it's going to inform my sales center — like in Klarna's case — or my customer service center, and a marginal difference in the rate of responsiveness and correctly delivering an answer to a customer is worth maybe millions of dollars to me. So I'm not just going to say, "oh, you're Google, I'm going with the brand name"; instead I'm going to test them in competition with each other and go with the one that wins. So I think that's one potential answer.

But at the same time, given how fundamental these are, I think enterprises are still hesitant, as they always are, to go with someone new. It's like: I'm going to build all of this on a company that's going to go bankrupt in a year, and then start over? So there is that constant battle in the enterprise world.

Yes, and it's more so with the incumbents. OpenAI relies on the Microsoft relationship to some degree to attest to the fact that they're going to be around, and even the corporate shakeup, or non-shakeup I guess, at OpenAI has probably spooked some enterprise customers. And similarly, Anthropic took funding from both Amazon and Google in part, I think, to signal to enterprise customers: if Anthropic gets onto the rocks in terms of its balance sheet, there is probably a ready and willing buyer that will keep its services running. And enterprises probably take some comfort from that.
But if you get into a state where these models are not just answering the incremental chat but having to do more and more complex workflows — they're answering the chat, and they're also integrated with your backend systems, where they're executing the order in some way — then the marginal difference in performance compounds, because any error rate in any step of the process compounds. If you have 20 steps in the process, then 0.99 to the 20th power versus 0.999 to the 20th power is actually a really meaningful difference. And so as models get more agentic — as in, they're executing multiple steps and deciding what next step to take — the performance differentials become much, much more important. In some ways these benchmarks are a little silly: am I going to be able to tell the difference in how well Claude writes versus Gemini Ultra on "summarize this paragraph"? Maybe not, or maybe only marginally, and maybe it's very subjective. What they are demonstrating is potential capability for a future in which they're doing much more complex things.

Yeah, I think I agree with the take that performance will matter more on the enterprise side, but one nuance here is that you still have human decision-makers in the loop, as in there is a CTO involved in having to pick which model the company goes with. And given how fast the space is moving, the safe pick is going to be, call it, Microsoft and OpenAI, because Microsoft is behind it, or even Claude, with backers such as Amazon and Google. Whereas if they went out of their way to find the most performant model today, but it didn't have the right backing or the right management team, and then in a couple of weeks — given how fast this space moves — it's already been lapped, then that CTO's seat and influence within the organization becomes questionable. Versus, if they just picked OpenAI and Microsoft, and Microsoft and OpenAI don't deliver, that can be forgiven, unlike going out of your way to find something that is just slightly more performant. I think over the long term performance will matter, but it will be at the margin. I think the market has to consolidate first.

Yeah. Well, I'd say this
whole conversation has been closed uh
ecosystems right it's like open open
source has pretty strong staying power
whether or not it gets better is a is a
different question but right um you know
picking an open source solution is the
alternative and it I think it is
interesting that the conversation has
not
gone in that direction even though I
think there's a big belief that open
source is the long-term winner here I
Well, I think that goes back to your point about distribution, Nick. Meta probably has the most powerful distribution engine for digital content in the world, and that means, one, they have a huge pool of data they can use to train these models, and two, they are espousing and laying out a strategic playbook of delivering open-source models that can serve as the AI-model backend infrastructure for enterprises. They're willing to give it away, because if they have the most powerful model with the best tooling against it, that will make their consumer-facing experiences that much more compelling, and potentially allow them to navigate around the Apple iOS and Android platform duopoly in a way that's better for their long-term interests. I think that's a credible and super interesting strategy, because then their distribution to the consumer improves their model, feeding into this backend open-source engine for the many enterprises that build on top of it, and no enterprise is going to have concerns about Meta going bankrupt. They make the money off of becoming the customer-facing interface for companies, through WhatsApp, or at the consumer level, choosing which flights you take; it actually becomes a potential replacement for traditional search activity today. And it's all packaged: because they can run very powerful models with a very light footprint, operating on open source, and they have a ton of data, it ends up in the eyeglasses that Nick's going to be wearing a couple of years from now, and probably has a pair of today, that look stylish and that people will rely on.
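The error-compounding argument from earlier in the conversation, a 0.99 versus 0.999 per-step success rate across a hypothetical 20-step agentic workflow, works out as follows (the 20-step figure is the example from the episode, not a measured benchmark):

```python
# Per-step success rates that look nearly identical on a one-shot benchmark
# diverge sharply once a model must chain many steps in an agentic workflow.
STEPS = 20  # the hypothetical 20-step process from the conversation

for per_step in (0.99, 0.999):
    end_to_end = per_step ** STEPS  # every step must succeed for the workflow to succeed
    print(f"{per_step} per step -> {end_to_end:.3f} end-to-end")
```

A 0.9-point gap per step (99% versus 99.9%) compounds to roughly 82% versus 98% across the whole workflow, which is the sense in which small performance differentials matter much more for agentic models.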
So yeah, it's interesting times for sure. We've said it a few times, probably every other week at this point, but just how fast this space is moving still blows my mind with every new announcement. At this point we shouldn't even be surprised, but it still is surprising every time; the pace of innovation here is hard to even comprehend.

Yeah, and just
even comprehend yeah and I'll just like
circling back to Google shipping a Model
A year late that is not as performant as
state-ofthe-art uh in traditional
Computing space because you know
typically computers fall like by half in
terms of costs every two years right now
ai models are falling by half every five
months so it's the equivalent of being
like five years
behind like you know they they really
are like
dramatically behind and and um there
needs to be a I think a real
organizational shakeup to wake up that
organization like and it's almost like
the market has to force them to do it
like is synar gonna get fired um without
First St Market that's right yeah oh
Well, what are the odds now that he gets fired? Do we know? I don't know; let's see if there's a market for it. I suspect you need equity-price action to actually force them to do something major. So: Sundar out as Google CEO in 2024 is currently trading at a 22% chance.

Yeah, but these things can change quickly. I do think, Brett, we should, I know we have charts and whatever, but just hearing this conversation, we should put out kind of an AI rankings: okay, Google, here are the things cutting against it, here are the pros, here are all these other factors. I haven't seen a very clean graphic of that, and it's something that could be updated weekly, monthly, or quarterly, depending on how quickly these come out.
Right, but it's like: Apple, clearly an incredible business, but way behind on AI, though it's okay, because their auto team, who delivered such spectacular results, are about to save the day, right? So it's Apple, great at other things but behind on AI; and Google, as you said, innovator's dilemma, an incredible business with search but clearly lagging, and you can dimension how far they're lagging. I think it's probably worth tracking that and putting it out.
I'll stick up for Apple here, because people have been calling me out for hating the Vision Pro, but I will stick up for Apple here: this is typically how they operate. They're usually late to the party when there is new innovation. How long did it take them to get the Vision Pro out? It's been a decade-plus of VR innovation, and it took them that long. So I do think they're just biding their time, and watching what has happened with Google is probably giving them even more reason to stay on the sidelines until something is figured out. They're probably having the same conversation internally, which is: we have the distribution, why do we need to rush a product? That would be my guess as to why they're waiting.
I think the challenge with AI in particular, relative to other software products, is that you don't know what it's capable of when you release it. Imagine somebody opens up a menu and finds a menu item you didn't even realize was there when you released the product. Previously you'd do lots of engineering and user-interface work asking, do we need this option in this thing? Well, AI products have options that you didn't explicitly design in, and your only way to find out what's in there is to release it. That's what's very scary for these incumbents. Did the Gemini team know how it would respond to a question about German soldiers in 1943? No, they didn't. And the way it answered that question was because they were so worried about another question, something like "generate me an image of a corporate CEO": they were worried that, based on the available data, the model would present a bunch of middle-aged white guys, and they said, we have to stop that, that's going to get us in real trouble. And that's how they got into trouble on the other side. So in this space, shipping has risk, but it's the only way you figure out what your things are capable of; you don't have the internal resources to fully audit what the model is going to do. In some ways, having the broadest distribution platform is a really dangerous place to be, because everything that's going to go wrong with that model is instantaneously going to be figured out, and you are almost guaranteed a bad media cycle on release. Whereas if you're a small company and you release and expand over time, the amount of press attention you get at first is much lower, and the number of people working with it is much lower, so the flaws in the models get surfaced serially as opposed to all at once. So it does pose a challenge for anybody with a brand to manage.
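The "five years behind" framing from the Google discussion follows directly from the two halving rates quoted in the episode (AI model costs halving every five months, classic compute costs halving every two years). Treating both rates as rough assumptions rather than measured constants, the arithmetic is:

```python
# Rough sketch of the "a year late in AI is like five years behind in
# traditional computing" claim. Both halving periods are the figures
# quoted in the episode, not measured constants.
AI_HALVING_MONTHS = 5        # AI model costs said to halve every ~5 months
COMPUTE_HALVING_MONTHS = 24  # classic computing: costs halve every ~2 years
LAG_MONTHS = 12              # Google shipping roughly a year late

halvings_missed = LAG_MONTHS / AI_HALVING_MONTHS            # 2.4 halvings
equivalent_lag_years = halvings_missed * COMPUTE_HALVING_MONTHS / 12
print(f"{halvings_missed:.1f} halvings missed, "
      f"equivalent to {equivalent_lag_years:.1f} years behind in compute terms")
```

Missing 2.4 halvings at traditional hardware rates takes roughly 4.8 years, which is the sense in which a one-year lag reads as being "five years behind."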
I think we should end it there. There are other topics we should discuss as well, like the lawsuits coming in against OpenAI, which seem to be splitting people on Twitter, but maybe we'll save that for next week's Brainstorm.

All right, that's our show. Sam, welcome back.

Yeah, welcome back, Sam. Thank you. All right, bye everyone.