Is AI A Bubble?
Summary
TL;DR: This video explores the hype and reality of artificial intelligence (AI), discussing its evolution, current capabilities, and potential future impact. It highlights the difference between AI as it exists today and the speculative concept of artificial general intelligence (AGI), critiques corporations' overuse of the term 'AI' for marketing, and surveys skepticism about AI's true potential, suggesting that while AI has value, its revolutionary promise may be overstated. The video also features Ground News, a platform for comparing news coverage across outlets.
Takeaways
- 😄 The script opens with a joke that Merriam-Webster defines 'ai' as a three-toed sloth, then clarifies that artificial intelligence is about computer systems imitating intelligent human behavior.
- 🕒 AI is not a new concept; its roots trace back to the early days of computing, but the current hype is centered around large language models and neural networks.
- 📈 Companies like Nvidia and Microsoft are capitalizing on the AI trend, with Nvidia becoming the third-largest company globally, largely due to its association with AI.
- 🤔 There's a debate on the transformative potential of AI, with opinions ranging from life-altering technology to a fleeting tech fad.
- 📊 Ground News is highlighted as a platform that aggregates articles from various sources, providing insights into political bias, ownership, and headlines.
- 🔍 The script discusses the limitations of AI, emphasizing that it cannot create original ideas without training data and lacks true understanding.
- 📝 AI's role in everyday life is still debated, with some seeing it as a professional tool rather than a daily utility for the average person.
- 🧩 AI is often oversold as a revolutionary technology, but many capabilities attributed to AI are things traditional software has been doing for a while.
- 🤖 The pursuit of Artificial General Intelligence (AGI) is compared to the Singularity, with the script suggesting that achieving true machine consciousness might be further away than some believe.
- 💡 The script criticizes the misuse of the term 'AI' by corporations to boost stock values, even when their products don't truly incorporate AI.
- 🕊 The potential and risks of AI are weighed, with concerns about misinformation and job displacement, but also recognition of AI's established value in areas like search engines and recommendations.
Q & A
What is the humorous definition of AI provided by Merriam-Webster and Al Perkins in the video?
-Merriam-Webster defines 'ai' as a three-toed sloth, and Al Perkins describes a sloth as a slow-moving arboreal mammal. The name 'ai' was given to the creature because of its high-pitched screech.
What is the actual definition of Artificial Intelligence (AI) mentioned in the video?
-Artificial Intelligence is the ability for computer systems and algorithms to imitate intelligent human behavior.
Why do some companies use the term 'AI' to boost their stock values?
-Companies use the term 'AI' to attract investors, as it suggests that they are working on cutting-edge technology, which can lead to an increase in their share values.
How has Nvidia's value increased due to its association with AI?
-Nvidia, long known for producing GPUs for gaming PCs and other computing products, has become the third-largest company on the planet thanks to its association with AI, an increase in value the video calls nothing short of obscene.
What is the role of Microsoft in the AI hype train according to the video?
-Microsoft, now the most valuable corporation on the planet, is pushing AI into every corner of its operating system and maintains an extremely close relationship with OpenAI.
What is the purpose of the Ground News website and app as described in the video?
-Ground News is a website and app that gathers related articles from over 50,000 sources worldwide, allowing users to compare how different outlets cover the same story, including political bias, ownership, and headlines.
What is the controversy mentioned in the video regarding OpenAI's Safety and Security committee?
-The controversy is about OpenAI setting up a Safety and Security committee as it begins training its next frontier model, which has raised concerns because the previous safety team leaders recently left the company.
How does the video describe the capabilities and limitations of large language models?
-Large language models take in a lot of information, require a user prompt, and through complex algorithms, create probabilities of what the output should be. However, they are not actually alive and cannot create wholly original or novel ideas without training data.
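The description above can be sketched in a few lines of Python. This toy next-token sampler is purely illustrative: the tiny hand-written probability table is a hypothetical stand-in for the weights a real model learns from training data.

```python
import random

# Hypothetical stand-in for a trained model: maps a short context to
# a probability distribution over candidate next tokens. A real LLM
# derives these probabilities from vast training data; this table is
# invented for illustration only.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def next_token(context, rng=None):
    """Sample the next token from the distribution for the current context."""
    rng = rng or random.Random()
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens])[0]

prompt = ["the", "cat"]
print(next_token(prompt))  # a weighted guess over seen continuations
```

The video's point falls out directly: every output is a weighted guess over continuations resembling the training data, so for contexts the model has never seen there is nothing meaningful to sample from.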
What is the difference between weak AI and strong AI (AGI) as discussed in the video?
-Weak AI is capable of doing very specific tasks it was programmed to do, while strong AI (AGI) would be capable of learning skills and tasks it was never trained to learn, potentially leading to machine sentience and consciousness.
What is the main concern regarding AI safety mentioned in the video?
-The main concern is not about robots taking over the world, but rather the potential for AI services to replace traditional search engines, where those in charge could manipulate information to benefit their own motives and agendas.
What is the video's perspective on the current hype around AI and its potential future impact?
-The video suggests that the current hype around AI might be a bubble and that it is more about style over substance. It questions whether the technology is truly revolutionary or just an evolutionary step in AI development.
Outlines
🤖 AI Hype and Its Impact on the Market
The video script starts by humorously defining AI as a sloth and then explains the true meaning of Artificial Intelligence (AI) as the simulation of human intelligence by computer systems. The script discusses the historical development of AI and its current hype, which is largely driven by large language models and neural networks. It critiques the use of the term 'AI' by corporations to boost stock values without a clear understanding of the technology. The video introduces Ground News, a platform that aggregates articles to show different perspectives on the same story, using the example of how OpenAI's safety committee is covered by various media outlets with different biases.
🧐 The Reality of AI Capabilities and Misconceptions
This paragraph delves into the limitations of AI, explaining that despite its impressive capabilities, AI does not understand or think in the human sense. It highlights examples where AI fails to provide accurate responses due to a lack of training data or incorrect algorithms. The script also points out that many AI claims are exaggerated, with traditional software often being mistaken for AI. It discusses the potential of AI in areas like natural language programming and data compilation but cautions against overstating its current capabilities.
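One failure from the video, the "21st letter" question, is easy to reproduce deterministically: the exact character counting that trips up a chatbot is a one-liner for ordinary software. A minimal sketch, with the question wording taken from the transcript:

```python
def nth_letter(sentence, n):
    """Return the n-th alphabetic character (1-indexed), skipping
    digits, spaces, and punctuation -- rule-based counting with no
    probabilities involved."""
    letters = [ch for ch in sentence if ch.isalpha()]
    return letters[n - 1] if n <= len(letters) else None

print(nth_letter("What's the 21st letter in this sentence?", 21))  # i
```

The chatbot's failure mode described above stems from tokenization: the model sees chunks of text as tokens rather than individual characters, so character-level questions carry little reliable signal in its training distribution.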
🔮 Speculations on AGI and the Future of AI
The script explores the concept of Artificial General Intelligence (AGI), which refers to a strong AI capable of learning and applying skills to tasks it wasn't trained for. It questions the feasibility of achieving AGI with current technology and the hype surrounding it. The paragraph also discusses the potential societal impacts of AGI, such as the ability to control information and manipulate public opinion, and emphasizes that safety concerns should focus on realistic uses of AI rather than science fiction scenarios.
💡 AI as an Evolutionary Technology, Not a Revolutionary One
Here, the script argues that AI, particularly large language models, may not be as revolutionary as they are portrayed but rather an evolution of existing technology. It compares the AI bubble to the dot-com bubble of the late 90s, suggesting that a burst could lead to a more realistic assessment of AI's value. The video references a survey indicating that most people are confused about AI and have concerns about its impact on jobs and privacy, hinting at a potential backlash against AI integration.
🚫 Skepticism Towards AI Hype and Its Market Implications
The final paragraph expresses skepticism about the current AI hype, likening it to the metaverse craze and criticizing companies for using AI as a buzzword without substantial technological advancements. It discusses the potential negative consequences of AI, such as job displacement and privacy concerns, and suggests that the industry is focused more on style than substance. The script concludes with the acknowledgment that AI has value but questions the extent of its impact on everyday life and the validity of the extravagant claims made by some analysts and companies.
Keywords
💡Artificial Intelligence (AI)
💡Large Language Models (LLMs)
💡Neural Networks
💡AGI (Artificial General Intelligence)
💡Hype Train
💡Stock Values
💡Bias
💡Ground News
💡Blind Spot
💡Quantum Computing
💡Bubble
Highlights
Ground News sponsors the video, providing a platform for comparing news coverage from various sources.
Merriam-Webster's humorous definition of 'ai' as a sloth contrasts with the technological AI, emphasizing the gap between perception and reality.
AI's history dates back to early computers, but modern AI hype is not the same as video game enemies or chatbots from the past.
Corporations use the term 'AI' to boost share values, regardless of the actual application of AI in their products or services.
Nvidia's transformation into the third-largest company globally is attributed to its association with AI technology.
Microsoft's AI integration and partnership with OpenAI have propelled it to become the most valuable corporation.
The debate on AI's societal impact is ongoing, with no consensus on its potential as a tech fad or a transformative force.
Large language models, like ChatGPT, process information based on complex algorithms but lack original thinking capabilities.
AI's limitations are exposed when it fails to answer questions not present in its training data, as demonstrated by the '21st letter' example.
Many AI capabilities are not new but are rebranded traditional software functionalities, creating confusion in the market.
Amazon's brick-and-mortar store was falsely assumed to use AI for automatic checkouts, but it might just be monitored by employees.
Google's AI struggles with providing accurate information, as seen in its failed attempt to answer basic questions using Reddit posts.
AI's potential is real, with applications in natural language programming and data compilation, but its accuracy and reliability are still in question.
The pursuit of Artificial General Intelligence (AGI) is compared to the Singularity, with unknowns and debates on its feasibility.
Tests for AGI involve complex real-world problem-solving, far beyond current AI capabilities, highlighting the gap between ambition and reality.
The hype around AI may be a bubble, with inflated expectations and potential for a market correction, similar to the dot-com era.
AI's future is uncertain, with skepticism around its revolutionary potential and concerns about its current state as more style than substance.
Public opinion on AI is largely uninformed and negative, with concerns about job displacement and manipulation by tech companies.
The video concludes with a call to view AI as an evolutionary technology with potential but also with significant challenges and unknowns.
Transcripts
this video is sponsored by ground news
more from them in a bit Merriam-Webster
defines ai as a three-toed sloth
according to Al Perkins a sloth is a
slow moving arboreal mammal and the name
AI was given to this creature because of
its high pitched Screech fascinating
Artificial Intelligence on the other
hand is the ability for computer systems
and algorithms to imitate intelligent
human behavior this is not anything new
the technology dates back to nearly the
beginning of computers but obviously the
AI that is gaining all the hype today
isn't quite the same as the enemies in
your video games or the countless chat
Bots that have been around for decades
clearly chat GPT is not the same as
clippy probably this whole AI thing it's
complicated we don't know the ultimate
potential of large language models and
neural networks and blah blah blah but
at the same time it doesn't take a
genius to figure out that the letters a
and I are being used as much as possible
by corporations looking to increase
their share values if a company so much
as utters AI investors cannot help but
invest even if they have no idea what it
means or what it does or what it might
do they just assume it's going to make
money Nvidia which was well known for
being the producer of gpus for gaming
PCs and other Computing products has
gone from a successful company to well
the third largest on the planet it has
seen an increase in value that is
nothing short of obscene all thanks to
this
AI Microsoft is now the most valuable
corporation on the planet as the company
is all aboard the AI hype train pushing
it into every corner of their operating
system and having an extremely close
relationship with OpenAI all the while
nobody can seem to agree on whether this
technology will change the very fabric
of our society or prove to be another
flash in the pan Tech fad everyone
everyone has an opinion I'm exhausted of
seeing this back and forth argument that
never seems to reach any conclusion and
speaking of Ground News is a website
and app that gathers related articles from
more than 50,000 sources around the
world in one place so you can compare
how different Outlets cover the same
story each story comes with a clear
breakdown of the political bias
ownership and headlines of the sources
reporting all backed by ratings from
three Independent News monitoring
organizations take a look at the story
about how OpenAI is setting up a Safety
and Security committee as it starts
training for the next Frontier Model
something that has caused a lot of
controversies as previous safety team
leaders have recently left the company
so this story has been covered by more
than 73 sources with 37% of them leaning
left while 14% leans right you can also
see the ownership information and for
this story 40% of the reporting Outlets
are owned by media conglomerates you can
even compare headlines to see how this
bias can affect framing we can compare
headlines quickly and see how some
organizations will stoke the public's
existing fears about AI to get clicks
While others might capitalize on the
distrust that Sam Altman has garnered
over the past few months Ground News has
this feature called Blind Spot which
instantly shows you stories that the
left leaning or right leaning news media
conveniently won't report on so go to
ground.news/husk or use the link in
the video description to subscribe today
if you sign up through my link you will
get 40% off the Vantage plan which is
what I use to get unlimited access to
all of their features I think Ground News is
an exceptionally important website and
one you absolutely need to check out now
back to the video yes AI has been around
for a while dating back to the earliest
computers but the hype right now
surrounds progress on large language
models these are programs that take a
lot of information in require a user
prompt and through complex algorithms
basically create probabilities of what
the output should be as lifelike as they
might seem these things are not actually
alive there's no thinking involved
they cannot create wholly original or
novel ideas they require training data
to do anything I've seen it suggested
that one of the best ways to prove this
is to ask a question with an answer that
the AI model has not been provided with
in its training data for example go to
chat GPT and ask what's the 21st letter
in this sentence you might get a
different response but I got the AI
saying confidently it's e and if you say
I don't think that's right it will try
and count again and give you a different
letter this time I it will sometimes
interpret two and one as letters maybe
the apostrophe and the reason questions
like this break the AI is because it
doesn't actually understand what a
letter is it doesn't understand anything
but it searches the internet and its
training data for the most likely answer
even with these limitations the
technology is clearly capable of some
impressive things from assisting you in
writing computer programs to
proofreading your short stories to helping
you cheat on your homework but even
still a lot of what gets promoted as AI
isn't really AI a lot of the
Revolutionary abilities that AI has been
reported to have is actually stuff that
traditional software has been capable of
doing for a long time this creates a
problem where the idea of this
technology it's often more valuable than
what it can actually do because again
just saying AI makes stock number go up
so a while back Amazon launched a real
life store a brick and mortar shop where
customers could walk in grab items off
the shelf put them in their carts and
leave without ever checking out these
items were automatically charged to
people's accounts without any need to
speak to a cashier or even use a
self-serve system it was all done
automatically many assume this was
handled using an array of computers and
sensors and scanners but in reality the
company just probably hired a lot of
Indian employees to watch people while
they shopped on cameras and then they
would charge the users' Amazon accounts
uh Google was caught off guard by the
immediate success of chat GPT and has
been trying its hardest to launch its
own AI competitor they paid millions of
dollars to Reddit for access to the
website's posts to train their AI with
and it went about as well as you'd
expect when users asked questions like
how to make your cheese stick on Pizza
Google decided the best answer was to
use non-toxic glue this information by
the way was provided by an 11-year-old
Reddit post by user foxsmith which
Google's model decided was the most
reliable source for information
when asked about how many rocks you
should eat a day it did not give the
correct answer of zero and look things
just kept getting worse from here as you
can imagine any technology that uses the
internet to train itself is probably
going to make a few mistakes and those
mistakes have made AI a very easy and
fun punching bag right now but yeah not
all AIs are created equally you can't
say that LLMs or machine learning or
neural networks are completely useless
because well a lot of people are using
it there's potential here natural
language programming could allow people
to write their own programs with no code
required just describing what kind of
application they want to make and the AI
would handle the rest you could use this
to learn to research information but
again AI can be wrong so I view it in
the same way that I view Wikipedia it's
useful it's a nice jumping off point but
you need to check your sources it's not
100% accurate or compiling lots of data
really quickly to get back at your
homeowners association this does have
real value automatic translations
transcriptions stuff that may have been
possible with previous software is just
more accessible than ever but people are
taking this too far the end goal of
OpenAI is to achieve artificial
general intelligence AGI is supposed to
refer to a strong AI that can learn
skills in tasks that it was never
trained to learn this could lead to
machine sentience Consciousness The
Singularity and right now everyone seems
to believe that if you just keep
throwing more computing power more
energy more information into these
algorithms you can simply manifest AGI
from the kinds of artificial
intelligence we have today everybody has
a different way of defining AGI because
well nobody really knows what it's
supposed to look like we often divide AI
into two categories weak AI and strong
AI weak AI is capable of doing very
specific tasks stuff it was programmed
to do and everything we've ever made so
far has been a weak AI AGI would be a
strong AI and it could it could do a lot
of jobs it could do a lot of stuff would
put you out of a job but what might this
look like exactly there are a few
different tests for AGI but I've always
found that the best would be something
done with an actual robot an automaton
take a robot and ask it to repair the
plumbing electrical systems and
appliances of an early 20th century home
especially if none of that information
is you know available through
documentation on the internet every home
is going to be slightly different so the
robot will need to make assumptions to
learn to understand why your custom
1950s toaster is not working and then
try and figure out possibly through
trial and error how to get it back into
toast making condition while today there
are demos of robots completing simple
tasks in the home these are in
controlled environments and are only
possible because of training data
created from previous experiences this
AGI robot would need to be capable of
learning from unique situations to ask
questions based on its own volition and
solve them by itself but there is no
proof that this will happen in fact
studies do seem to suggest that there
are diminishing returns when it comes to
this kind of stuff we don't know if any
of this is even possible with
traditional silicon chips and
programming binary code any of that it
could take entirely new forms of
computing to achieve this we don't fully
understand how the human brain works so
attempting to replicate it with far more
simplistic technology far more rigid
technology it just seems more than a bit
optimistic people are acting like we're
5 years away from robots taking every
job becoming alive enslaving Humanity
changing life forever and honestly chat
GPT could be a stepping stone to all of
this a stepping stone to AGI in the same
way that the discovery of gunpowder was a
stepping stone to the nuke or the hot
air balloon was a stepping stone to
Intergalactic space flight that is to
say not much of a step at all for all we
know llms and AGI might not even be
related we might not even be on the
right track we just don't know so if we
don't even know if this can manifest
into some more powerful AI if we have no
substantial evidence to support that
we're on the road to AGI then why all
the hype obviously it is in corporations'
best interest for people to believe that
this technology is on the cusp of being
something truly revolutionary so good
it's frightening it makes it seem way
more advanced than it actually is it
implies more potential for growth which
will make the stock number go up and
they have ways to Market this technology
so many companies are just throwing
around the words AI even if it doesn't
even apply to their products in any way
gigabyte which usually makes gaming
Centric motherboards and memory thingies
and whatnot well they decided they
wanted to go all in on the AI thing and
have rebranded their products with a big
old AI plastered on the side of them
yeah rebranded because it's really just
kind of the same stuff you could find
anywhere else but now it's it's got AI
written on it whatever that means this
exact same thing happened a few years
back with the whole metaverse craze
where every company started proclaiming
that their existing video games and
online services were already in fact
metaverses because investors wanted to
hear that and it diluted the term to
such a point that the word metaverse
started to mean nothing more than
just a synonym for software when an AI
makes a mistake the companies call it a
hallucination which is a very
misleading term this is applying a human
quality to the machine when in reality
it's not hallucinating it's just taking
a set of probabilities and it hasn't
received enough training data on the
topic or the right algorithms to give
you an accurate answer it is not
hallucinating it simply made a mistake
the technology simply wasn't good enough
will it improve yes but we don't know
how good it will get or how accurate it
will become as for AI sentience it
could be 5 years away or 10 or 100 or a
thousand or it might not be possible for
us to build this at all we don't know
and everything else until it happens is
just a guess a lot of the AI safety you
hear about actually does refer to more
realistic concerns about how this
technology might be used if in the
future AI Services become a replacement
for traditional search engines like
Google if people rely on chat GPT to get
all their news then those in charge of
the service can lie manipulate or alter
information to benefit those who are in
charge of the AI to push their own
motives their own agendas this is where
the concerns of safety often lie not in
the fantasy of robots taking over the
world or at least that's where the
fears should lie and look if fancy
Quantum Computing or something entirely
unprecedented allows for robot
Revolution allows for AGI if today's
technology and llms are on the right
path to all of this and all we need is
just a few trillion dollars in the
combined power consumption of a small
nation I'll be the first to admit I
was wrong Oopsy Daisy egg on my face but
right now I'm just a little bit
skeptical so for the purposes of this
video Let's assume that the future of AI
is not revolutionary but evolutionary
and if that's the case is it a bubble
yes yes it is but that doesn't mean it's
worthless I kind of see AI right now
like the dot-com boom of the late 90s this was
a time where investors believed that
every single website was going to make
millions of dollars and once they
figured out that only certain websites
would ever become popular would ever
become profitable many of them took
their money and ran the resulting
fallout led to the dot-com bubble burst
but this wasn't the end of the worldwide
web now I'm not saying that AI will be
as important as the internet but clearly
this is more than just a simple fad
that's going to fade away I do think the
bubble will burst and then the industry
will have to realistically look at its
value rather than rely on unfounded
promises and rampant speculation but
yeah AI is here to stay cuz it's been
here for a long time AI already has
changed our world algorithms Drive
search engines YouTube channel
recommendations most of the internet is
driven off of this automation we know
that the technology has value because
it's been around for a while the
question is whether this new form of AI
llms will have a significant impact on
how we live and work or how significant
of an impact that might be and a lot of
people
seem to have some doubts about all of
this Reuters Institute for the Study of
Journalism polled 12,000 people from six
different countries online about their
opinions on AI granted 12,000 people is
not that many doing polls online does not
mean perfect results but this is about
as good as we're going to get if we're
trying to figure out what the average
person thinks about this technology and
most people have no idea what it is or
what it does or what it could do a lot
of people have heard of AI and these new
services but they don't know what
distinguishes a large language model
from traditional computer software we
can tell that most people are not using
AI that often even if chat GPT has a
huge install base younger people are
more likely to try it out than older
people obviously but most people view AI
as a technology that will primarily be
used in professional settings by
scientists and social media companies
but less so by the average person in
their daily lives again if these people
don't actually know what AI is
considering some of these poll results
I don't think they do none of this
is super helpful but there is some stuff
we can take away from this it seems like
people aren't just confused by the
technology they seem to fundamentally
dislike it there are a lot of concerns
about the damage it can do people are
scared it will take their jobs away or
just functions as tools to manipulate
the public and most don't seem
interested in the positive aspects of
this technology at all most can't even
identify what that might be sometimes a
technology does need to be more than
good enough it needs to be perfect when
driving a school bus or doing taxes for
a multi trillion dollar Corporation you
probably don't want to put this
responsibility on a robot that can or
probably will make a mistake yes humans
are not perfect the AI might even be
better than the human in many cases but
humans give people somebody to blame
when things go wrong they provide
accountability something that the AI
just doesn't have as for AI Google spent
all of Google I/O 2024 talking about its
AI initiatives and people didn't really
seem to care all that much A lot of
people just roll their eyes cuz it's
more AI slop Microsoft hyped up its new
line of AI PCs which most saw as a
concerning invasion of privacy rather
than anything remotely worthwhile the
corporation is betting the future of its
operating system of the Windows
operating system on AI and it's a bet
that doesn't seem to be panning out in
fact I've seen a lot more people talk
about moving to Linux now that Windows
is infected with this AI garbage I have
no doubt going forward that llms and
machine learning and AI will manifest
into something valuable it's possible
this might even result in massive
changes to everyday life but right now
this entire thing has been polluted by
Bad actors false promises and in my
opinion pretty misleading marketing
tactics and it's baffling to me that
people have just forgotten the whole
metaverse craze from a few years back
where the word itself began to mean
nothing by the end these analysts who
don't even understand the basic
technology are hyping up others about
how much money we're all apparently
about to make claiming that Nvidia's
going to be worth $1 trillion in just a
few years and as far as I can tell
there's just no
reasonable basis for these claims we've
reached a point where big Tech is so
desperate to achieve growth that it just
keeps taking these fairly predictable
and standard advancements in software
and then applying new buzzwords to the
technology and by the time the bubble
should burst they're already onto
marketing their next buzzword I don't
want to dismiss the entire potential of
AI by calling it the next metaverse but
look there's a lot of problems
here and as of right now for many people
this technology just kind of comes off
as a slightly more invasive less
accurate version of all these digital
personal assistants that have already
become commonplace over the years this
whole industry
right now just seems so flimsy to me
there's a good chance that everything I
say here will age like milk I get that
maybe we will see a world where most
jobs become significantly easier with
the assistance of the machines where
this is a necessity or maybe the tech is
too good and we all lose our jobs I
think right now we're seeing a
technology that is more style over
substance there is substance there but
there's just there's a lot of fluff here
or maybe I'm completely wrong and in 5
years we'll all have Rosie the robot in
our homes I need to get my robot out of
my swimming pool I don't think she can
breathe again huge thanks to ground news
for sponsoring today's video go to
ground.news/husk or use the link in
the video description to subscribe today
if you sign up through my link you will
get 40% off the Vantage plan