Why AI progress seems "stuck" | Jennifer Golbeck | TEDxMidAtlantic
Summary
TLDR: The video script discusses the current state of artificial intelligence, highlighting the difference between narrow AI and the hypothetical AGI. It questions the hype around AGI, suggesting that current AI tools, while powerful, are not yet at a level that poses an existential threat. The speaker addresses concerns about AI's reliability and potential for 'hallucinations,' the challenges in improving AI with data, and the economic feasibility of advancements. They also touch on the impact of AI on jobs, the issue of AI bias, and the unique aspects of human intelligence that AI cannot replicate, concluding with a humorous note on humanity's control over AI.
Takeaways
- 🧠 The script discusses the current state of artificial intelligence (AI), highlighting that while AI has surpassed human performance in specific tasks, such as chess, the concept of artificial general intelligence (AGI) is still a topic of debate and concern.
- 🔮 There is a growing concern among some in the tech industry about the potential dangers of AI, with calls for regulation due to its perceived power and threat to civilization.
- 💡 The script suggests two reasons for the industry's focus on the dangers of AI: the potential for significant financial gain by emphasizing the power of the technology, and the cinematic allure of AI overtaking humanity, which distracts from current AI-related issues.
- 🤖 The speaker questions the likelihood of achieving AGI, citing examples of current AI tools that are not yet perfect, such as Google's AI Search tool and the unreliability of generative AI in producing accurate responses.
- 📈 The script points out that while there is significant investment in AI, the return on investment is not yet clear, and the sustainability of this investment is in question.
- 🔍 A key challenge identified for AI is reliability, with algorithms often providing incorrect information, which is a hurdle to overcome before AI can live up to its hype.
- 🎭 The concept of 'AI hallucination' is introduced, referring to the tendency of AI to fabricate information or responses, which is a significant issue that needs addressing.
- 👨💼 The script argues that the fear of AI taking jobs may be overstated, as increased efficiency through AI tools could lead to profit rather than job losses.
- 🔒 The issue of AI inheriting human biases is highlighted as a persistent problem that has not been solved, which is crucial when considering AI in decision-making roles.
- 🛠️ The speaker expresses skepticism about the ability to solve the reliability and hallucination problems of AI, suggesting that the technology may have reached a plateau.
- 🌐 Finally, the script emphasizes that human intelligence is defined by our emotional and creative capabilities, which AI cannot replicate, offering a reassurance that AI will not replace our core humanity.
Q & A
What is the concept of Artificial General Intelligence (AGI) discussed in the script?
-AGI refers to the idea of AI that can perform at or above human levels on a wide variety of tasks, similar to the capabilities of human intelligence.
Why are some people in the tech industry concerned about the AI they are building?
-They believe the AI is so powerful and dangerous that it poses a threat to civilization and may need to be regulated due to its potential to cause existential harm to humanity.
What are the two main reasons suggested in the script for the tech industry's concern about AI?
-One is the potential for significant financial gain by emphasizing the power and danger of their technology, and the other is the cinematic appeal of the concept of AI overtaking humanity, which serves as a distraction from real-world AI problems.
What is the current state of AI in terms of achieving AGI according to the script?
-The script suggests that while there is hype around AGI, the current state of AI, exemplified by tools like Google's AI Search, is far from achieving AGI and may be at a plateau rather than on a sharp upward trajectory.
What is the main challenge that needs to be solved to realize the hype around AI?
-The main challenge is reliability, as AI algorithms often produce incorrect results or 'hallucinations,' which means they cannot be fully trusted to perform tasks without human correction.
What is an 'AI hallucination' as mentioned in the script?
-AI hallucination refers to the phenomenon where AI makes up information or content that did not exist in the training data, leading to incorrect or misleading outputs.
Why is solving the AI hallucination problem important for the future of AI?
-Solving the hallucination problem is important because it affects the reliability and trustworthiness of AI, which are crucial for AI to live up to its hype and be useful in practical applications.
What are the two factors mentioned in the script that AI tools need to improve upon?
-AI tools need to improve upon the amount of data they are trained on and the underlying technology itself to enhance their capabilities and reliability.
How does the script address the concern about AI taking all of our jobs?
-The script suggests that the concern is based on a misunderstanding, as AI can increase efficiency but does not necessarily replace jobs, especially considering the cost and availability of AI tools.
What is the fundamental issue with AI that the script suggests we should worry about?
-The script suggests worrying about the issue of AI adopting human biases from training data, which has not been successfully addressed and can lead to problematic outcomes in decision-making.
What is the final point made in the script about human intelligence and AI?
-The script concludes that human intelligence is defined by our ability to connect, have emotional responses, and creatively integrate information, which AI cannot replicate, thus distinguishing our humanity from AI capabilities.
Outlines
🧠 The Hype and Concerns Around AGI
The first paragraph discusses the current state of artificial intelligence (AI), highlighting its ability to outperform humans in specific tasks such as playing chess. It introduces the concept of artificial general intelligence (AGI), which is AI that can perform at or above human levels across a wide range of tasks. The speaker expresses concern about the discourse surrounding AGI, noting that while some in the tech industry warn of its potential dangers to civilization, others may be motivated by financial gain or the cinematic allure of AI overtaking humanity. The paragraph also points out that focusing on improbable futures can distract from real-world issues already arising from AI, such as racial bias in AI decision-making for prison release and the challenge of deep fakes.
🔮 The Reality of AGI and the Challenge of Reliability
The second paragraph delves into the challenges of achieving AGI, starting with the issue of reliability. It mentions AI's tendency to produce incorrect results, using Google's AI Search tool as an example. The speaker argues that the current trajectory of AI improvements may not be sufficient for achieving AGI and discusses 'AI hallucination,' where AI generates false information or images. The paragraph also addresses the high expectations set for AI in fields like law, where it has been used to write legal briefs, only to generate fictitious cases. The need for more data and technological advancements is highlighted, along with skepticism about the availability of sufficient high-quality data and the sustainability of investment in AI improvement.
🛠 The Future of AI: Improvements, Bias, and Human Connection
The final paragraph contemplates the future of AI, focusing on the need for substantial improvements in data and technology. It questions the economic viability of investing in AI to replace human workforces, given the availability of affordable, open-source AI tools. The speaker emphasizes the persistent issue of AI inheriting human biases and the futility of guardrails in addressing this problem. The paragraph concludes by distinguishing between human intelligence, defined by emotional connection and creativity, and AI, which lacks these core human attributes. It reassures that despite fears of AI overlords, humans retain control over technology, as we can always 'turn it off.'
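The two-engineer argument from this paragraph can be made concrete with a back-of-the-envelope calculation: a firm that already affords two engineers earns more by keeping both (and pocketing the extra output) than by firing one, unless the AI tool itself costs roughly as much as an engineer. This is a minimal illustrative sketch; all the dollar figures are hypothetical placeholders, not numbers from the talk.

```python
# Sketch of the talk's job-replacement arithmetic. All inputs are
# hypothetical placeholders chosen only to make the comparison visible.

def annual_profit(engineers, salary, output_per_engineer,
                  ai_cost=0, ai_multiplier=1.0):
    """Revenue from engineering output minus salaries and AI tooling."""
    revenue = engineers * output_per_engineer * ai_multiplier
    return revenue - engineers * salary - ai_cost

SALARY = 150_000   # hypothetical annual salary per engineer
OUTPUT = 300_000   # hypothetical revenue each engineer generates per year
AI_COST = 2_000    # hypothetical cheap (e.g. open-source) tooling cost

# Both engineers get the AI tool and become twice as efficient.
fire_one  = annual_profit(1, SALARY, OUTPUT, AI_COST, ai_multiplier=2.0)
keep_both = annual_profit(2, SALARY, OUTPUT, AI_COST, ai_multiplier=2.0)

print(fire_one)   # 448000
print(keep_both)  # 898000 -- keeping both engineers dominates
```

Under these (made-up) numbers the math only flips if the AI's cost approaches an engineer's salary, which is the talk's point about why cheap tools undercut the "AI takes all the jobs" scenario.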
Keywords
💡Artificial Intelligence (AI)
💡Generative AI
💡Artificial General Intelligence (AGI)
💡Reliability
💡AI Hallucination
💡Deep Fakes
💡Racial Bias
💡Incremental Improvements
💡Productivity
💡Human Intelligence
💡Elon Musk
💡Google AI Search
Highlights
Artificial intelligence has surpassed human performance in specific tasks such as chess.
The concept of artificial general intelligence (AGI) is gaining attention, with concerns about its potential threat to civilization.
Tech industry leaders are warning about the dangers of AI, advocating for regulation.
There is skepticism about the profitability and necessity of regulating powerful AI tools.
The fear of AI overtaking humanity is often sensationalized, distracting from current AI-related issues.
Elon Musk predicts AGI could be achieved within a year, despite current AI tools' limitations.
Google's AI Search tool exemplifies the current limitations of AI in providing accurate information.
The trajectory of AI development needs to be continuously upward to achieve AGI.
Reliability is a significant challenge for AI, as algorithms often produce incorrect results.
AI 'hallucination', or making up information, is a major issue that needs addressing.
The potential solution to AI hallucination may not be achievable with current technology.
Legal applications of AI have faced issues with accuracy and the creation of fictitious cases.
Even the best AI tools still hallucinate a significant percentage of the time.
The need for more data and technological improvements to enhance AI capabilities.
The challenge of finding reliable data to train AI, especially with the prevalence of low-quality content.
Investment in generative AI has not yet resulted in a sustainable financial return.
The debate over AI replacing jobs and the economic implications of increased efficiency.
AI's inability to replicate human emotional intelligence and creativity.
The persistent issue of AI inheriting human biases and the challenges in addressing this.
The importance of solving AI bias before widespread adoption in decision-making roles.
A reminder that, contrary to movies, we can always turn off AI if it becomes a threat.
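The reliability figures in the highlights above (a tool that is right "60 or 70 or maybe even 80%" of the time per query) can be turned into a small worked example of why per-query error rates make multi-step work hard to trust: errors compound across steps. The 80% accuracy and the step counts below are illustrative, not measurements.

```python
# Why a per-query accuracy that sounds decent still undermines
# multi-step workflows: the chance that *every* step is correct
# shrinks geometrically. The 0.80 figure echoes the talk's rough
# "maybe even 80%" estimate; step counts are illustrative.

def chain_success(per_step_accuracy, steps):
    """Probability that every step in a workflow is correct,
    assuming independent errors."""
    return per_step_accuracy ** steps

for steps in (1, 3, 5, 10):
    print(steps, round(chain_success(0.80, steps), 3))
# At 5 steps, success is ~0.328 -- most multi-step runs
# contain at least one error somewhere.
```

This is the arithmetic behind the claim that a tool wrong 20-30% of the time has "no model where that's really useful" for unsupervised work, even though it is impressive per query.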
Transcripts
we've built artificial intelligence
already that on specific tasks performs
better than humans there is AI that can
play chess and beat human Grand Masters
but since the introduction of generative
AI to the general public a couple years
ago there's been more talk about
artificial general intelligence or AGI
and that describes the idea that there's
AI that can perform at or above human
levels on a wide variety of tasks just
like we humans are able to do and people
who think about AGI are worried about
what it means if we reach that level of
performance in the technology right now
there's people from the tech industry
coming out and saying the AI that we're
building is so powerful and dangerous
that it poses a threat to civilization
and they're going to government and
saying maybe you need to regulate us now
normally when an industry makes a
powerful new tool they don't say it
poses an existential threat to humanity
and that it needs to be limited so why
are we hearing that language and I think
there's two main reasons one is if your
technology is so powerful that it can
destroy civilization between now
and then there's an awful lot of money
to be made with that and what better way
to convince your investors to put some
money with you than to warn that your
tool is that dangerous the other is that
the idea of AI overtaking humanity is
truly a cinematic concept we've all seen
those movies and it's kind of
entertaining to think about what that
would mean now with tools that we're
actually able to put our hands on in
fact it's so entertaining that it's a
very effective distraction from the
real problems already happening in the
world because of AI the more we think
about these improbable Futures the less
time we spend thinking about how do we
correct deep fakes or the fact that
there's AI right now being used to
decide whether or not people are let out
of prison and we know it's racially
biased but are we anywhere close to
actually achieving AGI some people think
so Elon Musk said that we'll achieve it
within a year I think he posted this a
few weeks ago but like at the same time
Google put out their AI Search tool
that's supposed to give you the answer
so you don't have to click on a link and
it's not going super
well please don't eat
rocks now of course these tools are
going to get better but if we're going
to achieve AGI or if they're even going
to fundamentally change the way we work
we need to be in a place where they are
continuing on a sharp upward trajectory
in terms of their abilities and that may
be one path but there's also the
possibility that what we're seeing is
that these tools have basically achieved
what they're capable of doing and the
future is incremental improvements in a
plateau so to understand the AI future
we need to look at all the hype around
it and get under there and see what's
technically possible and we also need to
think about where are the areas that we
need to worry and where are the areas
that we don't so if we want to realize
the hype around AI the one main
challenge that we have to solve is
reliability these algorithms are wrong
all the time like we saw with Google and
Google actually came out and said after
these bad search results um were
popularized that they don't know how to
fix this problem I use ChatGPT every
day I write a newsletter that summarizes
discussions on far-right message boards and
so I download that data ChatGPT helps
me write a summary and it makes me much
more efficient than if I had to do it
by hand
but I have to correct it every day
because it misunderstands something it
takes out the context and so because of
that we can't just rely on it to do the
job for me and this reliability is
really important now a subpart of
reliability in this space is AI
hallucination a great technical term for
the fact that AI just makes stuff up a
lot of the time I did this in my
newsletter I said ChatGPT are there any
people threatening violence if so give
me the quotes and it produced these
three really clear threats of violence
that didn't sound anything like people
talk on these message boards and I went
back to the data and nobody ever said it
it just made it up out of thin air and
you may have seen this if you've used an
AI image generator I asked it to give me
a close up of people holding hands
that's a hallucination and a disturbing
one at
that we have to solve this hallucination
problem if this AI is going to live up
to the hype and I I don't think it's a
solvable problem with the way this
technology works there are people who
say we're going to have it taken care of
in a few months but there's no technical
reason to think that's the case because
generative AI always makes stuff up when
you ask it a question it's creating that
answer or creating that image from
scratch when you ask it's not like a
search engine that goes and finds the
right answer on a page and so because
its job is to make things up every time
I don't know that we're going to be able
to get it to make up correct stuff and
then not make up other stuff that's not
what it's trained to do and we're very
far from achieving that and in fact
there are spaces where they're trying
really hard one space that there's a lot
of enthusiasm for AI is in the legal
area where they hope it will help write
legal briefs or do research some people
have found out the hard way that they
should not write legal briefs right now
with chat GPT and send them to federal
court because it just makes up cases
that sound right and that's a real
really fast way to get a judge mad at
you and to get your case thrown out now
there are legal research companies right
now that advertise hallucination free
generative AI and I was really dubious
about this and researchers at Stanford
actually went in and checked it and they
found the best performing of these
hallucination free tools still
hallucinate 17% of the time so like on
one hand it's a great scientific
achievement that we have built a tool
that we can pose basically any query to
and 60 or 70 or maybe even 80% of the
time it gives us a reasonable answer but
if we're going to rely on using those
tools and they're wrong 20 or 30% of the
time there's no model where that's
really useful and that kind of leads us
into how do we make these tools that
useful because even if you don't believe
me and you think we're going to solve
this hallucination problem we're going
to solve the reliability problem the
tools still need to get better than they
are now and there's two things they need
to do that one is lots more data and two
is the technology itself has to improve
so where are we going to get that data
because they've kind of taken all the
reliable stuff online already and if we
were to find twice as much data as
they've already had that doesn't mean
they're going to be twice as
smart I don't know if there's enough
data out there and it's compounded by
the fact that one way that generative AI
has been very successful is at producing
low-quality content online that's bots on
social media misinformation and these
SEO pages that don't really say anything
but have a lot of ads and come up high
in the search results and if the AI
starts training on pages that it
generated we know from Decades of AI
research that they just get
progressively worse it's like the
digital version of mad cow
disease let's say we solve the data
problem you still have to get the
technology better and we've seen $50
billion in the last couple years
invested in improving generative AI and
that's resulted in $3 billion in Revenue
so that's not sustainable but of course
it's early right companies may find ways
to start using this technology but is it
going to be valuable enough to justify
the tens and maybe hundreds of billions
of dollars of Hardware that needs to be
bought to make these models get better I
don't think so and we can kind of start
looking at practical examples to figure
that out and it leads us to think about
where are the spaces we need to worry
and not because one place that
everybody's worried with this is that AI
is going to take all of our jobs lots of
people are telling us that's going to
happen and people are worried about it
and I think there's a fundamental
misunderstanding at the heart of that so
imagine this scenario we have a company
and they can afford to employ two
software engineers and if we were to
give those software Engineers some
generative AI to help write code which
is something it's pretty good at let's
say they're twice as efficient that's a
big overestimate but it makes the math
easy so in that case the company
has two choices they could fire one of
those software Engineers because the
other one can do the work of two people
now or they already could afford two of
them and now they're twice as efficient
so they're bringing in more money so why
not keep both of them and take that
extra profit the only way this math
fails is if the AI is so expensive that
it's not worth it but that would be like
the AI is $100,000 a year to do one
person's worth of work so that sounds
really expensive and practically there
are already open-source versions of
these tools that are low cost that
companies can install and run themselves
now they don't perform as well as the
flagship models but if they're half as
good and really cheap wouldn't you take
those over the one that costs $100,000
a year to do one person's work of course
you would and so even if we solve
reliability we solve the data problem we
make the models better the fact that
there are cheap versions of this
available suggests that companies aren't
going to be spending hundreds of
millions of dollars to replace their
Workforce with AI there are areas that
we need to worry though because if we
look at AI now there are lots of
problems that we haven't been able to
solve I have been building artificial
intelligence for over 20 years and one
thing we know is that if we train AI on
human data the AI adopts human biases
and we have not been able to fix that
we've seen those biases start showing up
in generative AI
and the gut reaction is always well
let's just put in some guard rails to
stop the AI from doing the biased thing
but one that never fixes the bias
because the AI finds a way around it and
two the guard rails themselves can cause
problems so Google has an AI
image generator and they tried to put
guardrails in place to stop the bias in
the results and it turned out it made it
wrong this is a request for a picture of
the signing of the Declaration of
Independence and it's a great picture
but it is not factually correct and so in
trying to stop the bias we end up
creating more reliability problems we
haven't been able to solve this problem
of bias and if we're thinking about
deferring decision-making replacing
human decision makers and relying on
this technology and we can't solve this
problem that's a thing that we should
worry about and demand solutions to
before it's just widely adopted and
employed because it's sexy and I think
there's one final thing that's missing
here which is our human intelligence is
not defined by our productivity at work
at its core it's defined by our ability
to connect with other people our ability
to have emotional responses to take our
past and integrate it with new
information and creatively come up with
new things and that's something that
artificial intelligence is not now nor
will it ever be capable of doing it may
be able to imitate it and give us a
cheap facsimile of genuine connection
and empathy and creativity but it can't
do those core things to our humanity and
that's why I'm not really worried about
AGI taking over civilization but if you
come away from this disbelieving
everything I have told you and right now
you're worried about Humanity being
destroyed by AI overlords the one thing
to remember is despite what the movies
have told you if it gets really bad we
still can always just turn it
off thank you