One of Three Major Early-2024 World Forum Interviews with AI Leaders. Fei-Fei Li and Andrew Ng in Conversation: This Time, the AI Winter Isn't Coming. 2024: A Dialogue between Fei-Fei Li and Andrew Ng
Summary
TLDR: This interview brings together Fei-Fei Li, Stanford University professor widely recognized as the grandmother of artificial intelligence, and Andrew Ng, managing general partner of AI Fund and founding lead of Google Brain, to discuss the present and future of AI. The two experts share their views on the development of AI technology, including its applications across different fields, AI ethics, and AI's far-reaching impact on society and the economy. They also discuss the latest technical breakthroughs and how to balance technological innovation with social responsibility.
Takeaways
- 🤖 The future of AI will not be dictated by media hype; the business fundamentals are stronger than ever.
- 🚀 Despite worries about an AI winter, AI is a general-purpose technology with very broad commercial prospects.
- 🌟 Possible major AI breakthroughs in 2024 include advances in video, time series, biology, and chemistry.
- 🖼️ Computer vision and image processing are on the verge of exciting progress that may rival large language models.
- 🧠 Public-sector AI will receive more resources, and breakthroughs from nonprofits will become more prominent.
- 🤔 AI will be applied more widely to specific tasks, especially in domains with abundant data and repeatable patterns.
- 🛠️ Companies should focus on AI applications specific to their business and industry; these can yield unique competitive advantages.
- 📉 On accuracy concerns, the scope and limits of AI deployment should be assessed by industry and by risk level.
- 📰 The lawsuits over generative AI and intellectual property, especially The New York Times v. OpenAI, reflect the tension between the creator economy and AI technology.
- 🌐 Competition between open-source and closed-source LLMs (large language models) will continue, but the models will diverge in direction and data usage.
- 💡 Advancing AI will require new breakthroughs, such as sub-quadratic architectures or liquid neural networks, to move beyond today's Transformer models.
Q & A
What role does Rajiv Chand play at this session?
-Rajiv Chand serves as the session's moderator.
What is Professor Fei-Fei Li's position at Stanford University?
-Professor Fei-Fei Li is a professor at Stanford University and co-director of the Stanford Human-Centered AI Institute.
What is Andrew Ng's role at this session?
-Andrew Ng is the managing general partner of AI Fund and the founding lead of Google Brain.
What is Professor Fei-Fei Li widely recognized as?
-Professor Fei-Fei Li is widely recognized as the "grandmother" of artificial intelligence.
When did Professor Fei-Fei Li and Andrew Ng first meet?
-They first met around 2007, at a conference or workshop.
Which of Professor Fei-Fei Li's books was recommended to the audience?
-Her book The Worlds I See was recommended to the audience.
What breakthroughs does Andrew Ng predict for AI in 2024?
-Andrew Ng predicts major breakthroughs in video, time series, biology, and chemistry in 2024.
What is Professor Fei-Fei Li's prediction for the future of AI?
-She predicts exciting technical advances in pixel space and better resourcing for public-sector AI.
In the discussion, how do Professor Fei-Fei Li and Andrew Ng differ on the future of AI agents?
-Fei-Fei Li prefers the term "assistive agents," while Andrew Ng speaks of "autonomous agents." Both believe the future of AI lies more in collaboration with humans than in full automation.
What view do the two guests hold on the business fundamentals of AI?
-Both believe the business fundamentals of AI are stronger than ever, and that AI is becoming a truly transformative driving force of the next digital or industrial revolution.
Outlines
🎤 Opening and Introductions
The video opens with Rajiv Chand as moderator; he is head of research and says it is his honor to host this session on great minds in AI. He introduces the two guests: Professor Fei-Fei Li, professor at Stanford University and co-director of the Stanford Human-Centered AI Institute, and Andrew Ng, managing general partner of AI Fund and founding lead of Google Brain. Rajiv Chand also mentions Fei-Fei Li's book The Worlds I See and recommends it to the audience. He then raises the question of where AI is headed, asking the guests whether we face more hype or a trough.
🤖 AI's Business Fundamentals and Outlook
In this exchange, Fei-Fei Li and Andrew Ng discuss AI's business fundamentals and development trends. Andrew Ng argues that whatever the media does, AI's business fundamentals are stronger than ever, because AI is a general-purpose technology, similar to electricity, with extremely broad applications. He adds that even if AI technology made no further progress, the existing business base would continue to grow. Fei-Fei Li agrees, noting that we have seen another inflection point in AI, particularly with the arrival of large language models, and that this deepened horizontal technology is becoming a true driving force of the next digital or industrial revolution. The moderator also cites predictions of possible 2024 AI breakthroughs in video, time series, biology, and chemistry.
🌟 AI Breakthroughs and Applications
The guests continue discussing the breakthroughs they expect in 2024. Fei-Fei Li highlights upcoming advances in computer vision, especially models in pixel space, mentioning techniques such as diffusion models and Gaussian splatting, and expresses excitement about progress in image, video, and multimodal AI. Andrew Ng discusses the shift from large language models to large vision models and stresses the importance of analyzing images. He also notes the rise of autonomous agents, AI systems that can plan and execute sequences of actions, as well as the possibility of running large language models on a laptop and what that could mean for device makers.
🤔 AI Agents and Task Automation
This segment discusses the concept of AI agents and their use in business. Fei-Fei Li raises concerns about the term "autonomous agents" and suggests "assistive agents" instead. She stresses the challenge of long-tail distributions and argues that human-machine collaboration is more likely than full automation. Andrew Ng agrees and shares his experience working with enterprises on deciding whether to use AI to augment or replace human work. He also mentions task-specific AI applications, such as decision support in health care.
🏥 AI in Health Care
In this segment, Fei-Fei Li and Andrew Ng discuss AI applications in health care. Fei-Fei Li mentions AI in health care delivery, particularly in drug discovery and the analysis of electronic health records. Andrew Ng describes a system in Singapore that analyzes patients' electronic health records to predict how long a patient is likely to stay in the hospital, discusses AI in health care operations such as MRI scheduling, and highlights opportunities to deploy AI in lower-risk areas. He also notes AI's limits in diagnosis and treatment, and how AI-assisted decisions can be used in high-stakes situations.
📚 Foundation Models and Intellectual Property Lawsuits
This exchange covers the development of foundation models and intellectual property litigation. Fei-Fei Li and Andrew Ng discuss who the foundation model leaders of 2024 might be and predict that AI will deepen and widen into all industries. They also discuss the lawsuits over generative AI and intellectual property, particularly The New York Times v. OpenAI. Andrew Ng expresses sympathy for OpenAI and finds the lawsuit's arguments somewhat muddy; he also takes up the question of whether content providers should be compensated when their content is used to train AI models, and suggests some possible resolutions.
💡 Lightning Round and Closing
In the final segment, Rajiv Chand leads Fei-Fei Li and Andrew Ng through a series of rapid-fire questions: whether open-source LLMs can reach the level of closed-source LLMs, whether AI-generated election disinformation will affect the 2024 elections, whether Transformers have hit a wall, and whether AI poses an existential threat to humanity. Finally, they discuss whether, as venture investors, they would invest in a company like OpenAI.
Keywords
💡Artificial Intelligence
💡Deep Learning
💡Autonomous Driving
💡Intellectual Property
💡Generative AI
💡AI Ethics
💡AI Fund
💡AI Winter
💡Transformers
💡Digital Revolution
Highlights
Rajiv Chand, moderating the CES session, introduces two leading figures in AI: Stanford professor Fei-Fei Li and Andrew Ng, managing general partner of AI Fund.
Fei-Fei Li is known as the "grandmother" of artificial intelligence and co-directs the Stanford Human-Centered AI Institute.
Andrew Ng is the founding lead of Google Brain and a major contributor to deep learning.
Fei-Fei Li and Andrew Ng first met around 2007, at a conference or workshop.
Andrew Ng notes that even without further technical progress, AI's business fundamentals are stronger than ever.
AI is described as a general-purpose technology, similar to electricity, that will play an important role in every industry.
Andrew Ng predicts major breakthroughs in AI for images and video.
Fei-Fei Li stresses the importance of public-sector AI and the role of nonprofits in advancing the field.
Andrew Ng discusses the rise of autonomous agents, AI systems that can plan and execute sequences of actions.
Fei-Fei Li objects to the term "autonomous agents," suggesting "assistive agents" as a more accurate description of AI's role.
Andrew Ng shares a method businesses use to determine which tasks are suitable for AI augmentation or automation.
Fei-Fei Li discusses AI applications in health care and how to deploy the technology safely.
Andrew Ng predicts AI will play an ever-larger role in financial services, education, e-commerce, and other industries.
Fei-Fei Li and Andrew Ng discuss AI's impact on the creator economy and urge the media to cover the issue with more nuance.
On the copyright questions around training AI on internet content, both suggest current law needs updating for the generative AI era.
Fei-Fei Li argues that while AI may change certain tasks, it will not eliminate whole jobs, and should be seen as a tool for improving efficiency.
Transcripts
Good morning, everyone. Good morning. Awesome. My name is Rajiv Chand, I'm head of research at Wing, and it is my honor to be the moderator for this session on great minds in AI. We are here with two of the greatest minds in AI. Immediately to my left is a professor at Stanford University, co-director of the Stanford Health, excuse me, Stanford Human-Centered AI Institute, and also widely recognized as the grandmother of artificial intelligence, Fei-Fei Li. Not to indicate anything about age, not anything. And immediately to her left is the managing general partner of AI Fund, and also the founding lead for Google Brain, Andrew Ng. Everybody, please join me again in welcoming Fei-Fei Li and Andrew Ng to CES.

She is also the author of this amazing book called The Worlds I See. If you haven't picked up a copy, I certainly recommend it. It's got this amazing life story as well as the history and future of AI. It's an amazing, amazing book.

Just to add to that: I think over the years I've known Fei-Fei, I and many others have been inspired by her personal story. She used to work in a laundromat, I think she's been public about that, and then she wound up more recently building up HAI. Frankly, her whole team knows she has a reputation for working crazily hard, and she built HAI into this fantastic institution at Stanford. So for the people that don't know her story yet, if you read her book, you'll find it pretty inspiring.

Thank you, Andrew. This is why you're in the book.

So Andrew, Fei-Fei, let's start. Both of you are luminaries and veterans of AI, but you have also worked together. When was the first time you met each other or worked together? Tell us a little about your history.

I'm worried we may have different answers. Well, first of all, I'd been reading Andrew's papers before I met him. I think we met as young assistant professors somewhere around 2007, at a conference or a workshop. That's my memory. What year is that again? 2007. Honestly, my memory is awful; I have no idea. But you should remember that the first thing you said to me was, "Fei-Fei, do you want a job at Stanford?" That I remembered. That works out. I'm actually really proud that I played this small role in convincing her to come.

Well, let's start with question one, which is the state of AI. Last year was certainly a very hyped year for AI. Our good friend Rodney Brooks tweeted, or posted, on January 1st: get your thick coats now, there may be yet another AI winter just around the corner, and it's going to be cold. So: are we headed to less hype, more hype, or a trough in AI this upcoming year?

I think the media will do whatever
the media does, but we're not in for a winter, and that's because the business fundamentals of AI are stronger than ever. Even before the generative AI wave that really took off last year, AI has been moving probably hundreds of billions of dollars, maybe trillions, I'm not sure, but at least hundreds of billions. For a single company like Google, showing more relevant ads drives massive amounts of revenue. So the business fundamentals are there. In fact, one of the difficult things to understand about AI is that it's a general-purpose technology, meaning it's not useful for just one thing. It's kind of like electricity, another general-purpose technology: if I ask you what electricity is good for, it's almost hard to answer, because it's useful for so many different things, and AI is like that too. So where we are today, I think even if AI makes no technological progress (and it is going to make technical progress, but even if it doesn't), there are so many use cases all around the world to be identified and built out that I'm very confident the business fundamentals will continue to grow.

And we're going to make this session highly interactive as well. Let me get a show of hands: more hype, less hype, winter. How many folks think that we are not at peak hype, that there will be more hype this year? Show of hands. How many folks think there will be less hype this upcoming year? No hands for less hype. How many folks think there will be a winter this upcoming year? Wow. So we're not at peak hype.

Yeah,
so I more or less agree with Andrew. What we have seen is another inflection point in AI, and that inflection point came through the large language models: the first rollout of ChatGPT and then the ensuing models. What I do see, and here I agree with Andrew, is that this is a deepened, horizontal technology. When it's a deepened horizontal technology, it is becoming a true transformative driving force of the next, whether you call it digital revolution or industrial revolution. In terms of public media coverage, it's going to go in waves, and that's not very relevant. What is relevant is that this technology is here to stay, is here to be deepening into all vertical businesses and customer and consumer experiences, and is changing the very fabric of our societal, economic, and political landscape. That is just a fact, and it's happening more and more.

Let's jump to big breakthroughs that you anticipate for 2024. Clem at Hugging Face had, I think, six predictions for this upcoming year, one of which was big breakthroughs in AI for video, time series, biology, and chemistry. What do you feel, and maybe we start with you, Fei-Fei, will be one of the biggest breakthroughs in AI this upcoming year, as we start 2024?
It's always very dangerous to predict the future, because then I'm going to be quoted as saying something wrong. All right: coming from the field of computer vision, and what I would call pixel-centric AI, I do think we're at the verge of very exciting technological advances in pixel space. We've been looking at generative models, we've been looking at diffusion models; some of you have heard about Gaussian splatting. I think there is just so much that's almost breaking through in that wave of technology. I don't know if it's going to be exactly as matured as LLMs, or large language models, were a year and a few months ago, but I'm seeing more and more of it and I'm very, very excited by that: image, video, multimodal, a combination of different modes, or any of the three. It's going to be more pixel-first, not just language-induced.

Another thing I do want to say, and this is a little bit more faith and hope rather than prediction, is public-sector AI. I think it's really important, for the ecosystem as well as for many reasons, that public-sector AI be better resourced. We are pushing our governments for that, as well as for some of the very exciting, interesting, multidisciplinary, nonprofit-driven breakthroughs coming from the public sector, whether in sustainability, medicine, drug discovery, and other areas.

Andrew,
your thoughts? Yeah, let me make a few quick predictions. First, we've seen the large language model breakthrough, things like ChatGPT and Bard, and I agree with Fei-Fei about images coming. I'm seeing a shift from large language models to large vision models, and a lot of the progress will be not just in generating images but in analyzing images, so computers can see much better. That has implications for, say, self-driving cars, or anywhere you have a camera. That's one.

Second, we're used to prompting ChatGPT: you prompt, it responds. But I'm excited about the rise of autonomous agents, where you can give an AI system an instruction like "AI system, do market research for me" or "do a competitive analysis of this company," and instead of responding right away, it plans out a sequence of actions: do these web searches, download these web pages, summarize these things. It goes off and does half an hour of work, or an hour of work, or a day of work, and then comes back to you with an answer. Autonomous agents: they can plan out and execute sequences of actions. They're kind of just barely working, but I feel like there's a lot of traction on the research and the commercialization side, and we're expecting breakthroughs in the coming months.

And let me say just one last thing, maybe appropriate for CES as well: I'm very excited about edge AI. I routinely run large language models; I use GPT-4 all the time, I use Bard quite often. But what not many people know is that it's actually getting quite feasible to run a large language model on your laptop. Not as big as GPT-4, but big enough to be useful. I think this actually has a lot of implications for device makers. All the PC makers: wouldn't they like to be able to sell consumers a more powerful PC that lets them run the latest AI? Graphics cards were often a reason for an upgrade, and edge AI running on your laptop or PC, or your industrial PC at the edge, is a capability that's actually getting much better than most people think. Maybe perfect for CES: I think this will drive a lot of device sales for a lot of companies as well.
on or or just just tiny bit of a
discussion you use the word autonomous
agent um I actually would like to change
the a word to assistive agent
one thing we have seen in today's
language model large language models and
these large Foundation model is that
longtail distribution is still really
really hard whether we're talking about
hallucination and other things and in a
lot of Works Space that um in order to
deliver the kind of quality service and
uh products long tail matters so what I
actually see is a human machine in the
loop collaboration assistive agents that
part of the work is autonomous part of
the work is collaborative is more likely
to happen rather than fully autonomous
to happen than fully autonomous ones. High five: after all these years, we finally have something to disagree on!

No, no, but I actually do kind of agree. Let me share my experience. I think the term "autonomous agents" is problematic, maybe, but what I'm seeing in business contexts is this: I know a lot of us would rather have AI help humans than replace humans, because of the job-loss conversation, which is a real thing. And without diminishing the suffering of people whose jobs do go away, what I see, just to be candid, is that the decision of whether to use AI to automate or to augment often tends to be a business and economic decision rather than an ethical decision. Maybe it should be an ethical decision, but candidly, when I work with businesses and they build a chatbot, there's a very rational economic calculation that I see most businesses do: humans add this value, AI adds that value, what's the right economic decision, because the competitors are doing the same thing. So I wish we could say "don't replace human tasks," but unfortunately...

Okay, so on this topic of AI agents, let me read another quote, from Mira Murati, from around November or so: "The concept of AI agents isn't new, but now we're iterating toward the future with intelligent, common-sense agents that understand why we do things."

Okay. This is just
to add one thing, and then I'll comment on that. One is that I think we have to be careful about replacing jobs versus replacing tasks. Actually, that's what I was going to say next. Exactly. I'm sure you and I read the same reports: every given human job is actually a suite of multiple tasks. I study health care a lot; a nurse's eight-hour shift is hundreds of tasks. So I do see AI agents helping, being assistive and augmentative, on many tasks, but we should be very, very careful in talking about jobs. And I do think that economic and business decisions are not mutually exclusive from ethical and societal decisions. It's a deeper conversation; I know you and I agree.

Coming to your question about these agents having an "understanding": I think that's a very nuanced term. Just focusing on business, what is understanding? There's the understanding of the patterns that live in the data; there's the understanding of what decisions you're making; there's also the understanding of the intention behind the human task. So I would not go as far as using a blanket word like "understanding" to describe today's AI agents.

Which of those three do you think AI agents will get to, and in what time frame?

Well, I think the best we've gotten is understanding the patterns in the data, especially when we have massive training data. We have done a great job there: for example, large language models, using sequence-to-sequence Transformer-based algorithms, have really done a great job extracting the patterns in the data in order to create powerful predictive models. So that's probably where we're most ahead. In terms of understanding decision-making, again, I think that's much more nuanced; you all come from business and you know how nuanced it is, and I think there's more to be done and to be said. And in terms of intention, I think we're just scratching the very first surface. Yeah, actually,
can I go back to the tasks topic? Because I think that's important. My team is working with quite a few businesses, and occasionally I get a call from a CEO who says, "Hey Andrew, I'm reading about AI, generative AI, what should I do about it?" And it turns out there's a recipe for businesses to figure out which tasks you should try to use AI to augment or automate. As Fei-Fei was saying, most jobs are made up of many different tasks. Take the job of a radiologist: radiologists read X-ray images, they gather patient histories, they operate the machines, maintain the machines, consult with and mentor younger doctors, and so on. So a radiologist is one example of a job that comprises many different tasks.

What I've seen businesses do (one of our friends, Erik Brynjolfsson, really pioneered this technique) is to look at your team, figure out across all of your employees what tasks they're actually doing, and then analyze, not at the job level but at the task level, how amenable each task is to AI augmentation or automation, and what the business ROI is. Every time I've done this with a business, we've come up with way more ideas than any of us have time to implement, so there are a lot of opportunities for AI augmentation or automation. The second thing I've learned is that very often the highest-ROI tasks are not what people initially think. For example, when you think of a radiologist, people often think, "Oh, radiologists read X-rays"; there's that picture in your head, that defining role of what the job entails. But when we actually break the job down into many different tasks, there are often other tasks, maybe gathering patient histories or something, that turn out to be easier and maybe higher ROI. So I've found that doing this exercise systematically has often helped businesses identify valuable opportunities, and then go through a build-versus-buy kind of decision to execute an AI project.
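The task-level exercise just described can be sketched as a simple scoring pass: enumerate a job's tasks, score each for AI-amenability and business ROI, and rank. This is only an illustration; the tasks come from the radiologist example above, but the numeric scores are made-up assumptions, not figures from the talk.

```python
# Task-level AI-opportunity analysis: score tasks, not jobs.
# Scores are hypothetical 0-1 ratings for illustration only.

radiologist_tasks = [
    # (task, amenable_to_ai, business_roi)
    ("read x-ray images",         0.7, 0.6),
    ("gather patient histories",  0.8, 0.8),
    ("operate/maintain machines", 0.2, 0.3),
    ("mentor younger doctors",    0.1, 0.4),
]

def rank_tasks(tasks):
    # Rank by the product of AI-amenability and ROI, highest first.
    return sorted(tasks, key=lambda t: t[1] * t[2], reverse=True)

for task, amenable, roi in rank_tasks(radiologist_tasks):
    print(f"{task}: priority {amenable * roi:.2f}")
```

With these illustrative scores, "gather patient histories" ranks above "read x-ray images," mirroring the point that the highest-ROI task is often not the one people picture when they think of the job.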
Andrew, this is exactly where I wanted to go next: bringing it very practical to the group here. Are there any commonalities among the applications you see in your work with Fortune 500 companies, applications with clear, demonstrable, achievable ROI? What applications should most people in this room be absolutely laser-focused on?

Well, if we look as broad as the Fortune 500, I think the common ones are customer operations, or customer support; there are so many companies trying to augment or automate customer support. I think software engineering is also transforming, and this goes well beyond GitHub Copilot; GitHub Copilot is a nice tool, but it goes well beyond that. And I think sales operations is also being heavily impacted. For specific businesses, it turns out kind of everyone is doing customer operations, so you should probably consider it as well, but the more exciting things to me are... boy, what can I talk about? I was talking to a very large agriculture company, and there are some people there who do a certain task. I can't talk about the details, but we identified a task, not what you think of when you think of harvesting, this weird task where we thought, oh, maybe we could use AI to really save them a lot of time. It's those niche things, specific to your business and your industry, that I think are often more interesting, and they create that industry-specific, defensible flywheel. Everyone at some point will probably be able to buy generic tools for sales operations and so on, but the things specific to your business they should build internally; those are the things I find very exciting.

One thing to add to that: there are the customer-support or ops solutions, but another way to look at it is, where are the commonality opportunities using the current technology? I think it's still true today that it's where you have the most data, and where you can actually discern repeatable patterns, good patterns, out of the data. That's where you can start. Whether it's human-language patterns, structured-data patterns, or imagery patterns: where the data is, and where the patterns in the data prove to be valuable and actionable in your business, is where one should be looking. Let's talk about
barriers that Fortune 500 CEOs may face. We held our health care summit this past Sunday with a number of health care CEOs. We asked one of them: what are you most excited about in digital innovation? He said: artificial intelligence. Then I asked him, well, what are you most concerned about as a CEO? And he said: inaccuracy.

You picked the hardest industry. Yeah. What would you say to CEOs who are talking about inaccuracy as a CEO-level concern in artificial intelligence? And are there other concerns that you also see at that level?

Well, this is what I was saying: it depends on your product, it depends on your services, it depends on the stakes of the outcome. Health care, driving, financial prediction: there are many industries where long-tail accuracy is so important. You cannot afford human lives or human injury; you cannot afford banking errors. So this is where you need to understand your industry, you need to understand your solutions and services, and look at where AI genuinely can help. And this is where, when you call it hype, personally, when I have conversations with business executives, this is where we really should peel away from the hype, understand what this technology can do, and avoid investment in the kinds of directions where AI is not ready.

So for an industry like health care, which is life-and-death and highly regulated: what would you say to a company that wants to do generative AI but is concerned about inaccuracy?

Both Andrew and I work in health care a lot; I personally work a lot in health care delivery. There is actually a ton of AI usage in health care. Just breaking it down, from very upstream drug discovery, there's a ton we can do. And by the way, "generative AI" is an overloaded word; every AI today, people call it generative AI. When Andrew and I started, we had very specific mathematical definitions of generative AI. But now... we used to call it machine learning, but machine learning became AI. Exactly; we also used to talk about generative versus discriminative AI. That whole mathematical rigor is gone. I feel the mass media has kind of taken over the tech terminology, and technology just adopts the mass usage. So when you say generative AI, I'm just going to assume you mean that kind of large, data-driven approach with a pre-training phase; some people might put Transformers and predictive models in it, and I'm not even totally sure people always do. But in any case, if there is a true accuracy issue, we should examine several things: is this a model limitation? Is this a data-quality problem? Is this the AI-in-the-loop, with more nuanced business issues causing the inaccuracy? Really decipher all of those and try to tackle them. And sometimes, for example at certain levels of health care diagnosis and treatment, you do have to recognize there's a limit, and we cannot push too far if the risk is too high. Let me just
add to that: even though we use the term "generative AI," generative AI is often used for analysis. My teams have done a bunch of projects using these large language models to read electronic health records and spit out a conclusion, rather than to write text. And even if you are writing text, it turns out that if you're careful, software for summarizing is not that bad. It may still make some mistakes, but I think there are so many opportunities to deploy things, even in the health care setting, where the stakes aren't quite as high. For diagnostics, if you miss something, that seems really bad. But we deployed a system, still running in a hospital, that screens patients by reading electronic health records to decide who is at high risk of mortality, to recommend them for consideration for palliative care, for end-of-life care. We don't trust the system to make the decision, so we send it to a doctor who reviews the cases we surface and then makes the final decision.

Actually, one of my friends in Singapore, Chang at the National University of Singapore, has a system that looks at patients' EHRs as they come in, to estimate how long a patient is going to be in the hospital. Sometimes the doctor thinks, oh, this is a simple case, they'll be out in three days, but the AI says no, fifteen days. And that triggers a conversation (this is actually happening now in Singapore) where the clinician says, maybe I need to take a second look at this patient, maybe I missed something that the AI caught. So these things are actually getting deployed, and depending on the capabilities, we can often design safeguards to make sure they're deployed in a responsible way. Or take health care operations: if you're using AI to schedule your MRI machine and you make a mistake, fine, the use of the MRI is less efficient. That is bad, but it doesn't seem as bad as missing a critical diagnosis. So there are actually a lot of opportunities to deploy in health care, and in pretty much all sectors.
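The hospital examples above share one safeguard: the model screens and flags, and a clinician makes the final call. Here is a minimal sketch of that human-in-the-loop pattern; the scoring rule is a hypothetical stand-in for a trained model, and the patient records are invented for illustration.

```python
# Human-in-the-loop screening: the model only *flags* cases for review;
# it never takes an automatic clinical action.

def risk_score(record):
    # Hypothetical stand-in for a model reading an electronic health record.
    return min(1.0, 0.1 * record["age"] / 10 + 0.3 * len(record["conditions"]))

def triage(records, threshold=0.5):
    flagged = []
    for r in records:
        if risk_score(r) >= threshold:
            # Route to a doctor for review; the final decision stays human.
            flagged.append({"patient": r["id"], "action": "clinician review"})
    return flagged

patients = [
    {"id": "A", "age": 34, "conditions": []},
    {"id": "B", "age": 81, "conditions": ["chf", "ckd"]},
]
print(triage(patients))  # only patient B crosses the threshold
```

The design choice worth noting is that the system's only output is a recommendation to review; thresholds and model quality affect how many cases get flagged, but never who receives treatment.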
So let's jump to foundation models. This next question was inspired by an article I read in VentureBeat: if 2023 was the year of OpenAI, then among the foundation model leaders, who will we be talking about most in 2024? Will Apple launch its Ajax LLM? Will we be talking more about Gemini than about GPT?

As I just said, I see this technology deepening and also widening into all sectors, and because of that it is hard to single out one company. I'm sure there will be exciting releases, from the next H100, the next-generation releases on the chip side, all the way to the consumer side. So I'm not going to be able to bet on a single topic. But I do think, and I hope, that 2024 will be defined as a year in which we see the widening of AI applications as well as AI technology, not just a focus on one or two companies.

Andrew, how do you handicap the foundation model leaders for this year?

So, it turns out that every time there's a wave of tech innovation, the media likes to talk about the technology layer, which is why the media focuses on OpenAI, Google, AWS, Microsoft, Meta, and so on, or on NVIDIA and AMD. Nothing wrong with that. But it turns out that for this technology and infrastructure layer to be successful, there's another sector that needs to be even more successful, and that's the application layer built on top of these tool providers. Frankly, we need the applications built on top of these tools to generate even more revenue, so that they can afford to pay the tool builders. I think Sequoia wrote a nice article showing that, given the capital investment in GPUs, we'd better collectively generate applications to fill in the tens-of-billions-of-dollars kind of hole created by the capital investments that have already been made in GPUs. So again, I don't know what the media will do; the hype cycle is whatever it is. But I think a lot of the actual work will not just be at the foundation model layer; it will be going into health care, financial services, education, e-commerce, all of these different sectors, to identify and execute the projects that are now possible.

Yeah. I grew up
in tech and mobile, and in mobile you had many multi-billion-dollar app companies get created, and I do think, I agree with you, there's going to be something here like that.

Staying with foundation models for a moment: one of the major topics today is the lawsuits on generative AI and intellectual property. How do you see these lawsuits evolving, and should The New York Times be compensated for the use of its content in training?

Do you want to take that? You're the one on Twitter talking about it. I've already said some stuff that might get me into trouble, so I might as well. Sorry. I did look through the New York Times, OpenAI, and Microsoft lawsuit. I'm not a lawyer, not giving legal advice or any form of advice for that matter, but my sympathies lie much more with OpenAI and Microsoft than with The New York Times. Candidly, when I read the New York Times lawsuit, I felt it was a very muddy argument. I wish the New York Times' lawyers were held to the same standards of clarity and journalistic explanation as its reporters are, but I don't think they are. The New York Times had this somewhat, I thought, sensationalist showing of "oh, OpenAI regurgitates New York Times articles," but I think the way it was presented was frankly a little bit strange.

How so, Andrew? What did you find as a flaw in the arguments in that brief? So, two
things. One, and maybe the more important, is the prompts: what you type in to get it to regurgitate New York Times articles. That was a very strange prompt that I don't think pretty much any normal user of OpenAI would ever use. I think it is true that The New York Times found, as OpenAI characterizes it, a bug: a bug in which OpenAI does regurgitate articles, which it should not do. I don't think regurgitating copyrighted content at scale is appropriate. So OpenAI has a bug where it does that, and The New York Times kind of points it out and says: you have a bug, you have a bug, you have a bug. And yes, OpenAI has a bug; we all, sadly, sometimes have bugs in our software. And I think there was another thing that was strange. I believe that in some of the examples The New York Times showed, you can write a prompt for ChatGPT telling it to basically go and download the New York Times article and then tell it to print it out. I feel that just because it does that, it's not the same thing as the fact that OpenAI trained on a lot of text data from the internet, including New York Times articles. I think the lawsuit tried to draw a link between OpenAI training on a lot of text that includes New York Times articles and this specter of OpenAI engaging in mass regurgitation of New York Times text, which I don't think is really telling the full story.

Andrew, you should expect a call from the District Attorney's office. Let me add to this, not specifically on the New York Times... Meaning Andrew will be an expert witness. That's my 2024 prediction; I'm just kidding. So I do want to add to
this and zoom out a little bit to the tension between generative AI and the creator economy. I'm not as nuanced as Andrew about the specifics of the New York Times dispute, but even in my book I mention the messiness of this technology. Those of us who trained in the technology itself love to see deterministic, even in probability theory mathematically rigorous, things. But the truth is, when the rubber hits the road, especially with a technology as profound as this one, it gets messy with the human world, with human society. What the New York Times lawsuit with OpenAI and Microsoft is really showing us is indicative of the tension we're seeing with the creator economy, which the internet has really scaled, and this impacts not only big players like the New York Times but little players like a single artist, a photographer, a music composer. That whole ecosystem is being challenged, disrupted, as well as augmented, by today's generative AI technology, and we're seeing that tension playing out now. In addition to the New York Times lawsuit, we're seeing artists engaged in lawsuits with Midjourney and others. So I actually think, and Andrew especially has been calling out the media, and I do plead with the media to look into this with a much more nuanced lens, and whether it's the public sector or the private sector, we should pay much deeper attention to this issue than just scratching the surface.
There is a very human element to this. During the Hollywood strikes, a lot of the difficulty was just, boy, if you're a creator and you think, is my job going to go away, is all my work going to be stolen, it is a very deeply emotional thing, and I actually do sympathize with that. I think the fears of job loss are probably greater than the losses will be, because of what we said earlier: jobs are made out of tasks, and even artists' jobs are made out of many tasks. Yes, AI could automate some of those tasks, maybe 20 or 30 percent of someone's tasks, but that still leaves a lot of other tasks that we need people to do, and maybe they could be more efficient and actually even make more money. But there is this fear that I think is challenging, and I think the AI world needs to do a better job of having that conversation and reassuring people. I won't say there'll be no job losses, that's just not true, but I think it won't be as bad as is feared. Let me jump back to the group
again here, and again, a show of hands. Three options: all internet content should be available for training (not inference, but for training); some content categories, some categories of content providers, should be compensated for their content for training; or many or most content providers should be paid for their content for training. How many folks feel that all internet content should be available for free for training? You'd just go all open internet, right? All open internet. And there's kind of an interesting argument here, you know: the University of Michigan quarterback might have watched Tom Brady, but he probably didn't pay Tom Brady for yesterday's performance. Okay, how many folks feel like some content providers, some categories of content providers, should be paid for their content for training models like OpenAI's and others? And how many folks feel like many or most content providers on the open internet should be paid? So maybe like 30-40, maybe 30-50-10, or 30-50-20, something like that. I'm sure this is how the law is made. By the
way, just on this: I think copyright law was written in a previous era, and it needs to be cleaned up for the generative AI era. There are these difficult questions about what is best for society. The US government, we collectively as a society, can pass whatever laws we want, and these are very difficult debates about how we compensate creators fairly while also enabling tech innovation. And what does the open internet mean? Do we want it to be a little bit less open than it has been? These are actually really difficult questions about what's best for society. Awesome, we're going to do a
lightning round for these last four minutes. So, true or false, and in 30 seconds, why. Maybe, Andrew, I'll start with you: open-source LLMs will reach the level of the best closed-source LLMs within the next three years.

I don't know, but they might. There's a lot of momentum to beat today's closed LLMs, but the race is on; we'll see what happens with closed source versus open source.

I'm going to say it's false, not because of a discrepancy in quality but because of a divergence in the kinds of models, because the data are very different. Closed source will pivot much more toward deepened business cases, and open source will be different.

So, true or false: AI-generated election disinformation will be everywhere, but it will not shift the outcome of the 2024 election. And why?

More or less true. The reason I'm saying this, and I'm part of the effort, so I'm trying to believe in this, is that the strength of democracy does not rely on technology itself, and the flip side is also true. The strength and weakness of a democracy rest on its people, and if we do the right public education and have the right public discourse, we are stronger than we believe we are.
Yeah, I don't know, it's tough. I think until now...

True or false, Andrew, true or false?

I think it's probably false, but I'm not sure, because until now the bottleneck for the dissemination of falsehoods, of disinformation, has been distribution: people can write fake stuff, but it's really difficult to get that fake stuff in front of a large audience. I think the risk is personalized disinformation, personalized disinformation and persuasion. But the technology is early, and the building up of defenses is also still early. A deeply technical
topic: Transformers are hitting a wall, and we need a new solution, such as sub-quadratic architectures or liquid neural networks.

Liquid neural networks? Well, coming from the pixel world, which is deeply non-1D, it's 2D, it's 3D, it's multi-dimensional, I do think we need breakthrough technology beyond sequence-to-sequence models. That I absolutely believe, so I guess it's true, or true-ish.

I'd say Transformers are not hitting a wall. If all we have is Transformers for the next five years, we still have tons of room. And we also need new breakthroughs, because I wish we had something much better than Transformers. Andrew, the next one is for
you, as managing general partner of the AI Fund, and Fei-Fei, totally opine as well: I would invest in OpenAI at a $100 billion valuation.

Yeah, no, no comment. OpenAI is a great company, Sam was my student at Stanford, really deep respect for OpenAI, but no comment on investment decisions.

I'm too poor to invest.

Well, that may or may not be 100% true. Fei-Fei, a question: true or false, there is an existential threat to humanity from AI.

Not now. There are catastrophic societal risks, from democracy to data bias issues, algorithmic bias issues, to labor market shifts. These are true societal risks, but not the kind of conscious, sentient-being existential crisis. Not now. Andrew, the last one is for
you. This is from Ivan Garcia from Gunderson Dettmer, and true or false, at least as a venture capitalist: at least one AI startup will raise a substantial round of financing before investors realize that the company contains no actual humans and the founder is a bot.

Not this year. Not for a while.

Everybody, please join me in thanking Fei-Fei Li and Andrew Ng.