Rethinking How AI And Humans Interact To Get The Best Of Both
Summary
TLDR: In this talk, Pat, a PhD candidate at the MIT Media Lab, interviews his mentor, Professor Pattie Maes, about the interaction between artificial intelligence (AI) and humans. Maes began her career as an AI researcher at MIT's AI Lab more than 30 years ago, later became interested in technology that assists people, and moved to the Media Lab. She believes AI can augment human abilities and help solve real problems in areas such as health, learning, and memory support for the elderly. At the same time, she argues that AI development is also a human design problem: it is essential to study how people collaborate with AI and to design the right interfaces. She also stresses that for AI to truly augment human abilities, people must engage with it critically rather than accept its answers uncritically.
Takeaways
- 🎓 The speaker moved from MIT's AI Lab to the Media Lab after becoming interested in technology that augments humans.
- 🧠 Many AI developers are pursuing AGI (Artificial General Intelligence), but the speaker argues development should instead focus on AI that supports humanity.
- 👥 At the Media Lab, her group runs projects that apply AI to problems such as learning, health, and memory support for the elderly.
- 🤖 The speaker has long advocated software agents that assist people and make managing and using information more efficient.
- 🔍 There was once a debate between software agents and direct manipulation; the speaker argued that agents can surface relevant information more effectively.
- 📚 AI is not just a technical problem but also a human design problem and a problem of trust, the speaker says.
- 📉 The speaker is concerned that AI development is concentrated in industry and that funding is being absorbed by investment in AGI.
- 👨🏫 AI can also play an important role in education; the speaker believes it should function as a tool that supports learning.
- 🧐 The speaker suggests it is important to encourage critical thinking when people collaborate with AI.
- 📊 The speaker emphasizes that AI development requires expertise across many fields, including psychology, design, sociology, politics, and law.
- 🚫 The speaker argues that public engagement and debate are crucial to AI's future, and is concerned that both are currently insufficient.
Q & A
Why did Pattie move to the Media Lab?
-Pattie became interested in research focused on the human side and moved from MIT's AI Lab to the Media Lab. Rather than developing intelligent machines, she wanted to augment people and help them perform at a better level.
What goal are AI developers pursuing today?
-Many AI developers today are pursuing AGI (Artificial General Intelligence). They believe that achieving AGI will make the world a better place.
How does Pattie think about interfaces that augment human abilities?
-Pattie uses AI technology to address people's real problems, working on projects related to health, learning, and memory support for the elderly. She believes AI can help people perform at a better level.
What position does Pattie take in the debate with Ben Shneiderman?
-Pattie argued that software agents should augment human abilities, while Ben Shneiderman advocated direct manipulation interfaces.
Why does Pattie believe human-AI interaction is important?
-Pattie believes AI is not merely a technical problem but a human design problem. She argues that many questions around trust, understanding, and design still need to be solved.
Will human skills be maintained as AI develops?
-Pattie is concerned that people may rely too heavily on AI and lose their own skills. AI should assist human skills, not replace them.
What role will AI play in education?
-Pattie believes AI in education should support the learning process and promote more effective learning. Rather than handing people answers, AI should engage them in the problem-solving process and stimulate their thinking.
What does Pattie think about the future of AI?
-Pattie is excited about AI's potential contributions to education, health, and other areas, but is concerned about the goal of pursuing AGI. She feels there is too little discussion about the future of AI and argues that experts from diverse fields should be involved.
What should people watch out for when interacting with AI?
-When interacting with AI, people need to be careful not to rely too heavily on the information it presents. Pattie believes AI should stimulate people's thinking and engage them in the problem-solving process, not simply hand out answers.
How will people's lives change as AI develops?
-AI can change people's lives for the better, but Pattie warns that the goal of pursuing AGI does not put people's interests first. AI should augment human abilities and contribute in areas such as health and learning.
Outlines
🤖 From AI Research to the Media Lab: Pattie's Journey
Pattie started her career as an AI researcher at MIT and later moved to the Media Lab to focus on human-AI interaction. She argues that the perspective of using AI technology to augment people and enhance their intelligence is more relevant today than ever. In particular, while companies pursue AGI (Artificial General Intelligence), she stresses the need to develop technology that serves people's interests.
📱 The Challenges of Today's AI Interfaces
Pattie notes that the interfaces of the devices we use today, such as smartphones and laptops, are far from ideal, because they force people to divert their attention in order to look up information. AI agents should understand the user's context and provide relevant information proactively, helping people stay focused on the moment they are in.
📰 The Importance of Human-AI Interaction
AI is not merely an engineering problem; it is also a human design problem. In particular, Pattie notes that people tend to over-trust AI, which can end up degrading their performance. She emphasizes the importance of designing interfaces that prompt users to think critically, in order to foster genuine collaboration between humans and AI systems.
🔬 AI's Past and Its Lessons for Today
One reason the expert systems developed during the AI boom of the late 1980s and '90s failed was that human factors were neglected. Pattie worries that the same problem could repeat itself in today's AI development. She expresses concern that AGI development is absorbing all the funding and calls for a broad discussion about the future of AI.
🌟 The Future of AI: A Human-Centered Approach
Pattie is hopeful that forms of AI other than AGI can deliver major benefits in fields such as education and health. She emphasizes the importance of a human-centered approach to AI development and suggests that companies run small-scale experiments when integrating AI into work processes. She adds that AI development requires diverse expertise beyond engineering, including psychology, design, and sociology.
Keywords
💡Artificial Intelligence (AI)
💡Media Lab
💡The Human Side
💡AGI (Artificial General Intelligence)
💡Interface
💡Software Agents
💡Human Design Problem
💡Trust
💡User Studies
💡Education and Learning
Highlights
Pat and Professor Pattie Maes discuss the transition from AI research to focusing on human augmentation at MIT Media Lab.
The importance of using AI to augment human intelligence rather than replacing humans is emphasized.
Current AI development is criticized for focusing too much on achieving AGI (Artificial General Intelligence).
The need for AI to be used for benefiting humanity and people is advocated.
Projects that address health, learning, and memory for the elderly using AI are highlighted.
The debate between software acting as agents to augment humans and direct manipulation interfaces is discussed.
The potential of software agents to allow people to be more present by proactively providing relevant information is explored.
The importance of human design in AI development and the risks of overlooking it are underlined.
Studies showing people's over-reliance on AI and the resulting degradation of performance are mentioned.
The need for interfaces that engage users in critical thinking with AI is suggested.
The implications of AI in education and the importance of not just providing answers but engaging learners are discussed.
The interdisciplinary nature of AI development, involving psychology, design, sociology, and legal aspects, is emphasized.
The concern that the pursuit of AGI could lead to another AI winter is expressed.
The call for involving target users in AI development to prevent past mistakes from recurring is made.
Recommendations for those interested in a human-centered approach to AI include conducting user studies and involving various disciplines.
The potential benefits of AI in education, health, and other areas are acknowledged, with a warning against the singular focus on AGI.
An invitation for those interested in augmenting human intelligence with AI to visit the media lab's group is extended.
Transcripts
yeah so this session is going to be
about human AI interaction and or the AI
interface and um my name is Pat
I'm a PhD candidate at the media lab and
right here is my mentor Professor Pattie
Maes who is also the director of the MIT
media lab fluid interfaces group so it's
great uh to have this conversation with
you Patty um so the first question I
have for you is um you start off as an
AI researcher at MIT and then you
transition to the media lab to work on
more of the human side of this kind of
question can you maybe talk a little bit
about your journey from the AI lab to
the media lab I did my PhD in AI over 30
years ago and uh in
Belgium and um moved to the AI lab what
was then called the AI Lab at MIT now
called CSAIL uh because that was of
course the mecca for AI research at that
time and still um but after a couple of
years years at the AI Lab at MIT I
decided that I did not necessarily want
to develop intelligent machines that one
day could replace us uh surpass us that
I was much more interested in using
these same
Technologies to augment people to make
people more intelligent uh perform at a
better level and so on so that's
actually why I moved from the the AI lab
to the media lab and I think that that
point of view is more relevant than ever
today um AI developers today mostly
these days in companies like open Ai and
so on are especially focused on and
obsessed with this goal of AGI if only
we achieve AGI then the world will be a
better place I think that that is a very
naive goal and um it's really
unfortunate that all the money is going
towards that goal right now we should
think about why we um or what goal we
set for ourselves with AI and I think
surpassing people um in intelligence and
replacing people is one of the worst
goals we could set for ourselves so I
really have been advocating for decades
that we should instead focus on how do
we use these Technologies to benefit
Humanity benefit people yeah so in your
work you you focus on on human and
augmenting human sort of intell
intelligence what are some of the sort
of interface or what are some of the
project that you have done in this area
that address this yeah so I'm very
excited about the potential for these
Technologies to help people with issues
such as health learning
um even memory for the elderly and so on
so in our group we mostly take um real
problems out there um and try to think
about how these AI Technologies can be
um um used to basically benefit people
for example democratizing learning
making learning more effective by using
um AI Tutors or helping people with
mental health um or physical health by
using um AI systems that get to know
them uh get to know um their particular
issues ETC and that can assist them
right yeah I think at the time when you
came up with this idea of intelligence
system and and and you know human
computer interaction there was a huge
debate uh in the community between you
um and Ben Shneiderman I think you
advocate for the idea of you know
software acting as agent that augment
human being where Ben was talking about
direct manipulation maybe how how does
that work and and how is that irrelevant
today so yeah it's a little funny um
back in 97 I published a Scientific
American article on software
agents and so over 30 years ago um uh or
no not quite but
almost and um I argued that people were
getting more and more information uh all
the time and that they would need
software agents that knew about their
goals about uh their life about their
information that could help them with
keeping track of all this data making
data available um in real time uh
basically
proactively uh for people I still think
that the interfaces that we use today uh
to access the information World um these
smartphones laptops and so on the whole
style of interaction is actually not at
all um perfect or ideal because it
requires that we take away our attention
from the situation that we're in the
people we're talking to to then like um
look up information that may be relevant
or um and things like that so I've
always argued that software agents can
take a large role in really allowing to
be more present in the moment because an
agent can actually bring um up
proactively information that is relevant
to whatever is happening right now like
an uh my my phone should know that I'm
here on stage in an interview and that
it shouldn't be buzzing right now or
maybe if there is something particularly
relevant to what I'm saying maybe I'll
get just one word or something hint from
an agent saying don't forget to mention
X or something so I think there's a we
an opportunity to really rethink our
relationship with devices by using AI
systems and in a way this type of
interaction is very proactive in the
sense that you know the AI not just like
wait for you to go and use it it
actually uh personalizes and actually knows
who you are and kind of understand the
context and things like that yeah so
we've always been working on agents that
are more aware of the user's context
what a user is doing uh are they in a
supermarket uh shopping for whatever
toilet paper are they talking to someone
what are they talking about right Etc
are they reading an email message and
the agent can proactively make relevant
information available to the person for
whatever the problem is the issue is
that they're dealing with right now so
fast forward after that Vision now we
have this sort of you know generative AI
agent or a system like that that are
being deploy and develop people might
argue that if you just make the AI more
powerful or smarter this question will
solve itself but in your case you still
think that we need to focus on the
interaction and the interface why is
that the case yeah so I think AI is not
just an engineering problem even though
Engineers think so yeah um AI is a as
much a human design problem and human
design is not something that you solve
after the fact when you have AGI oh
let's now like decide on colors and
fonts and bells and whistles no human
design is a big problem right we need
people to have the right level of trust
in the AI systems that help them we need
people to understand how these AI
systems function and why they come up
with a particular decision there are so
many human design issues that are
incredibly important and we are finding
that with studies that we're doing for
example people overly rely on AI
especially generative AI large language
chat like interfaces they are all very
persuasive very believable they always
come up with a great answer a very
believable answer even though it may be
totally wrong and uh
hallucinating and we find in studies
that we do that people start relying on
these um AI systems too much and they
stop thinking for themselves they stop
engaging with um the problem at hand
because they say like oh my AI seems to
know the answer sure let's accept that
so we have um several studies that we've
done that show that people's performance
can actually
degrade by using AI because they overly
trust uh the AI so there's a lot of
important issues that we have to solve
if we want AI deployments ultimately to
be successful and I would um argue that
most AI deployments will be in in the
context of people working with the AI
whether uh they'll only have some
minimal supervision and auditing of
whatever the AI does versus true
collaboration uh or even mostly the
human dealing with the problem so there
will be primarily um AI applications for
people and AI to collaborate together
not AI operating autonomously which is
why it is so critical that research how
people work with AI and how we can best
design these interfaces so that we
ultimately benefit from the intelligence
of the AI and the intelligence of uh
people well one thing that I think is is
kind of interesting and and a little um
unintuitive is that people often think
that if you have you know human
intelligence and and artificial
intelligence put them together you have
a combined super intelligence but from
study that you mentioned that we have
done in our group we show that you know
people tend to just rely on you know one
intelligence and not using the other one
what are some sort of you know
techniques or some of the interaction
method that can be used to actually you
know augment human rather than making
human complicit to this system yeah we
uh for example did a very simple study
where we had people judge newspaper
headlines and decide whether they were
fake news or True News and um we gave
some people a correct AI that gives them
correct advice and explanations like yes
this headline is right or true and
because of these reasons uh we gave
another group um basically a malicious
AI that tried to convince you of the
opposite of the truth for all the
headlines and then a third group was not
using an AI at all and we learned that
um the group that got the malicious AI
their performance their accuracy was
almost half
of their accuracy of working by
themselves without any AI assistance so
they overly relied on AI I would uh
speculate also that or hypothesize that
the more people work with AI uh the more
deskilling could happen because people
start relying on AI so much that they
forget how to deal with the issue
themselves in the first place so we have
been arguing um uh for interfaces that
uh engage the user in critically
thinking together with uh the AI in a
conversation thinking about the problem
before the AI actually says um this is
true or this is false or this is cancer
or this is not cancer you have to engage
the user in thinking about um the
problem at hand the decision at hand
before the AI actually uh gives its
classification or its recommendation so
in a way sort of you know use the
intelligence of the AI to engage people
intelligently and then see how they kind
of come up with answer together so not
just the AI just give out the answer
right away exactly I think this had
implication in education and learning as
well right what what what do you think
about that yeah well we all know that
the best method of teaching people some
skill or teaching young people a skill
is not not to tell them what the answers
are and and what all the stuff is that
they should try to uh put into their
their brains the best question is to
engage uh the best method is to engage
them engage them in asking questions
engage them in the material don't just
give them all the answers get them
excited get them thinking get them to
discover the answers themselves with the
guidance of a good teacher so I think AI
systems should play that role they
should be more um assisting us in our
decision making process without taking
over um um and and basically um with the
result that then we uh tune out
basically and you know in order to do
this it seems like you need to draw a
lot of knowledge from other fields right not
just engineering of the AI itself but like
you know human cognitive science or
psychology or UX area how does that
again I think AI is not just an
engineering problem it's a psychology
problem it's a design problem it's a
sociology problem it's a a political
problem a a legal problem and
unfortunately all of these other domains
are sort of afterthoughts and and the
questions they think about and all the
attention is going to engineering right
now
but I believe that these other areas of
expertise should be involved um right
now in the development of AI well one
wisdom that you have shared with me
earlier was that you know in the earlier
day when AI was called intelligence or
or expert system right you say that it
already surpass human in many capability
but it wasn't adopted because of this
social or you know the human issue do
you think it's going to continue to
happen yeah so back in the late 80s and
'90s there was another wave of interest
in artificial intelligence called expert
systems and there was a lot of money
being thrown at AI not as much as today
but also significant amounts and once
again AI was dealt with as an
engineering problem at that time so
people would develop AI expert systems
that would come up with a medical
diagnosis for example and these systems
performed pretty well um at that time
not as good maybe as our systems today
but they were very useful but then
Engineers would like drop this expert
system in the hands of the target user
and that Target user say a doctor making
a diagnosis would say well I can't trust
this system I wasn't involved in
designing this system I studied for 10
or 13 years to become an expert uh in my
field why should I like even ask this
computer for a second opinion about
decisions so a big reason why expert
systems were not a success was actually
these human factors right and that were
sort of again an afterthought and then
that was of course followed by an AI
winter so I would argue meaning an AI
winter less money for AI less interest
in AI I would argue that we risk
another AI
winter uh coming up because again we
only have engineers and money people
interested in Ai and deciding what
direction to take in as opposed to
really everybody out there not just all
disciplines but Target users Target
users should be involved we should all
have a discussion together uh really
about what AI future we want and
unfortunately that discussion is not at
all happening right now it is just
assumed that we all want AGI and that's
it and then we'll solve all the problems
later after we've deployed AGI uh to
billions of people right so I have two
more question for you so one is if
people here are interested in sort of
taking that more um human-centered
approach to AI what should they do
Beyond like come to your lab and talk to
you what are the the way that they can
get started on thinking about this I think
it's important to do more um smaller
uh user studies or studies to see how
people respond to AI how they really
work with AI because this whole um
Vision that AI if you give it to people
they'll just be super super performing
whatever it's not true uh it's not
necessarily the case people also respond
very differently from person to person
to AI some people are afraid other
people uh afraid that the AI will take
their jobs other people fall in love
with AI almost literally and think it's
sentient and so on it's just a big mess
out there right and I would say that
companies especially should be careful
and should do smaller scale experiments
to integrate AI into work processes to
see what really happens how the uh
employees really use these systems and
engage with them right because other
issue like hallucination or you know
when it make up things right still going
to continue to exist and we human I
think is is the one that going to need
to be able to navigate them so I think
there a lot of uh ideas that can be
deployed in that so the last question
for you is then what are you most
excited about or what you know what are
you most concerned about this day um
yeah yeah well one thing I'm actually
very concerned about with respect to the
state of AI and you already heard it is
that all the money is uh being spent or
most of the money in Industry uh right
now uh this goal of AGI is sucking up
all the funding out there for a goal
that I would argue is not at all a good
goal to to strive for um AI uh research
used to happen primarily in a
university context until recently where
we have standards of discussing problems
being open about approaches um uh trying
to get a lot of feedback peer review Etc
and unfortunately AI development right
now is mostly happening in industry and
is driven purely by money interests uh
not at all by Humanity's interests I
would argue so that is a big concern
that I have about what is happening uh
today at the same time though I am still
an AI researcher and I am excited about
the potential for AI not AGI but other
forms of AI to uh really benefit uh
education and learning benefit health
and so on but that goal of AGI is not
necessarily uh what is going to bring us
there that's really that's really
important so I think that's the
conversation today um if you're excited
about augmenting human intelligence or
augmenting human capability with AI you
can talk to Patty or I guess come visit
our group on the fifth floor at the
media lab we're upstairs so thank you so
much thank you thank
[Applause]
you