Data of the "Mind": From the Perspective of Behavioral Design
Summary
TLDR: Professor Soichiro Matsuda discusses "mental data" as it relates to behavioral design from a psychological perspective. He explores the differences between humans and machines and explains the methods of psychological research. Focusing on experimental research, he analyzes interactions with children with autism spectrum disorder in terms of stimulus-response units and develops technologies that promote social interaction. He also touches on discovering psychological phenomena with new technologies and on the gap between data collection and data interpretation.
Takeaways
- 📚 Professor Soichiro Matsuda teaches mainly psychology at the University of Tsukuba's Faculty of Human Sciences, and in today's data science video lecture he discusses mental data from the perspective of behavioral design.
- 🤖 "Artificial intelligence" has become a buzzword, and he asks his students what they think humans can do that machines cannot.
- 🧠 Psychological research aims to explore mental phenomena such as consciousness, emotion, empathy, imagination, pain, and love in a scientific and objective way.
- 🔍 Psychological research falls into three main categories, each with a different purpose: descriptive, correlational, and experimental research.
- 📈 In experimental and correlational research, responses can be measured by having subjects write answers, choose between options, or speak into a recording.
- 🧠 Brain imaging research measures brain activity at the same time as these other response-gathering methods, and many more tools may become available for psychological measurement in the future.
- 👨🏫 After earning his Ph.D. in psychology, Professor Matsuda worked as a researcher at an artificial intelligence laboratory and an autism research center, and he specializes in autism spectrum disorder.
- 🔧 Applied behavior analysis is a branch of psychology that uses behavioral principles to solve real-world problems and studies the interaction between the individual and the environment.
- 👶 In interactions with children with autism spectrum disorder, humorous interaction can dramatically increase the frequency of eye contact and smiling.
- 👓 An interface built on gaze estimation technology makes it possible for non-experts to easily code social attention.
- 🤹♂️ Interactive devices such as communication device balls are effective for developing social interaction skills and help teach social play skills to children with autism spectrum disorder.
- 👓 Lowering one's point of view changes the sense of interpersonal distance, suggesting that getting down to a child's eye level makes interactions with children feel more approachable.
Q & A
What field does Professor Soichiro Matsuda mainly teach?
-Professor Soichiro Matsuda is a professor in the University of Tsukuba's Faculty of Human Sciences, where he teaches mainly psychology.
What phenomena does Professor Matsuda focus on in his psychological research?
-Professor Matsuda focuses on mental phenomena such as empathy, emotion, and pain.
What is "descriptive research" in psychology?
-"Descriptive research" is the category of psychological research used to understand that X is occurring.
What is the difference between "correlational research" and "experimental research"?
-"Correlational research" asserts that X is related to Y, while "experimental research" asserts that X causes Y.
What is the main measurement tool Professor Matsuda has used in his research?
-The main tool he has used is having subjects speak their responses into a recording.
How is Professor Matsuda working to make psychological data easier to collect automatically?
-Drawing on his experience at an artificial intelligence laboratory and an autism research center, he works on acquiring research data automatically and more easily.
What are the two diagnostic criteria for autism spectrum disorder?
-The diagnostic criteria are impairments in social communication and interpersonal interaction, and restricted and repetitive interests or behavior.
What research has Professor Matsuda conducted using gaze estimation technology?
-He developed an interface that uses gaze estimation so that non-experts can easily code social attention.
What is the motion capture research Professor Matsuda is working on?
-He uses motion capture to model how relative positioning changes when an adult interacts with a child with autism spectrum disorder.
What is the "wearable suit with a head-tracking camera" used for?
-It is one of the new devices that can reveal psychological phenomena by changing the wearer's point of view.
What gap does Professor Matsuda emphasize between "data collection" and "data interpretation"?
-He emphasizes the important gap of how to make sense of collected data and frame it in a form that humans can understand.
Outlines
😀 Mental Data from the Perspectives of Psychology and Behavioral Design
Professor Soichiro Matsuda teaches psychology at the University of Tsukuba's Faculty of Human Sciences. In this video lecture, he discusses what humans can do that machines cannot, in comparison with artificial intelligence, and argues for the need to measure mental data such as emotion, empathy, and pain scientifically from a psychological perspective. Psychological research is divided into descriptive, correlational, and experimental research, with his own focus on experimental research. Psychological responses can be measured through writing, answering questions, or voice recordings.
🧠 Brain Imaging Research and Advances in Psychological Measurement
Professor Matsuda explains that psychological measurement has evolved from qualitative assessments to quantitative and inexpensive methods. In the future, eye contact, facial expressions, and body language may also become measurable. His own research specializes in autism spectrum disorder, a developmental disorder, to which he applies behavior analysis.
👶 The Operant as a Unit of Stimulus, Response, and Feedback
An operant treats the interaction between the individual and the environment as a single unit. The back-and-forth cycle of stimulus, response, and feedback, in which the individual acts on the environment and the environment acts on the individual, is the basis of behavior analysis. Using this principle, the professor studies social interaction with children with autism spectrum disorder and found that humorous interaction can dramatically change children's responses.
🤖 Developing Social Interaction Skills with Communication Balls
Professor Matsuda studies embedding social interaction into play itself. The communication balls are devices in which shaking one ball makes the other light up, which helps children learn the principles of social interaction. He also uses motion capture to study how relative positioning changes in interactions between adults and children with autism spectrum disorder.
👓 Changing Perspective with a Point-of-View-Altering Wearable Suit
The research examined how the sense of distance between people changes when the point of view is lowered. The results showed that the lower the point of view, the greater the interpersonal distance. It also showed that changing perspective can reveal psychological phenomena.
🔍 Discovering Psychological Phenomena with New Technologies
Professor Matsuda uses new devices and technologies to discover and study psychological phenomena. For example, a headband device that measures face-to-face behavior may be used to support children with autism spectrum disorder. He also emphasizes that there is a gap between collecting data and interpreting it, and that how we make sense of the data matters.
📊 The Gap Between Data Collection and Interpretation
Finally, Professor Matsuda discusses the gap between collecting data and interpreting it. He concludes that it is important to keep thinking about how to measure psychological and mental characteristics by recording concrete events and phenomena, what data has already been measured, and what remains unmeasured.
Keywords
💡Psychology
💡Artificial Intelligence
💡Behavioral Design
💡Descriptive Research
💡Correlational Research
💡Experimental Research
💡Autism Spectrum Disorder
💡Applied Behavior Analysis
💡Gaze Estimation Technology
💡Social Interaction Skills
Highlights
Professor Soichiro Matsuda teaches mainly psychology at the University of Tsukuba's Faculty of Human Sciences.
This lecture explores mental data from the perspective of behavioral design.
Artificial intelligence has become a buzzword, but what distinguishes humans from machines?
Psychological research aims to explore uniquely human mental phenomena scientifically and objectively.
Psychological research falls into three main types: descriptive, correlational, and experimental.
Experimental research uses controlled experiments to assert that X causes Y.
Psychological measurements can be made through writing, choosing between options, or verbal responses.
Brain imaging research measures brain activity at the same time as responses gathered by other methods.
Future psychological measurement tools may include eye contact, facial expressions, and body movements.
Professor Matsuda specializes in research on autism spectrum disorder, a developmental disorder.
Applied behavior analysis solves real-world problems based on behavioral principles.
Behavior analysis uses the operant as its unit, involving the individual, the environment, stimulus, response, and feedback.
Professor Matsuda's research showed that humorous interaction dramatically increased the social responses of children with autism.
The research demonstrated how computer vision technology can assist video coding of social attention.
Research on developing social interaction skills was conducted using communication device balls.
Motion capture technology is used to model changes in relative positioning when adults interact with children with autism.
A wearable suit that changes the point of view is used to study interpersonal space and the "Air Hand" illusion.
A headband device measures face-to-face behavior, showing that face-to-face contact and eye contact may have different functions.
The lecture emphasizes the gap between data collection and data interpretation, and how to measure psychological and mental characteristics scientifically.
Professor Matsuda's lecture concludes with a discussion of the importance of data interpretation.
Transcripts
Hello, everyone.
My name is Soichiro Matsuda,
and I teach mainly psychology at the University of Tsukuba's Faculty of Human Sciences.
I look forward to working with you today.
Today, in this video lecture on data science,
I would like to talk about mental data
from the perspective of behavioral design.
The term "artificial intelligence" has become a buzzword, hasn't it?
There is a question that I ask students in my classes.
The question is along the lines of, "What do you think humans can do that machines can't?"
The ideas that often come up are:
"humans can have consciousness, but machines can't",
"humans can have emotions, but machines can't",
"humans can feel empathy, but machines can't",
"humans have an imagination, but machines don't",
"humans can feel pain, but machines can't", and
"humans can love, but machines can't".
Are these all true?
These were given as things possible for humans but not for machines.
But, are humans really able to do all of these things?
How do you know that you can do these things,
or that humans can do them?
We want to explore these things that only humans can do,
such as empathy, emotion, pain, and other mental phenomena,
in a scientific and objective way.
This is called the study of psychology.
But what do we need to scientifically and objectively
explore mental phenomena such as empathy, emotion, and pain?
We need measurements, of course.
Now then,
how can we measure the mind scientifically and objectively?
There are many different types of research in psychology,
but broadly speaking, there are three main categories.
The first is descriptive research,
which lets us understand that X is occurring.
The second is correlational research.
This research is used to assert that X is related to Y.
The third is experimental research.
This research is used to assert that X causes Y.
Descriptive research can be either based on observation
or field research.
Correlational research is mainly done through field research or surveys.
Experimental research is done through controlled experiments.
Personally, my main focus among these three is in experimental research.
In experimental and correlational research,
one way to measure people's responses is by,
for example, making them write down their responses.
When asked to choose between 1 and 4 for a question,
subjects may have to circle the answer,
or type it,
as in using a keyboard to respond to what is shown on a display.
Another way is to ask "Which do you like better, this or that?"
and making the subjects choose.
Another option,
which is the main tool I have used in my research,
is to ask the subject to speak into a recording,
which is then transcribed into writing.
So-called brain imaging research is
about measuring brain activity at the same time
as using one of these other methods to gather responses.
In the future,
we may have many more tools to make psychological measurements.
For example, eye contact, face-to-face contact, facial expressions, and body movements.
Of course, it's not that we've suddenly become able to measure these things recently,
because we've been measuring them for a long time.
However, what used to be mainly qualitative assessments made by humans,
can now be measured quantitatively and inexpensively,
without having to rely on expensive machines that were necessary until now.
Measurements that used to take many difficult procedures
can now be done easily.
Having already said all that, I'd finally like to introduce myself.
After getting my Ph.D. in psychology,
I worked as a researcher in an artificial intelligence laboratory, then at an autism research center,
and from there I started teaching at a university.
My specialty is in a developmental disorder called autism spectrum disorder.
Autism spectrum disorder is
characterized by impairments in social and interpersonal interaction.
The two diagnostic criteria are impairments in
social communication and interpersonal interaction,
and interests or behavior that is restricted and repetitive.
Various studies conducted over decades have shown that
intensive and continuous training from a young age for children
with autism spectrum disorder results in significant improvements in IQ,
language skills, and adaptive behavior in groups
that received the intervention,
compared to groups that did not receive the intervention.
I specialize in applied behavior analysis,
a branch of psychology that is based on using
behavioral principles to solve problems in the real world.
It starts with the question, "What is behavior?"
Behavioral analysis uses something called
an operant as one of its units.
In an operant,
first there is the individual, or the organism,
and then there is the environment in which the organism exists.
The individual influences the environment,
and the environment influences the individual.
The interaction between the individual and the environment is
what we call behavior.
There can be no behavior for an individual without an environment.
Using T as the time axis of the time series,
the environment acts on the individual in the form of a stimulus,
then the action by the individual on the environment
as a result of the stimulus is called a response,
and then the environment reacts to this response in the form of a further stimulus, called feedback.
This series of stimulus, response, and stimulus is considered to be a single unit.
It may be hard to understand this just from saying it, so let me show you an example.
Here, stopping the shaking is the stimulus,
and the response is making eye contact.
He stops again, and the baby makes eye contact again.
From the baby's point of view,
the shaking stops, eye contact is made, then the shaking continues again.
This stimulus, response,
and stimulus interaction
occurs in the form of eye contact.
Here, he fills up a balloon and blows it in the baby’s face.
After the balloon is filled up, eye contact is made.
When eye contact is made, air is blown in the baby’s face again.
This is another case where interaction occurs in the form of eye contact.
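The stimulus, response, and feedback cycle illustrated above can be sketched as a small data structure. This is only an illustration of the unit; the names (`Operant`, the event strings) are made up for the example, not taken from the lecture.

```python
from dataclasses import dataclass

@dataclass
class Operant:
    """One unit of behavior: environment -> individual -> environment."""
    stimulus: str   # the environment acts on the individual
    response: str   # the individual acts on the environment
    feedback: str   # the environment reacts to the response

# The two eye-contact examples from the lecture, expressed as operant units.
interaction = [
    Operant("shaking stops", "baby makes eye contact", "shaking resumes"),
    Operant("balloon is filled", "baby makes eye contact", "air is blown"),
]

for unit in interaction:
    print(f"{unit.stimulus} -> {unit.response} -> {unit.feedback}")
```

Treating the triple as one unit, rather than the response alone, is what lets the later studies manipulate the feedback (humor, lights, air) and observe changes in the response.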
My research so far
has covered a variety of different topics,
but all of it uses the principle of stimulus, response, and stimulus
to drive social interaction
and interpersonal communication.
In this study, I interacted with children
with autism spectrum disorder just as a typical kind adult,
saying things like "Nice, well done"
and "Yeah, that's right."
As you can see from the results here,
the children hardly ever responded with eye contact, or smiling,
or eye contact plus smiling.
The vertical axis represents the frequency of occurrence, or how often they responded with the given behavior.
Then I tried interacting with them in a humorous way.
For example, instead of just saying "Here's an apple,"
I would say "Ooh, an APPLE!"
while giving the apple,
or instead of returning something with just a smile,
I would make a funny face or do something strange and unexpected during the interaction.
When I did that, there was a huge change in the vertical axis,
the frequency of occurrence, or how often the behavior happened.
They almost always made eye contact.
They smiled a ton. So, even though they hardly ever smiled during previous interactions, they started smiling a lot.
Then, when I returned back to normal interactions,
eye contact and smiling went down,
as you might expect, but when I returned to humorous interactions again,
eye contact, smiles, and eye contact plus smiles went back up.
At this point, it was clear what was happening.
It was obvious how much eye contact
and smiling changed under the different conditions.
That made me very happy,
but this study was conducted for six months.
During that time, I had to watch the videos,
one frame at a time, and decide whether the behavior happened:
here, they looked at me, here, they smiled, here they looked at me, here they smiled.
It was a hellish exercise, and took an incredible amount of time and effort.
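The frame-by-frame coding described above amounts to counting behavior onsets under each condition. A minimal sketch of that counting step, using a toy coding sequence rather than the study's data:

```python
def count_onsets(frames):
    """Count behavior onsets in a per-frame coding sequence.

    `frames` is a list of sets: the behaviors judged present in each frame.
    An onset is a frame where a behavior appears that was absent in the
    previous frame, so a sustained look counts once, not once per frame.
    """
    counts = {}
    previous = set()
    for present in frames:
        for behavior in present - previous:
            counts[behavior] = counts.get(behavior, 0) + 1
        previous = present
    return counts

# Toy coding of six video frames.
frames = [set(), {"eye_contact"}, {"eye_contact", "smile"},
          set(), {"smile"}, {"smile"}]
print(count_onsets(frames))  # {'eye_contact': 1, 'smile': 2}
```

The frequency-of-occurrence axis in the lecture's graphs corresponds to these per-condition counts.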
Doing this research gave me the opportunity to
collaborate with various IT engineers to
see how we could acquire research data automatically
and more easily.
This study, Visualizing Gaze Direction to Support Video Coding of Social Attention,
was research that I conducted together with computer vision researchers at the University of Tokyo.
In the field of developmental support,
there are techniques to assess interpersonal attention,
i.e. social attention, through behavioral observation.
As part of the techniques, an expert has to watch a video,
and make a judgment on whether the subject made eye contact or smiled by checking for certain signs.
In order for people who haven't mastered these techniques to do the same thing more easily,
we developed an interface that uses gaze estimation technology
to suggest points in time when the subject may have made eye contact.
By doing this,
we made it possible for non-experts
to easily do coding of social attention.
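One plausible shape for the suggestion step is to flag frames where the estimated gaze direction falls close to the partner's face, leaving the final judgment to the human coder. This is a hypothetical sketch; the threshold value and the angle representation are assumptions, not details of the actual interface.

```python
def suggest_eye_contact(gaze_angles, threshold_deg=10.0):
    """Return indices of frames where the estimated gaze direction is
    within `threshold_deg` of the partner's face, i.e. candidate
    eye-contact moments for a human coder to review."""
    return [i for i, angle in enumerate(gaze_angles) if angle <= threshold_deg]

# Estimated angular offset (degrees) between gaze and partner's face, per frame.
angles = [35.0, 22.0, 8.5, 4.0, 12.0, 6.0]
print(suggest_eye_contact(angles))  # [2, 3, 5]
```

The point of such a tool is not to replace the expert's judgment but to reduce the "hellish" frame-by-frame search to a review of a short list of candidates.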
Let's go back to the operant,
the unit of behavior which I introduced earlier.
The environment here can also be other individuals, meaning other people.
That means that a stimulus for me can be a response for others,
and a response for me can be a stimulus for others, possibly feedback on their previous response.
In this way, we humans are always influencing each other.
However, when trying to measure this,
we find that it is extremely difficult to control experimentally.
Since humans influence each other,
it is not possible to make everyone respond in a uniform manner.
Research attempting to make measurements until now has been rather artificial,
mostly focused on brain imaging and the central nervous system,
and there has been extremely little research
on the peripheral nervous system.
How do you measure behavior based on
inputs from the peripheral nervous system?
And what about the dynamics between two or more people?
Research on these topics is scarce.
That's why I’ve spent time thinking about
how we can use behavior imaging technology to
analyze interactions when there are multiple responses from the interactions of two or more parties.
This is a study on the development of social interaction skills
using communication device balls.
When you shake one of the balls, the other ball lights up,
but if you shake the ball that isn't lit up, nothing happens.
This is exactly what the process of social interaction looks like.
Until now,
there has been a lot of research on
how to teach social interaction skills to children who have difficulty with it,
but there has been very little research on
embedding social interaction into the act of playing itself.
When we introduced these mutually interacting devices
to children with autism spectrum disorder,
we were able to obtain data showing that
playing with balls with automatic feedback conditions demonstrating the principles of social interaction improves
social playing skills more than just playing
with simple balls.
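The contingency built into the balls (shaking the unlit ball produces nothing) can be sketched as a tiny state machine; the class and method names here are invented for illustration.

```python
class BallPair:
    """Two linked balls: shaking the lit ball lights up the other one;
    shaking the unlit ball does nothing. This mirrors the turn-taking
    contingency of social interaction."""

    def __init__(self):
        self.lit = 0  # index of the currently lit ball

    def shake(self, ball):
        if ball == self.lit:
            self.lit = 1 - ball  # feedback: the partner's ball lights up
            return True
        return False  # shaking the unlit ball produces no feedback

pair = BallPair()
print(pair.shake(1))  # False: ball 1 is not lit, so nothing happens
print(pair.shake(0))  # True: ball 0 was lit, so ball 1 lights up
print(pair.lit)       # 1
```

The design choice matters: because feedback only follows a response to the lit ball, the play itself reinforces attending to the partner's action, which is the stimulus-response-feedback principle embedded in the toy.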
We are also doing research on using motion capture to
model how relative positioning actually changes
when an adult interacts
with a child with autism spectrum disorder.
So far,
I've been talking about the dynamics of interesting interactions
between two parties, but I'm also doing another kind of research.
This is a wearable suit that changes your point of view,
made by Mr. Nishida at the University of Chicago,
and it looks like this, where your point of view is lowered to waist position.
You put on the Oculus headset,
and the camera attached around the waist moves in sync when you move your head.
By doing this, your point of view appears to be lowered.
This is called the stop-distance paradigm,
where the subject asks the approaching person to stop at the shortest distance
where they still feel comfortable with the other person's proximity.
We conducted an experiment to measure this interpersonal space,
and especially tried to determine how much the stop-distance changes when your point of view is lowered.
What we found out was that the lower the point of view,
the greater the interpersonal space.
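The stop-distance comparison reduces to averaging the recorded distances for each viewpoint condition. A minimal sketch with illustrative numbers, not the study's data:

```python
def mean_stop_distance(trials):
    """Average the stop-distances (in cm) recorded for one condition."""
    return sum(trials) / len(trials)

# Illustrative trials (cm); values are made up, not from the study.
normal_view = [60.0, 55.0, 65.0]
lowered_view = [85.0, 90.0, 80.0]

print(mean_stop_distance(normal_view))   # 60.0
print(mean_stop_distance(lowered_view))  # 85.0
# Pattern reported in the lecture: the lower the point of view,
# the greater the interpersonal space.
```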
It’s often said in nursery schools and kindergartens that if you approach a child while standing up,
the child gets scared.
One of the things that we learned from the experimental results is that
it's better to get down to the child's eye level when talking to them.
In addition to studying interpersonal space,
we also did a study of what we call the Air Hand Illusion, where the person in front of the subject says
"Let's shake hands," and reaches out their hand.
You would think that there's no way the person in front of you is so huge, right?
However,
people who put on the Child-hood suit end up reaching way up in the air.
What's important here is that you can't just show the video.
By moving the camera together with the head,
you get more errors, according to our findings.
What I want to say here is that by using new devices and new technologies,
we can discover psychological phenomena that
we were unable to find in the past,
which I am excited about exploring in my research.
There are a lot of exciting possibilities.
Professor Hachisu of Tsukuba University developed
this headband device for measuring face-to-face behavior.
When the wearers face each other like this, the color of the light changes.
We can make the color change as feedback,
or we can make it not change,
and just use the devices to measure whether or not people are facing each other.
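Detecting whether two wearers are facing each other can be sketched as checking that their head directions are roughly opposite. The angle representation and tolerance here are assumptions for illustration, not the device's actual mechanism.

```python
def facing_each_other(yaw_a, yaw_b, tolerance_deg=15.0):
    """Two wearers face each other when their head directions are
    roughly opposite: yaw_a and yaw_b (degrees, in a shared world
    frame) should differ by about 180 degrees."""
    diff = abs((yaw_a - yaw_b) % 360.0)
    diff = min(diff, 360.0 - diff)  # wrap to [0, 180]
    return abs(diff - 180.0) <= tolerance_deg

print(facing_each_other(0.0, 180.0))  # True: directly opposite
print(facing_each_other(0.0, 170.0))  # True: within tolerance
print(facing_each_other(90.0, 90.0))  # False: looking the same way
```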
My idea for how to use this device was that,
because it's extremely difficult to measure
when an adult and child make eye contact while interacting,
since eyes are so small,
at least we can measure when their faces are pointing at each other.
So, I decided to try using this in my research on
supporting children
with autism spectrum disorder.
But before doing that,
I tried to first verify that facing each other
and eye contact occur at roughly the same frequency in adults,
so I conducted a two-person dialogue experiment.
In the experiment, I put two people sitting face to face,
and sitting side to side.
When I did this,
I found that when sitting side to side,
the two people hardly ever faced each other.
Through this research, I realized that face-to-face contact and eye-to-eye contact may actually have different functions.
This is another example of how the usage of new technology has led to the discovery of new psychological phenomena.
I've already introduced a lot of different studies showing this,
but one important point here is that there is a gap
between collecting data and making sense of the data.
As I mentioned at the very beginning,
there are a lot of words, like emotion, empathy, intelligence, or sales expertise,
learning, and kindness.
In the future, you may hear many people saying that
they have measured emotions, measured empathy,
or measured sales expertise,
or you may have heard these things in the past.
But what is important is how they measured it.
How can we measure various psychological
and mental characteristics by recording concrete events and specific phenomena?
What can we measure,
and have already measured?
What have we not yet measured? These are extremely important questions to continue thinking about.
For example, when we say that something is a new technology for reading people's feelings,
what does it mean to read their feelings?
It may be that what is actually being measured is
changes in an image.
In that case, do changes in an image really represent feelings?
Are there things that cannot be detected just from changes in an image?
In my opinion, it is important to keep thinking about these things.
Also, when we collect data,
how can we handle data that humans cannot understand?
We may need to do work to frame the data
in a way that humans can understand.
I would like to conclude my lecture on that point.
Thank you very much for your time.