John Maeda's Design in Tech Report 2024: Design Against AI | SXSW 2024

SXSW
19 Mar 2024 · 52:37

Summary

TLDR This script is a talk on the relationship between design and AI. It explains AI's unpredictability, the history of conversational design, and AI's impact on the design profession. It also touches on examples of AI's technological progress, ways of negotiating with AI, and the importance of critical thinking about the changes AI brings. Finally, it stresses that design plays the role of humanizing technology, and that how we build our relationship with AI is what matters most.

Takeaways

  • 🎉 AI's progress is astonishing, but we should recognize it as a complex technology we need to love, question, and understand.
  • 🤖 AI comes in many varieties — large models and small models, open models and closed models. Each has different characteristics, so understanding the right use case matters.
  • 🚀 AI is advancing rapidly; tools like GitHub Copilot and GPT Engineer are changing how developers and designers work.
  • 🌐 Function-calling models let users state a goal and have the plugins needed to achieve it invoked automatically, instead of working through a UI. This points to a new way of building software.
  • 🔍 Critical thinking is essential when conversing with AI; to use this technology well, we need to analyze situations and exercise judgment.
  • 📚 To deepen our understanding of AI, it is important to read widely — for example, papers and books such as "On the Opportunities and Risks of Foundation Models" and "The Turing Trap."
  • 🛠️ Design plays the role of making technology kinder to humans, but designing with AI calls for providing critical value, not just adding a human touch.
  • 🔄 AI affects both the marketing loop and the product-development loop, enabling new ways of building and improving products that differ from the traditional user journey.
  • 🌟 In creative fields, handcrafted qualities remain in demand and offer a distinctive value that AI cannot fully replace.
  • 📈 AI's development is continuous; sudden leaps are rare. Still, we need to adapt to the change and keep learning and growing so AI can better serve the world.
  • 👥 To weigh AI's possibilities and risks in balance, it helps to combine the perspectives of the public/nonprofit sector and the private/for-profit sector.

Q & A

  • Describe the relationship between AI and design.

    -AI is used as a tool that improves the design process, taking its unpredictability and the expertise involved into account. Design is a key element in making AI feel more human, and AI-driven improvements to design are expected to arrive gradually and evolutionarily.

  • What should the speaker watch out for when conversing with AI?

    -When conversing with AI, we must beware of the "delusion" that it truly understands us. In reality, AI merely generates responses; it does not actually understand. When evaluating a conversation with AI, it is therefore important to see it as nothing more than a response generator.

  • Describe AI's impact on design professionals.

    -AI has a major impact on the work of design professionals. Based on AI's predictive models, designs can be improved and the design process can become faster and more effective. At the same time, it is important to weigh the designs AI produces and to keep technology and design in balance.

  • As AI technology advances, is the nature of design changing?

    -Yes — as AI evolves, design is changing significantly. AI can automate the design process and produce designs more quickly and efficiently, and AI's versatility makes customized designs tailored to individual needs increasingly likely.

  • What skills do designers need as AI transforms the design process?

    -Designers need the skills to understand AI technology and integrate it into the design process. They also need the ability to interpret the data and insights AI provides and reflect them in their designs. Finally, continuous learning and skill development matter as the technology evolves.

  • How will AI's influence on design change?

    -AI's influence on design can change in many ways: automating the design process, responding to individual needs, and creating new design materials, among others. While AI expands design's possibilities, it also carries ethical issues and the risk of bias.

  • What is the outlook for design that uses AI?

    -The future of design with AI is fascinating: design professionals may become more efficient and more creative. However, we must manage AI's impact properly and ensure that design remains human-centered. AI's development will also be an opportunity to re-evaluate the value and meaning of design.

  • What are the advantages and drawbacks of AI for design?

    -Advantages include a more efficient process, design automation, and better responsiveness to individual needs. Drawbacks include the risk of AI-driven bias, ethical problems, and the fact that AI's design output cannot always be fully understood.

  • What are some examples of design work that uses AI?

    -Examples include auto-generated logo designs, data-driven visualizations, AI image generation, and design work customized to each user's needs.

  • What materials should one consult to better understand the relationship between AI and design?

    -To understand the relationship more deeply, it helps to study papers on design technology, books on AI's development, and case studies of real design projects that use AI. Listening to the design community and subject-matter experts is also important.

  • How will AI change the role of design professionals?

    -AI may make the role more creative and strategic. By automating repetitive tasks, AI frees designers to spend more time on creative work. AI also offers design professionals new perspectives and insights, contributing to more effective design.

Outlines

00:00

🎤 Opening and the topic of AI and design

The speaker opens the event and turns to the topic of design and AI. He presents the 10th edition of his Design in Tech Report and shares the news of a recent grandbaby. He hints at both love for and concern about AI, and discusses its complexity and the predictability that comes with it. He describes AI's impact on professions as an "unknown known," and touches on the history and importance of conversational design.

05:00

📊 Polls and attitudes toward AI

The speaker invites the audience to vote and analyzes the results. He explores whether people collaborate with, compete with, or protest AI, and comments on the outcome. He also discusses AI's effect on the design process, sharing his own experience and success stories.

10:01

🤖 AI's development and changes in design

The speaker looks back at AI's development, comparing past predictions with the present. He discusses AI's possibilities and the ethical issues that accompany them, and how design can become more inclusive. He also touches on the rise of remote work and the new design materials that AI brings.

15:03

👨‍👩‍👧‍👦 Personal experience and AI's impact

Through his experience in the security industry, the speaker came to understand AI's importance and focuses on matters of life and death. He explores the pandemic's impact and personal recovery, and discusses how AI is changing the competitive landscape for design.

20:03

📈 AI fundamentals and model types

The speaker introduces the basics of AI: the difference between large and small models, the distinction between open source and closed source, and model domains. He also covers function calling and its applications, and explores how AI affects both design and marketing.

25:06

🌟 AI's appeal and the value of the handmade

The speaker contrasts AI's appealing qualities with handcrafted work, comparing what AI can generate at speed with what only humans can make, and discussing the value of each. He gives examples of how AI can help in design and art, emphasizing the potential of combining human sensibility with AI's power.

30:08

🚀 AI's evolution and the future of education

The speaker discusses AI's evolution and the future of education. He emphasizes AI's impact on design and development skills, and explains how foundation models have grown over the past decade. He also explores how AI may change the way we teach and learn, stressing the importance of new skills and knowledge.

35:10

🧠 Critical thinking and AI's role

The speaker discusses the importance of critical thinking and how AI affects that process. He explores AI's potential to transform both businesses and nonprofits, and highlights the new values its development brings. He also gives examples of new design value created through dialogue with AI, and touches on the changing role of the designer.

40:11

📚 AI resources and the future of design

The speaker introduces key resources on AI and discusses the future of design. He explains in detail how AI's development affects the design process, and recommends books and papers for understanding AI's evolution. He also talks about design's role in humanizing technology, emphasizing our relationship with AI and why it matters.

45:14

🎙️ Closing and conclusions

The speaker closes the event and draws conclusions about where AI and design are headed. He stresses the importance of computational thinking for understanding and adapting to the changes AI brings. He also touches on how AI's development will affect design work, looking forward to a future in which design makes the relationship between AI and humans a better one.

Keywords

💡Design in Tech Report

The "Design in Tech Report" mentioned in the video script is the talk and report on design and technology that the speaker delivers each year. The report covers the interplay of design and AI, upcoming trends, and the importance of design thinking.

💡AI

AI is short for Artificial Intelligence. The script explains in detail how AI is influencing design and technology and how it will be used. In particular, it emphasizes dispelling the misconception that AI is unpredictable, and how AI may change the work of design professionals.

💡Conversational Design

"Conversational Design" means designing the dialogue between users and machines. The script covers the history of this practice, which dates back to the 1960s, and Erika Hall's book "Conversational Design." It also explains how AI is used as a conversational interface to meet user needs.
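The earliest conversational interface described in the talk — Weizenbaum's 1960s chatbot, which echoed the user's words back and latched onto keywords like "mother" — can be sketched in a few lines of Python. This is a hedged illustration of the ELIZA-style pattern, not Weizenbaum's original program; the rule table and phrasing are made up for the example:

```python
import re

# Keyword rules: if a trigger word appears, pivot to a canned question (ELIZA-style).
KEYWORD_RULES = {
    "mother": "Tell me about your mother.",
    "father": "Tell me about your father.",
}

def eliza_reply(statement: str) -> str:
    """Pivot to a detected keyword, or reflect the statement back and invite more."""
    for trigger, question in KEYWORD_RULES.items():
        if re.search(rf"\b{trigger}\b", statement, re.IGNORECASE):
            return question
    # Default behavior: repeat back what the user said.
    return f"You said: {statement.rstrip('.')}. Tell me more."

print(eliza_reply("I had an awesome day."))  # echoes the statement back
print(eliza_reply("Blah blah blah, my mother."))  # keyword match wins
```

The loop the talk warns about comes from exactly this reflection: the program understands nothing, yet the echo feels like understanding.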

💡Computational Design

"Computational Design" refers to design done with computational technology. The script treats it as a new material and explains how it shapes the design of applications and products, and touches on how AI is changing this design process.

💡Function-calling models

"Function-calling models" are models in which the AI invokes specific functions, simplifying the process by which a user reaches a goal. The script cites Vercel's v0 as an example, and mentions the possibility of a coming "Zero UI" era in which the UI disappears.
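The mechanism behind function calling — the model emits a structured request naming a function and its arguments, and the application dispatches the call instead of rendering a UI — can be sketched as follows. The function names and JSON shape here are hypothetical illustrations, not any particular vendor's API:

```python
import json

# A tiny registry of callable "plugins" the model can choose from.
def book_flight(destination: str) -> str:
    return f"Flight booked to {destination}."

TOOLS = {"book_flight": book_flight}

def handle_model_output(model_output: str) -> str:
    """Parse the model's JSON tool call and dispatch it to the matching function."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# In a real system this JSON would come from the LLM; here it is hard-coded.
print(handle_model_output(
    '{"name": "book_flight", "arguments": {"destination": "Austin"}}'
))
```

The user states a goal ("get me to Austin"); the model chooses the function and fills in the arguments; no button or form is involved — which is the "Zero UI" point the script makes.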

💡Critical thinking

"Critical thinking" refers to the skill of questioning and evaluating claims. The script stresses that as AI develops, this human skill may matter more than ever: it is important that humans use it to make sound judgments before AI makes decisions.

💡Embodied AI

"Embodied AI" refers to AI with a physical body. The script cites robots such as Boston Dynamics' Atlas as examples of embodied AI, and explains how having a body may let AI carry out more advanced tasks.

💡Spatial computing

"Spatial computing" is technology that treats physical space as a computing medium. The script mentions AR/VR technology such as Apple Vision Pro as part of spatial computing, and explains how these may contribute to the future of design and technology.

💡Handcrafted

"Handcrafted" refers to things made by hand. The script stresses that the value of handmade work and human sensibility should be preserved even as AI evolves. For example — as with the sounds a Rivian vehicle makes — it explains how design and capabilities only humans can create remain important as AI advances.

💡Human-AI collaboration

"Human-AI collaboration" refers to humans and AI working together. The script discusses AI's potential to complement human abilities in the design process and in product development, including how AI can strengthen design thinking and generate new creativity.

Highlights

The presentation is about 'Design Against AI' and is the 10th Design in Tech Report.

The presenter has a grandbaby and is excited about it.

AI's unpredictability is discussed, with the presenter arguing that AI is predictable in known knowns, but not in unknown unknowns.

The impact of AI on professions is unpredictable and referred to as an 'unknown known'.

Conversational design is an old concept, dating back to the 1960s.

The presenter mentions the work of Joseph Weizenbaum, who cautioned about the proliferation of chatbots.

The presenter has created a simple program to automate his work, leading him to question whether he is still needed.

The presenter discusses the three types of design: classical, design thinking, and computational design.

AI and machine learning trends were observed as early as 2018, with a focus on inclusive design.

The presenter shares personal reflections on the pandemic and its impact on people's recovery and feelings.

The presenter emphasizes the importance of critical thinking in the era of AI, which is a skill that takes time and is valuable.

The presenter discusses the concept of 'makers' and 'talkers', and how AI is disrupting the lives of makers.

Function-calling models in AI are introduced as a new way of creating software that could change how we design.

The presenter talks about the critical role of design in humanizing technology and the need to find new value in criticality.

The presenter mentions the work of Sherry Turkle and the importance of asserting human values in the face of technology.

The presenter concludes by encouraging the audience to be critical of AI for customer use cases and to look forward to the changes in design.

Transcripts

play00:00

(audience clapping)

play00:01

- Hello.

play00:02

(audience clapping)

play00:04

Hi, how are y'all doing?

play00:06

(audience whooping)

play00:07

Good. Feeling all right.

play00:09

Okay, you have, like, roughly 50-ish minutes with me.

play00:12

Thanks for coming, first of all.

play00:15

This is my 10th Design in Tech Report.

play00:18

I have a grandbaby now.

play00:19

(audience whooping)

play00:20

Pretty cool, huh? Yeah, pretty psyched about that.

play00:23

All right, so you're here to talk about

play00:26

or think with me about Design Against AI.

play00:29

This is a different kind of presentation this year.

play00:33

I have a demo running also in it.

play00:36

It may crash, so live demo people, you're into that,

play00:41

so the way this works this year is I have a,

play00:45

it isn't automatically generated, but I realize it could be,

play00:48

and so I'm sorta getting to that step.

play00:51

The idea of AI is complex

play00:53

because in some cases we kinda love it,

play00:55

and in some cases we're like, "I'm not sure,"

play00:58

and so the whole topic of this year's report is

play01:02

I don't know, right?

play01:03

Who knows?

play01:06

I have it in seven sections,

play01:08

and so I'm gonna go kind of fast

play01:10

because, you know, TikTok generation people

play01:13

can watch things in, like, five seconds,

play01:15

so you can catch it later.

play01:17

I'll put stuff on the Web as well. That sound okay?

play01:19

- Yep! (audience murmuring)

play01:20

- Okay. Here we go.

play01:21

All right, and there are people

play01:22

who hate this report, by the way,

play01:24

and I say, "Thank you."

play01:26

I love your criticism.

play01:28

People who find it interesting,

play01:29

and I do it for you all, so here we go,

play01:34

so I have a colleague who recently told me this thing.

play01:37

He said, "This new kind of AI

play01:39

"behaves unpredictably,"

play01:44

and when he said that,

play01:44

I thought, "Eh, that's not kinda right,"

play01:47

because this kind of AI is very predictable.

play01:51

Those of you know this matrix of,

play01:53

like, the unknown unknowns, the known unknowns,

play01:56

the unknown knowns,

play01:57

well, those quadrants mean I don't know,

play02:00

but that quadrant there, known knowns, means I might know,

play02:05

and that's where this kind of AI does well,

play02:07

just like us humans, right?

play02:08

So we do really well there.

play02:10

Outside of that, we hallucinate, and we make up stuff,

play02:13

so it's very predictable.

play02:17

What is not predictable is how it'll change professions.

play02:20

I call this an unknown known.

play02:23

You know there's this new era where you can,

play02:25

like, take something small and make it gigantic,

play02:28

like, "Oh, I got an idea.

play02:29

"I wanna make a gigantic website,"

play02:31

and then, on the other end,

play02:32

the user says, "I don't wanna use this website.

play02:35

"I just want a button,"

play02:37

so the question is

play02:39

what happens when we kinda take little seeds

play02:41

and make them big and we use AI to shave them down?

play02:44

What was the whole point in the first place? Unsure.

play02:50

Now, conversational design is not a new topic.

play02:52

It's been around for quite a while,

play02:53

actually, since the 1960s.

play02:55

If you haven't read this book by Erika Hall in 2018,

play02:59

"Conversational Design,"

play03:00

she points out how it's the oldest interface we have.

play03:03

Like, you go to a restaurant. You want something.

play03:05

You use a voice interface, right?

play03:08

You wanna talk to your parents, you use a voice interface.

play03:11

It's pretty old,

play03:13

and also, in the technology field,

play03:15

it's actually not a new idea as well.

play03:18

This is a book from 1967 by Nicholas Negroponte.

play03:22

It describes how you will someday build

play03:24

a predictive model of conversation,

play03:27

and the machine will be able to adapt to that

play03:30

based upon that data,

play03:32

so it's been really a long time coming,

play03:36

but I think the more important thing is

play03:37

once this technology was beginning to be imagined,

play03:41

there is the person who I took AI from in the 1980s,

play03:45

Joseph Weizenbaum.

play03:47

Dr. Weizenbaum wrote this book in the '70s

play03:49

that cautioned the world what happens

play03:51

when you have a chatbot everywhere

play03:54

because in 1960s, he invented the first chatbot.

play03:57

It was basically a psychotherapist that did two things.

play04:00

It would listen to you and repeat back what you said to it,

play04:03

like, "I had an awesome day."

play04:05

"Oh, you had an awesome day.

play04:06

"Tell me more,"

play04:08

and it would look for keywords,

play04:09

like, "Blah, blah, blah, blah, blah, my mother."

play04:11

"Oh, tell me about your mother,"

play04:14

and so you'd be stuck in this loop of thinking,

play04:16

like, "Oh my gosh, this is so understanding.

play04:18

"It's so therapeutic,"

play04:20

but because, as a youth, he fled Nazi Germany

play04:24

and questioned powerful tools,

play04:27

and so he spent the rest of his career as a scientist

play04:30

asking questions:

play04:31

Is this technology gonna be okay for the world?

play04:34

And so he said that he realized

play04:36

that no matter how intelligent you are,

play04:38

once you start getting a loop in this, a loop with this,

play04:41

you create this delusion

play04:43

that there's a thing behind the other side,

play04:45

a real person on the other side,

play04:46

so I call this

play04:47

sort of a kind of a delusion we have to kind of avoid

play04:51

if we're working in this field of AI today,

play04:55

so I have some polls this year

play04:56

'cause, you know, you could be all,

play04:58

like, a Zoom or Teams call

play05:00

and, like, you know, doing stuff interactively.

play05:02

There's also the app to send questions,

play05:05

but I have five polls.

play05:07

If you can point your phone to that,

play05:09

you'll see a typeform appear.

play05:12

What it is, is I'm really curious,

play05:14

like, what people in this room feel about AI,

play05:17

so I have a little poll there,

play05:19

and if you wanna, like, get in the clown car,

play05:22

sort of press the button, and I'm gonna hit Refresh.

play05:25

I'm doing this sort of, like, not the right way.

play05:26

I'm sure there's,

play05:27

like, more modern ways to do it with the app.

play05:30

Let's see. How fast are people in this room?

play05:33

Oh, people are fast in this room.

play05:35

Okay, I'm more of a collaborate with AI person.

play05:38

Some are compete. Some are protesting with all their heart.

play05:41

Oh, it's like watching the elections,

play05:43

and then, so we're feeling a little collaborative here,

play05:46

and by the way, if you go, if you keep going,

play05:48

there's, like, five simple questions in there.

play05:50

Okay. Okay, we got very...

play05:52

This is, like, Texas. This is Austin.

play05:54

Here we go. Okay, here we go.

play05:55

All right, thanks. Okay.

play05:57

Now, it turns out,

play05:58

because these have been around for a while,

play06:01

I've been trying to find a different way to do it

play06:04

because it's all done by hand, you know?

play06:06

I click on this. I take a screen cap of this.

play06:09

I print it out.

play06:10

In general, my table, my dining table

play06:12

has stacks and stacks of paper that I'm sorting by hand.

play06:15

This year, I wrote a simple program to take all my e-mail

play06:20

and take all the e-mails

play06:22

and take screen caps of everything I would've done,

play06:24

build a PDF and send it to the printer.

play06:27

That was kinda weird,

play06:28

and I did that with maybe, like, two hours of programming,

play06:32

which is also strange

play06:33

'cause I'm not that great of a programmer,

play06:36

and so what I did with it is

play06:39

I made a computer program

play06:41

that has read all the passes and tech reports,

play06:44

and I can ask any question.

play06:48

Why are design systems important?

play06:53

And it finds all the different slides related to it,

play06:57

and I've outsourced myself because I can then,

play07:00

I also clone my voice,

play07:04

and I'm not running a high-performant model.

play07:06

The back's a little slow,

play07:07

but 20 seconds later, it's gonna talk for me and my voice,

play07:13

and then I realized, "Wow, do I need to be here anymore?"

play07:16

(audience laughing)

play07:17

- [Narrator] The image from 2016

play07:18

is a slide from a presentation

play07:20

discussing the impact of designing systems

play07:23

in the field of design. - Right? Yeah.

play07:25

- It emphasizes that- - Okay, AI (indistinct).

play07:26

You can stop.

play07:27

All right, so just demonstration that,

play07:29

and that took me, like, what?

play07:31

Like, three hours to write from scratch,

play07:34

so that's sort of strange, isn't it?

play07:37

And it only cost, like, $5.

play07:39

Okay, so if you recall, those of you know those reports,

play07:42

I think the word design means too many things,

play07:45

and so I kinda, like, arranged it in three words.

play07:48

There's classical design.

play07:49

That's like the design we love,

play07:51

like, maybe this jacket or your glasses.

play07:53

It's like, "Oh my gosh, design. I love it."

play07:56

Then there's design thinking,

play07:58

which organizations had to adopt to get more creative.

play08:02

Classical designers tend to dislike design thinkers,

play08:06

but I always tell them,

play08:07

"Design thinkers make a six-figure salary, so rethink that,"

play08:11

and then there's computational design,

play08:13

which involves anything computational,

play08:15

anything app-like, product, whatever,

play08:17

and that's a new kind of design.

play08:19

It's a new material.

play08:20

Okay, and so past reports,

play08:23

I've been talking about AI coming one day.

play08:27

I was looking at GitHub repos in 2018.

play08:29

They were increasing.

play08:30

Machine learning AI was sort of trending,

play08:33

and also what happens (indistinct) be careful

play08:35

because you can automate imbalance.

play08:38

Bad things occur when you have this

play08:39

and give it to too many people at the same time,

play08:42

but at the same time,

play08:43

inclusive design was rising as a topic,

play08:45

which I thought was really important.

play08:47

The question is how do we make design more inclusive?

play08:49

How do we make technology more inclusive?

play08:51

Design can make a big impact there,

play08:55

and then, also, this thing called remote work was rising.

play08:58

I thought it was an interesting idea.

play08:59

Someday it's gonna happen, right?

play09:03

It happened, if you didn't notice that,

play09:06

and then, in 2019,

play09:08

2019, I was getting concerned.

play09:11

If you haven't read the work of Kate Crawford,

play09:13

it's a really wonderful series of work

play09:15

about cautionary tales of technology

play09:17

and what happens when it goes the wrong way,

play09:20

and I had this giant pile on my table of AI stuff,

play09:24

and so that year, I was feeling like,

play09:27

"I don't wanna talk about AI anymore," so I stopped,

play09:32

and it was actually a moment where things were changing,

play09:36

one of the so-called inflection points

play09:38

'cause when you think about it,

play09:40

there was this ethics thing, concern,

play09:43

and then inclusive design was sort of being created

play09:45

as a kind of, like, a, you know,

play09:47

like, a vaccine to this thing happening in the world.

play09:52

There was also Slack everywhere,

play09:54

if you don't remember at the time.

play09:55

I was like, "Oh, well, add you to my Slack."

play09:57

"No, please don't,"

play09:58

but it was like, you know, a new way to communicate,

play10:01

and it was remote work was becoming capable because of that,

play10:06

and also, useful AI was gonna happen someday,

play10:11

was what we were told,

play10:12

and the company Runway,

play10:14

Runway ML, this was their 2018 pitch deck.

play10:17

They were promising 10x cheaper,

play10:20

10x faster, 10x more accessible.

play10:22

Of course, no one believed them in 2018,

play10:24

but wow, who woulda thought?

play10:28

Also, it was also pre-COVID,

play10:31

and so all these companies were asking questions about

play10:34

how do we make your organization

play10:36

more digitally relevant, you know?

play10:38

Everyone should do things,

play10:40

like be all online all the time.

play10:42

You know, they should be just like digital natives

play10:44

on their Slacks.

play10:44

You know, why don't you change faster?

play10:47

People thought it was impossible to change faster,

play10:49

if you recall, in 2019.

play10:51

It wasn't gonna happen,

play10:52

but, of course, it did happen

play10:54

because of the pandemic, unfortunately.

play10:57

Now, after I stopped thinking about AI and design,

play11:01

I went into the security industry

play11:03

to focus on dangerous things

play11:04

because I became very interested in life or death

play11:07

'cause that was an era that, it was very scary, wasn't it?

play11:10

I'm not sure if some of you remember it.

play11:12

We've forgotten it, sort of,

play11:13

but it's still right there, right?

play11:16

And I drew this graph to understand,

play11:17

like, why was it so shocking?

play11:20

Well, since the year 2000,

play11:22

there wasn't any bad thing that occurred.

play11:24

This was a really bad thing to occur.

play11:26

The million deaths or more,

play11:29

like, in the last, like, three decades,

play11:31

but if you look at it in comparison to the last century,

play11:34

it happened a few times before,

play11:36

but this was extraordinary amount

play11:39

of terrible stuff that occurred,

play11:41

so just wanna acknowledge that we went through that.

play11:44

I'm not sure if you remember it,

play11:45

but I kind of wanna forget it,

play11:47

but it's like, "Wow, it was big,"

play11:49

so just for context,

play11:51

'cause if those of you are here,

play11:52

how did the pandemic impact you to recover from it?

play11:56

Ooh, 31%.

play11:57

I feel sad about it. Yeah, 30%.

play11:59

Someone, you know, lost a loved one, right?

play12:00

It didn't make me, like, oh, I'm okay.

play12:02

I haven't recovered from it.

play12:03

Mm, I lost loved one. I feel angry about it.

play12:06

Well, that's our feelings in this room,

play12:08

so thanks for sharing that.

play12:10

It is something that we're not...

play12:12

It's not as (indistinct) left us, right?

play12:14

AI happened, but that was probably a bigger thing

play12:16

when you think about it, at least to me.

play12:19

Okay, so I commissioned a bunch of illustrations

play12:22

at the time.

play12:23

This is work from home

play12:24

'cause everyone's saying, "Oh my gosh.

play12:25

"Work from home is so great,"

play12:27

but have you thought about it, you know?

play12:29

There's some problems on the edge,

play12:30

as you can't see on the screen.

play12:33

This is by Tony Ruth, by the way. Return to work.

play12:36

Do you wanna stay on my comfy (indistinct)?

play12:37

My comfy table? My comfy chair?

play12:40

Wanna go to work? I'm not sure.

play12:42

Return to work. (indistinct) work from work.

play12:45

I'm gonna work, but I gotta wear a thing, you know?

play12:48

So anyways, that was a mood in that era.

play12:53

I also commissioned this illustration

play12:56

to kinda demonstrate how resilience only happens

play12:59

because of adversity,

play13:01

so it's what's called an ambigram,

play13:02

so if you look it upside down, it's a different word.

play13:06

Adversity. Resilience.

play13:09

Okay. Context.

play13:11

I'm gonna get a time. Okay.

play13:14

Other thing I enjoyed learning

play13:15

about being in the security industry is

play13:19

sharks will not kill you.

play13:22

Like, "My gosh, this shark, it's gonna come and get you."

play13:24

It's like, "Well, it's very unlikely."

play13:27

It turns out most of us in this room are gonna die

play13:29

from something involving the lungs or the heart.

play13:33

That's, like, the majority,

play13:34

but TV makes it different, of course,

play13:37

so this was kind of helpful, to me at least,

play13:41

and then I found this.

play13:42

It's my favorite list,

play13:44

and if you oblige me, you may have seen me do this before,

play13:47

but I found it to be the most important list

play13:49

because it's people,

play13:51

things that people think when they're dying,

play13:54

and so I'm gonna have a sentence appear,

play13:57

and I want you to read it aloud with me.

play13:59

Sound okay? Okay?

play14:00

Okay, this is a performance art moment.

play14:02

Here we go, together. Okay.

play14:05

- [All] "I wish I had the courage

play14:06

"to live a life true to myself,

play14:08

"not the life others expected of me," right?

play14:14

"I wish I hadn't worked so hard."

play14:20

"I wish I'd had the courage to express my feelings."

play14:23

- [John] Thank you all for being here. I appreciate you.

play14:26

(audience laughing)

play14:27

I really do.

play14:30

- [All] "I wish I'd stayed in touch with my friends," right?

play14:33

- [John] Like, (groans) I don't stay in touch with them,

play14:36

and the last one,

play14:38

- [All] "I wish that I'd let myself be happier," right?

play14:41

- I'm tearing up a little bit,

play14:42

but it's like that's important stuff, right?

play14:45

So anyways, in that journey, I found this list,

play14:48

and it's always helped me.

play14:49

Hope it'll help you all as well.

play14:51

Okay, so AI. Designing against AI.

play14:54

I like how the AI is in the word against.

play14:56

I like finding, like, things like that.

play14:58

My mom forced me to do word search puzzles as a child.

play15:02

I know what it was, but I find words everywhere.

play15:06

Okay, so according to the data

play15:07

from last year's Design in Tech Report...

play15:10

I love feedback, by the way.

play15:12

This was the most interesting thing in the whole report.

play15:16

It was by my friend Masakawa.

play15:19

It is a stop-motion samurai film

play15:24

made with wooden puppets and no computer graphics,

play15:28

so I wanna show you a little bit of that.

play15:29

You can always watch it in the privacy of your own home,

play15:32

but it's quite lovely.

play15:33

(percussive music)

play15:43

(weapons clanking)

play15:47

(rapping in Japanese) (weapons thudding)

play15:58

Now, it's 15 minutes long.

play16:00

That sawdust you saw: real sawdust, right?

play16:04

Oh my gosh, you gotta love it.

play16:06

Okay, now how do I do this here?

play16:08

Now, I'm doing this new thing that is too advanced,

play16:12

so how do I clue?

play16:14

Exit full screen.

play16:16

You know, I thought I was so clever, didn't I? Hmm.

play16:19

(percussive music)

play16:20

No, you're gonna stop me.

play16:22

Let's see here. One second.

play16:24

Okay, people don't like it when I do stuff like this.

play16:27

Here we go. Okay, here we go.

play16:29

Now I know. I have to (indistinct).

play16:31

Okay, learning loop.

play16:34

You also probably all seen the Sora thing,

play16:36

like, what the heck?

play16:38

If you haven't seen that, it's gonna what is that, right?

play16:40

So that happened, of course, too,

play16:42

and that's all computer stuff,

play16:45

so I've thought a lot about how,

play16:47

like, things made by hand we still think are special.

play16:52

I think they're special.

play16:53

At some point, people might not think it's special,

play16:56

but those who think it's special, let's keep trying, right?

play17:00

'Cause there's a lot of stuff happening,

play17:02

and one thing I've noticed

play17:05

is that we're all talking about AI in interesting ways,

play17:09

so I have a collection

play17:10

of things that I found help me understand AI.

play17:14

This is my favorite. This is Ted Chiang.

play17:17

Do you know the movie "Arrival"

play17:19

or the short story "Arrival"?

play17:21

Oh my gosh. Makes me cry.

play17:25

Large language models are a blurry JPEG of the Web.

play17:29

It is the most beautiful, right?

play17:32

It's like it's an imperfect approximation.

play17:34

It's a compression of information,

play17:37

and if you haven't seen this article, it's wonderful.

play17:42

He makes the argument that if we humans do not write,

play17:46

we won't be able to write something original ever,

play17:50

so for all of you who love to create,

play17:53

to me, it felt like, "Oh, well, I gotta create.

play17:56

"I can't let the AI do everything, or otherwise,

play17:58

"I'm not gonna be able to be creative anymore,"

play18:01

so anyways, this one, important.

play18:05

Another one is newspapers have the best explainers

play18:08

of large language models.

play18:09

Oh my gosh. These are expensive.

play18:12

I hope they get more clicks,

play18:14

but on the left is an "FT" special infographic

play18:17

on large language models.

play18:19

On the right is "The Guardian."

play18:21

My gosh, there's, like, at least, like, right?

play18:25

30 seconds of scroll in there,

play18:28

but if you're having a hard time

play18:29

understanding the mechanics or science of it,

play18:31

it's all in those two graphics.

play18:34

Yeah.

play18:38

'Kay, and then,

play18:40

when you're trying to understand large language models,

play18:43

this model, I think, is my favorite.

play18:45

Now, wanna note there,

play18:46

I'm giving you eight different ways

play18:48

to understand large language models that I find useful.

play18:52

One of them may be useful to you.

play18:53

I apologize

play18:54

with the one that you don't want actually right now,

play18:56

but this one here, this one is called the genius in a room.

play19:01

This one's quite nice. It's a little spooky.

play19:03

Now, this one, the idea is there's this room.

play19:05

There's a genius locked in there. No windows.

play19:08

It sounds kinda sad,

play19:09

and the genius cannot see outside the room.

play19:11

It's even worse, and the only way you can talk to the genius

play19:15

is you gotta slip a piece of paper underneath the door,

play19:19

and it can only be a certain size.

play19:22

It's like a Post-it note size.

play19:24

You put underneath with your stuff.

play19:26

It's like it writes it, puts in the back, gives it back.

play19:29

Only one turn, and so this is kind of example

play19:33

of the token limit, so-called token limit,

play19:35

and we hear it getting bigger and bigger and bigger,

play19:37

so that means, like, a bigger piece of paper

play19:39

can go underneath the door.

play19:41

The second thing is that if you don't give it more context,

play19:45

it doesn't know anything.

play19:46

It's gonna guess.

play19:47

Like, to think it, like, guesses, like, a tall dude,

play19:50

and it's, like, it makes stuff up, very confident.

play19:52

It's really good,

play19:53

so it's gonna find stuff just to sort of say.

play19:57

The other one, it's completely stateless.

play20:00

It doesn't remember you at all.

play20:02

It has recurring amnesia,

play20:05

so you're like, "I told you that,"

play20:08

and it's like, "Oh, I don't remember ever,"

play20:10

so this mental model I find useful.
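The genius-in-a-room model boils down to three rules: a size limit on the note, no outside knowledge, and no memory between turns. Here's a minimal sketch of the statelessness part, with a stub function standing in for a real model — the name `genius` and its toy replies are invented purely for illustration:

```python
# The "genius in a room" mental model: one note in, one note out, no memory.
MAX_TOKENS = 20  # the "Post-it note": only so many words fit under the door

def genius(note: str) -> str:
    """Stateless stand-in for an LLM. It only sees what's on THIS note."""
    if len(note.split()) > MAX_TOKENS:
        raise ValueError("note too big to fit under the door")
    # Recurring amnesia: it can only answer from the current note's contents.
    if "John" in note:
        return "Your name is John."
    return "No idea -- this note is all I can see."

print(genius("My name is John."))                    # it reads the note: fine
print(genius("What is my name?"))                    # new note, no memory
print(genius("My name is John. What is my name?"))   # the fix: resend history
```

The third call is the whole trick behind chat apps: they quietly copy the entire conversation onto every new note.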

play20:14

Okay, this is a great talk at AIGA by Josh Clark.

play20:20

Josh Clark talks about design material.

play20:24

- [Speaker] (indistinct) if we're talk about here.

play20:26

Sound okay? Guys doing all right?

play20:28

- Yeah. (audience members whooping)

play20:29

- [Speaker] You look great.

play20:31

Yeah, you look terrific, yeah.

play20:32

- He's so positive. - All right, let's-

play20:33

- He's so positive, right?

play20:36

But again, it's like wood.

play20:38

It has a grain. It does certain things.

play20:41

It doesn't do certain things,

play20:43

so if you learn it as a material, it's also helpful.

play20:48

'Kay, there's a John Oliver special, 30 minutes.

play20:53

If you haven't watched this,

play20:54

this is actually super-well-informed.

play20:57

A lot of AI stuff you could learn from John Oliver.

play21:02

This is all, this is basically pointers to things,

play21:06

or you can have books.

play21:08

There's lots of books out there.

play21:09

There's more books coming as well,

play21:13

or you can cook with me at the Cozy AI Kitchen.

play21:16

I have something called the Cozy AI Kitchen now.

play21:19

Think your AI skills are up-to-date? Think again.

play21:24

The latest AI breakthrough will change everything.

play21:27

It's exhausting.

play21:28

I know, but don't worry

play21:31

because you're an AI professional, and you love change.

play21:35

Our course makes adapting easier.

play21:37

We featured the Acorn framework for AI engineering.

play21:41

It's filled with real coding examples.

play21:44

It's been developed alongside the Semantic Kernel community,

play21:47

and this toolkit is your ally

play21:49

against a relentless wave of AI innovations.

play21:52

Come along and join us. Come and join me.

play21:54

And stay ahead.

play21:55

So I have a Cozy AI Kitchen now on YouTube

play21:59

with special guests,

play22:00

and we're cooking AI every two weeks,

play22:03

and someone asked me, "Why are you doing this?

play22:05

"Is this your job?"

play22:06

This is, like, a side hobby of mine

play22:08

with some friends in the video studio,

play22:10

but if you don't actually create with it,

play22:14

if you don't write the code with it,

play22:16

it's quite opaque to everyone,

play22:18

so I provide different recipes, different things.

play22:21

You can press the button and use it,

play22:23

and you'll quickly become an AI chef in no time,

play22:28

or you can follow my 88-year-old mother.

play22:31

Some of you know that I moved to Seattle

play22:33

to take care of my parents, who are in their 80s,

play22:36

and then I happened to find a job

play22:38

at a company called Microsoft nearby there,

play22:42

and, you know, my mother is,

play22:46

she's very facile with her iPhone.

play22:48

She was a legal secretary when she was younger.

play22:51

She can type really fast, you know?

play22:53

So anyways, she got upset about iMessage recently,

play22:56

how it does that autocorrect thing.

play22:59

Whoo, she got really upset.

play23:01

She was like, "John, turn this off right now."

play23:04

I says, "Mom, you might want it."

play23:05

"Nope, turn it off right now."

play23:07

When your mother tells you do something,

play23:08

you gotta do it, right?

play23:09

So, like, oh man, I found the documentation.

play23:11

Okay, I did it. I turned it off.

play23:13

Recently, she changed her mind. Here she is.

play23:18

(audience chuckling)

play23:19

Was it spelling the wrong words?

play23:21

I thought you didn't like it. - Well, sometimes,

play23:22

sometimes, no, no, sometimes they put in words for me.

play23:26

They put words in my mouth.

play23:29

That's why I didn't like it,

play23:30

but then, I find

play23:32

that putting all those words together,

play23:37

(sighs) that's too much. (audience laughing)

play23:41

- So- - Too much.

play23:42

- [John] You changed your mind.

play23:44

- Yeah, so I changed my mind. I like AI.

play23:46

(mom laughing) (audience laughing)

play23:50

- Mom likes AI now. That's a recent reversal.

play23:52

(audience clapping) Yeah.

play23:55

Okay. How we on time now?

play23:57

Okay. Good.

play23:58

All right. Okay.

play23:59

Good and I can see Slido somewhere.

play24:02

There's questions coming in. I can see it too.

play24:03

No. Don't see it here.

play24:05

Okay. Good.

play24:06

All right. Robots are getting smarter, stronger, faster.

play24:10

If you haven't noticed, there's Optimus.

play24:13

If you've seen Optimus, like, you know, folding laundry,

play24:15

it's quite cute.

play24:18

There's the Figure Bot

play24:19

that can learn things in 30 minutes quickly and repeat.

play24:22

It can open your Krups coffee machines

play24:25

with very little training.

play24:26

There's also, my favorite is the Boston Dynamics Atlas.

play24:30

Atlas works out.

play24:32

(robot clanking)

play24:38

Oh my gosh. He's so strong.

play24:40

Yeah. Wow.

play24:41

Look at that. Look at that.

play24:42

You see the (indistinct) as I was looking at,

play24:43

and this is the new robot that came out

play24:47

online, like, last week.

play24:49

This is the H1. The H1 can jump and everything too.

play24:55

I know, right?

play24:56

It can also run at 3.3 meters per second, the fastest.

play25:00

I know.

play25:01

This is interesting, so robots are happening.

play25:05

There's also spatial computing.

play25:08

There is the new...

play25:09

I'm sure you've seen the Apple Vision Pro.

play25:12

This is an area I used to work in, spatial computing.

play25:15

There's a great book by Jerri, et al. on AR, VR, XR,

play25:19

if that's your new world,

play25:22

and again, my boss Nicholas,

play25:23

he, in 1993, he was shading everyone

play25:26

when he said, "VR, that's not new.

play25:29

"It was 25 years ago."

play25:31

That's, like, in 1993,

play25:32

so anyways, it's been a long time coming.

play25:36

Okay, let's check out here.

play25:38

Have you tried out any of the modern AR/VR headsets?

play25:42

Yes, but I don't own one. I don't feel the need yet.

play25:44

Hmm. No, but I'm thinking of trying one out soon.

play25:47

Ooh. Yes, I own one, but don't use it.

play25:49

It's me too.

play25:51

Not my thing. Nope.

play25:53

I love one. I use it all the time.

play25:54

Look at that. Good for you.

play25:57

(audience laughs)

play25:58

Okay. All right.

play26:00

I also like to collect things made by humans

play26:02

that's digital as well.

play26:05

This is something

play26:06

that was made by high school students called Sinerider.

play26:10

Who knows Sinerider? Sinerider?

play26:12

Okay, you have to know this, then. It's good.

play26:14

I, like, wrote to them,

play26:15

and they were like, "Nobody uses this.

play26:17

"Please use it."

play26:17

Okay, here we go. This is amazing.

play26:21

It is a math nerd game.

play26:24

Check it out. Sinerider.com.

play26:29

(birds chirping) (ethereal music)

play26:31

So pleasant too. See that?

play26:33

You type in the equation. It draws the curve.

play26:37

The vehicle moves up and down on the curve.
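The core of that game mechanic fits in a few lines: take a typed y = f(x) expression and sample it over x to get the curve the vehicle rides. A math-only toy sketch (using `eval` on trusted input, which a real game like Sinerider would replace with a proper expression parser):

```python
def sample_curve(expr: str, xs):
    """Evaluate a typed y = f(x) expression at each x.
    eval is fine for a toy sketch with trusted input -- never for real users."""
    return [eval(expr, {"x": x, "__builtins__": {}}) for x in xs]

xs = [-2, -1, 0, 1, 2]
print(sample_curve("x**2", xs))  # the parabola from the demo: [4, 1, 0, 1, 4]
print(sample_curve("-x", xs))    # the straight line: [2, 1, 0, -1, -2]
```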

play26:41

(button clicks)

play26:43

(game whooshes)

play26:44

Isn't this clever?

play26:45

(game whooshes)

play26:46

(upbeat music)

play26:48

See that?

play26:48

Y = -x. No.

play26:50

Y = x squared. Ooh, no good.

play26:53

Ah, right? Okay.

play26:55

Yeah, this is definitely a taste thing.

play26:57

(audience chuckles)

play26:57

There we go.

play26:59

(game dinging)

play27:00

- Yeah, I'm into that.

play27:03

Sinerider by Hack Club.

play27:04

Again, high school students making code.

play27:07

This is Poetry Camera.

play27:08

This is a very- - The Poetry Camera,

play27:10

a camera that makes poems.

play27:11

- From Washington Square Park

play27:12

demoing Poetry Camera- (camera screams)

play27:14

People have agreed to have a poem taken of them,

play27:17

so I'll step back.

play27:18

All right. 3, 2, 1, cheese.

play27:21

It took a picture.

play27:21

It's analyzing the picture, looking for salient details.

play27:25

Then it's thinking really hard

play27:26

about the deep inner meaning

play27:28

of life and everything (camera buzzing)

play27:30

it saw on your faces. (camera buzzing)

play27:33

Okay, on wooden boards, a painted line of souls

play27:37

parallel parked in a patch of quiet green

play27:39

hushed dipped laughter sifting through leaf canopies

play27:43

as crumbs crumble from paper-wrapped parcels

play27:46

bare legs brush against cool arms of metal

play27:49

as sun filters through the dappled-

play27:51

- That's very Brooklyn, isn't it?

play27:52

(audience laughs) It's nice,

play27:53

again using GPT-4V to do the analysis,

play27:56

but, like, handheld computing.

play27:58

It's very cool,

play27:59

and then, an hour before I came here,

play28:02

this was launched called the Puppets.app.

play28:05

This is amazing.

play28:06

This is, you can, like, put your hand on screen

play28:08

and do hand puppets.

play28:10

Isn't it amazing? Puppets.app.

play28:13

I was like, "What is this sorcery?"

play28:17

Yeah, so anyways, I like to think that human beings,

play28:20

you know, AI's got a lot of stuff,

play28:22

but we humans, we have our moves, you know,

play28:24

so we gotta, like, keep that,

play28:26

keep that, like, you know,

play28:27

a rich place of appreciation and fun.

play28:30

Okay, all right, so, okay,

play28:34

so we've gone through a few ideas how to think about AI.

play28:37

I showed you some human stuff.

play28:39

Now we're entering the section of the ABCs of AI.

play28:42

Here we go.

play28:44

Okay, so the thing about large language models to note

play28:46

is that they didn't happen overnight,

play28:48

like VR, taking 50 years.

play28:51

Large language models

play28:51

actually have been growing over the last 10 years.

play28:54

If you are in the world of language, you knew this.

play28:58

I didn't know this, but it's been happening for a long time,

play29:02

so this is a paper that came out last year,

play29:05

a comprehensive survey on pretrained foundation models.

play29:08

Notice the span of time, 10-ish years.

play29:12

The pink ones are the language models,

play29:14

so on the left-hand side,

play29:16

the large language models weren't that large.

play29:20

They were kind of like, you know, a little bit large,

play29:22

but as you can imagine, over time, they got a lot larger.

play29:25

The blue models are the vision models,

play29:28

which also were growing astronomically, exponentially.

play29:32

The green ones are graph models, and the ones on the very far right

play29:35

are the most interesting.

play29:37

They're called the unified pretrained models

play29:39

kind of like "Lord of the Rings":

play29:41

one model to rule them all.

play29:43

They go across all the modes.

play29:46

Now, when you think about it, I keep watching, like, YouTube

play29:50

to just sort of stay up on this,

play29:51

and just, like, this morning

play29:53

I woke up and wanted to show you the phenomenon.

play29:55

I'm not sure if it's you feel the same way,

play29:57

but, like, every AI, every YouTube video looks like this.

play30:01

Do we know what I'm talking about? The cover?

play30:03

They look shocked,

play30:04

and you're like, "Oh, I gotta look at this for some reason,"

play30:07

or they look sort of tired 'cause AI's coming

play30:12

or this one,

play30:13

where they're like have their hands on their face,

play30:15

like, "What is this?"

play30:16

So I keep wanting to click it,

play30:18

but I stop myself when I see these things,

play30:20

so be careful 'cause it's, like, very sensational.

play30:23

I have, like, a sort of a low-key version

play30:25

of how to talk about AI, the ABCs,

play30:29

so first off, A, is you got large models,

play30:34

and you got small models.

play30:36

That's pretty easy, right?

play30:37

You got the big ones and you got the little ones.

play30:40

The big ones, you can't run 'cause they're too big.

play30:43

Not gonna fit on your computer.

play30:44

The small ones, (squeaks skeptically) they're small.

play30:46

They can run your laptop.

play30:48

This is a big difference, right?

play30:49

One, you can run in the cloud.

play30:51

The other, you can run on your own machine.

play30:53

Okay. Then B.

play30:56

Question is, like, is it open or is it closed?

play31:00

If it's open, you can mess with it.

play31:02

If it's closed, you can't, okay?

play31:05

So large, small, open, close, 'kay.

play31:09

Then there's model domains.

play31:12

Is it a mono modal model?

play31:16

Does it just work on text? Does it just work on images?

play31:19

Or is it multimodal?

play31:22

It goes across modes,

play31:24

and most models we're seeing today

play31:25

are gonna go across modes.

play31:26

It makes it be much more powerful.

play31:30

Now, I have to cheat. There is an A, B, C.

play31:32

There's, like, A to Z, actually. Gets worse, you know?

play31:35

So you got the large versus small.

play31:37

You got the open versus closed.

play31:39

You got the multimodal

play31:40

versus mu-multi-mo-mo-modal versus multimodal.

play31:42

Ugh, hard to say.

play31:44

You've also got embeddings models versus completion models.

play31:47

You got full model fine-tuning versus LoRA fine-tuning.

play31:51

You got function-calling models.

play31:53

These are the very interesting ones,

play31:56

and you got agents now, and this could keep going on,

play31:59

so just wanna note that all the terminology

play32:01

is rolling like wildfire, so just keeping up is hard,

play32:04

but those ABCs are a good place to start.
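Those three axes — large/small, open/closed, monomodal/multimodal — can be sketched as a tiny data structure. All the model names here are hypothetical; the point is the classification, not the catalog:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str     # hypothetical names, purely for illustration
    size: str     # "large": cloud only, too big for you; "small": fits a laptop
    access: str   # "open": you can mess with it; "closed": you can't
    modes: tuple  # ("text",) is monomodal; more than one mode is multimodal

    @property
    def multimodal(self) -> bool:
        return len(self.modes) > 1

zoo = [
    Model("big-closed-chat", "large", "closed", ("text",)),
    Model("laptop-llm", "small", "open", ("text",)),
    Model("one-model-to-rule-them-all", "large", "closed",
          ("text", "image", "audio")),
]

runs_locally = [m.name for m in zoo if m.size == "small"]
print(runs_locally)                            # only the small one
print([m.name for m in zoo if m.multimodal])   # only the unified model
```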

play32:09

Agents are the most interesting, at least for applications.

play32:13

I did a survey on agents,

play32:15

but if you know Kwame Nyanning,

play32:17

he's a former McKinsey consultant, also a startup person,

play32:20

I think he coined Agentic UX.

play32:23

Agentic UX bring this different kind of UX

play32:25

where the UX, as we know it, goes away

play32:28

and you work through a surrogate.

play32:30

Now, when you think about agents,

play32:33

do you have experience making agents?

play32:35

No, but I've prompted. See, that, you're on the way.

play32:38

No. I haven't touched this AI stuff yet.

play32:41

Yes, I made a GPT 20%, so that's, like, a 20,

play32:45

so a quarter of the people who've been surveyed in this room

play32:47

who've done this survey know what an agent is.

play32:51

It's gonna be more a common thing this year,

play32:53

so just keep it on your spider-sense radar.

play32:57

Okay, now, shiny things.

play33:03

Who likes shiny things? Like a squirrel?

play33:05

Thank you. Okay.

play33:06

Thank you, so a list of shiny things here.

play33:09

Okay, so I think that there's two kinds of shiny things

play33:13

to keep in mind in AI.

play33:15

There's either handcrafted things

play33:17

or things crafted for speed.

play33:20

The things crafted for speed are interesting

play33:21

because this AI used to take too long,

play33:24

but now it's getting a lot faster.

play33:27

On the handcraft side,

play33:29

it's this question of what we humans can create

play33:31

that the machine cannot.

play33:35

Okay, so handcraft selection. Who owns a Rivian?

play33:39

Anyone owns a Rivian here? Rivian owners here?

play33:43

No, Rivian not popular?

play33:46

Rivian has a chirping sound. (Rivian chirping)

play33:47

- And increase the pitch.

play33:48

(chirping becomes higher-pitched)

play33:50

The result is a clear and happy confirmation.

play33:52

Your Rivian is locked

play33:53

and the natural world is a little less noisy.

play33:56

(Rivian chirps)

play33:57

- See, the truck chirps. Why?

play34:00

We don't know, (audience chuckles)

play34:03

but a perfect example of a kind of a human sensibility.

play34:06

Another one is this one I found,

play34:09

which is a Dieter Rams-style framer repository.

play34:12

Have you seen this? It's so lovely.

play34:18

(computer clicking)

play34:20

It makes sounds.

play34:21

(computer clicking)

play34:26

I know why. We humans, we do these kinds of things.

play34:31

They're hard for the machine to make

play34:33

'cause they haven't been done yet,

play34:35

and so I am hopeful that we'll keep doing that.

play34:41

Okay, more handcraft.

play34:44

This is called wordsasimage.github.io.

play34:47

It uses the model to create type

play34:52

that takes the shape attributes

play34:56

and applies it to the serifs of the letter.

play34:59

(propulsive music)

play35:04

I know, right? Who would do that?

play35:10

Another one.

play35:11

Okay, so I wanna (indistinct)

play35:14

the future of education is super-important right now,

play35:19

and these are people that I look to if you know their work.

play35:22

I'll put this list online,

play35:24

but if you're looking for, like, new student work out there,

play35:26

there's a lot of interesting work rising

play35:28

in this human domain.

play35:32

Okay.

play35:33

Speedcraft, so remember I pressed the button

play35:36

and it generated the audio and took roughly 25 seconds?

play35:39

This is PlayHT.

play35:41

(keyboard clicking)

play35:43

- [Computer] Hi, how are you?

play35:45

- [John] Did you see that just now?

play35:46

(keyboard clicking)

play35:55

- [Computer] Hi there. I'm calling in regards to-

play35:57

- [John] So anyone who's doing this right now,

play35:59

you had that weird moment,

play36:00

like, "Wait, there was no wait time,"

play36:03

so latency is being taken out of this computation rapidly,

play36:08

so whereas we thought they were slow,

play36:10

they're so much faster,

play36:12

and now people are building things that are predictive,

play36:15

so their actual latency is in minus milliseconds,

play36:18

so it's guessing what you'll create before you create it,

play36:21

so right now, we're experiencing latency,

play36:24

but that's gonna be decimated very quickly.
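That "negative latency" trick — guessing what you'll create before you create it — is essentially speculative prefetching: compute the predicted answer during idle time, so the user-visible wait is near zero. A toy sketch, with a stub `slow_tts` standing in for a real speech model:

```python
import time

def slow_tts(text: str) -> str:
    """Stand-in for a slow speech model: the only expensive call here."""
    time.sleep(0.05)
    return f"<audio for {text!r}>"

cache: dict = {}

def speak(text: str) -> str:
    """Serve from the prefetch cache if the prediction was right."""
    return cache.pop(text) if text in cache else slow_tts(text)

# Idle moment: a predictor guesses the user's next line and precomputes it.
predicted = "Hi, how are you?"
cache[predicted] = slow_tts(predicted)

t0 = time.perf_counter()
out = speak("Hi, how are you?")   # prediction hit: no model wait at all
waited = time.perf_counter() - t0
print(out, f"(waited {waited * 1000:.1f} ms)")
```

When the prediction misses, you just pay the normal model latency — which is why better predictors feel like faster models.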

play36:28

Uh, metaphor.

play36:31

Okay, now I wanna note

play36:33

that everything keeps getting faster every day.

play36:37

Who's tracking this stuff with me?

play36:39

Who gets tired every day? The newest thing.

play36:42

Yeah, it's, like, tiring.

play36:44

Okay, but gotta keep on going.

play36:47

Okay, this one I wanna...

play36:50

Section 5 is about the profound change we'll experience

play36:54

as people who make things.

play36:56

Let's see here.

play36:57

I am a mostly product designer,

play36:59

marketing designer, product manager.

play37:01

Using cutting-edge AI tools will enhance my skills.

play37:06

Thank you.

play37:07

It will influence my career,

play37:09

so, generally speaking,

play37:11

it's important to note

play37:14

that this stuff doesn't seem to be going away,

play37:16

and it keeps accelerating,

play37:18

and so I got some good news.

play37:21

Those of you who have adopted design systems,

play37:24

who's adopted design systems in their company?

play37:26

Congratulations. That is important computational skill.

play37:31

The second one, AI prompting,

play37:32

I think over half this audience has done that,

play37:34

so those are two new computational skills

play37:37

that we've already kind of introduced.

play37:39

We're all using.

play37:42

The reality, though, is that this is expansive.

play37:47

I use a model of the marketing loop and the product loop.

play37:50

It's two loops: attraction versus retention,

play37:54

and if you think about it,

play37:57

on the left-hand side is you pull the possible buyer in,

play38:01

and on the right side,

play38:03

you keep them retained on the product,

play38:06

and the interesting thing is that

play38:08

this technology is affecting both loops at the same time,

play38:12

so more good news, question mark?

play38:16

Most marketing work is being impacted by this,

play38:20

and most product work is being impacted by this as well,

play38:23

so again, impacting both sides of the loop

play38:27

where design is involved,

play38:30

and if you haven't seen Vercel's v0,

play38:33

which was re-released a couple days ago,

play38:38

it indicates an era where UI goes away,

play38:42

so-called Zero UI, and why is that possible?

play38:47

It's possible 'cause of something called function-calling.

play38:50

I wanna pause for a second because function-calling,

play38:54

if you haven't seen it or you haven't heard of it,

play38:57

it's because it's somewhat obscure.

play38:59

We're captivated by chatting with things right now.

play39:02

It's like, "Oh my gosh, I'ma chat to it.

play39:04

"It's gonna chat back to me. I'ma chat back.

play39:06

"I'ma have it make an image. I'ma chat back to it."

play39:09

Function-calling models are a little different

play39:12

in that function-calling models, the way to think of it is

play39:15

there's a chest of tools sitting over here,

play39:19

and what the AI model can do is

play39:22

it can select the tool and use the tool.

play39:27

Not only that: It can select the first tool

play39:30

and then the second tool to do pipelined work,

play39:34

so function-calling models mean that the effort we took

play39:40

to make a user journey or help someone,

play39:43

hold their hand, get the inputs from them through screens,

play39:47

the user can now teleport

play39:49

to the actual thing they wanna get done

play39:51

with no need for the journey,

play39:53

so that's beginning to happen right now,

play39:57

and so these two sides of the loop:

play40:00

What is marketing? What is product?

play40:03

are gradually gonna change because of function-calling.

play40:06

It changes the relationship to a consumer.

play40:08

It changes the relationship to what a journey is

play40:11

to someone using that journey.
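That tool-chest picture can be sketched in a few lines: the "model" returns structured tool calls instead of prose, and the host code runs them in order — pipelined. Everything here (the keyword planner, the two plugins, the Austin default) is a made-up stub, not any real API:

```python
def get_weather(city: str) -> str:
    return f"Sunny in {city}"                  # pretend plugin #1

def book_table(city: str) -> str:
    return f"Table for two booked in {city}"   # pretend plugin #2

TOOLS = {"get_weather": get_weather, "book_table": book_table}

def model_plan(goal: str) -> list:
    """Stub planner: a real function-calling model emits structured tool
    calls (name + arguments) like these instead of chat text."""
    plan = []
    if "weather" in goal:
        plan.append(("get_weather", {"city": "Austin"}))
    if "dinner" in goal:
        plan.append(("book_table", {"city": "Austin"}))
    return plan

def run(goal: str) -> list:
    # The user states a goal; the tools are picked and pipelined for them,
    # with no screens or hand-holding journey in between.
    return [TOOLS[name](**args) for name, args in model_plan(goal)]

print(run("check the weather and book dinner"))
```

The user teleports to the outcome; the "journey" is now the plan the model emits.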

play40:14

Okay, I had this wonderful moment back when I ran a college

play40:18

called Rhode Island School of Design,

play40:21

and I used to walk around campus a lot

play40:24

and talk to students.

play40:25

They're my customers, right?

play40:26

So talk to 'em. "How you doin'? (indistinct)"

play40:28

I would always learn something new from them.

play40:31

There was one student came up to me

play40:33

and said, "Hey, John.

play40:35

"I saw you talk to those people over there.

play40:38

"You shouldn't talk to them."

play40:40

I said, "Why?"

play40:42

"They're talkers. You shouldn't talk to them."

play40:46

(indistinct), "Why's that?"

play40:48

"Well 'cause, like, we're makers. We make things.

play40:51

"They're talkers. They don't make anything,"

play40:53

and I was like, "Aren't you a talker

play40:56

"'cause you're talking to me?"

play40:59

So makers and talkers,

play41:01

so makers...

play41:02

Oh, makers and talkers...

play41:05

Developers are makers,

play41:07

and those makers' lives are getting disrupted

play41:10

by things like GitHub Copilot, GPT Engineer.

play41:15

It's changing how you develop software. The makers.

play41:19

Same for designer-makers.

play41:22

Designer-makers are being,

play41:23

or their lives are being changed as well

play41:24

by this new kind of way to build things.

play41:28

What's interesting, however,

play41:29

is people who are talkers, the people who manage,

play41:32

their lives are getting a lot better,

play41:36

so it's something to sort of note.

play41:38

Just saying. Okay.

play41:41

This is on my studio wall for the last 10 years

play41:44

by Eric Shinseki: "If you don't like change,

play41:46

"you're gonna like irrelevance even less,"

play41:49

and I find this very useful

play41:51

'cause it can be kind of like,

play41:52

"Whoa, this is getting tiring.

play41:54

"How I'm gonna keep going?"

play41:55

So but this keeps me going. Helps you too.

play41:59

Okay, function-calling. You know, I love it so much.

play42:03

Function-calling is really nicely illustrated in Vercel v0.

play42:09

What you see here is the model is you're chatting with it,

play42:14

but you're not just chatting with it.

play42:16

It's reaching out and using a plugin

play42:19

or it's generating the user interface that you need

play42:22

to get something done.

play42:24

This is a different way of creating software.

play42:29

You define the goal.

play42:31

The entire journey gets designed,

play42:34

and the different components get called to get the job done.

play42:38

It is still very new,

play42:41

but we're gonna see more of that, definitely,

play42:44

this year and beyond,

play42:45

and it'll probably change how we design in general.

play42:49

Okay. (sighs) Okay.

play42:51

All right. Let's see here.

play42:52

I don't see the questions, but anyways, how we doing? Okay?

play42:56

All right? We still here?

play42:57

(audience calling out) 'Kay, we got 16 minutes.

play42:59

Okay, I'm sorry I'm giving you too much stuff.

play43:01

I know, but anyways, this is why you probably come here.

play43:03

I overcreate for the table. All right.

play43:07

Critical thinking,

play43:09

so critical thinking is awesome

play43:11

because we humans are really good at it.

play43:13

It's also a skill of a talker, mind you.

play43:16

The thinker-talker who isn't making

play43:19

can actually think critically,

play43:20

and it may be the one skill

play43:22

that we have to be the best at now,

play43:24

especially given the time.

play43:26

This is a model of critical thinking.

play43:29

You have standards.

play43:31

You have elements of thought,

play43:33

and you sort of put them all together,

play43:35

and what you get, you get the kind of a person that you are

play43:39

as an intellectual.

play43:41

Those of you who are in AI call this chain-of-thought reasoning

play43:45

or a tree of thought or more exotic words.

play43:48

This is basically thinking hard,

play43:51

and when you think hard, it's critical.

play43:55

The problem with critical thinking, however, is

play43:58

it takes time.

play44:00

Like, how many of you

play44:01

like to buy weird things made by human beings?

play44:05

This is Austin. I'm sure you do that.

play44:06

Yes. South by Southwest, right?

play44:07

You go, "Oh my gosh, I gotta have that."

play44:09

Like, "There's a shirt I passed by

play44:12

"for the last six years in Austin.

play44:13

"I gotta have that shirt,"

play44:14

so I bought it yesterday 'cause it's clearly handmade.

play44:18

It's like, "Ah, it's pretty hard to make,"

play44:20

so it takes time,

play44:21

and so therefore, it's a little more expensive.

play44:24

The problem is time is money,

play44:27

so critical thinking is not a common thing.

play44:30

I've lived in the not-for-profit sector

play44:33

and the for-profit sector,

play44:34

and I realized a really simple thing

play44:37

that maybe you all have known for a longer time than me,

play44:40

but there's kind of like a division of labor.

play44:44

In corporations,

play44:48

it's a lot of energy to critical think,

play44:52

so they're not really good at critical thinking.

play44:54

They're good at scaling.

play44:56

Scaling, build, scale, build, scale,

play44:58

whoo kind of thing, right?

play44:59

Critical thinking, that takes time.

play45:03

On the other hand,

play45:04

not-for-profits are really good at critical thinking,

play45:08

and that's where a lot of the important stuff

play45:10

is being generated now to think critically,

play45:14

so it's almost as if

play45:15

if you keep looking across both worlds,

play45:18

in the average, you might find something better, I think,

play45:22

is my belief.

play45:25

I have a collection of a bunch of these papers that I found

play45:29

that I'll put online after this.

play45:31

What I think are the best resources

play45:33

to think of a balanced view of how this stuff all works.

play45:38

This is

play45:39

"On the Opportunities and Risks of Foundation Models."

play45:42

"Dangers of Stochastic Parrots."

play45:44

This is very new paper,

play45:45

"Ten Hard Problems in AI We Must Get Right."

play45:48

This is a wonderful paper to read

play45:50

if you haven't read it yet, "GPTs are GPTs."

play45:52

The graph is...

play45:55

You can't read the graph.

play45:56

You have to zoom it up, but it's fascinating,

play45:59

and my favorite by Erik at Stanford, "The Turing Trap."

play46:03

Because of this diagram 'cause I like simpler things.

play46:07

There's, like, so much stuff out there.

play46:08

It's, like, information obesity,

play46:09

so this is my favorite diagram I found.

play46:12

It's by Erik at Stanford.

play46:14

This is tasks that only humans can do, the set of,

play46:19

and then, this is human tasks that AI could automate.

play46:23

The question in your life is,

play46:25

is that black rectangle larger or smaller?

play46:28

Is your orange rectangle getting bigger or smaller?

play46:30

Is the question,

play46:32

and then, there's this whole sea of things,

play46:35

new tasks that humans can do with the help of AI,

play46:38

only with the help of AI,

play46:39

so I find this a very productive framing,

play46:42

like, you know, how are you building your bench of skills?

play46:46

What are your new...

play46:46

Or you have this new blue ocean type of skill out there.

play46:49

I have found, while building this report

play46:51

and showing you all the stuff I built

play46:52

to sort of make that sort of search thing, whatever.

play46:55

I was, like, "Wow, I can do stuff I couldn't do before."

play47:01

Who knows the work of Sherry Turkle? Sherry Turkle?

play47:03

If you don't know it, please check her work out.

play47:06

I think she's one of MIT's greatest minds.

play47:08

Sherry's been thinking

play47:09

about identity, devices, society for the longest time,

play47:15

but she wrote once that,

play47:16

"Technology challenges us to assert our human values,"

play47:20

which means that, first of all,

play47:22

we need to figure out what they are,

play47:24

and so really, everything Sherry's written

play47:27

I think, is very relevant today

play47:29

as we try to find our way forward

play47:31

in this unusual time of AI,

play47:34

so concluding thoughts,

play47:36

so I realize, those of you who design,

play47:40

that you are the great humanizer of technology.

play47:43

I've been cooking recently.

play47:45

There's, like, meat tenderizer.

play47:47

The meat's, like, hard, gets a little softer,

play47:49

so it's like I think design humanizes technology.

play47:53

That goes all the way back to the Bauhaus

play47:56

or even the 1800s where factories were making

play48:03

products that were dangerous to your hand, hard to hold,

play48:08

and so the Bauhaus school was a result

play48:11

of asking questions of, "Well, if you hold it this way,

play48:14

"you're gonna burn your hand.

play48:15

"Let's actually make it this way."

play48:16

"Oh, that's gonna take more manufacturing effort."

play48:18

"No, no, the consumer will benefit from that,"

play48:21

so design has always been something

play48:23

that humanizes technology,

play48:26

and at the same time we're told don't humanize AI.

play48:30

It's kind of weird, right? What do you do?

play48:34

So maybe we should stop designing

play48:37

'cause we're humanizing it, right?

play48:41

But I believe that the new value to find

play48:44

is in this criticality thing.

play48:47

We gotta make it just taste better somehow.

play48:49

I put a tongue there, right?

play48:50

It's, like, measurable value is, like, whatever.

play48:53

Palpable value. "Oh, I want that," kind of thing.

play48:56

Critical value. We have to find that.

play49:01

Privacy as a value is becoming like that.

play49:02

That's delicious. Oh, it's so private.

play49:04

That's delicious, so it is occurring,

play49:08

but lastly, avoid the delusion,

play49:12

when you're working with these models,

play49:13

to think there's a thing behind it

play49:16

'cause that goes back to the '60s.

play49:19

It's a new kind of technology. It's built outta math.

play49:22

It can do good things. It can do bad things.

play49:24

It's up for us humans who want to do good things with it

play49:28

to do better things for the world.

play49:31

Okay, there's a bunch of stuff to read up there

play49:34

in the critical space.

play49:36

These are some of my favorites.

play49:37

I just met someone in the line to get tacos somewhere

play49:41

who probably is here.

play49:44

This is my favorite thing.

play49:45

It's design patterns catalog.

play49:47

Dangerous, hitting a URL.

play49:48

There we go. This is by Sarah Gold.

play49:51

Sarah Gold in the UK is probably the great mind of our time

play49:55

about services earning trust.

play49:56

This is the entire dictionary

play49:58

of ways to approach consent, privacy.

play50:02

It's such a beautiful thing.

play50:04

Okay. All right.

play50:05

All right. Go.

play50:06

Wow, we got (indistinct).

play50:07

"How to Speak Machine," my highly unsuccessful book of 2019,

play50:11

(audience laughing)

play50:12

which took me six years to write,

play50:14

but I wrote it to help understand computational design.

play50:18

Computation has three properties. The first is it never gets tired.

play50:21

The second thing is it can span infinite space.

play50:25

The third thing is it can act alive,

play50:27

so these three properties,

play50:28

we have had no design material like it before ever,

play50:33

one that never gets tired,

play50:34

one that can span infinite space

play50:36

and infinitesimal detail at the same time,

play50:39

and lastly, it has properties

play50:40

that can make it seem, like, alive.

play50:41

You saw the robots. You see it thinking.

play50:43

Weird.

play50:45

It changes how you make products

play50:46

because you can make them incomplete for the first time.

play50:50

You can also make products that are instrumented,

play50:51

so you can watch what's happening over time,

play50:54

and lastly, if you don't watch out, it can lead to imbalance

play50:58

because you can automate imbalance if you don't watch out.

play51:02

Okay, takeaways. Computational thinking.

play51:06

It's invaluable, not about coding,

play51:09

but understanding how computation works

play51:11

will help to keep you up-to-date

play51:15

'cause the physics have not changed.

play51:18

The second thing is your work is gonna change.

play51:20

AI's gonna change how we work,

play51:23

and it's gonna be not like this.

play51:25

It's gonna happen over time.

play51:27

People say, "Oh, no, John. It's gonna happen right now."

play51:30

Who's ever tried to get something

play51:31

through procurement before?

play51:32

Raise their hand. (audience laughing)

play51:33

Thank you. You know what I mean.

play51:34

This is gonna take a little time, you know, (indistinct),

play51:38

and lastly, be critical of AI for customer use cases.

play51:42

This is something that businesses,

play51:44

when they find value in that, will see the tide change,

play51:48

and so I see design as sort of standing at that forefront

play51:53

of trying to articulate that value, create that value,

play51:56

and I'm looking forward to it,

play51:57

and if you look at the work of Sarah Gold,

play51:59

you'll be able to see that starting to form in design.

play52:02

This'll be up on Design in Tech Report soon

play52:05

in a more coherent fashion

play52:07

and visit the Cozy AI Kitchen if you can,

play52:11

and this pocket of universe is now closed.

play52:12

Thanks for your attention and hope you enjoy the heat.

play52:15

(audience clapping)

play52:17

Thank you.

play52:18

(audience clapping)

play52:22

(ethereal music)
