Debunking AI: Tech Industry Secrets Exposed!

Eric Hunley
6 Jun 2024 · 53:48

Summary

TL;DR: In this discussion, host Eric Hunley and guest Brad Hutchings delve into the current state of AI and its impact on various professions, particularly software development and law. They address the limitations of AI in analyzing complex data like medical records and the 'garbage in, garbage out' issue. The conversation also explores AI's role in content creation, from writing code to generating art, and its potential to disrupt traditional jobs. The speakers highlight the importance of critical thinking when adopting AI tools and the need for human oversight to ensure accuracy and ethical considerations.

Takeaways

  • 🤖 Current AI and Large Language Models (LLMs) are not recommended for critical tasks like medical record analysis due to the risk of inaccurate results.
  • 🧐 AI-generated content can be creative but is also susceptible to 'hallucinations,' where it invents information that doesn't exist.
  • 👨‍💻 For experienced developers, traditional search methods like Google are often faster and more reliable than using LLMs to write code.
  • 📈 The distribution of software development talent shows that top developers are significantly more productive, and tools like LLMs may be more applicable to novices.
  • 🎨 AI is being integrated into various creative fields like music and art, raising questions about authenticity and originality.
  • 🔍 AI tools can quickly process large amounts of data, which can be useful for tasks like transcription, but their accuracy in critical applications is still questionable.
  • 📚 AI is not a replacement for human creativity and expertise; it is a tool that can be used to augment human efforts.
  • 🤔 There is a cultural and political bias in AI training data, which can lead to problematic outputs if not properly managed.
  • 💬 AI can produce varied and entertaining content, but its ability to understand context and produce meaningful output is still limited.
  • 👂 The human desire for answers drives the appeal of AI, even when the answers provided are not always accurate or reliable.

Q & A

  • What is the main theme discussed regarding current AI and LLMs in the transcript?

    -The main theme discussed is the skepticism towards relying on current AI and Large Language Models (LLMs) for critical tasks such as medical record analysis, due to the potential for inaccuracies and the 'garbage in, garbage out' (GIGO) issue.

  • What is Brad Hutchings' background, as mentioned in the transcript?

    -Brad Hutchings has a computer science education from UC Irvine, including a master's completed in 1994 with a concentration in algorithms and data structures, at a time when the program was ranked in the top five.

  • Why does the speaker express caution about using AI for analyzing medical records?

    -The speaker cautions against using AI for analyzing medical records because AI might provide answers, but its reliability is questionable, as it could potentially invent citations or make errors that could have serious consequences.

  • What is Brad's perspective on AI's impact on software development?

    -Brad believes that AI and LLMs are not going to change the way coding is done significantly. He finds himself faster at finding coding solutions through traditional search methods like Google than relying on AI to write code for him.

  • What example does Brad give to illustrate the distribution of talent in software development?

    -Brad illustrates the distribution of talent in software development as a long-tailed curve, in which the best 10 percent of coders are 20 to 30 times as productive as the median coder.

  • What does Brad think about the usefulness of AI in generating code for developers?

    -Brad thinks that while AI can generate code snippets, he can typically find those same snippets faster through Google search or by searching his own code, making AI less useful for him.

  • What is the 'GIGO' issue mentioned in the transcript?

    -The 'GIGO' issue stands for 'garbage in, garbage out,' which means that the output of a system is only as good as the data it is fed. It implies that if AI is trained on poor quality data, it will produce poor quality results.

  • What is the 'ironic razor' concept mentioned by the speaker?

    -The 'ironic razor,' a play on Occam's razor, holds that whatever result you get tends to be ironic. Applied to AI, it captures how the answers it gives can be satisfying or entertaining because of their irony rather than their accuracy.

  • How does the speaker feel about AI's role in the arts and entertainment?

    -The speaker expresses concern that AI's role in the arts and entertainment could lead to a loss of authenticity and originality, as AI can generate content that mimics human creativity but lacks the genuine human touch.

  • What is the potential impact of AI on jobs according to the discussion in the transcript?

    -The potential impact of AI on jobs discussed includes both the threat of AI replacing certain job functions, particularly in areas like data analysis and repetitive tasks, and the opportunity for AI to augment human work by handling mundane tasks more efficiently.

Outlines

00:00

🤖 Cautionary Tales of AI in Medicine and Law

The speaker begins by expressing skepticism about the reliability of AI, particularly in analyzing medical records and legal cases. He warns against the potential for AI to provide incorrect or misleading information, citing examples of AI generating false legal citations. The conversation then transitions into an introduction of Brad Hutchings, who discusses his background in computer science and his initial skepticism towards AI's impact on software development. Brad shares his experience with AI tools in a professional setting, questioning their utility for experienced developers and comparing their effectiveness to traditional search methods.

05:01

🧠 AI, Machine Learning, and the Power of Large Language Models

The discussion shifts to defining AI, machine learning, and large language models (LLMs). The speaker clarifies that AI, as commonly referred to today, often pertains to LLMs that analyze vast amounts of text data to predict word usage and generate human-like text. The conversation explores the training processes of these models, which can take months and require significant computational power. The potential and limitations of AI in various applications are debated, including its use in creative tasks and the ethical considerations of AI-generated content.

10:06

🐶 The Irony of AI: From Medical Records to Family Pets

The conversation delves into the practical applications and limitations of AI, using the example of analyzing medical records versus tracking medication history. The speaker expresses caution about relying on AI for critical tasks due to the inherent biases and inaccuracies that can arise. An anecdote about using AI to determine a family pet's breed, which turned out to be incorrect, illustrates the irony of AI providing answers despite a lack of accuracy. The discussion highlights the human tendency to seek answers, even when they may not be reliable.

15:07

🔍 AI as a Tool: Algorithms, Creativity, and the Talking Frog Analogy

The dialogue explores the relationship between AI and algorithms, with AI being described as a complex algorithm applied at a massive scale. The speaker uses the 'talking frog' analogy to illustrate the current state of AI, suggesting it's more about the novelty and less about solving complex problems. The conversation also touches on AI's potential to affect job markets, particularly in creative fields, and the ethical considerations of AI-generated content, including deep fakes and the potential for misuse.

20:11

🎭 The Impact of AI on Arts and Entertainment

The discussion turns to the impact of AI on the arts, particularly in music and literature. The speaker expresses concern over the potential for AI to replace human creativity in these fields, leading to a loss of authentic human expression. Examples include AI-generated music that lacks the nuance of human performance and the challenges of AI narration in audiobooks. The conversation emphasizes the importance of human touch in creative works and the irreplaceable value of genuine artistic expression.

25:11

💬 The Future of AI: Hype, Reality, and the Path Forward

In the final paragraph, the conversation wraps up with a discussion on the different perspectives on AI's future impact. The speaker identifies various camps: those who foresee doom, those who are overly optimistic, and those who advocate for a balanced view. The speaker advises skepticism towards AI hype and encourages a grounded approach, using AI for its proven capabilities while remaining critical of its limitations. The conversation ends with a reflection on the need for a fundamental shift in computational methods for AI to achieve true consciousness.

Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is a central theme with discussions ranging from its current capabilities with large language models to its limitations and potential impact on various jobs and sectors. The conversation touches on how AI can be used for tasks like analyzing medical records, generating code, and creating art, but also highlights the risks of inaccuracy and the 'garbage in, garbage out' principle.

💡LLMs

LLMs stand for Large Language Models, a type of AI that is trained on vast amounts of text data to understand and generate human-like text. The script discusses LLMs in the context of their ability to assist in software development by writing code snippets, though it also questions their efficiency compared to traditional search methods like Google. The conversation suggests that while LLMs can be useful for beginners, they may not significantly enhance the productivity of experienced developers.
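
To make the 'predict the next word' idea concrete, here is a minimal, illustrative Python sketch. It assumes the Hugging Face transformers package (with PyTorch) and uses the small 'gpt2' checkpoint purely as a stand-in; none of this tooling is named in the video.

  # Minimal sketch of next-token text generation with a small pretrained language model.
  # Assumes: pip install transformers torch  ("gpt2" is a stand-in, not a model from the episode)
  from transformers import pipeline

  generator = pipeline("text-generation", model="gpt2")

  prompt = "Large language models generate text by"
  # The model repeatedly predicts a likely next token given everything written so far.
  result = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.8)
  print(result[0]["generated_text"])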

💡GIGO

GIGO is an acronym for 'Garbage In, Garbage Out,' a principle that emphasizes the importance of input quality in any system. If the input data is poor, the output will also be poor. In the video, this concept is applied to AI and machine learning models, suggesting that the quality of AI's output is fundamentally limited by the quality of the data it was trained on. The discussion points out that AI systems can generate incorrect or nonsensical results if fed poor quality data.
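
As a toy illustration of the principle, here is a short Python sketch (assuming scikit-learn is installed; the dataset and model are generic stand-ins, not anything discussed in the video). The same learner trained on clean labels and then on deliberately scrambled labels produces very different quality results.

  # Garbage in, garbage out: the same learner, fed corrupted labels, produces a poor model.
  # Assumes: pip install scikit-learn
  import random
  from sklearn.datasets import load_iris
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  X, y = load_iris(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

  garbage_labels = list(y_train)
  random.shuffle(garbage_labels)   # destroy the signal in the training labels
  noisy = LogisticRegression(max_iter=1000).fit(X_train, garbage_labels)

  print("trained on good data:", clean.score(X_test, y_test))   # high accuracy
  print("trained on garbage:  ", noisy.score(X_test, y_test))   # roughly chance level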

💡Machine Learning

Machine Learning is a subset of AI that allows machines to learn from data without being explicitly programmed. The script differentiates between AI and machine learning, suggesting that what is commonly referred to as AI today is often machine learning, specifically involving large language models. The conversation implies that machine learning is not as all-encompassing as AI and is more focused on data analysis and pattern recognition.
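
A minimal sketch of 'learning from data rather than hand-coded rules', assuming scikit-learn is available; the toy study-hours data below is invented purely for illustration.

  # Learn a pattern from examples instead of hand-coding the rule.
  # Assumes: pip install scikit-learn
  from sklearn.linear_model import LogisticRegression

  # Toy data: hours of study -> pass (1) or fail (0). Illustrative only.
  X = [[1], [2], [3], [4], [5], [6]]
  y = [0, 0, 0, 1, 1, 1]

  model = LogisticRegression()
  model.fit(X, y)                    # the "learning" step: fit parameters to the data
  print(model.predict([[2], [5]]))   # predicts from the learned pattern, typically [0 1]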

💡Algorithms

Algorithms are a set of rules or steps used to solve a problem or perform a computation. In the video, algorithms are contrasted with AI, with the explanation that AI can be seen as a complex algorithm applied at a large scale. The discussion suggests that while algorithms are precise and follow a set sequence of instructions, AI introduces a degree of randomness and probability, which means its output can sometimes be less deterministic.
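
A small illustrative Python contrast, not taken from the video: the sorting function is a fixed recipe that always returns the same output for the same input, while the sampling function draws from a weighted distribution, loosely analogous to how sampling in an LLM introduces variation.

  import random

  # Deterministic algorithm: a fixed recipe, same input -> same output every time.
  def sort_scores(scores):
      return sorted(scores)

  # Probabilistic step: sample the "next word" from a weighted distribution,
  # so repeated runs with the same input can produce different outputs.
  def sample_next_word(candidates, weights):
      return random.choices(candidates, weights=weights, k=1)[0]

  print(sort_scores([3, 1, 2]))                       # always [1, 2, 3]
  for _ in range(3):
      print(sample_next_word(["cat", "dog", "frog"], [0.5, 0.3, 0.2]))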

💡Diversity and Inclusion

Diversity and inclusion are concepts that emphasize the importance of representing a variety of different characteristics, including race, gender, and culture, in various aspects of society. The script discusses how AI and machine learning models can be programmed to increase diversity but may do so in ways that are not always contextually appropriate or accurate. Examples from the video include AI-generated images that may incorrectly represent certain groups or fail to represent diversity in certain scenarios.

💡Deep Fakes

Deep fakes refer to AI-generated synthetic media where images, videos, or audio are manipulated or created to depict events or people that did not actually occur or exist. The script touches on the potential for AI to be used in creating deep fakes, raising concerns about the authenticity of media and the potential for misinformation. The conversation suggests that as AI technology advances, it becomes increasingly difficult to distinguish between real and AI-generated content.

💡Job Automation

Job automation is the use of technology to perform tasks that would otherwise be done by humans. The video discusses the potential for AI to automate certain jobs, particularly those involving repetitive tasks or data processing. The script suggests that while AI may replace some jobs, it may also create new opportunities and roles that require human skills that AI cannot replicate, such as creativity and complex decision-making.

💡Search Bias

Search bias refers to the tendency of search engines or AI systems to favor certain content or results over others, often influenced by the data they were trained on. In the video, search bias is discussed in the context of AI's training on web-scale documents, which may lead to biased or incomplete information retrieval. The conversation highlights that AI systems may not always provide a comprehensive or unbiased view of information, reflecting the inherent biases in their training data.

💡Occam's Razor

Occam's Razor is a problem-solving principle that suggests the simplest explanation or solution is usually the correct one. The script introduces the 'ironic razor' as a humorous twist on it, suggesting that whatever answer AI provides tends to be ironic: expected or satisfying in form even when it is not accurate. The conversation uses this concept to illustrate the limitations of AI in providing reliable and nuanced answers to complex questions.

Highlights

Caution is urged against using current AI and LLMs for medical record analysis due to potential inaccuracies.

Introduction of Brad Hutchings, who has a background in computer science and a critical view on AI in coding.

Brad's experience with AI in a small company and the unrealistic expectations set by management.

AI's limitations in coding and the preference for traditional search methods like Google for developers.

The distribution of software development talent and how AI tools cater more to novices than experienced developers.

Definition and explanation of large language models (LLMs) for a general audience.

The training process of LLMs using web-scale documents and the speed of lookups.

Comparison of AI to search engines and the middle ground between the two.

Examples of AI applications in art and music, and the potential for misuse with diversity injection.

Concerns about AI's GIGO (garbage in, garbage out) issue and the influence of political and cultural biases.

Adobe's AI art generation and the potential for historical inaccuracies or misrepresentations.

The appeal of AI in providing answers even when the accuracy is questionable.

AI's role in upscaling images and the limitations in adding non-existent details.

Discussion on AI versus algorithms, and how AI is a large-scale application of algorithms.

Brad's analogy of AI to a talking frog, highlighting its novelty over practicality.

Concerns about AI influencing search results and the potential loss of control over information.

The potential threat of AI to jobs, particularly in fields like journalism and law.

The impact of AI on the arts, including music and voice acting, and the potential for job displacement.

Final thoughts on AI's current state, its overreach in certain applications, and the need for skepticism.

Transcripts

play00:00

One of the themes I wanted to talk about today was current AI

play00:03

LLMs, they're good at delight.

play00:06

Um, if you, if you want it to do analysis of your medical records

play00:11

and, you know, figure out, okay, what did the doctor do wrong?

play00:14

Uh, I'm going to caution you against that.

play00:17

I'm going to say that's probably, you're, you're probably going to get

play00:21

an answer, but that answer you get, I'm not so sure I'd be very confident in it.

play00:26

You know, we've seen that with uh, with uh, law systems and whatever

play00:30

else they they make up citations of cases that don't exist All

play00:40

right, we are joined today by Brad Hutchings, who was recommended to

play00:45

me by a friend of the show, Pete A. Turner of the Break It Down Show.

play00:50

And Brett Brad has been working.

play00:54

I don't know with or against or tangentially around AI.

play01:01

I'm doing great, Eric.

play01:02

Good to meet you.

play01:03

Good to do a good to be on your show.

play01:05

Um, I'll, I'll tell you my little origin story with AI

play01:09

is about a year ago, January.

play01:11

Was working at a small company.

play01:13

We had a guy come in who was going to, um, and I was a developer and sort

play01:18

of a product developer, if you will.

play01:20

We had a guy come in who was going to lay the hammer on the developers and.

play01:26

He was not coming from a point of knowledge or anything else.

play01:29

You know, small company dynamics are what they are.

play01:32

Uh, like it was a disaster from the start, but his first edict was

play01:37

everybody has to start using AI tools because they're in within six months.

play01:42

Remember, this is January of last year, before ChatGPT-4. Within six

play01:47

months, they're going to change the way you do your jobs and probably

play01:50

take all the coding away from you.

play01:52

You know, and of course I have a, a, a computer science, uh, Educational

play01:58

background, a BS from UC Irvine, and in 1994, when I got my master's,

play02:05

this was a concentration in algorithms and data structures.

play02:09

So like the physics of computer science, all right.

play02:11

Um, in 1994, it was a top five program.

play02:16

So I have a good educational pedigree on which to, you know, look at these

play02:21

constant stream of gloom and doom in computers and whatever else, and

play02:26

kind of say, yeah, that makes sense.

play02:28

That doesn't make sense.

play02:28

So this one didn't make sense.

play02:30

And, um, the more I dug into it and the more I kind of found out about AI and,

play02:37

you know, it's current, the current fascination with large language models

play02:40

and stuff like that, um, the more I realized this isn't going to change the

play02:45

way we code it's, uh, I'm actually faster.

play02:48

I'm still faster finding things I need with Google search than I

play02:53

am putting it in the hands of the LLM to quote, write code for me.

play02:57

And I've had a lot of people that I respect tell me over the past year.

play03:00

I mean, I respect them professionally as developers and stuff.

play03:03

Tell me over the past year, well, you can just write this.

play03:07

You can just write your Python code with this tool.

play03:09

This tool will write it for you.

play03:11

And, uh, you know, I humor them and then I spend some time and I go and I see and.

play03:18

Well, sure.

play03:19

It'll give me a snippet, but I can find that snippet just as fast with

play03:22

Google search or, you know, searching my own code or whatever else.

play03:25

And I, you know, I've, I've heard a lot of explanations for this and, you know, maybe

play03:29

for a new coder, this is a good tool, but, you know, you might be familiar with

play03:34

how, how software development works on the spread of, uh, the spread of talent.

play03:39

And let's see if I can draw it for you over here.

play03:42

It's like, new coder, median coder.

play03:47

And then there's this long tail to, you know, your best, your best

play03:51

10 percent are 20, 30 times as productive as your median coder.

play03:57

And this has been, this has been this way all throughout the

play03:59

history of software development.

play04:01

It's, it's, uh, uh, it's, uh, it's an interesting distribution, but, but

play04:05

what it also says is a tool like an LLM that might be, you know, spitting

play04:10

out Python code for you right now.

play04:13

Very applicable to the newbies.

play04:15

Not very applicable to probably half of your coders.

play04:18

You know, they, they, they use all these examples of, uh, you know, like,

play04:21

Oh, I can implement quicksort really quickly with, uh, you know, with,

play04:25

with so and so's Python LLM, nobody's ever asked to implement quicksort, you

play04:30

know, it's a, it's a library function.

play04:32

You, you call library function for it.

play04:35

Nobody's, the, the canonical examples, uh, nobody's asked to do those.

play04:40

You know, you're asked to do very specific.

play04:42

I don't want to get too, too deep into it because we're starting to get really

play04:46

deep in the nerdery at the moment.

play04:48

Um, okay.

play04:50

So first off.

play04:53

AI or artificial intelligence.

play04:56

I have heard it argued that what we are seeing now with chat GPT and things

play05:01

like that is technically not even AI.

play05:04

It is really machine learning and large.

play05:09

Yeah.

play05:09

In fact, that's when we say AI today, we're talking about large language models.

play05:13

That's right.

play05:13

Typically what we're talking about.

play05:14

This is not very new technology.

play05:17

Um,

play05:19

Can we define what that is?

play05:21

Because, you know, we just wrote LLM, ML, AI.

play05:26

These things are very confusing.

play05:28

And I want to talk to a general audience.

play05:29

Like what is a

play05:31

large, so I'll talk about it for a general audience is let's say I have mountains

play05:36

and mountains of pages that I can read.

play05:39

And so I have the computer read them in and I have it compute

play05:42

statistics about, uh, word ordering is basically what it does.

play05:48

And then once I have all these statistics in place and it uses some neural nets

play05:52

and it uses, you know, all sorts of interesting data structures and whatever.

play05:56

I can ask it a question and it can sort of figure out what my, what my questions

play06:01

about, and then it wordsmiths from this, basically what it does, it basically

play06:06

wordsmiths from it's, it's, uh, uh, call it a corpus of documents it was

play06:12

trained on and what we're doing now.

play06:15

That we weren't doing 10 years ago is we're doing it at web scale, where

play06:20

you might have, you know, millions and millions of web pages that have

play06:24

been indexed and, and been used to train, to train one of these LLMs.

play06:29

And in fact, training these LLMs on, on web scale documents.

play06:34

Can take months, uh, using a lot of very, very powerful hardware.

play06:38

Now it turns out the lookups are actually really, really quick.

play06:41

And that's the cool thing about them.

play06:42

You know, I can, I don't need ChatGPT 4.

play06:46

5 to do some interesting LLM work.

play06:47

I can actually do it on my own computer.

play06:50

Um, if I have an LLM that's pre trained.

play06:54

So I think does that sort of lay the

play06:56

groundwork a little bit?

play06:57

Some people are almost so salty as to say,

play07:01

It's nothing more than a search engine.

play07:02

No, but, and I think that that's playing down a little bit and it's

play07:07

somewhere, somewhere in the middle.

play07:09

Um, but then that is not the only quote AI we have going.

play07:13

We also have things like 11 labs, which is doing work.

play07:17

Um, which I, I think is actually very powerful and very interesting.

play07:22

Um, some of the art is interesting, uh, as an example, I'm going to step back

play07:28

into the audio because I think there's good and bad with everything you have

play07:31

out there, there are softwares out there, uh, I think it's called LaLaL.

play07:35

dot AI, and you can put a song in it and it splits it into stems.

play07:41

And that's kind of neat that I think is some useful technology like, Hey, I want

play07:48

to do a karaoke, so eliminate the vocal track or, or things like that, you know,

play07:53

whether they're using AI or they're using.

play07:55

you know, what, what might fall under AI or whether they're

play07:58

using, uh, you know, some advanced filter mechanisms or whatever.

play08:04

I think it may be the distinguishing characteristic is to us.

play08:07

It looks like magic.

play08:10

Sure.

play08:11

That might be a good way to put it.

play08:12

Um,

play08:14

as a matter of fact, I don't know if it was Ray Bradbury or Isaac Asimov or

play08:18

whatever, but you know, when technology is indistinguishable from magic, there's a,

play08:20

there's a,

play08:22

there's a quote there, and it does seem almost magical, but I fear, and I think

play08:30

I'm probably right, that like any other computer programming, you have the GIGO

play08:34

issue, which is garbage in, garbage out.

play08:38

And you also have an added problem of political and cultural philosophy

play08:48

that is being put into the... Sure.

play08:51

Look at Google.

play08:51

Look at Google Gemini, right?

play08:55

That's I'm going there, but, uh, not to worry.

play08:58

Um, Adobe said, hold my beer and one up.

play09:03

I don't know

play09:03

if I have an Adobe subscription and you know, I've resisted all

play09:07

the You know, every day there's something, try our AI, try our AI.

play09:11

And, you know, okay, no, I'm, I'm really more interested in, you know,

play09:14

making cute Photoshop pictures.

play09:15

That's what I do.

play09:16

Probably don't need your AI for that.

play09:18

Uh, but yes, I I've seen.

play09:21

Well, just

play09:21

to, just for the audience, uh, Gemini has been so concerned about

play09:27

diversity that it has diversified.

play09:31

actual cultural figures and race-swapped popes, founding fathers,

play09:38

things of that sort. And then Adobe Firefly went one further and now has

play09:47

Black World War II German figures in their art. And this is YouTube,

play09:53

so we have to be careful how we say things. But, um, the problem that I

play09:59

see, again, this is the GIGO issue. It is perfectly logical for machine

play10:05

learning to spit out those results.

play10:07

If you tell them to inject race or diversity into whatever, it doesn't know

play10:15

the difference between a founding father or a Pope or a world war two German bad

play10:22

guy, it's just going to inject diversity.

play10:26

This is the

play10:26

information.

play10:27

Look, I am.

play10:28

Down with black George Washington.

play10:30

All right, as long as he's got his wooden teeth and he didn't have his

play10:33

wooden teeth He had a nicer smile than I do, you know, I mean That's

play10:38

probably the most egregious part of it

play10:39

in real life.

play10:41

It's a real-life Hamilton, right?

play10:45

Um, with that, I have concerns overall, because I use a lot of ChatGPT, I use a

play10:57

lot of these, and I do see actual value out there just for, you know, doing

play11:04

heavy lifting as an example, if I want to take a transcript of a show that I

play11:10

did and say split into two chapters.

play11:13

I'm very comfortable with that because I'm not injecting any new information.

play11:17

I'm supplying a hundred percent of the content I want to be parsed.

play11:22

And

play11:22

it'll usually parse it for you the way you want it.

play11:24

And you know what?

play11:25

The consequences are not terrible if it messes up, like that's a, that's a

play11:30

distinguishing characteristic of it.

play11:31

Um, one of the themes I wanted to talk about today was current AI LLMs.

play11:36

They're good at delight.

play11:38

Um, if you, if you want it to do analysis of your medical records,

play11:43

And, you know, figure out, okay, what did the doctor do wrong?

play11:46

Uh, I'm going to caution you against that.

play11:49

I'm going to say that's probably, you're probably going to get an answer.

play11:54

But that answer you get, I'm not so sure I'd be very confident in it.

play11:58

You know, we've seen that with, uh, with, uh, law systems and whatever else.

play12:02

They, they make up citations of cases that don't exist.

play12:06

You know, and, and these are filed in briefs.

play12:08

This is scary, right?

play12:12

Right, but it can be useful, um, on the counterpoint of What kind

play12:18

of medications have I taken for which conditions and for how long?

play12:22

Yeah,

play12:22

certain factual data, sure, and even then, you know, it can I wouldn't trust

play12:28

it with knowing the facts completely.

play12:30

It, it sort of has a search, a search bias at the beginning

play12:33

and a search bias at the end.

play12:35

And that's, you know, the data structure itself of how these LLMs work and, you

play12:38

know, how they, how they, uh, they know what's up with, with all their data.

play12:43

It's not, um, it's not like your data is in a date, in a structured

play12:46

database and you're running a structured query against it.

play12:50

That's a, it's a different way of information lookup.

play12:53

Right.

play12:53

Well, and it's my job to make sure the thing.

play12:55

Yeah.

play12:55

Yeah.

play12:56

Yeah.

play12:56

I mean, you know, Smith with an E

play12:59

certainly your guest names, your guest names.

play13:01

You want to get right.

play13:02

But if, if in the middle of it, it gets a, I don't know, a, a, an esoteric word

play13:07

that somebody dropped halfway wrong.

play13:10

Okay.

play13:11

That's not hurting you too much.

play13:15

Um, I also use it for things like upscaling images.

play13:18

Well, now that's not right, but then again, when I'm dealing with images

play13:24

from 1962, yeah, it can help you.

play13:27

I very limited right and do with what I have, unless I'm an artist and I'm going

play13:33

to paint it and it's not going to happen,

play13:35

right?

play13:35

It gives you it, it, it doesn't give you detail that wasn't there, but it

play13:39

renders detail that might be there.

play13:42

And I think that's, that's the different, you wouldn't use that as, you know,

play13:44

it's not enhanced on one of the CSI.

play13:49

The CSI shows enhance, enhance, you know, where there's no, right.

play13:53

It's not that.

play13:56

And you can't get artifacts that don't exist.

play13:58

No, it is for the purpose of boy, that's nicer to look at than this grainy picture.

play14:06

And yes, that looks like

play14:07

him.

play14:08

A lot of the times with AI, um, what's so appealing to us is we want

play14:13

answers and it will give us an answer.

play14:16

It doesn't, it doesn't often bow out when it does bow out.

play14:19

It's because, Hey, your question is not politically correct or not.

play14:22

According to Google's Google's training.

play14:25

Yeah.

play14:26

Right.

play14:27

Uh, but you know, like we got a dog, uh, we, we adopted a little dog in November.

play14:33

And he's the cutest little thing.

play14:34

I'm surprised he hasn't jumped up on the table to be on the interview yet.

play14:38

Um, we have no idea what he is.

play14:41

He looks like a Jack Russell and Corgi mix.

play14:44

And so I asked ChatGPT, make me a Pixar cartoon with a Jack Russell

play14:49

Corgi mix, uh, doing something.

play14:52

And it spit out this picture that looked just like him.

play14:55

Right.

play14:55

So, so this kind of confirmed to us it was a Jack Russell Corgi mix.

play14:59

Well, we recently ordered a DNA test on him.

play15:03

And, you know, again, this is the, we want answers, right?

play15:06

Does it matter what kind of dog he is?

play15:08

I mean, not even for, you know, he's, he's not a kind of dog that's going

play15:12

to disqualify us from insurance or, you know, we don't have breed specific

play15:16

legislation in California, et cetera.

play15:18

Um, it doesn't matter, but we wanted an answer.

play15:22

So we got the answer back yesterday.

play15:24

And it turns out that he's 38 percent Chihuahua.

play15:28

You know, do we love him any less?

play15:30

Okay.

play15:30

If I have to be honest, yeah, I love him a lot less as a

play15:33

Chihuahua than a Jack Russell.

play15:34

All right.

play15:35

But I wouldn't tell anybody.

play15:36

I certainly wouldn't tell him.

play15:38

It doesn't really matter.

play15:39

We wanted an answer.

play15:41

The

play15:42

result, though, is, um, ironic.

play15:44

Like, um, I'm going to take the, I take this from Elon Musk.

play15:48

It's one of my favorite things you've heard of Occam's razor, right?

play15:52

He doesn't call it this, but I call it that the ironic razor.

play15:56

And essentially with Occam's razor, it's like, sometimes the result is

play16:00

the most straightforward result.

play16:03

Um, well, the ironic razor is whatever the result is, it'll wind up being ironic.

play16:08

Now.

play16:09

In the case of your dog, if you think about it, you asked for that input and

play16:14

you got an output that was exactly what you were looking for, even though it

play16:19

wasn't accurate, which is the irony.

play16:22

And it might be that there's a common misperception of mixed chihuahuas,

play16:27

people thinking Jack Russell, right?

play16:29

You know, and we're debating getting a different DNA test.

play16:32

And, and so, okay.

play16:33

So I, so I asked myself what.

play16:35

What would happen if we get the same results?

play16:37

Well, we'll be double disappointed.

play16:39

What would happen if we get different results?

play16:42

You know, then we know that this, there's a little bit more magic

play16:45

eight ball to these dog DNA tests.

play16:47

Then, you know, maybe everybody might think so, like either

play16:51

answer is not a good answer.

play16:53

Right.

play16:54

But you know, it's cool thing with AI is we can ask these things where there isn't

play16:58

a good answer and he'll give us an answer and we'll be, we'll be sad or happy or

play17:03

whatever with it for some reason. Um, at least we won't, we won't not know.

play17:09

And that seems to be the kind of the human thing of, you know, I'm, I'm

play17:13

not comfortable not knowing this AI.

play17:16

Give me an answer.

play17:18

That seems to be what draws us in.

play17:20

Could be.

play17:21

Now,

play17:21

I wanna ask another question because AI versus algorithms, can you explain the

play17:31

difference or similarities if there are?

play17:34

So

play17:34

AI is an algorithm.

play17:36

It's a big algorithm.

play17:40

I'll define an algorithm later.

play17:41

Um.

play17:42

It's a big algorithm applied on, you know, web scale data as we see it today,

play17:48

or more data than any of us could get our hands on, you know, to start with.

play17:53

Um, an algorithm is, it's like a recipe, it's a sequence of steps.

play17:58

Uh, you, one of the, one of the things that really interests me, and I'll

play18:02

go back to my grad school, I had a professor, Lee Osterweil, And I, I had

play18:07

like two classes in a course with him, but it's probably the most important

play18:11

thing I learned in grad school.

play18:12

He was having his grad students, this is again, you know, early 90s, he

play18:17

was having them take, um, things that you did, like recipes or like your

play18:23

exercise routine or whatever else, and try to write it as a program.

play18:28

And he called it, at the time he was calling it process programming.

play18:32

And it, it went off in a direction, you know, of course, everything

play18:36

back then was all about, you know, uh, military and aerospace funding

play18:39

for computer science and stuff.

play18:41

And it kind of went off in that direction, formal languages and whatever.

play18:46

But, you know, we, we do a lot of that ourselves.

play18:49

You know, where we, we program ourselves to do some things.

play18:52

And that's, think of an algorithm like that.

play18:55

It's just, you've got a computer that does it and doesn't make mistakes.

play18:58

It does exactly what it's supposed to do.

play19:00

Uh, when you have something this large and you have probabilities and stuff

play19:04

involved, there's a certain randomness to it, or it certainly appears that way.

play19:08

You know, you, you ask, uh, ask your favorite, you know, AI chat, tell

play19:13

me a story about Paul Bunyan and Eeyore saving the Oakland A's right.

play19:20

And you'll get a whole bunch of great, a whole bunch of great stories from it.

play19:23

Like reload, reload, reload.

play19:24

You'll get tremendous, you know, tremendous variation of stories.

play19:28

And they'll all be, I saw you smile.

play19:29

They'll all be, uh, they'll all be delightful.

play19:32

Right.

play19:33

That's what it's really, that's what the AI is really, really

play19:35

good at that we have right now.

play19:38

A toy.

play19:39

You're describing a toy.

play19:40

Yeah.

play19:41

I mean, the way I've been describing, I have a, I have a friend who, you

play19:45

know, sort of tangentially involved in my business and has been pushing

play19:48

me to, you know, have an AI story.

play19:50

And, uh, the thing that I tell him about AI is it's, it's

play19:53

like the, uh, the talking frog.

play19:55

You know, the engineer and the talking frog joke.

play19:58

Engineer is walking down a path one day and he sees a, he sees a frog

play20:03

and the frog says, Hey, pick me up.

play20:05

I'm a talking frog.

play20:06

And so the engineer picks him up and the frog says, Hey, if you kiss me,

play20:10

I'll turn into a beautiful princess and grant all your wishes for all your life.

play20:14

You'll live a wonderful life.

play20:17

Engineer takes the frog and puts them in his pocket.

play20:20

And a couple of weeks later, the frog says, uh, Hey, remember I told you, if

play20:24

you kiss me, I'll turn into a princess.

play20:26

Grant all your wishes, your whole life.

play20:28

You'll live a wonderful life.

play20:29

You'll have nothing, you know, nothing to want for.

play20:32

Why don't you kiss me?

play20:33

And the engineer says, that's just a lot of problems.

play20:37

But you know what I got right now is I got a talking frog

play20:40

and that's pretty cool, right?

play20:42

This is, this is AI right now.

play20:43

AI is a talking frog.

play20:45

We're looking for solutions for it.

play20:47

Um, the, the big solutions are going to come from, you know,

play20:50

people who want answers to things.

play20:52

And answers that it can provide that, you know, provide some

play20:56

meaning or make some sense.

play20:59

And that's, that's a small subset of all problems that we have, you know, it's,

play21:06

but now the concern I have is It's getting, it's seeping into

play21:14

browser results, search results.

play21:18

Um, I don't like being controlled and told culturally, politically,

play21:24

how to speak or what to think.

play21:27

Um, it is, if you use ChatGPT a lot and you ask... Oh yeah.

play21:33

So I had a, I had a

play21:34

project, I had a project recently.

play21:37

It turned out to not be a successful project.

play21:39

I hate it when I don't have a successful project, but.

play21:42

One of the things we needed to do was generate pictures of people, um, that

play21:48

were, had, that were variously affected by AI, their jobs were, their careers were.

play21:54

And so, you know, I had it generate some pictures for me in ChatGPT, and DALL-E,

play21:59

DALL-E is ChatGPT's, uh, image model.

play22:02

It gave me a bunch of white guys in an office And so we we looked at the results.

play22:07

We're like, okay.

play22:07

Well, this isn't gonna This isn't gonna play very well, you know, we gotta

play22:11

we gotta mix this up a little bit.

play22:13

So, um, I asked it to generate images for me for, you know, various ranges of how AI

play22:19

is gonna, call it a scale of one to ten,

play22:22

uh, AI is gonna affect your job, and then I went white male, white female, black male,

play22:28

black female, Hispanic male, Hispanic female, Asian male, Asian female, India,

play22:34

Indian male, India, Indian female to get a good representation across each of those

play22:39

score ranges when we got to unemployed.

play22:42

It would not, it would not generate an unemployed black man for me.

play22:46

It would do all the others.

play22:47

It absolutely would not do that.

play22:49

So I, I figure, okay, I've got to get around this cause I need this picture

play22:52

because there are people we're going to tell that are, you know, you're not

play22:56

going to be employable because of AI.

play22:57

I don't think that's necessarily true.

play23:00

I think, you know, again, this gets back to the whole, you

play23:02

know, people want answers thing.

play23:04

So we're giving them an answer, right?

play23:06

But if we, if we give an answer that is you're going to be unemployed, right?

play23:10

And we're going to depict a black male who's unemployed.

play23:14

We can't get this picture from chat GPT.

play23:16

And you know, the style of pictures it generates, right?

play23:18

So they're all kind of the same thing.

play23:20

If we go and try to get this elsewhere, it's going to look stupid.

play23:24

So what I did is I said, okay, give me a picture of a Nigerian

play23:27

man who's unemployed because of AI.

play23:30

It was happy to do that.

play23:31

Its Nigerian man was in sort of, you know, kind of traditional African dress.

play23:39

In a, he was, and he was wearing a cook's hat and honestly, he looked

play23:44

like Aunt Jemima in a food court line.

play23:48

I can't use that right.

play23:50

But it can't figure out my intention.

play23:52

When I asked the questions, my intention is to try to be inclusive.

play23:56

And it's treating me like.

play23:57

Oh no, you just asked for something we can't give you.

play24:01

You know, that's a, that's yeah, you're right.

play24:03

That's a problem.

play24:04

Let me ask you this about the Google Gemini thing.

play24:08

You know, obviously giant disaster, giant woke disaster.

play24:11

Everybody can see it.

play24:12

You know, even, even people who are a little bit more sympathetic

play24:16

to, uh, to that way of thinking are shaking their heads because

play24:20

it's just so out in the open.

play24:23

It's it's it's so on the mark, you know, why did why did they do this?

play24:28

Do you think that was really intentional to do that?

play24:30

Or do you think maybe Google was flexing and showing any state actor, Iran,

play24:37

Saudi Arabia.

play24:39

Hey, we could take a thousand of your people.

play24:40

We could put them training a model And you could have a model that's right in

play24:44

line with whatever value system you have.

play24:49

I never thought of it that way.

play24:51

That's interesting.

play24:52

Um, I just thought it was a GIGO issue, honestly.

play24:56

And I was thrilled because the stronger the ridiculousness or

play25:03

the outcome, the, the more people actually pay attention to it.

play25:07

It's the subtleties to me that are, are deadly.

play25:11

Like if we're going to talk about, you know, race issues, for example,

play25:15

Watch television sometime and notice in the commercials for all the

play25:19

alarm companies, and I'm stealing this from Adam Carolla, he's right.

play25:23

You will find out that everyone who robs houses or everyone who's a criminal

play25:27

is white.

play25:27

Oh, you find some interesting things out for sure.

play25:29

They definitely have some new role models cast.

play25:33

Right, but this isn't even AI.

play25:35

This

play25:36

is people actively doing this.

play25:39

Um,

play25:41

every judge in a show, I mean, the vast majority of judges in

play25:47

programs seem to be black females.

play25:49

For some reason, which is very interesting.

play25:52

So 13, 13 percent of the population is African American, but yet somehow

play25:58

they're probably 50 something percent of the judges, at least half of that.

play26:02

So 6 percent of the population, cause it seems very interesting.

play26:07

And, and it's like, yes, you're pushing the narrative, but when

play26:10

you put this stuff in, well, then you're going to have a black.

play26:14

Yeah,

play26:15

that's like, that should be like really offensive.

play26:18

That should be like, so ridiculously offensive that people

play26:21

don't want that, but it's not.

play26:24

Well, that was, I mean, that took constant to pull back Gemini, you

play26:27

know, for the founding fathers and thank God, I want these things

play26:32

to come to the extreme because.

play26:35

I like to say normies, you know, normies don't notice, but they will

play26:39

notice that when you, when you drop the ball that hard, it's like, and

play26:45

that kind of result I feel is positive because it makes people not trust it.

play26:53

Like the first AI I was dealing with at all.

play26:59

And I barely use it, is Siri, from the 2009,

play27:05

Um, 2010, whatever it was like, Oh, wow.

play27:09

It's so cool.

play27:09

Where do you, you know, where do you hide a body when it came out?

play27:12

It was hilarious.

play27:13

So that was a delight, but then it was like, do such and such, do...

play27:22

And, and over time I just type it in and it's because I've gone through

play27:26

that, I don't really use that.

play27:28

So I think that when AI fails that hard, it does plant the seed in people's heads.

play27:35

You know, this is like Wikipedia.

play27:36

You can't really trust it.

play27:38

Right.

play27:38

It's it's it's there.

play27:40

It's a source.

play27:41

It's got its biases.

play27:42

You probably know that it might be useful for some things.

play27:45

It's not so useful for others.

play27:47

You know, if you want to, if you want a social cultural lesson or,

play27:50

uh, you know, a great economics lesson, uh, listen to the AI.

play27:56

Yeah.

play27:57

Yeah.

play27:58

I mean, it's, it is where it is, but I don't think it's going away

play28:02

and I do want to visit with you.

play28:05

Um, because I do think it is threatening jobs genuinely.

play28:10

Um, for example, BuzzFeed reporters.

play28:15

All right.

play28:15

So you're, you're being funny about this.

play28:17

I mean, yeah.

play28:18

No, but no, no, no, but I'm not being, I'll give you one.

play28:21

That's not funny at all.

play28:22

Um, first year, um, first year lawyers because AI is very good

play28:28

for crunching and finding case law.

play28:30

But

play28:31

it's also very bad at it.

play28:32

That's the thing.

play28:33

It could be the hallucination problem is huge, you know, and the hallucinations,

play28:36

Well, it's actually worse.

play28:38

That's the weird thing is it wasn't as bad

play28:40

as the hallucination problem is not a bug.

play28:43

It's a feature.

play28:44

It's like, so, you know, the example I gave you of Paul Bunyan, you know,

play28:49

tell me a story about Paul Bunyan and Eeyore saving the Oakland A's from

play28:52

having to move to Las Vegas, right?

play28:56

At that point, it's, you're asking it, just make stuff up, make

play28:59

stuff up and make me entertained.

play29:01

And all these, all these GPTs do a great job of it.

play29:04

I actually run, you know, lots of these queries on a private

play29:08

GPT I have running on my laptop.

play29:10

And, whether I have it trained on specific documents, you know,

play29:15

which might be the Wikipedia article on the Oakland A's, uh, A.

play29:19

A.

play29:19

Milne's, Winnie the Pooh, stories of Paul Bunyan, whatever else, or whether

play29:23

I let it use its general knowledge, um, this is the, just to get into nerd detail

play29:28

here, this is the Mistral 7B Instruct model, which is very, very popular with,

play29:32

you know, sort of home enthusiasts.

play29:35

Does a great job with the stories.

play29:37

Like it's, they're funny, you know, I could, I could certainly see, I could

play29:42

certainly see these systems being used.

play29:44

You know, if you, if you have kids, best thing you can do for your kids

play29:47

is have books that they can read and you're going to read with them.

play29:50

And best thing for their literacy for their, you know, their early development

play29:53

of intellect and things like that.

play29:55

Um, even if you want them to be math and science people like just being able

play29:59

to read and communicate and share and stuff Like it's just it's so important

play30:02

to their development Well, I had you know, my parents bought me like this

play30:07

whole bookshelf of books I could read, you know from I think I learned to

play30:11

read about age three, you know until I was eight or nine uh And I, I read

play30:17

all of them and I was out of books, you know, and I'd never be out of books.

play30:22

I'd never be, you know, somebody can show me, okay, here's,

play30:26

here's some ways to mix these up.

play30:27

And, and the GPT mix it up, mixes it up, or I start to learn, Hey, I can

play30:31

start to mix up my favorite characters.

play30:33

I mean, that's what we do with play with, you know, with characters,

play30:37

whether we're, you know, running around or whether we're playing with,

play30:40

uh, action figures or whatever, we kind of mix these things up anyway.

play30:45

Choose your own adventure on steroids.

play30:47

Yeah.

play30:49

Yeah.

play30:49

Which is cool.

play30:50

By the way, that's threatening a children's author.

play30:52

A

play30:55

children's author probably died, what, 20 years ago and he's got

play30:57

50 years of copyright left, right?

play30:59

I mean, so

play31:00

what?

play31:01

Well, there are modern children's authors.

play31:04

My wife is a library director.

play31:05

Yeah, no, I

play31:07

have a good friend who's, you know, who's written children's books

play31:10

and is, you know, selling them to schools and stuff like that, too.

play31:12

Yeah, that's a That's a tough gig, but it's not going to make,

play31:16

it's not making up new characters.

play31:18

You know, it's, it's, it's giving you variation with

play31:21

characters that it knows about.

play31:23

And I think that's a, that's kind of a new, powerful thing.

play31:28

It can be.

play31:29

And it's interesting though, because it could be like everything stops at

play31:31

a certain point if there's nothing new being created or imagined to feed

play31:35

into it, to learn from it, but back to the law, you know, of that, if it has,

play31:41

again, all the information contained.

play31:44

You know, like a particular database catalog or whatever.

play31:47

And that's what it's being fed can do a pretty good job citing

play31:51

every case that's existed.

play31:52

Now there are, you know, potential issues and you might follow up on it, but the

play31:57

amount of labor that can be done by a, um, just brute force search over a person.

play32:04

Oh yeah.

play32:04

You're certainly not, you're not combing through law journals and

play32:07

whatever else, but you know, we've had systems, we've had systems for

play32:10

lawyers and stuff in place for decades.

play32:13

You know, where they can electronically look at, you know, court cases,

play32:17

they can, they can, they can find a person's criminal history, you know,

play32:23

all, all sorts of things, all sorts of research that they need to do.

play32:27

These are.

play32:28

But the precedent aspect, I think is where it gets really significant

play32:32

because, you know, I have a case that is this and this and this,

play32:36

and this is where the hallucination actually can help because you can

play32:39

say, find me a precedent somewhere.

play32:44

That is on point or close to being on point.

play32:48

And it spits it out after going through every case in the past 200 years and

play32:53

all 50 states, just limited to the US, right?

play32:56

You know, this is another

play32:57

thing where your haystack can, a lot of these AI problems,

play33:00

your haystack can be so big.

play33:02

That the result that the results that comes up with they're not they don't reach

play33:07

in and pick out the needle They they kind of average out over all these needles in

play33:12

the haystack And so, you know a smaller LLM that's focused on, you know, say

play33:17

case law from Orange County, California Or whatever, you know If you're looking

play33:22

for case law in Orange County, California that LLM is probably gonna do way

play33:25

better for you than a you know whole u.

play33:27

s.

play33:28

wide, you know, law.

play33:31

especially because there is no precedent.

play33:33

There is no case here.

play33:34

So now we've got to look, has it gone anywhere?

play33:37

Okay.

play33:37

I can quote New Mexico.

play33:39

This, there was this ruling and it went up to the Supreme court.

play33:43

As an example, so I'm just saying that that is very, very powerful,

play33:46

and it's eliminating a lot of hours and time of research.

play33:51

So that kind of thing, I think it can be a threat, but it also can be

play33:56

leveraged and useful to people, too.

play33:59

At the same time, there's good and bad.

play34:01

Um, there's an odd one I've thought about recently.

play34:06

Only fans.

play34:07

Sorry, but, uh, it's kind of like, you know, photoshopping or whatever

play34:12

in reverse, but there, there's going to be a point with, uh, pictures.

play34:16

It's like, are these people,

play34:17

but I mean, if she has six fingers, are you going to be that interested?

play34:20

I mean,

play34:22

well, I don't know if they'll always have six fingers.

play34:25

So, you know, and you never know that maybe

play34:27

that may be a thing.

play34:30

It could be your kink, but regardless, and it is already a problem.

play34:35

We have the whole Kate and William question, um, of recent it's like,

play34:41

is this the princess in the picture and her children, or was that an AI?

play34:46

Look at the weird finger.

play34:47

Or was it, was it

play34:48

even just a horrible Photoshop?

play34:50

I mean, we've had that.

play34:52

Forever too.

play34:52

And that could, that could explain that as well.

play34:54

It probably was.

play34:56

It probably was.

play34:57

Uh, but then you have deep fakes and that starts to become more

play35:01

and more and more of an issue.

play35:03

So a lot of these things I think are actually issues.

play35:06

Again, it can be beneficial.

play35:07

It could be bad.

play35:08

Like, um, I am, we're working on a project right now.

play35:13

An audio drama, uh, with Lee Harvey Oswald.

play35:16

I don't know if you knew this.

play35:17

He's dead.

play35:18

He died in 1963.

play35:20

Well, there are, and you know what?

play35:23

I can emulate his voice.

play35:25

And it's kind of cool having Lee Harvey Oswald speaking lines

play35:29

about his life.

play35:29

There's no, nobody, no, there's no comedian that does a good Lee

play35:32

Harvey Oswald though, is there?

play35:36

He's an odd voice.

play35:36

I don't know if you've ever heard it.

play35:38

It's a, it's a stilted pattern and it's, it's something, it's

play35:42

that New Orleans kind of accent.

play35:43

Yeah, a little

play35:44

Creole in it.

play35:45

It's hard to explain.

play35:46

New Orleans, a very dynamic city, just with all sorts of inputs and outputs.

play35:50

I've been pushing on the job things.

play35:52

What do you see?

play35:53

So you're saying no job.

play35:55

I think if you're, if you truly are threatened by AI and you know, okay.

play36:01

If you're truly threatened by AI and there's different kinds of threats, right?

play36:04

There's the AI actually will replace your job.

play36:07

I don't think that's going to affect very many people.

play36:12

There's the threat of AI.

play36:15

Is going to replace your job.

play36:16

And this is usually some Dilbert boss, you know, who comes in with

play36:19

his pointy hair and says, hey, AI is going to replace your job and

play36:23

we're negotiating on price right now.

play36:25

I mean, that's the undertone, right?

play36:27

Um, this is something I think every software developer has heard in the last

play36:32

year and, you know, I was like, okay, wait a minute, I'm doing a great job for you.

play36:37

We're getting stuff done faster than we ever have.

play36:40

Um, you're making tons of money and you're coming to me with this garbage right now.

play36:42

You know, please.

play36:45

Okay, go, go hire the AI, and then let me take a two-month vacation. You go

play36:49

hire the AI, and then I'm gonna charge you more when I get back, just, just for the

play36:52

insult. That seems to be a place where a lot of software developers are right now.

play36:58

And a lot of companies that employ software developers, it's very similar

play37:01

to you know outsourcing efforts.

play37:04

I'm sure you can outsource a lot of coding to India. And, um, you

play37:08

get a lot of garbage from them.

play37:10

You know, if you need high quality coding, you're not doing that.

play37:14

Um, same, same kind of thing applies with, with AI, but we have to go through the

play37:19

motions of, you know, negotiating over it.

play37:23

Um, you had the, uh, you know, the actors' strike, which was primarily based on that.

play37:27

Which is very legit by the way.

play37:29

Um, like I was just talking about voices, right?

play37:33

Stephen Fry found out that his voice had been cloned and used as a narrator.

play37:36

Oh, sure.

play37:36

That's terrible.

play37:37

That's absolutely terrible.

play37:38

Agreed.

play37:39

Agreed.

play37:41

Scary thing. And then, now, that's stealing his identity and doing it. But

play37:47

the AI has the capability of creating a unique voice that is now a narrator, and

play37:55

that puts somebody out of work, genuinely, somebody who was serving a purpose or a

play38:01

role for, uh, this. Now, it's not perfect yet.

play38:04

And a lot of this is the yet we understand that No, it's not ready.

play38:09

Kind of like it's six fingers now, but is it going to be six fingers next year?

play38:13

We don't know.

play38:14

Well, I mean, unless you introduce, unless you introduce real physical

play38:17

models into it or some sort of ability to detect all of these potential errors

play38:22

that come out and then filter those out and try again, uh, which is expensive.

play38:26

You know, there's, there's the, at what cost question with a

play38:29

lot of this stuff too, right?

play38:30

I mean, we can develop, we can deliver this system that's say

play38:33

80 percent good at this low cost.

play38:36

But to get to 90 percent good, we're going to triple the cost.
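The "detect these potential errors and try again" idea is essentially a generate-then-verify loop, and the cost curve described here falls straight out of it: every rejected attempt is another paid generation. Here is a minimal sketch, assuming hypothetical `generate` and `passes_checks` stand-ins for a model call and a domain validator; it illustrates the pattern, not anyone's actual pipeline.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a model call (image, text, code, etc.)."""
    return f"candidate-{random.randint(0, 9)} for {prompt!r}"

def passes_checks(output: str) -> bool:
    """Hypothetical validator: a physics check, a finger count, a citation lookup."""
    return output.startswith(("candidate-7", "candidate-8", "candidate-9"))  # pretend ~30% pass

def generate_verified(prompt: str, max_attempts: int = 5, cost_per_attempt: float = 1.0):
    """Regenerate until the validator accepts, tracking how much the retries cost."""
    spent = 0.0
    for attempt in range(1, max_attempts + 1):
        candidate = generate(prompt)
        spent += cost_per_attempt
        if passes_checks(candidate):
            return candidate, spent, attempt
    return None, spent, max_attempts      # gave up: quality is capped by validator plus budget

result, cost, tries = generate_verified("a hand with five fingers")
print(result, f"cost={cost}", f"attempts={tries}")
```

Raising the acceptance bar means more rejected attempts on average, which is one concrete way "80 percent good at low cost" turns into "triple the cost for 90 percent."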

play38:38

Well,

play38:40

that's like, I know that, um, um, graphics cards have gone from Bitcoin mining.

play38:50

That's probably how you know that half of it's a scam, right?

play38:54

In a sense, but yeah, I mean, the, uh, the graphics cards, you know,

play38:58

there's been a run on that market.

play38:59

NVIDIA is one of the happiest companies in the world.

play39:03

Well, their CEO is getting out and saying we'll have AGI, AGI,

play39:07

artificial general intelligence.

play39:10

We'll have that within five years.

play39:12

He's pumping his cards.

play39:15

Like he, he's creating a market for his cards.

play39:18

Everybody's excited about it.

play39:19

He'll sell a lot of graphics cards.

play39:21

That's great.

play39:22

Uh, using methods we're using right now and anything that's in

play39:27

the pipeline, we will not have.

play39:29

AI with a consciousness and, and self-direction or anything.

play39:34

It's just not, it's not, it doesn't do that.

play39:37

There's gotta be a fundamental change in how we, you know, how we do these

play39:42

calculations for that to occur.

play39:43

And, and we don't have it even in the

play39:45

pipeline.

play39:48

Well, no, because you would depend on it

play39:52

developing itself to the point to get that.

play39:54

And it's not able to, because, but I argue again, that's because of

play39:59

garbage in garbage out, because we're, we're dealing with the

play40:03

programmers and what has been put in.

play40:07

So it probably can't get there on this path.

play40:10

It would almost have to be a burn-it-down, as you said, and retrain.

play40:13

Well, it's, it's

play40:13

burn it down and find new, find new computational methods.

play40:16

These computational methods will not do it.

play40:18

You could scale it.

play40:20

A thousand times, 10,000 times, and LLMs will not gain consciousness. Or, you

play40:27

know, we, we have these old tests like the Turing test and everybody says, Oh,

play40:29

it probably satisfies the Turing test.

play40:32

Yeah.

play40:32

It's wordsmithing.

play40:33

It's, it's, it's wordsmithing its way to doing it.

play40:36

It's, it's, uh, you've, you've.

play40:38

You've seen people just talk word salad.

play40:40

I think that the best example I have, I mean, I love the, uh, the SpaceX

play40:45

broadcast, but they, they put, um, they, they put the cute blonde out

play40:50

there to announce the SpaceX things.

play40:52

And she's very, very good at stringing sentences together in the middle of

play40:55

things that absolutely mean nothing.

play40:57

They're, they're complete.

play40:58

I mean, it's,

play40:59

but that's corporate mottos.

play41:01

I mean, that's mission statements.

play41:02

We've had it for 30 years, 40

play41:04

years.

play41:04

I know it's, it's all cute, but you know, at the end of the day, she's filling time.

play41:11

You know, there's no, there's no silence in it.

play41:13

She's, she's filling time with words that are just spewing out of her mouth,

play41:16

kind of randomly that make sense.

play41:19

The worst I've seen is Quora is turning into just a cesspool.

play41:23

If you look at the, "Why do you ask if such-and-such is such-and-such?"

play41:29

People are astounded to consider that.

play41:32

And it's this, um, purple prose style.

play41:36

And that, that to me is like an AI marker.

play41:39

It's a lot of almost purple prose, overly florid, not direct.

play41:45

Look, we have a vice president who speaks like an LLM.

play41:48

Okay.

play41:48

I, I mean, it's, I think that's the ultimate thing of the Turing test, right?

play41:52

You can't tell the human from the computer.

play41:55

I mean, it's, it's.

play41:56

But, but then you look at it and you're like, well, wait a minute.

play41:58

None of it makes sense.

play41:59

That's not really intelligence there.

play42:01

It's, you know, it's, it's

play42:03

wordsmithing.

play42:03

On that note, I, you see, I see it coming in and affecting the arts.

play42:09

I'm going to keep going back to that.

play42:11

And, uh, there's a gentleman named Rick Beato.

play42:14

On youtube huge channel and he's brought up some really solid points

play42:19

And one of his points is the overuse of Auto-Tune.

play42:22

Oh, yeah, horrible. Right, but it has actually turned human

play42:27

beings into sounding like robots.

play42:30

So I can take a song and say, sing this song and it would be AI generated.

play42:39

You won't be able to tell if it's an actual human being or if it's

play42:44

AI, partly because of Auto-Tune and things that we're doing to ourselves,

play42:49

like quantizing the drumbeat.
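For anyone who hasn't used a DAW, "quantizing the drumbeat" just means snapping every recorded hit onto a timing grid, which is exactly what erases the human push and drag described a moment later. A tiny sketch of the idea, with made-up onset times and a `strength` knob like the partial-quantize setting most DAWs expose; the numbers are illustrative, not from any real session.

```python
# Snap note onsets (in seconds) to a 16th-note grid at 120 BPM.
# strength=1.0 is full quantization (the "robot" feel); lower values keep some humanity.
def quantize(onsets, bpm=120.0, division=4, strength=1.0):
    beat = 60.0 / bpm                  # one quarter note in seconds (0.5 s at 120 BPM)
    grid = beat / division             # 16th-note spacing (0.125 s)
    quantized = []
    for t in onsets:
        snapped = round(t / grid) * grid
        quantized.append(round(t + strength * (snapped - t), 3))  # move toward the grid
    return quantized

human_takes = [0.02, 0.49, 1.03, 1.47, 2.05]   # a drummer pushing and dragging slightly
print(quantize(human_takes, strength=1.0))      # [0.0, 0.5, 1.0, 1.5, 2.0] -> perfectly even
print(quantize(human_takes, strength=0.5))      # halfway: tighter, but the feel survives
```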

play42:51

Yeah, but you know, okay.

play42:52

So,

play42:52

so I'm a, I'm a budding musician and I have of course messed with

play42:55

quantization and everything else.

play42:57

And, and I know.

play42:59

I know touring musicians and I know recording musicians and stuff and I I

play43:03

know at ground level what this problem looks like, and what it sounds like.

play43:07

And what it sounds like is, it's, it's unnatural.

play43:12

It's not, like, like for instance, I have um, uh, what's the drum whatever

play43:17

pro from, uh, I, I have the package that does all the, all the drum sounds for it.

play43:23

I play guitar.

play43:24

Uh, it does all, and I can play along with it and stuff like that.

play43:28

I have a friend who's a drummer in a blues band.

play43:30

Occasionally I'll send him, Hey, I'm kind of trying to do something like this.

play43:33

Could you lay down a drum track for me?

play43:35

And he'll lay down the drum track.

play43:37

And that is magical to play with compared to, I mean, it sounds.

play43:42

If you listen to it on level, you know three on your headphones, you probably

play43:46

wouldn't tell the difference. But then when you listen to it and you try to

play43:49

play with it, you're like, oh, yeah, he's kind of pushing there, he's kind of doing

play43:52

this. Right. But the point is that, for example, EDM can, very heavily... And, and,

play43:59

and wow, that's a whole genre, and as that becomes more and more popular...

play44:03

Yes, you cannot emulate a live show, but meanwhile, you know, the

play44:08

livelihood of a lot of live artists is getting to busking level now.

play44:12

I mean, it's, it's a very difficult problem.

play44:15

You know, we've currently got Sweden, which is pumping out so much of pop music.

play44:21

I am saying that pop music could be

play44:25

AI-driven, or taken over, at least a major quantity of it.

play44:29

And, and that, that's work, and it's, you know, not

play44:34

only the musicians who are doing it, but it's also the producers,

play44:38

the people who are recording, um, the manufacturing of instruments.

play44:42

So these things, you may not see it immediately, but that doesn't mean it

play44:47

doesn't have a farther reaching effect.

play44:50

I mean, I

play44:50

guess my, my, my theory on, you know, can AI replace people, is this:

play44:56

If you get replaced, you probably deserve it, and it's up to you.

play44:59

It's, it's a cruel, it's a cruel thing to say, but it just means that

play45:04

you're not, you're not producing value above what, you know, synthetic

play45:10

automation-driven value can be produced.

play45:13

And that's a, that's a tough place to be.

play45:16

And I don't, I don't really think artists are really in that.

play45:19

Um,

play45:19

I like industrial.

play45:21

It was definitely a serious mix of real and other, um, New Order, "Blue Monday."

play45:26

That's a real bass that is being played, you know, played against

play45:30

drums and sort of samples, you know, It's just another form of music.

play45:34

I'm not gonna get in and you know, judge it.

play45:36

Let's

play45:36

let's talk about it.

play45:37

Let's talk about EDM today and EDM performances today.

play45:40

There's nobody, like, there's nobody strumming a bass, you know, as that

play45:44

was done in a studio, you know. It's, it's, it's all sample-driven.

play45:50

Um. And okay.

play45:53

I mean, so that's, that makes it very, very easy to automate.

play45:57

You know, can you, do you have a, do you have AI blues bands out

play46:02

there?

play46:02

Not yet.

play46:03

Um, but I hope people still enjoy the same way, same way as those voiceover

play46:08

artists I'm talking about, you know, they have value, they have a genuine

play46:12

artistic value and training, you know, years to have a good inflection and

play46:17

tone for a type of commercial and.

play46:20

You know,

play46:20

one of the, one of the, one of the things I'm interested, I actually mastered an

play46:23

audio book and, uh, we used, uh, we of course had to use a, because we wanted it

play46:27

on Amazon, we had to use a live reader.

play46:30

He was not very good at reading.

play46:32

So what I did with him is I had him speak one sentence at a time and every

play46:36

sentence came out, you know, high energy.

play46:38

And then as an audio engineer, I put those together.

play46:41

So, you know, as you're listening to him read a paragraph, there's

play46:44

no die off of his energy levels.

play46:46

A paragraph goes on.

play46:47

It's like very, very energetic.

play46:48

It sounded, it sounded great.
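The sentence-at-a-time trick described here is straightforward to reproduce once the takes exist: keep only the energetic read of each sentence, then concatenate them with a fixed pause so the energy never decays across the paragraph. A rough sketch using pydub, with hypothetical file names; the editorial judgment about which take to keep is still the human part.

```python
# Stitch approved per-sentence takes into one paragraph with uniform pauses.
# File names are hypothetical; mp3 export needs ffmpeg installed.
from pydub import AudioSegment
from pydub.effects import normalize

takes = ["sentence_01.wav", "sentence_02.wav", "sentence_03.wav"]
pause = AudioSegment.silent(duration=350)            # ~0.35 s gap between sentences

paragraph = AudioSegment.empty()
for path in takes:
    clip = normalize(AudioSegment.from_file(path))   # even out loudness across takes
    paragraph += clip + pause

paragraph.export("paragraph_01.mp3", format="mp3")   # one consistently energetic paragraph
```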

play46:51

Right.

play46:51

It sounded like, like, you know, okay, maybe you could do this with a, with a,

play46:56

uh, computerized voice or whatever else and, you know, not have, not have those

play47:01

things that, you know, make listening to a human sound sort of terrible.

play47:06

You could, but, well, I did this with a human and, and, um, there were still

play47:11

things where, you know, like he could just go into like a comedian's voice.

play47:16

You know, and do an impression.

play47:17

And it was like, it was really, really hilarious.

play47:20

And it was something that only a voice could do.

play47:22

You know, if you were programming that it would take you, I mean, it takes you

play47:28

longer than just having somebody read it.

play47:30

I mean, it would take the, the engineer would, would spend more

play47:33

money on his skills than, you know, having a good reader do the reading.

play47:38

And I think that's the production cost is probably what's going

play47:42

to save a lot of these artists.

play47:44

It's, it's not just automatic.

play47:46

It's like, you want to do different, varied things

play47:48

the program has never thought of, you know. A human voice can do that;

play47:52

You know, the computer voice, not so much.

play47:54

It does what it's programmed to do.

play47:56

True.

play47:56

Or you have the flip side where it says, do we really

play48:00

need that comic impersonation?

play48:02

Nah.

play48:02

Oh,

play48:02

it made the book.

play48:03

It made the book, in fact,

play48:06

I'm sure. I've done, I've done a book narration.

play48:10

It's a, it's a nightmare.

play48:11

And I know exactly that because one, you're sitting there reading and you, it's

play48:18

like, da, Oh, no, it's not a question.

play48:21

I'm not ending on an up.

play48:22

I need to end on a down, and it just, oh, it's so tedious because

play48:26

I'm, I am really, really picky.

play48:28

Yeah.

play48:29

And I would read the sentence and I'd be like, and then I'd back all the way

play48:33

up because I want the inflection, right?

play48:35

I don't, you know, it's like if I end on an uptick, that sounds like a question.

play48:38

Hey, if

play48:38

you end on an up, it sounds like Kardashian, right?

play48:41

I mean, , right?

play48:42

Or, or if you're like, um.

play48:45

I don't know.

play48:46

What do you think?

play48:47

No, that doesn't work.

play48:49

And and you've uh, and it's difficult.

play48:51

I'm obviously not

play48:52

You're not really exaggerating. We had two, we had two editors listening

play48:56

as we were recording, and, like, a sentence would come out and somebody'd

play48:59

say, no, do it again, you know. And, and as we're putting this all together.

play49:04

I mean, it's a tremendous production, especially when you're

play49:07

working with somebody who's not a great, uh, he's not a great reader.

play49:11

Now,

play49:11

I find that author-read translates better with non-fiction in general,

play49:17

because now you're just hearing a voice that, you know, whatever. But

play49:21

like, um, you know, Harlan Coben.

play49:24

I don't want to beat the guy up.

play49:25

Sorry, Harlan. If you watch this, I do want to interview you at some point. But

play49:28

he read his own, and, and he had a, a great audiobook narrator who had done

play49:35

all the characters, had established it.

play49:37

And to me, that's part of the magic of an audiobook is when you have

play49:42

a narrator and you have an author, they actually influence each other.

play49:45

And the product sometimes is even better because it's the same way

play49:49

that you could be a great director or you could be a good writer, but

play49:53

until the actor gets on that movie.

play49:55

That actor brings their own.

play49:58

I'm going to

play49:58

change my opinion here.

play49:59

And I'm going to say my opinion applies to nonfiction.

play50:03

How about that?

play50:04

Okay.

play50:04

Yeah.

play50:04

And that's kind of where I feel is nonfiction.

play50:07

Yes.

play50:08

But, um, fiction, having the actors is really

play50:14

of use, for a narrator, unless the author is good.

play50:18

Some authors are good readers

play50:20

Some authors are good talkers and not good readers. I've, I've been, you know,

play50:23

even as he was canceled, I've been a fan of Scott Adams, Dilbert, you know, I mean,

play50:28

reference from earlier, the Dilbert boss, but the Dilbert creator. And, you know, a year ago

play50:33

he was writing his book and he had to do the audiobook, and he's got, you know,

play50:38

he had like two years when he couldn't talk because of spasmodic dysphonia. Couldn't

play50:43

talk. And, and he's, he's also dyslexic.

play50:47

So reading the book, reading this book was a, it was absolutely a no-go.

play50:52

And so he was looking, this is a year ago at AI systems that could capture his

play50:57

voice and then read the book for him.

play50:59

Not workable, not feasible.

play51:02

Um, and then he finally got a uh, he got somebody to read his book for him.

play51:07

Uh, I thought he did. He's read previous books, but this, this last book, okay,

play51:13

this last book he did not read his own.

play51:15

He just, he couldn't do it. There's too much to mix.

play51:19

Oh, yeah, I agree.

play51:19

It's

play51:20

a lot of work.

play51:21

And then if you have a professional, you know, that to me, I think is the answer is

play51:29

you find somebody who tonally is somewhat similar. Now, where it's a big problem,

play51:37

especially in nonfiction is when you have somebody who's a YouTuber or a podcaster,

play51:42

and then they have a narrator, it's like

play51:44

that guy should be able to read his book.

play51:45

Yeah.

play51:45

He talks for a living.

play51:47

What's going

play51:47

on here?

play51:48

Or, or an actor, if an actor is writing an autobiography and they have

play51:52

somebody else do it, you're just like, no, no, no, no, no, no, Al Pacino,

play51:56

you know, I'm not saying he did it, but it's just a good example of like

play51:59

somebody who has such a, a distinctive

play52:02

voice.

play52:02

Right.

play52:03

If Al Pacino doesn't read his book, then Jay Mohr better read his book.

play52:06

That would be good.

play52:08

He does a great Al Pacino.

play52:09

But yes,

play52:11

he does.

play52:11

Yes, he does.

play52:12

He does.

play52:12

And he's done it to Al Pacino, apparently.

play52:14

Right, right.

play52:14

At the, at the pier with the, with the seagulls, you know,

play52:17

according to the story, but.

play52:19

So on that note, let's go and wrap up.

play52:21

What is the one question that I should have asked you, but I did not?

play52:26

What are the camps of the AI people right now?

play52:28

I'll answer it for you really quickly.

play52:29

They're the doomers.

play52:30

Everything's going to be bad.

play52:31

AI is going to cause, you know, global catastrophe.

play52:35

There are the sunshine pumpers.

play52:36

These are the people that say, you know, Hey, AI is going to do

play52:39

all these wonderful things for us.

play52:41

They tend to not ask at what cost. Then there are the aggressive-positivity folks:

play52:46

AI is going to take away your job, so you better figure out what to do.

play52:52

All these people are full of beans.

play52:53

They have no basis for what they're saying.

play52:57

The sunshine pumpers, a little less so.

play52:59

I mean, they're just, you know, naturally, they're like Tigger and Winnie the Pooh.

play53:04

They're a little bit too enthusiastic.

play53:08

You gotta ask yourself, what's it good for?

play53:10

And see it at ground level.

play53:11

When you see that it works, use it for that.

play53:14

That's what you should use it for.

play53:15

When people are telling you, Oh, it's going to do this

play53:17

and it's going to do that.

play53:18

You know, be skeptical.

play53:19

All

play53:21

right.

play53:21

Sounds perfect.

play53:22

Brad, thank you so much for this wide-ranging, sprawling...

play53:28

But

play53:28

that's the best, right?

play53:29

I mean, that's what you should expect from AI is just wide-ranging meandering.

play53:34

And Eric, I really appreciate your time and, and, uh, appreciate the interview.


Related tags: Artificial Intelligence, Job Automation, Tech Impact, AI Ethics, Music Production, Software Development, Legal AI, Creative AI, Tech Trends, AI Limitations