Debunking AI: Tech Industry Secrets Exposed!
Summary
TL;DR: In this discussion, the guests delve into the current state of AI and its impact on various professions, particularly software development and law. They address the limitations of AI in analyzing complex data like medical records and the 'garbage in, garbage out' issue. The conversation also explores AI's role in content creation, from writing code to generating art and its potential to disrupt traditional jobs. The guests highlight the importance of critical thinking when adopting AI tools and the need for human oversight to ensure accuracy and ethical considerations.
Takeaways
- 🤖 Current AI and Large Language Models (LLMs) are not recommended for critical tasks like medical record analysis due to the risk of inaccurate results.
- 🧐 AI-generated content can be creative but is also susceptible to 'hallucinations,' where it invents information that doesn't exist.
- 👨💻 For experienced developers, traditional search methods like Google are often faster and more reliable than using LLMs to write code.
- 📈 The distribution of software development talent shows that top developers are significantly more productive, and tools like LLMs may be more applicable to novices.
- 🎨 AI is being integrated into various creative fields like music and art, raising questions about authenticity and originality.
- 🔍 AI tools can quickly process large amounts of data, which can be useful for tasks like transcription, but their accuracy in critical applications is still questionable.
- 📚 AI is not a replacement for human creativity and expertise; it is a tool that can be used to augment human efforts.
- 🤔 There is a cultural and political bias in AI training data, which can lead to problematic outputs if not properly managed.
- 💬 AI can produce varied and entertaining content, but its ability to understand context and produce meaningful output is still limited.
- 👂 The human desire for answers drives the appeal of AI, even when the answers provided are not always accurate or reliable.
Q & A
What is the main theme discussed regarding current AI and LLMs in the transcript?
-The main theme discussed is the skepticism towards relying on current AI and Large Language Models (LLMs) for critical tasks such as medical record analysis, due to the potential for inaccuracies and the 'garbage in, garbage out' (GIGO) issue.
What is Brad Hutchings' background, as mentioned in the transcript?
-Brad Hutchings studied computer science at UC Irvine, with a concentration in algorithms and data structures; in 1994, when he completed his master's, the program was ranked in the top five.
Why does the speaker express caution about using AI for analyzing medical records?
-The speaker cautions against using AI for analyzing medical records because AI might provide answers, but its reliability is questionable, as it could potentially invent citations or make errors that could have serious consequences.
What is Brad's perspective on AI's impact on software development?
-Brad believes that AI and LLMs are not going to change the way coding is done significantly. He finds himself faster at finding coding solutions through traditional search methods like Google than relying on AI to write code for him.
What example does Brad give to illustrate the distribution of talent in software development?
-Brad illustrates the distribution of talent in software development by comparing it to a bell curve, where the top 10 to 20 percent of coders are significantly more productive than the median coder.
What does Brad think about the usefulness of AI in generating code for developers?
-Brad thinks that while AI can generate code snippets, he can typically find those same snippets faster through Google search or by searching his own code, making AI less useful for him.
What is the 'GIGO' issue mentioned in the transcript?
-The 'GIGO' issue stands for 'garbage in, garbage out,' which means that the output of a system is only as good as the data it is fed. It implies that if AI is trained on poor quality data, it will produce poor quality results.
What is the 'ironic razor' concept mentioned by the speaker?
-The 'ironic razor' is a concept where whatever result AI provides, it tends to be ironic. It plays on the idea that AI can give answers that are satisfying or entertaining because of their irony, rather than their accuracy.
How does the speaker feel about AI's role in the arts and entertainment?
-The speaker expresses concern that AI's role in the arts and entertainment could lead to a loss of authenticity and originality, as AI can generate content that mimics human creativity but lacks the genuine human touch.
What is the potential impact of AI on jobs according to the discussion in the transcript?
-The potential impact of AI on jobs discussed includes both the threat of AI replacing certain job functions, particularly in areas like data analysis and repetitive tasks, and the opportunity for AI to augment human work by handling mundane tasks more efficiently.
Outlines
🤖 Cautionary Tales of AI in Medicine and Law
The speaker begins by expressing skepticism about the reliability of AI, particularly in analyzing medical records and legal cases. He warns against the potential for AI to provide incorrect or misleading information, citing examples of AI generating false legal citations. The conversation then transitions into an introduction of Brad Hutchings, who discusses his background in computer science and his initial skepticism towards AI's impact on software development. Brad shares his experience with AI tools in a professional setting, questioning their utility for experienced developers and comparing their effectiveness to traditional search methods.
🧠 AI, Machine Learning, and the Power of Large Language Models
The discussion shifts to defining AI, machine learning, and large language models (LLMs). The speaker clarifies that AI, as commonly referred to today, often pertains to LLMs that analyze vast amounts of text data to predict word usage and generate human-like text. The conversation explores the training processes of these models, which can take months and require significant computational power. The potential and limitations of AI in various applications are debated, including its use in creative tasks and the ethical considerations of AI-generated content.
🐶 The Irony of AI: From Medical Records to Family Pets
The conversation delves into the practical applications and limitations of AI, using the example of analyzing medical records versus tracking medication history. The speaker expresses caution about relying on AI for critical tasks due to the inherent biases and inaccuracies that can arise. An anecdote about using AI to determine a family pet's breed, which turned out to be incorrect, illustrates the irony of AI providing answers despite a lack of accuracy. The discussion highlights the human tendency to seek answers, even when they may not be reliable.
🔍 AI as a Tool: Algorithms, Creativity, and the Talking Frog Analogy
The dialogue explores the relationship between AI and algorithms, with AI being described as a complex algorithm applied at a massive scale. The speaker uses the 'talking frog' analogy to illustrate the current state of AI, suggesting it's more about the novelty and less about solving complex problems. The conversation also touches on AI's potential to affect job markets, particularly in creative fields, and the ethical considerations of AI-generated content, including deep fakes and the potential for misuse.
🎭 The Impact of AI on Arts and Entertainment
The discussion turns to the impact of AI on the arts, particularly in music and literature. The speaker expresses concern over the potential for AI to replace human creativity in these fields, leading to a loss of authentic human expression. Examples include AI-generated music that lacks the nuance of human performance and the challenges of AI narration in audiobooks. The conversation emphasizes the importance of human touch in creative works and the irreplaceable value of genuine artistic expression.
💬 The Future of AI: Hype, Reality, and the Path Forward
In the final paragraph, the conversation wraps up with a discussion on the different perspectives on AI's future impact. The speaker identifies various camps: those who foresee doom, those who are overly optimistic, and those who advocate for a balanced view. The speaker advises skepticism towards AI hype and encourages a grounded approach, using AI for its proven capabilities while remaining critical of its limitations. The conversation ends with a reflection on the need for a fundamental shift in computational methods for AI to achieve true consciousness.
Keywords
💡AI
💡LLMs
💡GIGO
💡Machine Learning
💡Algorithms
💡Diversity and Inclusion
💡Deep Fakes
💡Job Automation
💡Search Bias
💡Occam's Razor
Highlights
Current AI and LLMs are cautioned for use in medical record analysis due to potential inaccuracies.
Introduction of Brad Hutchings, who has a background in computer science and a critical view on AI in coding.
Brad's experience with AI in a small company and the unrealistic expectations set by management.
AI's limitations in coding and the preference for traditional search methods like Google for developers.
The distribution of software development talent and how AI tools cater more to novices than experienced developers.
Definition and explanation of large language models (LLMs) for a general audience.
The training process of LLMs using web-scale documents and the speed of lookups.
Comparison of AI to search engines and the middle ground between the two.
Examples of AI applications in art and music, and the potential for misuse with diversity injection.
Concerns about AI's GIGO (garbage in, garbage out) issue and the influence of political and cultural biases.
Adobe's AI art generation and the potential for historical inaccuracies or misrepresentations.
The appeal of AI in providing answers even when the accuracy is questionable.
AI's role in upscaling images and the limitations in adding non-existent details.
Discussion on AI versus algorithms, and how AI is a large-scale application of algorithms.
Brad's analogy of AI to a talking frog, highlighting its novelty over practicality.
Concerns about AI influencing search results and the potential loss of control over information.
The potential threat of AI to jobs, particularly in fields like journalism and law.
The impact of AI on the arts, including music and voice acting, and the potential for job displacement.
Final thoughts on AI's current state, its overreach in certain applications, and the need for skepticism.
Transcripts
One of the themes I wanted to talk about today was current AI
LLMs, they're good at delight.
Um, if you, if you want it to do analysis of your medical records
and, you know, figure out, okay, what did the doctor do wrong?
Uh, I'm going to caution you against that.
I'm going to say that's probably, you're, you're probably going to get
an answer, but that answer you get, I'm not so sure I'd be very confident in it.
You know, we've seen that with, uh, law systems and whatever else. They make up citations of cases that don't exist. All right, we are joined today by Brad Hutchings, who was recommended to me by a friend of the show, Pete A. Turner of the Break It Down Show.
And Brad has been working, I don't know, with or against or tangentially around AI.
I'm doing great, Eric.
Good to meet you.
Good to do a good to be on your show.
Um, I'll, I'll tell you my little origin story with AI
is about a year ago, January.
I was working at a small company.
We had a guy come in who was going to, um... and I was a developer and sort of a product developer, if you will. We had a guy come in who was going to lay the hammer on the developers, and he was not coming from a point of knowledge or anything else. You know, small company dynamics are what they are. Uh, like, it was a disaster from the start, but his first edict was
everybody has to start using AI tools, because, within six months, and remember, this is January of last year, before ChatGPT-4, within six months they're going to change the way you do your jobs and probably take all the coding away from you.
You know, and of course I have a computer science educational background, a BS from UC Irvine, which, in 1994, when I got my master's, was a concentration in algorithms and data structures. So like the physics of computer science, all right. Um, in 1994, it was a top five program.
So I have a good educational pedigree on which to, you know, look at this constant stream of gloom and doom in computers and whatever else, and
kind of say, yeah, that makes sense.
That doesn't make sense.
So this one didn't make sense.
And, um, the more I dug into it and the more I kind of found out about AI and, you know, the current fascination with large language models and stuff like that, um, the more I realized this isn't going to change the way we code. I'm actually faster, I'm still faster finding things I need with Google search than I am putting it in the hands of the LLM to, quote, write code for me.
And I've had a lot of people that I respect tell me over the past year.
I mean, I respect them professionally as developers and stuff.
Tell me over the past year, well, you can just write this.
You can just write your Python code with this tool.
This tool will write it for you.
And, uh, you know, I humor them and then I spend some time and I go and I see and.
Well, sure.
It'll give me a snippet, but I can find that snippet just as fast with
Google search or, you know, searching my own code or whatever else.
And I, you know, I've heard a lot of explanations for this, and, you know, maybe for a new coder this is a good tool. But, you know, you might be familiar with how software development works, the spread of talent.
And let's see if I can draw it for you over here.
It's like, new coder, median coder, and then there's this long tail to, you know, your best 10 percent are 20, 30 times as productive as your median coder.
And this has been, this has been this way all throughout the
history of software development.
It's, uh, an interesting distribution. But what it also says is, a tool like an LLM that might be, you know, spitting out Python code for you right now:
Very applicable to the newbies.
Not very applicable to probably half of your coders.
You know, they use all these examples of, uh, you know, like, oh, I can implement quicksort really quickly with so-and-so's Python LLM. Nobody's ever asked to implement quicksort. You know, it's a library function. You call a library function for it. The canonical examples, nobody's asked to do those.
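Brad's point here fits in a line or two of code: on the job you reach for the language's built-in sort rather than hand-rolling quicksort (Python used purely as an illustration).

```python
data = [33, 5, 21, 8, 13]

# Nobody implements quicksort at work; you call the library function.
# Python's built-in sorted() uses Timsort under the hood.
print(sorted(data))  # [5, 8, 13, 21, 33]
```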
You know, you're asked to do very specific.
I don't want to get too, too deep into it because we're starting to get really deep in the nerdery at the moment.
Um, okay.
So first off.
AI or artificial intelligence.
I have heard it argued that what we are seeing now with ChatGPT and things like that is technically not even AI.
It is really machine learning and large language models.
Yeah.
In fact, that's when we say AI today, we're talking about large language models.
That's right.
Typically what we're talking about.
This is not very new technology.
Um,
Can we define what that is?
Because, you know, we just wrote LLM, ML, AI.
These things are very confusing.
And I want to talk to a general audience.
Like what is a
Large... so, I'll talk about it for a general audience. Let's say I have mountains and mountains of pages that I can read.
And so I have the computer read them in and I have it compute
statistics about, uh, word ordering is basically what it does.
And then once I have all these statistics in place and it uses some neural nets
and it uses, you know, all sorts of interesting data structures and whatever.
I can ask it a question, and it can sort of figure out what my question's about, and then it wordsmiths from this. Basically what it does, it basically wordsmiths from, call it a corpus of documents it was trained on. And what we're doing now
That we weren't doing 10 years ago is we're doing it at web scale, where
you might have, you know, millions and millions of web pages that have
been indexed and, and been used to train, to train one of these LLMs.
And in fact, training these LLMs on, on web scale documents.
Can take months, uh, using a lot of very, very powerful hardware.
Now it turns out the lookups are actually really, really quick.
And that's the cool thing about them.
You know, I don't need ChatGPT 4.5 to do some interesting LLM work.
I can actually do it on my own computer.
Um, if I have an LLM that's pre trained.
So I think does that sort of lay the
groundwork a little bit?
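As a rough sketch of "computing statistics about word ordering" and then wordsmithing from a corpus, here is a toy bigram model. Real LLMs use neural networks over web-scale data; this is only meant to convey the flavor, and the corpus and function names are invented for illustration.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which word -- crude word-ordering statistics."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def generate(counts, start, max_words=8, seed=42):
    """'Wordsmith' new text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words:
        followers = counts.get(out[-1])
        if not followers:
            break  # no statistics for this word; stop
        out.append(rng.choice(list(followers.elements())))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))
```

Training here is just counting; the "lookup" (generation) is fast, which mirrors the point that training takes the heavy hardware while querying a pre-trained model is cheap.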
Some people are almost so salty as to insist it's nothing more than a search engine.
No, but, and I think that that's playing it down a little bit, and it's somewhere, somewhere in the middle.
Um, but then that is not the only quote AI we have going.
We also have things like ElevenLabs, which is doing work.
Um, which I, I think is actually very powerful and very interesting.
Um, some of the art is interesting. Uh, as an example, I'm going to step back into the audio, because I think there's good and bad with everything you have out there. There is software out there, I think it's called LALAL.AI, and you can put a song in it and it splits it into stems. And that's kind of neat. That, I think, is some useful technology. Like, hey, I want to do karaoke, so eliminate the vocal track, or things like that, you know, whether they're using AI or they're using,
you know, what, what might fall under AI or whether they're
using, uh, you know, some advanced filter mechanisms or whatever.
I think it may be the distinguishing characteristic is to us.
It looks like magic.
Sure.
That might be a good way to put it.
Um,
as a matter of fact, I don't know if it was Ray Bradbury or Isaac Asimov or whoever, but you know, when technology is indistinguishable from magic, there's a quote there. And it does seem almost magical, but I fear, and I think I'm probably right, that like any other computer programming, you have the GIGO issue, which is garbage in, garbage out.
And you also have an added problem of political and cultural philosophy that is being put into these.
Sure.
Look at Google.
Look at Google Gemini, right?
That's I'm going there, but, uh, not to worry.
Um, Adobe said, hold my beer, and one-upped them.
I don't know.
I have an Adobe subscription, and you know, I've resisted all the... you know, every day there's something: try our AI, try our AI.
And, you know, okay, no, I'm, I'm really more interested in, you know,
making cute Photoshop pictures.
That's what I do.
Probably don't need your AI for that.
Uh, but yes, I I've seen.
Well, just to, just for the audience, uh, Gemini has been so concerned about diversity that it has diversified actual cultural figures and race-swapped popes, founding fathers, things of that sort. And then Adobe Firefly went one further and now has black World War II German figures in their art. And this is YouTube, so we have to be careful how we say things. But, um, the problem that I see, again, this is the GIGO issue. It is perfectly logical for machine learning to spit out those results. If you tell them to inject race or diversity into whatever, it doesn't know the difference between a founding father or a Pope or a World War II German bad guy. It's just going to inject diversity.
This is the
information.
Look, I am down with black George Washington. All right, as long as he's got his wooden teeth. And he didn't have his wooden teeth, he had a nicer smile than I do, you know. I mean, that's probably the most egregious part of it
in real life.
It's been done in real life, right? Hamilton, right?
Um, with that I have concerns overall, because I use a lot of ChatGPT, I use a lot of these, and I do see actual value out there just for, you know, doing
heavy lifting as an example, if I want to take a transcript of a show that I
did and say split into two chapters.
I'm very comfortable with that because I'm not injecting any new information.
I'm supplying a hundred percent of the content I want to be parsed.
And
it'll usually parse it for you the way you want it.
And you know what?
The consequences are not terrible if it messes up, like that's a, that's a
distinguishing characteristic of it.
Um, one of the themes I wanted to talk about today was current AI LLMs.
They're good at delight.
Um, if you, if you want it to do analysis of your medical records,
And, you know, figure out, okay, what did the doctor do wrong?
Uh, I'm going to caution you against that.
I'm going to say that's probably, you're probably going to get an answer.
But that answer you get, I'm not so sure I'd be very confident in it.
You know, we've seen that with, uh, law systems and whatever else. They make up citations of cases that don't exist. You know, and these are filed in briefs.
This is scary, right?
Right, but it can be useful, um, on the counterpoint of: what kind of medications have I taken, for which conditions, and for how long?
Yeah,
Certain factual data, sure. And even then, you know, I wouldn't trust it with knowing the facts completely.
It sort of has a search bias at the beginning and a search bias at the end.
And that's, you know, the data structure itself of how these LLMs work and, you know, how they know what's up with all their data.
It's not, um, it's not like your data is in a structured database and you're running a structured query against it. It's a different way of information lookup.
Right.
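To make that contrast concrete: a structured query against a database returns exactly the rows that match, deterministically, while an LLM reconstructs an answer statistically from what it "read." A minimal sketch; the table and data here are made up for illustration.

```python
import sqlite3

# A structured query returns exactly the matching rows -- deterministic,
# unlike an LLM's statistical reconstruction of its training text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meds (name TEXT, condition TEXT, years INTEGER)")
conn.executemany(
    "INSERT INTO meds VALUES (?, ?, ?)",
    [("lisinopril", "hypertension", 5),
     ("metformin", "diabetes", 3)],
)
rows = conn.execute(
    "SELECT name, years FROM meds WHERE condition = ?", ("hypertension",)
).fetchall()
print(rows)  # [('lisinopril', 5)]
```

The medication-history question in the transcript is exactly the kind of lookup a structured store answers reliably, which is the speakers' point about where LLM-style retrieval differs.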
Well, and it's my job to make sure the thing.
Yeah.
Yeah.
Yeah.
I mean, you know, Smith with an E.
Certainly your guest names you want to get right.
But if, in the middle of it, it gets, I don't know, an esoteric word that somebody dropped halfway wrong.
Okay.
That's not hurting you too much.
Um, I also use it for things like upscaling images. Well, now that's not right. But then again, when I'm dealing with images from 1962, yeah, it can help you. I'm very limited, right, and do with what I have, unless I'm an artist and I'm going to paint it, and that's not going to happen,
right?
It doesn't give you detail that wasn't there, but it renders detail that might be there. And I think that's the difference. You wouldn't use that as, you know, it's not "enhance" on one of the CSI shows. Enhance, enhance, you know, where there's no... Right. It's not that.
And you can't get artifacts that don't exist.
No, it is for the purpose of boy, that's nicer to look at than this grainy picture.
And yes, that looks like
him.
A lot of the times with AI, um, what's so appealing to us is we want
answers and it will give us an answer.
It doesn't often bow out. When it does bow out, it's because, hey, your question is not politically correct, or not according to Google's training.
Yeah.
Right.
Uh, but you know, like we got a dog, uh, we, we adopted a little dog in November.
And he's the cutest little thing.
I'm surprised he hasn't jumped up on the table to be on the interview yet.
Um, we have no idea what he is.
He looks like a Jack Russell and Corgi mix.
And so I asked ChatGPT, make me a Pixar cartoon with a Jack Russell Corgi mix, uh, doing something.
And it spit out this picture that looked just like him.
Right.
So, so this kind of confirmed to us it was a Jack Russell Corgi mix.
Well, we recently ordered a DNA test on him.
And, you know, again, this is the, we want answers, right?
Does it matter what kind of dog he is?
I mean, not even for, you know, he's, he's not a kind of dog that's going
to disqualify us from insurance or, you know, we don't have breed specific
legislation in California, et cetera.
Um, it doesn't matter, but we wanted an answer.
So we got the answer back yesterday.
And it turns out that he's 38 percent Chihuahua.
You know, do we love him any less?
Okay.
If I have to be honest, yeah, I love him a lot less as a Chihuahua than as a Jack Russell.
All right.
But I wouldn't tell anybody.
I certainly wouldn't tell him.
It doesn't really matter.
We wanted an answer.
The
result, though, is, um, ironic.
Like, um, I take this from Elon Musk. It's one of my favorite things. You've heard of Occam's razor, right?
He doesn't call it this, but I call it that the ironic razor.
And essentially with Occam's razor, it's like, sometimes the result is
the most straightforward result.
Um, well, the ironic razor is whatever the result is, it'll wind up being ironic.
Now.
In the case of your dog, if you think about it, you asked for that input and
you got an output that was exactly what you were looking for, even though it
wasn't accurate, which is the irony.
And it might be that there's a common misperception of mixed chihuahuas,
people thinking Jack Russell, right?
You know, and we're debating getting a different DNA test.
And, and so, okay.
So I, so I asked myself what.
What would happen if we get the same results?
Well, we'll be double disappointed.
What would happen if we get different results?
You know, then we know that this, there's a little bit more magic
eight ball to these dog DNA tests.
Then, you know, maybe everybody might think so, like either
answer is not a good answer.
Right.
But you know, the cool thing with AI is we can ask these things where there isn't a good answer, and it'll give us an answer, and we'll be sad or happy or whatever with it for some reason. Um, at least we won't not know.
And that seems to be the kind of the human thing of, you know, I'm, I'm
not comfortable not knowing this AI.
Give me an answer.
That seems to be what draws us in.
Could be.
No.
I wanna ask another question because AI versus algorithms, can you explain the
difference or similarities if there are?
So
AI is an algorithm.
It's a big algorithm.
I'll define an algorithm later.
Um.
It's a big algorithm applied on, you know, web scale data as we see it today,
or more data than any of us could get our hands on, you know, to start with.
Um, an algorithm is, it's like a recipe, it's a sequence of steps.
Uh, one of the things that really interests me, and I'll go back to my grad school: I had a professor, Leon Osterweil, and I had like two classes with him, but it's probably the most important thing I learned in grad school.
He was having his grad students, this is again, you know, early 90s, he
was having them take, um, things that you did, like recipes or like your
exercise routine or whatever else, and try to write it as a program.
And he called it, at the time he was calling it process programming.
And it, it went off in a direction, you know, of course, everything
back then was all about, you know, uh, military and aerospace funding
for computer science and stuff.
And it kind of went off in that direction, formal languages and whatever.
But, you know, we, we do a lot of that ourselves.
You know, where we, we program ourselves to do some things.
And that's, think of an algorithm like that.
It's just, you've got a computer that does it and doesn't make mistakes.
It does exactly what it's supposed to do.
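The "algorithm as recipe" idea, and Osterweil's exercise of writing everyday processes as programs, might be sketched like this; the recipe and every step name here are invented for illustration.

```python
def brew_coffee(beans_g=20, water_ml=300):
    """An everyday routine written as an explicit sequence of steps --
    the 'process programming' idea: an algorithm is just a recipe."""
    steps = [
        f"grind {beans_g} g of beans",
        f"heat {water_ml} ml of water",
        "pour water over grounds",
        "wait 4 minutes",
        "press and serve",
    ]
    return steps

for step in brew_coffee():
    print(step)
```

The computer simply executes the steps, exactly and without mistakes, which is the contrast the speaker draws with the apparent randomness of probabilistic systems like LLMs.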
Uh, when you have something this large and you have probabilities and stuff
involved, there's a certain randomness to it, or it certainly appears that way.
You know, you ask your favorite, you know, AI chat: tell me a story about Paul Bunyan and Eeyore saving the Oakland A's, right?
And you'll get a whole bunch of great, a whole bunch of great stories from it.
Like reload, reload, reload.
You'll get tremendous, you know, tremendous variation of stories.
And they'll all be, I saw you smile.
They'll all be, uh, they'll all be delightful.
Right.
That's what it's really, that's what the AI is really, really
good at that we have right now.
A toy.
You're describing a toy.
Yeah.
I mean, the way I've been describing, I have a, I have a friend who, you
know, sort of tangentially involved in my business and has been pushing
me to, you know, have an AI story.
And, uh, the thing that I tell him about AI is it's, it's
like the, uh, the talking frog.
You know, the engineer and the talking frog joke.
An engineer is walking down a path one day and he sees a frog, and the frog says, hey, pick me up.
I'm a talking frog.
And so the engineer picks him up and the frog says, Hey, if you kiss me,
I'll turn into a beautiful princess and grant all your wishes for all your life.
You'll live a wonderful life.
Engineer takes the frog and puts them in his pocket.
And a couple of weeks later, the frog says, uh, Hey, remember I told you, if
you kiss me, I'll turn into a princess.
Grant all your wishes, your whole life.
You'll live a wonderful life.
You'll have nothing, you know, nothing to want for.
Why don't you kiss me?
And the engineer says, that's just a lot of problems.
But you know what I got right now is I got a talking frog
and that's pretty cool, right?
This is, this is AI right now.
AI is a talking frog.
We're looking for solutions for it.
Um, the, the big solutions are going to come from, you know,
people who want answers to things.
And answers that it can provide that, you know, provide some
meaning or make some sense.
And that's, that's a small subset of all problems that we have, you know, it's,
but now the concern I have is it's getting, it's seeping into browser results, search results.
Um, I don't like being controlled and told culturally, politically,
how to speak or what to think.
Um, it is, if you use ChatGPT a lot and you ask... Oh yeah.
So I had a project, I had a project recently.
It turned out to not be a successful project.
I hate it when I don't have a successful project, but.
One of the things we needed to do was generate pictures of people, um, that were variously affected by AI, their jobs were, their careers were. And so, you know, I had it generate some pictures for me in ChatGPT and DALL-E. DALL-E is ChatGPT's, uh, image model.
It gave me a bunch of white guys in an office. And so we looked at the results. We're like, okay, well, this isn't gonna play very well, you know, we gotta mix this up a little bit. So, um, I asked it to generate images for me for, you know, various ranges of how AI, call it a scale of one to ten, how AI is gonna affect your job. And then I went white male, white female, black male, black female, Hispanic male, Hispanic female, Asian male, Asian female, Indian male, Indian female, to get a good representation across each of those score ranges. When we got to unemployed,
It would not, it would not generate an unemployed black man for me.
It would do all the others.
It absolutely would not do that.
So I, I figure, okay, I've got to get around this cause I need this picture
because there are people we're going to tell that are, you know, you're not
going to be employable because of AI.
I don't think that's necessarily true.
I think, you know, again, this gets back to the whole, you
know, people want answers thing.
So we're giving them an answer, right?
But if we, if we give an answer that is you're going to be unemployed, right?
And we're going to depict a black male who's unemployed.
We can't get this picture from chat GPT.
And you know, the style of pictures it generates, right?
They're all kind of the same thing.
If we go and try to get this elsewhere, it's going to look stupid.
So what I did is I said, okay, give me a picture of a Nigerian man who's unemployed because of AI.
It was happy to do that.
The Nigerian man was in sort of, you know, kind of traditional African dress, and he was wearing a cook's hat, and honestly, he looked like Aunt Jemima in a food court line.
I can't use that, right?
But it can't figure out my intention.
When I asked the questions, my intention was to try to be inclusive, and it's treating me like, oh no, you just asked for something we can't give you.

You know, that's a... yeah, you're right.
That's a problem.
Let me ask you this about the Google Gemini thing.
You know, obviously a giant disaster, a giant woke disaster.
Everybody can see it.
You know, even people who are a little bit more sympathetic to, uh, that way of thinking are shaking their heads, because it's just so out in the open.
It's so on the mark.
You know, why did they do this?
Do you think that was really intentional?
Or do you think maybe Google was flexing — showing any state actor, Iran, Saudi Arabia: hey, we could take a thousand of your people, we could put them to training a model, and you could have a model that's right in line with whatever value system you have.
I never thought of it that way.
That's interesting.
Um, I just thought it was a GIGO issue, honestly.
And I was thrilled, because the stronger the ridiculousness of the outcome, the more people actually pay attention to it.
It's the subtleties, to me, that are deadly.
Like, if we're going to talk about, you know, race issues, for example: watch television sometime and notice, in the commercials for all the alarm companies — and I'm stealing this from Adam Carolla, he's right — you will find out that everyone who robs houses, everyone who's a criminal, is white.
Oh, you find some interesting things out for sure.
They definitely have some new role models cast.
Right, but this isn't even AI.
This is people actively doing this.
Um, every judge in a show — I mean, the vast majority of judges in programs seem to be black females, for some reason, which is very interesting.
So 13 percent of the population is African American, but yet somehow they're probably 50-something percent of the judges.
At least half of that — so 6 percent of the population — which seems very interesting.
And it's like, yes, you're pushing the narrative, but when you put this stuff in, well, then you're going to have a black —
Yeah, that's like — that should be really offensive.
That should be so ridiculously offensive that people don't want that, but it's not.
Well, that was — I mean, it took a constant outcry for Google to pull back Gemini, you know, over the founding fathers, and thank God.
I want these things to go to the extreme, because — I like to say normies, you know, normies don't notice, but they will notice when you drop the ball that hard.
And that kind of result, I feel, is positive, because it makes people not trust it.
Like, the first AI I was dealing with at all — and I barely used it — was Siri, from 2009, 2010, whatever it was.
It was like, oh wow, it's so cool.
You know, "where do you hide a body?"
When it came out, it was hilarious.
So that was a delight, but then it was like, "do such-and-such"... and over time I just type it in myself, and because I've gone through that, I don't really use it.
So I think that when AI fails that hard, it does plant the seed in people's heads.
You know, this is like Wikipedia: you can't really trust it.

Right.
It's there, it's a source, it's got its biases.
You probably know that it might be useful for some things, and it's not so useful for others.
You know, if you want a social-cultural lesson or, uh, you know, a great economics lesson, uh, listen to the AI.
Yeah.
Yeah.
I mean, it is where it is, but I don't think it's going away, and I do want to visit that with you, um, because I do think it is genuinely threatening jobs.
Um, for example, BuzzFeed reporters.

All right.
So you're being funny about this.
I mean, yeah.

No, no, no — but I'm not being.
I'll give you one that's not funny at all.
Um, first-year lawyers, because AI is very good for crunching and finding case law.
But it's also very bad at it.
That's the thing.
The hallucination problem is huge, you know, and the hallucinations —

Well, it's actually worse.
That's the weird thing: the hallucination problem is not a bug.
It's a feature.
It's like — so, you know, the example I gave you of Paul Bunyan: tell me a story about Paul Bunyan and Eeyore saving the Oakland A's from having to move to Las Vegas, right?
At that point you're asking it to just make stuff up — make stuff up and make me entertained.
And all these GPTs do a great job of it.
I actually run, you know, lots of these queries on a private GPT I have running on my laptop.
And whether I have it trained on specific documents — you know, which might be the Wikipedia article on the Oakland A's, A. A. Milne's Winnie-the-Pooh, stories of Paul Bunyan, whatever else — or whether I let it use its general knowledge... um, just to get into nerd detail here, this is the Mistral 7B Instruct model, which is very, very popular with, you know, sort of home enthusiasts.
It does a great job with the stories.
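To make that nerd detail concrete: instruct-tuned models like Mistral 7B Instruct expect the request wrapped in a specific chat template before generation. A minimal sketch of building such a prompt — the story request is just the example from above, and the local runtime that would actually consume this string (e.g. llama.cpp) is not shown:

```python
def mistral_instruct_prompt(user_message: str) -> str:
    """Wrap a user message in the Mistral-7B-Instruct chat template.

    The model is trained to read the text between [INST] and [/INST]
    as the instruction and to generate its reply after the closing tag.
    """
    return f"<s>[INST] {user_message.strip()} [/INST]"

prompt = mistral_instruct_prompt(
    "Tell me a story about Paul Bunyan and Eeyore saving the "
    "Oakland A's from having to move to Las Vegas."
)
```

Home-enthusiast frontends usually apply this template for you, but it is why the same model behaves very differently when the wrapper is missing.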
Like, they're funny.
You know, I could certainly see these systems being used.
You know, if you have kids, the best thing you can do for your kids is have books that they can read, and that you're going to read with them.
That's the best thing for their literacy, for their early development of intellect and things like that.
Um, even if you want them to be math and science people, just being able to read and communicate and share — it's just so important to their development.

Well, you know, my parents bought me this whole bookshelf of books I could read.
I think I learned to read at about age three, and by the time I was eight or nine I had read all of them and I was out of books.
And I'd never be out of books if somebody could show me, okay, here are some ways to mix these up.
And the GPT mixes it up, or I start to learn, hey, I can start to mix up my favorite characters.
I mean, that's what we do when we play with characters — whether we're running around or playing with, uh, action figures or whatever — we kind of mix these things up anyway.
Choose Your Own Adventure on steroids.
Yeah.
Yeah.
Which is cool.
By the way, that's threatening a children's author.

A children's author who probably died, what, 20 years ago and has 50 years of copyright left, right?
I mean, so what?
Well, there are modern children's authors.
My wife is a library director.

Yeah, no, I have a good friend who's written children's books and is, you know, selling them to schools and stuff like that, too.
Yeah, that's a tough gig.
But it's not making up new characters.
You know, it's giving you variation with characters that it knows about.
And I think that's kind of a new, powerful thing.

It can be.
And it's interesting, though, because everything could stop at a certain point if there's nothing new being created or imagined to feed into it to learn from.
But back to the law: if it has, again, all the information contained in, you know, a particular database catalog or whatever, and that's what it's being fed, it can do a pretty good job citing every case that's existed.
Now, there are, you know, potential issues, and you might follow up on it, but the amount of labor that can be done by just brute-force search, over a person —
Oh yeah.
You're certainly not combing through law journals and whatever else.
But you know, we've had systems for lawyers and stuff in place for decades, where they can electronically look at, you know, court cases, they can find a person's criminal history — all sorts of things, all sorts of research that they need to do.
But the precedent aspect, I think, is where it gets really significant.
Because, you know, I have a case that is this and this and this — and this is where the hallucination actually can help — because you can say, find me a precedent somewhere that is on point or close to being on point, and it spits it out after going through every case in the past 200 years and all 50 states, just limited to the US, right?
You know, this is another thing where your haystack — with a lot of these AI problems, your haystack can be so big that the results it comes up with don't reach in and pick out the needle.
They kind of average out over all the needles in the haystack.
And so, you know, a smaller LLM that's focused on, say, case law from Orange County, California, or whatever — if you're looking for case law in Orange County, California, that LLM is probably gonna do way better for you than a, you know, whole US-wide law model.
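The needle-in-a-haystack point can be illustrated with a toy retrieval sketch: shrink the corpus to the relevant jurisdiction first, then rank within it. The cases, jurisdiction names, and word-overlap scoring below are all invented for illustration; real legal search uses far more sophisticated indexing:

```python
# Toy corpus; every entry here is made up for the example.
cases = [
    {"jurisdiction": "Orange County, CA", "text": "easement dispute over a shared driveway"},
    {"jurisdiction": "New Mexico", "text": "water rights easement appeal ruling"},
    {"jurisdiction": "Orange County, CA", "text": "zoning variance for a commercial lot"},
]

def search(query: str, corpus: list[dict]) -> list[dict]:
    """Rank documents by simple word overlap with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(c["text"].lower().split())), c) for c in corpus]
    return [c for score, c in sorted(scored, key=lambda sc: -sc[0]) if score > 0]

# Narrow the haystack to the local jurisdiction before searching.
local = [c for c in cases if c["jurisdiction"] == "Orange County, CA"]
hits = search("easement driveway dispute", local)
print(hits[0]["text"])  # easement dispute over a shared driveway
```

Filtering before ranking is the same move as using a smaller, jurisdiction-focused model: fewer near-matches to average over, so the right needle surfaces.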
Especially because there is no precedent — there is no case here.
So now we've got to look: has it gone anywhere?
Okay, I can quote New Mexico — there was this ruling and it went up to the Supreme Court — as an example.
So I'm just saying that that is very, very powerful, and it's eliminating a lot of hours and time of research.
So that kind of thing, I think, can be a threat, but it can also be leveraged and useful to people, too.
At the same time, there's good and bad.
Um, there's an odd one I've thought about recently: OnlyFans.
Sorry, but, uh, it's kind of like, you know, Photoshopping or whatever in reverse.
But there's going to be a point with, uh, pictures, it's like — are these people real?

But I mean, if she has six fingers, are you going to be that interested?

Well, I don't know if they'll always have six fingers.
So, you know, you never know — maybe that may be a thing.
It could be your kink.
But regardless, it is already a problem.
We have the whole Kate and William question, um, recently: is this the princess in the picture with her children, or was that AI?
Look at the weird finger.

Or was it even just a horrible Photoshop?
I mean, we've had that forever too.
And that could explain it as well.

It probably was.
It probably was.
Uh, but then you have deepfakes, and that starts to become more and more of an issue.
So a lot of these things, I think, are actually issues.
Again, it can be beneficial, it can be bad.
Like, um, we're working on a project right now — an audio drama, uh, with Lee Harvey Oswald.
I don't know if you knew this: he's dead.
He died in 1963.

Well — and you know what?
I can emulate his voice.
And it's kind of cool having Lee Harvey Oswald speaking lines about his life.
There's no comedian that does a good Lee Harvey Oswald, though, is there?

He's an odd voice.
I don't know if you've ever heard it.
It's a stilted pattern, and it's that New Orleans kind of accent.

Yeah, a little Creole in it.
It's hard to explain.
New Orleans — a very dynamic city, just with all sorts of inputs and outputs.
I've been pushing on the job things.
What do you see?
So you're saying no jobs —

I think if you're truly threatened by AI — and there are different kinds of threats, right?
There's the threat that AI actually will replace your job.
I don't think that's going to affect very many people.
Then there's the threat of AI replacing your job, and this is usually some Dilbert boss, you know, who comes in with his pointy hair and says, hey, AI is going to replace your job, and we're negotiating on price right now.
I mean, that's the undertone, right?
Um, this is something I think every software developer has heard in the last year, and, you know, I was like, okay, wait a minute: I'm doing a great job for you, we're getting stuff done faster than we ever have, um, you're making tons of money, and you're coming to me with this garbage right now?
You know, please.
Okay, go hire the AI and let me take a two-month vacation.
You go hire the AI, and then I'm gonna charge you more when I get back, just for the insult.
That seems to be the place where a lot of software developers are right now, and a lot of companies that employ software developers.
It's very similar to, you know, outsourcing efforts.
I'm sure you can outsource a lot of coding to India, and, um, you get a lot of garbage back.
You know, if you need high-quality coding, you're not doing that.
Um, the same kind of thing applies with AI, but we have to go through the motions of, you know, negotiating over it.
Um, you had the, uh, you know, the actor strike, which was primarily based on that.

Which is very legit, by the way.
Um, like I was just talking about voices, right?
Stephen Fry found out that his voice was being used as a narrator.

Oh, sure.
That's terrible.
That's absolutely terrible.

Agreed.
Agreed.
That's the scary thing — that's stealing his identity.
But then AI also has the capability of creating a unique voice that is now a narrator, and that puts somebody out of work genuinely — that voice is now serving a purpose or a role.
Uh, now, it's not perfect yet, and a lot of this is the "yet," we understand that.
No, it's not ready.
Kind of like: it's six fingers now, but is it going to be six fingers next year?
We don't know.

Well, I mean, unless you introduce real physical models into it, or some sort of ability to detect all of these potential errors that come out and then filter those out and try again — which is expensive.
You know, there's the at-what-cost question with a lot of this stuff too, right?
I mean, we can deliver this system that's, say, 80 percent good at this low cost, but to get to 90 percent good, we're going to triple the cost.
Well, that's like — I know that, um, graphics cards have gone from Bitcoin mining...
That's probably how you know half of it's a scam, right?

In a sense.
But yeah, I mean, the graphics cards — you know, there's been a run on that market.
NVIDIA is one of the happiest companies in the world.
And their CEO is getting out and saying we'll have AGI — artificial general intelligence — within five years.
He's pumping his cards.
He's creating a market for his cards.
Everybody's excited about it.
He'll sell a lot of graphics cards.
That's great.
But using the methods we're using right now, and anything that's in the pipeline, we will not have AI with a consciousness, or that's self-directed, or anything.
It just doesn't do that.
There's gotta be a fundamental change in how we do these calculations for that to occur.
And we don't have that even in the pipeline.
Well, no, because you would depend on it developing itself to the point to get that, and it's not able to.
But I argue again, that's because of garbage in, garbage out — because we're dealing with the programmers and what has been put in.
So it probably can't get there on this path.
It would almost have to be a burn-it-down, as you said, and retrain.
Well, it's burn it down and find new computational methods.
These computational methods will not do it.
You could scale it a thousand times, ten thousand times — LLMs will not gain consciousness.
Or, you know, we have these old tests like the Turing test, and everybody says, oh, it probably satisfies the Turing test.
Yeah, it's wordsmithing.
It's wordsmithing its way to doing it.
You've seen people just talk word salad.
I think the best example I have — I mean, I love the, uh, SpaceX broadcasts, but they put the cute blonde out there to announce the SpaceX things, and she's very, very good at stringing sentences together in the middle of things that absolutely mean nothing.
They're complete... I mean —
But that's corporate mottos.
I mean, that's mission statements.
We've had it for 30 years, 40 years.

I know.
It's all cute, but you know, at the end of the day, she's filling time.
There's no silence in it.
She's filling time with words that are just spewing out of her mouth, kind of randomly, that make sense.
The worst I've seen is Quora — it's turning into just a cesspool.
If you look at the "why do you ask if such-and-such is such-and-such" — "People are astounded to consider that."
It's this, um, purple-prose style, and that, to me, is an AI marker: a lot of almost purple prose, overly florid, not direct.
Look, we have a vice president who speaks like an LLM.
Okay.
I mean, I think that's the ultimate thing of the Turing test, right?
You can't tell the human from the computer.
But then you look at it and you're like, well, wait a minute, none of it makes sense.
That's not really intelligence there.
It's, you know, it's wordsmithing.
On that note — I see it coming in and affecting the arts, and I'm going to keep going back to that.
Uh, there's a gentleman named Rick Beato on YouTube, huge channel, and he's brought up some really solid points.
And one of his is the overuse of Auto-Tune.

Oh yeah.
Horrible.

Right, but it has actually turned human beings into sounding like robots.
So I can take a song and say, sing this song, and it could be AI-generated — you won't be able to tell if it's an actual human being or if it's AI, partly because of Auto-Tune and things that we're doing to ourselves, like quantizing the drumbeat.
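"Quantizing," in this context, just means snapping each recorded hit onto a fixed time grid, erasing the human push-and-pull. A rough sketch — the tempo, grid resolution, and onset times are made-up example values:

```python
def quantize(onsets_sec: list[float], bpm: float = 120, subdivision: int = 4) -> list[float]:
    """Snap note-onset times (in seconds) to the nearest grid point.

    subdivision=4 gives a sixteenth-note grid: four grid steps
    per quarter-note beat at the given tempo.
    """
    beat = 60.0 / bpm          # seconds per quarter note
    step = beat / subdivision  # seconds per grid step
    return [round(t / step) * step for t in onsets_sec]

# A slightly "pushed" human performance...
played = [0.02, 0.49, 1.03, 1.52]
# ...lands dead on the machine grid after quantizing.
print(quantize(played))  # [0.0, 0.5, 1.0, 1.5]
```

DAWs do essentially this (with more options, like partial strength), which is why a quantized human performance and a programmed one become hard to tell apart.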
Yeah, but you know, okay.
So I'm a budding musician, and I have of course messed with quantization and everything else.
And I know touring musicians and I know recording musicians and stuff, and I know at ground level what this problem looks like, and what it sounds like.
And what it sounds like is — it's unnatural.
Like, for instance, I have, um — what's-it, drum-whatever-pro — I have the package that does all the drum sounds.
I play guitar, and I can play along with it and stuff like that.
I have a friend who's a drummer in a blues band.
Occasionally I'll send him: hey, I'm kind of trying to do something like this, could you lay down a drum track for me?
And he'll lay down the drum track, and that is magical to play with compared to the package.
I mean, if you listen to it at level three on your headphones, you probably wouldn't tell the difference, but then when you try to play with it, you're like, oh yeah, he's kind of pushing there, he's kind of doing this, right?

But the point is that, for example, EDM can lean on this very heavily — and wow, that's a whole genre, and it's becoming more and more popular.
Yes, you cannot emulate a live show, but meanwhile, you know, the livelihoods of a lot of live artists are getting down to busking level now.

I mean, it's a very difficult problem.

You know, we've currently got Sweden pumping out all of pop music.
I am saying that pop music could be AI-driven, or at least a major quantity of it taken over.
And that's work — that's, you know, not only the musicians who are doing it, but also the producers, the people who are recording, um, the manufacturing of instruments.
So you may not see it immediately, but that doesn't mean it doesn't have a farther-reaching effect.
I mean, I guess my theory on "can AI replace people" is: if you get replaced, you probably deserve it, and it's up to you.
It's a cruel thing to say, but it just means that you're not producing value above what, you know, synthetic, automation-driven value can be produced.
And that's a tough place to be.
And I don't really think artists are really in that spot.
Um —
I like industrial.
It was definitely a serious mix of real and other.
Um, New Order, "Blue Monday" — that's a real bass being played against drums and, sort of, samples, you know.
It's just another form of music.
I'm not gonna get in and, you know, judge it.
Let's talk about EDM today, and EDM performances today.
There's nobody strumming a bass, you know — that was done in a studio.
It's all sample-driven.
Um, and okay, I mean, that makes it very, very easy to automate.
You know, do you have AI blues bands out there?
Not yet.
Um, but I hope people still enjoy them the same way — the same way as those voiceover artists I'm talking about.
You know, they have value, they have a genuine artistic value and training, you know, years to develop a good inflection and tone for a type of commercial.
You know, one of the things I'm interested in — I actually mastered an audiobook, and, uh, because we wanted it on Amazon, we had to use a live reader.
He was not very good at reading.
So what I did with him is I had him speak one sentence at a time, and every sentence came out, you know, high energy.
And then, as an audio engineer, I put those together.
So, as you're listening to him read a paragraph, there's no die-off of his energy levels as the paragraph goes on.
It's very, very energetic.
It sounded great.
Right.
It sounded like — okay, maybe you could do this with a, uh, computerized voice or whatever else, and, you know, not have those things that make listening to a human sound sort of terrible.
You could.
But — well, I did this with a human, and there were still things where, you know, he could just go into a comedian's voice and do an impression.
And it was really, really hilarious.
And it was something that only a human voice could do.
You know, if you were programming that, it would take you — I mean, it takes longer than just having somebody read it.
I mean, the engineer would spend more money on his skills than, you know, having a good reader do the reading.
And I think the production cost is probably what's going to save a lot of these artists.
It's not just automatic — you want to do different, varied things the program has never thought of.
You know, a human voice can do that; the computer voice, not so much.
It does what it's programmed to do.
True.
Or you have the flip side, where it says: do we really need that comic impersonation?
Nah.

Oh, it made the book.
It made the book, in fact.
I'm sure.
I've done a book narration.
It's a nightmare.
And I know exactly that, because one, you're sitting there reading and it's like — da — oh no, it's not a question, I'm not ending on an up, I need to end on a down.
And it's so tedious, because I am really, really picky.
Yeah.
And I would read the sentence, and then I'd back all the way up, because I want the inflection right.
You know, it's like, if I end on an uptick, that sounds like a question.
Hey, if you end on an up, it sounds like a Kardashian, right?
I mean, right?
Or if you're like, um...
I don't know, what do you think?
No, that doesn't work.
And, uh, it's difficult.
I'm obviously not —

You're not really exaggerating.
We had two editors listening as we were recording, and a sentence would come out and somebody'd say, no, do it again, you know.
And as we're putting this all together — I mean, it's a tremendous production, especially when you're working with somebody who's not a great reader.
Now, I find that author-read translates better with nonfiction in general, because you're just hearing a voice that, you know, whatever.
But like, um, you know, Harlan Coben — I don't want to beat the guy up, sorry, Harlan, if you watch this, I do want to interview you at some point — but he read his own, and he had had a great audiobook narrator who had done all the characters, had established it.
And to me, that's part of the magic of an audiobook: when you have a narrator and you have an author, they actually influence each other, and the product sometimes is even better.
It's the same way that you could be a great director, or you could be a good writer, but until the actor gets on that movie — that actor brings their own.
I'm going to change my opinion here, and I'm going to say my opinion applies to nonfiction.
How about that?

Okay.
Yeah.
And that's kind of where I feel it: nonfiction, yes.
But, um, fiction — having actors really is of use; a narrator is good unless the author is good.
Some authors are good readers.
Some authors are good talkers and not good readers.
I've been, you know — even as he was canceled — I've been a fan of Scott Adams.
Dilbert, you know, I mean, referenced from earlier with the Dilbert boss, but the Dilbert creator.
And, you know, a year ago he was writing his book and he had to do the audiobook, and he'd had, you know, like two years when he couldn't talk because of spasmodic dysphonia.
Couldn't talk.
And he's also dyslexic, so reading this book aloud was absolutely a no-go.
And so he was looking — this is a year ago — at AI systems that could capture his voice and then read the book for him.
Not workable, not feasible.
Um, and then he finally got somebody to read his book for him.
Uh, he's read previous books, but this last book he did not read his own.
He just couldn't do it.
There's a lot to mix.
Oh yeah, I agree.
It's a lot of work.
And then if you have a professional — you know, that to me, I think, is the answer: you find somebody who tonally is somewhat similar.
Now, where it's a big problem, especially in nonfiction, is when you have somebody who's a YouTuber or a podcaster and then they have a narrator.
It's like —

That guy should be able to read his book.
Yeah.
He talks for a living.
What's going on here?
Or an actor — if an actor is writing an autobiography and they have somebody else do it, you're just like: no, no, no.
Al Pacino — you know, I'm not saying he did it, but it's just a good example of somebody who has such a distinctive voice.

Right.
If Al Pacino doesn't read his book, then Jay Mohr better read his book.

That would be good.
He does a great Al Pacino.

Yes, he does.
Yes, he does.
And he's done it to Al Pacino, apparently.

Right, right.
At the pier with the seagulls, you know, according to the story.
So on that note, let's go ahead and wrap up.
What is the one question that I should have asked you, but did not?

What are the camps of the AI people right now?
I'll answer it for you really quickly.
There are the doomers: everything's going to be bad, AI is going to cause, you know, global catastrophe.
There are the sunshine pumpers: these are the people that say, you know, hey, AI is going to do all these wonderful things for us.
They tend not to ask at what cost.
There are the aggressive-positivity folks: AI is going to take away your job, so you better figure out what to do.
All these people are full of beans.
They have no basis for what they're saying.
The sunshine pumpers a little less so — I mean, they're just naturally, you know, like Tigger in Winnie the Pooh, a little bit too enthusiastic.
You gotta ask yourself: what's it good for?
And see it at ground level.
When you see that it works, use it for that.
That's what you should use it for.
When people are telling you, oh, it's going to do this and it's going to do that — you know, be skeptical.
All right.
Sounds perfect.
Brad, thank you so much for this wide-ranging, sprawling —

But that's the best, right?
I mean, that's what you should expect from AI: just wide-ranging meandering.
And Eric, I really appreciate your time and, uh, appreciate the interview.