Harvard Professor Explains Algorithms in 5 Levels of Difficulty | WIRED
Summary
TL;DR: The video is an engaging discussion of the importance and prevalence of algorithms in our daily lives, led by David J. Malan, a professor of computer science at Harvard University. It covers the concept of algorithms as step-by-step problem-solving instructions, essential in both the physical and virtual worlds, and explores various levels of algorithmic complexity, from simple tasks like making a sandwich to searching for a contact in a phone book. It also delves into the role of algorithms in computer science, data science, and artificial intelligence, including their use in machine learning, recommender systems, and large language models. The discussion highlights the impact of algorithms on personalization in social media, the challenges of understanding and controlling complex AI systems, and the future of algorithm development, emphasizing the continuous evolution of algorithms in our lives, their potential benefits, and the need for critical thinking about their application and consequences.
Takeaways
- Algorithms are fundamental to solving problems and are prevalent in both the physical and virtual worlds.
- A computer's CPU is its 'brain', executing instructions such as math operations and directional movements.
- Memory is crucial to a computer's operation: RAM holds programs and data while in use, and a hard drive or solid-state drive stores them permanently.
- Algorithms can be as simple as a bedtime routine or making a sandwich, but they require precise, step-by-step instructions.
- Search engines like Google use algorithms to organize and retrieve information efficiently from vast collections of data.
- Sorting algorithms such as bubble sort are foundational in computer science; bubble sort works by repeatedly fixing small, local problems (swapping adjacent out-of-order items).
- Machine learning and AI are intertwined, with algorithms used to improve and personalize content in many applications.
- Algorithms appear throughout daily life, from social media feeds to train routing, improving efficiency and personalization.
- Data scientists and AI researchers are still exploring why certain algorithms, like deep neural networks, work as well as they do.
- In industry, the work is not just creating algorithms but integrating them effectively into systems and processes.
- As algorithms become more integrated into daily life, it is important to consider their implications, both positive and negative, for society.
Q & A
What is an algorithm and why are they important?
-An algorithm is a list of step-by-step instructions for solving a problem or performing a task. They are important because they are ubiquitous, representing opportunities to solve problems not only in the physical world but also in the virtual world, and are fundamental to how we interact with technology in our daily lives.
How would you define a computer?
-A computer is an electronic device, typically rectangular in shape, that allows for input through typing and other forms of interaction. It contains a CPU (central processing unit) which acts as its 'brain' and is capable of executing instructions, as well as memory (RAM) for temporary data storage and a hard drive or solid state drive for long-term data storage.
What is the role of memory in a computer?
-Memory, or RAM (Random Access Memory), is a type of hardware inside a computer where active data is stored while the computer is running programs or games. It allows for quick access to this data. Additionally, computers have a hard drive or solid state drive for permanent data storage, which retains information even when the power is off.
How does the concept of precision relate to algorithms?
-Precision is critical in algorithms because it ensures that the correct steps are taken to achieve the desired outcome. An imprecise algorithm can lead to incorrect results or inefficiencies. For instance, when searching for information on the internet, precise instructions (search terms) are necessary to find the correct information.
What is a recursive algorithm and how does it work?
-A recursive algorithm is a sophisticated type of algorithm that calls itself to solve the same problem, but in smaller parts. It works by repeatedly breaking down a problem into smaller and smaller sub-problems until it becomes simple enough to solve directly. This method is a key concept in divide and conquer strategies.
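To make the idea concrete, here is a minimal recursive binary search in Python. This is an illustrative sketch, not code from the video; the function name `binary_search` is invented for this example:

```python
def binary_search(items, target):
    """Recursively search a sorted list by halving the problem each call.

    Returns True if target is present, False otherwise.
    """
    if not items:
        return False  # base case: nothing left to search
    mid = len(items) // 2
    if items[mid] == target:
        return True
    if target < items[mid]:
        return binary_search(items[:mid], target)   # recurse on the left half
    return binary_search(items[mid + 1:], target)   # recurse on the right half
```

Each call discards half of the remaining list, so the problem shrinks until it hits the trivial base case — the divide-and-conquer pattern described above.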
How do sorting algorithms like bubble sort operate?
-Bubble sort is a simple sorting algorithm that repeatedly steps through the list to be sorted, compares each pair of adjacent items, and swaps them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.
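The description above maps directly to code. Here is a hedged Python sketch of bubble sort (not code from the video), including the early exit once a full pass makes no swaps:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs.

    Stops as soon as a full pass makes no swaps, which means the list is sorted.
    """
    n = len(items)
    while True:
        swapped = False
        for i in range(n - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        n -= 1  # the largest remaining item has bubbled to the end
        if not swapped:
            return items
```

Note how each pass only fixes local problems (one adjacent pair at a time), yet the largest unsorted value "bubbles" to its final position every pass.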
What is the significance of machine learning in today's world?
-Machine learning is a subset of artificial intelligence that provides systems the ability to learn and improve from experience without being explicitly programmed. It is significant because it is used in recommender systems, search engines, and various applications that require making predictions or decisions based on data.
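As a toy illustration of the recommender systems mentioned above (emphatically not any real platform's algorithm), here is a crude collaborative filter in Python; the data shape and function name are invented for this sketch:

```python
def recommend(likes, user):
    """Toy collaborative filter.

    likes: dict mapping each user to the set of items they liked.
    Scores every item `user` hasn't seen by how many 'similar' users
    (users sharing at least one liked item) liked it, and returns the
    unseen items ordered from highest score to lowest.
    """
    seen = likes[user]
    similar = [u for u, items in likes.items() if u != user and items & seen]
    scores = {}
    for u in similar:
        for item in likes[u] - seen:           # only items the user hasn't seen
            scores[item] = scores.get(item, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)
```

Real systems replace the overlap count with learned models over far richer signals (watch time, clicks, social graph), but the shape of the problem — predict what a user will engage with from behavioral data — is the same.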
How do large language models (LLMs) like ChatGPT work?
-Large language models like ChatGPT are trained on a large corpus of text to predict the next word or token in a sequence. They use machine learning to learn patterns in language and generate human-like text based on the input provided to them.
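Real LLMs use neural networks trained on vast corpora, but the core idea of "predict the next word from what came before" can be shown with a toy bigram model in Python. This is a minimal analogy, not how ChatGPT actually works:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]
```

A production model conditions on a long context window with billions of learned parameters rather than a single previous word, but both reduce generation to repeated next-token prediction.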
What is the role of data scientists in developing algorithms?
-Data scientists play a crucial role in developing and deploying algorithms. They not only understand the algorithm and its statistical performance but also handle software engineering aspects, such as systems integration, ensuring reliable input, and useful output. They also consider organizational integration, which involves how the algorithm fits into the workflow of a company or institution.
Why is it important to understand the fundamentals of algorithms even when high-level tools are available?
-Understanding the fundamentals of algorithms is important because it provides a deeper comprehension of how technology works, which can be beneficial for troubleshooting, optimizing performance, and innovating new solutions. Even though high-level tools abstract away the complexity, knowing the basics can help in leveraging these tools more effectively and understanding their limitations.
How do you perceive the future of algorithms and their role in our daily lives?
-The future of algorithms is likely to see even more integration into our daily lives, from routing trains to personalizing content on our phones. Algorithms will continue to make our lives more efficient in many ways, but it's important to be aware of their influence and to consider the ethical implications of their use.
What are the ethical considerations when developing and using algorithms?
-Ethical considerations include ensuring transparency in how algorithms make decisions, especially when they impact people's lives significantly. There are concerns about privacy, as algorithms often rely on large amounts of personal data. Additionally, there's the risk of bias in algorithms if they are trained on biased data, which can lead to unfair outcomes.
Outlines
Introduction to Algorithms and Computer Science
David J. Malan, a Harvard University professor, introduces the concept of algorithms as fundamental to solving problems in both the physical and virtual worlds. He explains that a computer's CPU is its 'brain,' executing instructions, and that memory, or RAM, is used for temporary storage of programs and games. The hard drive or solid state drive is for permanent data storage. Malan also touches on the importance of precision in algorithms, using the example of making a peanut butter sandwich to illustrate step-by-step instructions.
Searching Algorithms and Efficiency
The discussion moves to searching algorithms, emphasizing the importance of efficiency in finding information. An example is given about finding a contact in an alphabetized list, comparing the efficiency of linear search to more advanced methods like binary search, which uses a divide and conquer strategy. The concept of a loop in programming is introduced as a technique to repeat tasks, and the idea of a recursive algorithm is briefly mentioned as a more sophisticated type of algorithm.
Recursive Algorithms and Problem-Solving
Patricia, an NYU senior studying computer science and data science, explains algorithms as a systematic way of solving problems. The conversation covers different types of sorting algorithms, such as bubble sort, which addresses local issues within a data set. The concept of divide and conquer is further explored, and recursive algorithms are introduced as algorithms that call themselves to solve smaller instances of the same problem. Patricia also discusses the role of algorithms in content recommendation systems like those used by social media platforms.
Machine Learning and Algorithmic Research
A PhD student discusses the process of researching and inventing algorithms, focusing on identifying inefficiencies and making connections. Machine learning is highlighted as a field where algorithms learn from data, with examples given such as Google's search algorithms and recommender systems. The student also touches on concerns related to the application of machine learning, like deep fakes, and the importance of data in training algorithms. The conversation concludes with thoughts on the future of algorithms and their increasing presence in everyday life.
The Role of Algorithms in Data Science and AI
Chris Wiggins, an associate professor and chief data scientist at the New York Times, discusses the role of algorithms in data science. He explains that algorithms are crucial for optimizing models and creating data products. The conversation covers the distinction between data scientists and computer scientists, the importance of software engineering in deploying algorithms, and the impact of AI and machine learning on various industries. Wiggins also addresses the rise of AI-based startups and the relationship between AI and data science.
The Impact of Large Language Models
The discussion focuses on large language models (LLMs) and their impact on the perception of artificial intelligence. It's noted that while LLMs have changed public perception, particularly after the release of advanced chatbots, the underlying technology has been in development for some time. The conversation explores the concept of generative AI and the use of machine learning to create models that can perform tasks like sorting data or generating text. There's also a debate on whether the rise of such advanced tools diminishes the importance of understanding fundamental algorithms, with the argument that a high-level understanding can be sufficient for practical use without delving into the intricacies of the algorithms themselves.
The Spectrum of Algorithms and Their Future
David J. Malan concludes the discussion by emphasizing the spectrum of algorithms, from basic to advanced, and encouraging continuous learning. He suggests that as one studies and understands algorithms, they become more accessible and can be applied to a wide range of problems. The potential for both good and bad outcomes with technology is acknowledged, reflecting on how algorithms, as part of technology, are neither inherently positive nor negative but have the power to influence outcomes based on their application.
Keywords
Algorithm
Computer
CPU (Central Processing Unit)
Memory
Divide and Conquer
Recursive Algorithm
Machine Learning
Data Science
Neural Networks
Large Language Models (LLMs)
Optimization Algorithm
Highlights
David J. Malan, a professor of computer science at Harvard University, explains algorithms across five levels of increasing difficulty.
Algorithms are ubiquitous, important in both the physical and virtual worlds, and represent opportunities to solve problems.
A computer is defined as an electronic device with a CPU (central processing unit) and memory for processing instructions and storing data.
RAM (Random Access Memory), which holds programs and data while they are in use, is distinguished from a hard drive or solid-state drive, which stores data permanently.
Algorithms are described as step-by-step instructions for solving problems, similar to a bedtime routine or making a sandwich.
Precision in algorithms is crucial, as demonstrated by the sandwich-making example where imprecision led to unnecessary steps.
The concept of a loop in programming is introduced as a technique to repeat actions, making processes more efficient.
Divide and conquer is a strategy for solving problems by breaking them down into smaller, more manageable parts.
Recursive algorithms are a type of algorithm that use themselves to solve smaller instances of the same problem.
Sorting algorithms, such as bubble sort, focus on local problems and make incremental improvements to achieve a desired outcome.
Social media platforms use algorithms to personalize content and recommendations, aiming to increase user engagement.
Machine learning algorithms are used in recommender systems to predict and curate content based on user behavior and preferences.
The line between classical algorithms and AI is blurred, with AI often involving learning from data to refine strategies and predictions.
The future of algorithms is expected to involve deeper integration into everyday life, improving efficiency and personalization.
Data scientists and AI researchers are still trying to understand why certain complex models, like deep neural networks, work so well.
The role of algorithms in industry often revolves around creating data products that require software engineering and systems integration.
Despite the rise of AI and large language models, understanding the fundamentals of algorithms remains important for innovation and advancement in the field.
The impact of algorithms on society raises questions about control, transparency, and the ethical use of technology.
For those interested in computer science and programming, the evolving landscape of AI presents opportunities to contribute to the development and application of algorithms.
Transcripts
- Hello world.
My name is David J. Malan
and I'm a professor of computer science
at Harvard University.
Today, I've been asked to explain algorithms
in five levels of increasing difficulty.
Algorithms are important
because they really are everywhere,
not only in the physical world,
but certainly in the virtual world as well.
And in fact, what excites me about algorithms
is that they really represent an opportunity
to solve problems.
And I dare say, no matter what you do in life,
all of us have problems to solve.
So, I'm a computer science professor,
so I spend a lot of time with computers.
How would you define a computer for them?
- Well, a computer is electronic,
like a phone but it's a rectangle,
and you can type like tick, tick, tick.
And you work on it.
- Nice. Do you know any of the parts
that are inside of a computer?
- No.
- Can I explain a couple of them to you?
- Yeah.
- So, inside of every computer is some kind of brain
and the technical term for that is CPU,
or central processing unit.
And those are the pieces of hardware
that know how to respond to those instructions.
Like moving up or down, or left or right,
knows how to do math like addition and subtraction.
And then there's at least one other type of
hardware inside of a computer called memory
or RAM, if you've heard of this?
- I know of memory because you have to memorize stuff.
- Yeah, exactly.
And computers have even different types of memory.
They have what's called RAM, random access memory,
which is where your games, where your programs
are stored while they're being used.
But then it also has a hard drive,
or a solid state drive, which is where your data,
your high scores, your documents,
once you start writing essays and stories in the future.
- It stays there.
- Stays permanently.
So, even if the power goes out,
the computer can still remember that information.
- It's still there because
the computer can't just like delete all of the words itself.
- Hopefully not.
- Because your fingers could only do that.
Like you have to use your finger to delete
all of the stuff. - Exactly.
- You have to write.
- Yeah, have you heard of an algorithm before?
- Yes. Algorithm is a list of instructions to tell people
what to do or like a robot what to do.
- Yeah, exactly.
It's just step by step instructions for doing something,
for solving a problem, for instance.
- Yeah, so like if you have a bedtime routine,
then at first you say, "I get dressed, I brush my teeth,
I read a little story, and then I go to bed."
- All right.
Well how about another algorithm?
Like what do you tend to eat for lunch?
Any types of sandwiches you like?
- I eat peanut butter.
- Let me get some supplies from the cupboard here.
So, should we make an algorithm together?
- Yeah.
- Why don't we do it this way?
Why don't we pretend like I'm a computer
or maybe I'm a robot, so I only understand your instructions
and so I want you to feed me, no pun intended, an algorithm.
So, step-by-step instructions for solving this problem.
But remember, algorithms, you have to be precise,
you have to give...
- The right instructions.
- [David] The right instructions.
Just do it for me. So, step one was what?
- Open the bag.
- [David] Okay. Opening the bag of bread.
- Stop. - [David] Now what?
- Grab the bread and put it on the plate.
- [David] Grab the bread and put it on the plate.
- Take all the bread back and put it back in there.
[David laughing]
- So, that's like an undo command.
- Yeah.
- Little control Z? Okay.
- Take one bread and put it on the plate.
- Okay.
- Take the lid off the peanut butter.
- [David] Okay, take the lid off the peanut butter.
- Put the lid down.
- [David] Okay. - Take the knife.
- [David] Take the knife.
- [Addison] Put the blade inside the peanut butter
and spread the peanut butter on the bread.
- I'm going to take out some peanut butter
and I'm going to spread the peanut butter on the bread.
- I put a lot of peanut butter on
because I love peanut butter.
- Oh, apparently. I thought I was messing with you here...
- No, no it's fine.
But I think you're apparently happy with this.
- [Addison] Put the knife down,
and then grab one bread and put it on top
of the second bread, sideways.
- Sideways.
- Like put it flat on.
- Oh, flat ways, okay.
- [Addison] And now, done. You're done with your sandwich.
- Should we take a delicious bite?
- Yep. Let's take a bite.
- [David] Okay, here we go.
What would the next step be here?
- Clean all this mess up.
[David laughing]
- Clean all this mess up, right.
We made an algorithm, step by step instructions
for solving some problem.
And if you think about now,
how we made peanut butter and jelly sandwiches,
sometimes we were imprecise and you didn't give me
quite enough information to do the algorithm correctly,
and that's why I took out so much bread.
Precision, being very, very correct with your instructions
is so important in the real world
because for instance, when you're using the worldwide web
and you're searching for something on Google or Bing...
- You want to do the right thing.
- [David] Exactly.
- So, like if you type in just Google,
then you won't find the answer to your question.
- Pretty much everything we do in life is an algorithm,
even if we don't use that fancy word to describe it.
Because you and I are sort of following instructions
either that we came up with ourselves
or maybe our parents told us how to do these things.
And so, those are just algorithms.
But when you start using algorithms in computers,
that's when you start writing code.
[upbeat music]
What do you know about algorithms?
- Nothing really, at all honestly.
I think it's just probably a way to store information
in computers.
- And I dare say, even though you might not have
put this word on it, odds are you executed as a human,
multiple algorithms today even before you came here today.
Like what were a few things that you did?
- I got ready.
- Okay. And get ready. What does that mean?
- Brushing my teeth, brushing my hair.
- [David] Okay.
- Getting dressed.
- Okay, so all of those, frankly, if we really
dove more deeply, could be broken down into
step-by-step instructions.
And presumably your mom, your dad, someone in the past
sort of programmed you as a human to know what to do.
And then after that, as a smart human,
you can sort of take it from there
and you don't need their help anymore.
But that's kind of what we're doing
when we program computers.
Something maybe even more familiar nowadays,
like odds are you have a cell phone.
Your contacts or your address book.
But let me ask you why that is.
Like why does Apple or Google or anyone else
bother alphabetizing your contacts?
- I just assumed it would be easier to navigate.
- What if your friend happened to be at the very bottom
of this randomly organized list?
Why is that a problem? Like he or she's still there.
- I guess it would take a while to get to
while you're scrolling.
- That, in of itself, is kind of a problem
or it's an inefficient solution to the problem.
So, it turns out that back in my day,
before there were cell phones, everyone's numbers
from my schools were literally printed in a book,
and everyone in my town and my city, my state
was printed in an actual phone book.
Even if you've never seen this technology before,
how would you propose verbally to find John
in this phone book? - Or I would just flip through
and just look for the J's I guess.
- Yeah. So, let me propose that we start that way.
I could just start at the beginning
and step by step I could just look at each page,
looking for John, looking for John.
Now even if you've never seen this here technology before,
it turns out this is exactly what your phone could be doing
in software, like someone from Google or Apple or the like,
they could write software that uses a technique
in programming known as a loop,
and a loop, as the word implies,
is just sort of do something again and again.
What if instead of starting from the beginning
and going one page at a time,
what if I, or what if your phone goes like two pages
or two names at a time?
Would this be correct do you think?
- Well you could skip over John, I think.
- In what sense?
- If he's in one of the middle pages that you skipped over.
- Yeah, so sort of accidentally and frankly
with like 50/50 probability,
John could get sandwiched in between two pages.
But does that mean I have to throw
that algorithm out altogether?
- Maybe you could use that strategy until you get close
to the section and then switch to going one by one.
- Okay, that's nice.
So, you could kind of like go twice as fast
but then kind of pump the brakes as you near your exit
on the highway, or in this case near the J section
of the book.
- Exactly.
- And maybe alternatively, if I get to like
A, B, C, D, E, F, G, H, I, J, K,
if I get to the K section,
then I could just double back like one page
just to make sure John didn't get sandwiched
between those pages.
So, the nice thing about that second algorithm
is that I'm flying through the phone book
like two pages at a time.
So, 2, 4, 6, 8, 10, 12.
It's not perfect, it's not necessarily correct
but it is if I just take one extra step.
So, I think it's fixable,
but what your phone is probably doing
and frankly what I and like my parents and grandparents
used to do back in the day was we'd probably go roughly
to the middle of the phone book here,
and just intuitively, if this is an alphabetized phone book
in English, what section am I probably going to
find myself in roughly?
- K?
- Okay. So, I'm in the K section.
Is John going to be to the left or to the right?
- To the left.
- Yeah.
So, John is going to be to the left or the right
and what we can do here, though your phone
does something smarter, is tear the problem in half,
throw half of the problem away,
being left with just 500 pages now.
But what might I next do?
I could sort of naively just start at the beginning again,
but we've learned to do better.
I can go roughly to the middle here.
- And you can do it again. - Yeah, exactly.
So, now maybe I'm in the E section,
which is a little to the left.
So, John is clearly going to be to the right,
so I can again tear the problem roughly in half,
throw this half of the problem away,
and I claim now that if we started with a thousand pages,
now we've gone to 500, 250,
now we're really moving quickly.
- Yeah.
- [David] And so, eventually I'm hopefully dramatically
left with just one single page
at which point John is either on that page
or not on that page, and I can call him.
Roughly how many steps might this third algorithm take
if I started with a thousand pages
then went to 500, 250, 125,
how many times can you divide 1,000 in half? Maybe?
- 10.
- That's roughly 10.
Because in the first algorithm,
looking again for someone like Zoe in the worst case
might have to go all the way through a thousand pages.
But the second algorithm you said was 500,
maybe 501, essentially the same thing.
So, twice as fast.
But this third and final algorithm is sort of fundamentally
faster because you're sort of dividing and conquering it
in half, in half, in half,
not just taking one or two bites out of it out of a time.
So, this of course is not how we used to use phone books
back in the day since otherwise they'd be single use only.
But it is how your phone is actually searching for Zoe,
for John, for anyone else, but it's doing it in software.
- Oh, that's cool.
- So, here we've happened to focus on searching algorithms,
looking for John in the phone book.
But the technique we just used
can indeed be called divide and conquer,
where you take a big problem and you divide and conquer it,
that is you try to chop it up into smaller,
smaller, smaller pieces.
A more sophisticated type of algorithm,
at least depending on how you implement it,
something known as a recursive algorithm.
Recursive algorithm is essentially an algorithm
that uses itself to solve the exact same problem
again and again, but chops it smaller, and smaller,
and smaller ultimately.
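The phone-book search described here can be sketched as an iterative divide-and-conquer in Python (an illustrative example, not code from the video; the function name is invented):

```python
def find_page(pages, name):
    """Divide-and-conquer search over a sorted 'phone book'.

    Returns (index, steps): where the name was found (-1 if absent)
    and how many times the remaining range was halved.
    """
    lo, hi, steps = 0, len(pages) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # open to roughly the middle
        if pages[mid] == name:
            return mid, steps
        if name < pages[mid]:
            hi = mid - 1              # tear away the right half
        else:
            lo = mid + 1              # tear away the left half
    return -1, steps
```

With a 1,000-page book this needs at most about 10 halvings (since 2^10 = 1,024), matching the "roughly 10" answer in the conversation.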
[upbeat music]
- Hi, my name's Patricia.
- Patricia, nice to meet you.
Where are you a student at?
- I am starting my senior year now at NYU.
- Oh nice. And what have you been studying
the past few years?
- I studied computer science and data science.
- If you were chatting with a non-CS,
non-data science friend of yours,
how would you explain to them what an algorithm is?
- Some kind of systematic way of solving a problem,
or like a set of steps to kind of solve
a certain problem you have.
- So, you probably recall learning topics
like binary search versus linear search, and the like.
- Yeah.
- So, I've come here complete with a
actual chalkboard with some magnetic numbers on it here.
How would you tell a friend to sort these?
- I think one of the first things we learned was
something called bubble sort.
It was kind of like focusing on smaller bubbles
I guess I would say of the problem,
like looking at smaller segments rather than
the whole thing at once.
- What is I think very true about what you're hinting at
is that bubble sort really focuses on local, small problems
rather than taking a step back trying to fix
the whole thing, let's just fix the obvious problems
in front of us. So, for instance, when we're trying to get
from smallest to largest,
and the first two things we see are eight followed by one,
this looks like a problem 'cause it's out of order.
So, what would be the simplest fix,
the least amount of work we can do
to at least fix one problem?
- Just switch those two numbers
'cause one is obviously smaller than eight.
- Perfect. So, we just swap those two then.
- You would switch those again.
- Yeah, so that further improves the situation
and you can kind of see it,
that the one and the two are now in place.
How about eight and six?
- [Patricia] Switch it again.
- Switch those again. Eight and three?
- Switch it again.
[fast forwarding]
- And conversely now the one and the two are closer to,
and coincidentally are exactly where we want them to be.
So, are we done?
- No.
- Okay, so obviously not, but what could we do now
to further improve the situation?
- Go through it again but you don't need
to check the last one anymore because we know
that number is bubbled up to the top.
- Yeah, because eight has indeed bubbled all the way
to the top. So, one and two?
- [Patricia] Yeah, keep it as is.
- Okay, two and six?
- [Patricia] Keep it as is.
- Okay, six and three?
- Then you switch it.
- Okay, we'll switch or swap those.
Six and four?
- [Patricia] Swap it again.
- Okay, so four, six and seven?
- [Patricia] Keep it.
- Okay. Seven and five?
- [Patricia] Swap it.
- Okay. And then I think per your point,
we're pretty darn close.
Let's go through once more.
One and two? - [Patricia] Keep it.
- Two three? - [Patricia] Keep it.
- Three four? - [Patricia] Keep it.
- Four six? - [Patricia] Keep it.
- Six five?
- [Patricia] And then switch it.
- All right, we'll switch this. And now to your point,
we don't need to bother with the ones
that already bubbled their way up.
Now we are a hundred percent sure it's sorted.
- Yeah.
- And certainly the search engines of the world,
Google and Bing and so forth,
they probably don't keep webpages in sorted order
'cause that would be a crazy long list
when you're just trying to search the data.
But there's probably some algorithm underlying what they do
and they probably similarly, just like we,
do a bit of work upfront to get things organized
even if it's not strictly sorted in the same way
so that people like you and me and others
can find that same information.
So, how about social media?
Can you envision where the algorithms are in that world?
- Maybe for example like TikTok, like the For You page,
'cause those are like recommendations, right?
It's sort of like Netflix recommendations
except more constant because it's just every video
you scroll, it's like that's a new recommendation basically.
And it's based on what you've liked previously,
what you've saved previously, what you search up.
So, I would assume there's some kind of algorithm there
kind of figuring out like what to put on your For You page.
- Absolutely. Just trying to keep you presumably
more engaged.
So, the better the algorithm is,
the better your engagement is,
maybe the more money the company then makes on the platform
and so forth.
So, it all sort of feeds together.
But what you're describing really is more
artificially intelligent, if I may,
because presumably there's not someone at TikTok
or any of these social media companies saying,
"If Patricia likes this post, then show her this post.
If she likes this post, then show her this other post."
Because the code would sort of grow infinitely long
and there's just way too much content for a programmer
to be having those kinds of conditionals,
those decisions being made behind the scenes.
So, it's probably a little more artificially intelligent.
And in that sense you have topics like neural networks,
and machine learning which really describe
taking as input things like what you watch,
what you click on, what your friends watch,
what they click on, and sort of trying to infer
from that instead, what should we show Patricia
or her friends next?
- Oh, okay. Yeah. Yeah.
That makes like the distinction more...
Makes more sense now.
- Nice. - Yeah.
[upbeat music]
- I am currently a fourth year PhD student at NYU.
I do robot learning, so that's half and half
robotics and machine learning.
- Sounds like you've dabbled with quite a few algorithms.
So, how does one actually research algorithms
or invent algorithms?
- The most important way is just trying to think about
inefficiencies, and also think about connecting threads.
The way I think about it is that algorithm for me
is not just about the way of doing something,
but it's about doing something efficiently.
Learning algorithms are practically everywhere now.
Google, I would say for example,
is learning every day about like,
"Oh what articles, what links might be better than others?"
And re-ranking them.
There are recommender systems all around us, right?
Like content feeds and social media,
or you know, like YouTube or Netflix.
What we see is in a large part determined by these kinds of
learning algorithms.
- Nowadays there's a lot of concerns
around some applications of machine learning
like deep fakes where it can kind of learn how I talk
and learn how you talk and even how we look,
and generate videos of us.
We're doing this for real, but you could imagine
a computer synthesizing this conversation eventually.
- Right.
- But how does it even know what I sound like
and what I look like, and how to replicate that?
- All of these learning algorithms that we talk about, right?
A lot, like what goes in there is just
lots and lots of data.
So, data goes in, something else comes out.
What comes out is whatever objective function
that you optimize for.
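The "data goes in, something comes out, you optimize an objective function" loop can be illustrated with the simplest possible optimizer: plain gradient descent on a one-dimensional squared-error objective. This is a sketch of the general idea, not any production learning algorithm:

```python
def minimize(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: repeatedly step against the gradient
    of whatever objective function you chose to optimize for."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Objective: squared error between our guess x and the "data" value 3.0,
# i.e. objective(x) = (x - 3.0)**2, whose gradient is 2 * (x - 3.0).
x_star = minimize(lambda x: 2 * (x - 3.0), x0=0.0)
print(round(x_star, 4))  # 3.0
```

Swap in a different objective (win the game, match the training text) and the same loop describes, at a very high level, how learned systems improve.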
- Where is the line between algorithms that
play games with and without AI?
- I think when I started off my undergrad,
the terms AI and machine learning
were not nearly as synonymous.
- Okay.
- And even in my undergraduate, in the AI class,
we learned a lot of classical algorithms for gameplay.
Like for example, A* search, right?
That's a very simple example of how you can play a game
without having anything learned.
This is very much, oh you are at a game state,
you just search down, see what are the possibilities
and then you pick the best possibility that it can see,
versus what you think about when you think about,
ah yes, gameplay like AlphaZero for example,
or AlphaStar, or there are a lot of, you know,
like fancy new machine learning agents that are
even learning very difficult games like Go.
And those are learned agents, as in they are getting better
as they play more and more games.
And as they get more games, they kind of
refine their strategy based on the data that they've seen.
And once again, this high level abstraction
is still the same.
You see a lot of data and you'll learn from that.
But the question is what is the objective function
that you're optimizing for?
Is it winning this game?
Is it forcing a tie or is it, you know,
opening a door in a kitchen?
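The "you search down, see what the possibilities are, and pick the best one" idea can be sketched as a tiny minimax-style lookahead. This is a toy illustration of classical game search without learning, not A* or AlphaZero:

```python
def best_move(state, moves, apply_move, score, depth):
    """Search `depth` plies ahead, assuming the opponent picks the
    worst outcome for us, and return the move with the best guarantee."""
    def value(s, d, maximizing):
        opts = moves(s)
        if d == 0 or not opts:
            return score(s)  # leaf: evaluate the game state directly
        vals = [value(apply_move(s, m), d - 1, not maximizing) for m in opts]
        return max(vals) if maximizing else min(vals)

    return max(moves(state),
               key=lambda m: value(apply_move(state, m), depth - 1, False))

# Toy game: the state is a number, a move adds 1 or 2, higher is better for us.
print(best_move(0, lambda s: [1, 2], lambda s, m: s + m,
                lambda s: s, depth=2))  # 2
```

Nothing here is learned: the program plays by exhaustive search alone, which is exactly what separates it from agents that improve with data.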
- So, if the world is very much focused on supervised,
unsupervised, and reinforcement learning now,
what comes next five, ten years, where is the world going?
- I think that this is just going to be more and more,
I don't want to use the word encroachment,
but that's what it feels like of algorithms
into our everyday life.
Like even when I was taking the train here, right?
The trains are being routed with algorithms,
but this has existed for you know, like 50 years probably.
But as I was coming here, as I was checking my phone,
those are different algorithms,
and you know, they're kind of getting all around us,
getting there with us all the time.
They're making our life better most places, most cases.
And I think that's just going to be a continuation
of all of those.
- And it feels like they're even in places
you wouldn't expect, and there's just so much data
about you and me and everyone else online
and this data is being mined and analyzed,
and influencing things we see and hear it would seem.
So, there is sort of a counterpoint which might be good
for the marketers, but not necessarily good for you and me
as individuals.
- We are human beings, but for someone
we might be just a pair of eyes who are
carrying a wallet, and are there to buy things.
But there is so much more potential for these algorithms
to just make our life better without
changing much about our life.
[upbeat music]
- I'm Chris Wiggins. I'm an associate professor
of Applied Mathematics at Columbia.
I'm also the chief data scientist of the New York Times.
The data science team at the New York Times
develops and deploys machine learning
for newsroom and business problems.
But I would say the things that we do mostly, you don't see,
but it might be things like personalization algorithms,
or recommending different content.
- And do data scientists, which is rather distinct
from the phrase computer scientists.
Do data scientists still think in terms of algorithms
as driving a lot of it?
- Oh absolutely, yeah.
In fact, so in data science and academia,
often the role of the algorithm is
the optimization algorithm that helps you find the best
model or the best description of a data set.
In data science in industry, the goal
is often centered around an algorithm
which becomes a data product.
So, a data scientist in industry might be
developing and deploying the algorithm,
which means not only understanding the algorithm
and its statistical performance,
but also all of the software engineering
around systems integration, making sure that that algorithm
receives input that's reliable and has output that's useful,
as well as I would say the organizational integration,
which is how does a community of people
like the set of people working at the New York Times
integrate that algorithm into their process?
- Interesting. And I feel like AI based startups
are all the rage and certainly within academia.
Are there connections between AI
and the world of data science?
- Oh, absolutely.
- And the algorithms therein,
can you connect those dots for...
- You're right that AI as a field has really exploded.
I would say particularly many people experienced a ChatBot
that was really, really good.
Today, when people say AI,
they're often thinking about large language models,
or they're thinking about generative AI,
or they might be thinking about a ChatBot.
One thing to keep in mind is a ChatBot is a special case
of generative AI, which is a special case of using
large language models, which is a special case of using
machine learning generally,
which is what most people mean by AI.
You may have moments that are what John McCarthy called,
"Look Ma, no hands," results,
where you do some fantastic trick and you're not quite sure
how it worked.
I think it's still very much early days.
Large language models are still at the point of
what might be called alchemy, in that people are building
large language models without a real clear,
a priori sense of what the right design is
for a given problem.
Many people are trying different things out,
often in large companies where they can afford
to have many people trying things out,
seeing what works, publishing that,
instantiating it as a product.
- And that itself is part of the scientific process
I would think too.
- Yeah, very much. Well, science and engineering,
because often you're building a thing
and the thing does something amazing.
To a large extent we are still looking for
basic theoretical results around why
deep neural networks generally work.
Why are they able to learn so well?
They're huge, billions of parameter models
and it's difficult for us to interpret
how they're able to do what they do.
- And is this a good thing, do you think?
Or an inevitable thing that we, the programmers,
we, the computer scientists, the data scientists
who are inventing these things,
can't actually explain how they work?
Because I feel like friends of mine in industry,
even when it's something simple and relatively familiar
like auto complete, they can't actually tell me
why that name is appearing at the top of the list.
Whereas years ago when these algorithms were more
deterministic and more procedural,
you could even point to the line that made that name
bubble up to the top. - [Chris] Absolutely.
- So, is this a good thing, a bad thing,
that we're sort of losing control perhaps in some sense
of the algorithm?
- It has risks.
I don't know that I would say that it's good or bad,
but I would say there's lots of scientific precedent.
There are times when an algorithm works really well
and we have finite understanding of why it works
or a model works really well
and sometimes we have very little understanding
of why it works the way it does.
- In classes I teach, I certainly spend a lot of time on
fundamentals, algorithms that have been taught in classes
for decades now, whether it's binary search,
linear search, bubble sort, selection sort or the like,
but if we're already at the point where I can pull up
ChatGPT, copy paste a whole bunch of numbers or words
and say, "Sort these for me,"
does it really matter how ChatGPT is sorting it?
Does it really matter to me as the user
how the software is sorting it?
Do these fundamentals become more dated and less important
do you think?
- Now you're talking about the ways in which code
and computation are a special case of technology, right?
So, for driving a car, you may not necessarily need
to know much about organic chemistry,
even though the organic chemistry is how the car works.
So, you can drive the car and use it in different ways
without understanding much about the fundamentals.
So, similarly with computation, we're at a point
where the computation is so high level, right?
You can import scikit-learn and you can go from zero
to machine learning in 30 seconds.
It depends on what level you want to understand
the technology, where in the stack, so to speak,
it's possible to understand it and make wonderful things
and advance the world without understanding it
at the particular level of somebody who actually might have
originally designed the actual optimization algorithm.
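Chris's "zero to machine learning in 30 seconds" point, that libraries like scikit-learn hide the underlying optimization, can be illustrated with a classifier small enough to fit in a few lines. This stdlib-only 1-nearest-neighbor sketch (with invented toy data) avoids the library dependency entirely while making the same point:

```python
def fit_predict(X_train, y_train, x):
    """Label x with the label of its closest training point
    (1-nearest-neighbor, squared Euclidean distance)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(range(len(X_train)), key=lambda i: dist(X_train[i], x))
    return y_train[nearest]

# Toy 2-D data: two clusters, two labels.
X = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
y = ["low", "low", "high", "high"]
print(fit_predict(X, y, (4.8, 5.1)))  # high
```

You can use this without knowing anything about distance metrics or why nearest-neighbor works, which is precisely the layered understanding Chris describes.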
I should say though, for many of the optimization
algorithms, there are cases where an algorithm
works really well and we publish a paper,
and there's a proof in the paper,
and then years later people realize
actually that proof was wrong and we're really
still not sure why that optimization works,
but it works really well or it inspires people
to make new optimization algorithms.
So, I do think that the goal of understanding algorithms
is loosely coupled to our progress
in advancing great algorithms, but they don't always
necessarily have to require each other.
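The fundamentals David lists, binary search among them, are short enough to write out in full. A standard sketch:

```python
def binary_search(sorted_items, target):
    """Classic binary search: halve the candidate range each step.
    Returns the index of target in the sorted list, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target must lie in the upper half
        else:
            hi = mid - 1   # target must lie in the lower half
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3
```

This is the same divide-in-half idea as searching a phone book, which is why it keeps appearing in introductory classes decades on.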
- And for those students especially,
or even adults who are thinking of now steering into
computer science, into programming,
who were really jazzed about heading in that direction
up until, for instance, November of 2022,
when all of a sudden for many people
it looked like the world was now changing
and now maybe this isn't such a promising path,
this isn't such a lucrative path anymore.
Are LLMs, are tools like ChatGPT reason not to perhaps
steer into the field?
- Large language models are a particular architecture
for predicting, let's say the next word,
or a set of tokens more generally.
The algorithm comes in when you think about
how that LLM is to be trained, or how it is to be fine-tuned.
So, the P of GPT stands for pre-trained.
The idea is that you train a large language model
on some corpus of text, could be encyclopedias,
or textbooks, or what have you.
And then you might want to fine tune that model
around some particular task or
some particular subset of texts.
So, both of those are examples of training algorithms.
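Pre-training on a big corpus and then fine-tuning on a smaller, task-specific one can be mimicked with a toy next-word predictor. This bigram counter is nothing like a real LLM, but the two-stage training shape is the same:

```python
from collections import defaultdict, Counter

class BigramModel:
    """Toy next-word predictor: counts which word follows which.
    'Pre-train' on one corpus, then 'fine-tune' by continuing to
    update the same counts on a smaller domain corpus."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        # Most frequent continuation seen so far.
        return self.counts[word].most_common(1)[0][0]

model = BigramModel()
model.train("the cat sat on the mat the cat ran")      # "pre-training"
model.train("the dog barked the dog barked the dog")   # "fine-tuning"
print(model.predict("the"))  # dog
```

After pre-training alone the model would continue "the" with "cat"; the fine-tuning text shifts its prediction, which is the intuition behind adapting a general model to a particular task.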
So, I would say people's perception
of artificial intelligence has really changed a lot
in the last six months, particularly around November of 2022
when people experienced a really good ChatBot.
The technology though had been around already before.
Academics had already been working with GPT-3
before that, and GPT-2 and GPT-1.
And for many people it sort of opened up this conversation
about what is artificial intelligence
and what could we do with this?
And what are the possible good and bad, right?
Like any other piece of technology.
Kranzberg's first law of technology:
technology is neither good, nor bad, nor is it neutral.
Every time we have some new technology,
we should think about its capabilities
and the good, and the possible bad.
- [David] As with any area of study,
algorithms offer a spectrum from the most basic
to the most advanced.
And even if right now, the most advanced of those algorithms
feels out of reach because you just
don't have that background,
with each lesson you learn, with each algorithm you study,
that end game becomes closer and closer
such that it will, before long, be accessible to you
and you will be at the most advanced end of that spectrum.