Basics of Asymptotic Analysis (Part 3)
Summary
TL;DR: This lecture delves into the concept of asymptotic complexity in algorithm analysis, emphasizing that exact running-time calculations are impractical and less insightful. It illustrates how running time is inherently linked to input size, using an array example to demonstrate how shifting elements affects time. The focus is on the growth rate of the time complexity function f(n) and how such functions can be compared across data structures or algorithms. The lecture walks through determining the dominant term in a time complexity expression, showing how higher-order terms become more significant as input size increases, and concludes by explaining how asymptotic notation approximates and simplifies time complexity analysis.
Takeaways
- 😀 Asymptotic analysis is a better approach than measuring exact running time for calculating time complexity of algorithms.
- 🔍 Measuring the actual running time is impractical because it depends on the size of the input and is not constant.
- 📚 The running time of an algorithm is generally proportional to the size of the input, which is a key concept in understanding time complexity.
- 🔄 An example of shifting elements in an array illustrates how the time complexity increases linearly with the number of elements.
- 📈 The function f(n) represents the time complexity, indicating the number of instructions executed for an input of size n.
- 🔢 For simple programs, calculating f(n) is straightforward, but it becomes more complex as programs grow in size.
- 📊 Comparing the growth rate of f(n) between different data structures or algorithms is crucial for understanding their efficiency at larger input sizes.
- 📉 The percentage contribution of different terms in f(n) to the total running time can vary significantly with the input size.
- 📚 As n increases, higher-order terms (like n^2) dominate the running time, making lower-order terms and constants negligible.
- 🔎 Analyzing the growth rate helps identify which term in the time complexity function contributes the most to the running time for large inputs.
- 🚀 Asymptotic complexity focuses on the term that consumes the most time for large input sizes, simplifying the understanding of algorithm efficiency.
Q & A
Why is it not practical to measure the exact running time of an algorithm?
-Measuring the actual running time is not practical because it can vary significantly depending on the size of the input and the machine on which the algorithm is executed.
What is the main factor that affects the running time of an algorithm?
-The running time generally depends on the size of the input.
In the given example, how does the time complexity change with the size of the array?
-For an array of size 3, adding an element at the beginning requires 3 shifts, taking 3 units of time. For an array of size 10,000, it requires 10,000 shifts, taking 10,000 units of time.
What does the function f(n) represent in the context of time complexity?
-f(n) is a function that denotes the time complexity, representing the number of instructions executed for an input size n.
How does the time complexity function f(n) help in comparing data structures?
-By comparing the f(n) values of different data structures for various input sizes, we can determine which data structure performs better in terms of time complexity.
Why are we interested in the growth rate of f(n) rather than its exact value?
-We are interested in the growth rate of f(n) because it shows how the running time of an algorithm scales with the input size. This helps in understanding the algorithm's efficiency for large inputs.
What does the term 'asymptotic complexity' mean?
-Asymptotic complexity is the approximate measure of an algorithm's time complexity, focusing on the term that grows the fastest with the input size while ignoring smaller terms.
How does the growth rate of f(n) differ for the terms in the example f(n) = 5n² + 6n + 12 as n increases?
-For small values of n, the constant term (12) and the linear term (6n) may contribute significantly to the running time. However, as n increases, the quadratic term (5n²) dominates and contributes most of the time complexity.
Why is it acceptable to eliminate smaller terms in the time complexity function?
-It is acceptable to eliminate smaller terms because they contribute negligibly to the running time for large input sizes. The dominant term provides a sufficient approximation of the time complexity.
What is the key takeaway from understanding the growth rate of time complexity?
-The key takeaway is that the efficiency of an algorithm is best understood by focusing on how its running time grows with the input size, rather than exact running times, allowing for better comparison and selection of algorithms.
Outlines
📈 Understanding Time Complexity Through Asymptotic Analysis
This paragraph delves into the concept of time complexity in algorithms, emphasizing that determining the exact running time is impractical and not the best approach. It introduces the idea that time complexity is dependent on the size of the input, using an example of shifting elements in an array to illustrate how time increases linearly with input size. The concept of f(n) as a function representing time complexity is explained, with an example of a loop and a printf function to demonstrate how f(n) can be calculated. The importance of the growth rate of f(n) is highlighted, and the paragraph concludes with the idea that comparing algorithms or data structures based on their f(n) values can reveal their efficiency relative to input size.
📚 Analyzing Growth Rates in Asymptotic Complexity
This paragraph continues the discussion on time complexity by examining the growth rates of different components within the function f(n). It uses a specific function, f(n) = 5n^2 + 6n + 12, to demonstrate how the contribution of each term to the overall running time changes with varying input sizes. The example shows that while constants and lower-order terms may seem significant for small n, they become negligible as n grows large, with the quadratic term dominating the time complexity. The paragraph illustrates the concept of asymptotic complexity, where the focus is on the term with the highest growth rate, and the less significant terms can be disregarded for large n. This approach provides an approximate measure of time complexity that is very close to the actual result, satisfying the need for efficiency analysis in algorithm design.
Keywords
💡Asymptotic Analysis
💡Time Complexity
💡Running Time
💡Input Size
💡Shifting
💡f(n)
💡Growth Rate
💡Constant Term
💡Linear Term
💡Quadratic Term
💡Asymptotic Complexity
Highlights
Asymptotic analysis is a better approach than examining the exact running time for calculating time complexity.
Measuring the actual running time is impractical due to its dependency on input size.
An example illustrates how the running time increases with the size of an array when adding an element to the beginning.
The time complexity is directly proportional to the size of the input, demonstrated through the array shifting example.
Time complexity, denoted as f(n), represents the number of instructions executed for the input value n.
Comparing f(n) values is a method to evaluate and compare the efficiency of different data structures or algorithms.
The growth rate of f(n) is crucial for understanding performance differences between data structures or algorithms at larger input sizes.
An example of a printf function within a loop demonstrates how f(n) can be calculated and related to time complexity.
As program complexity increases, finding the exact f(n) becomes more challenging.
The importance of considering different values of n to understand the growth rate of f(n) is emphasized.
A polynomial function example shows how the contribution of each term to the running time changes with increasing n.
For large n, the squared term in a polynomial time complexity function dominates the running time.
Visualizing the growth rate helps in understanding which term in the time complexity function contributes the most to the running time.
The concept of eliminating non-dominant terms in time complexity to get an approximate measure is introduced.
Asymptotic complexity focuses on the term that takes the most time, providing a satisfactory approximation of the actual running time.
Asymptotic analysis is interested in the growth rate and not the exact running time or machine-specific execution times.
The presentation concludes by reinforcing the importance of asymptotic complexity in understanding algorithmic efficiency.
Transcripts
Let's consider part three of basics of asymptotic analysis.
From the last lecture, we have learnt that examining the
exact running time is not the best solution
to calculate the time complexity.
That's what we have learnt from the last lecture right?
It is not the best practice to know the
exact running time of an algorithm.
So, what is the best solution to this problem?
There are certain points we need to note here.
Measuring the actual running time is not at all practical.
We have learnt this already, that measuring the actual
running time is not at all practical.
The running time generally depends on the size of the input.
I want you to note this point that,
running time generally depends on the size of the input.
Now, why I am saying this, let's see this with the help of an example.
Suppose we have an array which consists of three elements like this.
Now what I want to do, I want to add
one more element to the beginning of the list.
Let's say this is the element I want to add at the beginning of the list.
For this purpose obviously, I must have an empty slot here.
And I need to make three shifts.
Let us assume that if one shift takes one unit of time,
then three shifts will take three units of time. Isn't that so?
If one shift takes one unit of time,
then three shifts will take three units of time.
I need to shift each element towards right,
in order to make an empty slot here.
Now, you can understand that
with the size equal to three, the time will also be three units.
After shifting, this will become empty,
and then I can add the element in the beginning of the list.
There is no problem in adding this element.
Right? But what if we have 10,000 elements in an array?
Isn't that going to be very tedious?
Because shifting is costly. Right?
Adding the element to the beginning of the list is not costly.
Shifting is costly.
In that case, we have to make 10,000 shifts
which will take 10,000 units of time.
Because, we have assumed that one shift will take one unit of time.
We have to make 10,000 shifts.
Therefore, it will take 10,000 units of time.
So, it is clear from this fact that,
running time generally depends on the
size of the input. Here, in this case it depends on the size of the array.
Size of the input or size of the array, one and the same thing.
So, if the size of the array is three, it takes three units of time.
If the size of the array is 10,000,
it takes 10,000 units of time.
The one thing we should note here is that,
it totally depends on the size of the input
that what is going to be the running time.
Obviously, we are not considering the machine time,
that how much time a particular operation will take
in a particular machine. We are considering
only the size of the input.
So, running time generally depends on the size of the input.
We should always remember this fact.
Therefore, if the size of the input is n,
then f(n), a function of n, denotes the time complexity.
So, f(n) is nothing but a function which denotes
the time complexity. It depends on input n.
Okay. Or in other words, I can say
f(n) represents the number of instructions
executed for the input value n.
Let's consider an example to understand this statement.
Here in this example, I have a printf function
within this loop which runs from zero to n minus one.
Let's say, this instruction will take one unit of time.
This is my assumption.
Let's say this particular instruction takes one unit of time,
then we are executing this instruction n number of times.
So, this is clear that f(n) will be equal to n.
Because, each instruction is taking one unit of time.
And we are executing this instruction n number of times.
Right? We are executing this instruction n number of times.
That is why, it depends on the size of the input n
and f(n) will be equal to n because
f(n) is keeping the count of the number of instructions
executed for the input value n.
This will give us the time complexity.
But, how to find f(n)?
You know, finding f(n) in small programs is easy.
But, as programs become more complex,
finding f(n) is not that easy.
We can compare two data structures for a particular operation
by comparing their f(n) values.
We can compare their f(n) values
and then we can compare two data structures.
We are interested in growth rate of f(n) --
this is very important,
we are interested in growth of f(n) with respect to n.
Because, it might be possible that
for smaller input size one data structure may seem better
than the other, but for larger input size it may not.
This is very important.
We are interested in growth rate of f(n).
We have to try different values of n,
and then we have to find what is going to be the f(n) value.
So, for different values of n, we should calculate
not for just one value of n.
Because, it might be possible that one data structure
may seem better for smaller input size than the other data structure.
This concept is applicable in comparing
the two algorithms as well.
It is just like, you know, we can compare
two algorithms in the same way.
We can compare two data structures in the same way.
We are interested in growth rate of f(n).
Now, how to know the growth rate of f(n)?
Let's consider one example.
Let's say, we have f(n) equal to five n square plus six n plus 12.
This represents time or number of instructions executed
that depends on the input value n of course.
Let's say initially, we take n equal to one.
Then percentage of running time due to five n square will be
five divided by five plus six plus 12.
We just have to put one here, this becomes five,
this becomes six, this is already 12.
So, five plus six plus 12.
We will divide five by five plus six plus 12
and then we will multiply it by 100 in order to calculate the percentage.
This will give us 21.74 percent.
So, we can clearly say that, percentage of
running time due to five n square is 21.74 percent.
Five n square is contributing this much amount of time.
Obviously, this we are calculating for n equal to one.
Let's see what is the percentage of running time due to six n.
It is six divided by five plus six plus 12 into 100,
which is 26.09 percent.
So, six n is contributing higher than five n square. Right?
Now, let's see what is the percentage of
running time due to 12. It is 12 divided by
five plus six plus 12 into 100, which is 52.17 percent.
It seems like most of the time is consumed here.
But wait, we have to see the growth rate.
We cannot say right now that maximum
amount of time is taken by this 12.
Let's see for the other input values as well.
Let's say n is equal to 10.
In that case, five n square is contributing 87.41 percent
of the running time, you can clearly see the growth here.
From 21.74 percent to 87.41 percent.
While, six n is decreasing.
It decreases from 26.09 percent to 10.49 percent.
And 12, which we thought of as taking most of the time,
is now taking just about 2.1 percent.
Now, let's take n value equal to 100.
In that case, you can clearly see the growth.
Five n square is contributing the maximum
of the time. It is 98.79 percent.
While, this is just taking 1.19 percent
and 12 is just taking 0.02 percent.
If you increase the n value further, say to 1000,
you can clearly see five n square is taking 99.88 percent of the time.
While, six n is taking 0.12 percent of the time
and 12 is just taking 0.0002 percent of the time.
These are almost negligible, right?
Most of the time is consumed here,
that is in five n square.
99.88 percent of the time is consumed.
So, from this fact it is clear that
for larger values of n, the squared term consumes
almost 99 percent of the time.
This term is consuming the most of the time.
Let's say, you have taken the value of n equal to 1.
And you simply say that 12 is taking the most of the time.
But the reality is different.
As you are increasing the value of n,
you can clearly see which term is taking the most of the time.
Now, let's visualize the growth rate.
Here, you can clearly see that,
square term is taking most of the time.
You can clearly see the growth over here.
You can also see the contribution of the constant term:
initially it is high,
but it declines quickly as n grows.
The same holds for the linear term as well.
You can clearly say that, this square term
or quadratic term is taking the most of the time.
Right?
So, there is no harm if we can eliminate
the rest of the terms as they are not
contributing much to the time.
We can eliminate these terms -- six n plus 12, we can eliminate them.
There is no problem in eliminating them.
Because, we know that, five n square term is taking the most of the time.
It is 99.88 percent.
We are getting the approximate time complexity.
Isn't that so?
And we are satisfied with this result because
this approximate result is very near to the actual result.
If we take f(n) equal to five n square, there is no harm.
We know that, it is taking 99.88 percent of the time
or it may take even more.
We are getting the approximate time complexity
and we are happy with it. There is no problem
in eliminating the rest of the terms.
This concept of calculating the
approximate measure of time complexity
is called asymptotic complexity.
We are not calculating the exact running time here.
We are eliminating the unnecessary terms.
We are only concentrating on the term
which takes most of the time. Okay.
So, this approximate measure of time complexity
is called asymptotic complexity.
And this is what we are interested in.
We are not interested in calculating the
exact running time, we are not interested in
on which machine we are executing our algorithm.
We are only interested in calculating the
approximate measure of time complexity.
And we are satisfied with this result.
We already know this fact that time complexity
depends on the size of the input.
As you increase the size, you can see the time complexity changes.
So it depends on the size of the input
and we are happy with this result.
Okay friends, this is it for now.
Thank you for watching this presentation.