Basics of Asymptotic Analysis (Part 3)

Neso Academy
19 Apr 2020 · 09:50

Summary

TL;DR: This lecture delves into the concept of asymptotic complexity in algorithm analysis, emphasizing that exact running-time calculations are impractical and less insightful. It illustrates how time complexity is inherently linked to input size, using an array example to demonstrate how shifting elements affects time. The focus is on the growth rate of the time complexity function, f(n), and how it can be compared across different data structures or algorithms. The lecture walks through determining the dominant term in a time complexity expression, showing how higher-order terms become more significant as the input size increases, and explains the use of asymptotic notation to approximate and simplify time complexity analysis.

Takeaways

  • 😀 Asymptotic analysis is a better approach than measuring exact running time for calculating time complexity of algorithms.
  • 🔍 Measuring the actual running time is impractical because it depends on the size of the input and is not constant.
  • 📚 The running time of an algorithm is generally proportional to the size of the input, which is a key concept in understanding time complexity.
  • 🔄 An example of shifting elements in an array illustrates how the time complexity increases linearly with the number of elements.
  • 📈 The function f(n) represents the time complexity, indicating the number of instructions executed for an input of size n.
  • 🔢 For simple programs, calculating f(n) is straightforward, but it becomes more complex as programs grow in size.
  • 📊 Comparing the growth rate of f(n) between different data structures or algorithms is crucial for understanding their efficiency at larger input sizes.
  • 📉 The percentage contribution of different terms in f(n) to the total running time can vary significantly with the input size.
  • 📚 As n increases, higher-order terms (like n^2) dominate the running time, making lower-order terms and constants negligible.
  • 🔎 Analyzing the growth rate helps identify which term in the time complexity function contributes the most to the running time for large inputs.
  • 🚀 Asymptotic complexity focuses on the term that consumes the most time for large input sizes, simplifying the understanding of algorithm efficiency.

Q & A

  • Why is it not practical to measure the exact running time of an algorithm?

    -Measuring the actual running time is not practical because it can vary significantly depending on the size of the input and the machine on which the algorithm is executed.

  • What is the main factor that affects the running time of an algorithm?

    -The running time generally depends on the size of the input.

  • In the given example, how does the time complexity change with the size of the array?

    -For an array of size 3, adding an element at the beginning requires 3 shifts, taking 3 units of time. For an array of size 10,000, it requires 10,000 shifts, taking 10,000 units of time.

  • What does the function f(n) represent in the context of time complexity?

    -f(n) is a function that denotes the time complexity, representing the number of instructions executed for an input size n.

  • How does the time complexity function f(n) help in comparing data structures?

    -By comparing the f(n) values of different data structures for various input sizes, we can determine which data structure performs better in terms of time complexity.

  • Why are we interested in the growth rate of f(n) rather than its exact value?

    -We are interested in the growth rate of f(n) because it shows how the running time of an algorithm scales with the input size. This helps in understanding the algorithm's efficiency for large inputs.

  • What does the term 'asymptotic complexity' mean?

    -Asymptotic complexity is the approximate measure of an algorithm's time complexity, focusing on the term that grows the fastest with the input size while ignoring smaller terms.

  • How does the growth rate of f(n) differ for the terms in the example f(n) = 5n² + 6n + 12 as n increases?

    -For small values of n, the constant term (12) and the linear term (6n) may contribute significantly to the running time. However, as n increases, the quadratic term (5n²) dominates and contributes most of the time complexity.

  • Why is it acceptable to eliminate smaller terms in the time complexity function?

    -It is acceptable to eliminate smaller terms because they contribute negligibly to the running time for large input sizes. The dominant term provides a sufficient approximation of the time complexity.

  • What is the key takeaway from understanding the growth rate of time complexity?

    -The key takeaway is that the efficiency of an algorithm is best understood by focusing on how its running time grows with the input size, rather than exact running times, allowing for better comparison and selection of algorithms.

Outlines

00:00

📈 Understanding Time Complexity Through Asymptotic Analysis

This paragraph delves into the concept of time complexity in algorithms, emphasizing that determining the exact running time is impractical and not the best approach. It introduces the idea that time complexity is dependent on the size of the input, using an example of shifting elements in an array to illustrate how time increases linearly with input size. The concept of f(n) as a function representing time complexity is explained, with an example of a loop and a printf function to demonstrate how f(n) can be calculated. The importance of the growth rate of f(n) is highlighted, and the paragraph concludes with the idea that comparing algorithms or data structures based on their f(n) values can reveal their efficiency relative to input size.

05:02

📚 Analyzing Growth Rates in Asymptotic Complexity

This paragraph continues the discussion on time complexity by examining the growth rates of different components within the function f(n). It uses a specific function, f(n) = 5n^2 + 6n + 12, to demonstrate how the contribution of each term to the overall running time changes with varying input sizes. The example shows that while constants and lower-order terms may seem significant for small n, they become negligible as n grows large, with the quadratic term dominating the time complexity. The paragraph illustrates the concept of asymptotic complexity, where the focus is on the term with the highest growth rate, and the less significant terms can be disregarded for large n. This approach provides an approximate measure of time complexity that is very close to the actual result, satisfying the need for efficiency analysis in algorithm design.

Keywords

💡Asymptotic Analysis

Asymptotic Analysis is a method used in computer science to understand the behavior of algorithms as the input size becomes very large. It focuses on the growth rate of the time complexity function rather than the exact running time. In the script, it is the central theme, emphasizing that the exact running time is not the best solution for calculating time complexity, and the importance of examining the growth rate of f(n), the function that denotes time complexity, as the input size increases.

💡Time Complexity

Time complexity is a measure of the amount of time an algorithm takes to complete as a function of the length of the input. It is usually expressed using Big O notation. In the video, time complexity is discussed in the context of how it depends on the size of the input, and the importance of understanding its growth rate for comparing algorithms and data structures.

💡Running Time

Running time refers to the actual time an algorithm takes to execute, which is generally impractical to measure exactly due to various factors like machine specifications. The script points out that running time is not the focus in asymptotic analysis but rather the theoretical time complexity, which is a function of the input size.

💡Input Size

Input size is the measure of the amount of data that an algorithm processes. The script uses the term repeatedly to illustrate that the running time and time complexity are directly proportional to the size of the input, as demonstrated with the example of shifting elements in an array.

💡Shifting

In the context of the script, shifting refers to the operation of moving elements in an array to create space for a new element. It is used as an example to demonstrate how the running time can increase linearly with the input size, as each new element requires a certain number of shifts.

💡f(n)

f(n) is a function that represents the time complexity of an algorithm, where 'n' is the size of the input. The script explains that f(n) can be a simple or complex function, and it is used to compare the efficiency of different algorithms or data structures by examining how f(n) grows with n.

💡Growth Rate

Growth rate in the script refers to how quickly the time complexity function f(n) increases as the input size grows. It is a critical concept because it helps to predict the performance of an algorithm for large inputs, which is the primary focus of asymptotic analysis.

💡Constant Term

In the context of the script, the constant term refers to a part of the time complexity function that does not change with the input size. It is used to illustrate that while constant terms may seem significant for small input sizes, their impact diminishes as the input size grows large.

💡Linear Term

The linear term in the script represents a part of the time complexity function that grows at a rate proportional to the input size. It is used in the example to show that although it contributes more than the constant term for small inputs, its relative contribution decreases as the input size increases.

💡Quadratic Term

The quadratic term in the script is a component of the time complexity function that grows at a rate proportional to the square of the input size. It is highlighted as the term that dominates the running time for large inputs, thus being the most significant factor in the time complexity for large 'n'.

💡Asymptotic Complexity

Asymptotic complexity is the approximate measure of time complexity that focuses on the fastest-growing term in the time complexity function. The script explains that by eliminating the less significant terms, we can get a simplified and close approximation of the actual time complexity, which is sufficient for comparing algorithms.

Highlights

Asymptotic analysis is a better approach than examining the exact running time for calculating time complexity.

Measuring the actual running time is impractical due to its dependency on input size.

An example illustrates how the running time increases with the size of an array when adding an element to the beginning.

The time complexity is directly proportional to the size of the input, demonstrated through the array shifting example.

Time complexity, denoted as f(n), represents the number of instructions executed for the input value n.

Comparing f(n) values is a method to evaluate and compare the efficiency of different data structures or algorithms.

The growth rate of f(n) is crucial for understanding performance differences between data structures or algorithms at larger input sizes.

An example of a printf function within a loop demonstrates how f(n) can be calculated and related to time complexity.

As program complexity increases, finding the exact f(n) becomes more challenging.

The importance of considering different values of n to understand the growth rate of f(n) is emphasized.

A polynomial function example shows how the contribution of each term to the running time changes with increasing n.

For large n, the squared term in a polynomial time complexity function dominates the running time.

Visualizing the growth rate helps in understanding which term in the time complexity function contributes the most to the running time.

The concept of eliminating non-dominant terms in time complexity to get an approximate measure is introduced.

Asymptotic complexity focuses on the term that takes the most time, providing a satisfactory approximation of the actual running time.

Asymptotic analysis is interested in the growth rate and not the exact running time or machine-specific execution times.

The presentation concludes by reinforcing the importance of asymptotic complexity in understanding algorithmic efficiency.

Transcripts

00:00

Let's consider part three of the basics of asymptotic analysis. From the last lecture, we have learnt that examining the exact running time is not the best solution to calculate the time complexity. It is not the best practice to know the exact running time of an algorithm. So, what is the best solution to this problem? There are certain points we need to note here. Measuring the actual running time is not at all practical; we have learnt this already. The running time generally depends on the size of the input. I want you to note this point: running time generally depends on the size of the input. Now, why am I saying this? Let's see with the help of an example. Suppose we have an array which consists of three elements, and I want to add one more element to the beginning of the list. For this purpose, obviously, I must have an empty slot at the beginning, and I need to make three shifts.

00:59

Let us assume that if one shift takes one unit of time, then three shifts will take three units of time. I need to shift each element towards the right in order to make an empty slot at the beginning. So, with the size equal to three, the time will also be three units. After shifting, the first slot becomes empty, and then I can add the element at the beginning of the list. There is no problem in adding this element. But what if we have 10,000 elements in the array? Isn't that going to be very tedious? Because shifting is costly. Adding the element to the beginning of the list is not costly; shifting is costly. In that case, we have to make 10,000 shifts, which will take 10,000 units of time, because we have assumed that one shift takes one unit of time. So, it is clear from this fact that the running time generally depends on the size of the input. Here, it depends on the size of the array; size of the input and size of the array are one and the same thing. If the size of the array is three, it takes three units of time. If the size of the array is 10,000, it takes 10,000 units of time. The one thing we should note here is that the running time depends entirely on the size of the input. Obviously, we are not considering the machine time, that is, how much time a particular operation takes on a particular machine. We are considering only the size of the input. So, running time generally depends on the size of the input. We should always remember this fact.
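The array example above can be sketched in C. This is a minimal illustration of my own, not code from the lecture; the function name `insert_at_front` and the sample values are assumptions for demonstration:

```c
#include <assert.h>
#include <stddef.h>

/* Shift every element one slot to the right and place x at index 0.
   The caller must provide an array with capacity for n + 1 elements.
   Returns the number of shifts performed, i.e. the "units of time"
   under the lecture's one-shift-equals-one-unit assumption. */
size_t insert_at_front(int *a, size_t n, int x) {
    for (size_t i = n; i > 0; i--)   /* shift a[n-1]..a[0] to the right */
        a[i] = a[i - 1];
    a[0] = x;
    return n;                        /* n elements were shifted */
}
```

Inserting into an array of 3 elements returns 3 (three shifts); with 10,000 elements it would return 10,000, matching the unit-time counts in the lecture.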

02:34

Therefore, if the size of the input is n, then f(n), a function of n, denotes the time complexity. In other words, f(n) represents the number of instructions executed for the input value n. Let's consider an example to understand this statement. Here, I have a printf function within a loop which runs from zero to n minus one. Let's say this instruction takes one unit of time; this is my assumption. Then we are executing this instruction n number of times, so it is clear that f(n) will be equal to n, because each instruction takes one unit of time and we execute it n times. That is why it depends on the size of the input n, and f(n) will be equal to n: f(n) is keeping count of the number of instructions executed for the input value n. This will give us the time complexity.
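The printf-in-a-loop example the lecture describes can be written out as follows; the counter is my own addition, there to make the instruction count explicit:

```c
#include <assert.h>
#include <stdio.h>

/* Run the lecture's loop and count how many times the printf
   (the assumed one-unit instruction) executes. The count equals n,
   so f(n) = n for this program. */
long print_n_times(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {   /* runs from 0 to n - 1 */
        printf("Hello\n");          /* assumed to take one unit of time */
        count++;
    }
    return count;                   /* number of instructions executed */
}
```

Calling `print_n_times(5)` executes the instruction five times and returns 5, i.e. f(5) = 5.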

03:40

But how to find f(n)? Finding f(n) in small programs is easy, but as programs become more complex, finding f(n) is not that easy. We can compare two data structures for a particular operation by comparing their f(n) values. We are interested in the growth rate of f(n) with respect to n; this is very important, because it might be possible that for a smaller input size one data structure seems better than the other, while for a larger input size it does not. So we have to try different values of n and find the corresponding f(n) values, not calculate f(n) for just one value of n. This concept is applicable to comparing two algorithms as well; we can compare two algorithms in the same way we compare two data structures. Again, we are interested in the growth rate of f(n).

04:42

Now, how do we know the growth rate of f(n)? Let's consider an example. Let's say we have f(n) = 5n² + 6n + 12. This represents the time, or the number of instructions executed, which of course depends on the input value n.

04:55

Let's say initially we take n equal to 1. Then the percentage of running time due to 5n² is 5 divided by (5 + 6 + 12), multiplied by 100 to get the percentage, which is 21.74 percent. So 5n² contributes 21.74 percent of the running time when n equals 1. The percentage of running time due to 6n is 6 divided by (5 + 6 + 12), times 100, which is 26.09 percent; 6n is contributing more than 5n². And the percentage of running time due to 12 is 12 divided by (5 + 6 + 12), times 100, which is 52.17 percent. It seems like most of the time is consumed by the constant term. But wait, we have to see the growth rate; we cannot say right now that the maximum amount of time is taken by the 12. Let's see the other input values as well.

06:04

Let's say n is equal to 10. In that case, 5n² contributes 87.41 percent of the running time; you can clearly see the growth from 21.74 percent to 87.41 percent. Meanwhile, the share of 6n decreases from 26.09 percent to 10.49 percent, and the 12, which we thought was taking most of the time, now takes just 2.10 percent. Now let's take n equal to 100. In that case, 5n² contributes the maximum share, 98.79 percent, while 6n takes just 1.19 percent and 12 takes just 0.02 percent. If you increase n further, 5n² takes 99.88 percent of the time, while 6n takes 0.12 percent and 12 takes just 0.0002 percent; these are almost negligible. Most of the time is consumed by 5n². So, from this fact it is clear that for larger values of n, the squared term consumes almost 99 percent of the time. If you had taken n equal to 1 and simply said that the 12 takes most of the time, the reality would be different: as you increase the value of n, you can clearly see which term is taking most of the time.
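The percentage breakdown above can be reproduced with a short helper; this is my own sketch, not the lecture's code, and the function name `term_shares` is an assumption:

```c
#include <assert.h>

/* Percentage of f(n) = 5n^2 + 6n + 12 contributed by each term,
   written through the output pointers sq, lin, and cst. */
void term_shares(double n, double *sq, double *lin, double *cst) {
    double a = 5.0 * n * n;   /* quadratic term */
    double b = 6.0 * n;       /* linear term    */
    double c = 12.0;          /* constant term  */
    double total = a + b + c;
    *sq  = 100.0 * a / total;
    *lin = 100.0 * b / total;
    *cst = 100.0 * c / total;
}
```

For n = 1 this reproduces the 21.74 / 26.09 / 52.17 percent split; by n = 1000 the quadratic term's share has grown to 99.88 percent.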

07:34

Now, let's visualize the growth rate. You can clearly see that the square term is taking most of the time. You can also see the share of the constant term: initially it is high, but later it declines quickly, and similarly for the linear term. So the square term, or quadratic term, is taking most of the time. Therefore, there is no harm in eliminating the rest of the terms, as they are not contributing much to the time. We can eliminate 6n + 12; there is no problem in eliminating these terms, because we know that the 5n² term takes most of the time: 99.88 percent. We are getting the approximate time complexity, and we are satisfied with this result, because this approximate result is very near to the actual result. If we take f(n) equal to 5n², there is no harm; we know it accounts for 99.88 percent of the time, or even more as n grows. This concept of calculating the approximate measure of time complexity is called asymptotic complexity.
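To see numerically why dropping 6n + 12 is harmless, one can check that the ratio f(n) / 5n² approaches 1 as n grows. The following check is my own addition, not part of the lecture:

```c
#include <assert.h>

/* Ratio of the full f(n) = 5n^2 + 6n + 12 to its dominant term 5n^2.
   As n grows the ratio tends to 1, which is why keeping only 5n^2
   barely changes the measured time complexity. */
double ratio_to_dominant(double n) {
    double f = 5.0 * n * n + 6.0 * n + 12.0;
    return f / (5.0 * n * n);
}
```

At n = 10 the ratio is 1.144; at n = 1000 it is roughly 1.0012, so the dominant term alone is already within about 0.12 percent of the full function.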

08:49

We are not calculating the exact running time here. We eliminate the unnecessary terms and concentrate only on the term which takes most of the time. This approximate measure of time complexity is called asymptotic complexity, and this is what we are interested in. We are not interested in calculating the exact running time, nor in which machine we execute our algorithm on; we are only interested in the approximate measure of time complexity, and we are satisfied with this result. We already know that time complexity depends on the size of the input: as you increase the size, the time complexity changes. Okay friends, this is it for now. Thank you for watching this presentation.
