[SER222] Asymptotics (2/5): Upper Bounds

Ruben Acuna
30 Aug 2017 · 15:10

Summary

TL;DR: This video explains the idea of an upper bound using mathematical functions. The presenter shows how to graph functions to understand when one function is larger than another past a specific point, called x0. He then introduces Big O notation, which formally defines upper bounds for analyzing algorithm running time. Through examples, he explains how to drop non-dominant terms and coefficients to find the Big O of a function, stressing the importance of keeping the upper bound as close to the actual function as possible.

Takeaways

  • 📊 The function g(n) appears to be an upper bound for f(n) past a specific point called x0.
  • 📝 The notation f(n) = O(g(n)) says that f(n) has an upper bound of g(n).
  • 🔢 The mathematical definition of an upper bound requires |f(n)| ≤ C * |g(n)| for n > x0, where C is a constant (written out formally just after this list).
  • 📈 When graphed, g(n) stays above f(n) past the crossover point x0.
  • ⚖️ To find the upper bound (Big O) of a function, drop the non-dominant terms and the constants.
  • ✔️ Example: f(n) = 2n + 3 is O(n): the dominant term is 2n, and its coefficient is dropped.
  • 🔍 The upper bound can be proven formally by finding a value of C that satisfies the inequality.
  • ⚙️ The process of finding a function's upper bound consists of simplifying its terms and applying the definition of Big O.
  • ⏲️ The constant in an upper bound provides flexibility, allowing g(n) to be scaled so that it is a valid bound.
  • 🔢 In terms of logarithms, all logarithmic functions share the same upper bound, because changing the base only introduces a constant factor.
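
For reference, the definition these points describe can be written out in LaTeX (C is the scaling constant and x0 the crossover point mentioned above):

    f(n) = O(g(n)) \iff \exists\, C > 0,\ \exists\, x_0 \ \text{such that}\ |f(n)| \le C \cdot |g(n)| \ \text{for all}\ n > x_0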

Q & A

  • What is an upper bound in terms of mathematical functions?

    -An upper bound is a function that stays above another function from a certain point on. That is, past that point, g(n) is greater than or equal to f(n).

  • What notation is used to express an upper bound?

    -The notation for an upper bound is Big O, written f(n) = O(g(n)), which means that f(n) is bounded above by g(n).

  • What does x0 mean in the context of an upper bound?

    -x0 is the crossover point past which g(n) is always greater than f(n). It is the point from which the upper bound holds.

  • Why is the constant C important in the definition of an upper bound?

    -The constant C lets g(n) be scaled so that it stays above f(n) everywhere past x0. It gives the definition flexibility by allowing g(n) to be rescaled.

  • How is the upper bound of a function f(n) determined?

    -To determine the upper bound, drop every term except the dominant one, then drop that term's coefficient. This gives an estimate of the function's maximum growth.

  • What is the procedure for proving that a function f(n) is bounded above by g(n)?

    -To prove that f(n) is bounded above by g(n), show that f(n) ≤ C * g(n) for some constant C and for n greater than x0. In practice this means solving an inequality that shows the function satisfies the formal definition (a small numeric sanity check appears after this list).

  • What is a dominant term in upper-bound analysis?

    -The dominant term is the term of the function that grows fastest as n increases. It determines the function's behavior for large values of n.

  • Why can logarithms of different bases be considered equivalent in Big O terms?

    -Logarithms of different bases are equivalent in Big O because changing the base only multiplies by a constant, and that constant can be absorbed into the constant C in the definition of the upper bound.

  • What role do absolute values play in the formal definition of an upper bound?

    -The absolute values ensure that only non-negative quantities are considered, since running time or operation counts cannot be negative. They appear in the mathematical definition, but in practice they rarely affect the analysis.

  • What happens if a function crosses its upper bound more than once?

    -Before the point x0 the functions may cross several times, but the upper bound is only guaranteed from x0 on, where g(n) is always greater than or equal to f(n). Fluctuations before x0 do not affect the upper bound.
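
As an added illustration (this code does not appear in the video), the bounds from the two examples in the outline below can be spot-checked numerically. This is a minimal Python sketch; it assumes the logarithm is base 2, takes C from the video's "sum of the coefficients" rule, and checking finitely many values is only a sanity check, not a proof.

    import math

    def bounded_above(f, g, C, x0, n_max=10_000):
        """Check |f(n)| <= C * |g(n)| for every integer n with x0 < n <= n_max."""
        return all(abs(f(n)) <= C * abs(g(n)) for n in range(x0 + 1, n_max + 1))

    # Example 1: f1(n) = 2n + 3, claimed to be O(n) with C = 2 + 3 = 5, x0 = 0.
    print(bounded_above(lambda n: 2 * n + 3, lambda n: n, C=5, x0=0))       # True

    # Example 2: f2(n) = log2(n) + 35n^2, claimed to be O(n^2) with C = 36, x0 = 0.
    print(bounded_above(lambda n: math.log2(n) + 35 * n ** 2,
                        lambda n: n ** 2, C=36, x0=0))                      # True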

Outlines

00:00

📊 Introduction to the idea of an upper bound

In this segment, the presenter introduces the concept of an upper bound using two arbitrary functions, f(n) and g(n), drawn on a graph. He observes that g(n) appears to be larger than f(n) past a specific point, x0, which suggests that g(n) could be an upper bound on f(n). He notes, however, that without a formal mathematical definition this is only a guess based on the graph.

05:02

✏️ Mathematical definition of the upper bound

The formal definition of an upper bound is presented in mathematical notation: f(n) is O(g(n)) if there exists a constant C such that the absolute value of f(n) is less than or equal to C times the absolute value of g(n) for values of n greater than a specific point x0. The point of the definition is to guarantee that f(n) does not exceed g(n) for large values of n, staying within the upper bound.

10:03

📉 Basic example of computing O(f(n))

This segment explains the process for finding a function's upper bound. The example f(n) = 2n + 3 shows how the non-dominant term (here, the +3) and the coefficient of the dominant term (the 2 in 2n) are dropped, giving O(n). The result is then checked against the mathematical definition, and the presenter explains how to choose a suitable constant C to prove that the bound is valid.
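
Written out, the proof sketched here takes C to be the sum of the coefficients (2 + 3 = 5), exactly as in the video:

    |f_1(n)| = 2n + 3 \le 2n + 3n = 5n = 5 \cdot |g(n)| \quad \text{for all } n \ge 1,

so f_1(n) = O(n) with C = 5, the bound holding for n > 0 as stated in the transcript.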

15:05

🧮 Example with a logarithmic function

A second example uses a slightly more involved function, f(n) = log(n) + 35n². The same steps show that the upper bound is O(n²). Again, proving that this is the correct bound means checking the formal definition and finding a suitable value of C, guaranteeing that the function does not exceed its upper bound for large values of n.
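
Written out the same way, with the logarithm taken as base 2 and C again the sum of the coefficients (1 + 35 = 36); the exact cutoff is not shown in the transcript, so n ≥ 1 is simply one choice that works:

    |f_2(n)| = \log_2 n + 35n^2 \le n^2 + 35n^2 = 36n^2 = 36 \cdot |g(n)| \quad \text{for all } n \ge 1,

using \log_2 n \le n \le n^2 for n \ge 1; hence f_2(n) = O(n^2) with C = 36.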

🔍 Final notes on upper bounds and logarithms

The video closes with an explanation of why logarithms of different bases (for example, log base 2 and log base 10) give the same upper bound. Converting between logarithm bases only multiplies by a constant, which is absorbed into the constant C in the definition of the upper bound. In practical terms, this means all logarithmic functions have the same upper bound in the context of O(g(n)).
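
The change-of-base identity behind this claim, written out:

    \log_{10} n = \frac{\log_2 n}{\log_2 10} \le C \cdot \log_2 n \quad \text{for any } C \ge \frac{1}{\log_2 10} \approx 0.30,

so \log_{10} n = O(\log_2 n), and the symmetric argument gives \log_2 n = O(\log_{10} n).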

Keywords

💡Upper bound

In the video, an upper bound refers to a function that limits another function from above: past a certain point, f(n) never grows faster than g(n). This idea is central to algorithm analysis for understanding how a function's complexity behaves.

💡f(n)

f(n) is the function whose growth or complexity is being analyzed. In the video, f(n) is the 'red' function, and its behavior is compared against g(n) to decide whether g(n) acts as an upper bound.

💡g(n)

g(n) is an arbitrary function used as a reference for comparison with f(n). If g(n) stays above f(n) past a point x0, then g(n) is considered an upper bound on f(n). This comparison is essential for reasoning about an algorithm's efficiency.

💡Crossover point (x0)

x0 is the point from which g(n) becomes an upper bound on f(n). Before this point f(n) may fluctuate or behave unpredictably, but from x0 on, g(n) is always above f(n). This concept is crucial for determining where the upper bound starts to hold.

💡Big O (O(g(n)))

Big O is the mathematical notation used to describe the upper limit of a function, that is, the fastest rate at which it can grow. The video explains that if f(n) = O(g(n)), then f(n) is bounded above by g(n) from x0 on.

💡Constant C

The constant C is a scale factor in Big O notation that lets g(n) be adjusted so it is large enough to act as an upper bound for f(n). The video points out that C is not fixed, which gives the flexibility to pick a value that makes the inequality hold.

💡Growth

A function's growth describes how its value changes as n increases. The video compares f(n) and g(n) to determine which grows faster and therefore whether g(n) is an upper bound. This is essential for analyzing algorithms and their efficiency.

💡Dominant term

The dominant term of a function is the one that contributes the most to its growth as n increases. In the video, only the dominant term is kept when determining the upper bound, since it defines the function's asymptotic behavior.

💡Coefficient

The coefficient is the number multiplying the variable in a term of a function. In Big O analysis, coefficients are ignored when finding the upper bound, since they do not affect the function's asymptotic behavior.

💡Logarithm (log n)

The logarithm appears in the video as part of a function that grows more slowly than the polynomial terms around it. In Big O notation, logarithms of different bases are considered equivalent because they differ only by a constant factor, which can be absorbed into the constant C.

Highlights

Introduction to the concept of an upper bound using two functions, f(n) and g(n), with a graphical demonstration.

Explanation of how g(n) is an upper bound for f(n) past a specific point, referred to as x0.

Definition of Big O notation: f(n) = O(g(n)) represents that f(n) has an upper bound of g(n).

Mathematical definition of Big O: f(n) is less than or equal to a constant C times g(n) for n greater than x0.

Emphasis on dropping non-dominant terms and coefficients to simplify determining Big O for a given function.

First example with the function f1(n) = 2n + 3: The Big O is determined as O(n) after dropping the constant and coefficient.

Introduction of C as a constant used to adjust and scale g(n) for Big O calculations.

Second example: f2(n) = log(n) + 35n^2. The Big O is determined as O(n^2) after eliminating the non-dominant log term.

Step-by-step demonstration of proving the Big O bound for a function using algebra and substitution.

Explanation of why logarithmic functions with different bases can have the same Big O due to constant time conversion.

The idea that Big O provides a worst-case time complexity and allows flexibility in adjusting constants to ensure f(n) remains bounded.

Emphasis on the importance of a 'tight' upper bound, avoiding unnecessarily large upper bounds like O(2^n) (illustrated after this list).

Graphical interpretation of upper bounds, showing how g(n) must remain above f(n) past a certain point (x0).

Final note on how all logarithmic functions with different bases resolve to the same Big O due to constant time conversions.

Practical takeaway: the process of simplifying and proving Big O bounds is essential for analyzing algorithm performance.
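
To make the point about tightness concrete (an added illustration, not taken from the slides), the same f(n) = 2n + 3 satisfies many upper bounds:

    2n + 3 = O(n), \qquad 2n + 3 = O(n^2), \qquad 2n + 3 = O(2^n),

but only O(n) is tight; the larger bounds are technically correct yet say almost nothing about how the function actually grows.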

Transcripts

play00:05

- In this video, I want to elaborate

play00:06

on the idea of an upper bound.

play00:08

So, we only introduced the basic idea,

play00:10

but we're gonna talk through this

play00:11

with some mathematical notation.

play00:13

So, take a look at the graph on the right-hand side.

play00:16

There, I have two arbitrary functions,

play00:18

one of them is red and one of them is blue.

play00:20

Let me go ahead and label them.

play00:21

So the red one, I'm going to call that my f of n.

play00:26

So maybe this is somehow the function I'm trying to bound.

play00:29

Then over here is the function,

play00:31

I'm gonna call it g of n.

play00:32

G of n is kind of another typical function name, right,

play00:35

so we'll just stick with g of n.

play00:37

So we have f of n and g of n side-by-side.

play00:40

Now if I'm looking at this graph we see a trend.

play00:42

That trend is that g of n is going to be,

play00:45

past a certain point, always larger than f of n.

play00:49

So this is basically our demonstration,

play00:51

our graphical demonstration of the idea that g of n

play00:55

is probably an upper bound on f of n.

play00:57

Now of course from this picture I don't know for sure.

play00:59

Right, presumably these functions go on forever,

play01:01

and right now I don't really have the evidence

play01:03

to say g of n is for sure an upper bound on f of n.

play01:07

In fact, right, if we think about mathematical terms,

play01:09

well, we haven't even looked at a definition

play01:11

for what something means to be an upper bound,

play01:12

so we don't really know for sure,

play01:14

but graphically we can kind of get an idea

play01:15

of what is maybe happening.

play01:17

So g of n appears to be past a certain point

play01:20

always be higher than f of n.

play01:22

Now that particular point we'll actually give it a name,

play01:24

so that we'll call x sub zero.

play01:27

So this little intersection here.

play01:28

Right, so these functions intersect a couple times, right,

play01:30

and there's sometimes that blue is greater than red,

play01:32

red is greater than blue,

play01:34

but past this particular point,

play01:35

past this x0 point, our g of n, right,

play01:38

our blue function is always greater than,

play01:40

or appears to be always greater than f of n, alright?

play01:42

So this is kind of our last kind of point of intersection,

play01:45

it's going to be a particular point

play01:49

past which our upper bound idea holds.

play01:51

Right, so upper bounds are always going to be expressed

play01:53

in terms of well, they are good past this point.

play01:56

Prior to that point, I don't really know.

play01:58

The idea being is that all sorts of different things

play02:00

could happen with very small programs,

play02:01

maybe you have some constant overhead,

play02:03

and that initial region right before this x naught,

play02:07

or x0, right, this region over here,

play02:09

my analysis is basically gonna be

play02:13

unpredictable because of all these little fluctuations

play02:15

that happen, right, or special cases that happen,

play02:17

so I don't really want to mess with that stuff.

play02:20

Alright, so graphically this is the idea of an upper bound,

play02:22

right?

play02:23

We have my g of n always above f of n, past some region.

play02:27

So how do we define that mathematically?

play02:29

So now we're gonna get our hands a little bit dirty.

play02:32

So I have a definition here, and I'm going to read it.

play02:34

Definition.

play02:35

We write that f of n equal O of n,

play02:38

right, and this is basically the proper syntax,

play02:43

f of n equal O of g of n to say that f of n

play02:47

has an upper bound, right, keywords there,

play02:51

of g of n.

play02:53

In general, whenever I write O of n,

play02:54

we're basically saying, oh,

play02:55

the thing on the left, right, f of n, has some upper bound,

play02:57

and then inside that O of n, right, the stuff in here,

play03:01

right, this is what's actually upper bounded by, right?

play03:03

G of n is just an arbitrary function, right?

play03:05

It could be anything in there, right?

play03:06

Could be n, it could be n squared, it could be whatever.

play03:08

For now we're just saying it's generic.

play03:10

Now when we say where we're upper bounded,

play03:12

if we want to be more precise,

play03:14

we'll say that f of n, right,

play03:15

will take at most this number of operations, right?

play03:19

It won't ever take more time than what that upper bound is.

play03:22

We don't want that red line going above the blue line.

play03:23

That would be against kind of our intuition

play03:26

for what does it mean for something to be an upper bound.

play03:28

And formula we'll see is the following:

play03:30

f of n equal O of g of n if and only if

play03:35

the absolute value of f of n is less than or equal

play03:37

to a constant C times the absolute value of g of n,

play03:42

for n greater than x0, where x0 is some constant.

play03:47

So that'd be our formal definition.

play03:48

Recall x sub zero from our previous slide

play03:51

is that cutoff point, right,

play03:52

at which point the upper bound actually holds.

play03:54

Before then, anything could happen,

play03:55

after it then we're good.

play03:58

So our main definition lies here.

play04:01

F of n equal O of g of n if and only if.

play04:05

So mechanically, we're saying, right,

play04:06

is that f of n is big O, right?

play04:09

That's kind of our term for an upper bound.

play04:11

We'll say that it's big O, that's the common nomenclature.

play04:15

If, right, this right-hand side holds, right?

play04:17

This absolute value of f versus the absolute value of g.

play04:25

Our lesser values don't really matter to us all that much.

play04:27

Remember that these are supposed to be growth functions

play04:29

that we analyze, right?

play04:30

And it doesn't really make sense for a function

play04:32

to take negative time, right,

play04:34

or negative number of operations,

play04:36

so in general we don't really worry

play04:37

about those absolute signs,

play04:39

although in the mathematical kind of definition,

play04:41

it's stating they are there

play04:42

and so we should leave them there.

play04:43

They just don't really complicate things for us

play04:45

in the practical world.

play04:48

And so what that bit of text there is saying, right,

play04:50

is that f is upper bounded, right, by g of n,

play04:54

if f is always less than some constant times g of n, right?

play04:58

So g of n basically is some other arbitrary function, right?

play05:01

And then that function, right,

play05:03

basically is growing as fast as f, right?

play05:05

We don't want f to somehow jump up in that graph

play05:08

and spring over blue, you know?

play05:10

The upper bound function, we don't want that to happen.

play05:12

So g of n is basically obligated

play05:13

to always be above f of n.

play05:16

Now there's this little C value in there.

play05:17

That C value is a constant.

play05:19

Basically what that's saying is in our big O, right,

play05:22

somehow our big O is some function,

play05:24

it could be n squared, right,

play05:26

I'm actually allowed to have some constant in front of there

play05:28

that I can use to scale it.

play05:29

I can use to make it bigger, right?

play05:31

That coefficient, basically I'm allowed to play with it.

play05:33

Notice that C isn't assigned a particular value here,

play05:35

so allowed to basically change.

play05:37

So it's somehow a little bit of flexibility

play05:39

when we're writing an upper bound, right?

play05:43

So you can maybe picture that blue line, right?

play05:45

Maybe it could go up even higher,

play05:47

or maybe it could even come down a little bit.

play05:49

I'm allowed basically in the definition

play05:51

to pick whatever coefficient I have in that function

play05:53

that I use for my G of n, right?

play05:55

I don't have to care about specifically what it is,

play05:57

I'm allowed to pick something that's convenient.

play05:59

So if I want to I can make it really big,

play06:00

and maybe then it's easier for me to show

play06:02

that a particular g of n is actually

play06:04

the upper bound for an f of n.

play06:06

So that's actually quite nice in terms of the math, right,

play06:08

it gives us a little bit of flexibility.

play06:10

And then of course, x sub zero is that fixed point

play06:12

after which my upper bound is actually supposed to hold.

play06:15

So that's our upper bound definition.

play06:20

So now let's take a look at how I would actually

play06:23

go to a function and figure out what is the big O for it,

play06:25

right, what is the upper bound for it.

play06:28

So the process is pretty simple.

play06:29

Basically say is given some function f of n,

play06:32

drop all but the dominant term,

play06:35

and then drop the coefficient

play06:36

on the term that's left over.

play06:38

So basically we're just throwing away a lot of stuff.

play06:40

The result from that will be the big O

play06:43

upper bound for that particular function.

play06:46

Below I have a couple of examples.

play06:48

So let's say over here

play06:50

that my f of n is 2n plus three.

play06:54

Well, try to find our rules.

play06:56

First thing we want to do is drop all but the dominant term.

play06:58

Here that 2n is the dominant term because it grows.

play07:00

The plus three is just a constant,

play07:02

it's not gonna do anything.

play07:03

So we drop this plus three.

play07:06

And then we have 2n, while our second part of our rule there

play07:09

for finding big O, right, for finding separate bound,

play07:12

is going to be to drop the coefficient,

play07:13

so I'll drop the two.

play07:15

Result is that our big O of f1 is n.

play07:19

So over here, right, g of n, right, is equal to n.

play07:24

Right, so, maybe if I had to write the full syntax here,

play07:27

right, f1 of n is big O of n, right?

play07:37

You remember generically, before we said that f of n

play07:39

was big O of g of n,

play07:40

but now we know what g of n is, right?

play07:42

It's just n, so I can do substitution there.

play07:44

So f1 of n, right, is O of n, right?

play07:48

So that function there, right, 2n plus three,

play07:51

is upper bounded by n, right?

play07:54

By g of n equal to n.

play07:57

So it's a linear function being bounded

play07:58

also by another linear function.

play08:00

Now of course the reason why that works

play08:01

is because there's that free constant

play08:03

we're allowed in front of n, right?

play08:05

So I can actually pick something like, oh hey,

play08:07

in g of n, right, that n maybe has a 10 in front of it.

play08:12

Well, if I have 10n versus 2n plus three,

play08:15

well, for decently large values, right,

play08:17

that 10n is gonna be much larger than 2n.

play08:20

So for sure that's gonna be an upper bound

play08:22

that works for us just because we're allowed

play08:23

to pick that constant there.

play08:25

Now, if I wanted to prove that mathematically,

play08:30

I would basically need something like this over here.

play08:32

And it's really simple here for this example,

play08:34

just because we're using a simple function,

play08:37

but what we need to do if we want to actually

play08:38

prove that that bound is correct, right?

play08:40

Initially, what we talked through

play08:41

is kind of the heuristic process

play08:42

for figuring out what is the big O of a particular f of n,

play08:45

but what if I wanted to actually prove, right,

play08:46

that that particular bound is correct?

play08:48

What happens is that then I need to show

play08:50

that that definition up top holds.

play08:52

So that definition, right,

play08:54

was something like, you know,

play08:57

f of n less than or equal to g of n.

play09:00

Right, so what I would do, right,

play09:04

is basically try to show that these things work out.

play09:09

And basically there's some substitution stuff that happens,

play09:11

right, so over here I have my left-hand side, right,

play09:18

is supposed to be absolute of f of n.

play09:21

Here, right, my f of n was 2n,

play09:24

plus three,

play09:28

and then this is less than,

play09:30

so let's assume these would sort of be absolute bars here,

play09:33

let me make this a little more obvious.

play09:35

Less than, and c times g of n.

play09:39

So c absolute value g of n we said is n over here,

play09:44

close that absolute.

play09:45

Right, so if I were to just do that substitution,

play09:48

based on my f of n and my g of n

play09:49

that I got from that basic process that was defined,

play09:52

they have this expression like this.

play09:54

And this expression basically need to argue

play09:56

that it is true, right?

play09:57

And if I can argue that it's true,

play09:58

then I've proved the upper bound.

play10:01

In this case, it becomes mini-algebra.

play10:02

Basically what will happen is we'll drop our absolute value

play10:04

just because it doesn't really matter

play10:05

for our practical examples,

play10:07

and then we'll basically say, oh hey,

play10:09

I need to find out some value of c,

play10:12

such that this whole thing is true.

play10:16

Alright, pick some coefficient.

play10:17

And I could pick a decent coefficient here.

play10:20

Usually what you'd just do is pick that coefficient

play10:21

to be the sum of the coefficients

play10:22

of everything on the left-hand side.

play10:25

Maybe you can think about why that would work.

play10:27

And so I can pick c to be something like five, right,

play10:29

two plus three, that actually gives me

play10:31

this expression down here that I've written in the slides,

play10:33

right, and then this bound, right,

play10:35

this bound will hold

play10:36

for some sufficiently large values of n.

play10:38

And what I can actually do is I can actually solve

play10:40

that thing for n and basically figure out,

play10:42

you know, what is gonna be the cut off by which it applies.

play10:44

So in this case, we're saying

play10:46

that c is gonna be equal to five,

play10:48

and this cutoff, this upper bound will apply

play10:52

for n greater than zero.

play10:53

So assume that n is integer types,

play10:55

basically we're saying that one or greater,

play10:56

this thing will hold.

play10:57

Okay, so that's a simple example

play10:59

showing that our bound holds, right?

play11:00

So we have the basic idea that if I have a growth function,

play11:03

where I can use that heuristic process

play11:04

to figure out what the big O is,

play11:06

and that gives me a suggestion of what it is,

play11:08

but really in terms of math

play11:10

we want to show or prove that that is the correct answer,

play11:12

and the way we do that is by basically showing

play11:14

that the definition actually holds.

play11:17

So let's just do one more example.

play11:19

Over here, what I have my f2,

play11:21

f2 is log n plus 35 n squared.

play11:25

Once I want to figure out the big O,

play11:26

well, what I do is I drop all but the dominant term,

play11:29

in this case I'm dropping the log term,

play11:31

and then I drop the coefficient on the remaining term.

play11:33

Remaining term doesn't have a coefficient,

play11:35

it's really just one behind the scenes,

play11:36

so I can leave it alone.

play11:37

So f2 is going to be big O of n squared.

play11:45

And again, I'm just writing kind of the proper notation

play11:48

that is also correct.

play11:49

So f2 is O of n squared.

play11:53

Okay, so we have that.

play11:56

On the right-hand side, then,

play11:57

you have kind of the derivation to show that that's true,

play12:00

and it's following the same process

play12:01

as we did for the last function.

play12:02

So basically, we look back up at the definition,

play12:04

f less than g of n, right?

play12:06

Fill in our blanks, our f of n we know,

play12:08

our g of n that we're guessing by doing

play12:10

that elimination of terms,

play12:11

and then we figure out what is the value of c

play12:14

by adding in all the coefficients on the left-hand side,

play12:18

that gives us an expression,

play12:19

and then we can basically just solve for n, right,

play12:20

and figure out when is this thing gonna hold.

play12:23

So it's not too bad for these simple functions.

play12:25

When you're doing something maybe

play12:26

with a more complicated function,

play12:28

then that'll take you a little bit more time,

play12:29

but the overall process, right, will be the same,

play12:32

maybe there's just more algebraic steps involved.

play12:35

Okay, so our big O, right, again,

play12:37

is this general term that we use

play12:39

when we're saying that something is an upper bound.

play12:41

So I have f of n, right, some growth function,

play12:43

and I'm deriving here my upper bounds for that, right,

play12:46

which is a big O expression.

play12:48

Keep in mind the big O is always above another function.

play12:52

This is good because it gives us kind of our worst case,

play12:54

where we know our program's not gonna take longer

play12:56

than this amount of time, but on the negatives, right,

play13:00

big O is basically really easy to write,

play13:02

because we can just say things like,

play13:03

oh, something is big O of two to the n, right?

play13:06

Something that's exponential.

play13:07

Well, obviously my program is going to be big O of that,

play13:10

right, that function grows so quickly

play13:13

that pretty much everything that I could write, right,

play13:15

in a program will take less time to run than that.

play13:18

So sometimes people kind of wave their hands,

play13:20

and just make their upper bounds really big,

play13:22

but that's not useful for us.

play13:24

Really what you want to do

play13:24

is you want to make a big O, right, an upper bound,

play13:27

that's as close as we possibly can be to the actual f of n.

play13:30

Right, that gives us kind of more information,

play13:32

it's closer to it.

play13:33

Something that's really large, right,

play13:35

it's just kind of like waving our hands,

play13:36

and there's just so many functions that are less than it

play13:38

that doesn't really tell us anything about that f of n.

play13:41

Alright, so I have one final note here about upper bounds,

play13:44

and that's about the use of logs.

play13:46

So it would be true to say the following:

play13:48

log sub two of n is big O of log sub two of n.

play13:52

Also, log sub 10 of n is equal to O of log sub two of n.

play13:58

So I have basically, my left-hand sides are different,

play14:02

and my right-hand sides are the same,

play14:03

so I'm saying two different logs resolve to the same big O,

play14:06

resolve to the same upper bound.

play14:08

Take a moment to ask yourself,

play14:09

why might that be the case?

play14:13

Okay, so the answer there is simply put,

play14:16

log conversion is a constant time operation, right?

play14:19

So if I want to switch from log two of n to log 10 of n,

play14:21

right, if I wanted to do a change of base operation,

play14:24

I basically just divide by something like, I don't know,

play14:26

a log of two over base 10 or something,

play14:30

or log base 10 of two.

play14:32

All I need to do is that constant time operation,

play14:35

and that constant time operation,

play14:36

basically what happens is it gets folded

play14:38

into the constant within the definition of big O.

play14:41

So over here I have this c value here.

play14:44

That c value is allowed to be anything, right,

play14:46

so that value there basically can soak up the conversion,

play14:49

right, that little constant that results

play14:52

from a change of base operation.

play14:55

So when big O, basically we can be a little bit lazy

play14:57

and say all logs look the same, right?

play14:59

Because constant time changes those bases,

play15:02

and that constant time just gets folded into that c

play15:04

in the definition of the function.

play15:06

So they're all the same from the point of view

play15:09

of an upper bound.


Related tags: Big O, Upper Bound, Mathematics, Algorithm Analysis, Function Growth, Complexity, Mathematical Notation, Programming, Performance, Functions