MINI-LESSON 8: Power Laws (maximally simplified)

N N Taleb's Probability Moocs
26 May 2021 · 23:51

Summary

TL;DR: This lecture delves into the concept of power laws and the Pareto distribution, illustrating their prevalence through the 80/20 rule. It contrasts power laws with Gaussian and sub-exponential distributions, highlighting their distinctive properties, such as constant exceedance-probability ratios in the tail and their scale-free nature. The speaker explains the implications of power laws in real-world scenarios, such as wealth distribution and company sizes, emphasizing that they remain intuitive and workable even when traditional statistical measures like the mean and variance do not exist.

Takeaways

  • 📚 The lecture discusses the concept of power laws and the Pareto distribution, introducing them in the context of different distribution classes.
  • 🔍 The Pareto principle, often cited as the 80/20 rule, is highlighted as a real-world example of power law distribution, illustrating how a small percentage of people own a large percentage of resources.
  • 📈 Power laws are characterized by a constant ratio in the tail of the distribution, unlike Gaussian distributions where the ratio changes as you move further from the mean.
  • 📊 The script differentiates between three main classes of distributions: Gaussian, subexponential, and power laws, each with their own properties and implications for statistical inference.
  • 🤔 The Gaussian distribution is safe for statistical inference due to its convergence properties, but power laws present challenges due to their fat tails and lack of traditional statistical measures like mean and variance.
  • 📉 Power law distributions are intuitive and can be understood through the concept of recursion, where the same 80/20 rule can be applied repeatedly to smaller subsets.
  • 📝 The script explains that power laws can be mathematically expressed as a constant times x to the power of negative alpha for large x, which is key to understanding their behavior.
  • 📊 The log-log plot is a useful tool for identifying power laws: for a distribution with a power-law tail, the plot of log exceedance probability against log x becomes a straight line with slope minus alpha (a numerical sketch follows this list).
  • 🔢 The 'alpha' parameter in power laws determines the thickness of the tail; lower alpha values indicate fatter tails and higher values result in thinner tails approaching Gaussian behavior.
  • 🚫 Power laws with alpha less than or equal to one have no mean, and those with alpha less than or equal to two have no variance, challenging traditional statistical analysis.
  • 🌐 The lecture concludes by emphasizing the importance of understanding power laws for statistical inference and for developing a more empirical and mathematically solid view of the world.
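
The tail behavior described in these takeaways is easy to check numerically. Below is a minimal sketch (not part of the lecture; the tail exponent alpha = 2 and the sample size are assumed for illustration) that simulates a Pareto variable and verifies the two diagnostics listed above: an exceedance-probability ratio that depends only on the multiple, and a log-log survival plot that is a straight line with slope minus alpha.

import numpy as np

rng = np.random.default_rng(0)
alpha, x_min, n_samples = 2.0, 1.0, 1_000_000

# Inverse-transform sampling: if U ~ Uniform(0,1), then x_min * U**(-1/alpha) is Pareto(alpha).
x = x_min * rng.random(n_samples) ** (-1.0 / alpha)

def survival(data, threshold):
    # Empirical exceedance probability P(X > threshold).
    return np.mean(data > threshold)

# Diagnostic 1: P(X > t) / P(X > 2t) is roughly 2**alpha = 4, wherever t sits in the tail.
for t in (2.0, 5.0, 10.0, 20.0):
    print(f"P(X>{t:g}) / P(X>{2 * t:g}) = {survival(x, t) / survival(x, 2 * t):.2f}")

# Diagnostic 2: the slope of log P(X > x) against log x is about -alpha.
thresholds = np.logspace(0.1, 1.5, 20)
log_surv = np.log([survival(x, t) for t in thresholds])
slope = np.polyfit(np.log(thresholds), log_surv, 1)[0]
print("fitted log-log slope:", round(slope, 2))   # close to -2.0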

Q & A

  • What is the Pareto Principle often associated with?

    -The Pareto Principle is often associated with the idea that 80% of the effects come from 20% of the causes, specifically in the context of wealth distribution where 20% of the people own 80% of the land.

  • What does the term 'fractal' imply in the context of the Pareto Principle?

    -In the context of the Pareto Principle, 'fractal' implies that the 80/20 distribution can be recursively applied at different levels, such as within the 20% itself, leading to further refinements like 50% of the land being owned by 1% of the people.

  • What are the three main classes of distributions mentioned in the script?

    -The three main classes mentioned are Gaussian (including a lower Gaussian tier such as the binomial and the Gaussian proper), subexponential but not power law (including the log-normal, exponential, Laplace, and gamma distributions), and power laws (including the Pareto, Student's t, and stable distributions).

  • What is special about the probability ratios in a power law distribution compared to a Gaussian distribution?

    -In a power law distribution, the ratio of exceedance probabilities P(X > x) / P(X > nx) depends only on the multiple n, not on x, whereas in a Gaussian distribution this ratio grows rapidly as x increases: the decline in probability accelerates rather than keeping a constant ratio.

  • Why is the Gaussian distribution considered safe for statistical inference?

    -The Gaussian distribution is considered safe for statistical inference because, by the law of large numbers and the central limit theorem, the average of a large sample converges rapidly, allowing predictable and reliable statistical analysis.

  • What does the term 'thick tail' or 'semi-fat tail' refer to in the context of distributions?

    -In the context of distributions, 'thick tail' or 'semi-fat tail' refers to distributions with heavier tails than the Gaussian that nevertheless do not follow a power law; extreme events are more likely than under a Gaussian but still rarer than under a power law.

  • How is the concept of 'scale invariance' related to power law distributions?

    -Scale invariance in power law distributions refers to the property that the ratio of exceedance probabilities P(X > x) / P(X > nx) depends only on the multiple n and not on the level x, so the tail looks the same at every scale; compact formulas follow this Q&A list.

  • What does the 'Lindy effect' imply about the life expectancy of non-perishable items or concepts?

    -The Lindy effect implies that the life expectancy of non-perishable items or concepts is proportional to their current age, suggesting that the longer something has already existed, the longer it is expected to continue to exist.

  • How does the script differentiate between Gaussian and power law distributions in terms of life expectancy?

    -The script differentiates by noting that under a Gaussian the expected additional life beyond a threshold shrinks toward zero as the threshold rises (the conditional expectation converges to the threshold itself), whereas under a power law the conditional expectation remains a fixed multiple of the threshold, which is the 'Lindy effect'.

  • What mathematical property characterizes power law distributions?

    -Power law distributions are characterized by the property that the probability of exceeding a value x, for large x, can be approximated as a constant times x to the power of negative alpha, which is indicative of the distribution's tail behavior.

  • What are the implications of different alpha values in power law distributions?

    -Different alpha values indicate the fatness of the tail: lower alpha values mean fatter tails (more extreme events), higher alpha values mean thinner tails. The Cauchy corresponds to alpha equal to one and has no mean; more generally, alpha of one or lower implies no mean, and alpha of two or lower implies no finite variance.
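
For a pure Pareto tail, the scale-invariance and Lindy answers above can be written compactly. These are standard properties of the Pareto survival function (minimum L, tail exponent alpha); the lecture states the behavior but does not write the formulas out:

P(X > x) = \left(\frac{L}{x}\right)^{\alpha}, \qquad x \ge L

\frac{P(X > x)}{P(X > n x)} = n^{\alpha} \qquad \text{(depends only on the multiple } n\text{, not on } x\text{)}

\mathbb{E}[X \mid X > K] = \frac{\alpha}{\alpha - 1}\, K \qquad \text{for } \alpha > 1 \text{ (a constant multiple of the threshold)}

\mathbb{E}[X^{p}] < \infty \iff p < \alpha \qquad \text{(no mean if } \alpha \le 1,\ \text{no finite variance if } \alpha \le 2\text{)}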

Outlines

00:00

📚 Introduction to Power Laws and Pareto Distribution

The speaker begins by expressing excitement about discussing power laws and the Pareto distribution. They introduce the concept by referencing the 80/20 rule, which originated from Vilfredo Pareto's observation of wealth distribution in Italy. The talk aims to contrast power laws with other distributions, highlighting the fractal nature of power laws where the same proportions recur at different scales. The speaker also outlines the three main classes of distributions: Gaussian, subexponential, and power laws, with a focus on the Pareto distribution's unique characteristics and its prevalence in real-world phenomena.

05:00

📉 Understanding Power Laws and Their Intuitive Nature

This paragraph delves into the intuitive aspects of power laws, contrasting them with the Gaussian distribution. The speaker explains the constant-ratio characteristic of power laws, as opposed to the accelerating decline seen in Gaussian tails. They provide examples from wealth distribution and show how predictable power laws are even without a defined mean or variance. The summary also touches on the scale invariance of power laws, emphasizing that the exceedance-probability ratio stays constant regardless of the scale.
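
A tiny sketch of the arithmetic behind the wealth example in this segment (the 1-in-62.5 starting odds above $1M are the lecture's illustrative figure, not data): with a Pareto tail of exponent alpha, doubling the threshold multiplies the rarity by 2 to the alpha.

base_odds = 62.5              # 1 in 62.5 people richer than $1M (illustrative)

for alpha in (2.0, 1.0):      # alpha = 2: no variance; alpha = 1: not even a mean
    print(f"alpha = {alpha:g}")
    for k in range(4):        # thresholds $1M, $2M, $4M, $8M
        threshold = 2 ** k
        odds = base_odds * (2 ** alpha) ** k
        print(f"  richer than ${threshold}M: 1 in {odds:g}")
# alpha = 2 reproduces 62.5, 250, 1000, 4000; alpha = 1 gives 62.5, 125, 250, 500.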

10:10

📈 Gaussian vs. Power Law: Life Expectancy and Conditional Expectation

The speaker compares Gaussian properties with power law characteristics, focusing on life expectancy as an example. In Gaussian distributions, the expected remaining life beyond a threshold shrinks as the threshold increases, with the conditional expectation converging to the threshold itself. With power laws, the conditional expectation remains a multiple of the threshold, indicating a different approach to understanding life spans and aging. The paragraph also discusses the implications of these distributions in real-world scenarios, such as the life expectancy of companies and the concept of 'shock' mortality in the sub-exponential class.
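
The contrast can be seen numerically. The sketch below uses assumed parameters (a standard normal, and a Pareto with alpha = 2.5 and minimum 1; neither is specified in the lecture): the Gaussian's conditional mean beyond a threshold k collapses onto k, while the Pareto's stays close to a fixed multiple of k, here alpha/(alpha - 1) = 5/3.

import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
alpha = 2.5

gaussian = rng.normal(size=n)
pareto = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # Pareto(alpha) with minimum 1

for k in (1.0, 2.0, 3.0, 4.0):
    g_tail = gaussian[gaussian > k]
    p_tail = pareto[pareto > k]
    print(f"k={k:g}:  Gaussian E[X|X>k] - k = {g_tail.mean() - k:.3f}   "
          f"Pareto E[X|X>k] / k = {p_tail.mean() / k:.2f}")
# The Gaussian surplus over k shrinks toward zero; the Pareto ratio stays near 1.67.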

15:12

📊 Mathematical Characterization of Power Laws

This section provides a mathematical explanation of power laws, defining them as a tail behavior where the probability of exceeding a value x decreases as a constant times x to the negative power of alpha. The speaker describes how to identify power laws through log-log plots and the implications of different alpha values on the distribution's properties. The summary highlights the unique statistical properties of power laws, such as the absence of mean and variance when alpha is less than or equal to one and two, respectively.
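
To make the 'no mean' point concrete, here is a small sketch (the alphas are chosen for illustration, not taken from the lecture) comparing the running sample mean of a Pareto with alpha = 1.5, whose mean exists and equals 3, against one with alpha = 0.8, whose mean does not exist.

import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

for alpha in (1.5, 0.8):
    x = (1.0 - rng.random(n)) ** (-1.0 / alpha)          # Pareto(alpha) with minimum 1
    running_mean = np.cumsum(x) / np.arange(1, n + 1)
    checkpoints = [10 ** k for k in range(2, 7)]          # n = 100 ... 1,000,000
    values = ", ".join(f"{running_mean[m - 1]:.2f}" for m in checkpoints)
    print(f"alpha = {alpha}: running mean at the checkpoints -> {values}")
# For alpha = 1.5 the running mean hovers around 3; for alpha = 0.8 it keeps drifting
# upward, jumping with every huge draw, and never settles.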

20:13

🌟 Conclusion: Power Laws in Statistical Inference and Worldview

In the concluding paragraph, the speaker summarizes the importance of understanding power laws for statistical inference and for shaping our worldview. They emphasize the limitations of traditional statistical measures like the mean and variance under power law distributions and encourage a shift toward more empirical and mathematically solid approaches. The speaker also mentions their book, The Statistical Consequences of Fat Tails, suggesting that understanding these concepts can greatly enhance our analysis and interpretation of real-world data.

Keywords

💡Power Laws

Power laws are a type of mathematical relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of that quantity. In the context of the video, power laws are used to describe distributions where the frequency of an event is inversely proportional to its magnitude, such as the Pareto distribution. The script discusses how power laws are intuitive and well-known, often associated with the 80/20 rule, and how they can be applied recursively to understand distributions of wealth or land ownership.

💡Pareto Distribution

The Pareto distribution is a power-law probability distribution that is used in various fields such as economics, where it can describe the distribution of wealth. Named after Vilfredo Pareto, it is characterized by a heavy tail, meaning that there are a few very large values that occur less frequently than expected in a normal distribution. The script uses the Pareto distribution as an example to illustrate the concept of power laws and their application in understanding wealth distribution.

💡80/20 Law

The 80/20 law, also known as the Pareto Principle, is a principle that states that roughly 80% of the effects come from 20% of the causes. In the video, the 80/20 law is used as an example of a power law in action, specifically in the context of wealth and land ownership, where it is noted that 20% of the people own 80% of the land.

💡Fractal

A fractal is a concept in mathematics that refers to a shape that can be split into parts, each of which is a reduced-scale copy of the whole. In the video, the term 'fractal' is used to describe the recursive nature of power laws, where the same pattern of distribution (e.g., 80/20) can be applied at different scales, such as the distribution of wealth among individuals.

💡Gaussian Distribution

The Gaussian distribution, also known as the normal distribution, is a probability distribution that is characterized by a bell-shaped curve. It is used in the video to contrast with power law distributions, showing how the probabilities of extreme events (such as exceeding many standard deviations) decline rapidly in a Gaussian distribution, unlike in a power law distribution where the decline is more gradual.

💡Subexponential

Subexponential distributions are a class of probability distributions that have heavier tails than exponential distributions but are not power laws. In the script, subexponential distributions are discussed as an intermediate class between Gaussian and power law distributions, including distributions like the log-normal and exponential, which have thicker tails but do not exhibit the same scaling properties as power laws.

💡Scale Invariance

Scale invariance in the context of the video refers to the property of power law distributions where the shape of the distribution remains the same when the scale of the variable is changed. This is illustrated by the constant ratio of probabilities for different magnitudes of the same variable, such as the probability of having wealth greater than a certain amount compared to a multiple of that amount.

💡Tail of Distribution

The tail of a distribution refers to the portion of the distribution that corresponds to the least likely outcomes. In the video, the tail of a power law distribution is highlighted as being 'heavy' or 'fat,' meaning that there is a higher probability of extreme values than in a Gaussian distribution. The script discusses how the tail of a power law distribution declines at a rate that maintains a constant ratio, which is different from the more rapid decline seen in Gaussian distributions.

💡Lindy Effect

The Lindy Effect is a theory that the future life expectancy of a non-perishable item is proportional to its current age, implying that the older an item is, the longer it will be expected to last. In the video, the Lindy Effect is mentioned in the context of the three classes of distributions, suggesting that for certain types of distributions, the expected additional life or relevance of an item increases with its current age.

💡Sigma

In statistics, sigma (σ) typically represents the standard deviation of a set of values, a measure of the amount of variation or dispersion in the data. In the script, sigma is used to describe the number of standard deviations an event is from the mean, particularly in the context of Gaussian distributions, to illustrate the decline in probability as the number of sigmas increases.
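
The decline in sigma terms is easy to reproduce. The short sketch below (not from the lecture) computes the tail probabilities quoted by the speaker from the standard normal survival function, along with the exceedance ratio that a power law would instead keep constant.

import math

def normal_sf(z):
    # P(Z > z) for a standard normal variable
    return 0.5 * math.erfc(z / math.sqrt(2.0))

for z in (3, 4, 5, 6):
    print(f"P(Z > {z} sigma) = 1 in {1 / normal_sf(z):,.0f}")
# roughly 1 in 741, 1 in 31,600, 1 in 3.5 million, 1 in 1 billion

# The ratio P(Z > x) / P(Z > 2x) explodes as x grows -- the opposite of the
# constant ratio that marks a power law.
for x in (1, 2, 3):
    print(f"P(Z > {x}) / P(Z > {2 * x}) = {normal_sf(x) / normal_sf(2 * x):,.0f}")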

Highlights

Power laws and the Pareto distribution are introduced as key concepts in understanding statistical distributions.

The 80/20 rule is associated with the Pareto principle, illustrating the uneven distribution of resources like land ownership in Italy.

Fractal nature of power laws is explained, showing how the 80/20 principle can be recursively applied.

Three main classes of distributions are presented: Gaussian, subexponential, and power laws.

Gaussian-class distributions are noted for the rapid convergence of sample averages in large samples, which makes statistical inference safe.

Subexponential distributions are characterized as an intermediate class, including the log-normal and exponential distributions.

Power law distributions are detailed, including Pareto, Student's t, and stable distributions.

The scale invariance of power laws is introduced, showing an exceedance ratio that stays constant regardless of the value of x.

Gaussian distributions are critiqued because their exceedance-probability ratios keep changing as values increase, instead of staying constant.

The practical implications of power laws in wealth distribution and company sizes are discussed.

The lack of a mean in power law distributions is highlighted, yet their predictability is emphasized.

A simplified characterization of power laws is provided through the probability of exceeding x.

The Lindy effect is introduced in the context of the three classes of distributions, affecting life expectancy and aging.

Conditional expectation in Gaussian distributions is contrasted with power law distributions, where the expected value beyond a threshold remains a fixed multiple of that threshold.

The mathematical expression for power laws is given: for large x, the probability of exceeding x is approximately a constant times x to the minus alpha.

The importance of the alpha parameter in power laws is explained, affecting the fatness of the tail.

The implications of alpha values on statistical measures like mean, variance, and kurtosis are discussed.

The transcript concludes with a summary of the importance of understanding power laws for statistical inference and worldview.

Transcripts

play00:00

Friends, hello again. I'm extremely happy, as you can see, to do these mini-lectures. I'm going to try to explain power laws and the Pareto distribution very quickly and put them in perspective with the other classes of distribution, namely the three main ones that I'll present in a minute. But let me tell you why this is very intuitive and very well known. A lot of people associate power laws with the 80/20 law, principle, guideline: that 80 percent of the people in Italy own 20 percent of the land — sorry, twenty percent of the people owned eighty percent of the land. That is how it was born, with the work of Vilfredo Pareto. It is fractal — we'll see what fractal means: fractal in the sense that you can recurse. These 20 percent also have an 80/20; you apply it to them, you do it again; the last 20 percent, you do an 80/20 on it, till you get to about 50 over 1. So 50 percent of the land is owned by one percent of the people, compatible with that, if the same proportion is preserved — and we'll see how it gets preserved, with the notion of a power law and the power-law decline of the tail of the distribution. So, as I said, let's look at the classes of distribution.

play01:44

Okay, now you have the classes of distributions. We have mainly three classes. The first one I would call Gaussian — it's not really the Gaussian, but the distributions that, under summation, as we saw with the law of large numbers and the central limit theorem, converge to the Gaussian rather rapidly. Within the Gaussian class we have the lower Gaussian, as I'll call it, like the binomial, and the upper Gaussian, which is the Gaussian proper — because a binomial doesn't go to infinity, it has finite support, whereas the Gaussian lives on infinite support, from minus infinity to plus infinity. Below the binomial you have the Bernoulli, and below the Bernoulli you have the so-called degenerate distribution. So this is the Gaussian class. In this class of distributions you can do a lot of statistical inference — you're safe: under summation, when you have a large sample, the average of that sample converges, and everybody's happy. Sorry if you're not seeing very well here — let me raise the whiteboard a little. Okay.

play03:19

The second class, let's call it subexponential. And of course we draw the line here between two classes: power laws, and what we'll call subexponential-but-not-power-law — we won't give it a name. In there you have the log-normal, a nasty distribution; you have the exponential, the Laplace; in between maybe you have the gamma, which is fine — I mean, they're all related by parameterization or mild transformation — and then you have a bunch of other things. Let's call this the semi-fat-tailed, thick-tailed-but-not-power-law class. Now within power laws you have the Pareto distribution — the Pareto distribution proper, or distributions with Paretan tails: the Student t, or something more general, the stable distribution. Within there you have the Lévy, some parameterization of the stable distribution; you have the Cauchy, which actually could be parameterized as a Student t or as a stable distribution. You have all these distributions.

play05:00

Okay, give me a second to erase the board — and let's forget about this taxonomy if it's too technical for you, because power laws are very intuitive, very easy to understand. Let me show you what happens with the Gaussian. You have a decline: the probability of exceeding three sigma — three standard deviations, or three of whatever — is about one in 740; the probability of exceeding four sigma is about one in 32,000; the probability of exceeding five sigma, about one in 3.5 million; the probability of exceeding six sigma, about one in a billion. What you notice is an acceleration of the decline. But there is one central attribute to this, and it's as follows: the ratio of the probability of three sigmas to that of six sigmas is nowhere near the ratio of five sigmas to ten sigmas — ten sigma is one in some astronomically large number. So as you go higher with a Gaussian, that ratio keeps changing. With a power law you don't have that: you have a constant ratio. Two sigmas divided by four sigmas — or two dollars divided by four dollars, or two billion dollars divided by four billion — gives you the same probability ratio as, say, five divided by ten. It's the same probability ratio.

play06:51

Let's say I tell you how many people are richer than one million: one in 62.5 people. Let's assume that's right — maybe it's not, maybe soon with hyperinflation it'll be a hundred percent of the people, or 99 percent — but let's assume it works. Richer than 2 million: 1 in 250. Richer than four million: one in a thousand. So you know automatically that richer than eight million is going to be one in four thousand: you double the amount, you multiply the odds by four. Very simply, that's what a power law is — it makes things very easy, and you can predict. And incidentally, this distribution does not have a variance, but you can understand it perfectly. You can even make it simpler, say with a stiffer power law, one that corresponds more to what we have in real life: richer than 1 million, I'll start again with one in 62.5; then one in 125, one in 250, one in 500. That power law does not have a mean. Guess what — we understand it, we know how it works. It doesn't have a mean, but we can work with it, we can live with it.

play08:46

So the way we're going to characterize power laws, simplified again, is as follows: take the probability of exceeding x over the probability of exceeding n times x — here we used 2 times x in the example before. For a Gaussian, or a distribution in the Gaussian basin, that ratio depends on n and on x. For a power law it depends only on n — you don't care where x is. Let me give you another example with the Gaussian.

play09:49

Okay, the probability ratio of x to 2x: when x is one sigma, the ratio of exceeding one sigma to exceeding two sigma is about 7; for two sigma versus four sigma it's about 718; for four sigma versus eight sigma it's something over 10 to the 10. So it increases. For a power law — depending on how it's parameterized — the ratio of two sigma over four sigma is four times, as we saw earlier; four times, and four times again. It is invariant, and this is what we call the scale invariance of the power law. It tells you the following: the ratio of millionaires to half-millionaires is approximately the same as the ratio of two-billionaires to billionaires. It's not going to be perfect, and sometimes it only holds for x large: if you take salaries or income, you will not find that property at the bottom, but you'll find it high up. Now for Gaussian variables you don't have that property: the ratio of people who are three meters tall to people one and a half meters tall is not the same as the ratio at six meters — there is no six-meter person — because of the rapidity of the decline. You can apply this to the sales of companies, things like that; this is the mark of a power law. I'm not going to go much further, because I've written a lot on it.

play12:06

But now let's talk about Lindy in the three classes of distribution; let me discuss Lindy in that context. Let's start with the Gaussian property, which is what I call conditional expectation: the expectation of X conditional on X higher than K. Let's take life expectancy as the example. What is the expected life expectancy given X higher than zero? Let's say it's 80 years. Now what is the life expectancy conditional on being 80 years or older — you reach 80 years, what's your life expectancy? It's going to be 92, in other words an extra 12 years. Actually, if you're healthy, much more than 92 — incidentally, if you lift weights, if you don't hang around economists, if you read the right things, if you don't own bitcoin, you live longer. All right, so what is the life expectancy of X conditional on X higher than 100? That's going to be, say, 102 and a half — plus two and a half years. So as you see, the extra life expectancy shrinks; that's how things work in the Gaussian domain. And then of course, what is the life expectancy assuming X is higher than 120? Well, you just die the second after. So in other words, when K goes up, the conditional life expectancy converges to K. That's the property of the Gaussian: if I ask what is the average deviation higher than 10 sigma, it is 10 sigma.

play14:27

Now for a power law you don't have this property. We keep the same setup, and now we increase the numbers. Typically for a power law, the expectation of X given X higher than zero is going to be, say, ten. Now we increase the threshold: conditional on X higher than 10, the expectation is maybe 20 — a multiple of 10. Higher than 20, it's 40; higher than 40, it's 80. Effectively you see that the ratio here is two times — it could be one and a half times. What's the average size of a company bigger than a billion? It's one and a half billion — no, sorry, two billion or something like that. It's a constant multiple. What's the average market move bigger than two sigma? Say three sigma. Bigger than five sigmas? Seven and a half sigmas — however you define sigma. What's interesting is that these ratios are invariant. This is the power-law behavior: it produces a conditional expectation that is a multiple of the threshold. People grasp it when I present it that way, when I tell them: if a technology is older than 10 years, it's going to last to 20 years; if older than 20 years, maybe to 40 years, because the expectation has to be a multiple of the threshold.

play16:18

Now when we talk about the subexponential-but-not-power-law class, the intermediate class, guess what: this conditional expectation is a constant. Some people say the crocodile has almost that property: whenever you spot a crocodile, say it has 20 years to live — if it's 10 years old it has 20 years to live; if it's 20 years old it has 20 more years to live. It's like what we call Poisson arrival: if things only die from shocks, there's no aging. There may be Poisson arrivals in that class, the exponential class. So the borderline between the subexponential and the power law has this property — the exponential class has a constant remaining life expectancy, if life expectancy is distributed according to it. And you see it in a lot of things, where you may have a combination. To give an example, there was a paper recently — it actually generated debate — that found that humans visibly have some decay, because their mortality rate increases with time, so there is aging; but once you reach a certain age, mortality becomes about 50 percent a year, so no matter how old the person is, whether 110 or 120, the person would have, say, two and a half more years to go, regardless. So it becomes flat, meaning there's no more aging, and that produces a completely different distribution: it means that human life is unlimited but short — that was the title of the paper. It generated some debate because some people weren't happy with the data, but the argument can be used for things in nature that we see are very old, like turtles and such: maybe they're not immortal — you're never immortal, because accidents will kill you — but these accidents are memoryless, they're Poisson distributed. With a Poisson arrival of shocks you get this constant life expectancy. So I have very rapidly explained power laws and the differences between these classes; let me get a little more mathematical now.

play18:37

Let's say the probability of exceeding x for a power law is going to be something like — or converges to, for x very large — some constant times x to the minus alpha. So you look for this property. Some people use something more complicated, which you'll see in my book, with a slowly varying function, and it tells you that if I take the log of the probability of exceeding x, that log is going to be minus alpha times the log of x, plus something. So mathematically we can express power laws as follows — a power law is a statement about the tail, not the whole probability distribution: the probability of exceeding x, for x very large, is approximated by some constant times x to the minus alpha. Let's forget about what's in front of it, some scaling parameter; that's the property of a power law. Now if I take the log of the exceedance probability — the survival function, however you want to call it — it's going to be, approximately of course, minus alpha times the log of x, plus something. And this equation is a central one; it comes from scalability — whenever you see the log, think scalable. It gives us the following: take a log-log plot, log of x against log of the probability of exceeding x. You're going to have some shape at the beginning, something like this, and then it becomes a line with a constant slope, minus alpha. Typically that's how we identify a power law. Sometimes it drops off at the end because you don't have enough observations — by definition you don't have a lot in the tails.

play20:55

And that alpha: the lower the alpha — alpha equal to one, you have the Cauchy — the lower the alpha, the fatter the tail; and the higher the alpha, the thinner. Say alpha equals three: we get something like the S&P 500, which is between two and a half and three — the returns, not the price. Something like alpha equals two — and you get earthquakes and the size of cities somewhere between one and two. For the Gaussian the plot becomes vertical, so the tail alpha is effectively infinite. One property you need to note is that when alpha is one or lower, there is no mean; when alpha is two or lower, no variance; when alpha is four or lower, no kurtosis — or it's infinite, depending on whether the moment is odd or even. But this is crazy, because it tells you that you can do statistics, we can understand everything, yet notions like variance, kurtosis, and even the mean no longer have any meaning. And still, as I showed you, we can figure out the properties, we can work with wealth, we can work with these problems; the traditional notions that we saw in the beginning just do not apply.

play22:31

Let me stop here, because I don't want to get too mathematical — I'd rather be mathematical somewhere I can relax and express things in math in an uninhibited way. The math is, incidentally, in my book: the whole book is about the statistical consequences of fat tails, and most of it is about the characterization of fat tails, the different classes, the different colors, if you want, of fat tails. And this class, alpha equal to one or lower, I called the 'fuhgetaboudit' — spelled the way you should be pronouncing it — because you can't do traditional statistics there at all: there's no mean, but sometimes you have the illusion that there is a mean. So thanks a lot for attending this lecture. Remember, this is a central lecture: from this, with its intuition, we can go very, very far in redoing statistical inference and redesigning our views of the world on a more empirical and, of course, more solid mathematical basis. Thank you and have a great day.

Related Tags
Pareto Principle, Power Laws, Statistical Distributions, 80-20 Rule, Fractal Economics, Gaussian Analysis, Tail Behavior, Scale Invariance, Economic Inference, Lecture Series