Estimating Market Risk Measures: A Quick Review (FRM Part 2, Book 1, Market Risk)

finRGB
11 Jul 2018, 18:38

Summary

TL;DR: This script delves into market risk estimation, exploring time series analysis of profits/losses and dividends, and adjusting for time value of money. It discusses calculating arithmetic and geometric returns, and introduces historical simulation and parametric methods for Value at Risk (VaR) estimation. The script also covers coherent risk measures, expected shortfall, and general risk measures, emphasizing VaR's limitations and the importance of sub-additivity. It concludes with standard error estimation and QQ plots for distribution fitting and outlier detection.

Takeaways

  • 📈 **Time Series Analysis**: The script discusses the importance of time series data for financial instruments, including price data for listed instruments and MTM valuations for exotic products.
  • 💹 **Profit and Loss (P&L) Calculation**: It explains how P&L is calculated as the difference between the end-of-period receipts and the initial investment at the beginning of the period.
  • 💰 **Time Value of Money**: The script covers two methods to adjust P&L for the time value of money: discounting future payments or capitalizing past payments using the risk-free rate.
  • 🔢 **Arithmetic vs. Geometric Returns**: It differentiates between arithmetic returns (average return proxy) and geometric returns (considering income reinvestment) and how they relate.
  • 📊 **Loss Time Series (LP)**: Introduces the concept of creating a loss time series as the negated version of the P&L time series, important for understanding risk.
  • 📚 **Historical Simulation VaR**: Describes the historical simulation technique for VaR estimation, which involves sorting P&L observations and selecting a value at a certain confidence level.
  • 📝 **Parametric VaR Methods**: Discusses three parametric approaches to VaR estimation: normal distribution on P&L, normal distribution on returns, and log-normal distribution on returns.
  • 🔗 **Coherent Risk Measures**: Defines coherent risk measures and their four conditions, emphasizing the importance of sub-additivity for margin calculations and regulatory capital.
  • 📉 **Expected Shortfall (ES)**: Introduces ES as a risk measure that provides insight into tail losses beyond the VaR threshold, calculated as a weighted average of tail losses.
  • 🌐 **Generalized Risk Measures**: Explains how VaR and ES can be part of a broader class of risk measures, the general risk measures, which are weighted averages of quantiles.
  • 📊 **QQ Plots**: Describes the use of QQ plots for comparing empirical distributions with benchmark distributions, identifying outliers, and estimating distribution parameters.

Q & A

  • What are the two types of time series inputs required for a VaR estimation approach?

    -The two types of time series inputs required are the time series of profits/losses and the time series of dividends/coupons, which can be referred to as the income time series.

  • How is Profit and Loss (P&L) computed in the context of VaR estimation?

    -Profit and Loss (P&L) is computed as the receipts at the end of any period t minus the initial investment at the beginning of the period.

  • What are the two methods to adjust P&L for time value of money?

    -The two methods to adjust P&L for time value of money are discounting the payments received at time t or capitalizing the payments made at time t-1, both using the risk-free rate R.

  • What is the difference between arithmetic and geometric returns?

    -Arithmetic returns are calculated by dividing the P&L by the initial investment and are used as a proxy for average or expected return. Geometric returns consider reinvestment of income and a chosen compounding frequency over a designated time period.

  • How is Value at Risk (VaR) estimated using the historical simulation technique?

    -In the historical simulation technique, VaR is estimated by sorting the P&L observations in increasing order and picking the VaR at a certain confidence level as the (nα + 1)th observation from the left.

  • Why is the sub-additivity property important for risk measures?

    -The sub-additivity property is important for risk measures because it is crucial for margin calculations, regulatory capital estimation, and setting a conservative upper bound on the combined risk of multiple positions.

  • What is the Expected Shortfall (ES) and how is it different from VaR?

    -Expected Shortfall (ES) is the probability-weighted average of tail losses and provides insight into how bad things can get if losses exceed VaR. Unlike VaR, which only indicates a threshold loss number, ES gives a measure of the average loss in the tail beyond the VaR level.

  • How can the standard error of an estimate be computed?

    -The standard error of an estimate can be computed by creating a thin strip of width h around the VaR and calculating three probabilities: to the left of the strip, to the right of the strip, and inside the strip. These probabilities are then used in a formula to compute the standard error.

  • What is a QQ plot and how is it used in risk estimation?

    -A QQ plot is a quantile versus quantile plot that compares quantiles from an empirical distribution with quantiles from a specified benchmark distribution. It is used to identify if the benchmark distribution is a good fit for the empirical data, to identify outliers, and to estimate parameters such as location and scale.

  • Why is the normal distribution assumption used in parametric VaR approaches?

    -The normal distribution assumption is used in parametric VaR approaches to simplify calculations and because it is a commonly used distribution in statistics. However, it may not always accurately represent the true distribution of P&L or returns, especially in the tails.

  • What is a spectral risk measure and why is it considered coherent?

    -A spectral risk measure is a special subset of general risk measures in which the weighting function satisfies three conditions: the weights are non-negative, the weights sum to one, and the weight assigned to a more extreme loss is never less than the weight assigned to a less extreme loss. It is considered coherent because it satisfies all four conditions of coherent risk measures, including sub-additivity.

Outlines

00:00

📊 Estimation Approaches and Returns

This paragraph introduces the process of estimating market risk, focusing on the inputs required for such estimations. It explains the computation of time series for profits/losses and dividends/coupons, which are essential for calculating profit and loss (P&L). The P&L is adjusted for time value of money using either discounting or capitalizing methods with the risk-free rate. The paragraph then delves into the calculation of two types of returns: arithmetic returns, which serve as a proxy for expected returns, and geometric returns, which consider reinvestment of income. The relationship between these returns is highlighted, and the importance of understanding the distribution of losses is emphasized. The historical simulation technique for Value at Risk (VaR) estimation is introduced, which involves sorting P&L observations and selecting a specific observation based on the confidence level. The limitations and benefits of this technique are discussed, including its simplicity and its reliance on the assumption that the future will mirror the past.

05:01

📈 Parametric Approaches and Coherent Risk Measures

The second paragraph shifts the discussion to parametric approaches, starting with the normal distribution assumption on P&L to calculate VaR. It contrasts this with the normal VaR approach, where normality is assumed for arithmetic returns to estimate the VaR. The log-normal VaR approach is also explained, which assumes normality in geometric returns, leading to log-normally distributed prices or valuations. The paragraph then transitions into a discussion of coherent risk measures, which are risk measures satisfying four conditions: monotonicity, sub-additivity, positive homogeneity, and translational invariance. VaR is noted as not being a coherent risk measure due to its failure to meet the sub-additivity condition. The concept of expected shortfall (ES) is introduced as a risk measure that provides insight into the tail of losses beyond VaR, and it is explained how ES can be estimated using historical simulation or through a formula involving VaRs computed at higher confidence levels.
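The three parametric approaches can be summarized in a short sketch (the function names and the use of `statistics.NormalDist` are my choices; the video presents these as formulas, not code):

```python
import math
from statistics import NormalDist

def z(c):
    """Standard-normal quantile at confidence level c."""
    return NormalDist().inv_cdf(c)

def normal_pnl_var(mu_pnl, sigma_pnl, c):
    """Approach 1 - normality on P&L: start at the mean, move left by
    z(c) standard deviations, negate so VaR is a loss number."""
    return -(mu_pnl - z(c) * sigma_pnl)

def normal_return_var(mu_r, sigma_r, p_prev, c):
    """Approach 2 - normality on arithmetic returns: the percentage
    VaR is scaled by the current valuation P_{t-1}."""
    return -(mu_r - z(c) * sigma_r) * p_prev

def lognormal_var(mu_g, sigma_g, p_prev, c):
    """Approach 3 - normality on geometric returns, hence log-normal
    valuations: current valuation minus the worst-case valuation."""
    worst = p_prev * math.exp(mu_g - z(c) * sigma_g)
    return p_prev - worst
```

At a short (e.g. daily) horizon with small sigma, the normal and log-normal figures come out close to one another, as the outline notes.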

10:02

📉 General Risk Measures and Standard Errors

This paragraph introduces general risk measures, which are weighted averages of various quantiles and can encompass both VaR and ES with appropriate weighting functions. The concept of spectral risk measures is discussed, which are a subset of general risk measures that satisfy three specific conditions to ensure coherence. The paragraph then explains the calculation of standard errors for VaR estimates, which provide a measure of uncertainty. Factors affecting standard error, such as sample size, bin width, and confidence level, are discussed. The chapter concludes with a discussion on QQ plots, which are used to compare an empirical distribution with a benchmark distribution, and their utility in identifying outliers and estimating distribution parameters.

15:04

📋 Summary of Market Risk Estimation

The final paragraph provides a summary of the key takeaways from the chapter on estimating market risk measures. It encapsulates the various methods for calculating VaR, the importance of understanding the distribution of returns and losses, and the use of QQ plots for distribution fitting and outlier detection. The summary underscores the importance of choosing the right distribution for risk estimation and the implications of different risk measures for regulatory capital and margin calculations.

Keywords

💡 Time Series

A time series is a sequence of data points collected at successive, equally spaced points in time. In the context of the video, time series are used to represent the historical data of financial instruments, such as prices (or valuations, for exotic products) and dividends or coupons. The script mentions computing the time series of profits/losses using these data points to estimate future financial risks.

💡 P&L (Profit and Loss)

P&L refers to the financial performance of a business or investment over a specific period, calculated as the difference between revenue and expenses. The video script discusses computing P&L as the income received at the end of a period minus the initial investment at the beginning of the period, which is crucial for estimating financial risk.
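As a minimal sketch of the definition above (the price and income series and the function name are hypothetical), P&L at each period is the end-of-period receipts minus the prior valuation:

```python
def pnl_series(prices, incomes):
    """P&L_t = (P_t + D_t) - P_{t-1}: end-of-period receipts
    (price plus dividend/coupon) minus the initial investment."""
    return [prices[t] + incomes[t] - prices[t - 1]
            for t in range(1, len(prices))]

# Hypothetical daily prices and income payments.
prices = [100.0, 102.0, 101.0, 104.0]
incomes = [0.0, 1.0, 0.0, 1.0]
print(pnl_series(prices, incomes))  # → [3.0, -1.0, 4.0]
```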

💡 Time Value of Money

The time value of money is the concept that a sum of money is worth more now than the same sum in the future due to its potential earning capacity. The script discusses adjusting the definition of P&L for the time value of money by either discounting future payments or capitalizing past payments using the risk-free rate.
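The two adjustments can be sketched as follows (function names are mine; note the discounted and capitalized figures are stated at different points in time, so they differ by the factor 1 + r):

```python
def pnl_discounted(p_t, d_t, p_prev, r):
    """Discount the time-t receipts (price plus income) back to t-1
    at the risk-free rate r before subtracting the investment."""
    return (p_t + d_t) / (1 + r) - p_prev

def pnl_capitalized(p_t, d_t, p_prev, r):
    """Capitalize the t-1 investment forward to time t at rate r."""
    return (p_t + d_t) - p_prev * (1 + r)

print(pnl_discounted(102.0, 1.0, 100.0, 0.05))   # ≈ -1.905
print(pnl_capitalized(102.0, 1.0, 100.0, 0.05))  # → -2.0
```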

💡 Arithmetic Return

Arithmetic return is calculated by taking the P&L and dividing it by the initial investment. It serves as a proxy for the average or expected return on an investment. The script uses this concept to illustrate a simple method of estimating returns from financial data.

💡 Geometric Return

Geometric return accounts for the reinvestment of income over a certain period and is calculated using compounding frequencies. If continuous compounding is assumed, the geometric return is defined as the log of the final proceeds divided by the initial investment. The video script contrasts this with arithmetic return to show different ways of measuring investment performance.
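A small sketch of both return definitions and the link between them (names are mine, not the script's):

```python
import math

def arithmetic_return(pnl, p_prev):
    """P&L over the initial investment: the proxy for average return."""
    return pnl / p_prev

def geometric_return(proceeds, p_prev):
    """Continuously compounded return: log of final proceeds over
    the initial investment."""
    return math.log(proceeds / p_prev)

# Link: r_geom = ln(1 + r_arith); close for small (e.g. daily) returns.
r_a = arithmetic_return(1.0, 100.0)   # 0.01
r_g = geometric_return(101.0, 100.0)  # ln(1.01) ≈ 0.00995
```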

💡 Historical Simulation

Historical simulation is a technique used to estimate Value at Risk (VaR) by using historical observations of P&L. The script describes sorting these observations and selecting the VaR at a certain confidence level as the (nα + 1)th observation from the left, which is a simple yet effective method of risk estimation.
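A minimal sketch of that procedure for a P&L series (variable names are mine):

```python
def hs_var(pnl, alpha):
    """Historical-simulation VaR at significance level alpha: sort the
    P&L observations ascending, take the (n*alpha + 1)th from the left
    (0-based index n*alpha), and negate it so VaR is a loss number."""
    ordered = sorted(pnl)
    k = int(len(ordered) * alpha)  # n*alpha observations stay in the tail
    return -ordered[k]

# 100 hypothetical P&L observations: losses of 100 down to 1.
pnl = [float(-x) for x in range(1, 101)]
print(hs_var(pnl, 0.05))  # 6th smallest is -95 → VaR = 95.0
```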

💡 Value at Risk (VaR)

VaR is a measure of the potential loss in the value of a portfolio over a specific time period, given a certain level of confidence. The script explains how VaR can be estimated using historical simulation and also discusses its limitations, such as not being a coherent risk measure.

💡 Coherent Risk Measure

A coherent risk measure is one that satisfies four conditions: monotonicity, sub-additivity, positive homogeneity, and translational invariance. The script explains these conditions and why VaR is not a coherent risk measure due to its failure to meet the sub-additivity condition.
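The script alludes to "a suitable example" of VaR violating sub-additivity; one standard construction (my own numbers, not necessarily the video's) uses two independent bonds that each default with 4% probability. At 95% confidence, each bond alone has zero VaR, but the portfolio does not:

```python
from itertools import product

def var_from_dist(outcomes, alpha):
    """VaR at significance alpha from a discrete loss distribution of
    (loss, probability) pairs: smallest loss L with P(loss > L) <= alpha."""
    for level in sorted({loss for loss, _ in outcomes}):
        if sum(p for loss, p in outcomes if loss > level) <= alpha:
            return level

# One bond: defaults with prob 0.04 (loss 100), otherwise no loss.
bond = [(0.0, 0.96), (100.0, 0.04)]
# Two independent such bonds combined.
portfolio = [(l1 + l2, p1 * p2)
             for (l1, p1), (l2, p2) in product(bond, bond)]

print(var_from_dist(bond, 0.05))       # → 0.0   for each bond alone
print(var_from_dist(portfolio, 0.05))  # → 100.0 > 0.0 + 0.0: sub-additivity fails
```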

💡 Expected Shortfall (ES)

Expected Shortfall is a risk measure that provides more information about the tail of the loss distribution beyond VaR. It is defined as the probability-weighted average of tail losses. The script describes how ES can be estimated using historical simulation and is a spectral risk measure, which is a type of coherent risk measure.
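In the historical-simulation setting described above, ES reduces to a simple average of the nα tail observations (a sketch with hypothetical data):

```python
def hs_expected_shortfall(pnl, alpha):
    """ES as the simple average of the n*alpha worst P&L outcomes,
    negated so it is reported as a positive loss."""
    ordered = sorted(pnl)                  # ascending: worst losses first
    k = max(1, int(len(ordered) * alpha))  # n*alpha tail observations
    return -sum(ordered[:k]) / k

pnl = [float(-x) for x in range(1, 101)]  # losses 100 down to 1
print(hs_expected_shortfall(pnl, 0.05))   # mean of {-100..-96} → 98.0
```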

💡 Standard Errors

Standard errors are estimates that indicate the degree of uncertainty associated with a sample estimate of a population parameter. In the script, standard errors are discussed in the context of estimating VaR, where they help in calculating confidence intervals and understanding the reliability of the VaR estimate.
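The strip construction the script describes can be condensed into the usual quantile standard-error formula se(q) ≈ sqrt(p(1−p)/n) / f(q), where the density f at the quantile is estimated from the probability mass inside a strip of width h (a sketch under that assumption; names are mine):

```python
import math

def quantile_se(sample, p, h):
    """Approximate standard error of the empirical p-quantile:
    estimate the density at the quantile from the fraction of
    observations in a strip of width h, then apply
    se = sqrt(p*(1-p)/n) / f_hat."""
    n = len(sample)
    q = sorted(sample)[int(p * n)]  # crude empirical quantile
    mass = sum(1 for x in sample if q - h / 2 <= x <= q + h / 2) / n
    f_hat = mass / h                # density estimate at q
    return math.sqrt(p * (1 - p) / n) / f_hat
```

Consistent with the chapter's discussion, the error shrinks as the sample size n grows and moves with the strip width h and the confidence level p.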

💡 QQ Plot

A QQ plot, or quantile-quantile plot, is a graphical tool used to compare the distribution of two datasets. The script mentions using QQ plots to assess how well a specified distribution fits the empirical data, identify outliers, and estimate parameters such as location and scale.
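A QQ plot's coordinate pairs can be generated as below (a sketch against a standard-normal benchmark; the plotting positions (i + 0.5)/n are one common convention, and an actual chart would come from any plotting library):

```python
from statistics import NormalDist

def qq_points(sample):
    """Pair each sorted sample value with the standard-normal quantile
    at plotting position (i + 0.5)/n. Roughly linear points suggest a
    good fit; the fitted line's intercept and slope estimate location
    and scale, and points far off the line flag outliers."""
    xs = sorted(sample)
    n = len(xs)
    nd = NormalDist()
    return [(nd.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(xs)]
```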

Highlights

Introduction to inputs required for VaR estimation: time series of profits/losses and dividends/coupons.

Calculating Profit & Loss (P&L) as the difference between receipts at the end of a period and initial investment.

Adjusting P&L for time value of money by discounting or capitalizing payments using the risk-free rate.

Definition and calculation of arithmetic returns as a proxy for average or expected return.

Explanation of geometric returns considering reinvestment of income and compounding frequency.

Linking arithmetic and geometric returns using a specific expression.

Creation of a time series for losses (LP) as the negated version of the P&L time series.

Historical simulation technique for VaR estimation by sorting P&L observations.

Assumption of future following the past in historical simulation and its benefits.

Transition from nonparametric to parametric world with three approaches for VaR estimation.

Normal distribution assumption on P&L for calculating VaR in the first parametric approach.

Normal distribution assumption on arithmetic returns for the second parametric approach.

Log-normal distribution assumption on geometric returns for the third parametric approach.

Definition of coherent risk measures and their four conditions: monotonicity, sub-additivity, positive homogeneity, and translational invariance.

Explanation of why VaR is not a coherent risk measure due to not always satisfying sub-additivity.

Introduction to expected shortfall (ES) as a risk measure that provides insight into the tail beyond VaR.

Calculation of ES using historical simulation by averaging losses in the tail.

General risk measure as a weighted average of various quantiles and its relation to VaR and ES.

Spectral risk measure as a special subset of general risk measures that satisfies three conditions to ensure coherence.

Discussion on standard errors in VaR estimation and their relation to sample size, bin width, and confidence level.

QQ plot explanation for comparing empirical distribution with a benchmark distribution and its uses in identifying outliers and determining distribution parameters.

Transcripts

play00:00

having done this reading in a lot of

play00:01

detail now let's take a look at its key

play00:04

takeaways we started off with the

play00:06

various inputs that are required for any

play00:09

War estimation approach these inputs are

play00:12

number one the time series of prophets

play00:14

slash losses this this time series we

play00:18

computed it using twos time series the

play00:20

first of them was the time series of

play00:22

crisis if it's a listed instrument or

play00:25

the time series of M TMS or valuations

play00:28

if it's let's say an exotic product the

play00:32

second time series was that of dividends

play00:34

/ coupons which you can refer to as the

play00:37

income time series so we compute the P&L

play00:40

as let's say the receipts at the end of

play00:43

any period t minus the initial

play00:46

investment let's say as the beginning of

play00:48

the period we then took a look at two

play00:50

ways in which we can let's say adjust

play00:53

this definition of PL for time value of

play00:57

money either by discounting the payments

play01:00

received at time T or by capitalizing

play01:03

the payments made at time t minus 1 both

play01:07

using let's say the risk-free rate R we

play01:10

then moved on to defining two kinds of

play01:12

returns the first return was what we

play01:15

referred to as arithmetic returns we

play01:18

calculated these by just taking in the

play01:20

pl time series take the pl and divided

play01:23

by the initial investment that's an

play01:25

arithmetic return remember this

play01:27

arithmetic return is something which you

play01:29

use as a proxy for the average or the

play01:32

expected return the other kind of return

play01:35

that we took a look at was a geometric

play01:37

return this return is for a chosen or

play01:40

designated time period and it takes into

play01:43

account reinvestment of this income and

play01:46

a certain chosen compounding frequency

play01:48

so if you assume that your compounding

play01:51

happens you with the continuous

play01:52

compounding then I can define my

play01:55

geometric return as the log of the final

play01:57

proceeds divided by the initial

play02:00

investment the two kinds of returns

play02:02

arithmetic and geometric can be linked

play02:05

to one another using this expression

play02:07

generally speaking the two returns are

play02:09

close to one another if your period over

play02:13

which these returns are earned is a very

play02:15

small period let's say a daily period

play02:17

then we also took a look at another time

play02:20

series which is that of losses so I can

play02:23

create another time series which lets me

play02:25

call it as LP I define this time series

play02:28

to be lets say the negated version of

play02:30

the pl time series and if we are talking

play02:33

about distributions then we said that

play02:36

the mean of the LP distribution is

play02:38

simply the negated value of the mean of

play02:42

the PL distribution and the standard

play02:44

deviation of the LP distribution we said

play02:47

is the same as the standard deviation of

play02:49

the PL distribution when it comes to the

play02:52

most sort of simplest technique for var

play02:55

estimation we said it's the historical

play02:57

simulation technique all that we do in

play02:59

this technique is we take the

play03:01

observations that are given to us assume

play03:03

that these are let's say the PL

play03:05

observations we sort them in an

play03:08

increasing order then let's say from the

play03:10

smallest to the highest smallest would

play03:12

therefore refer to negative entries or

play03:15

losses and we pick the var at a certain

play03:18

confidence level let's say at a C or a

play03:21

certain significance level let's call it

play03:24

alpha as the N alpha plus 1/8

play03:28

observation from the left here I am

play03:31

assuming that I am dealing with a PL

play03:33

time series number one and number two I

play03:35

am dealing with n observations so if I

play03:38

pick n alpha plus one observations and

play03:41

if a plus one at the observation I

play03:43

should say from the left then that

play03:45

observation would clearly leave a

play03:47

probability mass of alpha in the left

play03:50

tail and I've been working with the LP

play03:53

series we had said that we will pick our

play03:56

war again to be the N alpha plus 1 at

play03:59

observation but this time we'll pick it

play04:01

from the right okay so this was about

play04:04

historical simulation keep this thing in

play04:06

mind that it's a very simple technique

play04:08

but at its core it makes this assumption

play04:11

that the future will follow the past a

play04:14

very important benefit of this technique

play04:16

is that it accommodates real-life

play04:18

empirical observed distributions it does

play04:22

not it does not impose any

play04:23

distributional Essam

play04:25

on the PL or the returns now let's move

play04:30

from this nonparametric world which

play04:32

historical simulation was in to a

play04:34

parametric world in this parametric

play04:36

world we defined three approaches the

play04:39

first approach was one in which we

play04:41

impose as we do in any parametric

play04:44

approach a normal distribution

play04:45

assumption on this time the PL the

play04:49

profit and loss so if you do that then I

play04:52

can calculate the var which in the pl

play04:55

world remember is in the left tail by

play04:57

starting with the mean moving to the

play05:00

left and that's why you have a minus

play05:02

these many multiples of your standard

play05:05

deviation of the pl and since we are

play05:08

talking about a PL and since war is a

play05:10

lost number we had negated it to create

play05:12

a war had this been an LP distribution

play05:15

you would have computed the var by

play05:18

starting with the mean which is mu LP

play05:21

and moving to the right this time by

play05:24

these many multiples of the standard

play05:27

deviation Sigma PL is a same as Sigma LP

play05:30

now moving on to the second approach in

play05:33

this category we called it the normal

play05:35

word in this approach we impose the

play05:38

normal distribution assumption on the

play05:41

arithmetic returns and let's assume that

play05:43

these returns are distributed normally

play05:46

with this as the mean and this as the

play05:48

standard deviation to calculate the word

play05:51

which in this case would assume that

play05:53

valuations or prices are normally

play05:56

distributed I'll start with the mean

play05:59

return move to the left and that's why

play06:01

there's a minus these many multiples of

play06:04

the standard deviation what this gives

play06:07

me along with the minus is what we call

play06:09

a VAR percent because it's like a word

play06:12

number for $1 of notional invested this

play06:16

I have to scale it with the current

play06:18

valuation of my position which is PT

play06:21

minus 1 I am assuming I'm standing at

play06:23

time t minus 1 and this gives me a word

play06:26

at this confidence level see the next

play06:29

approach in this parametric category was

play06:32

a log normal word approach in this

play06:34

approach I impose the normality

play06:37

assumption

play06:38

this time on the geometric return which

play06:40

is the continuously compounded return if

play06:43

you do that then we know that the prices

play06:47

or valuations are log normally

play06:49

distributed and that's why we refer to

play06:51

this var as a log normal var in terms of

play06:54

its var calculation I should say how do

play06:57

we calculate it start with the mean

play06:59

return move to the left by these many

play07:02

multiples of the standard deviation this

play07:05

gives you some kind of a worst-case

play07:07

valuation so current valuation minus the

play07:10

worst-case scaled by the total size of

play07:13

your position this gives you the bar

play07:15

keep this thing in mind that normal and

play07:17

log normal var are close to one another

play07:19

if the period over which they are being

play07:22

computed I mean the horizon is small

play07:24

let's say daily we then moved on to

play07:27

defining coherent risk measures that

play07:29

means let's say we have portfolios x and

play07:31

y and our risk measure is denoted by Rho

play07:34

which is written we've written it as a

play07:37

function it's said to be coherent if it

play07:40

satisfies all of the four conditions

play07:43

listed here just to name them at this

play07:46

moment these conditions were

play07:47

monotonicity sub additive 'ti positive

play07:50

homogeneity and translational invariance

play07:53

we described each one of them in detail

play07:56

in our videos out of these four we

play07:58

reasoned out that sub additive 'ti is

play08:01

really important it's important from the

play08:03

standpoint of margin calculations if

play08:06

let's say the exchange uses this

play08:08

particular risk measure for calculating

play08:11

initial margins it's important from the

play08:13

standpoint of regulatory capital

play08:15

estimation if let's say the supervisor

play08:17

is using this risk measure for

play08:19

calculating Bank capital and it's also

play08:22

important from the standpoint of setting

play08:24

a conservative upper bound to the

play08:27

combined risk of let's say many

play08:29

positions put together now in terms of

play08:32

how var fares on these conditions then

play08:35

remember that var is not a coherent risk

play08:38

measure the reason being that it does

play08:40

not always satisfy this condition of sub

play08:43

additive 'ti by taking a suitable

play08:46

example we had reasoned out why this is

play08:48

not the key

play08:49

is why is it I mean why is it not

play08:51

coherent now in terms of making work

play08:54

coherent then and you know making it sub

play08:57

additive I should say you should pick a

play08:59

PL distribution which let's say comes

play09:01

from this family of distributions which

play09:04

we call the elliptical distribution

play09:05

family and this normal distribution

play09:08

remember is one member of this

play09:10

elliptical distribution family we then

play09:13

moved on to this disk measure of

play09:15

expected shortfall

play09:16

now this risk measure is something which

play09:20

helps you peek into the tail the VAR

play09:23

only tells you a threshold loss number

play09:26

it doesn't tell you how bad things can

play09:28

get if the losses were to exceed the war

play09:31

in terms of how we defined es we defined

play09:34

it as the probability weighted average

play09:36

of tail losses in terms of the formula

play09:40

it's the expected value of the loss

play09:42

conditional upon this fact that the loss

play09:45

exceeds the war so this is like a

play09:48

conditional expectation when you come to

play09:51

the world of historical simulation which

play09:52

is very simple approach we know and in

play09:55

this world the es can be estimated very

play09:58

simply by a simple average of all losses

play10:02

that lie in the tail remember we had

play10:04

picked the N alpha plus 1 at observation

play10:07

from the right if it's a loss

play10:09

distribution as my var so if I can take

play10:12

these n alpha observations which lie in

play10:15

the tail and take their simple average I

play10:17

arrive at the expected shortfall now the

play10:21

expected shortfall be reasoned out you

play10:23

know with lot of detailed analysis can

play10:26

also be computed using another formula

play10:29

and that formula does not deal directly

play10:31

with losses in the tail but rather it

play10:33

deals with quantiles or we should say it

play10:36

deals with VARs which are computed at a

play10:39

confidence level that is higher than the

play10:42

confidence level for which we are

play10:44

computing this es these confidence

play10:47

levels which are higher than C let's

play10:49

call them C I I pick them which are you

play10:52

know these are comfortably in the tail

play10:54

and I pick them as equally spaced

play10:57

confidence levels it's like I am slicing

play10:59

the tail in

play11:00

- endless slices and then I sort of take

play11:03

an average of all these words to compute

play11:07

my es now more are the number of slices

play11:10

of the stale more accurate with this

play11:13

approximation be for the expected

play11:15

shortfall at this point also note that

play11:17

if your losses are continuously

play11:20

distributed I can write down two

play11:22

formulas for the expected shortfall the

play11:25

first formula remember is like an

play11:29

integral it's like a conditional

play11:30

expectation which looks something like

play11:32

this it's 1 by alpha run this integral

play11:35

from the var up until infinity and then

play11:38

compute let's say X FX DX where X refers

play11:42

to the loss the other formula for this

play11:44

which gives generally the same result as

play11:47

this one is 1 by alpha this time the

play11:50

integral runs over a probability that

play11:52

I'll run it from C which is the level of

play11:54

confidence right up till 1 and I'll do

play11:57

this integration as Q P DP ok so it's

play12:01

like I am taking in a quantile and I'm

play12:04

basically integrating it over the

play12:06

probability values which span from C to

play12:08

1 both these integrals are basically

play12:10

computing an integral over the right

tail, which is the case for the loss distribution. Now let's move on to more general risk measures. We defined a measure which is a kind of weighted average of various quantiles. Running through the quantiles, I express each quantile as q_p: if this is my loss distribution, then the quantile q_p is that number on the horizontal axis to the left of which you have a total accumulated area of p. If I take various q_p's like this and combine them in some weighted-average way, then I arrive at a risk measure which I refer to as a general risk measure. The weighting function is this phi(p); keep changing phi and the risk measure keeps changing. We had said that both the VaR and the ES can actually be subsumed into this definition of the general risk measure by an appropriate choice of the weighting function. We then raised the question: are general risk measures also coherent? Not necessarily. To make them coherent, we moved to a special subset of these risk measures, which we refer to as spectral risk measures. This subset is one in which the weighting function satisfies three conditions. The first is that the weights are non-negative: they can be 0, but they can never be negative. The second, in layperson's terms, is that the weights sum to one, the requirement we always impose on weights in a weighted average; since these weights are continuously defined here, this is like saying the area under the weighting function is equal to 1. The last one is a very important condition. We had said that if you are risk-averse, then the weight you assign to a quantile further into the tail, that is, for p_2 greater than p_1, so that q_p2 is a more extreme loss than q_p1, cannot be less than the weight you assign to the less extreme loss. Drawing it visually: if this is p_2, quite an extreme loss, and this is p_1, then the weight assigned to the former cannot be lower than the weight assigned to the latter. ES satisfies this condition, but VaR does not, so ES is actually a spectral risk measure, and it is also coherent.
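As a small numerical sketch of this idea (my own illustration, not part of the lecture), the weighted average over quantiles can be discretised on a probability grid; the function name, the toy loss sample, and the grid size below are all assumptions for illustration. The VaR weighting concentrates all mass in a spike at p = alpha (which fails the non-decreasing condition), while the ES weighting puts uniform weight on the tail p >= alpha (which satisfies it):

```python
import numpy as np

def general_risk_measure(losses, phi, n_grid=10_000):
    """Discretised general risk measure: a weighted average of loss quantiles,
    with the weights normalised so that they sum to one."""
    ps = (np.arange(n_grid) + 0.5) / n_grid   # probability grid inside (0, 1)
    qs = np.quantile(losses, ps)              # empirical quantiles q_p
    w = phi(ps)
    return float(np.sum(w * qs) / np.sum(w))  # normalised weighted average

alpha = 0.95
losses = np.arange(1.0, 101.0)               # toy loss sample: 1, 2, ..., 100

# VaR: all weight in a narrow spike at p = alpha; the weights drop back to
# zero past alpha, so this phi is NOT non-decreasing (not spectral)
var_phi = lambda ps: (np.abs(ps - alpha) <= 1e-4).astype(float)
# ES: uniform weight on the tail p >= alpha; non-negative, normalisable,
# and non-decreasing, so this phi IS a spectral weighting function
es_phi = lambda ps: (ps >= alpha).astype(float)

var_95 = general_risk_measure(losses, var_phi)   # ~ the 95% quantile
es_95 = general_risk_measure(losses, es_phi)     # ~ the average tail loss
```

On this toy sample the spike weighting reproduces the 95% quantile and the tail weighting reproduces the average loss beyond it, so es_95 exceeds var_95, as expected for ES versus VaR.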

play14:49

We then moved on to defining standard errors. When we calculate VaR, or estimate it based on a sample of observed P&L or returns, it is a statistical estimate, and every statistical estimate, we said, is to be accompanied by some kind of standard error estimate. If q is a point estimate of VaR and se(q) is its standard error, then with se(q) available I can easily use it to calculate or estimate a confidence interval for my VaR.
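As a minimal sketch of that last step (my own illustration; the function name, the z value, and the numbers are assumptions), a normal-approximation confidence interval can be built directly from the point estimate and its standard error:

```python
def var_confidence_interval(q, se_q, z=1.96):
    """Normal-approximation confidence interval for a VaR point estimate q
    with standard error se_q (z = 1.96 gives a roughly 95% interval)."""
    return (q - z * se_q, q + z * se_q)

# Illustrative numbers only: a VaR estimate of 2.33 with standard error 0.10
lo, hi = var_confidence_interval(q=2.33, se_q=0.10)
```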

play15:29

Now let's take a look at how we computed the standard error. We had said: pick the quantile. We used the case of a P&L distribution, so the VaR was in the left tail. We created a thin strip around it, of width h, and computed three probabilities: the probability to the left of the strip, the probability to the right of the strip, and the probability in the middle of the strip. Based on these three, we computed the standard error using this formula. We had then noted quick rules of thumb for how the standard error varies. The first determinant is the size of the sample: as the sample size increases, intuitively you would agree that the standard error decreases. The other determinant we reasoned out was the width of the bin: as the bin width increases, the standard error decreases. Lastly, there is the confidence level at which the VaR was estimated: as you estimate VaR at higher and higher confidence levels, the uncertainty keeps increasing, which means the standard error for high-confidence VaRs is pretty high.
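One common way to make the strip idea concrete (my own sketch, not necessarily the exact slide formula) is the asymptotic quantile result se(q) ~ sqrt(p(1-p)/n) / f(q), where the density f(q) is estimated from the probability mass falling inside the strip of width h around the quantile estimate. The function name, the simulated sample, and the choice of h are assumptions for illustration:

```python
import numpy as np

def quantile_standard_error(sample, p, h):
    """Standard error of the sample p-quantile via se ~ sqrt(p(1-p)/n) / f(q),
    with the density f(q) estimated from the probability mass in a strip of
    width h centred on the quantile estimate q."""
    n = len(sample)
    q = np.quantile(sample, p)
    # probability mass inside the strip [q - h/2, q + h/2]
    in_strip = np.mean((sample >= q - h / 2) & (sample <= q + h / 2))
    f_hat = in_strip / h                       # strip-based density estimate
    return np.sqrt(p * (1.0 - p) / n) / f_hat

rng = np.random.default_rng(42)
losses = rng.standard_normal(100_000)          # simulated loss sample

se_95 = quantile_standard_error(losses, 0.95, h=0.1)
se_99 = quantile_standard_error(losses, 0.99, h=0.1)
se_95_small = quantile_standard_error(losses[:1_000], 0.95, h=0.1)
```

The numbers illustrate two of the rules of thumb: se_99 comes out larger than se_95 (higher confidence level, higher uncertainty), and se_95_small, computed on a hundredth of the sample, comes out larger than se_95 (smaller sample, higher uncertainty).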

play16:44

We had actually finished the chapter with a quick discussion of the QQ plot, which expands as the quantile-versus-quantile plot. This plot is basically one which plots quantiles read from one distribution, let's refer to it as the empirical distribution, against the quantiles read from a specified benchmark distribution. If what you get is a pretty linear graph, that means the specified benchmark distribution is a very good choice to fit the empirical or observed data; if that is not the case, you switch to another benchmark distribution and keep trying out different choices. The second thing we noted about QQ plots was that if the extremities of the plot bend towards a certain axis, then the distribution plotted on that axis has fatter tails. My third observation was that QQ plots can be used to get a handle on parameters such as location, which is another name for the mean, and scale, another name for sigma, of a distribution: if you were to do a linear transformation of one of the distributions in the QQ plot, the transformation changes both the slope and the intercept of the QQ plot, and these changes in slope and intercept are the ones which can help you determine the mean and the sigma of the empirical distribution. Lastly, the fourth point: we had said that the QQ plot is a very good way of identifying outliers in your data. So this was a very quick run-through of the various takeaways in our first chapter, Estimating Market Risk Measures.
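The slope-and-intercept point can be sketched numerically (my own illustration; the function name, the simulated "empirical" sample, and the number of quantile points are assumptions). Plotting empirical quantiles against standard normal benchmark quantiles and fitting a straight line, the intercept recovers the location and the slope recovers the scale:

```python
import numpy as np
from statistics import NormalDist

def qq_location_scale(sample, n_points=99):
    """Estimate location (mu) and scale (sigma) from a QQ plot against the
    standard normal: a straight-line fit to (benchmark quantile, empirical
    quantile) pairs has intercept ~ mu and slope ~ sigma."""
    ps = np.arange(1, n_points + 1) / (n_points + 1)           # quantile levels
    bench_q = np.array([NormalDist().inv_cdf(p) for p in ps])  # benchmark quantiles
    emp_q = np.quantile(sample, ps)                            # empirical quantiles
    slope, intercept = np.polyfit(bench_q, emp_q, 1)           # line through the QQ plot
    return intercept, slope                                    # (mu_hat, sigma_hat)

rng = np.random.default_rng(7)
sample = 0.05 + 0.2 * rng.standard_normal(50_000)   # "empirical" data: N(0.05, 0.2)
mu_hat, sigma_hat = qq_location_scale(sample)
```

With this simulated sample, the fitted intercept lands near the true mean of 0.05 and the fitted slope near the true sigma of 0.2, which is exactly the slope-and-intercept reading described above.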
