Estimating Market Risk Measures: A Quick Review (FRM Part 2, Book 1, Market Risk)
Summary
TL;DR: This script delves into market risk estimation, exploring time series analysis of profits/losses and dividends, and adjusting for the time value of money. It discusses calculating arithmetic and geometric returns, and introduces historical simulation and parametric methods for Value at Risk (VaR) estimation. The script also covers coherent risk measures, expected shortfall, and general risk measures, emphasizing VaR's limitations and the importance of sub-additivity. It concludes with standard error estimation and QQ plots for distribution fitting and outlier detection.
Takeaways
- **Time Series Analysis**: The script discusses the importance of time series data for financial instruments, including price data for listed instruments and MTM valuations for exotic products.
- **Profit and Loss (P&L) Calculation**: It explains how P&L is calculated as the difference between the end-of-period receipts and the initial investment at the beginning of the period.
- **Time Value of Money**: The script covers two methods to adjust P&L for the time value of money: discounting future payments or capitalizing past payments using the risk-free rate.
- **Arithmetic vs. Geometric Returns**: It differentiates between arithmetic returns (average return proxy) and geometric returns (considering income reinvestment) and how they relate.
- **Loss Time Series (LP)**: Introduces the concept of creating a loss time series as the negated version of the P&L time series, important for understanding risk.
- **Historical Simulation VaR**: Describes the historical simulation technique for VaR estimation, which involves sorting P&L observations and selecting a value at a certain confidence level.
- **Parametric VaR Methods**: Discusses three parametric approaches to VaR estimation: normal distribution on P&L, normal distribution on returns, and log-normal distribution on returns.
- **Coherent Risk Measures**: Defines coherent risk measures and their four conditions, emphasizing the importance of sub-additivity for margin calculations and regulatory capital.
- **Expected Shortfall (ES)**: Introduces ES as a risk measure that provides insight into tail losses beyond the VaR threshold, calculated as a weighted average of tail losses.
- **Generalized Risk Measures**: Explains how VaR and ES can be part of a broader class of risk measures, the generalized risk measures, which are weighted averages of quantiles.
- **QQ Plots**: Describes the use of QQ plots for comparing empirical distributions with benchmark distributions, identifying outliers, and estimating distribution parameters.
Q & A
What are the two types of time series inputs required for a VaR estimation approach?
-The two types of time series inputs required are the time series of profits/losses and the time series of dividends/coupons, which can be referred to as the income time series.
How is Profit and Loss (P&L) computed in the context of VaR estimation?
-Profit and Loss (P&L) is computed as the receipts at the end of any period t minus the initial investment at the beginning of the period.
What are the two methods to adjust P&L for time value of money?
-The two methods to adjust P&L for time value of money are by discounting the payments received at time t or by capitalizing the payments made at time t-1, both using the risk-free rate R.
What is the difference between arithmetic and geometric returns?
-Arithmetic returns are calculated by dividing the P&L by the initial investment and are used as a proxy for average or expected return. Geometric returns consider reinvestment of income and a chosen compounding frequency over a designated time period.
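The link between the two return definitions can be sketched in a few lines of Python; the price and income figures below are invented purely for illustration, and the function names are my own:

```python
import math

def arithmetic_return(p_end, income, p_start):
    """Arithmetic return: (P_t + D_t - P_{t-1}) / P_{t-1}."""
    return (p_end + income - p_start) / p_start

def geometric_return(p_end, income, p_start):
    """Geometric (continuously compounded) return: ln((P_t + D_t) / P_{t-1})."""
    return math.log((p_end + income) / p_start)

# Made-up figures: buy at 100, end the period at 101 plus 0.5 of income.
r_a = arithmetic_return(101.0, 0.5, 100.0)   # 0.015
r_g = geometric_return(101.0, 0.5, 100.0)    # ln(1.015), slightly smaller
```

The two are tied together by r_geometric = ln(1 + r_arithmetic), which is why they are close for short (e.g. daily) horizons, where both returns are small.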
How is Value at Risk (VaR) estimated using the historical simulation technique?
-In the historical simulation technique, VaR is estimated by sorting the P&L observations in increasing order and picking the VaR at a certain confidence level as the (n*alpha + 1)th observation from the left.
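The sort-and-pick procedure can be sketched in Python as below; the P&L series is fabricated for illustration and the function name is my own:

```python
def historical_var(pnl, alpha):
    """Historical-simulation VaR at significance level alpha.

    Sort the n P&L observations in increasing order and take the
    (n*alpha + 1)th observation from the left (1-indexed), leaving a
    probability mass of alpha in the loss tail. VaR is reported as a
    positive loss number, hence the sign flip.
    """
    ordered = sorted(pnl)        # most negative (worst) P&L first
    n = len(ordered)
    idx = int(n * alpha)         # 0-indexed position of the (n*alpha + 1)th obs
    return -ordered[idx]

# 100 hypothetical daily P&L observations: -1, -2, ..., -100 (illustrative only)
pnl = [-(i + 1) for i in range(100)]
var_95 = historical_var(pnl, 0.05)   # 6th observation from the left, negated: 95
```

With a loss (LP) series instead of a P&L series, the same observation would be picked from the right and the sign flip dropped.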
Why is the sub-additive property important for risk measures?
-The sub-additive property is important for risk measures because it is crucial for margin calculations, regulatory capital estimation, and setting a conservative upper bound to the combined risk of multiple positions.
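A common way to see VaR's sub-additivity failure is with two independent defaultable bonds; the numbers below (4% default probability, loss of 100) are illustrative assumptions, not taken from the script:

```python
from itertools import product

def var_discrete(outcomes, c):
    """VaR at confidence c for a discrete loss distribution.

    outcomes: iterable of (loss, probability) pairs. VaR is the smallest
    loss L such that P(loss <= L) >= c.
    """
    total = 0.0
    for loss, p in sorted(outcomes):
        total += p
        if total >= c:
            return loss
    return max(loss for loss, _ in outcomes)

# Hypothetical bond: defaults with probability 0.04 (loss 100), else no loss.
bond = [(0.0, 0.96), (100.0, 0.04)]
var_single = var_discrete(bond, 0.95)    # 0: default prob is below 5%

# Portfolio of two independent such bonds.
portfolio = {}
for (l1, p1), (l2, p2) in product(bond, bond):
    portfolio[l1 + l2] = portfolio.get(l1 + l2, 0.0) + p1 * p2
var_port = var_discrete(portfolio.items(), 0.95)   # 100: P(any default) = 7.84%

# Sub-additivity would require var_port <= var_single + var_single; it fails.
```

Each bond alone has a 95% VaR of zero, yet the combined position has a 95% VaR of 100, so diversifying appears to *increase* measured risk.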
What is the Expected Shortfall (ES) and how is it different from VaR?
-Expected Shortfall (ES) is the probability-weighted average of tail losses and provides insight into how bad things can get if losses exceed VaR. Unlike VaR, which only indicates a threshold loss number, ES gives a measure of the average loss in the tail beyond the VaR level.
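Under historical simulation the ES is just the average of the tail observations beyond the VaR pick; a minimal sketch, reusing the same fabricated P&L series as above:

```python
def historical_es(pnl, alpha):
    """Expected shortfall under historical simulation: the simple average
    of the n*alpha tail losses that lie beyond the VaR observation."""
    ordered = sorted(pnl)                  # worst P&L first
    n_tail = int(len(pnl) * alpha)         # observations in the tail
    tail_losses = [-x for x in ordered[:n_tail]]
    return sum(tail_losses) / n_tail

# Illustrative P&L observations: -1, -2, ..., -100
pnl = [-(i + 1) for i in range(100)]
es_95 = historical_es(pnl, 0.05)   # average of the losses 96..100 = 98
```

Note that ES (98 here) sits beyond the historical-simulation VaR (95 on the same data), as it must: it averages only the losses that exceed the VaR threshold.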
How can the standard error of an estimate be computed?
-The standard error of an estimate can be computed by creating a thin strip around the VaR with a width of H and calculating three probabilities: to the left of the strip, to the right of the strip, and in the middle of the strip. These probabilities are then used in a formula to compute the standard error.
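One common form of this strip-based quantile standard error divides the binomial uncertainty of the tail probability by the density at the quantile, where the density is backed out from the strip; the specific sample size, strip width, and mass figures below are assumptions for illustration:

```python
import math

def quantile_standard_error(n, p, f_q):
    """Approximate standard error of an estimated quantile (e.g. a VaR).

    n:   sample size
    p:   tail probability at which the quantile is estimated
    f_q: density at the quantile, approximated as (probability mass
         inside a thin strip of width h around the quantile) / h
    """
    return math.sqrt(p * (1.0 - p) / n) / f_q

# Illustrative figures: 1,000 observations, 95% VaR (p = 0.05),
# strip of width h = 0.1 containing 1% of the probability mass.
h, mass_in_strip = 0.1, 0.01
se = quantile_standard_error(1000, 0.05, mass_in_strip / h)
```

The formula reproduces the rules of thumb in the script: a larger sample (bigger n) or a wider, fuller strip (bigger f_q) shrinks the standard error.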
What is a QQ plot and how is it used in risk estimation?
-A QQ plot is a quantile versus quantile plot that compares quantiles from an empirical distribution with quantiles from a specified benchmark distribution. It is used to identify if the benchmark distribution is a good fit for the empirical data, to identify outliers, and to estimate parameters such as location and scale.
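The quantile pairs behind a QQ plot can be computed without any plotting library; this sketch uses the standard library's `statistics.NormalDist` as the benchmark, and the sample below is synthetic by construction:

```python
from statistics import NormalDist

def qq_points(sample, benchmark=NormalDist()):
    """Pairs of (benchmark quantile, empirical quantile) for a QQ plot.

    If the points fall roughly on a straight line, the benchmark is a
    good fit; the slope and intercept of that line estimate the scale
    (sigma) and location (mean) of the empirical distribution.
    """
    ordered = sorted(sample)
    n = len(ordered)
    # plotting positions (i + 0.5) / n for i = 0..n-1
    return [(benchmark.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(ordered)]

# A sample that is exactly normal with mean 10 and sigma 2 plots as a
# straight line with intercept ~10 and slope ~2.
z = NormalDist()
sample = [10 + 2 * z.inv_cdf((i + 0.5) / 200) for i in range(200)]
pts = qq_points(sample)
slopes = [(y - 10) / x for x, y in pts if abs(x) > 1e-9]
```

Bending away from the line at the extremities would signal fatter tails on the axis toward which the points bend, and isolated points far off the line flag outliers.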
Why is the normal distribution assumption used in parametric VaR approaches?
-The normal distribution assumption is used in parametric VaR approaches to simplify calculations and because it is a commonly used distribution in statistics. However, it may not always accurately represent the true distribution of P&L or returns, especially in the tails.
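The normal-returns version of parametric VaR can be sketched as below; the mean, sigma, and position size are assumed daily figures chosen for illustration:

```python
from statistics import NormalDist

def normal_var(mu, sigma, c, position_value=1.0):
    """Parametric VaR at confidence level c, assuming arithmetic
    returns ~ N(mu, sigma): start at the mean return, move left by z_c
    standard deviations, negate to express VaR as a loss, then scale
    by the current value of the position."""
    z = NormalDist().inv_cdf(c)        # ~1.645 for c = 0.95
    var_percent = -(mu - z * sigma)    # VaR per $1 of notional
    return var_percent * position_value

# Illustrative daily figures: mean return 0, sigma 1%, $1m position.
var_95 = normal_var(0.0, 0.01, 0.95, 1_000_000)   # roughly $16,400
```

The intermediate `var_percent` is the "VaR percent" of the script: a VaR number per dollar of notional, which is then scaled by the position value at t-1.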
What is a spectral risk measure and why is it considered coherent?
-A spectral risk measure is a special subset of general risk measures where the weighting function satisfies three conditions: non-negative weights, weights sum to one, and weights assigned to more extreme losses are not less than those assigned to less extreme losses. It is considered coherent because it satisfies all four conditions of coherent risk measures, including sub-additivity.
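A general risk measure of this weighted-quantile kind can be approximated numerically by slicing the probability axis; the sketch below (function names my own) recovers ES as the spectral measure whose weighting function is flat at 1/(1-c) beyond the confidence level and zero before it, using a standard-normal loss distribution as an illustrative example:

```python
from statistics import NormalDist

def spectral_measure(quantile_fn, phi, n_slices=20_000):
    """General/spectral risk measure: the integral of phi(p) * q_p dp,
    approximated by slicing [0, 1] into equal pieces (midpoint rule)."""
    total = 0.0
    for i in range(n_slices):
        p = (i + 0.5) / n_slices
        w = phi(p)
        if w:
            total += w * quantile_fn(p)
    return total / n_slices

# ES at confidence c uses phi(p) = 1/(1-c) for p >= c and 0 otherwise;
# the loss quantile here is standard normal, purely for illustration.
c = 0.95
es = spectral_measure(NormalDist().inv_cdf,
                      lambda p: 1.0 / (1.0 - c) if p >= c else 0.0)
# For a standard normal, the exact ES at 95% is pdf(z_95)/0.05, about 2.06.
```

Swapping in a different non-negative, non-decreasing weighting function that integrates to one gives a different (still coherent) spectral measure.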
Outlines
Estimation Approaches and Returns
This paragraph introduces the process of estimating market risk, focusing on the inputs required for such estimations. It explains the computation of time series for profits/losses and dividends/coupons, which are essential for calculating profit and loss (P&L). The P&L is adjusted for time value of money using either discounting or capitalizing methods with the risk-free rate. The paragraph then delves into the calculation of two types of returns: arithmetic returns, which serve as a proxy for expected returns, and geometric returns, which consider reinvestment of income. The relationship between these returns is highlighted, and the importance of understanding the distribution of losses is emphasized. The historical simulation technique for Value at Risk (VaR) estimation is introduced, which involves sorting P&L observations and selecting a specific observation based on the confidence level. The limitations and benefits of this technique are discussed, including its simplicity and its reliance on the assumption that the future will mirror the past.
Parametric Approaches and Coherent Risk Measures
The second paragraph shifts the discussion to parametric approaches, starting with the normal distribution assumption for P&L to calculate VaR. It contrasts this with the normal VaR approach, where normality is assumed for arithmetic returns to estimate the VaR. The log-normal VaR approach is also explained, which assumes normality in geometric returns, leading to log-normally distributed prices or valuations. The paragraph then transitions into a discussion on coherent risk measures, which are risk measures that satisfy four conditions: monotonicity, sub-additivity, positive homogeneity, and translational invariance. VaR is noted as not being a coherent risk measure due to its failure to meet the sub-additivity condition. The concept of expected shortfall (ES) is introduced as a risk measure that provides insight into the tail of losses beyond VaR, and it is explained how ES can be estimated using historical simulation or through a formula involving higher confidence levels.
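The log-normal variant described above can be sketched as follows; the daily mean, sigma, and position size are illustrative assumptions:

```python
import math
from statistics import NormalDist

def lognormal_var(mu, sigma, c, position_value):
    """Log-normal VaR: geometric (continuously compounded) returns are
    N(mu, sigma), so the valuation is log-normal. Move z_c standard
    deviations left of the mean return to get a worst-case valuation,
    then take current value minus worst case."""
    z = NormalDist().inv_cdf(c)
    worst_case_factor = math.exp(mu - z * sigma)
    return position_value * (1.0 - worst_case_factor)

# Illustrative daily figures: mu = 0, sigma = 1%, $1m position.
var_ln = lognormal_var(0.0, 0.01, 0.95, 1_000_000)
```

For a small daily sigma this lands close to the normal VaR on the same inputs (about $16,300 versus about $16,400 here), matching the script's remark that normal and log-normal VaR nearly coincide over short horizons.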
General Risk Measures and Standard Errors
This paragraph introduces general risk measures, which are weighted averages of various quantiles and can encompass both VaR and ES with appropriate weighting functions. The concept of spectral risk measures is discussed, which are a subset of general risk measures that satisfy three specific conditions to ensure coherence. The paragraph then explains the calculation of standard errors for VaR estimates, which provide a measure of uncertainty. Factors affecting standard error, such as sample size, bin width, and confidence level, are discussed. The chapter concludes with a discussion on QQ plots, which are used to compare an empirical distribution with a benchmark distribution, and their utility in identifying outliers and estimating distribution parameters.
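The "slice the tail and average the VaRs" approximation of ES mentioned above, itself a weighted average of quantiles with equal weights beyond the confidence level, can be sketched in a few lines; the standard-normal loss quantile used here is an illustrative choice:

```python
from statistics import NormalDist

def es_from_vars(var_fn, c, n_slices=10_000):
    """Approximate ES at confidence c by averaging VaRs computed at
    equally spaced confidence levels slicing the tail beyond c."""
    total = 0.0
    for i in range(n_slices):
        ci = c + (1.0 - c) * (i + 0.5) / n_slices   # midpoints of the slices
        total += var_fn(ci)
    return total / n_slices

# For a standard-normal loss distribution, VaR at level c is just z_c.
es_95 = es_from_vars(NormalDist().inv_cdf, 0.95)   # near the exact value ~2.06
```

As the script notes, the more slices of the tail, the more accurate this approximation becomes.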
Summary of Market Risk Estimation
The final paragraph provides a summary of the key takeaways from the chapter on estimating market risk measures. It encapsulates the various methods for calculating VaR, the importance of understanding the distribution of returns and losses, and the use of QQ plots for distribution fitting and outlier detection. The summary underscores the importance of choosing the right distribution for risk estimation and the implications of different risk measures for regulatory capital and margin calculations.
Mindmap
Keywords
Time Series
P&L (Profit and Loss)
Time Value of Money
Arithmetic Return
Geometric Return
Historical Simulation
Value at Risk (VaR)
Coherent Risk Measure
Expected Shortfall (ES)
Standard Errors
QQ Plot
Highlights
Introduction to inputs required for VaR estimation: time series of profits/losses and dividends/coupons.
Calculating Profit & Loss (P&L) as the difference between receipts at the end of a period and initial investment.
Adjusting P&L for time value of money by discounting or capitalizing payments using the risk-free rate.
Definition and calculation of arithmetic returns as a proxy for average or expected return.
Explanation of geometric returns considering reinvestment of income and compounding frequency.
Linking arithmetic and geometric returns using a specific expression.
Creation of a time series for losses (LP) as the negated version of the P&L time series.
Historical simulation technique for VaR estimation by sorting P&L observations.
Assumption of future following the past in historical simulation and its benefits.
Transition from nonparametric to parametric world with three approaches for VaR estimation.
Normal distribution assumption on P&L for calculating VaR in the first parametric approach.
Normal distribution assumption on arithmetic returns for the second parametric approach.
Log-normal distribution assumption on geometric returns for the third parametric approach.
Definition of coherent risk measures and their four conditions: monotonicity, subadditivity, positive homogeneity, and translational invariance.
Explanation of why VaR is not a coherent risk measure due to not always satisfying subadditivity.
Introduction to expected shortfall (ES) as a risk measure that provides insight into the tail beyond VaR.
Calculation of ES using historical simulation by averaging losses in the tail.
General risk measure as a weighted average of various quantiles and its relation to VaR and ES.
Spectral risk measure as a special subset of general risk measures that satisfies three conditions to ensure coherence.
Discussion on standard errors in VaR estimation and their relation to sample size, bin width, and confidence level.
QQ plot explanation for comparing empirical distribution with a benchmark distribution and its uses in identifying outliers and determining distribution parameters.
Transcripts
having done this reading in a lot of
detail now let's take a look at its key
takeaways we started off with the
various inputs that are required for any
VaR estimation approach these inputs are
number one the time series of profits
slash losses this time series we
computed using two time series the
first of them was the time series of
prices if it's a listed instrument or
the time series of MTMs or valuations
if it's let's say an exotic product the
second time series was that of dividends
slash coupons which you can refer to as
the income time series so we compute the P&L
as let's say the receipts at the end of
any period t minus the initial
investment let's say at the beginning of
the period we then took a look at two
ways in which we can let's say adjust
this definition of PL for time value of
money either by discounting the payments
received at time T or by capitalizing
the payments made at time t minus 1 both
using let's say the risk-free rate R we
then moved on to defining two kinds of
returns the first return was what we
referred to as arithmetic returns we
calculated these by just taking the
pl time series take the pl and divide it
by the initial investment that's an
arithmetic return remember this
arithmetic return is something which you
use as a proxy for the average or the
expected return the other kind of return
that we took a look at was a geometric
return this return is for a chosen or
designated time period and it takes into
account reinvestment of this income and
a certain chosen compounding frequency
so if you assume that your compounding
happens with the continuous
compounding then I can define my
geometric return as the log of the final
proceeds divided by the initial
investment the two kinds of returns
arithmetic and geometric can be linked
to one another using this expression
generally speaking the two returns are
close to one another if your period over
which these returns are earned is a very
small period let's say a daily period
then we also took a look at another time
series which is that of losses so I can
create another time series let me
call it LP I define this time
series to be let's say the negated version of
the pl time series and if we are talking
about distributions then we said that
the mean of the LP distribution is
simply the negated value of the mean of
the PL distribution and the standard
deviation of the LP distribution we said
is the same as the standard deviation of
the PL distribution when it comes to the
simplest technique for VaR
estimation we said it's the historical
simulation technique all that we do in
this technique is we take the
observations that are given to us assume
that these are let's say the PL
observations we sort them in an
increasing order let's say from the
smallest to the highest smallest would
therefore refer to negative entries or
losses and we pick the VaR at a certain
confidence level let's say at a c or a
certain significance level let's call it
alpha as the (n alpha plus 1)th
observation from the left here I am
assuming that I am dealing with a PL
time series number one and number two I
am dealing with n observations so if I
pick the (n alpha plus 1)th observation I
should say from the left then that
observation would clearly leave a
probability mass of alpha in the left
tail had I been working with the LP
series we had said that we will pick our
VaR again to be the (n alpha plus 1)th
observation but this time we'll pick it
from the right okay so this was about
historical simulation keep this thing in
mind that it's a very simple technique
but at its core it makes this assumption
that the future will follow the past a
very important benefit of this technique
is that it accommodates real-life
empirical observed distributions it does
not impose any distributional assumption
on the PL or the returns now let's move
from this nonparametric world which
historical simulation was in to a
parametric world in this parametric
world we defined three approaches the
first approach was one in which we
impose as we do in any parametric
approach a normal distribution
assumption on this time the PL the
profit and loss so if you do that then I
can calculate the VaR which in the PL
world remember is in the left tail by
starting with the mean moving to the
left and that's why you have a minus
these many multiples of your standard
deviation of the PL and since we are
talking about a PL and since VaR is a
loss number we had negated it to create
a VaR had this been an LP distribution
you would have computed the VaR by
starting with the mean which is mu LP
and moving to the right this time by
these many multiples of the standard
deviation sigma PL is the same as sigma LP
now moving on to the second approach in
this category we called it the normal
VaR in this approach we impose the
normal distribution assumption on the
arithmetic returns and let's assume that
these returns are distributed normally
with this as the mean and this as the
standard deviation to calculate the VaR
which in this case would assume that
valuations or prices are normally
distributed I'll start with the mean
return move to the left and that's why
there's a minus these many multiples of
the standard deviation what this gives
me along with the minus is what we call
a VaR percent because it's like a VaR
number for $1 of notional invested this
I have to scale with the current
valuation of my position which is P t
minus 1 I am assuming I'm standing at
time t minus 1 and this gives me a VaR
at this confidence level c the next
approach in this parametric category was
the log-normal VaR approach in this
approach I impose the normality
assumption
this time on the geometric return which
is the continuously compounded return if
you do that then we know that the prices
or valuations are log-normally
distributed and that's why we refer to
this VaR as a log-normal VaR in terms of
its VaR calculation I should say how do
we calculate it start with the mean
return move to the left by these many
multiples of the standard deviation this
gives you some kind of a worst-case
valuation so current valuation minus the
worst case scaled by the total size of
your position this gives you the VaR
keep this thing in mind that normal and
log normal var are close to one another
if the period over which they are being
computed I mean the horizon is small
let's say daily we then moved on to
defining coherent risk measures that
means let's say we have portfolios X and
Y and our risk measure is denoted by rho
which we've written as a
function it's said to be coherent if it
satisfies all of the four conditions
listed here just to name them at this
moment these conditions were
monotonicity sub-additivity positive
homogeneity and translational invariance
we described each one of them in detail
in our videos out of these four we
reasoned out that sub-additivity is
really important it's important from the
standpoint of margin calculations if
let's say the exchange uses this
particular risk measure for calculating
initial margins it's important from the
standpoint of regulatory capital
estimation if let's say the supervisor
is using this risk measure for
calculating bank capital and it's also
important from the standpoint of setting
a conservative upper bound to the
combined risk of let's say many
positions put together now in terms of
how VaR fares on these conditions then
remember that VaR is not a coherent risk
measure the reason being that it does
not always satisfy this condition of
sub-additivity by taking a suitable
example we had reasoned out why this is
the case I mean why it is not
coherent now in terms of making VaR
coherent and making it sub-additive
I should say you should pick a
PL distribution which let's say comes
from this family of distributions which
we call the elliptical distribution
family and the normal distribution
remember is one member of this
elliptical distribution family we then
moved on to this risk measure of
expected shortfall
now this risk measure is something which
helps you peek into the tail the VaR
only tells you a threshold loss number
it doesn't tell you how bad things can
get if the losses were to exceed the VaR
in terms of how we defined ES we defined
it as the probability weighted average
of tail losses in terms of the formula
it's the expected value of the loss
conditional upon this fact that the loss
exceeds the VaR so this is like a
conditional expectation when you come to
the world of historical simulation which
is a very simple approach we know and in
this world the ES can be estimated very
simply by a simple average of all losses
that lie in the tail remember we had
picked the (n alpha plus 1)th observation
from the right if it's a loss
distribution as my VaR so if I can take
these n alpha observations which lie in
the tail and take their simple average I
arrive at the expected shortfall now the
expected shortfall we reasoned out you
know with a lot of detailed analysis can
also be computed using another formula
and that formula does not deal directly
with losses in the tail but rather it
deals with quantiles or we should say it
deals with VaRs which are computed at a
confidence level that is higher than the
confidence level for which we are
computing this ES these confidence
levels which are higher than c let's
call them c i I pick them which are you
know comfortably in the tail
and I pick them as equally spaced
confidence levels it's like I am slicing
the tail into endless slices and then I
take an average of all these VaRs to compute
my ES now the more slices
of the tail the more accurate this
approximation will be for the expected
shortfall at this point also note that
if your losses are continuously
distributed I can write down two
formulas for the expected shortfall the
first formula remember is like an
integral it's like a conditional
expectation which looks something like
this it's 1 by alpha run this integral
from the VaR up until infinity and then
compute let's say x f(x) dx where x refers
to the loss the other formula for this
which gives generally the same result as
this one is 1 by alpha this time the
integral runs over a probability I'll
run it from c which is the level of
confidence right up till 1 and I'll do
this integration as q p dp okay so it's
like I am taking in a quantile and I'm
basically integrating it over the
probability values which span from c to
1 both these integrals are basically
computing an integral over the right
tail which is the case for the loss
distribution now let's move to more
generic risk measures and we defined a
measure which is like some kind of a
weighted average of various quantiles so
if I run through various quantiles
I have you know expressed my quantile
as q p so if this is my loss distribution
then a quantile q p would be that number
on the horizontal axis to the left of
which you will have a total accumulated
area of p so if I can you know take
various q p's like this and combine them
in some kind of a weighted average way
then I arrive at a risk measure which I
refer to as a general risk measure the
weighting function is this phi you keep
changing the phi and the risk measure
keeps changing we had said that both the
VaR and the ES can actually be subsumed
into this definition of the general risk
measure with
an appropriate choice of the weighting
functions which are
given here now we then raised
this question are general risk measures
also coherent and we said not
necessarily so to make them coherent we
moved to a special subset of these risk
measures which we refer to as a spectral
risk measure this subset is one in which
the weighting function satisfies three
conditions the first one is that these
weights are non-negative they can be 0
but they can never be negative these
weights you know in very layperson's
terms sum to one that's like a
requirement which we always impose on
weights when we do a weighted average in
this world since these weights are
continuously defined it's like saying
the area under the weighting function is
equal to 1 the last one is a very
important condition we had said if you
are risk-averse
then the weight that you assign to a
certain quantile which is further into
the tail that means this quantile p
2 is greater than a quantile to its left
which is let's say p 1 then the weight
which you assign to a worse or an
extreme loss cannot be less than the
weight that you assign to a less extreme
loss that means now drawing it
visually if this is p 2 quite an extreme
loss and this is p 1 then the weight
which you assign to this guy cannot be
lower than the weight you assign to this
guy so ES satisfies this condition
but VaR does not so ES is actually a
spectral risk measure and it's also
coherent we then moved on to defining
standard errors so when we calculate VaR
or estimate it based on a sample of
observed let's say P&L or returns then
it's like a statistical estimate every
statistical estimate we said is to be
accompanied by some kind of a standard
error estimation because if we have the
standard error let's say for q which
is a point estimate of VaR and I am
saying the standard error is se of q if I
have this standard error available with
me I can easily use it to calculate
or estimate a confidence interval for my
VaR now let's take a look at how we
computed the standard error we had said
let's pick the quantile we had used the
case of a PL distribution so my VaR was
in the left tail
this was my VaR we had created a thin
strip around it
this strip had a width of h and we had
computed three probabilities the
probability to the left of this strip to
let's say the right of the strip and in
the middle of the strip based on these
three we had computed the standard error
using this formula we had then done some
quick rules of thumb in terms of how
this standard error varies with respect
to the size of the sample so as the
sample size increases intuitively you
could agree that the standard error
decreases then the other determinant we
had you know reasoned out was the width
of this bin we had said as the bin width
increases the standard error decreases
and lastly the determinant was the
confidence level at which this VaR was
estimated we had said as you estimate
VaRs which are let's say for higher and
higher confidence levels their
uncertainty goes on increasing that
means the standard error for high
confidence VaRs is pretty high we had
actually finished this chapter with
a quick discussion of this plot which we
refer to as a QQ plot it expands as the
quantile versus quantile plot this plot
is basically one which plots quantiles
which are read from one distribution
let's refer to it as an empirical
distribution plotted against the
quantiles read from a specified
benchmark distribution if what you
achieve is a pretty linear graph
something like this that means that the
specified benchmark distribution is a
very good choice to fit the empirical or
observed data if it's not the case then
you switch to another benchmark
distribution and keep trying out
different choices the second thing which
we noted about QQ plots was that if you
plot this QQ plot and the extremities of
the
plot you see them bending towards a
certain axis then the distribution
plotted on that axis has fatter tails so
that was my second observation my third
observation was that QQ plots they can
be used to somehow get a handle on the
parameters such as location which is
another name for mean and scale another
name for Sigma of any distribution if
you were to do a linear transformation
of one of the distributions in the QQ
plot then this transformation goes and
changes both the slope and the intercept
of this QQ plot these changes in slope
and intercept are the ones which can
help you sort of determine the mean and
the Sigma of this empirical distribution
lastly we had the fourth point we had
said that QQ plot is a very good way of
identifying outliers
in your data so this was a very quick
run-through of various takeaways in our
first chapter estimating market risk
measures