FAQs about the Market Libraries and volatility


Data Sources

Where does The Currency Library data come from?

Currency rates are supplied by DTN Market Access vs. Tullet Tokyo and Barclays. They are interbank rates.

Volatility FAQs

What is volatility, anyway?

Well, at least to some degree, it depends upon whom you ask… As a terse, general definition it would be hard to improve upon the following, from a paper recently published in the derivatives journal, Futures and Options World:

Volatility can be described as the uncertainty surrounding a forecast value…

‘Educated Estimates’
Kostas Giannopoulos and Brian Eales
Futures and Options World
April 1996, p. 45

And for a more closely specified definition from a particularly authoritative source:

…practitioners often refer to the term "volatility" when speaking of movements in financial prices and rates. In what follows we use the term volatility to mean the standard deviation of continuously compounded financial returns…

RiskMetrics - Technical Document, p.77
Fourth Edition, 1996
J.P. Morgan/Reuters
New York, NY

However general or closely specified the definition may be, the never-ending search for better tools to reduce risk and enhance reward - combined with faster, more powerful and cheaper computer-based analysis - has brought the study of volatility to the forefront of financial study and practice in recent years, where it is likely to remain. The contents of the Market Libraries place powerful tools for state-of-the-art volatility analysis in the hands of any financial practitioner with access to the Internet.

What types of Volatility calculations will I find in the Market Libraries?

The Market Libraries include charts utilizing three different methods of calculating price and rate volatility: "classic" Historical Volatility (VolSD), Exponentially Weighted Moving Average volatility (VolEWMA), and GARCH volatility (VolGARCH).

Why do the Market Libraries offer charts with differing time-frames?

The Market Library offers several different time-frames, or "x-axis windows", designed to meet the requirements of different categories of financial professionals. Corporate strategic planners, for instance, often require the perspective of 5-year and 3-year time-frames, while dealers and traders may need the detailed focus of 1-month charts.
How can Dealers use theFinancials.com Volatility Charts?

There are many specialties in the spot and forward dealing world - not only in terms of specific instruments or commodities, but in terms of trading and risk-management horizons, multi-currency portfolio risk and style of trading as well. State-of-the-art volatility analyses tailored to meet individual requirements can make an important contribution to the successful dealing desk.
What use can corporate financial managers make of theFinancials.com Volatility Charts?

Corporate financial managers often have responsibility for more than one category of exchange rate risk management, ranging from "macro", long range strategic protection, to monthly cash flow series and one-off capital projects. theFinancials.com Volatility Charts chosen to reflect specific situations can be a big help in making effective boardroom presentations, as well as in developing, implementing and monitoring risk management strategies.
What use can investors make of theFinancials.com Volatility Charts?

Exchange rate and price risk accompanies a very wide range of investment choices in the international debt and equity markets. Often, with hindsight, exchange rate and price movements turn out to be as important to net results as the "domestic-context" price or rate behavior of the underlying instrument itself. Having the right charts to monitor and evaluate foreign-exchange rate and commodity price risk for individual and portfolio situations can be a big help in the search for superior results.
Are Market Library Charts available for other instruments or in other formats?

When we invite our clients to think of the Market Libraries as an in-house resource, we aren’t kidding. Only a small percentage of our chart resources are on display at any given time.
  • If you don’t see the specific rate, instrument or cross rate you need in our inventory, ask. Chances are we can have the charts you need up and running inside 24 hours.
  • If you need one or more charts comprised of a specific set of exchange rate or commodity analyses to match your dealing, risk-management, or investment requirements, ask. We can probably have your custom charts available within 24 hours.

Example: You need a combination of Italian Lira-based GARCH charts vs. several other currencies and in varying time-frames, all packaged in a very specific way. No problem.

How does theFinancials.com calculate Historical Volatility (VolSD)?


Actually, any method of calculating the volatility of a time series which relies on past values could be classified as a measure of "historical" volatility. However, in the financial markets, and the financial option markets in particular, "Historical Volatility" is generally taken to mean the measure of volatility specified by Professors Fischer Black and Myron Scholes in describing the required inputs to the seminal Black-Scholes Model for European-style options (1973). This measure centers upon a standard deviation calculation applied to a historical time series of prices or rates, hence the time series label VolSD used in Market Library Charts.

In one sentence, this measure could be defined as

the standard deviation of the change in the natural logarithm of the underlying asset’s price that is expected over the lifetime of the option, expressed as an annual rate, and obtained from a historical time series of asset prices.

  • The Historical Volatility calculation makes the assumption that the daily returns in question are normally distributed, and that the time series in question can be considered a "random walk."
  • The Historical Volatility calculation requires the arbitrary selection of a "sample period" for the required analysis, i.e. a number of days, ending with the most recent available date, upon which to base the standard deviation. (Note that these are "business" or "trading" days, not calendar days.) Fifty days and twenty days are perhaps the time-frames in most general use, and Market Library Charts are available for both.
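As a rough sketch of the calculation just described (the function name, the 252-trading-day annualization convention, and the sample-standard-deviation choice are illustrative assumptions, not a description of any production code):

```python
import math

def historical_volatility(prices, window=20, trading_days=252):
    """Annualized standard deviation of daily log returns (VolSD-style sketch)."""
    # Daily continuously compounded returns: ln(P_t / P_{t-1})
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    sample = returns[-window:]                  # most recent `window` trading days
    mean = sum(sample) / len(sample)
    # Sample variance (n - 1 denominator), then annualize by sqrt(trading days)
    var = sum((r - mean) ** 2 for r in sample) / (len(sample) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)
```

A flat price series produces zero volatility, and a steadily compounding one nearly so, since the log returns are then constant.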

How does theFinancials.com calculate Exponentially Weighted Moving Average volatility (VolEWMA)?


The Exponentially Weighted Moving Average (EWMA) approach to characterizing volatility is an example of exponential smoothing. Exponential Smoothing (ES) techniques employ one or more exponential smoothing parameters to give more weight to recent observations and less weight to older observations, in an attempt to respond "dynamically" to the changing value of the time series. The smoothing process is exponential because the weights employed are not arithmetic but instead lie along an exponential curve.

EWMA is an example of the simplest form of the exponential smoothing method, or Single Exponential Smoothing (SES), which, logically enough, employs a single smoothing parameter. Several assumptions must be made about the nature of the data making up the underlying time series, in order for an SES technique like EWMA to be an appropriate analytic tool:

  • The first assumption is that the process generating the data is "stationary", meaning that the data is in equilibrium, or moves randomly, around the underlying mean.
  • Furthermore, proper use of SES techniques assumes that variance around the mean remains constant over time and that no systematic trend or "seasonality" exists in the day-to-day changes in the time series.

How does the theFinancials.com EWMA measure of volatility differ from the J.P. Morgan RiskMetrics© approach used in VaR calculations?
The J.P. Morgan RiskMetrics© approach to estimating and forecasting volatility uses an exponentially weighted moving average model (EWMA) which is virtually identical to the method used for theFinancials.com EWMA Volatility calculations.

Both models require that a "decay factor" be specified, in order to determine the rate at which the weighting of past observations diminishes. In order that Market Library EWMA Charts provide closely-comparable output to RiskMetrics volatility estimation, our model uses the same decay factor as RiskMetrics’ EWMA, i.e. 0.94.

Likewise, in choosing between yesterday's and today's price movement to reflect "market change", both the VaR and theFinancials.com versions utilize "today’s market change" for this purpose.
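A minimal sketch of such an EWMA variance recursion, assuming the RiskMetrics-style decay factor of 0.94 and illustrative function and variable names (not the actual theFinancials.com implementation):

```python
import math

def ewma_volatility(returns, decay=0.94):
    """Exponentially weighted volatility estimate (VolEWMA-style sketch).

    Each step applies:  variance_t = decay * variance_{t-1} + (1 - decay) * return_t ** 2
    so older squared returns fade geometrically while today's enters with weight (1 - decay).
    """
    variance = returns[0] ** 2              # seed with the first observation
    for r in returns[1:]:
        variance = decay * variance + (1 - decay) * r ** 2
    return math.sqrt(variance)
```

Because the weights decay geometrically, a single large return lifts the estimate immediately and then washes out gradually, rather than persisting at full strength and dropping off a cliff as in the fixed-window calculation.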

What is GARCH volatility?

GARCH stands for Generalized Autoregressive Conditional Heteroskedasticity - and that's a mouthful for anyone. GARCH models are comparative youngsters on the econometric modeling scene, having been first specified by R.F. Engle in 1982 and T. Bollerslev in 1986. In a relatively short period of time, they have become popular in many areas of econometrics, including dealing, trading, hedging and investing in and with financial instruments, in large part because they are specifically designed to model and forecast changes in variance, or volatility per se.

(First off, let’s get a handle on the "heteroskedasticity" part of GARCH. This term refers to a condition which exists when the differences between actual and forecast values do not have a constant variance across an entire range of time series observations.)

One reason many sophisticated practitioners prefer GARCH volatility estimation and forecasting techniques over other approaches relates to the fact that GARCH model specification "makes sense" in terms of the real-world context within which professionals actually operate. Broadly speaking, a GARCH(1,1) model incorporates the assumption that today's volatility depends upon three factors:

  • a constant,
  • yesterday’s "news" about volatility, and
  • yesterday’s forecast variance.

This specification parallels, in an informal sense, an environment wherein a dealer or trader typically tries to assess today’s volatility in the context of

  • a longer-term "baseline" or "average" value,
  • where yesterday’s volatility was "expected" to be, and
  • where yesterday’s volatility actually "turned out" to be.
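The three-factor structure above can be sketched as a simple variance recursion; the parameter values below are purely illustrative, not fitted estimates (in practice omega, alpha and beta are estimated from the data, typically by maximum likelihood):

```python
def garch_variance(returns, omega=1e-6, alpha=0.05, beta=0.90):
    """One-step GARCH(1,1) variance recursion with fixed, illustrative parameters.

    variance_t = omega                       (a constant)
               + alpha * return_{t-1} ** 2   (yesterday's "news" about volatility)
               + beta * variance_{t-1}       (yesterday's forecast variance)
    """
    variance = omega / (1 - alpha - beta)    # start at the long-run average variance
    for r in returns:
        variance = omega + alpha * r ** 2 + beta * variance
    return variance
```

With no observations the recursion simply sits at its long-run level, and a large return raises the next period's variance relative to a quiet day - the "news" channel described above.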

Furthermore, the GARCH specification incorporates and handles well the frequently-observed financial time series behavior called "volatility clustering." Volatility clustering describes the situation wherein large volatility movements are more likely to be succeeded by further large volatility movements of either sign than by small movements.

Another aspect of financial price and rate behavior that GARCH handles particularly well relates to the speed with which it "re-adjusts" in the aftermath of event-induced "shocks" to the time series in question. This can be observed time after time in Market Library chart formats which display the varying responses of three different measures of volatility to the same underlying event.

Does the GARCH specification do a "perfect" job of estimating and forecasting volatility? Of course not. Interestingly enough, even the academic underpinnings of the GARCH specification do not support the expectation of superior "point forecasts" for the underlying series. However, for gaining a sophisticated grasp of how volatility has behaved in the past, especially the recent past, it pretty much represents state-of-the-art for most practitioners.

Which measure of Historical Volatility is best?

Forget the temptation to define the "best" measure of volatility as the one that always leads to successful market decisions, 'cause there ain't no such animal... On the other hand, some general conclusions regarding the comparative advantages and disadvantages of the three volatility methodologies employed in the Market Libraries can be drawn.

Before attempting to discuss the comparative advantages and disadvantages embodied in different approaches to analyzing the volatility of exchange rates - or any other financial price or rate series - it is important to distinguish between several different yet related objectives of such analysis. These objectives are:

  1. to appropriately model, and thereby understand, the changing structure of a time series,
  2. to correctly assess the volatility-based risk associated with a particular financial instrument or rate, and
  3. to accurately forecast the future value of a particular instrument or rate.

1) "Classic" Historical Volatility (VolSD).

On the plus side:

  • it’s about the simplest measure of volatility available,
  • it’s computationally cheap to calculate, and
  • it remains a "benchmark", against which alternative approaches can be measured, especially for the depiction of current and past volatility, as opposed to forecasting

On the negative side:

  • as an "unconditional" model, it offers no "dynamic update" capability, i.e. it applies fixed and equal weights to all observations during the sampling period. This is a serious drawback for forecasting purposes.

    (For instance, in the case of a 20-day measure, since the formulation is based upon the mean of the 20 daily measures, in the event of a large move, or "shock" in the data series, the forecast for the next period is unable to catch up, and its impact contributes no more to the measure than an identical shock occurring 19 days earlier.

    In a similar fashion, equal weighting means that the impact of a shock input persists at "constant strength" for a further 19 days, whereupon it abruptly disappears.)

  • by applying 1- or 2-standard-deviation upper and lower limits to a VolSD forecast, users can, within the model's terms of reference, discuss the resulting range as possessing either a 68% or 95% chance of encompassing movement by the underlying price series over the next forecast period. However, this range will remain constant from day to day, and will not take into account whether current volatility is at historically high or low levels. This is counter-intuitive for market practitioners.


2) Exponentially Weighted Moving Average volatility (VolEWMA).

The use of exponential smoothing in both the theFinancials.com and RiskMetrics EWMA calculation represents an attempt to deal with the weaknesses inherent in the "classic" Historical Volatility calculation. This is not to say, however, that exponential smoothing results in a "perfect" depiction of volatility as it changes over time.

On the positive side:

  • EWMA models update their forecasts as new information becomes available, which the "classic" approach does not,
  • EWMA models employ "exponential" weighting in a fashion which imparts more importance to recent observations than to older datapoints. Thus, following a "shock", EWMA models react more quickly not only to the shock itself, but to the "recovery" of the marketplace as the impact of the shock is absorbed.
  • EWMA models are only slightly more complex than the "classic" model, which means the computational resources required to "crank" them are still reasonable. This can be important, especially in the context of RiskMetrics calculations, where correlation calculations upon hundreds of financial series require many thousands of computations.

On the negative side:

  • criticism of SES models, of which EWMA is an example, centers on the fact that they are considerably simpler in their formulation than full-blown conditional econometric models (for instance, those comprising the ARCH family), and therefore do not perform as well under various sets of circumstances.

    For instance, since SES models use only one coefficient to establish the impact of the previous period’s errors and volatility on current volatility, whereas GARCH models use two, SES models should be expected to recover more slowly from large shocks.
  • Additionally, at least one study suggests that after a short period of low-volatility days, SES model output may drop to levels which "systematically underestimate the variance." (Giannopoulos and Eales, 'Educated estimates', Futures and Options World, April 1996, p. 45.)

3) GARCH volatility (VolGARCH)

On the plus side:

  • GARCH models are sophisticated, state-of-the-art econometric models designed for the specific purpose of modeling volatility. As such they should be expected to do as good a job as, or better than, any available alternative, and most empirical studies tend to confirm this assumption.

On the negative side:

  • GARCH models are among the most "computationally expensive" approaches to modeling volatility in current use, which explains why very little GARCH-based analysis found its way into the public domain prior to the Market Libraries.
  • An often-omitted point in discussions of GARCH performance is the fact that whatever their superior qualities in regard to the modeling of volatility, by their very nature, GARCH forecasts should not be expected to necessarily outperform other approaches to forecasting future values.
    Various studies support this point. (See RiskMetrics Technical Document, Fourth Edition, p.89.)

In summary, then, assuming that the computational intensity involved in cranking out GARCH values is not a problem, the GARCH methodology should provide

  • forecasts equal to any current alternative approach, plus
  • widely-conceded superior modeling of the actual volatility-inducing process.

Isn't Implied Volatility better than Historical Volatility?

Most currency and commodity option dealing desks rely very heavily upon the volatility implied by the actual trading levels of currency and commodity option premiums for the pricing, risk management and pursuit of profit in their currency option books.
  • This "implied volatility" is obtained by simply "re-writing" standard option valuation models to a) utilize the current pricing level of a specified option contract as a model input, and to b) produce a consequent "implied volatility" as output.
  • If this process is undertaken for a particular European-style option contract that is priced "at the money", (where the exercise price of the option is equal to the current forward rate), a case can be made that the "indirect" volatility thus implied can be considered as a legitimate alternative to the directly-measured volatility of the underlying currency.
  • Furthermore, implied volatility is considered to be "forward-looking", in that it is a measure of market expectations for the exchange rate in question, whereas any "historical" approach can be considered, by definition, to be "backward-looking".
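The "re-writing" described above can be sketched as follows, using the plain Black-Scholes formula for a European call and root-finding by bisection (currency desks would typically use the Garman-Kohlhagen variant with two interest rates; all names and parameters here are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, t, vol):
    """Black-Scholes price of a European call: volatility in, price out."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, t, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the model by bisection: find the vol that reproduces `price`.

    Works because the call price is strictly increasing in volatility.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, t, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding a model price back through the inversion recovers the volatility that generated it, which is exactly the sense in which an observed market premium "implies" a volatility.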

There are, however, several major conceptual and practical problems with this approach:

  • The quality of the volatility output implied for the underlying currency by this method is purely a function of the option model itself, which introduces a host of problematic issues related to the valuation of derivative instruments per se,
  • The volatility level thus implied has a specific forecast horizon equal to the time-to-expiration of the option contract in question,
  • Consistent and sufficiently granular implied volatility data from the OTC options markets is difficult to source, except for major players and major exchange rates. The amount of direct-observation data from the underlying foreign exchange markets available for analysis is vastly larger than that available via indirect observation of option transactions.
  • Studies comparing direct, "historical" or "backward-looking" methods (principally GARCH) to the indirect, implied, "forward-looking" approach have not led to a consensus on the part of researchers regarding the efficacy of option-based results.

Conclusion: While the case for forecasting underlying exchange rate and price behavior from the volatilities implied by market transactions in derivative instruments has its exponents, most spot and forward practitioners will most likely continue to rely primarily upon one or more directly-modeled approaches for the foreseeable future, perhaps with one eye on option-based analysis when and where it is available.
