Thursday, August 9, 2007

Tectonic Shifts in the Quantitative Management World

I have several friends who manage quantitative portfolios across a range of asset classes, and they have pretty much *all* been having a very difficult time over the past few weeks as the subprime mortgage crisis provokes a leverage crisis, which in turn provokes a liquidity crisis, which then provokes an asset crisis. It seems that we are in the midst of one of those 100+ year events that come by every 5 to 10 years or so. Some managers are reporting “20+ sigma” events - moves that, under normal circumstances, shouldn’t happen more than once in the lifetime of the universe (if that). Rumors circulate that Goldman Sachs’s Global Alpha fund is shutting down, or is at least in major liquidation, and other managers are running scared, although no doubt comforted that this is a strategy-wide crisis rather than any one manager’s mistake. So obviously “something’s afoot,” but what is it?

There are two interesting (in the Chinese sense of the word) things happening right now. One is seeing how trouble in a relatively small sector of the credit market is triggering a major shift that could have 1929-style repercussions across many asset classes. The other is that despite the very large number of quantitative funds, many established only in the last few years, there is not a great deal of diversification in the strategies employed. At first we had optimization as a method to control risk within a portfolio; then risk managers developed enterprise risk management to protect firms from loading up on the same risk factors across portfolios; now we may be faced with an economy-wide risk management problem, where multiple financial enterprises are all exposed to the same risk factors and get caught in a cycle of self-eliminating value, even if the fundamentals of the underlying companies in the economy are relatively strong.

In 1929, the pancaking collapse of the stock market was linked to a public belief that stocks simply could not lose for long, combined with excessive use of leverage to increase returns. There was rampant speculation by the lay population, the use of techniques like astrology to predict market moves, and a number of purely behavioral activities which ultimately resulted in a substantial proportion of investments supporting companies of questionable economic value whose stock was bid up by waves of fashion and euphoria. Markets usually have a self-correcting mechanism that reins in this type of irrationality after a while, but the crash was exacerbated by the fact that these stocks were highly leveraged. They could be bought with only about 10% margin, so a large enough drop in a stock’s price could trigger forced selling, which would lower the price further, triggering more margin calls, more selling, and so on. With 10-to-1 leverage, this process could happen extremely quickly. One of the results of October 1929 is that - at least in the US - margin requirements were tightened to allow only 2-to-1 leverage rather than 10-to-1. In theory, this means the chain reaction of margin calls lowering prices and provoking more margin calls should not happen, or should at least happen more slowly.
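
For the mechanically minded, here is a toy sketch of the spiral. The numbers and the linear price-impact rule are invented for illustration; this is not a calibrated model of 1929.

```python
# Toy margin-call spiral: a leveraged holder must sell whenever equity falls
# below the maintenance margin, and the forced sale itself moves the price.

def margin_spiral(price, shares, debt, maintenance=0.10, impact=0.5):
    """Iterate forced sales until the margin requirement is met or equity is gone."""
    for step in range(100):
        equity = price * shares - debt
        if equity <= 0:
            print(f"step {step}: wiped out")
            break
        if equity / (price * shares) >= maintenance:
            print(f"step {step}: margin satisfied at price {price:.2f}")
            break
        # Sell just enough shares to restore the margin ratio...
        target_value = equity / maintenance
        sold = shares - target_value / price
        shares -= sold
        debt -= sold * price
        # ...but the sale pushes the price down (crude linear impact),
        # eroding equity again and possibly triggering the next round.
        price *= 1 - impact * sold / (shares + sold)
    return price, shares, debt

# Buy 1000 shares at $100 on 10% margin, then let the price slip to $95.
margin_spiral(price=95.0, shares=1000, debt=90_000)
```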

The development of modern derivative products has offered investors a way around the 2-to-1 leverage requirement. Futures contracts typically require only about 5% margin, theoretically allowing up to 20-to-1 leverage and freeing cash or collateral for additional investments. In turn, this helped keep borrowing rates low, which prompted more borrowing to extend leverage further. In addition, options allow for non-linear, and also highly leveraged, exposure to the market. Derivatives of derivatives allow nonlinear dependence on nonlinear products, which sets the stage for deterministic chaotic behavior that can provoke behavioral panics. It is very easy to point at derivative products and blame them for a variety of financial ills, but it is not the products themselves, merely the way they are used, that has primed the system for a vicious cycle of value destruction.

The remarkable thing about the present situation is how small the trouble in the subprime sector really is, and yet how large an effect it is having. Mortgage defaults are up, but not enormously, and most market observers knew instinctively that a lot of low-quality loans were being issued and that one should view them with a healthy dose of skepticism.

The issue is that many of the subprime loans were then securitized and rolled up into Collateralized Debt Obligations (CDOs), which divide the payments into tranches and magically transform a large proportion of BB-rated debt into AA-rated debt that many institutional investors are permitted to purchase. As long as mortgage defaults were relatively uncorrelated in time, the AA-rated debt was protected: the lower-rated tranches would absorb losses from problematic loans first, and a large number of defaults would have to occur before the senior tranches suffered any real capital losses.
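
To see why the correlation assumption carries so much weight, here is a small simulation sketch. The one-factor Gaussian copula is the standard textbook device, and the pool size, default probability, and attachment point are invented; this is not a claim about how any particular CDO was modeled.

```python
# Senior-tranche losses on a pool of 100 loans, each with a 5% default
# probability; the senior tranche absorbs losses only beyond the first 15%.
import numpy as np
from statistics import NormalDist

def senior_tranche_loss(rho, n_loans=100, pd=0.05, attach=0.15,
                        n_sims=50_000, seed=0):
    rng = np.random.default_rng(seed)
    threshold = NormalDist().inv_cdf(pd)
    market = rng.standard_normal((n_sims, 1))         # shared factor
    idio = rng.standard_normal((n_sims, n_loans))     # loan-specific noise
    assets = np.sqrt(rho) * market + np.sqrt(1 - rho) * idio
    default_rate = (assets < threshold).mean(axis=1)  # per-scenario pool loss
    return np.maximum(default_rate - attach, 0).mean()

for rho in (0.0, 0.3, 0.6):
    print(f"correlation {rho:.1f}: expected senior loss {senior_tranche_loss(rho):.4%}")
```

With uncorrelated defaults the senior tranche is essentially untouched; raise the correlation and the “safe” tranche starts taking real losses, with no change at all to the average loan.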

Now, however, defaults are more numerous and seem more correlated than expected, and people are worried that the AA tranches may not be as safe as they thought. Credit spreads have widened because managers want to be compensated for the extra risk, and as a result many funds can no longer afford their leverage. So they sell assets and cover positions to compensate for the higher cost of leverage. These sales in turn lower asset prices, triggering more margin calls and forcing more selling, now across many asset classes. Sound familiar?

Now, what is the challenge in the quantitative world?

One big advantage of the quantitative model in normal times is that it can evaluate thousands of securities quickly and identify overvalued and undervalued assets, generating buy and sell orders. This means one can make a large number of small-profit bets on relatively tiny mispricings rather than wait for substantial mispricings before it makes sense to act. It is akin to scooping up nickels and dimes with a vacuum cleaner rather than hunting for five and ten dollar bills by hand - provided you can scoop fast enough, it can be less risky and more profitable, since there are more likely to be nickels and dimes lying around than five and ten dollar bills. For funds that allow both long and short positions, there is the additional advantage that a quantitative strategy can make money both by purchasing underpriced securities and by selling short overpriced ones.
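
The vacuum-cleaner logic is really just the law of large numbers: with the same edge per bet, spreading capital over many independent bets shrinks the volatility of the total roughly as one over the square root of the number of bets. A quick sketch with invented numbers:

```python
# Many small independent bets versus a few large ones, same edge per bet.
import numpy as np

rng = np.random.default_rng(42)
edge, vol, trials = 0.001, 0.02, 5_000   # 10bp expected profit, 2% noise per bet

for n_bets in (1, 100, 2_500):
    # Equal-weighted average of n independent bets per trial.
    pnl = rng.normal(edge, vol, (trials, n_bets)).mean(axis=1)
    ratio = pnl.mean() / pnl.std()
    print(f"{n_bets:>5} bets: mean {pnl.mean():.4f}, std {pnl.std():.5f}, mean/std {ratio:.2f}")
```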

The problem is what to do when there is a major contraction of liquidity and computers in charge of substantial sums need to sell and cover positions. If the quantitative models are similar enough, then the computers at a number of funds are all likely to be instructing traders to buy and sell the same sets of securities. Even so-called market-neutral funds are likely to be hit hard: the underpriced securities (the long positions) are sold to reduce leverage, and the overpriced securities (the shorts) are bought back to protect against - and simultaneously cause - a short squeeze. The prices of the underpriced securities therefore drop and the prices of the overpriced securities rise, eliminating profit on both legs; investors may then decide to redeem, necessitating further selling and covering of short positions.

In finance programs across the world, the Capital Asset Pricing Model (CAPM) is taught because it is relatively simple to implement, easy to understand, and forms a basis for understanding other models of security pricing. The reason CAPM is even minimally plausible as something that reduces asset pricing to a single factor is that it does appear to explain roughly 85% of the variation in asset prices. The Fama-French (FF) three-factor model, an APT-style extension that includes CAPM’s market factor plus two others, improves explained variance by only 5-10%, which means that most of FF’s explanatory power comes from the market factor it shares with CAPM.
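
For the curious, estimating that single factor is an ordinary least squares regression of asset returns on market returns. The sketch below uses simulated data tuned so the market factor explains most of the variance; it is an illustration, not a real estimate of the 85% figure.

```python
# CAPM regression: r_asset = alpha + beta * r_market + epsilon
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0004, 0.01, 1000)    # simulated daily market returns
asset = 0.0001 + 1.2 * market + rng.normal(0, 0.004, 1000)

X = np.column_stack([np.ones_like(market), market])
coef, *_ = np.linalg.lstsq(X, asset, rcond=None)
resid = asset - X @ coef
r_squared = 1 - resid.var() / asset.var()
print(f"alpha={coef[0]:.5f}, beta={coef[1]:.2f}, R^2={r_squared:.2%}")
```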

Global Alpha and other quantitative managers may have discovered new sets of priced risk factors that give them competitive advantages in normal times, but approximately 85% of predicted variation is still explained by the single CAPM factor, and a great many of Global Alpha’s competitors are using models that include this factor too. As a result, when liquidity contracts, these models are still generating buy and sell orders for roughly 85% of the same securities. And once prices move, index funds must adjust their holdings as well, dragging in much of the remaining 15%. A similar logic applies to the quantitative optimization process: when the required liquidation is large enough, small differences in portfolio holdings are irrelevant next to the net effect of having to sell or cover bits of everything in the portfolio.

What to do? It is hard to tell at this point. The fund managers with large stores of cash are probably best positioned, and indeed, the low-leveraged funds might be able to increase their leverage at a later point to pick through the ashes for bargains. Given the market’s tendency to overreact, there will be a bargain-picking stage eventually; the real question is how long to wait before looking.

Another thought is to try to construct what one might call a “Cassandra portfolio,” which generates unremarkable (possibly cash-like) returns in normal times but pays off enormously during market crises like this one. Figuring out how to balance a regular portfolio against a Cassandra portfolio is an intriguing optimization problem.
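
As a toy version of that balancing problem, suppose there are just two regimes, normal and crisis, with invented probabilities and payoffs; one can then choose the Cassandra weight that maximizes expected log-wealth.

```python
# Balance a regular portfolio against a crisis-paying "Cassandra" portfolio.
import numpy as np

p_crisis = 0.05
returns = {                       # (regular portfolio, Cassandra portfolio)
    "normal": (0.08, -0.02),      # Cassandra bleeds a little in normal times...
    "crisis": (-0.30, 1.50),      # ...and pays off enormously in a crisis
}

def expected_log_growth(w):
    """Expected log-wealth with weight w on the Cassandra portfolio."""
    growth = 0.0
    for regime, prob in (("normal", 1 - p_crisis), ("crisis", p_crisis)):
        reg, cas = returns[regime]
        growth += prob * np.log(1 + (1 - w) * reg + w * cas)
    return growth

weights = np.linspace(0, 1, 101)
best = weights[np.argmax([expected_log_growth(w) for w in weights])]
print(f"optimal Cassandra weight ~ {best:.2f}")
```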

A third thought is to introduce random elements into the selection of securities bought and sold, to reduce the correlation among funds’ buy and sell orders. This might work only for smaller portfolios, however, and it runs against a portfolio manager’s instinct to maximize the power of his or her active decisions. It would be difficult to explain to a supervisor that one chose which securities to buy and sell at random, but if every manager did this, the correlation between funds’ trades would be reduced.
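
A crude sketch of what that might look like in practice; the split between “core” and “marginal” trades is my own invention for illustration.

```python
# Execute all high-conviction trades, but only a random subset of the
# marginal ones, so that similar models produce less-correlated order flow.
import random

def randomized_orders(ranked_trades, core_fraction=0.5, execute_fraction=0.6, seed=None):
    """ranked_trades: trades sorted from strongest to weakest signal."""
    rng = random.Random(seed)
    cutoff = int(len(ranked_trades) * core_fraction)
    core, marginal = ranked_trades[:cutoff], ranked_trades[cutoff:]
    sampled = rng.sample(marginal, int(len(marginal) * execute_fraction))
    return core + sampled

trades = [f"trade_{i}" for i in range(10)]   # hypothetical ranked trade list
print(randomized_orders(trades, seed=7))
```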

One thing we will no doubt learn soon is which of the global macro and directional funds had swing positions set to take advantage of a liquidity shift. Most people knew that housing and mortgages were due for a correction, and many may have felt that the correction had already worked its way through. People with bets on increased volatility are likely to be the biggest winners, provided those bets are big enough to offset the damage done to their more traditional strategies. Fundamental funds are also likely to be hit by the systematic effects, but perhaps not as quickly or as intensely, since they are more likely to have a long-term perspective on their positions and a wider margin of error for the investment decisions they have committed to.

As the saying goes: “to err is human; to totally foul things up requires a computer.” But at least we can watch it all in color.

Monday, August 6, 2007

Styles of Quantitative Finance

“Quants” is the nickname given to the financial community’s quantitative analysts, and sometimes also to the products they manage and produce. Although it is hard to imagine asset or investment management that is somehow not quantitative - the key figure, after all, is whether you have more at the end than you did at the beginning - the term really refers to the degree of quantitative knowledge needed to take part in processes that have only recently become practical, largely because of advances in computing technology and data networks. Traditional analysis of companies required comfort with algebra and perhaps a little calculus, but not necessarily a great deal more. Modern derivative products, on the other hand, typically demand an understanding of more advanced machinery, such as stochastic integration and matrix algebra.

In 1973, when Burton Malkiel published A Random Walk Down Wall Street and concluded that index funds offered better risk-adjusted results than any kind of active management, he divided active analysis methods into two camps. “Fundamental” analysis looks at company financial statements, management quality, and corporate strategy, whereas “Technical” analysis looks at pricing patterns and trading volume for signals of when to buy and sell securities. Today, Quantitative Analysis presents a third strategy, one perhaps imaginable in 1973 but certainly not available then.

Emanuel Derman, one of the pioneering quants at Goldman Sachs and now at Columbia University, wrote in My Life as a Quant that the three essential ingredients of a good quantitative analyst are 1) an understanding of finance theory, 2) strong mathematical training, and 3) computer programming ability. Certainly these are the ingredients requested in quantitative job ads, but it is a mistake to think of quantitative analysis as a single thing. There are at least four distinct realms of quantitative analysis in modern finance, and each demands a different balance of these and other skills. The four types I see are Econometric Forecasting, Portfolio Optimization, Risk Management, and Derivatives/Arbitrage Pricing.

Econometric Forecasting is the use of statistical and regression techniques to predict rates of return on assets, along with the dispersion of that prediction. The major issues have to do with identifying the proper factors to include in the model and with reducing, or at least predicting, the spread of the residuals (essentially the error between what the model predicts and what you observe in reality). The major advance of the last 20 years has been the availability of data and improvements in computing speed, so that these regressions can be calculated for thousands of securities, rather than the 10-20 per analyst that is feasible with fundamental analysis. This type of quantitative analysis requires an understanding of the key factors to include in the model, the proper methods of model specification, and the effects of violating key econometric assumptions.
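
As a minimal sketch of the genre, here is a regression of next-period returns on two hypothetical factor scores, reporting both the fitted coefficients and the residual spread. The data are simulated, so the numbers mean nothing in themselves.

```python
# Forecasting regression: point estimates plus the dispersion around them.
import numpy as np

rng = np.random.default_rng(3)
n = 500
factors = rng.normal(size=(n, 2))          # e.g. hypothetical value and momentum scores
true_betas = np.array([0.002, -0.001])
next_ret = factors @ true_betas + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), factors])
betas, *_ = np.linalg.lstsq(X, next_ret, rcond=None)
resid = next_ret - X @ betas
sigma = resid.std(ddof=X.shape[1])         # residual spread of the forecast
print(f"betas={betas.round(4)}, forecast standard error={sigma:.4f}")
```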

Portfolio Optimization is the application of techniques introduced by Harry Markowitz in the 1950s to determine the weights of a portfolio of risky assets that ensure either the minimum risk for a given expected return or the maximum expected return for a given level of risk. Markowitz’s principal insight was to use the correlations (or lack of correlation) between assets to reduce the overall volatility of a portfolio below the volatility of its components - basically a mathematization of the diversification principle. The key issues in post-Markowitz portfolio optimization have to do with accounting for potential errors in the optimizer’s inputs, incorporating trading costs when portfolios that drift from their target weights are rebalanced, and writing numerical algorithms that converge quickly on an optimal set of weights. These techniques demand familiarity with statistical theory and matrix algebra, and an ability to produce efficient computer code.
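
For a flavor of the matrix algebra involved, here is the global minimum-variance portfolio in a few lines; the covariance matrix is invented for illustration.

```python
# Global minimum-variance weights: w = S^-1 1 / (1' S^-1 1)
import numpy as np

cov = np.array([[0.040, 0.006, 0.002],    # hypothetical annualized covariance
                [0.006, 0.090, 0.012],    # matrix for three risky assets
                [0.002, 0.012, 0.160]])
w = np.linalg.solve(cov, np.ones(3))      # S^-1 1
w /= w.sum()                              # normalize weights to sum to one
port_vol = np.sqrt(w @ cov @ w)
print(f"weights={w.round(3)}, portfolio vol={port_vol:.2%}")
```

The portfolio volatility comes out below that of the least risky asset alone, which is the diversification principle at work.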

Risk Management is closely related to portfolio optimization in its goals. Whereas portfolio optimizers seek to minimize the risk associated with a given level of expected return, risk managers seek to understand the risk that a collection of funds or portfolios presents to a business operation. The difference arises when an individual portfolio is optimized, with the minimum volatility for the desired level of return, but that level of volatility is still too great from the perspective of the business entity as a whole, in part because other funds in the organization may be exposed to the same types of risk. Even an optimized portfolio may therefore need to shed risk for the sake of the organization, and it is the risk manager’s job to tell the portfolio manager to “cool it down a little.” This puts the risk manager in an unenviable position - in good times, risk managers are politically unwelcome because they tell portfolio managers that they can’t make all the bets they want; in bad times, they take the blame for not having prevented the portfolio managers from taking those very same bets. A good risk manager needs broad quantitative skills: they must understand the models of many different portfolios and asset classes and be able to combine them into models of enterprise-wide risk. In short, they need to be strong quantitative generalists.
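
A tiny numerical sketch of why the enterprise view differs from the fund view; all loadings and volatilities are invented.

```python
# Two individually modest funds loaded on the same risk factor add up to an
# outsized firm-wide exposure, compared with what independence would suggest.
import numpy as np

factor_vol = 0.15                     # volatility of a shared risk factor
loadings = np.array([1.0, 0.9])       # each fund's exposure to that factor
idio_vols = np.array([0.05, 0.06])    # fund-specific residual volatility

fund_vols = np.sqrt((loadings * factor_vol) ** 2 + idio_vols ** 2)
naive_vol = np.sqrt((fund_vols ** 2).sum())    # what independence would give
firm_vol = np.sqrt((loadings.sum() * factor_vol) ** 2 + (idio_vols ** 2).sum())
print(f"fund vols {fund_vols.round(3)}, independent {naive_vol:.3f}, actual {firm_vol:.3f}")
```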

Derivatives/Arbitrage Pricing is probably the most mathematically intensive area of modern quantitative analysis, making use of stochastic differential equations to link the random behavior of underlying assets to the fair price of a derivative. Some derivatives can instead be priced by Monte Carlo simulation, provided one has modeled the underlying with the right random process, and for some of the more exotic products Monte Carlo is the only practical method. Derivatives are generally priced on arbitrage principles, which is to say the price of the derivative must equal the cost of manufacturing an equivalent payoff from combinations of other securities (which may include other priced derivatives). Several talents matter here. First, one needs a special ability to spot the combinations that will replicate a payoff and to combine their prices efficiently. Then, one needs to understand stochastic calculus well enough to derive analytic solutions, or to be an efficient enough programmer to run the Monte Carlo process fast enough to price instruments while they are still available. Ideally, one is good at both.
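
As an illustration of the simulation route, here is a bare-bones Monte Carlo price for a European call under geometric Brownian motion - the simplest possible instance of the method, not anything a desk would trade on.

```python
# Monte Carlo price of a European call under risk-neutral GBM.
import numpy as np

def mc_call_price(spot, strike, rate, vol, maturity, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Simulate the terminal price directly from the GBM solution.
    s_t = spot * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Should land near the Black-Scholes value of about 10.45 for these inputs.
print(f"{mc_call_price(spot=100, strike=100, rate=0.05, vol=0.2, maturity=1.0):.2f}")
```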

My own quantitative background is more in econometric forecasting and modeling, although I do find derivatives and arbitrage pricing very interesting, and I like the application of portfolio optimization to investment problems. Still, I have often felt intimidated by the mathematics, engineering, and physics Ph.D.s who dominate the arbitrage and derivatives pricing world (and, to some extent, risk management), and it has taken me some time to realize that econometric insights into asset valuation are still highly valuable, especially for managing traditional assets. Indeed, the more mathematical an approach, the more assumptions typically need to be made about human and market behavior, and the more easily a model can fail to apply to economic reality. In these situations, a student of psychology, politics, and economics brings some very important advantages.

To some extent, we may be observing just this issue in the current subprime mortgage / CDO crisis, which threatens to engulf other asset classes. I suspect there is considerable overreaction to the crisis in some sectors, but the increasing cost of leverage may force many asset managers to sell, driving down prices, so there is certainly some danger of contagion from the mortgage market to the general equities market. The big question in my mind is the credit default swap (CDS) market. Many asset managers may think they are covered because their CDSs effectively insure them against credit defaults. But if enough defaults occur, someone eventually has to come up with the money to pay out that insurance, and if their models have not adequately priced the cost of that insurance, then we are in for a very interesting ride.
