Archive for the ‘Interesting External Papers’ Category

BoC Releases Winter 2009-10 Review

Thursday, February 18th, 2010

The Bank of Canada has released the Bank of Canada Review Winter 2009-2010, with articles:

  • Declining Inflation Persistence in Canada: Causes and Consequences
  • The Evolution of Capital Flows to Emerging-Market Economies
  • Making Bank Notes Accessible for Canadians Living with Blindness or Low Vision

Inflation persistence is defined as

the correlation between current and lagged inflation.

The article concludes that the low level of inflation persistence in Canada will facilitate Price Level Targeting, should the BoC implement such a policy:

For a central bank considering the relative merits of price-level versus inflation targeting, recent research suggests that low structural persistence in inflation will tend to favour the former. Moreover, the transition period to a price-level-targeting regime, when the private sector may still be learning about the precise nature of the change, appears to be less costly when structural inflation persistence is low.

Boston Fed Examines Fair Value Accounting and Asset Fire Sales

Wednesday, February 3rd, 2010

The Federal Reserve Bank of Boston has released a working paper by Sanders Shaffer titled Fair Value Accounting: Villain or Innocent Victim? Exploring the Links between Fair Value Accounting, Bank Regulatory Capital and the Recent Financial Crisis:

There is a popular belief that the confluence of bank capital rules and fair value accounting helped trigger the recent financial crisis. The claim is that questionable valuations of long term investments based on prices obtained from illiquid markets created a pro-cyclical effect whereby mark to market adjustments reduced regulatory capital forcing banks to sell off investments which further depressed prices. This ultimately led to bank instability and the credit effects that reached a peak late in 2008. This paper analyzes a sample of large banks to attempt to measure the strength of the link between fair value accounting, regulatory capital rules, pro-cyclicality and financial contagion. The focus is on large banks because they value a significant portion of their balance sheets using fair value. They also hold investment portfolios that contain illiquid assets in large enough volumes to possibly affect the market in a pro-cyclical fashion. The analysis is based on a review of recent historical financial data. The analysis does not reveal a clear link for most banks in the sample, but rather suggests that there may have been other more significant factors putting stress on bank regulatory capital.

After a discussion of Fair Value Accounting and the criticism that has been leveled against it, the author points out:

Fair value is applied to investment securities depending on how they are classified. Investment securities classified as available for sale are measured at fair value each reporting period. The resulting adjustments are termed unrealized gains or losses. These adjustments are recorded in an equity account called Accumulated Other Comprehensive Income. An important point here is that fair value adjustments related to debt securities and unrealized gains on equity securities are excluded when computing Tier 1 regulatory capital.
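
To make the filter concrete, here is a minimal sketch – with purely hypothetical figures – of how an available-for-sale markdown can flow through GAAP equity while leaving Tier 1 capital untouched:

```python
# Hypothetical illustration of the regulatory "filter" described above:
# unrealized losses on available-for-sale (AFS) debt securities flow into
# equity via AOCI under GAAP, but are reversed out when computing Tier 1.

common_equity = 100.0             # hypothetical common equity, $billions
afs_debt_unrealized_loss = -8.0   # fair value markdown on AFS debt securities

# GAAP equity absorbs the mark through AOCI...
gaap_equity = common_equity + afs_debt_unrealized_loss

# ...but the Tier 1 computation excludes AFS debt adjustments, so the fair
# value mark does not mechanically deplete regulatory capital.
tier1_capital = gaap_equity - afs_debt_unrealized_loss

print(f"GAAP equity:    {gaap_equity:.1f}")    # 92.0 - the loss is visible
print(f"Tier 1 capital: {tier1_capital:.1f}")  # 100.0 - the loss is filtered out
```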

The author hypothesizes:

This analysis does not address whether raising capital through the sale of investments in a distressed market would be a first choice or last resort. However, one may be able to infer that if banks were actually being forced into distressed sales, they would first try to reduce more discretionary items. Dividends on common stock are discretionary and can be reduced or suspended as a method to maintain capital ratios.

Further, it does not appear that losses were realized in practice:

To summarize, this analysis looked at the largest financial institutions. It then isolated the impacts that critics have linked to capital destruction, namely the application of fair value to banks’ investment portfolios. The analysis shows that the impact on regulatory capital was quite small and does not appear to be large enough to be considered the driver of the pro-cyclical dynamic whereby declining asset prices lead to lower capital, then on to sales of assets to replenish capital, creating further pressure on prices and so on. In addition, there was no evidence found in reported financial data which would be indicative of distressed selling activity during the crisis period of 2008.

So if mark-to-market wasn’t the villain, what was?

Based on further analysis of 2008 financial results, it was noted that loan loss provision had a significant impact on regulatory capital for most institutions in the sample.

An example is supplied:

At the height of the crisis, State Street stock fell 59 percent in one day when it was announced that unrealized losses had doubled, and analysts noted that TCE was approaching zero based on pro-forma calculations that added in the impact of consolidating certain off-balance sheet investment conduit programs.

The Simple TCE Ratio is calculated as STCE/tangible assets. Tangible assets = total assets – goodwill – intangible assets (excluding Mortgage Servicing Rights). It is not known how much emphasis was placed on TCE versus other significant factors that were also affecting bank stocks at the same time. That being said, State Street and BNYM are two possible examples in this analysis where fair value accounting may have contributed to bank instability based on the significant effect on TCE. It should be noted though that State Street and BNYM did not sell investment assets in response to capital depletion or market stress. They were able to rely on debt and equity issuances as well as participation in government capital programs. So although fair value may have contributed to some instability, the link between fair value and pro-cyclicality did not necessarily come to fruition here, at least partially due to government intervention.
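
The arithmetic is simple enough to sketch. The figures below are hypothetical, and I am assuming that STCE nets goodwill and non-MSR intangibles out of common equity, consistent with the tangible-assets definition quoted above:

```python
# The Simple TCE Ratio as defined above, with hypothetical figures
# ($billions). Assumption: STCE nets goodwill and non-MSR intangibles out
# of common equity, mirroring the tangible-assets definition.

def simple_tce_ratio(common_equity, goodwill, intangibles_ex_msr, total_assets):
    tangible_common_equity = common_equity - goodwill - intangibles_ex_msr
    tangible_assets = total_assets - goodwill - intangibles_ex_msr
    return tangible_common_equity / tangible_assets

# A stylized large custody bank:
print(f"{simple_tce_ratio(13.0, 4.5, 1.7, 160.0):.2%}")  # about 4.4%
```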

The author concludes:

Based on this simple analysis it would appear that fair value accounting had a minimal impact on the capital of most banks in the sample during the crisis period through the end of 2008. Capital destruction was due to deterioration in loan portfolios and was further depleted by items such as proprietary trading losses and common stock dividends. These are a result of lending practices and the actions of bank management, not accounting rules. Furthermore, the data suggests that banks were not raising significant capital through distressed asset sales; rather they were relying on government programs as well as debt and equity markets.

This paper is of particular interest given the recent BoC Paper on Systemic Capital Requirements and its concern regarding the contagion effect of Asset Fire Sales.

BoC Paper on Systemic Capital Requirements

Friday, January 29th, 2010

The Bank of Canada has released a working paper by Céline Gauthier, Alfred Lehar, and Moez Souissi titled Macroprudential Regulation and Systemic Capital Requirements:

In the aftermath of the financial crisis, there is interest in reforming bank regulation such that capital requirements are more closely linked to a bank’s contribution to the overall risk of the financial system. In our paper we compare alternative mechanisms for allocating the overall risk of a banking system to its member banks. Overall risk is estimated using a model that explicitly incorporates contagion externalities present in the financial system. We have access to a unique data set of the Canadian banking system, which includes individual banks’ risk exposures as well as detailed information on interbank linkages including OTC derivatives. We find that systemic capital allocations can differ by as much as 50% from 2008Q2 capital levels and are not related in a simple way to bank size or individual bank default probability. Systemic capital allocation mechanisms reduce default probabilities of individual banks as well as the probability of a systemic crisis by about 25%. Our results suggest that financial stability can be enhanced substantially by implementing a systemic perspective on bank regulation.

To be frank, I found this paper rather difficult to follow – I suspect that the authors had to agree to severe restrictions on what they could publish as a condition of getting the data:

Data on exposures related to derivatives come from a survey initiated by OSFI at the end of 2007. In that survey, banks are asked to report their 100 largest mark-to-market counterparty exposures that were larger than $25 million. These exposures were related to both OTC and exchange traded derivatives. They are reported after netting and before collateral and guarantees.

The interbank exposures were enormous:

The aggregate size of interbank exposures was approximately $21.6 billion for the six major Canadian banks. As summarized in Table 3, total exposures between banks accounted for around 25 per cent of bank capital on average. The available data suggest that exposures related to traditional lending (deposits and unsecured loans) were the largest ones compared with mark-to-market derivatives and cross-shareholdings exposures. Indeed, in May 2008, exposures related to traditional lending represented around $12.7 billion on aggregate, and 16.3 percent of banks’ Tier 1 capital on average. Together, mark-to-market derivatives and cross-shareholdings represented 10 per cent of banks’ Tier 1 capital on average.

Banks can affect each other’s probability of default (PD) in a number of ways:

A default is called fundamental when credit losses are sufficient to wipe out all the capital of the bank as defined in Equation (19). Column three in Table 4 summarizes the PD from defaults due to interbank contagion without asset fire sales, i.e. when a bank has sufficient capital to absorb the credit losses in the non-bank sectors but is pushed to bankruptcy because of losses on its exposures to other banks (as defined in Equation (21) with p = 1). The third category of default is the contagion due to asset fire sales as defined in Equation (20). In these cases a bank has enough capital to withstand both the credit losses in the non-banking sector and the writedown on other banks’ exposures but, conditional on the other losses having reduced capital, not enough to withstand the mark-to-market losses due to its own asset fire sale and/or the asset fire sale of the other banks.

The Canadian banking system is very stable without the consideration of asset fire sales. Both fundamental and contagious PDs are well below 20 basis points, even though we assume increased loan losses due to an adverse macro scenario throughout the paper. In the asset fire sale scenario, troubled banks want to maintain regulatory capital requirements by selling off assets, which causes externalities for all other banks as asset prices fall. PDs increase and, as banks get weaker because of writedowns, they also become more susceptible to contagion.
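
The taxonomy is worth making concrete. Here is a stylized sketch of the three default categories; the balance-sheet figures are invented, and Equations (19)–(21) in the paper give the formal conditions:

```python
# Stylized classification of the paper's three default categories. The
# inputs are hypothetical; Equations (19)-(21) in the paper define the
# actual conditions.

def default_type(capital, credit_losses, interbank_losses, fire_sale_losses):
    """Classify a bank's failure, applying losses in the paper's order."""
    if credit_losses >= capital:
        return "fundamental default: credit losses alone wipe out capital"
    if credit_losses + interbank_losses >= capital:
        return "interbank contagion: exposures to other banks push it under"
    if credit_losses + interbank_losses + fire_sale_losses >= capital:
        return "fire-sale contagion: mark-to-market losses from asset sales"
    return "survives"

print(default_type(capital=10.0, credit_losses=4.0,
                   interbank_losses=3.0, fire_sale_losses=4.0))
# -> fire-sale contagion: the bank survives the first two layers of losses
```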

By me, one of the more important results discussed in the paper is:

The number of bankruptcies jumps dramatically as market liquidity decreases. In columns two to four of table 5 we allow asset prices to drop 50 percent more than in the base scenario. We immediately see that our analysis is very sensitive to the minimum asset price, which defines our demand function for the illiquid asset. Allowing fire sale discounts of three percent increases banks’ PDs significantly. All banks default in almost two out of three cases. Default correlation is almost one (not shown), which explains why the total PDs are almost identical. While banks 2 and 4 are very likely to default because of writedowns in the value of their illiquid assets, banks 3, 6 and especially bank 5 are more likely to be affected by contagion. Two possible reasons could explain why these results are so sensitive to asset fire sale discounts. First, high Tier 1 capital requirements of 7%, compared to the 4% under Basel rules, trigger asset fire sales early, causing other banks to follow. Second, the small number of banks causes each bank to have a huge price impact when selling off illiquid securities, creating negative externalities for the whole system.
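
To see why the minimum asset price matters so much, consider a toy fire-sale spiral. The parameters below are invented for illustration, not the paper’s calibration:

```python
# Toy fire-sale spiral with invented parameters. Banks must keep Tier 1
# capital >= 7% of assets (the requirement cited above); sales of the
# illiquid asset depress its price via an assumed linear impact, floored at
# a minimum price, and the markdown hits every bank's remaining holdings.

N = 6
capital  = [7.0, 4.5, 8.0, 5.0, 9.0, 6.0]  # Tier 1 capital, hypothetical
illiquid = [60.0] * N                       # units of the illiquid asset held
liquid   = [40.0] * N                       # liquid assets, assumed constant
price, MIN_PRICE = 1.00, 0.97               # ~3% maximum fire-sale discount
REQ, IMPACT = 0.07, 2e-4                    # ratio floor; impact per unit sold

for _ in range(50):
    total_sold = 0.0
    for i in range(N):
        if capital[i] <= 0:
            continue                        # already failed
        assets = liquid[i] + illiquid[i] * price
        if capital[i] / assets < REQ:
            # Sell just enough (proceeds retiring debt) to restore the ratio.
            units = min(illiquid[i], (assets - capital[i] / REQ) / price)
            illiquid[i] -= units
            total_sold += units
    if total_sold == 0:
        break                               # nobody selling: spiral has stopped
    new_price = max(MIN_PRICE, price - IMPACT * total_sold)
    for i in range(N):
        capital[i] -= illiquid[i] * (price - new_price)  # mark-to-market hit
    price = new_price

print(f"final price {price:.4f}")
print(["default" if c <= 0 else round(c, 2) for c in capital])
```

Even this crude version shows the knife-edge: with a generous price floor the first round of selling ends the story, while a slightly lower floor lets the markdowns push more banks below the requirement and the selling feeds on itself.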

… and the authors reflect:

Two policy insights stand out from the latter results. First, in the last financial crisis, regulators were criticized for helping banks to offload assets from their balance sheets at subsidized prices, and for relaxing accounting rules on the basis that market prices did not reflect fundamental values, which allowed banks to avoid mark-to-market writedowns of their assets. While our analysis cannot show the long-term costs associated with these measures, we can at least document that there is a significant immediate benefit for financial stability by preventing asset fire sale induced writedowns. Second, a countercyclical reduction in the minimum Tier 1 capital requirements triggering asset fire sales (or a higher capital buffer built in good time) would reduce dramatically the risk of default triggered by AFS.

Further:

We can see again that the Canadian banking system is interdependent. The default of any one bank is correlated with the default of any other bank with a probability of more than 50%. Consistent with the results in table 7, the defaults of banks one and five (two and six) are correlated the most (less) with other banks default.

Footnote: The results in this section are based on correlations and do not reflect causality.

I’m rather disappointed that the various capital allocation schemes tested did not include a straightforward approach based on changing the risk weight of interbank exposures. There’s not much that can be done about the decline of asset values in a fire-sale situation; but contagion via direct markdown/default on interbank loans is very easy to restrict. There’s a trade-off against banking system efficiency (interbank loans allow, effectively, Bank A to lend to Bank B’s customers when there are differing investment opportunities), but I’m not sure how important that might be in Canada, where all but one of the Big 6 is national in scope.

John Hull Supports Tranche Retention, Bonus Deferral

Saturday, January 16th, 2010

John Hull has published an essay titled The Credit Crunch of 2007: What Went Wrong? Why? What Lessons Can Be Learned?:

This paper explains the events leading to the credit crisis that began in 2007 and the products that were created from residential mortgages. It explains the multiple levels of securitization that were involved. It argues that the inappropriate incentives led to a short‐term focus in the decision making of traders and a failure to evaluate the risks being taken. The products that were created lacked transparency with the payoffs from one product depending on the performance of many other products. Market participants relied on the AAA ratings assigned to products without evaluating the models used by rating agencies. The paper considers the steps that can be taken by financial institutions and their regulators to avoid similar crises in the future. It suggests that companies should be required to retain some of the risk in each instrument that is created when credit risk is transferred. The compensation plans within financial institutions should be changed so that they have a longer term focus. Collateralization through either clearinghouses or two‐way collateralization agreements should become mandatory. Risk management should involve more managerial judgment and rely less on the mechanistic application of value‐at‐risk models.

With respect to tranche retention, Dr. Hull argues:

The present crisis might have been less severe if the originators of mortgages (and other assets where credit risk is transferred) were required by regulators to keep, say, 20% of each tranche created. This would have better aligned the interests of originators with the interests of the investors who bought the tranches.

The most important reason why originators should have a stake in all the tranches created is that this encourages the originators to make the same lending decisions that the investors would make. Another reason is that the originators often end up as administrators of the mortgages (collecting interest, making foreclosure decisions, etc). It is important that their decisions as administrators are made in the best interests of investors.

This idea might have reduced the market excesses during the period leading up to the credit crunch of 2007. However, it should be acknowledged that one of the ironies of the credit crunch is that securitization did not in many instances get the mortgages off the books of originating banks. Often AAA-rated senior tranches created by one part of a bank were bought by other parts of the bank. Because banks were both investors in and originators of mortgages, one might expect a reasonable alignment of the interests of investors and originators. But the part of the bank investing in the mortgages was usually far removed from the part of the bank originating the mortgages and there appears to have been little information flow from one to the other.

Assiduous Readers will not be surprised to learn that I don’t like this idea. In my role as bond trader I have never bought a securitization … I would if the spreads were high enough, but generally spreads are compressed by other buyers.

When I buy a bond, I want to know somebody’s on the hook for it. I like the idea that if the borrower is a day late or a dollar short, I can force an operating company into bankruptcy and cause great anguish and financial ill effects on the deadbeats. Securitizations tend to be highly correlated; while this is claimed to be counterbalanced by the overcollateralization (or tranche subordination, which is simply a formalization of the process) I confess I have a great preference for keeping actual bonds in my bond portfolios.

Tranche retention is simply a methodology whereby securitizations become more bond-like. I object to such blurring of the lines, especially when enforced by governmental regulatory fiat. What I am being told, in a world where such retention is mandated, is that if something has been issued that I – for good reasons or bad – wish to buy and that the security originator wishes to sell, we’ll both go to jail if we consummate the transaction.

I will also point out the logical implications of tranche retention: when I sell 100 shares of SLF.PR.A, I should be forced to retain 20 of them, so that the buyer will know they’re OK. That’s crazy. The buyer should do his own damn homework and make up his own mind.

The world has learned over and over that while regulation is very nice, the only thing that works really well is caveat emptor. I do not want some 20-year-old regulator with a college certificate in boxtickingology telling me what I may and may not buy.

Dr. Hull has underemphasized the heart of the matter: one of the ironies of the credit crunch is that securitization did not in many instances get the mortgages off the books of originating banks. Often AAA-rated senior tranches created by one part of a bank were bought by other parts of the bank.

In this context, I will repeat some of Sheila Bair’s testimony to the Crisis Committee:

In the mid-1990s, bank regulators working with the Basel Committee on Banking Supervision (Basel Committee) introduced a new set of capital requirements for trading activities. The new requirements were generally much lower than the requirements for traditional lending under the theory that banks’ trading-book exposures were liquid, marked-to-market, mostly hedged, and could be liquidated at close to their market values within a short interval—for example 10 days.

The market risk rule presented a ripe opportunity for capital arbitrage, as institutions began to hold growing amounts of assets in trading accounts that were not marked-to-market but “marked-to-model.” These assets benefitted from the low capital requirements of the market risk rule, even though they were in some cases so highly complex, opaque and illiquid that they could not be sold quickly without loss. Indeed, in late 2007 and through 2008, large write-downs of assets held in trading accounts weakened the capital positions of some large commercial and investment banks and fueled market fears.

I see the basic problem as one that happens when traders try to be investors. Traders do not typically know a lot about the market – although they can talk a good game – and when they try their hand at actual investing, bad things will happen more often than not. It’s a totally different mindset.

I didn’t make a penny during the tech bubble – never bought any of it. I have numerous friends, however, who made out like bandits and set themselves up for life during those years. The difference between us was not the knowledge that that stuff was garbage … we all knew it was garbage. But I could not sleep at night knowing I had garbage in my portfolio; they were fine with the idea, so long as there was lots of positive chatter and prices kept going up.

A long, long time ago – so long I can’t remember the reference – I read an interview with a big wheel (perhaps the proprietor) of a small NASDAQ trading firm. The interview was interrupted when one of his staff burst in with the news that another brokerage (XYZ brokers) wanted to sell a large block of stock (45,000 shares, if I remember correctly) in ABC Company and was willing to do so at a discount to market. So they look at the recent price/volume history, check the news and the deal gets done. When the interview resumed, the interviewer asked “So … what’s ABC Company?”. The trader replied, patiently and wearily: “It’s something XYZ wanted to sell 45,000 shares of.”

Now that’s trading!

Despite the media’s constant interviewing of traders for their market opinions, there is not really much correlation between trading ability and investing ability.

So anyway, I will suggest that when considering a regulatory response to the Credit Crunch, a clearer distinction between trading and investing activities is what’s required. As I have previously suggested, there should be no bright-line between investment banks and vanilla banks; but the difference should be recognized in the capital rules. Investment banks should have low capital requirements for trading inventory and higher ones for investment positions; the reverse for vanilla banks. And for heaven’s sake, make sure that there’s no jiggery-pokery with aging positions on the trading books! Hold it for thirty days, and the capital charge goes up progressively! Start trading too many “investment” positions and you’ll find your investment portfolio reclassified.
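
To show what I mean by a progressive charge on aging positions, here is a sketch; the escalation schedule is invented for illustration, and only the principle – hold trading inventory too long and the charge ratchets up – is the point:

```python
# A sketch of the "aging positions" idea above: the capital charge on a
# trading position escalates the longer it sits on the book. The schedule
# (doubling every 30 days past day 30) is invented for illustration.

def trading_capital_charge(market_value, days_held, base_rate=0.02):
    """Base charge, doubling for every full 30 days held past day 30."""
    surcharge_doublings = max(0, (days_held - 30) // 30)
    return market_value * base_rate * (2 ** surcharge_doublings)

for days in (10, 30, 90, 180):
    charge = trading_capital_charge(100.0, days)
    print(f"{days:>3} days held -> capital charge {charge:.2f}")
# 10 and 30 days cost 2.00; 90 days costs 8.00; 180 days costs 64.00.
```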

As far as bonus deferral is concerned … it’s suitable for investors, not so much for traders. Bonus deferral requires a lot of trust by the employee, trust that is all too often unjustified as exemplified by the Citigroup case discussed January 7 and, here in Canada, by the case of David Berry. The major effect of bonus deferral, I believe, will be to spawn a migration of talent to hedge funds and boutiques.

Dr. Hull suggests:

One idea is the following. At the end of each year a financial institution awards a “bonus accrual” (positive or negative) to each employee reflecting the employee’s contribution to the business. The actual cash bonus received by an employee at the end of a year would be the average bonus accrual over the previous five years or zero, whichever is higher. For the purpose of this calculation, bonus accruals would be set equal to zero for years prior to the employee joining the financial institution (unless the employee manages to negotiate otherwise) and bonuses would not be paid after an employee leaves it. Although not perfect, this type of plan would motivate employees to use a multi-year time horizon when making decisions.
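
The scheme is easy enough to express as a sketch; the numbers below are hypothetical:

```python
# Hull's proposal as quoted above: the cash bonus for a year is the trailing
# five-year average of bonus accruals, floored at zero, with accruals deemed
# zero for years before the employee joined. Numbers are hypothetical.

def cash_bonus(accruals):
    """accruals: annual bonus accruals (+/-) since joining, oldest first.
    Returns the cash bonus payable for the most recent year."""
    window = accruals[-5:]
    window = [0.0] * (5 - len(window)) + window  # pre-hire years count as zero
    return max(0.0, sum(window) / 5)

print(cash_bonus([2.0]))                        # first year on the job -> 0.4
print(cash_bonus([2.0, 3.0, -6.0, 1.0, 2.0]))   # a bad year drags the average -> 0.4
print(cash_bonus([-4.0, -3.0, 1.0, 1.0, 1.0]))  # negative average -> 0.0 paid
```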

One problem I have with that is vesting. Is the vesting of this bonus iron-clad or not? Is it held by a mutually agreed-upon third party in treasury bills? And what happens if the employee leaves the firm and somebody else starts trading his book? Who takes any future losses then?

Another problem, of course, is trust (assuming the vesting is not iron-clad). When a relationship turns sour – or somebody gets greedy – things can turn nasty in a hurry. It should always be remembered that the purpose of regulation is not to protect anybody. The purpose of regulation is to ensure that everybody is guilty of something.

I have twice been offered jobs with the stupidest incentive scheme in the world. Not only would my bonus be determined by how well the firm did – putting me on the hook for decisions made by people I didn’t even know – but because of deferral, up-front transfers and discretion, I could have worked there for five years and paid them for the privilege. Those negotiations didn’t take long!

Excess Reserves at Fed: Another Cross-Current

Friday, January 15th, 2010

Excess bank reserves at the Fed were last discussed on PrefBlog when reviewing a New York Fed working paper titled Why are banks holding so many excess reserves?.

The Federal Reserve Bank of Cleveland has released Economic Trends, January 2010, with an interesting article by John B. Carlson and John Lindner titled Treasury Deposits and Excess Bank Reserves:

An interesting development on the Federal Reserve’s balance sheet is a decline in excess bank reserves. This decline has occurred despite an increase in the overall size of the Fed’s balance sheet. The key factor accounting for the decline in excess reserves is a substantial increase in U.S. treasury deposits at the Fed, which were made as a consequence of having issued new debt. When the treasury issues debt to the public and deposits the proceeds at the Fed in its general account, bank reserves decline. In normal times, the treasury typically holds some proceeds in Treasury Tax and Loan accounts at commercial banks, which keeps reserves in the banking system. This arrangement helps maintain a steady supply of reserves—a desirable outcome for when the Fed sought to keep the fed funds rate near a target rate.

Following the collapse of Lehman Brothers in September 2008, the Federal Reserve instituted a number of policies that sharply increased bank reserves in excess of required levels. Initially, the Fed sought to absorb most of the new reserves in order to keep the fed funds rate near its target rate. To help in this effort, the treasury issued short-term debt at special auctions (called the Supplementary Financing Program or SFP) and placed the proceeds in a new supplemental treasury account at the Federal Reserve. Still, the amount of reserves absorbed could not keep up with the amount of bank reserves that were being created with the Fed’s new credit policies. Subsequently, the fed funds target was lowered to zero, and the immediate need to absorb reserves abated.

In late 2009 the total level of treasury debt approached the limit authorized by Congress. As the SFP issues matured, the SFP deposits were used to redeem them, and excess reserves increased. In December Congress raised the debt ceiling, allowing the treasury to issue new debt. This time, the treasury deposited much of the proceeds into its general account with the Fed, which caused the observed decline in excess reserves.
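
The mechanics reduce to simple accounting. Here is a stylized sketch, with hypothetical figures, of the reserve drain the article describes:

```python
# Stylized accounting for the mechanism described above ($billions,
# hypothetical). When the treasury deposits auction proceeds in its general
# account at the Fed, settlement drains reserves from the banking system;
# proceeds left in Treasury Tax and Loan accounts at banks leave reserves
# in the system.

fed = {"bank reserves": 1000.0, "treasury general account": 100.0}

def treasury_issues_debt(amount, deposit_at_fed):
    """The public pays for new treasury debt out of its bank deposits."""
    if deposit_at_fed:
        fed["bank reserves"] -= amount            # reserves drain to the TGA
        fed["treasury general account"] += amount
    # else: proceeds sit in TT&L accounts at banks; reserves stay put

treasury_issues_debt(200.0, deposit_at_fed=True)
print(fed)  # reserves fall to 800.0; the general account rises to 300.0
```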


Hull & White on AAA Tranches of Subprime

Friday, January 15th, 2010

John Hull and Alan White have published a working paper titled The Risk of Tranches Created from Residential Mortgages:

This paper examines, ex-ante, the risk in the tranches of ABSs and ABS CDOs that were created from residential mortgages between 2000 and 2007. Using the criteria of the rating agencies, it tests how wide the AAA tranches can be under different assumptions about the correlation model and recovery rates. It concludes that the AAA ratings assigned to the senior tranches of ABSs were not totally unreasonable. However, the AAA ratings assigned to tranches of Mezz ABS CDOs cannot be justified. The risk of a Mezz ABS CDO tranche depends critically on the correlation between mortgage pools as well as on the correlation model and the thickness of the underlying BBB tranches. The BBB tranches of ABSs cannot be considered equivalent to BBB bonds for the purposes of subsequent securitizations.

This paper won’t be popular amongst the Credit Ratings Agencies Are Evil crowd!

Credit derivatives models often assume that the recovery rate realized when there is a default is constant. This is less than ideal. As the default rate increases, the recovery rate for a particular asset class can be expected to decline. This is because a high default rate leads to more of the assets coming on the market and a reduction in price.

As is now well known, this argument is particularly true for residential mortgages. In a normal market, a recovery rate of about 75% is often assumed for this asset class. If this is assumed to be the recovery rate in all situations, the worst possible loss on a portfolio of residential mortgages given by the model would be 25%, and the 25% to 100% senior tranche of an ABS created from the mortgages could reasonably be assumed to be safe. In fact, recovery rates on mortgages have declined in the high default rate environment experienced since 2007.

The evaluation of ABSs depends on a) the expected default rate, Q, for mortgages in the underlying pool, b) the default correlation, ρ, for mortgages in the pool, and c) the recovery rate, R. Data from the 1999 to 2006 period suggest a value of Q less than 5% assuming an average mortgage life of 5 years. But, as has been mentioned, a different macroeconomic environment could be anticipated over the next few years. It would seem to be more prudent to use an estimate of 10%, or even higher. We will present results for values of Q equal to 5%, 10%, and 20%. The Basel II capital requirements are based on a copula correlation of 0.15 for residential mortgages. We will present results for values of ρ between 0.05 and 0.30. As already mentioned, a recovery rate of 75% is often assumed for residential mortgages, but this is probably optimistic in a high default rate environment. We will present results for the situation where the recovery rate is fixed at 75% and for the situation where the recovery rate model in the previous section is used with Rmin=50% and Rmax=100%.

ABS CDOs also depend on the parameter, α. Loosely speaking, this measures the proportion of the default correlation that comes from a factor common to all pools. A value of α close to zero indicates that investors obtain good diversification benefits from the ABS CDO structure. In adverse market conditions some mezzanine tranches can be expected to suffer 100% losses while others incur no losses. However, a value of α close to one indicates that all mezzanine tranches will tend to sink or swim together. We do not know what estimates rating agencies made for α. (Ex post of course, we know that it was high.) We will therefore present results based on a wide range of values for this parameter.
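
For readers who want to see the machinery, here is a minimal one-factor Gaussian copula simulation of pool losses in the spirit of the setup described above. The recovery rule below – recovery falling linearly from Rmax toward Rmin as the realized default rate rises – is a stand-in assumption, not necessarily the authors’ exact specification:

```python
# Minimal one-factor Gaussian copula simulation of mortgage pool losses.
# Parameters follow the ranges discussed above (Q, rho, Rmin, Rmax); the
# recovery rule is an assumed stand-in, not the paper's exact model.

import random
from statistics import NormalDist

N = NormalDist()  # standard normal

def pool_loss(Q=0.10, rho=0.15, r_min=0.50, r_max=1.00, n_mortgages=500):
    """One Monte Carlo draw of the fractional loss on a mortgage pool."""
    k = N.inv_cdf(Q)                        # default threshold
    m = random.gauss(0.0, 1.0)              # common (macro) factor
    defaults = 0
    for _ in range(n_mortgages):
        x = rho ** 0.5 * m + (1 - rho) ** 0.5 * random.gauss(0.0, 1.0)
        if x < k:                           # this mortgage defaults
            defaults += 1
    d = defaults / n_mortgages
    recovery = r_max - (r_max - r_min) * d  # recovery falls as defaults rise
    return d * (1.0 - recovery)

random.seed(1)
losses = sorted(pool_loss() for _ in range(1000))
print(f"mean pool loss:  {sum(losses) / len(losses):.2%}")
print(f"99th percentile: {losses[989]:.2%}")  # the tail that matters for AAA
```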

The meat of the matter – at least as far as the CRAs are concerned – is:

Table 2 shows that when a 20% default rate is combined with a high default correlation, and a stochastic recovery rate model, the AAA ratings that were made seem a little high. Also, the ratings are difficult to justify when the most extreme model (double t copula, stochastic recovery rate) is used. But overall the results in Table 2 indicate that the AAA ratings that were assigned were not totally unreasonable.

Very bad things happened to CDOs created from the mezzanine tranches of the structures – and here the CRAs can be faulted:

It should be noted that a CDO created from the triple BBB tranches of ABSs is quite different from a CDO created from BBB bonds. This is true even when the BBB tranches have been chosen so that their probabilities of default and expected losses are consistent with their BBB rating. The reason is that the probability distribution of the loss from a BBB tranche is quite different from the probability distribution of the loss from a BBB bond.
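
The point deserves a worked illustration: a thin tranche turns pool losses into an all-or-nothing payoff, which is what makes resecuritizing BBB tranches so dangerous. The attachment points below are hypothetical:

```python
# Why the BBB tranche of an ABS is not a BBB bond: the tranche's loss is an
# all-or-nothing function of pool losses. The 5%-8% attachment points are
# hypothetical.

attach, detach = 0.05, 0.08   # a thin BBB tranche absorbs pool losses 5%-8%

def tranche_loss(pool_loss):
    """Fraction of tranche principal lost for a given fractional pool loss."""
    return min(max(pool_loss - attach, 0.0), detach - attach) / (detach - attach)

for pl in (0.02, 0.05, 0.065, 0.08, 0.20):
    print(f"pool loss {pl:.1%} -> tranche loss {tranche_loss(pl):.0%}")
# Below 5% the tranche is untouched; by 8% it is wiped out. A correlated
# downturn pushing many pools past 8% wipes out all the BBB tranches at
# once - exactly the scenario that sinks a Mezz ABS CDO built from them.
```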

The authors conclude:

Contrary to many of the opinions that have been expressed, the AAA ratings for the senior tranches of ABSs were not unreasonable. The weighted average life of mortgages is about five years. The probability of loss and expected loss of the AAA-rated tranches that were created were similar to or better than those of AAA-rated five-year bonds.

The AAA ratings for Mezz ABS CDOs are much less defensible. Scenarios where all the underlying BBB tranches lose virtually all their principal are sufficiently probable that it is not reasonable to assign a AAA rating to even a quite thin senior tranche. The risks in Mezz ABS CDOs depend critically on a) the width of the underlying BBB tranches, b) the correlation between pools, c) the tail default correlation, and d) the relationship between the recovery rate and the default rate. An important point is that the BBB tranche of an ABS cannot be assumed to be similar to a BBB bond for the purposes of determining the risks in ABS CDO tranches.

In practice Mezz ABS CDOs accounted for about 3% of all mortgage securitizations. Our conclusion is therefore that the vast majority of the AAA ratings assigned to tranches created from mortgages were reasonable, but in a small minority of the cases they cannot be justified.

I think it’s fair to conclude that the problems of the sub-prime crisis lay not with the rating agencies, nor – except to a small degree – with the investors who plunked down their money. The problem lay in concentration: the banks took the view that if one is good, two is better … and went the way of all those who fail to diversify sufficiently.

Update: For a review of what participants were thinking at the time, see Making sense of the subprime crisis. For more on subprime default experience, see Subprime! Problems foreseeable in 2005?. I will admit, though, that what I’m really waiting for is an accounting of realized losses on subprime paper.

DBRS Releases Global Bank Rating Methodology

Thursday, January 14th, 2010

DBRS has released its Global Methodology for Rating Banks and Banking Organisations that has some snippets of interest for preferred share investors:

DBRS notes that the regulatory focus on Tier 1 capital is evolving with increased focus on core Tier 1 capital that excludes hybrids. We will adjust our methodology in the future to reflect any changes in emphasis or requirements.

To assess leverage, another capital measure that we employ is the ratio of Tier 1 capital to tangible assets. This ratio, or a variation of it, is applied to banks in a number of countries. It is generally more constraining than the Basel ratios, as assets are not risk-adjusted, although no adjustment is made for off-balance sheet exposures. We anticipate that there is likely to be pressure for adoption of some variation on this leverage ratio in more countries in the aftermath of the crisis.

Taking advantage of the regulatory risk weightings, DBRS considers the ratio of tangible common equity to RWA. Reflecting DBRS’s preference for equity over hybrids as a cushion for bondholders and other senior creditors, this ratio excludes the hybrid securities that are given full weight by the regulators, up to certain limits.

In the light of Sheila Bair’s testimony to the Crisis Committee, the following extract is interesting:

By their nature, however, these businesses, if poorly run with inadequate risk management, can detract from a bank’s strengths and constrain its ratings. It is worth noting again that while banks had extraordinary losses in their trading businesses in this cycle, most of the losses were concentrated in few business lines, primarily in certain areas of fixed income, related to origination, structuring and packaging various forms of credit and more complex securities. Risk management of trading activities was predominantly successful in helping banks generate revenues and earnings across many of their trading businesses. The analysis focuses on the trading and other capital markets businesses, but does not ignore other exposures to market risk.

National Post Cheers on FixedResets

Thursday, January 14th, 2010

The National Post had an article on preferreds yesterday by Eric Lam and David Pett, Reset preferred shares fill trust gap, with all the incisive, hard-hitting reporting that made the National Post what it is today: bankrupt:

John Nagel, vice-president at Desjardins Securities, preferred shares department, and one of the creators of reset preferreds, said the shares give investors much more flexibility.

“The very low interest-rate scenario that we’re in … if rates are a lot higher five or six years from now, there’s the option of going floating or being redeemed. That’s very attractive,” he said.

Flexibility? The redemption option belongs to the issuer. The holder – if not redeemed – has the relatively trivial opportunity to choose between fixed and floating. The flexibility that counts belongs to the issuer.

To date, almost all of the reset preferreds have gained in value from the price they were originally issued. That represents an added bonus for investors who got in early, but it also presents a particular challenge for investors looking to get in on the action now as they may face lower yields and the potential for large capital losses as shares get redeemed on the reset date at par.

Clearly, it’s important to think about the exit strategy right from the beginning.

“When you buy it, assume the worst,” Mr. Nagel said.

“The secret is to look at these six months or nine months ahead of [maturity], and make a decision. If you think they’re going to be redeemed, you should sell.”

Sell to whom? At what price?

If the bond yield has risen substantially, the issuer is likely going to redeem the shares to prevent you from cashing in on the elevated rates.

This part is nonsense. The redemption decision will have everything to do with credit spreads on the market – can they borrow on more attractive terms? – and virtually nothing to do with the absolute value of “bond yield” – assuming that by “bond” they mean five-year Canadas.

On the other hand, if the bond yield is low and it looks like the shares will be reset, the best bet — available in the vast majority of cases — is to convert to floating rate preferred shares, which are usually pegged to the Government of Canada three-month treasury bills plus the spread.

Hopeless nonsense. In a normal environment, a five-year bond will outperform treasury bills bought and rolled for a five-year term. Not always, but more often than not.
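
For those unfamiliar with the mechanics, here is the reset arithmetic in a sketch. The yields are hypothetical; the +375bp spread is the Aeroplan figure mentioned below:

```python
# Hypothetical FixedReset arithmetic. At each reset date the fixed dividend
# resets to the 5-year Canada yield plus the issue's reset spread; the
# floating alternative pays the 3-month T-bill yield plus the same spread.
# The +375bp spread is the Aeroplan figure cited below; yields are invented.

ISSUE_SPREAD = 0.0375  # +375bp over Canadas

def fixed_reset_rate(goc5_yield):
    """Annual dividend rate if the holder stays with the fixed-rate option."""
    return goc5_yield + ISSUE_SPREAD

def floater_rate(tbill_3m_yield):
    """Annual dividend rate if the holder converts to the floating option."""
    return tbill_3m_yield + ISSUE_SPREAD

print(f"fixed  : {fixed_reset_rate(0.025):.2%}")  # GOC-5 at 2.50% -> 6.25%
print(f"floater: {floater_rate(0.002):.2%}")      # 3-month bill at 0.20% -> 3.95%
```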

Investors should remember that while FixedResets can certainly mitigate the effects of a rise in yields, you pay through the nose for that benefit; the bond market, as a whole, ascribes zero value to this benefit. And the credit risk is forever. Should Bad Things happen to Groupe Aeroplan – although it is hard to imagine bad things happening to a company that combines air travel with green stamp savings books – they will not be able to refinance at +375, will not redeem, and the prefs will be trading at a big discount.

NY Fed Research on Term Spread & Business Cycle

Friday, January 8th, 2010

The Federal Reserve Bank of New York has released Staff Report 421 by Tobias Adrian, Arturo Estrella, and Hyun Song Shin titled Monetary Cycles, Financial Cycles, and the Business Cycle:

One of the most robust stylized facts in macroeconomics is the forecasting power of the term spread for future real activity. The economic rationale for this forecasting power usually appeals to expectations of future interest rates, which affect the slope of the term structure. In this paper, we propose a possible causal mechanism for the forecasting power of the term spread, deriving from the balance sheet management of financial intermediaries. When monetary tightening is associated with a flattening of the term spread, it reduces net interest margin, which in turn makes lending less profitable, leading to a contraction in the supply of credit. We provide empirical support for this hypothesis, thereby linking monetary cycles, financial cycles, and the business cycle.

I’ve never really been too comfortable with the idea that expectations invert the yield curve – it seems to me to be asking too much of the world – expectations imply forecasting and forecasting, at least in my book, implies “wrong”. I’m a much bigger fan of the “roundaboutness” process, whereby a slowdown in consumer demand causes goods to pile up at each stage of the production cycle, which means that vendors of these goods have to borrow short term funds to finance the unexpected inventory, which drives up short rates and, as production slows down, also results in a general economic slowdown.

In other words, the curve flattening and the economic cycle are not causally related, but are both results of the same cause; it’s just that the yield curve reacts quicker. This explanation leaves out the role of central banks, but I like it as the ‘unfettered free market’ explanation; central banks are simply there to smooth the extremes.

However, the authors of this paper put the central banks front and centre and seek to understand how the central bank action affects subsequent events – naturally enough, since that’s their job:

In this paper, we offer a possible causal mechanism that operates via the role of financial intermediaries and their active management of balance sheets in response to changing economic conditions. Banks and other financial intermediaries typically borrow in order to lend. Since the loans offered by banks tend to be of longer maturity than the liabilities that fund those loans, the term spread is indicative of the marginal profitability of an extra dollar of loans on intermediaries’ balance sheets. For any risk premium prevailing in the market, the compression of the term spread may mean that the marginal loan becomes uneconomic and ceases to be a feasible project from the bank’s point of view. There will, therefore, be an impact on the supply of credit to the economy, and, to the extent that the reduction in the supply of credit has a dampening effect on real activity, a compression of the term spread will be a causal signal of subdued real activity. Adrian and Hyun Song Shin (2009 a, b) argue that the reduced supply of credit also has an amplifying effect due to the widening of the risk premiums demanded by the intermediaries, putting a further downward spiral on real activity.

We explore this hypothesis, and present empirical evidence consistent with it.
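
Their mechanism boils down to one line of arithmetic; the figures below are illustrative:

```python
# The paper's mechanism in miniature: an intermediary funds a long-maturity
# loan with short-term borrowing, so the marginal loan is profitable only
# while the term spread covers the required risk premium. Figures invented.

def marginal_loan_profitable(long_rate, short_rate, risk_premium):
    """True if the net interest margin covers the required risk premium."""
    return (long_rate - short_rate) - risk_premium > 0

# Steep curve: lend. Tightening flattens the curve: the marginal loan dies.
print(marginal_loan_profitable(0.051, 0.010, 0.020))   # True  (spread 4.1%)
print(marginal_loan_profitable(0.051, 0.0525, 0.020))  # False (curve inverted)
```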

They claim their results are relevant to the “Greenspan Conundrum”:

Our results shed light on the recent debate about the “interest rate conundrum.” When the FOMC raised the Fed Funds target by 425 basis points between June 2004 and June 2006 (from 1 to 5.25 percent), the 10-year Treasury yield only increased by 38 basis points over that same time period (from 4.73 to 5.11 percent). Greenspan (2005) referred to this behavior of longer term yields as a conundrum for monetary policy makers. In the traditional, expectations driven view of monetary transmission, policy works as increases in short term rates lead to increases in longer term rates, which ultimately matter for real activity.

Our findings suggest that the monetary tightening of the 2004-2006 period ultimately did achieve a slowdown in real activity not because of its impact on the level of longer term interest rates, but rather because of its impact on the slope of the yield curve. In fact, while the level of the 10-year yield increased by only 38 basis points between June 2004 and June 2006, the term spread declined 325 basis points (from 3.44 to .19 percent). The fact that the slope flattened meant that intermediary profitability was compressed, thus shifting the supply of credit, and hence inducing changes in real activity. The .19 percent at the end of the monetary tightening cycle is below the threshold of .92 percent, and, as a result, a recession occurred within 18 months of the end of the tightening cycle (the NBER dated the start of the recession as December 2007). The 18 month lag between the end of the tightening cycle and the beginning of the recession is within the historical length.

They show a strong relationship between Fed action and the term spread:

The important impact of changes in the Fed Funds target is not on the level of longer term interest rates, but rather on the slope of the yield curve. In fact, Figure 4 below shows that there is a near perfect negative one-to-one relationship between 4-quarter changes of the Fed Funds target and 4-quarter changes of the term spread (the plot uses data from 1987q1 to 2008q3). Variations in the target affect real activity because they change the profitability of financial intermediaries, thus shifting the supply of credit.

BoC Working Paper on Liquidity & Volatility

Tuesday, January 5th, 2010

The Bank of Canada has released Working Paper 2010-1 by B. Ravikumar and Enchuan Shao titled Search Frictions and Asset Price Volatility:

We examine the quantitative effect of search frictions in product markets on asset price volatility. We combine several features from Shi (1997) and Lagos and Wright (2002) in a model without money. Households prefer special goods and general goods. Special goods can be obtained only via a search in decentralized markets. General goods can be obtained via trade in centralized competitive markets and via ownership of an asset. There is only one asset in our model that yields general goods. The asset is also used as a medium of exchange in the decentralized market to obtain the special goods. The value of the asset in facilitating transactions in the decentralized market is determined endogenously. This transaction role makes the asset pricing implications of our model different from those in the standard asset pricing model. Our model not only delivers the observed average rate of return on equity and the volatility of the equity price, but also accounts for most of the spectral characteristics of the equity price.

This is a good paper; unfortunately the prosaic explanations of the model are rather heavily larded with the math; and I am not sufficiently comfortable with the math to provide my own textual explanation. But I’ll do what I can.

The authors were most interested in attacking the excess volatility puzzle:

LeRoy and Porter (1981) and Shiller (1981) calculated the time series for asset prices using the simple present value formula – the current price of an asset is equal to the expected discounted present value of its future dividends. Using a constant interest rate to discount the future, they showed that the variance of the observed prices for U.S. equity exceeds the variance implied by the present value formula (see figure 1). This is the excess volatility puzzle.
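
The comparison is easy to reproduce in miniature. Here is a sketch with a synthetic dividend process – the data are made up, and only the variance comparison matters:

```python
# The LeRoy-Porter / Shiller comparison in miniature: with a constant
# discount rate, the "rational" price is the discounted value of future
# dividends. The dividend data below are synthetic.

import random

random.seed(0)
r, T = 0.05, 200                       # constant discount rate; periods
dividends = [1.0]
for _ in range(T - 1):                 # a persistent dividend process, assumed
    dividends.append(max(0.1, dividends[-1] + random.gauss(0.0, 0.05)))

# Ex-post ("perfect foresight") present value of realized dividends, with a
# zero-growth Gordon terminal value d/r at the final date.
pv = [0.0] * T
pv[-1] = dividends[-1] / r
for t in range(T - 2, -1, -1):
    pv[t] = (dividends[t + 1] + pv[t + 1]) / (1 + r)

mean = sum(pv) / T
print(f"variance of PV price series: {sum((p - mean) ** 2 for p in pv) / T:.2f}")
# Shiller's point: observed equity prices fluctuate far more than any such
# ex-post present-value series can justify - the excess volatility puzzle.
```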

After the rather precious definition of the General Good as a “tree” and the Special Goods as “fruits”, they explain:

Random matching during the day will typically result in non-degenerate distributions of asset holdings. In order to maintain tractability, we use the device of large households along the lines of Shi (1997). Each household consists of a continuum of worker-shopper (or, seller-buyer) pairs. Buyers cannot produce the special good, only sellers are capable of production. We assume the fraction of buyers = fraction of sellers = 1/2. Then, the probability of single coincidence meetings during the day is α/4. Each household sends its buyers to the decentralized day market with take-it-or-leave-it instructions (q; s) – accept q units of special goods in exchange for s trees. Each household also sends its sellers with “accept” or “reject” instructions. There is no communication between buyers and sellers of the same household during the day. After the buyers and sellers finish trading in the day, the household pools the trees and shares the special goods across its members each period. By the law of large numbers, the distributions of trees and special goods are degenerate across households. This allows us to focus on the representative household. The representative household’s consumption of the special good is qα/4.

…. and the interesting part is …

To compute the “liquidity value” of the asset, we set β and δ at their benchmark values (Table 1) and calculate the price sequence for a standard asset pricing model such as Lucas (1978). This is easily done by setting u′(q_t α/4) = 1 for all t in equation (10). Since the standard asset pricing model does not assign any medium of exchange role to the asset, the difference between the prices implied by the standard model and ours would be the liquidity value of the asset. We compute the liquidity value as a fraction of the price implied by the standard model, i.e., liquidity value = (P_model – P_Lucas) / P_Lucas. The mean liquidity value implied by our model is 17.5%.
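
The calculation itself is a one-liner; the prices below are hypothetical, chosen only to reproduce the paper’s mean figure:

```python
# The liquidity-value calculation quoted above: the premium of the model
# price over the Lucas (no-transactions-role) price, as a fraction of the
# Lucas price. The prices are hypothetical.

def liquidity_value(p_model, p_lucas):
    return (p_model - p_lucas) / p_lucas

print(f"{liquidity_value(23.5, 20.0):.1%}")  # 17.5%, matching the paper's mean
```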

This is a fascinating result, illustrating the value of liquidity in a segmented market. It is the function of dealers – and their capital – to reduce friction for all players, but to keep a piece for themselves. I will be fascinated to follow the progress of this model as, perhaps, it gets extended to include “households” that function in such a manner.

It is also apparent that when friction increases, the “flight to quality” into government bonds may be characterized to a great extent as a “flight to liquidity”.