Interesting External Papers

BoC Paper on Systemic Capital Requirements

The Bank of Canada has released a working paper by Céline Gauthier, Alfred Lehar, and Moez Souissi titled Macroprudential Regulation and Systemic Capital Requirements:

In the aftermath of the financial crisis, there is interest in reforming bank regulation such that capital requirements are more closely linked to a bank’s contribution to the overall risk of the financial system. In our paper we compare alternative mechanisms for allocating the overall risk of a banking system to its member banks. Overall risk is estimated using a model that explicitly incorporates contagion externalities present in the financial system. We have access to a unique data set of the Canadian banking system, which includes individual banks’ risk exposures as well as detailed information on interbank linkages including OTC derivatives. We find that systemic capital allocations can differ by as much as 50% from 2008Q2 capital levels and are not related in a simple way to bank size or individual bank default probability. Systemic capital allocation mechanisms reduce default probabilities of individual banks as well as the probability of a systemic crisis by about 25%. Our results suggest that financial stability can be enhanced substantially by implementing a systemic perspective on bank regulation.

To be frank, I found this paper rather difficult to follow – I suspect that the authors had to agree to severe restrictions on what they could publish as a condition of getting the data:

Data on exposures related to derivatives come from a survey initiated by OSFI at the end of 2007. In that survey, banks are asked to report their 100 largest mark-to-market counterparty exposures that were larger than $25 million. These exposures were related to both OTC and exchange traded derivatives. They are reported after netting and before collateral and guarantees.

The interbank exposures were enormous:

The aggregate size of interbank exposures was approximately $21.6 billion for the Six major Canadian banks. As summarized in Table 3, total exposures between banks accounted for around 25 per cent of bank capital on average. The available data suggest that exposures related to traditional lending (deposits and unsecured loans) were the largest ones compared with mark-to-market derivatives and cross-shareholdings exposures. Indeed, in May 2008, exposures related to traditional lending represented around $12.7 billion on aggregate, and 16.3 per cent of banks’ Tier 1 capital on average. Together, mark-to-market derivatives and cross-shareholdings represented 10 per cent of banks’ Tier 1 capital on average.

Banks can affect each other’s probability of default (PD) in a number of ways:

A default is called fundamental when credit losses are sufficient to wipe out all the capital of the bank as defined in Equation (19). Column three in Table 4 summarizes the PD from defaults due to interbank contagion without asset fire sales, i.e. when a bank has sufficient capital to absorb the credit losses in the non-bank sectors but is pushed to bankruptcy because of losses on its exposures to other banks (as defined in Equation (21) with p = 1). The third category of default is the contagion due to asset fire sales as defined in Equation (20). In these cases a bank has enough capital to withstand both the credit losses in the non-banking sector and the writedown on other banks’ exposures but, conditional on the other losses having reduced capital, not enough to withstand the mark-to-market losses due to its own asset fire sale and/or the asset fire sale of the other banks.

The Canadian banking system is very stable without the consideration of asset fire sales. Both fundamental and contagious PDs are well below 20 basis points, even though we assume increased loan losses due to an adverse macro scenario throughout the paper. In the asset fire sale scenario, troubled banks want to maintain regulatory capital requirements by selling off assets, which causes externalities for all other banks as asset prices fall. PDs increase and as banks get weaker because of writedowns they also become more susceptible for contagion.
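The paper's three-way default taxonomy is easy to restate mechanically. Here is a minimal sketch of my own reading of Equations (19)-(21) – not the authors' code, and all of the loss figures are hypothetical placeholders:

```python
# Stylized reading of the paper's default taxonomy (Equations 19-21).
# All figures are hypothetical placeholders, not the authors' data.

def classify_default(capital, nonbank_losses, interbank_losses, fire_sale_losses):
    """Classify a bank according to the three default categories quoted above."""
    if capital <= nonbank_losses:
        return "fundamental"            # credit losses alone wipe out capital
    if capital <= nonbank_losses + interbank_losses:
        return "interbank contagion"    # pushed under by exposures to other banks
    if capital <= nonbank_losses + interbank_losses + fire_sale_losses:
        return "fire-sale contagion"    # mark-to-market losses from asset fire sales
    return "survives"

# Example: a bank with $10 billion of capital
print(classify_default(10.0, nonbank_losses=6.0, interbank_losses=3.0, fire_sale_losses=2.0))
# -> fire-sale contagion
```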

By me, one of the more important results discussed in the paper is:

The number of bankruptcies jumps dramatically as market liquidity decreases. In columns two to four of table 5 we allow asset prices to drop 50 percent more than in the base scenario. We immediately see that our analysis is very sensitive to the minimum asset price, which defines our demand function for the illiquid asset. Allowing fire sale discounts of three percent increases banks’ PDs significantly. All banks default almost in two out of three cases. Default correlation is almost one (not shown), which explains why the total PDs are almost identical. While banks 2 and 4 are very likely to default because of writedowns in the value of their illiquid assets, banks 3, 6 and especially bank 5 are more likely to be affected by contagion. Two possible reasons could explain why these results are so sensitive to asset fire sale discounts. First, high Tier 1 capital requirements of 7%, compared to the 4% under Basel rules, trigger asset fire sales early, causing other banks to follow. Second, the small number of banks causes each bank to have a huge price impact when selling off illiquid securities, creating negative externalities for the whole system.

… and the authors reflect:

Two policy insights stand out from the latter results. First, in the last financial crisis, regulators were criticized for helping banks to offload assets from their balance sheets at subsidized prices, and for relaxing accounting rules on the basis that market prices did not reflect fundamental values, which allowed banks to avoid mark-to-market writedowns of their assets. While our analysis cannot show the long-term costs associated with these measures, we can at least document that there is a significant immediate benefit for financial stability by preventing asset fire sale induced writedowns. Second, a countercyclical reduction in the minimum Tier 1 capital requirements triggering asset fire sales (or a higher capital buffer built in good time) would reduce dramatically the risk of default triggered by AFS.

Further:

We can see again that the Canadian banking system is interdependent. The default of any one bank is correlated with the default of any other bank with a probability of more than 50%. Consistent with the results in table 7, the defaults of banks one and five (two and six) are correlated the most (less) with other banks default.

Footnote 36: The results in this section are based on correlations and do not reflect causality.

I’m rather disappointed that the various capital allocation schemes tested did not include a straightforward approach based on changing the risk weight of interbank exposures. There’s not much that can be done about the decline of asset values in a fire-sale situation; but contagion via direct markdown/default on interbank loans is very easy to restrict. There’s a trade-off against banking system efficiency (interbank loans allow, effectively, Bank A to lend to Bank B’s customers when there are differing investment opportunities), but I’m not sure how important that might be in Canada, where all but one of the Big 6 is national in scope.

Interesting External Papers

John Hull Supports Tranche Retention, Bonus Deferral

John Hull has published an essay titled The Credit Crunch of 2007: What Went Wrong? Why? What Lessons Can Be Learned?:

This paper explains the events leading to the credit crisis that began in 2007 and the products that were created from residential mortgages. It explains the multiple levels of securitization that were involved. It argues that the inappropriate incentives led to a short‐term focus in the decision making of traders and a failure to evaluate the risks being taken. The products that were created lacked transparency with the payoffs from one product depending on the performance of many other products. Market participants relied on the AAA ratings assigned to products without evaluating the models used by rating agencies. The paper considers the steps that can be taken by financial institutions and their regulators to avoid similar crises in the future. It suggests that companies should be required to retain some of the risk in each instrument that is created when credit risk is transferred. The compensation plans within financial institutions should be changed so that they have a longer term focus. Collateralization through either clearinghouses or two‐way collateralization agreements should become mandatory. Risk management should involve more managerial judgment and rely less on the mechanistic application of value‐at‐risk models.

With respect to tranche retention, Dr. Hull argues:

The present crisis might have been less severe if the originators of mortgages (and other assets where credit risk is transferred) were required by regulators to keep, say, 20% of each tranche created. This would have better aligned the interests of originators with the interests of the investors who bought the tranches.

The most important reason why originators should have a stake in all the tranches created is that this encourages the originators to make the same lending decisions that the investors would make. Another reason is that the originators often end up as administrators of the mortgages (collecting interest, making foreclosure decisions, etc). It is important that their decisions as administrators are made in the best interests of investors.

This idea might have reduced the market excesses during the period leading up to the credit crunch of 2007. However, it should be acknowledged that one of the ironies of the credit crunch is that securitization did not in many instances get the mortgages off the books of originating banks. Often AAA-rated senior tranches created by one part of a bank were bought by other parts of the bank. Because banks were both investors in and originators of mortgages, one might expect a reasonable alignment of the interests of investors and originators. But the part of the bank investing in the mortgages was usually far removed from the part of the bank originating the mortgages and there appears to have been little information flow from one to the other.

Assiduous Readers will not be surprised to learn that I don’t like this idea. In my role as bond trader I have never bought a securitization … I would if the spreads were high enough, but generally spreads are compressed by other buyers.

When I buy a bond, I want to know somebody’s on the hook for it. I like the idea that if the borrower is a day late or a dollar short, I can force an operating company into bankruptcy and cause great anguish and financial ill effects on the deadbeats. Securitizations tend to be highly correlated; while this is claimed to be counterbalanced by the overcollateralization (or tranche subordination, which is simply a formalization of the process), I confess I have a great preference for keeping actual bonds in my bond portfolios.

Tranche retention is simply a methodology whereby securitizations become more bond-like. I object to such blurring of the lines, especially when enforced by governmental regulatory fiat. What I am being told, in a world where such retention is mandated, is that if something has been issued that I – for good reasons or bad – wish to buy and that the security originator wishes to sell, we’ll both go to jail if we consummate the transaction.

I will also point out the logical implications of tranche retention: when I sell 100 shares of SLF.PR.A, I should be forced to retain 20 of them, so that the buyer will know they’re OK. That’s crazy. The buyer should do his own damn homework and make up his own mind.

The world has learned over and over that while regulation is very nice, the only thing that works really well is caveat emptor. I do not want some 20-year old regulator with a college certificate in boxtickingology telling me what I may and may not buy.

Dr. Hull has underemphasized the heart of the matter: one of the ironies of the credit crunch is that securitization did not in many instances get the mortgages off the books of originating banks. Often AAA-rated senior tranches created by one part of a bank were bought by other parts of the bank.

In this context, I will repeat some of Sheila Bair’s testimony to the Crisis Committee:

In the mid-1990s, bank regulators working with the Basel Committee on Banking Supervision (Basel Committee) introduced a new set of capital requirements for trading activities. The new requirements were generally much lower than the requirements for traditional lending under the theory that banks’ trading-book exposures were liquid, marked-to-market, mostly hedged, and could be liquidated at close to their market values within a short interval—for example 10 days.

The market risk rule presented a ripe opportunity for capital arbitrage, as institutions began to hold growing amounts of assets in trading accounts that were not marked-to-market but “marked-to-model.” These assets benefitted from the low capital requirements of the market risk rule, even though they were in some cases so highly complex, opaque and illiquid that they could not be sold quickly without loss. Indeed, in late 2007 and through 2008, large write-downs of assets held in trading accounts weakened the capital positions of some large commercial and investment banks and fueled market fears.

I see the basic problem as one that happens when traders try to be investors. Traders do not typically know a lot about the market – although they can talk a good game – and when they try their hand at actual investing, bad things will happen more often than not. It’s a totally different mindset.

I didn’t make a penny during the tech bubble – never bought any of it. I have numerous friends, however, who made out like bandits and set themselves up for life during those years. The difference between us was not the knowledge that that stuff was garbage … we all knew it was garbage. But I could not sleep at night knowing I had garbage in my portfolio; they were fine with the idea, so long as there was lots of positive chatter and prices kept going up.

A long, long time ago – so long I can’t remember the reference – I read an interview with a big wheel (perhaps the proprietor) of a small NASDAQ trading firm. The interview was interrupted when one of his staff burst in with the news that another brokerage (XYZ brokers) wanted to sell a large block of stock (45,000 shares, if I remember correctly) in ABC Company and was willing to do so at a discount to market. So they look at the recent price/volume history, check the news and the deal gets done. When the interview resumed, the interviewer asked “So … what’s ABC Company?”. The trader replied, patiently and wearily: “It’s something XYZ wanted to sell 45,000 shares of.”

Now that’s trading!

Despite what the media’s constant interviews with traders might suggest, there is not really much correlation between trading ability and investing ability.

So anyway, I will suggest that when considering a regulatory response to the Credit Crunch, a clearer distinction between trading and investing activities is what’s required. As I have previously suggested, there should be no bright line between investment banks and vanilla banks; but the difference should be recognized in the capital rules. Investment banks should have low capital requirements for trading inventory and higher ones for investment positions; the reverse for vanilla banks. And for heaven’s sake, make sure that there’s no jiggery-pokery with aging positions on the trading books! Hold it for thirty days, and the capital charge goes up progressively! Start trading too many “investment” positions and you’ll find your investment portfolio reclassified.

As far as bonus deferral is concerned … it’s suitable for investors, not so much for traders. Bonus deferral requires a lot of trust by the employee, trust that is all too often unjustified as exemplified by the Citigroup case discussed January 7 and, here in Canada, by the case of David Berry. The major effect of bonus deferral, I believe, will be to spawn a migration of talent to hedge funds and boutiques.

Dr. Hull suggests:

One idea is the following. At the end of each year a financial institution awards a “bonus accrual” (positive or negative) to each employee reflecting the employee’s contribution to the business. The actual cash bonus received by an employee at the end of a year would be the average bonus accrual over the previous five years or zero, whichever is higher. For the purpose of this calculation, bonus accruals would be set equal to zero for years prior to the employee joining the financial institution (unless the employee manages to negotiate otherwise) and bonuses would not be paid after an employee leaves it. Although not perfect, this type of plan would motivate employees to use a multi-year time horizon when making decisions.
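The mechanics of the scheme are simple enough to write down. A minimal sketch, assuming the accrual history is just a list of annual figures (my notation, not Dr. Hull’s):

```python
def cash_bonus(accrual_history, window=5):
    """Cash bonus under Hull's proposal: the average of the last `window` years'
    bonus accruals, zero-filled for years before the employee joined, floored at zero."""
    recent = list(accrual_history)[-window:]
    padded = [0.0] * (window - len(recent)) + recent   # pad pre-employment years with zero
    return max(sum(padded) / window, 0.0)

# A trader with a big first year, then a blow-up:
print(cash_bonus([2.0]))              # 0.4 -- the 2.0 accrual averaged over five years
print(cash_bonus([2.0, 3.0, -10.0]))  # 0.0 -- the five-year average is negative
```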

One problem I have with that is vesting. Is the vesting of this bonus iron-clad or not? Is it held by a mutually agreed-upon third party in treasury bills? And what happens if the employee leaves the firm and somebody else starts trading his book? Who takes any future losses then?

Another problem, of course, is trust (assuming the vesting is not iron-clad). When a relationship turns sour – or somebody gets greedy – things can turn nasty in a hurry. It should always be remembered that the purpose of regulation is not to protect anybody. The purpose of regulation is to ensure that everybody is guilty of something.

I have twice been offered jobs with the stupidest incentive scheme in the world. Not only would my bonus be determined by how well the firm did – putting me on the hook for decisions made by people I didn’t even know – but because of deferral, up-front transfers and discretion, I could have worked there for five years and paid them for the privilege. Those negotiations didn’t take long!

Interesting External Papers

Excess Reserves at Fed: Another Cross-Current

Excess bank reserves at the Fed were last discussed on PrefBlog when reviewing a New York Fed working paper titled Why are banks holding so many excess reserves?.

The Federal Reserve Bank of Cleveland has released its Economic Trends, January 2010 issue, with an interesting article by John B. Carlson and John Lindner titled Treasury Deposits and Excess Bank Reserves:

An interesting development on the Federal Reserve’s balance sheet is a decline in excess bank reserves. This decline has occurred despite an increase in the overall size of the Fed’s balance sheet. The key factor accounting for the decline in excess reserves is a substantial increase in U.S. treasury deposits at the Fed, which were made as a consequence of having issued new debt. When the treasury issues debt to the public and deposits the proceeds at the Fed in its general account, bank reserves decline. In normal times, the treasury typically holds some proceeds in Treasury Tax and Loan accounts at commercial banks, which keeps reserves in the banking system. This arrangement helps maintain a steady supply of reserves—a desirable outcome for when the Fed sought to keep the fed funds rate near a target rate.

Following the collapse of Lehman Brothers in September 2008, the Federal Reserve instituted a number of policies that sharply increased bank reserves in excess of required levels. Initially, the Fed sought to absorb most of the new reserves in order to keep the fed funds rate near its target rate. To help in this effort, the treasury issued short-term debt at special auctions (called the Supplementary Financing Program or SFP) and placed the proceeds in a new supplemental treasury account at the Federal Reserve. Still, the amount of reserves absorbed could not keep up with the amount of bank reserves that were being created with the Fed’s new credit policies. Subsequently, the fed funds target was lowered to zero, and the immediate need to absorb reserves abated.

In late 2009 the total level of treasury debt approached the limit authorized by Congress. As the SFP issues matured, the SFP deposits were used to redeem them, and excess reserves increased. In December Congress raised the debt ceiling, allowing the treasury to issue new debt. This time, the treasury deposited much of the proceeds into its general account with the Fed, which caused the observed decline in excess reserves.
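The mechanism is just double-entry bookkeeping on the Fed’s liability side: when the Treasury’s general account grows, bank reserves shrink dollar-for-dollar, with the size of the balance sheet unchanged. A toy illustration with made-up figures:

```python
# Toy Fed balance sheet (all figures hypothetical, $ billions).
fed = {"assets": 2200.0, "bank_reserves": 1000.0,
       "treasury_general_account": 100.0, "other_liabilities_and_capital": 1100.0}

def treasury_issues_debt_and_deposits_at_fed(fed, proceeds):
    """The public pays for new Treasury debt with bank money and the Treasury parks the
    proceeds in its general account at the Fed: reserves fall one-for-one, assets unchanged."""
    fed["bank_reserves"] -= proceeds
    fed["treasury_general_account"] += proceeds
    return fed

treasury_issues_debt_and_deposits_at_fed(fed, 200.0)
print(fed["bank_reserves"], fed["treasury_general_account"])   # 800.0 300.0
```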


Interesting External Papers

Hull & White on AAA Tranches of Subprime

John Hull and Alan White have published a working paper titled The Risk of Tranches Created from Residential Mortgages:

This paper examines, ex-ante, the risk in the tranches of ABSs and ABS CDOs that were created from residential mortgages between 2000 and 2007. Using the criteria of the rating agencies, it tests how wide the AAA tranches can be under different assumptions about the correlation model and recovery rates. It concludes that the AAA ratings assigned to the senior tranches of ABSs were not totally unreasonable. However, the AAA ratings assigned to tranches of Mezz ABS CDOs cannot be justified. The risk of a Mezz ABS CDO tranche depends critically on the correlation between mortgage pools as well as on the correlation model and the thickness of the underlying BBB tranches. The BBB tranches of ABSs cannot be considered equivalent to BBB bonds for the purposes of subsequent securitizations.

This paper won’t be popular amongst the Credit Ratings Agencies Are Evil crowd!

Credit derivatives models often assume that the recovery rate realized when there is a default is constant. This is less than ideal. As the default rate increases, the recovery rate for a particular asset class can be expected to decline. This is because a high default rate leads to more of the assets coming on the market and a reduction in price.

As is now well known, this argument is particularly true for residential mortgages. In a normal market, a recovery rate of about 75% is often assumed for this asset class. If this is assumed to be the recovery rate in all situations, the worst possible loss on a portfolio of residential mortgages given by the model would be 25%, and the 25% to 100% senior tranche of an ABS created from the mortgages could reasonably be assumed to be safe. In fact, recovery rates on mortgages have declined in the high default rate environment experienced since 2007.

The evaluation of ABSs depends on a) the expected default rate, Q, for mortgages in the underlying pool, b) the default correlation, ρ, for mortgages in the pool, and c) the recovery rate, R. Data from the 1999 to 2006 period suggest a value of Q less than 5% assuming an average mortgage life of 5 years. But, as has been mentioned, a different macroeconomic environment could be anticipated over the next few years. It would seem to be more prudent to use an estimate of 10%, or even higher. We will present results for values of Q equal to 5%, 10%, and 20%. The Basel II capital requirements are based on a copula correlation of 0.15 for residential mortgages. We will present results for values of ρ between 0.05 and 0.30. As already mentioned, a recovery rate of 75% is often assumed for residential mortgages, but this is probably optimistic in a high default rate environment. We will present results for the situation where the recovery rate is fixed at 75% and for the situation where the recovery rate model in the previous section is used with Rmin = 50% and Rmax = 100%.

ABS CDOs also depend on the parameter, α. Loosely speaking, this measures the proportion of the default correlation that comes from a factor common to all pools. A value of α close to zero indicates that investors obtain good diversification benefits from the ABS CDO structure. In adverse market conditions some mezzanine tranches can be expected to suffer 100% losses while others incur no losses. However, a value of α close to one indicates that all mezzanine tranches will tend to sink or swim together. We do not know what estimates rating agencies made for α. (Ex post of course, we know that it was high.) We will therefore present results based on a wide range of values for this parameter.
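For those who want to see how Q, ρ and R interact, here is a minimal Monte Carlo sketch of the standard one-factor Gaussian copula applied to a single mortgage pool. It is my own illustration, not the authors’ implementation; the parameter values are simply those quoted above:

```python
import numpy as np
from scipy.stats import norm

def pool_losses(Q=0.10, rho=0.15, R=0.75, n_mortgages=1000, n_sims=20000, seed=1):
    """Simulate fractional pool losses under a one-factor Gaussian copula:
    mortgage i defaults when sqrt(rho)*M + sqrt(1-rho)*Z_i < Phi^{-1}(Q)."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(Q)
    M = rng.standard_normal((n_sims, 1))             # common factor
    Z = rng.standard_normal((n_sims, n_mortgages))   # idiosyncratic factors
    default_frac = (np.sqrt(rho) * M + np.sqrt(1 - rho) * Z < threshold).mean(axis=1)
    return default_frac * (1 - R)

losses = pool_losses()
# Probability that losses reach a senior tranche attaching at 25% of the pool:
print("P(loss > 25%):", (losses > 0.25).mean())
```

With the recovery rate fixed at 75%, the pool loss can never exceed 25%, which is exactly the paper’s point about why the 25%–100% senior tranche looks safe under that assumption – and why a stochastic recovery rate changes the answer.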

The meat of the matter – at least as far as the CRAs are concerned – is:

Table 2 shows that when a 20% default rate is combined with a high default correlation, and a stochastic recovery rate model, the AAA ratings that were made seem a little high. Also, the ratings are difficult to justify when the most extreme model (double t copula, stochastic recovery rate) is used. But overall the results in Table 2 indicate that the AAA ratings that were assigned were not totally unreasonable.

Very bad things happened to CDOs created from the mezzanine tranches of the structures – and here the CRAs can be faulted:

It should be noted that a CDO created from the BBB tranches of ABSs is quite different from a CDO created from BBB bonds. This is true even when the BBB tranches have been chosen so that their probabilities of default and expected losses are consistent with their BBB rating. The reason is that the probability distribution of the loss from a BBB tranche is quite different from the probability distribution of the loss from a BBB bond.

The authors conclude:

Contrary to many of the opinions that have been expressed, the AAA ratings for the senior tranches of ABSs were not unreasonable. The weighted average life of mortgages is about five years. The probability of loss and expected loss of the AAA-rated tranches that were created were similar to or better than those of AAA-rated five-year bonds.

The AAA ratings for Mezz ABS CDOs are much less defensible. Scenarios where all the underlying BBB tranches lose virtually all their principal are sufficiently probable that it is not reasonable to assign a AAA rating to even a quite thin senior tranche. The risks in Mezz ABS CDOs depend critically on a) the width of the underlying BBB tranches, b) the correlation between pools, c) the tail default correlation, and d) the relationship between the recovery rate and the default rate. An important point is that the BBB tranche of an ABS cannot be assumed to be similar to a BBB bond for the purposes of determining the risks in ABS CDO tranches.

In practice Mezz ABS CDOs accounted for about 3% of all mortgage securitizations. Our conclusion is therefore that the vast majority of the AAA ratings assigned to tranches created from mortgages were reasonable, but in a small minority of the cases they cannot be justified.

I think it’s fair to conclude that the problems of the sub-prime crisis were not caused by the rating agencies and only to a small degree by the investors who plunked down their money. The problem lay in concentration: the banks took the view that if one is good, two is better … and went the way of all those who fail to diversify sufficiently.

Update: For a review of what participants were thinking at the time, see Making sense of the subprime crisis. For more on subprime default experience, see Subprime! Problems foreseeable in 2005?. I will admit, though, that what I’m really waiting for is an accounting of realized losses on subprime paper.

Interesting External Papers

DBRS Releases Global Bank Rating Methodology

DBRS has released its Global Methodology for Rating Banks and Banking Organisations that has some snippets of interest for preferred share investors:

DBRS notes that the regulatory focus on Tier 1 capital is evolving with increased focus on core Tier 1 capital that excludes hybrids. We will adjust our methodology in the future to reflect any changes in emphasis or requirements.

To assess leverage, another capital measure that we employ is the ratio of Tier 1 capital to tangible assets. This ratio, or a variation of it, is applied to banks in a number of countries. It is generally more constraining than the Basel ratios, as assets are not risk-adjusted, although no adjustment is made for off-balance sheet exposures. We anticipate that there is likely to be pressure for adoption of some variation on this leverage ratio in more countries in the aftermath of the crisis.

Taking advantage of the regulatory risk weightings, DBRS considers the ratio of tangible common equity to RWA. Reflecting DBRS’s preference for equity over hybrids as a cushion for bondholders and other senior creditors, this ratio excludes the hybrid securities that are given full weight by the regulators, up to certain limits.

In the light of Sheila Bair’s testimony to the Crisis Committee, the following extract is interesting:

By their nature, however, these businesses, if poorly run with inadequate risk management, can detract from a bank’s strengths and constrain its ratings. It is worth noting again that while banks had extraordinary losses in their trading businesses in this cycle, most of the losses were concentrated in few business lines, primarily in certain areas of fixed income, related to origination, structuring and packaging various forms of credit and more complex securities. Risk management of trading activities was predominantly successful in helping banks generate revenues and earnings across many of their trading businesses. The analysis focuses on the trading and other capital markets businesses, but does not ignore other exposures to market risk.

Interesting External Papers

National Post Cheers on FixedResets

The National Post had an article on preferreds yesterday by Eric Lam and David Pett, Reset preferred shares fill trust gap, with all the incisive, hard-hitting reporting that made the National Post what it is today: bankrupt:

John Nagel, vice-president at Desjardins Securities, preferred shares department, and one of the creators of reset preferreds, said the shares give investors much more flexibility.

“The very low interest-rate scenario that we’re in … if rates are a lot higher five or six years from now, there’s the option of going floating or being redeemed. That’s very attractive,” he said.

Flexibility? The redemption option belongs to the issuer. The holder – if not redeemed – has the relatively trivial opportunity to choose between fixed and floating. The flexibility that counts belongs to the issuer.

To date, almost all of the reset preferreds have gained in value from the price they were originally issued. That represents an added bonus for investors who got in early, but it also presents a particular challenge for investors looking to get in on the action now as they may face lower yields and the potential for large capital losses as shares get redeemed on the reset date at par.

Clearly, it’s important to think about the exit strategy right from the beginning.

“When you buy it, assume the worst,” Mr. Nagel said.

“The secret is to look at these six months or nine months ahead of [maturity], and make a decision. If you think they’re going to be redeemed, you should sell.”

Sell to whom? At what price?

If the bond yield has risen substantially, the issuer is likely going to redeem the shares to prevent you from cashing in on the elevated rates.

This part is nonsense. The redemption decision will have everything to do with credit spreads on the market – can they borrow on more attractive terms? – and virtually nothing to do with the absolute value of “bond yield” – assuming that by “bond” they mean five-year Canadas.

On the other hand, if the bond yield is low and it looks like the shares will be reset, the best bet — available in the vast majority of cases — is to convert to floating rate preferred shares, which are usually pegged to the Government of Canada three-month treasury bills plus the spread.

Hopeless nonsense. In a normal environment, a five year bond will outperform treasury bills bought and rolled for a five-year term. Not always, but more often than not.

Investors should remember that while FixedResets can certainly mitigate the effects of a rise in yields, you pay through the nose for that benefit; the bond market, as a whole, ascribes zero value to this benefit. And the credit risk is forever. Should Bad Things happen to Groupe Aeroplan – although it is hard to imagine bad things happening to a company that combines air travel with green stamp savings books – they will not be able to refinance at +375, will not redeem, and the prefs will be trading at a big discount.
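For reference, the reset arithmetic itself is trivial: on the reset date the new fixed dividend rate is the five-year Canada yield plus the issue’s fixed spread (the +375 referred to above), and the issuer’s alternative is simply to redeem at par. A toy calculation with made-up yields:

```python
def reset_dividend_rate(goc5_yield_pct, issue_spread_bp):
    """New fixed dividend rate on a FixedReset's reset date:
    five-year Government of Canada yield plus the issue's fixed spread."""
    return goc5_yield_pct + issue_spread_bp / 100.0

# Hypothetical: a +375 issue resetting when five-year Canadas yield 2.50%
print(reset_dividend_rate(2.50, 375))   # 6.25 (% of par value)
```

The issuer will redeem instead whenever it can refinance more cheaply than that, which is why – as noted above – the option that matters belongs to the issuer.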

Interesting External Papers

NY Fed Research on Term Spread & Business Cycle

The Federal Reserve Bank of New York has released Staff Report 421 by Tobias Adrian, Arturo Estrella, and Hyun Song Shin titled Monetary Cycles, Financial Cycles, and the Business Cycle:

One of the most robust stylized facts in macroeconomics is the forecasting power of the term spread for future real activity. The economic rationale for this forecasting power usually appeals to expectations of future interest rates, which affect the slope of the term structure. In this paper, we propose a possible causal mechanism for the forecasting power of the term spread, deriving from the balance sheet management of financial intermediaries. When monetary tightening is associated with a flattening of the term spread, it reduces net interest margin, which in turn makes lending less profitable, leading to a contraction in the supply of credit. We provide empirical support for this hypothesis, thereby linking monetary cycles, financial cycles, and the business cycle.

I’ve never really been too comfortable with the idea that expectations invert the yield curve – it seems to me to be asking too much of the world – expectations imply forecasting and forecasting, at least in my book, implies “wrong”. I’m a much bigger fan of the “roundaboutness” process, whereby a slowdown in consumer demand causes goods to pile up at each stage of the production cycle, which means that vendors of these goods have to borrow short term funds to finance the unexpected inventory, which drives up short rates and, as production slows down, also results in a general economic slowdown.

In other words, the curve flattening and the economic cycle are not causally related, but are both results of the same cause; it’s just that the yield curve reacts more quickly. This explanation leaves out the role of central banks, but I like it as the ‘unfettered free market’ explanation; central banks are simply there to smooth the extremes.

However, the authors of this paper put the central banks front and centre and seek to understand how the central bank action affects subsequent events – naturally enough, since that’s their job:

In this paper, we offer a possible causal mechanism that operates via the role of financial intermediaries and their active management of balance sheets in response to changing economic conditions. Banks and other financial intermediaries typically borrow in order to lend. Since the loans offered by banks tend to be of longer maturity than the liabilities that fund those loans, the term spread is indicative of the marginal profitability of an extra dollar of loans on intermediaries’ balance sheets. For any risk premium prevailing in the market, the compression of the term spread may mean that the marginal loan becomes uneconomic and ceases to be a feasible project from the bank’s point of view. There will, therefore, be an impact on the supply of credit to the economy, and, to the extent that the reduction in the supply of credit has a dampening effect on real activity, a compression of the term spread will be a causal signal of subdued real activity. Adrian and Hyun Song Shin (2009 a, b) argue that the reduced supply of credit also has an amplifying effect due to the widening of the risk premiums demanded by the intermediaries, putting a further downward spiral on real activity.

We explore this hypothesis, and present empirical evidence consistent with it.

They claim their results are relevant to the “Greenspan Conundrum”:

Our results shed light on the recent debate about the “interest rate conundrum.” When the FOMC raised the Fed Funds target by 425 basis points between June 2004 and June 2006 (from 1 to 5.25 percent), the 10-year Treasury yield only increased by 38 basis points over that same time period (from 4.73 to 5.11 percent). Greenspan (2005) referred to this behavior of longer term yields as a conundrum for monetary policy makers. In the traditional, expectations driven view of monetary transmission, policy works as increases in short term rates lead to increases in longer term rates, which ultimately matter for real activity.

Our findings suggest that the monetary tightening of the 2004-2006 period ultimately did achieve a slowdown in real activity not because of its impact on the level of longer term interest rates, but rather because of its impact on the slope of the yield curve. In fact, while the level of the 10-year yield only increased by 38 basis points between June 2004 and June 2006, the term spread declined 325 basis points (from 3.44 to .19 percent). The fact that the slope flattened meant that intermediary profitability was compressed, thus shifting the supply of credit, and hence inducing changes in real activity. The .19 percent at the end of the monetary tightening cycle is below the threshold of .92 percent, and, as a result, a recession occurred within 18 months of the end of the tightening cycle (the NBER dated the start of the recession as December 2007). The 18 month lag between the end of the tightening cycle, and the beginning of the recession is within the historical length.

They show a strong relationship between Fed action and the term spread:

The important impact of changes in the Fed Funds target is not on the level of longer term interest rates, but rather on the slope of the yield curve. In fact, Figure 4 below shows that there is a near perfect negative one-to-one relationship between 4-quarter changes of the Fed Funds target and 4-quarter changes of the term spread (the plot uses data from 1987q1 to 2008q3). Variations in the target affect real activity because they change the profitability of financial intermediaries, thus shifting the supply of credit.

Interesting External Papers

BoC Working Paper on Liquidity & Volatility

The Bank of Canada has released Working Paper 2010-1 by B. Ravikumar and Enchuan Shao titled Search Frictions and Asset Price Volatility:

We examine the quantitative effect of search frictions in product markets on asset price volatility. We combine several features from Shi (1997) and Lagos and Wright (2002) in a model without money. Households prefer special goods and general goods. Special goods can be obtained only via a search in decentralized markets. General goods can be obtained via trade in centralized competitive markets and via ownership of an asset. There is only one asset in our model that yields general goods. The asset is also used as a medium of exchange in the decentralized market to obtain the special goods. The value of the asset in facilitating transactions in the decentralized market is determined endogenously. This transaction role makes the asset pricing implications of our model different from those in the standard asset pricing model. Our model not only delivers the observed average rate of return on equity and the volatility of the equity price, but also accounts for most of the spectral characteristics of the equity price.

This is a good paper; unfortunately the prosaic explanations of the model are rather heavily larded with the math; and I am not sufficiently comfortable with the math to provide my own textual explanation. But I’ll do what I can.

The authors were most interested in attacking the excess volatility puzzle:

LeRoy and Porter (1981) and Shiller (1981) calculated the time series for asset prices using the simple present value formula – the current price of an asset is equal to the expected discounted present value of its future dividends. Using a constant interest rate to discount the future, they showed that the variance of the observed prices for U.S. equity exceeds the variance implied by the present value formula (see figure 1). This is the excess volatility puzzle.
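In symbols, the test compares the observed price with the ex-post rational price built from realized dividends. A sketch of the standard argument, in my notation rather than the authors’:

```latex
p_t^* = \sum_{k=1}^{\infty} \beta^k d_{t+k}, \qquad p_t = E_t\!\left[p_t^*\right],
\qquad u_t \equiv p_t^* - p_t,\ E_t[u_t] = 0
\quad\Longrightarrow\quad
\operatorname{Var}(p_t^*) = \operatorname{Var}(p_t) + \operatorname{Var}(u_t) \ \ge\ \operatorname{Var}(p_t).
```

Here u_t is the forecast error, orthogonal to time-t information; the puzzle is that observed equity prices violate this variance bound.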

After the rather precious definition of the General Good as a “tree” and the Special Goods as “fruits”, they explain:

Random matching during the day will typically result in non-degenerate distributions of asset holdings. In order to maintain tractability, we use the device of large households along the lines of Shi (1997). Each household consists of a continuum of worker-shopper (or, seller-buyer) pairs. Buyers cannot produce the special good, only sellers are capable of production. We assume the fraction of buyers = fraction of sellers = 1/2. Then, the probability of single coincidence meetings during the day is α/4. Each household sends its buyers to the decentralized day market with take-it-or-leave-it instructions (q; s) – accept q units of special goods in exchange for s trees. Each household also sends its sellers with “accept” or “reject” instructions. There is no communication between buyers and sellers of the same household during the day. After the buyers and sellers finish trading in the day, the household pools the trees and shares the special goods across its members each period. By the law of large numbers, the distributions of trees and special goods are degenerate across households. This allows us to focus on the representative household. The representative household’s consumption of the special good is qα/4.

…. and the interesting part is …

To compute the “liquidity value” of the asset, we set β and δ at their benchmark values (Table 1) and calculate the price sequence for a standard asset pricing model such as Lucas (1978). This is easily done by setting u′(q_t α/4) = 1 for all t in equation (10). Since the standard asset pricing model does not assign any medium of exchange role to the asset, the difference between the prices implied by the standard model and ours would be the liquidity value of the asset. We compute the liquidity value as a fraction of the price implied by the standard model, i.e., liquidity value = (P_model – P_Lucas) / P_Lucas. The mean liquidity value implied by our model is 17.5%.

This is a fascinating result, illustrating the value of liquidity in a segmented market. It is the function of dealers – and their capital – to reduce friction for all players, but to keep a piece for themselves. I will be fascinated to follow the progress of this model as, perhaps, it gets extended to include “households” that function in such a manner.

It is also apparent that when friction increases, the “flight to quality” into government bonds may be characterized to a great extent as a “flight to liquidity”.

Interesting External Papers

Bernanke: Monetary Policy and the Housing Bubble

Bernanke has given a speech titled Monetary Policy and the Housing Bubble to the Annual Meeting of the American Economic Association:

As with regulatory policy, we must discern the lessons of the crisis for monetary policy. However, the nature of those lessons is controversial. Some observers have assigned monetary policy a central role in the crisis. Specifically, they claim that excessively easy monetary policy by the Federal Reserve in the first half of the decade helped cause a bubble in house prices in the United States, a bubble whose inevitable collapse proved a major source of the financial and economic stresses of the past two years.

These assertions have been discussed on PrefBlog, for example Taylor Rules and the Credit Crunch Cause; the seminal paper was discussed on Econbrowser, The Taylor Rule and the Housing Boom. But Bernanke has One Big Problem with blind use of the Taylor Rule:

For my purposes today, however, the most significant concern regarding the use of the standard Taylor rule as a policy benchmark is its implication that monetary policy should depend on currently observed values of inflation and output.

However, because monetary policy works with a lag, effective monetary policy must take into account the forecast values of the goal variables, rather than the current values. Indeed, in that spirit, the FOMC issues regular economic projections, and these projections have been shown to have an important influence on policy decisions (Orphanides and Wieland, 2008).

when one takes into account that policymakers should and do respond differently to temporary and longer-lasting changes in inflation, monetary policy following the 2001 recession appears to have been reasonably appropriate, at least in relation to a simple policy rule.

Central to Bernanke’s argument is:

To demonstrate this finding in a simple way, I will use a statistical model developed by Federal Reserve Board researchers that summarizes the historical relationships among key macroeconomic indicators, house prices, and monetary policy (Dokko and others, 2009).

The model incorporates seven variables, including measures of economic growth, inflation, unemployment, residential investment, house prices, and the federal funds rate, and it is estimated using data from 1977 to 2002.

The right panel of the figure shows the forecast behavior of house prices during the recent period, taking as given macroeconomic conditions and the actual path of the federal funds rate. As you can see, the rise in house prices falls well outside the predictions of the model. Thus, when historical relationships are taken into account, it is difficult to ascribe the house price bubble either to monetary policy or to the broader macroeconomic environment.

One reason he suggests for the decoupling of historical relationships is ARMs and other exotic mortgages:

Clearly, for lenders and borrowers focused on minimizing the initial payment, the choice of mortgage type was far more important than the level of short-term interest rates.

The availability of these alternative mortgage products proved to be quite important and, as many have recognized, is likely a key explanation of the housing bubble.

Slide 8 is evidence of a protracted deterioration in mortgage underwriting standards, which was further exacerbated by practices such as the use of no-documentation loans. The picture that emerges is consistent with many accounts of the period: At some point, both lenders and borrowers became convinced that house prices would only go up. Borrowers chose, and were extended, mortgages that they could not be expected to service in the longer term. They were provided these loans on the expectation that accumulating home equity would soon allow refinancing into more sustainable mortgages. For a time, rising house prices became a self-fulfilling prophecy, but ultimately, further appreciation could not be sustained and house prices collapsed. This description suggests that regulatory and supervisory policies, rather than monetary policies, would have been more effective means of addressing the run-up in house prices.

He concludes:

I noted earlier that the most important source of lower initial monthly payments, which allowed more people to enter the housing market and bid for properties, was not the general level of short-term interest rates, but the increasing use of more exotic types of mortgages and the associated decline of underwriting standards. That conclusion suggests that the best response to the housing bubble would have been regulatory, not monetary. Stronger regulation and supervision aimed at problems with underwriting practices and lenders’ risk management would have been a more effective and surgical approach to constraining the housing bubble than a general increase in interest rates. Moreover, regulators, supervisors, and the private sector could have more effectively addressed building risk concentrations and inadequate risk-management practices without necessarily having had to make a judgment about the sustainability of house price increases.

For my own part, I can’t really do much but repeat my views expressed in the post Is Crony Capitalism Really Returning to America:

Americans should also be taking a hard look at the ultimate consumer friendliness of their financial expectations. They take as a matter of course mortgages that are:

  • 30 years in term
  • refinanceable at little or no charge (usually; this may apply only to GSE mortgages; I don’t know all the rules)
  • non-recourse to borrower (there may be exceptions in some states)
  • guaranteed by institutions that simply could not operate as a private enterprise without considerably more financing
  • Added 2008-3-8: How could I forget? Tax Deductible

And I will add: following the Crash of 1929, margin rules on stock purchases were tightened:

The great stock market crash of 1929 was blamed on rampant speculation, excessive leverage, and inadequate regulatory oversight. The debacle caused a wave of bank and brokerage failures that devastated the US financial system. Investors were left reeling. In order to restore confidence in the securities markets, the Federal government took several steps, including creating the Securities and Exchange Act of 1934, separating the banking and securities industry, and giving the Federal Reserve Board the authority to set margin requirements, which it subsequently did through Regulation T [Reg T].

Margin rules have occasionally come under attack, as reported in 1985:

Federal Reserve Chairman Paul Volcker contended last week in a cover letter accompanying a 189-page report that such federal regulations are no longer needed. If they exist at all, he wrote, they should be set by the securities industry. Buying stocks on credit, his study concluded, “has become much less important . . . than it was in the early 1930s.” In 1928 nearly 10% of all stocks were bought on margin; last year only 1.4% were bought that way.

I think most will agree that in order to protect financial stability, the core banking/brokerage system must make a clear distinction between owner and lender by requiring the owner to put up significant capital to take the first loss in the event of adverse moves. So one regulatory change that would be worth seeing is a requirement that every mortgage have – for example – an ultimate loan-to-value ratio of less than 80%.

Thus, on a $500,000 house, the buyer should put up $100,000. I say “should” rather than “must” because I would not support overly-intrusive regulation: if a bank wants to fund the entire $500,000 and call it a loan – they’re quite welcome to. But – and it’s a big but, as the Bishop said to the actress – that $100,000 has to come from somewhere, so making such a loan will require a dollar-for-dollar adjustment to Tier 1 Capital.

Currently, the $500,000 loan would be risk-weighted at 35% to $175,000; maintaining a 10% Tier 1 Capital ratio then requires $17,500 in capital.

With the change as suggested, $400,000 would be treated as a 35% risk-weighted loan, equating to $140,000, requiring $14,000 in capital; but the $100,000 capital would also be required, bringing the total Tier 1 Capital required for the loan to $114,000 – a rather major difference!
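Generalizing the arithmetic is straightforward. A minimal sketch of the comparison – the 35% risk weight and 10% target ratio are as in the text; treating the notional down payment as a dollar-for-dollar capital deduction is my paraphrase of the proposal:

```python
def tier1_required_current(loan, risk_weight=0.35, tier1_ratio=0.10):
    """Current treatment: the whole loan is an ordinary risk-weighted asset."""
    return loan * risk_weight * tier1_ratio

def tier1_required_proposed(price, down_payment_frac=0.20,
                            risk_weight=0.35, tier1_ratio=0.10):
    """Proposed treatment when the bank funds the entire purchase itself: the notional
    down payment is a dollar-for-dollar capital deduction; the remainder is an
    ordinary risk-weighted loan."""
    first_loss = price * down_payment_frac
    return (price - first_loss) * risk_weight * tier1_ratio + first_loss

print(tier1_required_current(500_000))    # 17,500
print(tier1_required_proposed(500_000))   # 114,000
```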

In Canada, mortgages extended by a chartered bank with a loan-to-value of greater than 80% must be CMHC insured; this accomplishes the same purpose (since the capital is, effectively, provided by the bottomless pockets of the taxpayer). And we’ll just have to hope that the CMHC gets their calculations right when setting premia!

Update: Last paragraph edited to reflect comment. See also the CMHC premium schedule and a Globe article on Spend-every-Penny’s musings on tightening the rules.

Update: Musing over the part of Slide 6 that I have reproduced can lead to interesting conclusions … primary among them being that, according to the Fed, US real-estate is a screaming buy right now.

Update, 2010-01-04: Don’t count on CMHC getting the premia calculations right! The Canada Small Business Financing Program, which supports the vitally important food and beverage sector by writing Credit Default Swaps, isn’t doing very well:

The program has so far guaranteed about $10 billion in small-business loans issued by banks, credit unions and others since 1999, and collects fees based on the size of the loan.

The revenue paid to Industry Canada was supposed to cover the default claims paid out, but the math has never worked in Ottawa’s favour.

Claims paid out have risen steadily over the decade, and now top $100 million annually, while revenues have consistently lagged, costing taxpayers a net $335 million so far.

Put another way, cost recovery is currently at only about 60 per cent rather than the 100 per cent that was planned, and is in steady decline.

“The gap between claims and fee revenues will continue to exist and most likely expand,” predicts the KPMG report, dated Oct. 30 and obtained by The Canadian Press under the Access to Information Act.

Update, 2010-01-05: Not surprisingly, Taylor doesn’t buy it:

John Taylor, creator of the so-called Taylor rule for guiding monetary policy, disputed Federal Reserve Chairman Ben S. Bernanke’s argument that low interest rates didn’t cause the U.S. housing bubble.

“The evidence is overwhelming that those low interest rates were not only unusually low but they logically were a factor in the housing boom and therefore ultimately the bust,” Taylor, a Stanford University economist, said in an interview today in Atlanta.

Update, 2010-01-08: James Hamilton of Econbrowser joins the consensus – it’s not a matter of either-or:

Fed Chair Ben Bernanke’s observations on monetary policy and the housing bubble have received a lot of attention. Like many other commentators (e.g., Arnold Kling, Paul Krugman, and Free Exchange), I agree with Bernanke’s conclusions, but only up to a point.

At least with the benefit of hindsight, I would have thought we could agree that the low interest rate targets of 2003-2005 were a mistake, because more stimulus to housing was the last thing the economy needed. This is not to deny that higher resource utilization rates were a possibility at the time. But I see this as one more illustration, to add to a long string of earlier historical examples, that it is possible to ask too much of monetary policy. Even if the unemployment rate is above where you want it to be and above where you expect it eventually to go, trying to bring it down faster by keeping the monetary gas pedal all the way to the floor can sometimes create bigger problems down the road.

The tone of the three references cited in Dr. Hamilton’s first paragraph is similar: ‘Well, sure, there were regulatory mistakes … but there will always be regulatory mistakes.’ While addressing these errors is a Good Thing, one should not forget to address the monetary policy that exacerbated these errors. Let’s not be too much like die-hard communists, claiming that every failure of that paradigm is due to errors of application, rather than fundamental errors of theory.

Interesting External Papers

Redefault on Modified Mortgages

The Federal Reserve Bank of New York has released a staff report by Andrew Haughwout, Ebiere Okah and Joseph Tracy titled Second Chances: Subprime Mortgage Modification and Re-Default:

Mortgage modifications have become an important component of public interventions designed to reduce foreclosures. In this paper, we examine how the structure of a mortgage modification affects the likelihood of the modified mortgage re-defaulting over the next year. Using data on subprime modifications that precede the government’s Home Affordable Modification Program, we focus our attention on those modifications in which the borrower was seriously delinquent and the monthly payment was reduced as part of the modification. The data indicate that the re-default rate declines with the magnitude of the reduction in the monthly payment, but also that the re-default rate declines relatively more when the payment reduction is achieved through principal forgiveness as opposed to lower interest rates.

More specifically:

After reviewing relevant previous studies and describing our data, we turn to an analysis of the effectiveness of the modifications we observe. We find that delinquent borrowers whose mortgages receive some kind of modification have a strong tendency to redefault, but that different kinds of modifications have diverse effects on outcomes. In particular, while HAMP focuses on reducing payment burdens, our results indicate the importance of borrower equity — the relationship between the mortgage balance and the home value — a factor that has been stressed in the previous literature on mortgage defaults. We conclude with a discussion of the implications of our results for modification policy.

The authors conclude, in part:

Our findings have potentially important implications for the design of modification programs going forward. The Administration’s HAMP program is focused on increasing borrowers’ ability to make their monthly payments, as measured by the DTI. Under HAMP, reductions in payments are primarily achieved by subsidizing lenders to reduce interest rates and extend mortgage term. While such interventions can reduce re-default rates, an alternative scheme would simultaneously enhance the borrower’s ability and willingness to pay the debt, by writing down principal in order to restore the borrower’s equity position. We estimate that restoring the borrower’s incentive to pay in this way can double the reduction in re-default rates achieved by payment reductions alone.

Another distinction between modifications that reduce the monthly payment by cutting the interest rate as compared to reducing the principal is the likely impact on household mobility. Ferreira et al (2010) using over two decades of data from the American Housing Survey estimate that each $1,000 in subsidized interest to a borrower reduces the two-year mobility rate by 1.4 percentage points. Modifying the interest rate to a below market rate creates an in-place subsidy to the borrower leading to a lock-in effect. That is, the borrower receives the subsidy only if he or she does not move.

Seems to me that HAMP is poorly designed – just another piece of poorly thought out feel-goodism.

One thing the authors did not address that interests me greatly is the accounting treatment of modifications to mortgages that are wholly and directly owned by a single bank – as is the general rule in Canada – and how a choice between interest rate reduction and principal reduction might be reflected on the books of the firm. I suspect – but am not certain – that principal reduction will affect current profit, while interest rate reduction will merely affect future profit and be amortized over the length of the loan, despite the fact that it may be presumed that the choices are equivalent on a present value basis.
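My presumption of present-value equivalence is easy to check with a standard amortization calculation. A sketch with entirely hypothetical loan terms – the point is only that the same payment relief can be delivered either way, while the bookkeeping may differ:

```python
from scipy.optimize import brentq

def monthly_payment(principal, annual_rate, years):
    """Payment on a standard fixed-rate amortizing mortgage."""
    r, n = annual_rate / 12, years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical $300,000, 6%, 25-year loan being modified:
target = monthly_payment(250_000, 0.06, 25)   # Option A: forgive $50,000 of principal

# Option B: cut the rate on the full balance until the payment matches Option A.
equivalent_rate = brentq(lambda r: monthly_payment(300_000, r, 25) - target, 0.001, 0.06)

print(round(target, 2), round(equivalent_rate * 100, 2))
# Same monthly payment either way; but the principal write-down may hit current profit
# while the rate concession may be recognized only over the remaining life of the loan.
```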

Given all the current fooferaw over the pending apocalypse in Canada when the current crop of 90%+ LTV mortgages comes due and needs to be refinanced at a higher rate, it might be prudent to start examining – and, perhaps, changing – the bookkeeping implications now, rather than being surprised later.

But then – who cares? Interest rate reductions are easy to understand and get more votes – so why should our glorious leaders do anything?

Update, 2012-7-31: Rebuttal from FHFA.

Update, 2013-5-13: Redesign of HAMP in 2010:

The US Treasury Department, as it continues to revamp the Home Affordable Modification Program (HAMP), announced today an initiative to encourage principal write-downs.

The principal reduction plan is one of the changes to HAMP, to be implemented in coming months.

The changes will encourage servicers to write-down a portion of mortgage debt as part of a HAMP modification, allow more borrowers to qualify for modification and help borrowers move into more affordable housing when modification is not possible, according to a fact sheet on the improvements provided to HousingWire.

S&P Commentary 2013-4-26:

In June of last year, Standard & Poor’s Ratings Services contended that principal forgiveness was more likely to keep U.S. mortgage borrowers current than more commonly used modification tools (see “The Best Way to Limit U.S. Mortgage Redefaults May Be Principal Forgiveness,” June 15, 2012). Data gathered since then not only support this view but also demonstrate servicers’ growing adoption of this form of loss mitigation. (Watch the related CreditMatters TV segment titled, “Principal Forgiveness Remains The Best Way To Limit U.S. Mortgage Redefaults,” dated May 7, 2013.)

As of February of this year, more than 1.5 million homeowners have received a permanent modification through the U.S. federal government’s Home Affordable Modification Program (HAMP). Since the publication of our June 2012 article, there have been more than 400,000 additional modifications on outstanding mortgages (as of March 2013). This translates to roughly a 22% rate of growth in the number of modifications on an additional $2.4 billion in mortgage debt.

Under the HAMP Principal Reduction Alternative (PRA) program, which provides monetary incentives to servicers that reduce principal, borrowers have received approximately $9.6 billion in principal forgiveness as of March 2013. Interestingly, servicers have ramped up their use of principal forgiveness on loans that don’t necessarily qualify for PRA assistance. Indeed, among the top five servicers for non-agency loans, we’ve noted that principal forgiveness, as a percentage of average modifications performed on a monthly basis, has increased by about 200% since the latter half of 2011 (see Chart 1). We attribute part of this to the $25 billion settlement in February 2012 with 49 state attorneys general and these same five servicers (Ally/GMAC, Bank of America, Citi, JPMorgan Chase, and Wells Fargo). In fact, although principal reduction remains the least common type of loan modification among servicers, the percentage of non-agency modified loans that have received principal forgiveness has increased by 3% since June 2012 (see Chart 2). Since 2009, servicers have forgiven principal on approximately $45 billion of outstanding non-agency mortgages.