
Interesting External Papers

Haug and Taleb on Black-Scholes

Espen Gaarder Haug & Nassim Nicholas Taleb have produced a highly entertaining – but, alas, somewhat less than informative – polemic: Why We Have Never Used the Black-Scholes-Merton Option Pricing Formula:

Options traders use a pricing formula which they adapt by fudging and changing the tails and skewness by varying one parameter, the standard deviation of a Gaussian. Such formula is popularly called “Black-Scholes-Merton” owing to an attributed eponymous discovery (though changing the standard deviation parameter is in contradiction with it). However we have historical evidence that 1) Black, Scholes and Merton did not invent any formula, just found an argument to make a well known (and used) formula compatible with the economics establishment, by removing the “risk” parameter through “dynamic hedging”, 2) Option traders use (and evidently have used since 1902) heuristics and tricks more compatible with the previous versions of the formula of Louis Bachelier and Edward O. Thorp (that allow a broad choice of probability distributions) and removed the risk parameter by using put-call parity. 3) Option traders did not use formulas after 1973 but continued their bottom-up heuristics. The Bachelier-Thorp approach is more robust (among other things) to the high impact rare event. The paper draws on historical trading methods and 19th and early 20th century references ignored by the finance literature. It is time to stop calling the formula by the wrong name.

The tone of the paper is evident in the first angry footnote:

For us, practitioners, theories should arise from practice.

Footnote: For us, in this discussion, a practitioner is deemed to be someone involved in repeated decisions about option hedging, not a support quant who writes pricing software or an academic who provides “consulting” advice.

The main thrust of the article is that the premise of the Black-Scholes model is incorrect:

Referring to Thorp and Kassouf (1967), Black, Scholes and Merton took the idea of delta hedging one step further, Black and Scholes (1973):

If the hedge is maintained continuously, then the approximations mentioned above become exact, and the return on the hedged position is completely independent of the change in the value of the stock. In fact, the return on the hedged position becomes certain. This was pointed out to us by Robert Merton.

This may be a brilliant mathematical idea, but option trading is not mathematical theory. It is not enough to have a theoretical idea so far removed from reality that is far from robust in practice.

The authors point out that

  • Option trading has been around for a long time
  • The only way to hedge options properly is with other options, due to pricing discontinuities
  • Put-Call Parity is the basic theoretical foundation of proper hedging
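Put-call parity is model-free arithmetic, which is the authors' point: it pins the put price to the call price without any distributional assumption, and it holds no matter what volatility (or model) is used. A minimal sketch of my own, using the standard textbook Black-Scholes formulas purely to demonstrate that parity is independent of the volatility parameter:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Textbook Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_put(S, K, r, sigma, T):
    """Textbook Black-Scholes European put price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# Parity: C - P = S - K * exp(-rT), regardless of sigma (or of the model).
S, K, r, T = 100.0, 105.0, 0.05, 1.0
for sigma in (0.1, 0.2, 0.5):
    lhs = bs_call(S, K, r, sigma, T) - bs_put(S, K, r, sigma, T)
    rhs = S - K * math.exp(-r * T)
    assert abs(lhs - rhs) < 1e-9
```

The loop over different volatilities is the point: the left-hand side never changes, so a trader who knows the call price knows the put price by arbitrage alone, with no "risk parameter" in sight.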

The second main point of the article is that, consistent with the idea that only options are a proper hedge against options, the job of an options trader is not to value options based on some theory; it is to make money with a market-neutral book:

In that sense, traders do not perform “valuation” with some “pricing kernel” until the expiration of the security, but, rather, produce a price of an option compatible with other instruments in the markets, with a holding time that is stochastic. They do not need top-down “science”.

This raises a critical point: option traders do not “estimate” the odds of rare events by pricing out-of-the-money options. They just respond to supply and demand. The notion of “implied probability distribution” is merely a Dutch-book compatibility type of proposition.

They conclude:

One could easily attribute the explosion in option volume to the computer age and the ease of processing transactions, added to the long stretch of peaceful economic growth and absence of hyperinflation. From the evidence (once one removes the propaganda), the development of scholastic finance appears to be an epiphenomenon rather than a cause of option trading. Once again, lecturing birds how to fly does not allow one to take subsequent credit.

This is why we call the equation Bachelier-Thorp. We were using it all along and gave it the wrong name, after the wrong method and with attribution to the wrong persons. It does not mean that dynamic hedging is out of the question; it is just not a central part of the pricing paradigm.

I must point out that Mr. Taleb’s rose-tinted vision of the good old days – while probably quite true in most respects – does not square completely with what I have read in other sources.

If I recall correctly, Morton Schulman recounted in his book “Anybody can still be a millionaire” his adventures as a partner in a small Toronto brokerage in the … late ’60’s? early ’70’s?. He and his partners were willing to write puts and became, he says, amazingly popular with their New York counterparts because there was bottomless demand for them.

Interesting External Papers

Boston Fed Paper on Mortgage Renegotiation and Securitization

The Boston Fed has released a new Public Policy Discussion Paper by Manuel Adelino, Kristopher Gerardi, and Paul S. Willen, Why Don’t Lenders Renegotiate More Home Mortgages? The Myth of Securitization:

We document the fact that servicers have been reluctant to renegotiate mortgages since the foreclosure crisis started in 2007, having performed payment-reducing modifications on only about 3 percent of seriously delinquent loans. We show that this reluctance does not result from securitization: servicers renegotiate similarly small fractions of loans that they hold in their portfolios. Our results are robust to different definitions of renegotiation, including the one most likely to be affected by securitization, and to different definitions of delinquency. Our results are strongest in subsamples in which unobserved heterogeneity between portfolio and securitized loans is likely to be small, and for subprime loans. We use a theoretical model to show that redefault risk, the possibility that a borrower will still default despite costly renegotiation, and self-cure risk, the possibility that a seriously delinquent borrower will become current without renegotiation, make renegotiation unattractive to investors.
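The redefault-risk/self-cure-risk mechanism can be illustrated with a back-of-the-envelope expected-value comparison. The numbers below are my own stylized assumptions, not the paper's, but they show how plausible probabilities make modification a losing proposition for the lender:

```python
def renegotiation_gain(p_selfcure, p_redefault, loan_value,
                       foreclosure_recovery, mod_cost):
    """Lender's expected gain from modifying a delinquent loan vs. doing nothing."""
    # Do nothing: the borrower self-cures with probability p_selfcure,
    # otherwise the lender forecloses and recovers less than face value.
    do_nothing = (p_selfcure * loan_value
                  + (1 - p_selfcure) * foreclosure_recovery)
    # Modify: pay mod_cost up front; the modification sticks unless the
    # borrower redefaults anyway, in which case foreclosure happens regardless.
    modify = ((1 - p_redefault) * loan_value
              + p_redefault * foreclosure_recovery
              - mod_cost)
    return modify - do_nothing

# Stylized: $200k loan, $120k recovered in foreclosure, $20k cost to modify.
gain = renegotiation_gain(p_selfcure=0.35, p_redefault=0.45,
                          loan_value=200_000,
                          foreclosure_recovery=120_000,
                          mod_cost=20_000)
# gain is negative here: renegotiation destroys value for the lender
```

With these inputs the modification loses about $4,000 in expectation: every self-cure makes the modification cost pure waste, and every redefault makes it waste plus a delayed foreclosure, which is exactly the intuition the authors formalize.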

This follows the earlier Boston Fed paper, Reducing Foreclosures, which argued that it was income shocks and housing price declines, not high payment-to-income ratios at origination, that were the driving force in the foreclosure boom.

Interesting External Papers

The Optimal Level of Deposit Insurance Coverage

The Boston Fed has released a working paper by Michael Manz, The Optimal Level of Deposit Insurance Coverage, in which a model of depositor behaviour leads to some interesting conclusions:

The model derived in this paper allows for a rigorous analysis of partial deposit insurance. The benefits of insurance involve eliminating inefficient withdrawals and bank runs due to noisy information and coordination failures, whereas the drawbacks consist of suppressing efficient withdrawals and of inducing excessive risk taking. A hitherto hardly noticed conclusion is that a high level of coverage can even be detrimental if bank risk is exogenous, because it undermines the occurrence of efficient bank runs. An extended version of the model shows that systemic risk calls for a higher level of deposit insurance, albeit only for systemically relevant banks from which contagion emanates, and not for the institutions that are potentially affected by contagion.

A vital contribution of the model is to provide comparative statics of the optimal level of coverage. In particular, the results imply that while tightening liquidity requirements is a substitute for deposit insurance, increasing transparency is not. Rather, the optimal coverage increases with the quality of the information available to depositors. Perhaps surprisingly, the degree of deposit insurance should not vary with expectations regarding the development of the real sector. This suggests that countries that in the past turned to increased or even unlimited deposit insurance as a reaction to a crisis, such as Japan, Turkey, or the United States, would do well to pause for thought on whether this is the right measure to strengthen their banking systems. The model also demonstrates why the presence of large creditors with uninsured claims calls for a lower level of insurance and why a high coverage is foremost in the interest of bankers and uninsured lenders. Moreover, it is consistent with small banks being particularly active lobbyists in favor of extending deposit insurance.

Another key advantage of the model is its applicability to various policy issues. In practice, only a small, albeit growing, number of countries maintaining deposit insurance require bank customers to coinsure a proportion of their deposits. According to the model, however, an optimal design of protection should build on coinsurance rather than on setting caps on insured deposits. It further indicates that deposit insurance becomes redundant in combination with full bailouts or optimal lending of last resort. While an unconditional bailout policy is about as inefficient as it can get, an optimal LolR policy combined with perfect public disclosure comes closest to the first best outcome in terms of welfare. Yet such an optimal policy, which requires protection to be contingent on bank solvency, seems far more demanding and hence less realistic in practice than unconditional deposit insurance. If regulators or central banks cannot precisely assess whether a bank is solvent, interventions are likely to be a mixture of the benchmark policies considered. Investigating these cases opens an interesting avenue for future research.

Interesting External Papers

Why are Most Funds Open-Ended?

This paper was highlighted in the BIS Annual Report, so I had a look.

The full title of the paper is Why are Most Funds Open-Ended? Competition and the Limits of Arbitrage:

The majority of asset-management intermediaries (e.g., mutual funds, hedge funds) are structured on an open-end basis, even though it appears that the open end form can be a serious impediment to arbitrage. I argue that the equilibrium degree of open-ending in an economy can be excessive from the point of view of investors. When funds compete for investors’ dollars, they may engage in a counterproductive race towards the open-end form, even though this form leaves them ill-suited to undertaking certain types of arbitrage trades. One implication of the analysis is that, even absent short-sales constraints or other frictions, economically large mispricings can coexist with rational, competitive arbitrageurs who earn small excess returns.

One implication of this is that the connection between arbitrageurs’ profits and overall market efficiency is very tenuous. In Example 1, the ex ante gross return to investors of 1.10 translates into a 2 percent annual alpha for professionally-managed money, assuming a five-year horizon. This looks relatively small, in keeping with the empirical evidence on the performance of fund managers. Yet this small alpha coexists with an infinitely elastic supply of the UC [Uncertain Convergence] asset, which can be thought of as having a price that deviates from fundamental value by a factor of three. Indeed, fund managers barely touch the UC asset, even though eventual convergence to fundamentals is assured, and there are no other frictions, such as trading costs or short-sales constraints. As noted in the introduction, this kind of story would seem to fit well with the unwillingness of hedge funds to bet heavily against the internet bubble of the late 1990s, by, e.g., taking an outright short position on the NASDAQ index—something which would certainly have been feasible to do at low cost from an execution/trading-frictions standpoint.

Footnote: Brunnermeier and Nagel (2003) give a nice illustration of the risks that hedge funds faced in betting against the internet bubble. They analyze the history of Julian Robertson’s Tiger Fund, which in early 1999 eliminated all its investments in technology stocks (though it did not take an outright short position). By October 1999, the fund was forced to increase its redemption period from three to six months in an effort to stem outflows. By March 2000, outflows were so severe that the fund was liquidated—ironically, just as the bubble was about to burst.
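The arithmetic in the quoted Example 1 is easy to verify: a gross return of 1.10 over a five-year horizon annualizes to roughly 2 percent per year:

```python
gross_return = 1.10   # ex ante gross return over the full horizon
years = 5

# Annualize: (1.10)^(1/5) - 1
annual_alpha = gross_return ** (1 / years) - 1
# about 0.019, i.e. roughly 2% per year, matching the paper's figure
```

So a mispricing large enough to triple an asset's price relative to fundamentals is compatible with fund managers who look, on paper, only marginally skilled.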

The central point of this paper is easily stated. Asset-management intermediaries such as mutual funds and hedge funds compete for investors’ dollars, and one key dimension on which they compete is the choice of organizational form. In general, there is no reason to believe that this competition results in a form that is especially well-suited to the task of arbitrage. Rather, there is a tendency towards too much open-ending, which leaves funds unable to aggressively attack certain types of mispricing—i.e., those which do not promise to correct themselves quickly and smoothly. This idea may help to shed light on the arbitrage activities of another class of players: non-financial firms. Baker and Wurgler (2000) document that aggregate equity issuance by non-financial firms has predictive power for the stock market as a whole. And Baker, Greenwood and Wurgler (2003) show that such firms are also able to time the maturity of their debt issues so as to take advantage of changes in the shape of the yield curve. At first such findings appear puzzling. After all, even if one grants that managers have some insight into the future of their own companies, and hence can time the market for their own stock, it seems harder to believe that they would have an advantage over professional arbitrageurs in timing the aggregate stock and bond markets. However, there is another possible explanation for these phenomena. Even if managers of non-financial firms are less adept at recognizing aggregate-market mispricings than are professional money managers, they have an important institutional advantage—they conduct their arbitrage inside closed-end entities, with a very different incentive structure. For example, it was much less risky for the manager of an overpriced dot-com firm to place a bet against the internet bubble in 1999 (by undertaking an equity offering) than it was for a hedge-fund manager to make the same sort of bet. 
In the former case, if the market continued to rise, no visible harm would be done: the dot-com firm could just sit on the cash raised by the equity issue, and there would only be the subtle opportunity cost of not having timed the market even better. There would certainly be no worry about investors liquidating the firm as a result of the temporary failure of prices to converge to fundamentals.

There are some interesting corollaries to the BIS desire for increased arbitrage:

  • Hedge Funds are Good
  • Shorting is Good
  • Credit Default Swaps are Good

One wonders how much consistency we may see in their views going forward!

Interesting External Papers

BIS Releases Annual Report

The Bank for International Settlements has released its Annual Report 2008-09; some of the policy recommendations are contentious!

One interesting reference is:

Moreover, managers of assets in a given asset class were rewarded for performance exceeding benchmarks representing average performance in that investment category. As a result, even if managers recognised a bubble in the price of some asset, they could not take advantage of that knowledge by selling short for fear that investors would withdraw funds. The result was herding that caused arbitrage to fail.

Footnote: For a discussion of how arbitrage fails when individual investors cannot distinguish good asset managers from bad ones, see J Stein, “Why are most funds open-end? Competition and the limits of arbitrage”, Quarterly Journal of Economics, vol 120, no 1, February 2005, pp 247–72.

I’ll have to read that reference sometime!

The regulators continue to deflect blame for the crisis onto the Credit Rating Agencies:

In the end, the rating agencies – assigned the task of assessing the risk of fixed income securities and thus of guarding collective safety – became overwhelmed and, by issuing unrealistically high ratings, inadvertently contributed to the build-up of systemic risk.

Footnote: Differences in the methodologies used by the rating agencies also provided incentives for the originators to structure their asset-backed securities in ways that would allow them to “shop” for the best available combination of ratings (across both rating agencies and the liabilities structure of those instruments). See I Fender and J Kiff, “CDO rating methodology: some thoughts on model risk and its implications”, BIS Working Papers, no 163, November 2004.

Which is, of course, self-serving nonsense. It is the regulators who have been assigned the task of guarding collective safety. CRAs merely publish their opinions, the same way I do and the same way any idiot with access to the internet can do. As discussed by researchers at the Boston Fed in Making Sense of the Sub-Prime Crisis (which has been discussed on PrefBlog), the tail risk of sub-prime securities was well-known, well-publicized and ignored.

And as for their observation that:

And because of the complexity of the instruments, reliance on ratings increased even among the most sophisticated institutional investors.

… well, if I don’t understand something and can’t understand how to model it, I don’t buy it. I guess that makes me the most sophisticated investor in the world.

They do make one sensible observation:

Finally, there were governance problems in risk management practices. For both structural and behavioural reasons, senior managers and board members were neither asking the right questions nor listening to the right people. The structural problem was that risk officers did not have sufficient day-to-day contact with top decision-makers, often because they did not have sufficiently senior positions in their organisations. Without support from top management, it didn’t matter much what the chief risk officer said or to whom he or she said it. The structural problem was compounded by the behavioural response to a risk officer whose job it is to tell people to limit or stop what they are doing. If what they are doing is profitable, it is going to be difficult to get managers and directors to listen.

However, I don’t see anything addressing the obvious corollary that bank size must be controlled, preferably through the imposition of progressively increasing capital requirements based on size. After all, if all the big international banks disappeared, where would an intelligent and knowledgeable BIS employee look for future employment? To achieve the desired end, it’s probably better to insist on a greater and better paid head-count in the Risk Management division.
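A progressively increasing, size-based capital requirement could take many forms; one hypothetical schedule (entirely my own illustration, with made-up parameters, not anything the BIS has proposed) is a surcharge on top of a flat base ratio:

```python
def required_capital_ratio(assets, gdp, base=0.08, slope=0.10):
    """Hypothetical schedule: flat 8% base ratio plus a surcharge that
    grows linearly with the bank's assets as a share of GDP."""
    return base + slope * (assets / gdp)

# A bank with assets equal to 2% of GDP vs. one at 50% of GDP:
small = required_capital_ratio(assets=20, gdp=1_000)   # 8.2% required
big = required_capital_ratio(assets=500, gdp=1_000)    # 13.0% required
```

The design choice is that growing past a certain size becomes progressively more expensive in equity terms, so the penalty for systemic importance is paid continuously by shareholders rather than occasionally by taxpayers.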

The recommendation that has attracted the most attention (e.g., a Reuters report) is:

Balancing innovation and safety in financial instruments requires providing scope for progress while limiting the capacity of any new instrument to weaken the system as a whole. Balance can be achieved by requiring some form of product registration that limits investor access to instruments according to their degree of safety. In a scheme analogous to the hierarchy controlling the availability of pharmaceuticals, the safest securities would, like non-prescription medicines, be available for purchase by everyone; next would be financial instruments available only to those with an authorisation, like prescription drugs; another level down would be securities available in only limited amounts to pre-screened individuals and institutions, like drugs in experimental trials; and, finally, at the lowest level would be securities that are deemed illegal. A new instrument would be rated or an existing one moved to a higher category of safety only after successful tests – the analogue of clinical trials. These would combine issuance in limited quantities in the real world with simulations of how the instrument would behave under severe stress.
Such a registration and certification system creates transparency and enhances safety. But, as in the case of pharmaceutical manufacturers, there must be a mechanism for holding securities issuers accountable for the quality of what they sell. This will mean that issuers bear increased responsibility for the risk assessment of their products.

In other words, if I want to sell you a product with a complex pay-out and you want to buy it … too bad; we’ll both go to jail on charges of not being smart enough. Similarly, if I sell you an investment product and you lose money, you can get your money back from me because it is quite obvious that I sold you something you’re not smart enough to buy.

I wonder what that little change will make to the deemed capitalization of securities firms!

Update, 2009-7-2: Regulation of product types endorsed by Willem Buiter:

What I have in mind is an FDA for new financial instruments and institutions. Like new medical treatments, drugs and pharmacological concoctions, new financial instruments, products and institutions are potentially socially useful. They can also be toxic and health-threatening.

Interesting External Papers

BoE Releases June 2009 Financial Stability Report

The Bank of England has released its June Financial Stability Report filled with the usual tip-top commentary and analysis.

They open with a rather attention-grabbing chart showing their measure of financial market liquidity:

A number of steps towards “greater self insurance” are suggested; most of these are precious boiler-plate, but I was very pleased to see the inclusion of “constant net asset value MMMFs should be regulated as banks or forced to convert to variable net asset funds.” Section 3 fleshes out this idea a little more:

Measures to strengthen regulation and supervision will inevitably also increase avoidance incentives. Left unaddressed, this potentially poses risks for the future. Money market mutual funds (MMMFs) and structured investment vehicles (SIVs) are just two examples from the recent crisis of entities which contributed importantly to the build-up of risk in the financial system, but were not appropriately regulated. By offering to redeem their liabilities at par and effectively on demand, constant net asset value MMMFs in effect offer banking services to investors, without being regulated accordingly. The majority of the global industry comprises US domestic funds, with over US$3 trillion under management. During the crisis, as fears grew that these funds would not be able to redeem liabilities at par — so-called ‘breaking the buck’ — official sector interventions to support MMMFs were required. To guard against a recurrence, such funds need in future either to be regulated as banks or forced to convert into variable net asset value funds.

Footnote: In the United States, the Department of the Treasury has recently announced plans to strengthen the regulatory framework around MMMFs. The Group of Thirty, under Paul Volcker’s chairmanship, has recommended that MMMFs be recognised as special-purpose banks, with appropriate prudential regulation and supervision

They also make the rather breath-taking recommendation that: “The quality of banks’ capital buffers has fallen over time. In the future, capital buffers should comprise only common equity to increase banks’ capacity to absorb losses while remaining operational.” This would imply that Preferred Shares and Innovative Tier 1 Capital would not be included in Tier 1 Capital, if they are included in capital at all! In section 3, they elucidate:

The dilution of capital reduces banks’ ability to absorb losses.
Capital needs to be permanently available to absorb losses and banks should have discretion over the amount and timing of distributions. The only instrument reliably offering these characteristics is common equity. For that reason, the Bank favours a capital ratio defined exclusively in these terms — so-called core Tier 1 capital. This has increasingly been the view taken by market participants during the crisis, which is one reason why a number of global banks have undertaken buybacks and exchanges of hybrid instruments (Box 4).

Pre-committed capital insurance instruments and convertible hybrid instruments (debt which can convert to common equity) may also satisfy these characteristics. As such, these instruments could also potentially form an element of a new capital regime, provided banks and the authorities have appropriate discretion over their use. Taken together, these instruments might form part of banks’ contingent capital plans. There is a case for all banks drawing up such plans and regularly satisfying regulators that they can be executed. Subordinated debt should not feature as part of banks’ contingent capital plans, even though it may help to protect depositors in an insolvency.

There’s also an interesting chart showing bank assets as a percentage of GDP:

To my pleasure, they updated my favourite graph, the decomposition of corporate bond spreads. I had been under the impression that production of this graph had been halted pending review of the parameterization … wrong again!

They also show an interesting calculation of expected loss rates derived from CDS index data:

They also recommend a higher degree of risk-based deposit insurance:

Charging higher deposit insurance premia to riskier banks would go some way towards correcting this distortion. These premia should be collected every year, not only following a bank failure, so that risky banks incur the cost at the time when they are taking on the risk. That would also allow a deposit insurance fund to be built up. This could be used, as in the United States and other jurisdictions, to meet the costs of bank resolutions. It would also reduce the procyclicality of a pay-as-you-go deposit insurance scheme, which places greatest pressures on banks’ finances when they can least afford it. For these reasons, the Bank favours a pre-funded, risk-based deposit insurance scheme.
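An actuarially fair risk-based premium is just the expected annual loss to the insurance fund. A minimal sketch, with illustrative numbers of my own rather than anything from the Report:

```python
def fair_premium(insured_deposits, annual_pd, loss_given_default):
    """Actuarially fair annual premium: the fund's expected loss,
    i.e. insured deposits x probability of failure x loss severity."""
    return insured_deposits * annual_pd * loss_given_default

# Two banks with $1bn of insured deposits each:
safe = fair_premium(1_000_000_000, annual_pd=0.001, loss_given_default=0.20)
risky = fair_premium(1_000_000_000, annual_pd=0.02, loss_given_default=0.40)
# A flat-rate scheme charges both the same; a fair scheme charges the
# risky bank 40 times as much ($8,000,000 vs. $200,000 per year).
```

Collecting this every year, as the Bank recommends, means the risky bank pays while it is taking the risk, and the fund accumulates in good times instead of being levied pro-cyclically after a failure.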

Canada has a pseudo-risk-based system; there are certainly tiers of defined risks, but the CDIC is proud of the fact that, basically, everybody fits into the bottom rung. The CDIC is also pre-funded (in theory) but the level of pre-funding is simply another feel-good joke.

Interesting External Papers

A Question of Liquidity: The Great Banking Run of 2008?

The Boston Fed has released a paper by Judit Montoriol-Garriga and Evan Sekeris titled A Question of Liquidity: The Great Banking Run of 2008?:

The current financial crisis has given rise to a new type of bank run, one that affects both the banks’ assets and liabilities. In this paper we combine information from the commercial paper market with loan level data from the Survey of Terms of Business Loans to show that during the 2007-2008 financial crises banks suffered a run on credit lines. First, as in previous crises, we find an increase in the usage of credit lines as commercial spreads widen, especially among the lowest quality firms. Second, as the crises deepened, firms drew down their credit lines out of fear that the weakened health of their financial institution might affect the availability of the funds going forward. In particular, we show that these precautionary draw-downs are strongly correlated with the perceived default risk of their bank. Finally, we conclude that these runs on credit lines have weakened banks further, curtailing their ability to effectively fulfill their role as financial intermediaries.

The authors note:

In order to test this hypothesis we used Credit Default Swap prices for the largest banks in our sample as a measure of how stressed the bank is perceived to be by the market. After controlling for other parameters, we found that banks with high CDS prices experienced significantly higher draw downs than banks with lower ones. In other words, when a bank was thought to be at high risk of default, firms that had credit lines with them were more likely to use them than if their credit line was with a healthier bank. This was a run on the banks by investors who ran away from the financial paper market which in turn triggered a run by borrowers of the weakest banks. This sequence of events was made possible by the combination of an increased reliance on the commercial paper market by financial institutions for their short term liquidity needs and the, often lax, underwriting of credit lines during the good years.

The authors suggest a hard look at the capital treatment of off-balance sheet commitments:

However, lower quality firms can use the lines of credit that they negotiated prior to the crisis at significantly better terms when credit standards were lax. Banks find themselves lending to these businesses at spreads that no longer reflect the risk they are exposed to. These “forced” loans crowd-out new loans to either lower risk businesses or to equally risky businesses but with spreads that better reflect their financial health.
In light of this inefficient use of credit lines in the 2007-2008 crisis, one may call into question whether the current regulatory framework is appropriate to deal with situations of market illiquidity. In particular, regulators may need to reconsider the regulation on bank capital requirements for off-balance sheet items such as unused commitments, and more generally, strengthen prudential oversight of liquidity risk management.


Interesting External Papers

Developments in Business Financing

There’s a very good review of the impact of the Credit Crunch on markets and the law thereof, prepared by Theresa Einhorn of Haynes & Boone LLP, The Corporate Debt Market And Credit Derivatives.

Section headings are:

  • The Top Ten – Credit Crunch is #1
  • Rates are High and “Covenant Tight” Replaces “Covenant Light”
  • The Credit Markets are a Market and the Stock Market is a Sideshow
  • A New Risk for Borrowers – Defaulting Lenders Fail to Fund under Corporate Lines of Credit.
  • Survival Strategies During the Credit Crunch: Restructuring by Repurchase or Exchange of Debt
  • Credit Default Swaps Market Disruption Clauses in Credit Agreements
  • Market-Based Pricing for Loans – Pricing Based on CDS
  • Hybrid Securities
  • Investing in Distressed Debt and Other Distressed Assets
Interesting External Papers

DBRS: Bank Capital Levels Robust

DBRS has published a newsletter highlighting Canadian bank capital levels, which is interesting in the light of their Review-Negative of non-Equity Tier 1 Capital.

They make the following rather curious statement:

DBRS believes the bank’s ability to access the capital markets for funding in good and bad times is an important consideration in its capital profile.

Well… has the ability of the banks to access capital markets in bad times really been tested? “Challenging” times, OK. “Difficult” times, why not? But can the past two years really be described as “bad” for Canadian banks?

They note:

The mix, quality and composition of capital are other important considerations in the overall assessment of capital. The quality of capital has been a key rating consideration in DBRS’s assessment of Canadian banks for an extended period of time. DBRS has a preference for common equity over hybrids, as the first loss cushion for bondholders and other senior creditors. On average, 17% and 14% of the regulatory Tier 1 capital is made up of preferred shares and innovative instruments, respectively, which DBRS views as reasonable. DBRS expects the quality of capital to remain relatively steady given the recent focus by the market on “core capital,” although OSFI does allow this percentage to now go as high as 40%, up from 30% as of November 2008.


Interesting External Papers

Bank of Canada Releases Financial System Review

The Bank of Canada has released the June 2009 Financial System Review with the usual high level views of government and corporate finance.

I was most interested to see a policy recommendation in the review of the funding status of pension plans. First they state the problem:

Assets fell in 2008, largely because of the steep decline in Canadian and international equity markets. At the same time, declines in long-term bond yields caused the present value of liabilities to increase.
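The discount-rate mechanism in the quoted passage can be sketched numerically. The payment stream and yields below are invented for illustration; the point is simply that the present value of a fixed stream of pension payments rises when long yields fall:

```python
# Minimal sketch of the discount-rate effect on pension liabilities.
# Payment stream and yields are invented for illustration.

def pv_annuity(payment, rate, years):
    """Present value of a level annual payment discounted at a flat rate."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

# $100-million per year for 30 years, discounted at 4.5% vs. 3.5%:
pv_high = pv_annuity(100.0, 0.045, 30)
pv_low = pv_annuity(100.0, 0.035, 30)
print(f"PV at 4.5%: {pv_high:,.0f}")
print(f"PV at 3.5%: {pv_low:,.0f}")
print(f"liability increase from a 100bp yield decline: {pv_low / pv_high - 1:.1%}")
```

For a 30-year stream, a 100bp decline in the discount rate inflates the liability by roughly an eighth, which is why falling long yields hurt funded status even before the asset side is considered.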

… and provide an illustration:

… and provide a prescription:

In this environment, plan members are concerned about pension obligations being met.

To address these issues, reforms should focus on: (i) flexibility to manage risks and (ii) proper incentives. Reforms (regulatory, accounting, and legal) should also focus on providing sponsors with the flexibility needed to actively maintain a balance between the future income from the pension fund and the payouts associated with promised benefits. Small pension funds should be encouraged to pool with larger funds to better diversify market risk as another way to help make pension funds more resilient to market volatility.

I may be a little slow, but I fail to follow the chain of logic between the premises and the conclusion that “small pension funds should be encouraged to pool with larger funds to better diversify market risk”.

This will not affect the liability side, which is based on the long Canada rate. But the Bank fails to show – or even to suggest – that the decline in assets was exacerbated by lack of diversification, that large pools are better diversified than small funds, or that pooled funds outperform small funds.
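The distinction matters because pooling only diversifies idiosyncratic risk; exposure to the common market factor is unaffected. A toy one-factor model makes the point (all numbers invented):

```python
# Sketch: pooling funds diversifies idiosyncratic risk but not market risk.
# Two or more funds with identical beta to the market, equally weighted.
# All parameters are invented for illustration.
import math

market_vol = 0.15   # volatility of the common market factor
idio_vol = 0.08     # idiosyncratic volatility of each fund
beta = 1.0          # every fund fully exposed to the market factor

def fund_vol(n_funds):
    """Volatility of an equal-weight pool of n funds with independent
    idiosyncratic risk and identical market exposure."""
    systematic_var = (beta * market_vol) ** 2   # does not diversify away
    idio_var = (idio_vol ** 2) / n_funds        # shrinks with pooling
    return math.sqrt(systematic_var + idio_var)

print(f"single fund vol: {fund_vol(1):.2%}")
print(f"pooled (10 funds) vol: {fund_vol(10):.2%}")
print(f"floor (market risk only): {beta * market_vol:.2%}")
```

Pooling ten such funds shaves volatility only modestly, and never below the market-risk floor; so "pool to diversify market risk" needs an argument the Review does not supply.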

It may well be that these logical steps can be justified – but they ain’t, which makes me suspicious. The recommendation is supportive of the Ontario government’s lunatic plan to encourage the bureaucratization of the Ontario Teachers’ Pension Plan (OTPP) and OMERS, as discussed on March 31, without either mentioning the proposal or providing any of the supporting arguments that were also missing from the original.

This has the look of a quid pro quo – either institution to institution, or the old regulatory game of setting up some lucrative post-retirement consulting gigs. Even if completely straightforward and honest, the argumentation in this section is so sloppy as to be unworthy of the Bank.

Of more interest to Assiduous Readers will be the section on banks, commencing on page 29 of the PDF.

There are two things of interest here: first, for all the gloom and doom, credit losses are not as high as in the post-tech-boom slowdown, let alone the recession of 1990; second, the graph is cut off after the peak of the 1990 recession.

Why was the graph prepared in this manner? Given the Bank’s increased politicization (also evidenced by the pension plan thing) and increased regulatory bickering with OSFI, I regret that I am not only disappointed that they are not encouraging comparison with the last real recession, but suspicious that there is some kind of weird ulterior purpose. Whatever the rationale, this omission reflects poorly on the Bank’s ability to present convincing objective research.

There are a number of longer articles on the general topic of procyclicality:

  • Procyclicality and Bank Capital
  • Procyclicality and Provisioning: Conceptual Issues, Approaches, and Empirical Evidence
  • Regulatory Constraints on Leverage: The Canadian Experience
  • Procyclicality and Value at Risk
  • Procyclicality and Margin Requirements
  • Procyclicality and Compensation

The second article includes a chart on provisioning; not the same thing as credit losses, but a much better effort than that given in the main section of the report:

I was disappointed to see that bank capital and dynamic provisioning were discussed in the contexts of the general macro-economy and with great gobbets of regulatory discretion. I would much prefer to see surcharges on Risk Weighted Assets related to both the gross amount and the trend over time of RWA calculated by individual bank.
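A surcharge of this kind could be made entirely mechanical. The sketch below is hypothetical, with all thresholds and factors invented, but it shows a rule keyed to both the gross amount and the growth trend of an individual bank's RWA, with no regulatory discretion required:

```python
# Hypothetical sketch of a rules-based RWA surcharge: an add-on to the
# required capital ratio driven by both the level and the growth trend of a
# bank's Risk Weighted Assets. All parameters and thresholds are invented.

def rwa_surcharge(rwa_history, base_ratio=0.08,
                  level_threshold=500.0, level_factor=0.005,
                  growth_threshold=0.10, growth_factor=0.02):
    """Return a required capital ratio: the base ratio plus add-ons when
    RWA is large in absolute terms or has grown quickly over the window."""
    current = rwa_history[-1]
    growth = current / rwa_history[0] - 1
    surcharge = 0.0
    if current > level_threshold:        # gross-amount add-on
        surcharge += level_factor
    if growth > growth_threshold:        # trend add-on
        surcharge += growth_factor * (growth - growth_threshold)
    return base_ratio + surcharge

# RWA grows from $480-billion to $600-billion over the observation window:
print(f"required ratio: {rwa_surcharge([480.0, 520.0, 600.0]):.3%}")
```

A fast-growing, large-RWA bank automatically faces a higher requirement in the boom, which is precisely when a countercyclical buffer should be built.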

The emphasis on regulatory infallibility is particularly distressing because it is regulators who bear responsibility for the current crisis. The banks screwed up, sure. But we expect the banks to screw up, that’s why they’re regulated. And the regulators dropped the ball.

The paper on leverage has an interesting chart:

Update, 2009-7-21: