Archive for the ‘Interesting External Papers’ Category

C-EBS Releases Counter-Cyclical Capital Buffer Position Paper

Friday, July 17th, 2009

The Committee of European Banking Supervisors has released a position paper on counter-cyclical capital buffers, favouring discretionary supervision (Pillar 2) over Capital Rules (Pillar 1):

While the mechanisms identified might be alternatively employed in Pillar 1, its use under the Pillar 2 umbrella is still considered the most sensible option at this stage. Pillar 2 allows for flexibility in testing new prudential tools; moreover, an application in Pillar 1 would require further work and refinements.

With regard to this last point a meeting with rating agencies was organized. They stated very clearly that transparency on capital adequacy is a key issue and it is a precondition for market acceptance of time-varying capital buffers. Rating agencies seem to prefer Pillar 1 solutions, considered more transparent [and] less prone to national discretions; however, they seem also aware that Pillar 2 would allow quicker responses and may be used for testing tools to be subsequently improved and, possibly, implemented under Pillar 1.

I suggest it’s not a matter of awareness: it’s a matter of trust. In Canada, of course, we have OSFI with its demonstrated willingness to short-circuit Pillar 1 on the basis of a panicky ‘phone call, as well as contemptuous opacity towards the concerns of investors (Pillar 3).

Essentially, the position paper aims at a different methodology for calculating Expected Losses (EL) – see Expected Losses and the Assets to Capital Multiple. EL is calculated by the formula

EL = PD * EAD * LGD

where PD = Probability of Default
EAD = Exposure at Default
LGD = Loss Given Default (a percentage)

What C-EBS is aiming at is:

the use of mechanisms that rescale probabilities of default (PDs) estimated by banks, in order to incorporate recessionary conditions.

Currently:

The input to the IRB formula is the annual PD expected to be incurred in that grade (computed as the long-run average of one-year default rates).

As for the LGD, banks are requested to use LGD estimates that are as much as possible estimated for an economic downturn (where these are more conservative than the long-run average).
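
As a quick illustration of the arithmetic (the rescaling factor and all dollar figures below are hypothetical, not taken from the position paper), a PD rescaled for recessionary conditions flows straight through the EL formula:

```python
# Minimal sketch of the EL arithmetic; the recession_scalar and all
# dollar figures are hypothetical, not from the C-EBS paper.

def expected_loss(pd, ead, lgd):
    """EL = PD * EAD * LGD, with PD and LGD expressed as fractions."""
    return pd * ead * lgd

ead = 100_000_000        # Exposure at Default, $
lgd = 0.45               # downturn Loss Given Default (45%)
pd_long_run = 0.010      # long-run average one-year PD for the grade
recession_scalar = 2.5   # hypothetical multiplier for recessionary conditions

el_current = expected_loss(pd_long_run, ead, lgd)
el_rescaled = expected_loss(pd_long_run * recession_scalar, ead, lgd)

print(f"EL with long-run PD:  ${el_current:,.0f}")     # $450,000
print(f"EL with rescaled PD:  ${el_rescaled:,.0f}")    # $1,125,000
print(f"Implied counter-cyclical buffer: ${el_rescaled - el_current:,.0f}")
```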

One problem I see with the approach is that there does not appear to be an allowance for the term of the exposure. Would a bank dealing exclusively in mortgages with a 5-year term be expected to use the same recessionary PD as a bank with a portfolio consisting exclusively of 30-year mortgages?

Cleveland Fed Releases July "Economic Trends"

Thursday, July 16th, 2009

The Cleveland Fed has released the July edition of Economic Trends with articles on:

  • May Price Statistics
  • The Yield Curve, June 2009
  • A Global Fiscal Crisis?
  • The Employment Situation, June 2009
  • Real GDP: First-Quarter 2009 Final Estimate
  • Gross Domestic Product Growth across States
  • Fourth District Employment Conditions
  • Consumer Credit Markets

Excluding food and energy prices (core CPI), the index rose just 1.7 percent in May, compared to 2.3 percent over the past three months and 1.8 percent over the past year. Alternative measures of underlying inflation trends—the median CPI and the 16 percent trimmed-mean CPI—increased 0.6 percent and 1.1 percent, respectively in May. The sluggish gain in the median CPI was the smallest increase in the measure since April 2003. The longer-term (12-month) trends in the underlying inflation measures all ticked down in May and are now ranging between 1.8 percent and 2.4 percent.

Indications so far suggest that the TALF is having a positive impact on consumer credit markets. In September 2008, the market for consumer ABS effectively shut down. This was particularly true for student loan ABS and credit card ABS. After the introduction of the TALF, the market began to revert to levels seen before the market’s collapse. For instance, total consumer ABS issuance in November was merely $0.5 billion, while six months later it had risen to $14.4 billion. This increase was not due entirely to Federal Reserve actions—the total increase in ABS issuance was larger than the amount lent under TALF. This would imply that banks are becoming less risk averse as they once again engage in securitization.

Haug and Taleb on Black-Scholes

Sunday, July 12th, 2009

Espen Gaarder Haug & Nassim Nicholas Taleb have produced a highly entertaining – but, alas, somewhat less than informative – polemic: Why We Have Never Used the Black-Scholes-Merton Option Pricing Formula:

Options traders use a pricing formula which they adapt by fudging and changing the tails and skewness by varying one parameter, the standard deviation of a Gaussian. Such formula is popularly called “Black-Scholes-Merton” owing to an attributed eponymous discovery (though changing the standard deviation parameter is in contradiction with it). However we have historical evidence that 1) Black, Scholes and Merton did not invent any formula, just found an argument to make a well known (and used) formula compatible with the economics establishment, by removing the “risk” parameter through “dynamic hedging”, 2) Option traders use (and evidently have used since 1902) heuristics and tricks more compatible with the previous versions of the formula of Louis Bachelier and Edward O. Thorp (that allow a broad choice of probability distributions) and removed the risk parameter by using put-call parity. 3) Option traders did not use formulas after 1973 but continued their bottom-up heuristics. The Bachelier-Thorp approach is more robust (among other things) to the high impact rare event. The paper draws on historical trading methods and 19th and early 20th century references ignored by the finance literature. It is time to stop calling the formula by the wrong name.

The tone of the paper is evident in the first angry footnote:

For us, practitioners, theories should arise from practice.

Footnote: For us, in this discussion, a practitioner is deemed to be someone involved in repeated decisions about option hedging, not a support quant who writes pricing software or an academic who provides “consulting” advice.

The main thrust of the article is that the premise of the Black-Scholes model is incorrect:

Referring to Thorp and Kassouf (1967), Black, Scholes and Merton took the idea of delta hedging one step further, Black and Scholes (1973):

If the hedge is maintained continuously, then the approximations mentioned above become exact, and the return on the hedged position is completely independent of the change in the value of the stock. In fact, the return on the hedged position becomes certain. This was pointed out to us by Robert Merton.

This may be a brilliant mathematical idea, but option trading is not mathematical theory. It is not enough to have a theoretical idea so far removed from reality that is far from robust in practice.

The authors point out that

  • Option trading has been around for a long time
  • The only way to hedge options properly is with other options, due to pricing discontinuities
  • Put-Call Parity is the basic theoretical foundation of proper hedging (see the sketch below)
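
Put-call parity itself requires nothing beyond a static no-arbitrage argument, which is the authors’ point about traders removing the “risk” parameter without any dynamic hedging. A minimal sketch for a European option on a non-dividend-paying stock (all market inputs below are hypothetical):

```python
# European put-call parity (no dividends assumed):
#   C - P = S - K * exp(-r * T)
# Given a traded call price, the put price follows with no distributional
# assumptions at all. Market inputs below are hypothetical.
import math

def put_from_call(call, spot, strike, rate, t_years):
    return call - spot + strike * math.exp(-rate * t_years)

call_price = 7.25
spot, strike = 100.0, 105.0
rate, t = 0.03, 0.5

print(f"Parity-implied put: {put_from_call(call_price, spot, strike, rate, t):.2f}")  # ~10.69
```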

The second main point of the article is that, consistent with the idea that only options are a proper hedge against options, the job of an options trader is not to value options based on some theory; it is to make money with a market-neutral book:

In that sense, traders do not perform “valuation” with some “pricing kernel” until the expiration of the security, but, rather, produce a price of an option compatible with other instruments in the markets, with a holding time that is stochastic. They do not need top-down “science”.

This raises a critical point: option traders do not “estimate” the odds of rare events by pricing out-of-the-money options. They just respond to supply and demand. The notion of “implied probability distribution” is merely a Dutch-book compatibility type of proposition.

They conclude:

One could easily attribute the explosion in option volume to the computer age and the ease of processing transactions, added to the long stretch of peaceful economic growth and absence of hyperinflation. From the evidence (once one removes the propaganda), the development of scholastic finance appears to be an epiphenomenon rather than a cause of option trading. Once again, lecturing birds how to fly does not allow one to take subsequent credit.

This is why we call the equation Bachelier-Thorp. We were using it all along and gave it the wrong name, after the wrong method and with attribution to the wrong persons. It does not mean that dynamic hedging is out of the question; it is just not a central part of the pricing paradigm.

I must point out that Mr. Taleb’s rose-tinted vision of the good old days – while probably quite true in most respects – does not square completely with what I have read in other sources.

If I recall correctly, Morton Shulman recounted in his book “Anybody can still be a millionaire” his adventures as a partner in a small Toronto brokerage in the … late ’60s? early ’70s? He and his partners were willing to write puts and became, he says, amazingly popular with their New York counterparts because there was bottomless demand for them.

Boston Fed Paper on Mortgage Renegotiation and Securitization

Monday, July 6th, 2009

The Boston Fed has released a new Public Policy Discussion Paper by Manuel Adelino, Kristopher Gerardi, and Paul S. Willen, Why Don’t Lenders Renegotiate More Home Mortgages? The Myth of Securitization:

We document the fact that servicers have been reluctant to renegotiate mortgages since the foreclosure crisis started in 2007, having performed payment-reducing modifications on only about 3 percent of seriously delinquent loans. We show that this reluctance does not result from securitization: servicers renegotiate similarly small fractions of loans that they hold in their portfolios. Our results are robust to different definitions of renegotiation, including the one most likely to be affected by securitization, and to different definitions of delinquency. Our results are strongest in subsamples in which unobserved heterogeneity between portfolio and securitized loans is likely to be small, and for subprime loans. We use a theoretical model to show that redefault risk, the possibility that a borrower will still default despite costly renegotiation, and self-cure risk, the possibility that a seriously delinquent borrower will become current without renegotiation, make renegotiation unattractive to investors.

This follows the earlier Boston Fed paper, Reducing Foreclosures, which argued that it was income shocks and housing price declines, not high payment-to-income ratios at origination, that were the driving force in the foreclosure boom.

The Optimal Level of Deposit Insurance Coverage

Tuesday, June 30th, 2009

The Boston Fed has released a working paper by Michael Manz, The Optimal Level of Deposit Insurance Coverage, in which a model of depositor behaviour leads to some interesting conclusions:

The model derived in this paper allows for a rigorous analysis of partial deposit insurance. The benefits of insurance involve eliminating inefficient withdrawals and bank runs due to noisy information and coordination failures, whereas the drawbacks consist of suppressing efficient withdrawals and of inducing excessive risk taking. A hitherto hardly noticed conclusion is that a high level of coverage can even be detrimental if bank risk is exogenous, because it undermines the occurrence of efficient bank runs. An extended version of the model shows that systemic risk calls for a higher level of deposit insurance, albeit only for systemically relevant banks from which contagion emanates, and not for the institutions that are potentially affected by contagion.

A vital contribution of the model is to provide comparative statics of the optimal level of coverage. In particular, the results imply that while tightening liquidity requirements is a substitute for deposit insurance, increasing transparency is not. Rather, the optimal coverage increases with the quality of the information available to depositors. Perhaps surprisingly, the degree of deposit insurance should not vary with expectations regarding the development of the real sector. This suggests that countries that in the past turned to increased or even unlimited deposit insurance as a reaction to a crisis, such as Japan, Turkey, or the United States, would do well to pause for thought on whether this is the right measure to strengthen their banking systems. The model also demonstrates why the presence of large creditors with uninsured claims calls for a lower level of insurance and why a high coverage is foremost in the interest of bankers and uninsured lenders. Moreover, it is consistent with small banks being particularly active lobbyists in favor of extending deposit insurance.

Another key advantage of the model is its applicability to various policy issues. In practice, only a small, albeit growing, number of countries maintaining deposit insurance require bank customers to coinsure a proportion of their deposits. According to the model, however, an optimal design of protection should build on coinsurance rather than on setting caps on insured deposits. It further indicates that deposit insurance becomes redundant in combination with full bailouts or optimal lending of last resort. While an unconditional bailout policy is about as inefficient as it can get, an optimal LolR policy combined with perfect public disclosure comes closest to the first best outcome in terms of welfare. Yet such an optimal policy, which requires protection to be contingent on bank solvency, seems far more demanding and hence less realistic in practice than unconditional deposit insurance. If regulators or central banks cannot precisely assess whether a bank is solvent, interventions are likely to be a mixture of the benchmark policies considered. Investigating these cases opens an interesting avenue for future research.
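
To see the difference between a coverage cap and the coinsurance design the paper favours, here is a stylized comparison (the figures are purely illustrative; the paper’s model is, of course, far richer):

```python
# Hypothetical comparison of deposit insurance designs: a coverage cap
# versus proportional coinsurance. Figures are illustrative only.

def payout_with_cap(deposit, cap):
    """Insurer pays the deposit in full, but only up to the cap."""
    return min(deposit, cap)

def payout_with_coinsurance(deposit, coinsured_share):
    """Depositor retains coinsured_share of any loss; insurer covers the rest."""
    return deposit * (1.0 - coinsured_share)

for deposit in (50_000, 100_000, 500_000):
    capped = payout_with_cap(deposit, cap=100_000)
    coinsured = payout_with_coinsurance(deposit, coinsured_share=0.10)
    print(f"deposit {deposit:>9,}: cap pays {capped:>9,.0f}, coinsurance pays {coinsured:>9,.0f}")
```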

Why are Most Funds Open-Ended?

Monday, June 29th, 2009

This paper was highlighted in the BIS Annual Report, so I had a look.

The full title of the paper is Why are Most Funds Open-Ended? Competition and the Limits of Arbitrage:

The majority of asset-management intermediaries (e.g., mutual funds, hedge funds) are structured on an open-end basis, even though it appears that the open end form can be a serious impediment to arbitrage. I argue that the equilibrium degree of open-ending in an economy can be excessive from the point of view of investors. When funds compete for investors’ dollars, they may engage in a counterproductive race towards the open-end form, even though this form leaves them ill-suited to undertaking certain types of arbitrage trades. One implication of the analysis is that, even absent short-sales constraints or other frictions, economically large mispricings can coexist with rational, competitive arbitrageurs who earn small excess returns.

One implication of this is that the connection between arbitrageurs’ profits and overall market efficiency is very tenuous. In Example 1, the ex ante gross return to investors of 1.10 translates into a 2 percent annual alpha for professionally-managed money, assuming a five-year horizon. This looks relatively small, in keeping with the empirical evidence on the performance of fund managers. Yet this small alpha coexists with an infinitely elastic supply of the UC [Uncertain Convergence] asset, which can be thought of as having a price that deviates from fundamental value by a factor of three. Indeed, fund managers barely touch the UC asset, even though eventual convergence to fundamentals is assured, and there are no other frictions, such as trading costs or short-sales constraints. As noted in the introduction, this kind of story would seem to fit well with the unwillingness of hedge funds to bet heavily against the internet bubble of the late 1990s, by, e.g., taking an outright short position on the NASDAQ index—something which would certainly have been feasible to do at low cost from an execution/trading-frictions standpoint.

Footnote: Brunnermeier and Nagel (2003) give a nice illustration of the risks that hedge funds faced in betting against the internet bubble. They analyze the history of Julian Robertson’s Tiger Fund, which in early 1999 eliminated all its investments in technology stocks (though it did not take an outright short position). By October 1999, the fund was forced to increase its redemption period from three to six months in an effort to stem outflows. By March 2000, outflows were so severe that the fund was liquidated—ironically, just as the bubble was about to burst.

The central point of this paper is easily stated. Asset-management intermediaries such as mutual funds and hedge funds compete for investors’ dollars, and one key dimension on which they compete is the choice of organizational form. In general, there is no reason to believe that this competition results in a form that is especially well-suited to the task of arbitrage. Rather, there is a tendency towards too much open-ending, which leaves funds unable to aggressively attack certain types of mispricing—i.e., those which do not promise to correct themselves quickly and smoothly. This idea may help to shed light on the arbitrage activities of another class of players: non-financial firms. Baker and Wurgler (2000) document that aggregate equity issuance by non-financial firms has predictive power for the stock market as a whole. And Baker, Greenwood and Wurgler (2003) show that such firms are also able to time the maturity of their debt issues so as to take advantage of changes in the shape of the yield curve. At first such findings appear puzzling. After all, even if one grants that managers have some insight into the future of their own companies, and hence can time the market for their own stock, it seems harder to believe that they would have an advantage over professional arbitrageurs in timing the aggregate stock and bond markets. However, there is another possible explanation for these phenomena. Even if managers of non-financial firms are less adept at recognizing aggregate-market mispricings than are professional money managers, they have an important institutional advantage—they conduct their arbitrage inside closed-end entities, with a very different incentive structure. For example, it was much less risky for the manager of an overpriced dot-com firm to place a bet against the internet bubble in 1999 (by undertaking an equity offering) than it was for a hedge-fund manager to make the same sort of bet. In the former case, if the market continued to rise, no visible harm would be done: the dot-com firm could just sit on the cash raised by the equity issue, and there would only be the subtle opportunity cost of not having timed the market even better. There would certainly be no worry about investors liquidating the firm as a result of the temporary failure of prices to converge to fundamentals.

There are some interesting corollaries to the BIS desire for increased arbitrage:

  • Hedge Funds are Good
  • Shorting is Good
  • Credit Default Swaps are Good

One wonders how much consistency we may see in their views going forward!

BIS Releases Annual Report

Monday, June 29th, 2009

The Bank for International Settlements has released its Annual Report 2008-09; some of the policy recommendations are contentious!

One interesting reference is:

Moreover, managers of assets in a given asset class were rewarded for performance exceeding benchmarks representing average performance in that investment category. As a result, even if managers recognised a bubble in the price of some asset, they could not take advantage of that knowledge by selling short for fear that investors would withdraw funds. The result was herding that caused arbitrage to fail.

Footnote: For a discussion of how arbitrage fails when individual investors cannot distinguish good asset managers from bad ones, see J Stein, “Why are most funds open-end? Competition and the limits of arbitrage”, Quarterly Journal of Economics, vol 120, no 1, February 2005, pp 247–72.

I’ll have to read that reference sometime!

The regulators continue to deflect blame for the crisis onto the Credit Rating Agencies:

In the end, the rating agencies – assigned the task of assessing the risk of fixed income securities and thus of guarding collective safety – became overwhelmed and, by issuing unrealistically high ratings, inadvertently contributed to the build-up of systemic risk.

Footnote: Differences in the methodologies used by the rating agencies also provided incentives for the originators to structure their asset-backed securities in ways that would allow them to “shop” for the best available combination of ratings (across both rating agencies and the liabilities structure of those instruments). See I Fender and J Kiff, “CDO rating methodology: some thoughts on model risk and its implications”, BIS Working Papers, no 163, November 2004.

Which is, of course, self-serving nonsense. It is the regulators who have been assigned the task of guarding collective safety. CRAs merely publish their opinions, the same way I do and the same way any idiot with access to the internet can do. As discussed by researchers at the Boston Fed in Making Sense of the Sub-Prime Crisis (which has been discussed on PrefBlog), the tail risk of sub-prime securities was well-known, well-publicized and ignored.

And as for their observation that:

And because of the complexity of the instruments, reliance on ratings increased even among the most sophisticated institutional investors.

… well, if I don’t understand something and can’t understand how to model it, I don’t buy it. I guess that makes me the most sophisticated investor in the world.

They do make one sensible observation:

Finally, there were governance problems in risk management practices. For both structural and behavioural reasons, senior managers and board members were neither asking the right questions nor listening to the right people. The structural problem was that risk officers did not have sufficient day-to-day contact with top decision-makers, often because they did not have sufficiently senior positions in their organisations. Without support from top management, it didn’t matter much what the chief risk officer said or to whom he or she said it. The structural problem was compounded by the behavioural response to a risk officer whose job it is to tell people to limit or stop what they are doing. If what they are doing is profitable, it is going to be difficult to get managers and directors to listen.

However, I don’t see anything addressing the obvious corollary that bank size must be controlled, preferably through the imposition of progressively increasing capital requirements based on size. After all, if all the big international banks disappeared, where would an intelligent and knowledgeable BIS employee look for future employment? To achieve the desired end, it’s probably better to insist on a greater and better-paid head-count in the Risk Management division.
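
For what it’s worth, a minimal sketch of what “progressively increasing capital requirements based on size” might look like (the thresholds and surcharge rates are entirely hypothetical):

```python
# Hypothetical schedule of a size-based capital surcharge layered on top
# of a base requirement. Asset thresholds (in $ billions) and surcharge
# rates are illustrative only.
BASE_REQUIREMENT = 0.07   # 7% of risk-weighted assets
SURCHARGE_BANDS = [       # (asset threshold, incremental surcharge)
    (100, 0.005),
    (250, 0.010),
    (500, 0.015),
    (1000, 0.020),
]

def required_ratio(total_assets_bn):
    surcharge = sum(s for threshold, s in SURCHARGE_BANDS if total_assets_bn > threshold)
    return BASE_REQUIREMENT + surcharge

for assets in (50, 200, 600, 1500):
    print(f"${assets}bn bank: {required_ratio(assets):.1%} required")
```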

The recommendation that has attracted the most attention (e.g., a Reuters report) is:

Balancing innovation and safety in financial instruments requires providing scope for progress while limiting the capacity of any new instrument to weaken the system as a whole. Balance can be achieved by requiring some form of product registration that limits investor access to instruments according to their degree of safety. In a scheme analogous to the hierarchy controlling the availability of pharmaceuticals, the safest securities would, like non-prescription medicines, be available for purchase by everyone; next would be financial instruments available only to those with an authorisation, like prescription drugs; another level down would be securities available in only limited amounts to pre-screened individuals and institutions, like drugs in experimental trials; and, finally, at the lowest level would be securities that are deemed illegal. A new instrument would be rated or an existing one moved to a higher category of safety only after successful tests – the analogue of clinical trials. These would combine issuance in limited quantities in the real world with simulations of how the instrument would behave under severe stress.
Such a registration and certification system creates transparency and enhances safety. But, as in the case of pharmaceutical manufacturers, there must be a mechanism for holding securities issuers accountable for the quality of what they sell. This will mean that issuers bear increased responsibility for the risk assessment of their products.

In other words, if I want to sell you a product with a complex pay-out and you want to buy it … too bad; we’ll both go to jail on charges of not being smart enough. Similarly, if I sell you an investment product and you lose money, you can get your money back from me because it is quite obvious that I sold you something you’re not smart enough to buy.

I wonder what difference that little change will make to the deemed capitalization of securities firms!

Update, 2009-7-2: Regulation of product types endorsed by Willem Buiter:

What I have in mind is an FDA for new financial instruments and institutions. Like new medical treatments, drugs and pharmacological concoctions, new financial instruments, products and institutions are potentially socially useful. They can also be toxic and health-threatening.

BoE Releases June 2009 Financial Stability Report

Friday, June 26th, 2009

The Bank of England has released its June Financial Stability Report filled with the usual tip-top commentary and analysis.

They open with a rather attention-grabbing chart showing their measure of financial market liquidity:

A number of steps towards “greater self insurance” are suggested; most of these are precious boiler-plate, but I was very pleased to see the inclusion of “constant net asset value MMMFs should be regulated as banks or forced to convert to variable net asset funds.” Section 3 fleshes out this idea a little more:

Measures to strengthen regulation and supervision will inevitably also increase avoidance incentives. Left unaddressed, this potentially poses risks for the future. Money market mutual funds (MMMFs) and structured investment vehicles (SIVs) are just two examples from the recent crisis of entities which contributed importantly to the build-up of risk in the financial system, but were not appropriately regulated. By offering to redeem their liabilities at par and effectively on demand, constant net asset value MMMFs in effect offer banking services to investors, without being regulated accordingly. The majority of the global industry comprises US domestic funds, with over US$3 trillion under management. During the crisis, as fears grew that these funds would not be able to redeem liabilities at par — so-called ‘breaking the buck’ — official sector interventions to support MMMFs were required. To guard against a recurrence, such funds need in future either to be regulated as banks or forced to convert into variable net asset value funds.

Footnote: In the United States, the Department of the Treasury has recently announced plans to strengthen the regulatory framework around MMMFs. The Group of Thirty, under Paul Volcker’s chairmanship, has recommended that MMMFs be recognised as special-purpose banks, with appropriate prudential regulation and supervision.

They also make the rather breath-taking recommendation that: “The quality of banks’ capital buffers has fallen over time. In the future, capital buffers should comprise only common equity to increase banks’ capacity to absorb losses while remaining operational.” This would imply that Preferred Shares and Innovative Tier 1 Capital would not be included in Tier 1 Capital, if they are included in capital at all! In section 3, they elucidate:

The dilution of capital reduces banks’ ability to absorb losses.
Capital needs to be permanently available to absorb losses and banks should have discretion over the amount and timing of distributions. The only instrument reliably offering these characteristics is common equity. For that reason, the Bank favours a capital ratio defined exclusively in these terms — so-called core Tier 1 capital. This has increasingly been the view taken by market participants during the crisis, which is one reason why a number of global banks have undertaken buybacks and exchanges of hybrid instruments (Box 4).

Pre-committed capital insurance instruments and convertible hybrid instruments (debt which can convert to common equity) may also satisfy these characteristics. As such, these instruments could also potentially form an element of a new capital regime, provided banks and the authorities have appropriate discretion over their use. Taken together, these instruments might form part of banks’ contingent capital plans. There is a case for all banks drawing up such plans and regularly satisfying regulators that they can be executed. Subordinated debt should not feature as part of banks’ contingent capital plans, even though it may help to protect depositors in an insolvency.
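
To see what excluding hybrids would do to the headline numbers, consider a stylized bank (the capital structure below is hypothetical):

```python
# Stylized capital structure, $ billions; all figures hypothetical.
common_equity        = 20.0
preferred_shares     = 5.0
innovative_tier1     = 3.0
risk_weighted_assets = 300.0

tier1 = common_equity + preferred_shares + innovative_tier1
core_tier1 = common_equity   # common equity only, as the Bank favours

print(f"Tier 1 ratio (incl. hybrids): {tier1 / risk_weighted_assets:.1%}")       # 9.3%
print(f"Core Tier 1 ratio:            {core_tier1 / risk_weighted_assets:.1%}")  # 6.7%
```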

There’s also an interesting chart showing bank assets as a percentage of GDP:

To my pleasure, they updated my favourite graph, the decomposition of corporate bond spreads. I had been under the impression that production of this graph had been halted pending review of the parameterization … wrong again!

They also show an interesting calculation of expected loss rates derived from CDS index data:
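
The usual back-of-the-envelope for that sort of exercise – not necessarily the Bank’s methodology – treats the CDS spread as pure compensation for expected loss, so the implied hazard rate is roughly the spread divided by loss severity:

```python
# Rough translation of a CDS index spread into an implied default
# probability and expected loss, using the usual credit-triangle
# approximation; the inputs are hypothetical, not the Bank's data.
import math

spread_bp = 150      # hypothetical 5-year CDS index spread, basis points
recovery = 0.40      # assumed recovery rate
horizon = 5.0        # years

hazard = (spread_bp / 10_000) / (1.0 - recovery)    # implied annual hazard rate
p_default = 1.0 - math.exp(-hazard * horizon)       # cumulative default probability
expected_loss = p_default * (1.0 - recovery)

print(f"Implied annual hazard rate: {hazard:.2%}")
print(f"{horizon:.0f}-year default probability: {p_default:.1%}")
print(f"{horizon:.0f}-year expected loss: {expected_loss:.1%}")
```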

They also recommend a higher degree of risk-based deposit insurance:

Charging higher deposit insurance premia to riskier banks would go some way towards correcting this distortion. These premia should be collected every year, not only following a bank failure, so that risky banks incur the cost at the time when they are taking on the risk. That would also allow a deposit insurance fund to be built up. This could be used, as in the United States and other jurisdictions, to meet the costs of bank resolutions. It would also reduce the procyclicality of a pay-as-you-go deposit insurance scheme, which places greatest pressures on banks’ finances when they can least afford it. For these reasons, the Bank favours a pre-funded, risk-based deposit insurance scheme.

Canada has a pseudo-risk-based system; there are certainly tiers of defined risks, but the CDIC is proud of the fact that, basically, everybody fits into the bottom rung. The CDIC is also pre-funded (in theory) but the level of pre-funding is simply another feel-good joke.
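
For reference, a pre-funded, risk-based scheme boils down to something like the following (the tiers and basis-point rates are illustrative, not CDIC’s actual schedule):

```python
# Illustrative risk-based deposit insurance premium schedule; the risk
# categories and basis-point rates are hypothetical, not CDIC's actual ones.
PREMIUM_RATES_BP = {1: 2.8, 2: 5.6, 3: 11.1, 4: 22.2}   # annual bp of insured deposits

def annual_premium(insured_deposits, risk_category):
    return insured_deposits * PREMIUM_RATES_BP[risk_category] / 10_000

# Premia are collected every year, building the fund before any failure,
# and riskier banks pay more at the time they are taking on the risk.
print(annual_premium(50_000_000_000, risk_category=1))   # best-rated bank
print(annual_premium(50_000_000_000, risk_category=4))   # riskiest category
```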

A Question of Liquidity: The Great Banking Run of 2008?

Thursday, June 25th, 2009

The Boston Fed has released a paper by Judit Montoriol-Garriga and Evan Sekeris titled A Question of Liquidity: The Great Banking Run of 2008?:

The current financial crisis has given rise to a new type of bank run, one that affects both the banks’ assets and liabilities. In this paper we combine information from the commercial paper market with loan level data from the Survey of Terms of Business Loans to show that during the 2007-2008 financial crises banks suffered a run on credit lines. First, as in previous crises, we find an increase in the usage of credit lines as commercial spreads widen, especially among the lowest quality firms. Second, as the crises deepened, firms drew down their credit lines out of fear that the weakened health of their financial institution might affect the availability of the funds going forward. In particular, we show that these precautionary draw-downs are strongly correlated with the perceived default risk of their bank. Finally, we conclude that these runs on credit lines have weakened banks further, curtailing their ability to effectively fulfill their role as financial intermediaries.

The authors note:

In order to test this hypothesis we used Credit Default Swap prices for the largest banks in our sample as a measure of how stressed the bank is perceived to be by the market. After controlling for other parameters, we found that banks with high CDS prices experienced significantly higher draw downs than banks with lower ones. In other words, when a bank was thought to be at high risk of default, firms that had credit lines with them were more likely to use them than if their credit line was with a healthier bank. This was a run on the banks by investors who ran away from the financial paper market which in turn triggered a run by borrowers of the weakest banks. This sequence of events was made possible by the combination of an increased reliance on the commercial paper market by financial institutions for their short term liquidity needs and the, often lax, underwriting of credit lines during the good years.
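
In spirit, the test is a regression of credit-line utilization on the lending bank’s CDS spread plus controls; here is a toy version on synthetic data (the specification, variable names and coefficients are mine, not the authors’):

```python
# Toy illustration of regressing credit-line draw-downs on the lender's
# CDS spread; the data are synthetic and the specification is only a
# loose sketch of the kind of test the authors describe.
import numpy as np

rng = np.random.default_rng(0)
n = 500
cds_spread = rng.uniform(50, 800, n)        # lender CDS, basis points
firm_quality = rng.normal(0, 1, n)          # stand-in for firm-level controls
drawdown = 0.20 + 0.0004 * cds_spread - 0.05 * firm_quality + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), cds_spread, firm_quality])
beta, *_ = np.linalg.lstsq(X, drawdown, rcond=None)
print(f"Estimated sensitivity of draw-downs to lender CDS: {beta[1]:.5f} per bp")
```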

The authors suggest a hard look at the capital treatment of off-balance sheet commitments:

However, lower quality firms can use the lines of credit that they negotiated prior to the crisis at significantly better terms when credit standards were lax. Banks find themselves lending to these businesses at spreads that no longer reflect the risk they are exposed to. These “forced” loans crowd-out new loans to either lower risk businesses or to equally risky businesses but with spreads that better reflect their financial health.
In light of this inefficient use of credit lines in the 2007-2008 crisis, one may call into question whether the current regulatory framework is appropriate to deal with situations of market illiquidity. In particular, regulators may need to reconsider the regulation on bank capital requirements for off-balance sheet items such as unused commitments, and more generally, strengthen prudential oversight of liquidity risk management.



Developments in Business Financing

Tuesday, June 23rd, 2009

There’s a very good review of the impact of the Credit Crunch on markets and the law thereof, prepared by Theresa Einhorn of Haynes & Boone LLP, The Corporate Debt Market And Credit Derivatives.

Section headings are:

  • The Top Ten – Credit Crunch is #1
  • Rates are High and “Covenant Tight” Replaces “Covenant Light”
  • The Credit Markets are a Market and the Stock Market is a Sideshow
  • A New Risk for Borrowers – Defaulting Lenders Fail to Fund under Corporate Lines of Credit.
  • Survival Strategies During the Credit Crunch: Restructuring by Repurchase or Exchange of Debt
  • Credit Default Swaps Market Disruption Clauses in Credit Agreements
  • Market-Based Pricing for Loans – Pricing Based on CDS
  • Hybrid Securities
  • Investing in Distressed Debt and Other Distressed Assets