
Interesting External Papers

The Savings & Loan Crisis

I started hunting for a good reference after reading Assiduous Reader lystgl’s quotation in the comments to September 30.

Timothy Curry and Lynn Shibut of the FDIC wrote a paper for the FDIC Banking Review, The Cost of the Savings and Loan Crisis: Truth and Consequences.

It seems, not surprisingly, that there are a lot of estimates of the cost, which I assume are influenced by political considerations:

Over time, misinformation about the cost of the crisis has been widespread; some published reports have placed the cost at less than $100 billion, and others as high as $500 billion.

Some numbers which provide perspective – and will, I assume, not be particularly controversial – are provided in Table 1. From 1986-1989, there were 296 failures with assets of $125-billion addressed by the Federal Savings and Loan Insurance Corporation; from 1989-1995 there were 747 failures with assets of $394-billion addressed by Resolution Trust Corp.

As of year-end 1986, 441 thrifts with $113 billion in assets were book insolvent, and another 533 thrifts, with $453 billion in assets, had tangible capital of no more than 2 percent of total assets. These 974 thrifts held 47 percent of industry assets.

One of the problems with estimating the costs of this mess is ‘what to do about interest charges’:

During the FSLIC and RTC eras, the industry contributed $38.3 billion (sometimes in partnership with the Treasury) in funding for the cleanup. Special government-established financing entities (FICO and REFCORP) raised these funds by selling long-term bonds in the capital markets. The Treasury contributed another $99 billion, some or all of which was also borrowed because the federal government was experiencing large budget deficits during the period. When some analysts tabulated the costs of the cleanup, they included not only the principal borrowed but also interest costs for periods of up to 30 to 40 years on some or all of the borrowings.

Including the financing costs in addition to principal could easily double or triple the estimates of the final cost of the cleanup. However, in our view, including financing costs when tallying the costs of the thrift crisis is methodologically incorrect. It is invalid because, in present value terms, the amount borrowed is equal to the sum of the interest charges plus debt repayment. Adding the sum of interest payments to the amount borrowed would overstate the true economic cost of resolving the crisis.
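
The point is easy to check numerically; here’s a minimal sketch (the 6% rate and 30-year term are my own illustrative assumptions – only the $99 billion comes from the passage above):

```python
# Numerical check of the authors' present-value point (my own illustration):
# borrow $99 billion for 30 years at 6% (rate and term are assumptions),
# pay interest annually and repay the principal at maturity.  Discounted at
# the borrowing rate, the whole service stream is worth exactly the principal.
principal, rate, years = 99e9, 0.06, 30

pv_interest = sum(rate * principal / (1 + rate) ** t for t in range(1, years + 1))
pv_repayment = principal / (1 + rate) ** years
undiscounted_interest = rate * principal * years

print(f"PV of interest + repayment: {pv_interest + pv_repayment:,.0f}")  # ~ the principal
print(f"Principal borrowed:         {principal:,.0f}")
print(f"Adding undiscounted interest would tack on another "
      f"{undiscounted_interest:,.0f}, roughly tripling the apparent cost.")
```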

The authors present their calculations with admirable clarity – I don’t know how much dispute there still might be over the total figure, but their tables look like a very good place to start reconciling differences – and conclude:

The savings and loan crisis of the 1980s and early 1990s produced the greatest collapse of U.S. financial institutions since the Great Depression. Over the 1986–1995 period, 1,043 thrifts with total assets of over $500 billion failed. The large number of failures overwhelmed the resources of the FSLIC, so U.S. taxpayers were required to back up the commitment extended to insured depositors of the failed institutions. As of December 31, 1999, the thrift crisis had cost taxpayers approximately $124 billion and the thrift industry another $29 billion, for an estimated total loss of approximately $153 billion. The losses were higher than those predicted in the late 1980s, when the RTC was established, but below those forecasted during the early to mid-1990s, at the height of the crisis.

Interesting External Papers

Risk Transfer, Zombie Firms and the Credit Crunch

Edward Kane of Boston College managed a rare accomplishment last April; he wrote an essay on the economics of regulation and moral hazard that is both entertaining and informative.

The paper is Extracting Nontransparent Safety-Net Subsidies by Strategically Expanding and Contracting a Financial Institution’s Accounting Balance Sheet.

He argues that the complexity of (what the Bank of England calls) Large Complex Financial Institutions is not a natural consequence of size and success, but is achieved in a deliberate (if perhaps unconscious) effort to maximize implicit government subsidies:

… value maximization leads them to trade off diseconomies from becoming inefficiently large or complex against the safety-net benefits that increments in scale or scope can offer them. Arguably, Citigroup has been the poster child for this kind of behavior.

Along with investments in political clout, an institution can obtain and hold TDFU [Too Difficult to Fail and Unwind] and TBDA [Too Big to Discipline Adequately] status by: (1) moving highly leveraged loss exposures formally off their accounting balance-sheet, and (2) maintaining an aggressive program of mergers and acquisitions. Over time, either strategy makes a large institution ever more gigantic, ever more complex, and ever more politically influential. The profitability of undertaking these dialectical responses to FDICIA [FDIC Improvement Act] tells us that the current wave of financial-institution consolidation and convergence is not just an efficiency-enhancing Schumpeterian long-cycle response either to past overbanking or to secularly improving technologies of communication, contracting, and record-keeping. Mergers that involve a TDFU or TBTDA organization have been shown to increase the capitalized value of the implicit government credit enhancements imbedded in their capital structure (Kane, 2000; Penas and Unal, 2004; Brewer and Jagtiani, 2007).

This thesis is then particularized:

It is a mistake to characterize the current turmoil as a liquidity crisis caused by fire-sale pricing and to try to cure the turmoil by auctioning off central-bank loans. In practice, multiple-tranche securitization (and resecuritization) of highly leveraged loans has revealed itself to be less about risk transfer than about risk shifting: i.e., undercompensating counterparties for the risks they assume. TBFU originators of leveraged loans and TBFU sponsors of securitization conduits transformed traditional default and interest-rate risks into hard-to-understand counterparty and funding risks that in distressed times pass back for reputational reasons from securitization vehicles. The critical point is that off-balance-sheet vehicles that booked complex swaps and structured securitizations created reputation-driven loss exposures for sponsors that managers and accountants knew lacked transparency for supervisors and creditors. The victims were investors who accepted inflated estimates of the credit quality of the instruments they purchased and the safety-net managers and taxpayers who now have to clean up the mess.

Besides confusing investors, complex forms of structured finance expand risk-shifting possibilities by making it easy for authorities to neglect the safety-net implications these positions generate and to exempt complex loss exposures from appropriate capital discipline.

I can certainly testify that the dealer community just loves to repackage risk and charge a high price for it. Back in the old days – by which I mean the late 1980s – it was enormously profitable in Canada simply to strip the coupons from a government bond and sell them individually – surely one of the simpler mechanisms of creating a synthetic. And I cannot count the number of times I’ve been offered some kind of hideously complex product that has left me puzzled for hours about the methodology of pricing it, let alone actually doing the pricing! These usually came with some kind of underhanded deal in which the purchasing portfolio manager could make a bet outside his mandate – currency speculation, say – while holding something that could plausibly be called a bond.
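
To illustrate the arithmetic of that trade, here’s a little sketch with made-up numbers (the bond, the yields and the 10-basis-point pickup on the strips are all illustrative assumptions, not actual levels from the period):

```python
# Illustrative only: a 10-year 10% annual-pay government bond, priced at a 10% yield,
# versus the sum of its stripped cash flows sold individually at yields 10 bp lower.
# All numbers are assumptions for illustration.

def pv(cashflow, years, yield_):
    """Present value of a single cash flow discounted annually."""
    return cashflow / (1 + yield_) ** years

face, coupon_rate, maturity, bond_yield = 100.0, 0.10, 10, 0.10
strip_yield = bond_yield - 0.0010  # assume the strips can be sold 10 bp richer

cashflows = [(face * coupon_rate, t) for t in range(1, maturity + 1)]
cashflows[-1] = (cashflows[-1][0] + face, maturity)  # final coupon plus principal

bond_price = sum(pv(cf, t, bond_yield) for cf, t in cashflows)
strip_proceeds = sum(pv(cf, t, strip_yield) for cf, t in cashflows)

print(f"Bond price:      {bond_price:8.3f}")
print(f"Strip proceeds:  {strip_proceeds:8.3f}")
print(f"Dealer pickup:   {strip_proceeds - bond_price:8.3f} per 100 face")
```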

And, of course, they took their cut!

Dr. Kane concludes:

To minimize the costs of rehabilitating a damaged firm, a private rescuer begins by poring over its books to establish a solid knowledge of unrealized losses and continuing loss exposures. Armed with that knowledge, private rescuers (whose behavior can be typified by capital assistance provided by JP Morgan-Chase and sovereign investment funds during the current turmoil) force rescued stockholders to accept a deal that gives the rescuer a claim to the incremental future profits that the rescue might generate. This tells us that to control moral hazard, government rescuers must insist that the rights of shareholders in TDFU zombie firms undergo severe dilution. To see that taxpayers receive fair compensation for their preservation efforts, government rescuers must be made accountable for establishing for their agency (and ultimately for taxpayers) an appropriately large equity or warrant position on the upside of the rescued firm.

This is a big step up from Bagehot, but Dr. Kane is referring to zombie firms – those that are insolvent. Bagehot applies only to problems of illiquidity.

Interesting External Papers

Naked Shorting

Christopher Culp & J.B. Heaton have written an essay on The Economics of Naked Short Selling. They review the mechanics and economic theory of short selling to conclude:

There is little meaningful economic difference between the two forms of short selling … The only difference is who acts as the effective lender of the security … The buyer, after all, is now in the position of the security lender and has a very solvent counterparty in the NSCC [National Securities Clearing Corporation].

The Depository Trust and Clearing Corporation itself has a Question & Answer page … from 2005!

Certainly there have been cases in the past where it has, and those cases have been prosecuted by the SEC and other appropriate enforcement agencies. I suppose there will be cases where someone else will try to break the law in the future. But I also don’t believe that there is the huge, systemic, illegal naked shorting that some have charged is going on. To say that there are trillions of dollars involved in this is ridiculous. The fact is that fails, as a percentage of total trading, hasn’t changed in the last 10 years.

Now, as far as I can see from SEC Form X-17A-5 PART II, shorts on a firm’s books resulting from a fail to receive are merely marked-to-market; there is no requirement that the position be over-collateralized by either the customer or the firm.

And this is the crux of the issue. If I am correct, and naked short-selling is simply a methodology to get around margin requirements, then it is the margin requirements that need to be fixed.

I have posted a question on Jim Hamilton’s blog … we shall see!

Update: The above is rather cryptic, isn’t it?

I’ve been puzzled about naked short selling and why it is considered the epitome of evil; by-and-large taking the view of Culp & Heaton. The problem – as I see it – is counterparty risk. If somebody naked-short-sells you a million shares of Morgan Stanley, that, in and of itself, is no big deal. You don’t have to pay for them and as long as the price doesn’t change, there’s no big risk.

The risk is that the shares will go up a lot and the counterparty will go bust, leaving you high and dry … particularly if you’ve taken other market action based on your purchase.

As I noted yesterday, Accrued Interest thinks a lot of hedgies are going to go bust in the near future and I agree with him. The ones who shorted financials on Thursday go first.

And we have seen a lot of problems lately with undercollateralization of exposure. If CDS exposures had been adequately collateralized, there would not have been nearly so much of a problem caused by MBIA, Ambac and AIG. If the parties at risk on naked shorting of financials turn out to be the financials themselves, we could have a very interesting co-dependency conundrum!

So I want to know, but I don’t know: what are the over-collateralization requirements, if any, on Fails-to-Receive?

Update, 2008-9-21: I have found a paper by Leslie Boni of UNX and the University of New Mexico titled Strategic Delivery Failures in U.S. Equity Markets, abstract:

Sellers of U.S. equities who have not provided shares by the third day after the transaction are said to have “failed-to-deliver” shares. Using a unique dataset of the entire cross-section of U.S. equities, we document the pervasiveness of delivery failures and provide evidence consistent with the hypothesis that market makers strategically fail to deliver shares when borrowing costs are high. We also document that many of the firms that allow others to fail to deliver to them are themselves responsible for fails-to-deliver in other stocks. Our findings suggest that many firms allow others to fail strategically simply because they are unwilling to earn a reputation for forcing delivery and hope to receive quid pro quo for their own strategic fails. Finally, we discuss the implications of these findings for short-sale constraints, short interest, liquidity, price volatility, and options listings in the context of the recently adopted Securities and Exchange Commission Regulation SHO.

In the text:

Any clearing member with a failure-to-receive position has the option of notifying the NSCC that it wants to try to force delivery of (“buy in”) some or all of that position. Evans, Geczy, Musto, and Reed (2003) provide evidence that buy-ins may be rarely requested. Using fails and buy-in data from one major options market maker for the period 1998-1999, they find that the market maker failed-to-deliver all or at least a portion of the shares in 69,063 transactions. The market maker was bought-in on only 86 of these positions. An interesting question is why clearing members that fail to receive shares allow the fails to persist. The following explanations have been suggested by market participants.

1) Costs of failures to receive are small. Regardless of whether shares are delivered, long and short positions are marked-to-market each day. Although long positions that fail to receive shares forego the opportunity to lend them, short interest levels and lending as a percentage of outstanding shares are low on average.

2) Clearing member may have to recall stock loans that have been made via the National Securities Clearing Corporation (“NSCC”) before requesting buy-ins.

3) Bought-in shares will themselves have a high probability of delivery failure.

4) Firms that fail to receive, by not forcing delivery, hope to bank future goodwill with those that fail to deliver.

I’m not concerned about the price-discovery process, or possible distortions thereto that might be created by naked short selling. I am concerned about counterparty and systemic risk. These risks are best addressed through ensuring that fails are adequately covered by capital; applying a capital charge – with mark-to-market – is the most direct way of addressing these risks.
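
To illustrate what I have in mind, here’s a little sketch (the position, the prices and the 25% charge rate are all made-up assumptions for illustration – not anything prescribed by Form X-17A-5 or any regulator):

```python
# Illustrative sketch of marking a fail-to-receive to market and applying a
# hypothetical capital charge.  The position, prices and the 25% charge rate
# are assumptions for illustration only, not an actual regulatory requirement.

def fail_to_receive_exposure(shares, trade_price, current_price):
    """Buyer's counterparty exposure: what it would cost to replace the
    undelivered shares at today's price, over and above the contracted amount."""
    return shares * max(current_price - trade_price, 0.0)

def hypothetical_capital_charge(exposure, charge_rate=0.25):
    """A made-up over-collateralization rule: hold capital against a fraction
    of the replacement cost, in addition to the daily mark-to-market."""
    return exposure * charge_rate

shares, trade_price = 1_000_000, 20.00
for current_price in (20.00, 25.00, 35.00):
    exp = fail_to_receive_exposure(shares, trade_price, current_price)
    print(f"price {current_price:6.2f}: exposure {exp:14,.0f}  "
          f"capital charge {hypothetical_capital_charge(exp):12,.0f}")
```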

Interesting External Papers

What Happened to the Quants in August 2007?

I referred to the paper on September 15 and republished the abstract in that post.

The paper is by Amir E. Khandani, a graduate student at MIT, and Andrew W. Lo, a Professor at the MIT Sloan School of Management (among other titles).

The paper was written in an attempt to understand the events of August 7th to August 10th:

With laser-like precision, model-driven long/short equity funds were hit hard on Tuesday August 7th and Wednesday August 8th, despite relatively little movement in fixed-income and equity markets during those two days and no major losses reported in any other hedge-fund sectors. Then, on Thursday August 9th when the S&P 500 lost nearly 3%, most of these market-neutral funds continued their losses, calling into question their market-neutral status.

By Friday, August 10th, the combination of movements in equity prices that caused the losses earlier in the week had reversed themselves, rebounding significantly but not completely. However, faced with mounting losses on the 7th, 8th, and 9th that exceeded all the standard statistical thresholds for extreme returns, many of the affected funds had cut their risk exposures along the way, which only served to exacerbate their losses while causing them to miss out on a portion of the reversals on the 10th. And just as quickly as it descended upon the quants, the perfect financial storm was over.

I’ll quibble with the idea that taking losses when the market goes down calls into question their market-neutral status. Ideally, the distribution of gains and losses for such a fund will be completely uncorrelated with market movements; over a sufficiently long period (like, more than a week!) there will be just as many days when the fund moves against the market as days when it moves with it.
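
The check I have in mind is simple enough; here’s a sketch with synthetic data (purely illustrative, not the authors’ methodology):

```python
import numpy as np

# Illustrative check of "market-neutral" status (my own sketch, not the
# authors' methodology): estimate a fund's beta and correlation to the market
# from daily returns.  The synthetic series below stand in for real data.
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.010, 250)    # ~one year of daily market returns
fund = rng.normal(0.0005, 0.005, 250)   # a genuinely market-neutral fund

cov = np.cov(fund, market, ddof=0)
beta = cov[0, 1] / cov[1, 1]
corr = np.corrcoef(fund, market)[0, 1]
print(f"beta = {beta:+.3f}, correlation = {corr:+.3f}")
# Over a long enough window both should be near zero for a market-neutral fund,
# even though any given week can show losses on down-market days by chance.
```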

The authors define the class of hedge fund investigated as:

including any equity portfolios that engage in shortselling, that may or may not be market-neutral (many long/short equity funds are long-biased), that may or may not be quantitative (fundamental stock-pickers sometimes engage in short positions to hedge their market exposure as well as to bet on poor-performing stocks), and where technology need not play an important role.

… but warn that distinctions between funds are blurring (similarly to recently observed private-equity investments in junk bonds).

The authors attempt to reproduce the overall performance of the model-driven long/short equity funds by use of a naive model:

consider a long/short market-neutral equity strategy consisting of an equal dollar amount of long and short positions, where at each rebalancing interval, the long positions are made up of “losers” (underperforming stocks, relative to some market average) and the short positions are made up of “winners” (outperforming stocks, relative to the same market average).

By buying yesterday’s losers and selling yesterday’s winners at each date, such a strategy actively bets on mean reversion across all N stocks, profiting from reversals that occur within the rebalancing interval. For this reason, (1) has been called a “contrarian” trading strategy that benefits from market overreaction, i.e., when underperformance is followed by positive returns and vice-versa for outperformance
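
For concreteness, here’s a minimal sketch of that naive rebalancing rule (my own illustration of the weighting scheme described in the quotation, not the authors’ code):

```python
import numpy as np

# Minimal sketch of the naive contrarian strategy described in the quoted
# passage (my own illustration, not the authors' code): dollar-neutral, long
# yesterday's losers and short yesterday's winners, rebalanced daily.
def contrarian_weights(prev_returns):
    """Weight each stock by the negative of its previous-day return relative to
    the cross-sectional average; the weights sum to zero."""
    n = len(prev_returns)
    return -(prev_returns - prev_returns.mean()) / n

# Synthetic daily returns for N stocks over T days, purely for illustration;
# i.i.d. random returns have no mean reversion, so expect roughly zero profit
# here -- the paper's profits come from short-horizon reversals in real prices.
rng = np.random.default_rng(1)
T, N = 250, 100
returns = rng.normal(0.0, 0.02, size=(T, N))

daily_pnl = []
for t in range(1, T):
    w = contrarian_weights(returns[t - 1])
    w = w / np.abs(w).sum() * 2.0       # scale to $1 long and $1 short
    daily_pnl.append(w @ returns[t])    # P&L per dollar of long positions

print(f"mean daily P&L: {np.mean(daily_pnl):+.4%}, std dev: {np.std(daily_pnl):.4%}")
```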

And at this point in the paper I got extremely excited, because the following paragraph proves these guys have actually thought about what they’re saying (which is extremely unusual):

However, another source of profitability of contrarian trading strategies is the fact that they provide liquidity to the marketplace. By definition, losers are stocks that have under-performed relative to some market average, implying a supply/demand imbalance, i.e., an excess supply that caused the prices of those securities to drop, and vice-versa for winners. By buying losers and selling winners, contrarians are increasing the demand for losers and increasing the supply of winners, thereby stabilizing supply/demand imbalances. Traditionally, designated marketmakers such as the NYSE/AMEX specialists and NASDAQ dealers have played this role, for which they are compensated through the bid/offer spread. But over the last decade, hedge funds and proprietary trading desks have begun to compete with traditional marketmakers, adding enormous amounts of liquidity to U.S. stock markets and earning attractive returns for themselves and their investors in the process.

The concept of “selling liquidity” is central to the Hymas Investment Management investment philosophy.

The naive strategy works extremely well:

In 1995, the average daily return of the contrarian strategy for all stocks in our sample is 1.38%, but by 2000, the average daily return drops to 0.44% and the year-to-date figure for 2007 (up to August 31) is 0.13%. Figure 1 illustrates the near-monotonic decline of the expected returns of this strategy, no doubt a reflection of increased competition, changes in market structure, improvements in trading technology and electronic connectivity, the growth in assets devoted to this type of strategy, and the corresponding decline in U.S. equity-market volatility over the last decade. This secular decline in profitability has significant implications for the use of leverage, which we will explore in Section 6.

These trends are consistent with my own informal observations. The authors suggest that the problem of declining returns may have been addressed by the simple expedient of increasing leverage … where have I heard that one before? But the naive method data is interesting – maybe I’ll do something like this for preferreds some day!
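
The leverage arithmetic is simple enough; here’s a back-of-the-envelope sketch using the average daily returns quoted above (my own calculation, not the figures from the paper’s Section 6):

```python
# Back-of-the-envelope: leverage needed to keep delivering 1995-style returns
# as the strategy's unlevered profitability decays.  Uses the average daily
# returns quoted above; this is my own arithmetic, not the paper's Section 6.
target_daily_return = 0.0138      # 1995 average daily return of the naive strategy
unlevered = {1995: 0.0138, 2000: 0.0044, 2007: 0.0013}

for year, r in unlevered.items():
    print(f"{year}: required leverage ≈ {target_daily_return / r:4.1f}x")
# 1995: 1.0x, 2000: ~3.1x, 2007: ~10.6x -- which is how a decline in returns
# gets "addressed" by the simple expedient of increasing leverage.
```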

So what happened during the period at issue?

The three days in the second week, August 7th, 8th, and 9th are the outliers, with losses of -1.16%, -2.83%, and -2.86%, respectively, yielding a cumulative three-day loss of -6.85%. Although this three-day return may not seem that significant – especially in the hedge-fund world where volatility is a fact of life – note from Table 2 that the contrarian strategy’s 2006 daily standard deviation is 0.52%, so a -6.85% cumulative return represents a loss of 12 daily standard deviations! Moreover, many long/short equity managers were employing leverage (see Section 6 for further discussion), hence their realized returns were magnified several-fold.

Curiously, a significant fraction of the losses was reversed on Friday, August 10th, when the contrarian strategy yielded a return of 5.92%, which was another extreme outlier of 11.4 daily standard deviations. In fact, the strategy’s cumulative return for the entire week of August 6th was -0.43%, not an unusual weekly return in any respect.

We could quibble over the use of “standard deviations” in the above, but we won’t. We’ve already done that.

The authors provide a variety of rationales for excess losses experienced by real hedge funds, cautioning that they have no access to the books and are therefore only guessing. What makes sense to me is the following scenario:

  • Randomly selected hedge fund does a large liquidation (either by change in strategy, or to meet client redemptions)
  • Market impact for this liquidation distorts the market for several days
  • By policy or by counterparty insistence, funds unwind positions after losses, and
  • hence, do not share in the excess returns seen Friday

It all hangs together. The scariest and funniest part of the paper is:

Moreover, the widespread use of standardized factor risk models such as those from MSCI/BARRA or Northfield Information Systems by many quantitative managers will almost certainly create common exposures among those managers to the risk factors contained in such platforms.

But even more significant is the fact that many of these empirical regularities have been incorporated into non-quantitative equity investment processes, including fundamental “bottom-up” valuation approaches like value/growth characteristics, earnings quality, and financial ratio analysis. Therefore, a sudden liquidation of a quantitative equity market-neutral portfolio could have far broader repercussions, depending on that portfolio’s specific factor exposures.

So the scary part is: this is cliff-risk come to equities, cliff-risk having been discussed – briefly – on PrefBlog on April 4. The funny part is: we have an enormous industry with enormously compensated personnel that wind up all making the same bet (or, at least, bets all having extremely similar common factors). It’s all sales!

Update, 2010-8-6: There are claims that the 2010-5-6 Flash Crash had similar origins:

Critics focus on unusual market volatility experienced in August 2007 and again during the “flash crash” of May 2010 to show why HFT has increased rather than reduced volatility. The essential similarity among many quant strategies results in trades becoming crowded, and the application of HFT techniques ensures that when they go wrong it results in a disorderly rush for the exits.

The August 2007 and May 2010 episodes were the only ones involving high-frequency computer-driven trading. But the same basic problem (automated quant-based strategies and crowded trades causing liquidity to disappear in a crisis) can be traced back to previous market crises in 1998 (the failure of Long-Term Capital Management) and October 1987 (portfolio insurance and the stock market plunge), according to critics.

Interesting External Papers

BIS releases Quarterly Review

The Bank for International Settlements has released its September 2008 Quarterly Review, filled, as usual, with many fascinating graphs, analysis and informational tidbits.

Unfortunately, this was released at a time when I am buried up to my neck with month-end duties, so I cannot review the articles thoroughly at this time. I have, however, scanned Peter Hördahl’s The inflation risk premium in the term structure of interest rates, as well as The ABX: how do the markets price subprime mortgage risk? by Ingo Fender and Martin Scheicher. Good stuff – I might have time to review them thoroughly next week – I might not – read it yourselves!

Other features, of less personal interest to me, include a review of international banking and financial market developments. In general terms, I heartily recommend reading these reports by international bodies because, unlike absolutely everybody else who will attempt to explain the financial world, these people are not trying to sell you anything – not even a copy of the daily newspaper. What bias they do have is limited to the occasional “regulation = good” reference.

Interesting External Papers

Sub-Prime – Not Completely Bad Underwriting

The Great Credit Crunch of 2007-?? will be a rich source of theses and fistfights for many years to come. The New York Fed has published a staff paper by Andrew Haughwout, Richard Peach and Joseph Tracy titled Juvenile Delinquent Mortgages: Bad Credit or Bad Economy?

Even borrowers with negative equity, however, default less frequently than simple models would predict (see Vandell 1995 for a summary of the empirical evidence and Elul 2006 for an update). For an owner occupant considering default, transactions costs include moving costs, the cost of purchasing or renting a new residence, and damage to one’s credit score resulting in higher future borrowing costs. All told, some authors have argued that these costs can typically range from 15 to 30% of the value of the house, helping to explain why default appears to be underexercised relative to the simple option-theoretic prediction (Cunningham and Hendershott, 1984). Investors face fewer of these transaction costs and therefore may be more likely to default for a given LTV level.

The rapid house price increases in the boom/bust states prior to the downturn would act to keep the put option for default out-of-the-money. Even where the lender finances most or all of the borrower’s down payment with a 2nd lien loan, twelve months of double-digit house price appreciation will generate more than sufficient equity to cover the transactions costs of selling the house. Similarly, in cases where a borrower in a boom/bust state suffers a job loss, divorce or significant health problem during the boom period, we would not expect to see this result in a default. The borrower would have a financial incentive to sell the house and prepay the mortgage rather than default. Finally, as discussed earlier, owners may be less likely to exercise the default put option than investors other things equal.

Despite the focus in the press on no-doc mortgages, in each year the incidence of no-doc mortgages was in single digits, and was declining over the sample period. What is more notable is the shift in composition from fully documented to limited documented underwriting. From 2001 to 2006, the share of fully documented subprime mortgages fell from 77.8 percent to 61.7 percent, while the share of fully documented alt-a mortgages fell from 36.8 percent to 18.9 percent.

For borrowers with negative equity, the data indicate that investors appear to be much more likely than owners to default. The point estimate for the incremental effect on the default rate is over 24.6 percentage points for subprime investors and 20.3 percentage points for alt-a investors.

The major difference between 2003 and 2005-2007 was a dramatic change in house price appreciation. After rising nearly 14% in 2003, the OFHEO index accelerated to 16% in 2004 before slowing and eventually reversing. For 2005-2007, OFHEO grew 10%, 1% and –4% respectively. The decomposition indicates that changes in economic variables, particularly this reversal in house price appreciation, from 2003-2007 account for the bulk of our explanation for observed increases in early defaults. In 2006, we estimate that changes in the economy added 2.4 percentage points to the average early default rate for subprime loans, while in 2007 that figure rises to 4.1 percentage points.

“Bad Credit,” on the other hand, contributes less to our explained rise in average early defaults. Had the economy continued to produce unemployment and house price appreciation rates in 2005 through 2007 like those in 2003, our model predicts that changes in the credit profiles of new nonprime mortgages in each year would result in increases in average early default rates for subprime loans of less than a percentage point in each year.

We use loan-level data on securitized nonprime mortgages to examine what we refer to as “juvenile delinquency”: default or serious delinquency in the first year following a mortgage’s origination. Early default became much more common for loans originated in 2005-2007. Two complementary explanations have been offered for this phenomenon. The industry-standard explanation of default behavior focuses attention on a relaxation of lending standards after 2003.

We see evidence of this in our data, as some underwriting criteria, particularly loan-to-value ratios at origination, deteriorated. At the same time, however, the housing market experienced a sharp and pervasive downturn, a factor which has received attention in recent research. Our results suggest that while both of these factors – bad credit and bad economy – played a role in increasing early defaults starting in 2005, changes to the economy appear to have played the larger role.

Perhaps as important a finding is that, in spite of the set of covariates we control for, our model predicts at most 43 percent of the annual increase in subprime early defaults during the 2005-2007 period. Observable changes in standard underwriting standards and key economic measures appear to be unable to explain the majority of the run-up in early defaults. The fact, noted in our introduction, that many participants in the industry appeared to have been surprised by the degree of the increase in early defaults is in some sense verified here: observable characteristics of the loans, borrowers and economy seem to leave much unexplained, even with the benefit of hindsight. The difference between what we predict, conditional on observables, and what we actually observe is the difference between a bad few years for lenders/investors and a full-blown credit crunch.

The data does indicate a significant difference in behavior between owners and investors, especially in terms of how they respond to downward movements in house prices and negative equity situations. This has implications for underwriting. First, there may be payoffs to increased efforts at determining the true occupancy status of the borrower as part of the underwriting process. Second, originators may want to require additional equity up front from investors to reduce the likelihood that future house price declines could push the investor into negative equity.

In other words, a good part of the sub-prime debacle can be blamed not just on poor under-writing and the fashionably loathed originate-and-distribute model, but on a failure of investors to understand that they were short a put on housing prices … or, if they understood that, to price the put option properly.
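
For the option-minded, here’s a little sketch of the put that investors were implicitly short (all the numbers, including the transaction-cost haircuts, are illustrative assumptions loosely inspired by the 15-30% range quoted above):

```python
# Illustrative sketch of the "short a put on house prices" view of mortgage
# default.  A borrower rationally defaults when the mortgage balance exceeds
# the house value by more than the cost of defaulting; investors face lower
# default costs than owner-occupants, so their put is closer to at-the-money.
# All numbers are assumptions for illustration, not estimates from the paper.
def default_put_payoff(balance, house_value, default_cost_pct):
    """Loss passed to the lender/investor pool if the borrower exercises the
    default option: balance minus house value, but only worth exercising once
    negative equity exceeds the borrower's cost of defaulting."""
    default_cost = default_cost_pct * house_value
    if balance - house_value > default_cost:
        return balance - house_value  # ignoring foreclosure recovery costs
    return 0.0

balance = 200_000
for house_value in (220_000, 200_000, 180_000, 150_000):
    owner = default_put_payoff(balance, house_value, default_cost_pct=0.20)
    investor = default_put_payoff(balance, house_value, default_cost_pct=0.05)
    print(f"house value {house_value:,}: owner-occupant loss {owner:9,.0f}, "
          f"investor loss {investor:9,.0f}")
```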

Interesting External Papers

Origin of US Treasury Bill Market

OK, this is way off topic. I admit that freely. But I really enjoyed the paper by Kenneth D. Garbade recently published by the New York Fed: Why the US Treasury Began Auctioning Treasury Bills in 1929:

The U.S. Treasury began auctioning Treasury bills in 1929 to correct several flaws in the post-war structure of Treasury financing operations. The flaws included underpricing securities sold in fixed-price subscription offerings, infrequent financings that necessitated borrowing in advance of need, and payment with deposit credits that gave banks an added incentive to oversubscribe to new issues and contributed to the appearance of weak post-offering secondary markets for new issues.

All three flaws could have been addressed without introducing a new class of securities. For example, the Treasury could have begun auctioning certificates of indebtedness (instead of bills), it could have begun offering certificates between quarterly tax dates, and it could have begun selling certificates for immediately available funds. However, by introducing a new class of securities, the Treasury was able to address the defects in the existing primary market structure even as it continued to maintain that structure. If auction sales, tactical issuance, and settlement in immediately available funds proved successful, the new procedure could be expanded to notes and bonds. If subsequent experience revealed an unanticipated flaw in the new procedure, however, the Treasury was free to return to exclusive reliance on regularly scheduled fixed-price subscription offerings and payment by credit to War Loan accounts. The introduction of Treasury bills in 1929 gave the Treasury an exit strategy—as well as a way forward—in the development of the primary market for Treasury securities.

I also found the following to be amusing:

Bidding on a price basis insulated the Treasury from specifying how bids in terms of interest rates would be converted to prices. Market participants used a variety of conventions. For example, the price of a bill with n days to maturity quoted at a discount rate of D is P = 100 – (n/360)×D. The price of the same bill quoted at a money market yield of R is P = 100/[1+.01×(n/360)×R]. In the case of a ninety-day bill quoted at 4.50 percent, P = 98.875 if the quoted rate is a discount rate, that is, if D = 4.50 percent, and P = 98.888 if the quoted rate is a money market yield, that is, if R = 4.50 percent.
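
The arithmetic in that passage is easy to reproduce; here’s a quick sketch:

```python
# Verify the two quoting conventions from the passage above for a 90-day bill
# quoted at 4.50%.
def price_from_discount_rate(d, n):
    """Bank-discount convention: P = 100 - (n/360) * D."""
    return 100 - (n / 360) * d

def price_from_money_market_yield(r, n):
    """Money-market-yield convention: P = 100 / (1 + 0.01 * (n/360) * R)."""
    return 100 / (1 + 0.01 * (n / 360) * r)

n, rate = 90, 4.50
print(f"discount rate {rate}%:      P = {price_from_discount_rate(rate, n):.3f}")       # 98.875
print(f"money market yield {rate}%: P = {price_from_money_market_yield(rate, n):.3f}")  # 98.888
```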

Some things never change!

Interesting External Papers

IIAC 2Q08 Issuance Report

The IIAC has released its Equity New Issues and Trading 2Q08 Report:

Preferred share issuance continued to rise with $2.3 billion in capital raised — up 59% from Q1 and 49.2% from a year ago (Chart 4). For the second straight quarter we saw increased issuance from financial institutions in measures to beef up their balance sheets in the wake of a series of write downs in the sector.

Hat Tip: Streetwise Blog.

Interesting External Papers

FDIC Releases 2Q08 Report

The full report is available on their website … which, by the way, highlights the amusingly exasperated notice:

The FDIC creates reports on problem or troubled banks in the aggregate. We do not make the details of this list publicly available. The FDIC does not, at any time, comment on open financial institutions.

The “problem list” is highlighted in stories on Bloomberg and Dealbreaker. So go there for that story – it’s interesting enough, but there are other interesting things.

The FDIC highlights a steep decline in net income:

Insured commercial banks and savings institutions reported net income of $5.0 billion for the second quarter of 2008. This is the second-lowest quarterly total since 1991 and is $31.8 billion (86.5 percent) less than the industry earned in the second quarter of 2007. Higher loan-loss provisions were the most significant factor in the earnings decline. Loss provisions totaled $50.2 billion, more than four times the $11.4 billion quarterly total of a year ago. Second-quarter provisions absorbed almost one-third (31.9 percent) of the industry’s net operating revenue (net interest income plus total noninterest income), the highest proportion since the third quarter of 1989.

Almost 18 percent of all insured institutions were unprofitable in the second quarter, compared to only 9.8 percent in the second quarter of 2007.

Noninterest income of $60.8 billion was $7.4 billion (10.9 percent) lower than in the second quarter of 2007. The decline in noninterest income was attributable to lower trading income (down $5.5 billion, or 88.6 percent); smaller gains from sales of loans, foreclosed properties, and other assets (down $1.7 billion, or 41.2 percent); and lower income from securitization activities (down $1.5 billion, or 28.3 percent). In addition to the decline in noninterest income, securities sales yielded $2.3 billion in net losses in the second quarter, compared to $573 million in net gains a year earlier. Expenses for goodwill and other intangibles totaled $4.5 billion, more than double the $2.1 billion incurred by the industry in the second quarter of 2007. Net interest income was one of the few bright spots in industry revenues, rising by $8.2 billion (9.3 percent) over year-earlier levels. Servicing fee income increased by $1.9 billion (35.9 percent). Service charges on deposit accounts increased by $853 million (8.6 percent) at insured commercial banks and state-chartered savings banks.

The average net interest margin (NIM) improved slightly compared to the first quarter, from 3.33 percent to 3.37 percent.

Net charge-offs of loans and leases totaled $26.4 billion in the second quarter, almost triple the $8.9 billion that was charged off in the second quarter of 2007. The annualized net charge-off rate in the second quarter was 1.32 percent, compared to 0.49 percent a year earlier. This is the highest quarterly charge-off rate for the industry since the fourth quarter of 1991.

For the third consecutive quarter, insured institutions added almost twice as much in loan-loss provisions to their reserves for losses as they charged-off for bad loans. Provisions exceeded charge-offs by $23.8 billion in the second quarter, and industry reserves rose by $23.1 billion (19.1 percent). The industry’s ratio of reserves to total loans and leases increased from 1.52 percent to 1.80 percent, its highest level since the middle of 1996. However, for the ninth consecutive quarter, increases in noncurrent loans surpassed growth in reserves, and the industry’s “coverage ratio” fell very slightly, from 88.9 cents in reserves for every $1.00 in noncurrent loans, to 88.5 cents, a 15-year low for the ratio.

The industry added $10.6 billion to its total regulatory capital in the second quarter, the smallest quarterly increase since the fourth quarter of 2003. A majority of institutions (60.0 percent) reported declines in their total risk-based capital ratios during the quarter. More than half (50.9 percent) of the 4,056 institutions that paid dividends in the second quarter of 2007 reported smaller dividend payments in the second quarter of 2008, including 673 institutions that paid no quarterly dividend. Dividend payments in the second quarter totaled $17.7 billion, less than half the $40.9 billion insured institutions paid a year earlier.

I also found it interesting that this highly touted ‘117 institutions on the problem list’ represents an increase of exactly one from the 2003 figure … though, to be fair, assets at 2003’s problem banks were only $30-billion, compared to $78-billion now.

Interesting External Papers

BoC External Review

The Globe and Mail reported on an external review of the Bank of Canada’s research quality; this report has been published by the Bank on its website. Both the report itself and the BoC response are available.

The Globe and Mail’s emphasis on “micromanagement” is not borne out by the actual report. The term is used in the section “Strategic Principles for Promoting Research”:

Although the operational work of the Bank must be planned and directed by policymakers and managers (with an essential role for strategic plans and short-term deadlines), longer-term research should generally be initiated and self-directed by individual economists. This approach to research is essential for fostering higher-quality research output, because substantial creativity and flexibility are needed to ensure that the focus and direction of the research project can be adjusted in response to preliminary findings, unanticipated obstacles, new methodological developments, and results from other academic and central bank researchers working on related topics. Furthermore, by giving researchers sufficient freedom to pursue longer-term research projects, the Bank will encourage and will attract and retain new economists with strong analytical abilities in both research and policy analysis. We see no alternative to this approach for achieving the Bank’s “second-to-none” objective.

While avoiding “micro-management” of longer-term research, the senior staff does have a crucial role in promoting high-quality research that informs the Bank’s short-term analysis and policy decisions. The Bank should establish and reinforce the incentive system—in terms of financial rewards and promotion opportunities—for conducting high-quality research on policy-relevant issues. In addition, senior staff should identify broad topics and policy questions on which the state of knowledge is currently insufficient but could be significantly expanded by long-term research that stretches over a few years, which is the relevant horizon over which the research is likely to be successful in addressing such issues. Finally, the senior staff should incorporate these considerations in initiating and managing medium-term projects; as noted above, such projects are not directly aimed at producing publishable research output, but in many cases, the economists working on a given medium-term project will end up pursuing new longer-term research that contributes to the broader state of knowledge on that topic.

… which isn’t anything more than a statement of good management practice, particularly in an intellectual environment.

Some justification for the label is found in the “Summary of Recommendations”:

Other initiatives call for increasing the efficiency with which existing resources are deployed. Principal among this set of recommendations is that researchers should be given more freedom to select their own topics and manage their own research agendas. Today, research is managed to an important degree to meet the objectives set out in the medium-term plan for policy relevant research. This policy not only interferes with the staff’s ability to produce publishable research, but is also not the best approach for generating analysis to inform policy decisions.

Thus, the Globe’s statement that researchers are stifled by micromanagement is clearly unsupported by the actual report. Currently, research at the Bank is guided with a focus on policy concerns; to achieve the Bank’s objective of being “second to none” in research, they will have to:

  • Hire top people
  • Pay them well
  • Tell them to think smart thoughts and write them down

I see no indictment of Bank management in the actual report. To the extent that there is direct criticism of the Bank, it is regarding pay scales:

The Committee perceived a substantial inconsistency between the “second-to-none” objective and the Bank’s current pay structure, which clearly hampers the Bank’s ability to recruit and retain highly-talented economists.

In recent years, the Bank’s salary for entry-level economists has been aligned with the median pay for second-tier Canadian universities—a level which is substantially below that of top-tier Canadian universities and even further below that of comparable positions at U.S. universities or international institutions such as the IMF. This salary structure seems like a clear recipe for mediocrity rather than excellence. Indeed, in its interviews with some recently-hired staff economists, the Committee heard several comments like “I didn’t receive any other job offers, so I accepted the position at the Bank of Canada.”

I, for one, feel that the BoC should be a centre for excellence in the economic field, perhaps in the same manner as the Perimeter Institute is for Physics, with cross-appointments to the purely academic community.