Archive for the ‘Interesting External Papers’ Category

TRACE and the Bond Market

Saturday, October 18th, 2014

Paul Asquith, Thomas R. Covert and Parag Pathak have written a paper titled The Effects of Mandatory Transparency in Financial Market Design: Evidence from the Corporate Bond Market:

Many financial markets have recently become subject to new regulations requiring transparency. This paper studies how mandatory transparency affects trading in the corporate bond market. In July 2002, TRACE began requiring the public dissemination of post-trade price and volume information for corporate bonds. Dissemination took place in Phases, with actively traded, investment grade bonds becoming transparent before thinly traded, high-yield bonds. Using new data and a differences-in-differences research design, we find that transparency causes a significant decrease in price dispersion for all bonds and a significant decrease in trading activity for some categories of bonds. The largest decrease in daily price standard deviation, 24.7%, and the largest decrease in trading activity, 41.3%, occurs for bonds in the final Phase, which consisted primarily of high-yield bonds. These results indicate that mandated transparency may help some investors and dealers through a decline in price dispersion, while harming others through a reduction in trading activity.

Proponents of TRACE argue that transparency makes the corporate bond market accessible to retail clients, enhances market integrity and stability, and provides regulators greater ability to monitor the market. They reason that with the introduction of transparency, price discovery and the bargaining power of previously uninformed participants should improve (NASD 2005). This in turn should be reflected in a decrease in bond price dispersion and, if more stable prices attract additional participants, an increase in trading activity (Levitt 1999).

Opponents of TRACE object to mandatory transparency, saying that it is unnecessary and potentially harmful. They argue that “transparency would add little or no value” to highly liquid and investment grade bonds since these issues often trade based on widely known US Treasury benchmarks (NASD 2006). They further argue that if additional information about trades were indeed valuable, then third‐party participants would already collect and provide it, a view that dates back to Stigler (1963). Opponents also forecast adverse consequences for investors since, if price transparency reduces dealer margins, dealers would be less willing to commit capital to hold certain securities in inventory, making it more difficult to trade in these securities. The Bond Market Association argued that the adverse effects of transparency may be exacerbated for lower‐rated and less frequently traded bonds (Mullen 2004). Lastly, opponents saw TRACE as imposing heavy compliance costs, particularly for small firms who do not self‐clear (Jamieson 2006). Thus, opponents argue that market transparency reduces overall trading activity and the depth of the market. Not surprisingly, similar arguments for and against transparency have resurfaced in response to the recent introduction of Dodd‐Frank’s post‐trade transparency requirements for swaps (Economist 2011).

With all respect to the various debaters, and while recognizing that the above is an extremely quick summary of their thoughts, I have to say that all the quoted arguments miss the mark. The fundamental question is: what is the corporate bond market for? I claim that the purpose of the corporate bond market is to allow issuers to access capital at as little cost as possible; therefore, all regulation related to the bond market should be first examined through the lens of ‘what will this do to new issue spreads?’. While this is not the only thing to be addressed, it is the most important thing and it is something I rarely see addressed.

It was addressed, however, in a 2012 comment letter to FINRA from SIFMA:

Issuers face the ultimate risk from decreases to market liquidity since the public dissemination of trade information, as a general matter, makes broker-dealers less willing to take risk on large size trades. A reduction in liquidity will cause institutional investors to demand greater yield from issuers (to compensate for the reduced liquidity), or to simply refuse to buy new issues in meaningful size. Therefore, a careful balance between transparency and the preservation of liquidity must be struck.

Data shows that dealers have recently chosen to (or been forced to, in the case of rules like the Volcker Rule) put capital to work elsewhere. This means that institutional investors will face greater difficulty selling a larger sized amount of an issue. Pre-TRACE, and pre-financial crisis, dealers provided a much larger outlet where they would take the risk temporarily while they worked to uncover a buyer. This outlet has been much reduced in recent years, due to a combination of regulation and other market structure issues. The real liquidity differential for larger vs. smaller “on the run” amounts has been meaningfully amplified, and eliminating caps on disseminated volumes would exacerbate this problem.

At a much more specific level, it is more difficult to issue securities in smaller sizes when participants’ transactions are immediately made public and expose exact amounts taken down by particular investors. An increase in the dissemination caps will increase the threshold where these securities issuances are somewhat more challenging, and disproportionately harm smaller issuers. In each case, the macro and the granular, the result is a higher cost of capital for issuers.

Letting that issue slide for a moment and returning to Asquith, Covert and Pathak:

FINRA implemented TRACE in Phases because of concerns about the possible negative impact of transparency on thinly traded, small issue and low‐credit rated bonds. Examining issue size across all Phases, we find that trading activity decreases more for large issue size bonds, but that the reduction in price dispersion is uncorrelated with issue size. Credit ratings, however, matter for both trading activity and price dispersion. High‐yield bonds experience a large and significant reduction in trading activity, while the results are mixed for investment grade bonds. High‐yield bonds also experience the largest decrease in price dispersion, but price dispersion significantly falls across all credit qualities. Therefore, the introduction of transparency in the corporate bond market has heterogeneous effects across sizes and rating classes.

Price dispersion also decreases due to TRACE. This decrease is significant across bonds that change dissemination in Phases 2, 3A, and 3B, but is largest, 24.7%, for Phase 3B bonds. This finding is also robust across different measures of price dispersion and alternative regression specifications. Moreover, event studies show that the fall in price dispersion occurs immediately after the start of dissemination. It is important to note, if the transparency introduced in Phase 1 affects bonds that become transparent in subsequent Phases, our estimates are probably lower bounds on TRACE’s overall impact.

There are several welfare implications of increased transparency in the corporate bond market. One consequence is that it may change the relative bargaining positions of investors and dealers, allowing investors to obtain fairer prices at the expense of dealers. The reduction in price dispersion should allow investors and dealers to base their capital allocation and inventory holding decisions on more stable prices. Therefore, the reduction of price dispersion likely benefits customers and possibly, but not necessarily, dealers.

The implications of a reduction in trading activity are not as clear. Whether a reduction in trading activity is desirable depends on why market participants trade. A decrease in trading activity may be beneficial if much of the trading in a bond is unnecessary “noise” trading. On the other hand, if most trading is information‐based, a decrease in trading activity may slow down how quickly prices reflect new information. In addition, if the decrease in trading activity is the result of dealers’ unwillingness to hold inventory, transparency will have caused a reduction in the range of investing opportunities. That is, even if a decline in price dispersion reflects a decrease in transaction costs, the concomitant decrease in trading activity could reflect an increased cost of transacting due to the inability to complete trades.

Our results on the corporate bond market have two major implications for the current and planned expansions of mandated market transparency. The implicit assumption underlying the proposed TRACE extensions and the use of TRACE as a template for regulations such as Dodd‐Frank is that transparency is universally beneficial. First, it is not clear that transparency for all instruments is necessarily beneficial. Overall, trading in the corporate bond market is large and active, although, as seen, not comparable across all types of bonds. Many over‐the‐counter securities are similar to the bonds FINRA placed in Phase 3B. That is, they are infrequently traded, subject to dealer inventory availability, and trading in these securities is motivated by idiosyncratic, firm‐specific information. Therefore, the expansion of TRACE‐inspired regulations, such as those for 144a bonds, asset‐ and mortgage‐backed securities, and the swap market, may have adverse consequences on trading activity and may not, on net, be beneficial.

Second, our results indicate that transparency affects different segments of the same market in different ways. As a consequence, our results provide empirical support for the view that not every segment of each security market should be subject to the same degree of mandated transparency. Both academic commentators (French et al., (2010), Acharya et al. (2009)) and leading industry associations (e.g., Financial Services Forum, et al., (2011)) have articulated this position. Despite these recommendations, the expansion of transparency by the Commodity Futures Trading Commission (CFTC) in various swap markets, i.e. interest rate, credit index, equity, foreign exchange and commodities, in December 2012 and February 2013 was immediate for all swaps in those markets. This stands in sharp contrast to FINRA’s cautious implementation of TRACE in Phases. The fact that the effect of transparency varies significantly across categories of bonds within the corporate bond market suggests that additional research will be required to evaluate the tradeoffs associated with universal transparency in other over‐the‐counter securities.

There is one assertion in the above with which I take particular issue: One consequence is that it may change the relative bargaining positions of investors and dealers, allowing investors to obtain fairer prices at the expense of dealers. Long-time Assiduous Readers will probably be snickering to themselves, having determined that I am probably going to complain about the use of the word “fairer”, since I don’t know what “fair” means, and they’re quite right.

By “fair”, I assume the authors mean “at a price closer to the dealers’ cost than otherwise”, but that is not necessarily “fair” when examined in a broader context.

Suppose, for instance, that you are a bond dealer – horns, pitchfork, cloven hooves and all – and somebody asks you to bid on something. OK, so you do – but why do you? The answer, of course, is to make a profit and as a rational economic actor you seek to maximize your profit. But you’re not seeking to maximize your profit on every possible transaction or even to maximize your gross profit; you’re seeking to maximize the annual profit of your desk expressed as a fraction of your capital. This has a number of implications; for instance, you might give regular customers who deal exclusively with you slightly better prices than the other ones, simply to ensure that these guys never have any reason to consider going anywhere else.

But the most important consideration for purposes of this discussion is the question of maximizing profitability as a fraction of capital. That’s what determines the firm’s capital allocation and that’s what determines your bonus. And for a single given transaction, we can write the following equation:

Desirability = (Sell – Buy) / (Capital * Days)

Where the gross profitability is the Sell price less the Buy price, Capital is the amount of capital used when financing the position and Days is the number of days you have to hold the thing in inventory until it’s sold (or bought, if the position was initiated with a short sale). In this equation I am ignoring the Carry (the difference between the yield of the bond and the cost of financing it); I’m also ignoring default risk and lots of other considerations, with the objective of keeping this simple.
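To make the trade-off concrete, here is a minimal sketch of the calculation (Python; all numbers are invented for illustration):

```python
def desirability(sell_price, buy_price, capital, days):
    """Gross profit per unit of capital per day the position is held.

    Carry, default risk and financing costs are ignored, as in the text.
    """
    return (sell_price - buy_price) / (capital * days)

# A hypothetical $0.50-per-bond markup on a position financed with $5 of capital:
print(f"{desirability(100.00, 99.50, capital=5.0, days=1):.4f}")   # 0.1000
print(f"{desirability(100.00, 99.50, capital=5.0, days=30):.4f}")  # 0.0033
```

Stretching the holding period from one day to a month divides the desirability by thirty, which is the point developed below.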

Under the pre-TRACE regime, one way to maximize trade desirability was, obviously, to maximize the difference between your Sell and Buy prices, but TRACE makes that a lot more difficult; after all, that’s the whole point of TRACE and Asquith, Covert and Pathak have made a solid argument that it is not happening to the same extent under TRACE as it was in the good old days. So for practical purposes, when the dealer is putting a price on taking a position, he is doing so with the knowledge that gross profit is capped.

The “Capital” term in the simplified equation is set by regulation and the bond desk has no control over it. As far as they’re concerned, it’s a constant.

Therefore, in order to increase the Desirability of the trade, the only avenue left open to the dealer is Days, which is inversely related. If they can make their $0.50 per bond profit in one day, that’s a whole lot better than if it takes a month! Therefore, when taking a position, they will concentrate their energies on how they will flatten their position. This will, of course, be much easier if they offer their position to a potential buyer at an attractive price. Therefore, I claim, TRACE will lead to the initial seller getting a really lousy price for his bond, which is turned over in short order to the ultimate buyer who gets a really good price.

There is evidence for this in the secondary GIC market, which has to be one of the most ridiculously infinitesimal markets in the world, but which exists at the major dealers not so much as a money-maker, but as a service to clients. Some GICs are transferable and the dealer will buy them from the owner at a really, really lousy price – I think the bid yield is about 150bp over the market rate, but I confess I’m not too sure of that. I have a major dealer’s offering sheet from 2012 on hand, which is headed by the statement: “ALL Secondary GICS offered at approximately +50bps over today’s Best GIC Rates (on the [Redacted] System)”

This is a great deal for buyers, and I have often recommended to clients that they open accounts at a major dealer for the purpose of access to new issues and access to secondary GICs. And why am I saying this? Because I think the buyer will get a “fair” price, just like teacher talked about in kindergarten? Hell, no! It’s because I think the buyer will get a really good price, courtesy of the really, really shitty price that was offered to the poor sucker who needed to cash his GIC early.

Now this example comes from a market that barely exists, but I claim that it shows in sharp relief the problem with TRACE – which is that it encourages prices that are not “fair”, but prices that really stick it to the liquidity seeker in order to reward both the interim and the ultimate liquidity supplier.

And I will claim that this cannot be considered a Good Thing. This is an increase in the cost of liquidity, which leads to a decline in liquidity, which leads to an increase in the liquidity premium demanded for holding a position, which leads to higher coupons required from the issuer at issue time. And I claim that this is a Bad Thing because the purpose of the corporate bond market is to allow issuers to source cheap capital.

Note that none of these assertions has been tested, but for now we’ll call it the Shitty Price Hypothesis. It has the advantage of actually providing a causal mechanism for the reduction of trading experienced under TRACE: say you’re a portfolio manager and there’s a wave of redemptions. You have to raise cash. In the old days, you could utilize the opportunity to rebalance and improve your portfolio slightly. Got too much junk in the portfolio? Fine, raise the cash by selling it. But if all you see is stink-bids, you’re almost forced to move up the credit quality ladder and sell something more liquid. Thus, TRACE has made it more difficult for you to do your job.

To be fair, the authors make what might be an indirect allusion to this at the end of their Section 6:

In addition, the bond market is a dealer market, so dealer inventory will affect trading levels and the potential impacts of TRACE. Dealers only hold inventory in those bonds with sufficient trading activity to cover their carry cost. Thinly traded bonds may require dealers to have higher spreads to cover their holding costs. Since TRACE reduces price dispersion significantly, the benefit of holding bonds in inventory decreases. TRACE reduces price dispersion the most for high‐yield bonds, so the incentive to reduce inventory is strongest for those bonds. Thus, lower trading activity in high‐yield bonds post‐TRACE may be the result of a supply‐side response of dealers.

Another paper I found while updating myself on academic commentary about TRACE was by Rainer Jankowitsch, Amrut J. Nashikkar and Marti G. Subrahmanyam, titled Price Dispersion in OTC Markets: A New Measure of Liquidity:

In this paper, we model price dispersion effects in over-the-counter (OTC) markets to show that in the presence of inventory risk for dealers and search costs for investors, traded prices may deviate from the expected market valuation of an asset. We interpret this deviation as a liquidity effect and develop a new liquidity measure quantifying the price dispersion in the context of the US corporate bond market. This market offers a unique opportunity to study liquidity effects since, from October 2004 onwards, all OTC transactions in this market have to be reported to a common database known as the Trade Reporting and Compliance Engine (TRACE). Furthermore, market-wide average price quotes are available from Markit Group Limited, a financial information provider. Thus, it is possible, for the first time, to directly observe deviations between transaction prices and the expected market valuation of securities. We quantify and analyze our new liquidity measure for this market and find significant price dispersion effects that cannot be simply captured by bid-ask spreads. We show that our new measure is indeed related to liquidity by regressing it on commonly-used liquidity proxies and find a strong relation between our proposed liquidity measure and bond characteristics, as well as trading activity variables. Furthermore, we evaluate the reliability of end-of-day marks that traders use to value their positions. Our evidence suggests that the price deviations are significantly larger and more volatile than previously assumed. Overall, the results presented here improve our understanding of the drivers of liquidity and are important for many applications in OTC markets, in general.

Using a volume-weighted hit-rate analysis, we find that only 51.12% of the TRACE prices and 58.59% of the Markit quotations lie within the bid and ask range quoted on Bloomberg. These numbers are far smaller than previously assumed. Since these marks are widely used in the financial services industry, our findings may be of interest to financial institutions and their regulators.
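The volume-weighted hit-rate they describe is simple to compute; here is a minimal sketch, assuming each trade record carries a price, a volume and the prevailing Bloomberg bid and ask (the field layout is my invention, not the authors'):

```python
def volume_weighted_hit_rate(trades):
    """Fraction of traded volume whose price lies within the quoted bid/ask.

    `trades` is an iterable of (price, volume, bid, ask) tuples; a
    hypothetical stand-in for the TRACE/Bloomberg data used in the paper.
    """
    hit_volume = sum(v for p, v, bid, ask in trades if bid <= p <= ask)
    total_volume = sum(v for _, v, _, _ in trades)
    return hit_volume / total_volume

trades = [
    (99.80, 1_000_000, 99.75, 100.00),  # inside the quote: counts as a hit
    (99.50, 2_000_000, 99.75, 100.00),  # below the bid: does not
]
print(f"{volume_weighted_hit_rate(trades):.2%}")  # 33.33%
```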

The evidence that so many actual prices are outside the pre-trade quote is supportive of the Shitty Price Hypothesis, but more detail is needed!

And now 144a (exempt) bonds are being TRACEd:

Corporate-bond brokers may face a squeeze on profits as regulators start publishing prices for almost $1 trillion of privately sold debt, if the past is any guide.

The Financial Industry Regulatory Authority, seeking to “foster more competitive pricing,” plans to start disseminating trading levels for securities issued under a rule known as 144a on its 11-year-old Trace system within the next year. That means the notes, sold only to institutional investors, will face the same price transparency as publicly registered corporate bonds for which buyers demand half a percentage point less in yield spreads. Brokers typically are paid larger fees from higher-yielding debt.

Firms from Knight Capital Group Inc. to Gleacher & Co. and Pierpont Securities LLC sold or shuttered credit units this year as corporate-bond trading volumes fell to the lowest proportion of the market on record and smaller price swings shrink potential profit margins.

Stamford, Connecticut-based Pierpont, one of the dealers started after the 2008 collapse of Lehman Brothers Holdings Inc. decided to exit the high-yield bond and loan business this month. New York-based Gleacher said in April that it was exiting fixed-income trading and sales. Knight in Jersey City, New Jersey, sold its credit-brokerage unit to Stifel Financial Corp., according to a July 1 statement.

Jefferies Group LLC, the investment bank owned by Leucadia National Corp., said profit plunged 83 percent in the three months ended Aug. 31 as trading revenue fell to the lowest since the depths of the financial crisis.

Julie Dickson Provides Footnotes

Thursday, April 3rd, 2014

OSFI has published Remarks by Superintendent Julie Dickson to the 58th Annual Canadian Reinsurance Conference, Toronto, Ontario, April 2, 2014:

We were recently subject to an IMF financial sector assessment (i.e. FSAP). The FSAP process determines a country’s compliance with international standards and provides an overview of the economic health of a country as well as regulatory effectiveness. The assessors were asked to look for any signs of complacency at OSFI. I am happy to say that they did not find any. They have concluded that we continue to be effective with a high level of compliance with international standards. [Footnoted link]

OSFI tends to strongly support global standards and practices, versus entering into endless debates about whether we need them.

I was very happy to see the December 2013 report of the U.S. Department of Treasury, on modernizing insurance regulation. It did not mince words in advocating group capital adequacy and consolidated supervision in the U.S., something that has been debated for a long, long time. [Footnoted link]

The IAIS [International Association of Insurance Supervisors] is developing backstop capital requirements, as well as higher loss absorbency requirements for designated Global Systemically Important Insurers (G-SIIs), as well as global Insurance Capital Standards for all internationally active insurance groups (or IAIGs).

Harmonization of insurance capital requirements globally is an important development. It is premature to determine the impact these will have on Canadian capital requirements. Therefore, as expected, OSFI intends to continue development of our internal set of rules, given the international global standard might initially act as only a minimum requirement until sufficient time has been allowed to develop and test a robust enough capital test.

The first footnoted link is to the IMF’s Canada page, which is headed by a link to CANADA – FINANCIAL SECTOR ASSESSMENT PROGRAM – CRISIS MANAGEMENT AND BANK RESOLUTION – FRAMEWORK—TECHNICAL NOTE:

The ex-ante funding of CDIC should continue to be bolstered. To achieve the targeted 100 basis points coverage of the insured deposits (from the current 39 basis points), an increase of the premiums paid by financial institutions will be necessary. Enhanced data collection on depositors would ensure that the coverage limit and the target ex-ante financing strike the right balance between depositor protection, financial stability, and market discipline. The proposed simplification of the rules for eligibility for deposit insurance of complex deposit products is welcome.

The Canadian financial system is large, relatively complex, and concentrated. The financial system accounts for almost 500 percent of the GDP and is composed of a large spectrum of federally and provincially regulated institutions. The six domestic systemically important banks (D-SIBs) and one large provincially incorporated credit cooperative network hold almost half of the financial sector assets. The financial system was exceptionally resilient during the global financial crisis and no financial institution had to be closed or rescued.

A number of legal provisions create room for “constructive tensions” between the OSFI and CDIC, at which point their actions should be closely coordinated. For example, CDIC can terminate deposit insurance, even if the institution is still solvent (Section 30 of the CDIC Act). In the past, the CDIC has terminated deposit insurance as an enforcement action against two solvent members. Such powers were used by the CDIC when it was responsible for the administration of the Standards of Sound Financial and Business Practices. The CDIC has to be concerned about minimizing the exposure of the insurance fund to loss from failing institutions. This could create an incentive to resolve an institution sooner rather than later—for instance, to lean against perceived regulatory forbearance—but may conflict with supervisory interests. Such risks call for close bilateral coordination between the OSFI and CDIC, as well as through FISC cooperation.

The authorities should consider introducing some form of depositor preference. Depositor preference not only mitigates the risk of depositor runs, it can also improve recoveries for depositors, the deposit insurance agencies, and the government in the case of a bank’s failure. In the context of the proposal to introduce bail-in powers, the introduction of depositor preference is all the more important as unsecured creditors will need to be written-down or have their debts converted into equity. If depositors are ranked equally with unsecured creditors, a bail-in cannot be effected without discriminating within the class of creditors. Depositor preference could be tailored to take different forms (although national depositor preference should be avoided as it could hamper cross-border resolution), based on a rigorous analysis of the desired impact and interaction with other features of the existing bank operating and resolution framework (Appendix II).

The CDIC is ex-ante funded and reviews its target funding level regularly. CDIC currently has funding of Can$2.6 billion representing an estimated 39 basis points of insured deposits. The existing resources are sufficient to repay insured deposits in all small banks individually, or concurrently in a number of small banks, but would not be sufficient to cover insured deposits in a medium-sized institution. The relatively low level of ex-ante coverage reflects a long period of time in which the corporation had to recover from substantial losses incurred in the mid eighties and early nineties. The CDIC plans to achieve a minimum target ex ante funding of 100 basis points of insured deposits (currently equivalent to Can$6.5 billion), over the coming ten years.

In addition, CDIC can terminate deposit insurance (as per Section 30 Report). The basis for termination can be evidence of unsound standards of prudent business financial practices (e.g. unsound capital management). The issuance of a Section 30 report is typically preceded by the conduct of a special examination, following which the institution has to rectify the situation. A copy of the Section 30 report shall be provided to the MOF (or provincial Minister if provincial member) and indicates that a failure to remedy the situation could lead to the termination of the deposit insurance policy. The MOF has the power to override such decision based on public interest grounds.

The termination of deposit insurance triggers the taking control of the supervised institution by OSFI. The existing eligible deposits would continue to be insured for two years from the termination date (or for term deposits with a longer term, until the maturity date of the term deposit). Alternatively, CDIC has the discretionary authority to make an immediate deposit insurance payment for all eligible deposits. In its history, CDIC has terminated the deposit insurance policy of three member institutions through the Section 30 process and has immediately reimbursed deposits in all the three cases.

The NVCC is a gone-concern contingent instrument. The NVCC aims to ensure that investors in non-common Tier 1 and Tier 2 regulatory capital instruments bear losses before taxpayers where the government determines it is in the public interest to rescue a non-viable bank, based on clearly specified trigger events. The NVCC triggers are very late and very remote and the Canadian authorities confirm that they would only elect to trigger the NVCC where there is a high level of confidence that the conversion accompanied by additional measures (i.e. liquidity assistance provided by BOC, liquidity assistance provided by CDIC, change in management, change in business plan, public or private capital injection) would restore the viability of the failed financial institution. The NVCC instruments are not contingent convertible instruments (Co-Cos), the key distinction being the timing and nature of the NVCC triggers, which can be exercised only at the discretion of the authorities at the point of non-viability.

The NVCC is just an option in the resolution toolkit. The decision to maintain an institution as a going concern where it would otherwise become non-viable will be informed by OSFI’s interaction with the FISC and on the CDIC Board of Directors. However, the Canadian authorities will retain full discretion to choose not to trigger NVCC notwithstanding a determination by the Superintendent that an institution ceased, or is about to cease, to be viable. Therefore, other resolution options, including the creation of a bridge bank, could be used to resolve a failing institution either as an alternative to NVCC or in conjunction with or following an NVCC conversion, and could also subject capital providers to loss.

Up to the date when the FSAP was conducted, none of the major banks had issued a de novo NVCC instrument, although the first issuance was expected soon. CIBC did, however, amend via a deed poll the terms of three series of its preferred shares to make them NVCC-compliant. A number of smaller, closely-held banks have issued NVCC or modified instruments to make them NVCC-compliant. For these banks, OSFI has permitted alternatives to the market-based conversion required under the CAR Guideline to accommodate the unlisted nature of their common shares or intercompany issuances where all of the capital has been issued to the parent or affiliates. Under the CAR Guideline, each instrument must have a formula governing the conversion mechanism that references the market value of equity when OSFI determines the institution is no longer viable. OSFI expects good demand from institutional fixed income and other investors for NVCC.

Furthermore, the bail-in regime needs to be consistent with other financial stability objectives. Several long-term aspects will need to be carefully taken into consideration when introducing the new regime. The introduction of bail-in could increase the funding costs for unsecured debt, which may, in turn, trigger shifts in banks’ liability structure towards other forms of funding (i.e., secured) which are outside the scope of the bail-in regime. Such arbitrage incentives would be countered, however, by other regulatory measures including the Basel III Net Stable Funding Ratio, which will incentivize banks to hold higher levels of stable, long-term funding; and asset encumbrance limits that restrain banks’ reliance on secured debt funding. It would also be useful to consider requiring the D-SIBs to hold a minimum amount of capital instruments and senior, unsecured debt in conjunction with the bail-in regime to ensure a minimum amount of gone-concern loss-absorption capacity. Last, when deploying bail-in, authorities should be mindful of cross-sector contagion in crisis times, as for example insurance companies are major investors in banks’ debt instruments.

[Chart: FinancialSectorStructure (Click for Big)]

The second footnote of Dickson’s speech references the Reports & Notices page of the US Treasury; the first link there is to How to Modernize and Improve the System of Insurance Regulation in the United States:

By drawing attention to the supervision of diversified complex financial institutions such as American International Group, Inc. (AIG), the financial crisis added another dimension to the debate on regulating the insurance industry. The crisis demonstrated that insurers, many of which are large, complex, and global in reach, are integrated into the broader U.S. financial system and that insurers operating within a group may engage in practices that can cause or transmit severe distress to and through the financial system. AIG’s near-collapse revealed that, despite having several functional regulators, a single regulator did not exercise the responsibility for understanding and supervising the enterprise as a whole. The damage to the broader economy and to the financial system caused by the financial crisis underscored the need to supervise firms on a consolidated basis, to improve safety and soundness standards so as to make firms less susceptible to financial shocks, and to better understand and regulate interconnections between financial companies.

If an insurer is to receive credit against a capital or reserve requirement because of risk transferred to an insurance captive, the rules governing the quality and quantum of assets offered in support of the captive should be uniform across states and sufficiently robust and transparent in order to prevent arbitrage by insurers. The matter is one that must be assessed within the rubric of the capital adequacy of an insurance group as a whole. Under the current state-based capital adequacy regime, group capital assessments rely on CRA ratings or on a firm-produced ORSA to evaluate a group’s capital position and the strength of intra-group guarantees. Neither of these measures of group capital adequacy, however, is a substitute for group capital standards that are established and supervised by regulators.

How to Dissect a Housing Bubble

Thursday, March 13th, 2014

An article in the Globe and Mail brought to my attention a paper by the eponymous Will Dunning Inc. titled How to Dissect a Housing Bubble which seems highly worthy of close attention:

This report begins (in section 2.0) by looking at one of the key pieces of evidence that is brandished by those who believe a housing bubble exists in Canada: data on the ratio of house prices to rents, which has been created by the Organization for Economic Co-operation and Development (“OECD”). To be blunt, while the OECD has relied on data that it might consider the best available for the purpose, the data in reality is badly flawed and results in wildly inaccurate estimates.

The subsequent section (3.0 A Better Dataset) utilizes an alternative dataset, from the Royal LePage House Price Survey. This analysis finds that the price to rent ratio in Canada has indeed increased, although the rise in the ratio is much less than was estimated by the OECD.

Section 4.0 (Housing Affordability Indicators) takes a slightly different approach, looking at evolving mortgage costs in relation to incomes. Several organizations publish housing affordability indexes. These generally indicate that housing affordability has deteriorated in Canada, and this has become an important part of the discussion. In this author’s opinion, these indexes share a major flaw: they rely on a measure of interest rates (“posted rates”) that exists only for administrative purposes and is divorced from the interest rates that can be found in the marketplace.

Well! That’s provocative enough, but I’ll grant him more credibility than the usual purveyors of shadow-statistics. The mortgage and GIC rates posted by the Bank of Canada are nonsensical (perhaps the legacy of some plan hatched by the Feds and executed by Lapdog Carney) – a five year mortgage is listed as costing 5.24% for the last three weeks of February and 4.99% for the first two weeks of March. Give me a break. Those are the posted rates – the banks have found that by having a posted rate and applying a discount, they can issue mortgages at the discounted price, but force buy-backs at the posted price when mortgagors exit early. Nice work, if you can get it, but why is the BoC participating in this charade? They’ve been reporting 5-Year GICs at 1.63% for the past five weeks, which will surprise anybody with access to the Financial Post web page or any of the myriad web sites which are able to report financial data more accurately than Parakeet Poluz and his fellow lackeys.

Actually, it’s even worse than it looks – Dunning claims:

Posted rates exist for administrative purposes only: lenders must use them in the calculation of debt service costs for some mortgages that receive federally guaranteed mortgage insurance. In mortgage contracts, interest rates and options for future interest rates are sometimes expressed as the posted rate minus a discount. In addition, when lenders calculate the penalties that borrowers pay for repaying early, posted rates (minus a pre-determined discount) are often an input into the calculations.
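To see what that means in dollars, here is a minimal sketch of a posted-rate-style interest rate differential (IRD) penalty; the formula and all the numbers are illustrative assumptions on my part, since actual lender calculations vary:

```python
def ird_penalty(balance, contract_rate, posted_rate_remaining_term,
                original_discount, years_remaining):
    """Simplified IRD penalty computed off posted rates.

    The comparison rate is the current posted rate for the remaining term
    LESS the discount granted at origination, which is what makes the
    penalty larger than a comparison against true market rates would be.
    No compounding and no three-months'-interest floor, for simplicity.
    """
    comparison_rate = posted_rate_remaining_term - original_discount
    return balance * (contract_rate - comparison_rate) * years_remaining

# Hypothetical: $300,000 balance at 3.49% (posted 5.24% less a 1.75% discount),
# two years remaining, current posted two-year rate of 3.14%.
print(ird_penalty(300_000, 0.0349, 0.0314, 0.0175, 2))  # about $12,600
```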

Dunning charges that:

The OECD calculates the Canadian house price to rent ratio using:

  • For house prices – the Teranet/National Bank National Composite House Price Index from 1999Q2. Prior to that date, data are from the Canadian Department of Finance.
  • For rents, the rent component of the Consumer Price Index (“CPI”)


For the years prior to 1999 (during which it appears that the CREA average national resale price is used), the rates of price growth have likely been distorted upwards by changes in the quality of the housing inventory.

Looking at the rent data used in the calculation (the rent component of Statistics Canada’s Consumer Price Index), rates of rent growth are badly under-estimated.

Data from rental surveys conducted by Canada Mortgage and Housing Corporation (“CMHC”) hint at the degree to which rent increases have been under-estimated in the Statistics Canada data that has been used by the OECD. It is clear in this data that the methodology change made in 2009 did not fully cure the data quality issues, and that the CPI rent index remains highly inaccurate.

The chart to the right presents CMHC data on average rents for apartments (units with two bedrooms) in Canada. To permit comparison to the rent component of the Consumer Price Index, the author has converted both datasets to indexes that equal 100 in 1992. Over the entire period covered, the CMHC data shows a total increase of 57.4% (2.2% per year); the CPI data shows a total rise of 32.7% (1.4% per year). Even for the period subsequent to the 2009 methodology revision, the CPI data shows a significantly slower rate of rent growth (1.3% per year) compared to the CMHC data (2.4% per year).

It can be argued that the CMHC data is not “constant quality” (because of additions to the inventory through new construction as well as due to renovations) and therefore the CMHC data might be distorted compared to the CPI (which attempts to measure rent change for constant quality accommodation). However, it should be noted that there are few additions to the inventory that is covered by the CMHC rental market survey – during the time period covered in the chart most growth of rental inventory has been in rented condominiums and other housing forms that are not included in the survey. Therefore, the degree of distortion from new supply is likely to be very small.
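The annualized figures are easy to verify; a quick sketch, assuming the chart spans 1992 to roughly 2013 (21 years, my assumption):

```python
def annualized(total_growth, years):
    """Convert a total percentage increase into a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

YEARS = 21  # assumed span of the chart, 1992 to ~2013
print(f"CMHC: {annualized(0.574, YEARS):.1%}")  # 2.2%
print(f"CPI:  {annualized(0.327, YEARS):.1%}")  # 1.4%
```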

[Chart: rentGrowth (Click for Big)]

Interesting, indeed! This indicated long-term underestimation of rental growth is given credence by the OECD figures that are so fraught with interest:

[Chart: priceToRentOECD (Click for Big)]

Now, I’m reserving judgement on whether we’re in a housing bubble or not. I won’t even express a view as to whether houses are currently either rich or cheap. But I will not believe, not for one New York minute, that the Price-to-Rent ratio has quintupled since 1970. And, while I realize that Toronto is not actually equivalent to Canada (there are still a few unfortunate districts in the country), I will not believe, not for one New York minute, that the Price-to-Rent ratio has doubled since the bubble I remember of the late eighties, when one of my fellow clerks – making a clerical salary and not blessed with independent wealth – owned four houses, each of them mortgaged to hell ‘n’ gone (she got wiped out).

Of particular interest is his comparison of CMHC figures and the rent component of inflation:

[Chart: rentIncreaseMeasures (Click for Big)]

This may well give the conspiracy theorists some live ammunition about CPI underestimation, at long last, but there’s more to it than simply academic discussions of price-rent ratios – the CPI is used to increment rent-control limits. One does not need to be a rabid partisan of Mike Harris to believe that rent control destroyed traditional rental housing in Toronto; a long-term underestimate of rent increases just makes it more obvious.

That’s the meat of the report I found most interesting … I’ll just reproduce two more charts:

[Chart: affordability (Click for Big)]

[Chart: capRate (Click for Big)]

The current calculated cap rate of 4-5% looks right to me. In one building with which I am familiar, two-bedrooms sell for about $140,000, or can be rented for about $1,200 monthly – and this is roughly average for the area according to the CMHC. There’s a $600 maintenance fee, so the landlord’s cap rate is … just under 5%.
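For anyone checking my arithmetic, a quick sketch (property taxes and vacancy, which would shave the figure a little further, are ignored):

```python
price = 140_000       # typical two-bedroom purchase price in the building
rent = 1_200          # monthly rent
maintenance = 600     # monthly maintenance fee

net_operating_income = (rent - maintenance) * 12
cap_rate = net_operating_income / price
print(f"{cap_rate:.1%}")  # 5.1% before property taxes, i.e., just under 5% after
```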

Anyway, this report is fraught with interest, what with us being told daily that we’re in imminent danger of a crash. The OECD and its fellow travellers need to address these issues and explain why they believe their figures to be accurate indicators of the Canadian housing market.

Market-Based Bank Capital Regulation

Wednesday, March 5th, 2014

Assiduous Reader DR sent me the following query:

Today’s Financial Post has an article “A better Basel mousetrap to protect taxpayers”, by Finn Poschmann regarding NVCC.

What is your opinion?

A short search brought up the article in question, A Better Basel Mousetrap to Protect Taxpayers, which in turn led me to the proposal by Jeremy Bulow and Paul Klemperer titled Market-Based Bank Capital Regulation:

Today’s regulatory rules, especially the easily-manipulated measures of regulatory capital, have led to costly bank failures. We design a robust regulatory system such that (i) bank losses are credibly borne by the private sector; (ii) systemically important institutions cannot collapse suddenly; (iii) bank investment is counter-cyclical; and (iv) regulatory actions depend upon market signals (because the simplicity and clarity of such rules prevents gaming by firms, and forbearance by regulators, as well as because of the efficiency role of prices). One key innovation is “ERNs” (equity recourse notes — superficially similar to, but importantly distinct from, “cocos”) which gradually “bail in” equity when needed. Importantly, although our system uses market information, it does not rely on markets being “right.”

Our solution is based on two rules. First, any systemically important financial institution (SIFI) that cannot be quickly wound down must limit the recourse of non-guaranteed creditors to assets posted as collateral plus equity plus unsecured debt that can itself be converted into equity–so these creditors have some recourse but cannot force the institution into re-organization. Second, any debt guaranteed by the government, such as deposit accounts, must be backed by government-guaranteed securities. This second rule can only realistically be thought of as a very long-run ambition – our interim objective would involve a tight ring-fence of government-guaranteed deposits collateralized by assets that are haircut at rates similar to those applied by lenders (including central banks and the commercial banks themselves!) to secured borrowers.

Specifically: first, we would have banks replace all (non-deposit) existing unsecured debt with “equity recourse notes” (ERNs). ERNs are superficially similar to contingent convertible debt (“cocos”) but have important differences. ERNs would be long-term bonds, subject to certain term-structure requirements, with the feature that any interest or principal payments payable on a date when the stock price is lower than a pre-specified price would be paid in stock at that pre-specified price. The pre-specified price would be required to be no less than (say) 25 percent of the share price on the date the bond was issued. For example, if the stock were selling at $100 on the day a bond was issued and then fell below $25 by the time a payment of $1000 was due, the firm would be required to pay the creditor (1000/25) = 40 shares of stock in lieu of the payment. If the stock rebounded in price, future payments could again be in cash.

Crucially, for ERNs, unlike cocos:

  • any payments in shares are at a pre-set share price, not at the current share price or at a discount to it—so ERNs are stabilizing because that price will always be at a premium to the market
  • conversion is triggered by market prices, not regulatory values—removing incentives to manipulate regulatory measures, and making it harder for regulators to relax requirements
  • conversion is payment-at-a-time, not the entire bond at once (because ERNs become equity in the states that matter to taxpayers, they are, for regulatory purposes, like equity from their date of issuance so there is no reason for faster conversion)–further reducing pressures for “regulatory forbearance” and also largely solving a “multiple equilibria” problem raised in the academic literature
  • we would replace all existing unsecured debt with ERNs, not merely a fraction of it—ensuring, as we show below, that ERNs become cheaper to issue when the stock price falls, creating counter-cyclical investment incentives when they are most needed.
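The conversion mechanics of the authors' $100/$25 example are simple enough to sketch:

```python
def ern_payment(amount_due, stock_price, conversion_price):
    """Pay cash if the stock is at or above the pre-set conversion price;
    otherwise pay shares AT THE PRE-SET PRICE, never the market price.
    """
    if stock_price >= conversion_price:
        return ("cash", amount_due)
    return ("shares", amount_due / conversion_price)

# Issued with the stock at $100; conversion price pre-set at $25.
print(ern_payment(1_000, stock_price=20.0, conversion_price=25.0))  # ('shares', 40.0)
print(ern_payment(1_000, stock_price=60.0, conversion_price=25.0))  # ('cash', 1000)
```

Note that the 40 shares delivered at $25 are worth only $800 at the $20 market price, which is why the authors say the pre-set price "will always be at a premium to the market".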

OK, so I have difficulties with all this. Their first point is that non-guaranteed creditors “cannot force the institution into re-organization.” Obviously there are many differences of opinion in this, but I take the view that being able to force a company into re-organization – which may include bankruptcy – is one of the hallmarks of a bond. For example, I consider preferred shares to be fixed income – as they have a cap on their total return and they have first-loss protection – but I do not consider them bonds – as they cannot force bankruptcy. The elimination of bankruptcy, although very popular among politicians (who refer to bankruptcy as a form of terrorism) is a very big step; bankruptcy is a very big stick that serves to concentrate the minds of management and directors.

Secondly, they want insured deposits to be offset by government securities. There’s an immediate problem about this in Canada, because insured deposits total $646-billion while government of Canada marketable debt totals $639-billion. You could get around this by saying the CMHC-guaranteed mortgages are OK, but even after years of Spend-Every-Penny pouring fuel on the housing fire, CMHC insurance totals only $559.8-billion (out of a total of $915-billion). At present, Canadian Chartered Banks hold only about $160-billion of government debt. So it would appear that, at the very least, this part of the plan would essentially force the government to continue to insure a ridiculous proportion of Canadian residential mortgages.
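A back-of-the-envelope version of the problem, using the figures above (all in $-billions):

```python
insured_deposits = 646      # CDIC-insured deposits
goc_marketable_debt = 639   # all GoC marketable debt outstanding
cmhc_insured = 559.8        # CMHC mortgage insurance in force (of $915 total)
bank_goc_holdings = 160     # government debt currently held by the banks

# Even if the banks owned every GoC bond in existence, they'd be short:
print(insured_deposits - goc_marketable_debt)  # 7: impossible without CMHC paper
# The gap to be filled from what the banks actually hold today:
print(insured_deposits - bank_goc_holdings)    # 486
```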

And, specifically, they want all (non-deposit) existing unsecured debt replaced with “equity recourse notes”. OK, so how much is that? Looking at recent figures from RBC:

[Chart: RBCBalanceSheet (Click for Big)]

So roughly a quarter of Royal Bank’s liabilities would become ERNs … and who’s going to buy it? It’s forcibly convertible into equity long before the point of non-viability – that’s the whole point – so for risk management purposes it is equity. If held by another bank, it will attract a whopping capital charge (or if it doesn’t, it should) and it can’t be held by institutional bond portfolios (or if it is, it shouldn’t be). I have real problems with this.

The paper makes several entertaining points about bank regulation:

The regulatory system distorts incentives in several ways. One of the motivations for Citigroup to sell out of Smith Barney at what was generally believed to be a low price, was that it allowed Citi to book an increase in regulatory capital. Conversely, selling risky “toxic assets” with a regulatory value greater than market is discouraged because doing so raises capital requirements even while reducing risk.[footnote].

[Footnote reads] : Liquidity reduction is another consequence of the current regulatory system, as firms will avoid price-discovery by avoiding buying as well as selling over-marked assets. For example, Goldman Sachs stood ready to sell assets at marks that AIG protested were too low, but AIG did not take up these offers. See Goldman Sachs (2009). For an example of traders not buying even though they claimed the price was too low, see the FCIC transcript of a July 30, 2007 telephone call between AIG executives. “We can’t mark any of our positions, and obviously that’s what saves us having this enormous mark to market. If we start buying the physical bonds back … then any accountant is going to turn around and say, well, John, you know you traded at 90, you must be able to mark your bonds then.” Duarte (2012) discusses the recent trend of European banks to meet their requirements to raise regulatory capital by repurchasing their own junk bonds, arguably increasing the exposure of government insurers.

However, don’t get me wrong on this: the basic idea – of conversion to a pre-set value of stock once the market breaches that pre-set value – is one that I’ve been advocating for a long time. They are similar in spirit to McDonald CoCos, which were first discussed on PrefBlog under the heading Contingent Capital with a Dual Price Trigger (regrettably, the authors did not discuss McDonald’s proposal in their paper). ERNs are ‘high-trigger’ instruments, and therefore will help serve to avert a crisis, rather than merely mitigate one, as is the case with OSFI’s NVCC rules; I have long advocated high triggers.

My basic problem is simply that the authors:

  • Require too many ERNs as a proportion of capital, and
  • Seek to Ban the Bond

However, it may easily be argued that these objections are mere matters of detail.

DBRS Announces New SplitShare Rating Methodology

Tuesday, July 30th, 2013

DBRS has announced that it:

has today published updated versions of two Canadian structured finance methodologies:
— Stability Ratings for Canadian Structured Income Funds
— Rating Canadian Split Share Companies and Trusts

Neither of the methodology updates resulted in any meaningful changes and as such, neither publication has resulted in any rating changes or rating actions.

DBRS’s criteria and methodologies are publicly available on its website, www.dbrs.com, under Methodologies. DBRS’s rating definitions and the terms of use of such ratings are available at www.dbrs.com.

Of interest in the methodology is the explicit nature of their rating categories (I have added the Asset Coverage Ratio, calculated from the Downside Protection):

Minimum Downside Protection Criteria by Rating Category

DBRS Preferred Share Rating | Minimum Downside Protection* (Net of Agents’ Fees and Offering Expenses) | Asset Coverage Ratio (JH)
Pfd-2 (high) | 57% | 2.3+:1
Pfd-2 | 50% | 2.0:1
Pfd-2 (low) | 44% | 1.8-:1
Pfd-3 (high) | 38% | 1.6+:1
Pfd-3 | 33% | 1.5-:1
Pfd-3 (low) | 29% | 1.4+:1
* Downside protection = percentage reduction in portfolio NAV before preferred shares are in a loss position.

and

Downside Protection Adjustments for Portfolio Diversification

Level of Diversification | Adjustment to Minimum Downside Protection Level (Multiple)
Strong by industry and by number of securities | 1.0x (i.e., no change)
Adequate by industry and by number of securities | 1.0x to 1.2x
Adequate by number of securities, one industry | 1.2x to 1.3x
Single entity | 1.3x to 1.5x
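For reference, the Asset Coverage Ratio column I added to the first table comes straight from the downside protection figure; a quick sketch:

```python
def asset_coverage_ratio(downside_protection):
    """If NAV can fall by `downside_protection` before the preferreds are
    in a loss position, assets cover the preferreds 1/(1 - dp) times.
    """
    return 1 / (1 - downside_protection)

for dp in (0.57, 0.50, 0.44, 0.38, 0.33, 0.29):
    print(f"{dp:.0%} -> {asset_coverage_ratio(dp):.2f}:1")
# 57% -> 2.33:1, 50% -> 2.00:1, 44% -> 1.79:1,
# 38% -> 1.61:1, 33% -> 1.49:1, 29% -> 1.41:1
```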

Also noteworthy is:

The importance of credit quality in a portfolio increases as the diversification of the portfolio decreases. To be included as a single name in a split share portfolio, a company should be diversified in its business operations by product and by geography. The rating on preferred shares with exposure to single-name portfolios will generally not exceed the rating on the preferred shares of the underlying company since the downside protection is dependent entirely on the value of the common shares of that company.

They are, quite reasonably, unimpressed by call writing strategies:

DBRS views the strategy of writing covered calls as an additional element of risk for preferred shareholders because of the potential to give up unrealized capital gains that would increase the downside protection available to cover future portfolio losses. Furthermore, an option-writing strategy relies on the ability of the investment manager. The investment manager has a large amount of discretion to implement its desired strategy, and the resulting trading activity is not monitored as easily as the performance of a static portfolio. Relying partially on the ability of the investment manager rather than the strength of a split share structure is a negative rating factor.

They even have a table for the effect of cash grind (which is a special case of Sequence of Return Risk):

Impact of Capital Share Distributions on Initial Ratings

Size of Regular Capital Distributions (see note) | NAV Test | Likely Impact on Initial Rating
Excess income | None | None
5% or less per annum | 1.75x coverage | 0-1 notches lower
5% or less per annum | 1.5x coverage | 1 notch lower
8% per annum | 1.75x coverage | 1-2 notches lower
8% per annum | 1.5x coverage | 2 notches lower
Note: The likely impact on ratings for these distribution sizes assumes a typical split share structure (preferred shares $10 each, capital shares $15 each). If a structure were to differ from this assumption significantly, the likely impact on the preferred share rating will not match what is shown in the table.

I consider their VaR methodology highly suspect:

The steps in the VaR analysis completed by DBRS are as follows:
(1) Gather daily historical performance data for a defined period.
(2) Annualize each daily return by multiplying it by the square root of the number of trading days in a year.
(3) Sort the annualized returns from lowest to highest.
(4) Using the initial amount of downside protection available to the preferred shares, determine the appropriate dollar loss required for the preferred shares to be in a loss position (i.e., asset coverage ratio is less than 1.0)
(5) Solve for the probability that will yield a one-year VaR at the appropriate dollar-loss amount for the transaction.
(6) Determine the implied long-term bond rating by comparing the probability of default with the DBRS corporate cumulative default probability table.
(7) Link the implied bond rating to the appropriate preferred share rating using an assumption that the preferred shares of a company should be rated two notches below the company’s issuer rating.
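Mechanically, the recipe amounts to the following (a minimal sketch; the DBRS default-probability and notching tables of steps 6 and 7 are proprietary, so they are only stubbed in as comments, and the return history is simulated):

```python
import numpy as np

def dbrs_style_var(daily_returns, downside_protection, trading_days=252):
    """Steps 1-5 of the DBRS VaR recipe as described above.

    Each daily return is 'annualized' by multiplying by sqrt(252), the very
    step criticized below, since it treats successive days as independent
    and ignores short-term mean reversion.
    """
    annualized = np.sort(np.asarray(daily_returns) * np.sqrt(trading_days))
    # Steps 4-5: probability that the annualized loss exhausts the downside
    # protection, i.e., the asset coverage ratio falls below 1.0.
    breaches = np.sum(annualized <= -downside_protection)
    return breaches / len(annualized)

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0003, 0.01, size=1_000)  # hypothetical history
p_default = dbrs_style_var(daily_returns, downside_protection=0.44)
print(f"{p_default:.2%}")
# Steps 6-7 would map p_default to a bond rating via the DBRS cumulative
# default table, then notch down twice for the preferred share rating.
```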

As stated, it’s nonsensical. Whatever one’s views on long-term mean reversion of equity returns, there is definitely short-term mean reversion, so annualizing a single day’s return is far too pessimistic. Using the square root of the days in the year to annualize the results implies that each day’s returns are independent.

There’s a big table titled “Maximum Preferred Share Ratings Based on Portfolio Credit Quality and Correlation”, which I won’t reproduce here simply because it’s too big.

I am not a big fan of this “base case plus adjustments” methodology and (not surprisingly) continue to prefer my own stochastic model, which is used in every edition of PrefLetter. Implications of my methodology have been discussed in my articles It’s all about Sequence and Split Share Credit Quality.

Why Were Canadian Banks So Resilient? Other Views

Friday, April 12th, 2013

An article in the Globe and Mail by Grant Bishop titled Canadian Banks: Bigger is better highlighted views regarding Canadian banking regulation that differ from my own:

A recently published paper by Columbia University’s Charles Calomiris argues that banking crises have political foundations. He contends that anti-populist political structures guard against the capture of the banking system by voters seeking easy credit. But parts of Mr. Calomiris’ thesis have raised eyebrows: he implies that the British aim of oppressing French Canada is the indirect root of Canada’s long-term banking stability.

What policy-makers should take away from Calomiris’ account is the importance of keeping politics away from banking regulation. As Anita Anand and Andrew Green of the University of Toronto also argue in a recent paper, the independence of financial regulation from politics in Canada is a cornerstone of the integrity of our system.

It is, of course, my argument that both OSFI and the Bank of Canada have become intensely politicized over the recent past, with both Carney and Dickson acting as stalking horses for politically inspired regulatory ideas (mostly to do with contingent capital and the Ban The Bond movement).

In another recent paper, UBC’s Angela Redish and her colleagues ask “Why didn’t Canada have a banking crisis in 2008 (or in 1930, or 1907, or…)?” They conclude that, in contrast with the fragmented U.S. system, Canadian banking stability owed to the single overarching regulator and the high concentration of the sector.

A cross-country study by World Bank researchers reaches a similar conclusion: more concentrated banking systems appear to be more stable.

The paper by Anita Anand & Andrew Green titled REGULATING FINANCIAL INSTITUTIONS: THE VALUE OF OPACITY has the abstract:

In this article, we explore a question of institutional design: What characteristics make a regulatory agency effective? We build on the growing body of administrative law literature that rigorously examines the impacts of transparency, insulation, and related administrative processes. We argue that there are certain benefits associated with an opaque and insulated structure, including the ability to regulate unfettered by partisan politics and majoritarian preferences. We examine Canada’s financial institution regulator, the Office of the Superintendent of Financial Institutions (OSFI), whose efficacy in part explains the resilience of Canada’s banking sector throughout the financial crisis of 2008. In particular, OSFI operates in a “black box”, keeping information about the formation of policy and its enforcement of this policy confidential. With its informational advantage, it is able to undermine the possibility that banks will collude or rent-seek. Our conclusions regarding the value of opacity cut against generally held views about the benefits of transparency in regulatory bodies.

Huh. I wonder if they’re also in favour of re-establishing the Star Chamber, which worked very well until it didn’t.

The body of this paper commences with the startling claim:

Canada’s financial institutions weathered the crisis well relative to their international peers, an outcome that has been attributed at least in part to the presence of an effective regulator.[Footnote]

Footnote reads: See Canadian Securities Institute, Canadian Best Practices Take Centre Stage at Financial Conference in China (25 February 2009), online: Focus Communications Inc; Financial Services Authority, Bank of England & Treasury, Financial Stability and Depositor Protection: Further Consultation (London: HM Treasury, 2008), online: HM Treasury.

The Canadian Securities Institute is not something I consider an authoritative, or even credible, source, so I looked for the paper HM Treasury: Financial stability and depositor protection: further consultation, July 2008. It contains four instances of the word Canada (in each case, grouped with other countries as a participant or example), no instances of “OSFI”, one instance of “Superintendent” (the “Superintendents’ Association of Northern Ireland”, which responded to an earlier consultation), and two instances of “effective regulat”, both of which refer to the UK bodies’ aspirations.

Ratnovski and Huang, for example, examine the performance of the seventy-two largest commercial banks in OECD countries during the financial crisis, analyzing the factors behind Canadian banks’ relative resilience at this time.[Footnote 4] They identify two main causes, one of which is regulatory factors that reduced banks’ incentives to take excessive risks.[Footnote 5]

Footnote 4 reads: Lev Ratnovski & Rocco Huang, “Why Are Canadian Banks More Resilient?” (2009) IMF Working Paper No 152, online: Social Science Research Network.

Footnote 5 reads: Ibid. Other factors included a higher degree of retail depository funding, and to a lesser extent, sufficient capital and liquidity.

The Ratnovski & Huang paper was reviewed in my post Why Have Canadian Banks Been More Resilient?. The paper is available from the IMF.

Specifically, Ratnovski & Huang say:

The second part of this paper (Section 3) reviews regulatory and structural factors that may have reduced Canadian banks’ incentives to take risks and contributed to their relative resilience during the turmoil. We identify a number of them: stringent capital regulation with higher-than-Basel minimal requirements, limited involvement of Canadian banks in foreign and wholesale activities, valuable franchises, and a conservative mortgage product market.

Returning to the Anand & Green paper, the guts of the argument is:

OSFI’s efficacy may at first be surprising. It is the primary regulator of Canada’s five big banks (which account for approximately 85 percent of Canada’s banking sector).[Footnote 10] Its power to overcome the possibility for rent seeking or capture by these institutions depends on its rule making and enforcement processes, and forms of accountability for its actions. That is, if not sufficiently independent, regulated institutions might seek rules that favour their profitability at the expense of consumers. Yet on many important issues, including capital adequacy requirements, OSFI relies on guidelines rather than regulations. OSFI creates these guidelines through a largely opaque process in which the regulated parties have early input. Other parties (such as consumers) not only face considerable collective action problems but are limited to a stunted notice and comment process. The comment process thereby potentially privileges the views of regulated institutions. Further, in addressing compliance with regulations or guidelines, OSFI attempts to work informally with regulated parties, ultimately rendering it unnecessary for it to take formal enforcement action. This structure seems to point more towards capture by the large (albeit regulated) players. To aid in the discussion of the appropriate institutional structure for banks, we examine whether Canada’s financial institutions—and banks in particular—have been successful because of, or despite, the presence of OSFI.

Later comes a real howler:

Despite its insulation and opacity, however, OSFI is almost universally viewed to be an effective regulator.[Footnote 17]

[Footnote 17 reads] The Strategic Counsel, Qualitative Research: Deposit-Taking Institutions Sector Consultation (Toronto: 2010) at 2-6, online: OSFI <http://www.osfi-bsif.gc.ca/app/DocRepository/1/eng/reports/osfi/DTISC_e.pdf>

So let’s have a look at the OSFI document that says OSFI is doing a great job. It’s a survey. Who is surveyed, you may ask. So I will tell you:

A total of 49 one-on-one interviews were conducted among Chief Executive Officers (CEOs), Chief Financial Officers (CFOs), Chief Risk Officers (CROs), Chief Compliance Officers (CCOs), other senior executives, auditors and lawyers of deposit-taking institutions regulated by OSFI.

The use of the phrase “almost universally” seems … a little strained.

The paper’s arguments are founded upon the premise that OSFI is doing a great job, and therefore that the way it does that job must also be great. Unfortunately, the premises do not support the conclusions.

BoC Releases Autumn 2012 Review

Sunday, November 18th, 2012

The Bank of Canada has released the Bank of Canada Review – Autumn 2012; its four articles are discussed in turn below.

The first article, on financial transaction taxes, attracted some notice from the Globe and Mail, in pieces by David Parkinson and Kevin Carmichael. An earlier working paper by Anna Pomeranets and Daniel G. Weaver, which focussed on the historical experience in New York State, was reviewed on PrefBlog. This paper was quoted in support of the conclusion:

On balance, the literature suggests that an FTT is unlikely to reduce volatility and may instead increase it, which is consistent with arguments made by opponents of the tax.

The current paper is introduced with:


  • The financial transaction tax (FTT) is a policy idea with a long history that, in the wake of the global financial crisis, has attracted renewed interest in some quarters.
  • Historically, there have been two motivating factors for the introduction of the tax. The first is its potential to raise substantial revenues, and the second is its perceived potential to discourage speculative trading and reduce volatility.
  • There is, however, little empirical evidence that an FTT reduces volatility. Numerous studies suggest that an FTT harms market quality and is associated with an increase in volatility and a decrease in both market liquidity and trading volume. When the cost of acquiring a security rises, its required rate of return and cost of capital also increase. As a result, an FTT may reduce the flow of profitable projects, decreasing levels of real production, expansion, capital investment and even employment.
  • There are many unanswered questions regarding the design of FTTs and their ability to raise significant revenues.

The imposition of a 20bp transaction charge in France (which has resulted in a greater interest in derivatives such as Contracts For Difference) was discussed on PrefBlog on November 15.

The authors also see fit to highlight:

Umlauf (1993) examines how financial transaction taxes (FTTs) affect stock market behaviour in Sweden. In 1984, Sweden introduced a 1 per cent tax on equity transactions, which was doubled to 2 per cent in 1986. Umlauf studies the impact of these changes on volatility and finds that volatility did not decline following the increase to the 2 per cent tax rate, but equity prices, on average, did decline.

Furthermore, Umlauf concludes that 60 per cent of the trading volume of the 11 most actively traded Swedish share classes migrated to London to avoid the tax. After the migration, the volatilities of London-traded shares fell relative to their Stockholm-traded counterparts. As trading volumes fell in Stockholm, so did revenues from capital gains taxes, completely offsetting the 4 billion Swedish kronor that the tax had raised in 1988.

Pomeranets also points out:

Critics of the FTT argue that it reduces market liquidity by making each trade more costly, simply because it is a tax and also because market forces react to it by offering fewer and lower-quality trading opportunities. The cost impact is evident in the way the FTT widens the bid-ask spread. Bid-ask spreads compensate traders for three things—order-processing costs, inventory risk and information risk—often called the three components of the bid-ask spread. The FTT will increase the costs of these three components in the following ways:…

And finally, we get to the social function of markets – capital formation:

Another measure of market quality examined in the literature is the cost of capital. Amihud and Mendelson (1992) conclude that a 0.5 per cent FTT would lead to a 1.33 per cent increase in the cost of capital. This result is consistent with their previous work that finds a positive relationship between required rates of return and transaction costs (Amihud and Mendelson 1986). When the cost of acquiring a security increases, its required rate of return and cost of capital also increase. As a result, an FTT would increase the cost of capital, which could have several harmful consequences. It could reduce the flow of profitable projects, shrinking levels of real production, expansion, capital investment and even employment.
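The Amihud–Mendelson mechanism is simple to sketch: a per-trade cost, amortized over the expected holding period, adds directly to the return an investor requires. The toy calculation below is my own simplification of that amortization intuition, not the authors’ model:

    def required_return(base_return, round_trip_cost, holding_period_years):
        # Amortize the per-round-trip cost (as a fraction of value) over
        # the expected holding period and add it to the required return.
        return base_return + round_trip_cost / holding_period_years

    # A 0.5% FTT borne by an investor who turns the position over every
    # two years adds roughly 25 bp to the required return; high-turnover
    # traders are hit hardest, which is why volume migrates or vanishes.
    print(required_return(0.08, 0.005, 2.0))  # -> 0.0825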

Ms. Pomeranets concludes:

This article examines the main arguments regarding the costs and benefits of FTTs and explores some of the significant practical issues surrounding the implementation of an FTT. Little evidence is found to suggest that an FTT would reduce speculative trading or volatility. In fact, several studies conclude that an FTT increases volatility and bid-ask spreads and decreases trading volume. Furthermore, a number of challenges associated with the design and effectiveness of an FTT could limit the revenues that FTTs are intended to raise. For these reasons, countries considering the imposition of FTTs should be aware of their negative consequences and the challenges involved in implementation.

The second article examines a hobby-horse of mine – central clearing for derivatives, a dangerous policy recklessly promoted by the political establishment both directly and through their mouthpiece, Lapdog Carney. The last BoC attempt at justification, in the June 2012 Financial System Review, was discussed on PrefBlog.

In this go-round, the authors state:

  • Central counterparties manage and mitigate counterparty credit risk in order to make markets more resilient and reduce systemic risk. Better management of counterparty risk can also open up markets to new participants, which in turn should reduce concentration and increase competition. These benefits are maximized when access to central counterparties is available to a wide range of market participants.
  • In an over-the-counter market, there is an important trade-off between competition and risk. Concentrated, less competitive markets are more profitable and thus participants are less likely to default. But a central counterparty that provides sufficient access can improve this trade-off, since the gains from diversification—which will become greater as participation grows—can simultaneously reduce risk and increase competition.
  • Regulators have developed, and central counterparties are implementing, new standards for fair, open and risk-based access criteria. Such standards will, among other things, counter any incentives that might exist for members of a central counterparty to limit access in order to protect their market share.

In other words, a major goal of Central Clearing is to provide employment for regulators, who will make fair and open, and, it must be emphasized, entirely corruption-free decisions regarding which smaller and less creditworthy firms will be admitted to the club.

The crux of the matter is this:

The improved management of counterparty credit risk at a CCP opens markets to greater participation, which can increase competition. In OTC markets that are cleared bilaterally, participants are directly exposed to the risk that their counterparties may default and therefore have an incentive to restrict trading to counterparties that are known to be creditworthy. When a CCP with strong risk controls takes on the management of credit risk, however, participants can feel more secure trading with others—even anonymously— since the CCP guarantees that the terms of the trade will be honoured.

In other words, when the Bank of Downtown Plonksville enters into a trade with central clearing, its counter-party will charge exactly the same risk premium as it charges to the Bank of Canada. Some people consider this to be an advantage of the new regime.

If direct access to a CCP was limited to the largest dealers, their systemic importance would increase, potentially exacerbating the “too-big-to-fail” problem and preventing the CCP from providing the full benefits of diversification. Limited access could also make mid-tier institutions more vulnerable in times of stress and slow the transition to central clearing (Slive, Wilkins and Witmer 2011).

I wonder if the CCP itself is “too-big-to-fail” ….

The authors emphasize the importance of regulators and their awesomely wise, highly informed decisions throughout the process.

The third article is introduced with:

  • The financial crisis of 2007–09 and the subsequent extended period of historically low real interest rates in a number of major advanced economies have revived the question of whether economic agents are willing to take on more risk when interest rates remain low for a prolonged time period.
  • This type of induced behaviour—an increased appetite for risk that causes economic agents to search for investment assets and strategies that generate higher investment returns—has been called the risk-taking channel of monetary policy.
  • Recent academic research on banks suggests that lending policies in times of low interest rates can be consistent with the existence of a risk-taking channel of monetary policy in Europe, South America, the United States and Canada. Specifically, studies find that the terms of loans to risky borrowers become less stringent in periods of low interest rates. This risk-taking channel may amplify the effects of traditional transmission mechanisms, resulting in the creation of excessive credit.

This effect is also inherent in the offsetting behaviours of the “expectations” component and the “risk premium” component of long-term rates, addressed in the Summer 2012 Review discussed on PrefBlog below.

The results suggest that the difference in the all-in-drawn spreads between loans to risky and less-risky borrowers decreases when interest rates are low relative to periods when they are high. Accounting for loan, firm and bank balance-sheet factors, as well as yearly and quarterly factors, the results show that the difference in the all-in-drawn spread between risky and less-risky borrowers is 48 per cent smaller when interest rates are lower than when they are higher (based on the first definition). This result is also economically significant: it implies that the difference in loan rates between risky and less-risky borrowers is 107 basis points smaller when the rates are low than when they are high.

The fourth article, of great interest to those in the field and to the Bank, but of somewhat less importance to other investors, is summarized as:

  • The share of cash in overall retail payments has decreased continuously over the past 20 years.
  • Recent Bank of Canada research on consumers’ choice of payment instruments indicates that cash is frequently used for transactions with low values because of its speed, ease of use and wide acceptance, while debit and credit cards are more commonly used for transactions with higher values because of perceived attributes such as safety and record keeping.
  • While innovations in retail payments currently being introduced into the Canadian marketplace could lead to a further reduction in the use of cash over the longer term, the implications for the use of cash of some of the structural and regulatory developments under way are less clear.
  • The Bank of Canada will continue to monitor various developments in retail payments and study their implications for the demand for cash over the longer term.

BoC Releases Summer 2012 Review

Saturday, November 17th, 2012

This post is really late, I know. But I’m catching up slowly!

The Bank of Canada has released the Bank of Canada Review: Summer 2012 with the articles:

  • Measurement Bias in the Canadian Consumer Price Index: An Update
  • Global Risk Premiums and the Transmission of Monetary Policy
  • An Analysis of Indicators of Balance-Sheet Risks at Canadian Financial Institutions

The first article, by Patrick Sabourin, makes the point:

Commodity-substitution bias reflects the fact that, while the weights of items in the CPI basket are held constant for a period of time, a change in relative prices may cause patterns in consumer spending to change. If, for example, the price of chicken were to increase considerably following supply constraints, consumers would likely purchase less chicken and increase their consumption of beef, since the two meats may be perceived as substitutes for each other. The CPI, however, assumes that consumers would continue to purchase the same quantity of chicken following a price change. This means that the measured change in the CPI will overstate the increase in the minimum cost of reaching a given standard of living (i.e., there is a positive bias).

I’ve always had trouble with this concept. I love beef. I despise chicken. As far as I am concerned, there is a separate quality adjustment that must be made that would mitigate, if not completely offset, the substitution adjustment when beef becomes too expensive and I have to eat chicken.

And, I am sure, this occurs for every other possible substitution. Although I might try explaining to my girlfriend that Coach handbags have become too expensive and I will follow theoretically approved procedure and get her, say, a plastic shopping bag for Christmas instead.
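Joking aside, the mechanics of the bias are simple to demonstrate. A fixed-weight (Laspeyres) index prices the old basket at new prices, while an index built on the substituted basket (Paasche) shows a smaller increase; the numbers below are invented for illustration:

    # Toy commodity-substitution example with invented prices and quantities.
    # Base period: chicken $5, beef $10; the consumer buys 10 chicken, 5 beef.
    # New period: chicken jumps to $8, so the consumer shifts toward beef.
    p0 = {"chicken": 5.0, "beef": 10.0}
    p1 = {"chicken": 8.0, "beef": 10.0}
    q0 = {"chicken": 10, "beef": 5}
    q1 = {"chicken": 6, "beef": 9}  # basket after substitution

    laspeyres = sum(p1[g] * q0[g] for g in p0) / sum(p0[g] * q0[g] for g in p0)
    paasche = sum(p1[g] * q1[g] for g in p0) / sum(p0[g] * q1[g] for g in p0)
    print(laspeyres, paasche)  # 1.30 vs 1.15: the fixed basket shows more inflation

The gap between the two numbers is the substitution bias; my quality objection above amounts to saying that the substituted basket is not really equivalent to the original one, so the smaller number understates the true loss of welfare.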

The second article, by Gregory H. Bauer and Antonio Diez de los Rios, examines the relationship between long- and short-term interest rates:

  • An important channel in the transmission of monetary policy is the relationship between the short-term policy rate and long-term interest rates.
  • Using a new term-structure model, we show that the variation in long-term interest rates over time consists of two components: one representing investor expectations of future policy rates, and another reflecting a term-structure risk premium that compensates investors for holding a risky asset.
  • The time variation in the term-structure risk premium is countercyclical and largely determined by global macroeconomic conditions. As a result, long-term rates are pushed up during recessions and down during times of expansion. This is an important phenomenon that central banks need to take into account when using short-term rates as a policy tool.
  • We illustrate this phenomenon by showing that the “conundrum” observed in the behaviour of long-term interest rates when U.S. monetary policy was tightened during the 2004–05 period was actually part of a global phenomenon.

In their model:

The long-term rate is decomposed into two terms in the following equation:

10-year yield = (1/10) x (sum of expected 1-year rates over the next 10 years) + term-structure risk premium   (1)
The first term involves market expectations, that is, the average expected 1-year interest rate over the next 10 years. In our model, we use the 1-year interest rate in country j as a proxy for that country’s policy rate. Observed yields will, on average, equal the expectations component only under the “expectations hypothesis,” which has been statistically rejected in many studies.

The rejection of the expectations hypothesis is typically attributed to the existence of the second term in equation (1), a time-varying term-structure risk premium. The risk premium represents the extra compensation that investors require for holding a 10-year bond. In our model, agents hold portfolios for one year, and the prices of long-term bonds may change considerably over that period, necessitating a higher expected rate of return. Several studies have focused on the properties of the term-structure risk premium (see Cochrane and Piazzesi (2005) and their references).

The second real-world aspect of the model consists of the constraints placed on the time-varying risk premium, the second component of equation (1). Previous work has shown that imposing restrictions on the term-structure risk premium makes the forecast values of interest rates more realistic than those in unrestricted models.[Footnote 7] Our model restricts risk premiums on bonds through its assumption of global asset pricing; i.e., in integrated international markets, only global risks carry significant risk premiums. As a result, the term-structure risk premium on any bond is driven by the bond’s exposure to the global level and slope factors only. The local factors, while helping to explain prices at a point in time, do not affect expected returns (i.e., changes in prices), since investors can eliminate their effects by diversifying with a global portfolio.
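Given observed yields and a path of expected policy rates, the decomposition in equation (1) is just arithmetic; a back-of-envelope sketch with invented numbers:

    # Decompose an observed 10-year yield per equation (1): the average
    # expected 1-year rate over the next decade plus a residual
    # term-structure risk premium.  All numbers here are invented.
    expected_1yr_path = [0.010, 0.012, 0.015, 0.020, 0.025,
                         0.030, 0.032, 0.034, 0.035, 0.035]
    observed_10yr_yield = 0.032

    expectations_component = sum(expected_1yr_path) / len(expected_1yr_path)
    risk_premium = observed_10yr_yield - expectations_component
    print(expectations_component, risk_premium)  # 0.0248 and 0.0072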



This effect is evident during the financial crisis of 2007–09. While short-term U.S. rates fell by 263 basis points, long-term U.S. rates decreased by a mere 23 basis points. This occurred because, although the Fed succeeded in lowering expectations of future policy moves by 224 basis points (Table 2), the term-structure risk premium rose by 190 basis points.

The analysis in this article demonstrates the extent to which the global term-structure risk premium as well as monetary policy actions influence long-term interest rates. The risk premium is countercyclical to the global business cycle and thus may affect long-term interest rates in the opposite direction to that related to central bank policy actions. As a result, central banks need to take these forces into account in appropriately calibrating their policy response. Indeed, given the current low level of long-term rates, understanding movements in the global risk premium is important for the monetary policy decision-making process.

Since monetary policy may affect expectations and the term-structure risk premium differently, the levels of these two components may, in turn, affect the macroeconomy in various ways. For these reasons, understanding the effects on growth and inflation of movements in market expectations and the global term-structure risk premium is an important aim for future research.

The third article, by David Xiao Chen, H. Evren Damar, Hani Soubra and Yaz Terajima, will be of interest to students of Canadian banking and regulation thereof:

  • This article compares different types of Canadian financial institutions by examining over time ratios that are indicators of four balance-sheet risks—leverage, capital, asset liquidity and funding.
  • The various risk indicators have decreased during the past three decades for most of the non-Big Six financial institutions in our sample and have remained relatively unchanged for the Big Six banks, resulting in increasing heterogeneity in these indicators of balance-sheet risks.
  • The observed overall decline and increased heterogeneity in the risk indicators follow certain regulatory changes, such as the introduction of liquidity guidelines on funding in 1995 and the implementation of bank specific leverage requirements in 2000. This suggests that regulatory changes have had significant and heterogeneous effects on the management of balance sheets by financial institutions and, given that these regulations required more balance-sheet risk management, they contributed to the increased resilience of the banking sector.


Of particular interest is the funding ratio, defined as:

we define a funding ratio as the proportion of a bank’s total assets that are funded by wholesale funding (a relatively less stable funding source than retail (personal) deposits, for example):

Funding ratio (%) = 100 x (Non-personal deposits + repos)/Total assets.

A higher funding ratio indicates that a bank relies on greater market-based funding and is therefore more exposed to adverse shocks in the market that could disrupt continuous funding of its assets.
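The calculation itself is trivial; a sketch with invented balance-sheet figures:

    def funding_ratio(non_personal_deposits, repos, total_assets):
        # Funding ratio (%) = 100 x (non-personal deposits + repos) / total assets
        return 100.0 * (non_personal_deposits + repos) / total_assets

    # e.g., $150B of wholesale deposits plus $50B of repos funding a
    # $500B balance sheet gives a funding ratio of 40%.
    print(funding_ratio(150e9, 50e9, 500e9))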

The Funding Ratio is of great interest due to Moody’s recent highlighting of:

the large Canadian banks’ noteworthy reliance on confidence-sensitive wholesale funding, which is obscured by limited public disclosure, increases their vulnerability to financial markets turmoil.

The BoC paper also highlights the eagerness of the politicians to inflate the housing bubble:

In addition, the growing popularity of mortgage-loan securitization in the late 1990s, following the introduction of the Canada Mortgage Bonds Program, raised the percentage of mortgage loans on bank balance sheets, especially among large and medium-sized financial institutions.(note)

Note reads: Increasing demand for mortgage loans caused by demographic shifts and lower down-payment requirements has also played a role. See Chen et al. (forthcoming) for more details.

The authors conclude:

This article analyzes the balance-sheet ratios of Canadian financial institutions. Overall, various measures of risk have decreased over the past three decades for most non-Big Six institutions and have remained relatively unchanged for the Big Six banks. We find that smaller institutions, particularly trust and loan companies, generally have lower leverage and higher capital ratios than other types of financial institutions, including the Big Six banks. They also have larger holdings of liquid assets and face lower funding risk compared with other financial institutions. The observed overall decline and increased heterogeneity in risk (as measured by divergent trends in the leverage, capital and asset-liquidity ratios) followed certain regulatory changes, such as the introduction of liquidity guidelines on funding in 1995 (which preceded a sharp decline in, and more dispersion of, funding ratios among non-Big Six institutions) and the implementation of bank-specific leverage requirements in 2000 (which preceded a divergence in leverage ratios between the Big Six and non-Big Six institutions). This suggests that regulatory changes had significant and heterogeneous impacts on the management of balance sheets by financial institutions, resulting in the increased resilience of the banking system. While market discipline may have also played a role, more research is needed to identify changes in the degree of market discipline in the Canadian banking sector.

Given the observed variation in behaviour among Canadian financial institutions, continued analysis of different types of institutions can enable a more comprehensive assessment of financial stability. Understanding the different risks faced by various types of financial institutions improves the framework that the Bank of Canada uses to monitor developments of potential risks in the banking sector.

The statement that “This suggests that regulatory changes had significant and heterogeneous impacts on the management of balance sheets by financial institutions, resulting in the increased resilience of the banking system” strikes me as being a little bit fishy. Regulatory change did indeed have “significant and heterogeneous impacts on the management of balance sheets by financial institutions”, but whether this resulted “in the increased resilience of the banking system” has not been addressed in the paper. That was the intention, certainly, and may well be true, but a cause-and-effect relationship has not been demonstrated.

Sovereign Credit Ratings: Driver or Reflector?

Wednesday, September 5th, 2012

Manfred Gärtner and Björn Griesbach have published a paper titled Rating agencies, self-fulfilling prophecy and multiple equilibria? An empirical model of the European sovereign debt crisis 2009-2011. The introduction for the paper was reproduced badly on the link provided; there’s another version on Scribd:

We explore whether experiences during Europe’s sovereign debt crisis support the notion that governments faced scenarios of self-fulfilling prophecy and multiple equilibria. To this end, we provide estimates of the effect of interest rates and other macroeconomic variables on sovereign debt ratings, and estimates of how ratings bear on interest rates. We detect a nonlinear effect of ratings on interest rates which is strong enough to generate multiple equilibria. The good equilibrium is stable, ratings are excellent and interest rates are low. A second unstable equilibrium marks a threshold beyond which the country falls into an insolvency trap from which it may only escape by exogenous intervention. Coefficient estimates suggest that countries should stay well within the A section of the rating scale in order to remain reasonably safe from being driven into eventual default.

The literature review shows some controversy:

Among the first to put rating agencies into the game, in the sense that ratings might have an influence on outcomes if multiple sunspot equilibria exist, were Kaminsky & Schmukler (2002). In a panel regression they show that sovereign debt ratings do not only affect the bond market but also spill over into the stock market. This effect is stronger during crises, which could be explained by the presence of multiple equilibria. As a consequence they claim that rating agencies contribute to the instability in emerging financial markets. Carlson & Hale (2005) argue that if rating agencies are present, multiple equilibria emerge in a market in which otherwise only one equilibrium would exist. The purely theoretical paper is an application of global game theory and features heterogeneous investors. Boot, Milbourn & Schmeits (2006) arrive at the opposite conclusion: ratings serve as a coordination mechanism in situations where multiple equilibria loom. Using a rational-herd argument, they show that if enough agents base their investment decisions on ratings, others rationally follow. Since ratings have economic consequences, they emphasize that the role of rating agencies is probably far greater than that of the self-proclaimed messenger.

“Multiple sunspot equilibria”? I had to look that one up:

‘Sunspots’ is short-hand for ‘the extrinsic random variable’ (or ‘extrinsic randomizing device’) upon which agents coordinate their decisions. In a proper sunspot equilibrium, the allocation of resources depends in a non-trivial way on sunspots. In this case, we say that sunspots matter; otherwise, sunspots do not matter. Sunspot equilibrium was introduced by Cass and Shell; see Shell (1977) and Cass and Shell (1982, 1983). Sunspot models are complete rational-expectations, general-equilibrium models that offer an explanation of excess volatility.

The authors regress a nine-factor model:

  • Rating
  • GDP Growth
  • GDP per capita
  • Budget Surplus
  • Primary Surplus
  • Debt Ratio
  • Inflation
  • Bond Yield
  • Credit Spread

These indicators explain 60 percent of the variance of sovereign bond ratings in our panel. All estimated coefficients possess the expected signs, though not all are significantly different from zero. Ratings are found to improve with higher income growth and income levels, or with better overall and primary budget situations. Ratings deteriorate when the debt ratio, inflation or government bond yields go up.

Applying the test proposed in Davies (1987), the null hypothesis of no break was rejected, and the break point was found to lie between a BBB+ and a BBB rating.[Footnote 23] Regression estimates for the resulting two segments are shown as regressions 2a and 2b in Table 3. The differences between the two segments are striking. The slope coefficients differ by a ratio of ten to one. While, on average, a rating downgrade by one notch raises interest rates by 0.3 percentage points when ratings are in the range between AAA and A-, which comprises seven categories, a downgrade by one step raises the interest rate by 3.12 percent once the rating has fallen into the B segment or below.

That makes sense, at least qualitatively – default probabilities are not linear by notch, according to the agencies.

Now they get to the really controversial part:

This means that at sovereign debt ratings outside the A-segment, i.e. of BBB+ or worse, a downgrade generates an increase in the interest rate that justifies or more than justifies the initial downgrade, and may trigger a spiral of successive and eventually disastrous downgrades. Only countries in the A-segment of the rating scale appear to be safe from this, at least when the shocks to which they are exposed are only small. However, this only applies when marginal rating shocks occur. Larger shocks, and these have not been the exceptions during Europe’s sovereign debt crisis, may even jeopardize countries which were in secure A territory. We may illustrate this by looking at the impulse responses of equation (11) to shocks of various kinds and magnitudes. This provides us with insolvency thresholds that identify the size of a rating downgrade required to destabilize the public finances of countries with a given sovereign debt rating.

When rating shocks last, however, as has apparently been the case for the eurozone’s PIGS members, much smaller unsubstantiated rating changes may play havoc with government bond markets and suffice to run initially healthy countries into trouble, as shown in Figure 6(b). In this scenario, an arbitrary, yet persistent, downgrade by two notches would trigger a downward spiral in a country with an AA rating. Rising interest rates would call for further downgrades, which would appear to justify the initial downgrade as an apparently good forecast.
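The feedback loop they describe can be caricatured in a few lines of code. The slopes below are the paper’s estimates (about 0.3 percentage points of yield per notch within the AAA to A- range, 3.12 below the break); the downgrade rule itself is my own invention, not the authors’ equation (11):

    # Toy self-fulfilling downgrade spiral.  Notches measure distance
    # below AAA (A- is 6; the estimated break point sits around BBB+, 7).
    A_RANGE_SLOPE, LOW_SLOPE = 0.3, 3.12  # pp of yield per notch (paper)
    DEFAULT = 22  # roughly the bottom of the rating scale

    def spiral(initial_notches, shock_notches, steps=10):
        notches = initial_notches + shock_notches
        for _ in range(steps):
            slope = A_RANGE_SLOPE if notches < 7 else LOW_SLOPE
            # Yield rise attributable to downgrades beyond fundamentals:
            extra_yield = slope * (notches - initial_notches)
            # Invented rule: one further notch per percentage point of
            # unjustified extra yield, capped at default.
            notches = min(DEFAULT, initial_notches + shock_notches +
                          int(extra_yield))
        return notches

    print(spiral(2, 1))  # AA territory, small shock: settles harmlessly
    print(spiral(7, 2))  # outside the A range: spirals straight to default

The point of the caricature is only the asymmetry: above the break point the extra yield never justifies a further notch, while below it each downgrade more than pays for the next.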

And then they get to the real meat:

A more detailed look at the dynamics of the effect of debt rating downgrades on interest rates revealed that at least for countries with sovereign debt ratings outside the A range even erroneous, arbitrary or abusive rating downgrades may easily generate the very conditions that do actually justify the rating. Combined with earlier evidence that many of the rating downgrades of the eurozone’s peripheral countries appeared arbitrary and could not be justified on the basis of rating algorithms that explain the ratings of other countries or ratings before 2009, this result is highly discomforting. It urges governments to take a long overdue close look at financial markets in general, and at sovereign bond markets in particular, and at the motivations, dependencies and conflicts of interest of key players in these markets.

This paper has S&P’s shorts in a knot, and they have indignantly replied with a paper by Moritz Kraemer titled S&P’s Ratings Are Not “Self-Fulfilling Prophecies”:

In questioning the agencies’ integrity, the authors appear to suggest that the agencies follow some hidden agenda that has led them to act “abusively”. As is usually the case with conspiracy theories, little by way of evidence or persuasive rationale is offered to explain who benefits from the agencies’ supposed “strategic” or “disastrous” downgrades. Alas, the reality is not nearly as spectacular: rating agencies take their decisions based on their published criteria and are answerable to regulators if they fail to do so.

The authors also claim that the agencies’ rating actions “cannot be justified” because they do not accord with a mechanistic “ratings algorithm” of the authors’ own devising. Apart from the fact that ratings are subjective opinions as to possible future creditworthiness (and, like other opinions, neither “right” nor “wrong”), the authors fail to justify why their algorithm has more merit than the published comprehensive methodologies of the rating agencies. Nevertheless, so persuaded are the authors of their own algorithm they admonish the agencies for “manipulating the market by deviating” from the authors’ “correct rating algorithm”!

Standard & Poor’s, for one, long ago rejected an algorithmic approach to sovereign ratings as simplistic and unable to account for the subtleties of a sovereign’s political and institutional behavior.

Even more seriously:

At the heart of the paper’s confusion is its treatment of causality and correlation. The paper suggests that investors react to rating changes by asking for higher interest rates when a rating is lowered, but provide no evidence for their claim. In fact, the authors probably cannot provide such evidence as their data set has merely an annual observation frequency. To show causality, the paper should present data that played out during a period bounded by at least two yearly observation points. With such limited data, one cannot determine what came first: rating action or interest movement, or, indeed, whether one caused a change in the other at all!

The suggestion that in Europe’s financial crisis, the underlying pattern was one of ratings causality is effectively contradicted by the fact that spreads did not react for several years to our downgrades (starting in 2004) of several eurozone periphery countries.

Until early 2009, the CDS-market traded swaps on Portugal as though it were a ‘AAA’ credit (i.e. four notches above our rating at the time). When sentiment changed rapidly, the market “downgraded” Portugal to around ‘B’ in 2010, a full eight notches below the S&P rating at the time. Suggesting that the relatively modest rating changes had caused this massive sell-off appears far-fetched.

And then they get downright nasty:

We note that under the paper’s algorithm Greece should have been downgraded by a mere 0.15 notches between 2009 and 2011. In our view, the algorithm therefore would have entirely missed the Greek default in early 2012, the largest sovereign restructuring in financial history. By contrast, far from having acted in an “arbitrary or abusive” manner, Standard & Poor’s anticipated Greece’s default well before it occurred.

Basel Committee Releases D-SIB Proposal For Comments

Friday, June 29th, 2012

In addition to tweaking the rules on liquidity, the Basel Committee on Banking Supervision has released a consultative document regarding A framework for dealing with domestic systemically important banks – important for Canada, since we’ve got six of ’em! Provided, of course, that OSFI is honest about the assignments, which is by no means assured:

Principle 2: The assessment methodology for a D-SIB should reflect the potential impact of, or externality imposed by, a bank’s failure.
….
Principle 8: National authorities should document the methodologies and considerations used to calibrate the level of HLA [Higher Loss Absorbency] that the framework would require for D-SIBs in their jurisdiction. The level of HLA calibrated for D-SIBs should be informed by quantitative methodologies (where available) and country-specific factors without prejudice to the use of supervisory judgement.

Principle 9: The HLA requirement imposed on a bank should be commensurate with the degree of systemic importance, as identified under Principle 5. In the case where there are multiple D-SIB buckets in a jurisdiction, this could imply differentiated levels of HLA between D-SIB buckets.

[Assessment Methodology Principle 2] 13. Paragraph 14 of the G-SIB rules text states that “global systemic importance should be measured in terms of the impact that a failure of a bank can have on the global financial system and wider economy rather than the risk that a failure can occur. This can be thought of as a global, system-wide, loss-given-default (LGD) concept rather than a probability of default (PD) concept.” Consistent with the G-SIB methodology, the Committee is of the view that D-SIBs should also be assessed in terms of the potential impact of their failure on the relevant reference system. One implication of this is that to the extent that D-SIB indicators are included in any methodology, they should primarily relate to “impact of failure” measures and not “risk of failure” measures.

Principle 7: National authorities should publicly disclose information that provides an outline of the methodology employed to assess the systemic importance of banks in their domestic economy.

[Higher Loss Absorbency Principle 8] 31. The policy judgement on the level of HLA requirements should also be guided by country-specific factors which could include the degree of concentration in the banking sector or the size of the banking sector relative to GDP. Specifically, countries that have a larger banking sector relative to GDP are more likely to suffer larger direct economic impacts of the failure of a D-SIB than those with smaller banking sectors. While size-to-GDP is easy to calculate, the concentration of the banking sector could also be considered (as a failure in a medium-sized highly concentrated banking sector would likely create more of an impact on the domestic economy than if it were to occur in a larger, more widely dispersed banking sector).

[Higher Loss Absorbency Principle 10] 40. The Committee is of the view that any form of double-counting should be avoided and that the HLA requirements derived from the G-SIB and D-SIB frameworks should not be additive. This will ensure the overall consistency between the two frameworks and allows the D-SIB framework to take the complementary perspective to the G-SIB framework.

Principle 12: The HLA requirement should be met fully by Common Equity Tier 1 (CET1). In addition, national authorities should put in place any additional requirements and other policy measures they consider to be appropriate to address the risks posed by a D-SIB.