The municipal rating scale has been discussed often on PrefBlog – most recently in Moody’s to Assign Global Ratings to Municipals … after all, municipals in the States are cousins to preferreds in Canada in that they are logically included in taxable fixed income accounts – although there are major differences in credit and term exposure, of course! Liquidity can be similar though.
Moody’s and Fitch are knuckling under to the pressure:
Moody’s Investors Service and Fitch Ratings took steps to address calls by public officials from California to Congress to rate municipal bonds by the same standards as those for debt sold by companies and countries.
Moody’s started taking comments on its plan to give state and local governments the option to get a so-called global-scale rating, based on the criteria used to assess corporations, for tax-exempt bonds beginning in May. Fitch named Robert Grossman to lead efforts by its public finance unit to explore whether corporate and municipal ratings should be blended.
…
When California sold $250 million of bonds to fund stem-cell research in October, the state paid $46,200 for the municipal scale rating, $25,000 more for the global scale and $6,250 a year for the life of the bond, Dresslar said. Moody’s municipal rating on the bonds is A1, while the global scale rating is Aaa.
If California, the most-populous U.S. state, had top credit ratings, it might save more than $5 billion over the 30-year life of $61 billion in yet-to-be-sold, voter-approved debt, [California State Treasurer Bill] Lockyer has said.
Whoosh! Assuming that savings of … what? 20-30bp annually can be realized at the stroke of a pen is more than just a little hard to swallow, but we’ll get to that in a minute. As I mentioned on March 19 I had an exchange with Naked Capitalism on the topic of Municipal ratings, on the comments to a virtually unrelated thread. I think the exchange is too interesting to linger unread in the comments of an old thread, and I’m too lazy to recast my thoughts … so I’ll extract comments here.
First up was an anonymous Naked Capitalism reader who had read my March 3 report:
Don’t count on help from The Iceheads either:
http://www.prefblog.com/
Naked Capitalism does not explain why all fault lies with the Credit Rating Agencies and not with the issuers and investors; nor does its author speculate why Moody’s, for instance, would choose to publish explanations of their municipal rating scale if it’s such a big secret.
With a response from Naked Capitalism writer Yves Smith:
He almost always takes issue with what I write.
For the record, the official policy of the rating agencies has been for many, many years that ratings are supposed to mean the same thing as regards default risk regardless of the type of asset rated.
They have drifted more and more from that policy but have not been terribly forthcoming (note that S&P in the Wall Street Journal yesterday attempted to maintain that the ratings were indeed consistent). Saying that someone is not forthcoming (as Rosner and Mason said in their extensively documented paper) is not the same as saying secret. They’ve chosen to say as little as they can publicly about the issue of the consistency of their ratings because they know their practices have shifted over time (while regs have been static) and they haven’t been candid.
More important, numerous regulations key off official ratings (“investment grade” being the most glaring). The very existence of those standards presupposes that the ratings standards are consistent. But a long-term drift from those standards has created a huge amount of damage, witness the behavior of AAA CDOs. And no AAA rated asset should be able to be cut in a single review by 12 or 16 grades, as has happened more than occasionally.
The rating agencies do not deserve to be defended, period. If it were possible to sue them, even under a standard that limited their liability, they would have gone out of business long ago. The embarrassment of what would be exposed in discovery would have led to a sharp curtailment of their role.
PrefBlog ought to know full well that the US muni market in particular is full of not-terribly-savvy investors who are ratings-dependent. The ratings are supposed to help solve the “caveat emptor” problem, not exacerbate it.
There were then twelve unrelated comments, after which I found the mention of PrefBlog while doing a vanity check and responded:
Yves Smith: PrefBlog ought to know full well that the US muni market in particular is full of not-terribly-savvy investors who are ratings-dependent.
As I understand it, this is precisely why a different scale has been used for the past 100 years. According to Moody’s: Compared to the corporate bond experience, rated municipal bond defaults have been much less common and recoveries in the event of default have been much higher. As a result, municipal investors have demanded, and rating agencies have provided, finer distinctions within a narrower band of potential credit losses than those provided for corporate bonds.
Like the bond markets themselves, Moody’s rating approach to municipal issuers has been quite distinct from its approach to corporate issuers. In order to satisfy the needs of highly risk averse municipal investors, Moody’s credit opinions about US municipalities have, since their inception in the early years of the past century, been expressed on the municipal bond rating scale, which is distinct from the corporate bond rating scale used for corporations, non-US governmental issuers, and structured finance securities.
Compared to Moody’s corporate rating practices, Moody’s rating system for municipal obligations places considerable weight on an overall assessment of financial strength within a very small band of creditworthiness. Municipal investors have historically demanded a ratings emphasis on issuer financial strength because they are generally risk averse, poorly diversified, concerned about the liquidity of their investments, and in the case of individuals, often dependent on debt service payments for income. Consequently, the municipal rating symbols have different meanings to meet different investor expectations and needs. The different meanings account for different default and loss experience between similarly rated bonds in the corporate and municipal sectors.
Moody’s also reviewed their consultations with real live investors in their testimony to the House Financial Services Committee.
Yves Smith:
James,
That is rating agency attempts at revisionist history, now that their practices are under the spotlight. Rating agencies have historically claimed that their ratings were consistent across issuer and product; indeed, why would so many regulations (Basel I and II, pension fund and insurance), simply designate gross ratings limitations (AAA, investment grade, and so on) without specifying the grade per type of issuer if it was known that the ratings were NOT consistent as to risk? That defies all logic.
Consider this statement from a paper published last year by Joseph Mason and Joshua Rosner:
The value of ratings to investors is generally assumed to be a benchmark of comparability it offers investors in differentiating between securities. Credit rating agencies (CRAs) have long argued that the ratings scales they employed were consistent across assets and markets. Not long ago Moody’s stated “The need for a unified rating system is also reflected in the growing importance of modern portfolio management techniques, which require consistent quantitative inputs across a wide range of financial instruments, and the increased use of specific rating thresholds in financial market regulation, which are applied uniformly without regard to the bond market sector.”6 In a similar pronouncement in 2001 Standard & Poor’s stated their “approach, in both policy and practice, is intended to provide a consistent framework for risk assessment that builds reasonable ratings consistency within and across sectors and geographies”.7
You can read more, and the citations, starting on page 8.
I have also seen (but can’t recall where) quotations of statements from the agencies in the early 1990s that were much firmer regarding the consistency of ratings.
The paper linked by Mr. Smith has been reviewed on Prefblog. Me:
indeed, why would so many regulations (Basel I and II, pension fund and insurance), simply designate gross ratings limitations (AAA, investment grade, and so on) without specifying the grade per type of issuer if it was known that the ratings were NOT consistent as to risk?
The Basel Accords are not quite so mechanical as all that – there is considerable leeway given to national regulators to interpret the principles and apply them to local conditions.
It is my understanding that General Obligation Municipals are assigned by definition a risk-weight of 20% regardless of rating (this is the same bucket as AAA/AA long-term ratings) while Revenue obligations are assigned a 50% risk-weight (which is the same bucket as “A” long-term ratings).
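To illustrate what those buckets mean in capital terms, here is a quick sketch using the standard 8% Basel minimum capital ratio. The weights are the ones described above as my understanding; this is illustrative arithmetic, not an authoritative statement of any national regulator’s rules:

```python
# Sketch: minimum capital a bank must hold against $100 of each exposure,
# under the standard Basel formula: capital = exposure x risk-weight x 8%.
# Risk weights as described above (illustrative, not authoritative).
MIN_CAPITAL_RATIO = 0.08

def required_capital(exposure: float, risk_weight: float) -> float:
    """Minimum capital for a given exposure and Basel risk-weight."""
    return exposure * risk_weight * MIN_CAPITAL_RATIO

# General Obligation municipals: 20% bucket (same as AAA/AA long-term ratings)
print(f"GO muni:  ${required_capital(100.0, 0.20):.2f} per $100")
# Revenue obligations: 50% bucket (same as single-A long-term ratings)
print(f"Revenue:  ${required_capital(100.0, 0.50):.2f} per $100")
```

So the rating itself matters less than the bucket the asset class lands in; moving a General Obligation bond from AAA to AA, for instance, changes nothing under this treatment.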
All this is mere hair-splitting, however. An investor who takes free advice without even asking what the advice means would be better advised to find an advisor.
The ratings agencies do what they do because they want to do it. If anybody has a better idea, they’re welcome to compete. Let a hundred flowers bloom, a hundred schools of thought contend!
Yves Smith:
James,
Competition is most certainly NOT open in the rating agency business. The SEC determines who is a “nationally recognized statistical rating organization.” It does not publish its criteria for how to become one. It took Egan-Jones, the most recent addition, eight to ten years to get the designation.
The Basel I rules made fairly strong use of ratings; Basel II permits more sophisticated organizations to use their own methodologies. But even the Fed’s discount window uses rating agency classifications to ascertain what is acceptable collateral and what haircut to apply.
Their role is well enshrined in regulations. Per Wikipedia:
Ratings by NRSRO are used for a variety of regulatory purposes in the United States. In addition to net capital requirements (described in more detail below), the SEC permits certain bond issuers to use a shorter prospectus form when issuing bonds if the issuer is older, has issued bonds before, and has a credit rating above a certain level. SEC regulations also require that money market funds (mutual funds that mimic the safety and liquidity of a bank savings deposit, but without FDIC insurance) comprise only securities with a very high rating from an NRSRO. Likewise, insurance regulators use credit ratings from NRSROs to ascertain the strength of the reserves held by insurance companies.
The rating agencies are a protected oligopoly and as a result, are highly profitable. They are not charities.
Me:
The most recently recognized US NRSRO is LACE Financial, registered 2008-2-11. Egan-Jones was 2007-12-21.
The big agencies are indeed quite profitable, regardless of whether or not they’re a protected oligopoly. This is why they are currently under attack by the not-quite-so-profitable, not-quite-so-respected subscription agencies.
Rules for becoming a NRSRO were published in the Federal Register.
You do not need to be a NRSRO to get the “Rating Agency” exemption from Regulation FD, nor do you need to be an NRSRO to sell me a subscription to your rating service.
You do, however, need to distribute your ratings freely to get the Regulation FD exemption; this is an aspect of the regulations I don’t like at all. It may be logical as far as it goes (the information will not be exploited for gain) but it means that investors cannot perform a fully independent check of the publicly available ratings.
As for the regulatory role of the NRSRO agencies … that’s the regulators’ problem, first and last. I can sympathize with the intent; and the implementation is a tip of the hat to the big agencies’ long and highly successful track record; but the agencies cannot be blamed if the regulators have decided to follow their advice blindly.
Yves Smith:
James,
I stand corrected on the criteria being available now, but note per above, the NRSRO designation was established in 1975, yet per your link, the guidelines for qualifying were not published till 2007. Egan Jones suffered repeated rejections of its application with no explanation.
In fact, if you had read the Wikipedia article, the SEC had published a “concept memo” in 2003 which set forth criteria that made new entry just about impossible:
The single most important factor in the Commission staff’s assessment of NRSRO status is whether the rating agency is “nationally recognized” in the United States as an issuer of credible and reliable ratings by the predominant users of securities ratings.
This, as you can imagine, is a massive chicken and egg problem. You have to be “nationally recognized” to be an NRSRO, yet who is going to take the risk of building up a sufficiently large operation when the approval barrier is high and ambiguous? This provision seemed intended to close the gate behind the current NRSROs.
Again per Wikipedia, the SEC provided guidelines only as a result of Congressional action:
In 2006, following criticism that the SEC’s “No Action letter” approach was simultaneously too opaque and provided the SEC with too little regulatory oversight of NRSROs, the U.S. Congress passed the Credit Rating Agency Reform Act. This law required the SEC to establish clear guidelines for determining which credit rating agencies qualify as NRSROs. It also gives the SEC the power to regulate NRSRO internal processes regarding record-keeping and how they guard against conflicts of interest, and makes the NRSRO determination subject to a Commission vote (rather than an SEC staff determination). Notably, however, the law specifically prohibits the SEC from regulating an NRSRO’s rating methodologies.
I never said that Egan Jones was the most recent rating agency; the Wikipedia link clearly shows LACE.
It is not hard to imagine that those two additions, which bring the list to nine, were in response to the recent criticism of the incumbents.
Me:
Do you have any problems with the manner in which NRSRO certification is awarded now, or is this yesterday’s battle?
I remain a little unclear on the link between NRSRO certification and the rating scale used for municipalities – can you clarify?
Additionally, it seems to me that, should municipalities be rated on the corporate scale, then they’ll be basically split between AAA and AA, with a few outliers. Will this truly improve the utility of the ratings to Joe Lunchbucket? It seems to me that – given a rational response to a lemons problem, and in the absence of independent analysis – issuers with greater financial strength will achieve no benefit, and end up paying more for funding. Have you seen any commentary on this?
Me again:
I’ve had one other thought about the possible effects of a two-grade rating scale. The prior comment referred to the intra-grade effect on ratings, but there may well be an inter-grade effect as well.
If our good friend Joe Lunchbucket is presented with a list of, say, 100 offerings and their (current) ratings, he sees half a dozen or so categories – he also sees that a recognizable name like California is not in the highest rank.
This multiplicity of grades serves to emphasize the idea that the ratings represent graduated scales. I suspect that if the same list is presented to him with only two significantly populated rating classes, he might consider these to be indications of “good” and “bad” … or, perhaps, pass/fail.
Thus, it is entirely possible that spreads between municipals in the (corporate scale) AAA & AA classes will widen from historical norms – which will cost the lower-grade issuers a lot of money – unless, of course, they purchase evil bond insurance.
After all, municipal bonds are not in much competition with corporates for Joe Lunchbucket’s investment – they’re in competition with each other.
I recognize that it is currently so fashionable to blame the ratings agencies for all the world’s ills that little consideration will have been given to the probable effects of changing a 100-year-old system, let alone any actual work. But if you come across any informed research that addresses the above possibility, I would be very interested to see it.
I don’t know what the answer is. It does seem to me that introducing a two-grade rating scale will lead to problems and overall higher coupons payable by issuers, due to both intra-grade and inter-grade effects … but I am not so arrogant as to assume I know that for sure! I will go so far as to say that California Treasurer Bill Lockyer is dreaming in technicolour if he truly believes that California’s interest cost on bond issues will become equal to what AAA (municipal) bonds are yielding now (there’s only so much investment money to go around) … but I suspect he probably knows better and is just grandstanding for his adoring voters.
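For what it’s worth, the 20-30bp figure I scoffed at earlier is simple arithmetic on Lockyer’s own numbers. A quick sketch, undiscounted and straight-line, so it shows the order of magnitude rather than any precise figure:

```python
# Back-of-envelope check on the claimed savings (illustrative, undiscounted):
# $5 billion saved over the 30-year life of $61 billion of debt.
savings = 5e9
principal = 61e9
years = 30

annual_saving = savings / years                 # dollars per year
spread_bp = annual_saving / principal * 10_000  # basis points of principal

print(f"Implied annual saving: {spread_bp:.0f}bp")  # roughly 27bp per year
```

That is a very large spread to expect to recapture at the stroke of a pen.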
If anybody can find some good discussion on this matter – behavioural finance is not what I do, and neither is US municipals! – please let me know.
Home-made Indices with Intra-Day Updating
Thursday, April 17th, 2008
Assiduous Reader kaspu has complained about the volatility of the S&P/TSX Preferred Share Index (TXPR on Bloomberg) – or, at least, the reported volatility.
The problem is that this index is based on actual trades; hence, it can bounce around a lot when 100 shares trade at the ask, $1 above the bid. For instance, today:
This sort of behaviour is endemic to indices created by small shops without much market knowledge or experience. Readers in need of indices with more precision may wish to use the HIMIPref™ Indices, which are, of course, based on much less volatile bid prices.
“Gummy” has announced a new spreadsheet, available from his website. This spreadsheet allows the download of bid and ask prices – and lots of other information – for stocks reported (with a 20 minute delay) by Yahoo. It strikes me that with minimal effort, one could reproduce TXPR (using the defined basket of CPD) and update the index at the touch of a button, with minimal set-up time required.
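The mechanics are simple enough that a sketch may be useful. This is a minimal, hypothetical version of the idea in Python rather than a spreadsheet: the tickers, share counts, and prices are all made up, and the divisor method shown is the standard one for cap-weighted indices, not necessarily the exact TXPR methodology:

```python
# Minimal sketch of a bid-price-based index (hypothetical basket and prices).
# A cap-weighted index level is total basket value divided by a fixed divisor;
# using bid prices instead of last-trade prices avoids the jumpiness caused
# by an odd lot trading at the ask, $1 above the bid.

def index_level(bids: dict[str, float], shares: dict[str, float],
                divisor: float) -> float:
    """Index level = sum(bid price x shares outstanding) / divisor."""
    total = sum(bids[t] * shares[t] for t in shares)
    return total / divisor

# Hypothetical basket: shares outstanding (millions) and current bid prices.
shares = {"AAA.PR.X": 10.0, "BBB.PR.Y": 8.0, "CCC.PR.Z": 12.0}
bids = {"AAA.PR.X": 25.10, "BBB.PR.Y": 24.80, "CCC.PR.Z": 23.95}

# Calibrate the divisor so the index starts at 1000 on the base date;
# thereafter only the bid prices change and the level moves with them.
base_divisor = sum(bids[t] * shares[t] for t in shares) / 1000.0
print(round(index_level(bids, shares, base_divisor), 2))
```

Refreshing the bid quotes (for instance via the delayed Yahoo download the spreadsheet performs) and recomputing the level is then a one-button operation.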
The Gummy Stuff website, by the way, is reliable AS FAR AS IT GOES. Dr. Ponzo is math-oriented to a much greater degree than investment-oriented and does not always respect hallowed fixed income market conventions. In other words, I have found that things are properly calculated in accordance with the (usually stated) assumptions, but these assumptions are not necessarily the ones I might make when performing a calculation with the same purpose.
With respect to Kaspu's question about other indices … the latest CPD literature references the “Desjardins Preferred Share Universe Index”, which is new to me … and I have no further information. Claymore may be preparing for a showdown with the TSX about licensing fees (you should find out what they want for DEX bond data … it’s a scandal).
Additionally, there is the BMO Capital Markets “50” index, but that is available only to Nesbitt clients … maybe at a library, if you have a really good one nearby that gets their preferred share reports.
Update, 2008-5-1: “Gummy” has announced a spreadsheet that does exactly this! Just watch out for dividend ex-Dates!