Archive for the ‘Interesting External Papers’ Category

BIS Releases Quarterly Review, September 2009

Monday, September 14th, 2009

The Bank for International Settlements has released its Quarterly Review, September 2009 of International banking and financial market developments, with articles:

  • Overview: cautious optimism on gradual recovery
  • Highlights of international banking and financial market activity
  • The future of securitisation: how to align incentives?
  • Central counterparties for over-the-counter derivatives
  • The cost of equity for global banks: a CAPM perspective from 1990 to 2009
  • The systemic importance of financial institutions

The essay on central counterparties for OTC derivatives is rather disappointing. It takes as an article of faith that regulatory oversight will improve the market’s stability; I am not so easily convinced. Politicians will always be more procyclical than the most manic-depressive market participant, and while increased controls may reduce the frequency of financial crashes, I will assert that they will increase their severity. Unfortunately, the regulators now have a talking point and you can bet they’ll be using it to assure themselves of continued employment and prestige … at least until the next paradigm shift.

It is of particular interest that the big push for a central clearinghouse is coming simultaneously with the pretense of horror at the existence of systemically important banks (pretense? Yes, pretense.) The essay does not address this contradiction.

IMF Releases September "Finance & Development"

Thursday, September 10th, 2009

The International Monetary Fund has released the September 2009 edition of Finance & Development, with articles:

  • Sustaining a Global Recovery
  • What’s In and Out in Global Money
  • Rebuilding the Financial Architecture
  • Looking Ahead
  • Growth after the Crisis
  • The Future of Reserve Currencies
  • Overhauling the System
  • Anticipating the Next Crisis
  • Faces of the Crisis
  • Dial Growth

This is not the most heavyweight or technical of journals, but provides good reviews of the global situation – say, about on the level of the Economist.

Boston Fed Publishes 1H09 Research Review

Wednesday, September 9th, 2009

The Boston Fed has published its Research Review, Jan. ’09 – Jun ’09 with abstracts, summaries and a few charts from their recent publications.

Article titles are:

  • Making Sense of the Subprime Crisis
  • Reducing Foreclosures
  • Reviving Mortgage Securitization: Lessons from the Brady Plan
  • Why Are (Some) Consumers (Finally) Writing Fewer Checks?
  • Another Hidden Cost of Incentives: The Detrimental Effect on Norm Enforcement
  • Has Overweight Become the New Normal? Evidence of a Generational Shift in Body Weight Norms
  • Empirical Estimates of Changing Inflation Dynamics
  • Geographic Variations in a Model of Physician Treatment Choice with Social Interactions
  • The Optimal Level of Deposit Insurance Coverage

Three of these papers were discussed on PrefBlog following their release; links are provided to the PrefBlog posts.

Should Central Banks Target the Price Level?

Friday, September 4th, 2009

The Kansas City Fed has published a paper by George A. Kahn titled Beyond Inflation Targeting: Should Central Banks Target the Price Level?

The PDF is copy-protected – jerks!

Price Level Targeting was also discussed in the BoC Spring 2009 Review.

The Great Moderation, the Great Panic and the Great Contraction

Wednesday, September 2nd, 2009

Mr Charles Bean, Deputy Governor for Monetary Policy and Member of the Monetary Policy Committee, Bank of England, delivered the Schumpeter Lecture at the Annual Congress of the European Economic Association, Barcelona, 25 August 2009.

Selections from the text of the lecture published by the Bank for International Settlements:

The first distortion…

Creditors – especially if they are households rather than
sophisticated financial market participants – may not even factor in the implications of higher leverage for the possibility of default. And even if they do, the debt may be partially or wholly underwritten by the state, with the cost of the insurance only imperfectly passed back to the bank. Similarly, the bank may be thought to be too important to be allowed to fail, in which case people might expect an injection of capital by the state to make good abnormal losses. In any of these cases, there is an incentive for the bank to raise leverage. Moreover, the lower is the perceived uncertainty associated with the loans, the more the bank can afford to leverage up, while maintaining the same uncertainty over the return on its capital. So the environment of the Great Moderation would have been particularly conducive to intermediaries increasing the leverage of their positions.
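The leverage mechanics Bean describes can be sketched in a few lines, assuming (for illustration only; the figures and the linear relationship are my own, not the lecture’s) that equity-return volatility scales as leverage times asset volatility:

```python
# Sketch: if equity volatility = leverage x asset volatility, a bank
# targeting a fixed uncertainty over the return on its capital can lever
# up as perceived asset volatility falls.  All figures are illustrative.

def max_leverage(target_equity_vol, perceived_asset_vol):
    """Leverage consistent with a fixed tolerance for equity-return volatility."""
    return target_equity_vol / perceived_asset_vol

# Great Moderation effect: perceived asset volatility halves from 2% to 1% ...
print(max_leverage(0.20, 0.02))  # 10x leverage
print(max_leverage(0.20, 0.01))  # ... and supportable leverage doubles to 20x
```

The point of the sketch is that nothing about the bank’s risk appetite needs to change: a calmer-looking environment alone is enough to justify double the leverage.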

The second distortion…

Moreover, a considerable amount of the remaining risk was contained in institutions which, while not formally recognised as banks, engaged in exactly the same sort of maturity transformation, financing long-term assets by short-term debt instruments. These included entities such as conduits, which housed the securitised loans and then financed them by selling short-term paper. But in many cases these entities had back-up credit lines to the supporting bank, so that when funding difficulties arose, the securitised loans in effect came back onto the bank’s balance sheet. And even where there was no formal obligation to act as a lender of last resort, originators often chose to provide back-up finance in order to protect their name in funding markets.

The motive for setting up these off-balance-sheet entities was entirely one of regulatory arbitrage. Off-balance-sheet vehicles were not required to hold capital in the same way as a bank would if the loans were on their balance sheet. So it appeared to be a neat way to boost profits without having to raise more capital. The Banco de España, the Spanish banking supervisor, insisted that Spanish banks would have to treat conduits and the like as on balance sheet for capital purposes. As a result, Spain did not see the mushrooming of these off-balance-sheet vehicles.

And the third distortion…

One unintended consequence of financial innovation was that it enabled clever traders to create positions with considerable embedded leverage – that is, portfolios requiring little payment up front, but whose returns amplified changes in the value of the underlying assets. Traders then had a natural incentive to gravitate towards these types of highly risky instruments.

A related problem is that it is extremely difficult for management to observe the risk being taken on by their traders, particularly when innovative financial instruments have unusual return distributions. Take, for example, a deeply out of the money option. This pays a steady income premium and has little variation in value when the underlying instrument is a long way from the strike price, but generates rapidly escalating losses in bad states of the world. In good times this looks like a high return, low risk instrument. Only in very bad states of the world do the true risks taken on become apparent.
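Bean’s deep out-of-the-money example is easy to simulate. The sketch below (all parameters are my own assumptions, not the lecture’s) sells a deep OTM put: the position collects a small premium almost every period, then takes a loss many times that premium in the rare crash.

```python
import random

# Sketch of the payoff profile of selling a deep out-of-the-money put:
# a steady small premium in normal times, rare and rapidly escalating
# losses in bad states.  All parameters are illustrative assumptions.
random.seed(0)

premium, strike = 1.0, 70.0

def period_pnl():
    # Underlying starts near 100; with 1% probability it crashes to around 50.
    if random.random() < 0.01:
        price = 50.0 + random.gauss(0, 5)   # crash state
    else:
        price = 100.0 + random.gauss(0, 5)  # normal state
    return premium - max(strike - price, 0.0)  # short-put P&L

pnls = [period_pnl() for _ in range(10_000)]
print(f"typical P&L: {max(pnls):.1f}  (the premium)")
print(f"worst P&L:  {min(pnls):.1f}  (many times the premium)")
```

Until the crash arrives, the realized track record is exactly the “high return, low risk” mirage Bean describes.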

Knightian uncertainty about CDO returns increased information problems:

A typical CDO comprises a large number and variety of RMBS, including a mix of prime and sub-prime mortgages from a variety of originators. On the face of it, this might seem like a good thing as it creates diversification. However, even more than with plain vanilla RMBS, it becomes impossible to monitor the evolution of the underlying risks – it is akin to trying to unpick the ingredients of a sausage. That may not matter too much when defaults are low and only the holders of the first, equity, tranche suffer any losses. Holders of the safer tranches can in that case sit back and relax – a case of rational inattention. But once defaults begin to rise materially, it matters a lot what such a security contains. And with highly non-linear payoffs, returns can be extremely sensitive to small changes in underlying conditions.
When defaults on some US sub-prime mortgages originated in 2006 and 2007 started turning out much higher than expected, there was a realisation that losses could be much greater on some of these securities than previously believed. And a growing realisation of the informational complexity of these securities made them difficult to price in an objective sense. Essentially, investors switched from believing that returns behaved according to a tight and well-behaved distribution to one in which they had very little idea about the likely distribution of returns – a state of virtual Knightian uncertainty (Caballero and Krishnamurthy, 2008).
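The non-linearity Bean mentions is easy to illustrate. In the stylized sketch below (attachment points and pool-loss rates are my assumptions), a mezzanine tranche goes from untouched to wiped out as pool losses move over a span of just a few percentage points:

```python
# Stylized tranche-loss function: losses hit the pool, eat through the
# equity tranche first, and only then the mezzanine.  Attachment points
# and pool-loss rates below are assumptions for illustration.

def tranche_loss(pool_loss, attach, detach):
    """Fraction of a tranche lost, given the loss rate on the whole pool."""
    return min(max(pool_loss - attach, 0.0), detach - attach) / (detach - attach)

# A mezzanine tranche attaching at 3% and detaching at 7% of pool losses:
for pool_loss in (0.02, 0.04, 0.06, 0.08):
    print(f"pool loss {pool_loss:.0%} -> tranche loss "
          f"{tranche_loss(pool_loss, 0.03, 0.07):.0%}")
# a 2% -> 8% move in pool losses takes the tranche from 0% to 100% lost
```

“Rational inattention” is a sensible strategy only on the flat part of that payoff; once pool losses approach the attachment point, every basis point of underlying performance matters enormously.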

So who’s to blame?

First, in my view it would be a mistake to look for a single guilty culprit. Underestimation of risk born of the Great Moderation, loose monetary policy in the United States and a perverse pattern of international capital flows together provided fertile territory for the emergence of a credit/asset-price bubble. The creation of an array of complex new assets that were supposed to spread risk more widely ended up destroying information about the scale and location of losses, which proved to be crucial when the market turned. And an array of distorted incentives led the financial system to build up excessive leverage, increasing the vulnerabilities when asset prices began to fall. As in Agatha Christie’s Murder on the Orient Express, everyone had a hand in it.

Sheila Bair Writes Op-Ed on Super-Regulator

Tuesday, September 1st, 2009

Sheila Bair, head of the FDIC has written a New York Times op-ed piece, The Case Against a Super-Regulator:

The truth is, no regulatory structure — be it a single regulator as in Britain or the multiregulator system we have in the United States — performed well in the crisis.

The principal enablers of our current difficulties were institutions that took on enormous risk by exploiting regulatory gaps between banks and the nonbank shadow financial system, and by using unregulated over-the-counter derivative contracts to develop volatile and potentially dangerous products.

The reference to derivative contracts and shadow banks is almost certainly made with AIG specifically in mind. Assiduous Readers will know that I have no problems with these two things … what made AIG a systemic threat was regulatory incompetence that allowed regulated institutions to have a lot of AIG paper on their books without sufficient collateral or capital charges.

We can’t put all our eggs in one basket. The risk of weak or misdirected regulation would be increased if power was consolidated in a single federal regulator.

Hear, hear!

One advantage of our multiple-regulator system is that it permits diverse viewpoints. The Federal Deposit Insurance Corporation voiced strong concerns about the Basel Committee on Banking Supervision’s relatively relaxed rules for determining how much capital banks should have on hand.

If I remember correctly, it was the FDIC that insisted that the leverage ratio be maintained as a regulatory measure.

A Financial Conditions Index for the US

Monday, August 31st, 2009

The Bank of Canada has announced a new discussion paper by Kimberly Beaton, René Lalonde, and Corinne Luu, A Financial Conditions Index for the United States:

The financial crisis of 2007–09 has highlighted the importance of developments in financial conditions for real economic activity. The authors estimate the effect of current and past shocks to financial variables on U.S. GDP growth by constructing two growth based financial conditions indexes (FCIs) that measure the contribution to quarterly (annualized) GDP growth from financial conditions. One FCI is constructed using a structural vector-error correction model and the other is constructed using a large-scale macroeconomic model. The authors’ results suggest that financial factors subtracted around 5 percentage points from quarterly annualized real GDP growth in the United States in 2008Q4 and 2009Q1 and should subtract another 5 percentage points from growth in 2009Q2. Moreover, to assess the effect of financial shocks in terms of policy interest rate equivalent units, the authors convert the effect of financial developments on growth into the number of basis points by which the federal funds rate has been tightened. The authors show that the tightening of financial conditions since mid-2007 is equivalent to about 300 basis points of tightening in terms of the federal funds rate. Thus, the aggressive monetary easing undertaken by the Federal Reserve over the financial crisis has not been sufficient to offset the tightening of financial conditions. Finally, in a key contribution to the literature, the authors assess the relationship between financial shocks and real activity in the context of the zero lower bound. They find that the effect of the tightening of financial conditions on GDP growth in the current crisis may have been amplified by as much as 40 per cent due to the fact that policy interest rates reached the zero lower bound.

In particular, our MFCI adjusted for the binding lower bound suggests that financial factors subtracted around 5 percentage points from quarterly annualized growth in 2008Q4 and 2009Q1. Moreover, in order to assess the effect of financial shocks in terms of policy interest rate equivalent units, we have converted the effect of financial developments on growth into the number of basis points by which the federal funds rate has been tightened. The results suggest that the net tightening of financial conditions since mid-2007 is equivalent to about 300 basis points of tightening in terms of the federal funds rate, despite the actual 500 basis point decline in the policy rate. Given the ongoing disruptions in financial markets, the degree of tightening of price and non-price credit conditions and the substantial losses in wealth over 2008, and the long transmission lags between a shock to financial conditions and its impact on the real economy, these financial conditions are expected to continue to dampen growth going forward.

S&P US Preferred Stock Primer

Saturday, August 29th, 2009

Standard & Poor’s published a Preferred Stock Primer dated 2009-3-25:

Preferred stock returns have low correlations with common stock returns, making them good diversifiers. They also have relatively low correlations with bonds, with expected volatility and returns between those of common stocks and bonds. This characteristic makes them a good complement to a bond portfolio.

Later on in the paper they cite a preferred/bond correlation of 0.309 and a preferred/common correlation of 0.491.
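Using the quoted preferred/bond correlation (the annualized volatilities below are my assumptions, not S&P’s), the diversification benefit can be checked with the standard two-asset portfolio-volatility formula:

```python
from math import sqrt

# Sketch using the preferred/bond correlation quoted above (0.309).
# The volatility figures are illustrative assumptions only.

def portfolio_vol(w1, vol1, w2, vol2, corr):
    """Volatility of a two-asset portfolio."""
    var = (w1 * vol1) ** 2 + (w2 * vol2) ** 2 + 2 * w1 * w2 * vol1 * vol2 * corr
    return sqrt(var)

bond_vol, pref_vol = 0.06, 0.12            # assumed annualized volatilities
# 80/20 bond/preferred mix at correlation 0.309:
mixed = portfolio_vol(0.8, bond_vol, 0.2, pref_vol, 0.309)
naive = 0.8 * bond_vol + 0.2 * pref_vol    # what perfect correlation would give
print(f"diversified vol {mixed:.3%} vs weighted-average vol {naive:.3%}")
```

With a correlation of 0.309, the blended portfolio’s volatility comes in materially below the weighted average of the two assets’ volatilities, which is the diversification claim S&P is making.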

There are many types of preferreds and preferred trademark names. However, most preferreds are based on two main structural models: Traditional Preferreds and Trust Preferreds. Traditional Preferreds are closer to stocks and are generally REIT preferreds, foreign preference shares trading in the U.S. and straight preferred stocks issued by U.S. corporations.

Traditional preferreds are a senior form of equity which rank above common shares but below corporate debt in creditor standings. Dividends from traditional preferreds are taxed as capital gains. Trust Preferreds are closer to bonds and thus rank higher in general creditor standings than regular preferreds. Dividends from trust preferreds are taxed as ordinary income. As of February 27, 2009, the S&P Preferred Stock Index contained 35 trust preferreds out of a total 72 constituents.

Somewhat confusingly, they later state:

Some preferred stocks have qualified dividend distributions that are taxed in the same manner as qualified dividends of common stocks. Others have their dividends charged as interest income and are subject to higher tax rates. Those that do have qualified dividend distributions have holding period requirements that are higher than those for common stocks. Investors need to contact their tax advisors to assess their tax situation. The favorable tax treatment for some preferreds is a result of tax laws passed in 2003, and this may change in the future.

Figuring out the tax status of US preferreds sounds like a full-time job in itself!

The modern era for preferred stocks started in the early 1990s. Since then, the preferred stock market has grown rapidly, quadrupling in size by 2005 to $193 billion. Concerns about default and conversion risk shrunk the market to about $100 billion in early 2008. For a comparative perspective, the total size of the U.S. stock market was in the range of $9.5 trillion; and the U.S. corporate bond market was in the range of $4 trillion.

Why Have Canadian Banks Been More Resilient?

Friday, August 28th, 2009

A VoxEU piece by Rocco Huang of the Philadelphia Fed and Lev Ratnovski of the IMF is based on an IMF working paper, Why are Canadian Banks More Resilient? that is of great interest (paper also available directly from the IMF):

Reviewing the data, we note that the pre-crisis capital and liquidity ratios of Canadian banks were not exceptionally strong relative to their peers in other OECD countries. However, Canadian banks clearly stood out in terms of funding structure: they relied much less on wholesale funding, and much more on depository funding, much of which came from retail sources such as households. We posit that the funding structure of Canadian banks was the key determinant of their resilience during the turmoil.

Although bank capital ratio taken by itself was not a robust predictor of resilience, a more specific dummy variable capturing critically low (under 4 percent) capital was a significant predictor of sharp equity declines and probability of government assistance. Low balance sheet liquidity did well in predicting extreme stress.

The second part of this paper (Section 3) reviews regulatory and structural factors that may have reduced Canadian banks’ incentives to take risks and contributed to their relative resilience during the turmoil. We identify a number of them: stringent capital regulation with higher-than-Basel minimal requirements, limited involvement of Canadian banks in foreign and wholesale activities, valuable franchises, and a conservative mortgage product market.

We measure capitalization as a ratio of total equity over total assets. This leverage-based measure has a number of shortcomings stemming from its simplicity: it is not risk-weighted and does not consider off-balance sheet exposures. However, it is well comparable across countries. We find that this simple measure of capitalization turns out to be a good predictor of bank performance during the turmoil, particularly by identifying vulnerabilities stemming from critically low bank capital (Table 2).

This last point is not particularly earth-shattering: see the first chart (reproduced from an IMF report) in the post Bank Regulation: The Assets to Capital Multiple.

We assess the impact of these ex-ante fundamentals on bank performance during the crisis. We use three objective and subjective measures of performance.

The first is the equity price decline from January 2007 to January 2009, which is an all-in summary measure of value destruction during the turmoil, resulting from credit losses, writedown on securities, and dilution from new equity issuances including government capital injections.

The second (pair) of measures are two dummy variables identifying whether that decline was greater than the median (70 percent) or extraordinarily large (85 to 100 percent), respectively.

The third measure of performance is a dummy capturing the degree of government intervention that a bank required during the turmoil: whether it was used to avoid extreme stress or to address a less dire weakness.

I have a problem with the use of equity prices as a measure of performance. It measures not the stability of the bank but the market’s perception of that stability. On the other hand, we have to live in the real world, and perceptions can become reality very quickly.

We now turn to bank liquidity. We measure balance sheet liquidity as the ratio of liquid assets over total debt liabilities. We use the BankScope measure of liquid assets, which includes cash, government bonds, short-term claims on other banks (including certificates of deposit), and where appropriate the trading portfolio. BankScope harmonizes data from different jurisdictions to arrive at a globally comparable indicator. Data for bank liquidity is shown in Table 3.

Note that a large number of U.S. banks have very scarce balance sheet liquidity. The key reason is that those banks, in their risk-management, treated mortgage-backed securities and municipal bond as liquid, and reduced holdings of other more reliably liquid assets such as government securities. Our liquidity measure does not incorporate holdings of such private and quasi-private securities. With hindsight, it is fair to say that this narrow definition is a more accurate measure of liquidity during crisis.

I have a real problem with the incorporation of claims on other banks in a narrow definition of liquid assets – the same problem I have with the preferential treatment accorded bank paper in the rules for risk-weighting assets. Encouraging banks to hold each other’s paper seems to me to be a recipe for ensuring that bank crises become systemic with great rapidity.

Yet overall, balance sheet liquidity was a weaker predictor of resilience to the turmoil than the capital ratio. Although low liquidity was a clear handicap (of twelve least liquid banks, eight had equity price declines of more than 70 percent, and four required a significant government intervention), a large number of banks from different countries (U.S., UK, Switzerland) experienced significant distress despite being relatively liquid. Another way to think about the resilience effects of balance sheet liquidity is to recognize that it can provide only temporary relief from funding pressures. During a protracted turmoil, more fundamental determinants of resilience—such as capital or funding structure—should play a bigger role.

We now turn to bank funding structure (depository vs. wholesale market funding). The financial turmoil has originally propagated through wholesale financial markets, some of which effectively froze on occasions. Our measure of funding structure, a ratio of depository funding over total assets, seeks to reflect banks’ exposure to rollover risks — the wholesale market’s refusal to roll over short-term funding, often based only on very mild negative information or rumors (Huang and Ratnovski, 2008).

And it seems to me that this last paragraph supports my argument.

Canadian banks are clearly the “positive outliers” among OECD banks in the ratio of depository funding to total assets. On this ratio, almost all large Canadian banks are in the top quartile of our sample. Anecdotal evidence also suggests that a higher fraction (than in the U.S.) of Canadian bank deposits are “core deposits,” i.e., transaction accounts and small deposits, which are “stickier” than large deposits.

One likely reason for Canadian banks’ firm grip of deposit supply is their ability to provide one-stop service in mutual funds and asset management. Unlike in the U.S. Canadian banks have been historically universal banks, and there is relatively less competition for household savings from other alternative investment vehicles.

This might be used as an argument to reduce the choices available to Canadians even further. You can bet the banks’ lobbyists will have copies of this paper tucked into their briefcases during the next revision of the Bank Act.

Regression results are shown in Table 5.

The main specification (columns 1, 4, 7, 10) shows that depository funding significantly and robustly explains bank performance during the credit turmoil, consistent with initial casual observations of the data. Balance sheet illiquidity is a good predictor of particularly rapid deteriorations in bank conditions (government intervention under extreme stress or equity decline above 85 percent). However, interestingly, the capital ratio appears as an insignificant explanatory variable.

Assets-to-capital multiple. In addition to risk-based capital, Canada uses an assets-to-capital multiple (inverse leverage ratio) calculated by dividing the institution’s total assets by total (tiers 1 and 2) capital.

This is not quite correct; the ACM includes off-balance-sheet elements in the numerator.
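With that correction applied, a sketch of the calculation (all figures hypothetical):

```python
# Sketch of the assets-to-capital multiple with the off-balance-sheet
# correction noted above.  All figures are hypothetical.

def acm(on_balance_assets, off_balance_exposures, tier1, tier2):
    """Assets-to-capital multiple: total exposures over total capital."""
    return (on_balance_assets + off_balance_exposures) / (tier1 + tier2)

# Ignoring off-balance-sheet exposures understates effective leverage:
print(acm(500.0, 0.0, 20.0, 5.0))    # 20.0
print(acm(500.0, 100.0, 20.0, 5.0))  # 24.0
```

The whole point of including the off-balance-sheet elements in the numerator is visible in the second line: the same balance sheet looks 20% more levered once conduit-type exposures are counted.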

Finally, the Canadian mortgage market is relatively conservative, with a number of factors contributing to the prudence of mortgage lending (see Kiff, 2009). Less than 3 percent of mortgages are subprime and less than 30 percent of mortgages are securitized (compared with about 15 percent and 60 percent respectively in the United States prior to the crisis). Mortgages with a loan-to-value ratio of more than 80 percent need to be insured for the whole amount (rather than the portion above 80 percent as in the United States). Mortgages with a loan-to-value ratio of more than 95 percent cannot be underwritten by federally-regulated depository institutions. To qualify for mortgage insurance, mortgage debt service-to-income ratio should usually not exceed 32 percent and total debt service 40 percent of gross household income. Few fixed-rate mortgages have a contract term longer than five years.

I suggest the last point is the most critical one here. If the CMHC had not stepped up to buy securitized mortgages at the height of the crisis, how many of these mortgages could have been rolled over? A widespread failure to roll them would have been catastrophic. The liquidity advantage of Canadian banks is heightened by the fact that so much of their lending has a maximum term of five years.

This research is clearly still in its early stages, but the paper is vastly superior to the OSFI puff-piece published in May.

DBRS Adjusts SplitShare Rating Methodology

Thursday, August 27th, 2009

DBRS has published its new Methodology: Rating Canadian Split Share Companies and Trusts, August 2009:

DBRS applies a combination of quantitative and qualitative analysis in its Preferred Share rating process. The quantitative analysis includes using an historical value-at-risk (VaR) framework to assess the likelihood of large portfolio losses based on historical data. DBRS uses VaR results together with qualitative analysis relating to general macroeconomic factors and to certain industries or companies to which the Portfolio will be exposed.

DBRS Preferred Share Rating: Minimum Downside Protection Required (Net of Agents’ Fees and Offering Expenses)

Rating         Downside Protection   Asset Coverage
               (per DBRS)            (per HIMI Calculation)
Pfd-2 (high)   57%                   2.3+:1
Pfd-2          50%                   2.0:1
Pfd-2 (low)    44%                   1.8-:1
Pfd-3 (high)   38%                   1.6+:1
Pfd-3          33%                   1.5:1
Pfd-3 (low)    29%                   1.4+:1
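The asset coverage figures appear to follow from the DBRS downside protection figures as 1 / (1 − protection). That is my inference from the numbers, not a stated DBRS formula:

```python
# Conversion from DBRS downside protection to HIMI asset coverage.
# The formula 1 / (1 - protection) is my inference from the table,
# not DBRS's published methodology.

def asset_coverage(downside_protection):
    """Asset coverage ratio implied by a given level of downside protection."""
    return 1.0 / (1.0 - downside_protection)

for rating, protection in [("Pfd-2", 0.50), ("Pfd-3", 0.33), ("Pfd-3 (low)", 0.29)]:
    print(f"{rating}: {protection:.0%} protection -> "
          f"{asset_coverage(protection):.1f}:1 coverage")
```

Running this reproduces the 2.0:1, 1.5:1 and 1.4:1 figures in the table above, which supports the inference.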

Due to the unique risk of structured Preferred Shares (i.e., exposure to equity market fluctuations), DBRS will generally not assign a rating in the Pfd-1 range to Preferred Shares unless a de-leveraging mechanism is in place to provide greater protection on the repayment of Preferred Share principal. If a de-leveraging mechanism is in place, a portion of the Portfolio equal to the principal amount of Preferred Shares outstanding will be liquidated and invested in cash or cash equivalents if the Portfolio NAV declines by a predetermined percentage. In addition to the de-leveraging mechanism, there are other structural features to mitigate declines in downside protection that are addressed in this methodology, including the suspension of Capital Share distributions if the NAV drops below a predetermined level.

When rating split share transactions, DBRS will assign higher ratings to Issuers with a Preferred Share dividend coverage ratio sufficiently greater than 100%.

Level of Diversification                            Adjustment to Minimum Downside Protection Required (Multiple)
Strong by industry and by number of securities      1.0x (i.e., no change)
Adequate by industry and by number of securities    1.0x to 1.2x
Adequate by number of securities, one industry      1.2x to 1.3x
Single entity                                       1.3x to 1.5x

In general, DBRS views the strategy of writing covered calls as an additional element of risk for Portfolios because of the potential for the Portfolio to give up unrealized gains when the option gets called and, at the same time, as part of the Portfolio’s mandate, the security may need to be repurchased in the market at the higher price. Furthermore, an option-writing strategy relies on the ability of the investment manager. The investment manager has a large amount of discretion to implement its desired strategy, and the resulting trading activity is not monitored as easily as the performance of a static Portfolio. Relying partially on the ability of the investment manager rather than the strength of a split share structure is a negative rating factor.

DBRS uses a variation of the historical VaR method to assess the likelihood of large declines in downside protection. VaR is the amount of loss that is expected to be exceeded with a given level of probability over a specified time period. For example, if a Portfolio has a one-day VaR of $1 million with a probability of 5%, there is a 5% probability that the Portfolio will lose at least $1 million in a one-day period.

Alternatively, there is a 95% probability that the Portfolio will lose no more than $1 million in a one-day period. Using the historical method, daily returns are calculated for a given Portfolio using price data for a historical period of time specified by DBRS. Daily returns are then sorted from lowest to highest. If there are 100 daily returns and a probability of 5% is desired, the 5% VaR would be approximately equal to the fifth worst return.
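The historical method described above amounts to sorting the returns and reading off a percentile. A minimal sketch, with made-up daily returns in place of real price data:

```python
import random

# Minimal sketch of the historical VaR method described above, using
# made-up daily returns in place of real price data.

def historical_var(returns, probability):
    """Loss exceeded with roughly the given probability (historical method)."""
    ordered = sorted(returns)                  # worst return first
    index = int(probability * len(ordered))    # e.g. ~5th worst of 100 at 5%
    return -ordered[index]                     # VaR quoted as a positive loss

random.seed(1)
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]
print(f"one-day 5% VaR: {historical_var(daily_returns, 0.05):.2%}")
```

Note that the whole method stands or falls on the representativeness of the historical sample fed into it, which is exactly the issue taken up below.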

Why daily?

The key consideration in gathering historical data is the time period used. There is a balance between collecting enough returns and avoiding irrelevant data due to a major shift in the Portfolio’s environment that may decrease the value of using data observed prior to the shift. DBRS will generally aim to use ten years of historical data to calculate the probability of a large decline in downside protection. Shorter periods will be used if ten years of data is not available for a particular Portfolio; however, the comparison of split share Portfolios will always be completed using identical time periods.

The VaR methodology used here doesn’t make a whole lot of sense to me. Why daily data? Why ten-year periods? It seems to me that it would make more sense, for instance, to use all available data over time periods that at the very least approximate the time period of the prediction … that is to say, if a bank-based split share has asset coverage of 2.0:1 and five years to maturity, what is the probability of a 5-year loss of 50% for the index? What’s the probability of such a loss if there is only one year to maturity?
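The horizon-matched calculation I am suggesting can be sketched as follows: roll a window equal to the term to maturity through a long price history and count the losing windows. The price series below is simulated purely so the code runs; in practice one would feed in real index data.

```python
import random

# Sketch of a horizon-matched loss-frequency calculation: count rolling
# windows (of length equal to the term to maturity) over which the index
# lost at least 50%.  The price series is simulated for illustration only;
# real index data would be used in practice.

def horizon_loss_frequency(prices, window, loss=0.50):
    """Fraction of rolling windows with a loss of at least `loss`."""
    windows = [(i, i + window) for i in range(len(prices) - window)]
    hits = sum(prices[j] / prices[i] <= (1.0 - loss) for i, j in windows)
    return hits / len(windows)

random.seed(7)
prices = [100.0]
for _ in range(2520):                      # ~10 years of daily prices
    prices.append(prices[-1] * (1.0 + random.gauss(0.0003, 0.012)))

one_year, five_year = 252, 5 * 252
print(f"1-yr 50% loss frequency: {horizon_loss_frequency(prices, one_year):.2%}")
print(f"5-yr 50% loss frequency: {horizon_loss_frequency(prices, five_year):.2%}")
```

The virtue of this approach is that the statistic being estimated matches the question actually being asked of the structure, rather than being extrapolated from one-day moves.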

What they’re doing is:

(4) Using the initial amount of downside protection available to the Preferred Shares, determine the dollar loss required for the Preferred Shares to be in a loss position (i.e., asset coverage ratio is less than 1.0)

(5) Solve for the probability that will yield a one-year VaR at the appropriate dollar-loss amount for the
transaction.

Contrary to the methodology used, I will assert that the probability of a one-year loss of 50% in a diversified bank portfolio is GREATER THAN the probability of a five-year loss of 50%.

And as for:

(6) Determine the implied long-term bond rating by comparing the probability of default with the DBRS corporate cumulative default probability table.

(7) Link the implied bond rating to the appropriate Preferred Share rating using an assumption that the
preferred shares of a company should be rated two notches lower than the company’s issuer rating.

I’m gonna keep thinking! I’m gonna keep an open mind! But off the top of my head I can’t figure out why this makes sense.