Banking, Economics, Exception, Finance, Political economy, Risk, Uncertainty

Central banks are facing a credibility trap

Quite a few commentators have noted that central bankers have become rather less boring of late. Since the 2008 financial crisis, central banks have taken on new roles and responsibilities. They have experimented with a whole range of unconventional monetary policies. And, in the process, they have gained considerably in power and influence.

There has been less attention to a key paradox underlying central bankers’ new roles on the world stage: they are being forced to govern through exceptions in an era in which rule-following (particularly the holy grail of the 2% inflation target) has become the ultimate source of policy credibility. Where central bankers are supposed to stick to the rules, they have found themselves endlessly making exceptions, promising that one day things will return to normal.

This paradox poses real challenges for efforts to foster a sustained global economic recovery. Governing through exceptional policies is always a politically fraught undertaking, particularly over the long term, but it is even more difficult in a context in which the dominant convention is one of strict rule-following.

Since the early experiments with monetarism in the late 1970s and early 1980s, most central banks have moved towards an increasingly rule-based approach to monetary policy, with inflation targeting becoming the norm in many countries in recent years.

Yet today we are faced with a situation in which the rules no longer apply but are still being invoked as if they did.

A recent Buttonwood column notes that the Bank of England has missed its inflation target “almost exactly half the time” since 2008. The European Central Bank (ECB) has effectively expanded its narrow mandate, which formally requires it to make price stability its top priority, by arguing that employment and other issues are crucial to achieving it. Yet the ECB and the Bank of England continue to act as if the old rules still apply.

If we look beyond the narrow rules that are supposed to be governing central bank actions and examine the wider changes in their recent policies, we find similar patterns. Scratch an unconventional monetary policy and you will find a kind of economic exceptionalism: an argument that the crisis that we face is extreme enough that it requires a radical but temporary suspension of economic rules and norms.

Most of the unconventional monetary policies that have been tried to date, and just about all of those that have been proposed as future possibilities if we face a renewed global recession, break quite radically with existing norms. Negative interest rates weren’t even supposed to be economically possible (until they were tried), while quantitative easing (a central bank’s buying up bonds by massively increasing the size of its balance sheet) still carries a whiff of irresponsibility linked to its past as a way for governments to avoid fiscal retrenchment by “printing money.”

More recent proposals include helicoptering money into the government’s or the public’s accounts, abolishing cash to make low interest rates effective, and even introducing a reverse incomes policy—a government-enforced increase in wages (as opposed to the wage controls of the 1970s) to try to get inflation going.

All of these existing and potential policies break with current economic norms, and all are being pitched as temporary, exceptional measures that are (or may be) necessary in the face of an extreme crisis.

Ironically, rule-following was designed precisely to avoid this problem. It came into its own as an influential approach to monetary policy in the wake of the destabilizing 1970s, with their stop-go economic policies and rampant inflation. Mainstream economists came to love rule-based monetary policy, as did politicians—not just neoconservatives like Margaret Thatcher and Ronald Reagan, who first championed the approach, but eventually the more centrist politicians who followed, like Tony Blair and Bill Clinton, as well as today’s mixed lot.

A rule-governed approach to policy was designed to be both politically and economically stabilizing—to do away with the problem and even the possibility of exceptions by removing not only governments’ but even central bankers’ discretion: just stick to the rule, and everything will work out. A tidy, efficient, depoliticized (although certainly not apolitical) approach to monetary policy.

Yet rules only seem great until they no longer apply. A rule that pretends it can always apply (or at least, as Colin Hay puts it in his introductory blog, in the 99.9% of times that seem relevant) inevitably runs into serious problems when an exception becomes necessary.

Of course, as Alan Greenspan has noted, the victory of rules over discretion was never complete in practice. But it was an extremely powerful narrative—one that promised that central banks’ (and governments’) commitments to low inflation and economic stability were credible because they were constrained to follow the rules.

It was also a very effective narrative—one that convinced markets that anything other than rule-following is likely to be destabilizing. As central banks begin to face the limits of those rules, their earlier persuasiveness has come back to haunt them: a recent paper from some Federal Reserve staff notes that although a higher inflation target would make sense in the United States, increasing it could well backfire if market actors believed that it would be too inflationary.

This fixation on rule-following has thus put central bankers into a credibility trap. If bankers admit that the rules no longer apply, then they risk losing their credibility, since market actors have come to believe the mantra that rules—particularly low inflation targets—are the only way to ensure sound monetary policy. On the other hand, if they don’t admit the limits of the rules and continue lurching from exception to exception, they will eventually lose credibility as the gap between rhetoric and reality widens.

Central banks are damned if they do admit the limits of rules and damned if they don’t.

Of course, the most viable solution to this trap is for governments to stop relying so heavily on central banks in the first place and start taking some responsibility for economic recovery through concerted fiscal action (something that the Canadian government has at least started to do). Yet for that kind of fiscal action to work, governments have to convince the markets that they believe in it enough to stick to their guns and follow through—a rather unlikely scenario in today’s austerity-driven times.

As the potential for renewed economic crisis continues to grow, this credibility gap will only widen—as central bankers and governments find themselves lurching from exception to exception, refusing to question the neoliberal rules that no longer seem to apply.

This blog was first posted on the Sheffield Political Economy Research Institute’s website.

Banking, Canada, Finance, International development, Risk, Uncertainty

Canada needs to do a better job of managing financial uncertainty

Published in the Hill Times, May 25, 2015

As Canadians, we pride ourselves on how well our financial regulations coped with the 2008 financial crisis. Given this attitude, it’s not surprising that Canadian policymakers have avoided a major overhaul to our regulations in response.

Yet we need to make sure that this pride in our system does not lead to complacency. Rather than just looking backwards to how the Canadian financial system performed in the last crisis, we also need to look forwards and recognize how much the global economy is changing.

Those changes take two key forms. First, the economy has become much more uncertain since the crisis. And second, a number of other countries have raised the bar for financial regulation. If Canadians don’t catch up with these two major shifts, we may well find ourselves in trouble.

Whether we look at the International Monetary Fund’s latest Global Financial Stability Report, or the Bank of Canada’s recent Financial System Review, it is clear that both the global and national economies have become increasingly uncertain. That uncertainty defines some of the most important aspects of our economy, whether we look at the likely medium-term impact of the decline in oil prices, the potential for a hard landing in an overheated housing market, or the possibility that Canadians will wake up one day and realize that their household debt level is unsustainable.

This environment of profound uncertainty poses serious policy challenges.

In the good old days of the so-called “Great Moderation,” from the mid-1980s to the financial crisis, policymakers were able to focus on what Donald Rumsfeld famously described as “known unknowns”—the kinds of risks to which policymakers could assign definite probabilities. Today, we are faced instead with a great many “unknown unknowns”—the kinds of uncertainty that resist formal modeling, as Bank of Canada Governor Stephen Poloz noted in a recent paper.

How should we regulate financial markets in the face of this kind of uncertainty? Very carefully. As it becomes increasingly difficult to predict what kinds of complex risks the economy might face, we need to err on the side of caution.

As good Canadians, we might assume that we already have some of the most cautious financial regulations around. Yet this is no longer the case.

Yes, Canada implemented the capital adequacy standards set out in the Basel III accord very quickly. Yet our government has treated those requirements as the gold standard, when they were designed to be a bare minimum. By contrast, the United Kingdom and the United States are in the process of implementing more demanding standards, including higher and stricter leverage ratios. While Canada was one of the only countries with a leverage ratio requirement before the crisis, we are now starting to look relatively lax.

Even more striking is the fact that Canada, unlike every other major country, has no central body responsible for coordinating efforts to manage systemic risk. The Canadian regulatory universe is fragmented, with important pieces of the regulatory puzzle managed by half a dozen agencies plus a multitude of provincial authorities. The Bank of Canada does an admirable job of identifying potential sources of systemic risk, but it has few tools for acting on them.

Canadian authorities have engaged in macroprudential regulation in recent years—most notably through their efforts to cool down the housing market. Yet, as a recent IMF report points out, those efforts have unintentionally pushed those who no longer qualify for prime mortgages into the under-regulated world of “shadow lending,” potentially increasing systemic risk. In order to manage an uncertain economy, someone needs to be able to look at the system as a whole: to connect the dots that link the regulations governing consumer credit, mortgages, interest rates, and big, small, and “shadow” banking institutions.

What about the usual financial sector response—that more regulation will impose costs on Canadian financial institutions, and thus on the economy more generally? We should have learned by now that the cost of another crisis would be much greater still. Given the triple threat of uncertain oil prices, a volatile housing market and rising consumer debt, another crisis would likely hit us harder than the last one. It’s worth being well prepared for that kind of risk.

Posted on the CIPS Blog June 5, 2015. 

Canada, Failure, International development, Measurement, Risk, Theory

What counts as policy failure — and why it matters

When things go wrong in politics, the word ‘failure’ gets bandied around a lot. In recent weeks, we’ve heard about the failure of Canadian drug policy (as admitted by Stephen Harper), the failure of Canadian diplomatic efforts to get Barack Obama on board for the Keystone XL Pipeline (as declared by his critics), and the failure of European leaders and the ‘troika’ to find a long-term solution to the problems posed by the Greek economy (as acknowledged by most sensible commentators).

These declarations of failure, of course, are not uncontested. In each case, there are those who would challenge the label of failure altogether, and others who would lay the responsibility for failure on different shoulders. Labeling something a failure is a political act: it involves not just identifying something as a problem, but also suggesting that someone in particular has failed. These debates about failures are crucial ways in which we assess responsibility for the things that go wrong in political and economic policy.

The most interesting debates about policy failure, however, occur when what’s at stake is what counts as failure itself.

When we say that something or someone has failed, we are using a particular metric of success and failure. Formal exams provide the clearest example of assessment according to a scale of passing and failing grades. In most cases, such metrics are taken for granted. (Even if some students might not agree that my grading scale is fair, I am generally very confident when I fail a student.) But sometimes, if a failure is serious enough, or if failures are repeated over and over, those metrics themselves come into question. (I did once bump all the exam grades up by five percent in a course because they were so out of line with the students’ overall performance.)

In politics, these contested failures force both policymakers and the wider community to re-examine not just the policy problems themselves but also the measures used to evaluate and interpret them. These moments of debate are very important. They are often highly technical, focusing on the nuts and bolts of evaluation and assessment. Yet they are also fundamental, since they force us to ask both what we want success to look like and to what extent we can really know when we’ve found it.

In my recent book, Governing Failure, I trace the central role of this kind of contested failure in one particular area: the governance of international development policy. Policy failures such as the persistence of poverty in Sub-Saharan Africa, the Asian financial crisis and the AIDS crisis raised very serious questions about the effectiveness of the ‘Washington Consensus’, and ultimately led aid organizations ranging from the International Monetary Fund and the World Bank to the (then) Canadian International Development Agency to question and reassess their policies.

The ‘aid effectiveness’ debates of the 1990s and 2000s emerged out of these contested failures, as key policymakers and critics questioned past definitions of success and failure and sought to develop a new understanding of what makes aid work or fail. In the process, they shifted away from a narrowly economic conception of success and failure towards one that saw institutional and other broader political reforms as crucial to program success.

International development is not the only area in which we have seen a significant set of failures precipitate this kind of debate about the meaning of success and failure itself. The 2008 financial crisis was also seen by many as a spectacular failure. The crisis produced wide-ranging debates not just about who was to blame, but also about how it was possible for domestic and international policymakers and market actors to get things so wrong that they were predicting continued success even as the global economy was headed towards massive failure.

In the aftermath of that crisis, there was a striking amount of public interest in the basic metrics underpinning the financial system. People started asking just how risks were evaluated and managed and how credit rating agencies arrived at the ratings that had proven to be so misleading. In short, they wanted to understand how the system measured success and failure. Many of the most promising efforts to respond to the crisis—such as attempts to measure and manage systemic risk—are also aimed at developing better ways of evaluating what is and isn’t working in the global economy, defining success in more complex ways.

Of course, not every failure is a contested one. Many have argued that the reasons for the failure of Canadian drug policy are less contested than Harper has suggested. Critics note that the Conservative government’s unwillingness to take on board the lessons of innovative policies such as safe injection sites goes a long way towards explaining this policy failure.

On the other hand, some failures—such as the failure not just of Greece but also of much of Europe to restart their economies—should be more contested than they currently are. The International Monetary Fund did begin opening up this kind of deeper discussion when its internal review of its early interventions in Greece suggested that the organization had been too quick to promote austerity. Yet the narrow terms of the troika’s conversations about the future of Greece suggest that there is an awful lot of room for more creative thinking about the path towards policy success, not just in Europe but around the world.

These kinds of debates about how we define and recognize success and failure can be crucial turning points in public policy. They force us, at least for a moment, to set aside some of our easy assumptions about what works and what doesn’t, and to ask ourselves what we really mean by success.

This blog post first appeared on the CIPS blog on March 6, 2015.

Canada, Finance, Measurement, Political economy, Results, Risk, Uncertainty

Why we need to take economic uncertainty seriously

If you have been reading the financial press over the past week, you know that the global economy’s chances are looking a lot more uncertain these days. What you may not know, however, is that this more recent upswing in uncertainty and volatility is part of a much broader pattern in the global economy—one that poses some real challenges for how policymakers do their job.

Stephen Poloz, the Governor of the Bank of Canada, just released a working paper in which he suggests that the economic climate has become so profoundly uncertain since the global financial crisis of 2007-2008 that it resists formal modeling.

Because of this, the Bank will no longer engage in the policy of ‘forward guidance’, in which it provides markets with a clear long-term commitment to its current very low interest rate policy. The Bank is changing this policy not because it is any less committed to low interest rates in the medium term, but because it does not want to give the markets a false sense of security about the predictability of the future. Instead, Poloz suggests that policymakers should do a better job of communicating the uncertainties facing the economy and the Bank itself as it formulates its policies.

Why should we care about this seemingly minor change in the Bank of Canada’s policy? Because it underlines just how much our governance practices are going to have to change in order to cope with the increasing uncertainty of the current economic and political dynamics.

It’s ironic that this warning is coming from the Bank of Canada. Central banks do not like change. They are just about the most conservative government institutions around.

Since the late 1970s, central bankers have been wedded to the idea that the most straightforward monetary policies are the best—ideally taking the form of a simple rule that can be expressed as a quantitative target, like the Bank of Canada’s inflation target. Economists argue that such policy rules are stabilizing because they avoid giving too much discretion to central bankers, thus reducing uncertainty about the Bank’s plans and increasing the credibility of their commitment to low inflation.

Yet these simple rules are effective only as long as the models that they are based on can accurately capture an economy’s dynamics and needs. If the economy is too complex and uncertain for such straightforward forms of quantification, then simple rules are at best misleading, and at worst destabilizing.

Poloz’s recent paper suggests that he recognizes some of these dilemmas—and the importance of coming to terms with them quickly in the current period of economic volatility.

The Bank of Canada’s Governor is not alone in recognizing these uncertainties. Janet Yellen, the current Chair of the United States Federal Reserve Board, has also pointed to the limits of simple rules in guiding central bank policy in the current context. Her predecessor, Ben Bernanke, referenced Donald Rumsfeld’s concept of ‘unknown unknowns’ to describe the extreme uncertainty that faced market participants during the recent financial crisis.

Yet, with this paper, Poloz seems to go further than his American counterparts in recognizing the implications of these unknown unknowns. In the same speech cited above, Bernanke argued that the failures of the global financial crisis were failures of engineering and management, and not of the underlying science of economics.

Poloz, by contrast, describes the work of monetary policymaking as a “craft” (not a science), and suggests that it is too complex to be treated as a form of engineering. The uncertainty that we are dealing with today, he suggests, “simply does not lend itself as easily to either mathematical or empirical analysis, or any real sort of formalization.”

This is a remarkable departure from the kind of numbers-driven rhetoric that we have heard from the Harper government in recent years.

The Canadian government has been increasingly preoccupied with measuring results—in health care, in international development, and across government-funded programs. Last May, when announcing additional funds for the health of mothers and children in developing countries, Stephen Harper argued, “You can’t manage what you can’t measure.”

Poloz’s paper suggests that, on the contrary, because of the sheer complexity and uncertainty of the current global order, we have no alternative but to find ways of managing what we can’t measure. As I argue in my recent book, rather than using ever-more dubious indicators and targets to drive policy on everything from health to the economy, we need to find better ways of assessing, communicating and managing the true complexity of the policy challenges that we face.

This will not be an easy task, either technically or politically. It will take time to educate a public—not to mention a market—that has become used to simplified pronouncements.

The less we can rely on objective measurements and simple rules, the more careful we have to be about ensuring democratic accountability for policy decisions—through the political process and through an informed and active media.

And perhaps the biggest challenge that this new reality presents is the need for our politicians to heed Poloz’s suggestion that they not only recognize the inescapability of “uncertainty, and the policy errors it can foster,” but that they wear them “like an ill-fitting suit . . . that is, with humility.”

Humility tends to be in scarce supply in political circles these days. That too will need to change if we’re going to develop the kinds of creative policy tools that we need to manage the uncertain times to come.

First posted on the CIPS Blog.

Failure, Global governance, International development, Measurement, Political economy, Results, Risk, Theory

Hedging bets: our new preoccupation with failure

Nobody likes to admit failure—least of all government-funded development organizations in hard economic times. Yet recent years have seen a number of prominent development agencies confess to failure. The International Monetary Fund (IMF) admitted its failure to recognize the damage that its overzealous approach to austerity would cause in Greece. The World Bank President, Jim Yong Kim, has adopted the idea of Fail Faires from the information technology industry, where policymakers share their biggest failures with one another. The United States Agency for International Development’s (USAID) Chief Innovation Officer also expressed some interest in organizing a Fail Faire, and the agency eventually did hold an “Experience Summit” in 2012.

This interest in failure is central to a broader shift in how development organizations—and other national and international agencies—have begun to work. As I argue in my new book, Governing Failure, these organizations are increasingly aware of the possibility of failure and are seeking to manage that risk in new ways.

This preoccupation with failure is relatively new. The 1980s and early 1990s—the era of ‘structural adjustment’ lending—was a time of confidence and certainty. Policymakers believed that they had found the universal economic recipe for development success.

The 1990s marked a turning point for confidence in the development success ‘recipe’. Success rates for programs at the World Bank began to decline dramatically; critics started to label the 1980s a ‘decade of despair’ for sub-Saharan Africa; and both the AIDS pandemic and the Asian financial crisis reversed many gains in poverty reduction. These events made policymakers more aware of the uncertainty of the global environment and of the very real possibility of failure—lessons only reinforced by the recent financial crisis.

What happens to policymakers when they are more aware of the possibility of failure? On one hand, they can accept the fact of uncertainty and the limits of their control, becoming creative—even experimental—in their approach to solving problems. Or they can become hyper-cautious and risk-averse, doing what they can to avoid failure at all costs. We can see both reactions in international development circles.

A major shift in development practice over the past two decades has been the recognition that political ‘buy-in’ matters for policy success. As development organizations tried to foster greater country ownership of economic programs, they became quite creative. By reducing conditionality and delivering more non-earmarked aid to countries’ general budgets, development organizations shifted more decision-making responsibility to borrowing governments in an effort to create an open-ended and participatory process more conducive to policy success.

But development organizations also took a more cautious turn in their response to the problem of failure. The social theorist Niklas Luhmann first introduced the idea of ‘provisional expertise’ to describe this cautious trend in modern society. He pointed to the increase in risk-based knowledge that could always be revised in the face of changing conditions.

Risk management has become omnipresent in development circles, as it has elsewhere. No shovel turns to build a school without a multitude of assessments of possible risks to the project’s success, allowing the organizations involved to hedge against possible failures.

An even more prominent trend in development policy is the current focus on results, which is particularly popular in the Canadian government. Few organizations these days do not justify actions in terms of the results that they deliver: roads built, immunizations given, rates of infant mortality reduced.

At first glance, this focus appears to be anything but cautious: what greater risk than publishing the true results of your actions? Yet it is not always possible to know the results of a given policy. The problem of causal attribution is a thorny one in development practice, particularly when any number of different variables could have led to the results an organization claims as its own.

Some agencies such as the U.S. Millennium Challenge Corporation (MCC) have tried to get around this problem through sophisticated counterfactual analysis and the use of control groups in their aid programs. Yet even MCC staff members recognize that designing programs in order to gain the best knowledge about results can come at the expense of other priorities.

If donors can count as successes only those results that can be counted, they may well find themselves redefining their priorities to suit their evaluation methodology—and their political needs. In most cases, results are donor-driven: they are not calculated and published for the benefit of the recipient country but for the donor’s citizens back home, who want to know that their taxes are being spent wisely. So building roads and providing immunizations suddenly becomes more attractive than undertaking the long, slow, and complex work of transforming legal and political institutions. Caution wins out in the end.

Which kind of approach to failure is winning out today: experimentalist or cautious? Sadly, it seems that the earlier experiment with country ownership has lost momentum, in part because the forms of participation involved were so much less meaningful than had been hoped. At the same time, the results agenda has only become more numbers-driven in the last few years. As agencies have grown more risk-averse after the global financial crisis, they have sought to make results-evaluation more standardized—and ultimately less responsive to the particular needs of local communities.

There is still hope, as the recent admissions of failure by major development organizations suggest. Yet the very fact that the USAID event was ultimately named an ‘Experience Summit’ rather than a ‘Fail Faire’ is telling: even when leaders admit to failure, it appears that they can’t resist hedging their bets.

This blog post draws from my recent book, Governing Failure: Provisional Expertise and the Transformation of Global Development Finance, published by Cambridge University Press.

Earlier versions of this essay appeared on RegBlog.org and the CIPS blog.