Why we need to take economic uncertainty seriously

If you have been reading the financial press over the past week, you know that the global economy’s prospects are looking a lot more uncertain these days. What you may not know, however, is that this recent upswing in uncertainty and volatility is part of a much broader pattern in the global economy—one that poses some real challenges for how policymakers do their job.

Stephen Poloz, the Governor of the Bank of Canada, just released a working paper in which he suggests that the economic climate has become so profoundly uncertain since the global financial crisis of 2007-2008 that it resists formal modeling.

Because of this, the Bank will no longer engage in the policy of ‘forward guidance’, in which it provides markets with a clear long-term commitment to its current very low interest rate policy. The Bank is changing this policy not because it is any less committed to low interest rates in the medium term, but because it does not want to give the markets a false sense of security about the predictability of the future. Instead, Poloz suggests that policymakers should do a better job of communicating the uncertainties facing the economy and the Bank itself as it formulates its policies.

Why should we care about this seemingly minor change in the Bank of Canada’s policy? Because it underlines just how much our governance practices are going to have to change in order to cope with the increasing uncertainty of the current economic and political dynamics.

It’s ironic that this warning is coming from the Bank of Canada. Central banks do not like change. They are just about the most conservative government institutions around.

Since the late 1970s, central bankers have been wedded to the idea that the most straightforward monetary policies are the best—ideally taking the form of a simple rule that can be expressed as a quantitative target, like the Bank of Canada’s inflation target. Economists argue that such policy rules are stabilizing because they avoid giving too much discretion to central bankers, thus reducing uncertainty about the Bank’s plans and increasing the credibility of their commitment to low inflation.

Yet these simple rules are effective only as long as the models that they are based on can accurately capture an economy’s dynamics and needs. If the economy is too complex and uncertain for such straightforward forms of quantification, then simple rules are at best misleading, and at worst destabilizing.

Poloz’s recent paper suggests that he recognizes some of these dilemmas—and the importance of coming to terms with them quickly in the current period of economic volatility.

The Bank of Canada’s Governor is not alone in recognizing these uncertainties. Janet Yellen, the current Chair of the United States Federal Reserve Board, has also pointed to the limits of simple rules in guiding central bank policy in the current context. Her predecessor, Ben Bernanke, referenced Donald Rumsfeld’s concept of ‘unknown unknowns’ to describe the extreme uncertainty that faced market participants during the recent financial crisis.

Yet, with this paper, Poloz seems to go further than his American counterparts in recognizing the implications of these unknown unknowns. In that same speech, Bernanke argued that the failures of the global financial crisis were failures of engineering and management, and not of the underlying science of economics.

Poloz, by contrast, describes the work of monetary policymaking as a “craft” (not a science), and suggests that it is too complex to be treated as a form of engineering. The uncertainty that we are dealing with today, he suggests, “simply does not lend itself as easily to either mathematical or empirical analysis, or any real sort of formalization.”

This is a remarkable departure from the kind of numbers-driven rhetoric that we have heard from the Harper government in recent years.

The Canadian government has been increasingly preoccupied with measuring results, in health care, international development, and across government-funded programs. Last May, when announcing additional funds for the health of mothers and children in developing countries, Stephen Harper argued, “You can’t manage what you can’t measure.”

Poloz’s paper suggests that, on the contrary, because of the sheer complexity and uncertainty of the current global order, we have no alternative but to find ways of managing what we can’t measure. As I argue in my recent book, rather than using ever-more dubious indicators and targets to drive policy on everything from health to the economy, we need to find better ways of assessing, communicating and managing the true complexity of the policy challenges that we face.

This will not be an easy task, either technically or politically. It will take time to educate a public—not to mention a market—that has become used to simplified pronouncements.

The less we can rely on objective measurements and simple rules, the more careful we have to be about ensuring democratic accountability for policy decisions—through the political process and through an informed and active media.

And perhaps the biggest challenge that this new reality presents is the need for our politicians to heed Poloz’s suggestion that they not only recognize the inescapability of “uncertainty, and the policy errors it can foster,” but that they wear them “like an ill-fitting suit . . . that is, with humility.”

Humility tends to be in scarce supply in political circles these days. That too will need to change if we’re going to develop the kinds of creative policy tools that we need to manage the uncertain times to come.

First posted on the CIPS Blog.

Hedging bets: our new preoccupation with failure

Nobody likes to admit failure—least of all government-funded development organizations in hard economic times. Yet recent years have seen a number of prominent development agencies confess to failure. The International Monetary Fund (IMF) admitted its failure to recognize the damage that its overzealous approach to austerity would cause in Greece. The World Bank President, Jim Yong Kim, has adopted the idea of Fail Faires from the information technology industry, where policymakers share their biggest failures with one another. The United States Agency for International Development’s (USAID) Chief Innovation Officer also expressed some interest in organizing a Fail Faire, and the agency eventually did hold an “Experience Summit” in 2012.

This interest in failure is central to a broader shift in how development organizations—and other national and international agencies—have begun to work. As I argue in my new book, Governing Failure, these organizations are increasingly aware of the possibility of failure and are seeking to manage that risk in new ways.

This preoccupation with failure is relatively new. The 1980s and early 1990s—the era of ‘structural adjustment’ lending—was a time of confidence and certainty. Policymakers believed that they had found the universal economic recipe for development success.

The 1990s marked a turning point for confidence in the development success ‘recipe’. Success rates for programs at the World Bank began to decline dramatically; critics started to label the 1980s a ‘decade of despair’ for sub-Saharan Africa; and both the AIDS pandemic and the Asian financial crisis reversed many gains in poverty reduction. These events made policymakers more aware of the uncertainty of the global environment and of the very real possibility of failure—lessons only reinforced by the recent financial crisis.

What happens to policymakers when they are more aware of the possibility of failure? On one hand, they can accept the fact of uncertainty and the limits of their control, becoming creative—even experimental—in their approach to solving problems. Or they can become hyper-cautious and risk-averse, doing what they can to avoid failure at all costs. We can see both reactions in international development circles.

A major shift in development practice over the past two decades has been the recognition that political ‘buy-in’ matters for policy success. As development organizations tried to foster greater country ownership of economic programs, they became quite creative. By reducing conditionality and delivering more non-earmarked aid to countries’ general budgets, development organizations shifted more decision-making responsibility to borrowing governments in an effort to create an open-ended and participatory process more conducive to policy success.

But development organizations also took a more cautious turn in their response to the problem of failure. The social theorist Niklas Luhmann first introduced the idea of ‘provisional expertise’ to describe this cautious trend in modern society. He pointed to the increase in risk-based knowledge that could always be revised in the face of changing conditions.

Risk management has become omnipresent in development circles, as it has elsewhere. No shovel turns to build a school without a multitude of assessments of possible risks to the project’s success, allowing the organizations involved to hedge against possible failures.

An even more prominent trend in development policy is the current focus on results, which is particularly popular in the Canadian government. Few organizations these days do not justify actions in terms of the results that they deliver: roads built, immunizations given, rates of infant mortality reduced.

At first glance, this focus appears to be anything but cautious: what greater risk than publishing the true results of your actions? Yet it is not always possible to know the results of a given policy. The problem of causal attribution is a thorny one in development practice, particularly when any number of different variables could have led to the results an organization claims as its own.

Some agencies such as the U.S. Millennium Challenge Corporation (MCC) have tried to get around this problem through sophisticated counterfactual analysis and the use of control groups in their aid programs. Yet even MCC staff members recognize that designing programs in order to gain the best knowledge about results can come at the expense of other priorities.

If donors can count as successes only those results that can be counted, they may well find themselves redefining their priorities to suit their evaluation methodology—and their political needs. In most cases, results are donor-driven: they are not calculated and published for the benefit of the recipient country but for the donor’s citizens back home, who want to know that their taxes are being spent wisely. So building roads and providing immunizations suddenly become more attractive than undertaking the long, slow, and complex work of transforming legal and political institutions. Caution wins out in the end.

Which kind of approach to failure is winning out today: experimentalist or cautious? Sadly it seems that the earlier experiment with country ownership has lost momentum, in part because the forms of participation involved were so much less meaningful than had been hoped. At the same time, the results agenda has only become more numbers-driven in the last few years. As agencies have grown more risk-averse after the global financial crisis, they have sought to make results-evaluation more standardized—and ultimately less responsive to the particular needs of local communities.

There is still hope, as the recent admissions of failure by major development organizations suggest. Yet the very fact that the USAID event was ultimately named an ‘Experience Summit’ rather than a ‘Fail Faire’ is telling: even when leaders admit to failure, it appears that they can’t resist hedging their bets.

This blog post draws from my recent book, Governing Failure: Provisional Expertise and the Transformation of Global Development Finance, published by Cambridge University Press.

Earlier versions of this essay appeared on and the CIPS blog.