Banking, Canada, Finance, International development, Risk, Uncertainty

Canada needs to do a better job of managing financial uncertainty

Published in the Hill Times, May 25, 2015

As Canadians, we pride ourselves on how well our financial regulations coped with the 2008 financial crisis. Given this attitude, it’s not surprising that Canadian policymakers have avoided a major overhaul of our regulations in response.

Yet we need to make sure that this pride in our system does not lead to complacency. Rather than just looking backwards to how the Canadian financial system performed in the last crisis, we also need to look forwards and recognize how much the global economy is changing.

Those changes take two key forms. First, the economy has become much more uncertain since the crisis. And second, a number of other countries have raised the bar for financial regulation. If Canadians don’t catch up with these two major shifts, we may well find ourselves in trouble.

Whether we look at the International Monetary Fund’s latest Global Financial Stability Report, or the Bank of Canada’s recent Financial System Review, it is clear that both the global and national economies have become increasingly uncertain. That uncertainty defines some of the most important aspects of our economy, whether we look at the likely medium-term impact of the decline in oil prices, the potential for a hard landing in an overheated housing market, or the possibility that Canadians will wake up one day and realize that their household debt level is unsustainable.

This environment of profound uncertainty poses serious policy challenges.

In the good old days of the so-called “Great Moderation” from the mid-1980s to the financial crisis, policymakers were able to focus on what Donald Rumsfeld famously described as “known unknowns”—the kinds of risks to which policymakers could assign definite probabilities. Today, we are faced instead with a great many “unknown unknowns”—the kinds of uncertainty that resist formal modeling, as Bank of Canada Governor Stephen Poloz noted in a recent paper.

How should we regulate financial markets in the face of this kind of uncertainty? Very carefully. As it becomes increasingly difficult to predict what kinds of complex risks the economy might face, we need to err on the side of caution.

As good Canadians, we might assume that we already have some of the most cautious financial regulations around. Yet this is no longer the case.

Yes, Canada implemented the capital adequacy standards set out in the Basel III accord very quickly. Yet our government has treated those requirements as the gold standard, when they were designed to be a bare minimum. The United Kingdom and the United States, by contrast, are in the process of implementing more demanding standards, including higher and stricter leverage ratios. While Canada was one of the only countries with a leverage ratio requirement before the crisis, we are now starting to look relatively lax.
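For readers unfamiliar with the term, the logic of a leverage ratio is simple: it compares a bank’s capital to its total exposures without any risk-weighting. A rough sketch of the Basel III version (the precise definition of the exposure measure is more detailed than shown here):

\[
\text{Leverage ratio} = \frac{\text{Tier 1 capital}}{\text{Total exposures (on- and off-balance-sheet, not risk-weighted)}} \geq 3\%
\]

Basel III sets that floor at 3 per cent; the stricter UK and US rules push it higher for their largest banks.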

Even more striking is the fact that Canada, unlike every other major country, has no central body responsible for coordinating efforts to manage systemic risk. The Canadian regulatory universe is fragmented, with important pieces of the regulatory puzzle managed by half a dozen agencies plus a multitude of provincial authorities. The Bank of Canada does an admirable job of identifying potential sources of systemic risk, but it has few tools for acting on them.

Canadian authorities have engaged in macroprudential regulation in recent years—most notably through their efforts to cool the housing market down. Yet, as a recent IMF report points out, those efforts have unintentionally pushed those who no longer qualify for prime mortgages into the under-regulated world of “shadow lending,” potentially increasing systemic risk. In order to manage an uncertain economy, someone needs to be able to look at the system as a whole: to connect the dots that link the regulations governing consumer credit, mortgages, interest rates, and big, small and “shadow” banking institutions.

What about the usual financial sector response that more regulation will impose costs on Canadian financial institutions, and thus on the economy more generally? We should have learned by now that the cost of another crisis would be much greater still. Given the triple threat of uncertain oil prices, a volatile housing market and rising consumer debt, another crisis would likely hit us harder than the last one. It’s worth being well prepared for that kind of risk.

Posted on the CIPS Blog June 5, 2015. 

Canada, Failure, International development, Measurement, Risk, Theory

What counts as policy failure — and why it matters

When things go wrong in politics, the word ‘failure’ gets bandied around a lot. In recent weeks, we’ve heard about the failure of Canadian drug policy (as admitted by Stephen Harper), the failure of Canadian diplomatic efforts to get Barack Obama on board for the Keystone XL Pipeline (as declared by his critics), and the failure of European leaders and the ‘troika’ to find a long-term solution to the problems posed by the Greek economy (as acknowledged by most sensible commentators).

These declarations of failure, of course, are not uncontested. In each case, there are those who would challenge the label of failure altogether, and others who would lay the responsibility for failure on different shoulders. Labeling something a failure is a political act: it involves not just identifying something as a problem, but also suggesting that someone in particular has failed. These debates about failures are crucial ways in which we assess responsibility for the things that go wrong in political and economic policy.

The most interesting debates about policy failure, however, occur when what’s at stake is what counts as failure itself.

When we say that something or someone has failed, we are using a particular metric of success and failure. Formal exams provide the clearest example of assessment according to a scale of passing and failing grades. In most cases, such metrics are taken for granted. (Even if some students might not agree that my grading scale is fair, I am generally very confident when I fail a student.) But sometimes, if a failure is serious enough, or if failures are repeated over and over, those metrics themselves come into question. (I did once bump all the exam grades up by five percent in a course because they were so out of line with the students’ overall performance.)

In politics, these contested failures force both policymakers and the wider community to re-examine not just the policy problems themselves but also the measures that they use to evaluate and interpret them. These moments of debate are very important. They are highly technical, focusing on the nuts and bolts of evaluation and assessment. Yet they are also fundamental, since they force us to ask both what we want success to look like and to what extent we can really know when we’ve found it.

In my recent book, Governing Failure, I trace the central role of this kind of contested failure in one particular area: the governance of international development policy. Policy failures such as the persistence of poverty in Sub-Saharan Africa, the Asian financial crisis and the AIDS crisis raised very serious questions about the effectiveness of the ‘Washington Consensus’, and ultimately led aid organizations ranging from the International Monetary Fund and the World Bank to the (then) Canadian International Development Agency to question and reassess their policies.

The ‘aid effectiveness’ debates of the 1990s and 2000s emerged out of these contested failures, as key policymakers and critics questioned past definitions of success and failure and sought to develop a new understanding of what makes aid work or fail. In the process, they shifted away from a narrowly economic conception of success and failure towards one that saw institutional and other broader political reforms as crucial to program success.

International development is not the only area in which we have seen a significant set of failures precipitate this kind of debate about the meaning of success and failure itself. The 2008 financial crisis was also seen by many as a spectacular failure. The crisis produced wide-ranging debates not just about who was to blame, but also about how it was possible for domestic and international policymakers and market actors to get things so wrong that they were predicting continued success even as the global economy was headed towards massive failure.

In the aftermath of that crisis, there was a striking amount of public interest in the basic metrics underpinning the financial system. People started asking just how risks were evaluated and managed and how credit rating agencies arrived at the ratings that had proven to be so misleading. In short, they wanted to understand how the system measured success and failure. Many of the most promising efforts to respond to the crisis—such as attempts to measure and manage systemic risk—are also aimed at developing better ways of evaluating what is and isn’t working in the global economy, defining success in more complex ways.

Of course, not every failure is a contested one. Many have argued that the reasons for the failure of Canadian drug policy are less contested than Harper has suggested. Critics note that the Conservative government’s unwillingness to take on board the lessons of innovative policies such as safe injection sites goes a long way towards explaining this policy failure.

On the other hand, some failures—such as the failure not just of Greece but also of much of Europe to restart their economies—should be more contested than they currently are. The International Monetary Fund did begin opening up this kind of deeper discussion when its internal review of its early interventions in Greece suggested that the organization had been too quick to promote austerity. Yet the narrow terms of the troika’s conversations about the future of Greece suggest that there is an awful lot of room for more creative thinking about the path towards policy success, not just in Europe but around the world.

These kinds of debates about how we define and recognize success and failure can be crucial turning points in public policy. They force us, at least for a moment, to set aside some of our easy assumptions about what works and what doesn’t, and to ask ourselves what we really mean by success.

This blog post first appeared on the CIPS blog on March 6, 2015.

Failure, Global governance, International development, Measurement, Political economy, Results, Risk, Theory

Hedging bets: our new preoccupation with failure

Nobody likes to admit failure—least of all government-funded development organizations in hard economic times. Yet recent years have seen a number of prominent development agencies confess to failure. The International Monetary Fund (IMF) admitted its failure to recognize the damage that its overzealous approach to austerity would cause in Greece. World Bank President Jim Yong Kim has adopted from the information technology industry the idea of Fail Faires, at which policymakers share their biggest failures with one another. The United States Agency for International Development’s (USAID) Chief Innovation Officer also expressed some interest in organizing a Fail Faire, and the agency eventually did hold an “Experience Summit” in 2012.

This interest in failure is central to a broader shift in how development organizations—and other national and international agencies—have begun to work. As I argue in my new book, Governing Failure, these organizations are increasingly aware of the possibility of failure and are seeking to manage that risk in new ways.

This preoccupation with failure is relatively new. The 1980s and early 1990s—the era of ‘structural adjustment’ lending—was a time of confidence and certainty. Policymakers believed that they had found the universal economic recipe for development success.

The 1990s marked a turning point for confidence in the development success ‘recipe’. Success rates for programs at the World Bank began to decline dramatically; critics started to label the 1980s a ‘decade of despair’ for sub-Saharan Africa; and both the AIDS pandemic and the Asian financial crisis reversed many gains in poverty reduction. These events made policymakers more aware of the uncertainty of the global environment and of the very real possibility of failure—lessons only reinforced by the recent financial crisis.

What happens to policymakers when they are more aware of the possibility of failure? On the one hand, they can accept the fact of uncertainty and the limits of their control, becoming creative—even experimental—in their approach to solving problems. On the other, they can become hyper-cautious and risk-averse, doing what they can to avoid failure at all costs. We can see both reactions in international development circles.

A major shift in development practice over the past two decades has been the recognition that political ‘buy-in’ matters for policy success. As development organizations tried to foster greater country ownership of economic programs, they became quite creative. By reducing conditionality and delivering more non-earmarked aid to countries’ general budgets, development organizations shifted more decision-making responsibility to borrowing governments in an effort to create an open-ended and participatory process more conducive to policy success.

But development organizations also took a more cautious turn in their response to the problem of failure. The social theorist Niklas Luhmann first introduced the idea of ‘provisional expertise’ to describe this cautious trend in modern society. He pointed to the increase in risk-based knowledge that could always be revised in the face of changing conditions.

Risk management has become omnipresent in development circles, as it has elsewhere. No shovel is turned to build a school without a multitude of assessments of the risks to the project’s success, allowing the organizations involved to hedge against possible failures.

An even more prominent trend in development policy is the current focus on results, which is particularly popular in the Canadian government. Few organizations these days do not justify their actions in terms of the results that they deliver: roads built, immunizations given, rates of infant mortality reduced.

At first glance, this focus appears to be anything but cautious: what greater risk than publishing the true results of your actions? Yet it is not always possible to know the results of a given policy. The problem of causal attribution is a thorny one in development practice, particularly when any number of different variables could have led to the results an organization claims as its own.

Some agencies, such as the U.S. Millennium Challenge Corporation (MCC), have tried to get around this problem through sophisticated counterfactual analysis and the use of control groups in their aid programs. Yet even MCC staff members recognize that designing programs in order to gain the best knowledge about results can come at the expense of other priorities.
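To see the bare logic at work in this kind of control-group evaluation, consider the toy sketch below. The numbers are entirely made up, and the method shown is only the simplest possible version (a difference in average outcomes) rather than anything resembling MCC’s actual procedures.

# A toy sketch, with hypothetical data, of control-group evaluation:
# compare average outcomes in communities that received a program with
# those in comparable communities that did not. The control group stands
# in for the counterfactual: what would have happened without the program.

treatment_incomes = [1240, 1180, 1355, 1290, 1410, 1205]  # hypothetical incomes, program villages
control_incomes = [1150, 1095, 1230, 1170, 1260, 1120]    # hypothetical incomes, comparison villages

def mean(values):
    return sum(values) / len(values)

# The estimated effect of the program is the difference in average outcomes.
estimated_effect = mean(treatment_incomes) - mean(control_incomes)
print(f"Estimated average effect: {estimated_effect:.0f} per household")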

If donors can count as successes only those results that can be counted, they may well find themselves redefining their priorities to suit their evaluation methodology—and their political needs. In most cases, results are donor-driven: they are not calculated and published for the benefit of the recipient country but for the donor’s citizens back home, who want to know that their taxes are being spent wisely. So building roads and providing immunizations suddenly becomes more attractive than undertaking the long, slow, and complex work of transforming legal and political institutions. Caution wins out in the end.

Which kind of approach to failure is winning out today: experimentalist or cautious? Sadly, it seems that the earlier experiment with country ownership has lost momentum, in part because the forms of participation involved were so much less meaningful than had been hoped. At the same time, the results agenda has only become more numbers-driven in the last few years. As agencies have grown more risk-averse after the global financial crisis, they have sought to make results-evaluation more standardized—and ultimately less responsive to the particular needs of local communities.

There is still hope, as the recent admissions of failure by major development organizations suggest. Yet the very fact that the USAID event was ultimately named an ‘Experience Summit’ rather than a ‘Fail Faire’ is telling: even when leaders admit to failure, it appears that they can’t resist hedging their bets.

This blog post draws from my recent book, Governing Failure: Provisional Expertise and the Transformation of Global Development Finance, published by Cambridge University Press.

Earlier versions of this essay appeared on RegBlog.org and the CIPS blog.