COVID-19, Varieties of ignorance

Unmasking Ignorance Reveals the Exercise of Political Power

By Jacqueline Best and Michael Orsini

It’s not the kind of statement that comforts the faithful.

Dr. Theresa Tam, Canada’s Chief Public Health Officer, told a press conference last month that we are “steering in uncertain waters. No one knows exactly what is going to work, so there’s a grey zone and people are doing slightly different things.”

Although in more recent weeks, Tam has been a lot more definitive about the need for a strong and systematic response across the country, she and other public health officials and politicians are grappling with the challenge of acting decisively in spite of imperfect information as the scientific understanding of COVID-19 continues to evolve.

Tam’s nod to uncertainty might not be welcomed by most Canadians grappling with the unfolding pandemic and the reality of lockdowns and red zones. Yet it speaks to a core challenge of our fractured politics: evidence-based policymaking must confront the varieties of our ignorance.

One of the bravest and most necessary things that policymakers such as Tam can do is acknowledge what they do not know. Too often, however, we have seen politicians mobilize ignorance for their troubling ends. 

As a public, we demand that our political leaders and policymakers take definitive action based on expert advice. But what if knowledge does not hold the “master key”? What if an emerging feature of policymaking consists of expending considerable effort to mobilize ignorance and strategically position the art of unknowing? When all of our attention is focused on amassing knowledge or expanding the scope of the available evidence, we tend to lose sight of the need – the imperative in some cases – not to know.

In some cases, ignorance consists of actively denying or contesting knowledge and evidence in the public sphere. As sociologist Linsey McGoey argues in her book, The Unknowers, “knowing the least amount possible is often the most indispensable tool for managing risks and exonerating oneself from blame…” Think of the calculated efforts by leaders such as Jair Bolsonaro in Brazil and Donald Trump in the US to contest the science on social distancing and mask-wearing to combat the COVID-19 pandemic, not to mention their deliberate refusal to recognize the dangers associated with hydroxychloroquine.

Recent examples suggest that various forms of ignorance are far more central and useful to policymaking than we tend to assume. Climate change denial is a potent example of the political appeal and enormous danger of contesting widely accepted knowledge. Even then-U.S. Supreme Court nominee Amy Coney Barrett sidestepped the issue when asked by Senator Kamala Harris if climate change is occurring. “I will not answer that because it is contentious,” Barrett responded. Is that type of response a dog whistle for climate change deniers, such as President Trump, who blamed the California wildfires on forest mismanagement?

Ignorance comes in many varieties. It can take the less deliberate form of wishful thinking, as policymakers underestimate the very real possibility that their policies will have serious, unintended consequences. The last few months have revealed just how pervasive and powerful a hold this kind of wishful thinking can have on policymakers. For instance, Ontario Premier Doug Ford ignored Toronto’s Medical Officer of Health, who had urged his government to impose new restrictions in Toronto, only to backtrack a week later. And it was recently revealed that the Ford government rejected advice from their in-house experts when creating a new colour-coded plan for COVID restrictions.

These forms of willful and wishful ignorance can prevent political leaders from acting on some of the key problems facing our societies. Why, for instance, is Canada still lagging in terms of race-based data on the health inequities revealed by the COVID-19 pandemic? Although data alone cannot transform racist institutions or magically improve health outcomes for Canada’s most marginalized communities, the power to ignore data that connect the dots between racism and health outcomes can be a convenient cover for policy inaction. This “will to ignore” should be challenged vigorously by Canadians interested in equity and justice; equally important, however, knowledge about marginalized communities must be protected from forms of “algorithmic racism”.

The answer to this instrumentalization of ignorance, however, is not to pretend that we have all the answers. That kind of wishful thinking is also dangerous.

How do we govern in the face of uncertainty and ignorance? By walking a very fine line. Policymakers and experts must identify gaps in our knowledge and work to redress them. Citizens must uncover those instances when ignorance is mobilized as a cover for inaction. And all of us must acknowledge that there is still much that we don’t know.

Originally posted to the CIPS Blog November 18, 2020.

COVID-19, Economic exceptionalism

We are entering another state of exception – but this time it’s economic too


     In the last few weeks, governments all over the world have been declaring states of emergency to deal with the coronavirus. Given the remarkable powers that governments at all levels have been acquiring through these measures, it’s not surprising then that both state leaders and commentators are talking about this moment being akin to a state of war.

     Although this comparison is powerful—and in many ways correct—it only tells part of the story. Yes, governments are invoking emergency powers and imposing a state of exception of the kind that we usually see in wartime. Yet, if the goal is to save the lives of citizens against an attack, as it is during wartime, then why would government leaders like British Prime Minister Boris Johnson have delayed social distancing measures for so long for fear of their economic consequences? And why does President Trump continue to flirt with the idea of opening the economy back up quickly in some parts of the country in spite of the likely impact on the number of COVID-related deaths?

     The answer to this puzzle lies in the fact that the kind of exceptionalist policies that we are seeing being put into place are, for some leaders at least, as much about protecting the economy as they are about ensuring the public’s security.

     Although we often forget it in calmer times (remember those?), liberal democratic governments do reserve for themselves the power to impose a state of exception in times of crisis, such as a war. Such exceptionalist measures are designed to temporarily suspend normal liberal democratic rights and processes in order to respond to a supreme threat to the state and its people. The last time we saw this occurring on a broad basis was of course after 9/11, although many have also drawn parallels to the Second World War.

     In many ways, the current emergency declarations and measures do bear important resemblances to these wartime measures. Governments have acted with extraordinary speed, rushing legislation through or using executive powers to give themselves the flexibility to act to fight the virus’ spread. They have partly sealed off their borders, turning inwards and actively seeking to manage the supply of essential medical equipment. We are also beginning to see the adoption of subtler, more technocratic measures that are closer to those we saw after 9/11, like the use of data surveillance in South Korea and other countries to map, track and control populations deemed a danger.

     All of these exceptionalist measures suspend or constrain normal democratic processes and liberal civil rights in the name of the security of the state and its people.

     Yet there are also several key ways in which the exceptional measures being proposed and introduced today are different from these war-time emergency policies. This time around, governments are simultaneously seeking to secure their population’s health against attack while also protecting their economy from ruin.

     Governments have historically invoked states of exception not only to fight wars but also to tackle economic crises. Confronted by the ravages of the Great Depression, President Roosevelt famously argued in his inaugural address in 1933 that he was willing to use “broad Executive power to wage a war against the emergency, as great as the power that would be given to me if we were in fact invaded by a foreign foe.” During the 2008 global financial crisis, political and economic leaders again called for exceptionalist measures ranging from bailouts to stimulus measures in order to respond to what they described as an economic emergency.

     This time around, governments are facing two existential threats at the same time—a public health threat to their citizens’ lives which can only be treated through a series of measures that themselves pose an existential threat to the economy. The Second World War’s mobilization of a wartime economy helped to rescue western states from the prolonged crisis of the Great Depression. This time around, mobilizing against the health threat means partly suspending the economy, not energizing it. These two sets of emergency responses are necessarily in tension with one another.

     While political leaders’ willingness to sacrifice some individuals’ lives for the sake of the economy may strike us as a fresh horror, it actually has a long history. Over a decade ago, I wrote an article entitled “Why the Economy is Often the Exception to Politics as Usual,” in which I suggested that neoliberal international institutions, like the International Monetary Fund and the World Bank, had proven willing to suspend the political rights and freedoms of the citizens of poorer countries in the name of re-establishing a sound economy. In these countries, the supreme goal of economic stability, not political security, was treated as a sufficient ground for exceptionalist measures, even if the consequences were often extreme poverty and deprivation—and yes, very likely, death—for some.

     In the Global North, while we remain on average the lucky ones, these recent twin crises have revealed just how vulnerable we have made some of our population in our pursuit of a mythic “sound economy”— including those in the gig economy, without secure employment, and working in essential but unrecognized jobs. These crises have also shown just how willing some political leaders are to require the ultimate sacrifice of some of our citizens in the name of that same “sound economy.”

    Responding to these twin public health and economic crises, while also ensuring that we come out at the end of this process in something that still resembles a democracy, will not be easy. What’s clear, however, is that those who insist that the economy’s survival trumps the right of its people to life are mobilizing a particularly vicious form of economic exceptionalism—one that must be recognized and resisted.

This post was originally published on the SPERI blog on April 17, 2020.

COVID-19, Political economy, Varieties of ignorance

Why it’s important to acknowledge what we don’t know in a crisis

      How do we act effectively when there is so much that we simply do not know about what lies ahead? This is the challenge that policymakers face today on two very different fronts: public health and the economy. There is so much that public health officials don’t yet know about COVID-19, but they have to act nonetheless. As the current economic crisis deepens, economic policymakers must also take steps while facing huge uncertainty about what the consequences will be.

      While this double dilemma is alarming, the comparison between these two challenges is also instructive.

      I spend a lot of my time studying how economic ideas and expertise work, particularly in the context of crises. As I have been watching the current public health crisis unfold, day by day and hour by hour, I have been struck by the parallels and differences in how economic and public health policymakers deal with what they know and, more importantly, what they don’t know.

      It turns out that there are some common takeaways for how to develop policy—and communicate about it—in the context of extreme uncertainty.

1. Wishful thinking and denial are dangerous

      In both public health and economic cases, we can see both the temptation and the danger of engaging in wishful thinking and denial. Although Donald Trump is the most obvious and egregious example of this kind of willful ignorance, he is not alone. Just look at the UK government’s extremely optimistic (but short-lived) embrace of the theory of “herd immunity” as a way of coping with the pandemic without having to pay the social and economic cost of social distancing.

      Economic policymakers aren’t immune either to the temptations of willful ignorance. We also saw a lot of wishful thinking and denial about the huge economic risks being taken in the early 2000s, which led to under-regulation and helped precipitate the 2008 financial crisis. Today, we need our policymakers to avoid wishful thinking about how bad things could easily get, and take dramatic and decisive steps to support the economy.

2.   Policymakers need to find ways of admitting what they don’t know

      If you pay attention to reputable news outlets and the quickly growing number of scientific papers being published on COVID-19, what you discover is a frank and evolving discussion of what is and isn’t known about the virus and the best way to respond. News sites provide updates on both what we do know so far and what we don’t know yet. In a public health crisis, scientists and policymakers alike are willing to both admit their ignorance and build it into their response.

      When it comes to economic crises, things tend to work differently. Most economists have very definite ideas about how the economy works and how to fix it when it’s ailing. In recent decades, many economists have become convinced that the way to make the economy work best is imposing simple rules—monetary rules for central banks and fiscal rules for governments. Added to that is the belief that for a policy to work it must be credible—which means sticking to your guns in following the rules, come what may.

      Of course, during the 2008 economic crisis, policymakers were forced to break the rules in their response. Even the most orthodox of economists (usually) become pragmatists in a crisis. Yet within a couple of years, policymakers treated this response as an exception and have sought to return to “normal” ever since then (good luck with that).

      In theory, a central banker or finance minister isn’t allowed to say “I don’t know” for fear of markets’ panicked reaction. Yet, in practice, central bankers like our own Governor, Stephen Poloz, have admitted (long before this current crisis) that they are often confronted by extreme uncertainty. We need economic policymakers to take a page from the world of public health and find better ways of communicating both what they know and what they don’t know today.

3.   We need a flexible and contextual response

      Much contemporary economic thinking assumes that the basic rules governing economic behaviour never change. This is a recipe for rigidity, not resilience. It ignores the fact that economic dynamics are always social and historical. They depend on how people act, which changes over time. What works in response to one crisis, or in one national context, may not work in another.

      In the public health debate there is a much greater awareness of the fact that the effectiveness of a given policy response depends on how people respond. Because the coronavirus’s spread and mortality rate also depend partly on how we react to it, answers to key questions about how to respond have to be contextual and evolving.

      Although it’s scary to admit our ignorance, it also turns out that it’s vital—whether we’re talking about the novel coronavirus or its effects on our economy today.

This post was originally published on the CIPS Blog on March 25, 2020.

Banking, Canada, COVID-19

Can the Bank of Canada come to the rescue again?

Bank of Canada headquarters.

Like central banks around the world, the Bank of Canada has cut its target interest rate in order to tackle the economic effects of the novel coronavirus.

Does that mean that central bankers are once again our knights in shining armor coming to save the day in an economic crunch? Or is this finally the right time to recognize that we can’t keep counting on the Bank of Canada to do all of the economic heavy lifting in a crisis?

This rate cut does signal the central bank’s willingness to do what it can to counteract the economic consequences of the virus’s spread. Yet there is only so much that lower rates can do. They can make it easier for people and businesses to borrow and they will reduce payments on a flexible rate mortgage. But low rates won’t help companies continue to make the products that rely on parts made in hard-hit countries like China and South Korea, and they won’t help people afford to take time off of work if they get sick.

While it isn’t yet clear how serious the economic effects of the virus will be, this is a good time to take stock of what tools we have to respond to the next economic crisis.

Ever since the 2008 financial crisis, the Canadian government, like governments around the world, has relied an awful lot on the super-powers of the central bank. G7 politicians decided that they didn’t want to have to keep using fiscal policy to stimulate the economy and started treating central banks like “The Only Game in Town,” as former Bank of England Deputy Governor Paul Tucker put it.

Yet the last decade has made it clear that there are very real drawbacks to assuming that central banks can always save the day.

The Bank of Canada has had to keep interest rates very low for a very long time to keep the economy going. While this has worked, to a point, it has had perverse consequences. Canadians took advantage of the low interest rates to go on a spending spree—building up debts worth as much as 177% of their annual income. This borrowing binge and the housing bubble that has gone with it has limited the Bank of Canada’s options moving forwards: by cutting rates, they run the risk of pushing debt even higher, but by increasing rates they could precipitate a crisis for the many families who can barely make their interest payments.

Given these limits, one option for the Bank of Canada would be to pursue unconventional monetary policy. In doing so it would be following the lead of the US, Japanese, British and European central banks that have dabbled in more esoteric monetary policies after the 2008 crisis, including “quantitative easing,” which has central banks creating new money to buy up government and private sector bonds and securities.

While these unconventional tools may be useful and even necessary, they do produce winners and losers. Top-down strategies like quantitative easing have actively contributed to growing inequality (by increasing the value of assets that are mostly held by the wealthy) and have accelerated the climate crisis by disproportionately investing in carbon-intensive firms. On the other hand, bottom-up strategies, like “helicopter money,” where the central bank distributes new money to individuals, have been overlooked to date.

Politicians had hoped in the last crisis that they could avoid making difficult political decisions by passing the buck to central bankers who are insulated from the democratic process. Unfortunately, this past decade has taught us that there is no such thing as an apolitical solution to an economic crisis. Whatever role the Bank of Canada plays, it needs to be guided by democratic—and not just technocratic—priorities.

We do need central banks to play their part now—and we will need them again in the future. But we also need to make sure political leaders stop waiting for their knight in shining armor to come to the rescue and take responsibility for their own role in responding to economic shocks.

This blog post was originally published as an opinion piece in the Ottawa Citizen on March 11.

Economic exceptionalism

Economic exceptionalism past and present: or whatever happened to normal?

Exceptionalist policies can play a critical role in changing norms and perceptions of what constitutes the status quo. What role does exceptionalism play within our society today?


Whatever happened to normal? You remember: a normal neoliberal political economy in which the democratic process sort of works and we have reasonable growth combined with some wage increases and interest rates around 4-5%. Of course, this “normal” economy excluded a huge number of people from its benefits, depended on lower and middle income earners maxing out their credit cards and lines of credit to keep afloat, relied on using carbon at an unprecedented scale, and produced a massive and unsustainable asset bubble. But it seemed normal (at least when compared with where we are today).

Not long after the 2008 financial crisis blew this system up, there was a lot of talk about returning to normal. But once Trump was elected and the long slow Brexit train wreck began, we seem to have given up on normal altogether.

Scholars have found a number of ways of describing this disruption of “normal” politics and economics. Ian Bruff, Burak Tansel and others have pointed to the rise of authoritarian neoliberalism in many countries. We are also witnessing what Peter Adey, Ben Anderson and Stephen Graham have described as “the proliferation of emergency as a term” and an increasing effort to govern through emergencies.

My work has focused instead on the growing role of economic exceptionalism in recent years. During my time as a Leverhulme visiting professor at SPERI at the University of Sheffield, I examined how useful this concept is for understanding how “the normal” has been suspended or disrupted today—as well as in the past [Spoiler alert]. As it turns out, the usefulness of the term depends a lot on what time frame we are looking at—but more on that later.

I first became interested in understanding this kind of break from the “normal” in the wake of the 2008 global financial crisis. I became increasingly angry at the Canadian Prime Minister, Stephen Harper, for repeatedly making claims along the lines of: “Normally, we wouldn’t be doing this (running a deficit, imposing austerity measures in a counter-productive attempt to reduce said deficit, denying airline workers the right to strike) …but because we are living in exceptional times, these measures are not only legitimate but necessary”. This language of exceptionalism was widespread at the time. In the UK, we saw politicians justifying bailouts, austerity measures and highly exceptional forms of monetary policy as necessary suspensions of normal politics in a time of crisis.

I have a number of colleagues and friends who work on critical security studies, and I kept thinking about their work on securitization and the logic of political exceptionalism in the post-9/11 era. They found that there has been an increased tendency of liberal governments to invoke states of exception in times of crisis. They achieve this by claiming that a given existential threat to the state has made the suspension of normal liberal rights necessary in order to protect the public.

What if, I asked myself, this logic of exceptionalism is not only political but also economic? Without getting into the theoretical details of why this is absolutely the case (which you can read in my Security Dialogue and International Political Sociology articles on the topic), a quick survey of history made it clear that yes, in fact, liberal states have often used emergency powers to address economic crises and have also justified them in exceptionalist terms. This has included the repeated use of martial law in the US and UK to put down strikes in the late 19th and early 20th century as well as President Franklin Roosevelt’s use of the “Trading with the Enemy” Act to put through some of the key measures of the New Deal in the 1930s.

One of the goals of this research project is to understand when and why these kinds of exceptionalist claims are used to justify particular responses to economic crises. When I defined my initial hypotheses, I expected to find that the early New Right governments of Margaret Thatcher and Ronald Reagan both relied heavily on exceptionalist claims in the early 1980s in arguing for the necessity of their radical and often very painful strategies for reducing inflation. But this is not what I discovered. In fact, I seriously considered titling this blog “A funny thing happened on my way to a conclusion,” because it is in many ways about what happens in research when we start out with one particular hypothesis and end up finding something quite unexpected.

Going back to the 1970s, when both American and British governments first began describing inflation as a major crisis, I found plenty of evidence of exceptionalist language. Nixon declared an emergency in order to address the postal workers’ strike in 1970 and again when he imposed wage and price controls in 1971. In the UK, the Conservative Prime Minister Edward Heath declared a national emergency and imposed a three-day work week in 1973 when striking miners threatened the national energy supply. All of these measures were framed as temporary exceptions to normal politics that were necessary because of the grave threat to the national economy.

In stark contrast, the Reagan and Thatcher governments avoided using exceptionalist language when they first came to power. In the US case, Reagan’s first Budget Director, David Stockman, had written an open letter calling on the government to declare an emergency in order to tackle “an economic Dunkirk,” but his arguments were rejected.

Two very different responses to national strikes, less than a decade apart, make clear the difference between these two disruptions of normal politics. Nixon called in the National Guard in 1970 when postal workers went on strike but ultimately granted them their demands of a wage increase and a right to bargain over wages. In contrast, when Reagan faced the air traffic controllers’ strike in 1981, he not only declared the strike illegal but then fired all of the striking workers and banned them from public employment for life. If we look at Thatcher’s response to the miners’ strike in 1984-85, we find a similar pattern of using extraordinary measures not to address a temporary crisis, but to permanently reduce the power of the miners in particular, and labour unions more generally. These were extreme actions but they were not justified as temporary or exceptional. Instead, they sought to use emergency powers to establish a new normal.

What do these historical findings tell us about the disruption of the “normal” today?

After the 2008 global financial crisis, most political leaders were using a language of exceptionalism, telling us that we just had to suspend normal political and economic rights and processes temporarily to deal with the crisis. Yet, in many cases, that suspension has blurred into a new kind of normal, which has had all sorts of troubling consequences.

Today, it seems like the Trumps and the hard Brexiters of this world have given up even pretending that this is about a temporary suspension of the normal. It looks instead like they’re calling for a more radical disruption and a very different kind of normal. Both kinds of claims are troubling—but it’s the second kind that is truly worrying.

This is the final blog in a three-part series posted on the SPERI Blog, reflecting on the major research themes that I explored while a Leverhulme Visiting Professor there. You can also read Part 1 and Part 2 of the mini blog series.

Varieties of ignorance

A beginner’s guide to economic ignorance

Ignorance is not the antithesis to knowledge, but is part of it. Wishful thinking, muddling through and other forms of ignorance play a crucial role in shaping economic policy and its effects on society.

We hear a lot about the power of economic expertise—whether it’s the news media calling on the latest expert to tell us where the economy is headed, or a populist critic arguing that experts are out of touch with the real world.

While it is certainly true that governments and international organisations rely on economic expertise to get their job done, there is a lot that these institutions simply don’t know. Sometimes that ignorance is inconvenient, sometimes it’s dangerous, and sometimes it’s politically useful.

Just take a look at some of the economic arguments being made in favour of Brexit in the UK, or the willful denialism underpinning Canadian provincial and federal Conservatives’ attacks on carbon taxes, and you can find plenty of evidence of the political uses of economic ignorance today.

It turns out that economic ignorance has a long pedigree. I became interested in the role of ignorance in economic policymaking while doing archival research into the internal policy debates in the early 1980s in the United States and Great Britain, when Margaret Thatcher and Ronald Reagan first came into power.

I had begun the research expecting to find an unmatched display of economic expertise. After all, these were the years when the fathers of neoliberal economics, Friedrich Hayek and Milton Friedman, were directly advising policymakers as they and the politicians of their time sought to put neoliberal theory into practice.

Yet what I found when I started to read through the various internal memos and minutes was a great deal of confusion, uncertainty—and, yes, downright ignorance. This finding led me to ask three key questions: What kinds of ignorance do we find in economic policymaking? What roles does ignorance play? And what are the political stakes involved?

Understanding the varieties of economic ignorance became one of the three key research themes for my time as a Leverhulme Visiting Professor at SPERI and the basis of one of my Leverhulme Lectures on February 6, 2019.

Here are a few of the things that I have learned about economic ignorance (so far).

  1. Ignorance is more complicated—and more interesting—than we generally admit.

Rather than being its opposite, ignorance is always a part of knowledge. As researchers, we begin with a question, not an answer. And even once we begin to answer that question, ignorance remains somewhere in the background. One of the tricks of knowing anything well is to figure out what we don’t need to know (this, of course, is one of the greatest challenges in writing a PhD thesis – or just about anything else of significance).

There is always so much more that we do not know than that we do actually know. This should be blindingly obvious, and yet it’s also somehow deeply unsettling for us as scholars (and “experts”) to admit.

  2. Ignorance is always political—even when it’s not strategic

Ignorance can play a strategic role in some cases, as Linsey McGoey has pointed out. Although ignorance does sometimes pose serious practical and political difficulties for policymakers, it can also be quite useful for them to ignore or deny knowledge of certain inconvenient facts.

Yet even when it isn’t strategic or willful, ignorance is immensely political: what forms of ignorance we use, what kinds we avoid, and whether we admit to the limits of our knowledge all have profound consequences for how we act or fail to act.

  3. Ignorance is central to economic theory

It may seem odd to talk about ignorance in the same breath as economics. After all, much conventional economic theory is based on the assumption that markets efficiently use all available information.

In fact, ignorance plays a foundational role in economic theory. When we dig a bit deeper into different economic theories, we find a whole host of assumptions about what is knowable and what isn’t and who should remain ignorant. For example, as both Will Davies and Melinda Cooper have argued, most neo- and new-classical economic theory is premised on the belief that the government not only cannot but should not know too much about the economy. Economic theories like these seek to map out a kind of distribution of ignorance and knowledge that is based both on efficiency and morality.

  4. There are (at least) four different varieties of economic ignorance worth paying attention to.

In digging through the archival records for the early Thatcher and Reagan years, I found four distinct but related forms of economic ignorance at work. Here are a few helpful definitions for those interested in tracking the role of ignorance in past (and present) economic policymaking:

Wishful and magical thinking

Definition: Willfully ignoring how ridiculous an idea is.

Example: In the 1980s, policymakers in both the UK and the US argued that the magic of rational expectations theory meant that it was possible to reduce inflation without the usual pain of a major recession as long as the government’s commitment to controlling the money supply was credible.


Confusion and cluelessness

Definition: Genuine confusion about what is going on (particularly when your ridiculous idea doesn’t work).

Example: Once it became clear (about a year into each government’s term) that a) neither the UK nor the US government could control the money supply, and b) recessions of historic proportions were underway, confusion, cluelessness (and finger-pointing) ensued.

Fudging and muddling through

Definition: Doing what you can a) to make your magical thinking seem more believable, and b) in spite of your cluelessness.

Example: Both the British Chancellor and the American Budget Director rejected the economic forecasts produced by their bureaucracies (because they contradicted their magical thinking) and either proposed their own highly inaccurate forecast (in the American case) or largely ignored the official one (in the British case). When staff presented them with inconvenient facts about the contradictions in their proposals, they resorted to various “presentational” fudges to make them less obvious.


Denial and blame-shifting

Definition: Blissfully ignoring all of the above and pretending that it’s someone else’s fault when things go wrong.

Example: This is precisely what both British and American governments did in the years following the abject failure of their early attempts to put their policies into place, denying that these were “real” monetarist experiments or rebranding their efforts as “political monetarism” when actual monetarism turned out to be a dud. It is also, of course, what we are living with on a daily basis right now, as so many of our political leaders deny responsibility for the failure of our economic thinking and practice over the past decades—and for the resulting damage to economic justice, to our democracies, and to our planet.

  5. It’s not dangerous to admit our ignorance—in fact the opposite is the case.

Although it’s natural to respond to the obvious dangers of many of these forms of economic ignorance by seeking a return to a purer, more absolute kind of expertise, this would be a mistake. Yes, we must be attuned to the ways in which willful ignorance can blur into mendacity and call it out as such. But to pretend that we can avoid ignorance altogether is itself a dangerous kind of wishful thinking.

Instead, we need a more humble and reflexive kind of expertise that acknowledges its limits and recognizes the value of a healthy kind of ignorance—the kind that leads us to recognize what we don’t know, that spurs us on to ask new questions, and that prompts us to find better answers.

This is the second in a series of blogs originally posted on the SPERI Blog on June 14, 2019, outlining the major themes that I explored while a Leverhulme Visiting Professor at the University of Sheffield.




Rethinking failure

Neoliberalism’s ‘unfailures’

Economic policies enacted under neoliberalism have often failed to meet their objectives, but have remained unchallenged. Why do certain policy failures have so little impact?

Political economists are fascinated by crises and the failures that they so often reveal: the collapse of the Gold Standard, the Great Depression, the stagflation crisis, the Great Recession. We see them as the inflection points in our economic history—the moments when the failure of one economic order is made visible and another takes its place.

Yet economic failures—even very big ones—do not always have this kind of political salience. The 2008 global financial crisis began as a massive, highly public failure for contemporary neoliberal theory and practice. A decade on, however, what is most striking is how little the very evident failures of neoliberalism have translated into meaningful changes.

This is not the first time that significant economic failures did not have the kind of political effects that we might expect. Such “unfailures” are actually a crucial, but often overlooked, part of our political economic history.

The late 1970s and early 1980s are often seen as a moment of resounding success for neoliberal theory and practice—when Margaret Thatcher and Ronald Reagan swept into power and turned their backs on decades of Keynesian orthodoxy with a series of dramatic and ruthless policies that put Friedman and Hayek’s ideas into practice. Monetarism, supply-side economics and the rational expectations revolution turned economic theory and policy upside down. Or did they?

In fact, this particular narrative of the early success of neoliberal ideas, which is not only common in popular accounts but also makes its way into many textbooks and serious political economy scholarship, is largely a fabrication. These ideas were all failures—and yet they have been rewritten as successes after the fact.

Why do some policy failures lead to dramatic changes in economic theory and practice, while others are ignored or forgotten? Why are some failures quiet, while others resonate loudly through history?

These are some of the central research questions that I tackled during my time as a Leverhulme Visiting Professor at SPERI. I communicated some of my initial findings on “unfailures” in a Leverhulme Lecture and the Annual IPE Lecture at the University of Warwick, and also continued to explore the issue through research at the National Archives in London during my stay in the UK.

What were the early failures of neoliberalism?

Monetarism was introduced with great fanfare and then quietly abandoned just a few years later. The early supply-siders’ promises that lower taxes would spark a massive spurt in growth without producing a large deficit quickly turned out to be wrong. Meanwhile, the quick shift in the public’s expectations that policymakers had counted on to provide a relatively painless transition to a lower inflation economy simply didn’t materialize. In other words, the three key theories underpinning the early Reagan and Thatcher economic revolutions—monetarism, supply-side economics and rational expectations—all failed the initial transition from theory to practice.

What’s more, the archival record in both countries shows very clearly that policymakers were very well aware of these failures at the time, even as they struggled to understand them (and even if they denied them in later years).

One of the more entertaining letters that I found in the National Archives makes it clear that although the Americans and the British were on the same ideological side, they were also quite capable of levelling the charge of failure against each other.

The US Treasury Secretary, Donald Regan, gave a speech to Congress in which he labelled the British efforts to control their money supply an abject failure, inspiring Sir Kenneth Couzens, a senior British civil servant, to write a very testy letter to his American counterpart. After enumerating the many obvious mistakes that Regan had made in describing the British economy (including suggesting that “practically 60% of the population in Great Britain … was working for the government”), he concluded:

“it is unhelpful, and also unrealistic to suggest that we have failed because of policy mistakes whereas you won’t because you won’t make any”.

Of course, the Americans did make lots of mistakes, and even admitted a few of them—mostly in confidential documents, although sometimes more publicly—at some personal cost. The US Budget Director, David Stockman, lost his job after he was quoted in The Atlantic arguing that, when it came to both the Reagan Administration and the Congressional Democrats’ budget proposals, “none of us really understands what’s going on with all these numbers.”

Why then did the failure of the early days of Thatcher and Reagan’s brave new neoliberal experiment not lead to the rejection of their economic vision?

Part of the answer lies with the success of key politicians’ discursive strategies of denial, rebranding and mythologizing. The US Undersecretary for Monetary Affairs, Beryl Sprinkel, a student of Milton Friedman, gave speech after speech denying the failure of monetarism. His argument was identical to the one that Ben Bernanke would make after the global financial crisis: that the problem was not the economic theory itself but its operationalization (although Sprinkel blamed the Fed Chairman of the time, Paul Volcker, whereas Bernanke, being the Fed Chairman himself, had to find someone else to blame).

In the UK, the preferred strategy deployed by Thatcher’s cabinet members was to rebrand monetarism as a more general belief in “sound money”, which remained core to the Conservatives’ mission even as the government gave up on actually targeting the money supply.

But no matter how silver-tongued the early neoliberal advocates may have been, they would not have succeeded if the world itself had not also changed in important respects. Inflation did come down in the end. Not because of monetarism, supply-side economics or rational expectations – but because of massively painful and deflationary recessions in both countries—the worst since the 1930s.

Why should we care about the early days of neoliberalism today?

When we see the Teflon-like resilience of neoliberalism in the face of its very obvious failures, building a more sustainable and just economy can seem beyond reach. It is easy to start treating the rise, spread and persistence of neoliberalism as almost inevitable.

If we instead see neoliberalism for the messy, failure-ridden experiment that it was in its early days, we might also begin to understand its ongoing influence as contingent rather than inevitable—and to imagine a life after neoliberalism.

This blog was originally posted on the SPERI blog, 5 June 2019.

Political economy

How austerity measures helped fuel today’s right-wing populism

Ten years ago, on Oct. 3, 2008, United States President George W. Bush signed the “Troubled Asset Relief Program” (TARP) that promised $700 billion to support banks and companies that were hit by the global financial crisis.

As the U.S. Congress granted its support for the historic bill, it seemed like liberal democracy was rising to the challenge posed by the global financial crisis. Yes, the bill would be very expensive for American taxpayers, but the cost seemed justified in the face of the potential collapse of the global economy.

A decade later, the financial crisis is a distant memory, the TARP funds have been repaid with interest and stock markets are reaching new heights.

Yet switch from the business pages to the front page and a much darker picture appears: a particularly virulent strand of right-wing populism is popping up around the world, while Doug Ford and Donald Trump are wreaking havoc with our democratic institutions.

Exploiting weaknesses

It turns out that the greatest cost of the 2008 global financial crisis was not the bailouts — but rather the cost to our democratic system.

Conservative populists have been able to exploit a series of weaknesses in liberal democratic society — weaknesses that predate the global financial crisis, but were exacerbated by the failure of our political leaders to respond effectively to it.


In the decades leading up to the 2008 crisis, governments rejected the more cautious approach to economic management that had emerged after the Great Depression and the Second World War. Those traumatic historical events produced policies that focused on employment and economic stability, delivering a decrease in inequality and fuelling solid economic growth.

Those concerns were pushed aside in the 1980s and 1990s, as governments of all political stripes sought to focus on inflation rather than unemployment, and to roll back regulations in the belief that this would produce a more dynamic economy.

Cuts to social spending

The results were a massive growth in the size of the financial sector and a tolerance for increasingly risky investments with little genuine oversight — a recipe for financial disaster, as we saw unfold a decade ago.

As governments sought to get leaner and cut back on social spending, as the Jean Chrétien Liberals did in the 1990s, inequality grew and middle-class incomes stagnated. Many middle-class families adapted by dipping into their home equity with lines of credit or simply loading up on credit-card debt — another time bomb that exploded in the U.S., Britain and throughout Europe in 2008 but has yet to detonate in Canada.

Once the global financial crisis hit, it became much easier to see that the economy wasn’t working for everyone.

In the U.S., the Federal Reserve Bank of St. Louis estimates that nine million families lost their homes in that crisis — between 10 and 15 per cent of all homeowners. In the U.K., between 2008 and 2009, the sudden drop in housing prices, pension funds and equities translated into a loss of 31,000 pounds (or almost $50,000 Canadian) for every household.

Drowning in debt

The household debt that had seemed like a clever solution to stagnating wages suddenly became a huge problem for those families who found themselves with a house worth a lot less, one of their household’s jobs gone and debts still to pay.

Governments’ response to the crisis only made things worse. Sure, in the short term, they acted to shore up the financial system and used fiscal stimulus to reduce the severity of the recession. But by 2010, just about every western government, including Canada’s Conservatives, had changed their tune and shifted back to austerity, arguing that we couldn’t afford more fiscal stimulus.

Austerity measures land hardest on those who most need government help — like those families who were down one job and couldn’t make the payments on a mortgage that was worth more than their house.


It also turns out that this rapid shift to austerity was counterproductive, damaging the recovery in many countries and actually increasing debt-to-GDP ratios.

Inequality also grew after the crisis. As economist Branko Milanovic’s research shows, the stagnation in western middle-class wages expanded to include upper-middle-class earners. In fact, the only people who really benefited from the austerity push were the hyper-rich.

Meanwhile governments around the world billed their austerity measures as necessary and inevitable — denying any responsibility for the suffering these policies caused.

Economics helped fuel populism

Add it all up and you get ripe conditions for the kind of economic insecurity and frustration that is fertile ground for populist sentiment. Of course, the rise of soft authoritarianism cannot and should not be reduced to economic factors. But those factors do play a role.

After all, if political leaders tell us that they have no choice but to enact these painful economic policies — that these issues are beyond democratic control — why should we be surprised when someone like Donald Trump, Nigel Farage or Doug Ford comes along and promises to give them back control?

In order to oppose the authoritarianism of these conservative populists and challenge their lies, we need to start by recognizing that the economic experiments of the last few decades have failed the ultimate test: building a prosperous and democratic society for all.

This article was originally published on The Conversation, October 1, 2018.

Finance in crisis

Why we aren’t ready for the next financial crisis

Ten years ago, as the global economy slipped ever-closer to a total meltdown, regulators were slow to recognize the severity of the problem because they were looking in the wrong direction.

The transcripts from the US Federal Reserve’s policy-making committee meeting that took place on September 16, just as the financial giant Lehman Brothers was being allowed to fail, include 129 mentions of “inflation” and just five of “recession.” Not surprisingly, given their narrow focus on inflation, the committee voted to take no action to support the economy, just days before it began to go into free-fall.

We will never know how much the Federal Reserve’s obsession with inflation cost the global economy—not just through that delayed response, but also due to the previous decades of focusing on low inflation rather than jobs and growth.

In retrospect, the Fed’s lack of awareness of the wider economy in 2008 seems crazy. And yet, we are running the risk of doing something very similar today. As stock markets ride a historically long bull market, we’re partying like it’s 2007, even as both the Canadian and global economies edge into ever-more dangerous territory. And we simply don’t have the tools to respond when the next crisis hits.

Why are we acting like the 2008 financial crisis was a blip, when it should have been a wake-up call to transform our financial and economic systems?

During the immediate aftermath of the crisis, of course, governments and central bankers did take bold action. They experimented with quite radical policies, particularly in the US and in Europe, including massive bail-outs, quantitative easing, and even negative interest rates. Yet, these changes were largely framed as exceptional—temporary aberrations rather than a sign that the tools needed to manage the global economy had changed for good.

Some more sustained efforts to reform the regulatory system have been successful. Big banks are better capitalized now than they were before the 2008 crisis and regulators have more power. But dig a bit deeper and it is remarkable how many things remain unchanged. The same big three credit rating agencies whose misleading evaluations helped to blow up the system still account for over 96% of all ratings. Big American financial institutions are using loopholes to move their riskier derivatives portfolios offshore where they aren’t regulated. And the real-estate market, which was at the heart of the last major crisis, is a ticking time bomb as interest rates slowly climb back up to more normal levels.

What happened to the big talk of reform that we heard in the early days of the crisis?  Policymakers discussed much higher leverage ratios (which would restrain the riskiness of banks’ financial bets), the comprehensive regulation of derivatives, and a financial transactions tax that would make it more costly for big investment firms to make very short-term, often destabilizing, financial bets.

But in the end, the reforms that were made were tweaks rather than major changes. Governments around the world failed to introduce the kinds of wide-ranging reforms needed to prevent and manage the next major financial meltdown. When the next economic crisis hits (and yes, there is always another one), our central banks—and our governments—will face much higher levels of debt, far less room to lower interest rates, and few new tools to respond.

Who is to blame for this current dangerous situation, and our woeful lack of preparedness?

CIBC’s CEO, Victor Dodig, recently blamed central banks for the current instabilities in the global financial system—suggesting that they kept interest rates too low for too long, creating distortions in housing markets and in the emerging market economies that borrowed cheaply and are now having to pay back more than they can afford. Dodig is right to suggest that these are serious warning signs that the economy may be in trouble soon. But he’s pointing a finger in the wrong direction when it comes to allocating blame.

Central banks only kept rates this low because it was the only tool at their disposal to keep the economy going. The real blame rests with the many western governments, including Canada’s Conservatives, who didn’t have the political guts to do what was needed to reform the global economy. These governments flirted briefly with stimulus and discussions of more systematic reform, and then shifted all too rapidly to a package of austerity and a few minor reforms.

The heavy lifting for supporting a still-ailing economy was left to the central banks—who found themselves keeping rates low far longer than they had ever anticipated.

Higher rates without any government action to support the economy would only have made things worse, sooner. What we needed then, and still need now, is more systematic economic reform of the kind that was only briefly discussed and then ignored after the 2008 crisis.

It may well be too late to do much more before the next economic crisis hits. We just have to hope that the next crisis produces some more creative thinking and, above all, some braver political action to reform the global economy for the long haul.


This blog post originally appeared in the Ottawa Citizen on September 20, 2018.

Economic exceptionalism

Why nationalizing the Trans Mountain Pipeline is undemocratic

When Canadians woke up to learn that they were the proud owners of a run-down pipeline, many of them no doubt asked themselves, “Can the government just do that?” After all, nationalization hasn’t been a popular government pastime in Canada since the 1970s.

The answer is that of course the government can do that—and has done so in various ways over the past decades, through bailouts, subsidies, and all-out nationalizations when markets and firms run into serious trouble. Yet, the kind of emergency bailout that we are witnessing today should still raise alarm bells among those who believe that major economic decisions should be subject to genuine democratic debate.

Of course, the Liberal government has been quick to assure us that the nationalization is an entirely temporary measure, only necessary because of the current crisis, which requires a use of exceptional government powers to bail out a major economic sector that is vital to the Canadian economy.

If this language seems oddly familiar, it’s because it’s almost exactly the kind of exceptionalist rhetoric that we heard from the Harper Conservative government during the 2008 global financial crisis (as well as the Bush administration in the United States and the Labour and then Conservative governments in the United Kingdom, to name just a few).

Ten years ago, it was the major auto companies that received a government bailout of $13.7-billion in Canada, while in the US and the UK, auto companies, major banks, and financial institutions were the recipients of massive government injections, together with a few choice all-out nationalizations. The global insurance firm, AIG, alone received US$182-billion from the Federal Reserve Bank of New York.

While government intervention is a normal part of a market economy, not all such interventions are equally legitimate. When should a government break the normal rules of democratic deliberation and free-market economics and bail out big firms with taxpayer money?

A quick comparison with government responses to the 2008 global financial crisis helps to clarify what’s at issue here.

In both 2008 and today, the government’s actions were peremptorily announced and driven by the prime minister and cabinet without full democratic consultation. The justification then, as now, was that the needs of the market actors were too immediate to be subjected to the normal contentious processes of democratic politics.

Then, like now, we were told that these were temporary, exceptional government actions and were necessary because of the massive crisis that the economy faced.

Then, like now, we were also promised that the investment would be repaid, as the private sector regained its ground. (In fact, in the Canadian case, The Globe and Mail reported that the Harper government sold back its shares in the auto sector too quickly and the taxpayers took a $3.5-billion loss.)

Yet, that is where the parallels end.

In 2008, there was arguably a very legitimate concern about an imminent global financial meltdown. The US government had initially refused to take the bailout route and had let the financial firm, Lehman Brothers, fail. The firm’s failure produced a massive global market-wide panic, as banks wondered who would be next to go under and refused to lend to each other, nearly bringing the financial system to a halt. Although it is impossible to know what would have happened without massive government intervention in the United States, Canada, and around the world, there is a strong case to be made that the general public good was at serious risk as we faced the prospect of another Great Depression.

Where are the major risks to the public good today? Arguably, in this case, they are those identified by the people arguing against the pipeline: the very real risks of oil spills on the West Coast and the potentially catastrophic costs of climate change.

There is no massive economic crisis in the offing. Yes, there is significant uncertainty surrounding the pipeline, but any firm in this business should know by now that pipelines are immensely political, and should build that into its business plan. Yes, Alberta’s economy would take a big hit from losing the pipeline deal – and it will face significant economic challenges in the future as the province and the country make the necessary shift toward a low-carbon economy. These are serious policy challenges that require genuine public debate and innovative public and private investment – not an imperious bailout.

A crisis needs to be serious, and the threat to the public good very clear, for a government to legitimately bypass normal democratic processes. The Kinder Morgan Trans Mountain pipeline nationalization clearly does not meet this threshold. As such, it represents a particularly undemocratic form of economic exceptionalism.

This article was originally published by the Globe and Mail on 7 June 2018.