Economics

Economic exceptionalism past and present: or whatever happened to normal?

Exceptionalist policies can play a critical role in changing norms and perceptions of what constitutes the status quo. What role does exceptionalism play within our society today?


Whatever happened to normal? You remember: a normal neoliberal political economy in which the democratic process sort of works and we have reasonable growth combined with some wage increases and interest rates around 4-5%. Of course, this “normal” economy excluded a huge number of people from its benefits, depended on lower- and middle-income earners maxing out their credit cards and lines of credit to keep afloat, relied on using carbon at an unprecedented scale, and produced a massive and unsustainable asset bubble. But it seemed normal (at least when compared with where we are today).

Not long after the 2008 financial crisis blew this system up, there was a lot of talk about returning to normal. But once Trump was elected and the long slow Brexit train wreck began, we seem to have given up on normal altogether.

Scholars have found a number of ways of describing this disruption of “normal” politics and economics. Ian Bruff, Burak Tansel and others have pointed to the rise of authoritarian neoliberalism in many countries. We are also witnessing what Peter Adey, Ben Anderson and Stephen Graham have described as “the proliferation of emergency as a term” and an increasing effort to govern through emergencies.

My work has focused instead on the growing role of economic exceptionalism in recent years. During my time as a Leverhulme Visiting Professor at SPERI at the University of Sheffield, I examined how useful this concept is for understanding how “the normal” has been suspended or disrupted today—as well as in the past. Spoiler alert: the usefulness of the term depends a lot on what time frame we are looking at—but more on that later.

I first became interested in understanding this kind of break from the “normal” in the wake of the 2008 global financial crisis. I became increasingly angry at the Canadian Prime Minister, Stephen Harper, for repeatedly making claims along the lines of: “Normally, we wouldn’t be doing this (running a deficit, imposing austerity measures in a counter-productive attempt to reduce said deficit, denying airline workers the right to strike) …but because we are living in exceptional times, these measures are not only legitimate but necessary”. This language of exceptionalism was widespread at the time. In the UK, we saw politicians justifying bailouts, austerity measures and highly exceptional forms of monetary policy as necessary suspensions of normal politics in a time of crisis.

I have a number of colleagues and friends who work on critical security studies, and I kept thinking about their work on securitization and the logic of political exceptionalism in the post-9/11 era. They found an increased tendency among liberal governments to invoke states of exception in times of crisis. Governments achieve this by claiming that a given existential threat to the state has made the suspension of normal liberal rights necessary in order to protect the public.

What if, I asked myself, this logic of exceptionalism is not only political but also economic? Without getting into the theoretical details of why this is absolutely the case (which you can read in my Security Dialogue and International Political Sociology articles on the topic), a quick survey of history made it clear that yes, in fact, liberal states have often used emergency powers to address economic crises and have also justified them in exceptionalist terms. This has included the repeated use of martial law in the US and UK to put down strikes in the late 19th and early 20th centuries, as well as President Franklin Roosevelt’s use of the Trading with the Enemy Act to put through some of the key measures of the New Deal in the 1930s.

One of the goals of this research project is to understand when and why these kinds of exceptionalist claims are used to justify particular responses to economic crises. When I defined my initial hypotheses, I expected to find that the early New Right governments of Margaret Thatcher and Ronald Reagan both relied heavily on exceptionalist claims in the early 1980s in arguing for the necessity of their radical and often very painful strategies for reducing inflation. But this is not what I discovered. In fact, I seriously considered titling this blog “A funny thing happened on my way to a conclusion,” because it is in many ways about what happens in research when we start out with one particular hypothesis and end up finding something quite unexpected.

Going back to the 1970s, when both American and British governments first began describing inflation as a major crisis, I found plenty of evidence of exceptionalist language. Nixon declared an emergency in order to address the postal workers’ strike in 1970 and again when he imposed wage and price controls in 1971. In the UK, the Conservative Prime Minister Edward Heath declared a national emergency and imposed a three-day work week in 1973 when striking miners threatened the national energy supply. All of these measures were framed as temporary exceptions to normal politics that were necessary because of the grave threat to the national economy.

In stark contrast, the Reagan and Thatcher governments avoided using exceptionalist language when they first came to power. In the US case, Reagan’s first Budget Director, David Stockman, had written an open letter calling on the government to declare an emergency in order to tackle “an economic Dunkirk,” but his arguments were rejected.

Two very different responses to national strikes, less than a decade apart, make clear the difference between these two disruptions of normal politics. Nixon called in the National Guard in 1970 when postal workers went on strike, but ultimately granted their demands for a wage increase and the right to bargain over wages. In contrast, when Reagan faced the air traffic controllers’ strike in 1981, he not only declared the strike illegal but then fired all of the striking workers and banned them from federal employment for life. If we look at Thatcher’s response to the miners’ strike in 1984-85, we find a similar pattern of using extraordinary measures not to address a temporary crisis, but to permanently reduce the power of the miners in particular, and labour unions more generally. These were extreme actions, but they were not justified as temporary or exceptional. Instead, they sought to use emergency powers to establish a new normal.

What do these historical findings tell us about the disruption of the “normal” today?

After the 2008 global financial crisis, most political leaders were using a language of exceptionalism, telling us that we just had to suspend normal political and economic rights and processes temporarily to deal with the crisis. Yet, in many cases, that suspension has blurred into a new kind of normal, which has had all sorts of troubling consequences.

Today, it seems like the Trumps and the hard Brexiters of this world have given up even pretending that this is about a temporary suspension of the normal. It looks instead like they’re calling for a more radical disruption and a very different kind of normal. Both kinds of claims are troubling—but it’s the second kind that is truly worrying.

This is the final blog in a three-part series posted on the SPERI Blog, reflecting on the major research themes that I explored while a Leverhulme Visiting Professor there. You can also read Part 1 and Part 2 of the mini blog series.

Uncategorized

A beginner’s guide to economic ignorance

Ignorance is not the antithesis to knowledge, but is part of it. Wishful thinking, muddling through and other forms of ignorance play a crucial role in shaping economic policy and its effects on society.

We hear a lot about the power of economic expertise—whether it’s the news media calling on the latest expert to tell us where the economy is headed, or a populist critic arguing that experts are out of touch with the real world.

While it is certainly true that governments and international organisations rely on economic expertise to get their job done, there is a lot that these institutions simply don’t know. Sometimes that ignorance is inconvenient, sometimes it’s dangerous, and sometimes it’s politically useful.

Just take a look at some of the economic arguments being made in favour of Brexit in the UK, or the willful denialism underpinning Canadian provincial and federal Conservatives’ attacks on carbon taxes, and you can find plenty of evidence of the political uses of economic ignorance today.

It turns out that economic ignorance has a long pedigree. I became interested in the role of ignorance in economic policymaking while doing archival research into the internal policy debates in the early 1980s in the United States and Great Britain, when Margaret Thatcher and Ronald Reagan first came into power.

I had begun the research expecting to find an unmatched display of economic expertise. After all, these were the years when the fathers of neoliberal economics, Friedrich Hayek and Milton Friedman, were directly advising policymakers as they and the politicians of their time sought to put neoliberal theory into practice.

Yet what I found when I started to read through the various internal memos and minutes was a great deal of confusion, uncertainty—and, yes, downright ignorance. This finding led me to ask three key questions: What kinds of ignorance do we find in economic policymaking? What roles does ignorance play? And what are the political stakes involved?

Understanding the varieties of economic ignorance became one of the three key research themes for my time as a Leverhulme Visiting Professor at SPERI and the basis of one of my Leverhulme Lectures on February 6, 2019.

Here are a few of the things that I have learned about economic ignorance (so far).

  1. Ignorance is more complicated—and more interesting—than we generally admit.

Rather than being its opposite, ignorance is always a part of knowledge. As researchers, we begin with a question, not an answer. And even once we begin to answer that question, ignorance remains somewhere in the background. One of the tricks of knowing anything well is to figure out what we don’t need to know (this, of course, is one of the greatest challenges in writing a PhD thesis – or just about anything else of significance).

There is always far more that we do not know than that we actually do know. This should be blindingly obvious, and yet it’s also somehow deeply unsettling for us as scholars (and “experts”) to admit.

  2. Ignorance is always political—even when it’s not strategic.

Ignorance can play a strategic role in some cases, as Linsey McGoey has pointed out. Although ignorance does sometimes pose serious practical and political difficulties for policymakers, it can also be quite useful for them to ignore or deny knowledge of certain inconvenient facts.

Yet even when it isn’t strategic or willful, ignorance is immensely political: what forms of ignorance we use, what kinds we avoid, and whether we admit to the limits of our knowledge all have profound consequences for how we act or fail to act.

  3. Ignorance is central to economic theory.

It may seem odd to talk about ignorance in the same breath as economics. After all, much conventional economic theory is based on the assumption that markets efficiently use all available information.

In fact, ignorance plays a foundational role in economic theory. When we dig a bit deeper into different economic theories, we find a whole host of assumptions about what is knowable and what isn’t, and who should remain ignorant. For example, as both Will Davies and Melinda Cooper have argued, most neoclassical and new classical economic theory is premised on the belief that the government not only cannot but should not know too much about the economy. Economic theories like these seek to map out a kind of distribution of ignorance and knowledge that is based on both efficiency and morality.

  4. There are (at least) four different varieties of economic ignorance worth paying attention to.

In digging through the archival records for the early Thatcher and Reagan years, I found four distinct but related forms of economic ignorance at work. Here are a few helpful definitions for those interested in tracking the role of ignorance in past (and present) economic policymaking:

Wishful and magical thinking

Definition: Willfully ignoring how ridiculous an idea is.

Example: In the 1980s, policymakers in both the UK and the US argued that the magic of rational expectations theory meant that it was possible to reduce inflation without the usual pain of a major recession as long as the government’s commitment to controlling the money supply was credible.

Cluelessness

Definition: Genuine confusion about what is going on (particularly when your ridiculous idea doesn’t work).

Example: Once it became clear (about a year into each government’s term) that a) neither the UK nor the US government could control the money supply, and b) recessions of historic proportions were underway, confusion, cluelessness (and finger-pointing) ensued.

Fudging and muddling through

Definition: Doing what you can a) to make your magical thinking seem more believable, and b) in spite of your cluelessness.

Example: Both the British Chancellor and the American Budget Director rejected the economic forecasts produced by their bureaucracies (because they contradicted their magical thinking) and either proposed their own highly inaccurate one (in the American case) or largely ignored it (in the British case). When they were presented by staff with inconvenient facts about the contradictions in their proposals, they resorted to various “presentational” fudges to make them less obvious.

Denial

Definition: Blissfully ignoring all of the above and pretending that it’s someone else’s fault when things go wrong.

Example: This is precisely what both British and American governments did in the years following the abject failure of their early attempts to put their policies into place, denying that these were “real” monetarist experiments or rebranding their efforts as “political monetarism” when actual monetarism turned out to be a dud. It is also, of course, what we are living with on a daily basis right now, as so many of our political leaders deny responsibility for the failure of our economic thinking and practice over the past decades—and for the resulting damage to economic justice, to our democracies, and to our planet.

  5. It’s not dangerous to admit our ignorance—in fact, the opposite is the case.

Although it’s natural to respond to the obvious dangers of many of these forms of economic ignorance by seeking a return to a purer, more absolute kind of expertise, this would be a mistake. Yes, we must be attuned to the ways in which willful ignorance can blur into mendacity and call it out as such. But to pretend that we can avoid ignorance altogether is itself a dangerous kind of wishful thinking.

Instead, we need a more humble and reflexive kind of expertise that acknowledges its limits and recognizes the value of a healthy kind of ignorance—the kind that leads us to recognize what we don’t know, that spurs us on to ask new questions, and that prompts us to find better answers.

This is the second in a series of blogs originally posted on the SPERI Blog on June 14, 2019, outlining the major themes that I explored while a Leverhulme Visiting Professor at the University of Sheffield.



Neoliberalism’s ‘unfailures’

Economic policies enacted under neoliberalism have often failed to meet their objectives, but have remained unchallenged. Why do certain policy failures have so little impact?

Political economists are fascinated by crises and the failures that they so often reveal: the collapse of the Gold Standard, the Great Depression, the stagflation crisis, the Great Recession. We see them as the inflection points in our economic history—the moments when the failure of one economic order is made visible and another takes its place.

Yet economic failures—even very big ones—do not always have this kind of political salience. The 2008 global financial crisis began as a massive, highly public failure for contemporary neoliberal theory and practice. A decade on, however, what is most striking is how little the very evident failures of neoliberalism have translated into meaningful changes.

This is not the first time that significant economic failures did not have the kind of political effects that we might expect. Such “unfailures” are actually a crucial, but often overlooked, part of our political economic history.

The late 1970s and early 1980s are often seen as a moment of resounding success for neoliberal theory and practice—when Margaret Thatcher and Ronald Reagan swept into power and turned their backs on decades of Keynesian orthodoxy with a series of dramatic and ruthless policies that put Friedman and Hayek’s ideas into practice. Monetarism, supply-side economics and the rational expectations revolution turned economic theory and policy upside down. Or did they?

In fact, this particular narrative of the early success of neoliberal ideas, which is not only common in popular accounts but also makes its way into many textbooks and serious political economy scholarship, is largely a fabrication. These ideas were all failures—and yet they have been rewritten as successes after the fact.

Why do some policy failures lead to dramatic changes in economic theory and practice, while others are ignored or forgotten? Why are some failures quiet, while others resonate loudly through history?

These are some of the central research questions that I tackled during my time as a Leverhulme Visiting Professor at SPERI. I communicated some of my initial findings on “unfailures” in a Leverhulme Lecture and the Annual IPE Lecture at the University of Warwick, and also continued to explore the issue through research at the National Archives in London during my stay in the UK.

What were the early failures of neoliberalism?

Monetarism was introduced with great fanfare and then quietly abandoned just a few years later. The early supply-siders’ promises that lower taxes would spark a massive spurt in growth without producing a large deficit quickly turned out to be wrong. Meanwhile, the quick shift in the public’s expectations that policymakers had counted on to provide a relatively painless transition to a lower inflation economy simply didn’t materialize. In other words, the three key theories underpinning the early Reagan and Thatcher economic revolutions—monetarism, supply side economics and rational expectations—all failed the initial transition from theory to practice.

What’s more, the archival record in both countries shows very clearly that policymakers were very well aware of these failures at the time, even as they struggled to understand them (and even if they denied them in later years).

One of the more entertaining letters that I found in the National Archives makes it clear that although the Americans and the British were on the same ideological side, they were also quite capable of levelling the charge of failure against each other.

The US Treasury Secretary, Donald Regan, gave a speech to Congress in which he labelled the British efforts to control their money supply an abject failure, inspiring Sir Kenneth Couzens, a senior British civil servant, to write a very testy letter to his American counterpart. After enumerating the many obvious mistakes that Regan had made in describing the British economy (including suggesting that “practically 60% of the population in Great Britain … was working for the government”), he concluded:

“it is unhelpful, and also unrealistic to suggest that we have failed because of policy mistakes whereas you won’t because you won’t make any”.

Of course, the Americans did make lots of mistakes, and even admitted a few of them—mostly in confidential documents, although sometimes more publicly—at some personal cost. The US Budget Director, David Stockman, lost his job after he was quoted in The Atlantic arguing that, when it came to both the Reagan Administration and the Congressional Democrats’ budget proposals, “none of us really understands what’s going on with all these numbers.”

Why then did the failure of the early days of Thatcher and Reagan’s brave new neoliberal experiment not lead to the rejection of their economic vision?

Part of the answer lies with the success of key politicians’ discursive strategies of denial, rebranding and mythologizing. The US Undersecretary for Monetary Affairs, Beryl Sprinkel, a student of Milton Friedman, gave speech after speech denying the failure of monetarism. His argument was identical to the one Ben Bernanke made after the global financial crisis: the problem was not the economic theory itself but its operationalization (although Sprinkel blamed the Fed Chairman of the time, Paul Volcker, whereas Bernanke, being the Fed Chairman himself, had to find someone else to blame).

In the UK, the preferred strategy deployed by Thatcher’s cabinet members was to rebrand monetarism as a more general belief in “sound money”, which remained core to the Conservatives’ mission even as the government gave up on actually targeting the money supply.

But no matter how silver-tongued the early neoliberal advocates may have been, they would not have succeeded if the world itself had not also changed in important respects. Inflation did come down in the end, not because of monetarism, supply-side economics or rational expectations, but because of massively painful and deflationary recessions in both countries—the worst since the 1930s.

Why should we care about the early days of neoliberalism today?

When we see the Teflon-like resilience of neoliberalism in the face of its very obvious failures, building a more sustainable and just economy can seem beyond reach. It is easy to start treating the rise, spread and persistence of neoliberalism as almost inevitable.

If we instead see neoliberalism for the messy, failure-ridden experiment that it was in its early days, we might also begin to understand its ongoing influence as contingent rather than inevitable—and to imagine a life after neoliberalism.

This blog was originally posted on the SPERI blog, 5 June 2019.


How austerity measures helped fuel today’s right-wing populism

Ten years ago, on Oct. 3, 2008, United States President George W. Bush signed the “Troubled Asset Relief Program” (TARP), which promised $700 billion to support banks and companies that were hit by the global financial crisis.

As the U.S. Congress granted its support for the historic bill, it seemed like liberal democracy was rising to the challenge posed by the global financial crisis. Yes, the bill would be very expensive for American taxpayers, but the cost seemed justified in the face of the potential collapse of the global economy.

A decade later, the financial crisis is a distant memory, the TARP funds have been repaid with interest and stock markets are reaching new heights.

Yet switch from the business pages to the front page and a much darker picture appears: a particularly virulent strand of right-wing populism is popping up around the world, while Doug Ford and Donald Trump are wreaking havoc with our democratic institutions.


Exploiting weaknesses

It turns out that the greatest cost of the 2008 global financial crisis was not the bailouts — but rather the cost to our democratic system.

Conservative populists have been able to exploit a series of weaknesses in liberal democratic society — weaknesses that predate the global financial crisis, but were exacerbated by the failure of our political leaders to respond effectively to it.

In this Sept. 17, 2008, photo, a trader rubs his eyes as he works on the floor of the New York Stock Exchange. The financial crisis touched off the worst recession since the 1930s Great Depression. (AP Photo/Richard Drew)

In the decades leading up to the 2008 crisis, governments rejected the more cautious approach to economic management that had emerged after the Great Depression and the Second World War. Those traumatic historical events produced policies that focused on employment and economic stability, delivering a decrease in inequality and fuelling solid economic growth.

Those concerns were pushed aside in the 1980s and 1990s, as governments of all political stripes sought to focus on inflation rather than unemployment, and to roll back regulations in the belief that this would produce a more dynamic economy.

Cuts to social spending

The results were a massive growth in the size of the financial sector and a tolerance for increasingly risky investments with little genuine oversight — a recipe for financial disaster, as we saw unfold a decade ago.

As governments sought to get leaner and cut back on social spending, as the Jean Chrétien Liberals did in the 1990s, inequality grew and middle-class incomes stagnated. Many middle-class families adapted by dipping into their home equity with lines of credit or simply loading up on credit-card debt — another time bomb that exploded in the U.S., Britain and throughout Europe in 2008 but has yet to detonate in Canada.

Once the global financial crisis hit, it became much easier to see that the economy wasn’t working for everyone.

In the U.S., the Federal Reserve Bank of St. Louis estimates that nine million families lost their homes in that crisis — between 10 and 15 per cent of all homeowners. In the U.K., between 2008 and 2009, the sudden drop in housing prices, pension funds and equities translated into a loss of 31,000 pounds (or almost $50,000 Canadian) for every household.

Drowning in debt

The household debt that had seemed like a clever solution to stagnating wages suddenly became a huge problem for those families who found themselves with a house worth a lot less, one of their household’s jobs gone and debts still to pay.

Governments’ response to the crisis only made things worse. Sure, in the short term, they acted to shore up the financial system and used fiscal stimulus to reduce the severity of the recession. But by 2010, just about every western government, including Canada’s Conservatives, had changed their tune and shifted back to austerity, arguing that we couldn’t afford more fiscal stimulus.

Austerity measures land hardest on those who most need government help — like those families who were down one job and couldn’t make the payments on a mortgage that was worth more than their house.

A girl smiles after writing slogans on Whitehall during a protest against the Conservative government and its austerity policies in London in June 2015.(AP Photo/Tim Ireland)

It also turns out that this rapid shift to austerity was counterproductive—damaging the recovery in many countries and actually increasing debt-to-GDP ratios.

Inequality also grew after the crisis. As economist Branko Milanovic’s research shows, the stagnation in western middle-class wages expanded to include upper-middle-class earners. In fact, the only people who really benefited from the austerity push were the hyper-rich.

Meanwhile, governments around the world billed their austerity measures as necessary and inevitable—denying any responsibility for the suffering these policies caused.

Economics helped fuel populism

Add it all up and you get ripe conditions for the kind of economic insecurity and frustration that is fertile ground for populist sentiment. Of course, the rise of soft authoritarianism cannot and should not be reduced to economic factors. But those factors do play a role.

After all, if political leaders tell us that they have no choice but to enact these painful economic policies—that these issues are beyond democratic control—why should we be surprised when someone like Donald Trump, Nigel Farage or Doug Ford comes along and promises to take back control and give it back to the people?

In order to oppose the authoritarianism of these conservative populists and challenge their lies, we need to start by recognizing that the economic experiments of the last few decades have failed the ultimate test: building a prosperous and democratic society for all.

This article was originally published on The Conversation, October 1, 2018.


Why we aren’t ready for the next financial crisis

Ten years ago, as the global economy slipped ever closer to a total meltdown, regulators were slow to recognize the severity of the problem because they were looking in the wrong direction.

The transcripts from the US Federal Reserve’s policy-making committee meeting that took place on September 16, just as the financial giant Lehman Brothers was allowed to fail, include 129 mentions of “inflation” and just five of “recession.” Not surprisingly, given their narrow focus on inflation, they voted to take no action to support the economy, just days before it began to go into free-fall.

We will never know how much the Federal Reserve’s obsession with inflation cost the global economy—not just through that delayed response, but also due to the previous decades of focusing on low inflation rather than jobs and growth.

In retrospect, the Fed’s lack of awareness of the wider economy in 2008 seems crazy. And yet, we are running the risk of doing something very similar today. As stock markets extend a historically long bull run, we’re partying like it’s 2007, even as both the Canadian and global economies edge into ever-more dangerous territory. And we simply don’t have the tools to respond when the next crisis hits.

Why are we acting like the 2008 financial crisis was a blip, when it should have been a wake-up call to transform our financial and economic systems?

During the immediate aftermath of the crisis, of course, governments and central bankers did take bold action. They experimented with quite radical policies, particularly in the US and in Europe, including massive bail-outs, quantitative easing, and even negative interest rates. Yet, these changes were largely framed as exceptional—temporary aberrations rather than a sign that the tools needed to manage the global economy had changed for good.

Some more sustained efforts to reform the regulatory system have been successful. Big banks are better capitalized now than they were before the 2008 crisis and regulators have more power. But dig a bit deeper and it is remarkable how many things remain unchanged. The same big three credit rating agencies whose misleading evaluations helped to blow up the system still account for over 96% of all ratings. Big American financial institutions are using loopholes to move their riskier derivatives portfolios offshore where they aren’t regulated. And the real-estate market, which was at the heart of the last major crisis, is a ticking time bomb as interest rates slowly climb back up to more normal levels.

What happened to the big talk of reform that we heard in the early days of the crisis? Policymakers discussed much higher leverage ratios (which would restrain the riskiness of banks’ financial bets), the comprehensive regulation of derivatives, and a financial transactions tax that would make it more costly for big investment firms to make very short-term, often destabilizing, financial bets.

But in the end, the reforms that were made were tweaks rather than major changes. Governments around the world failed to introduce the kinds of wide-ranging reforms needed to prevent and manage another major financial meltdown. When the next economic crisis hits (and yes, there is always another one), our central banks—and our governments—will face much higher levels of debt, far less room to lower interest rates, and few new tools to respond.

Who is to blame for this current dangerous situation, and our woeful lack of preparedness?

CIBC’s CEO, Victor Dodig, recently blamed central banks for the current instabilities in the global financial system—suggesting that they kept interest rates too low for too long, creating distortions in housing markets and in the emerging market economies that borrowed cheaply and are now having to pay more than they can afford. Dodig is right to suggest that these are serious warning signs that the economy may be in trouble soon. But he’s pointing a finger in the wrong direction when it comes to allocating blame.

Central banks only kept rates this low because it was the only tool at their disposal to keep the economy going. The real blame rests with the many western governments, including Canada’s Conservatives, who didn’t have the political guts to do what was needed to reform the global economy. These governments flirted briefly with stimulus and discussions of more systematic reform, and then shifted all too rapidly to a package of austerity and a few minor reforms.

The heavy lifting for supporting a still-ailing economy was left to the central banks—who found themselves keeping rates low far longer than they had ever anticipated.

Higher rates without any government action to support the economy would only have made things worse, sooner. What we needed then, and still need now, is more systematic economic reform of the kind that was only briefly discussed and then ignored after the 2008 crisis.

It may well be too late to do much more before the next economic crisis hits. We just have to hope that the next crisis produces some more creative thinking and, above all, some braver political action to reform the global economy for the long haul.
This blog post originally appeared in the Ottawa Citizen on September 20, 2018.


Why nationalizing the Trans Mountain Pipeline is undemocratic

When Canadians woke up to learn that they were the proud owners of a run-down pipeline, many of them no doubt asked themselves, “Can the government just do that?” After all, nationalization hasn’t been a popular government pastime in Canada since the 1970s.

The answer is that of course the government can do that—and has done so in various ways over the past decades, through bailouts, subsidies, and all-out nationalizations when markets and firms run into serious trouble. Yet, the kind of emergency bailout that we are witnessing today should still raise alarm bells among those who believe that major economic decisions should be subject to genuine democratic debate.

Of course, the Liberal government has been quick to assure us that the nationalization is an entirely temporary measure, only necessary because of the current crisis, which requires the use of exceptional government powers to bail out a major economic sector that is vital to the Canadian economy.

If this language seems oddly familiar, it’s because it’s almost exactly the kind of exceptionalist rhetoric that we heard from the Harper Conservative government during the 2008 global financial crisis (as well as the Bush administration in the United States and the Labour and then Conservative governments in the United Kingdom, to name just a few).

Ten years ago, it was the major auto companies that received a government bailout of $13.7-billion in Canada, while in the US and the UK, auto companies, major banks, and financial institutions were the recipients of massive government injections, together with a few choice all-out nationalizations. The global insurance firm, AIG, alone received US$182-billion from the Federal Reserve Bank of New York.

While government intervention is a normal part of a market economy, not all such interventions are equally legitimate. When should a government break the normal rules of democratic deliberation and free-market economics and bail out big firms with taxpayer money?

A quick comparison with government responses to the 2008 global financial crisis helps to clarify what’s at issue here.

In both 2008 and today, the government’s actions were peremptorily announced and driven by the prime minister and cabinet without full democratic consultation. The justification then, as now, was that the needs of the market actors were too immediate to be subjected to the normal contentious processes of democratic politics.

Then, like now, we were told that these were temporary, exceptional government actions and were necessary because of the massive crisis that the economy faced.

Then, like now, we were also promised that the investment would be repaid, as the private sector regained its ground. (In fact, in the Canadian case, The Globe and Mail reported that the Harper government sold back its shares in the auto sector too quickly and the taxpayers took a $3.5-billion loss.)

Yet, that is where the parallels end.

In 2008, there was arguably a very legitimate concern about an imminent global financial meltdown. The US government had initially refused to take the bailout route and had let the financial firm, Lehman Brothers, fail. The firm’s failure produced a massive global market-wide panic, as banks wondered who would be next to go under and refused to lend to each other, nearly bringing the financial system to a halt. Although it is impossible to know what would have happened without massive government intervention in the United States, Canada, and around the world, there is a strong case to be made that the general public good was at serious risk as we faced the prospect of another Great Depression.

Where are the major risks to the public good today? Arguably, in this case, they are those identified by the people arguing against the pipeline: the very real risks of oil spills on the West Coast and the potentially catastrophic costs of climate change.

There is no massive economic crisis in the offing. Yes, there is significant uncertainty surrounding the pipeline, but any firm in this business should know by now that pipelines are immensely political, and build that into its business plan. Yes, Alberta’s economy would take a big hit from losing the pipeline deal – and it will face significant economic challenges in the future as the province and the country make the necessary shift toward a low-carbon economy. These are serious policy challenges that require genuine public debate and innovative public and private investment – not an imperious bailout.

A crisis needs to be serious, and the threat to the public good very clear, for a government to legitimately bypass normal democratic processes. The Kinder Morgan Trans Mountain pipeline nationalization clearly does not meet this threshold. As such, it represents a particularly undemocratic form of economic exceptionalism.

This article was originally published by the Globe and Mail on 7 June 2018.


Why we need to stop letting economic crises go to waste

There’s a popular adage that we should never let a good crisis go to waste. Yet, arguably, that’s what we’ve been doing for decades now. We’ve avoided facing the genuine political challenges that economic crises present us and lost these opportunities to build a more equitable, effective society.

Although we seem to have dodged the bullet of another economic meltdown, recent gyrations in the stock market remind us that economic crises always come back. We have not been so lucky politically, and are now in the throes of a major democratic crisis, with the rise of right-wing, xenophobic, broadly anti-democratic forces in the United States, Europe, and beyond.

These may seem like very different crises, but they are connected. The last few global economic crises all have something in common. Although they were politically dislocating — producing major winners and losers — most governments responded by pretending that they had technical-economic fixes. Our governments contributed to the wider technocratic trend of narrowing political debate by pushing many of the most pressing questions — like inequality, which many of these policies ended up increasing — off the table.

As political critics and philosophers like Michael Sandel have argued, this distancing of major economic decisions from democratic debate — and the resultant narrowing of the political debate — have played crucial roles in the current democratic crisis.

I have spent much of my career trying to make sense of economic crises and the politics of our responses to them. (I suppose this makes me something of an academic ambulance chaser!) While I began my career optimistic that we could learn from our mistakes and make constructive changes, over time I have seen a depressingly familiar pattern emerge.

I went back to grad school because of the 1994 Canadian debt crisis. I was a parliamentary intern at the time, working on the Hill, trying to make sense of why Chrétien’s Liberal government was pursuing an austerity agenda that threatened so much of the social infrastructure that the (first) Trudeau Liberals had put into place. Back then, Canada was a bit like Greece (but on a smaller scale, happily).

After the Mexican peso crisis, the bond markets were skittish, and the level of federal and provincial debt began to look worrying. To regain bond market confidence, we were told, the government must slash its debt through austerity measures. We had no alternative. This was simply a matter of necessity, not up for political debate.

Of course, these decisions did have profound political consequences. As a recent TD report shows, 1990s federal cuts to transfer payments — money that the provinces used to support health, education, and welfare — sharply increased the level of inequality in Canada. But we were told that economic decisions were not political choices.

The Canadian government was not alone in adopting this response to a crisis brought about by global economic pressures. This same mantra of economic necessity trumping political debate was also at the heart of Bill Clinton’s and Tony Blair’s political strategies throughout the 1990s.

The 2008 global financial crisis should have laid to rest any delusions about the unquestionable superiority of Western hyper-globalization, as Dani Rodrik labels it. This, many of my colleagues and I hoped, would finally be the crisis that would lead to some more fundamental rethinking of the current global economic order.

Briefly, it looked like more transformative changes would be adopted. There were discussions about a global “Tobin Tax” on international capital flows. There was talk of imposing much more realistic capital requirements on banks to make them less reliant on taxpayer-funded bailouts. It even became possible to talk about inequality — a problem that had been sidelined before, even at the World Bank, where discussing absolute poverty was far more politically palatable. And of course, there was the Occupy movement and wider social pressure to acknowledge and challenge growing inequality.

In the end, though, as Eric Helleiner has put it so convincingly, this was a “status quo crisis.” The big questions were pushed off the agenda, and very little actually changed. Economic necessity trumped political possibility once again.

Ironically, President Obama’s chief of staff, Rahm Emanuel, used the phrase “you should never let a good crisis go to waste” to describe the 2008 global financial crisis. And then that administration ultimately went right ahead and let it go to waste. Western liberal democratic governments not only failed to seize the moment to try to fix what was going wrong with economic policies, but they also missed their chance to narrate the crisis as one that could be resolved through genuinely open, democratic political and economic reforms.

Anti-democratic forces of the right and the far-right have jumped into that void and filled it with their own dark stories that paint globalization in racist and anti-Semitic terms and treat democracy as being broken beyond repair.

If we are to challenge these hateful narratives, then our political and economic leaders must let go of the narrowly technical language of efficiency and necessity. Instead, they must start talking about the politics of economics. They must acknowledge that there are always winners and losers. They must see economic efficiency as a means to the end of human flourishing rather than as an end in itself. And they must reinforce, promote, and protect our key democratic values.

If we do so, next time we’re hit by a crisis (and we will be), rather than letting it go to waste, we may just be able to use it to tackle these problems in a genuinely democratic forum.

This blog post was originally published on the CIPS blog, April 30, 2018.