Daily-Dose

Contents

From New Yorker

From Vox

Why fears of the return of 1970s-style inflation are overblown.

Financial history doesn’t repeat itself, but it tends to rhyme.

In 2008, the kinds of excessive risk-taking and speculation on Wall Street that had sparked the Great Depression in 1929 contributed to another massive global downturn.

Now some economists are voicing concern that 2021 could see a rerun of another economic calamity: the Great Inflation of the 1970s.

For those of us not alive then and who have never lived through a period of debilitating inflation, the fears voiced by baby boomer economists like Larry Summers and Olivier Blanchard that massive price increases could be coming might ring hollow. But their worry, which many economists share, reflects a real history. The Great Inflation, which began in the late 1960s and finally ebbed in the early ’80s, was a genuine calamity that worsened living standards for years.

Understanding the warning that figures like Summers and Blanchard are issuing is important. But equally important is understanding the key differences between what happened in the 1970s and what’s happening today.

Summers, Blanchard, and many mainstream economists have internalized a story about the 1970s Great Inflation, and inflationary phenomena more generally, that informs their outlook.

The story goes like this: President Lyndon B. Johnson spent a lot of money on the war in Vietnam. Wartime spending flooded the economy with money; prices crept up. LBJ’s profligacy — and the Federal Reserve’s willingness to tolerate it — led the whole economy to lose faith in the idea that prices would remain stable. Once everyone expected inflation, it became a self-fulfilling prophecy: because workers expected prices to increase, they demanded higher wages; because businesses expected wages to rise, they raised their prices; and so on, in an ever-escalating “wage-price spiral.”

By the end of the 1970s, the inflation rate was nearing double digits, or even exceeding them, depending on the measure.

The experience came to an end thanks to a new, radical approach by the Federal Reserve. Now, a quick primer on how the Fed affects the economy: Broadly speaking, the Fed is in charge of how much money is circulating in the economy at any given time. If there’s too much money, then you can get inflation; too little might mean low inflation — but that might also mean businesses and households have trouble borrowing money, grinding the economy to a halt.

In 1979, grinding the economy to a halt was the path the Fed chose to take to tame inflation. Paul Volcker, installed as Fed chair by Jimmy Carter that year, raised interest rates, effectively shutting off the money spigot at the Fed, and signaling to markets that more rate hikes would follow until the problem was fixed.

What resulted was a gradual decline in inflation — but also two deep recessions in the early ’80s that drove the unemployment rate to its highest level since the Great Depression. The process worked, Reagan adviser Michael Mussa later said, because the Fed proved it was willing “to spill blood, lots of blood, other people’s blood” to get inflation under control.

That story now looms over the economy today. The high-spending Biden administration and its very cooperative partner in economic policy, Federal Reserve Chair Jerome Powell, feel to inflation-worriers like the story of ’70s inflation repeating itself.

Less than two months into office, Biden signed a $1.9 trillion stimulus bill, much of which went toward $1,400 checks to most Americans. Powell is accommodating this policy by continuing to keep rates near zero and buy up Treasury bonds, effectively financing the stimulus with printed money; moreover, he urged Congress to pursue stimulus during the debate over Biden’s bill, and dismissed concerns this could cause inflation.

With inflation reaching 3.4 percent in May, its highest level in 30 years, worries about a ’70s flashback appear to have some grounding. But there’s good reason to think that this worry of a replay is overblown. New economic research suggests that the story mainstream economics has been telling about the Great Inflation of the ’70s might not be entirely accurate.

The new story looks at other policies and conditions, previously understated in narratives of the period, that could have contributed to the calamity of the ’70s. This story emphasizes specific challenges, like an oil shortage and turmoil in world food markets, that drove the 1970s inflation and that are just not relevant today.

In other words, this time really might be different. And understanding that might help steer policymakers toward novel solutions and away from unnecessarily spilling “other people’s blood.”

The standard story of the Great Inflation of the 1960s and ’70s

Using the Fed’s preferred measure of inflation, we can see that prices began to rise, year over year, more rapidly starting around the mid-1960s.

 Federal Reserve Bank of St. Louis

The year-over-year core inflation rate, from 1960 to the present. Gray areas represent recessions.

They fluctuated a bit after a brief recession in 1970, but then surged to great heights, first in 1974-’75 and then at the end of the 1970s. After Volcker’s appointment in 1979, inflation peaked and then plummeted rapidly. It has never exceeded 4 percent on an annual basis again.

The popular story of the Great Inflation holds that it was the result of a chain of policy decisions starting with the budget policies of President Lyndon B. Johnson, particularly the war in Vietnam.

Whereas Johnson paid for some of his domestic priorities, like Medicare, with new taxes, he and Congress were reluctant to raise taxes to pay for the war. That meant that the war — more specifically, money spent on the war — was firing up the economy at a time when it was already growing fast, without taxes doing anything to cool the economy back down. The government was just pumping a lot more money into a private economy that didn’t have much spare capacity, meaning the money could only translate into price increases.

But the conventional story treats Vietnam only as the proximate cause. The deeper cause has to do with a trade-off economists dub the “Phillips curve” (named after economist A.W. Phillips).

In its purest form, the Phillips curve is merely a plot of the unemployment rate against the inflation rate, and it is usually downward sloping: The higher inflation is, the lower unemployment is. Here is an example of a Phillips curve graph from the Federal Reserve Bank of St. Louis:

St. Louis Fed/Kristie Engemann
A brief illustration of the Phillips curve.
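To make the shape concrete, here is a minimal sketch of a Phillips-curve-style plot in Python. The unemployment and inflation figures are invented purely for illustration (they are not the St. Louis Fed's data), but they reproduce the downward slope described above.

# A stylized Phillips curve. The unemployment/inflation pairs below are
# invented for illustration only; they are not actual Fed data.
import matplotlib.pyplot as plt

unemployment = [3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]   # percent
inflation = [5.5, 4.8, 4.2, 3.7, 3.3, 3.0, 2.8, 2.6]       # percent, year over year

plt.plot(unemployment, inflation, marker="o")
plt.xlabel("Unemployment rate (%)")
plt.ylabel("Inflation rate (%)")
plt.title("Stylized Phillips curve: lower unemployment, higher inflation")
plt.show()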

Essentially, as Brad DeLong argued in his excellent history of the Great Inflation, policymakers in the 1960s thought they could just move leftward on the Phillips curve, to a point with higher inflation and lower unemployment, without much pain.

But they were wrong. Pushing unemployment too low, the story goes, risks not just higher inflation (as the Phillips curve suggests) but accelerating inflation: inflation that grows higher and higher without stopping.

This happens because of expectations: Once it becomes clear the Federal Reserve doesn’t really care about inflation and won’t do much to contain it, businesses and consumers start to expect inflation and plan for it. Workers might demand more money because they know $1,000 today will be worth a lot more than $1,000 a year or even a month from now. Businesses will raise prices for the same reasons.

These dynamics themselves create inflation, in the form of higher wages and prices, which in turn reinforces people’s predictions of inflation in the future, leading to a toxic spiral.

As economists Richard Clarida (now the vice chair of the Fed), Jordi Galí, and Mark Gertler wrote in 2000, under Fed policy of the time, inflation was considered at risk of spiraling out of control “because individuals (correctly) anticipate that the Federal Reserve will accommodate a rise in expected inflation.”

The next turn in this story came with Volcker’s appointment. Volcker raised interest rates dramatically, mostly to signal that the Fed was, in fact, committed to crushing inflation. It would do whatever it took to crack down, up to and including raising interest rates so high that two recessions occurred, in 1980 and 1981-’82.

The policy followed by Volcker and his successor Alan Greenspan, according to Clarida, Galí, and Gertler, killed off the possibility of self-fulfilling cycles spurring inflation. The Volcker policy made it clear that “the Federal Reserve adjusts interest rates sufficiently to stabilize any changes in expected inflation.”

The (assumed) trade-off between unemployment and inflation

These days, economists reject the idea, held by Johnson and his advisers, that you could just increase inflation with little worry about setting off a spiral, and get lower unemployment as a result.

At the center of their thinking is a concept that came to dominate Fed philosophy in recent decades: the NAIRU. That’s the non-accelerating inflation rate of unemployment — or the jobless rate below which economists claim you’ll get the inflation of the 1960s and 1970s all over again.

How does this work? The Congressional Budget Office currently estimates the NAIRU at 4.5 percent for the third quarter of 2021. Under NAIRU-driven policy, the Fed shouldn’t let unemployment, currently at 5.9 percent, go below 4.5 percent, lest it tempt the inflation gods. And the way to do that is to jack up interest rates, like Volcker did.

One reason for concern among the inflation-worriers is that we no longer have a Fed with a NAIRU-driven policy — the Fed under Powell has removed references to NAIRU from its statement of strategy.

Worriers like Blanchard and Summers are also concerned that Biden might be doing what Johnson did, but with economic stimulus and other domestic spending instead of the Vietnam War; that he’s juicing the economy so much that unemployment will swiftly fall below the NAIRU and create an inflationary spiral that can only be ended through a painful economic contraction down the road.

There are two important caveats to the conventional story. One is that you can buy its basic premise — and still think that the actual NAIRU, at least right now, is very, very low, lower than the CBO estimate of 4.5 percent, lower even than the 3 percent rate that supposedly caused problems in the 1970s. That is, the economy can continue to expand at a rapid pace for a long time and push unemployment down to historic lows, all without triggering inflation problems.

Jón Steinsson, a professor at UC Berkeley who, with his co-author Emi Nakamura, has helped make macroeconomics much more empirically grounded, basically thinks that’s the case. He told me that he’s still firmly convinced that inflation expectations matter, and that the Federal Reserve’s credibility matters. But his research leads him to believe that NAIRU could be very low, and that we should be aiming for very low unemployment rates without having to worry about inflationary pressures.

“Whether you look at the 1980s expansion, the 1990s expansion, or the 2010s expansion,” Steinsson told me in a phone call, “the unemployment rate, if you just plot it, it just keeps falling. It keeps falling and falling and falling and it never levels out. Maybe at some point it would, but one view of that is that we’ve just never gotten to the point where we have true full employment.” Indeed, for two years before Covid-19, the US enjoyed unemployment at or below 4 percent without any inflationary problems.

Bettmann Archive/Getty Images

Federal Reserve Board Chair Arthur Burns testifies before Congress in 1972; Burns oversaw the first inflation spike of 1974-’75 and is frequently blamed for the even worse inflation of 1978-’80.

Another caveat to the conventional story is that some economists have suggested the increase in aggregate demand during the 1960s and 1970s that led to the Great Inflation was not due to Vietnam, but at least partially to an obscure rule called Regulation Q that capped interest rates on checking and savings accounts.

In 1965, Q’s cap (then 4 percent) fell below the Federal Reserve’s interest rate for the first time. That meant anyone with money in a checking or savings account was making less than the actual market interest rate — they were losing money.

Economists Itamar Drechsler, Alexi Savov, and Philipp Schnabl argue in a recent paper that this led to a massive outflow of deposits from the banking system. That both increased aggregate demand, by spurring people to spend rather than save their money, and contracted the economy, because fewer deposits meant banks had less money to loan out to businesses. Regulation Q was effectively repealed in 1978 and 1979, with the introduction of Money Market Certificates and Small Saver Certificates, which offered market-rate interest with no caps — and the Great Inflation started to wane soon thereafter.

There are reasons to doubt this story a bit (for one thing, the Great Inflation also occurred in a bunch of other countries that didn’t have Regulation Q), but it matches the timing of the rise and fall in inflation eerily well, and suggests that a repeat of that exact situation is unlikely — Joe Biden is not proposing bringing back Regulation Q.

What if inflation is not about the price of everything, but the prices of a few specific things?

But there’s another major weakness in the conventional story of the 1970s inflation — it doesn’t take some incredibly significant world events around that time very seriously. And if you take those into account, contemporary fears about a return to ’70s-style inflation start to wane.

The 1973 oil embargo, in which Saudi Arabia and allied Arab nations blocked oil exports to the US and some of its allies in retaliation for supporting Israel in the Yom Kippur War, is little more than a side note in the inflation expectations story. Some, like former Fed Chair Ben Bernanke in his earlier academic work with Gertler and Mark Watson, go so far as to argue the embargo mostly mattered because of the Fed’s response to it, which was to sharply raise interest rates (though not as much as Volcker would later on).

But that claim seems unrealistically dismissive of the effects of a brute fact: The price of oil nearly quadrupled between October 1973 and January 1974.

 Owen Franken/Corbis/Getty Images

A gas station in 1974 that, like many during the Arab oil embargo, resorted to rationing.

While the oil shock was the most famous supply shock of the period, it was hardly the only one. Commodities of all kinds, from oil to minerals to agricultural products like grain, saw prices boom in the 1970s. And in many cases, these booms were clearly related to supply-side issues, not to general price inflation driven by consumers with too much to spend. The price of grain, for instance, spiked in part because of a massive drought in the Soviet Union in 1972, which greatly reduced its food output, led it to purchase the US’s entire wheat reserves, and pushed up food prices worldwide.

Skanda Amarnath, executive director of the macroeconomic policy organization Employ America, explains that the baby boom in the US and Europe and the resulting higher population increased demand for these kinds of commodities and goods over the 1960s and ’70s, and supply struggled to catch up in the absence of more investment in capacity expansion.

“The response to these demographic-induced shortages was a breakneck pace of investment in everything from housing to oil wells,” Amarnath told me. “Oil, in particular, takes years of exploration and development to translate initial investment into expanded production capacity.” Eventually that investment would bear fruit and help end shortages — but while those shortages raged, the result could be inflation.

Another supply-side factor was the introduction and withdrawal of President Richard Nixon’s wage and price controls. In 1971, Nixon ended the convertibility of the dollar to gold, which removed a key part of the system that had been stabilizing exchange rates between the US and the rest of the world since World War II. To try to minimize the aftershocks, Nixon imposed mandatory limits on wages and prices from 1971 to 1974. The limits constrained prices a little, temporarily — until they were repealed, which contributed to the 1974 upward spiral of inflation.

Economist Alan Blinder has been arguing for a supply-centered explanation like this since at least 1979, and he and colleague Jeremy Rudd summarized the “supply-side” view well in a 2013 paper.

The Great Inflation, they note, was really two inflations: one between 1972 and 1974, which “can be attributed to three major supply shocks—rising food prices, rising energy prices, and the end of the Nixon wage-price controls program”; and another spike from 1978 to 1980, which reflected food supply limitations, energy prices, and rising mortgage rates. Mortgage interest payments were, until 1983, included in the most-used inflation measure, meaning that when the Fed responded to inflation by raising interest rates — and that policy change in turn caused mortgage rates to rise — this change all on its own increased measured inflation.
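To see how the mortgage-rate quirk works mechanically, here is a toy calculation. The index weights and price changes below are invented for illustration (they are not actual CPI figures), but they show how a rate hike that raises mortgage interest costs pushes up measured inflation all on its own when those costs sit inside the index.

# Toy price index with invented weights and price changes (not actual CPI data).
# If mortgage interest payments are a component of the index, a Fed rate hike
# that raises mortgage costs shows up directly as higher measured inflation.

weights = {"everything_else": 0.95, "mortgage_interest": 0.05}

# Price relatives: this year's price level divided by last year's, per component.
before_hike = {"everything_else": 1.06, "mortgage_interest": 1.06}
after_hike = {"everything_else": 1.06, "mortgage_interest": 1.40}  # mortgage costs jump 40%

def measured_inflation(relatives):
    index = sum(weights[k] * relatives[k] for k in weights)
    return round((index - 1) * 100, 1)

print(measured_inflation(before_hike))  # 6.0 percent before the rate hike
print(measured_inflation(after_hike))   # 7.7 percent after it, purely from mortgage costs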

 H. Abernathy/ClassicStock/Getty Images
A Nebraska wheat field in 1978; grain shortages helped drive the 1970s inflation.

A supply-side story for 1970s inflation has markedly different policy implications than the “Volcker shock” of high interest rates meant to shrink the economy. In the counterfactual, instead of shrinking demand and spending so as to meet the lower supply of the period, the government could have actively tried to increase the supply of those scarce goods, as economists like then-American Economic Association president and future Nobelist Lawrence Klein argued in 1978. That could have taken the form of attempts to boost crop yields, or encourage US domestic oil production.

We’ll never know if that would have worked, but it’s a compelling and — in my view — persuasive alternative to the story we’ve been told for decades.

What this revised story of the Great Inflation means for policy in 2021

In the context of 2021, this alternate story implies that Federal Reserve Chair Jerome Powell should not be considering slowing down the economy as a blunt tool to keep prices down. Instead, the federal government should be intervening in specific areas to keep specific types of prices that are rising rapidly from further accelerating.

As my colleagues Emily Stewart and Rani Molla have noted, the biggest price increases in recent months in “core” inflation (the measure that excludes gas and food) have come from new and used cars and air travel. Biden’s Council of Economic Advisers estimates that at least 60 percent of inflation in June was due to car prices alone, and a big chunk of the rest came from services like air travel increasing in price as everyone rushes back to travel post-pandemic.

Jeffrey Scott Dean/Bloomberg/Getty Images
Ford F-Series trucks, completed except for scarce semiconductors, are stockpiled at the Kentucky Speedway on July 16, 2021.

A huge part of the rise in car prices stems from a semiconductor shortage — implying that a better way to tackle inflation than the Fed raising interest rates might be an effort to improve the supply of semiconductors, including by boosting production in the US. Biden’s recent efforts to get Taiwan to boost production for US car companies are exactly the kind of intervention implied by this analysis.

The Fed itself seems to be thinking this way; Powell recently testified to Congress that “supply constraints have been restraining activity in some industries, most notably in the motor vehicle industry, where the worldwide shortage of semiconductors has sharply curtailed production so far this year.” Lael Brainard, an influential member of the Fed’s Board of Governors, has said the same.

“If you do think that this supply side story is convincing, then that does really change the way you want to think about this,” Steinsson told me. “Somebody’s going to build a new semiconductor factory at some point … that gives you a rationale for not using the blunt tool of raising interest rates for the whole economy.”

Yes, inflation is rising, there is a great deal of uncertainty, and the specter of the ’70s looms large. But given how much economic pain was visited on millions in the fight against inflation decades ago, it’s encouraging that today’s policymakers seem more willing to consider the path their predecessors did not take.

To be clear, “pop music” here means the sounds of Top 40 charts and TikTok dance challenges. It’s as much a genre — catchy, repetitious, youth-oriented — as it is a term for whatever’s grabbing the world’s attention, and dollars, at any given time. It’s unabashedly commercial, accessible, and aiming to please, which makes some ashamed to love it so much. Brazen, crass consumerism notwithstanding, these songs celebrating a lifestyle most of us will never have are still damn fun to listen to. But overall, pop is the sound of our cultural concerns, money (and spending it) being a huge one of them.

An early materialistic hit came in 1955 with Chuck Berry’s “Maybellene.” Though not about shopping or spending money, the song famously chronicles a drag race between a Cadillac Coupe DeVille and a V8 Ford. That decade, automobile production and sales hit new highs as more and more Americans became car owners. As Americans’ buying power and options developed, so did materialist themes in pop. Elvis’s dewy-eyed 1956 “Blue Suede Shoes” gave way to tongue-in-cheek zeitgeist anthems like Madonna’s 1984 “Material Girl.”

Meanwhile, major artists that didn’t specifically glorify consumerism were still caught in the thrill of making enough money to join a new aristocracy, even those praised for their supposed artistic purity. “Somebody said to me, ‘But the Beatles were anti-materialistic.’ That’s a huge myth,” Paul McCartney told journalist David Fricke in 1990. “John and I literally used to sit down and say, ‘Now let’s write a swimming pool.’ We said it out of innocence. Out of normal, fucking working-class glee that we were able to write a ‘swimming pool.’ For the first time in our lives, we could actually do something and earn money.” Notably, Elvis only acquired his iconic pair of blue suede shoes after the single about them became a hit.

More recently, as hip-hop has become the sound of pop, it has become the main site of anxiety around materialism in the subjects of songs. In 1997, when Sean Combs went by Puff Daddy, he released his debut album No Way Out, chock-full of tax-bracket-climbing anthems, including “It’s All About the Benjamins.” Since then, Combs has very deliberately made luxury his brand. Launching from his Bad Boy Entertainment label, he now runs Combs Enterprises, a portfolio of brands he has stakes in, including DeLeon Tequila, the cable music network Revolt, and his iconic streetwear brand Sean John. One of his most successful brands, Ciroc, helped push his personal fortune to $740 million in 2019. In pop and hip-hop, the vodka brand’s name is now shorthand for expensive indulgence, as in Future’s song “Fuck Up Some Commas” or Rick Ross’s “Diced Pineapples.” Along the way, Rihanna took that same model and perfected it. The singer and entrepreneur’s music career and persona came with a fan base that was a boon for her wildly successful makeup and lingerie brands, which are in many ways extensions of Rihanna merchandise. Moguls like Jay-Z and Beyoncé also parlayed the lavishness they sang and rapped about in their songs into lucrative business deals and brand partnerships that capitalize on their celebrity.

As Questlove pointed out in his 2014 critique of hip-hop for Vulture, songs of the genre were reaching comically new consumerist heights just as the industry was nurturing more and more inequality between priority and rising artists. “[Jay-Z] would never want to be in a club that would have you as a member. But this doesn’t offend his audiences. They love it,” he lamented after comparing the lyrics of the rapper’s song “Picasso Baby” to those of Run DMC’s “My Adidas,” which seem quaint in comparison. However, alongside the ascendance of consumerist songs celebrating out-of-reach lifestyles, this new iteration of our capitalist soundtrack has spent the past two decades hammering home the idea that consumption can be the same thing as empowerment.

Though she wasn’t the first to do it, one of pop music’s biggest peddlers of this myth so far has been Beyoncé, who on her 2006 duet “Upgrade U” with Jay-Z compared buying her husband luxury goods to the civil rights work of Martin Luther King Jr. As a solo artist, Beyoncé ran with the themes of female empowerment and independence that Destiny’s Child made their bread and butter in songs like “Bills, Bills, Bills” and “Independent Women.”

But songs about taking pride in being able to support yourself hit different from an up-and-comer than from one of the biggest stars in the world. Rising from being concerned about her telephone bill to rockin’ chinchilla coats, Beyoncé is one of many artists who flaunt their wealth in their music as a symbol of how far they’ve come. Consumption and overconsumption — as in her 2011 #feminism song “Run the World (Girls),” shouting out the girls “that’s in the club rocking the latest/Who will buy it for themselves and get more money later” — are a main pillar of her self-empowerment brand. It’s built not only through her music, but through the documentaries and interviews that she greenlights, participates in, and shapes.

There’s an unfortunate truth in the “consumption as empowerment” narrative she and many other artists push. In the world as it is now, money really can buy the conditions for happiness and self-empowerment. The most fulfilling things in life — supportive, loving relationships; a stable, comfortable home; access to everything you need to be healthy and pursue your passions — are infinitely easier to get and keep if you have the money to fund them.

The other side of that coin is an insidious but common political standpoint: that everyone who is rich got there because they earned it. Songs like Drake’s “Worst Behavior,” Kanye West’s “Good Life,” and Gwen Stefani’s “Luxurious” reminisce about the artists’ rises to superstardom and the monetary flexes that came with it all. Meanwhile, Fergie’s “Glamorous” and Jennifer Lopez’s “Jenny from the Block” do the same, with added humblebrags about managing to not be an asshole. At the heart is the idea that it is not just hard work or talent that rockets an artist into the 1 percent; it is both at once. But how many of us know a hardworking singer, songwriter, or musician with world-class talent who just never got that mysterious jolt of luck and resources it takes to get to the top? Pop music’s constant conflation of consumption with empowerment, and of the music industry with a meritocracy, is a uniquely American kind of propaganda that keeps our capitalist hellscape burning.

Meanwhile, the nation’s leaders must feign concern about all this consumption under the guise of Christian morality. “In a nation that was proud of hard work, strong families, close-knit communities, and our faith in God, too many of us now tend to worship self-indulgence and consumption,” President Jimmy Carter lamented in his 1979 “Crisis of Confidence” speech. He went on to offer the reassurance that the American people were learning that “piling up material goods cannot fill the emptiness of lives which have no confidence or purpose.” It was immediately apparent just how wrong he was. The decade that followed in American culture became known for glamorizing decadence, materialism, and greed.

To be fair, some of the world’s biggest pop artists have pushed back on the materialism of their industry at the height of their fame. But any pushback on the system from a successful pop artist is fraught with the contradiction of the call coming from inside the house. For example, though Tracy Chapman included the anti-capitalist song “Mountains o’ Things” on her 1988 self-titled debut album, most people who heard it did so by buying a product of Elektra Records, now owned by Warner Music Group, a corporation in the business of selling mountains upon mountains of things in the forms of digital music, physical albums, and associated merchandise.

In 1991, Sinead O’Connor withdrew from the Grammys in protest of the music industry’s materialism. “There is an emphasis (in pop music) on materialism and it’s not right to give people the message that they can fill their emptiness with material things,” she told the Los Angeles Times the following year. “They’ve got to try to fill it with truth, which we’ve got to try to show them by being ourselves rather than trying to cover up with loads of makeup or a hairdo or loads of diamond rings.” The sentiment was somewhat undercut by its appearance in an interview advertising her upcoming album.

Even Lorde’s 2013 megahit “Royals,” a lament on the materialism of pop music (with lyrics like “Cristal, Maybach, diamonds on your timepiece” that seemed to focus entirely on hip-hop and Black culture), could only be anti-consumerist up to a point. The single propelled the singer to stardom and helped her debut album Pure Heroine reach 1 million in sales just a few months after release.

There’s no separating pop music, literally music that has proven popular, from consumption and sales. And yet many reviews, and both formal and informal criticism, hold it up to a utopian ideal, as if pop music can change society and not the other way around. The music landscape can seem bleak until you turn your attention to those operating outside and on the fringes of the major label and advertising machines. Operating in their makeshift studios, self-mixing and publishing their work, there are artists out there to turn to for non-cynical celebrations of community, mutual aid, support, and commiseration. But when you’re trying to temporarily mute the horror of modern life by buying a bathing suit with the last $30 in your bank account? That’s the perfect time to turn on “7 Rings” or whatever dystopian consumerist bop comes next. Because if there’s anything that’s relatable in consumerist society, it’s wanting, coveting, and loving stuff. Considering pop music’s inextricable relationship to the market, that’s not going away any time soon.

The first reason to be concerned about decisions like Brnovich is that Republican partisans can use race as a proxy to identify communities with large numbers of Democratic voters. In 2020, according to the Pew Research Center, 92 percent of non-Hispanic Black voters supported Democrat Joe Biden over Republican Donald Trump — and that’s after Trump slightly improved his performance among African Americans compared to 2016.

That means that state lawmakers who wish to prevent Democrats from voting can do so through policies that make it harder for Black voters (and, to a lesser extent, most other nonwhite voters) to cast a ballot. And Republican lawmakers haven’t been shy about doing so. As a federal appeals court wrote in 2016 about a North Carolina law that included many provisions making it harder to vote, “the new provisions target African Americans with almost surgical precision.”

An even more stark example: Georgia recently enacted a law that effectively enables the state Republican Party to disqualify voters and shut down polling precincts. If the state GOP wields this law to close down most of the polling places in the highly Democratic, majority-Black city of Atlanta, it’s unclear that a Voting Rights Act that’s been gravely wounded by three Supreme Court decisions remains vibrant enough to block them.

The second reason to be concerned about decisions like Brnovich is that the Supreme Court’s attacks on the Voting Rights Act are not isolated; they are part of a greater web of decisions making it much harder for voting rights plaintiffs to prevail in court.

These cases include decisions like Purcell v. Gonzalez (2006), which announced that judges should be very reluctant to block unlawful state voting rules close to an election; Crawford v. Marion County Election Board (2008), which permitted states to enact voting restrictions that target largely imaginary problems; and Rucho v. Common Cause (2019), which forbade federal courts from hearing partisan gerrymandering lawsuits because the Court’s GOP-appointed majority deemed such cases too “difficult to adjudicate.”

Finally, decisions like Shelby County and Brnovich are troubling because the Court’s reasoning in those opinions appears completely divorced from the actual text of the Constitution and from the text of federal laws such as the Voting Rights Act.

Shelby County eliminated the Voting Rights Act’s requirement that states with a history of racist election practices “preclear” any new voting rules with officials in Washington, DC. It was rooted in what Chief Justice John Roberts described as “the principle that all States enjoy equal sovereignty,” a principle that is mentioned nowhere in the text of the Constitution.

In Brnovich, the Court upheld two Arizona laws: one that throws out ballots cast in the wrong precinct, and another that limits who can deliver an absentee ballot to a polling place. Alito purports to take “a fresh look at the statutory text” in this case. But he imposes new limits on the Voting Rights Act — such as a strong presumption that voting restrictions that were in place in 1982 are lawful, or a similar presumption favoring state laws purporting to prevent voter fraud — that have no basis whatsoever in the law’s text.

As Kagan writes in dissent, Brnovich “mostly inhabits a law-free zone.”

That doesn’t necessarily mean that this Supreme Court will allow any restriction on voting to stand — under the most optimistic reading of cases like Brnovich, the Court might still intervene if Georgia tries to close down most of the polling places in Atlanta — but it does mean that voting rights lawyers and their clients can no longer expect to win their cases simply because Congress passed a law protecting their right to vote.

The rules in American elections are now what Chief Justice John Roberts and his five even more conservative colleagues say that they are — not what the Constitution or any act of Congress has to say about voting rights.

How Republicans learned to stop worrying and oppose the Voting Rights Act

In retrospect, it was probably inevitable that the conservative backlash against voting rights would flourish in the one unelected branch of the federal government.

When Congress first enacted the Voting Rights Act in 1965, its “preclearance” provision — the provision that was deactivated in Shelby County — was set to expire in five years. Congress extended preclearance four times, in 1970, in 1975, in 1982, and in 2006, and each time the bill reauthorizing the fully operational Voting Rights Act was signed by a Republican president.

At least some of these GOP presidents made aborted efforts to weaken the law — President Richard Nixon, for example, proposed allowing preclearance to expire in 1970, but he backed down in the face of intense opposition from civil rights organizations.

Similarly, a significant faction within the Reagan administration — a faction that included future Chief Justice Roberts — pressed President Ronald Reagan to veto a 1982 bill expanding the Voting Rights Act. In 1980, Reagan had denounced the Voting Rights Act as “humiliating to the South,” so this conservative faction appeared to have a sympathizer in the Oval Office.

But Republicans in Congress and in the White House ultimately concluded that standing athwart the Voting Rights Act was too politically toxic. As then-Rep. Trent Lott (R-MS) warned Reagan in 1981, after an expansive voting rights renewal had already passed the House, “anyone who seeks to change” that bill “will risk being branded as racist.”

 Bettmann/Getty Images

President Ronald Reagan signs an extension to the 1965 Voting Rights Act on June 29, 1982.

By the time the Voting Rights Act was up for reauthorization again in 2006, its conservative opponents had largely given up on convincing elected officials to let much of the law die. The bill passed both houses by overwhelming margins and was signed by President George W. Bush.

“Republicans don’t want to be branded as hostile to minorities, especially just months from an election,” anti-civil rights activist Edward Blum complained in a bitter 2006 article published by the National Review. Blum would go on to be the driving force behind Shelby County and several other lawsuits seeking to diminish the rights of people of color.

Yet, as it turned out, Blum understood something that the conservative opponents of voting rights who lobbied elected officials in vain did not.

The premise of an independent judiciary is that judges must be insulated from political pressure so that they will apply the law without favor. This is why federal judges serve for life, and why they are guaranteed to keep their salary so long as they remain in office. But these very same protections also allow judges who support an unpopular policy agenda to implement it without fear of losing their job.

By the time Shelby County reached the Supreme Court, the Court was dominated by conservatives who, in Justice Antonin Scalia’s words, saw the Voting Rights Act as a “perpetuation of racial entitlement.”

“Whenever a society adopts racial entitlements,” Scalia complained during the Shelby County oral arguments, “it is very difficult to get out of them through the normal political processes.” He then channeled the resentments of men like Blum.

“I don’t think there is anything to be gained by any Senator to vote against continuation of this act,” Scalia continued. “And I am fairly confident it will be reenacted in perpetuity unless — unless a court can say it does not comport with the Constitution.”

And so the Court said just that.

The Supreme Court’s treatment of the Voting Rights Act has no apparent basis in the Constitution or the act itself

One of the many frustrating things about the Shelby County opinion is that it doesn’t even attempt to root its holding in the text of the Constitution.

The question of what constraints the Constitution’s text places on judges, especially when that text is ambiguous, is one of the most hotly contested questions in American law. But even when the Court hands down constitutional decisions that are broadly criticized, it typically makes at least some effort to ground its holding in a specific provision of the Constitution.

The Court’s anti-worker decision in Lochner v. New York (1905) and its abortion rights decision in Roe v. Wade (1973), for example, were both rooted in the 14th Amendment’s promise that no one shall be denied “liberty” without “due process of law.”

Indeed, even the Court’s decision in Griswold v. Connecticut (1965), one of the most widely mocked majority opinions of the last century, at least tried to ground its holding in specific constitutional provisions. Griswold established married couples’ right to use contraceptives, and announced a “right to privacy” that formed the basis for subsequent liberal victories on abortion and sexuality. But the Court swiftly abandoned Griswold’s legal reasoning, which was rooted in the idea that the First, Third, Fourth, Fifth, and Ninth Amendments “have penumbras, formed by emanations from those guarantees that help give them life and substance.”

And yet, compared to Roberts’s majority opinion in Shelby County, Griswold seems like a paean to textualism and judicial restraint. Shelby County never identifies which provision of the Constitution embodies the “‘fundamental principle of equal sovereignty’ among the States” that the Court’s decision rests upon.

Although Shelby County does make a vague statement that the 15th Amendment “is not designed to punish for the past; its purpose is to ensure a better future,” this principle appears nowhere in the text of that amendment. And, in any event, the concept of “equal sovereignty” does not flow from Roberts’s future-driven interpretation of that amendment. It can’t even be found in the 15th Amendment’s penumbras and emanations.

We don’t have to imagine what Shelby County might have said if the Court had attempted to ground its decision in constitutional text — and in nearly 200 years of precedent governing how courts should read that text. Chief Justice Earl Warren wrote such an opinion for the Court in South Carolina v. Katzenbach (1966), the Court’s original decision upholding the Voting Rights Act, which relies heavily on both the text of the 15th Amendment and a long line of cases holding that Congress’s power to legislate should be construed broadly.

The 15th Amendment has two provisions. The first prohibits the government from denying or abridging the right to vote “on account of race, color, or previous condition of servitude,” while the second clause declares that “Congress shall have power to enforce this article by appropriate legislation.” Thus, as Warren explained, Congress has broad authority to enact laws preventing race discrimination in voting.

Warren quoted a line of cases, stretching back to the early days of the republic, which established that Congress’s power to regulate is quite broad indeed. When the Constitution gives Congress the power to legislate on a particular subject matter, the Court established in McCulloch v. Maryland (1819), it may use “all means which are appropriate” and that are “plainly adapted” to a legitimate end, so long as Congress does not violate some other provision of the Constitution in the process.

Taken together, decisions like McCulloch and the 15th Amendment’s text yield a clear result: Congress, not the Court, gets to decide how it wants to fight race discrimination in voting. Congress, not a handful of Republican-appointed judges, gets to decide whether preclearance should exist, and which states should be subject to it.

 Stefani Reynolds/Bloomberg/Getty Images
Voting rights activists rally on the National Mall in Washington, DC, on June 26, 2021, in support of DC statehood.

Indeed, Congress would have the power to impose a preclearance regime on most state election rules even if the 15th Amendment didn’t exist. Although the Constitution’s “Elections Clause” permits states to determine the “times, places and manner of holding elections for Senators and Representatives,” it also permits Congress to “make or alter such regulations, except as to the places of choosing Senators.” Thus, the federal government doesn’t just have nearly complete authority to regulate congressional elections; it explicitly has the power to displace state laws.

And yet, as Franita Tolson, a law professor at the University of Southern California and a leading expert on the federal government’s power to regulate elections, explained in recent testimony before Congress, Shelby County “ignored that the Elections Clause stands as an additional source of authority” which “can justify federal anti-discrimination and voting rights legislation.”

The impact of Shelby County was fairly swift. In 2013, for example, Texas enacted racially gerrymandered legislative maps, even though a federal court had rejected many key elements of these maps under the Voting Rights Act’s preclearance provisions. Yet, with preclearance dead, the Supreme Court upheld nearly all of Texas’s gerrymandered maps in Abbott v. Perez (2018).

Similarly, if preclearance were still in effect, it is unlikely that many of the controversial provisions of Georgia’s recently enacted voter suppression law would survive. And certainly no federal official acting in good faith would permit Georgia to simply start closing down polling places in Black neighborhoods.

Alito’s opinion in Brnovich pays no more heed to the text of the Voting Rights Act than Roberts’s opinion in Shelby County paid to the Constitution.

That case involved two interlocking provisions of the Voting Rights Act. One prohibits any law that “results in a denial or abridgement of the right of any citizen of the United States to vote on account of race or color.” The other provides that the Voting Rights Act is violated if “based on the totality of circumstances, it is shown that the political processes leading to nomination or election in the State or political subdivision are not equally open to participation by” voters of color, or if such voters “have less opportunity than other members of the electorate to participate in the political process and to elect representatives of their choice.”

That’s a lot of thick legal language, but one searches it in vain for anything suggesting, as Alito wrote in Brnovich, that election practices that were common in 1982 are presumptively legal. Or, as he also suggested in Brnovich, that state election rules are presumptively lawful so long as they supposedly combat voter fraud.

As Rick Hasen, a law professor and election law expert at the University of California Irvine, writes, Brnovich ignores “the text of the statute, its comparative focus on lessened opportunity for minority voters, and the history that showed Congress intended to alter the status quo and give new protections to minority voters.” Alito’s opinion in Brnovich bears the same resemblance to the text of the Voting Rights Act that Taco Bell does to Mexico.

Just as significantly, Brnovich raises serious doubts about whether this Supreme Court would strike down any state election law that discriminates on the basis of race.

The case for (very limited) optimism

One thing that surprised me after Brnovich was handed down is that my initial assessment of the opinion was slightly more optimistic than the view among many voting rights scholars, including Tolson and Hasen.

I wrote that the Supreme Court left the Voting Rights Act alive in Brnovich — if only “barely.” Hasen, by contrast, accused Alito of essentially offering “a new and impossible test for plaintiffs to meet” if they allege that they were denied the right to vote. Tolson told the legal podcast Strict Scrutiny that “it’s very difficult to determine what voting restrictions would violate” the standard laid out in Brnovich.

So let me lay out the case for why Brnovich — and the array of Roberts Court decisions limiting voting rights that precede it — may not produce an apocalyptic crisis for American democracy. This argument has three prongs.

The first is that, while Alito’s opinion in Brnovich imposes a long list of extratextual limits on the Voting Rights Act, it doesn’t go quite as far as the Republican Party asked the Court to go. The Arizona Republican Party’s brief in Brnovich argued that “race-neutral regulations of the where, when, and how of voting do not” violate the Act — a proposal that, as Justice Kagan pointed out at oral argument, would allow a state to require all voters to cast their ballot at a country club.

Meanwhile, Arizona Attorney General Mark Brnovich (R) suggested that voting restrictions that have a disproportionate impact on minority voters should be upheld, so long as the state didn’t cause voters of color to behave differently than white voters. Thus, under Brnovich’s standard, a state could potentially limit the franchise to country music fans — because the state didn’t cause white people to be more likely to listen to country music than voters of color.

Republicans, in other words, gave the Supreme Court two different legal standards that it could have applied in Brnovich if the Court wanted to effectively neutralize the Voting Rights Act altogether. The fact that the Court rejected these proposed standards — in an opinion that was otherwise completely shameless about its disregard for what the law actually says — suggests that some key members of the Court may have balked at the GOP’s request to shut down the Voting Rights Act altogether.

The second reason for optimism is that, while Republican state lawmakers have enacted a bevy of voting restrictions in the wake of decisions like Shelby County, most of those restrictions have not had as drastic an impact on voting as many advocates feared.

Voter ID laws, for example, which require voters to show photo ID before they can cast a ballot, are a common voter restriction favored by many Republicans. Yet, while initial research on voter ID suggested that these laws may disproportionately prevent left-leaning demographics from casting a ballot, more recent research suggests that they have no impact whatsoever. They appear to neither diminish voter turnout (as Democrats feared), nor have any real impact on voter fraud (which Republicans often highlight to justify such laws, even though voter fraud is exceedingly rare).

 Andrew Lichtenstein/Corbis/Getty Images
Tens of thousands of Trump supporters rally to declare the presidential election results fraudulent on November 14, 2020, in Washington, DC.

 Alex Wong/Getty Images

Voting rights activists link arms with Rep. Joyce Beatty (D-OH, center) during a protest at the Capitol on July 15, 2021, in response to a wave of voting restrictions in Republican states.

Similarly, a recent paper by political scientists Mayya Komisarchik and Ariel White finds that Shelby County “did not reduce aggregate Black or Hispanic voter registration or turnout,” and that turnout among these voters may have even slightly increased since the Court’s decision in 2013 — an unexpected finding that the authors think may be attributable, at least in part, to get-out-the-vote efforts “explicitly targeted to counter potential voter suppression in the wake of the decision.”

I want to be cautious about being too optimistic here. As the Court’s decision in Perez suggests, even if eliminating preclearance did not diminish “voter registration or turnout,” it has made it easier for states to enact racial gerrymanders. And even if Democrats and voting rights advocates have thus far succeeded in countering Shelby County through countermobilization efforts, it’s unclear if those efforts will remain successful forever.

Shelby County is also less than a decade old, so it remains to be seen what impact more innovative voter suppression laws — such as the one recently enacted in Georgia — will have on turnout. But that brings us to the third reason to be cautiously optimistic.

As Nicholas Stephanopoulos, a Harvard election law professor, wrote shortly after Brnovich came down, that decision does not preclude challenges to “novel or unusual voting restrictions” because such restrictions “weren’t prevalent in 1982.” The more creative Republican lawmakers get in their efforts to restrict the vote, the more likely it is that the courts will balk.

Two unanswered questions

The biggest threat facing American democracy is that state lawmakers may go beyond restrictions, such as voter ID, which make it harder for some voters to cast a ballot — and actually impose election rules that make it impossible for Democrats to win.

Think of former President Donald Trump’s failed attempts to pressure judges, state officials, and Congress into tossing out President Joe Biden’s victory in the 2020 election.

Last year, the Supreme Court literally did the least that it could possibly do to preserve democracy in the United States, by turning aside frivolous lawsuits brought by Republicans seeking to overturn Biden’s victory. But future efforts to rig elections are likely to be more subtle — and the lawyers who defend those efforts are likely to be more competent than the band of misfits Trump assembled to challenge the 2020 election.

We don’t yet know how the Court will approach those efforts.

Consider, for example, Georgia’s new law. The most troubling provision of that law permits Republican officials to seize control of local election boards that have the power to close down polling locations and disqualify voters. This is a novel form of voter suppression — it’s unlikely that many states permitted partisan officials to simply toss out Democratic ballots in 1982 — so the Court’s decision in Brnovich should not prevent courts from intervening if Georgia Republicans go that far.

But here’s the rub: imagine that Georgia Republicans start shutting down polling precincts in the largely Democratic, majority Black city of Atlanta shortly before the 2022 election — or imagine that, say, Arizona passes a new law one month before the election that shuts down half the precincts in Democratic neighborhoods.

The Court’s decision in Purcell held that judges should be reluctant to intervene in election-related disputes as Election Day draws close, because such decisions “can themselves result in voter confusion and consequent incentive to remain away from the polls.” Yet, more recent decisions have treated Purcell less as a practical warning that judges should avoid decisions that might confuse voters, and more like an inexorable rule that late-breaking voting rights decisions are not allowed.

The danger, in other words, is that if a state imposes last-minute voting restrictions that seek to rig an election, the Supreme Court may forbid the federal judiciary from doing anything about it.

Another unanswered question is how far this Court is willing to go in giving Republicans an unfair advantage during the next legislative redistricting cycle, which is expected to begin this fall.

In a long line of cases stretching back more than a century, the Supreme Court has repeatedly rejected something known as the “independent state legislature doctrine,” which could potentially allow state legislatures to pass election laws that can neither be vetoed by a state governor nor reviewed by the state’s courts. But four members of the Court recently endorsed this doctrine, and newly confirmed Justice Amy Coney Barrett’s views on the doctrine are unknown.

As Justice Neil Gorsuch summarized this doctrine in a 2020 opinion, “the Constitution provides that state legislatures — not federal judges, not state judges, not state governors, not other state officials — bear primary responsibility for setting election rules,” at least for federal elections.

In its most extreme form, Gorsuch’s approach could forbid Democratic governors from vetoing congressional gerrymanders passed by Republican legislatures. It could forbid states from using nonpartisan commissions to draw congressional maps. And it could even prevent state supreme courts from enforcing state constitutional safeguards against gerrymandering.

The biggest uncertainty surrounding the Court’s voting rights decisions, in other words, is whether the Court will enable efforts to lock Republicans into power no matter what voters do to elect their candidates of choice, or whether the Court’s majority will, at some point, tell their fellow Republicans in state legislatures that they’ve gone too far.

The answers to these questions, moreover, won’t be found anywhere in the Constitution, or in any law enacted by Congress. The Roberts Court’s voting rights cases bear far more resemblance to the old English common law, a web of entirely judge-created legal rules governing areas such as contracting and property rights, than they do to the modern, more democratic model in which federal judges are supposed to root their decisions in legal texts. The future of democracy in the United States will be decided by six Republican-appointed justices’ arbitrary whims.

And, if a majority of the justices do support a wholesale attack on liberal democracy, their actions will hardly be unprecedented.

Nearly a century before President Lyndon Johnson signed the Voting Rights Act, Congress and state legislatures passed a different kind of legislation that was supposed to guarantee the franchise to people of color.

It’s called the 15th Amendment, with its command that “the right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.”

The pre-Voting Rights Act United States did not deny voting rights to millions of African Americans because we lacked a legal guarantee protecting the right to vote. We did so because powerful public officials — including judges — decided that they did not care what the Constitution had to say about voting rights.

We’re about to find out whether the Supreme Court is going to repeat that history.

 Frederic J. Brown/AFP/Getty Images
Activists rally in Los Angeles on July 7, 2021, calling on Congress and Sen. Dianne Feinstein (D-CA) to remove the Senate filibuster and pass the For the People Act to expand voting rights.

From The Hindu: Sports

From The Hindu: National News

From BBC: Europe

From Ars Technica

From Jokes Subreddit