The BRICS and the Almighty Dollar

When the BRICS (Brazil, Russia, India, China, and South Africa) summit was held last month in South Africa, it highlighted both the group’s main economic strengths and the divergent interests that make it difficult for its members to leverage those strengths.  Whether those differences can be resolved will have a major impact on the U.S. in general, and on the dominance of the dollar in particular.

Nearly two dozen countries formally applied to join the group. The bloc invited top oil exporter Saudi Arabia, along with Iran, Egypt, Argentina, Ethiopia, and the United Arab Emirates, to join in an ambitious push to expand their global influence as a viable counterweight to the West.  This is certainly the goal of Beijing and Moscow.

Developing countries are increasingly the biggest and most dynamic parts of the world economy.  This has resulted both in the shift of a vast amount of know-how from the West to the rest and in the development of new know-how in the rest—not just in China but also in India.

The new BRICS members bring together several of the largest energy producers with the developing world’s biggest consumers, potentially giving the bloc outsized economic clout.  Most of the world’s energy trade takes place in dollars, but the expansion could enhance the group’s ability to push more trade to alternative currencies.

This is a win for China and Russia, who would very much like to undermine the dominance of the US dollar. This would be especially helpful to Russia as its economy struggles with sanctions imposed after its invasion of Ukraine last year.  China is looking to build a broader coalition of developing countries to extend Beijing’s influence and reinforce its efforts to compete with the US on the global stage.

Valery Giscard d’Estaing, then France’s finance minister and later its president, famously described the dollar’s role as the world’s reserve currency as an “exorbitant privilege.” Most Americans don’t think about the value of the dollar.  But for the rest of the world, its value on currency exchanges is a big deal.

U.S. monetary policy is closely watched around the world because interest rate hikes by the Federal Reserve increase the dollar’s value and make loans denominated in dollars more expensive to repay in local currencies. This is certainly an advantage for the U.S.

But the dollar’s unique position is under threat on several fronts and will likely experience a stress test in the future. The most immediate and unnecessary threat would stem from the self-inflicted wound of the U.S. defaulting on its debt.

One of the bedtime stories D.C. politicians tell themselves is that the dollar is unassailable. If history teaches anything, it is that no currency’s dominance is permanent. Facing that reality requires some honesty and truth telling, but truth tellers are an endangered species among the political elite.

A growing number of countries, notably China and Russia, resent the US’s weaponization of the dollar on global markets.  Their de-dollarization efforts bear watching. Another threat arises from technology, as central banks around the world work to develop their own digital currencies.

Though the bloc is home to about 40 percent of the world’s population and a quarter of global GDP, its ambitions of becoming a global political and economic player have long been thwarted by internal divisions and the lack of a coherent vision.

The BRICS countries also have economies that are vastly different in scale and governments that often seem to have few common foreign policy goals, which complicates their decision-making.  China’s economy, for example, is more than 40 times larger than South Africa’s.

Russia, isolated by the United States and Europe over its invasion of Ukraine, is keen to show Western powers it still has friends. Brazil and India, in contrast, have both forged closer ties with the West.  Given these differences it is unclear how the group will be able to act in unison and enhance their clout on the global stage.

Can Machines Think?

In 1950, Alan Turing, the theoretical mathematician responsible for breaking the Nazi Enigma code during World War II and considered the father of modern computer science and artificial intelligence (AI), posed a fundamental question: “Can machines think?”

Today we are on the verge of answering Turing’s question with the creation of AI systems that imitate human cognitive abilities, interact with humans naturally, and even appear capable of human-like thinking.  These developments have sparked a global discussion about the need for comprehensive and coordinated global AI regulation.

Implementation would be a tall order.  Even if regulations could keep up with the pace of technological change, passing a framework acceptable to countries that would view it through the lens of self-interest would be a daunting task.

Turing was just 41 when he died from poisoning in 1954, a death that was deemed a suicide. For decades, his stature as a giant of mathematics was largely unknown, thanks to the secrecy surrounding his computer research and the social taboos about his homosexuality.  His story became more widely known after the release of the 2014 movie, “The Imitation Game.”

Alan Turing played a foundational role in the conceptual development of machine learning. For example, one of his key contributions is the Turing Test he proposed in his seminal 1950 paper, “Computing Machinery and Intelligence.”

The Turing Test is a deceptively simple method of determining whether a machine can demonstrate human intelligence.  If a machine can converse with a human without the human consistently being able to tell that they are conversing with a machine, the machine is said to have demonstrated human intelligence.

Critics of the Turing Test argue that a machine could pass it by convincingly imitating conversation without actually thinking or having a mind of its own. While not everyone accepts the test’s validity, the concept remains foundational in artificial intelligence discussions and research.

AI is pretty much just what it sounds like: the simulation of human intelligence by machines, which are made to perform tasks by mimicking human cognition. The personal interactions that individuals have with voice assistants such as Alexa or Siri on their smartphones are prime examples of how AI is being integrated into people’s lives.

Generative AI has made a loud entrance. It is a form of machine learning that allows computers to generate all sorts of content. Recently, ChatGPT and other content-creating tools have garnered a whole lot of attention.

Given the rapid advances in AI technology and its potential impact on almost every aspect of society, the future of global AI governance has become a topic of debate and speculation.  Although there is a growing consensus around the need for proactive AI regulation, the optimal path forward remains unclear.

What is the right approach to regulating AI?  A market-driven approach based on self-regulation could drive innovation. However, the absence of a comprehensive AI governance framework might spark a race among commercial and national superpowers to build the most powerful AI system. This winner-take-all approach could lead to a concentration of power and to geopolitical unrest.

Nations will assess any international agreements to regulate AI based on their national interests. If, for instance, the Chinese Communist Party believed global AI regulation would undermine its economic and military competitive edge, it would simply not comply, just as it has flouted international agreements in the past.

For example, China ratified the Paris Global Climate Agreement in 2016 and pledged to peak its carbon dioxide emissions around 2030. Yet it remains the world’s largest emitter of greenhouse gases. Coal continues to play a dominant role in China’s energy mix and emissions have continued to grow.

It would be wise to be realistic about the development and implementation of global AI regulations.  Technology usually does not advance in a linear fashion. Disruptions will occur with little to no foresight. Even if a regulatory framework can keep pace with technological advancement, countries will be hesitant to adopt regulations that undermine their technological advancement, economic competitiveness, and national security.

Is 2% the Right Inflation Target?

People the world over have been facing a poisonous new economic reality, as inflation has emerged from multi-decade hibernation.  And many of the people dealing with it are too young to remember when inflation was last a serious issue.  It is economically damaging, socially corrosive, and very hard to bring down.

Both the U.S. Federal Reserve (Fed) and the European Central Bank appear dead set on getting inflation back to their 2 percent targets. Why did these and other central banks, such as the Bank of Canada, Sweden’s Riksbank, and the Bank of England, gravitate to this 2 percent figure?

In January 2012, a thousand years ago in internet time, the Fed, under Chairman Ben Bernanke, formally adopted an explicit inflation target of 2 percent. This marked the first time the Fed ever officially established a specific numerical inflation target. The 2 percent target was seen as a way to provide clarity and enhance the effectiveness of monetary policy.

Bernanke’s successor Janet Yellen and current chair Jerome Powell maintained the 2 percent inflation target. While Powell has a laser focus on the 2 percent target, the Fed has recently moved to a more flexible 2 percent average over time. This means the Fed would tolerate some periods of inflation above 2 percent to offset periods when inflation was below that level.
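The arithmetic of that flexible averaging is straightforward. The rates below are hypothetical, chosen only to illustrate how overshoots above 2 percent can offset earlier shortfalls below it:

```python
# Toy illustration (not the Fed's actual formula): under flexible average
# inflation targeting, years below 2% can be offset by years above it,
# so the multi-year average lands near the 2% target.

def average_inflation(annual_rates):
    """Simple arithmetic mean of annual inflation rates, in percent."""
    return sum(annual_rates) / len(annual_rates)

# Three years of 1.5% inflation followed by two years of 2.75%:
rates = [1.5, 1.5, 1.5, 2.75, 2.75]
print(average_inflation(rates))  # → 2.0
```

Under a strict 2 percent ceiling, the two 2.75 percent years would demand immediate tightening; under the averaging approach, they merely make up lost ground.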

The 2 percent target was not established based on any specific formula or fixed economic rule. Despite its widespread adoption by central banks, there is little empirical evidence to suggest that 2 percent is the platonic ideal for addressing the Fed’s dual mandate of price stability and maximum employment.

This inflation target is an arbitrary number that originated in New Zealand. Surprisingly, it came not from any academic study, but rather from an offhand comment during a television interview.

During the late 1980s, New Zealand was going through a period of high inflation and an inability to achieve stable economic growth – the financial equivalent of a bloody nose.  In 1988, inflation had just come down from a high of 15 percent to around 10 percent. New Zealand’s finance minister, Roger Douglas, went on TV to talk about the government’s approach to monetary policy.

He was pressed during the interview about whether the government was satisfied with the new inflation rate.  Douglas replied that he was not, saying that he ideally wanted inflation between zero and 2 percent.  This involved targeting inflation, a method that had kicked around in economic literature for years but had not been implemented anywhere.

At the time there was no set target for inflation in New Zealand; Douglas’ remark was completely off the cuff. But the inflation target caught the attention of economists around the world and went viral, becoming a kind of orthodoxy.  The approach gained recognition and as noted, was subsequently adopted by many other central banks, making inflation targeting a widely used monetary policy strategy – a classic example of how ideas spread within the small priesthood of central bankers.

The hard truth is that many economic luminaries have tried to come up with what is thought to be the optimum inflation rate, but with little success.

All things considered, the 2 percent target came to be seen as a kind of sweet spot for inflation, despite the lack of serious intellectual groundwork. Simply stated, there is nothing magical about 2 percent.  It is low enough that the public doesn’t feel the need to think about inflation, but not so low as to stifle economic growth.  That, more or less, is the whole story.

Bankers Once Went to Prison in the U.S.

Once upon a time in America, bank executives went to prison for white-collar crimes. During the Savings and Loan (S&L) debacle, between 1985 and 1995, there were over 1,000 felony convictions in cases designated as major by the U.S. Department of Justice.

In contrast, no senior bank executives faced prosecution for the widespread mortgage fraud that contributed to the 2008 financial apocalypse that precipitated the Great Recession. Not a single senior banker who had a hand in causing the financial crisis went to prison.  Rather than reining in Wall Street, President Obama and Congress restored the status quo ante, even when it meant ignoring a staggering white-collar crime spree.

Indeed, the Department of Justice did not prosecute a single major bank executive in the largest man-made economic catastrophe since the Great Depression. They went after the small fish, not the mortgage executives who created the toxic products or the senior bank executives who peddled them.

The S&L crisis was arguably the most catastrophic collapse of the banking industry since the Great Depression.  S&Ls were banks that for well over a century had specialized in making home mortgage loans.  Across the United States, more than 1,000 S&Ls failed – nearly a third of the 3,234 savings and loan associations that existed in 1989.  By 2019, it is estimated, only 659 S&L institutions remained in the United States.

In 1979, the S&L industry was facing many problems.  Oil prices doubled, inflation was in double digits for the second time in five years, and the Federal Reserve decided to target the money supply to control inflation. This not only let interest rates rise, it also made them more volatile.

As inflation continued to soar, S&Ls, with their concentration in home loans, found themselves squeezed by an interest rate mismatch.  The 30-year mortgages on their books earned single-digit interest rates, but they either had to pay depositors double-digit rates or lose them to competitors. Overnight, long-term depositors turned short term.  Funding long-term assets like mortgages with short-term liabilities like deposits is a risky formula, and in a high-inflation environment, it quickly makes insolvency inevitable.
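The squeeze can be seen with a bit of arithmetic. The figures below are hypothetical, but they mirror the mismatch described above: a thrift earning a fixed single-digit rate on old mortgages while its funding cost floats up with market rates.

```python
# Hypothetical numbers only: a thrift earns a fixed 6% on 30-year mortgages,
# while the rate it must pay depositors floats with the market. When deposit
# rates climb past the mortgage yield, the net interest margin goes negative.

def net_interest_margin(mortgage_yield, deposit_rate):
    """Spread, in percentage points, between asset yield and funding cost."""
    return mortgage_yield - deposit_rate

# The comfortable old world: pay 3% on deposits, charge 6% on mortgages.
print(net_interest_margin(6.0, 3.0))   # → 3.0 (profitable)

# The late-1970s squeeze: depositors demand double-digit rates.
print(net_interest_margin(6.0, 11.0))  # → -5.0 (losing money on every dollar)
```

The mortgage book cannot be repriced, so the loss compounds as long as funding costs stay high.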

For sure there are several parallels between then and the failures of Silicon Valley Bank and other banks over the last several months.  Just as many S&Ls went bust because surging interest rates increased their funding costs while their mortgages earned low fixed rates of interest, many of today’s banks face similar balance sheet problems.

The changing economic and financial environment ruined the “3-6-3” business model that had served thrift executives well for decades: pay 3 percent on savings deposits, charge 6 percent on mortgages, pocket the difference, and play golf at 3:00.

In 1982, lobbying from the S&L industry led Congress to permit them to make highly leveraged investments far removed from their original franchise to provide mortgage funding.  In response, the federal government also enacted statutory and regulatory changes that lowered the capital standards that apply to S&Ls.

For the first time, the government approved measures intended to increase S&L profits, as opposed to promoting home ownership.  The premise underlying the changes was that deregulation of markets could let the S&Ls grow out of their insolvency.  Instead, the crisis culminated in the collapse of hundreds of S&Ls, which cost taxpayers many billions of dollars and contributed to the recession of 1990-1991.

Some S&Ls also developed a Wild West attitude that led to outright fraud among insiders. Many S&Ls ended up defrauding their depositors, speculating on high-risk ventures, engaging in illegal land flips and accounting fictions, and committing other crimes.

The S&L crisis teaches at least one important lesson: There is no ending financial chicanery without holding senior bankers accountable for their wrongdoing.

Ford Motor Co. and Industrial Policy

The U.S. government is giving the Ford Motor Co. a $9.2 billion loan, by far the biggest infusion of taxpayer cash for a U.S. automaker since the bailouts during the 2008 financial crisis, to build three battery factories in Kentucky and Tennessee.  Neither Ford nor the Energy Department (DOE), which provides loans at far lower interest rates than those available in the private market, has revealed details about the loan.

The U.S. is taking a page from Beijing’s playbook.  China has a top-down industrial policy, with serious government planning and support of target industries. China’s sustained industrial policy has yielded the world’s largest battery manufacturers.  Between 2009 and 2021, the Chinese government poured more than $130 billion of subsidies into the EV market, according to a report last year by the Center for Strategic and International Studies.  Today, more than 80 percent of lithium-ion battery cell manufacturing capacity is in China.

Simply put, industrial policy means that centralized agencies formulate national visions and programs to develop specific industries.  It has been a toxic phrase in American politics.

As Gary Becker, who won the Nobel Prize for Economics in 1992, said, “The best industrial policy is none at all.” It has long been associated with pork barrel politics, picking winners, and crony capitalism.  The political rhetoric has been that the free market works best and is closely associated with freedom and democracy. The history of the U.S. does not square with this perspective.

On the surface, Ford would seem an unlikely party to receive the largest loan ever extended by the Department’s Loan Programs Office.  Just last month, Ford touted having almost $29 billion of cash on its balance sheet and more than $46 billion in total liquidity.  It is worth noting that one of the best-known loans made by the DOE was $465 million to Tesla in 2010 to support manufacturing of the Model S.

Ford aims to close the gap with Tesla on electric vehicles, just as the U.S. aims to close a similar gap with China. Ford told investors early last year that it would put $50 billion into its EV manufacturing efforts. By the end of 2026, the company wants to make two million EVs a year.

The practice goes back to Alexander Hamilton, the first Secretary of the Treasury, who outlined a strategy for promoting American manufacturing both to catch up with Britain and to provide the material base for a powerful military.  Hamilton’s “Report on the Subject of Manufactures” promoted the use of subsidies and tariffs.  Similar practices have appeared in various forms throughout American history.

During the 19th and 20th centuries, the government played an active role in promoting economic growth, using policies such as high tariffs to protect strategic industries, federal land grants, and subsidies for infrastructure development. The federal government has sometimes backed failures, but it also has remarkable success stories, such as nuclear energy, computers, the Internet, and the interstate highway system.

These days, industrial policy is viewed more positively, spurred by bipartisan concerns about the competitive threat China poses.  U.S. programs are now underway to support semiconductor production, develop critical technologies, secure key domestic supplies, and bolster industries that are considered strategically important.

For example, subsidies from the Inflation Reduction Act and Infrastructure Investment and Jobs Act are spread across the EV value chain and are carpet bombing the entire automobile industry.  There are tax credits for sourcing critical minerals within the U.S. or friendly countries, for manufacturing or assembling the batteries and EVs they go into, for the consumers who buy the vehicles, and even for anyone building the public chargers needed to keep those vehicles moving.

The debate over industrial policy will continue because it gets to the longstanding controversy over the role of the government in our economy.  One thing is clear: the rosy rhetoric about the U.S. not engaging in industrial policy is contradicted by the country’s history.

Cancel Culture and the Chinese Cultural Revolution

It’s not news that Americans live in a new Age of Magical Thinking. The Enlightenment is seen as the start of hate speech, feelings must always overrule facts, and transubstantiation has taken on a whole new meaning.  Men can become women simply by wishing it so.

Over the last several years, much ink has been spilled about whether there are similarities between cancel culture of the 21st century, particularly in Anglosphere countries, and China’s Great Proletarian Cultural Revolution.  Pundits warn of the dangerous implications of cancel culture.

Both social media and real-life mobs target people who dissent, aiming to ruin their reputations and sometimes getting them fired, all while toppling statues of the Founding Fathers and looting in the name of social justice.

Contemporary events come nowhere near the scale of violence and repression associated with the Cultural Revolution.  Thankfully, social ostracism and unemployment are not the same as firing squads and gulags, but they are still harmful, especially to those committed to free speech.

The ordinary American lives in an age when they witness “high-tech” lynching, to borrow a phrase coined in 1991 by then Supreme Court nominee Clarence Thomas.  The core features are public smears and ridicule, along with a moralistic mob forcing victims to publicly recant their sins.

Between 1949 and his death in 1976, dictator Mao Zedong directed a radical transformation of China.  He grew increasingly suspicious of government apparatchiks and Chinese Communist Party intellectuals, leading him in 1966 to launch a stunning attack on the establishment in the form of a “Cultural Revolution.”

He encouraged youthful Red Guards, his shock troops, to destroy the “four olds” (old ideas, old culture, old customs, and old habits).  In practice, this meant widespread beating, denunciations, and mob-instigated “trials.”  Red Guards roamed the country attacking establishment elites, including government officials, managers, intellectuals, and former members of the bourgeois class. The goal was to purge the country of anyone who was insufficiently leftist.

Today, America and other Anglosphere countries are going through an admittedly more genteel cultural revolution of activists preoccupied with identity politics and cancel culture preaching the same old shibboleths.  As under Mao, people suffer disproportionate consequences for small ideological heresies.

Cancel culture involves public shaming, boycotts, online harassment, and calls for removing people from positions of influence due to perceived offensive comments or behavior.  It can lead to reputational damage, loss of employment opportunities and social isolation without due process.  Cancelling people who disagree with you is straight out of the playbook of dictators and cults.

For example, when former NFL quarterback Drew Brees stated he could “never agree with anybody disrespecting the flag of the United States of America,” citing his grandfathers’ military service, he was accused of violating contemporary social justice dogmas. Acquiescing to the pressure less than 24 hours later, Brees issued an apology on Instagram. He soon followed up with another apology.  Then his wife apologized.  Any wonder why public figures spend their days walking on eggshells?

It remains to be seen where America goes next in its nascent cultural revolution.  How long the trend lasts will ultimately depend on whether Americans stand up for their convictions or cave before online mobs.  Maybe nothing permanent will come of it, despite the best efforts of today’s Red Guards.  It may well turn out that the worst harm from the legitimization of censorship and cancel culture will befall those on the right or the left who wield these weapons.

We would do well to remember the words of John Stuart Mill:

“He who knows only his own side of the case knows little of that.  His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion.”

Inflation, Interest, and the Fed

Interest rates play a crucial role in the economy, influencing savings, investment, consumption, and overall growth.  Central banks around the world cut benchmark interest rates sharply following the 2007-09 financial meltdown that tanked the global financial system. In many cases, the nominal interest rate was cut to zero, near zero, or even into negative territory.

It was thought that these aggressively low interest rates helped stimulate economic activity, although there remain uncertainties about the side effects and risks. 

Distressed, or “zombie,” companies feasted on cheap credit. These firms tied up resources that could have been better allocated to more productive and efficient businesses, hindering overall economic growth.

For example, companies such as Bed Bath & Beyond earned just enough money to continue operating, but were unable to pay off their debts as interest rates rose.  As rates have risen, many of the loans banks made to these firms have turned out to be stinkers, as borrowers miss payments or default. 

Indeed, cheap credit, by way of low interest rates, was allowed to persist for an improbable 14 years – much too long in the minds of many analysts.  What was initially seen as a blessing turned out to be a curse. 

When continued for too long, cheap credit effectively inspires excessive borrowing – some of it speculative.  And bubbles do eventually burst. 

A lot has happened since the 2007-09 financial crisis. Recently, inflation has returned with a vengeance.  The Federal Reserve and other central bankers are trying to stop surging prices by raising short-term interest rates, which is not necessarily a boon for the stock market or the economy.

Rising interest rates help cool the demand for credit and slow growth of the money supply, and therefore help restrain overall demand.  In theory, higher mortgage rates may slow housing price inflation and help make property more affordable over time.

Others argue that today’s rate hikes threaten to push up tomorrow’s housing costs amid high prices for materials and loans.  This creates a threat of future housing shortages that could lead to more inflation.

High interest rates also prevent the misallocation of capital that gooses the price of the riskiest assets in the stock market casino.  Then there are investment projects, often vanity projects, that only proceed because of cheap capital.

As interest rates rise, they incentivize savings in contrast to the recent near-zero interest rates that made savers – including many retirees – feel like fools.  

Finally, high interest rates give central banks room to cut interest rates in the event of a negative external shock. In sum, they act as a deterrent to excessive borrowing and spending, curbing inflationary pressure and preventing the formation of bubbles.

But higher interest rates also bring with them the risk of significant slowdowns in consumption.  They might choke off much needed business investment in new home building and renewable energy capacity, for example. 

Rising interest rates may cause the dollar to appreciate, making exports less competitive and leading to an export slowdown and perhaps a worsening trade deficit.

Higher interest rates certainly make government debt more expensive, sending debt costs soaring and eating up a bigger share of public budgets.

Finally, higher interest rates might lead to a broad-based economic slowdown that could hit stock prices, pension fund assets, and dividend incomes.

In recent months, inflation has been as persistent as gravity.  A cold dish of truth is that it is unclear when prices will moderate.  The Fed took a break from raising interest rates at its June meeting after a string of 10 consecutive rate hikes in just over a year. Still, the benchmark rate could go a bit higher in the near future.  

The Fed is taking some time to assess the effects of its prior rate hikes on inflation and the overall economy, as well as the impact of other economic activity – namely the collapse of three banks this spring.  Improvisation is clearly the order of the day.

The Future of Roadway Pricing

The need to find a better way of managing public roads in metropolitan areas is painfully apparent to many Americans each morning when they drive to work.

It is easy to conclude that the U.S. has made a series of wrong-headed choices about how to finance its all-important metropolitan roadway systems.  The results of these mistakes are ubiquitous and take several forms.

We have insufficient roadway capacity where it is most needed, as evidenced by severe traffic congestion on many critical roadway links in important metropolitan regions during increasingly long portions of the day.

We are chronically unable to build new roadway capacity to keep up with demand, to the point that blindly chanting “we can’t build our way out of congestion” too often replaces serious discussion of how to overcome obvious capacity shortfalls.

We insist on “saving money” in government operating budgets by reducing needed roadway maintenance, which causes roads to wear out faster and reduces long-term capacity.

To move beyond these mistakes, transportation policy makers must recognize the potential of recent technological breakthroughs that enable effective, market-oriented roadway financing systems that can dramatically improve how the U.S. manages, maintains, and pays for existing metropolitan roadway systems.

In simple terms, technology can now allow access to metropolitan roadway capacity through the same kind of marketplace mechanism traditionally used to distribute access to a host of private sector goods and services.

We can charge motorists directly for access to each roadway in a metropolitan area without requiring them to stop or slow down. Prices can be based on the distance they travel on that roadway and can be differentiated based on the “popularity” of each route as measured by the number of vehicles per hour traveling on it.

Prices can also be differentiated based on vehicle type, so trucks and other heavy vehicles that cause more wear and tear on pavement pay higher prices than small vehicles that cause less wear. Charges can be adjusted frequently to reflect changes in the number of vehicles traveling on a roadway.

Frequent price adjustments can also be used to guarantee motorists a certain minimum average speed on a particular route. Charges can be raised or lowered to maintain a target maximum number of vehicles on the roadway.
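The feedback loop described above (raise the charge when traffic exceeds the target, lower it when capacity goes unused) can be sketched in a few lines. All parameters here are hypothetical, chosen only for illustration; a real system would calibrate them per roadway.

```python
# A minimal sketch of the pricing feedback rule: nudge the per-mile toll
# (in cents) up when observed traffic exceeds the target density needed
# to sustain the promised speed, and down when it falls below.

def adjust_toll(toll_cents, vehicles_per_hour, target_vph,
                step=5, min_cents=10, max_cents=200):
    """Return the per-mile toll, in cents, after one adjustment interval."""
    if vehicles_per_hour > target_vph:
        toll_cents += step      # too crowded: raise the price
    elif vehicles_per_hour < target_vph:
        toll_cents -= step      # spare capacity: lower the price
    # Keep the toll inside its allowed band.
    return max(min_cents, min(max_cents, toll_cents))

print(adjust_toll(50, vehicles_per_hour=1800, target_vph=1500))  # → 55
print(adjust_toll(50, vehicles_per_hour=1200, target_vph=1500))  # → 45
```

Run at short intervals, small steps like these let the price converge toward whatever level keeps traffic at the target without abrupt swings.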

Intelligent use of these new technologies narrows the often considerable gap between a roadway system’s theoretical capacity and its functional capacity by using the classic economic principle of using price to control the demand for scarce resources.  It also results in better service for all roadway customers in a metropolitan area.

Note the term “customers.”  A customer is a willing buyer of what you have to sell at the price you are charging. What makes someone a willing buyer is a personal judgment about whether the value they are getting is greater than the price charged.

Suppose a driver can use two different lanes to reach their destination.  One lane charges a price per mile but promises an average speed of 60 mph.  The other charges nothing, but moves at less than 10 mph.  If the driver is on their way to an important business meeting and can’t afford to be late, they may decide that the value of time saved by using the priced lane is greater than the cost.  But if they are simply making a discretionary trip to the mall, they may opt to use the free lane and put up with the additional travel time.

Put simply, roadway pricing lets you create value for drivers by offering them shorter travel times for high-priority trips.  Drivers determine the priority of their trips, making personal judgments about which are the most important and how much they are willing to pay to reach their destinations faster.
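The driver’s calculation in the two-lane example reduces to a single comparison: is the value of time saved greater than the toll?  A minimal sketch, with every number a hypothetical estimate the driver supplies:

```python
# The two-lane choice described above: take the priced lane only when the
# dollar value of time saved exceeds the toll. All inputs are hypothetical
# estimates made by the driver, not measured values.

def takes_priced_lane(miles, toll_per_mile, fast_mph, slow_mph,
                      value_of_time_per_hour):
    """Return True if the priced lane is worth it to this driver."""
    time_saved_hours = miles / slow_mph - miles / fast_mph
    time_value = time_saved_hours * value_of_time_per_hour
    toll = miles * toll_per_mile
    return time_value > toll

# Rushing to a meeting with time worth $60/hour, the toll pays off.
print(takes_priced_lane(10, 0.50, 60, 10, 60))   # True
# On a discretionary mall trip valued at $5/hour, take the free lane.
print(takes_priced_lane(10, 0.50, 60, 10, 5))    # False
```

The point of the sketch is that the threshold is personal: the same toll on the same route produces different answers for different trips.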

Using price to distribute travel demand rationally and raise resources for roadway maintenance?  Now that would be something to write home about.

Playing Let’s Pretend

One definition of intellectual dishonesty is the practice of ignoring reality when it interferes with what you want to believe about the way the world works.  The bipartisan deal President Biden signed on June 3, after months of political brinkmanship, raising the debt limit for two years and increasing the amount of money the federal government can borrow, is an example.

Cynics might be forgiven for insisting there is a great deal to be said for intellectual dishonesty in American society.  They would remind us that the body politic is much more likely to enjoy an adequate supply of the public goods and services that are so vital to the national welfare if Americans can convince themselves that “someone else” is paying for them.

Whenever we admit to ourselves that the cost is coming out of our pockets, we inevitably try to cut corners or do things on the cheap, and ultimately deprive ourselves of much that is really needed.

Many Americans would argue that government has played a major role in this national con game since the early days of the republic.  By cleverly manipulating things like tax rates, deductions, and public accounting practices, the government has made it easy for Americans to persuade themselves that “the other guy” is paying most of the bill for the things we need.  All of which has helped make the United States great—in the sense of becoming the world’s most ostensibly successful national economy for the moment.

The national debt has soared, nearly tripling since 2009, forcing the U.S. Treasury Department to borrow more to pay for government spending.  The legislative curb on this borrowing is known as the debt ceiling.  When Treasury spends the maximum amount authorized under the ceiling, Congress must vote to suspend or raise the limit on borrowing.

The latest deal includes caps on federal spending, additional work requirements for food stamps and welfare, and reforms to build energy projects more quickly.  But the caps would not actually reduce spending.  The endgame is to make spending grow more slowly, say, more slowly than the rate of inflation.

Divided government is never pretty.  But if you are of a Panglossian persuasion, you will rejoice that this deal enables both sides to claim a win of sorts.

Neither wants to be responsible for a catastrophe, so each pretends it is a win-win deal. Republicans can say they cut spending since spending will grow more slowly than it might have otherwise. Democrats can argue that they prevented actual cuts.  In theory, everyone wins and politicians insist they conducted themselves in an intellectually honest fashion.

But the American public, not elected officials and government bureaucrats, is to blame for this.  Americans insist on receiving more from government than they’re willing to pay for, and they don’t ask any serious questions about the charades and fiscal shenanigans necessary to sustain the illusion of a free lunch.

The U.S. is up to its neck in debt – $31.4 trillion as of January 2023.  Since it cannot increase its income in the short term, it needs to exchange new debt for old debt, leaving no choice but to raise the debt ceiling to avoid global economic chaos.  The annual federal deficit has averaged nearly $1 trillion since 2001, meaning government spends that much more money than it receives in taxes and other revenue.

To make up the difference, the government has to borrow to finance payments that Congress has already authorized. Even with the debt limit raised, the best way to repay the debt is to figure out how to revive the economy.

Good government types and fiscal moralists may be outraged by these shell games and urge Americans to stop acting like children.  But Americans have a long and pragmatic tradition of believing that fiscal morality, like religion and the law, is great as long as it doesn’t get in the way of anything really important.

Pay Me Now or Pay Me Later

Maintenance is often seen as the stepchild of infrastructure.  It easily slips from public notice in the face of more glamorous new construction.

Yet delayed or poorly executed maintenance can add billions of dollars to the private and public costs of infrastructure.  In addition, deferred maintenance hastens the need to replace assets by years, if not decades.  Many urban transit systems are testament to the high cost of inadequate maintenance.

Infrastructure spending has traditionally been divided into two categories: capital and operations and maintenance.  But such a breakdown can be misleading and is too simplistic to serve as a basis for allocating resources.  A more useful approach would be to think along functional lines: capital spending can be split into new capacity and rehabilitation, while operations and maintenance can be divided into its two components:

  •      New Capacity—expenditures for the engineering design or construction of new facilities or for plant and equipment that significantly expand existing capacity.
  •      Rehabilitation—capital-intensive activities that extend the useful life of a facility by more than two years.
  •      Maintenance—expenditures on routine schedules to repair or maintain the good working order of existing facilities, plant, equipment, or rolling stock that neither add new system capacity nor extend the life of facilities beyond two years.
  •      Operations—expenditures incurred on a routine basis for labor, utilities, engineering, and other overhead activities that support the day-to-day delivery of services.
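The four functional categories amount to a simple decision sequence, which can be expressed as a sketch.  The flags and the two-year threshold mirror the definitions above; the function and its inputs are purely illustrative.

```python
# Hypothetical classifier mirroring the four functional spending categories
# defined above, including the (admittedly arbitrary) two-year criterion
# that separates rehabilitation from maintenance.

def classify_spending(adds_capacity, extends_life_years, is_repair):
    """Assign an expenditure to one of the four functional categories."""
    if adds_capacity:
        # Design or construction that significantly expands capacity.
        return "New Capacity"
    if extends_life_years > 2:
        # Capital-intensive work extending useful life beyond two years.
        return "Rehabilitation"
    if is_repair:
        # Routine repairs that keep existing assets in good working order.
        return "Maintenance"
    # Routine labor, utilities, and overhead for day-to-day service.
    return "Operations"

print(classify_spending(True, 0, False))     # New Capacity
print(classify_spending(False, 10, False))   # Rehabilitation
print(classify_spending(False, 0, True))     # Maintenance
print(classify_spending(False, 0, False))    # Operations
```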

To be sure, a rigorous breakout of spending into each category is difficult.  It is particularly easy to confuse maintenance and rehabilitation.  For example, the two-year criterion used to differentiate between them is somewhat arbitrary.  The key is that “pure” maintenance focuses on short-term improvements (filling potholes) while rehabilitation has a longer-term impact.

Similarly, rehabilitation work and new capacity are often combined.  A road may be resurfaced at the same time that additional lanes are added.  Maintenance and operations also overlap.

In many ways, these four activities represent a continuum that, taken as a whole, could be called lifecycle costing.  In other words, inattention to one aspect increases the cost of all the others.  Finding the most cost-effective combination of spending, as opposed to focusing exclusively on building things, is one of the keys to effective infrastructure management.

Proper maintenance of infrastructure assets is important for two reasons.  First, there is a direct link between the quality of current services and the performance of the nation’s infrastructure.  Second, public perceptions of the overall quality of infrastructure services depend on good routine maintenance.

Just to be clear, skimping on maintenance spending raises long-term infrastructure costs.  Effective maintenance reduces rehabilitation costs and/or delays the time when such spending is required.

Although maintenance spending plays an important role in lifecycle costing, it is not always an obvious part of the infrastructure decision-making process.  As a result, maintenance can end up receiving neither adequate attention nor funding.

Since local governments own and operate most infrastructure assets, they also bear the heaviest financial burden for maintaining those assets.  Yet local governments do not always possess the financial resources or the institutional flexibility to implement innovative maintenance programs.  Consequently, they must be the main focus of efforts to ensure adequate maintenance.

Maintenance of infrastructure assets is surely not a politically compelling category of public spending, which compounds the difficulty of getting it properly funded.

Putting maintenance on par with other categories of infrastructure investment is not a simple matter, especially given the temptation to defer maintenance when the much higher costs it causes would likely hit on somebody else’s watch. That explains why elected officials all too often put the politics of new construction ahead of maintaining existing infrastructure.