We need to think of our roads as cows

Academics have filled volumes on the differences between what they call public and private goods. Too often the distinction seems to come down to ownership: If something is owned by society as a whole, it is a common good. If owned by one or more individuals, it is a private good.

Common goods are things like public schools, parks or roads that are owned by all of society. The responsibility for operating and maintaining them is (usually) assigned to government and supported by tax revenues.

This is the standard pattern for metropolitan roadway systems in the United States. They are built and maintained by a mixture of municipal, county and state governments that fund most of the cost from general tax revenues. These revenues are often supplemented by “user taxes” levied on the purchase of motor vehicle fuel, which implies that motorists pay based on how much they use the roadways.

But even when a roadway network is supported by fuel taxes, there remains a disconnect in the minds of motorists between the act of driving on roadways and paying for them. This is quite different from commodities distributed through the marketplace, where a consumer must buy and pay for some quantity of a commodity before being able to consume it.

The result is an instinctive sense among motorists that roadways are free.

A useful metaphor popularized by biologist Garrett Hardin in 1968 illustrates the basic problem. Imagine a community that has a publicly owned pasture where local farmers can graze their dairy cows without having to pay any user charges. Under these circumstances, each farmer seeks to graze as many cows as possible in the pasture because each additional cow will increase milk production but not feeding cost.

This only works so long as the number of grazing cows remains within the pasture’s feeding capacity. Once the farmers exceed this limit, the pasture’s viability begins to break down. The cows consume its grass faster than it can replenish itself with fresh growth, resulting in less nourishment for each cow.

When farmers are faced with cows that are producing less milk to sell, their logical response is to add still more cows to the overused pasture. When all the farmers do this, the result can only be an increasingly dysfunctional pasture and declining milk production for everyone.

In Hardin’s words: “Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit – in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.”

Severe traffic congestion is a modern example of the tragedy of the commons. Hardin’s metaphor illuminates a broad range of socioeconomic questions about why congestion afflicts so many metropolitan areas.

It illustrates the inevitable tendency to overuse common goods that are perceived to be free. It explains why this tendency leads to a condition in which supply never catches up with the demand. It describes how the widespread availability of free public goods can significantly influence the underlying economics of many private activities that come to depend on them. And it demonstrates the relative ease with which an entire society can be locked into counterproductive behaviors.

The most sensible solution to the tragedy of the commons may be to charge farmers grazing fees. This immediately confronts them with a series of critical business judgments about how to maximize their milk revenues, such as how much to spend feeding pasture grass to their cows or whether to feed them corn or other grains instead.

When all forms of cattle feed are distributed at prices that reflect supply and demand, the business of milk production becomes more rational. Perhaps the same is true for metropolitan roadway systems: directly charge motorists for roadway use and the economics of building, operating and maintaining roadways change rapidly – and for the better.

originally published: March 4, 2014

If the minimum wage kept up with inflation

In his fifth State of the Union address last month, President Obama urged Congress to gradually increase the minimum wage from $7.25 to $10.10 per hour. It was hardly a radical request.

On the same day the president delivered his address, the executive director of the Port Authority of New York and New Jersey sent a letter to four airlines – American, Delta, JetBlue and United – asking them to immediately give a $1 an hour raise to workers who made less than $9 an hour.

The issue of raising the federal minimum wage to help lift millions of Americans out of poverty has bounced around more in recent years than a check from the Greek government. In last year’s State of the Union address, President Obama proposed raising the minimum wage to $9 an hour.

Minimum-wage regulation began in New Zealand in 1896, followed by Australia in 1899 and Great Britain in 1909. In the United States, the Fair Labor Standards Act of 1938, signed by President Franklin Roosevelt, established the first federal minimum wage at 25 cents per hour.

Former President Clinton granted states the right to set minimum wages above the federal level. Currently, 19 states and the District of Columbia have minimum wage rates higher than what the federal government requires. Washington State is the highest at $9.19 per hour. California passed legislation last September designed to increase the state’s minimum wage to $10 by 2016. The Massachusetts Legislature is considering an increase from $8 to $11 per hour.  

The minimum wage had not been modified for about a decade leading up to 2007, and inflation had reduced its real value to its lowest level in decades. That May, President George W. Bush signed a spending bill advocated by the Democrats which, among other things, amended the Fair Labor Standards Act to increase the federal minimum wage in three steps: to $5.85 per hour effective July 24, 2007, to $6.55 per hour effective July 24, 2008, and to $7.25 per hour effective July 24, 2009. At $7.25 an hour, a full-time worker (40 hours a week, 52 weeks a year) earns $15,080 per year.

There is fierce disagreement over the effects of raising the minimum wage on both employees and employers. Labor unions are certainly the most ardent supporters of hiking the minimum wage. They and other advocates argue that workers should earn enough to live on, that a higher minimum wage would have only a modest impact on jobs, and that it would lift a larger number of the working poor out of poverty.

Opponents basically reverse the argument. For them, increasing the minimum wage would deliver a blow to employment and the economy, especially for those at the lower end of the wage scale. They argue that a higher minimum wage would be too heavy a burden on employers, especially small business owners, who would cut back on hiring, resulting in higher unemployment and the accompanying social costs.

Studies on the impact of increasing the minimum wage have become an industry unto themselves. Each side undertakes its own partisan cost-benefit calculations, and their analyses depend so heavily on debatable assumptions that they do little more than underscore how difficult it is to determine what the impact of a minimum-wage hike would be.

One thing is certain, two things are for sure: There will be no serious discussion about increasing the minimum wage while both sides believe their positions offer them political benefits. Change comes slowly in a country as large, diverse – and troubled – as ours.

What’s so bad about restoring the minimum wage to where it was in 1968? Raising the wage to $10.10 per hour would bring it close to the purchasing power it had then. If the 1968 minimum hourly wage of $1.60 had kept up with inflation, it would be about $10.40 today.
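For readers who want to check that figure, the math is a simple price-index adjustment: multiply the 1968 wage by the ratio of today’s Consumer Price Index to the 1968 index. The short sketch below is illustrative only; the CPI-U values are approximate annual averages (assumptions, not official figures), so the exact result depends on which months and which index series are used.

    # Back-of-the-envelope CPI adjustment of the 1968 minimum wage.
    # The index values are approximate CPI-U annual averages (assumed for
    # illustration); substitute exact BLS figures for a precise answer.
    wage_1968 = 1.60      # federal minimum wage in 1968, dollars per hour
    cpi_1968 = 34.8       # approximate CPI-U annual average, 1968
    cpi_recent = 233.0    # approximate CPI-U annual average, 2013

    adjusted = wage_1968 * cpi_recent / cpi_1968
    print(f"1968 minimum wage in recent dollars: about ${adjusted:.2f} per hour")

With these rough inputs the calculation lands in the $10-to-$11 range, consistent with the column’s figure of about $10.40.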

In a world in which a senior Wall Street executive’s bonus, exercised stock options and vested deferred stock can bring his pay to about $20,000 an hour, raising the minimum wage to $10.10 doesn’t seem like too much to ask.

originally published: February 15, 2014

America’s financial ‘Guns of August’

The Financial Meltdown of 2008 was one of the most critical events in American history. As we mark the 100th anniversary of the outbreak of World War I, a number of parallels can be drawn between these two events that changed the course of history.

The financial meltdown wiped out some $11 trillion of the nation’s wealth. It eliminated more than eight million American jobs, many of which are gone forever. It froze the nation’s vast financial credit system, leaving millions of businesses without the cash needed to operate. It forced the federal government to spend $2.8 trillion and commit another $8.2 trillion in taxpayer funds to bail out crippled corporations deemed “too-big-to-fail.” It cost millions of Americans their homes, life savings and hopes for a decent retirement.

The New Year is a time to reflect on lessons from the past. Scholars still study World War I and the changes left in its wake. Like the Financial Meltdown of 2008, the war was one of the most critical events in American history.

In 1914, Archduke Franz Ferdinand was heir to the throne of the multi-ethnic Austro-Hungarian Empire. On the bright Sunday morning of June 28, he and his wife, Duchess Sophie, made an official state visit to the Bosnian city of Sarajevo, which was then an occupied province of the empire.

Late that morning the cars in their procession made a wrong turn on the unfamiliar streets of Sarajevo and halted to get their bearings. At that moment, Gavrilo Princip, a young Bosnian freedom fighter – or terrorist, depending on your point of view – fired two shots into the back seat of the open car carrying the archduke and duchess and killed them both.

Europe proceeded to come apart at the seams. Fewer than six weeks later, on Aug. 3, the armies of Kaiser Wilhelm’s Germany invaded Belgium. By the middle of August, the lineup of combatants was basically complete. Britain, France and Russia were formally at war with Germany and Austria-Hungary.

The war’s staggering costs horrified the world. All told, the 16 nations that ended up involved spent the equivalent of $3,000 trillion (in inflation-adjusted 2009 U.S. dollars) and mobilized 65 million troops, 12 percent of whom were killed. Another 33 percent were wounded.

Three of Europe’s four leading monarchs were toppled. Only Britain’s royal family survived.

The Austro-Hungarian Empire collapsed and was replaced by some half-dozen ethnically based nations, most of which were overrun by Germany in World War II and became puppet states of the Soviet Union thereafter. And of course, the war spawned communism and fascism.

Europeans, having borne the brunt of the suffering, lost confidence in the so-called ideals of western civilization they’d taken for granted before 1914. The credibility of their governments was particularly hard-hit. Citizens were convinced that those governments had persistently lied to them to protect the elites at the cost of everyone else and squandered millions of lives by mismanaging the war.

Victors and vanquished alike were left bankrupt, owing more to the United States (which sat out most of the war and became the world’s leading creditor nation) than they could ever repay, cementing America’s rise as an industrial power.

And all for a war that settled virtually nothing.

Our generation experienced its own version of the “Guns of August” when the world of international finance melted down so catastrophically in 2008. The meltdown drove the country into the worst economic crisis since the 1930s.

Some people may find obvious parallels to 1914 in the fall of the famed investment banking house of Lehman Brothers, which the federal government allowed to collapse into bankruptcy in September of 2008.

But a more telling parallel to Gavrilo Princip drawing his pistol may have occurred more than a year earlier in July of 2007, when investment banking firm Bear Stearns & Co. placed two of its hedge funds in bankruptcy because they had run out of cash.

Little more than a year later, the chain of events that bankruptcy started had changed the world forever.

originally published: January 25, 2014

Balancing technology with need

Back in 1954, when Elia Kazan celebrated the tradition-bound world of intercontinental goods movement in his Oscar-winning film “On the Waterfront,” few could have imagined that the world he depicted was on the verge of becoming as obsolete as the Marlon Brando character’s career. Today, surface transportation is in a similar place, thanks to an explosion of new technologies.

Two years after “On the Waterfront,” an entrepreneurial trucking magnate named Malcom McLean first arranged to pack hundreds of individual crates of goods into a few large steel containers that could quickly and efficiently be transferred by mechanical cranes between ocean-going ships and land-based trailer trucks without disturbing their contents. It was quite a change from the age-old tradition of having large crews of dockworkers slowly move each crate by hand from ships to trucks and vice versa.

This marked the birth of what we now call “containerization.” By slashing the costs of moving goods, it made possible the huge growth in global trade. Today, a person in Kansas City can buy consumer goods mass-produced in China for a fraction of what his or her grandparents paid. In the process, containerization totally transformed the infrastructure and operations of the shipping and port industries. It also accelerated global competition and technological change.

The same forces that allow American families to buy cheap goods make them fearful that their jobs will be eliminated by technology or performed more cheaply by armies of high-skilled, low-cost foreign workers. Consumers benefit from the low-cost products and services global competition provides, but that same competition may reduce both wages and buying power.

Containerization was conceived and developed by a visionary outsider who imposed it on reluctant ocean-shipping and port-operating firms that would have much preferred to keep doing the same old things in the same old ways. In other words, it became part of the external environment within which those tradition-bound industries had to function. They were forced to understand its implications for their businesses. We must do the same when it comes to the external environment within which transportation functions.

Surface transportation is awash with new technology that is transforming it just as containerization transformed ocean shipping and port operations. We already have technologies for collecting tolls without requiring motorists to slow down, for measuring the average speeds and densities of traffic flows on roadway lanes at any given moment, and for pinpointing the location of buses and other public transportation vehicles.

Just over the horizon are technologies that have the potential to make transportation much safer, more efficient and friendlier to the environment: providing instant communication between roadway operators and motor vehicles about bottlenecks and alternate routes; preventing accidents; minimizing deaths, injuries and collateral damage in accidents that can’t be avoided; and monitoring the contents of containers moving by road, rail and air without disrupting traffic flows.

But these new technologies will be as much a curse as a blessing unless we learn how to properly manage their transfer from the laboratory to the marketplace.

For example, we face the prospect of having to evaluate the pros and cons of implementing tolls on limited-access highways, and of entering into agreements with private firms to operate such highways. But how can we realistically do this if we don’t understand what the long-term impact of new technologies will be on these highways?

And let’s not forget that the design of any technological innovations must be customer-driven, not provider-driven – a fact that is so obvious, but so often overlooked.

Technology is as much a part of our lives today as eating and breathing. But in transportation, as in other areas, we need as much information as possible about what new technologies are, how they work, what they can do and what problems they pose, so we can use that information to devise policies that spur job creation, increase competitiveness and cushion the economic pain to workers by minimizing dislocation.

originally published: January 11, 2014

How America can become a job creator

Unless you have been hibernating, you understand that the debate over the proper role of government is a central issue in America’s current troubled political environment.

The contretemps over this question are exacerbated by high unemployment, which exerts a severe economic drag on the country. We should approach the question of government’s role with job creation as our top priority.

Nearly 11 million Americans are unemployed and another 9.8 million either involuntarily work only part-time or are too discouraged to keep looking for a job. Families struggling to make ends meet on unemployment benefits are no longer able to participate fully in the nation’s consumer economy. And since consumer spending accounts for some two-thirds of the nation’s gross domestic product, their reduced spending poses an obstacle to economic recovery.

People’s views on the role of government are heavily influenced by their political philosophies. Some care most about individual freedom, seeing wealth creation mainly as the product of individual effort; others prioritize promoting the well-being of the community as a whole. These two philosophical conceptions lead to disagreements about government’s proper role in the economy. Those on the right believe less government leads to more robust economic growth and those on the left argue for more government intervention.

The real problem is that what both groups really want is to find a political strategy that will tip a few red states blue and vice versa. That makes the philosophical clash toxic because bad politics trumps smart public policies.

There are, however, some basic areas of agreement about the role of government. For example, government should protect us from violence. To do so, government must have a monopoly on coercive power. Otherwise anarchy develops, and as the 17th-century philosopher Thomas Hobbes noted, “the life of man (becomes) solitary, poor, nasty, brutish, and short.”

Similarly, Adam Smith, the intellectual messiah of capitalism, argued that government should protect “society from the violence and invasion of other independent societies” and protect “as far as possible every member of the society from the injustice or oppression of every other member of it.” The most limited government, then, is one whose sole function is to prevent its members from being subjected to physical coercion.

But even Smith argued that government should create and maintain “certain public works and certain public institutions, which it can never be for the interest of any individual or small number of individuals, to erect and maintain.” Think roads, bridges and sewers – the infrastructure required for society to function and grow.

Consider the Erie Canal, the transcontinental railroad, the great dams and water systems of the West, airports, seaports, transit, and the highways and bridges that are part of America’s great public works inheritance. They were the envy of the world and supported the growth of the greatest economic power the world has ever known.

One way to pay for infrastructure upgrades is to recruit private firms as active partners to help fund and operate these projects. If properly structured, public-private partnerships could tap into billions of dollars in private capital looking for a home.

This kind of ambitious infrastructure investment plan could give the nation’s economic growth a vital shot in the arm by creating new jobs and reversing the negative-multiplier effect caused by high unemployment and reduced consumer spending.

originally published: December 14, 2013

Extreme wealth inequality threatens the nation

One of the salient characteristics of the last 20 years has been the unprecedented growth in income and wealth inequality, and the extent to which both have flowed to the proverbial 1-percenters.

Market capitalism has generated enormous wealth, but the distribution of the spoils of capitalism has gone awry. While there are many ways to measure inequality, consider that in today’s Gilded Age, the wealthiest 1 percent of American households enjoy a higher total net worth than the bottom 90 percent, and the top 1 percent of income earners receive more pretax income than the entire bottom half.

Since 1979, 36 percent of all after-tax gains went to the 1-percenters; over 20 percent of those gains went to the top one-tenth of 1 percent of the income distribution.

The increasingly unequal distribution of income and wealth threatens not only the social fabric of American society but the economy as well. The mega-rich cannot spend enough to offset the lost demand that results from a shrinking middle class, which slows economic growth.

Growing inequality is making a lie of the American promise that this is a country where if you work hard, you can make it into the middle class. We are witnessing the hollowing out of the middle class; it is being mothballed like an old Navy ship. The last time that income inequality in the land of plenty was as profound as it is now was immediately before the 1929 stock market crash.

Right now, more than 8.4 million Americans are collecting either state or federal unemployment benefits and one out of every seven depends on food stamps, the highest share of the population ever to do so. A shrinking few claim a disproportionate share of the nation’s wealth at the expense of everyone else.

If we could identify a single culprit to blame for this mess, it would make for a good television drama. But the story of rising income inequality is more complex. None of the major explanations are exhaustive or definitive, and making sense of them is no easy task.

Some blame globalization, a process of closer integration between different countries and peoples made possible by falling trade and investment barriers, tremendous advances in telecommunications and drastic reductions in transportation costs. Globalization has forced American workers to compete against the huge supply of low-cost labor in the developing world and contributed to the declining influence of labor unions.

Others point to new labor-replacing technologies that threaten both unskilled and skilled workers, while they increase demand for a select few with highly specialized skills. They argue that American public education does not provide children with the advanced skills they need to compete in this new world.

Stated differently, the pace of technological advance has outstripped the educational system’s ability to supply students with the skills they need to utilize this technology, leading to outsized earnings gains for those who have such skills. This is the so-called college wage premium.

Over the past few decades, people in developed economies who were educated enough to take advantage of the technological advances won higher wages. Others got left behind.

Finally, there are those who contend that immigration policy worsens inequality. The mass influx of low-wage workers probably reduces global inequality at the same time it increases inequality within America by reducing the wages of hard-working, semi-skilled Americans.

Many pundits contend that we can reverse the deterioration of the middle class with a series of policies such as revising the tax code, making free trade fair, investing in America’s infrastructure, rethinking training and education and strengthening labor unions.

Perhaps America can deal finally with the divisive issue of inequality after having spent decades ignoring it, but hope is not a strategy. The only thing we can be certain of is that there are no quick fixes or easy solutions, and the longer it takes to address the problem, the more painful the cure will be.

originally published: November 30, 2013

The offshoring of the American Dream

By all accounts, Americans continue to experience the worst economy since the Great Depression. Unemployment remains unacceptably high, many of the jobs that produce real income have been offshored and middle-class earnings are stagnant. Looking ahead, it’s likely to get worse before it gets better.

Yet corporate profits are doing just fine, thank you. Today they make up about 12.5 percent of America’s gross domestic product. Just two years ago, they reached their largest percentage of GDP since the 1950s. On the other hand, wages and salaries, which accounted for 47 percent of GDP in 1985, are currently at around 42 percent.

Among the reasons for the combination of lower wages and high corporate profits in a weak economy is that American firms have discovered the advantages of exporting manufacturing and service jobs to countries with an abundance of productive, low-wage workers. Firms substitute cheap foreign labor for American workers. All the while, those Americans are told that offshoring is part of free trade and globalization.

Early offshoring was focused on manufacturing, but in recent years, U.S. firms have taken advantage of modern communication technology to outsource service activities. This trend cuts across all industries and occupations, ranging from lower-skilled manufacturing jobs to those requiring more skill and education, including those in the information technology sector. Put bluntly, they are exporting jobs to countries where wage rates are low, causing higher unemployment and lower living standards in the U.S.

Cheerleaders for offshoring argue that the money companies save will, in the long term, create new and better domestic jobs. These jobs must be disguised in the employment statistics; very well disguised, indeed. Moreover, they argue that when firms save money, consumers benefit from lower prices. So while free trade causes some dislocation, the benefits outweigh the costs. This pitch has become a totem of belief among free-trade advocates but it’s cold comfort for those whose jobs have been exported.

It was reported last month that IBM now employs more people in India than it does in the U.S. Its Indian workforce has grown from 3,000 in 2002 to about 112,000 last year. The reason is simple: The cost of labor in India is only a fraction of what it costs to employ the equivalent workers in the U.S. The average annual salary for an IBM employee in India is $17,000 compared with $100,000 for a senior American IT specialist.

Given such wage differentials, it’s not surprising that we are now witnessing the great migration of white-collar American service jobs. While India is the largest destination, the jobs have also gone to Eastern Europe, the Philippines, China and Mexico.

The offshoring of jobs may be one of the underlying reasons why Great Recession job losses look quite different from those of past recessions. American unemployment is becoming structural rather than cyclical and may worsen over time no matter how much public stimulus is provided.

So we have finally figured out how to make income redistribution happen on a global scale: American workers have to be less rich so their overseas counterparts can be less poor. Offshoring increases income levels in developing countries and the theory is that with greater wealth, those people will be able to demand and receive better treatment. The question is whether these interests should outweigh the interests of American workers.

Maybe jobs will return when American wages are as low as those of our foreign competitors and corporations decide to come home to exploit cheap labor. But it seems they first have to impoverish domestic workers so those workers can become rich again in the future.

originally published: November 6, 2013

Middle-class America holds no influence over Congress  

The rest of the world watched the latest game of chicken over the U.S. government shutdown, which stretched on for more than two weeks and threatened to result in financial default, all of which again raised the question of whether the world’s leading power has lost the capacity to govern itself. Congress has not passed a proper budget since 2009.

The first government shutdown in 17 years ended when the Senate and the House of Representatives reached another 11th hour deal to avoid a financial default and get the government running again late on the evening of Oct. 16. The president signed the legislation early the next day. The bill approved funding the government until Jan. 15, 2014 and suspended the nation’s borrowing limit of $16.7 trillion until Feb. 7.

Of course, Congress could not resist larding the legislation with pork. Senate Minority Leader Mitch McConnell, who was instrumental in ending the crisis, got $2.9 billion for a dam in his home state of Kentucky. Congress also awarded the widow of the late New Jersey Sen. Frank Lautenberg $174,000, the equivalent of one year’s salary. In 2012, the Capitol Hill publication Roll Call named Lautenberg one of the 50 richest members of Congress with a net worth of about $56.8 million.

The threat of a government default is off the table for now. But instead of resolving underlying disputes, the short-term deal only pushed the hard choices off to another day. It gives the parties some time to  cool off and negotiate a broader spending plan.

Brace yourself for another cliffhanger that resembles a bad daytime soap opera. America will continue its habit of governing by crisis after crisis after crisis. In this troubled political environment, is it any wonder that businesses are sitting on their cash rather than investing in new factories, equipment, and more workers?

As part of the recent deal, the House and Senate will appoint members to a bipartisan group tasked with hammering out an agreement by Dec. 13 on a blueprint for tax and spending policies over the next decade, one that may include tax increases and structural reforms to entitlement programs such as Medicare – something the two parties have not agreed on in years.

Given the recent track record, the chance that this new forum can deliver by its deadline – in time for Congress to act by Jan. 15 on funding to keep the government open – is slim to none.

Can the U.S. recover its tarnished image? Is the recent dysfunction in Washington now behind us, or is it destined to become part of the permanent bleak political landscape?

Conventional wisdom holds that the deal made in Washington guarantees another shutdown and debt ceiling fight early next year. In other words, Americans will soon be witnessing another psychodrama being played out with politicians again acting badly, more divided than ever, and pulled apart by two different conceptions of government.

If you believe the political roosters on Capitol Hill can be counted on to stop squawking, bridge the gap between competing visions of the role of government and reach agreement on critical problems ranging from employment to energy to entitlements to education, then you have every confidence in the full faith and credit of the U.S. government.

The average hard-working, middle-class family is coming to recognize that they don’t have a shred of influence and that our leaders in Washington seem to care only about those who write the checks that allow them to stay in power. Nobody wants to have to say it, but Americans need to read it to begin to understand that campaign contributions are politicians’ favorite form of catnip.

originally published: November 2, 2013

The Fed takes the middle class to the cleaners

Despite all the talk about the progress made over the last four years, the jobless recovery is eating away at the American economy like a swarm of termites invisibly consuming a house from the inside out, widening income inequality and undermining Americans’ belief in upward mobility.

The economic growth rate has fallen to less than 2 percent, and the only reason the headline unemployment rate has declined to 7.3 percent is that so many people – especially middle- and lower-class Americans – have stopped looking for work or are working part-time. Job creation can’t even keep up with population-related growth in the labor force.

It is anticipated that under Janet Yellen, the likely successor to Federal Reserve Chair Ben Bernanke, current monetary policy will remain in place and the government will continue to pump trillions into the financial system, keeping interest rates near zero to offset the drag of current fiscal policy. When considering the feasibility of any future quantitative easing – government-speak for printing money – the Fed would be wise to consider the policy’s adverse effects on savers and retirees and their interest income.

According to economics textbooks, reducing interest rates and the cost of credit is supposed to spur lending; encourage spending on big ticket items like cars and houses; and boost business investment in inventories, plant, equipment and hiring.

Sure, credit is the most important and most direct channel through which the Fed’s policies affect the economy, but the transmission lines through which cheap money flows are clogged. Despite sitting on an astonishing $2.3 trillion in capital available for lending, banks remain reluctant to extend credit to all but households with the highest credit scores. If you don’t need money, you can get all you want. And by dropping its short-term lending rate to near zero, the Fed allows banks to borrow at, for example, 0.10 percent and invest the proceeds in Treasury bonds. Nobody in their right mind wants to own the 10-year Treasury bond at a 2.5 percent interest rate, but banks are doing it because they can borrow nearly interest-free and earn a spread of 2.40 percent.

The good news is that the Fed’s policies have boosted the stock market. Chairman Bernanke has said that “higher stock prices will boost consumer wealth and help increase confidence, which can also spur spending. Increased spending will lead to higher incomes and profits that, in a virtuous circle, will further support economic expansion.”

But while a rising stock market has helped market participants like financial institutions and large firms, it has done little to improve economic growth and reduce unemployment. The median amount of wealth middle-income families have is about $20,000. By contrast, the family that earns $90,000 to $100,000 annually has about $424,000 in financial wealth.

The spoils of the recovery have not been equally shared. The boost in asset prices is likely to disproportionately benefit the wealthy and increase income inequality. Unemployment is still high by historical standards, economic growth is anemic, and real wages adjusted for inflation have not improved.

One of the overlooked consequences of the Fed’s rounds of monetary stimulus and reducing interest rates is to rob hardworking, average American savers and retirees of income and spending power, because the interest they earn on their savings isn’t enough to keep up with inflation. This dramatically reduces their spending, which hurts businesses, leaving them unable to hire. Consumer spending is critically important because it accounts for more than 60 percent of the nation’s gross domestic product.

But then who said the Fed was responsible for the equitable distribution of wealth, income or credit? After all, they have their hands full minimizing unemployment and inflation. Nowadays the average American doesn’t carry much weight with a Fed whose policies are taking the middle class to the cleaners.

originally published: October 19, 2013

While America’s losses mount, the top 1% makes huge gains

Several weeks ago, the Federal Reserve decided not to cut back, or taper as the finance mavens say, the billion-dollar bond buying program known as quantitative easing, which is a euphemism for printing money. A gradual reduction of QE had been expected since June, when Fed chairman Ben Bernanke said the economy was getting stronger and he might reduce the monthly purchases by the end of the year.

But the Fed got cold feet and changed its mind because its leaders don’t believe that economic activity and labor market conditions are strong enough to merit reducing the bond purchases. Economic growth is lackluster. The unemployment rate is too high, labor force participation is at a 34-year low and too many newly created jobs are low paying and/or part time. Consequently the Fed will continue buying $85 billion of Treasury and mortgage bonds each month and remain committed to holding short-term rates near zero at least so long as the unemployment rate remains above 6.5 percent.

So the training wheels stay on the bicycle and continue to feed an addiction to cheap money.

After five years of depending on a monthly injection of liquidity, it may well be that QE will become a permanent tool of the Fed for managing the business cycle and garden-variety recessions. Since it began in the Paleozoic era, circa late 2008, the Fed has hoped that QE would stimulate economic growth and hiring by holding down interest rates and encouraging households and businesses to spend and invest.

But given that the wealthy own a disproportionate amount of equities, the stock market’s gains are unequally distributed. While a rising stock market may make people feel wealthier, the run-up in their pension accounts – thanks to the stock market reaching new highs – does not send the average American running to the nearest retail store.

Corporations are using the record-low interest rates to take on new debt for acquisitions, increase dividends, and engage in share buybacks rather than investing in the real economy, which has certainly not helped the labor market. Of course, thanks to the Fed, the interest rate paid on the national debt is at a historic low of 2.4 percent, according to the Congressional Budget Office. This keeps debt service costs down and understates the budget deficit.

The danger is that as everyone becomes addicted to cheap money, the Fed’s moves are setting the stage for a new bubble. Printing money floods the market with more dollars, which makes dollars worth less, which means that tangible assets priced in dollars, such as oil and food, end up costing more.

Prolonged low interest rates and an abundant money supply have punished savers and retirees. Their money has generated little return, which forces them into high-risk investments to try and keep up with inflation. The younger generation hurt by high unemployment is not increasing its consumption to make up for the decline in spending among older Americans. And as middle America struggles with rising food and gas prices, weighed down by high unemployment and stagnant wages, the gap between the wealthy and everyone else is growing.

While the question of whether the Fed’s monetary policy has helped the real economy and the average American remains unanswered, a new study from the University of California at Berkeley finds that the top 1 percent have captured 95 percent of the gains during the so-called recovery. Additionally, according to the Census Bureau, middle-class incomes have largely remained stagnant even after the recession ended. With high unemployment, corporations are under no pressure to increase workers’ incomes.

The gap between the top 1 percent and the rest of the country is the greatest it has been since the 1920s. A large reason for this growing disparity is that the wealthiest households have benefited far more than ordinary Americans have from the Dow Jones Industrial Average more than doubling in value since it bottomed out in early 2009.

So much for the presumption that once the Great Recession ended, the American consumer would once again fuel economic growth.

originally published: October 5, 2013