A day that should live in infamy

Early in 1941, the government of resource-poor Japan realized that it needed to seize control of the petroleum and other raw material sources in the Dutch East Indies, French Indochina and the Malay Peninsula. Doing that would require neutralizing the threat posed by the U.S. Navy’s Pacific Fleet based at Pearl Harbor in Hawaii.

The government assigned this task to the Imperial Navy, whose Combined Fleet was headed by Admiral Isoroku Yamamoto. The Imperial Navy had two strategic alternatives for neutralizing the U.S. Pacific Fleet. One was to cripple the fleet itself through a direct attack on its warships; the other was to cripple Pearl Harbor’s ability to function as the fleet’s forward base in the Pacific.

Crippling the U.S. fleet would require disabling the eight battleships that made up the fleet’s traditional battle line. It was quite a tall order.

The most effective way to cripple Pearl Harbor’s ability to function as a naval base would be to destroy its fuel storage and ship repair facilities. Without them, the Pacific Fleet would have to return to the U.S., where it could no longer deter Japanese military expansion in the region during the year or so it would take to rebuild Pearl Harbor.

It soon became apparent that the basics of either strategy could be carried out through a surprise air raid launched from the Imperial Navy’s six first-line aircraft carriers. Admiral Yamamoto had a reputation as an expert poker player, gained during his years of study at Harvard and as the Imperial Navy’s naval attaché in Washington. He decided to attack the U.S. warships that were moored each weekend in Pearl Harbor. But in this case the expert poker player picked the wrong target.

The Imperial Navy’s model for everything it did was the British Royal Navy. Standard histories of the Royal Navy emphasized its victories in spectacular naval battles.

Lost in the shuffle was any serious consideration of trying to cripple Pearl Harbor’s ability to function as a forward naval base. So it was that, in one of history’s finest displays of tactical management, six of the world’s best aircraft carriers furtively approached the Hawaiian Islands from the north just before dawn that fateful Sunday, Dec. 7, 1941, launched their planes into the rising sun, caught the U.S. Pacific Fleet with its pants down and wrought havoc in spectacular fashion. On paper at least, this rivaled the British Royal Navy’s triumph at Trafalgar.

But so what?

The American battleships at Pearl Harbor were slow-moving antiques from the World War I era. As we know, the U.S. Navy already had two brand new battleships in its Atlantic Fleet that could run rings around them. And eight new ones the navy was building were even better.

More importantly, the Pacific Fleet’s three aircraft carriers weren’t at Pearl Harbor. American shipyards were already building 10 modern carriers whose planes would later devastate Imperial Navy forces in the air/sea battles of the Philippine Sea and Leyte Gulf.

Most importantly, as the sun set on Dec. 7 and the U.S. Navy gathered the bodies of its 2,117 sailors and Marines killed that day, all-important fuel storage and ship repair facilities remained untouched by Japanese bombs, allowing Pearl Harbor to continue as a forward base for American naval power in the Pacific.

So in reality, Dec. 7 marked the sunset of Japan’s extravagant ambitions to dominate Asia. Admiral Yamamoto and the Imperial Navy’s other tradition-bound leaders chose the wrong targets at Pearl Harbor.

The dictates of tradition are usually the worst guides to follow when it comes to doing anything really important. After all, if they survived long enough to be venerated, they’re probably obsolete.

originally published: December 6, 2014

Why gas prices are so low

Global crude oil prices have sunk dramatically, falling to nearly $80 a barrel – a 30 percent drop from June. Unleaded regular gasoline prices are now under $3 a gallon. Falling prices are a boon to industrialized nations, but they shouldn’t make the mistake of assuming that oil will remain cheap indefinitely.

If you ask 10 experts why oil prices are so volatile, you are likely to get 10 different answers. But they all boil down to supply and demand.

In recent years, the ranks of major economic achievers were swelled by the emergence of formerly developing third world nations – especially population giants like India and China. The result was a significant increase in the number of middle- and upper-class consumers eager to enjoy a more lavish lifestyle. Such a lifestyle inevitably meant higher consumption of oil products, which generated upward price pressure.

The reason crude oil prices have been falling since early June is the global economic slowdown, especially in Asia and Europe. Demand is down at a time when oil is abundant, especially with substantial increases in U.S. production, which is at its highest level in 30 years thanks to shale oil drilling in North Dakota and Texas. United States crude oil production has climbed to just over nine million barrels a day and is projected to approach 9.5 million next spring. Adding to the excess supply, production is up in Russia as well.

As a result, imports from the 12 OPEC countries responsible for about a third of global production have been cut in half. OPEC members who rely on higher oil prices to balance their budgets want to announce production cuts at their Thanksgiving Day meeting in Vienna and are looking for Saudi Arabia to take the lead.

The decline in crude oil prices accelerated last month when the Saudis, the world’s largest oil producer at 9.6 million barrels daily, cut the price of exports to the United States in an effort to retain their shrinking market share and, some speculate, undercut America’s shale oil bonanza. The thought is that the Saudis can push the price of crude oil below $50 per barrel and still make money.

In the past, oil producers responded to the problems of letting supply run wild by developing “gentlemen’s agreements” to control supply in “rational” ways. The most notorious of these producer cartels is OPEC.

The difficulty with cartels is that many of their members aren’t gentlemen. They can’t resist opportunities to make hay while the sun shines by sneaking extra oil onto world markets to take advantage of temporary price spikes, or by marching to the beat of their own drummer and cutting prices, causing other members to wrinkle their noses in disgust before joining the party so they don’t get left behind.

One of the few economic laws that’s truly ironclad is the practical impossibility of enforcing cartel supply and price restrictions without the kind of outright physical violence that is generally only acceptable among New York’s Five Families.

Falling oil prices are providing a boost to the U.S. economy with lower costs for consumers and energy-sensitive industries. It has been estimated that the cut in crude oil prices to $80 a barrel is the equivalent of a $600 tax cut for every household. This should be music to the ears of retailers who had been bracing for a slow-growth holiday shopping season.

originally published: November 29, 2014

The beginning of another bank crisis?

One definition of insanity is doing the same thing over and over and expecting a different result. Late last month, the Federal Housing Finance Agency, which regulates Fannie Mae and Freddie Mac, announced that the two institutions would start purchasing mortgages with down payments as low as 3 percent instead of the already absurdly low 5 percent minimum both institutions currently require. These loans are aimed at borrowers with weak credit.

Also, the federal regulator announced loosened mortgage lending rules. It removed the 20 percent down payment requirement for high-quality mortgages that banks determine to have low risk of default and made it less likely that Fannie and Freddie will force lenders to buy back mortgages that go bad.

By expanding the types of mortgages Freddie and Fannie will buy, the FHFA hopes to spur banks to make more loans to first-time buyers and increase homeownership among those with low and moderate incomes. These watered-down underwriting standards are a big win for affordable housing advocates and the banking and real estate industries.

But the feds are sowing the seeds for another meltdown by loosening recently enacted safeguards. Bookshelves sag with encyclopedic volumes arguing that a major factor in the financial apocalypse of 2008 was relaxed lending practices that led to the housing bust. How quickly we forget.

Back then the weakening of underwriting standards, and especially low down payments, increased home ownership and housing prices, which led to a housing price bubble. The banks had packaged and sold to investors bundles of risky mortgages with teaser rates that ballooned after a few years. Many borrowers ended up defaulting on the loans when the interest rates spiked. As a result, the value of the mortgage securities plummeted, and banks and investors holding them lost billions.

The debacle helped ignite the financial meltdown that plunged the economy into the deepest recession since the 1930s and necessitated taxpayer bailouts of banks and Fannie and Freddie. In its wake, there was a push to tighten mortgage lending standards and to have banks retain a small portion of the loans they sold, as stipulated in the Dodd-Frank Act of 2010.

Those of you who’ve seen the classic movie “It’s a Wonderful Life” will remember George Bailey describing to his nervous depositors how the home mortgage business worked. You would visit your local bank and, among other things, the institution would require you to pay a significant portion (like 20 percent) of the purchase price upfront. Along with this down payment, purchasers would be required to demonstrate proof of income.

By granting a home mortgage, George’s thrift institution was exposing itself to risk. A buyer could fail to make the monthly mortgage payments. And since the institution kept the mortgage on its books as an asset, it remained exposed to this risk until the mortgage was paid off.

This process meant the initial lending decision was based on careful consideration of the customer’s creditworthiness. To the extent feasible, the institution would seek to grant mortgages only to its own customers so it could be confident that the homebuyers were safe credit risks. The pluses and minuses of this simple model are obvious.

Securitization changed all that. By selling packaged loans to others, banks could remove the loans from their balance sheets, which allowed them to increase lending without technically violating the rules on minimum capital ratios.

Providing low down-payment loans to borrowers with weak credit, then bundling and selling those loans drove the American financial system to the brink of collapse. Less than a decade later it appears that no one remembers.

originally published: November 8, 2014

Dancing on the edge of absurdity

There can be little doubt that one of the causes of the 2008 financial crisis was diminished regulatory control, the seeds of which were sown during the three preceding decades.

Recent legislation re-regulates financial markets, but attracting the best and brightest to regulatory jobs is proving to be a major challenge. The congressionally authorized Financial Industry Regulatory Authority (FINRA), a not-for-profit self-regulator, may offer a solution to the problem.

Beginning with the Carter administration and accelerating during Reagan’s presidency, the banking industry, among others, was steadily deregulated. Not only were leverage requirements continually lowered, but watchdog organizations such as the Securities and Exchange Commission were weakened both by legislation and by the appointment of free-market advocates.

Successive administrations were enthusiastic advocates of deregulation. The dominant economic paradigm was that markets are efficient and inherently maximize welfare and work best when managed least. Moreover, with free-market advocates in charge of regulatory agencies such as the SEC, many existing laws were ignored or rarely enforced.

For example, observers repeatedly warned the Securities and Exchange Commission about suspected irregularities at Bernard Madoff’s investment firm, which was later revealed to be a multi-billion dollar Ponzi scheme. In spite of several warnings, no serious investigation was undertaken until after the firm’s spectacular collapse.

One reason offered for poor financial regulation is that government agencies are seriously disadvantaged when it comes to attracting the best and the brightest. The salaries of elected officials tend to impose an artificial ceiling on how much public employees can be paid. Even though these ceilings ignore marketplace realities, elected officials are reluctant to raise them by advocating higher salaries for themselves because it looks bad to voters.

Consequently, Americans are told that many government regulatory agencies lack the talent to regulate financial markets because they can’t pay the going rate for good people. Thus, the regulatory agencies’ best and brightest flock to higher-paying jobs with firms they regulate. This leaves the public to complain that our regulatory agencies are less effective than they need to be.

But not all regulators are underpaid. According to The Bond Buyer’s annual salary survey of 21 industry regulatory groups, compensation for the chairman and CEO of FINRA, which oversees the 4,100 securities firms and over 636,000 stockbrokers in the United States, was $2.63 million in 2013.

The perks aren’t bad either: he receives $20,000 annually for admission fees, dues and house charges at one club each in the Big Apple and Washington, up to $20,000 annually for personal finance and tax counseling, and spousal travel for certain business-related events. FINRA also paid four of its top executives more than $1 million in 2013. These folks can spend more on one dinner than the average American, whose wages have been flat for decades, spends on a vacation.

Let’s put these salaries in perspective: The president earns $400,000 annually. Janet Yellen, the chair of the Federal Reserve, who has sway over the entire world economy as opposed to just American stockbrokers, earns $201,700. Securities and Exchange Commission Chair Mary Jo White makes $165,300. White’s predecessor at the SEC, Mary Schapiro, was fresh from running FINRA, which gave her a $9 million severance to ease the pain of a low government salary.

These are clearly difficult times for national financial regulators. They are charged with implementing the hideously complicated Dodd-Frank legislation, which is supposed to safeguard and stabilize the financial system to avoid another financial crisis.

At 2,319 pages, the Dodd-Frank Act is the most far-reaching financial regulatory undertaking since the 1930s, requiring regulatory agencies that had been withering to enact 447 rules and complete 63 reports and 59 studies within tight congressional deadlines.

It may be at the edge of absurdity, but just maybe the best way to attract the best and the brightest would be to expand the number of one-percenters by outsourcing all regulation to not-for-profit entities such as FINRA.

originally published: October 25, 2014

Federal Reserve, Americans must listen to Carmen Segarra

To most people, the name Carmen Segarra means nothing. But to a few, her fate validates their worst suspicions about regulators who exist to protect the interests of the regulated.

Segarra is a former bank examiner whose job was to serve as the Federal Reserve Bank of New York’s watchdog inside Goldman Sachs. The New York Fed regulates many large New York banks and is the Federal Reserve System’s primary connection to financial and credit markets. She secretly recorded 46 hours of conversations inside the Federal Reserve and Goldman Sachs and released the tapes to ProPublica and the radio show “This American Life.” You can listen to the episode online at ThisAmericanLife.org.

Segarra was fired by the Federal Reserve after seven months, apparently because she refused to budge on her findings that Reserve officials on numerous occasions seemed to treat Goldman Sachs with too much deference. In particular, she insisted based on her fact-finding that the company did not have a policy on conflicts of interest that met regulatory standards.

Her story underscores how regulators have become too cozy with the industry they are charged with policing. Academics call it “regulatory capture.”

This is hardly breaking news. Lax external oversight was among the chief reasons the world’s biggest economy was brought to the brink of depression in 2008. Put bluntly, regulators have to shoulder some of the blame for the financial apocalypse that unleashed the worst economic crisis since the Great Depression, at a galactic cost to the American taxpayer, and threw millions of Americans out of their jobs and homes. The economy still bears deep scars.

The 2008 financial crisis demonstrated more than ever that the self-regulating financial system was pure myth.

The public has come to catch the joke that on Wall Street, if you represent everyone there is no conflict of interest. Transparency and the financial services industry don’t exactly waltz around arm in arm. In fact, for some bankers transparency is an occupational hazard.

The coverage in the media since the Sept. 26 release of Carmen Segarra’s recordings of Federal Reserve officials not doing their jobs has been minimal. Hers is not a household name like Edward Snowden, who leaked classified National Security Agency documents exposing security vulnerabilities and the agency’s spying on Americans and international leaders.

It may be that the public’s default mode is indifference; they would like to care but there’s just too much going on at the moment. The average American is too busy worrying about making ends meet. And after all, they already knew that banks hold regulators hostage.

Sure, Sens. Elizabeth Warren, D-Mass., and Sherrod Brown, D-Ohio, both members of the Senate Banking Committee, want Congress to investigate Goldman Sachs’ relationship with the Federal Reserve, but it’s more likely that the issue will quietly disappear.

Wall Street makes generous campaign contributions to the guardians of democracy in Washington and spends big on lobbyists to communicate its policy preferences to government apparatchiks. Despite the rosy rhetoric, that makes it highly unlikely that Congress will hold hearings.

Another problem is that many people see government regulatory jobs as stepping stones to lucrative private-sector careers. They develop useful contacts with key employees in the private-sector firms whose behavior they are supposed to regulate and quietly impress these contacts that their “hearts are in the right place.” In this culture of coziness, nothing should be taken at face value.

In the final analysis you can write all the tough regulations you want to regulate the financial system and its participants to prevent future financial debacles. But for those regulations to have any teeth, they must be accompanied by closing the revolving door between lavish private-sector executive suites and the basic steel-desk offices of government agencies.

originally published: October 11, 2014

The mean teeth of the Great Recession still have bite

To paraphrase T.S. Eliot, the only major British poet born in St. Louis, September 2008 was the cruelest month. As America marks the sixth anniversary of the financial meltdown which began that month and drove the global economy off the cliff and into the worst economic crisis since the 1930s, the damage it did is still being felt. Last year, middle-income families earned 8 percent less, adjusted for inflation, than they did in 2007.

But not everyone was so profoundly affected. Commuter helicopter traffic at the East Hampton airport this summer increased by close to 40 percent over last year. Yet while there is a pretense of recovery and conditions are marginally better, most Americans are still living in the mean teeth of the Great Recession. The U.S. economy is facing many challenges, especially the rising financial inequality between the top 1 percent and everybody else.

You would be right to conclude that the feds’ attempts to deal with the financial apocalypse of 2008 recall the note your grade school teacher scrawled on too many report cards: “Could have done better.” To help put this in perspective, here’s a chronology of key events in September 2008.

On Sept. 7, the Federal Housing Finance Agency, backstopped by the Treasury Department, placed Fannie Mae and Freddie Mac into conservatorship. A week later, Merrill Lynch avoided oblivion by hastily selling itself to Bank of America. In the early hours of Sept. 15, Lehman Brothers CEO Dick Fuld, aka the gorilla of Wall Street, announced that his firm was seeking bankruptcy protection after the feds refused to step in and provide financial assistance.

Within hours of the Lehman bankruptcy, the feds rushed forward with an initial $85 billion in taxpayer cash to bail out the giant insurance company American International Group (AIG), essentially nationalizing the firm. AIG had mismanaged itself to the verge of bankruptcy by stuffing its portfolio full of derivative products whose value had collapsed.

Then on Sept. 16, the shares in the world’s oldest money market fund fell below $1 because of losses incurred on the fund’s holdings of Lehman commercial paper and medium-term notes.

To help stabilize the financial system, on Sept. 21 the feds declared Morgan Stanley and Goldman Sachs to be bank holding companies. Five days later, in the biggest bank failure in American history, the government seized the assets of Washington Mutual, the nation’s sixth largest bank, and its banking operations were sold to JPMorgan Chase.

As the month mercifully wound down, the House of Representatives on the 29th voted down the Bush administration’s Troubled Asset Relief Program (TARP), which would have invested $700 billion in taxpayer money in troubled banks by purchasing their distressed assets. Needless to say, the stock market reacted with panic to this “failure of democratic government” and suffered one of its worst single-day price declines, with the S&P 500 Index plunging a horrendous 8.8 percent.

Finally, rattled by the market’s obvious panic, Congress passed the Emergency Economic Stabilization Act of 2008 on Oct. 3, which included a cosmetically revamped version of TARP.

The financial markets were still in turmoil over the ensuing weeks. In terms of sheer dollar losses, the “fall of 2008” (along with the fall of many other illusions) was probably the greatest financial disaster in world history. Throughout the world, its cost in terms of shattered wealth and wrecked lives is still being calculated.

The fallout even reached Iceland. The country’s entire banking system collapsed in October when it became apparent that its bank portfolios were stuffed full of American-made toxic derivatives that had little value.

The collapse led to the following exchange on a late-night TV talk show: “What is the capital of Iceland? About $25, give or take.”

Sadly, Iceland was hardly alone.

originally published: September 27, 2014