Can Machines Think?

In 1950, Alan Turing, the mathematician who broke the Nazi Enigma code during World War II and who is considered the father of modern computer science and artificial intelligence (AI), posed a fundamental question: “Can machines think?”

Today we are on the verge of answering Turing’s question with the creation of AI systems that imitate human cognitive abilities, interact with humans naturally, and even appear capable of human-like thinking.  These developments have sparked a worldwide discussion about the need for comprehensive and coordinated AI regulation.

Implementation would be a tall order.  Even if regulations could keep up with the pace of technological change, passing a framework acceptable to countries that would view it through the lens of self-interest would be a daunting task.

Turing was just 41 when he died from poisoning in 1954, a death that was deemed a suicide. For decades, his status as a giant of mathematics was largely unknown, thanks to the secrecy surrounding his computer research and the social taboos about his homosexuality.  His story became more widely known after the release of the 2014 movie, “The Imitation Game.”

Alan Turing played a foundational role in the conceptual development of machine learning. Chief among his contributions is the Turing Test, which he proposed in his seminal 1950 paper, “Computing Machinery and Intelligence.”

The Turing Test is a deceptively simple method of determining whether a machine can demonstrate human intelligence.  If a machine can converse with a human without the human consistently being able to tell that they are conversing with a machine, the machine is said to have demonstrated human intelligence.
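To make the test’s mechanics concrete, here is a minimal sketch in Python of how an imitation-game experiment might be scored. The judge, machine, and human interfaces, the trial count, and the pass threshold are all illustrative assumptions, not part of Turing’s paper:

```python
import random

def run_imitation_game(judge, machine, human, num_trials=100):
    """Simplified scoring of the imitation game: in each trial the
    judge converses with either the machine or a human and guesses
    which it was.  If the judge cannot tell consistently better than
    chance, the machine is said to pass."""
    correct = 0
    for _ in range(num_trials):
        is_machine = random.random() < 0.5       # hidden coin flip
        respondent = machine if is_machine else human
        transcript = judge.converse(respondent)  # assumed interface
        if judge.guess_is_machine(transcript) == is_machine:
            correct += 1
    # 0.60 is an arbitrary illustrative cutoff for "consistently"
    # better than chance; Turing specified no exact threshold.
    return (correct / num_trials) < 0.60
```

The key design point is that the judge’s accuracy, not any internal property of the machine, decides the outcome, which is precisely what the test’s critics object to.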

Critics of the Turing Test argue that a machine could pass it by convincingly simulating conversation without actually thinking or having a mind of its own. While not everyone accepts the test’s validity, the concept remains foundational in artificial intelligence discussions and research.

AI is pretty much just what it sounds like: the simulation of human intelligence by machines, in which computers perform tasks that would otherwise require human thinking. The voice assistants that people use on their smartphones, such as Alexa and Siri, are prime examples of how AI is being integrated into everyday life.

Generative AI has made a loud entrance. A form of machine learning that allows computers to generate all sorts of content, it has recently garnered a whole lot of attention through ChatGPT and other content-creating tools.

Given the rapid advances in AI technology and its potential impact on almost every aspect of society, the future of global AI governance has become a topic of debate and speculation.  Although there is a growing consensus around the need for proactive AI regulation, the optimal path forward remains unclear.

What is the right approach to regulating AI?  A market-driven approach based on self-regulation could drive innovation. However, the absence of a comprehensive AI governance framework might spark a race among commercial and national superpowers to build the most powerful AI system. This winner-take-all approach could lead to a concentration of power and to geopolitical unrest.

Nations will assess any international agreement to regulate AI based on their national interests. If, for instance, the Chinese Communist Party believed global AI regulation would undermine its economic and military competitive edge, it would simply not comply, just as it has failed to honor international agreements in the past.

For example, China ratified the Paris Global Climate Agreement in 2016 and pledged to peak its carbon dioxide emissions around 2030. Yet it remains the world’s largest emitter of greenhouse gases. Coal continues to play a dominant role in China’s energy mix and emissions have continued to grow.

It would be wise to be realistic about the development and implementation of global AI regulations.  Technology usually does not advance in a linear fashion; disruptions will occur with little or no warning. And even if a regulatory framework could keep pace with the technology, countries will be hesitant to adopt regulations that undermine their technological edge, economic competitiveness, or national security.

Is 2% The Right Inflation Target?

People the world over have been facing a poisonous new economic reality as inflation has emerged from a multi-decade hibernation.  Many of those dealing with it are too young to remember when inflation was last a serious problem.  It is economically damaging, socially corrosive, and very hard to bring down.

Both the U.S. Federal Reserve (Fed) and the European Central Bank appear dead set on getting inflation back to their 2 percent targets. Why did these and other central banks, such as the Bank of Canada, Sweden’s Riksbank, and the Bank of England, gravitate to this 2 percent figure?

In January 2012, a thousand years ago in internet time, the Fed, under Chairman Ben Bernanke, formally adopted an explicit inflation target of 2 percent. This marked the first time the Fed ever officially established a specific numerical inflation target. The 2 percent target was seen as a way to provide clarity and enhance the effectiveness of monetary policy.

Bernanke’s successor Janet Yellen and current chair Jerome Powell have maintained the 2 percent inflation target. While Powell keeps a laser focus on the 2 percent figure, the Fed has in recent years moved to a more flexible target of 2 percent averaged over time. This means the Fed would tolerate some periods of inflation above 2 percent to offset periods when inflation ran below that level.
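To see how the averaging arithmetic works, consider a toy example; the yearly figures below are invented for illustration, not actual data:

```python
# Toy illustration of averaging over time: two years of inflation
# below target are offset by two years above it.  Rates are
# hypothetical percentages, not real inflation data.
inflation = [1.5, 1.5, 2.5, 2.5]

average = sum(inflation) / len(inflation)
print(f"{len(inflation)}-year average inflation: {average:.1f}%")
# Prints 2.0% -- under the averaging framework, the high years are
# tolerated because they make up for the earlier shortfall.
```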

The 2 percent target was not established based on any specific formula or fixed economic rule. Despite its widespread adoption by central banks, there is little empirical evidence to suggest that 2 percent is the platonic ideal for addressing the Fed’s dual mandate of price stability and maximum employment.

This inflation target is an arbitrary number that originated in New Zealand. Surprisingly, it came not from any academic study, but rather from an offhand comment during a television interview.

During the late 1980s, New Zealand was going through a period of high inflation and unstable economic growth – the financial equivalent of a bloody nose.  In 1988, inflation had just come down from a high of 15 percent to around 10 percent, and New Zealand’s finance minister, Roger Douglas, went on TV to talk about the government’s approach to monetary policy.

He was pressed during the interview about whether the government was satisfied with the new inflation rate.  Douglas replied that he was not, saying that he ideally wanted inflation between zero and 2 percent.  This amounted to inflation targeting, a method that had kicked around in the economic literature for years but had never been implemented anywhere.

At the time there was no set target for inflation in New Zealand; Douglas’ remark was completely off the cuff. But the figure caught the attention of economists around the world and went viral, becoming a kind of orthodoxy.  As noted, the approach was subsequently adopted by many other central banks, making inflation targeting a widely used monetary policy strategy – a classic example of how ideas spread within the small priesthood of central bankers.

The hard truth is that many economic luminaries have tried to pin down the optimal inflation rate, with little success.

All things considered, the 2 percent target was seen as a kind of sweet spot for inflation, despite the lack of serious intellectual groundwork. Simply stated, there is nothing magical about 2 percent.  It is low enough that the public doesn’t feel the need to think about inflation, but not so low as to stifle economic growth.  That is about all there is to it, and not much more.