Can Machines Think?

In 1950, Alan Turing, the theoretical mathematician who led the effort to break the Nazi Enigma code during World War II and who is considered the father of modern computer science and artificial intelligence (AI), posed a fundamental question: “Can machines think?”

Today we are on the verge of answering Turing’s question with the creation of AI systems that imitate human cognitive abilities, interact with humans naturally, and even appear capable of human-like thinking.  These developments have sparked a global discussion about the need for comprehensive and coordinated global AI regulation.

Implementation would be a tall order.  Even if regulations could keep up with the pace of technological change, passing a framework acceptable to countries that would view it through the lens of self-interest would be a daunting task.

Turing was just 41 when he died from poisoning in 1954, a death that was deemed a suicide. For decades, his stature as a giant of mathematics was largely unknown, thanks to the secrecy surrounding his computing research and the social taboos around his homosexuality. His story became more widely known after the release of the 2014 movie, “The Imitation Game.”

Alan Turing played a foundational role in the conceptual development of machine learning. For example, one of his key contributions is the Turing Test he proposed in his seminal 1950 paper, “Computing Machinery and Intelligence.”

The Turing Test is a deceptively simple method for determining whether a machine can demonstrate human intelligence. If a machine can converse with a human without the human consistently being able to tell that they are conversing with a machine, the machine is said to have demonstrated human intelligence.
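The test's pass criterion can be made concrete with a toy sketch. The setup below is purely illustrative and not from Turing's paper: the respondents, the judge, and the canned replies are all invented for this example. A judge questions a hidden respondent, guesses whether it is the machine, and the machine "passes" if the judge's accuracy stays near chance.

```python
import random

def run_imitation_game(judge, human, machine, rounds=10):
    """Toy sketch of the Turing Test: each round, the judge questions a
    randomly chosen hidden respondent (human or machine) and guesses
    which it was. Accuracy near 0.5 means the judge cannot consistently
    tell them apart, i.e. the machine passes this simplified test."""
    correct = 0
    for _ in range(rounds):
        is_machine = random.choice([True, False])
        respondent = machine if is_machine else human
        answer = respondent("What is your favorite memory?")
        if judge(answer) == is_machine:
            correct += 1
    return correct / rounds

# Hypothetical participants: both give the same canned reply, so the
# judge can only guess at random -- indistinguishable by construction.
human = lambda q: "A summer by the sea."
machine = lambda q: "A summer by the sea."
judge = lambda answer: random.choice([True, False])

accuracy = run_imitation_game(judge, human, machine, rounds=1000)
```

With identical replies, the judge's accuracy hovers around 0.5, which is exactly the "cannot consistently tell" condition the test describes.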

Critics of the Turing Test argue that a machine could pass it by convincingly imitating conversation without actually thinking or having a mind of its own. While not everyone accepts the test’s validity, the concept remains foundational in artificial intelligence discussions and research.

AI is largely what it sounds like: the simulation of human intelligence by machines, which perform tasks by mimicking human cognitive abilities. The everyday interactions people have with voice assistants such as Alexa or Siri on their smartphones are prime examples of how AI is being integrated into daily life.

Generative AI has made a loud entrance. A form of machine learning, it enables computers to generate text, images, and other content. Tools such as ChatGPT have recently garnered enormous attention.

Given the rapid advances in AI technology and its potential impact on almost every aspect of society, the future of global AI governance has become a topic of debate and speculation.  Although there is a growing consensus around the need for proactive AI regulation, the optimal path forward remains unclear.

What is the right approach to regulating AI?  A market-driven approach based on self-regulation could drive innovation. However, the absence of a comprehensive AI governance framework might spark a race among commercial and national superpowers to build the most powerful AI system. This winner-take-all approach could lead to a concentration of power and to geopolitical unrest.

Nations will assess any international agreements to regulate AI through the lens of their national interests. If, for instance, the Chinese Communist Party believed global AI regulation would undermine its economic and military competitive edge, it would be unlikely to comply with international agreements, just as it has flouted them in the past.

For example, China ratified the Paris Global Climate Agreement in 2016 and pledged to peak its carbon dioxide emissions around 2030. Yet it remains the world’s largest emitter of greenhouse gases. Coal continues to play a dominant role in China’s energy mix and emissions have continued to grow.

It would be wise to be realistic about the development and implementation of global AI regulations. Technology rarely advances in a linear fashion; disruptions will occur with little to no warning. And even if a regulatory framework could keep pace with technological change, countries will be hesitant to adopt rules that undermine their technological leadership, economic competitiveness, and national security.
