The gap between pure brilliance and a convincing imitation of cleverness is only as wide as our own bias. When a machine elicits real human responses in us, engaging our understanding, our astonishment, our gratitude, and even our fear, that performance clearly exceeds hollow mimicry.

Can machines think? The British mathematician Alan Turing, in his 1950 paper “Computing Machinery and Intelligence”, argued that the only workable measure of intelligence is external behaviour. Turing introduced the “imitation game”: if observers cannot distinguish the output of a computer from a human’s, the machine may be labelled intelligent.

According to the Turing Test, a connoisseur holds a blind, text-only conversation with two subjects, one human and one machine. If the machine is so conversationally dexterous that the connoisseur cannot reliably distinguish between the two outputs, the computer not only passes the Turing Test but may be deemed to have achieved that amorphous concept we think of as intelligence.
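To make that set-up concrete, here is a minimal Python sketch of the imitation game as just described: a judge exchanges text with two unseen respondents and must say which one is the machine. The respondent functions, the questions, and the pass criterion are simplified stand-ins invented for illustration; this is not an implementation of any real chatbot or of Turing’s full protocol.

```python
import random

def human_respond(prompt: str) -> str:
    # A person at the keyboard plays the hidden human witness.
    # (In a real test the judge and the human witness are different people.)
    return input(f"(type a human reply to: {prompt!r}) ")

def machine_respond(prompt: str) -> str:
    # Placeholder for a conversational program; a real test would use a chatbot.
    return "That is an interesting question. Could you say more?"

def imitation_game(questions: list[str]) -> bool:
    """Return True if the judge fails to identify the machine (the machine 'passes')."""
    # Hide which respondent is which behind the anonymous labels A and B.
    labelled = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        labelled = {"A": machine_respond, "B": human_respond}

    for q in questions:
        print(f"\nJudge asks: {q}")
        for label, respond in labelled.items():
            print(f"  {label} replies: {respond(q)}")

    guess = input("\nWhich respondent is the machine, A or B? ").strip().upper()
    machine_label = "A" if labelled["A"] is machine_respond else "B"
    return guess != machine_label

if __name__ == "__main__":
    passed = imitation_game(["What do you do on a rainy afternoon?",
                             "Describe the smell of fresh bread."])
    print("Machine passes." if passed else "Judge identified the machine.")
```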

For decades, the Turing Test was the North Star of AI: an overarching and elusive goal. Recent advances in generative AI, however, appear very close to crossing this legendary threshold. Bing, ChatGPT, and Bard engage in fluid yet brittle conversations. Mustafa Suleyman, a co-founder of DeepMind and Inflection AI, believes that the Turing Test no longer serves as an inspiration or a meaningful measure of artificial intelligence. The test does not tell us what the machine can do or understand. What is certain is that AI is an unstoppable wave sweeping across civilization, and it will change the world forever.

Job displacement, privacy violations, intellectual property infringement, disinformation, bias, discrimination, terrorist attacks, misuse, abuse, the “filter bubbles” or dark places that algorithms can take us to, a diminished willingness to make difficult choices, and failures linked to insufficient training data are just a few of the vectors of concern.

Perhaps a new North Star for AI would be to allow an AI to use angel finance held in a Decentralized Autonomous Organization (DAO) to deliver a specified return on investment within a given time frame. In such an experiment, the AI might be required to produce an avant-garde chocolate bar using cacao pods hand-picked in the wild forest, rather than imperial selections from celebrated estates.

The AI must research the digital trade business opportunity and off-centre recipes; develop blueprints for a product, a marketing strategy, and designs with a colour palette for the label and logo; use internet trends to forecast market placement within a year of production; find chefs around the world who can blend nuanced flavours; and identify manufacturers of equipment for roasting, winnowing, tempering and conching.

Finally, the AI must be able to place chocolate bars made from raw wild cocoa, packaged for automation, with a complete multilingual list of ingredients and food and drug testing results, on digital shelves in shops like Walmart and Amazon Go. Crossing the chocolate bar threshold is not far off. It is already possible for AI to produce all of the designs for a Carnival band. We are no longer interested in what machines say under conditions like the Turing Test, but in what they can do. This shift in the North Star is seismic.

Our moral awareness, cognitive strategies, and legislative frameworks change incrementally as they respond to increasing degrees of reasoned agency, self-awareness, the removal of unfreedoms, and our capacity to communicate. Nothing is static and everything is interlaced. Following Turing, John McCarthy further defined AI as “machines that can perform tasks that are characteristic of human intelligence”. The Turing and McCarthy assessments of AI have become yardsticks, though not without challenges and counter-arguments. For almost half a century, machines did not meet the criteria. But this impasse appears to be at an end.

For many years, computers operated on the basis of precisely defined code. Their outputs were correspondingly rigid and static. The vague, fuzzy, and conceptual nature of human thought proved to be a stubborn impediment. Over the last decade, and spurred especially by the global lockdowns associated with the SARS-CoV-2 pandemic, computing innovations have created AIs that match and even exceed human achievements in specific fields.

AIs are imprecise, fluid, emergent, and capable of “learning”. Where previous systems required precisely specified inputs and outputs, AIs built on fuzzy functions require neither. These AIs are dynamic because they evolve in response to changing circumstances. They are also emergent because they produce solutions that are novel to humans. In machinery, these qualities are revolutionary.

AlphaZero learned chess by playing against itself, given only the rudimentary rules of the game. No human chess player has yet defeated AlphaZero. Classical chess programs relied on human expertise, with moves developed by humans coded into their programming. Machine-learning algorithms are a radical departure from the precision and predictability of classical algorithms, like the one humans learn in school for long division.
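As a loose illustration of the self-play idea, the Python sketch below scales it down to tic-tac-toe, with a simple table of position values learned entirely from games the program plays against itself. AlphaZero itself combines deep neural networks with Monte Carlo tree search; nothing of that sophistication appears here, and the learning rate, exploration rate, and number of games are invented for the example.

```python
import random
from collections import defaultdict

# Winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

# values[pos] estimates how good position pos is for the player who just moved.
values = defaultdict(float)
ALPHA, EPSILON = 0.2, 0.1   # learning rate and exploration rate (assumed settings)

def choose_move(board: str, player: str) -> int:
    moves = [i for i in range(9) if board[i] == " "]
    if random.random() < EPSILON:
        return random.choice(moves)   # explore occasionally
    # Otherwise pick the move leading to the position currently valued highest.
    return max(moves, key=lambda m: values[board[:m] + player + board[m + 1:]])

def self_play_game() -> None:
    board, player, history = " " * 9, "X", []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append((board, player))
        win = winner(board)
        if win or " " not in board:
            outcome = {"X": 1.0, "O": -1.0}.get(win, 0.0)
            # Nudge every visited position toward the final outcome,
            # seen from the perspective of the player who created it.
            for pos, mover in history:
                target = outcome if mover == "X" else -outcome
                values[pos] += ALPHA * (target - values[pos])
            return
        player = "O" if player == "X" else "X"

# The program knows only the rules above; any skill comes from self-play alone.
for _ in range(20_000):
    self_play_game()
print(f"Positions evaluated: {len(values)}")
```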

What separates an algorithm like the one for long division from a machine-learning algorithm is that the former consists of a fixed set of steps for producing a precise result, while the latter is built around steps for improving upon imprecise results.
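A small Python sketch, with invented data, makes the contrast tangible: long division follows a fixed sequence of exact steps and stops with an exact answer, while a simple gradient-descent line fit starts from a deliberately poor guess and merely keeps improving it.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Classical digit-by-digit long division: an exact quotient and remainder."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

def fit_line(xs, ys, steps=1000, lr=0.01):
    """Gradient descent: begin with an imprecise guess and repeatedly improve it."""
    slope, intercept = 0.0, 0.0   # deliberately poor starting point
    n = len(xs)
    for _ in range(steps):
        # Average gradient of the squared error with respect to slope and intercept.
        grad_m = sum(2 * (slope * x + intercept - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (slope * x + intercept - y) for x, y in zip(xs, ys)) / n
        slope -= lr * grad_m      # each step reduces the error a little
        intercept -= lr * grad_b
    return slope, intercept

print(long_division(7325, 6))                # exact: (1220, 5)
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # approaches slope 2, intercept 1
```

Run as written, the first call returns the exact pair (1220, 5); the second never computes an exact answer, it only edges ever closer to a slope of 2 and an intercept of 1.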