Does Artificial Intelligence Threaten the Future of Humankind?
By Prof. Dato' Dr. Zulfigar Yasin
November 2023 FEATURE
THE TERM “Artificial Intelligence” was coined in 1956 by John McCarthy, an American computer scientist now considered one of the founding figures of artificial intelligence (AI). He used the term to describe the goal of creating machines and computer programmes that can simulate human intelligence and perform tasks that would typically require it. McCarthy’s work laid the foundation for the development of AI as a distinct field of research and technology.
A few years earlier, in 1950, the British mathematician and computer scientist Alan Turing had proposed what became known as the Turing test (also called the “imitation game”) as a way to probe the idea of machine intelligence. The Turing test measures a machine’s ability to exhibit intelligent behaviour that is indistinguishable from that of a human. In the test, a human evaluator holds natural language conversations with both a human and a machine (typically a computer programme) without knowing which is which. If the evaluator cannot reliably tell the human and the machine apart from their responses, the machine is said to have passed the Turing test.
The Turing test served as a benchmark for measuring progress in AI and the development of intelligent computer systems. Since then, the field of AI has encompassed a wide range of techniques and approaches aimed at achieving various aspects of human-like intelligence, prompting repeated refinements of the Turing test itself.
In fact, the Loebner Prize, the oldest Turing test contest, was established as an annual AI competition awarding prizes to the computer programmes judged most human-like. Many were critical of the format, but it long served as a practical measure of human-like conversation. The prize has reportedly been defunct since 2020; its final edition, in 2019, was won by Steve Worswick’s chatbot Mitsuku as the most human-like entrant.
Many programmes have attempted the Turing test. The earliest was ELIZA, which used simple scripts to mimic a psychotherapist and fooled many people simply by reflecting their own statements back at them as questions. Another programme, PARRY, imitated a person with paranoid schizophrenia and steered conversations back to its preprogrammed obsessions. Later, Cleverbot drew on a vast archive of previous conversations to compose its responses. Although deceptively clever, its lack of a consistent personality and its inability to handle brand-new topics were a dead giveaway.
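To see how thin ELIZA’s trick was, here is a minimal sketch of the reflection technique in Python. The patterns and responses below are hypothetical stand-ins, not Joseph Weizenbaum’s original DOCTOR script; the approach, though, is the same: match a keyword, swap the pronouns, and hand the user’s own statement back as a question.

```python
import re

# Pronoun swaps applied to the captured fragment of the user's input.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs in the spirit of ELIZA's scripts.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the first rule's response whose pattern matches."""
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I am worried about my exams"))
# -> "How long have you been worried about your exams?"
```

There is no understanding anywhere in this loop; the apparent empathy is supplied entirely by the human reading the replies.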
Clearly, intelligence is more than just computing power and memory (as Alan Turing once supposed it might be). A computer can be convincing in short exchanges but usually fails in extended general conversation. Still, the gap is closing, and some believe the remaining bottleneck is not available computing power but the limited data made available to the machines.
The Measure of Computing Power
Moore’s Law states that the number of transistors on a microchip doubles every two years. On this basis, we can expect the speed and capability of our computers to increase every two years, while prices fall. Because the doubling recurs at a fixed interval, the growth is exponential. The law is attributed to Gordon Moore, the co-founder and former CEO of Intel, who made the observation in 1965; it has so far proved more or less accurate. Computers and transistors keep getting smaller: what was a room-sized computer three decades ago can now fit in our pocket.
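A back-of-the-envelope sketch in Python shows what a two-year doubling implies. The 1971 starting point (Intel’s 4004 processor, with about 2,300 transistors) is historical; the strict doubling schedule is the simplifying assumption.

```python
# Moore's Law as arithmetic: transistor counts double every two years.
BASE_YEAR, BASE_COUNT = 1971, 2_300   # Intel 4004, ~2,300 transistors
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Projected transistor count under a strict two-year doubling."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_COUNT * 2 ** doublings

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

The projection for 2021 lands near 77 billion transistors, the same order of magnitude as real chips of that year (Apple’s M1 Max packed about 57 billion), which is why the observation has held up for so long.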
However, there is a limit to this miniaturisation. Experts agree that computers should reach the physical limits of Moore’s Law at some point in the 2020s. The heat generated by densely packed transistors would eventually make smaller circuits impractical, because cooling them would take more energy than passes through them in the first place. In a 2005 interview, Moore himself admitted that “…the fact that materials are made of atoms is the fundamental limitation, and it’s not that far away… we’re pushing up against some fairly fundamental limits, so one of these days, we’re going to have to stop making things smaller.”
An article in Wired argues that this may not be a bad thing. John Smart, a futurist, noted that “…contrary to traditional thinking that the end of Moore’s Law is bad for the future of the current computer industry… it could fuel the rise of AI. Moore’s Law ending allows us to jump from artificial machine intelligence (a top-down, human-engineered approach) to natural machine intelligence (bottom-up and self-improving).”
Will Machines Be Able To Think?
Whether a “thinking” machine is ever possible is a complex and philosophical question, debated for many years and even dramatised in blockbuster movies.
Let us consider functional versus conscious thinking. Machines are already capable of what we might call “functional” thinking: they can process vast amounts of data, perform complex calculations, recognise patterns and make decisions based on algorithms. This form of thinking is often referred to as “narrow AI”. The question becomes more challenging when we ask whether machines can achieve conscious thinking or true self-awareness. Consciousness is a highly debated and poorly understood aspect of human experience. Some philosophers and scientists argue that it may be theoretically possible to replicate consciousness in a machine, while others believe it is a unique property of biological organisms.
Philosopher David Chalmers coined the term “the hard problem of consciousness” to describe the challenge of explaining why and how physical processes in the brain give rise to subjective experiences. It is a fundamental issue in the philosophy of mind and AI. While we can create machines that mimic some aspects of human intelligence, replicating subjective experience remains an unsolved problem.
Even if we were to create a machine that could mimic conscious thinking, we would face significant ethical and practical questions:
1) How do we define and measure machine consciousness?
2) What rights or ethical considerations should apply to conscious machines?
In 1997, IBM’s Deep Blue computer defeated the world chess champion Garry Kasparov. This was expected: the machine evaluated about 200 million positions a second, while human players are far slower, characteristically calculating perhaps fifty lines through to various end positions. Some observers likened the contest to a motorcycle racing a human sprinter. Critics pointed out, however, that this style of machine “thinking” was just brute force. The computer won the game, but it did not understand the game. Its route to victory was to analyse billions of positions, which looks intelligent but is really an elementary enumeration of possible outcomes. The result is solid play, yet still a fair way from the sophistication of genuinely understanding the logic of the game, and from real intelligence.
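The brute-force idea itself fits in a few lines. Below is a minimal minimax sketch in Python, using the toy game of Nim as a stand-in for chess; Deep Blue’s real engine added alpha-beta pruning, purpose-built hardware and a hand-tuned evaluation function on top of this basic recursion.

```python
def minimax(stones: int, maximising: bool) -> int:
    """Exhaustively score a Nim position (take 1-3 stones per turn;
    whoever takes the last stone wins). Returns +1 if the maximising
    player can force a win from here, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximising else 1
    moves = range(1, min(3, stones) + 1)
    # Try every move, then every reply, all the way to the end.
    scores = [minimax(stones - take, not maximising) for take in moves]
    return max(scores) if maximising else min(scores)

print(minimax(5, True))   # 1: first player forces a win (take one stone)
print(minimax(4, True))   # -1: every move loses against best play
```

Chess offers roughly 35 legal moves per position, so this tree grows explosively with depth. Deep Blue’s achievement was taming that explosion with speed, not with understanding.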
The Alignment Problem of Future AI
Recently, it was said that an android named Detroit passed the Turing test; when asked, it replied that it did not do much. Yet its “brain” performs several billion billion computations per second and displays human intelligence, or comes very close to it. Some scientists even report seeing signs of consciousness in such machines.
Intelligence and consciousness may arise through “emergence”, the process by which a collection of simpler components gains new properties. Our DNA, for example, only tells us how to wire our brains; it does not hold enough information to describe a brain in full, let alone to think. Infinitely complex results can nonetheless emerge from simple rules, and AI could evolve in the same way through a self-improving process.
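A classic demonstration of complexity arising from simple rules, offered here as a standard textbook example rather than one named above, is Stephen Wolfram’s elementary cellular automaton Rule 30. Each cell updates by looking only at itself and its two neighbours, yet a single starting cell unfolds into an endlessly intricate, seemingly chaotic pattern:

```python
# Rule 30: each new cell is one bit of the number 30, indexed by the
# three cells above it (left*4 + centre*2 + right).
WIDTH, STEPS, RULE = 64, 24, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1   # start from a single live cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2
                  + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Nothing in the rule hints at the richness of the output; the complexity emerges from the interactions, which is precisely the point being made about brains, and perhaps about AI.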
AI companies may be reluctant to declare their AI sentient, since sentience could imply certain rights for the AI. More disconcerting is the notion that a sentient AI, as a self-preserving conscious entity, might conceal that very property.
Mustafa Suleyman (co-founder of DeepMind, a company acquired by Google) demonstrated the capabilities of his AI programme. Offering start-up capital of USD 100,000, he instructed the programme to create a product and a business. The programme was able to conduct a market survey, construct blueprints for the business, engage other AIs for information, contact manufacturers, organise distribution and finally collect revenue. AI cannot yet do all of this unaided, but it is close. Within the next five years, AI will not only say things but do things. It is here to stay.
Yuval Noah Harari, a historian, points out that we should exercise caution with the advent of AI. With an autonomous technology that can create new ideas, power shifts away from humans toward a superior intelligence. We are heading into uncharted territory and, for the first time in human history, we are the inferior partner.
Suleyman has listed a 10-point plan to impose constraints and controls on AI, including oversight of AI by independent bodies, transparency about AI failures and capabilities, a prohibition on recursive self-improvement, and AI-free elections.
Unfortunately, as Harari notes, the distribution and control of this new wealth creation and of AI’s capabilities are already problematic. The genie is out of the bottle: AI now drives many new innovations and provides novel solutions in our lives, all at breakneck speed. Surely that gives us all the more reason to plan for humankind’s evolution and survival in the future.
Prof. Dato' Dr. Zulfigar Yasin
is a marine environmental scientist, an Honorary Professor at Universiti Sains Malaysia, and a visiting senior analyst at Penang Institute. His work now focuses on the sustainable development of the marine environment.