AI—Boon or Bane, Intelligent or Illusory?
By Dato' Dr. Ooi Kee Beng
November 2023 EDITORIAL
These are the days of miracle and wonder.
Don’t cry, baby, don’t cry.
Don’t cry.
— Paul Simon, “The Boy in the Bubble”, 1986.
THE 1960S AND 70S were turbulent times culturally. At least, that was what I thought while living through them. The youth revolution and the music revolution were going on. But more profoundly, underneath all this, the greatest disruption of all was creeping up on us—digital technology.
First, we had 24-hour news channels, MTV, et cetera. Then came the internet, which took a while to mature into the browsing heaven it is today. Then came the search engines. Then came the hand phone and the SMS. Then came the web news sites. Then came social media—Facebook, WhatsApp, WeChat and what-have-you. Then came the smartphone and the endless apps to make our lives more effective and “easier”—and indeed more open and permeable.
Memory storage soon became unthinkably huge. Then came the explosion in chip power. Then came the Cloud. Overloaded with information volume and intensity, we began to lose faith in everything we read. Subjectivity in ideas and relativism in knowledge started to become acceptable.
All the above within two decades.
Then came AI, accessible through portals like ChatGPT and many more. Generative AI has arrived—and it is available to anyone interested. Left confused about what to believe, and with universal access to information stranding each of us within our own bubble of incessant “facts” and “news” and “truths”, we now purportedly have access to all the answers. Or at least to flexible approximations of answers generated by search engines with superpowers.
Or do we? What is generative AI? And should we fear it?
The “simple” answer to the first question is that computing power and software programming have taken a quantum leap: relying on neural networks and deep-learning algorithms, these systems can now identify patterns in vast amounts of data and generate new outcomes based on them, almost instantaneously.
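To make that one-sentence explanation concrete, here is a deliberately toy sketch (my illustration, not anything from the essay or from any real AI system): it learns which word tends to follow which in a scrap of text, then samples a “new” sequence from those learned patterns. Real generative AI does this with deep neural networks over billions of examples; the principle of “learn the patterns, then generate from them” is the same.

```python
# Toy "pattern-learning" generator: a word-level bigram model.
# This is a minimal sketch of the learn-then-generate idea only;
# it bears no resemblance in scale or power to systems like ChatGPT.
import random
from collections import defaultdict

corpus = "these are the days of miracle and wonder".split()

# Step 1: identify patterns — record which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Step 2: generate a new sequence by sampling from the learned patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no learned continuation for this word
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))  # → the days of miracle and wonder
```

With such a tiny corpus the “generated” text simply replays its training data, which incidentally illustrates the later point about rubbish in, rubbish out: the model can only recombine the patterns it was fed.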
To answer the second question, what better way than to ask the AI genie itself, and see what it says? “Dear ChatGPT, should we fear generative AI?”
Answer: “Fear of generative AI is a complex topic. It’s important to approach AI with caution and ensure ethical use. While AI itself is a tool, concerns may arise if it’s used unethically or maliciously. Responsible development, regulation and oversight are crucial to mitigate potential risks and ensure the benefits of AI are harnessed.”
Not a very satisfying reply. Rather evasive, if you ask me. Spoken like a diplomat. But let’s give it another chance. We change the question slightly: “Dear ChatGPT, why should we fear Generative AI?”
Answer: “Fearing generative AI is not necessary, but there are concerns associated with its potential misuse. Some reasons for concern include:
1. Misinformation: AI can generate convincing fake news, deepfakes and other content that can be used to spread misinformation.
2. Privacy: AI can be used to create highly realistic fake personas, potentially compromising individual privacy.
3. Malicious use: In the wrong hands, generative AI can be used for cyberattacks, fraud or other harmful activities.
4. Job displacement: AI may automate tasks, potentially impacting employment in certain industries.
5. Bias and discrimination: If not properly trained, AI can perpetuate biases present in the data it’s trained on.
It’s crucial to address these concerns through responsible AI development, regulation and ethical guidelines. AI itself is a tool and can be harnessed for both positive and negative purposes, depending on how it’s used.”
OK, that’s more substantive. But apart from point 4, which is a reiteration of the common fear that AI will put us all out of work, the rest are warnings to us not to be too trusting of AI-generated information.
Is AI information-generation still “rubbish in, rubbish out” despite all the hype? Sounds like it, in this context at least. However, I do not doubt that AI is a quantum leap in many ways, and there is very little we can do to stop its power.
For now, at least, I tend to think of it as a mediocritising of knowledge. Of course, collective mediocrity is generally superior to individual mediocrity, and for that reason, it can be highly useful to each and every one of us separately. Individual excellence still beats it though, I am prone to believe.
Whichever the case, Pandora’s Box has been opened, and perhaps in the context of AI, it is not “Hope” that is left in the box when Pandora hurriedly closes it after the harms have escaped into the world. If all information is to be kept outside the human head, be sorted algorithmically, and be presented to us elegantly without any common-sense filtering, then perhaps what is left in the box—in our brain—is the human capacity to judge.
Perhaps that is the dexterity we should train in ourselves for the future. The way things look, that will come to be best done with the encompassing help of AI.
Dato' Dr. Ooi Kee Beng
is the Executive Director of Penang Institute. His latest book is Signals in the Noise: Notes on Penang, Malaysia and the World (Singapore: Faction Press). Homepage: wikibeng.com.