Humanity at the AI Crossroads: Balancing Privacy, Bias and Progress

By Medeleine Tee

November 2023 FEATURE

WE ARE APPROACHING a turning point in artificial intelligence (AI) development. As computers become ever more pervasive and algorithms grow uncannily precise, concerns mount: what are the undesirable consequences of AI?

Privacy—Or The Lack Thereof

AI is the epicentre of modern mass surveillance. While technology and the fight for privacy rights are historically intertwined, the recent deployment of AI systems across government and private platforms has heightened the threat they pose to our right to privacy and, by extension, our autonomy.

With machine learning and big data, computational systems can now actively draw inferences from a user’s digital footprint. Even if personal information is deliberately withheld, traces of data, be it from social media platforms, Google search histories or even fitness trackers, can be cross-referenced to form a frighteningly multi-dimensional portrait of a user. The question arises: who has access to this data, and, more importantly, what do they do with it?

In the US, a judge ordered Amazon to hand over voice recordings from an Echo, its home AI assistant, as potential evidence in a murder trial, eliciting public outcry. Amazon claims it will not hand over data from its systems except under “legally-binding instructions”[1], but the case exemplifies how our personal data rests in the hands of big tech companies, and how our privacy depends on rules that can seem arbitrary, particularly when, compounded by pre-existing AI transparency issues, they are enforced by irresponsible or self-serving regulators.

This is not the only instance in which home AI assistants have supplied incriminating evidence to courts. While such indirect surveillance can potentially lower crime rates, it does so at the expense of our privacy; beyond being a double-edged sword, it places us on a slippery slope: if embraced, what precedent will it set for mass surveillance in the name of the law?

In the realm of social media, companies such as X (formerly Twitter) and Meta have faced recurring lawsuits for harvesting data from vulnerable users, including minors. In short, what was once surveillance by secret service agents now extends to corporations, which, in turn, sell our data to dubious third parties, creating the largest, yet largely unnoticed, infrastructure for mass social monitoring ever seen.

The invasive collection of such data can be used not only for effective targeted advertising but also to assess a user’s political stance. Research has shown that people on the fence between political ideologies are more likely to be shown propaganda on social media. Because the algorithm is programmed to maximise user engagement, it surfaces the posts that attract the most engagement, which are typically those expressing the most polarising opinions. This creates a toxic online echo chamber of extremist views that will likely, in the long run, erode the tolerance of society as a whole.[2]

An Enforcer of Systemic Discrimination?

It is tempting to think of decisions made by AI as neutral, free of inherent biases or agendas. However, since algorithms are trained on real human data, AI behaves in ways that emulate human behaviour, which, unfortunately, includes human fallibility. Hence, while AI decision-making is useful for trivial matters, such as Netflix’s movie recommendation algorithm, the waters become murkier when the decisions carry weightier implications.

In 2018, Amazon came under fire for its AI-based hiring system, which systematically favoured male candidates over others: the algorithm had learned from past hiring data, which was predominantly male. This highlights one of the major caveats of AI-based decision-making: it internalises human cognitive biases and, more damningly, does so without understanding why they are ethically wrong.

These biases are also rampant in other industries, including the finance and law enforcement sectors. Studies have shown that banks are more hesitant to lend to businesses owned by people of colour, as AI systems categorise them as less creditworthy based on extrapolations from previously collected data. Across the globe, police departments are using AI to carry out “predictive policing”, wherein individuals are flagged as suspects before they even commit a crime. As marginalised communities are more likely to be reported for misdemeanours, this has resulted in racial profiling and wrongful arrests.[3]

Some parliaments have advocated for the use of AI in bureaucratic decision-making in hopes that it will eliminate issues such as corruption, inefficiency and ego politics. Many, however, have argued that an automated bureaucracy dismantles the very foundations of democracy, replacing human voices with that of an unfeeling, bias-riddled algorithm. Others worry that AI will be used as a tool to reinforce totalitarianism in already corrupt states.[4]

Some downplay the quandary. “We demand a high level of explanation from machine-based decisions despite humans not reaching that standard themselves,” says Joseph B. Fuller, a professor at Harvard Business School. “Panic over AI suddenly interjecting bias into everyday life is overstated.”[5]

Philosopher Michael Sandel disagrees, positing that AI “not only replicates human biases” but also “confers on these biases a kind of scientific credibility”, thereby validating systemic discrimination.

The Technological Singularity

Movies such as The Terminator and WALL-E are pop culture sensations, and though evil, anthropomorphised robots remain firmly in the realm of science fiction, they have roots in an age-old hypothesis known as the “technological singularity”: a hypothetical future point at which technology outgrows humanity’s control.

“I believe we are very close to what we call ‘Artificial General Intelligence (AGI)’,” explains Guan Wang, a data scientist and speaker at the George Town Google I/O Extended event, referring to a hypothetical form of AI: a self-aware machine that can learn and think like a human. While there is no scientific consensus on when, or whether, AGI will be achieved, many researchers agree that super-intelligent computers loom large in our future.

For now, though, AI remains a tool, one capable of reflecting either the best or the worst of humanity. If we neither outsource our moral responsibilities to machines nor grow complacent about their potential for harm, then perhaps, with proper regulation, our imagined dystopia of robots’ making will never come to pass.

Footnotes

[1] “Amazon asked to share Echo data in US murder case.” BBC News, 12 Nov. 2018, https://www.bbc.com/news/technology-46181800.

[2] KhosraviNik, Majid. “Right Wing Populism in the West: Social Media Discourse and Echo Chambers.” Insight Turkey, vol. 19, no. 3, 2017, pp. 53–68. JSTOR, http://www.jstor.org/stable/26300530. Accessed 19 Aug. 2023.

[3] Müller, Vincent C. “Ethics of Artificial Intelligence and Robotics.” The Stanford Encyclopedia of Philosophy, Fall 2023 Edition, edited by Edward N. Zalta and Uri Nodelman, https://plato.stanford.edu/archives/fall2023/entries/ethics-ai/.

[4] Ünver, H. Akın. “Artificial Intelligence, Authoritarianism and the Future of Political Systems.” Centre for Economics and Foreign Policy Studies, 2018. JSTOR, http://www.jstor.org/stable/resrep26084. Accessed 19 Aug. 2023.

[5] Pazzanese, Christina. “Ethical concerns mount as AI takes bigger decision-making role in more industries.” The Harvard Gazette, 26 Oct. 2020, https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.

Medeleine Tee

is an English Literature student. Born and raised in Penang, she is a true-born foodie and lover of cultural pursuits.

