Is AI the problem?
- Deandra Cutajar
- Jun 16
- 5 min read
A dark cloud is hovering over Artificial Intelligence (AI) technology, bringing with it a storm of ethical concerns about human rights and societal sustainability. But is AI the problem?

In this article, I aim to show that the ethical issues surrounding AI existed long before the rise of language models; AI's scalability merely amplifies them. But should we blame the technology, or the behaviour with which it is used? I hope to convince you that regulations on the technology are really a reflection of user behaviour, which is why the likes of the AI Act are necessary.
I will focus on four points:
Data Privacy
Data Theft
Copyright Infringements
Fake Media - Misinformation
CASE 1: Data Privacy
Data privacy is the principle that individuals should have control over their data and how it is managed, stored, and shared. It is associated mainly with data stored in the cloud, but such "sharing" existed long before the digital era. Sharing information began with humans simply conversing about a topic or a person. Before the age of computers, businesspeople kept ledgers recording their clients and customers' behaviour. Medical professionals kept paper files on their patients, which remains true in some hospitals. Private individuals kept diaries of important contacts and addresses, and any of this information could be passed on through a simple technique called "word of mouth". So data privacy concerns existed before we started recording data in digital form, albeit on a smaller scale. People could feign good intentions when asking for information; malicious intent, too, existed before AI.
Why is AI a threat to data privacy?
In its technological form, it is not. The threat arises when the border between privacy and progress blurs in the name of AI training. Personal data is golden because the technology can be trained to impersonate anyone matching a specific demographic. Such a tool is attractive to marketers and businesses: the more they know about a person, the better they can tailor their message to convince that individual to buy their product. Understanding people's behaviour and background helps train the technology to exhibit human-like characteristics, and we are more likely to engage with what we relate to.
AI's threat to data privacy stems from AI developers who want to simulate human characteristics based on different profiles.
CASE 2: Data Theft
AI is not out to steal your data, but the individuals aiming to develop AI models could be. Data theft existed before the digital era: ledgers went missing, and information about a person was taken without the controller's or proprietor's consent. The threat of data theft arises in much the same way as the threat to data privacy. AI developers want to build models and make money, and if they had to pay for every piece of data they use, building AI models would no longer be easy or cheap. I want to believe that most companies choose the ethical route, sharing and buying or selling data with proper consent forms and policies in place. However, there are always loopholes, and the more detailed the data is, the better the AI will be.
One of AI's weaknesses is the completeness of a dataset. Beyond reliability and cleanliness, a model needs the whole picture to perform as well as one expects. Missing data can skew the statistics and distort the information the dataset is supposed to carry.
Thus, when AI developers cannot access or afford the data they require for their model, they resort to other means, as the sketch below suggests. It is essential to acknowledge, however, that this is done in the name of AI training, not because AI is making anyone do it.
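To make the missing-data point concrete, here is a minimal Python sketch. All the numbers are hypothetical: it simulates incomes that go missing in a non-random way (higher earners are less likely to share theirs) and shows how even a simple statistic like the mean gets skewed.

```python
import random

random.seed(42)

# Toy population: 10,000 hypothetical annual incomes (made-up distribution).
incomes = [random.lognormvariate(10.5, 0.6) for _ in range(10_000)]

# Data "missing not at random": the higher the income, the more likely
# it is withheld, so the observed sample under-represents high earners.
observed = [x for x in incomes if random.random() > min(0.9, x / 200_000)]

true_mean = sum(incomes) / len(incomes)
observed_mean = sum(observed) / len(observed)

print(f"True mean income:     {true_mean:,.0f}")
print(f"Observed mean income: {observed_mean:,.0f}")
# The observed mean comes out biased low: exactly the kind of skew an
# incomplete dataset quietly introduces into whatever is trained on it.
```

Nothing here is specific to AI; it is the ordinary statistics of incomplete data, which is precisely why model builders crave the whole picture.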
CASE 3: Copyright Infringements
AI was developed to complete tasks otherwise done by humans. Over the years, this ambition extended to uncommon skills and capabilities and began encroaching on the realm of expertise and careers. Traditionally, for a person to acquire specific capabilities or skills, they would need to attend numerous workshops, sit exams, or complete assignments, all of which takes time and money. AI provided a shortcut. Why learn to be a writer when an AI can be prompted to write a book from a few plot points? Why study art when AI can generate such work? For the output to hold its own in the market, its quality must mimic professional work, and that is where issues of copyright infringement arise. The only way an AI can produce work that competes with professionals is if it learns from data generated by such experts.
One can say that AI is "inspired" as a human is, but that is wrong. AI is a technology, much like my laptop, that does not "motivate" itself to become a better version of itself (think of the money we would save, and how the technology market would crash!). AI has to be trained to improve. During training, an AI stores parameters and derived data that let it compare the words of a prompt against its "memory"; output is generated by comparing words and sentences and reproducing or recombining data. AI cannot generate output about a concept it has no data on. Whatever "new" content the AI seems to generate is data that had not been linked together before, or was disparate enough that the proprietors of that data never made the connection. AI output always rests on human work and should be referenced. But doing so takes away some of the credit from the AI user who wants to compete with said professionals, so it is easier to claim the AI output as one's own and ignore where the AI got the data in the first place. The AI itself, however, does not infringe copyright law, because it cannot tell which data is proprietary. That task falls upon the AI developers.
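As a rough illustration of that "comparing" step, here is a toy Python sketch. Every vector and label in it is hypothetical (real models learn embeddings with thousands of dimensions from training data), but it shows the mechanical point: the model can only rank a prompt against representations of prior human work it already stores.

```python
import math

# Toy "memory": hypothetical 3-dimensional embeddings for a few concepts.
# The values are invented purely for illustration.
memory = {
    "a stormy sea at night": [0.9, 0.1, 0.3],
    "a calm harbour at dawn": [0.7, 0.6, 0.2],
    "quarterly sales report": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Measure how closely two vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# A hypothetical "prompt" embedding: something nautical and moody.
prompt = [0.8, 0.2, 0.25]

# No inspiration happens here: the machine merely scores its stored
# representations against the prompt and would generate from the closest.
for concept, vector in memory.items():
    print(f"{concept}: {cosine_similarity(prompt, vector):.3f}")
```

The "creativity" is entirely in the stored material and the scoring; remove the human-generated data and there is nothing left to rank.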
CASE 4: Fake Media - Misinformation
There was a time when one could tell when news was fake or misinformed. It usually consisted of manipulated or edited photos easily detected by the human eye. The same was true of misinformation: it may have taken time to expose, but it usually spread slowly, so the damage could be contained. AI is making it easier for fake media and misinformation to spread fast and wide, and it is becoming very hard to tell AI output from real-world experience. The consequences could be catastrophic!
Individuals have been spreading such fake news in an attempt to cause havoc for years, but as I said, those stories usually dissipated quickly (I believe so, anyway) because they lacked the means to spread.
I would love to say that, again, AI is not the cause, but in this case, it may well be. In its attempt to generate output based on its training data, and to spare users copyright issues, it collates data from many sources, and that data may contain unverified and fake information. The most common danger arises when non-experts try to mimic experts and therefore lack the knowledge to detect wrong or fake information. Even experts have fallen into this trap, with some losing their careers. The greater danger, however, albeit (hopefully) less common, stems from those who purposefully use AI to spread misinformation for personal gain.
I will conclude this article by sharing that ethical concerns about AI reflect ethical concerns about human behaviour; the former would not exist without the latter. In all four cases, I focused on explaining the problem, how it existed before AI, and whether AI can be blamed for it. Three of the four cases show that AI merely amplifies the problem, while in the last case some misinformation or fake news can be attributed to AI's own output process, even when the user did not intend it. I argue that even then, the user should be held accountable, as the AI, in its technological form, cannot understand what it is doing.
In conclusion, the attributes of AI reflect the users' attributes.