AI made me do it!
- Deandra Cutajar
- Sep 1
- 5 min read
The title itself should indicate what inspired this article. Movies like "The Conjuring" or "IT" can give you goosebumps and may also encourage you to conduct a little more research before buying a house or moving to a small town. However, other movies such as "M3GAN" and "Companion" display a new dimension of horror.
In both cases, the intent behind the AI remains noble: the robots were programmed to help, as a companion for a child in one case and a partner in the other. Curiously, both films project a rather horrendous outcome, and I will NOT delve into that just yet. I need to understand more about the emotional programming of such scenarios before I can write that article.
I think the AI hype bubble has peaked, and we are now entering the exciting era of using AI with eyes wide open, at least that's what I hope for. There is already considerable awareness of the ethical concerns. For this reason, I believe we can start to make significant progress in striking a balance and utilising the technology for the benefit of humanity, or so I hope! (Yes, I repeated the sentiment on purpose.)

There remains a legitimate concern about what AI can "make someone do", and it should be addressed. Hence, I am writing this article to inform businesses and the public that AI does not, in fact, make anyone do anything. Still, we are walking into a scenario where "AI made me do it" becomes an honest defensive response to anything completed with AI, whether it is a business mistake, honest malpractice, or something illegal, criminal, or worse.
Firstly, AI is programmed, and therefore any behaviour can, in principle, be explained by delving into the program itself. Neural network models make explainability complicated, and Anthropic has been researching how to map the mind of an LLM. Nonetheless, there are ways to measure the similarity between an AI's output and the user's prompt. Moreover, if the user has access to the training data, a simple calculation can trace a piece of information back to its source, which is the basis of RAG (retrieval-augmented generation). The reality, however, is that LLMs are trained on enormous datasets, and in most cases a user prompting an AI program has neither access to the training data nor an understanding of how an LLM works. The initial feel of AI was that of "magic", and for most users it still is.
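To make that "simple calculation" concrete, here is a minimal sketch of similarity-based retrieval, the mechanism RAG builds on. Everything in it is a toy assumption of mine: the three-document corpus, the prompt, and the bag-of-words embedding. Production systems use learned vector embeddings over far larger stores, but the principle of ranking sources by similarity is the same.

```python
# A minimal sketch of the similarity calculation behind retrieval,
# the basis of RAG. The corpus, prompt, and bag-of-words "embedding"
# are toy assumptions; real systems use learned embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding': lowercase word frequencies."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

corpus = [
    "The Eiffel Tower is in Paris and opened in 1889.",
    "Bayes' theorem relates conditional and marginal probabilities.",
    "Neural networks are trained by gradient descent.",
]

prompt = "Who trains neural networks and how?"
query = embed(prompt)

# Rank every document by similarity to the prompt; the top match is
# the most plausible "source" for an answer.
ranked = sorted(corpus, key=lambda doc: cosine(query, embed(doc)), reverse=True)
for doc in ranked:
    print(f"{cosine(query, embed(doc)):.2f}  {doc}")
```

The highest-scoring document is the best candidate source; with real embeddings, this same ranking step is what lets a RAG system ground its answer in traceable material.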
Hence, when a user prompts the AI and it outputs a response, the user's first instinct is to accept the output as true, fair and correct.
Let the problems begin!
When an individual trusts the AI to that extent, all the problems regarding ethics, factuality, legitimacy and legality arise, in the same way that an exposed electric power socket becomes a problem once you have a toddler, but not before. (In the case of AI, copyright issues can be said to arise from data collection.)
By blindly trusting the AI output without proper review, individuals put their names on work without considering the consequences, focusing instead on the benefits they will gain. Later, if caught, they would say "I didn't know, the AI lied to me" or "I didn't mean to steal anybody's work"; in other words, "AI made me do it."
NO.
In all cases, a prompt from a user is required. The AI is programmed to answer to the best of its ability, which is determined by the quality of the data it was trained on and by the logic embedded to identify the most likely or most related data given the prompt. I cannot emphasise this enough:
It is not the responsibility of an AI to check itself. It executes at your command.
Some will say, "Should it be responsible for its own actions?", and to those I ask, "Do you expect a child to be responsible for their own actions while they are still exploring boundaries?"
The concept of Education exists because humans do not naturally know how to do everything. We breathe and function naturally, but I wasn't born knowing how to calculate my taxes; I had to learn, to avoid the consequences. Nor did I know Bayes' theorem the moment I started walking. Learning is a process, and while AI requires a different approach, it cannot keep itself in check. Some guardrails can be put in place by the developer (the parent), absolutely, but it is not enough to then leave it running on autopilot while sipping cocktails by the pool.
Humans continually test the limits, and over the years this has necessitated numerous authorities and enforcement agencies. AI use requires the same effort. Not because AI is human, but because the PR around the technology has succeeded in personifying it to the point that it is perceived as having human traits. So let me make some things clear:
AI does not lie. It outputs what it calculates to be the most likely continuation of the user's prompt (see the sketch after this list).
AI does not steal data. It ingests the data its developers provided.
AI does not commit fraud. It executes the user's prompts.
AI does not harm. It outputs information based on the user's prompts.
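To ground the first point, below is a toy sketch of what "most likely continuation" means: a bigram model that counts which word followed which in its training text and greedily emits the most frequent successor. The training text and the greedy choice are deliberate over-simplifications of mine; real LLMs use neural networks over tokens, but the principle holds, in that the output follows statistics, not intent.

```python
# A toy sketch of "most likely continuation": a bigram model that,
# given a word, emits whatever word most often followed it in its
# training text. No intent, no truth-checking; just counting.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def continue_prompt(word: str, steps: int = 4) -> list[str]:
    """Greedily pick the most frequent next word, repeatedly."""
    out = [word]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return out

print(" ".join(continue_prompt("the")))  # -> "the cat sat on the"
```

If the training text contains a falsehood, this loop will happily repeat it; nothing in it checks truth, which is exactly why the responsibility sits with the human who prompts and publishes.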
In all cases, I am confident that an AI expert will trace the outcome back to a human action. The only way to prevent such scenarios is to regulate the AI's output by lowering the threshold of human-like behaviour, so individuals do not perceive it as another person, and by promoting education on AI. I believe there will come a time when using AI will require a licence or a certificate. While AI is a piece of technology, it hands a significant amount of power to the user, and as comics have taught us, "with great power comes great responsibility."
OK, I shared the problem, but is there a solution? You know that when I talk about a problem, I try to share a potential solution. Before I proceed, I want to make it clear that I am not a lawyer of any kind; any legal knowledge I have comes from curious reading, filtered through my scientific brain. What I think could work may be biased by my hope that we can use AI for good.
Education was, is and will remain the epicentre of any solution. Regulations can be implemented, and they may actually catch up with the progress. But if the population at large still does not understand the technology then, in total honesty, regulations become a trap that fines individuals who are unaware they are at fault: "I did not know".
The key element of AI education is to:
Distribute information straight to the user.
Not through forced courses or books that people receive and toss aside, but by limiting their use of AI until they have genuinely read the ethical guidelines. I am unsure how to monitor that, but I did suggest a licence/certificate, didn't I?
I will conclude with my strongest belief: enforcing Data Governance and ethical human behaviour will be reflected in ethical use of AI and, consequently, lead to a relaxation of AI regulations. Think about it: if the AI doesn't have access to copyrighted material, can it output any copyrighted material? From a scientific point of view, there remains a slight risk that it could, due to the generative nature of AI, but I wouldn't punish the technology for what would be a coincidence. The probability is not zero, yet the percentage of similarity with the original work would remain very low, since the AI would never have seen the original.
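To illustrate how that similarity could be checked, here is a minimal sketch that quantifies verbatim overlap between a generated text and an original work using word n-grams. The two strings and the choice of 3-grams are hypothetical examples of mine; real copyright or plagiarism checks use larger n, text normalisation, and whole corpora.

```python
# A minimal sketch of checking output against a protected source:
# measure n-gram overlap between generated text and an original work.
# (Hypothetical strings; real checks use larger n and real corpora.)
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(generated: str, original: str, n: int = 3) -> float:
    """Fraction of the generated text's n-grams found verbatim
    in the original work."""
    gen, orig = ngrams(generated, n), ngrams(original, n)
    return len(gen & orig) / len(gen) if gen else 0.0

original_work = "It was the best of times, it was the worst of times"
generated_text = "The model said it was the best of days for everyone"

print(f"{overlap(generated_text, original_work):.0%} of 3-grams overlap")
```

A model that never saw the original can still produce short accidental matches, which is the "not zero" probability above; a high overlap score, by contrast, is strong evidence the original was in the training data.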