
Into the Woods with AI

  • Writer: Deandra Cutajar
  • Mar 24, 2023
  • 5 min read

Updated: Apr 5, 2023

We are familiar with the idiom:

Out of the woods.

It refers to a situation that no longer poses any threat or danger. With the latest advancements in artificial intelligence, more commonly known as AI, people have engaged in lively debates and discussions about whether AI poses a threat.


Image designed by Geoff Sence

I will start by saying a simple thing.

AI is an IT.

Maybe not the ominous clown, but AI does not understand its output.


Imagine a conversation between two people who speak different languages. They both use a common language, but when they get angry, they yell words in their mother tongue. Person A says many things in one language, and person B hears a sequence of noises that makes little sense. Perhaps person B could understand a word here and there, but not the whole context. Person B might gather that person A is angry by paying attention to body language, yet each word uttered means nothing to them. Only when person A translates the emotions into a language that person B understands will the words have meaning. This is how humans operate, and even then we sometimes translate things without understanding the language we are translating into. Have you ever used a translation tool only to be told that whatever you wanted to say was "lost in translation"?


In his book AI Ethics, Mark Coeckelbergh shares the Chinese room argument, first presented by philosopher John Searle in 1980.


[In the argument] Searle [supposes he] is locked in a room and given Chinese writings but doesn’t know Chinese. However, he can answer questions given to him by Chinese speakers outside the room because he uses a rulebook that enables him to produce the right answer (output) based on the documents (input) given, without understanding anything.

Searle's argument puts the machine's capabilities into perspective. Does the machine understand Chinese when translating Chinese into English and back to Chinese following some rules? Or is it simply simulating such intelligence?
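
To make Searle's point concrete, here is a toy sketch in Python of a "rulebook" answering machine. The phrases and replies are invented purely for illustration; the point is that the program returns a fluent-looking answer without comprehending a single symbol it handles.

```python
# A toy "Chinese room": the program maps input symbols to output symbols
# using a rulebook (a plain dictionary). It produces sensible-looking
# answers without understanding a single word. All entries are invented
# for illustration.

RULEBOOK = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I am fine, thank you."
    "你叫什么名字?": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(message: str) -> str:
    """Look up a reply for a message; no comprehension involved."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    # Prints a fluent reply that the program itself cannot read.
    print(chinese_room("你好吗?"))
```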


Have you ever helped someone who showed you exactly what to do, but you didn't understand what you were doing? You followed the rules and did it, without knowing what was actually happening. Usually, we either ask why or we don't, depending on our curiosity. A machine does not ask. It executes. A calculator has a rule that 1 + 1 = 2, but it doesn't comprehend that 1 represents a single item and that adding another item gives you two of the same thing. It is a computational exercise based on rules programmed by humans to simulate our understanding of the world.


AI is the same. First, AI doesn't understand English, Maltese or whatever language we humans speak. It only speaks in 1s and 0s. For an AI to produce an output, it must first follow rules to convert text into a sequence of 1s and 0s, then compute whatever logic is needed to calculate what we asked for, and finally convert the result back into human-readable text. And here is the thing: the rules depend on the data used to train the AI, regardless of whether they make sense.
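
As a minimal illustration of that first and last step, the sketch below converts text into a sequence of 1s and 0s and back. Real AI systems tokenise text into numeric IDs rather than raw bits, but the principle is the same: the machine only ever manipulates numbers.

```python
# Turning human-readable text into 1s and 0s, and decoding the result
# back into text. This is a simplification of what happens inside an AI
# system, shown here only to illustrate the principle.

def text_to_bits(text: str) -> str:
    """Encode text as a string of bits via its UTF-8 bytes."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def bits_to_text(bits: str) -> str:
    """Decode a string of bits back into text."""
    data = bytes(int(chunk, 2) for chunk in bits.split())
    return data.decode("utf-8")

bits = text_to_bits("Hello")
print(bits)                # 01001000 01100101 01101100 01101100 01101111
print(bits_to_text(bits))  # Hello
```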


"Meaning comes from humans" (Boden, 2016).

In this sense, AI is not dangerous at all. It does what it does without intent. There is nothing ominous or dangerous about it because it doesn't have a mind. The logic it executes is program code written and supplied by humans. Its goal is to perform some calculations and output the result. When you turn on the coffee machine, it doesn't want to take your life. It pours coffee into a mug, just as we instructed it. Only if it malfunctions does it become dangerous.


So where do the dangers and threats of AI come from?

As I said earlier, AI is an IT, and it is not self-aware. Any conversation published online in which an AI appears self-aware is the AI simulating self-awareness from publicly available data. It doesn't have a conscious voice, a gut feeling or intuition. However, it can mirror our emotions based on the multitude of data it learnt from, and doing so has led to some remarkably worrying interactions.


The first danger of AI is

Believing AI is a person and has intent.

AI on its own is capable of neither. Any apparent intent in AI results from the programming it runs and the data it learns from: a simulated purpose, which can still be of concern.


AI needs a lot of data to function. Without data, AI is just a collection of theoretical equations and computational functions that would take far more effort to produce the output we get today. Remember Einstein? Before data confirmed his prediction of gravitational lensing, the equations were just that: a theory. I could keep talking about this side of AI and what we need to understand about the data it uses and needs, but I will focus on that in another article.


Here, I want to focus on "believing AI is a human" or, worse, a divine entity that knows everything and can solve everything.

AI is an enabler, a tool.

Thinking otherwise creates a threat. Believing that AI is a higher form of human intelligence has encouraged individuals to take whatever an AI says as the holy truth. Mistake! AI learns from human-generated data, and we all acknowledge that we humans are full of errors.


One threat I am concerned about is not AI per se but what people believe AI can do. Companies are jumping on the AI fast train, replacing human talent with automated systems. Some roles are repetitive even for the humans who do them, and I understand the need to automate and to encourage individuals to upskill, grow and develop in other areas. This isn't easy, but we need to work together, and companies which foresee this shift need to give their employees room to grow.


But what about senior roles? Or C-level roles? Or justice and politics? Are we comfortable replacing experience with AI?


AI has much to learn, but it is a beautiful tool for experts and data scientists. I believe classic machine learning has solved the most common business problems (such as churn, customer segmentation and forecasting), and tech companies have embedded these functionalities in no-code/low-code, generic, one-model-fits-all solutions.
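
To give a flavour of how "solved" a problem like churn is, here is a minimal sketch using scikit-learn. The data and the features (tenure, monthly spend, support tickets) are entirely synthetic and invented for illustration; in real churn work, the effort goes into the data preparation, not the model call.

```python
# Churn prediction as a plain classification task, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(1, 72, n),   # tenure in months (invented feature)
    rng.uniform(10, 120, n),  # monthly spend (invented feature)
    rng.poisson(2, n),        # support tickets raised (invented feature)
])
# Synthetic rule: short tenure plus many tickets tends to mean churn.
y = ((X[:, 0] < 12) & (X[:, 2] > 2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```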


Data scientists, meanwhile, are solving custom, industry-specific problems that require more than an off-the-shelf classifier or regressor. Machine learning algorithms are useful, but only after extensive work from data scientists to understand the problem and transform it (without changing the concept) into one that a machine learning algorithm, including deep learning, can understand. Some companies don't have enough data to consider classic machine learning algorithms, let alone neural networks. Others have a lot of data but little understanding of what story lies hidden underneath it. Putting a veil of "AI" over data science work may look quick and effortless, but I can assure you that at some point stakeholders will begin to ask many questions. Stakeholders are crucial in building data science solutions for the business, and they are slowly learning about the pros and cons of machine learning and AI. Trust me, they want an explanation, and why shouldn't they?
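
Here is a sketch of what that transformation work can look like: raw event logs mean nothing to a classifier until they are reshaped into per-customer features. The column names and aggregation choices below are assumptions made purely for illustration.

```python
import pandas as pd

# Raw, machine-unfriendly input: one row per purchase event.
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "amount":      [20.0, 35.0, 5.0, 7.5, 6.0, 120.0],
    "returned":    [False, False, True, False, True, False],
})

# Transformed, machine-friendly output: one row per customer, with the
# business question encoded as numeric features a model can consume.
features = events.groupby("customer_id").agg(
    n_orders=("amount", "size"),
    total_spend=("amount", "sum"),
    return_rate=("returned", "mean"),
).reset_index()

print(features)
```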


Stakeholders bring insights, gut feeling and experience into the data science world. They bring knowledge and a real-world understanding of how things work. By contrast, AI can be compared to a fresh graduate who knows all the rules and theories but has no real-world experience. If we are comfortable placing AI in a senior role, why not a graduate who has studied the field but has no experience? Both have learned the theory from previous (historical) data.


Such hires rarely happen because we all understand that things may look different on paper than in real life, due to elements we cannot always account for. And that is why we need to use AI responsibly: with the knowledge (or comprehension) that AI is constrained by its programming limitations and by the environment it was "raised" in, i.e. the data it used to learn the rules. When unsure about this, I invite you to reach out and discuss it.


If we trust the output of an AI without scrutiny, then I believe we are walking into the woods, and unless someone gives us direction on the way out, we will be there for a while.



