Artificial Intelligence, or AI. Over two decades ago, it made movies like The Matrix blockbusters. But today, AI is everywhere. In the last few months, ChatGPT has taken the world by storm.
What’s ChatGPT?
An artificial intelligence chatbot. One that can write essays, make up poems, script complete movies and even clear exams. In fact, it’s gotten so out of hand that the creator of ChatGPT, Sam Altman, wants us to regulate it. In his words: “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
How powerful is AI?
How can we control it?
Or can we control it?
Also, why should we care?
In fact, AI is all around us. Using Google Maps, that’s AI. Talking to Siri or Alexa, that’s AI too. Using predictive text, AI again. Using FaceID to unlock your phone, that too is AI. The YouTube algorithm that is making this video pop up on your feed is also artificial intelligence. Like I said, it’s everywhere.
So what exactly is artificial intelligence?
The answer involves a lot of jargon, so let me try and break it down.
What makes us humans special?
It’s our ability to think. And that’s exactly what AI is trying to replicate: systems that think and solve problems like we humans do. Basically, intelligence, but artificial. The idea is simple: recreate human-like thinking to solve problems. Which brings us to another term, machine learning. Essentially, machines learning from experience. Algorithms learn from previous data. They recognize existing patterns and use them to solve problems with no human interference. I know it sounds complicated, so let me try and simplify this further. Imagine you’re scrolling down your YouTube feed. You see a video about dogs. You like it. The algorithm takes note of it. Suddenly, dog videos are all over your feed. What the algorithm has done is this: it used machine learning to understand what you like and started recommending similar content (a toy sketch below shows the idea).

Now, AI has come a long way. Computers beating humans at chess is ancient history. Today, AI is broadly characterized into three categories. First, artificial narrow intelligence, where the system is given one task and it does just that. Think appliances, self-driving cars, streaming apps or even healthcare. We basically give a machine a task and it does it. That’s the rudimentary stage of artificial intelligence. Then comes the second stage, artificial general intelligence. This is where AI can rival humans. It can do multiple things at the same time. Many call ChatGPT a step towards artificial general intelligence. Then we have the third stage. And this is where things get a little scary. It is called artificial super intelligence. Basically, machines going beyond human intelligence. They have a mind of their own.
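To make that recommendation example concrete, here is a minimal sketch in Python. It is purely illustrative: the video titles, the like history and the recommend function are all invented for this example, and a real recommendation system is vastly more complex.

```python
from collections import Counter

# Toy watch history: (video title, category) pairs the viewer has liked.
liked_videos = [
    ("Golden retriever learns to fetch", "dogs"),
    ("Puppy meets kitten for the first time", "dogs"),
    ("Street food tour of Bangkok", "food"),
]

# Candidate videos the platform could recommend next.
catalog = [
    ("Corgi tries an agility course", "dogs"),
    ("Homemade pasta from scratch", "food"),
    ("Husky argues with its owner", "dogs"),
    ("Guitar lesson for beginners", "music"),
]

def recommend(liked, candidates, k=2):
    # "Learn from experience": count which categories were liked before.
    category_counts = Counter(category for _, category in liked)
    # Recognize the pattern and rank candidates by it.
    ranked = sorted(candidates, key=lambda video: category_counts[video[1]], reverse=True)
    return ranked[:k]

print(recommend(liked_videos, catalog))
# [('Corgi tries an agility course', 'dogs'), ('Husky argues with its owner', 'dogs')]
```

You liked two dog videos, so dog videos rise to the top of your feed. That, in miniature, is the learn-from-data loop described above.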
Remember all those sci-fi movies with the villain robot?
That’s the super intelligence stage. Or so we are told. But we are not talking Terminator yet. Still, the current emergence of AI has raised a lot of questions.
Is it ethical?
Can it go out of hand?
And are we too late to control this?
In March this year, some of tech’s most prominent minds wrote a letter. It included the likes of Twitter CEO Elon Musk and Apple co-founder Steve Wozniak. They asked AI labs to hit pause, to stop training the most powerful AI systems for six months. Let me read out a part of that letter. This is what it says: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” So clearly, top tech leaders are worried. They believe AI is going too fast. Fast enough to have negative impacts on society and humanity. And AI is not without its own set of problems. The first would be the way it is used. Facial recognition to open your phone sounds great.
What happens when the same is used to spy on you?
Countries use it to spy on their citizens. Some even use it for racial profiling, like China, which uses surveillance to oppress Uyghur Muslims. The other problem is bias.
You would expect machines to be neutral, right?
But at the end of the day, they are made by us. And as humans, we are inherently biased. So the machines we make are biased too. In fact, AI often amplifies the existing biases in society. AI is only as good as the data it is fed. If the data is biased, so is the machine. Imagine this: an AI algorithm is trained on data involving only white men. When that same algorithm is applied to women, the result will obviously be a biased one (a toy sketch below shows how). Which brings us to the third issue. And that is lethal mistakes. Humans make mistakes and so do machines.
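To see how skewed data produces skewed results, here is a crude toy sketch in Python. Everything in it, from the height numbers to the classification rule, is invented for illustration; real bias in AI systems is subtler, but the principle is the same.

```python
import statistics

# Toy training data: (height in cm, label) collected ONLY from men.
men_only_training = [
    (178, "adult"), (182, "adult"),  # adult men
    (160, "child"), (165, "child"),  # boys
]

# "Train": split the two classes at the midpoint of their average heights.
adult_avg = statistics.mean(h for h, label in men_only_training if label == "adult")
child_avg = statistics.mean(h for h, label in men_only_training if label == "child")
threshold = (adult_avg + child_avg) / 2  # 171.25 cm

def classify(height_cm):
    # The rule faithfully reflects its training data, and only that data.
    return "adult" if height_cm >= threshold else "child"

# Applied to adult women, who are on average shorter, it misfires:
print(classify(163))  # 'child' -- an adult woman misclassified
```

The model did nothing wrong by its own logic; it simply never saw anyone like her. Biased data in, biased decisions out. And some of those decisions can be matters of life and death.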
What happens when the mistake is life threatening?
What happens when a self-driving car kills someone?
Who will you blame?
The car or the person behind the code?
And what if the stakes are a little higher?
I am talking about lethal autonomous weapons, aka killer robots. These will be able to identify targets and kill them without any human help. We are not there yet. But precursors do exist. Think armed drones. In 2020, one of these may have killed a human for the first time. The details are a little murky, according to the United Nations. A Turkish-made Kargu-2 drone hunted down members of the Libyan National Army. The manufacturer says the Kargu-2 can use machine learning to classify objects. This allows it to fire autonomously. Now, Turkey denies using the Kargu-2 in this way. But the point is that the drone does have a mind of its own, just like a lot of other armed drones. So the United Nations wants to ban killer robots, but countries are not agreeing. The US, for example, wants guidelines and not a complete ban. So we can’t regulate killer robots yet.
Can we regulate AI?
That’s what OpenAI’s Sam Altman called for at the US Congress. He says AI could cause significant harm to the world, so the need of the hour is regulation. And leading the trend is China. It has come up with a law that seeks to regulate AI in all its forms. Chinese tech companies will need to register their AI products and put them through a security assessment first.
What do we make of it?
Well, China is always ready to regulate everything, even human rights.
But what about the West?
The European Union too is working on one: the European AI Act, the first law for AI systems in the West. The regulation takes a risk-based approach.
What does that mean?
It means the higher the risk, the stricter the regulation. Now, this act could set a precedent for others, especially the United States. India too is considering a regulatory framework for AI. But you see, all governments are just playing catch-up. AI is moving at breakneck speed, and they will have to move much faster to control it. In fact, they may be late already, like they were with social media. They thought of regulation only after things went out of control. And just like social media, AI too can be a double-edged sword.
It can be used for both good and bad. So we must not repeat the mistakes we made with social media. The regulations must come before the damage is done.