Algorithms are Destroying Society

In 2013, Eric Loomis was pulled over by the police for driving a car that had been used in a shooting, a shooting, mind you, that he wasn't involved in at all. After being arrested and taken to court, he pleaded guilty to attempting to flee an officer and no contest to operating a vehicle without the owner's permission. His crimes didn't mandate prison time, yet he was given an 11-year sentence: six of those years to be served behind bars and the remaining five under extended supervision.

Not because of the decision of a judge or a jury of his peers, but because an algorithm said so. The judge in charge of Mr. Loomis's case determined that he had a high risk of recidivism through the use of the Correctional Offender Management Profiling for Alternative Sanctions risk assessment algorithm, or COMPAS. Without questioning the algorithm's decision, the court denied Loomis probation and incarcerated him for a crime that usually wouldn't carry any time at all.

What has society become if we can leave the fate of a person's life in the hands of an algorithm? We take the recommendation of a machine as truth, even when it seems unreasonable and inhumane. Even more disturbing is the fact that the general public doesn't know how COMPAS works. The engineers behind it have refused to disclose how it makes recommendations, and no existing law obliges them to, yet we're all supposed to blindly trust and adhere to everything it says.

Reading about this story, a few important questions come to mind: how much do algorithms control our lives, and ultimately, can we trust them? It's been roughly ten years since Eric Loomis's sentencing, and algorithms now reach far deeper into our daily lives. From the time you wake up to the time you go to bed, you're constantly interacting with tens, maybe even hundreds, of algorithms. Let's say you wake up, tap open your screen, and do a quick search for a place near you to eat breakfast. In this one act, you're triggering Google's complex algorithm that matches your keywords to websites and blog posts to show you the answers most relevant to you.

When you click on a website, an algorithm is used to serve you ads on the side of the page. Those ads might be products you've searched for before, stores near your location, or, eerily enough, something you've only spoken to someone about. You then try to message a friend to join you for your meal. When you open any social media app today, your feed no longer simply displays the most recent posts by the people you follow. Instead, what you see is best described by TikTok's For You page: complex mathematical equations behind the scenes decide which posts are most relevant to you based on your viewing history on the platform. YouTube, Twitter, Facebook, and most notoriously TikTok all use these recommendation systems to get you to interact with the content their machines think is right for you. And it's not just social media. Netflix emails you recommendations for movies to watch based on what you've already seen. Amazon suggests products based on what you previously bought. And, probably the most sinister of all...

Tinder recommends you the person you're supposed to spend the rest of your life with. Or at least that night. These might seem like trivial matters, but it's more than that. Algorithms are also used to determine who needs more healthcare, and when you have your day in court, a computer program can decide whether you'll spend the next decade of your life behind bars for a crime that usually doesn't carry any time. One of the most dangerous things about algorithms is the data used to power them, because the more data you feed into an algorithm, the better its results. And where do companies get this data? From their users, like you and me. Most of the time, giving out this information is harmless, but a lot of the time these companies sell your information to data brokers, who then sell that data to other companies that want to sell you stuff. That's why you keep getting targeted ads from random companies you've never heard of before. And what's worse is that these data brokers are often targeted by nefarious actors who steal all the information they hold in data breaches. According to a report from the Identity Theft Resource Center, there were 68% more breaches in 2021 than in 2020, and that number seems to keep going up. A few months ago, my friend got a message from Google telling him that some of his passwords had been found in a data breach at a company he had never heard of before. And right after, he started getting personalized email ads from scam companies. This is how scammers are able to figure out your phone number, name, and even your address.

The good news is that you can get these data brokers to delete the information they have about you. Sadly, doing it manually could take years. This is why I love using the sponsor of today's video, Incogni. All you have to do is create an account and grant them the right to work for you. And that's it. Incogni will reach out to data brokers on your behalf to request that all your personal data be deleted and deal with any objections from their end. To get started, sign up using the link in the description.

The first 100 people to use code Aperture with the link below will get 20% off Incogni. It's completely risk-free for 30 days, so I encourage everyone to at least give it a try; if you're not happy, you'll get a full refund. But I can assure you, when you see just how many data brokers have your information, you'll definitely want to keep your subscription.

Back to our story. I'm not saying that all algorithms are bad and we should get rid of them; an algorithm is probably the reason you're watching this video in the first place. I'm saying we as a society need to make some changes to the way we currently interact with and use these systems. One of the scariest things about algorithms is that they're built and altered in a black box with little oversight. The engineers behind them determine what we see and don't see. They classify, sort, order, and rank, and we don't get to know how or why. Even the government doesn't get to know how and why, and if it did, would it understand? The engineers themselves often don't know why an algorithm behaves the way it does. They use AI and machine learning, which can make the outcomes hard to predict. The algorithms become a mystery to their makers as well. When companies like Google or Facebook are challenged about their platforms after something terrible happens, they hide behind the mythos of the algorithm. They're cold, unbiased systems, they suggest. The errors are human, not machine, they claim. This is the notion of algorithms that is potentially dangerous.

We think of them as pillars of objectivity, incapable of the kinds of biases that corrupt human society. But are they genuinely unbiased? Are they pure instruments of rationality? As much as big tech companies would like you to believe they are, the sad truth is they are not. When engineers choose to classify and sort, they use pre-existing classifications that are already filled with bias, and their methods of sorting enforce biases that can have real negative consequences. In 2019, an algorithm used on more than 200 million patients in US hospitals to determine who would need more care was found to discriminate against black patients, even though race wasn't included in the criteria: the machine determined that they required less care than white patients. How did this happen if race wasn't even an input, you might ask? Well, while race wasn't directly in the equation, previous healthcare expenses were a determining factor in deciding whether someone would need more care. And because black patients have historically spent less on healthcare, the algorithm concluded they required less care, an incorrect blanket conclusion for situations that should be case-by-case evaluations.
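To make the mechanism concrete, here is a minimal, hypothetical sketch of how a proxy variable can smuggle bias back in: race is never an input, yet ranking patients by past spending still under-serves a group that has historically spent less for the same underlying level of need. Every number and name below is invented for illustration; this is not the actual 2019 system.

```python
# Hypothetical illustration of proxy bias: race is excluded from the inputs,
# but ranking patients by *past healthcare spending* still under-serves a
# group that historically spent less per unit of illness. All numbers are
# made up for demonstration.

import random

random.seed(0)

def make_patient(group):
    """Simulate a patient: identical illness distribution across groups,
    but group B historically spends less for the same level of illness."""
    illness = random.uniform(0, 10)                 # true need (unseen by the ranker)
    spend_per_unit = 100 if group == "A" else 60    # historical spending gap
    return {"group": group, "illness": illness, "past_spend": illness * spend_per_unit}

patients = [make_patient("A") for _ in range(500)] + \
           [make_patient("B") for _ in range(500)]

# The "algorithm": flag the top 20% of patients by past spending for extra care.
patients.sort(key=lambda p: p["past_spend"], reverse=True)
flagged = patients[:200]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%}")
# Far below 50%, even though both groups were simulated with identical true need.
```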

Although the racial bias was unintended, it still occurred as a result of the engineers' design. It's because of issues like these that we can't hide behind the myth of the infallible machine. Biases like these will exist in machines as long as humans are the ones building them, and there is one bias that exists in almost every algorithm we use today, with far more wide-reaching consequences. Meta, Twitter, Google, Amazon, Netflix, Tinder: most tech companies, and the platforms they offer you and me as services, design their algorithms to maximize one thing and one thing alone: profit. These platforms generate revenue primarily by selling ads, and to generate more ad revenue they try to keep you on their platforms longer, because the longer you're there, the more ads you'll see and the more money they make. Take YouTube, for example. There are three main things that make any video successful on the platform: click-through rate, watch time, and session time. So all YouTube cares about is whether you can get people to start watching your video and keep them watching for as long as possible, so it can serve them more ads.
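As a rough illustration of how signals like these might feed a ranking, here is a toy sketch. The blend of click-through rate and watch time below is an invented assumption for demonstration, not YouTube's actual scoring formula (session time is left out for brevity).

```python
# Toy sketch of engagement-style ranking. The metrics (click-through rate,
# average watch time) come from the discussion above; the weighting is a
# made-up assumption, not YouTube's real system.

from dataclasses import dataclass

@dataclass
class VideoStats:
    title: str
    impressions: int            # times the thumbnail was shown
    clicks: int                 # times it was clicked
    total_watch_seconds: float
    duration_seconds: float

def engagement_score(v: VideoStats) -> float:
    ctr = v.clicks / v.impressions if v.impressions else 0.0
    avg_watch = v.total_watch_seconds / v.clicks if v.clicks else 0.0
    watched_fraction = min(avg_watch / v.duration_seconds, 1.0)
    # Hypothetical blend: reward getting the click AND keeping the viewer.
    return 0.4 * ctr + 0.6 * watched_fraction

videos = [
    VideoStats("Calm explainer", impressions=10_000, clicks=400,
               total_watch_seconds=180_000, duration_seconds=600),
    VideoStats("Outrage clickbait", impressions=10_000, clicks=1_500,
               total_watch_seconds=330_000, duration_seconds=300),
]

# The clickbait video wins the ranking despite being the weaker piece of content.
for v in sorted(videos, key=engagement_score, reverse=True):
    print(f"{v.title}: score={engagement_score(v):.3f}")
```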

For the most part, this works as it's supposed to, and people get served content they enjoy but would never have found on their own. As with everything in life, though, there are downsides. People have learned to game the system by using clickbait to lure viewers in and then pushing conspiracy theories that keep people glued to their screens, whether the information is factual or not.

YouTube's algorithm has also been accused of having a radicalizing effect on its viewers. Moderate content leads to recommendations of more extreme content, which leads people down the notorious rabbit hole. You can start by watching videos about jogging, and YouTube will keep recommending videos that push you slightly further each time, until one day you wake up and you're watching videos about running an ultramarathon.
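To picture how a feed can drift one step at a time, here is a toy sketch: each video gets an invented "intensity" score, and the recommender always serves the closest unwatched video that is slightly more intense than the last one. This is an illustration of the rabbit-hole dynamic described above, not any platform's real system.

```python
# Toy sketch of "recommendation drift". Titles and intensity scores are
# invented for illustration; this is not YouTube's actual algorithm.

videos = {
    "Couch to 5K tips": 1.0,
    "How to run a 10K": 2.0,
    "Half-marathon training plan": 3.5,
    "Marathon fueling strategies": 5.0,
    "Ultramarathon documentary": 7.5,
}

def next_recommendation(last_intensity, watched):
    # Prefer the closest unwatched video that is a bit more intense.
    candidates = [(title, i) for title, i in videos.items()
                  if title not in watched and i > last_intensity]
    return min(candidates, key=lambda pair: pair[1], default=None)

watched = ["Couch to 5K tips"]
intensity = videos[watched[0]]
while (rec := next_recommendation(intensity, watched)) is not None:
    title, intensity = rec
    watched.append(title)
    print("Recommended next:", title)
# Step by step, a jogging viewer ends up at the ultramarathon documentary.
```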

Facebook's algorithm shows you more content from friends whose posts you've liked or read in the past. This process slowly funnels you into a bubble where you're mostly reading the same opinions you already hold, reinforcing them in your mind. The goal of this approach is, of course, to keep you on the platform longer with views you agree with. The consequence, though, is that many harmful beliefs are cemented into the heads of the platform's users instead of being challenged. The more you think about the algorithms of social media, the more they start to seem like programs for creating social problems for the sake of profit.

So if that's the case, are all algorithms just evil piles of code determined to doom us all? Maybe, but maybe not. They have extraordinary benefits to offer when used correctly. A dataset of 678 nuns from the Nun Study, a research project started in 1986 on the development of dementia and Alzheimer's, showed something very peculiar. Researchers tried to find out whether they could spot any patterns in the data suggesting a relationship between something in a person's early life and the onset of these diseases later in life, but to no avail. The team also had access to the letters that the nuns had written decades prior, when they were entering the sisterhood around ages 19 and 20.

An algorithm was able to detect, with incredible accuracy, from these letters which nuns would go on to develop dementia in their elderly years. This is what algorithms are great at: comparing datasets and figuring out tiny patterns that humans are likely to miss. They're sensitive to variations in data, finding patterns that lead to reliable predictions of possible outcomes. Today, algorithms are used to detect the likelihood of developing breast cancer and to build better models for tackling climate change. Except the machine isn't great on its own. Every potential positive here only works with a human behind it. Algorithms can act as the first layer for screening breast cancers, but a human has to act as the necessary second layer to verify the results. Using an algorithm to determine an appropriate jail sentence might one day make sense, but only if there's a human deciding whether or not the generated output is sensible. One of the main problems with Eric Loomis's case is that the judge didn't question the algorithm's recommendation. He simply accepted the supposed objectivity of the machine and sent a man to prison for a crime that didn't warrant it. As it stands now, we just seem to be part of an enormous social experiment run by tech gurus. And every year or so, another social experiment is added to the mix, with its own unique set of social consequences.
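To give a feel for the general pattern-finding technique described above (not the Nun Study's actual method or data), here is a minimal sketch: turn each piece of writing into a couple of simple linguistic features and fit a standard classifier on them. The essays, labels, and features below are made up purely for illustration and assume scikit-learn is available.

```python
# Minimal sketch of pattern detection in text: hand-rolled linguistic
# features plus a standard classifier. The essays and labels are invented;
# this is an illustration of the idea, not the Nun Study's methodology.

from sklearn.linear_model import LogisticRegression

def features(text: str) -> list[float]:
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    unique_ratio = len(set(w.lower() for w in words)) / max(len(words), 1)
    return [avg_sentence_len, unique_ratio]

# Hypothetical training examples: (essay text, later diagnosis 0/1).
essays = [
    ("I was born in a small town. I loved reading and long walks by the river "
     "with my sisters, who taught me patience and curiosity.", 0),
    ("I was born. I went to school. I like my family. It was nice.", 1),
    ("My childhood was full of music, gardens, and questions I am still asking "
     "today about faith, work, and the people I hope to serve.", 0),
    ("I grew up. I had a dog. School was fine. I am happy.", 1),
]

X = [features(text) for text, _ in essays]
y = [label for _, label in essays]

model = LogisticRegression().fit(X, y)
print(model.predict([features("I enjoy walking, writing letters, and wondering "
                              "how the orchard will look in spring.")]))
```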

More recently, we're discovering what a rapid stream of bite-sized videos does to teenagers, or what a completely user-generated game does to tweens. So far, this video has been pretty hard on the big tech companies, but I think it's also really important to acknowledge that they are trying to address some of these issues with their algorithms. YouTube, for example, has changed its algorithm to include quality and authority as measures for determining whether a video is recommended or not. Facebook has limited its targeting options to try to avoid another Cambridge Analytica scandal, where user data was distributed without consent for political purposes. Are these adjustments to the algorithms helping? Yes, but not as much as necessary. More telling is the fact that these efforts point to two things. One: human intervention in algorithms is not only necessary but needs a much stronger presence. Two: tinkering with the algorithm is probably not going to resolve the consequences of its most significant bias, profit-seeking. Keeping people on a platform is always going to be easier with content that sparks the most outrage. That's not always the case, of course.

There is great content on YouTube, and there are earnest viewers like you watching this video right now. But for every creator seeking to share legitimate information, there seem to be several others blatantly exploiting the algorithm for a quick buck. How can we take these platforms back from them? The sad truth is, we can't. The algorithms need to change. They need to put human welfare above profits. We need to stop designing machines that take advantage of our psychological weaknesses. To make that world possible, we need to be more critical of the algorithm. We need to dismantle the notion that the algorithm is all-knowing, objective, and rational. The black boxes need to be opened up, and our blind trust in these systems needs to be challenged at every turn.

To paraphrase Tristan Harris, co-founder of the Center for Humane Technology: we're all looking out for the moment when technology will overpower human strength and intelligence, but there's a much earlier moment when technology overwhelms human weaknesses. That point is being crossed right now, and it's reducing our attention spans, ruining our relationships, and destroying our communities. It's downgrading humans.
