
How Powerful Should We Make AIs?

"Artificial Intelligence & AI & Machine Learning" by mikemacmarketing is licensed under CC BY 2.0
“Artificial Intelligence & AI & Machine Learning” by mikemacmarketing is licensed under CC BY 2.0

How are Artificial Intelligences (AIs) dangerous?

The first thing most people think of when someone mentions the dangers of artificial intelligence is rogue AIs. Examples in the media, like the movie Terminator and games like Detroit: Become Human, are all supposed to be fantastical alternate realities, yet some worry that soon they won’t be. The truth about future AIs is that the real danger is not that they’ll go rogue, but what happens when they follow exactly what we tell them to do.


What do I need to know about AIs?

To understand exactly what I’m saying, let’s go over a few facts about AIs. The everyday AIs that many people already use are called narrow AIs. These include programs like Siri, the Google search algorithm, and Amazon Echo devices. They are called “narrow AIs” not because they are weak, but because each one specializes in a single task or goal. The AIs that pose a greater risk are general or strong AIs (AGI): AIs that can outperform humans not just in one category, but across all cognitive functions, like the human androids from the video game Detroit: Become Human. While the public panic around AIs is mostly irrational, people like Stephen Hawking, Elon Musk, and Bill Gates have publicly said that they are wary of AI’s dangers (Forbes).


What can AIs do? What will they be able to do?

AIs hold a great deal of potential power. Currently, many AIs perform mundane tasks through programs like Siri or the “Alexa” personality of the Amazon Echo. AIs have also been built, and are still being developed, for self-driving cars, most notably by Tesla. Among the scarier AIs are autonomous weapons, like quadcopters that can target people without any human intervention (The Future of Life).

One of the most impactful examples of AI is China’s social credit system. According to WIRED, this system is supposed to give each person a score, similar to a credit score, but based on actions as broad as jaywalking, paying taxes, and rebellion. It is very important to remember that this system is NOT in effect in all of China, only in certain parts of it. Jaywalking enforcement specifically uses AI to identify people’s faces on traffic cameras. The whole concept seems so outlandish to foreigners that an episode of Black Mirror, a show that depicts various dystopias, is based on it. The downside of this system is that it can be used to take away people’s rights: those with very low scores can be blacklisted and are sometimes barred from flying on planes or riding trains. In addition, those with good scores are favored and treated better than those with worse scores, setting low scorers up for a downward spiral. Some even say that this system is used to give more power to the Chinese Communist Party (CCP), though the CCP claims it is meant to strengthen trust. Here I remind you once again that the system does not cover all of China, because it is implemented locally, not nationally.


What Does This Mean for Our Jobs?

Next to an AI uprising, the more realistic problem people worry about is job security. It is common knowledge that many jobs have been, and will continue to be, replaced by AIs. Physical jobs that do not require imagination will be far easier to hand over to AIs than jobs that require creative, decision-making expertise. To showcase this in her TED talk, AI researcher Janelle Shane showed her audience an AI trained to invent ice cream flavors. The results were mixed, with flavors like “Pumpkin Trash Break” and “Strawberry Cream Disease.” Tasty, right? Looking deeper, it will be hard to make AIs creative because they would have to know every detail that humans know. Humans intuitively know that a disease does not sound very delicious, but an AI has no idea of this unless we teach it every word or concept that should probably not be used in an ice cream flavor, as the sketch below illustrates.
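To see how a model can end up with names like these, here is a minimal sketch in Python. This is not Shane’s actual model (she used a neural network trained on thousands of real flavor names); it is a toy character-level Markov chain, and the short flavor list is invented for the example. The point it makes is the same: the generator only learns which characters tend to follow which, and has no concept of what any word means or tastes like.

```python
import random

# Invented stand-in training data (the real project scraped thousands of names).
flavors = [
    "strawberry cream cheese",
    "pumpkin pie crunch",
    "chocolate cookie crumble",
    "vanilla bean dream",
    "raspberry cheesecake swirl",
]

ORDER = 3  # each next character is chosen from what followed the last 3 chars

# Build the character-level transition table from the training names.
transitions = {}
for name in flavors:
    padded = "^" * ORDER + name + "$"
    for i in range(len(padded) - ORDER):
        key = padded[i:i + ORDER]
        transitions.setdefault(key, []).append(padded[i + ORDER])

def generate():
    """Sample one character at a time until the end marker (or a length cap)."""
    out = "^" * ORDER
    while not out.endswith("$") and len(out) < 60:
        out += random.choice(transitions[out[-ORDER:]])
    return out.strip("^$")

for _ in range(5):
    print(generate())
```

Because the chain only knows which characters tend to follow which, it happily splices fragments of real names into new words. Nothing in its statistics marks a combination as inedible-sounding, which is exactly how names like “Strawberry Cream Disease” come about.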


How Is This AI Misunderstanding Dangerous?

While obscure ice cream flavors will not hurt anyone, this misunderstanding between humans and AIs can still create serious problems. In higher-stakes situations, like self-driving cars, it can cause serious damage to everything in the car’s vicinity. Shane tells the story of a 2016 crash in which a driver used a car designed to self-drive on highways on an ordinary street. On a highway, a car only needs to see trucks from behind, so the AI in the car only recognized trucks from behind. On the ordinary street, a truck passed in front of the self-driving car, showing only its side. The AI didn’t know what it was seeing, so the car crashed straight into the truck. The humans who programmed the AI technically taught it to recognize trucks, but the AI lacked the human intuition to realize that a truck can also be seen from the side. This is one of the many advantages humans have over AIs. To be clear, I do not want you to worry about the safety of self-driving cars, because the people working on them are undoubtedly making sure this won’t happen again. Overall, AIs do exactly what we tell them to, but if we humans do not specify enough, AI failures can occur; the sketch below shows this failure mode in miniature.
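Machine learning researchers would describe this as a gap between the training data and the real world. The sketch below is purely synthetic and hypothetical: made-up two-dimensional “image features” and scikit-learn’s off-the-shelf LogisticRegression, nothing like a real car’s vision system. It shows the same failure shape, though: a classifier trained only on rear views of trucks versus open road can confidently call an unfamiliar side view “clear.”

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up 2-D "image features" for the sake of the sketch:
# rear views of trucks cluster in one region, open road in another,
# and side views of trucks fall somewhere the model has never seen.
rear_trucks = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(200, 2))
open_road = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(200, 2))
side_trucks = rng.normal(loc=[-3.0, 0.5], scale=0.3, size=(50, 2))

# Train ONLY on what a highway camera would see: rear views vs. open road.
X = np.vstack([rear_trucks, open_road])
y = np.array([1] * 200 + [0] * 200)  # 1 = obstacle ahead, 0 = clear
model = LogisticRegression().fit(X, y)

# Near-perfect on the kind of data it was trained on...
print("rear views flagged as obstacle:", model.predict(rear_trucks).mean())

# ...but side views sit outside the training distribution, and the model
# confidently labels most of them "clear" -- the crash in Shane's story.
print("side views flagged as obstacle:", model.predict(side_trucks).mean())
print("sample confidences that the way is clear:",
      model.predict_proba(side_trucks)[:3, 0].round(2))
```

On training-style data the toy model is essentially perfect, yet it labels nearly all side views as clear road, and with high confidence, because nothing in its training ever told it such inputs exist.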

AIs hold much potential for both good and bad. One thing we know is that AIs will undeniably play a leading role in our future; it’s just a matter of being careful in this high-risk, high-reward situation. We will need AIs in the future, but always remember that AIs will need us, too.
