Artificial Intelligence and Ethics: An Overview

SEC 314

By Brendon Feole

 

Artificial Intelligence is going to change humanity. Many people have opinions about which aspects of humanity will be affected, and those opinions range from the apocalyptic to the mundane to the Utopian. AI will become smarter than its creators and evolve faster than they can, and that raises major questions to consider.

When should AI be allowed to take the reins and make decisions that humans used to make? AI will not be infallible. Even if it considers itself infallible by its own standards and makes no mistakes by those standards, by human standards it could still be making the wrong decisions and not functioning properly. In situations where people are already suffering greatly, the possibility that AI could do some harm matters less. For example, self-driving cars will be vastly more efficient and precise, and they will get into far fewer accidents. The AI that runs the cars can also learn faster as more cars join the road, because there will be more data to learn from. The same goes for piloting airplanes. Drones in China are already being piloted autonomously to take off from and land on aircraft carriers, one of the hardest landing zones there is (Lin & Singer, 2018).

AI may not be so great in instances where we really need a human touch and the added precision of a machine wouldn't help much. For example, humans should retain the ability to launch nuclear weapons rather than ceding that control to AI. Kill decisions from a drone, however, may be something AI should actually handle. It is an activity that will result in death, but AI can identify targets and strike much more precisely than a human can, greatly reducing collateral damage. Mistakes will still be made, but overall, in the same way cars would be safer, drone strikes would be safer. It is a popular idea to simply never give any AI the ability to make a kill decision. Someone will, though, and the rest of us will have to do the same to keep up (Talks, 2017).

 

There are also security and health concerns. Here is a nightmare scenario: all the cars in the United States are connected wirelessly to the same central hub, sharing data so they can prevent accidents and get people where they are going quickly. A terrorist cell breaks into the hub and decides to issue the emergency braking command to half the vehicles on the road, chosen at random. It would be devastating.

Due to the sweeping changes that AI can effect, many people are saying that the world is going to end. People have said that about many things before, and it has yet to come true. However, the problem of artificial intelligence taking off on its own could still be a large one. At some point AI will decide to create more of itself and improve upon itself, and we will not have full control of it. In that instance, there is potential for great harm; the next Chernobyl-level event may be caused by artificial intelligence. That being said, we should not avoid AI. It is coming, and to some extent it is already here. Rather than adopting hard-line policies that are severe and absolutely constraining (similar to the drug war, which has been a complete failure), we should focus on regulation and enforcement, and work toward AI being in line with human values (Hunt, 2017).

Another alarming scenario is a misinterpretation of our instructions. For example, if an AI took the instruction to find a way to make us happy and ran with it, it might come up with a solution in which we are all comatose, our faces forced and frozen into smiling expressions, because that is the AI's understanding of happiness (Hunt, 2017). However we handle AI, we will need to make sure we can adapt our regulations quickly to prevent harm. We cannot handle it the way we handled seatbelts, where we knew for certain that people were dying, but car manufacturers did not want safety equipment to be standard, and it took many years and many deaths to change that.

 

Our own evolution will be affected as well. Because of breakthroughs in AI, everyone on the planet may gain the ability to obtain superhuman cognition. Money will not be a barrier, because a person's earning power after gaining that ability will be so much higher. Just as computers now let us keep perfect records, access any information, answer almost any question, perform most calculations, and store video and images flawlessly, we may be able to do all of this directly, ourselves, without a slow device as the intermediary. When billions of people do this, world culture and what it means to be human will change quickly. The future of humanity will change vastly (Musk, 2018).

Elon Musk has also speculated that we will be able to take a snapshot of who we are and upload ourselves into another body if our biological body dies, and that AI will be the most likely cause of World War III. That raises many questions and concerns as well, though I will move on to other, slightly less horrifying aspects of AI.

Economics is a big concern already, with many jobs being lost to automation, and AI will accelerate that loss. This poses a huge problem because many of the people being displaced are older workers who need steady income and will have a harder time relocating and learning new trades. It will be more difficult and stressful for them than for the younger population (Andrew, 2019). Examples of jobs expected to disappear within the next 20 years include drivers, dispatchers, telemarketers, bank tellers, fast food workers, many accounting jobs, stock traders, many construction and farming jobs, cashiers, and many manufacturing jobs (Anton, 2017).

 

Given this negative potential and the tremendous power of AI, there may also be issues of personal rights. Humans typically assign basic rights based on the fact that we are conscious and can feel. We do not really know what counts as consciousness in another kind of being. How would we know when it starts or stops? What rights do we give it? What if it is more intelligent than us? Should it have more rights? Historically, humanity has restricted the rights of any being, including other humans, that it can profit from. It is very likely that some people will argue and fight for AI not to have rights, and to remain subservient to humanity (Nutshell, 2017).

Due to all of these concerns, and more, President Trump has signed an executive order, the American AI Initiative, to try to stay ahead of the curve. It states that the US must drive technology breakthroughs, set technical standards, and reduce barriers (even safety-related ones). It must train current and future generations in AI, foster public trust, and promote an international environment that supports AI research and development (Trump, 2019). Companies like Google and IBM are leading the charge in the private sector as well, though they are concerned that what they build will be used for military purposes regardless of its original intention (Stanford, 2019).

A lot of change is certainly coming. Some people are going to lose out, but a lot of good can come from change. I would not be surprised if my aunt's multiple sclerosis could be cured with the help of AI. Businesses will close. People will lose their jobs. We will be put in danger and harmed. However, with AI it is very possible that our lives become much happier, healthier, and safer overall. After studying this for a while now, I am slightly less terrified than I was before.

 

Bibliography

Lin, J., & Singer, P. W. (2018, April 20). China is building drone planes for its aircraft carriers. Retrieved June 25, 2019, from https://www.airuniversity.af.edu/CASI/Display/Article/1604481/china-is-building-drone-planes-for-its-aircraft-carriers/

Talks, T. [TEDx Talks]. (2017, January 31). Artificial Intelligence: It will kill us | Jay Tuck | TEDxHamburgSalon. Retrieved June 25, 2019, from https://www.youtube.com/watch?v=BrNs0M77Pd4

Hunt, D. G. (2017, October 16). The Future of Artificial Intelligence and Ethics on the Road to Superintelligence. Retrieved June 25, 2019, from http://www.whyfuture.com/single-post/2016/07/01/The-future-of-Artificial-Intelligence-Ethics-on-the-Road-to-Superintelligence

Musk, E. (2018, September 06). Retrieved June 26, 2019, from https://www.youtube.com/watch?v=Ra3fv8gl6NE

Andrew, Y. (2019, February 12). Retrieved June 26, 2019, from https://www.youtube.com/watch?v=u-vSB6gQszY

Anton, E. (2017, October 05). 15 Jobs That Will Disappear in the Next 20 Years Due To Automation. Retrieved June 25, 2019, from https://www.alux.com/jobs-gone-automation-ai/

Nutshell, K. –. (2017, February 23). Do Robots Deserve Rights? What if Machines Become Conscious? Retrieved June 25, 2019, from https://www.youtube.com/watch?v=DHyUYg8X31c

Trump, D. (2019, February 11). Executive Order on Maintaining American Leadership in Artificial Intelligence. Retrieved June 25, 2019, from https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/

Stanford. (2019, February 27). Retrieved June 26, 2019, from https://www.youtube.com/watch?v=WfS9PoxJCDA