Yin and yang. The positive and negative poles of a magnet. Ethics and artificial intelligence. Each comes with its natural pair. As with any such duality, examining both sides gives the fullest possible picture. The ethical problems created by the sheer existence of artificial intelligence cannot be ignored.
The nearly limitless range of applications for AI leaves enormous power in the hands of AI developers. Developers are forced to answer questions such as: How should a technology work? What should its limits be? What boundaries should developers intentionally build in to prevent a technology from performing certain operations? The list goes on. Given AI's far-reaching potential to intrude on privacy and daily life, several large companies have created AI ethics boards to guide how they use the technology, including Microsoft, Amazon, and Google (which just this week dissolved its AI ethics committee). But the question remains: how much influence do these ethics committees actually hold over the companies they were created to help shape? Are they truly committed to shaping ethics, or are they just a new HR ploy? More important than the ethics committees, though, is how the coders themselves form the ethical leanings that shape AI products.
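To make the idea of intentionally built boundaries concrete, here is a minimal, purely hypothetical sketch of what one might look like in practice: the model is simply never invoked for uses the development team has ruled out. Every name in it (BLOCKED_CAPABILITIES, handle_request, run_model) is illustrative, not drawn from any real product or library.

```python
# Hypothetical sketch: a developer-imposed boundary as a hard refusal,
# checked before the model ever runs. The capabilities listed are an
# assumption for illustration, not a real product's policy.

BLOCKED_CAPABILITIES = {
    "identify_person",      # no identification of specific individuals
    "infer_health_status",  # no medical inference from images
}

def handle_request(capability: str, payload: dict) -> dict:
    """Refuse up front if the requested capability is off-limits."""
    if capability in BLOCKED_CAPABILITIES:
        return {"status": "refused",
                "reason": f"'{capability}' is outside this product's allowed uses"}
    return run_model(capability, payload)

def run_model(capability: str, payload: dict) -> dict:
    # Stand-in for the actual model call.
    return {"status": "ok", "capability": capability}

print(handle_request("identify_person", {"image": "..."}))  # refused
print(handle_request("detect_motion", {"image": "..."}))    # ok
```

The point of the sketch is that the boundary is a deliberate human decision sitting in front of the model, not something the model decides for itself.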
The difficulty is that artificial intelligence is highly technical, requiring thousands of lines of painstakingly detailed code. Yet the very underpinning of AI is code written by ethically driven, yet ethically varying, humans. A look at the major schools of ethical thought shows there are dozens of ways a person can form his or her ethical foundation. Deontology (duty-based), consequentialism (consequence-based), and virtue ethics (character-based) are just three of the major contemporary schools, and each has its own subcategories.
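Those differences are not abstract; they surface as concrete numbers in code. The sketch below, with hypothetical names and values, shows how a single tunable threshold in a face-matching routine embodies an ethical trade-off between false accusations and missed matches.

```python
# Illustrative only: one tunable number encodes an ethical judgment.
# A lower threshold catches more true matches but misidentifies more
# innocent people; a higher one does the reverse. Which trade-off is
# "right" depends on the developer's ethical framework; nothing in
# the math decides it for them. Names and values are assumptions.

MATCH_THRESHOLD = 0.92  # chosen by a human, not by the model

def is_match(similarity_score: float,
             threshold: float = MATCH_THRESHOLD) -> bool:
    """Declare two faces 'the same person' above the threshold."""
    return similarity_score >= threshold

# A consequentialist weighing of harms might demand a high threshold
# in policing (a false accusation is costly) but accept a lower one
# for unlocking a phone (a false reject is a minor annoyance).
print(is_match(0.90))                  # False at the default threshold
print(is_match(0.90, threshold=0.85))  # True under a laxer policy
```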
This foundational issue gives rise to a multitude of downstream ethical questions. Should home security cameras be equipped to detect specific individuals? Should airports use face-scanning capabilities in security? How does AI enable oppressive governments to carry out human rights abuses? What are the issues with fully autonomous AI-powered weapons? The list of ethical questions facing programmers and AI companies is nearly limitless. Companies must address the issue at the most basic level in order to offer AI technology they can earnestly stand behind.
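To see how one of these questions might be answered in code, consider the home-camera example. The sketch below assumes one possible policy: recognition runs only for households that explicitly opt in, and only against faces enrolled with consent. Everything in it (ConsentRegistry, recognize_faces) is hypothetical, not any real camera vendor's API.

```python
# Hypothetical consent-gated recognition for a home security camera.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Tracks who has agreed to be recognized by this camera."""
    opted_in_household: bool = False
    enrolled_people: set = field(default_factory=set)

    def may_recognize(self, person_id: str) -> bool:
        return self.opted_in_household and person_id in self.enrolled_people

def recognize_faces(frame, registry: ConsentRegistry) -> list:
    # 'frame' is unused here; a stand-in list replaces a real detector.
    candidates = ["alice", "stranger_417"]
    # Names are reported only for consenting, enrolled people;
    # everyone else stays an anonymous "person detected" event.
    return [p for p in candidates if registry.may_recognize(p)]

registry = ConsentRegistry(opted_in_household=True, enrolled_people={"alice"})
print(recognize_faces(frame=None, registry=registry))  # ['alice']
```

Whether this is the right policy is exactly the kind of judgment the preceding questions leave to the developer; the code only makes the chosen answer explicit.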
So how do individuals protect themselves from the creeping invasion of AI into daily life? With highly intimate data, including biometric information, facial identification, and personal strengths and weaknesses, available to AI companies that don't always have customers' interests at heart, there is much at risk. At the governmental level, on March 15 two senators introduced the Commercial Facial Recognition Privacy Act to help protect consumers from AI privacy infringements. Individual states are also taking action: in 2018, California passed the California Consumer Privacy Act (CCPA), which significantly restricts how companies, AI companies included, collect and use consumer data. At the personal level, consumers must inform themselves about AI. How do the technologies currently in their lives affect them? Where does the data associated with those technologies go? How could it impact them in the future? As the capabilities of AI technology become more visible to consumers, it's time for them to wake up and educate themselves on its widespread implications.