During my last trip to Poland, I went to a local farmers market that sold a variety of homegrown and homemade goods: produce, cheeses, clothing, toys, and more. I was shocked, though, to find multiple stands selling AI-generated prints on everything from fashion to toys. The prints didn’t have the typical AI errors like extra limbs or uncanny faces, but rather what you might call an “AI style.” AI-generated designs shift depending on their training sources and time periods; as AI advances and people create more art for it to draw on, the style it produces changes as well. At the time, the T-shirts I saw had prints that were overly smooth, fused 3D and 2D art styles, and contained unrealistic details. I found myself wondering whether these vendors were taking the place of people who put their heart and soul into handcrafted clothing. Is this what kids are going to grow up with? What will become of the future with AI on the rise? How can I stop this?
While we can’t necessarily control who uses AI, or where and when it is used, we can decide how we use it by weighing the moral dilemmas it creates.
“You’re entering a world where AI is in everything, and you’ll be using AI in your job. So, it’s important to have boundaries around that and to be able to think critically and to be unplugged,” said Laura DeNardis, a professor and endowed chair in tech, ethics, and society at Georgetown University. “But it’s also important to understand what the capabilities are of different AI platforms and how to ethically engage with them as we move through our lives.”
Job displacement, a loss of human connection, and misinformation are some of the major ethical problems surrounding the use of AI.
“There are more bots, many of them fueled by AI, with AI-generated images, with AI-generated voice patterns that copy people, and there are more of those than there are people,” DeNardis explained. “[Many of those] who we talk to in social media are not actually people, and they’re aided by AI. So, you have the problem of fake people.”
Despite these moral dilemmas, artificial intelligence can also be put to positive use. On a small scale, AI can create flashcards or quizzes to help students study. On a larger scale, it can help doctors read medical images and detect diseases faster.
“In some clinical settings, doctors use recording devices that not only record a transcript but synthesize the patient interaction,” said DeNardis. “It notifies the doctor of any drug interactions. It can do translation of languages.”
When researching the ethics of AI, there aren’t clear laws on what can or cannot be done, at least not currently in the United States. In school settings, guidelines on AI use are not clear-cut either. At Harvard, policies vary by college; the rules at the Harvard Business School differ from those at the Harvard Kennedy School. At Cornell, meanwhile, faculty members are asked to define their own expectations for generative AI use in their courses. In the absence of clear-cut laws, organizations such as UNESCO, a United Nations agency aimed at global peace and security, provide a general framework for approaching the ethics of AI. Its core recommendations include performing a risk assessment before deploying AI, protecting privacy throughout the AI lifecycle, and building public understanding of AI through education, civic engagement, and digital training.
It’s difficult to regulate AI because how it works and how people use it change on a day-to-day basis, and it would be almost impossible to predict the countless ways AI could be used unethically. That leaves much of the responsibility for AI ethics with its users. Staying informed, open-minded, and morally grounded is the key to navigating what lies ahead; otherwise, the problems surrounding AI will only continue to grow.
“We have to figure out as a society what the context is for privileging, for example, privacy over speech rights, or national security over economic security,” stated DeNardis. “That’s why I find this to be a very fascinating area. We need more people to go into AI policy and in governance, and have the next generation rise up to help solve some of these problems, and it will take a while.”
