11 Common Ethical Issues in Artificial Intelligence

How we use AI applications can invite public scrutiny – particularly when it comes to ethics.

Artificial intelligence (AI) provides many new and exciting capabilities. We see AI in our daily lives in the form of virtual assistants, instructional programs and autonomous operations:

  • Self-driving cars? Check.
  • Instant translation of phrases into another language? Check.
  • Code generation? Check.

There are a ton of AI applications that we use professionally and personally. But how we use these applications sometimes brings the technology under fire, and one of the most prominent triggers of public scrutiny is ethics.

Legal and Ethical Considerations for AI

Artificial intelligence often finds itself on the receiving end of controversy. Conversations are happening all over the world about the ethical use of the technology. For instance, in what roles should AI replace humans? What is being done to protect personal data and guard against possible infringements on human rights? How should that data be used, and to what end should it be used?

The use of AI tools such as ChatGPT even helped provoke a writers' strike in 2023, causing significant interruptions in the entertainment industry. The primary question: even if AI can be used to write movie scripts, should it be?

Businesses must navigate the intricacies of AI tools, whose use can sometimes cross ethical lines and even raise legal concerns. Without clear and concise guidelines for how AI tools can and should be used, there is potential for misuse and legal trouble.

Legal Considerations for AI

In the United States, AI regulation is decentralized, which creates uncertainty about the legal implications of using artificial intelligence. While some rules regulate the outcomes AI produces, there is often confusion around the actual operational use of AI tools.

Here are some legal considerations:

  • Violations of intellectual property rights
  • Data privacy issues that violate General Data Protection Regulation (GDPR)
  • Data privacy issues that violate the California Consumer Privacy Act (CCPA)
  • Violations of employment regulations
  • Inappropriate usage of copyrighted data
  • Disputes concerning contract law when generative AI is used
  • Consumer confidentiality and issues with personally identifiable information (PII)
  • Inaccurate usage of generative AI output

11 AI Ethical Issues

Artificial intelligence has the potential to make your business more efficient. That’s a win. But increasing your output could come at a cost that outweighs any savings. Making the ethics of AI a focal point will help ensure your business remains in good standing from an operational, regulatory and reputational standpoint. Here are 11 ethical issues you should know about when it comes to AI.

Issue 1: Job Displacement

Job displacement is a concern that is frequently cited in discussions surrounding AI. There is fear that automation will replace certain aspects of jobs or entire job roles, causing unemployment rates to spike across industries. According to CompTIA’s Business Technology Adoption and Skills Trends report, 81% of U.S. workers have recently seen articles that focus on the replacement of workers with AI. The same report found that 3 out of 4 workers are very or somewhat concerned about how automated technologies will impact the workforce.

Issue 2: Privacy

Training AI models requires massive amounts of data, some of which includes PII. There is currently little insight into how that data is being collected, processed and stored, which raises concerns about who can access your data and how they can use it. There are other privacy concerns surrounding the use of AI in surveillance. Law enforcement agencies use AI to monitor and track the movements of suspects. While this capability can be highly valuable, many worry about its misuse in public spaces, infringing upon individuals' right to privacy.

Issue 3: Bias

Another ethical concern surrounding AI is bias. Although AI does not inherently come with bias, systems are trained on data from human sources, and deep learning can propagate those human biases through the technology. For instance, an AI hiring tool could screen out certain demographics if the data sets used to train its algorithm contained a bias against a particular group. This could also have legal implications if it leads to discriminatory practices.
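One common way to spot the kind of bias described above is to compare selection rates between demographic groups. The sketch below is purely illustrative: the data is hypothetical, and `disparate_impact_ratio` is a minimal implementation of the "four-fifths rule" heuristic used in employment-discrimination analysis, not a production fairness audit.

```python
# Illustrative sketch: checking a hiring model's outcomes for group bias
# via the disparate impact ratio ("four-fifths rule"). All data here is
# hypothetical and for demonstration only.

def selection_rate(decisions):
    """Fraction of candidates selected (1 = hired, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is a common red flag for potential bias."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical model decisions for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # 30% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    print("Potential bias: review the training data and model.")
```

A check like this only surfaces a symptom; addressing it means going back to the training data and model design, which is where the bias originated.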

Issue 4: Security

Security remains a top priority when it comes to AI (and really any branch of computer science). Lax security can have a wide-ranging impact. For example, AI is susceptible to malicious attacks that can compromise outcomes. The Cybersecurity and Infrastructure Security Agency (CISA) references documented instances of attacks leading to misbehavior in autonomous vehicles and the hiding of objects in security camera footage. Experts and governmental entities are urging more security measures to limit potentially negative effects.

Issue 5: Explainability

It’s not enough to simply put AI tools out into the world and watch them work. It can be particularly important to understand the decision-making process of certain AI applications. In some cases, it is difficult to understand how an AI tool arrived at its conclusions. This can have sizeable implications, especially in industries such as healthcare or law enforcement where influencing factors must be considered, and real human lives are at stake.
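At its simplest, explainability means being able to say which factors drove a decision. The sketch below assumes a linear scoring model, the easiest case to explain, with hypothetical feature names and weights; real deep-learning models are far more opaque and typically require dedicated explainability tools.

```python
# Illustrative sketch: explaining a decision from a simple linear model,
# where the score is a weighted sum of features. Feature names, weights
# and applicant values are hypothetical.

weights = {"age": 0.2, "income": 0.5, "credit_history": 1.3}
applicant = {"age": 0.4, "income": 0.9, "credit_history": 0.1}

# Each feature's contribution to the final score
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Report contributions from largest to smallest
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

For a linear model the explanation falls out for free, which is exactly why such models are often preferred in high-stakes settings even when a more complex model would be slightly more accurate.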

Issue 6: Accountability

The increasing prevalence of AI in all industries means that we use AI tools to make decisions daily. In cases where those decisions lead to negative outcomes, it can be difficult to identify who is responsible for the results. Are companies on the hook for validating the algorithms of a tool they buy? Or does responsibility fall to the creator of the AI tool? The quest for accountability can be a deep rabbit hole, making it hard to hold people and companies responsible for AI-driven outcomes.

Issue 7: Deepfakes

The usage of deepfakes creates ethical concerns. Deepfakes can now fool voice and facial recognition, allowing attackers to bypass security measures that rely on them. One study even showed that a Microsoft API was tricked more than 75% of the time using easily generated deepfakes. Other ethical challenges arise when it comes to impersonation. Using deepfakes to sway public opinion in political races can have far-reaching implications. There is also concern that deepfakes could be used to influence the stock market by making a CEO appear to make questionable decisions or take questionable actions. With little oversight and easy access to the software, the abuse of deepfakes presents a significant security gap.

Issue 8: Misinformation

Misinformation has a way of creating social divides and perpetuating false beliefs to the detriment of organizations and individuals. A topic that gained scrutiny amid the political upheaval of recent years, misinformation can sway public opinion and cause severe reputational damage. Once misinformation is widely shared on social media, it can be difficult to trace its origin and challenging to combat. AI tools have been used to spread misinformation while making it appear legitimate, when it is in fact not.

Issue 9: Exploitation of Intellectual Property

A recent lawsuit against OpenAI has brought attention to AI's exploitation of intellectual property. Several authors, including favorites such as Jodi Picoult and John Grisham, sued the company for copyright infringement, claiming their work was used without permission to train the models behind ChatGPT. The lawsuit further claims that this type of exploitation will endanger authors' ability to make a living from writing. This kind of exploitation has owners of intellectual property concerned about how AI will continue to impact their livelihoods.

Issue 10: Loss of Social Connection

While AI has the potential to provide hyper-personalized experiences by customizing search engine content based on your preferences and enhancing customer service through the use of chatbots, there is concern that this could erode social connection, empathy for others and general well-being. If all you see on social media are opinions that reinforce your own, you’re unlikely to develop a mindset that allows you to empathize with others and engage in actions for social good.

Issue 11: Balancing Ethics With Competition

New technologies present companies, tech giants and startups alike, with a particular challenge: there is a constant race to innovate. Often, success is determined by a company’s ability to be first to market with a particular technology or application. In that race, companies often aren’t taking the time to ensure their AI systems are ethically designed or contain stringent security measures.

What Steps are Being Taken to Regulate AI Technology?

There is still a fair amount of uncertainty surrounding AI technology and its applications. Currently, the United States doesn’t have comprehensive federal regulation dedicated to AI. We do, however, have a variety of existing laws that limit how AI can be used with regard to privacy, data protection and discrimination. Most companies ensure that they at least adhere to GDPR and CCPA standards. The White House has also issued the Blueprint for an AI Bill of Rights, which could pave the way for more comprehensive standards.

The European Union also recently set a precedent by passing the AI Act, the world’s first comprehensive AI law. China has developed its own set of AI rules, which continues to evolve. The United States is only in the beginning stages of AI regulation, and much of the legislation that has been passed applies only at the state level.

Can AI Ethics Be Standardized Across Cultures and Regions?

There is a possibility that AI ethics can be standardized across cultures and regions. UNESCO, an international organization dedicated to creating a peaceful and sustainable world, developed the first global standard for AI ethics in 2021. Its Recommendation on the Ethics of Artificial Intelligence outlines practices companies can undertake to ensure the ethical use of AI. Although these principles have not yet been put into practice by the international community at large, there is hope that most countries will adopt them in the near future.

The Future of AI Ethics

AI ethics still has a long journey ahead, and no one truly knows where we will land when it comes to governance. Many experts argue that ethical AI is essential for a responsible future where we can focus on issues such as social good, sustainability and inclusion. One Forbes article argues that it is “crucial that companies prioritize the implementation of ethical AI practices now as the potentially negative implication of the misuse of AI are becoming increasingly urgent.” Many believe it is incumbent upon companies and their stakeholders to ensure that internal policies governing AI technology are ethical from the ground up: from the actual design of AI architectures and machine learning algorithms through to the application and usage of the technology.

Although the topic of AI ethics comes with a heavy dose of uncertainty, there is positive movement towards regulating this powerful technology.

Want to Hear More About AI?

Join the CompTIA AI Technology Interest Group!
