Fact vs. Fiction: AI Council Members Discuss ‘The Social Dilemma,’ Ethics and What’s Next

Artificial intelligence sometimes gets a bad rap in popular culture. From killer robots to deepfakes, it almost seems hard to believe that AI can do any good in the world, right? While some issues can be addressed, the reality is AI isn’t dangerous, it isn’t out to get you, and it’s not trying to brainwash you, according to a panel of leaders from CompTIA’s AI Advisory Council.

Four of the council members recently met to discuss “The Social Dilemma,” the Netflix documentary that raises questions about how AI is used in social media platforms. The documentary was eye-opening for many viewers, but it didn’t offer a complete view of AI, the panelists said.

As with many other innovations in history, concerns are mistakenly directed toward the technology itself rather than toward how we use it. The primary goal of most AI-related businesses is to make our lives better, creating innovative new solutions that can help consumers, businesses, and the world, the council members said.

“The documentary was so dramatic. They were really trying to show how to manipulate this teenage kid, depending on the time of day and the actions that this kid could take,” said Rama Akkiraju, IBM Fellow and distinguished engineer, IBM Watson, and co-chair of the AI Advisory Council.

Of note, the film demonstrated how AI in social media could find exactly the right time to promote a particular advertisement to the teenager, prompting the boy to react in a not-so-healthy way, she noted.

“It reminded me of when credit cards were new; it was said that everybody would go shopping beyond their means,” Akkiraju said. “That happened to some people, but over time society adjusted. We have to find the right balance of using AI for good purposes.”


Technology, Legal, Education Developments Underway

Kaladhar Voruganti, vice president of technology innovation and senior fellow at Equinix, noted that we’re still in the initial stages of AI innovation. Three facets of AI still need to develop in order to quell concerns and achieve maximum value, he said: the technology must mature, legal standards and procedures need to be created, and consumers and developers need to be educated on what AI can do.

“AI is growing at an exponential pace. The new algorithms and the new way that people are aggregating different types of data, it can be used for both good and bad. I think we need to educate both children and adults on all the ramifications,” Voruganti said. “Right now, we just click on all the OKs for convenience sake because we just want to access content. We don’t think through [everything]. I think there needs to be a system in place where individuals get the option of managing their own data.”

He cited the passage of the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act as examples of how legal frameworks are catching up with the technology.

Lloyd Danzig, chairman and founder of the International Consortium for the Ethical Development of Artificial Intelligence and co-chair of CompTIA’s AI Council, echoed the statement that consumer education is critically important to society having a better understanding of what AI can and can’t do.

“I spoke at a machine learning conference last year just after a very common at-home personal assistant was reported to have functionality that allowed the manufacturer to listen to commands being given by users. Someone raised their hand and said, ‘Why is there such an uproar? Of course, any natural language processing engine is going to have to have some human looking at some input/output to determine the accuracy,’” Danzig said. “My answer was most people don’t think that’s obvious. That’s the point. The mass market consumer buying this product does not consider that a foregone conclusion, as one might at a machine learning conference. That’s a gap in education in terms of how these things work.”

The success of AI in many applications will ultimately be determined by striking a balance between the monetization of data and appropriate legal and ethical protections for consumers, said Manoj Suvarna, business leader, HPC and AI (North America), at Hewlett Packard Enterprise.

“Corporations thinking about adopting AI, getting started, have to go with an assumption that just because you have data, doesn’t mean you have the right to use it,” he said. “Increasingly, the consumer is going to have more and more rights on what you can and cannot provide consent to. Keeping that in mind, companies should continue to evaluate these kinds of tools but have the required guard rails.”

The conversation about AI use and ethics will continue into 2021 and beyond, the executives said. It’s a topic worth discussing and debating, even as tech innovation continues to accelerate.

“There are very few objectively clear answers to any of these questions now, but it does seem that people are starting to reach some consensus about what are best practices, what are not best practices, and things to avoid,” said Danzig.

Note: The views and opinions expressed are those of the speakers in their capacity as CompTIA AI Advisory Council members, and do not necessarily reflect the official policy or position of their respective companies.

Interested in AI? Join CompTIA’s Artificial Intelligence Technology Interest Group and continue the conversation.
