Learn to Leverage AI Responsibly, Skillfully

AI is not new, it’s not one thing and it’s incredibly pervasive. Learn how to bring it into your business.
How to Use and Develop AI Responsibly EMEA

Is artificial intelligence (AI) the next big thing in the tech industry, like the internet or mobile devices, or the next overhyped tech that will fade from memory?

It could be both, according to Sana Khareghani with Responsible AI UK during a fireside chat with Carolyn April, CompTIA’s senior director of industry analysis, at the CompTIA EMEA Member & Partner Conference 2023 in London.

“Three things: AI is not new, it’s not one thing and it’s incredibly pervasive,” said Khareghani, who noted that the term AI was coined in the 1950s.

“Since then, there have been a lot of things that haven’t worked and fizzled out, but things that worked too. The iPhone is pocket AI, and the amount of AI each of us uses day-to-day to make our lives easier is insane,” she said. “Think of it this way, AI is all of the problems that haven’t yet been solved. Then they become software because they’re not a problem anymore.”

“Currently, generative AI is in the newest hype cycle, led by ChatGPT and other large language models,” Khareghani added.

“What made it better than everything else before it is it put at our disposal a librarian, or other persona, that has read everything the internet ever produced. You have an interactive ability to ask a question and get a response in any topic. For the first time, society can ask more questions than we’re ever used to dealing with,” Khareghani said.

But the speed at which ChatGPT achieved mass adoption (one million users in five days) sparked fears of an artificial superintelligence—most of which are unfounded, given how long AI has already been around.

April noted that Grand View Research expects spending on AI tools to increase to $407 billion by 2027, up from $137 billion last year.

“Everybody is thinking about it in a good way or a bad way, or both, but they’re thinking about it,” she said. “To me, AI is more of a feature than a product itself. Will we get to a point where we talk about AI as a simple underlying part of the devices and tools we use?”

Khareghani noted that acceptance and growth of AI will ultimately be judged by its use cases, and how it uses data to successfully solve problems.

“That’s the interesting way to talk about it, and sector-specific use cases at that in natural language,” she said, noting one example of how AI leveraged data regarding living patterns of certain communities in Africa to help one country there conduct a more accurate census. “We have to be very careful talking about using AI, use cases, augmenting AI, regulating AI. By itself, all of this is too high level. We need to be more pointed to gain a better understanding.”

Moving From Ethical AI to Responsible AI

“Business needs to shift its focus away from so-called ethical AI toward responsible AI because it’s a broader, more inclusive way of thinking,” said Khareghani.

“We are pivoting away from ethics not because they’re not important, but because responsibility includes them. When you talk ethics, people in tech or security will say, ‘That’s not my job.’ Responsibility includes more people that need to be in that conversation. Responsibility allows us to talk about how different parts of the organization are impacted. It needs to include the whole company to make it work,” she said.

“That can be achieved by bringing solution creators and problem owners together to talk. After all, if one side doesn’t understand the challenge, they can’t create the use cases to make a solution that is viable within an organization,” said Khareghani.

“We need to create the future vision together. Start with what is the problem, can we solve it with AI, what are the challenges to get there? Everyone owns different sets of expertise: policymakers, tech experts, front-end or customer-facing experts,” she said. “Sometimes these people never talk to each other in a company. Many times, when you bring them together, they will have trouble agreeing on anything, but they do agree on the merit of a solution that could help make new products or solutions or make the company more productive. The mission is not to redo anything that’s already happening but to point to pockets of work that are happening and to identify gaps and find funding for those gaps.”

Organizations don’t necessarily need new ways of thinking to incorporate AI into solving business problems; collaboration and cohesiveness are probably more important.

“AI technologies are not magic dust. Everything you know stands, we talk about this stuff as if we’ve reached the end, but we’re right at the cusp. There is an enormous amount of excitement. The potential of these technologies is enormous and the advances made are incredible,” said Khareghani.

MSPs can help businesses by working together to answer the following questions, then start a feasibility assessment to see if and how AI could be implemented:

  • What are the use cases?
  • What is the expertise challenge?
  • Do you have the right people, or do you have to bring in external partners?
  • How complex is the solution?
  • What are the ramifications?
  • Does it touch all your business?
  • What are the infrastructure needs?
  • What is the data viability?

“AI is able to make us individually more productive. And if a company wants a productivity tool, these tools are readily available. Introducing these tools and having them be accepted requires interaction with the people that work for you. Otherwise, what you get is the opposite, a ‘tick box.’ You spend a lot of money, but nobody is using it,” Khareghani said.

Perhaps most importantly, keep in mind that AI is a tool, but any output should still be vetted manually.

“Have the same ethics and rules for the thing that’s added on to what you would normally do. ChatGPT is there to help make you happy. If it has to fabricate an answer, it will,” said Khareghani, citing an example in which ChatGPT made up some law cases that lawyers used in court, and the opposition pointed out that the cases did not exist.

“Even if you’re not introducing tools, your people might already be using them. So, what are the rules in place to make sure they are used well, and limitations are understood? That’s not just within the tech part of the organization, but the entire organization,” she said.

