Artificial intelligence (AI) is revolutionizing the way businesses operate, allowing countless organizations to run more efficiently than ever before, and there is enormous opportunity for companies to streamline operations with the technology. However, there are also concerns about the security controls and measures behind AI for businesses that use it regularly.
Whether you are pro-AI or a bit concerned about it, one question MSPs should pose to themselves is, “Who is securing it?” As AI capabilities advance, so will potential threats and vulnerabilities. MSPs in particular need to get up to speed with the technology and know how to secure it effectively without falling behind in the marketplace.
“These technologies are very, very good. But the model of governance surrounding it must be very different,” said Ann Westerheim, president and founder of Ekaru, during a ChannelCon2023 session called “Cybersecurity Meets AI: Revolutionizing Digital Defense.” Westerheim says more critical thinking, caution and different kinds of guardrails and braking systems will be needed when it comes to AI.
How Risk Plays a Role in AI Adoption
When it comes to AI adoption, there is a risk factor that most organizations must consider: adopting any new technology carries risk, from initial deployment to maintaining its security. In some cases, though, the hype surrounding AI and publicly available tools like ChatGPT can overemphasize the risks at the expense of the benefits.
“There’s a whole community about existential risk—a real category of things to think about,” said panelist Karen Silverman, CEO and founder of The Cantellus Group. “If we do nothing, if we just go about business the way we’ve always gone about and consume products the way we always have, yeah, we have something to worry about. We’re all going to have to lean in and decide what kind of relationship we want to have with tech. How much authority we’re going to cede and how much we’re not. What are we going to insist humans must do? Yes, these will be tools that will occupy a lot of space, and we’re going to hand them a lot of authority. My call to action is to get actively involved to think hard about what it means as an active exercise, not a passive exercise.”
In cybersecurity it is common to assess a new tool's risk before adoption, which can require organizations to evaluate the overall third-party risk of implementing and deploying an AI tool. Implementing any new technology demands accepting a level of risk and placing proactive safeguards around it. However, it is becoming increasingly important for organizations to accept some level of risk with these new technologies if they want to remain competitive in the industry.
How Cybercriminals Leverage AI
Today’s cybercriminals are clever and will leverage any tactic available to exploit vulnerabilities for their own gain. AI has been used across several attack vectors, including phishing, automated bots, social engineering, data harvesting and spreading misinformation.
The ways cybercriminals leverage AI to commit their exploits differ. For instance, non-native English-speaking threat actors can now craft clearer, more concise phishing emails, which in turn can raise the number of successful attacks that derive from phishing.
“Machine learning is only as good as its data, and it's not even as good as that because machine learning is about generalization and compression. There are holes. There are ways to get exploited,” said Chris Hazard, CTO and co-founder of Howso (formerly Diveplane), during the session. “There are certain machine learning techniques that are very vulnerable to certain types of attacks just like software, the SQL injection attack. There's an equivalence of that for models. So, you have to make sure that you update your models and data the same way you update your software.”
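Hazard's analogy can be made concrete with a toy sketch. The scorer, keyword weights and threshold below are all hypothetical illustrations, not anything described in the session: the point is only that a model's decision, like a SQL query, can be manipulated through attacker-controlled input.

```python
# Toy illustration (hypothetical) of an evasion attack on a simple
# keyword-based spam scorer -- the "SQL injection for models" analogy.

SPAM_WEIGHTS = {"urgent": 2.0, "password": 2.0, "verify": 1.5, "click": 1.0}
HAM_WEIGHTS = {"meeting": -1.0, "thanks": -1.0, "schedule": -1.0}

def spam_score(text: str) -> float:
    """Average per-token score; a positive score suggests 'likely spam'."""
    tokens = text.lower().split()
    total = sum(SPAM_WEIGHTS.get(t, 0.0) + HAM_WEIGHTS.get(t, 0.0)
                for t in tokens)
    return total / max(len(tokens), 1)

phish = "urgent verify your password click here"
# Attacker pads the same payload with benign-looking tokens to dilute the score.
padded = phish + " thanks meeting schedule meeting thanks schedule"

print(spam_score(phish))   # well above a 0.5 threshold: flagged
print(spam_score(padded))  # below the threshold: same payload evades the model
```

The lure text is unchanged; only the surrounding tokens differ, which is why, as Hazard notes, models and their training data need the same patching discipline as software.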
The coexistence of AI and cybercrime presents a classic dual-use dilemma for security teams: the same technologies that promise advancements across sectors can also be harnessed for malicious purposes. With hackers adopting AI at an increasing rate, cybersecurity professionals need to stay on top of how cybercriminals are using it to carry out their exploits.
Beneficial Ways We Can Use AI to Combat Cybercrime
While criminals use AI, businesses and MSPs can also use it to combat cybercrime by implementing the same technology on the defensive side of security. MSPs can support their end-user clients with additional services that mitigate AI-driven attack methods, including phishing scanning, automated endpoint protection and patching, and integrated threat intelligence capabilities. Beyond security controls, one of the most effective ways to combat threats to organizations is awareness and education.
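As one illustration, the phishing-scanning idea above can be sketched as a simple heuristic check. The keyword list, function name and regexes here are illustrative assumptions, not a real product's detection logic:

```python
# Minimal sketch of heuristic phishing indicators (illustrative only).
import re
from urllib.parse import urlparse

URGENCY = {"urgent", "immediately", "suspended", "verify", "password"}

def phishing_indicators(subject: str, body_html: str) -> list[str]:
    """Return a list of heuristic red flags found in an email."""
    flags = []
    # Urgency language in the subject line is a common pressure tactic.
    if URGENCY & set(re.findall(r"[a-z]+", subject.lower())):
        flags.append("urgency language in subject")
    # Link text claims one domain while the href points to another -- a classic lure.
    for href, text in re.findall(r'<a href="([^"]+)">([^<]+)</a>', body_html):
        href_host = urlparse(href).hostname or ""
        if "." in text and text.strip().lower() not in href_host.lower():
            flags.append(f"link text '{text}' does not match destination {href_host}")
    return flags

flags = phishing_indicators(
    "Urgent: verify your account",
    '<p>Sign in at <a href="http://evil.example.net/login">bank.example.com</a></p>',
)
print(flags)  # both heuristics fire on this sample message
```

Production scanners layer many more signals (sender reputation, attachment analysis, ML classifiers), but even simple checks like these catch a meaningful share of lures.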
"Here's the reality at the end of the day—there are ways to prevent some of this stuff. There really are. And I think that's what we lead with and the education piece, MSPs and their clients,” Hazard said.
Ultimately, when used for proactive defense, AI can be a powerful tool to combat cyberattacks and their impact.
“That's how you cultivate trust, by being proactive. Cybersecurity and AI is obviously another piece to increase the payload capacity,” said Hazard. “You have technology controls that we talked about that don't rely on end-user hygiene. We've seen a lot of real success with that.”
How to Use AI to Support Client Security Solutions
In most cases, MSPs are expected to support client security as part of their overall service offerings. AI can be an essential tool for MSPs to do so, offering a host of capabilities that can support and even elevate security solutions for clients.
“The complacency that can set in with ‘I have this tool [AI]’ is that I can set and forget. I don’t know that cyber will ever get to the point of set and forget because we have a live, breathing adversary on the other end. The MSP understands the need, it’s the end user that doesn’t,” said Mike Hornsby, senior solutions engineer at BLOKWORX, during the session. “It’s important for MSPs that adopt AI as part of their support solutions for clients to ensure that they cultivate that trust. MSPs can also leverage additional AI-generated offers and machine-learning capabilities to create security-centric solutions for clients.”
Want to Learn More About AI Opportunities?
Join the CompTIA AI Technology Interest Group today!