As evidenced by the example of ChatGPT, artificial intelligence is advancing in unprecedented directions to solve exciting new problems.
But, as AI is being pointed toward critical cybersecurity operations, do the gains outweigh the potential risks and concerns?
“Absolutely, you should be worried,” said Andy Thurai (pictured), vice president and principal analyst at Constellation Research Inc. “The problem people don’t realize is that ChatGPT, being a new, shiny object, is all the craze right now. But the problem is that most of the content that’s produced, either by ChatGPT or others, comes with no warranties or accountability whatsoever. If it is content, it’s OK. But if it is something like code that you use, then it’s mostly not.”
Thurai spoke with theCUBE industry analyst Dave Vellante at CloudNativeSecurityCon, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the potential ethical concerns of combining AI and cybersecurity and their ramifications.
Digging deeper than the surface
While many in the software and computing industries are overjoyed at what AI offers well-meaning developers and engineers, there’s another, more sinister side: These same tools can make malicious acts, such as spear phishing, far easier to carry out, Thurai pointed out.
“Hackers are already using these tools to individualize content, and it’s not just ChatGPT,” he stated. “One of the things that you are able to easily identify when you’re looking at the emails that come in from a phishing attack is you look at some of the key elements in it, whether it’s human or automated AI content. But these tools have mastered the individualization of content to mimic natural human behavior.”
It isn’t all doom and gloom, however, as AI is already showing the positive benefits it can bring to cybersecurity, especially in an age where pattern-based malware scanning isn’t nearly as effective as it once was, according to Thurai.
“AI is great at detecting things that are anomalies and reducing the noise so you can escalate only the things you’re supposed to,” he said. “AIOps is a great use case in the security areas, which they’re not using to an extent yet. Incident management is another area.”
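The anomaly-detection approach Thurai describes can be illustrated with a minimal sketch: score incoming security event counts against a baseline and escalate only statistical outliers, suppressing routine noise. The function name, threshold, and sample data below are illustrative assumptions, not anything from a specific product.

```python
# Minimal sketch of anomaly-based alert triage: flag only event counts
# that deviate sharply from the baseline (simple z-score test).
# Threshold and sample data are illustrative assumptions.
from statistics import mean, stdev

def escalate_anomalies(event_counts, threshold=2.0):
    """Return indices of counts more than `threshold` standard
    deviations above the mean of the series."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(event_counts)
            if (c - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts: the spike at index 5 is
# escalated; the routine background activity is suppressed.
counts = [12, 9, 11, 10, 13, 240, 11, 12]
print(escalate_anomalies(counts))  # → [5]
```

Real AIOps platforms use far richer models (seasonality, multivariate signals, learned baselines), but the core idea is the same: reduce the alert stream to the deviations a human should actually look at.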
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of CloudNativeSecurityCon: