AI vs. adversaries: How machine-leveraged attacks drive the need for advanced security

Over two trillion events triaged per day add up to a significant pool of knowledge for an artificial intelligence-based security system to learn from, says the chief scientist at cybersecurity company CrowdStrike Holdings Inc.

That vast amount of data, feeding a learning AI engine, is especially important as adversaries are now likely to leverage tools found directly on machines to create attacks, rather than drop files from an outside source.

“They figured if they drop a malware file on the machine, that’s an artifact, an indicator of compromise,” said Sven Krasser (pictured), senior vice president and chief scientist of CrowdStrike. “That can be detected.”

During Fal.Con 2022, industry analyst Dave Vellante spoke with Krasser in an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how a shift in adversarial behavior is making AI-derived insights increasingly important for security. (* Disclosure below.)

We catch them when they act

Bad actors now increasingly work with the tools they already find on machines, a shift in how attacks take place, Krasser explained. AI is well suited for protection in those cases because it can examine more facets and angles of activity than a human could comprehend.
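As a loose illustration of why combining facets matters (this is not CrowdStrike’s actual detection logic; all signal names, weights and thresholds below are invented), a detector that judges any single signal in isolation will miss attacks built from legitimate tools, while a combined score can flag them:

```python
# Toy illustration: individually benign-looking signals from built-in tools,
# combined into one risk score. Signal names, weights and the threshold
# are invented for this sketch.

SIGNAL_WEIGHTS = {
    "office_app_spawned_shell": 0.5,   # e.g., a document app launching a shell
    "encoded_command_line": 0.3,       # obfuscated arguments to a built-in tool
    "outbound_to_rare_host": 0.3,      # network contact with a rarely seen host
    "scheduled_task_created": 0.2,     # persistence via a native scheduler
}

def risk_score(signals):
    """Sum the weights of observed signals, capped at 1.0."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

def is_suspicious(signals, threshold=0.7):
    """Flag activity only when the combined evidence crosses the threshold."""
    return risk_score(signals) >= threshold

# Any one signal alone stays below the threshold...
assert not is_suspicious({"encoded_command_line"})
# ...but the combination of several weak signals crosses it.
assert is_suspicious({"office_app_spawned_shell",
                      "encoded_command_line",
                      "outbound_to_rare_host"})
```

The point of the sketch is the structure, not the numbers: no single event is a file-based indicator of compromise, yet the co-occurrence of several of them is exactly the kind of pattern a learned model can weigh at scale.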

“It becomes overwhelming for the human mind,” Krasser said. “It’s just so much complexity that a human can put together in their brain. With AI, you don’t have these limitations.”

AI can connect these signals autonomously in real time and stop threats. As oversight, humans review what is going on, giving the AI input and feedback on where it can improve. More importantly, AI handles the sheer volume of data, which is far beyond what humans can work through manually; humans need to bring heavy machinery, such as AI, to bear.

Keep in mind, adversaries also want to accomplish something. They have objectives, and those objectives can themselves be indicators of an attack.

“They’re not logging in just to do nothing,” Krasser pointed out. “AI crunches the big data and then the indicators, the knowledge that the AI generates, understanding the context of the situation, can feed into the indicators of attack that we’re evaluating to see if an adversary is acting on a specific objective … we have a good feedback loop between these two systems and they’re more tightly integrated now.”
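One way to picture the feedback loop Krasser describes (a hedged sketch, not CrowdStrike’s implementation; the objective steps, event names and thresholds are all invented) is an indicator of attack expressed as an ordered sequence of steps toward an objective, with a context score from the learned model pushing the final decision:

```python
# Toy sketch: an "indicator of attack" as an ordered sequence of steps
# toward an adversary objective, combined with a context score that a
# big-data model might supply. Everything named here is invented.

EXFIL_OBJECTIVE = ["remote_login", "file_discovery",
                   "archive_created", "large_upload"]

def matches_objective(events, steps):
    """True if every step appears, in order, within the event stream."""
    it = iter(events)
    return all(step in it for step in steps)

def evaluate(events, model_context_score, threshold=1.5):
    """Combine the sequence match with the model's context score."""
    base = 1.0 if matches_objective(events, EXFIL_OBJECTIVE) else 0.0
    return base + model_context_score >= threshold

events = ["remote_login", "process_start", "file_discovery",
          "archive_created", "large_upload"]
assert matches_objective(events, EXFIL_OBJECTIVE)
assert evaluate(events, model_context_score=0.8)      # context pushes it over
assert not evaluate(events, model_context_score=0.2)  # weak context, no alert
```

The same sequence of events fires or stays quiet depending on the model-supplied context, which mirrors the tighter integration between the two systems described in the quote.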

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of theCUBE @ Fal.Con 2022:

(* Disclosure: CrowdStrike Holdings sponsored this segment of theCUBE. Neither CrowdStrike nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
