Beyond speeds and feeds: Cyber resilience redefines storage industry

Three insights you might have missed from IBM Storage Summit

Storage platforms are becoming data platforms, capable of leveraging AI and data management technologies in a secure, accessible environment.

This was just one of several discussion points during theCUBE’s coverage of the IBM Storage Summit. The future of computing is being driven by data, and storage is being carried along in the wave. AI-powered solutions are transforming the storage industry into agile data platforms that must increasingly meet the demands of a distributed, hybrid infrastructure, where data processing is required wherever the information resides.

“There’s no company where all their data is in one platform or in one place,” said Rob Strechay, industry analyst for theCUBE, SiliconANGLE Media’s livestreaming studio, during the event’s analyst discussion. “I think some companies would love that to be the case to make money off it, but I think what you need to look at is how can you look at and approach all of the data to bring it to where you need it? How do you really get to that next level of data usability with the security, with the performance so that it’s the right place, transformed the right way?”

Strechay was joined in the analyst segment by Sarbjeet Johal and Dave Vellante. (* Disclosure below.)

Here’s theCUBE’s complete analyst video interview:

Here are three key insights you may have missed:

1. Storage is adopting an entirely new persona.

Storage has evolved from a time when enterprises placed high value on how rapidly data could be exported or imported and in what amounts. Now, the focus has moved on to storage as a data-enabling commodity with vital security and analytics capabilities.

“I think the speeds and feeds conversations are over in storage,” said Daneyand Singley (pictured, right), executive director of enterprise architecture and system sales at Mapsys Inc., in an interview with theCUBE. “We’re now on what else storage can do for us. That’s where this whole cyber resiliency and cyber vault strategy comes from with IBM.”

Here’s theCUBE’s complete video interview with Daneyand Singley, who was joined by Karen Hsu (pictured, left), vice president of storage ecosystem at IBM Corp.

Illustrating how storage enables data protection, Sam Werner, VP of IBM Storage product management at IBM, noted how the intersection of AI and security has advanced storage’s ability to provide timely data protection.

“We’re up to over 75% accuracy in detecting anomalies or potential problems within our storage, but now we’ve taken it even further,” said Werner, in an interview during the event. “We’re able to move into near real-time detection of anomalies in your I/O to potentially catch a ransomware attack before it spreads across your storage environment.”

Here’s theCUBE’s complete video interview with Sam Werner, who was joined by Scott Baker, chief marketing officer and VP of IBM Infrastructure Portfolio product marketing at IBM:
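IBM has not published the detection algorithm Werner describes, but the general idea of flagging I/O behavior that deviates sharply from a rolling baseline can be sketched in a few lines. This is a hypothetical illustration, not IBM’s implementation; the class name and threshold are assumptions:

```python
from collections import deque
from statistics import mean, stdev

class IOAnomalyDetector:
    """Hypothetical sketch: flag I/O samples that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-volume metric samples
        self.threshold = threshold           # z-score cutoff for "anomalous"

    def observe(self, metric: float) -> bool:
        """Record one sample (e.g., write throughput or block entropy);
        return True if it is anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 30:  # wait for a stable baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(metric - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(metric)
        return anomalous
```

Fed a steady stream of per-volume metrics, a detector like this stays quiet until a sample lands far outside the recent distribution, the kind of sudden shift a ransomware encryption burst would produce.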

A central theme from the IBM Storage Summit involved consolidation. Distributed computing architectures have raised the complexity level for many organizations, and there is interest in solutions, such as IBM’s software-defined data management offering Storage Fusion, which combines technologies in one accessible platform.

“Fusion kind of brings things together, also more from the OpenShift or the container platform, and it’s actually built on similar technologies with IBM Storage Scale, as well as IBM Storage Ceph,” said David Wohlford, worldwide senior product marketing manager of IBM Storage for AI and cloudscale at IBM, during an appearance on theCUBE. “What Storage Scale and IBM Storage Ceph really bring is the platform and bringing it together. We offer basically the file and object, bringing these two protocols together onto a single platform.”

Here’s theCUBE’s complete video interview with David Wohlford, who was joined by John Zawistowski, global systems solutions executive at Sycomp, A Technology Company Inc.:

2. Security is becoming an integral part of storage array technologies.

IBM scientists and engineers have been working on techniques that scan data for signs of disorder, a measure known as entropy, as it is delivered to a flash array. The goal is to assess how the incoming data is changing and to determine whether its degree of randomness represents a real attack. By dedicating processing power in the flash system controllers, IBM technology can evaluate entropy for each volume.
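The entropy measurement behind this technique is standard Shannon entropy: plaintext and typical application data score low, while encrypted or compressed data approaches the maximum of 8 bits per byte. A minimal sketch of the calculation (illustrative only; IBM’s controllers implement this in hardware, not Python):

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for uniform data, 8.0 for maximal randomness."""
    if not block:
        return 0.0
    total = len(block)
    counts = Counter(block)  # frequency of each byte value
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Repetitive data scores near 0; data with all byte values equally likely scores 8.
low = shannon_entropy(b"AAAA" * 256)           # → 0.0
high = shannon_entropy(bytes(range(256)) * 4)  # → 8.0
```

A sustained jump in per-volume entropy toward 8 bits per byte is the signal that the data being written may have become encrypted, whether legitimately or by ransomware, which is exactly the ambiguity Walls addresses below.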

“Now, it’s not necessarily that it’s a ransomware attack. It may be somebody’s turned encryption on in the application and the storage administrator doesn’t know,” said Andy Walls, fellow, chief technology officer and chief architect of IBM FlashSystems at IBM, in an interview with theCUBE. “But the storage administrator needs to know because his compressibility is changing. He needs to know that he might need to allocate more storage. So we’re detecting anomalies, as well as looking for ransomware attacks.”

Here’s theCUBE’s complete video interview with Andy Walls:

While scanning for ransomware attacks may be effective, it does not guarantee that an attack won’t be successful. In the event of a breach, organizations need to be able to respond decisively, and storage resiliency becomes a critical element in data recovery. IBM’s FlashCore Modules offer gapless data resiliency to better recover from cyberattacks.

“We want to make sure that … you’re going to have copies of data you can come back from,” said Ian Shave, director of worldwide distributed storage and data resilience sales at IBM, in conversation with theCUBE. “The new thing we’ve added is that we can discover when the threats are actually getting in, and I think this is the great combination of both the software of the array and … the elements that we’ve got in our FlashCore Modules.”

Here’s theCUBE’s complete video interview with Ian Shave:

3. AI is driving key changes in storage architectures.

As AI has come to dominate the tech landscape, storage providers are adapting their portfolios to meet increasing data demand. Christopher Maestas, worldwide executive solutions architect at IBM, described how the company is providing multiple data interfacing methods and cross-platform integrations to handle larger workload categories.

“We’ve started to see changes in workloads from media and entertainment, healthcare, life sciences [and] financial services sectors,” said Maestas, during an interview with theCUBE. “AI really has changed it, because it picked the middle of the road — not the itty-bitty files that you see or the large streaming data that you’ve been doing. We’re really seeing that data size change and, again, having to adapt to a different data size that we’ve not traditionally handled in the past.”

Here’s theCUBE’s complete video interview with Christopher Maestas:

The burgeoning field of AI has also brought challenges in dealing with a growing pool of unstructured data. IBM’s watsonx.data solution allows customers to build governance into storage engines for managing increasingly diverse sets of information.

“IBM Storage has a capability to cross through all the unstructured data,” said Vincent Hsu, fellow, chief technology officer and VP of IBM Storage at IBM, in a discussion during the event. “For IBM technology, you can see a single pane of glass to see the distribution of all the data, and then you can apply the policy on those data to allow us to be able to perform some particular function — for example, remove the PIIs or the hate speech information from the raw data sources.”

Here’s theCUBE’s complete video interview with Vincent Hsu:

IBM’s storage approach also seeks to eliminate pain points for data scientists. One of these roadblocks includes having to slog through hours of searching for the right data, which the company has addressed through tagging and labeling for seamless queries.

“The number one problem for the data scientists today is not how long my inferencing takes or not how long it takes to do model training; it’s can I get to the right data quickly?” said Pete Brey, global product executive, IBM Storage Fusion, at IBM, during an interview on theCUBE. “Some of the estimates are like 80% to 90% of their time is spent just trying to find the right data, and that’s the problem that we solve.”

Here’s theCUBE’s complete video interview with Pete Brey:
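The tagging-and-labeling approach Brey describes amounts to maintaining a searchable index from metadata tags to datasets. A minimal illustration of the idea, using invented names and not IBM Storage Fusion’s actual API:

```python
from collections import defaultdict

class DataCatalog:
    """Hypothetical sketch of tag-based data discovery."""

    def __init__(self):
        self._by_tag = defaultdict(set)  # tag -> set of dataset names

    def register(self, dataset: str, tags: set[str]) -> None:
        """Label a dataset with one or more tags."""
        for tag in tags:
            self._by_tag[tag].add(dataset)

    def find(self, *tags: str) -> set[str]:
        """Return datasets carrying ALL of the given tags."""
        if not tags:
            return set()
        return set.intersection(*(self._by_tag[t] for t in tags))

# Usage: a data scientist queries by tags instead of browsing storage paths.
catalog = DataCatalog()
catalog.register("scans_2023", {"medical", "imaging"})
catalog.register("claims_2023", {"medical", "tabular"})
matches = catalog.find("medical", "imaging")  # → {"scans_2023"}
```

An index like this turns hours of manual searching into a single intersection query, which is the pain point the 80% to 90% figure refers to.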

To watch more of theCUBE’s coverage of the IBM Storage Summit event, here’s our complete event video playlist:


(* Disclosure: TheCUBE is a paid media partner for the IBM Storage Summit. Neither IBM Corp., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
