In the battle between data growth and budget, an observability pipeline solution allows companies to gather data, manipulate it, and deliver it to the right place. This enables organizations to connect the dots by examining the full scope of the data to understand the true end-user experience.
The costs associated with maintaining that increased volume of data are prohibitive, and the tools available in traditional data architecture aren’t up to the demands of today’s data-driven world, according to Clint Sharp (pictured), co-founder and chief executive officer of Cribl Inc.
“Digital transformation, the pandemic and remote work are driving significantly greater data volumes,” Sharp said. “Vendors haven’t been aligned to giving customers the tools to reshape that data because they’re incentivized to get as much data into their platform as possible.”
Sharp spoke with analyst John Furrier in advance of the AWS Startup Showcase: “Data as Code — The Future of Enterprise Data and Analytics” event, an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio, airing April 5 at 10 a.m. PT. They discussed the current data landscape, observability and the new architecture of data. (* Disclosure below.)
Observability solves security and specialized data service needs
Bolstered by last year’s $200M+ funding round, Cribl is poised to grow its operation. Seeing a gap between the needs of businesses and the capabilities of the tools available, the startup has developed solutions to help companies lower costs and increase functionality, according to Sharp.
“We’re giving them the tools to be able to filter out noise and ways to be able to aggregate this high-fidelity telemetry data; we give them the tools to take back control of their data,” he said.
Security and IT professionals use data to understand the behaviors of malicious actors. But to establish the timeline of a vulnerability or gauge the depth of a security breach, they first need to cut through the noise to the high-fidelity telemetry underneath.
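The filter-then-aggregate pattern Sharp describes can be sketched in a few lines. This is a minimal illustration, not Cribl's implementation: the event fields and log levels below are hypothetical.

```python
from collections import Counter

# Hypothetical telemetry events; the schema is illustrative, not Cribl's.
events = [
    {"host": "web-1", "level": "DEBUG", "msg": "cache hit"},
    {"host": "web-1", "level": "ERROR", "msg": "auth failure"},
    {"host": "web-2", "level": "INFO",  "msg": "healthcheck"},
    {"host": "web-2", "level": "ERROR", "msg": "auth failure"},
]

def filter_noise(events, drop_levels=frozenset({"DEBUG", "INFO"})):
    """Drop low-value events before they reach a costly analytics tool."""
    return [e for e in events if e["level"] not in drop_levels]

def aggregate(events):
    """Roll the remaining high-fidelity events up into per-host counts."""
    return Counter(e["host"] for e in events)

kept = filter_noise(events)
print(aggregate(kept))  # per-host counts of the events that survived the filter
```

In a real pipeline the same two stages run continuously on streaming data; the point is that noise is discarded, and volume reduced, before anything is indexed downstream.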
“That’s why you’re starting to see the concepts like observability pipelines and observability lakes emerge, because they’re targeted at people who have a very unique set of problems that are not being solved by the general purpose data processing engine,” Sharp added.
With a multitude of companies already providing specialized solutions, the challenge today is using tools from a variety of vendors without having to maintain a legacy relationship with each one just to get support for the data service it performs.
“One of the biggest problems in the industry is that vendors come to customers with valuable products that make their lives better but require them to maintain a relationship with that vendor,” Sharp said. “What we offer them is the ability to reuse existing data collection technologies to use the right tool for the right job and really give them that choice.”
In addition to cutting the tie between companies and their data tool vendors, Cribl's solutions offer added functionality. With the capabilities inherent in observability platforms, companies can "go back in time and rehydrate data into a new tool and store data in open formats," Sharp explained.
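Storing data in an open format is what makes that "rehydration" possible: any later tool can read the records back and replay them. A minimal sketch, assuming newline-delimited JSON as the open format and an in-memory buffer standing in for object storage:

```python
import io
import json

# A stand-in "lake": newline-delimited JSON, an open format any tool can parse.
lake = io.StringIO()
for event in [{"ts": 1, "msg": "login"}, {"ts": 2, "msg": "logout"}]:
    lake.write(json.dumps(event) + "\n")

# Later, possibly from a different tool: "rehydrate" the stored events.
lake.seek(0)
replayed = [json.loads(line) for line in lake]
print(replayed)  # the original events, recovered without the original vendor
```

Because nothing in the stored bytes is vendor-specific, the replay step can target whichever analytics tool is the right fit at the time.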
Cribl allows ‘average users’ to perform data-engineering functions
As with any advance in technology, the increased capabilities come with a downside: Finding experts in a new field is the classic Catch-22. But Cribl’s products are built with this in mind.
“A key problem is that there’s a limit on the human resources that they have available, which is why we make the software easy to use and widely applicable to security professionals and tools administrators; our product is very approachable for them,” Sharp said.
For many users, data engineering isn't their main function: it is something they need, but not necessarily what they are trained to do. Because the software is designed to the abilities of the "average degree professional," new users can get started quickly, according to Sharp.
Beyond human resources, another challenge lies in the data platforms themselves. Cribl Edge moves processing capacity out of the central stream and onto the endpoints, letting companies "utilize unused capacity that you're already paying for to do the processing rather than having to centralize and aggregate all of this data," Sharp explained. "By routing data to multiple locations, we help them control costs by eliminating noise and waste."
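The routing idea, sending each event to every destination whose rule matches, can be sketched as a predicate table. The destination names and rules below are hypothetical examples, not Cribl Edge's API:

```python
# Hypothetical router: each destination pairs a name with a match predicate.
def route(event, destinations):
    """Return the names of every destination whose predicate matches."""
    return [name for name, pred in destinations if pred(event)]

destinations = [
    ("siem",    lambda e: e["level"] == "ERROR"),  # costly tool: errors only
    ("archive", lambda e: True),                   # cheap storage: everything
]

print(route({"level": "ERROR", "msg": "disk full"}, destinations))
print(route({"level": "INFO", "msg": "healthcheck"}, destinations))
```

Run at the edge, rules like these mean low-value events never leave the host for the expensive tool at all, which is where the cost savings Sharp describes come from.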
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the AWS Startup Showcase: “Data as Code — The Future of Enterprise Data and Analytics” event:
(* Disclosure: TheCUBE is a paid media partner for the AWS Startup Showcase: “Data as Code — The Future of Enterprise Data and Analytics” event. Neither Cribl Inc., the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)