Real-time platforms change the game for on-prem, cloud and edge data services

Data is the new oil. And the way companies access, manipulate and act upon their available data resources in real time is opening up new customer touchpoints and avenues for profitability.

Hazelcast Inc., maker of a real-time stream processing platform, is working to advance this space on-premises, at the edge and as a fully managed cloud service.

“One of our customers … can now basically originate a loan while the customer is banking,” said Manish Devgan (pictured), chief product officer of Hazelcast. “So you are at an ATM machine and you swipe your card, and you are taking 50 Euros out. And at that point, they can actually originate a custom loan offer based on your existing balance, your existing request and your credit score.”

Devgan spoke with theCUBE industry analyst Paul Gillin and guest analyst Keith Townsend in a conversation at last year’s KubeCon + CloudNativeCon Europe event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the solutions that companies like Hazelcast provide for different data states using Kubernetes. (* Disclosure below.)

Manipulating streaming and stored data

Different from a traditional database, Hazelcast works with data-at-rest and data-in-motion. The platform’s scale-out system runs on Kubernetes for advantages like resiliency.

For developers, Hazelcast’s entire platform is API-friendly, offering flexibility in querying data, building integrations, and accessing both streaming and stored data, according to Devgan.

“You see a lot of streaming platforms out there, which just do streaming,” Devgan explained. “But if you’re an application developer, you have to basically make a call-out to a streaming platform to do streaming analytics and then do another call to get the context of that.”

The platform enables IT and dev teams to access multiple data types in one call, freeing them to focus on building out their applications. The fundamental piece powering this capability at Hazelcast is its use of random access memory at scale.
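The pattern Devgan describes — enriching a live event with stored context in a single step rather than two separate call-outs — can be sketched in plain Python. This is an illustrative toy, not Hazelcast's API: the `accounts` dict stands in for a RAM-based key-value store, and `handle_swipe` plays the role of a streaming job that joins each incoming event with its context, mirroring the ATM loan-offer example from the interview. The eligibility rule is hypothetical.

```python
# Toy sketch of stream enrichment: a live event (an ATM swipe) is joined
# with stored context (balance, credit score) in one step. The dict stands
# in for a RAM-based key-value store; none of this is the Hazelcast API.

accounts = {
    "acct-42": {"balance": 1200.0, "credit_score": 710},
}

def handle_swipe(event):
    """Enrich a swipe event with account context and decide on a loan offer."""
    ctx = accounts.get(event["account_id"])
    if ctx is None:
        return None  # no stored context for this account
    # Hypothetical eligibility rule, purely for illustration.
    eligible = ctx["credit_score"] >= 650 and event["amount"] <= ctx["balance"]
    return {
        "account_id": event["account_id"],
        "amount": event["amount"],
        "loan_offer": eligible,
    }

offer = handle_swipe({"account_id": "acct-42", "amount": 50.0})
print(offer["loan_offer"])
```

The point of the sketch is the shape of the call: the application hands over one event and gets back an enriched result, instead of calling a streaming engine and then a separate store for context.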

“We have a scale-out RAM-based server,” Devgan stated. “That’s where you get the low latency from. In fact, last year we did a benchmark. We are able to process a billion events a second with 99% of the latency in under 30 milliseconds.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the KubeCon + CloudNativeCon Europe event:

(* Disclosure: TheCUBE is a paid media partner for the KubeCon + CloudNativeCon Europe event. Red Hat Inc., the main sponsor for theCUBE’s event coverage, the Cloud Native Computing Foundation, or other sponsors do not have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE

