Oracle simplifies Kubernetes deployment and operations in its cloud

Oracle Corp. today introduced new features in its cloud-based Oracle Container Engine for Kubernetes that it says improve the reliability and efficiency of large-scale environments built on the Kubernetes container orchestrator, while also simplifying operations and reducing costs.

The enhancements are aimed at companies that want to build and run cloud-native applications on Oracle Cloud Infrastructure using constructs such as microservices and agile DevOps techniques. “Kubernetes is notoriously complex not only to operate but to find the people with deep skill sets,” said Vijay Kumar, vice president of product marketing for application development services and developer relations at Oracle. “We’re dramatically simplifying the deployment and operations of Kubernetes at scale.”

Up to half off

Oracle supports Kubernetes across a wide range of runtime environments, from bare metal to serverless functions, Kumar said. The Oracle Container Engine for Kubernetes conforms to Cloud Native Computing Foundation standards and features a fully managed control plane. Oracle said customers can save up to 50% compared with running Kubernetes on competing public clouds, while also taking advantage of extra utilities outside of Kubernetes clusters. Oracle also offers consistent pricing across all global regions to reduce complexity, Kumar said.

“A big piece of Kubernetes is compute and on a computer-by-computer basis, we’re less than 50% of the list price of the lowest-cost region of other providers,” said Leo Leung, vice president of products and strategy at Oracle. “Then there are additional parts of Kubernetes that require compute to boot up the cluster and we’re lower cost there as well.”

The updates include virtual nodes, which enable organizations to run Kubernetes-based applications reliably at scale without the operational complexity of managing, scaling, upgrading and troubleshooting the underlying Kubernetes node infrastructure. Virtual nodes also provide pod-level elasticity with usage-based pricing.
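The point of this model is that workloads are still declared with ordinary Kubernetes manifests; what changes is that the operator no longer sizes, upgrades or patches node pools, and capacity is allocated per pod. As a minimal illustrative sketch (all names, images and values here are hypothetical, not drawn from the article or Oracle's documentation):

```yaml
# Hypothetical example: a standard Kubernetes Deployment.
# On a cluster using virtual nodes, the operator does not manage
# the underlying node infrastructure; capacity is provisioned
# per pod as the replica count scales.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web              # hypothetical application name
spec:
  replicas: 3                 # pod-level elasticity: scale this up or down
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
          resources:
            requests:         # usage-based pricing keys off pod resources
              cpu: "500m"
              memory: 512Mi
```

The same manifest would run unchanged on managed worker nodes; the difference with virtual nodes is only who operates the infrastructure underneath it.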

“Customers that are deep into Kubernetes may want to have control over worker nodes to get fine-grained control over the infrastructure, such as running all pods inside bare metal,” Leung said. “For the majority of customers, though, we believe serverless is the right answer. They don’t want knobs and dials. They want a service that’s going to scale.”

Full lifecycle management

The enhancements give organizations more flexibility to install and configure their chosen auxiliary operational software or related applications with complete lifecycle management covering deployment, upgrades, configuration changes, and patching. Add-ons include essential software deployed on the cluster such as CoreDNS and kube-proxy, as well as access to optional software operators such as Kubernetes dashboard, Oracle database and Oracle WebLogic.

Pod-level identity and access management controls are now available. The default worker node limit for newly provisioned clusters has been increased to 2,000, and support for low-cost spot instances has been added. Financially backed service-level agreements now cover uptime and availability for the Kubernetes API server and worker nodes.

With the option to scale to thousands of additional nodes, “you can have a fairly large application running on a Kubernetes cluster without having all the networking between clusters,” Kumar said.
