AWS adds support for batch computing into its EKS Kubernetes managed solution

Batch processing, which involves running programs that execute with minimal human interaction, is widely regarded as a core capability of high-performance computing.

To address this need, Amazon Web Services Inc. recently announced the integration of AWS Batch with Amazon Elastic Kubernetes Service, making it easier for companies to run batch workloads in the cloud.

“We recently announced our batch support for EKS, our managed Kubernetes offering at AWS,” said Ian Colle (pictured), general manager of HPC at AWS. “And so batch computing is still a large portion of HPC workloads.”

Colle spoke with theCUBE industry analysts David Nicholson and Paul Gillin at SC22, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the use cases for batch computing and its connection to HPC. (* Disclosure below.)

Computing areas poised to take advantage of update

With batch processing, a computer executes a group of jobs as a unit without manual intervention, and the feature’s addition to EKS is poised to benefit niches such as autonomous vehicle simulation, according to Colle.

“We see lots of distributed machine learning, autonomous vehicle simulation and traditional HPC workloads taking advantage of AWS batch processing,” he said.

One trait that adds versatility and cost-effectiveness to AWS’ implementation is the ability to dynamically scale computing resources based on a workload’s queue depth. Customers can go from “seemingly nothing” to thousands of nodes, Colle explained.

“While they’re executing their work, they’re only paying for the instances while they’re working,” he said. “And then as the queue depth starts to drop and the number of jobs waiting in the queue starts to drop, then we start to dynamically scale down those resources. So it’s extremely powerful.”
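Colle’s description maps onto AWS Batch’s compute-environment model. Below is a minimal sketch of a CreateComputeEnvironment request that attaches Batch to an existing EKS cluster and allows capacity to scale from zero as queue depth grows; the cluster ARN, subnet, security group, role and names are placeholders, not values from the interview:

```python
import json

# Hypothetical request body for the AWS Batch CreateComputeEnvironment API,
# targeting an EKS cluster. All ARNs, IDs and names below are placeholders.
request = {
    "computeEnvironmentName": "hpc-eks-batch",  # placeholder name
    "type": "MANAGED",
    "eksConfiguration": {
        # Existing EKS cluster that Batch will schedule pods onto (placeholder ARN)
        "eksClusterArn": "arn:aws:eks:us-east-1:111122223333:cluster/hpc",
        "kubernetesNamespace": "batch-jobs",
    },
    "computeResources": {
        "type": "EC2",
        "minvCpus": 0,      # scale down to "seemingly nothing" when the queue empties
        "maxvCpus": 4096,   # upper bound reached as queue depth grows
        "instanceTypes": ["c5", "m5"],
        "subnets": ["subnet-aaaa1111"],        # placeholder
        "securityGroupIds": ["sg-bbbb2222"],   # placeholder
        "instanceRole": "ecsInstanceRole",     # placeholder
    },
}

# With boto3, this payload would be submitted as:
#   boto3.client("batch").create_compute_environment(**request)
print(json.dumps(request, indent=2))
```

Setting `minvCpus` to 0 is what lets the environment idle at no instance cost between submissions, while `maxvCpus` caps the scale-out that Batch performs as jobs accumulate in the queue.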

In terms of the physical location of the K8s workload and the batch processing cluster, it depends entirely on customer preference, according to Colle.

“We have workflows that are all entirely within a single region, so where they could have a portion of, say the traditional HPC workflow, within that region as well as the batch, and they’re saving off the results to a shared storage file system,” he stated. “Or you can have customers that have a kind of a multi-region orchestration layer.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the SC22 event:

(* Disclosure: This is an unsponsored editorial segment. However, theCUBE is a paid media partner for SC22. Neither Dell Technologies Inc., the main sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE

