Run:ai certified to run the NVIDIA AI Enterprise software suite

TEL AVIV, Israel, September 20, 2022 /PRNewswire/ — Run:ai, the leader in compute orchestration for AI workloads, today announced that its Atlas platform is certified to run NVIDIA AI Enterprise, an end-to-end, cloud-native suite of AI and data analytics software optimized to enable any organization to use AI.

“Run:ai Atlas certification for NVIDIA AI Enterprise will help data scientists run their AI workloads more efficiently,” said Omri Geller, CEO and co-founder of Run:ai. “Our mission is to accelerate AI and bring more models into production, and NVIDIA is working closely with us to help us achieve this goal.”

As more companies adopt advanced machine learning and run larger models on more hardware, demand for AI chips continues to grow. GPUs are essential for running AI applications, and companies are turning to software to get the most out of their AI infrastructure and bring their models to market faster.

The Run:ai Atlas platform uses an intelligent Kubernetes scheduler and fractional GPU software technology to give AI practitioners seamless access to multiple GPUs, multiple GPU nodes, or fractions of a single GPU. This lets teams match the right amount of computing power to the needs of each AI workload, so they can do more on the same hardware. With these features, Run:ai’s Atlas platform enables enterprises to maximize the efficiency of their infrastructure, avoiding scenarios where GPUs sit idle or run at only a small fraction of their capacity.
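The scheduling idea described above — matching each workload to only the compute it actually needs — can be illustrated with a toy first-fit packer over fractional GPU requests. This is a simplified sketch for intuition only, not Run:ai’s actual scheduling algorithm; all names and values here are hypothetical.

```python
# Toy sketch of fraction-aware GPU scheduling: each job requests a fraction
# of one GPU, and jobs are packed first-fit so capacity is not wasted.
# Illustrative only -- NOT Run:ai's real scheduler.

def schedule(requests, num_gpus):
    """Assign each fractional request (0 < fraction <= 1) to a GPU, first-fit."""
    free = [1.0] * num_gpus          # remaining capacity per GPU
    placement = {}
    for job, fraction in requests:
        for gpu, capacity in enumerate(free):
            if capacity >= fraction:
                free[gpu] -= fraction
                placement[job] = gpu
                break
        else:
            placement[job] = None    # no GPU has room; job waits in queue
    return placement

jobs = [("train-a", 0.5), ("infer-b", 0.25), ("infer-c", 0.25), ("train-d", 1.0)]
print(schedule(jobs, 2))
# train-a, infer-b and infer-c share GPU 0 (0.5 + 0.25 + 0.25); train-d fills GPU 1.
```

Packing several small jobs onto one GPU, as above, is what lets the second whole-GPU job still find room — the efficiency gain the platform is built around.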

“Companies across industries are turning to AI to achieve breakthroughs that will improve customer service, drive sales and optimize operations,” said Justin Boitano, Vice President of Enterprise and Edge Computing at NVIDIA. “Run:ai’s certification for NVIDIA AI Enterprise provides customers with an integrated, cloud-native platform for deploying AI workflows with MLOps management capabilities.”

Run:ai creates fractional GPUs virtually within a GPU’s available framebuffer memory and compute space. Containers access these fractional GPUs, allowing different workloads to run in parallel on the same GPU. Run:ai runs on VMware vSphere and bare-metal servers, and supports various Kubernetes distributions.
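In Kubernetes terms, a container-accessed GPU fraction like the one described above is commonly expressed through pod metadata rather than a whole-GPU resource limit. The sketch below builds such a pod manifest in Python; the annotation key `gpu-fraction` and the scheduler name `runai-scheduler` are assumptions for illustration, not confirmed Run:ai API details.

```python
# Illustrative sketch: declaring a workload that needs only a fraction of a
# GPU as a Kubernetes pod manifest. The "gpu-fraction" annotation and the
# "runai-scheduler" name are hypothetical, used here only for illustration.

def fractional_gpu_pod(name: str, image: str, fraction: float) -> dict:
    """Build a pod manifest requesting a fraction of a single GPU."""
    if not 0 < fraction <= 1:
        raise ValueError("fraction must be in (0, 1]")
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": name,
            # Hypothetical annotation read by a fraction-aware scheduler.
            "annotations": {"gpu-fraction": str(fraction)},
        },
        "spec": {
            "schedulerName": "runai-scheduler",  # assumed scheduler name
            "containers": [{"name": "workload", "image": image}],
        },
    }

pod = fractional_gpu_pod("inference-job", "my-registry/model-server:latest", 0.5)
print(pod["metadata"]["annotations"]["gpu-fraction"])  # → 0.5
```

Two such pods with `gpu-fraction: "0.5"` could then share one physical GPU’s memory and compute, which is the parallel-workloads-per-GPU behavior the paragraph describes.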

This certification is the latest in a series of Run:ai collaborations with NVIDIA. In March, Run:ai completed a proof of concept that enabled multi-cloud GPU flexibility for enterprises using NVIDIA GPUs in the cloud. This was followed by the company’s full integration with NVIDIA Triton Inference Server. And in June, Run:ai worked with Weights & Biases and NVIDIA to give users access to NVIDIA-accelerated computing resources orchestrated by Run:ai’s Atlas platform.

About Run:ai

Run:ai’s Atlas platform brings cloud-like simplicity to AI resource management, giving researchers on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system, which includes a workload-aware scheduler and an abstraction layer, helps IT simplify AI implementation, increase team productivity, and take full advantage of expensive GPUs. With Run:ai, enterprises streamline the development, management, and scaling of AI applications on any infrastructure, including on-premises, edge, and cloud.

SOURCE Run:ai