Nvidia on Wednesday announced the acquisition of AI-centric Kubernetes orchestration provider Run:ai in an effort to bolster the efficiency of GPU-based compute clusters.

Run:ai's platform provides a central user interface and control plane for working with a variety of popular Kubernetes variants. That makes it a bit like Red Hat's OpenShift or SUSE's Rancher, and it features many of the same tools for managing namespaces, user profiles, and resource allocations.
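To illustrate the kind of resource allocation such platforms sit on top of, here is a minimal, standard Kubernetes pod spec requesting a GPU via the `nvidia.com/gpu` extended resource. This is generic Kubernetes, not Run:ai's own configuration format, and the names and image tag are illustrative:

```yaml
# Plain Kubernetes pod spec requesting a single GPU; all names are examples.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
  namespace: ml-team          # namespaces are one of the constructs these platforms manage
spec:
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.03-py3   # an NGC container image
    resources:
      limits:
        nvidia.com/gpu: 1     # scheduler places the pod on a node with a free GPU
```

Orchestration layers like Run:ai add policy on top of this mechanism, for example quotas and fair-share scheduling across teams, so idle GPUs can be reclaimed rather than sitting pinned to one namespace.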
According to Nvidia, Run:ai's platform already supports its DGX compute platforms, including Superpod configurations, the Base Command cluster management system, the NGC container library, and its AI Enterprise suite. Meanwhile, subscribers to Nvidia's DGX Cloud will gain access to Run:ai's feature set for their AI workloads, including large language model deployments.

NIMs (Nvidia Inference Microservices) are essentially pre-configured and optimized container images containing the model, whether open source or proprietary, along with all the dependencies necessary to get it running.
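Because a NIM is just a container image, deploying one on a Kubernetes cluster looks like any other GPU workload. The sketch below is illustrative only: the image path and model name are hypothetical placeholders, not a real NIM registry entry:

```yaml
# Illustrative sketch of running a NIM-style container on Kubernetes.
# The image path is a hypothetical placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference
spec:
  containers:
  - name: nim
    image: nvcr.io/nim/example-org/example-llm:latest   # hypothetical NIM image
    ports:
    - containerPort: 8000   # NIMs serve inference over an HTTP endpoint
    resources:
      limits:
        nvidia.com/gpu: 1
```

Since the model weights and dependencies are baked into the image, the cluster scheduler only needs to find a node with a free GPU; no per-node model setup is required.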