Nebius has unveiled AI Cloud 3.5, the latest version of its full-stack cloud platform and a significant step forward for the company. The upgrade aims to streamline AI development workflows, enabling enterprises to roll out AI applications faster. The release introduces a range of improvements focused on simplifying infrastructure and boosting performance, scalability, and developer productivity.
Serverless Innovation to Simplify AI Workloads
The headline feature of the release is a set of serverless capabilities that let developers deploy and run workloads almost instantly. By removing manual infrastructure setup and configuration, the platform allows AI teams to move from experimentation through model training to production deployment in far less time.
Since infrastructure provisioning and runtime management are handled entirely by the Nebius platform, developers can shift their attention from routine operational tasks to building and fine-tuning AI-powered applications.
Expanded GPU Portfolio for High-Performance Computing
Nebius AI Cloud 3.5 also expands its compute options with the addition of the NVIDIA RTX PRO 6000 Blackwell Server Edition. The high-end GPU is built for a broad range of high-performance workloads, including AI inference, industrial robotics, physical AI simulation, visual computing, and drug discovery.
Streamlined Data Management and Migration
To meet growing demand for data movement across hybrid and multi-cloud environments, Nebius has launched a new Data Transfer Service. It reduces the operational burden of migrating and replicating data between external S3-compatible storage systems and Nebius Cloud Regions.
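The service itself is fully managed, but the core idea behind S3-to-S3 replication can be illustrated with a small, self-contained sketch. The listing format below (object key mapped to ETag) mirrors what S3-compatible APIs return; the function name and sample data are hypothetical, not part of the Nebius API.

```python
# Illustrative sketch: given object listings (key -> ETag) from a source
# S3-compatible bucket and a destination bucket, compute which objects a
# replication pass would still need to copy. In a real transfer service,
# these listings would be fetched via the S3 API rather than hard-coded.

def replication_plan(source: dict[str, str], dest: dict[str, str]) -> list[str]:
    """Return keys that are missing from dest or whose ETag differs."""
    return sorted(
        key for key, etag in source.items()
        if dest.get(key) != etag
    )

source_listing = {
    "models/llama.bin": "etag-a1",
    "data/train.parquet": "etag-b2",
    "data/eval.parquet": "etag-c3",
}
dest_listing = {
    "models/llama.bin": "etag-a1",      # already in sync, skipped
    "data/train.parquet": "etag-old",   # stale copy, needs re-upload
}

print(replication_plan(source_listing, dest_listing))
# ['data/eval.parquet', 'data/train.parquet']
```

Comparing ETags rather than blindly re-copying everything is what makes repeated replication runs cheap: only new or changed objects are transferred on each pass.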
Enhanced Cluster Management and Observability
The update also brings improvements to Managed Soperator, Nebius’s fully managed Slurm-on-Kubernetes solution. The configuration process has been redesigned to provide greater flexibility and control, allowing users to tailor Slurm clusters more precisely to their workload requirements.
In addition, enhancements to managed Kubernetes observability deliver deeper insights and improved control at the cluster level, enabling teams to monitor performance and optimize resource utilization more effectively.
Revamped Marketplace and Improved Administration
Nebius has redesigned its AI application marketplace to provide developers with quicker and more intuitive access to resources, models, and applications that are critical components of AI workflows today.
As a result, developers can locate and use the resources they need more efficiently, speeding up their development processes.
Beyond the marketplace, user administration and the role-based access control system have been enhanced, making it simpler for organizations to manage permissions across their teams. New public APIs for billing data will also make it easier for finance and operations teams to monitor and report on expenses.
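The announcement does not describe the billing APIs' schema, so the record fields below ("project", "service", "cost_usd") are assumptions for illustration only. The sketch simply shows the kind of cost-reporting helper a finance or operations team might build on top of programmatic billing data.

```python
# Hypothetical sketch: aggregate per-project costs from billing records of
# the kind a billing-data API might return. The field names are assumed for
# illustration and are not the actual Nebius billing schema.
from collections import defaultdict

def cost_by_project(records: list[dict]) -> dict[str, float]:
    """Sum cost_usd per project across all billing records."""
    totals: dict[str, float] = defaultdict(float)
    for rec in records:
        totals[rec["project"]] += rec["cost_usd"]
    return dict(totals)

records = [
    {"project": "research", "service": "gpu", "cost_usd": 120.0},
    {"project": "research", "service": "storage", "cost_usd": 8.5},
    {"project": "prod", "service": "gpu", "cost_usd": 300.0},
]

print(cost_by_project(records))
# {'research': 128.5, 'prod': 300.0}
```

Once expense data is available through a public API, reports like this can be generated on a schedule instead of being assembled by hand from console exports.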























