Databricks And NVIDIA Forge Stronger Ties To Propel Data And AI Workloads Forward
In a significant move within the tech industry, Databricks, a leader in data and AI, has announced an expanded collaboration with NVIDIA, a giant in accelerated computing. The announcement was made at the GTC 2024 conference, marking a new chapter in the partnership between the two technology powerhouses. The focus of the collaboration is to enhance the Databricks Data Intelligence Platform through deeper technical integrations with NVIDIA's technologies.
The partnership is not new; NVIDIA recently participated in Databricks' Series I funding round. However, the latest developments promise to bring substantial benefits to customers by optimizing data and AI workloads on Databricks' platform using NVIDIA's accelerated computing and software. Ali Ghodsi, co-founder and CEO of Databricks, expressed enthusiasm about the partnership's evolution, highlighting its potential to drive value for customers through advanced workloads.

Jensen Huang, founder and CEO of NVIDIA, emphasized the importance of proprietary data as a crucial asset for intelligence creation in the AI era. He noted that accelerating data processing could significantly enhance AI development and deployment for enterprises seeking improved insights and outcomes.
Databricks' Data Intelligence Platform is rapidly becoming a go-to solution for organizations looking to develop generative AI solutions tailored to their specific business needs. Through its collaboration with NVIDIA, Databricks is focusing on model training and inference to push the boundaries of generative AI model development and deployment. The use of NVIDIA H100 Tensor Core GPUs for model training is a key aspect of this initiative, providing an efficient platform for customizing large language models (LLMs).
For model deployment, Databricks leverages NVIDIA's accelerated computing and software across its stack. A notable component is the NVIDIA TensorRT-LLM software used in Databricks' Mosaic AI Model Serving, ensuring cost-effective, scalable, and high-performance solutions. Mosaic AI's role as a TensorRT-LLM launch partner underscores the close technical collaboration between Databricks and NVIDIA.
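As a client-side sketch of what calling a model served this way can look like (the workspace URL, endpoint name, token, and payload shape below are illustrative assumptions, not details from the announcement), a request to a Mosaic AI Model Serving endpoint can be assembled with only the Python standard library:

```python
import json
import urllib.request


def build_invocation_request(workspace_url: str, endpoint_name: str,
                             token: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for a Databricks Model
    Serving endpoint. The URL pattern and chat-style payload are
    illustrative; consult your workspace's serving docs for specifics."""
    url = f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations"
    payload = {"messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",   # personal access token (placeholder)
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_invocation_request(
    "https://example.cloud.databricks.com",  # hypothetical workspace URL
    "my-llm-endpoint",                       # hypothetical endpoint name
    "dapi-example-token",                    # placeholder credential
    "Summarize last quarter's sales.",
)
print(req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) would return the model's response as JSON; it is omitted here since the endpoint is hypothetical.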
Databricks plans to integrate native support for NVIDIA accelerated computing into Photon, its next-generation vectorized query engine. This integration aims to enhance speed and efficiency for data warehousing and analytics workloads. Photon powers Databricks SQL, offering leading price-performance and total cost of ownership (TCO) in the serverless data warehouse space.
Empowering Machine Learning and Deep Learning
Machine learning (ML) and deep learning have been pivotal workloads on Databricks' platform. The company provides pre-built deep learning infrastructure incorporating NVIDIA GPUs, along with pre-configured GPU support in the Databricks Runtime for ML. This setup enables users to quickly start with the right infrastructure while maintaining consistency across projects. Additionally, Databricks supports NVIDIA Tensor Core GPUs across all major cloud platforms, facilitating high-performance training for ML workloads.
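In practice, "pre-configured GPU support" means a GPU-enabled ML runtime can be requested declaratively when a cluster is created. The sketch below shows the shape of such a cluster specification; the runtime version string and instance type are example values (the GPU builds and NVIDIA instance types actually offered vary by cloud and workspace):

```python
import json

# Illustrative Databricks cluster spec requesting a GPU build of the
# Databricks Runtime for ML. Values below are examples, not guaranteed
# to exist in any given workspace.
gpu_cluster_spec = {
    "cluster_name": "gpu-training",              # hypothetical name
    "spark_version": "14.3.x-gpu-ml-scala2.12",  # example GPU ML runtime build
    "node_type_id": "g5.4xlarge",                # example NVIDIA GPU instance (AWS)
    "num_workers": 2,
}

print(json.dumps(gpu_cluster_spec, indent=2))
```

Because the runtime image already bundles the NVIDIA drivers, CUDA, and common deep learning libraries, no further GPU setup is needed once such a cluster starts.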
The continued momentum between Databricks and NVIDIA aims to empower more organizations to develop next-generation data and AI applications with enhanced quality, speed, and agility. With over 10,000 organizations worldwide relying on its platform, including major names like Comcast and Condé Nast, Databricks stands at the forefront of data and AI innovation.
Databricks was founded by the original creators of the lakehouse architecture, Apache Spark™, Delta Lake, and MLflow. Headquartered in San Francisco with global offices, the company continues to lead in unifying and democratizing data analytics and AI.