This article provides a high-level overview of the Databricks architecture, including its enterprise architecture on AWS and Google Cloud.
Databricks is structured to enable secure cross-functional team collaboration while Databricks manages a significant number of backend services on your behalf, so you can stay focused on your data science, data analytics, and data engineering tasks.
The following diagram describes the overall architecture of Databricks on AWS and Google Cloud.
Databricks operates out of a control plane and a data plane.
Control plane and data plane
The control plane includes the backend services that Databricks manages in its own AWS account. Notebook commands and many other workspace configurations are stored in the control plane and encrypted at rest.
The data plane is where your data is processed.
For most Databricks computation, the compute resources are in your AWS account in what is called the Classic data plane. This is the type of data plane Databricks uses for notebooks, jobs, and for pro and classic Databricks SQL warehouses.
If you enable Serverless compute for Databricks SQL, the compute resources for Databricks SQL are in a shared Serverless data plane. The compute resources for notebooks, jobs, and pro and classic Databricks SQL warehouses still live in the Classic data plane in the customer account.
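The choice between the Classic and Serverless data planes surfaces in the SQL Warehouses API when you create a warehouse. The sketch below builds a request body for that API; the endpoint path and field names (`warehouse_type`, `enable_serverless_compute`) follow the public API, but verify them against the current API reference before use, and note that the host and token here are placeholders.

```python
# Sketch: choosing a data plane for a Databricks SQL warehouse via the
# SQL Warehouses API (POST /api/2.0/sql/warehouses). Field names are
# believed to match the public API but should be verified; host and
# token values are placeholders, not real credentials.
import json
import urllib.request


def warehouse_payload(name: str, serverless: bool) -> dict:
    """Build a create-warehouse request body.

    With enable_serverless_compute=True, compute runs in the shared
    Serverless data plane managed by Databricks; with False, compute
    launches in the Classic data plane inside your own cloud account.
    """
    return {
        "name": name,
        "cluster_size": "Small",
        "warehouse_type": "PRO",  # serverless requires a pro warehouse
        "enable_serverless_compute": serverless,
    }


def create_warehouse(host: str, token: str, payload: dict) -> None:
    # Hypothetical invocation; requires a real workspace URL and a
    # personal access token. Not executed in this sketch.
    req = urllib.request.Request(
        f"{host}/api/2.0/sql/warehouses",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())


payload = warehouse_payload("analytics-wh", serverless=True)
print(json.dumps(payload, indent=2))
```

Flipping `serverless` to `False` keeps the warehouse in the Classic data plane, which is the placement described above for notebooks, jobs, and pro and classic warehouses.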
Your data lake is stored at rest in your own AWS account, and job results reside in storage in your account.
For Google Cloud:
Your Google Cloud account manages the data plane and is where your data resides and is processed.
Your data is stored at rest in your Google Cloud account in the data plane and in your own data sources, not the control plane, so you maintain control and ownership of your data. Job results reside in storage in your account.
Interactive notebook results are stored in a combination of the control plane (partial results for presentation in the UI) and your cloud account storage. If you want interactive notebook results stored only in your cloud account storage, you can ask your Databricks representative to enable interactive notebook results in the customer account for your workspace.
E2 architecture for AWS
In September 2020, Databricks released the E2 version of the platform, which provides:
- Multi-workspace accounts: Create multiple workspaces per account using the Account API 2.0.
- Customer-managed VPCs: Create Databricks workspaces in your own VPC rather than using the default architecture in which clusters are created in a single AWS VPC that Databricks creates and configures in your AWS account.
- Secure cluster connectivity: Also known as “No Public IPs,” secure cluster connectivity lets you launch clusters in which all nodes have only private IP addresses, providing enhanced security.
- Customer-managed keys for managed services: Provide KMS keys to encrypt notebook and secret data in the Databricks-managed control plane.
Along with features like token management, IP access lists, cluster policies, and IAM credential passthrough, the E2 architecture makes the Databricks platform on AWS more secure, more scalable, and simpler to manage.
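Multi-workspace creation with the Account API 2.0 ties several of these E2 features together: a customer-managed VPC is referenced by a network configuration, and a customer-managed key for managed services is referenced by a key configuration. The sketch below assembles such a request body; the endpoint path and field names (`network_id`, `managed_services_customer_managed_key_id`) follow the public API, but treat them as assumptions to check against the current API reference, and note that every ID here is a placeholder.

```python
# Sketch: creating an E2 workspace with the Account API 2.0
# (POST /api/2.0/accounts/{account_id}/workspaces on the account console
# host). All configuration IDs below are placeholders you would obtain
# from earlier Account API calls; field names should be verified against
# the current API reference.
import json

ACCOUNT_HOST = "https://accounts.cloud.databricks.com"  # E2 account console
ACCOUNT_ID = "<your-account-id>"  # placeholder


def workspace_payload() -> dict:
    """Build a create-workspace request body for an E2 account."""
    return {
        "workspace_name": "analytics-prod",
        "aws_region": "us-east-1",
        # Cross-account IAM role registered with the Account API:
        "credentials_id": "<credentials-config-id>",
        # Root S3 bucket registered with the Account API:
        "storage_configuration_id": "<storage-config-id>",
        # Customer-managed VPC (omit to use a Databricks-managed VPC):
        "network_id": "<network-config-id>",
        # Customer-managed key for notebook and secret encryption in
        # the control plane:
        "managed_services_customer_managed_key_id": "<key-config-id>",
    }


payload = workspace_payload()
url = f"{ACCOUNT_HOST}/api/2.0/accounts/{ACCOUNT_ID}/workspaces"
print(url)
print(json.dumps(payload, indent=2))
# An actual call would POST this payload to `url` with account-admin
# credentials; it is not executed in this sketch.
```

Because the account can hold multiple such workspaces, the same network and key configurations can be reused across workspace creation calls, which is part of what makes the E2 architecture simpler to manage at scale.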