
Crawl Databricks

Once you have configured the Databricks access permissions, you can establish a connection between Atlan and your Databricks instance. (If you are also using AWS PrivateLink or Azure Private Link for Databricks, you will need to set that up first, too.)

To crawl metadata from your Databricks instance, review the order of operations and then complete the following steps.

Select the source

To select Databricks as your source:

  1. In the top right corner of any screen, navigate to New and then click New Workflow.

  2. From the list of packages, select Databricks Assets, and click Setup Workflow.

Provide credentials

Choose your extraction method:

Direct extraction method

JDBC

To enter your Databricks credentials:

  1. For Host, enter the hostname, AWS PrivateLink endpoint, or Azure Private Link endpoint for your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Personal Access Token, enter the access token you generated when setting up access.
  4. For HTTP Path, enter the HTTP path of your Databricks SQL warehouse or interactive cluster.
  5. Click Test Authentication to confirm connectivity to Databricks using these details.
  6. Once successful, at the bottom of the screen click Next.
danger

Make sure your Databricks instance (SQL warehouse or interactive cluster) is up and running; otherwise, the Test Authentication step will time out.
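
If you want to verify these connection details outside Atlan before clicking Test Authentication, a minimal sketch using the databricks-sql-connector Python package might look like the following. The hostname, HTTP path, and token are placeholders, not values from this guide:

```python
# pip install databricks-sql-connector
from databricks import sql

# Placeholder values - use the same host, HTTP path, and personal access
# token you enter in the Atlan credential form.
with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abcdef1234567890",
    access_token="dapiXXXXXXXXXXXXXXXX",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT 1")  # succeeds only if the warehouse or cluster is running
        print(cursor.fetchall())    # [(1,)]
```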

AWS service principal

To enter your Databricks credentials:

  1. For Host, enter the hostname or AWS PrivateLink endpoint for your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Client ID, enter the client ID for your AWS service principal.
  4. For Client Secret, enter the client secret for your AWS service principal.
  5. Click Test Authentication to confirm connectivity to Databricks using these details.
  6. Once successful, at the bottom of the screen click Next.
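
If you want to confirm the service principal can authenticate outside Atlan, a minimal sketch using the Databricks Python SDK (OAuth machine-to-machine auth, with placeholder host and credentials) might look like this:

```python
# pip install databricks-sdk
from databricks.sdk import WorkspaceClient

# Placeholder values - use the same workspace host, client ID, and client
# secret you enter in the Atlan credential form.
w = WorkspaceClient(
    host="https://dbc-a1b2c3d4-e5f6.cloud.databricks.com",
    client_id="your-service-principal-client-id",
    client_secret="your-service-principal-client-secret",
)

# A successful call confirms the service principal can authenticate.
print(w.current_user.me().user_name)
```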

Azure service principal

To enter your Databricks credentials:

  1. For Host, enter the hostname or Azure Private Link endpoint for your Databricks instance.
  2. For Port, enter the port number of your Databricks instance.
  3. For Client ID, enter the application (client) ID for your Azure service principal.
  4. For Client Secret, enter the client secret for your Azure service principal.
  5. For Tenant ID, enter the directory (tenant) ID for your Azure service principal.
  6. Click Test Authentication to confirm connectivity to Databricks using these details.
  7. Once successful, at the bottom of the screen click Next.
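
As with the AWS option, you can sanity-check an Azure service principal outside Atlan. A minimal sketch with the Databricks Python SDK, using placeholder host and Azure AD values, might look like this:

```python
# pip install databricks-sdk
from databricks.sdk import WorkspaceClient

# Placeholder values - use the same workspace host, application (client) ID,
# client secret, and directory (tenant) ID you enter in the Atlan credential form.
w = WorkspaceClient(
    host="https://adb-1234567890123456.7.azuredatabricks.net",
    azure_client_id="00000000-0000-0000-0000-000000000000",
    azure_client_secret="your-client-secret",
    azure_tenant_id="11111111-1111-1111-1111-111111111111",
)

# A successful call confirms the service principal can authenticate.
print(w.current_user.me().user_name)
```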

Offline extraction method

Atlan supports an offline extraction method for fetching metadata from Databricks, using Atlan's databricks-extractor tool. You need to extract the metadata yourself first and then make it available in S3.

To enter your S3 details:

  1. For Bucket name, enter the name of your S3 bucket.
  2. For Bucket prefix, enter the S3 prefix under which all the metadata files exist. These include output/databricks-example/catalogs/success/result-0.json, output/databricks-example/schemas/{{catalog_name}}/success/result-0.json, output/databricks-example/tables/{{catalog_name}}/success/result-0.json, and similar files.
  3. (Optional) For Bucket region, enter the name of the S3 region.
  4. When complete, at the bottom of the screen, click Next.
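
Before running the workflow, you may want to confirm that the extracted files actually exist under the bucket and prefix you entered. A minimal sketch using boto3, with a hypothetical bucket name and the prefix layout described above, might look like this:

```python
# pip install boto3
import boto3

bucket = "my-atlan-databricks-metadata"   # hypothetical bucket name
prefix = "output/databricks-example/"     # the bucket prefix entered above

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

# List every metadata file the databricks-extractor tool produced under the prefix,
# e.g. output/databricks-example/catalogs/success/result-0.json.
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```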

Agent extraction method

Atlan supports using a Secure Agent for fetching metadata from Databricks. To use a Secure Agent, follow these steps:

  1. Select the Agent tab.
  2. Configure the Databricks data source by adding the secret keys for your secret store. For details on the required fields, refer to the Direct extraction method section above.
  3. Complete the Secure Agent configuration by following the instructions in the How to configure Secure Agent for workflow execution guide.
  4. Click Next after completing the configuration.

Configure the connection

To complete the Databricks connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might want to use values like production, development, gold, or analytics.

  2. (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.

    danger

    If you don't specify any user or group, nobody can manage the connection - not even admins.

  3. (Optional) To prevent users from querying any Databricks data, change Enable SQL Query to No.

  4. (Optional) To prevent users from previewing any Databricks data, change Enable Data Preview to No.

  5. (Optional) To limit how many rows users can retrieve in a query, change Max Row Limit or keep the default selection.

  6. At the bottom of the screen, click the Next button to proceed.

Configure the crawler

Before running the Databricks crawler, you can further configure it.

System tables extraction method

The system tables extraction method is only available for Unity Catalog-enabled workspaces. It extracts detailed metadata from Databricks system tables and supports all three authentication methods. To extract metadata from your Databricks workspace using this method, follow these steps:

  1. Set up authentication using one of the following: personal access token, AWS service principal, or Azure service principal.

  2. The default options work as is, but you can override the defaults for any of the remaining options:

    • For SQL warehouse, click the dropdown to select the SQL warehouse you want to configure.

    • For Asset selection, select a filtering option:

      • To select the assets you want to include in crawling, click Include by hierarchy and filter for assets down to the database or schema level. (This defaults to all assets, if none are specified.)
      • To have the crawler include Databases, Schemas, or Tables & Views based on a naming convention, click Include by regex and specify a regular expression - for example, specifying ATLAN_EXAMPLE_DB.* for Databases includes all the matching databases and their child assets. (See the regex sketch after this list.)
      • To select the assets you want to exclude from crawling, click Exclude by hierarchy and filter for assets down to the database or schema level. (This defaults to no assets, if none are specified.)
      • To have the crawler ignore Databases, Schemas, or Tables & Views based on a naming convention, click Exclude by regex and specify a regular expression - for example, specifying ATLAN_EXAMPLE_TABLES.* for Tables & Views excludes all the matching tables and views.
      • Click + to add more filters. If you add multiple filters, only assets matching all the filtering conditions you have set are crawled.

    • To import tags from Databricks to Atlan, change Import Tags to Yes. Note that you must have a Unity Catalog-enabled workspace to import Databricks tags in Atlan.
      Did you know?

      If an asset appears in both the include and exclude filters, the exclude filter takes precedence.
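
As a rough illustration of how the regex filters above behave (using the example patterns from this guide and made-up asset names), matching is a straightforward regular-expression test against each asset name, and exclude wins over include:

```python
import re

include_databases = re.compile(r"ATLAN_EXAMPLE_DB.*")   # Include by regex for Databases
exclude_tables = re.compile(r"ATLAN_EXAMPLE_TABLES.*")  # Exclude by regex for Tables & Views

databases = ["ATLAN_EXAMPLE_DB_SALES", "FINANCE_DB"]
tables = ["ATLAN_EXAMPLE_TABLES_RAW", "CUSTOMERS"]

# Databases matching the include pattern are crawled along with their child assets.
print([db for db in databases if include_databases.match(db)])  # ['ATLAN_EXAMPLE_DB_SALES']

# Tables and views matching the exclude pattern are skipped; the exclude filter
# takes precedence if an asset matches both filters.
print([t for t in tables if not exclude_tables.match(t)])       # ['CUSTOMERS']
```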

Incremental extraction Public preview

  • Toggle Incremental extraction for faster, more efficient metadata extraction.

JDBC extraction method

The JDBC extraction method uses JDBC queries to extract metadata from your Databricks instance. This was the original extraction method for Databricks and is supported only with personal access token authentication.

You can override the defaults for any of these options:

  • To select the assets you want to include in crawling, click Include Metadata. (This defaults to all assets, if none are specified.)
  • To select the assets you want to exclude from crawling, click Exclude Metadata. (This defaults to no assets, if none are specified.)
  • To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field.
  • For View Definition Lineage, keep the default Yes to generate upstream lineage for views based on the tables referenced in the views, or click No to skip view lineage.
  • For Advanced Config, keep Default for the default configuration or click Advanced to further configure the crawler:
    • To enable or disable schema-level filtering at source, click Enable Source Level Filtering and select True to enable it or False to disable it.

REST API extraction method

The REST API extraction method uses Unity Catalog to extract metadata from your Databricks instance. This extraction method is supported for all three authentication options: personal access token, AWS service principal, and Azure service principal.

While REST APIs are used to extract metadata, JDBC queries are still used for querying purposes.

You can override the defaults for any of these options:

  • Change Extraction method to REST API.
  • To select the assets you want to include in crawling, click Include Metadata. (This defaults to all assets, if none are specified.)
  • To select the assets you want to exclude from crawling, click Exclude Metadata. (This defaults to no assets, if none are specified.)
  • To import tags from Databricks to Atlan, change Import Tags to Yes. Note that you must have a Unity Catalog-enabled workspace to import Databricks tags in Atlan.
    • For SQL warehouse, click the dropdown to select the SQL warehouse you have configured.
Did you know?

If an asset appears in both the include and exclude filters, the exclude filter takes precedence.

Run the crawler

Follow these steps to run the Databricks crawler:

  1. To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
  2. Do one of the following:
    • To run the crawler once immediately, at the bottom of the screen, click the Run button.
    • To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.

Once the crawler has finished running, you will see the assets on Atlan's assets page! 🎉