Troubleshooting Databricks errors
When setting up or using a Databricks-backed context repository in Context Engineering Studio, you may encounter errors related to authentication, Genie Space deployment, or Unity Catalog access. This page covers the most common Databricks errors and how to fix them.
OAuth token failure
Users can't load Databricks assets in the Configure tab.
Cause
The service principal client_id or client_secret configured in the Databricks connection is incorrect or expired.
Solution
Regenerate the client secret in the Azure or Databricks service principal settings and update the connection credentials in Atlan.
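You can verify the new credentials outside Atlan by requesting a token directly from the workspace. A minimal sketch, assuming the workspace uses Databricks OAuth machine-to-machine (M2M) authentication via the /oidc/v1/token endpoint; the workspace URL, client ID, and secret below are placeholders:

```python
import base64
import urllib.parse
import urllib.request

def build_token_request(workspace_url: str, client_id: str, client_secret: str):
    """Build an OAuth client-credentials token request for a Databricks workspace.

    The /oidc/v1/token endpoint and 'all-apis' scope follow Databricks'
    OAuth M2M flow; adjust if your workspace is configured differently.
    """
    url = f"{workspace_url.rstrip('/')}/oidc/v1/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "all-apis",
    }).encode()
    auth = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {auth}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

# A 200 response containing an access_token confirms the credentials work;
# a 401 means the client_id/client_secret pair is wrong or expired.
# resp = urllib.request.urlopen(
#     build_token_request("https://adb-1234567890.azuredatabricks.net", "<client_id>", "<client_secret>"))
```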
YAML validation error after generation
The Build tab shows an inline error with a line number and a fix hint.
Cause
The generated Metric View YAML contains a structural issue. A common Databricks-specific cause is dollar signs inside the YAML content conflicting with the $$...$$ DDL wrapper Databricks uses. Other causes include:
- Nested aggregate definitions
- Sample values containing special characters (HTML entities or regex patterns)
- Column names that don't match the actual schema
Solution
- Review the error location shown in the Build tab.
- Use the Autofix button if available, or edit the YAML directly.
- Escape or remove any dollar signs in sample values or descriptions.
- Compare incorrect column references with the actual table schema in the catalog.
- Re-deploy after correcting the YAML.
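The dollar-sign step can be automated before re-deploying. A minimal sketch, assuming a plain-text replacement of `$` is acceptable for your descriptions and sample values (the replacement string is a placeholder; pick whatever reads naturally in your content):

```python
def strip_dollar_signs(yaml_text: str, replacement: str = "USD ") -> str:
    """Replace dollar signs in Metric View YAML before it is wrapped in the
    $$...$$ DDL block. A stray '$' inside the content can terminate the
    dollar-quoted wrapper early and break the generated DDL."""
    return yaml_text.replace("$", replacement)

yaml_text = "description: revenue in $ per active user\n"
cleaned = strip_dollar_signs(yaml_text)
# 'cleaned' no longer contains '$', so it is safe inside a $$...$$ wrapper
```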
Chat not available before deploy
The Chat & build tab is greyed out or unavailable.
Cause
For Databricks, Chat & build requires a Genie Space to exist. The Genie Space is only created after the first successful deployment.
Solution
Deploy the context repository first. See Deploy to Databricks.
Genie Space creation failed
Deployment fails with a Genie-specific error message.
Cause
Either the Genie feature isn't enabled on the Databricks workspace, or the service principal doesn't have sufficient permissions to create a Genie Space.
Solution
- Enable Genie in your Databricks admin settings.
- Confirm the service principal has the required permissions on the target workspace.
- See Grant Databricks permissions for the full permission list.
Atlas catalog publish failed
The Deploy tab shows publishStatus: FAILED with a publishError message.
Cause
CES deployed the model to Databricks successfully, but couldn't register it as an entity in the Atlan catalog (Atlas). This is a connectivity issue between CES and Atlas, not a Databricks issue.
Solution
Retry the deploy. If the error persists, contact Atlan Support; a persistent failure indicates an Atlas connectivity issue that requires investigation.
Unity Catalog not accessible
Asset loading returns empty results in the Configure tab.
Cause
The service principal doesn't have the USE CATALOG privilege on the catalog that holds the assets.
Solution
Grant the privilege to the service principal:
GRANT USE CATALOG ON CATALOG <your_catalog> TO <service_principal>;
Depending on what the service principal needs to read, you may also need USE SCHEMA on the relevant schemas and SELECT on the tables.
SQL warehouse not running
Evaluation hangs or fails without a clear error.
Cause
The SQL warehouse configured in the Databricks connection is stopped and autostart isn't enabled.
Solution
Start the warehouse in Databricks, or enable autostart on the warehouse. Confirm the service principal has CAN USE on the warehouse.
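You can also check the warehouse state programmatically before kicking off an evaluation. A minimal sketch against the Databricks SQL Warehouses REST API (GET /api/2.0/sql/warehouses/{id}); the workspace URL, warehouse ID, and token are placeholders:

```python
import json
import urllib.request

def warehouse_status_request(workspace_url: str, warehouse_id: str,
                             token: str) -> urllib.request.Request:
    """Build a GET request against the Databricks SQL Warehouses API.
    The response's 'state' field reports values such as RUNNING or STOPPED."""
    url = f"{workspace_url.rstrip('/')}/api/2.0/sql/warehouses/{warehouse_id}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# To check the state (requires network access to your workspace):
# with urllib.request.urlopen(warehouse_status_request(host, warehouse_id, token)) as resp:
#     print(json.load(resp)["state"])  # expect RUNNING before evaluating
```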
Chat returns unclear intent
The AI responds by asking for clarification instead of taking action.
Cause
The question was too vague for the intent classifier to categorize it.
Solution
Rephrase the question to be specific. For example, instead of "add something," use "add a metric for monthly active users." Clear, action-oriented prompts reduce classification ambiguity.
Chat returns errors
The chat bubble shows an error response.
Cause
The Genie Space couldn't process the question with the deployed model.
Solution
- Confirm the Metric View YAML is valid using the Build tab.
- Re-run the evaluation suite to identify failing queries.
- Deploy a corrected version of the model before retrying.
SQL generated but no results
Chat shows the generated SQL but returns zero rows.
Cause
The model or golden query points to an empty table, or the query uses a date range that doesn't match the data.
Solution
Confirm the table has data for the queried time range. If the golden query uses hard-coded date filters, update them to match your data.
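The date-range mismatch can be checked mechanically: fetch the table's actual date bounds (for example with SELECT MIN and MAX on the date column) and compare them against the filter in the golden query. A minimal sketch of that comparison; the dates below are illustrative placeholders:

```python
from datetime import date

def filter_overlaps_data(data_min: date, data_max: date,
                         filter_start: date, filter_end: date) -> bool:
    """True if a hard-coded date filter intersects the range of dates
    actually present in the table; False means zero rows are expected."""
    return filter_start <= data_max and filter_end >= data_min

# Example: data covers 2023, but the golden query filters on Q1 2024 -> no rows.
overlap = filter_overlaps_data(date(2023, 1, 1), date(2023, 12, 31),
                               date(2024, 1, 1), date(2024, 3, 31))
```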
Slow loading or generation
For large schemas, asset loading can take more than 30 seconds and generation 2–3 minutes; this is expected behavior. CES fetches full column metadata for each table during asset loading, and the LLM processes all columns during generation. Don't close the tab while generation is in progress.
If evaluation is slow, keep in mind that each golden query executes a full SQL run on your Databricks warehouse. Use a larger warehouse and run evaluations one at a time to reduce wait times.
Need help?
If you need assistance after trying these steps, contact Atlan Support.