
Refine with Chat & build (Private Preview)

Once your context repository is created and the initial model is generated, use Chat & build to refine definitions in plain English and the YAML editor to make precise structural changes.

Prerequisites

Before you begin, make sure:

  • You've created a context repository and reviewed the generated model. See Build your context repository.
  • If you're using Databricks, you've deployed the repository at least once. Chat & build and Simulate use the live Genie Space, not the draft. See Deploy to Databricks.

Refine with Chat & build

The Chat tab is connected to your live model: on Snowflake this is the draft; on Databricks, it's the deployed Genie Space. Type a change in plain English and CES proposes a YAML edit you can accept, modify, or discard.

  1. Click the Chat tab in your context repository.

  2. Describe the change you want to make. For example:

    Update the revenue metric to exclude accounts where account_type is 'trial'.
  3. CES proposes a YAML change. Review it in the Build tab, then click Accept to apply it, or edit the YAML directly if you need something different.

  4. Repeat for each refinement. Common examples:

    Add a metric for monthly active users — distinct user_id where event_type is 'login' in the last 30 days.
    Rename the dimension acq_channel to acquisition channel and add synonyms: source, channel, marketing channel.
    Add a filter that excludes rows where is_deleted is true from all queries.
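As a sketch of what a proposal might look like, the first example above could produce YAML along these lines. The exact shape depends on your platform and schema; the column names (`user_id`, `event_type`, `event_date`) and the date expression are illustrative:

    - name: monthly_active_users
      description: "Distinct users with a 'login' event in the last 30 days."
      expr: COUNT(DISTINCT CASE WHEN event_type = 'login' THEN user_id END)  # window filter on event_date assumed elsewhere

Review the proposal in the Build tab as in step 3 before accepting it.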

Edit YAML directly

For changes that Chat & build can't express precisely, or when you need full structural control, edit the YAML directly in the Build tab. Changes save automatically. On Snowflake you edit a single file; on Databricks the YAML compiles to one Metric View per table.

  1. Sharpen descriptions. Descriptions are the single biggest driver of accuracy on Snowflake. Update the description field (or comment on Databricks) to reflect business meaning. Explain abbreviations, proprietary terms, and any nuance a business user needs to ask the right question.

    - name: arr
      description: "Annual recurring revenue. Excludes one-time fees and professional services."
  2. Fix a metric formula. If a measure aggregates the wrong column, update the expr field.

    - name: total_revenue
      expr: SUM(recognized_revenue) # was SUM(billed_amount)
  3. Add a default filter. Use a filters block to exclude rows automatically, such as test accounts or deleted records.

    filters:
      - name: exclude_internal
        expr: "account_type != 'internal'"
  4. Remove irrelevant dimensions. Delete any dimension that doesn't answer a real business question. Fewer, well-described fields produce a more reliable model than a wide schema.

  5. Add synonyms sparingly. Add a synonyms entry only for unique or industry-specific terms a clearer description can't resolve. On Snowflake, avoid synonym sprawl; synonyms consume tokens without improving accuracy. On Databricks, synonyms are more effective and can be used more liberally.
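Building on the rename example from Chat & build above, a sparing synonyms entry might look like this (a sketch; the dimension name and synonym list are illustrative):

    - name: acquisition_channel
      description: "Marketing channel that acquired the account."
      synonyms:
        - source
        - marketing channel

If a clearer description resolves the term on its own, prefer that over adding a synonym.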

As you refine, you may surface conflicts: a metric name that produces different results depending on the asset path, a term that maps to multiple columns, or a question that can't be answered because no asset covers it. Fix these by specifying a canonical formula in the YAML, tightening the description or synonym list to remove ambiguity, or adding the missing asset to the repository.
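For example, if "revenue" could map to either billed_amount or recognized_revenue, pinning the canonical formula in the YAML and stating the distinction in the description removes the ambiguity (a sketch; column names reused from the earlier example):

    - name: revenue
      description: "Recognized revenue only. Invoiced totals use billed_amount, not this metric."
      expr: SUM(recognized_revenue)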

Databricks YAML field names differ

Databricks Metric View YAML uses measures (not metrics) and comment (not description) for field-level annotations. The data_type and sample_values fields aren't supported. Applying Snowflake field names in a Databricks repository causes a YAML validation error. See the YAML schema reference for the full list of fields.
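As a sketch, here is the same field annotated on each platform. The structure is illustrative; consult the YAML schema reference for the authoritative syntax:

    # Snowflake: metrics with description
    metrics:
      - name: arr
        description: "Annual recurring revenue. Excludes one-time fees."

    # Databricks Metric View: measures with comment
    measures:
      - name: arr
        expr: SUM(arr_amount)  # arr_amount is a placeholder column
        comment: "Annual recurring revenue. Excludes one-time fees."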

Bringing BI tool metrics into your model

If you have metrics defined in a BI tool like Tableau or Sigma, define them in your Atlan glossary and link them to the relevant columns. Context Agents Studio picks up glossary-linked definitions during enrichment and maps them into the semantic model automatically.

Next steps

  • Simulate: run a question set to surface gaps and get specific fix suggestions.