Snowflake Private Preview
This page summarizes the Snowflake-specific behavior of Context Engineering Studio: what you need before you start, how CES authenticates, what works before deployment, and what artifact CES produces when you deploy.
Prerequisites
Before you create a Snowflake context repository:
- CES is enabled on your tenant and your team has the CES persona. See Setup.
- A Snowflake connection exists in Atlan. CES reuses your existing connection; no separate CES connection is required.
- Permissions are granted on the Atlan service role. See Grant Snowflake permissions for the full set. The minimum CES needs:
  - USAGE on the target database and schema.
  - CREATE SEMANTIC VIEW on the target schema.
  - The SNOWFLAKE.CORTEX_USER or SNOWFLAKE.CORTEX_ANALYST_USER database role for simulation and Chat & build.
  - USAGE on the warehouse CES uses for simulations and evaluations.
  - REFERENCES on semantic views in the target database for catalog crawling.
  - SELECT on the source tables and views (optional; enables sample values and live simulation).
- An Atlan Insights connection is enabled on the tenant (recommended, not required). Insights gives generated models richer enrichment and seeds question sets from real user activity. Without it, you can still build, simulate, and deploy by adding question/SQL pairs manually.
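The one-time grants look roughly like the following. This is a sketch, not the canonical setup SQL (see Grant Snowflake permissions for that); the role, database, schema, and warehouse names are placeholders you'd replace with your own, and the ALL-objects form of the REFERENCES and SELECT grants is one reasonable choice, not the only one.

```sql
-- Placeholder names throughout: atlan_svc_role, analytics, semantic, raw, ces_wh.
GRANT USAGE ON DATABASE analytics TO ROLE atlan_svc_role;
GRANT USAGE ON SCHEMA analytics.semantic TO ROLE atlan_svc_role;
GRANT CREATE SEMANTIC VIEW ON SCHEMA analytics.semantic TO ROLE atlan_svc_role;

-- Cortex access for simulation and Chat & build.
GRANT DATABASE ROLE SNOWFLAKE.CORTEX_USER TO ROLE atlan_svc_role;

-- Warehouse CES uses for simulations and evaluations.
GRANT USAGE ON WAREHOUSE ces_wh TO ROLE atlan_svc_role;

-- Catalog crawling of deployed semantic views.
GRANT REFERENCES ON ALL SEMANTIC VIEWS IN DATABASE analytics TO ROLE atlan_svc_role;

-- Optional: enables sample values and live simulation.
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.raw TO ROLE atlan_svc_role;
```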
CES runs a preflight check in the Configure panel that validates all these requirements before you build. Fix any failing check and re-run.
Authentication
CES authenticates to Snowflake using the service account role configured on your existing Atlan Snowflake connection. The role must be used consistently for:
- Catalog crawling (lineage, query history).
- SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML at deploy time (the native Snowflake function that validates and creates the view atomically).
- GET_DDL on deployed semantic views (via INFORMATION_SCHEMA).
- Invoking Cortex Analyst during Chat & build and Simulate.
Snowflake is particular about which role is used with Cortex Analyst and semantic views. A role switch between build and deploy is the most common setup mistake; make sure the same role holds all of the listed permissions.
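A quick way to catch the role-switch mistake is to inspect the connection role's grants directly in a Snowflake worksheet. A minimal sketch, assuming placeholder role and view names:

```sql
-- Placeholder: atlan_svc_role is whatever role your Atlan connection uses.
SHOW GRANTS TO ROLE atlan_svc_role;

-- After a deploy, confirm the same role can read the view's DDL,
-- since CES itself calls GET_DDL on deployed semantic views.
SELECT GET_DDL('SEMANTIC_VIEW', 'analytics.semantic.sales_sv');
```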
What works before deploy
Unlike Databricks, Snowflake supports the full Build → Simulate loop before deployment. Once a repository is created and the initial semantic model is generated, you can immediately:
- Chat & build: ask natural-language questions directly on the draft model. CES invokes Cortex Analyst on the in-memory YAML, without requiring a deployed Semantic View. Responses return the generated SQL, a plain-language explanation, and the tabular result from executing the SQL in your warehouse.
- Simulate: run the full question set on the draft model. Fixes, re-runs, and iteration all happen pre-deploy.
- Edit YAML directly: the Build tab compiles YAML changes into the in-memory semantic layer on the fly.
In-account judging
On Snowflake, the simulation judge runs inside your Snowflake account via SNOWFLAKE.CORTEX.COMPLETE(). Result data never leaves Snowflake for judging, which simplifies governance reviews. The judge model is selected per tenant from the Cortex models available in your region and entitlement; contact Atlan support if you need a different model for your tenant.
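For reference, CORTEX.COMPLETE is an ordinary SQL function you can call yourself; the judge does the equivalent of the sketch below inside your account. The model name and prompt here are illustrative assumptions, not your tenant's actual judge configuration.

```sql
-- Illustrative only: model availability varies by region and entitlement,
-- and the real judge prompt is managed by CES per tenant.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'mistral-large2',
  'Given this question, the generated SQL, and a sample of the result, ' ||
  'grade the answer as CORRECT or INCORRECT and explain why: ...'
);
```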
This means you can get a repository to the state the business is willing to depend on without ever touching Snowflake production. Deploy is a release step, not a setup step.
Deployed artifact
CES compiles the repository YAML into a single Snowflake Semantic View and creates it atomically via Snowflake's native SYSTEM$CREATE_SEMANTIC_VIEW_FROM_YAML function. The function validates the YAML and creates (or replaces) the view in one step, so re-deploys are idempotent and safe.
The resulting object in your Snowflake account:
CREATE OR REPLACE SEMANTIC VIEW <target_database>.<target_schema>.<semantic_view_name>
  tables (
    <table_alias> AS <database>.<schema>.<table>
      comment='<table_description>'
  )
  relationships (
    <relationship_name> AS <left_alias> (<column>) REFERENCES <right_alias> (<column>)
      comment='<relationship_description>'
  )
  facts (
    <table_alias>.<fact_name> AS <column>
      comment='<fact_description>'
  )
  dimensions (
    <table_alias>.<dimension_name> AS <column>
      comment='<column_description>',
    ...
  );
One context repository produces one semantic view. No stored procedures, stages, or other artifacts are created.
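After a deploy, you can confirm the artifact exists from a worksheet. A sketch, assuming placeholder schema and view names:

```sql
-- Placeholders: analytics.semantic is the target schema, sales_sv the deployed view.
SHOW SEMANTIC VIEWS IN SCHEMA analytics.semantic;
DESCRIBE SEMANTIC VIEW analytics.semantic.sales_sv;
```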
See the YAML schema reference for the full YAML structure and the DDL reference for the compiled output.
Sizing and scoping
Follow Snowflake's Cortex Analyst best practices:
- Start with 5–10 tables for an initial POC; a narrower scope makes debugging and iteration faster.
- No hard size limits. Cortex Analyst no longer enforces hard limits on semantic view size, but irrelevant columns reduce answer quality without adding value.
- Include only business-relevant columns that should appear in generated SQL. Exclude surrogate keys, ETL audit fields, and internal flags in the Build tab's column selection.
- Route across multiple semantic views. Customers have tested successfully with 50+ semantic views in production. Prefer several domain-scoped repositories over one wide one; Cortex Analyst's routing handles the selection.
Best practices for Snowflake accuracy
These Snowflake-specific practices shape how you build a repository for Cortex Analyst:
- Descriptions are required, not optional. Snowflake's guidance is explicit: high-quality descriptions on every table and column are the single biggest driver of accuracy. Explain proprietary terms and abbreviations; don't assume shared knowledge.
- Avoid synonyms unless the term is unique or industry-specific. Snowflake now discourages generic synonym sprawl: synonyms consume tokens without meaningful accuracy improvement. Reserve them for genuine jargon or ambiguity that a sharper description can't resolve.
- Many-to-many relationships aren't directly supported. If your model has one, add a shared dimension (bridge) table to represent the relationship as two MANY_TO_ONE joins.
- Verified queries carry the accuracy signal. Start with 10 to 20 verified questions covering the most common usage patterns; grow the set with production traces promoted from Observe.
End-user access
After deploy, end users need REFERENCES and SELECT on the semantic view to query it through Cortex Analyst. See Grant access and verify.
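In SQL, that grant looks roughly like this. Role and object names are placeholders; see Grant access and verify for the canonical statements.

```sql
-- Placeholders: analyst_role, analytics.semantic.sales_sv.
-- USAGE on the containing database and schema is also needed to reach the view.
GRANT USAGE ON DATABASE analytics TO ROLE analyst_role;
GRANT USAGE ON SCHEMA analytics.semantic TO ROLE analyst_role;
GRANT REFERENCES, SELECT ON SEMANTIC VIEW analytics.semantic.sales_sv TO ROLE analyst_role;
```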
Observe (currently available for Snowflake Cortex deployments)
CES's Observe tab is currently available for Snowflake Cortex deployments. It reads from Snowflake's built-in system view SNOWFLAKE.LOCAL.CORTEX_ANALYST_REQUESTS_V, maps each production query back to its context repository by semantic model name, and surfaces:
- The user's question and the SQL Cortex Analyst generated.
- Latency and request status (ANSWERED, UNANSWERED, CORRECT, INCORRECT).
- User feedback (thumbs up / thumbs down) captured at query time.
Interactions that receive a thumbs-up are auto-promoted to your repository's question set as verified question-answer pairs, so production feedback directly seeds the next simulation. Failing interactions surface alongside proposed fixes you can apply and re-test without leaving CES.
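You can inspect the same telemetry directly in Snowflake. A sketch only: the view name comes from above, but the filter column is an assumption, so check the view's actual schema before relying on it.

```sql
-- semantic_model_name is an assumed column; verify with DESCRIBE VIEW first.
SELECT *
FROM SNOWFLAKE.LOCAL.CORTEX_ANALYST_REQUESTS_V
WHERE semantic_model_name = 'sales_sv'
LIMIT 50;
```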
See Monitor with Observe tab for the workflow.
Troubleshooting
For Snowflake-specific errors (preflight failures, Cortex availability, simulation errors, deployment errors), see Troubleshooting Snowflake.
Next steps
- Grant Snowflake permissions: the one-time setup SQL.
- Build: describe a domain and generate a semantic model.
- Simulate: run a question set and act on diagnostics.
- Deploy to Snowflake: certify and push to Cortex Analyst.
- YAML schema reference.
- DDL reference.