Connect Snowflake to Lakehouse
This guide walks you through how to connect Snowflake to your Lakehouse using Snowflake's catalog-linked database feature, so you can start running Snowflake queries and building Snowflake Cortex applications on your Atlan metadata.
If your Atlan tenant is deployed on GCP, you must configure a GCS external volume before Snowflake can access Lakehouse Iceberg tables. Follow the steps in Snowflake: table is not initialized instead.
Prerequisites
Before you begin, make sure that:
-
Your Snowflake account can reach your Atlan tenant over HTTPS. If your tenant uses private networking (IP allowlists), see Private networking to allowlist your Snowflake egress IPs first.
-
You have ACCOUNTADMIN privileges in your Snowflake account.
Using a custom role instead of ACCOUNTADMIN? The setup commands require the `CREATE INTEGRATION` and `CREATE DATABASE` account-level privileges. Roles like `SYSADMIN` don't have these by default. If you can't use `ACCOUNTADMIN`, grant the required privileges to your role first:

```sql
-- Run as ACCOUNTADMIN
GRANT CREATE INTEGRATION ON ACCOUNT TO ROLE <your_role>;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE <your_role>;
```
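To confirm that your custom role actually holds these privileges before continuing, you can inspect its grants. This is a quick sanity check; `<your_role>` is the same placeholder used above:

```sql
-- Look for CREATE INTEGRATION and CREATE DATABASE rows with
-- granted_on = ACCOUNT in the output
SHOW GRANTS TO ROLE <your_role>;
```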
Set up connection in Snowflake
An Atlan administrator creates a catalog integration in Snowflake to link your Iceberg REST Catalog. This one-time setup enables all Snowflake roles to query Lakehouse data.
-
In your Atlan workspace, navigate to Workflow > Marketplace > Atlan Lakehouse > View connection details. For Snowflake setup, click the Copy button next to the Snowflake command. The command is pre-filled with your tenant-specific details, including the Catalog URL and configuration parameters for the Iceberg REST Catalog API.
-
Open your Snowflake console and navigate to the Worksheets section.
-
Create a new worksheet using the ACCOUNTADMIN role (or a custom role with the required privileges—see Prerequisites).
-
Paste the command you copied from Atlan into the worksheet. It looks similar to this:

```sql
-- Create catalog integration
CREATE OR REPLACE CATALOG INTEGRATION context_store_catalog
  CATALOG_SOURCE = POLARIS
  TABLE_FORMAT = ICEBERG
  CATALOG_NAMESPACE = 'context_store'
  REST_CONFIG = (
    CATALOG_URI = 'https://<tenant_subdomain>.atlan.com/api/polaris/api/catalog'
    WAREHOUSE = 'context_store'
    ACCESS_DELEGATION_MODE = VENDED_CREDENTIALS
  )
  REST_AUTHENTICATION = (
    TYPE = OAUTH
    OAUTH_CLIENT_ID = '<polaris_reader_id>'
    OAUTH_CLIENT_SECRET = '************************'
    OAUTH_ALLOWED_SCOPES = ('PRINCIPAL_ROLE:lake_readers')
  )
  ENABLED = TRUE;

-- Create database
CREATE DATABASE context_store
  LINKED_CATALOG = (
    CATALOG = 'context_store_catalog',
    SYNC_INTERVAL_SECONDS = 60
  );
```

The `WAREHOUSE` and `CATALOG_NAMESPACE` values are pre-filled by Atlan based on your tenant's Polaris configuration. Don't change them; they must exactly match what Atlan has provisioned. Using the wrong value causes: `Error occurred while processing POST request. Check the REST configuration and ensure the warehouse name '<your-value>' matches the Polaris catalog name.`

If you see an error like `Failed to create catalog integration ... failed to parse response body into OAuthTokenResponse`, the most likely cause is that your active Snowflake role lacks the `CREATE INTEGRATION` privilege. Switch to `ACCOUNTADMIN` or grant the required privileges as described in Prerequisites.
-
Click Run in Snowflake to execute the commands. Your Snowflake account is now connected to the Lakehouse.
-
In Snowflake, under DATABASES, you now see an entry for `context_store`, alongside all your other databases. You can now explore the contents of your Lakehouse using standard SQL commands, or use the data in the Lakehouse to build Snowflake Cortex applications.

Example 1: To confirm that the setup worked and see the available schemas in the Lakehouse, run:

```sql
-- Use context_store database
USE DATABASE context_store;

-- Show schemas
SHOW SCHEMAS IN context_store;
```

Example 2: To view metadata for tables in your Atlan tenant, run:

```sql
-- Get metadata for tables registered in Atlan
SELECT *
FROM context_store.entity_metadata."table"
LIMIT 10;
```

Some entity type names (`table`, `column`, `view`) are reserved words in Snowflake. When referencing them as table names, use double-quoted lowercase in your queries. Schema names like `entity_metadata` don't need quoting; Snowflake resolves unquoted identifiers case-insensitively. For more detail, see Object doesn't exist.
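As a starting point for a Cortex application on this data, you can pass rows from the metadata tables to a Cortex LLM function. The following is a minimal sketch, not part of the Atlan-provided setup: it assumes Snowflake Cortex is enabled for your account and that the `mistral-large` model is available in your region, and it serializes each row with `OBJECT_CONSTRUCT(*)` to avoid depending on specific column names in your metadata schema:

```sql
-- Ask a Cortex LLM to summarize table metadata from the Lakehouse
-- (assumes Cortex access and model availability in your region)
SELECT SNOWFLAKE.CORTEX.COMPLETE(
  'mistral-large',
  'Summarize this table metadata in one sentence: ' || OBJECT_CONSTRUCT(*)::VARCHAR
) AS summary
FROM context_store.entity_metadata."table"
LIMIT 3;
```

Each call consumes Cortex credits, so keep the `LIMIT` small while experimenting.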
Grant access to Lakehouse tables
After the catalog integration is created, only the setup role can query Lakehouse tables. Use SQL grants to give other roles (analysts, BI tools, and Cortex applications) access to the data. The grants below cover existing and future schemas and tables, so a role you grant once continues to work as the catalog grows.
-
Verify the granting role has `MANAGE GRANTS` on the `context_store` database (or is `ACCOUNTADMIN`). If the granting role doesn't have this permission, run the following as `ACCOUNTADMIN`:

```sql
-- Run as ACCOUNTADMIN to grant MANAGE GRANTS permission
GRANT MANAGE GRANTS ON DATABASE context_store TO ROLE <granting_role>;
```

Also verify each role you're granting Lakehouse access to has `USAGE` on a warehouse (required to run queries). Iceberg table reads use compute, so without warehouse `USAGE`, queries fail with `No active warehouse selected`:

```sql
GRANT USAGE ON WAREHOUSE <warehouse> TO ROLE <role>;
```
-
Grant access to Lakehouse tables by running the following as `ACCOUNTADMIN` (or as a role with `MANAGE GRANTS` on `context_store`):

```sql
-- Database access
GRANT USAGE ON DATABASE context_store TO ROLE <role>;

-- Schema access: existing and future
GRANT USAGE ON ALL SCHEMAS IN DATABASE context_store TO ROLE <role>;
GRANT USAGE ON FUTURE SCHEMAS IN DATABASE context_store TO ROLE <role>;

-- Iceberg table access: existing and future, across entire database
GRANT SELECT ON ALL ICEBERG TABLES IN DATABASE context_store TO ROLE <role>;
GRANT SELECT ON FUTURE ICEBERG TABLES IN DATABASE context_store TO ROLE <role>;
```

Use the database-scoped `IN DATABASE … FUTURE SCHEMAS` and `IN DATABASE … FUTURE ICEBERG TABLES` forms. The schema-scoped equivalent (`IN SCHEMA context_store.<schema>`) only catches new tables inside schemas that already exist; any schema added later by the catalog won't be covered, and the role won't see new namespaces.

Don't mix database-scoped and schema-scoped future grants for the same role. If both database-level and schema-level future grants exist for the same object type on the same role, the schema-level grant wins: Snowflake treats the more specific scope as an override, and the database-level grant won't apply inside that schema. Pick one scope per role; for Lakehouse, use the database-scoped form shown here.
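To check whether a role already carries a conflicting schema-scoped future grant, you can inspect each schema and remove the schema-level grant so the database-scoped one applies uniformly. A sketch, using `entity_metadata` as a stand-in for any of your Lakehouse schemas:

```sql
-- List future grants defined at the schema level; a SELECT-on-future-Iceberg-tables
-- entry here overrides the database-level future grant within this schema
SHOW FUTURE GRANTS IN SCHEMA context_store.entity_metadata;

-- Remove a conflicting schema-level future grant if one exists
REVOKE SELECT ON FUTURE ICEBERG TABLES IN SCHEMA context_store.entity_metadata FROM ROLE <role>;
```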
-
Verify the grants are in place by running:

```sql
-- Confirm future grants are in place
SHOW FUTURE GRANTS IN DATABASE context_store;

-- Switch to the granted role and run a sample query
USE ROLE <role>;
USE WAREHOUSE <warehouse>;
SELECT * FROM context_store.entity_metadata."table" LIMIT 1;
```

If the `SELECT` returns rows, the role can read existing Iceberg tables. New schemas and tables added by Atlan from this point on are picked up automatically by the future grants.
Next steps
Now that Snowflake is connected to Lakehouse, you can:
- Query Atlan metadata from Snowflake: See the available metadata tables in Entity metadata reference.
- Use cases: Explore popular patterns such as metadata enrichment tracking, lineage impact analysis, and glossary alignment in Use cases.
- Credential rotation: If your Lakehouse credentials are rotated, see Credential rotation in the Security FAQ for Snowflake-specific update steps.