Connect Snowflake to Lakehouse

This guide walks you through connecting Snowflake to your Lakehouse using Snowflake's catalog-linked database feature, so you can run Snowflake queries and build Snowflake Cortex applications on your Atlan metadata.

GCP-hosted tenants

If your Atlan tenant is deployed on GCP, you must configure a GCS external volume before Snowflake can access Lakehouse Iceberg tables. Follow the steps in Snowflake: table is not initialized instead.

Prerequisites

Before you begin, make sure that:

  • Your Snowflake account can reach your Atlan tenant over HTTPS. If your tenant uses private networking (IP allowlists), see Private networking to allowlist your Snowflake egress IPs first.

  • You have ACCOUNTADMIN privileges in your Snowflake account.

    Using a custom role instead of ACCOUNTADMIN?

    The setup commands require the CREATE INTEGRATION and CREATE DATABASE account-level privileges. Roles like SYSADMIN don't have these by default. If you can't use ACCOUNTADMIN, grant the required privileges to your role first:

    -- Run as ACCOUNTADMIN
    GRANT CREATE INTEGRATION ON ACCOUNT TO ROLE <your_role>;
    GRANT CREATE DATABASE ON ACCOUNT TO ROLE <your_role>;

Set up connection in Snowflake

An Atlan administrator creates a catalog integration in Snowflake to link your Iceberg REST Catalog. This is a one-time setup per Snowflake account; once the integration exists, you grant individual Snowflake roles access to Lakehouse data (see Grant access to Lakehouse tables).

  1. In your Atlan workspace, navigate to Workflow > Marketplace > Atlan Lakehouse > View connection details. For Snowflake setup, click the Copy button next to the Snowflake command. The command is pre-filled with your tenant-specific details, including the Catalog URL and configuration parameters for the Iceberg REST Catalog API.

  2. Open your Snowflake console and navigate to the Worksheets section.

  3. Create a new worksheet using the ACCOUNTADMIN role (or a custom role with the required privileges—see Prerequisites).

  4. Paste the command you copied from Atlan into the worksheet. It looks similar to this:

    -- Create catalog integration
    CREATE OR REPLACE CATALOG INTEGRATION context_store_catalog
    CATALOG_SOURCE = POLARIS
    TABLE_FORMAT = ICEBERG
    CATALOG_NAMESPACE = 'context_store'
    REST_CONFIG = (
    CATALOG_URI = 'https://<tenant_subdomain>.atlan.com/api/polaris/api/catalog'
    WAREHOUSE = 'context_store'
    ACCESS_DELEGATION_MODE = VENDED_CREDENTIALS
    )
    REST_AUTHENTICATION = (
    TYPE = OAUTH
    OAUTH_CLIENT_ID = '<polaris_reader_id>'
    OAUTH_CLIENT_SECRET = '************************'
    OAUTH_ALLOWED_SCOPES = ('PRINCIPAL_ROLE:lake_readers')
    )
    ENABLED = TRUE;

    -- Create database
    CREATE DATABASE context_store
    LINKED_CATALOG = (
    CATALOG = 'context_store_catalog',
    SYNC_INTERVAL_SECONDS = 60
    );

    The WAREHOUSE and CATALOG_NAMESPACE values are pre-filled by Atlan based on your tenant's Polaris configuration. Don't change them—they must exactly match what Atlan has provisioned. Using the wrong value causes: Error occurred while processing POST request. Check the REST configuration and ensure the warehouse name '<your-value>' matches the Polaris catalog name.

    If you see an error like Failed to create catalog integration ... failed to parse response body into OAuthTokenResponse, the most likely cause is that your active Snowflake role lacks the CREATE INTEGRATION privilege. Switch to ACCOUNTADMIN or grant the required privileges as described in Prerequisites.
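    Before retrying, you can confirm which role is active and whether it holds the required account-level privileges. A quick diagnostic sketch, using the same <your_role> placeholder as above:

    ```sql
    -- Check which role is active in this worksheet
    SELECT CURRENT_ROLE();

    -- List the privileges granted to that role; look for
    -- CREATE INTEGRATION and CREATE DATABASE on ACCOUNT
    SHOW GRANTS TO ROLE <your_role>;
    ```

    If CREATE INTEGRATION and CREATE DATABASE are missing from the output, switch to ACCOUNTADMIN or grant them as shown in Prerequisites.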

  5. Click Run in Snowflake to execute the commands. Your Snowflake account is now connected to the Lakehouse.

  6. In Snowflake, under DATABASES, you now see an entry for context_store, alongside all your other databases in Snowflake. You can now explore the contents of your Lakehouse using standard SQL commands, or use the data in the Lakehouse to build Snowflake Cortex applications.

    Example 1: To confirm that the setup worked and see available schemas in the Lakehouse, run:

    -- Use context_store database
    USE DATABASE context_store;

    -- Show schemas
    SHOW SCHEMAS IN context_store;

    Example 2: To view metadata for tables in your Atlan tenant, run:

    -- Get metadata for tables registered in Atlan
    SELECT *
    FROM context_store.entity_metadata."table"
    LIMIT 10;

    Some entity type names (table, column, view) are reserved words in Snowflake. When referencing them as table names, use double-quoted lowercase in your queries. Schema names like entity_metadata don't need quoting. Snowflake resolves unquoted identifiers case-insensitively. For more detail, see Object doesn't exist.
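    The same quoting rule applies to the other reserved entity type names. A sketch, assuming your tenant exposes column and view tables under entity_metadata:

    ```sql
    -- Reserved words as table names: double-quoted, lowercase
    SELECT * FROM context_store.entity_metadata."column" LIMIT 10;
    SELECT * FROM context_store.entity_metadata."view" LIMIT 10;

    -- This fails: TABLE is a reserved word, and an unquoted
    -- identifier wouldn't match the lowercase name anyway
    -- SELECT * FROM context_store.entity_metadata.table LIMIT 10;
    ```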

Grant access to Lakehouse tables

After the catalog integration is created, only the setup role can query Lakehouse tables. Use SQL grants to let other roles (analysts, BI tools, and Cortex applications) access the data. The grants below cover existing and future schemas and tables, so a role you grant once continues to work as the catalog grows.

  1. Verify the granting role has MANAGE GRANTS on the context_store database (or is ACCOUNTADMIN). If the granting role doesn't have this permission, run the following as ACCOUNTADMIN:

    -- Run as ACCOUNTADMIN to grant MANAGE GRANTS permission
    GRANT MANAGE GRANTS ON DATABASE context_store TO ROLE <granting_role>;

    Also verify each role you're granting Lakehouse access to has USAGE on a warehouse (required to run queries). Iceberg table reads use compute, so without warehouse USAGE, queries fail with No active warehouse selected:

    GRANT USAGE ON WAREHOUSE <warehouse> TO ROLE <role>;
  2. Grant access to Lakehouse tables by running the following as ACCOUNTADMIN (or as a role with MANAGE GRANTS on context_store):

    -- Database access
    GRANT USAGE ON DATABASE context_store TO ROLE <role>;

    -- Schema access: existing and future
    GRANT USAGE ON ALL SCHEMAS IN DATABASE context_store TO ROLE <role>;
    GRANT USAGE ON FUTURE SCHEMAS IN DATABASE context_store TO ROLE <role>;

    -- Iceberg table access: existing and future, across entire database
    GRANT SELECT ON ALL ICEBERG TABLES IN DATABASE context_store TO ROLE <role>;
    GRANT SELECT ON FUTURE ICEBERG TABLES IN DATABASE context_store TO ROLE <role>;

    Use the database-scoped forms (FUTURE SCHEMAS IN DATABASE and FUTURE ICEBERG TABLES IN DATABASE). The schema-scoped equivalent (IN SCHEMA context_store.<schema>) only covers new tables inside schemas that already exist; any schema the catalog adds later won't be covered, and the role won't see new namespaces.

    Don't mix database-scoped and schema-scoped future grants for the same role

    If both database-level and schema-level future grants exist for the same object type on the same role, the schema-level grant wins: Snowflake treats the more specific scope as an override, and the database-level grant won't apply inside that schema. Pick one scope per role; for Lakehouse, use the database-scoped form shown here.
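    If a role already has schema-scoped future grants that would override the database-scoped ones, you can find and remove them first. A sketch, assuming the entity_metadata schema and the same <role> placeholder:

    ```sql
    -- Look for schema-level future grants that would shadow
    -- the database-level grants for this role
    SHOW FUTURE GRANTS IN SCHEMA context_store.entity_metadata;

    -- Remove the schema-scoped future grant so the
    -- database-scoped grant applies inside this schema too
    REVOKE SELECT ON FUTURE ICEBERG TABLES IN SCHEMA context_store.entity_metadata FROM ROLE <role>;
    ```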

  3. Verify the grants are in place by running:

    -- Confirm future grants are in place
    SHOW FUTURE GRANTS IN DATABASE context_store;

    -- Switch to the granted role and run a sample query
    USE ROLE <role>;
    USE WAREHOUSE <warehouse>;
    SELECT * FROM context_store.entity_metadata."table" LIMIT 1;

    If the SELECT returns rows, the role can read existing Iceberg tables. New schemas and tables added by Atlan from this point on are picked up automatically by the future grants.
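    To see which Iceberg tables the catalog-linked database currently exposes (for example, after Atlan registers new entity types), you can list them directly:

    ```sql
    -- List all Iceberg tables the linked catalog has synced so far
    SHOW ICEBERG TABLES IN DATABASE context_store;
    ```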

Next steps

Now that Snowflake is connected to Lakehouse, you can:

  • Query Atlan metadata from Snowflake: See the available metadata tables in Entity metadata reference.
  • Use cases: Explore popular patterns such as metadata enrichment tracking, lineage impact analysis, and glossary alignment in Use cases.
  • Credential rotation: If your Lakehouse credentials are rotated, see Credential rotation in the Security FAQ for Snowflake-specific update steps.