Troubleshooting Snowflake errors

When querying Lakehouse Iceberg tables or Polaris catalog-linked databases from Snowflake, you may encounter errors related to table initialization or object resolution. This page covers the most common errors and how to fix them.

Table isn't initialized

Error

SQL Compilation Error: table '<table_name>' is not initialized

GCP-hosted tenants only

This issue only affects Snowflake users whose Atlan tenants are deployed on Google Cloud Platform (GCP).

When querying Lakehouse Iceberg tables stored on Google Cloud Storage (GCS) from Snowflake, you may encounter a SQL compilation error stating that the table isn't initialized. This happens when the required GCS external volume and Polaris catalog integration aren't configured in Snowflake.

Cause

Snowflake requires an external volume to authenticate and access files stored in GCS. Without this configuration, Snowflake can't read the Iceberg metadata files needed to initialize the table.

When a Lakehouse Iceberg table is registered in Snowflake without a properly configured GCS external volume and Polaris catalog integration, Snowflake treats the table as uninitialized because it can't resolve the table's metadata location. In this setup, Atlan owns and manages the GCS bucket for metadata storage, while you own and manage the Snowflake account that accesses the data.

Solution

  1. Get required information from Atlan: Contact Atlan to obtain the GCS bucket name, the GCS prefix within the bucket, and the Polaris reader ID and reader secret.

  2. Create a GCS external volume: In your Snowflake account, create an external volume pointing to the Atlan GCS bucket. Replace <volume-name>, <storage-location-name>, <gcs-bucket-name>, and <gcs-prefix> with your values:

    CREATE EXTERNAL VOLUME <volume-name>
      STORAGE_LOCATIONS = (
        (
          NAME = '<storage-location-name>'
          STORAGE_PROVIDER = 'GCS'
          STORAGE_BASE_URL = 'gcs://<gcs-bucket-name>/<gcs-prefix>/'
        )
      )
      ALLOW_WRITES = FALSE;

    Retrieve the Snowflake GCP service account by running DESC EXTERNAL VOLUME <volume-name>; and note the STORAGE_GCP_SERVICE_ACCOUNT value.

  3. Request GCS bucket access from Atlan: Share the service account with Atlan. Atlan grants read-only access to the GCS bucket. Once Atlan confirms access, verify Snowflake can reach the bucket:

    SELECT SYSTEM$VERIFY_EXTERNAL_VOLUME('<volume-name>') AS status;

  4. Create a catalog integration: Create a catalog integration that connects Snowflake to the Lakehouse Polaris catalog. Replace <catalog-name> with a descriptive name, <tenant-subdomain> with your Atlan tenant subdomain, and the Polaris credential placeholders with the values from step 1:

    CREATE OR REPLACE CATALOG INTEGRATION <catalog-name>
      CATALOG_SOURCE = POLARIS
      TABLE_FORMAT = ICEBERG
      CATALOG_NAMESPACE = 'atlan-ns'
      REST_CONFIG = (
        CATALOG_URI = 'https://<tenant-subdomain>.atlan.com/api/polaris/api/catalog'
        CATALOG_NAME = 'atlan-wh'
      )
      REST_AUTHENTICATION = (
        TYPE = OAUTH
        OAUTH_CLIENT_ID = '<polaris-reader-id>'
        OAUTH_CLIENT_SECRET = '<polaris-reader-secret>'
        OAUTH_ALLOWED_SCOPES = ('PRINCIPAL_ROLE:lake_readers')
      )
      ENABLED = TRUE;

  5. Create a catalog-linked database: Create a database linked to the catalog integration. This lets Snowflake automatically sync and access the Lakehouse Iceberg tables:

    CREATE DATABASE <database-name>
      LINKED_CATALOG = (
        CATALOG = '<catalog-name>',
        ALLOWED_NAMESPACES = ('atlan-ns')
      )
      EXTERNAL_VOLUME = '<volume-name>';

  6. Verify tables are accessible: Once the database is created, Snowflake automatically syncs the tables. Verify the sync state and then query the Lakehouse Iceberg tables through the linked database:

    SELECT SYSTEM$CATALOG_LINK_STATUS('<database-name>');
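
Once the link status reports a healthy state, the synced tables can be queried like any other Snowflake table. The namespace and table names below are placeholders; substitute the ones synced from your catalog:

    SELECT * FROM <database-name>.<namespace>.<table-name> LIMIT 10;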

Object doesn't exist

Error

Object does not exist

When querying Polaris catalog-linked databases from Snowflake, you may see Object does not exist errors even though the table or view clearly exists. This is caused by the Snowflake session parameter QUOTED_IDENTIFIERS_IGNORE_CASE being set to TRUE.

Snowflake has confirmed this as a known limitation that affects catalog-linked databases with lowercase object names.

You are likely hitting this issue if:

  • You are querying a Polaris catalog-linked database from Snowflake.

  • Your query uses double-quoted identifiers for tables or columns, including reserved keywords. For example:

    SELECT *
    FROM context_store.entity_metadata."table";
  • The object exists in Polaris and works from other engines, but Snowflake returns Object does not exist or similar resolution failures.

Cause

The Snowflake session parameter QUOTED_IDENTIFIERS_IGNORE_CASE is set to TRUE. With this setting, Snowflake treats double-quoted identifiers as case-insensitive and uppercases them during object resolution.

Polaris stores object names in lowercase. When Snowflake uppercases the identifier (for example, "table" becomes TABLE), the lookup in the Polaris catalog-linked database fails, even though table exists in lowercase.

This issue is especially visible when tables or columns are named using reserved SQL keywords (such as "table", "group", or "order") that are stored in lowercase in Polaris.
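
You can reproduce the behavior directly in a Snowflake session using the same object referenced above; treat the database, schema, and table names as placeholders for your environment:

    -- With the parameter TRUE, "table" is folded to TABLE before resolution:
    ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;
    SELECT * FROM context_store.entity_metadata."table";  -- fails: Object does not exist

    -- With the parameter FALSE, the lowercase name resolves as written:
    ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE;
    SELECT * FROM context_store.entity_metadata."table";  -- resolves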

Solution

Before querying any Polaris catalog-linked database from Snowflake, disable QUOTED_IDENTIFIERS_IGNORE_CASE in your session:

ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE;

Run this as the first statement in your Snowflake session—including BI tools, notebooks, and worksheets—before querying any catalog-linked database.

No special permissions required

ALTER SESSION SET doesn't require any elevated privileges. Any Snowflake user can modify their own session parameters, even if the parameter is set to TRUE at the account level.

Example session setup

-- Ensure quoted identifiers remain case-sensitive
ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE;

-- (Optional) Set role, database, and schema as usual
USE ROLE ACCOUNTADMIN;
USE DATABASE <your_catalog_linked_database>;
USE SCHEMA <your_schema>;

-- Queries against Polaris catalog-linked objects now resolve correctly
SELECT *
FROM context_store.entity_metadata."table";

Configuration notes

  • The parameter can be set at the account, user, or role level. Even if it's globally TRUE, you can override it per session with:

    ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE;
  • This setting is safe for other databases. Snowflake's default is FALSE, and leaving it FALSE keeps the usual SQL behavior where unquoted identifiers are uppercased and double-quoted identifiers are case-sensitive.

  • If you control Snowflake account-level configuration, consider keeping QUOTED_IDENTIFIERS_IGNORE_CASE = FALSE as the default for roles that access Polaris catalog-linked databases.
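
To confirm the effective value and where it was set, check the parameter in your current session; the level column in the output shows whether the value comes from the account, user, or session:

    SHOW PARAMETERS LIKE 'QUOTED_IDENTIFIERS_IGNORE_CASE' IN SESSION;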

This workaround remains necessary until Snowflake ships a permanent fix for catalog-linked database behavior.


See also

  • Lakehouse: Overview of the Atlan Lakehouse and Iceberg table management.

Need help?

If you need assistance after trying these steps, contact Atlan support.