Snowflake error: table isn't initialized
This issue only affects Snowflake users whose Atlan tenants are deployed on Google Cloud Platform (GCP).
When querying Lakehouse Iceberg tables stored on Google Cloud Storage (GCS) from Snowflake, you may encounter a SQL compilation error stating that the table isn't initialized. This happens when the required GCS external volume and Polaris catalog integration aren't configured in Snowflake.
Table isn't initialized

```
SQL Compilation Error: table '<table_name>' is not initialized
```
Cause
Snowflake requires an external volume to authenticate and access files stored in GCS. Without this configuration, Snowflake can't read the Iceberg metadata files needed to initialize the table.
When a Lakehouse Iceberg table is registered in Snowflake without a properly configured GCS external volume and Polaris catalog integration, Snowflake treats the table as uninitialized because it can't resolve the table's metadata location. In this setup, Atlan owns and manages the GCS bucket for metadata storage, while you own and manage the Snowflake account that accesses the data.
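For example, any read against an affected table fails at compile time, before Snowflake attempts to scan data files in GCS (the database, schema, and table names below are placeholders):

```sql
-- Any read against the uninitialized table fails at compilation
SELECT * FROM <database-name>.<schema-name>.<table-name>;
-- SQL Compilation Error: table '<table-name>' is not initialized
```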
Solution
1. Get required information from Atlan: Contact Atlan to obtain the GCS bucket name, the GCS prefix within the bucket, and the Polaris reader ID and reader secret.
2. Create a GCS external volume: In your Snowflake account, create an external volume pointing to the Atlan GCS bucket. Replace `<volume-name>`, `<storage-location-name>`, `<gcs-bucket-name>`, and `<gcs-prefix>` with your values:

   ```sql
   CREATE EXTERNAL VOLUME <volume-name>
     STORAGE_LOCATIONS = (
       (
         NAME = '<storage-location-name>'
         STORAGE_PROVIDER = 'GCS'
         STORAGE_BASE_URL = 'gcs://<gcs-bucket-name>/<gcs-prefix>/'
       )
     )
     ALLOW_WRITES = FALSE;
   ```

   Retrieve the Snowflake GCP service account by running `DESC EXTERNAL VOLUME <volume-name>;` and note the `STORAGE_GCP_SERVICE_ACCOUNT` value.
3. Request GCS bucket access from Atlan: Share the service account with Atlan. Atlan grants it read-only access to the GCS bucket. Once Atlan confirms access, verify that Snowflake can reach the bucket:

   ```sql
   SELECT SYSTEM$VERIFY_EXTERNAL_VOLUME('<volume-name>') AS status;
   ```
4. Create a catalog integration: Create a catalog integration that connects Snowflake to the Lakehouse Polaris catalog. Replace `<catalog-name>` with a descriptive name, `<tenant-subdomain>` with your Atlan tenant subdomain, and the Polaris credential placeholders with the values from step 1:

   ```sql
   CREATE OR REPLACE CATALOG INTEGRATION <catalog-name>
     CATALOG_SOURCE = POLARIS
     TABLE_FORMAT = ICEBERG
     CATALOG_NAMESPACE = 'atlan-ns'
     REST_CONFIG = (
       CATALOG_URI = 'https://<tenant-subdomain>.atlan.com/api/polaris/api/catalog'
       CATALOG_NAME = 'atlan-wh'
     )
     REST_AUTHENTICATION = (
       TYPE = OAUTH
       OAUTH_CLIENT_ID = '<polaris-reader-id>'
       OAUTH_CLIENT_SECRET = '<polaris-reader-secret>'
       OAUTH_ALLOWED_SCOPES = ('PRINCIPAL_ROLE:lake_readers')
     )
     ENABLED = TRUE;
   ```
5. Create a catalog-linked database: Create a database linked to the catalog integration. This lets Snowflake automatically sync and access the Lakehouse Iceberg tables:

   ```sql
   CREATE DATABASE <database-name>
     LINKED_CATALOG = (
       CATALOG = '<catalog-name>',
       ALLOWED_NAMESPACES = ('atlan-ns')
     )
     EXTERNAL_VOLUME = '<volume-name>';
   ```
6. Verify tables are accessible: Once the database is created, Snowflake automatically syncs the tables. Verify the sync state and then query the Lakehouse Iceberg tables through the linked database:

   ```sql
   SELECT SYSTEM$CATALOG_LINK_STATUS('<database-name>');
   ```
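Once the link reports a healthy state, the synced tables can be queried like regular Snowflake tables. A brief illustration, assuming a hypothetical table named `orders` in the `atlan-ns` namespace (substitute your own table name; the namespace maps to a schema and needs double quotes because of the hyphen):

```sql
-- List the Iceberg tables synced from the Polaris catalog
SHOW ICEBERG TABLES IN DATABASE <database-name>;

-- Query a synced table; "orders" is a hypothetical name used for illustration
SELECT COUNT(*) FROM <database-name>."atlan-ns".orders;
```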
See also
- Lakehouse: Overview of the Atlan Lakehouse and Iceberg table management.
Need help?
If you need assistance after trying these steps, contact Atlan support.