
Snowflake miner package

The Snowflake miner package mines query history from Snowflake. This data is used for generating lineage and usage metrics.

Source extraction

To mine query history directly from Snowflake using its built-in database:

Mine query history direct from Snowflake

```java
Workflow miner = SnowflakeMiner.creator( // (1)
        "default/snowflake/1234567890" // (2)
    )
    .direct( // (3)
        "TEST_DB",
        "TEST_SCHEMA",
        1713225600
    )
    .excludeUsers( // (4)
        List.of(
            "test-user-1",
            "test-user-2"
        )
    )
    .nativeLineage(true) // (5)
    .build() // (6)
    .toWorkflow(); // (7)

WorkflowResponse response = miner.run(client); // (8)
```
  1. Base configuration for a new Snowflake miner.

  2. You must provide the exact qualifiedName of the Snowflake connection in Atlan for which you want to mine query history.

  3. To create a workflow for mining history directly from Snowflake using its built-in database, you need to provide:

    • the name of the database to extract from.
    • the name of the schema to extract from.
    • the date and time from which to start mining, as an epoch timestamp (in seconds).
  4. Optionally, you can specify a list of users who should be excluded when calculating usage metrics for assets (for example, system accounts).

  5. Optionally, you can specify whether to enable native lineage from Snowflake, using Snowflake's ACCESS_HISTORY.OBJECTS_MODIFIED column. Note: this is only available for Snowflake Enterprise customers.

  6. Build the minimal package object.

  7. Now, you can convert the package into a Workflow object.

  8. Run the workflow by invoking the run() method on the workflow client, passing the created object. Because this operation will execute work in Atlan, you must provide it an AtlanClient through which to connect to the tenant.

    Workflows run asynchronously

Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
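The epoch timestamp in the example above (1713225600) corresponds to midnight UTC on 16 April 2024. A minimal sketch of computing such a value using only `java.time` (the date chosen here is illustrative):

```java
import java.time.Instant;

public class EpochExample {
    public static void main(String[] args) {
        // Start mining from midnight (UTC) on 16 April 2024,
        // expressed as an epoch timestamp in seconds
        long startEpoch = Instant.parse("2024-04-16T00:00:00Z").getEpochSecond();
        System.out.println(startEpoch); // prints 1713225600
    }
}
```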

Offline extraction

To mine query history from an S3 bucket:

Mine query history from an S3 bucket

```java
Workflow miner = SnowflakeMiner.creator( // (1)
        "default/snowflake/1234567890" // (2)
    )
    .s3( // (3)
        "test-s3-bucket",
        "test-s3-prefix",
        "TEST_QUERY",
        "TEST_DB",
        "TEST_SCHEMA",
        "TEST_SESSION_ID"
    )
    .nativeLineage(true) // (4)
    .build() // (5)
    .toWorkflow(); // (6)

WorkflowResponse response = miner.run(client); // (7)
```
  1. Base configuration for a new Snowflake miner.

  2. You must provide the exact qualifiedName of the Snowflake connection in Atlan for which you want to mine query history.

  3. To create a workflow for mining history from an S3 bucket, you need to provide:

    • the S3 bucket where the JSON line-separated files are located.
    • the prefix within the S3 bucket where the JSON line-separated files are located.
    • the JSON key containing the query definition.
    • the JSON key containing the default database name to use, if a query is not qualified with a database name.
    • the JSON key containing the default schema name to use, if a query is not qualified with a schema name.
    • the JSON key containing the session ID of the SQL query.
  4. Optionally, you can specify whether to enable native lineage from Snowflake, using Snowflake's ACCESS_HISTORY.OBJECTS_MODIFIED column. Note: this is only available for Snowflake Enterprise customers.

  5. Build the minimal package object.

  6. Now, you can convert the package into a Workflow object.

  7. Run the workflow by invoking the run() method on the workflow client, passing the created object. Because this operation will execute work in Atlan, you must provide it an AtlanClient through which to connect to the tenant.

    Workflows run asynchronously

Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
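For reference, each file the miner reads from S3 contains one standalone JSON object per line, and the keys passed to s3() above tell the miner where to find each value. A hypothetical line matching the example keys (TEST_QUERY, TEST_DB, TEST_SCHEMA, TEST_SESSION_ID) might look as follows — the structure shown here is illustrative, not a guaranteed format:

```json
{"TEST_QUERY": "INSERT INTO ORDERS SELECT * FROM STAGING_ORDERS", "TEST_DB": "SALES", "TEST_SCHEMA": "PUBLIC", "TEST_SESSION_ID": "4a7b1c2d-0001"}
```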

Re-run existing workflow

To re-run an existing workflow for Snowflake query mining:

Re-run existing Snowflake workflow

```java
List<WorkflowSearchResult> existing = WorkflowSearchRequest // (1)
    .findByType(client, SnowflakeMiner.PREFIX, 5); // (2)

// Determine which of the results is the
// Snowflake workflow you want to re-run...
WorkflowRunResponse response = existing.get(n).rerun(client); // (3)
```
  1. You can search for existing workflows through the WorkflowSearchRequest class.

  2. You can find workflows by their type using the findByType() helper method and providing the prefix for one of the packages. In this example, we do so for the SnowflakeMiner. (You can also specify the maximum number of resulting workflows you want to retrieve as results.)

  3. Once you've found the workflow you want to re-run, you can simply call the rerun() helper method on the workflow search result. The WorkflowRunResponse is a subtype of WorkflowResponse, so it has the same helper methods for monitoring the progress of the workflow run. Because this operation will execute work in Atlan, you must provide it an AtlanClient through which to connect to the tenant.

    • Optionally, you can use the rerun(client, true) method to make the re-run idempotent: if a run of the workflow is already in a running or pending state, this returns the details of that existing run rather than starting a new one. By default, idempotency is set to false.
    Workflows run asynchronously

Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
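The "determine which of the results" step above is left to you. As a minimal, stdlib-only sketch of the kind of selection logic involved — using plain strings as stand-ins for the search results, with names and matching rule that are assumptions for illustration, not part of the SDK:

```java
import java.util.List;
import java.util.stream.IntStream;

public class PickWorkflow {
    public static void main(String[] args) {
        // Stand-ins for the names of the workflows returned by the search
        List<String> names = List.of(
            "atlan-snowflake-miner-123",
            "atlan-snowflake-miner-456");

        // Find the index of the first result whose name matches the run we want
        int n = IntStream.range(0, names.size())
            .filter(i -> names.get(i).endsWith("-456"))
            .findFirst()
            .orElseThrow();
        System.out.println(n); // prints 1
    }
}
```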
