Postgres assets package
The Postgres assets package crawls PostgreSQL assets and publishes them to Atlan for discovery.
Direct extraction
This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.
Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).
To crawl assets directly from PostgreSQL using basic authentication:
- Java
- Python
- Kotlin
- Raw REST API
```java
Workflow postgres = PostgreSQLCrawler.directBasicAuth( // (1)
    "production", // (2)
    "postgres.x9f0ve2k1kvy.ap-south-1.rds.amazonaws.com", // (3)
    5432, // (4)
    "postgres", // (5)
    "nCkM685ZH9g4fVICMs6H", // (6)
    "demo_db", // (7)
    List.of(client.getRoleCache().getIdForName("$admin")), // (8)
    null,
    null,
    true, // (9)
    true, // (10)
    10000L, // (11)
    Map.of("demo_db", List.of("demo")), // (12)
    null); // (13)
WorkflowResponse response = postgres.run(client); // (14)
```
1. The `PostgreSQLCrawler` package will create a workflow to crawl assets from PostgreSQL. The `directBasicAuth()` method creates a workflow for crawling assets directly from PostgreSQL.
2. You must provide a name for the connection that the PostgreSQL assets will exist within.
3. You must provide the hostname of your PostgreSQL instance.
4. You must specify the port number of the PostgreSQL instance (use `5432` for the default).
5. You must provide your PostgreSQL username.
6. You must provide your PostgreSQL password.
7. You must specify the name of the PostgreSQL database you want to crawl.
8. You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users)
    - a list of groups (names) that will be connection admins
    - a list of users (names) that will be connection admins
9. You can specify whether you want to allow queries to this connection (`true`, as in this example) or deny all query access to the connection (`false`).
10. You can specify whether you want to allow data previews on this connection (`true`, as in this example) or deny all sample data previews to the connection (`false`).
11. You can specify a maximum number of rows that can be accessed for any asset in the connection.
12. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a map keyed by database name, with each value a list of schemas within that database to crawl. (If set to `null`, all databases and schemas will be crawled.)
13. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a map keyed by database name, with each value a list of schemas within that database to exclude. (If set to `null`, no assets will be excluded.)
14. You can then run the workflow using the `run()` method on the object you've created. Because this operation will execute work in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
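The include and exclude maps described above can be read as a simple two-stage filter. The helper below is purely illustrative (it is not part of any Atlan SDK); it sketches the semantics: a (database, schema) pair is crawled when it matches the include map (or the map is unset) and does not match the exclude map.

```python
# Illustrative only: NOT part of the Atlan SDK. Sketches how the
# include/exclude maps are interpreted: include narrows the crawl,
# exclude then removes entries from whatever remains.
def would_crawl(database, schema, include=None, exclude=None):
    if include is not None:
        if database not in include or schema not in include[database]:
            return False
    if exclude is not None:
        if database in exclude and schema in exclude[database]:
            return False
    return True

# With include = {"demo_db": ["demo"]}, as in the example above:
print(would_crawl("demo_db", "demo", include={"demo_db": ["demo"]}))   # True
print(would_crawl("demo_db", "other", include={"demo_db": ["demo"]}))  # False
```

With both maps left as `None` (the `null` arguments in the sample), everything is crawled.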
```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler(  # (1)
        client=client,  # (2)
        connection_name="production",  # (3)
        admin_roles=[client.role_cache.get_id_for_name("$admin")],  # (4)
        admin_groups=None,
        admin_users=None,
        row_limit=10000,  # (5)
        allow_query=True,  # (6)
        allow_query_preview=True,  # (7)
    )
    .direct(hostname="test.com", database="test-db")  # (8)
    .basic_auth(  # (9)
        username="test-user",
        password="test-password",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (10)
    .exclude(assets=None)  # (11)
    .exclude_regex(regex=".*_TEST")  # (12)
    .source_level_filtering(enable=True)  # (13)
    .jdbc_internal_methods(enable=True)  # (14)
    .to_workflow()  # (15)
)
response = client.workflow.run(crawler)  # (16)
```
1. Base configuration for a new `PostgresCrawler` crawler.
2. You must provide a client instance.
3. You must provide a name for the connection that the PostgreSQL assets will exist within.
4. You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users)
    - a list of groups (names) that will be connection admins
    - a list of users (names) that will be connection admins
5. You can specify a maximum number of rows that can be accessed for any asset in the connection.
6. You can specify whether you want to allow queries to this connection (`True`, as in this example) or deny all query access to the connection (`False`).
7. You can specify whether you want to allow data previews on this connection (`True`, as in this example) or deny all sample data previews to the connection (`False`).
8. You can specify the hostname of your PostgreSQL instance and the database name for direct extraction.
9. When using `basic_auth()`, you need to provide the following information:
    - username through which to access PostgreSQL
    - password through which to access PostgreSQL
10. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name, with each value a list of schemas to include. (If set to `None`, all schemas will be crawled.)
11. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name, with each value a list of schemas to exclude. (If set to `None`, no schemas will be excluded.)
12. You can also optionally specify a regular expression the crawler will use to ignore tables and views based on a naming convention.
13. You can also optionally specify whether to enable (`True`) or disable (`False`) schema-level filtering at the source; when enabled, only the schemas selected in the include filter will be fetched.
14. You can also optionally specify whether to enable (`True`) or disable (`False`) JDBC internal methods for data extraction.
15. Now you can convert the package into a `Workflow` object.
16. Run the workflow by invoking the `run()` method on the workflow client, passing the created object.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
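As a rough illustration of how an exclude pattern like `.*_TEST` behaves, the sketch below assumes the pattern is matched against the full table or view name (an assumption for illustration; the crawler's exact matching rules are source-side):

```python
import re

# Hypothetical illustration: assuming the exclude regex must match the
# whole table/view name, ".*_TEST" skips anything ending in "_TEST".
pattern = re.compile(r".*_TEST")

names = ["ORDERS", "ORDERS_TEST", "CUSTOMERS_TEST", "TEST_ORDERS"]
excluded = [n for n in names if pattern.fullmatch(n)]
print(excluded)  # ['ORDERS_TEST', 'CUSTOMERS_TEST']
```

Note that `TEST_ORDERS` is kept: the pattern anchors the `_TEST` suffix at the end of the name.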
```kotlin
val postgres = PostgreSQLCrawler.directBasicAuth( // (1)
    "production", // (2)
    "postgres.x9f0ve2k1kvy.ap-south-1.rds.amazonaws.com", // (3)
    5432, // (4)
    "postgres", // (5)
    "nCkM685ZH9g4fVICMs6H", // (6)
    "demo_db", // (7)
    listOf(client.roleCache.getIdForName("\$admin")), // (8)
    null,
    null,
    true, // (9)
    true, // (10)
    10000L, // (11)
    mapOf("demo_db" to listOf("demo")), // (12)
    null) // (13)
val response = postgres.run(client) // (14)
```
1. The `PostgreSQLCrawler` package will create a workflow to crawl assets from PostgreSQL. The `directBasicAuth()` method creates a workflow for crawling assets directly from PostgreSQL.
2. You must provide a name for the connection that the PostgreSQL assets will exist within.
3. You must provide the hostname of your PostgreSQL instance.
4. You must specify the port number of the PostgreSQL instance (use `5432` for the default).
5. You must provide your PostgreSQL username.
6. You must provide your PostgreSQL password.
7. You must specify the name of the PostgreSQL database you want to crawl.
8. You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users)
    - a list of groups (names) that will be connection admins
    - a list of users (names) that will be connection admins
9. You can specify whether you want to allow queries to this connection (`true`, as in this example) or deny all query access to the connection (`false`).
10. You can specify whether you want to allow data previews on this connection (`true`, as in this example) or deny all sample data previews to the connection (`false`).
11. You can specify a maximum number of rows that can be accessed for any asset in the connection.
12. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a map keyed by database name, with each value a list of schemas within that database to crawl. (If set to `null`, all databases and schemas will be crawled.)
13. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a map keyed by database name, with each value a list of schemas within that database to exclude. (If set to `null`, no assets will be excluded.)
14. You can then run the workflow using the `run()` method on the object you've created. Because this operation will execute work in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.
IAM user authentication
This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.
Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).
To crawl assets directly from PostgreSQL using IAM user authentication:
- Java
- Python
- Raw REST API
```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler(  # (1)
        client=client,  # (2)
        connection_name="production",  # (3)
        admin_roles=[client.role_cache.get_id_for_name("$admin")],  # (4)
        admin_groups=None,
        admin_users=None,
        row_limit=10000,  # (5)
        allow_query=True,  # (6)
        allow_query_preview=True,  # (7)
    )
    .direct(hostname="test.com", database="test-db")  # (8)
    .iam_user_auth(  # (9)
        username="test-user",
        access_key="test-access-key",
        secret_key="test-secret-key",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (10)
    .exclude(assets=None)  # (11)
    .exclude_regex(regex=".*_TEST")  # (12)
    .source_level_filtering(enable=True)  # (13)
    .jdbc_internal_methods(enable=True)  # (14)
    .to_workflow()  # (15)
)
response = client.workflow.run(crawler)  # (16)
```
1. Base configuration for a new `PostgresCrawler` crawler.
2. You must provide a client instance.
3. You must provide a name for the connection that the PostgreSQL assets will exist within.
4. You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users)
    - a list of groups (names) that will be connection admins
    - a list of users (names) that will be connection admins
5. You can specify a maximum number of rows that can be accessed for any asset in the connection.
6. You can specify whether you want to allow queries to this connection (`True`, as in this example) or deny all query access to the connection (`False`).
7. You can specify whether you want to allow data previews on this connection (`True`, as in this example) or deny all sample data previews to the connection (`False`).
8. You can specify the hostname of your PostgreSQL instance and the database name for direct extraction.
9. When using `iam_user_auth()`, you need to provide the following information:
    - database username to extract from
    - access key through which to access PostgreSQL
    - secret key through which to access PostgreSQL
10. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name, with each value a list of schemas to include. (If set to `None`, all schemas will be crawled.)
11. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name, with each value a list of schemas to exclude. (If set to `None`, no schemas will be excluded.)
12. You can also optionally specify a regular expression the crawler will use to ignore tables and views based on a naming convention.
13. You can also optionally specify whether to enable (`True`) or disable (`False`) schema-level filtering at the source; when enabled, only the schemas selected in the include filter will be fetched.
14. You can also optionally specify whether to enable (`True`) or disable (`False`) JDBC internal methods for data extraction.
15. Now you can convert the package into a `Workflow` object.
16. Run the workflow by invoking the `run()` method on the workflow client, passing the created object.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.
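Hard-coding IAM keys, as in the sample above, is fine for documentation, but in practice you would likely read them from the environment before passing them to `iam_user_auth()`. A minimal sketch; the variable names below are the usual AWS conventions, not something the package mandates:

```python
import os

# Sketch: read IAM user credentials from the standard AWS environment
# variables instead of embedding them in source. The crawler only needs
# the resulting string values; the fallbacks here are placeholders.
access_key = os.environ.get("AWS_ACCESS_KEY_ID", "test-access-key")
secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY", "test-secret-key")
```

The resulting `access_key` and `secret_key` strings can then be passed where the sample hard-codes them.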
IAM role authentication
This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.
Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).
To crawl assets directly from PostgreSQL using IAM role authentication:
- Java
- Python
- Raw REST API
```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler(  # (1)
        client=client,  # (2)
        connection_name="production",  # (3)
        admin_roles=[client.role_cache.get_id_for_name("$admin")],  # (4)
        admin_groups=None,
        admin_users=None,
        row_limit=10000,  # (5)
        allow_query=True,  # (6)
        allow_query_preview=True,  # (7)
    )
    .direct(hostname="test.com", database="test-db")  # (8)
    .iam_role_auth(  # (9)
        username="test-user",
        arn="test-arn",
        external_id="test-external-id",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (10)
    .exclude(assets=None)  # (11)
    .exclude_regex(regex=".*_TEST")  # (12)
    .source_level_filtering(enable=True)  # (13)
    .jdbc_internal_methods(enable=True)  # (14)
    .to_workflow()  # (15)
)
response = client.workflow.run(crawler)  # (16)
```
1. Base configuration for a new `PostgresCrawler` crawler.
2. You must provide a client instance.
3. You must provide a name for the connection that the PostgreSQL assets will exist within.
4. You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users)
    - a list of groups (names) that will be connection admins
    - a list of users (names) that will be connection admins
5. You can specify a maximum number of rows that can be accessed for any asset in the connection.
6. You can specify whether you want to allow queries to this connection (`True`, as in this example) or deny all query access to the connection (`False`).
7. You can specify whether you want to allow data previews on this connection (`True`, as in this example) or deny all sample data previews to the connection (`False`).
8. You can specify the hostname of your PostgreSQL instance and the database name for direct extraction.
9. When using `iam_role_auth()`, you need to provide the following information:
    - database username to extract from
    - ARN of the AWS role
    - AWS external ID
10. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name, with each value a list of schemas to include. (If set to `None`, all schemas will be crawled.)
11. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name, with each value a list of schemas to exclude. (If set to `None`, no schemas will be excluded.)
12. You can also optionally specify a regular expression the crawler will use to ignore tables and views based on a naming convention.
13. You can also optionally specify whether to enable (`True`) or disable (`False`) schema-level filtering at the source; when enabled, only the schemas selected in the include filter will be fetched.
14. You can also optionally specify whether to enable (`True`) or disable (`False`) JDBC internal methods for data extraction.
15. Now you can convert the package into a `Workflow` object.
16. Run the workflow by invoking the `run()` method on the workflow client, passing the created object.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.
Offline extraction
This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.
Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).
To crawl PostgreSQL assets from an S3 bucket:
- Java
- Python
- Raw REST API
```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler(  # (1)
        client=client,  # (2)
        connection_name="production",  # (3)
        admin_roles=[client.role_cache.get_id_for_name("$admin")],  # (4)
        admin_groups=None,
        admin_users=None,
        row_limit=10000,  # (5)
        allow_query=True,  # (6)
        allow_query_preview=True,  # (7)
    )
    .s3(  # (8)
        bucket_name="test-bucket",
        bucket_prefix="test-prefix",
        bucket_region="test-region",
    )
    .source_level_filtering(enable=True)  # (9)
    .jdbc_internal_methods(enable=True)  # (10)
    .to_workflow()  # (11)
)
response = client.workflow.run(crawler)  # (12)
```
1. Base configuration for a new `PostgresCrawler` crawler.
2. You must provide a client instance.
3. You must provide a name for the connection that the PostgreSQL assets will exist within.
4. You must specify at least one connection admin, either:
    - everyone in a role (in this example, all `$admin` users)
    - a list of groups (names) that will be connection admins
    - a list of users (names) that will be connection admins
5. You can specify a maximum number of rows that can be accessed for any asset in the connection.
6. You can specify whether you want to allow queries to this connection (`True`, as in this example) or deny all query access to the connection (`False`).
7. You can specify whether you want to allow data previews on this connection (`True`, as in this example) or deny all sample data previews to the connection (`False`).
8. When using `s3()`, you need to provide the following information:
    - name of the bucket/storage that contains the extracted metadata files
    - prefix: everything after the bucket/storage name, including the path
    - (optional) name of the region, if applicable
9. You can also optionally specify whether to enable (`True`) or disable (`False`) schema-level filtering at the source; when enabled, only the schemas selected in the include filter will be fetched.
10. You can also optionally specify whether to enable (`True`) or disable (`False`) JDBC internal methods for data extraction.
11. Now you can convert the package into a `Workflow` object.
12. Run the workflow by invoking the `run()` method on the workflow client, passing the created object.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.
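The `bucket_name` and `bucket_prefix` passed to `s3()` together identify where the extracted metadata files live. As a quick sketch of how a full S3 location splits into those two values (the URI below is a made-up example, and this helper is purely illustrative, not part of the SDK):

```python
# Illustration with a made-up URI: split an S3 location into the
# bucket_name and bucket_prefix values that s3() expects. The prefix is
# everything after the bucket name, including the path.
def split_s3_uri(uri: str) -> tuple[str, str]:
    without_scheme = uri.removeprefix("s3://")
    bucket, _, prefix = without_scheme.partition("/")
    return bucket, prefix

bucket, prefix = split_s3_uri("s3://test-bucket/test-prefix/postgres")
print(bucket)  # test-bucket
print(prefix)  # test-prefix/postgres
```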
Re-run existing workflow
To re-run an existing workflow for PostgreSQL assets:
- Java
- Python
- Kotlin
- Raw REST API
```java
List<WorkflowSearchResult> existing = WorkflowSearchRequest // (1)
        .findByType(client, PostgreSQLCrawler.PREFIX, 5); // (2)

// Determine which of the results is the PostgreSQL workflow you want to re-run...
WorkflowRunResponse response = existing.get(n).rerun(client); // (3)
```
1. You can search for existing workflows through the `WorkflowSearchRequest` class.
2. You can find workflows by their type using the `findByType()` helper method, providing the prefix for one of the packages. In this example, we do so for the `PostgreSQLCrawler`. (You can also specify the maximum number of workflows you want to retrieve as results.)
3. Once you've found the workflow you want to re-run, you can simply call the `rerun()` helper method on the workflow search result. The `WorkflowRunResponse` is just a subtype of `WorkflowResponse`, so it has the same helper method to monitor the progress of the workflow run. Because this operation will execute work in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
    - Optionally, you can use the `rerun(client, true)` method for idempotency, to avoid re-running a workflow that is already in a running or pending state. If such a workflow is found, its details are returned instead. By default, the idempotency flag is `false`.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
```python
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.enums import WorkflowPackage

client = AtlanClient()

existing = client.workflow.find_by_type(  # (1)
    prefix=WorkflowPackage.POSTGRES, max_results=5
)

# Determine which PostgreSQL workflow (n)
# from the list of results you want to re-run.
response = client.workflow.rerun(existing[n])  # (2)
```
1. You can find workflows by their type using the workflow client's `find_by_type()` method, providing the prefix for one of the packages. In this example, we do so for the `PostgreSQLCrawler`. (You can also specify the maximum number of workflows you want to retrieve as results.)
2. Once you've found the workflow you want to re-run, you can simply call the workflow client's `rerun()` method.
    - Optionally, you can use `rerun(idempotent=True)` to avoid re-running a workflow that is already in a running or pending state. If such a workflow is found, its details are returned instead. By default, this flag is `False`.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
```kotlin
val existing = WorkflowSearchRequest // (1)
    .findByType(client, PostgreSQLCrawler.PREFIX, 5) // (2)

// Determine which of the results is the PostgreSQL workflow you want to re-run...
val response = existing.get(n).rerun(client) // (3)
```
1. You can search for existing workflows through the `WorkflowSearchRequest` class.
2. You can find workflows by their type using the `findByType()` helper method, providing the prefix for one of the packages. In this example, we do so for the `PostgreSQLCrawler`. (You can also specify the maximum number of workflows you want to retrieve as results.)
3. Once you've found the workflow you want to re-run, you can simply call the `rerun()` helper method on the workflow search result. The `WorkflowRunResponse` is just a subtype of `WorkflowResponse`, so it has the same helper method to monitor the progress of the workflow run. Because this operation will execute work in Atlan, you must provide it an `AtlanClient` through which to connect to the tenant.
    - Optionally, you can use the `rerun(client, true)` method for idempotency, to avoid re-running a workflow that is already in a running or pending state. If such a workflow is found, its details are returned instead. By default, the idempotency flag is `false`.

:::note Workflows run asynchronously
Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
:::
- Find the existing workflow.
- Send through the resulting re-run request.
```json
{
  "from": 0,
  "size": 5,
  "query": {
    "bool": {
      "filter": [
        {
          "nested": {
            "path": "metadata",
            "query": {
              "prefix": {
                "metadata.name.keyword": {
                  "value": "atlan-postgres" // (1)
                }
              }
            }
          }
        }
      ]
    }
  },
  "sort": [
    {
      "metadata.creationTimestamp": {
        "nested": {
          "path": "metadata"
        },
        "order": "desc"
      }
    }
  ],
  "track_total_hits": true
}
```
1. Searching by the `atlan-postgres` prefix will ensure you only find existing PostgreSQL assets workflows.

:::note Name of the workflow
The name of the workflow will be nested within the `_source.metadata.name` property of the response object. (Remember that since this is a search, there could be multiple results, so you may want to use the other details in each result to determine which workflow you really want.)
:::
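If you are scripting against the raw API, the same search payload can be assembled programmatically so the prefix and page size are parameterized. A minimal sketch; the endpoint, authentication, and HTTP call are tenant-specific and omitted here:

```python
# Sketch: build the workflow-search body shown above as a Python dict.
# Only payload construction is shown; sending the request is omitted.
def workflow_search_body(prefix: str, size: int = 5) -> dict:
    return {
        "from": 0,
        "size": size,
        "query": {
            "bool": {
                "filter": [
                    {
                        "nested": {
                            "path": "metadata",
                            "query": {
                                "prefix": {
                                    "metadata.name.keyword": {"value": prefix}
                                }
                            },
                        }
                    }
                ]
            }
        },
        "sort": [
            {
                "metadata.creationTimestamp": {
                    "nested": {"path": "metadata"},
                    "order": "desc",
                }
            }
        ],
        "track_total_hits": True,
    }

body = workflow_search_body("atlan-postgres")
```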
```json
{
  "namespace": "default",
  "resourceKind": "WorkflowTemplate",
  "resourceName": "atlan-postgres-1684500411" // (1)
}
```
1. Send the name of the workflow as the `resourceName` to re-run it.