Crawl Teradata

Once you have configured the Teradata user permissions, you can establish a connection between Atlan and Teradata.

To crawl metadata from Teradata, review the order of operations and then complete the following steps.

Select the source

To select Teradata as your source:

  1. In the top right of any screen in Atlan, navigate to +New and click New Workflow.
  2. From the Marketplace page, click Teradata Assets.
  3. In the right panel, click Setup Workflow.

Configure extraction

Select your extraction method and provide the connection details.

In Direct extraction, Atlan connects to your database and crawls metadata directly.

  1. Choose whether to use the default connection settings or provide a custom Teradata Driver URL:

    • Host: Use the default Teradata Driver URL based on standard connection parameters (host, port).
    • URL: Provide a custom Teradata Driver URL with specific driver options. Make sure your connection string conforms to the Teradata SQL Driver documentation and is applicable to your Teradata instance.
  2. Choose an authentication method for your direct connection.

  1. Use standard database credentials created in your Teradata instance.

    • Host: Enter the hostname or IP address for your Teradata instance.
    • Port: Enter the port number for your Teradata instance (default is 1025).
    • Username: Enter the username you configured when setting up the Teradata user.
    • Password: Enter the password for the specified user.
  2. Click Test Authentication to verify your configuration. If the test is successful, click Next to proceed with the connection configuration.
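The difference between the Host and URL options above can be sketched as a small helper. This is an illustrative sketch only, not Atlan's implementation; the function name is hypothetical, and the URL shape follows the general Teradata JDBC convention of `jdbc:teradata://<host>/<PARAM>=<value>,...`:

```python
# Hypothetical helper: the "Host" option builds a default driver URL from
# host and port, while the "URL" option lets you pass extra driver
# parameters (e.g. TMODE) directly in the connection string.
def teradata_driver_url(host: str, port: int = 1025, **params) -> str:
    opts = {"DBS_PORT": port, **params}
    joined = ",".join(f"{k}={v}" for k, v in opts.items())
    return f"jdbc:teradata://{host}/{joined}"

# Default settings (host + standard port 1025):
teradata_driver_url("td.example.com")
# -> "jdbc:teradata://td.example.com/DBS_PORT=1025"

# Custom URL with an additional driver option:
teradata_driver_url("td.example.com", 1026, TMODE="ANSI")
# -> "jdbc:teradata://td.example.com/DBS_PORT=1026,TMODE=ANSI"
```

The host name `td.example.com` is a placeholder; substitute the hostname or IP address of your own Teradata instance.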

Configure the connection

Complete the Teradata connection configuration:

  1. Provide a Connection Name that represents your source environment. For example, you might use values like production, development, gold, or analytics.

  2. (Optional) To change the users able to manage this connection, change the users or groups listed under Connection Admins.

    warning

    If you don't specify any user or group, nobody can manage the connection, not even admins.

  3. At the bottom of the screen, click Next to proceed.

Configure the crawler

Before running the Teradata crawler, you can further configure it.

On the Metadata page, you can override the defaults for any of these options:

  • To select the assets you want to exclude from crawling, click Exclude Metadata. If none are specified, no assets are excluded.
  • To select the assets you want to include in crawling, click Include Metadata. If none are specified, all assets are included.
  • To have the crawler ignore tables and views based on a naming convention, specify a regular expression in the Exclude regex for tables & views field.
  • For Advanced Config, keep Default for the default configuration or click Custom to configure the crawler:
    • For Enable Source Level Filtering, click True to enable schema-level filtering at source or click False to disable it.
    • For Use JDBC Internal Methods, click True to enable JDBC internal methods for data extraction or click False to disable it.
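The Exclude regex for tables & views option above can be illustrated with a short sketch. The pattern and table names here are hypothetical examples, and the matching shown is standard regular-expression matching, not necessarily the exact semantics the crawler applies:

```python
import re

# Hypothetical naming convention: skip temporary and staging tables
# whose names start with "tmp_" or "stg_".
EXCLUDE = re.compile(r"^(tmp_|stg_)")

tables = ["sales", "tmp_load", "stg_orders", "customers"]

# Keep only the tables that do NOT match the exclude pattern.
crawled = [t for t in tables if not EXCLUDE.match(t)]
# crawled -> ["sales", "customers"]
```

With this pattern, `tmp_load` and `stg_orders` would be ignored by the crawler while `sales` and `customers` would still be crawled.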

Teradata database naming

In Teradata, the database name is always DEFAULT. What users typically think of as "databases" are actually schemas within DEFAULT.

Agent mode filtering format

When using Agent extraction mode with source-level filtering enabled, filters use a regex-based JSON format. The database must always be ^DEFAULT$ (with regex anchors), and schemas are specified as an array with ^ (start) and $ (end) anchors.

Examples:

  • Include specific schemas: {"^DEFAULT$": ["^test_schema_1$", "^test_schema_2$"]}
  • Exclude specific schemas: {"^DEFAULT$": ["^test_schema_1$", "^atlan_user$"]}
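The anchored-regex filter format above can be sketched as follows. This is only an illustrative sketch of how such a filter could be evaluated; the helper function is hypothetical and Atlan's internal matching logic may differ:

```python
import json
import re

# Include filter in the Agent mode format: database key and schema
# entries are anchored regexes (^ start, $ end).
include = json.loads('{"^DEFAULT$": ["^test_schema_1$", "^test_schema_2$"]}')

# Hypothetical helper: returns True if a (database, schema) pair
# matches any pattern in the filter.
def schema_matches(database: str, schema: str, filters: dict) -> bool:
    for db_pattern, schema_patterns in filters.items():
        if re.match(db_pattern, database):
            return any(re.match(p, schema) for p in schema_patterns)
    return False

schema_matches("DEFAULT", "test_schema_1", include)  # True
schema_matches("DEFAULT", "other_schema", include)   # False
```

Because the database in Teradata is always `DEFAULT`, the database key is always `^DEFAULT$`; only the schema array varies between filters.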

Did you know?

If an asset appears in both the include and exclude filters, the exclude filter takes precedence.

Run the crawler

After completing the previous steps, to run the Teradata crawler:

  1. To check for any permissions or other configuration issues before running the crawler, click Preflight checks.
  2. You can either:
    • To run the crawler once immediately, at the bottom of the screen, click the Run button.
    • To schedule the crawler to run hourly, daily, weekly, or monthly, at the bottom of the screen, click the Schedule Run button.

Once the crawler has finished running, you can view the assets on Atlan's assets page! 🎉