Testing
Follow these best practices, drawn from the Application SDK testing framework requirements, to maintain code quality and reliability in your tests.
Deterministic tests
Write tests that produce consistent, repeatable results every time they run.
Recommendation: Write deterministic tests that produce the same results regardless of when or where you run them.
Why this matters: The framework uses mocks to replace external dependencies, enabling fast execution and deterministic results. Tests never wait for network calls, database queries, or file operations, providing reliable feedback during development and in CI/CD pipelines.
Implementation: Use mocks and fixtures to control all external dependencies. The framework provides AsyncMock for mocking asynchronous operations. Replace external services like databases, APIs, or network connections with mocks to keep tests deterministic.
Reference: The framework validates individual components by replacing external dependencies with mocks. This isolation enables fast execution and deterministic results because tests never wait for network calls, database queries, or file operations.
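A minimal sketch of this pattern, using Python's standard-library AsyncMock. The handler and the list_tables method are illustrative assumptions, not SDK APIs; the point is that the mock replaces real I/O so the test always produces the same result.

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical handler that depends on an external metadata client.
# The client and its list_tables method are assumptions for illustration.
async def get_table_count(client) -> int:
    tables = await client.list_tables()
    return len(tables)

def test_get_table_count_is_deterministic():
    # Replace the network-backed client with an AsyncMock so the test
    # never performs real I/O and always returns the same result.
    client = AsyncMock()
    client.list_tables.return_value = ["orders", "customers"]

    result = asyncio.run(get_table_count(client))

    assert result == 2
    client.list_tables.assert_awaited_once()
```

Because the mock's return value is fixed, the test passes identically on a laptop and in a CI/CD pipeline, with no network or database required.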
Test organization
Organize your test suite to match your source code structure and separate different test types into distinct directories for better maintainability and execution efficiency.
Structure mirroring
Organize test files to match the structure of your source code for easy navigation and maintenance.
Recommendation: Mirror your source code structure in your test directory layout.
Why this matters: The framework organizes tests to mirror source code structure for easy navigation. This parallel structure makes finding tests intuitive.
Implementation: If your handler lives in application_sdk/handlers/metadata.py, place its tests in tests/unit/handlers/test_metadata.py. If a class exists in application_sdk/services/statestore.py, place its tests in tests/unit/services/test_statestore.py. Maintain the same directory hierarchy and naming patterns throughout your test suite.
Example:
application_sdk/
  services/
    statestore.py
    metadata.py
tests/
  unit/
    services/
      test_statestore.py
      test_metadata.py
Reference: Tests mirror source code structure for easy navigation. If your handler lives in application_sdk/handlers/metadata.py, tests live in tests/unit/handlers/test_metadata.py. This parallel structure makes finding tests intuitive.
Test type separation
Organize tests by type to enable different execution strategies and faster test runs.
Recommendation: Separate different test types into distinct directories.
Why this matters: Clear separation enables different execution strategies. Unit tests run quickly during development, while E2E tests run in CI/CD pipelines.
Implementation: Place unit tests in tests/unit/, integration tests in tests/integration/, and E2E tests in tests/e2e/. Use consistent naming patterns within each directory.
Reference: The framework supports unit testing, property-based testing, and E2E testing as separate approaches with distinct execution strategies.
Test coverage
Validate both successful operations and error handling to achieve comprehensive test coverage.
Success and failure paths
Test both successful operations and error scenarios to validate complete behavior.
Recommendation: Test both expected behavior with valid inputs and error handling with invalid inputs.
Why this matters: The framework uses pytest.raises to verify that components properly raise exceptions for invalid inputs rather than producing incorrect results silently.
Implementation: Write tests that validate both success scenarios where operations complete as expected and failure scenarios where errors occur. Use pytest.raises to verify exception handling for invalid inputs.
Reference: Tests validate both expected behavior with valid inputs and error handling with invalid inputs. The framework uses pytest.raises to verify that components properly raise exceptions for invalid inputs rather than producing incorrect results silently.
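A short sketch of pairing a success-path test with a pytest.raises failure-path test. The parse_port function is a hypothetical example, not an SDK API.

```python
import pytest

# Hypothetical validator used for illustration only.
def parse_port(value: str) -> int:
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port_success():
    # Success path: a valid input produces the expected result.
    assert parse_port("5432") == 5432

def test_parse_port_failure():
    # Failure path: an invalid input raises an exception instead of
    # silently returning an incorrect value.
    with pytest.raises(ValueError):
        parse_port("70000")
```

Covering both paths ensures the component fails loudly on bad input rather than propagating wrong results downstream.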
Fixture teardown
Clean up resources and state after each test to prevent interference between test runs.
Recommendation: Implement proper teardown for all fixtures to release resources and clean up state after tests complete.
Why this matters: Proper cleanup lets tests run repeatedly without accumulating artifacts and prevents resource leaks.
Implementation: Use pytest fixtures with teardown logic. Implement yield fixtures that automatically clean up after test completion. Remove temporary files and close database connections and network sockets.
Reference: The framework requires implementing proper teardown for all fixtures to release resources and clean up state after tests complete.
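A minimal yield-fixture sketch: everything before the yield is setup, everything after it is teardown, and pytest runs the teardown even if the test fails. The fixture and test names here are illustrative.

```python
import os
import tempfile

import pytest

@pytest.fixture
def temp_state_file():
    # Setup: create a temporary file the test can write state into.
    fd, path = tempfile.mkstemp(suffix=".json")
    os.close(fd)
    yield path
    # Teardown: runs after the test completes, even on failure, so no
    # artifacts accumulate between runs.
    if os.path.exists(path):
        os.remove(path)

def test_writes_state(temp_state_file):
    with open(temp_state_file, "w") as f:
        f.write('{"status": "done"}')
    with open(temp_state_file) as f:
        assert "done" in f.read()
```

The same shape works for database connections and network sockets: open the resource before the yield, close it after.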
Credentials management
Store credentials securely using environment variables to keep sensitive information out of your codebase.
Recommendation: Use environment variables for credentials in test configuration files.
Why this matters: The framework uses environment variables populated from GitHub secrets for E2E test credentials, keeping sensitive information out of your codebase.
Implementation: Reference credentials using the $VARIABLE_NAME format in config.yaml files. GitHub Actions populates these values from repository secrets, keeping sensitive information secure.
Reference: Credentials reference environment variables that GitHub Actions populates from secrets, keeping sensitive information out of your codebase. Use the format $VARIABLE_NAME in configuration files.
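A sketch of how $VARIABLE_NAME references might be resolved at test setup time. The resolve_env_refs helper is a hypothetical illustration, not an SDK function; the SDK's own loader may differ.

```python
import os

# Hypothetical helper: resolve "$VARIABLE_NAME" strings in a loaded
# config dict against the environment. Illustrative, not an SDK API.
def resolve_env_refs(config: dict) -> dict:
    resolved = {}
    for key, value in config.items():
        if isinstance(value, str) and value.startswith("$"):
            # Look up the referenced environment variable; in CI,
            # GitHub Actions populates it from a repository secret.
            resolved[key] = os.environ[value[1:]]
        else:
            resolved[key] = value
    return resolved

# Example: config.yaml containing `password: $DB_PASSWORD` loads as the
# string "$DB_PASSWORD" and resolves against the environment here.
os.environ["DB_PASSWORD"] = "example-only"  # set by CI secrets in practice
config = resolve_env_refs({"host": "localhost", "password": "$DB_PASSWORD"})
```

The credential itself never appears in the repository; only the variable name does.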
See also
- Test configuration: Complete reference for test configuration options and structure patterns
- Test framework: Understanding the Application SDK testing framework
- Set up E2E tests: Step-by-step guide to creating E2E tests