
Getting Started

How to run dbt-bouncer#

  1. Generate dbt artifacts by running a dbt command:

    • dbt parse to generate a manifest.json artifact (no database connection required!).
    • dbt docs generate to generate a catalog.json artifact (necessary if you are using catalog checks).
    • dbt run (or any other command that implies it, e.g. dbt build) to generate a run_results.json artifact (necessary if you are using run results checks).
  2. Create a config file (dbt-bouncer.yml, dbt-bouncer.toml, or a [tool.dbt-bouncer] section in pyproject.toml), details here. Alternatively, you can run dbt-bouncer init to generate a basic configuration file.

  3. Run dbt-bouncer to validate that your conventions are being maintained.
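As a sketch of step 2, a minimal dbt-bouncer.yml might look like the following (the values mirror the example config echoed in verbose mode below; adjust dbt_artifacts_dir to wherever your project writes its target directory):

```yaml
# Illustrative config; adapt the paths and checks to your project.
dbt_artifacts_dir: dbt_project/target

catalog_checks:
  - name: check_column_name_complies_to_column_type
    column_name_pattern: ^is_.*
    exclude: ^staging
    types:
      - BOOLEAN
```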

Installing with Python#

Install from pypi.org:

pip install dbt-bouncer # or via any other package manager

Run:

dbt-bouncer --config-file <PATH_TO_CONFIG_FILE>
Running dbt-bouncer (X.X.X)...
Loaded config from dbt-bouncer-example.yml...
Validating conf...

dbt-bouncer also supports a verbose mode, run:

dbt-bouncer --config-file <PATH_TO_CONFIG_FILE> -v
Running dbt-bouncer (X.X.X)...
config_file=PosixPath('dbt-bouncer-example.yml')
config_file_source='COMMANDLINE'
Config file passed via command line: dbt-bouncer-example.yml
Loading config from /home/pslattery/repos/dbt-bouncer/dbt-bouncer-example.yml...
Loading config from dbt-bouncer-example.yml...
Loaded config from dbt-bouncer-example.yml...
conf={'dbt_artifacts_dir': 'dbt_project/target', 'catalog_checks': [{'name': 'check_column_name_complies_to_column_type', 'column_name_pattern': '^is_.*', 'exclude': '^staging', 'types': ['BOOLEAN']}]}
Validating conf...

When parsing artifacts, dbt-bouncer displays a summary table of discovered resources:

            Parsed artifacts for
            'dbt_bouncer_test_project'
╭──────────────────┬─────────────────┬───────╮
│ Artifact         │ Category        │ Count │
├──────────────────┼─────────────────┼───────┤
│ manifest.json    │ Exposures       │     2 │
│                  │ Macros          │     3 │
│                  │ Nodes           │    12 │
│                  │ Seeds           │     3 │
│                  │ Semantic Models │     1 │
│                  │ Snapshots       │     2 │
│                  │ Sources         │     4 │
│                  │ Tests           │    36 │
│                  │ Unit Tests      │     3 │
│ catalog.json     │ Nodes           │    13 │
│                  │ Sources         │     0 │
│ run_results.json │ Results         │    51 │
╰──────────────────┴─────────────────┴───────╯

Running as an executable using uv#

Run dbt-bouncer as a standalone Python executable using uv:

uvx dbt-bouncer --config-file <PATH_TO_CONFIG_FILE>

GitHub Actions#

Run dbt-bouncer as part of your CI pipeline:

name: CI pipeline

on:
  pull_request:
      branches:
          - main

jobs:
    run-dbt-bouncer:
        permissions:
            pull-requests: write # Required to write a comment on the PR
        runs-on: ubuntu-latest
        steps:
            - name: Checkout
              uses: actions/checkout@v4

            - name: Generate or fetch dbt artifacts
              run: ...

            - uses: godatadriven/dbt-bouncer@vX.X
              with:
                check: '' # optional, comma-separated check names to run
                config-file: ./<PATH_TO_CONFIG_FILE>
                only: manifest_checks # optional, defaults to running all checks
                output-file: results.json # optional, default does not save a results file
                output-format: json # optional, one of: csv, json, junit, sarif, tap. Defaults to json
                output-only-failures: false # optional, defaults to true
                send-pr-comment: true # optional, defaults to true
                show-all-failures: false # optional, defaults to false
                verbose: false # optional, defaults to false

We recommend pinning both a major and minor version number.

Docker#

Run dbt-bouncer via Docker:

docker run --rm \
    --volume "$PWD":/app \
    ghcr.io/godatadriven/dbt-bouncer:vX.X.X \
    --config-file /app/<PATH_TO_CONFIG_FILE>

Programmatic invocation#

dbt-bouncer can be invoked programmatically. run_bouncer returns the exit code of the process.

from pathlib import Path
from dbt_bouncer.main import run_bouncer

exit_code = run_bouncer(
    config_file=Path("path/to/dbt-bouncer.yml"),
    output_file=Path("results.json"),  # optional
    output_format="json",  # optional, one of: "csv", "json", "junit", "sarif", "tap". Defaults to "json"
)

How to contribute a check to dbt-bouncer#

See Adding a new check.

How to add a custom check to dbt-bouncer#

In addition to the checks built into dbt-bouncer, custom checks are supported, allowing users to write checks specific to the conventions of their own projects. To add a custom check:

  1. Create an empty directory and add a custom_checks_dir key to your config file. The value of this key should be the path to the directory you just created, relative to where the config file is located.
  2. In this directory create an empty __init__.py file.
  3. In this directory create a subdirectory named catalog, manifest, or run_results, depending on the type of artifact you want to check.
  4. In this subdirectory create a Python file that defines a check using the @check decorator:

        * The function name must start with `check_`.
        * The function must be decorated with `@check` from `dbt_bouncer.check_decorator`.
        * The first positional parameter determines the resource type to iterate over (e.g. `model`, `source`, `exposure`, `seed`).
        * Keyword-only arguments (after `*`) become user-configurable parameters, with types inferred from type hints.
        * Add `ctx` as a parameter only if the function needs access to the full check context (e.g. all models, all sources).
        * Use `fail()` from `dbt_bouncer.check_decorator` to signal a check failure with a clear message.
        * Include a docstring with a description of what the check does.
    
  5. In your config file, add the name of the check and any desired arguments.

  6. Run dbt-bouncer; your custom check will be executed.
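Steps 1–3 above can be sketched as a few shell commands (directory names here are illustrative; match them to the custom_checks_dir value in your config):

```shell
# Create the custom checks directory with a package marker
# and a subdirectory for manifest-based checks.
mkdir -p my_custom_checks/manifest
touch my_custom_checks/__init__.py
```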

An example:

  • Directory tree:

    .
    ├── dbt-bouncer.yml
    ├── dbt_project.yml
    ├── my_custom_checks
    │   ├── __init__.py
    │   └── manifest
    │       └── check_custom_to_me.py
    └── target
        └── manifest.json
    
  • Contents of check_custom_to_me.py:

    import re
    
    from dbt_bouncer.check_decorator import check, fail
    
    
    @check
    def check_model_naming_convention(model, *, model_name_pattern: str = "^(stg|int|fct|dim)_"):
        """Model names must match the supplied regex."""
        if not re.match(model_name_pattern, str(model.name)):
            fail(
                f"`{model.unique_id}` does not match the required pattern "
                f"`{model_name_pattern}`."
            )
    
  • Contents of dbt-bouncer.yml:

    custom_checks_dir: my_custom_checks
    
    manifest_checks:
        - name: check_model_naming_convention
          include: ^models/staging
          model_name_pattern: ^stg_
    

All custom checks automatically support the following parameters (no need to declare them): `description`, `exclude`, `include`, and `severity`.
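Before wiring a pattern like model_name_pattern into a custom check, it can help to exercise the matching logic standalone (the model names below are illustrative):

```python
import re

# Same matching logic as check_model_naming_convention above.
model_name_pattern = "^stg_"

for name in ["stg_orders", "orders"]:
    if not re.match(model_name_pattern, name):
        # Only `orders` fails the pattern here.
        print(f"`{name}` does not match the required pattern `{model_name_pattern}`.")
```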