
Getting Started

How to run dbt-bouncer#

  1. Generate dbt artifacts by running a dbt command:

    • dbt parse to generate a manifest.json artifact.
    • dbt docs generate to generate a catalog.json artifact (necessary if you are using catalog checks).
    • dbt run (or any other command that implies it, e.g. dbt build) to generate a run_results.json artifact (necessary if you are using run results checks).
  2. Create a dbt-bouncer.yml config file (see the configuration documentation for details); a minimal example is sketched after this list.

  3. Run dbt-bouncer to validate that your conventions are being maintained.
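
For reference, a minimal config file might look like the sketch below. The artifact directory and catalog check simply mirror the example configuration that appears in the verbose output further down this page; substitute your own artifact directory and checks.

dbt_artifacts_dir: dbt_project/target

catalog_checks:
    - name: check_column_name_complies_to_column_type
      column_name_pattern: ^is_.*
      exclude: ^staging
      types:
          - BOOLEAN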

Installing with Python#

Install from pypi.org:

pip install dbt-bouncer # or via any other package manager

Run:

dbt-bouncer --config-file <PATH_TO_CONFIG_FILE>
Running dbt-bouncer (X.X.X)...
Loaded config from dbt-bouncer-example.yml...
Validating conf...

dbt-bouncer also supports a verbose mode; to use it, run:

dbt-bouncer --config-file <PATH_TO_CONFIG_FILE> -v
Running dbt-bouncer (X.X.X)...
config_file=PosixPath('dbt-bouncer-example.yml')
config_file_source='COMMANDLINE'
Config file passed via command line: dbt-bouncer-example.yml
Loading config from /home/pslattery/repos/dbt-bouncer/dbt-bouncer-example.yml...
Loading config from dbt-bouncer-example.yml...
Loaded config from dbt-bouncer-example.yml...
conf={'dbt_artifacts_dir': 'dbt_project/target', 'catalog_checks': [{'name': 'check_column_name_complies_to_column_type', 'column_name_pattern': '^is_.*', 'exclude': '^staging', 'types': ['BOOLEAN']}]}
Validating conf...

Running as an executable using uv#

Run dbt-bouncer as a standalone Python executable using uv:

uvx dbt-bouncer --config-file <PATH_TO_CONFIG_FILE>

GitHub Actions#

Run dbt-bouncer as part of your CI pipeline:

name: CI pipeline

on:
    pull_request:
        branches:
            - main

jobs:
    run-dbt-bouncer:
        permissions:
            pull-requests: write # Required to write a comment on the PR
        runs-on: ubuntu-latest
        steps:
            - name: Checkout
              uses: actions/checkout@v4

            - name: Generate or fetch dbt artifacts
              run: ...

            - uses: godatadriven/dbt-bouncer@vX.X
              with:
                config-file: ./<PATH_TO_CONFIG_FILE>
                output-file: results.json # optional, by default no results file is saved
                send-pr-comment: true # optional, defaults to true
                verbose: false # optional, defaults to false

We recommend pinning both the major and minor version numbers (e.g. @vX.X, as above).

Docker#

Run dbt-bouncer via Docker:

docker run --rm \
    --volume "$PWD":/app \
    ghcr.io/godatadriven/dbt-bouncer:vX.X.X \
    --config-file /app/<PATH_TO_CONFIG_FILE>

Pex#

You can also run the .pex (Python EXecutable) artifact directly once you have a Python interpreter (3.8 to 3.12) installed:

wget https://github.com/godatadriven/dbt-bouncer/releases/download/vX.X.X/dbt-bouncer.pex -O dbt-bouncer.pex

python dbt-bouncer.pex --config-file $PWD/<PATH_TO_CONFIG_FILE>

How to contribute a check to dbt-bouncer#

See Adding a new check.

How to add a custom check to dbt-bouncer#

In addition to the checks built into dbt-bouncer, custom checks are supported. These allow users to write checks that are specific to the conventions of their own projects. To add a custom check:

  1. Create an empty directory and add a custom_checks_dir key to your config file. The value of this key should be the path to the directory you just created, relative to where the config file is located.
  2. In this directory, create an empty __init__.py file.
  3. In this directory, create a subdirectory named catalog, manifest or run_results, depending on the type of artifact you want to check.
  4. In this subdirectory, create a Python file that defines a check. The check must meet the following criteria:

    • Start with "Check".
    • Inherit from dbt_bouncer.check_base.BaseCheck.
    • Have a name attribute that is a string whose value is the snake case equivalent of the class name.
    • A default value provided for optional input arguments and arguments that are received at execution time.
    • Have a doc string that includes a description of the check, a list of possible input parameters and at least one example.
    • A clear message in the event of a failure.
  5. In your config file, add the name of the check and any desired arguments.

  6. Run dbt-bouncer; your custom check will be executed.

An example:

  • Directory tree:

    .
    ├── dbt-bouncer.yml
    ├── dbt_project.yml
    ├── my_custom_checks
    |   ├── __init__.py
    |   └── manifest
    |       └── check_custom_to_me.py
    └── target
        └── manifest.json
    
  • Contents of check_custom_to_me.py:

    from typing import TYPE_CHECKING, Literal
    
    from pydantic import Field
    
    from dbt_bouncer.check_base import BaseCheck
    
    if TYPE_CHECKING:
        import warnings
    
        with warnings.catch_warnings():
            warnings.filterwarnings("ignore", category=UserWarning)
            from dbt_bouncer.parsers import DbtBouncerModelBase
    
    
    class CheckModelDeprecationDate(BaseCheck):
        """Check that the model has a `deprecation_date` set.

        Example(s):
            manifest_checks:
                - name: check_model_deprecation_date

        """

        model: "DbtBouncerModelBase" = Field(default=None)
        name: Literal["check_model_deprecation_date"]
    
        def execute(self) -> None:
            """Execute the check."""
    
            assert self.model.deprecation_date is not None, f"`{self.model.name}` requires a `deprecation_date` to be set."
    
  • Contents of dbt-bouncer.yml:

    custom_checks_dir: my_custom_checks
    
    manifest_checks:
        - name: check_model_deprecation_date
          include: ^models/staging/legacy_erp