I'm receiving a 'This run exceeded your account's run memory limits' error in my failed job
If you're receiving a `This run exceeded your account's run memory limits` error in your failed job, it means the job exceeded the memory limits set for your account. All dbt Cloud accounts have a pod memory of 600MiB, and memory limits apply on a per-run basis. Memory usage is typically driven by the amount of result data that dbt has to ingest and process, which is usually small but can become unexpectedly bloated by project design choices.
Common reasons
Some common reasons for higher memory usage are:
- `dbt run`/`dbt build`: Macros that capture large result sets from `run_query` may not all be necessary and can be memory inefficient.
- `dbt docs generate`: Source or model schemas with large numbers of tables (even if those tables aren't all used by dbt) cause dbt to ingest very large result sets for catalog queries.
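
For the `run_query` case, one way to keep memory usage down is to let the warehouse reduce the data before dbt pulls it into memory. A minimal sketch, with a hypothetical macro and model name:

```sql
{# Hypothetical macro: returns the distinct payment methods for use in a model.
   Selecting a single deduplicated column keeps the result set that dbt
   ingests into memory small, instead of pulling back whole tables. #}
{% macro get_payment_methods() %}
    {% set query %}
        select distinct payment_method
        from {{ ref('stg_payments') }}
    {% endset %}
    {% set results = run_query(query) %}
    {% if execute %}
        {{ return(results.columns[0].values()) }}
    {% endif %}
{% endmacro %}
```

The key idea is that filtering, deduplication, and aggregation happen in the warehouse query itself, so only the values the macro actually needs reach dbt's memory.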
Resolution
There are various reasons why you could be experiencing this error. We recommend reviewing your data models for opportunities to optimize or refactor them. For example, you can reduce the number of columns being selected, use `group by` or `where` clauses to filter data early in the query, or use `limit` clauses to reduce the amount of data being processed.
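
As an illustration of those suggestions, here is a sketch of a model query applying them together (the model and column names are hypothetical):

```sql
-- Hypothetical model: prune columns, filter early, aggregate, and cap rows
select
    customer_id,
    order_date,
    sum(amount) as total_amount   -- aggregate instead of returning raw rows
from {{ ref('stg_orders') }}
where order_date >= '2023-01-01'  -- filter early so less data is processed
group by customer_id, order_date
limit 10000                        -- cap the result size while iterating
```

Each clause shrinks the amount of data flowing through the run, which is what keeps memory usage within the account limit.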
If you've tried the earlier suggestions and are still experiencing failed job runs with this error about hitting the memory limits of your account, please reach out to support. We're happy to help!