Content
Docker Compose Installation fills /app/tmp/cache and never clears it
Added by Philipp Mallot about 1 month ago
I'm running OpenProject 16.4.0 using Docker Compose (see below). Over time, the /app/tmp/cache directory in the web container fills up with new folders and files that are never deleted. This eventually led to the host running out of inodes. Note that while pgdata and opdata are mounted host directories, the cache stays inside the container, so it gets deleted on docker-compose down. However, depending on the size of the cache, that can take very long and can even fail.
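For reference, this is how I noticed the problem on the host (plain df, nothing OpenProject-specific; watch the IUse% column):

```shell
# Host-side inode usage; IUse% creeping toward 100% was my symptom.
df -i /
```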
Uploading files seems to increase the file and directory count more than other activities.
I've tracked the inode, folder, and file counts over the last couple of weeks, and they only go up (see plots below).
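The tracking itself is nothing fancy; roughly this (count_cache is just my helper name, and the demo directory stands in for /app/tmp/cache, which I check inside the container via docker compose exec web sh):

```shell
#!/bin/sh
# count_cache: print file and directory counts for a given directory.
# Inside the web container I point this at /app/tmp/cache.
count_cache() {
  dir=$1
  printf 'files=%s dirs=%s\n' \
    "$(find "$dir" -type f | wc -l | tr -d ' ')" \
    "$(find "$dir" -type d | wc -l | tr -d ' ')"
}

# Demo against a throwaway directory (replace with /app/tmp/cache in the container).
demo=$(mktemp -d)
mkdir -p "$demo/sub"
touch "$demo/sub/entry"
count_cache "$demo"   # prints "files=1 dirs=2"
rm -r "$demo"
```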
I've found this bug report, but I fail to understand how to run the manual cleanup command in my dockerized setup. The cleanup service does seem to be running, though; at least the logs say so.
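My best guess for running the cleanup by hand would be something like the following, using the job class name that appears in my worker logs, but I'm not sure this is the intended way:

```shell
# Guess: run the cleanup job synchronously inside the web container.
# "web" is the compose service name from my docker-compose.yml below;
# Cron::ClearTmpCacheJob is the class name taken from the worker logs.
docker compose exec web bundle exec rails runner "Cron::ClearTmpCacheJob.perform_now"
```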
I need help understanding whether this is a configuration issue or a bug, possibly still the one linked above.
Any help would be greatly appreciated!
Truncated output of docker-compose logs web | grep ClearTmpCacheJob:
NAME_worker | I, [2025-09-14T02:45:00.031362 #7] INFO -- : [ActiveJob] Enqueued Cron::ClearTmpCacheJob (Job ID: 9b576b1e-eca3-47dd-bb9e-f83c3ec42bc0) to GoodJob(default)
NAME_worker | I, [2025-09-14T02:45:00.106214 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [9b576b1e-eca3-47dd-bb9e-f83c3ec42bc0] Performing Cron::ClearTmpCacheJob (Job ID: 9b576b1e-eca3-47dd-bb9e-f83c3ec42bc0) from GoodJob(default) enqueued at 2025-09-14T02:45:00.026100117Z
NAME_worker | I, [2025-09-14T02:45:00.471088 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [9b576b1e-eca3-47dd-bb9e-f83c3ec42bc0] Performed Cron::ClearTmpCacheJob (Job ID: 9b576b1e-eca3-47dd-bb9e-f83c3ec42bc0) from GoodJob(default) in 364.96ms
NAME_worker | I, [2025-09-21T02:45:00.028461 #7] INFO -- : [ActiveJob] Enqueued Cron::ClearTmpCacheJob (Job ID: b67af27d-fc5c-4325-80bf-7fa5bc879ba3) to GoodJob(default)
NAME_worker | I, [2025-09-21T02:45:00.106138 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [b67af27d-fc5c-4325-80bf-7fa5bc879ba3] Performing Cron::ClearTmpCacheJob (Job ID: b67af27d-fc5c-4325-80bf-7fa5bc879ba3) from GoodJob(default) enqueued at 2025-09-21T02:45:00.024418565Z
NAME_worker | I, [2025-09-21T02:45:00.140253 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [b67af27d-fc5c-4325-80bf-7fa5bc879ba3] Performed Cron::ClearTmpCacheJob (Job ID: b67af27d-fc5c-4325-80bf-7fa5bc879ba3) from GoodJob(default) in 34.22ms
NAME_worker | I, [2025-09-28T02:45:00.013715 #7] INFO -- : [ActiveJob] Enqueued Cron::ClearTmpCacheJob (Job ID: 542f63e1-2e7a-4c1c-af31-43289f0bfa75) to GoodJob(default)
NAME_worker | I, [2025-09-28T02:45:00.070286 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [542f63e1-2e7a-4c1c-af31-43289f0bfa75] Performing Cron::ClearTmpCacheJob (Job ID: 542f63e1-2e7a-4c1c-af31-43289f0bfa75) from GoodJob(default) enqueued at 2025-09-28T02:45:00.008754341Z
NAME_worker | I, [2025-09-28T02:45:00.105937 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [542f63e1-2e7a-4c1c-af31-43289f0bfa75] Performed Cron::ClearTmpCacheJob (Job ID: 542f63e1-2e7a-4c1c-af31-43289f0bfa75) from GoodJob(default) in 35.79ms
NAME_worker | I, [2025-10-05T02:45:00.047512 #7] INFO -- : [ActiveJob] Enqueued Cron::ClearTmpCacheJob (Job ID: d68ed184-a09e-4dab-bef3-5beaa6fb437c) to GoodJob(default)
NAME_worker | I, [2025-10-05T02:45:00.134305 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [d68ed184-a09e-4dab-bef3-5beaa6fb437c] Performing Cron::ClearTmpCacheJob (Job ID: d68ed184-a09e-4dab-bef3-5beaa6fb437c) from GoodJob(default) enqueued at 2025-10-05T02:45:00.041108594Z
NAME_worker | I, [2025-10-05T02:45:00.191363 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [d68ed184-a09e-4dab-bef3-5beaa6fb437c] Performed Cron::ClearTmpCacheJob (Job ID: d68ed184-a09e-4dab-bef3-5beaa6fb437c) from GoodJob(default) in 57.14ms
NAME_worker | I, [2025-10-12T02:45:00.027528 #7] INFO -- : [ActiveJob] Enqueued Cron::ClearTmpCacheJob (Job ID: 4f60bb94-19cc-416f-8a12-cbb485e96acc) to GoodJob(default)
NAME_worker | I, [2025-10-12T02:45:00.122607 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [4f60bb94-19cc-416f-8a12-cbb485e96acc] Performing Cron::ClearTmpCacheJob (Job ID: 4f60bb94-19cc-416f-8a12-cbb485e96acc) from GoodJob(default) enqueued at 2025-10-12T02:45:00.022645645Z
NAME_worker | I, [2025-10-12T02:45:00.206745 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [4f60bb94-19cc-416f-8a12-cbb485e96acc] Performed Cron::ClearTmpCacheJob (Job ID: 4f60bb94-19cc-416f-8a12-cbb485e96acc) from GoodJob(default) in 84.23ms
NAME_worker | I, [2025-10-19T02:45:00.018097 #7] INFO -- : [ActiveJob] Enqueued Cron::ClearTmpCacheJob (Job ID: dbbf4643-6dbf-4b35-892a-bba2664f5d5b) to GoodJob(default)
NAME_worker | I, [2025-10-19T02:45:00.070527 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [dbbf4643-6dbf-4b35-892a-bba2664f5d5b] Performing Cron::ClearTmpCacheJob (Job ID: dbbf4643-6dbf-4b35-892a-bba2664f5d5b) from GoodJob(default) enqueued at 2025-10-19T02:45:00.008581410Z
NAME_worker | I, [2025-10-19T02:45:00.110668 #7] INFO -- : [ActiveJob] [Cron::ClearTmpCacheJob] [dbbf4643-6dbf-4b35-892a-bba2664f5d5b] Performed Cron::ClearTmpCacheJob (Job ID: dbbf4643-6dbf-4b35-892a-bba2664f5d5b) from GoodJob(default) in 40.21ms
Plot of rising file, directory, and inode counts over the last 3 months of very light usage. I would expect to see a drop every time the cron job executes.
docker-compose.yml:
version: '3.7'
networks:
  frontend:
    external:
      name: proxy_net
  op.backend:
x-op-restart-policy: &restart_policy
  restart: unless-stopped
x-op-image: &image
  image: openproject/openproject:${TAG:-16-slim}
x-op-app: &app
  <<: [*image, *restart_policy]
  environment:
    OPENPROJECT_HTTPS: "${OPENPROJECT_HTTPS:-true}"
    OPENPROJECT_HOST__NAME: "${OPENPROJECT_HOST__NAME:-localhost:8080}"
    OPENPROJECT_HSTS: "${OPENPROJECT_HSTS:-true}"
    RAILS_CACHE_STORE: "memcache"
    OPENPROJECT_CACHE__MEMCACHE__SERVER: "cache:11211"
    OPENPROJECT_RAILS__RELATIVE__URL__ROOT: "${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
    DATABASE_URL: "${DATABASE_URL:-postgres://postgres:p4ssw0rd@db/openproject?pool=20&encoding=unicode&reconnect=true}"
    RAILS_MIN_THREADS: ${RAILS_MIN_THREADS:-4}
    RAILS_MAX_THREADS: ${RAILS_MAX_THREADS:-16}
    # set to true to enable the email receiving feature. See ./docker/cron for more options
    IMAP_ENABLED: "${IMAP_ENABLED:-false}"
  volumes:
    - "${OPDATA:-opdata}:/var/openproject/assets"
services:
  db:
    container_name: 'op.NAME_db'
    image: postgres:13
    <<: *restart_policy
    stop_grace_period: "3s"
    volumes:
      - "${PGDATA:-pgdata}:/var/lib/postgresql/data"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-p4ssw0rd}
      POSTGRES_DB: openproject
    networks:
      - op.backend
  cache:
    container_name: 'op.NAME_cache'
    image: memcached
    <<: *restart_policy
    networks:
      - op.backend
  web:
    container_name: 'op.NAME_web'
    <<: *app
    command: "./docker/prod/web"
    networks:
      - frontend
      - op.backend
    depends_on:
      - db
      - cache
      - seeder
    labels:
      - autoheal=true
    ports:
      - 8080:80
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}/health_checks/default"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 90s
  autoheal:
    container_name: 'op.NAME_autoheal'
    image: willfarrell/autoheal:1.2.0
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      AUTOHEAL_CONTAINER_LABEL: autoheal
      AUTOHEAL_START_PERIOD: 600
      AUTOHEAL_INTERVAL: 30
  worker:
    container_name: 'op.NAME_worker'
    <<: *app
    command: "./docker/prod/worker"
    networks:
      - op.backend
    depends_on:
      - db
      - cache
      - seeder
  cron:
    container_name: 'op.NAME_cron'
    <<: *app
    command: "./docker/prod/cron"
    networks:
      - op.backend
    depends_on:
      - db
      - cache
      - seeder
  seeder:
    container_name: 'op.NAME_seeder'
    <<: *app
    command: "./docker/prod/seeder"
    restart: on-failure
    networks:
      - op.backend
.env:
MAIL_DELIVERY_METHOD="smtp"
SMTP_ADDRESS="REMOVED"
SMTP_PORT="REMOVED"
SMTP_DOMAIN="REMOVED"
SMTP_AUTHENTICATION="login"
SMTP_USER_NAME="REMOVED"
SMTP_PASSWORD="REMOVED"
SMTP_ENABLE_STARTTLS_AUTO="true"
SERVER_HOSTNAME="REMOVED"
SECRET_KEY_BASE="REMOVED"
POSTGRES_PASSWORD="REMOVED"
DATABASE_URL="REMOVED"
TAG=16-slim
OPENPROJECT_HOST__NAME=REMOVED
PGDATA=/data/op.NAME/pgdata
OPDATA=/data/op.NAME/assets