Where logs live
Application log files (inside containers)
- logs/app.log — Main Flask app (app.py). Configured with a TimedRotatingFileHandler: rotates at midnight, keeps 7 days of backups (suffix app.log.YYYY-MM-DD). Writes under the app's LOG_DIR (e.g. /app/logs in Docker). Also logs to console (StreamHandler).
- logs/celery.log — Celery worker and Beat (tasks.py). Same pattern: TimedRotatingFileHandler at midnight, 7 days retention, plus console. Written in the celery and beat containers (each has its own /app/logs/celery.log).

Both live in logs/ in the project (or /app/logs inside the web/celery/beat containers). Rotated files are named like app.log.2025-02-23 and celery.log.2025-02-23.
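The handler configuration described above can be sketched roughly as follows (the LOG_DIR environment variable name, logger names, and format string are assumptions, not the project's exact code):

```python
import logging
import os
from logging.handlers import TimedRotatingFileHandler

# Assumed env var; the doc says the app writes under LOG_DIR (/app/logs in Docker).
LOG_DIR = os.environ.get("LOG_DIR", "logs")
os.makedirs(LOG_DIR, exist_ok=True)


def make_logger(name: str, filename: str) -> logging.Logger:
    """Rotating file + console logger, in the pattern app.py and tasks.py use."""
    handler = TimedRotatingFileHandler(
        os.path.join(LOG_DIR, filename),
        when="midnight",   # rotate at midnight
        backupCount=7,     # keep 7 days; rotated files get a .YYYY-MM-DD suffix
    )
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    handler.setFormatter(fmt)
    console = logging.StreamHandler()  # also log to stdout/stderr for docker logs
    console.setFormatter(fmt)
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.addHandler(console)
    return logger


app_logger = make_logger("app", "app.log")
app_logger.info("configured")
```

The celery.log setup in tasks.py follows the same pattern with a different filename.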
Docker logs
Each service (web, celery, beat, admin-portal, redis, caddy, db, etc.) also has Docker logs — whatever the process writes to stdout/stderr. Docker stores these according to the daemon's logging driver (e.g. json-file with max-size and max-file limits, as set in docker-compose.yml). You can view them with docker compose logs.
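The json-file driver limits mentioned above might look like this per service in docker-compose.yml (the size and file-count values here are placeholders, not the project's actual settings):

```yaml
services:
  web:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate each container log file at ~10 MB (placeholder)
        max-file: "5"     # keep at most 5 rotated files per container (placeholder)
```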
Test message output
Generated WhatsApp test messages (from the test routes) are saved under logs/generated_whatsapp_messages/ as JSON files. That directory is created by app.py; it is not part of the rotating app.log or the backup's "application logs" list, but the backup process can include it if that path is under the same logs/ tree that gets copied.
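A minimal sketch of how such a test route might persist a message (the function name, file-naming scheme, and payload shape are assumptions for illustration):

```python
import json
import os
from datetime import datetime

# Matches the directory named in the doc; created by app.py at startup.
MSG_DIR = os.path.join("logs", "generated_whatsapp_messages")


def save_test_message(payload: dict) -> str:
    """Write a generated test message as a timestamped JSON file; return its path."""
    os.makedirs(MSG_DIR, exist_ok=True)
    name = datetime.now().strftime("%Y%m%d_%H%M%S_%f") + ".json"
    path = os.path.join(MSG_DIR, name)
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
    return path
```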
Viewing and saving logs
View Docker logs (live or one-off)
From the project root (where docker-compose.yml is):
docker compose logs web > web_logs.txt or docker compose logs --tail=1000 web > web_recent.txt.
View application log files (inside containers)
docker cp <container>:/app/logs/app.log ./saved_app.log.
Admin portal Log viewer
The Admin portal exposes /api/logs/dates, /api/logs/files, /api/logs/content, /api/logs/errors, and /api/logs/download. These read from the backup directory (/backups/logs), not directly from the running containers. So you see backup content (by date and file) and can download files or archives. Useful after the daily backup has run and the admin-portal container has access to the same volume.
How backup works (daily task)
- Who runs it: Celery Beat runs the task tasks.backup_logs_task once per day at 00:00 UTC (see Scheduling for the full Beat schedule).
- What it does: The task calls backup_all_logs() in helpers/log_collector.py. By default it backs up yesterday's logs (target date = today − 1 day). It can also be run manually with backup_current=True to capture the current state of containers and app logs (target date = today).
- Where it writes: Under /backups/logs/ (inside the container; in Docker Compose this is typically a host volume such as ./backups). Structure:
  - /backups/logs/<YYYY-MM-DD>/docker/ — One file per service: web.log, celery.log, beat.log, admin-portal.log, redis.log, caddy.log, db.log. Content comes from docker compose logs (the last 24 hours in scheduled mode, or all current logs in "current" mode).
  - /backups/logs/<YYYY-MM-DD>/application/ — Copies of app.log and celery.log (and any rotated variants like app.log.YYYY-MM-DD) from the application log directory (e.g. /app/logs). The collector looks in /app/logs or the project's logs/ directory.
- Which services are collected: Defined in log_collector.DOCKER_SERVICES: web, celery, beat, admin-portal, redis, caddy, db. Application log files are listed in APPLICATION_LOG_FILES: app.log and celery.log.
- After backup: For scheduled runs, the collector also runs compress_old_logs(days=7): backup directories older than 7 days are tar.gz'ed (and the original directory removed) to save space. The result: each day's logs end up under /backups/logs/<date>/, and older backup directories are then compressed.
How logs are backed up to S3 (frequency)
- Who runs it: Celery Beat runs tasks.upload_logs_to_s3_task once per week, on Sunday at 02:00 UTC.
- What it does: The task calls upload_logs_to_s3() in helpers/s3_uploader.py. By default it uploads the previous week (Monday–Sunday). For each day in that range it looks under /backups/logs/<date> (or <date>.tar.gz if already compressed), builds a single tar.gz if needed, and uploads it to S3. Optional: delete the local backup after a successful upload (delete_local=True).
- S3 layout: Objects are stored under a key like logs/<year>/<month>/week-<week_number>/<YYYY-MM-DD>.tar.gz (the week number is a simple 7-day bucket counted from Jan 1). Metadata (date, year, month, week, etc.) is set on the object.
- Configuration: S3_BUCKET_NAME must be set (e.g. in env or docker-compose) for the upload to run. The celery/beat services need AWS credentials (e.g. an IAM role or env vars) so that boto3 can write to the bucket.
Summary
| What | Where | When / how |
|---|---|---|
| Flask app log | logs/app.log (and rotated app.log.YYYY-MM-DD) | TimedRotatingFileHandler, midnight, 7 days; also console. |
| Celery log | logs/celery.log (and rotated) | Same in celery/beat containers. |
| Docker logs | Daemon (e.g. json-file) | Per service; view with docker compose logs. |
| Daily backup | /backups/logs/<date>/docker/*.log, .../application/*.log | Celery task at 00:00 UTC; collects Docker + app/celery logs; compresses dirs older than 7 days. |
| S3 backup | s3://<bucket>/logs/<year>/<month>/week-<n>/<date>.tar.gz | Celery task Sunday 02:00 UTC; uploads previous week’s backups; needs S3_BUCKET_NAME and AWS credentials. |
| View in UI | Admin portal Logs | Reads from /backups/logs (dates, files, content, download). |
The collection and upload logic lives in helpers/log_collector.py and helpers/s3_uploader.py; the Celery tasks that run them are in tasks.py (see Scheduling for the Beat schedule).