The application uses PostgreSQL with SQLAlchemy (Flask-SQLAlchemy) for all persistent data: users, cities, message logs, daily metrics, and system health. This page describes the schema, where models live, how the codebase connects to the database, how migrations work, and placeholders for tech stack and DB backup details.

Schema overview

The database is organized around a few core tables:
  • user_detail — Subscribers: phone number, location (lat/lon, timezone), link to city, subscription state, and activity metrics.
  • city_detail — Cached geocoding results (city, state, country, timezone, lat/lon) to avoid repeated API calls.
  • message_log — Per-message delivery tracking (status, timing, cost, WhatsApp message ID); can reference the user (via the message_text JSON) as well as the city and API used.
  • daily_metrics — One row per day: message counts, deliveries, new/unsubscribed users, active users, cost, timezones served.
  • system_health_log — Periodic health snapshots (CPU, memory, disk, connections, queue size, response time, service status).
  • api_detail — API definitions (name, call string, params); referenced by message logs.
  • api_usage_log — Per-day, per-service API usage (call count, success/error, cost, response time).
  • ritu_detail, ayanam_detail — Seasonal/astronomical metadata keyed by city and date ranges.
user_detail links to city_detail via city_id. message_log can reference city_detail and api_detail. ritu_detail and ayanam_detail also reference city_detail.
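
The user_detail → city_detail link can be sketched as follows. This is an illustrative model pair using plain SQLAlchemy against in-memory SQLite, not the repo's actual classes: the real models are Flask-SQLAlchemy db.Model subclasses, and the column names beyond city_id are simplified assumptions.

```python
# Sketch of the user_detail -> city_detail foreign key described above.
# Hypothetical, simplified columns; the real models use Flask-SQLAlchemy.
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class CityDetail(Base):
    __tablename__ = "city_detail"
    id = Column(Integer, primary_key=True)
    city = Column(String)
    timezone = Column(String)

class UserDetail(Base):
    __tablename__ = "user_detail"
    id = Column(Integer, primary_key=True)
    phone_number = Column(String)
    city_id = Column(Integer, ForeignKey("city_detail.id"))  # link to city_detail
    city = relationship(CityDetail)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    chennai = CityDetail(city="Chennai", timezone="Asia/Kolkata")
    session.add(UserDetail(phone_number="+910000000000", city=chennai))
    session.commit()
    user = session.query(UserDetail).first()
    print(user.city.timezone)  # resolves the FK to the cached city row
```

Because geocoding results are cached in city_detail, many users can share one city row and its timezone/lat-lon without repeated API calls.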

Schema diagram (placeholder)

[Placeholder: A diagram of the database schema (tables and relationships) will be inserted here.]

Model files

All SQLAlchemy models live under the models/ package in the repo root:
  • models/user_detail.py — user_detail — Users: phone, location, timezone, city_id, state, subscription and activity fields.
  • models/city_detail.py — city_detail — Cities: city, state, country, timezone, latitude, longitude.
  • models/message_log.py — message_log — Message delivery logs: status, timing, cost, WhatsApp ID.
  • models/daily_metrics.py — daily_metrics — Daily aggregates: sends, deliveries, subscribers, cost, timezones.
  • models/system_health_log.py — system_health_log — Health checks: CPU, memory, disk, connections, queue, response time.
  • models/api_detail.py — api_detail — API definitions.
  • models/api_usage_log.py — api_usage_log — API usage per day/service.
  • models/ritu_detail.py — ritu_detail — Ritu (season) data by city/date.
  • models/ayanam_detail.py — ayanam_detail — Ayanam data by city/date.
The package exposes the shared db instance and model classes via models/__init__.py, which imports from db (see below) and from each model module. The main app and admin portal import from models (e.g. from models import UserDetail, CityDetail).

How the codebase connects to the database

  1. db.py (repo root) creates a single SQLAlchemy instance:
    from flask_sqlalchemy import SQLAlchemy
    db = SQLAlchemy()
    
  2. app.py (main Flask app) sets the connection string and initializes the app with the DB and migrations:
    • app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv("DATABASE_URL")
    • db.init_app(app)
    • migrate = Migrate(app, db)
  3. DATABASE_URL is the PostgreSQL connection string (e.g. postgresql+psycopg2://user:password@host:5432/dbname). In Docker Compose it is set for the web, celery, beat, and admin-portal services.
The admin portal (admin_portal/app.py) uses the same models package and its own Flask app that also sets SQLALCHEMY_DATABASE_URI from DATABASE_URL and calls db.init_app(app). Celery tasks run with a Flask app context that provides the same db and models (see Backend and Scheduling).

How migrations work

Migrations are managed with Flask-Migrate (Alembic under the hood).
  • Config — app.py registers Migrate(app, db). Alembic uses the Flask app’s db and SQLALCHEMY_DATABASE_URI via migrations/env.py, which gets the engine/URL from current_app.extensions['migrate'].db.
  • Migration scripts — Stored in migrations/versions/. Each file is a revision (e.g. b73282c5dc65_initial_tables.py, add_user_profile_fields.py, add_notification_channel.py). They define upgrade() and downgrade() using Alembic’s op API.
  • Commands (run from repo root, typically inside the web container so DATABASE_URL points at the DB):
    • Create a new revision after changing models:
      flask db revision -m "description" (then edit the generated file) or
      flask db migrate -m "description" (auto-generate from model diff).
    • Apply all pending migrations:
      flask db upgrade
    • Roll back one revision:
      flask db downgrade
For a fresh clone, after bringing up the stack and ensuring the database is created, run docker compose exec web flask db upgrade (see Quickstart).

Tech stack (placeholder)

[Placeholder: A short description of the database tech stack (PostgreSQL version, driver, connection pooling if any, and any relevant infrastructure) will be added here.]

Database backups to S3 (placeholder)

[Placeholder: If the project adds PostgreSQL backups (e.g. pg_dump) and uploads them to S3, the process and schedule will be described here. Currently, only log backups (application and Docker logs) are collected and uploaded to S3; see Logging for that flow.]