
Revelacine

React Next.js Node.js TypeScript PostgreSQL Python GSAP Zod Fastify

Overview

Revelacine started from a practical pain point: keeping up with movies meant juggling streaming apps, spreadsheets, and random notes. I wanted a single place where I could discover new movies, vote on recommendations, track what I had already watched, and eventually see how much time I had actually invested, without having to update everything manually all the time.

I turned that need into a full‑stack learning project focused on multi‑service architecture. Instead of building a simple monolith, I designed three independent services: a Next.js 13 frontend, a Fastify REST API, and a Python bot responsible for syncing the catalog with the TMDB API and optimizing images before they are stored. Each service has its own deployment path and, in a real environment, could scale independently.

My role covered the whole stack: domain modeling (movies, genres, votes, users, watchlist), Fastify + Prisma API implementation, Next.js frontend with hybrid SSR/CSR, and the Python bot that handles job orchestration and image processing. The result is a movie catalog that stays up to date continuously, backed by an image pipeline that keeps the UI visually rich without sacrificing performance.

Because this is a solo project aimed at learning, the impact is mainly technical: consolidating a modern stack, getting hands-on experience orchestrating loosely coupled services, and using a real use case to explore topics like rate limiting, exponential backoff, production CORS, and practical performance tuning in the frontend.

Key Differentiator

What makes Revelacine interesting is not “yet another movie catalog”, but the way it simulates a small production‑like environment inside a personal project. The sync layer is not a quick cron script glued to the API: it’s a dedicated Python service with its own scheduler, retry strategy with exponential backoff, and a full image pipeline integrated with an S3‑compatible storage (Cloudflare R2).
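
The retry strategy mentioned above can be sketched roughly as follows. This is a minimal illustration, not the bot's actual code; the function and parameter names are invented for the example:

```python
import time
import random

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=60.0,
                       sleep=time.sleep):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # 1s, 2s, 4s, ... capped at max_delay, plus random jitter so
            # concurrent jobs don't all retry at the same instant
            delay = min(base_delay * (2 ** attempt), max_delay)
            sleep(delay + random.uniform(0, delay / 2))
```

The jitter term is the part that keeps TMDB from being hammered in lockstep when several jobs fail at once during an outage.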

Another non‑trivial aspect is the focus on assets and external API consumption. Instead of serving TMDB’s original poster/backdrop images directly, the bot downloads them, resizes them, converts them to WebP with tuned compression, and only then pushes them to R2. On each sync cycle, it skips images that are already present in the bucket to avoid unnecessary work. On the data side, each type of job (trending, latest, genres) has its own schedule and retry policy, which reduces coupling and makes failure recovery more predictable.
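
The skip-existing check boils down to deriving a deterministic object key per image and filtering against what the bucket already holds. A sketch of that logic, with an assumed key scheme that may differ from the project's real layout:

```python
def webp_key(kind, tmdb_path):
    """Map a TMDB image path like '/abc123.jpg' to a stable R2 object key."""
    stem = tmdb_path.strip("/").rsplit(".", 1)[0]
    return f"{kind}/{stem}.webp"

def images_to_process(candidates, existing_keys):
    """Keep only the (kind, path) pairs whose WebP is not yet in the bucket."""
    return [(kind, path) for kind, path in candidates
            if webp_key(kind, path) not in existing_keys]
```

In the real bot, `existing_keys` would come from listing the R2 bucket (e.g. via Boto3's `list_objects_v2`), and each remaining image would then be downloaded, resized and re-encoded to WebP with Pillow before upload.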

This combination of SSR frontend, a performance‑oriented Fastify API, an isolated heavy‑duty Python bot, and a CDN‑backed image pipeline is what makes the project valuable as an architecture case study rather than “just a UI”.

Architecture

  • Frontend (Next.js 13 + React 18): Public UI for discovering movies, viewing details, voting and managing the watchlist. Uses SSR for main pages and React Query for client‑side caching and hydration. Global state (auth, votes, watchlist) is handled with lightweight Zustand stores, with GSAP for focused animation work.
  • API (Fastify + Prisma): Node.js service responsible for authentication (JWT + Google OAuth2) and REST endpoints for movies, genres, users, votes and watchlists. Uses Zod for payload validation, authentication/authorization middlewares, and exposes paginated, filterable endpoints for the catalog.
  • Sync Bot (Python 3.9): Separate service that talks directly to TMDB and PostgreSQL. Uses APScheduler to orchestrate independent jobs for trending, latest releases and genres, each with its own cron expression and retry/backoff behavior.
  • Image Pipeline (Pillow + Boto3 + Cloudflare R2): Subsystem inside the bot that downloads poster/backdrop files from TMDB, resizes them, converts them to WebP, and uploads them to a Cloudflare R2 bucket via Boto3. Returns stable CDN URLs that the frontend can use directly.
  • Database (PostgreSQL + Prisma): Central data store for movies, genres, users, votes, watchlists and sync metadata. Prisma handles schema migrations, typed queries and mapping between the Node.js API and the relational model.
  • Infrastructure (Docker, Vercel, VPS, GitHub Actions): Frontend deployed on Vercel; API and bot run in a VPS using Docker. GitHub Actions pipelines build and deploy services, and were the main driver for iterating on CORS configuration and environment management.
  • Auth & Security: Google OAuth2 for sign‑in, JWT issuance for frontend ↔ API communication, with Fastify hooks and middlewares protecting routes that require an authenticated user (e.g. voting and watchlist operations).
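
The hostname-based CORS check lives in the Fastify API, but the decision logic itself is language-agnostic; here it is sketched in Python for illustration, with a hypothetical allowlist (the real deployment's hostnames may differ):

```python
from urllib.parse import urlsplit

# Hypothetical values for illustration only
ALLOWED_HOSTS = {"revelacine.vercel.app", "localhost"}
ALLOWED_SUFFIXES = (".vercel.app",)  # e.g. Vercel preview deployments

def is_allowed_origin(origin):
    """Accept an Origin header based on its hostname, not the full URL string."""
    host = urlsplit(origin).hostname or ""
    return host in ALLOWED_HOSTS or host.endswith(ALLOWED_SUFFIXES)
```

Matching on the parsed hostname means scheme and port changes (http vs https, localhost:3000) and preview-deployment subdomains don't require touching a static allowlist.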

Technical Highlights

  • Implemented a retry mechanism with exponential backoff for Python sync jobs, preventing TMDB from being hammered during outages while keeping the catalog reasonably up to date.
  • Built an image pipeline that resizes and converts TMDB posters/backdrops to WebP via Pillow, then publishes them to Cloudflare R2 using Boto3, enabling fast delivery through a dedicated CDN domain.
  • Structured the Fastify API with Zod‑based input validation, JWT authentication and Google OAuth2 integration, maintaining strong typing across the stack with TypeScript and Prisma.
  • Split the system into three independent services (frontend, API, bot), each with distinct performance profiles and deployment lifecycles, mimicking a small‑scale microservices setup.
  • Addressed real‑world CORS issues between Vercel‑hosted frontend and a custom API domain by implementing hostname‑based origin checks rather than brittle static allowlists.
  • Scheduled multiple APScheduler cron jobs (trending every 10 minutes, latest every 30 minutes, genres hourly) with coalesce=True and max_instances=1 to avoid overlapping executions and race conditions during data sync.
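
The job table above can be expressed as data plus a small helper that builds the APScheduler `add_job` keyword arguments. The schedules come from this write-up; the job names and helper are illustrative:

```python
SYNC_JOBS = {
    "trending": {"minute": "*/10"},  # every 10 minutes
    "latest":   {"minute": "*/30"},  # every 30 minutes
    "genres":   {"minute": "0"},     # hourly, on the hour
}

def job_kwargs(name):
    """Build APScheduler add_job keyword arguments for one sync job."""
    return {
        "id": name,
        "trigger": "cron",
        "coalesce": True,      # collapse a backlog of missed runs into one
        "max_instances": 1,    # never overlap executions of the same job
        **SYNC_JOBS[name],
    }
```

With a `BackgroundScheduler`, each job would then be registered along the lines of `scheduler.add_job(sync_trending, **job_kwargs("trending"))`, so the overlap-prevention settings are applied uniformly instead of being repeated per job.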

Gallery