Louvando
Overview
Louvando targets a very specific, real‑world scenario church musicians face every week: the pastor changes the worship set in the middle of the service, and the musician has a few seconds to find the song, transpose it to the congregation’s key and check an audio reference, usually over a flaky 3G connection. Before this project, the alternatives were slow websites and scattered PDFs, with no automatic transposition and no way to search by a lyrics snippet.
It started as a personal tool for my own worship team and musician friends. As the catalog grew and feedback kept coming from different churches, it became clear this was a common pain across denominations. The product goal converged to a single constraint: fast and accurate search that still works inside a church building with poor signal.
I built Louvando as a solo project end‑to‑end: product ideation, domain modeling, backend architecture in Node.js, frontend in Next.js 13, PostgreSQL + pgvector for storage and semantic search, Redis + Bull for background jobs, offline‑first PWA, social login, deployments and ongoing operations over more than 4 years.
Today the platform serves a catalog of 1,600+ worship songs with chords, base key, dynamic transposition and audio references, reaching roughly 12,000 monthly active users. Semantic search returns results in under 1 second, and the offline mode covers the very common case of losing signal mid‑service while still needing to access previously used songs.
Key Differentiator
What makes Louvando stand out is not “yet another chords site”, but the combination of three things at once: genuinely fast semantic search for Portuguese lyrics, an offline‑first PWA tailored for live worship services, and an architecture that treats lyrics and chords as different concerns to preserve search quality.
Instead of relying purely on text search by title, the platform uses vector embeddings (pgvector + HNSW) so musicians can search by any lyric fragment, even with typos or slight variations. The semantic index lives inside PostgreSQL itself, avoiding a separate vector database and the consistency issues that come with it.
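To make the idea concrete, here is a minimal sketch of how such a lyric search could be expressed against pgvector. The table and column names (`songs`, `embedding`) are assumptions for illustration, not the real schema:

```typescript
// Hypothetical sketch of a pgvector ANN query builder; `songs` and
// `embedding` are assumed names, not the production schema.
function buildLyricSearchQuery(limit: number): string {
  // `<=>` is pgvector's cosine distance operator. With an HNSW index on
  // `embedding`, PostgreSQL can answer this ORDER BY approximately but fast.
  // `$1` is the query embedding, passed as a parameter by the caller.
  return `
    SELECT id, title, 1 - (embedding <=> $1) AS similarity
    FROM songs
    ORDER BY embedding <=> $1
    LIMIT ${limit}
  `.trim();
}
```

Because the index and the data live in the same database, the embedding row is inserted in the same transaction as the song itself, which is what removes the consistency gap a separate vector store would introduce.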
Offline behavior is treated as a first‑class requirement, not a nice‑to‑have: the service worker caches recent searches and visited songs so the app keeps working when cell service drops inside the building. Chord transposition is fully decoupled from search indexing — lyrics are stored in a clean form for embedding, while rendered chord sheets are generated at view time, including proper support for Brazilian chord notation.
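The view‑time transposition can be sketched roughly like this. This is an illustrative helper, not the production code; it only covers the chord root plus an opaque suffix (e.g. "F#m7" transposed up two semitones becomes "G#m7"):

```typescript
// Minimal transposition sketch (assumed helper, not Louvando's actual code).
// Works on the chord root and keeps the quality suffix untouched.
const NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
const FLATS: Record<string, string> = {
  Db: "C#", Eb: "D#", Gb: "F#", Ab: "G#", Bb: "A#",
};

function transposeChord(chord: string, semitones: number): string {
  const match = chord.match(/^([A-G][b#]?)(.*)$/);
  if (!match) return chord; // not a chord token, leave untouched
  let [, root, suffix] = match;
  root = FLATS[root] ?? root; // normalize flats to sharps for the lookup
  const idx = NOTES.indexOf(root);
  // +120 keeps the modulo positive for negative transpositions
  return NOTES[(idx + semitones + 120) % 12] + suffix;
}
```

A real implementation also has to respect key context (preferring Gb over F# in flat keys) and slash chords, but the core is this 12‑semitone modular arithmetic applied at render time, never to the stored lyrics.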
Architecture
- Web App (Next.js 13 + PWA): Main user interface running in mobile and desktop browsers, using Server Components and Server Actions to optimize API calls, plus a service worker for offline caching of assets and song data.
- HTTP API (Node.js + Express): Backend layer exposing REST endpoints for auth, song management, text and semantic search, transposition, playlist building and admin operations.
- Domain Layer (TypeScript): Application services wired via dependency injection, orchestrating repositories, embedding providers, storage and queues without coupling business rules to infrastructure.
- Relational Database (PostgreSQL + TypeORM): Stores songs, clean lyrics (`raw_lyrics`), chorded content, audio metadata, users and usage history, with versioned migrations.
- Vector Extension (pgvector + HNSW index): Keeps embeddings for titles and lyrics and performs approximate nearest‑neighbor search with cosine similarity directly inside PostgreSQL.
- Text Search Engine (OpenSearch): Indexes textual fields for traditional searches by title or artist, complementing the vector layer where exact text matching is more appropriate.
- Async Queue (Redis + Bull): Manages background jobs for embedding generation, backfilling the legacy catalog, upload processing and other non‑critical tasks, with retries and controlled concurrency.
- Embedding Provider (DeepInfra + BGE‑M3): External service used by the API to generate high‑quality Portuguese embeddings, replacing a self‑hosted model that created unacceptable latency.
- File Storage (AWS S3): Stores audio references and heavy assets, decoupling media storage from the application server.
- Authentication (JWT + NextAuth): Combines JSON Web Tokens on the API with NextAuth on the frontend for Google and Facebook login, keeping sessions secure and PWA‑friendly.
- Observability (Sentry): Tracks frontend and backend errors, making it easier to spot real performance bottlenecks and device‑specific issues in the worship context.
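The domain layer's decoupling from infrastructure hinges on the provider abstractions mentioned above. A simplified sketch of the pattern, with a stub implementation standing in for the real DeepInfra client (the `EmbeddingProvider` name comes from the write‑up; everything else here is assumed):

```typescript
// Domain services depend only on this interface, so swapping Ollama for
// DeepInfra means binding a different implementation, nothing more.
interface EmbeddingProvider {
  embed(text: string): Promise<number[]>;
}

// Deterministic stand-in so the sketch runs without network access.
class FakeEmbeddingProvider implements EmbeddingProvider {
  async embed(text: string): Promise<number[]> {
    return Array.from(text).map((c) => c.charCodeAt(0) / 255);
  }
}

// An application service receives the provider via constructor injection.
class SearchService {
  constructor(private readonly embeddings: EmbeddingProvider) {}

  async vectorFor(query: string): Promise<number[]> {
    return this.embeddings.embed(query);
  }
}
```

The same shape applies to `StorageProvider` (local disk vs. S3) and `MailProvider`: business rules never see a vendor SDK directly.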
Technical Highlights
- Replaced a self‑hosted Ollama setup using `snowflake-arctic-embed2` (12–18s per search query in production) with BGE‑M3 on DeepInfra, dropping semantic search latency to sub‑second without touching domain logic, thanks to an `EmbeddingProvider` abstraction.
- Implemented a Redis + Bull job queue to backfill embeddings for the 1,600+ existing songs in the background, with retry logic and concurrency limits, avoiding downtime or API slowdowns during the migration.
- Explicitly split storage of `raw_lyrics` (clean lyrics for indexing) from chorded display content, using a `LyricsConverterProvider` backed by an LLM to strip chord notation before indexing and keep semantic search robust.
- Built an offline‑first PWA with a custom caching strategy for static assets and API responses, ensuring previously accessed songs remain available when connectivity drops mid‑service.
- Leveraged pgvector with an HNSW index and cosine similarity inside PostgreSQL, removing the need for a separate vector database and simplifying data consistency and operations.
- Standardized infrastructure providers (`EmbeddingProvider`, `StorageProvider`, `MailProvider`) using dependency injection, which made it straightforward to switch from Ollama to DeepInfra and from local disk storage to S3 without rewriting business logic.
- Integrated Next.js 13 with shadcn/ui, TailwindCSS and D3.js to build a responsive, accessible UI that can later power richer analytics and worship‑set visualizations for worship leaders.
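To illustrate why chords are stripped before embedding, here is a naive regex‑based sketch of deriving clean lyrics from a chorded sheet. The production pipeline uses the LLM‑backed `LyricsConverterProvider` precisely because heuristics like this one misfire in Portuguese (e.g. the preposition "Em" is indistinguishable from an E‑minor chord); every name below besides `raw_lyrics` semantics is an assumption:

```typescript
// Naive illustration only: drop lines that consist solely of chord tokens
// such as "G  D/F#  Em", keeping lyric lines for embedding. The real system
// delegates this to an LLM because chord-vs-word ambiguity breaks regexes.
const CHORD_LINE =
  /^\s*([A-G][b#]?(m|maj|min|dim|aug|sus)?\d*(\/[A-G][b#]?)?\s*)+$/;

function stripChordLines(chordedSheet: string): string {
  return chordedSheet
    .split("\n")
    .filter((line) => !CHORD_LINE.test(line)) // blank lines are kept
    .join("\n");
}
```

Keeping only lyric lines in the indexed text is what lets a musician search by any sung fragment without chord symbols polluting the embedding.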