I built a PowerSync + Supabase React app with end‑to‑end encryption (E2EE), leaning on differential watch hooks for a reactive UI. This is a candid write‑up of the developer experience: smooth parts, clunky parts, performance questions I still want to answer, and where a little documentation and tooling could remove friction for the next person.
Setup & DX
The initial setup was quick: wiring Supabase Auth, pointing the PowerSync client at my service, and getting a watch query on screen. From there, layering E2EE (vault key, per‑room keys, and mirror tables) felt natural. The rough edge was operational rather than conceptual: I had to bounce between the Supabase and PowerSync dashboards to configure URLs, roles, and SQL, copying values back and forth. Coming from an AWS CDK background, I missed an infrastructure‑as‑code path to stamp environments consistently. A stable CLI or IaC starter (for roles, publications, and Sync Rules) would replace that copy‑paste friction with a repeatable, scriptable step.
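For orientation, here's roughly what that wiring looks like as a minimal sketch, assuming %%@powersync/web%% and %%@supabase/supabase-js%%; the URLs/keys are placeholders and the upload loop is elided:

```ts
import { createClient } from '@supabase/supabase-js';
import {
  PowerSyncDatabase,
  type AbstractPowerSyncDatabase,
  type PowerSyncBackendConnector
} from '@powersync/web';
import { AppSchema } from './schema'; // hypothetical: your client schema module

// Placeholders: substitute your own project values.
const SUPABASE_URL = 'https://<project>.supabase.co';
const SUPABASE_ANON_KEY = '<anon-key>';
const POWERSYNC_URL = 'https://<instance>.powersync.journeyapps.com';

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);

// The connector bridges auth (credentials for the PowerSync service)
// and uploads (forwarding the local write queue to Supabase).
const connector: PowerSyncBackendConnector = {
  async fetchCredentials() {
    const { data } = await supabase.auth.getSession();
    if (!data.session) return null;
    return { endpoint: POWERSYNC_URL, token: data.session.access_token };
  },
  async uploadData(database: AbstractPowerSyncDatabase) {
    const tx = await database.getNextCrudTransaction();
    if (!tx) return;
    for (const op of tx.crud) {
      // Apply op.op / op.table / op.opData against Supabase here.
    }
    await tx.complete();
  }
};

export const db = new PowerSyncDatabase({
  schema: AppSchema,
  database: { dbFilename: 'app.sqlite' }
});
await db.connect(connector);
```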
Architecture at a glance
Messages and rooms are stored as encrypted rows; mirrors expose the plaintext needed for UI. I used PowerSync’s differential watch to feed the chat, because it tells you exactly what changed (added/updated/removed) rather than re‑emitting whole result sets. It plays well with list UIs — especially when you want a snappy scrollbar without fetching the entire history up front. See: Live / Watch Queries and the note on trade‑offs in High‑Performance Diffs.
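As a sketch of the pattern (API names like %%query().differentialWatch()%% and the %%onDiff%% listener are as I read them in the Live/Watch Queries docs, so double‑check against your SDK version; %%messages_plain%% is the plaintext mirror described later):

```ts
// Hypothetical UI appliers for a virtualized chat list.
const addRow = (row: unknown) => { /* insert into list state */ };
const removeRow = (row: unknown) => { /* drop from list state */ };
const updateRow = (row: unknown) => { /* patch one list item */ };

const roomId = 'room-123'; // placeholder

const watched = db
  .query({
    sql: `SELECT id, room_id, body, created_at
          FROM messages_plain
          WHERE room_id = ?
          ORDER BY created_at DESC
          LIMIT 200`,
    parameters: [roomId]
  })
  .differentialWatch();

watched.registerListener({
  onDiff: (diff) => {
    // Apply only what changed instead of re-rendering the whole list.
    diff.added.forEach(addRow);
    diff.removed.forEach(removeRow);
    diff.updated.forEach(updateRow); // exact shape of updated entries: see the docs
  }
});
```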
What felt smooth
Once the plumbing was in place, local‑first flow really shines: writes are instant, the UI stays responsive offline, and sync‑back is transparent. The encryption pieces slot in cleanly because the app owns the mirror schema, so I can keep ciphertext opaque while shaping the plaintext columns I actually render.
Where I stumbled
I spent time on trial and error with Sync Rules and Sync Streams syntax (Sync Streams being the new system PowerSync is working on to eventually replace Sync Rules; it's currently in early alpha), figuring out which SQL operators are supported in parameter/data queries and how to structure them for buckets. The "Ask AI" assistant didn't unlock much there. What did help was building a small catalog of working examples and learning the constraints called out in the docs: Data Queries and Parameter Queries, plus the detailed limits in Operators & Functions.
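For concreteness, here's the shape of a bucket definition in the style that eventually worked for me. Table and column names are illustrative, and the operator limits in the pages above still apply:

```yaml
bucket_definitions:
  room_messages:
    # Parameter query: which buckets does this user get?
    # request.user_id() resolves from the authenticated JWT.
    parameters: SELECT room_id FROM room_members WHERE user_id = request.user_id()
    data:
      # Data query: explicit output columns; the ciphertext stays opaque.
      - SELECT id, room_id, alg, aad, nonce, ciphertext, created_at
        FROM messages
        WHERE room_id = bucket.room_id
```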
Raw tables or default views?
Default (JSON-based) SQLite views are zero-migration on the client: the PowerSync SDK applies your client schema over schemaless JSON, so you aren’t writing client migrations when fields change. Raw tables are the “you own it” path: you create tables, define the sync mappings (%%put%%/%%delete%%), add triggers to forward local writes, and handle schema changes yourself. In return, you get native indexes/constraints and faster queries on hot paths. The docs spell this out here: Raw SQLite Tables (“How Raw Tables Work”, “Define sync mapping…”, “Capture local writes with triggers”).
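To make that concrete, here's an illustrative raw table for the cipher envelope this app syncs. Names are mine, and the %%put%%/%%delete%% sync mapping plus the write‑capture triggers follow the linked docs rather than being reproduced here:

```sql
-- Raw table owned by the app; PowerSync writes synced rows into it via
-- the put/delete statements you register for this table.
CREATE TABLE messages_raw (
  id          TEXT PRIMARY KEY NOT NULL,
  room_id     TEXT NOT NULL,
  alg         TEXT NOT NULL,  -- cipher suite identifier
  aad         TEXT,           -- additional authenticated data (base64)
  nonce       TEXT NOT NULL,  -- base64 nonce
  ciphertext  TEXT NOT NULL,  -- base64 payload
  kdf_params  TEXT,           -- JSON-encoded KDF parameters
  created_at  TEXT NOT NULL
);

-- A native index on the hot path (history paging within a room) is a
-- main payoff over JSON-based views.
CREATE INDEX idx_messages_raw_room_time
  ON messages_raw (room_id, created_at DESC);
```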
Why raw tables in this project: The app encrypts at the application layer. What we sync is an opaque cipher envelope (%%alg%%, %%aad%%, %%nonce%%, ciphertext, KDF params), not a JSON business object. The JSON view model can’t decrypt on the client’s behalf or project fields out of ciphertext; it’s designed for schemaless JSON that’s already usable. With raw tables, I can store the envelope as columns, decrypt locally, and maintain mirror tables that expose just the plaintext fields the UI needs, with proper indexes. That’s why JSON views weren’t a fit here.
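A minimal sketch of the mirror flow, assuming AES‑GCM via WebCrypto; the table names and the %%decryptEnvelope()%% helper are hypothetical, and %%db%% is the PowerSync database from earlier:

```ts
// The mirror is a local-only table the app owns; create it once at startup.
await db.execute(`
  CREATE TABLE IF NOT EXISTS messages_plain (
    id TEXT PRIMARY KEY NOT NULL,
    room_id TEXT NOT NULL,
    body TEXT NOT NULL,
    created_at TEXT NOT NULL
  )
`);

// Decode the envelope fields (stored base64) and decrypt with the room key.
async function decryptEnvelope(
  row: { nonce: string; aad: string; ciphertext: string },
  key: CryptoKey
): Promise<string> {
  const decode = (s: string) => Uint8Array.from(atob(s), (c) => c.charCodeAt(0));
  const plain = await crypto.subtle.decrypt(
    { name: 'AES-GCM', iv: decode(row.nonce), additionalData: decode(row.aad) },
    key,
    decode(row.ciphertext)
  );
  return new TextDecoder().decode(plain);
}

// Mirror one synced envelope into the plaintext table the UI queries.
async function mirrorRow(row: any, roomKey: CryptoKey): Promise<void> {
  const body = await decryptEnvelope(row, roomKey);
  await db.execute(
    'INSERT OR REPLACE INTO messages_plain (id, room_id, body, created_at) VALUES (?, ?, ?, ?)',
    [row.id, row.room_id, body, row.created_at]
  );
}
```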
Migrations, to be precise: JSON views require no client migrations; with raw tables, migrations are your responsibility. If you add columns to a raw table, there's currently no "re-sync with new columns" mechanism. The guidance is to delete the local data for that table and let the client perform a full re-sync so the new columns populate. This is a client-side reset only (server data stays intact), but note that %%disconnectAndClear()%% does not clear raw tables; you must wipe them explicitly. See Migrations, Migrations on raw tables, and Deleting data and raw tables.
Recommendation: Start with defaults to get momentum; move targeted tables to raw when you need schema guarantees, client-side indexes, or app-layer encryption. A tiny migration playbook (version check → pause watchers → wipe affected raw tables → reconnect) makes this path safe from day one.
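Here's that playbook as a sketch; %%RAW_TABLES%%, the %%app_kv%% version table, and the version constant are all illustrative. The one non‑negotiable detail from the docs is that %%disconnectAndClear()%% does not clear raw tables, so they're wiped explicitly:

```ts
const SCHEMA_VERSION = 2;                 // bump when a raw table changes shape
const RAW_TABLES = ['messages_raw'];      // raw tables affected by this version

async function migrateRawTablesIfNeeded(): Promise<void> {
  // app_kv is an app-owned key/value table created alongside the raw tables.
  const rows = await db.getAll<{ value: string }>(
    `SELECT value FROM app_kv WHERE key = 'schema_version'`
  );
  const current = rows.length ? Number(rows[0].value) : 0;
  if (current === SCHEMA_VERSION) return;

  await db.disconnect();                       // pause sync (and your watchers)
  for (const table of RAW_TABLES) {
    await db.execute(`DELETE FROM ${table}`);  // client-side wipe only
  }
  await db.execute(
    `INSERT OR REPLACE INTO app_kv (key, value) VALUES ('schema_version', ?)`,
    [String(SCHEMA_VERSION)]
  );
  await db.connect(connector);                 // full re-sync repopulates the tables
}
```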
Performance questions (and how I plan to answer them)
I haven’t pushed this to “giant room” scale yet, so my unknowns are mostly about behavior under load. How quickly does a cold client converge when a room has millions of messages? What’s the steady‑state input‑to‑paint latency with high message rates? How big does the local DB get, and what does P95/P99 look like on typical devices? What’s the cost profile of a differential watch when result sets are large? The high‑level note in the docs is that differential watches are the most flexible but can be slower on large result sets — hence the need to measure in context. See: High‑Performance Diffs.
My plan is to build a small harness that varies fan‑out (members/room), message rate, and payload size, and records: time to "recent messages visible" on cold start; steady‑state input‑to‑paint latency; memory/CPU of the watch path; local DB footprint and query latency; convergence after offline/online flaps; and network/backpressure behavior under bursts. Until those numbers exist, I'd ship conservative defaults: small initial windows per room, attach Sync Streams only for visible rooms (or, if using Sync Rules, use client parameters to limit data similarly), cap the local cache with tail eviction, and back off when queues deepen. The built‑in diagnostics are handy here: see the troubleshooting page and the Sync Diagnostics Client.
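As one example of what the harness measures, a sketch of the cold‑start metric, using the plain watch API; the definition of "first paint‑worthy batch" is mine:

```ts
// Time from cold start until the first non-empty batch of recent messages.
async function measureColdStart(roomId: string): Promise<number> {
  const t0 = performance.now();
  for await (const result of db.watch(
    `SELECT id FROM messages_plain
     WHERE room_id = ? ORDER BY created_at DESC LIMIT 50`,
    [roomId]
  )) {
    if (result.rows && result.rows.length > 0) {
      return performance.now() - t0; // first paint-worthy batch arrived
    }
  }
  return -1; // watch ended before any rows arrived
}
```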
Data‑shaping & Sync Streams
Two practical tips helped with the "scrollbar‑happy" prefetch problem. First, shape what you sync: be explicit about output columns and avoid accidental broadening (Data Queries). Second, let the UI drive attachment: open a Sync Stream for the room you're actually looking at (or, if using Sync Rules, use client parameters); keep a tiny warm cache for neighbors; and page backwards as the user scrolls. Differential watch makes this cheap to reason about, because you get diffs instead of whole lists. When you do need parameters from the client, read the caveats in Parameter Queries and the limits in Operators & Functions. A sketch of both halves follows.
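The Sync Streams call follows the alpha docs (%%syncStream(...).subscribe()%%) and may change; the cursor paging is ordinary SQL against the plaintext mirror:

```ts
// Attach a stream only while its room is on screen (Sync Streams, alpha API).
async function onRoomVisible(roomId: string) {
  const sub = await db.syncStream('room_messages', { room_id: roomId }).subscribe();
  return () => sub.unsubscribe(); // detach when the room leaves the viewport
}

// Page history backwards as the user scrolls, keyed on a created_at cursor.
async function loadOlderMessages(roomId: string, beforeCreatedAt: string) {
  return db.getAll(
    `SELECT id, body, created_at FROM messages_plain
     WHERE room_id = ? AND created_at < ?
     ORDER BY created_at DESC LIMIT 50`,
    [roomId, beforeCreatedAt]
  );
}
```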
Documentation notes
The docs got me there, but the edges around Sync Rules and Sync Streams SQL were where I burned time. A “decision tree” for raw vs. default views, a migration playbook for raw tables, and a handful of end‑to‑end examples for common patterns (chat, feed, infinite scroll) would pay for themselves. A small “what SQL is supported in Sync Rules & Sync Streams” table with yes/no examples would reduce trial‑and‑error.
JS client ergonomics
I wanted a bit more hand‑holding in the JS client: clearer raw‑query guidance and sharper examples of watch queries driving list UIs. The watch‑query pages are good starting points: the Live Queries / Watch Queries overview and High‑Performance Diffs. If you're using a typed SQL builder (Drizzle/Kysely), first‑class adapters/wrappers for watch queries would be great.
Raw tables + Sync Streams as a default path?
Raw tables feel powerful — and paired with Sync Streams they seem like a strong mental model for new apps. I’d love official guidance on when that should be the default versus when the higher‑level helpers are a better fit. Until then, I’m comfortable recommending: start simple, then graduate to raw tables + Sync Streams where you need precision or speed.
What I’d love to see next
- A CLI or IaC starter that provisions Supabase + PowerSync and stamps roles, publications, and Sync Rules/Streams without manual copy‑paste.
- A “new‑dev guide” that says plainly when to stick to default views and when to reach for raw tables, including the migration playbook and callouts to the raw‑table sections: How Raw Tables Work, Migrations, Migrations on raw tables, and Deleting data and raw tables.
- More examples covering bucket design and supported SQL in Sync Rules/Streams: start at Data Queries, Parameter Queries, and Operators & Functions.
Closing
Overall, the experience was good. The local‑first model feels great in a chat app, and PowerSync's watch hooks pair naturally with E2EE mirrors. The rough edges I hit (manual provisioning, raw‑table migrations, and figuring out the limits of Sync Rules/Streams) are solvable with a little guidance and tooling. I'd happily ship this to production once the benchmarking questions are answered and a small migration helper for raw tables is in place.