Announcement
November 30, 2023 (updated)

Introducing PowerSync v1.0: Postgres<>SQLite sync layer

Conrad Hofmeyr

We are excited to announce the v1.0 stable release of PowerSync, a Postgres<>SQLite bi-directional sync layer. 

In this post, we’d like to give some background on the product as well as the philosophy underpinning its architecture and design.

The goal: accelerating local-first app architecture

The goal of PowerSync is to give developers significant leverage in implementing a local-first (or offline-first [1]) architecture, where app code works directly with a client-side embedded database that automatically syncs with a backend database in the background.

We believe that local-first is poised to become the default architecture for the majority of apps: Local-first apps feel instant to use because of the near-zero latency of working with a local in-app database, are functional even if the user’s network connection is unreliable or unavailable, and provide built-in multi-user real-time collaboration. Local-first also has substantial benefits for developers: Simplifying state management (all state is in the local database), reducing the need for backend API development, and reducing backend dependency/load.

What does PowerSync provide, and how is it used?

Here’s a quick rundown of how PowerSync works and the key features it provides:

  • The PowerSync Service connects to a standard open-source Postgres database (read-only) to perform dynamic partial replication of data to users. This is controlled through Sync Rules, which contain SQL queries that can make use of dynamic parameters.
  • When Sync Rules are created, the PowerSync Service pre-processes and caches data and operation history from the Postgres database to efficiently and scalably sync to users. The PowerSync Service initially takes a snapshot of data defined in the sync rules, and then incrementally updates its cache whenever the data changes, using standard Postgres logical replication.
  • On the client side, the developer includes a PowerSync SDK in the app project. SDKs are available for Flutter, React Native and web (JavaScript) [2]; the web SDK uses Wasm SQLite to run in browsers. The developer wires up the SDK to fetch a JWT from their application backend, which is then used for authentication between the client SDK and the PowerSync Service.
  • The client SDK keeps a local SQLite database in sync with Postgres in real-time, based on the Sync Rules configuration. The SDK allows for live queries from SQLite (which re-run whenever underlying data changes), allowing the developer to enable real-time reactivity in their app UI. It generally eliminates the need for a state management library.
  • Whenever the app performs writes to the local SQLite database, the write transactions are also placed in an upload queue [3]. The developer defines a function used by the client SDK to upload the writes to their backend application, where custom business logic can be applied before committing the changes to the Postgres database. If the user is offline or network connectivity is unreliable, the client SDK will automatically retry uploading the changes in the queue.
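The upload-queue behavior described above can be sketched in a few lines. This is an illustrative toy, not the actual SDK: the table names, `local_write`, and `drain_queue` are hypothetical, but the pattern — apply the write locally and enqueue it in the same transaction, then retry uploads until the backend accepts them — is the one described above.

```python
import sqlite3

# Minimal sketch (not the real SDK) of a local write plus upload queue.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE todos (id TEXT PRIMARY KEY, description TEXT)")
db.execute("""CREATE TABLE upload_queue
              (seq INTEGER PRIMARY KEY, op TEXT, tbl TEXT, row_id TEXT, data TEXT)""")

def local_write(row_id, description):
    # One transaction: apply the write locally AND enqueue it for upload.
    with db:
        db.execute("INSERT INTO todos VALUES (?, ?)", (row_id, description))
        db.execute("INSERT INTO upload_queue (op, tbl, row_id, data) "
                   "VALUES ('PUT', 'todos', ?, ?)", (row_id, description))

def drain_queue(upload_fn):
    # An entry is removed only once the backend accepts it; on failure we
    # stop and the remaining entries are retried on the next attempt.
    rows = db.execute("SELECT seq, op, tbl, row_id, data FROM upload_queue "
                      "ORDER BY seq").fetchall()
    for seq, op, tbl, row_id, data in rows:
        if upload_fn(op, tbl, row_id, data):
            db.execute("DELETE FROM upload_queue WHERE seq = ?", (seq,))
        else:
            break

local_write("t1", "buy milk")
drain_queue(lambda *args: False)   # offline: the write stays queued
assert db.execute("SELECT COUNT(*) FROM upload_queue").fetchone()[0] == 1
drain_queue(lambda *args: True)    # back online: the queue drains
assert db.execute("SELECT COUNT(*) FROM upload_queue").fetchone()[0] == 0
```

Note that the local table reflects the write immediately, so the UI stays responsive regardless of connectivity.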

With relatively low effort, the developer gets robust two-way syncing between Postgres and SQLite, and with it, the benefits of a local-first architecture.

Why we built PowerSync: A need for a SQL-focused sync engine

The team behind PowerSync has been building and refining our sync technology since 2009. PowerSync is a spin-off from a full-stack app platform product where the same core sync technology has been in production for more than a decade, and is in daily use around the world by tens of thousands of enterprise users at Fortune 500 companies in industries such as energy, manufacturing and mining. The technology has been proven to provide high performance and reliability in real-world production use cases with large database volumes, in remote and disconnected environments, and there have been zero incidents of data loss since the original release.

Over the years, many folks have asked us to consider spinning off the PowerSync engine into a standalone product. In 2022, we started seriously considering this possibility, especially since there weren’t really any good sync systems available for SQL databases despite databases such as Postgres and SQLite being the most widely used in the world. There has been some innovation in the NoSQL space, the prime example being MongoDB Realm / Atlas Device Sync [4]. 

We are big proponents of SQL database technology: Even though there has been significant convergence between the maturity of SQL and NoSQL databases over the last decade, SQL as a language remains more universal and well-known, and is very effective for advanced queries such as aggregations. As for embedded databases, SQLite is hard to beat when it comes to performance, flexibility & advanced functionality (such as aggregations, joins, advanced indexing and JSON support), and how battle-tested it is (the SQLite team estimates that there are a trillion SQLite databases in active use). 

The more developers we spoke to, the more we realized that a big need exists for a SQL-focused sync engine for local-first apps. Many developers don’t want to switch to a new niche database in order to get access to syncing functionality. We decided to focus on Postgres first given its popularity.

Design goals for PowerSync

With the SQL-focused need in mind, we identified some important design goals for the system:

  • It should be easy to adopt for either new or existing systems (greenfields and brownfields). Importantly, it should not be invasive to the developer’s stack, especially the database: it should not require substantial reconfiguration or additions to the Postgres database. The system should solve the hard problems of syncing data, and then stay out of the way of the developer.
  • Dynamic partial replication is a key capability, and it should be possible to adapt the sync controls at any time without complexity.
  • Don’t bypass the developer’s existing business logic, authorization and validation. Don’t write directly to their database — the database is sacrosanct.
  • Low maintenance. In particular, schema and data migrations should be as painless as possible.
  • A simple, robust and proven architecture, not over-engineered. It should be like a Toyota — a workhorse product that will be reliable for years with minimal intervention.
  • Provide strong consistency guarantees.

Architectural decisions and engineering challenges

We relied on years of learnings and battle-testing in order to create the standalone PowerSync product. Let’s look into some of the important architectural decisions and engineering challenges:

A scalable dynamic partial replication system

“Partial replication is a problem I haven't seen many people solving but it is definitely the next frontier in this space.” — tantaman on HN

Dynamic partial replication is a tricky problem to solve: Given a large backend database, we want to selectively sync different parts of it to different users’ local databases, keep them up to date incrementally, and keep everything consistent — in a performant and scalable way.

Let’s say you want to replicate completely arbitrary database queries to users, and consider the challenge of efficiently and incrementally updating that replicated dataset. Imagine a scenario where a user didn’t use our app for a week, then comes back online and syncs. We run a query on a large table and the filtered result is a few hundred thousand rows. But we only need to sync the data in that result set that has changed since the user’s last sync, so a delta needs to be computed for that specific user over that specific time range. Now multiply this by thousands of concurrent users and the efficiency challenge becomes clear.

Accordingly, we designed PowerSync to allow for partitioning data in a way where partitions can be shared between users where possible, and where we can track changes to partitions efficiently. PowerSync pre-processes data against queries in the Sync Rules, and indexes the data by an ‘operation ID’, resulting in an ordered list of operations which we can efficiently query to get a specific series of changes in a specific time range, for any user. We also compact this operation history automatically so that it doesn’t grow indefinitely.
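The core idea — an operation log indexed by operation ID, partitioned so partitions can be shared between users — can be illustrated with a toy sketch. The names (`record_change`, `delta_for`, "partition") are ours, not PowerSync internals, but they show why a returning client's delta reduces to a cheap range query rather than a per-user diff of the whole dataset:

```python
# Toy sketch: changes are appended to per-partition operation logs indexed
# by a globally incrementing operation ID, so any client's delta is just
# "operations in my partitions with op_id greater than my last checkpoint".
op_logs = {}        # partition -> list of (op_id, operation)
next_op_id = 0

def record_change(partition, operation):
    global next_op_id
    next_op_id += 1
    op_logs.setdefault(partition, []).append((next_op_id, operation))

def delta_for(partitions, last_seen_op_id):
    # Everything a client needs since its last sync, across its partitions.
    ops = [entry for p in partitions for entry in op_logs.get(p, [])
           if entry[0] > last_seen_op_id]
    return sorted(ops)

record_change("list_1", "PUT row a")
record_change("list_2", "PUT row b")
checkpoint = next_op_id            # a client subscribed to list_1 syncs here
record_change("list_1", "PUT row c")

# On its next sync, that client only downloads the one new operation.
assert delta_for(["list_1"], checkpoint) == [(3, "PUT row c")]
```

Because the log for a partition is shared by every user who syncs that partition, the per-user cost of computing a delta stays small even with thousands of concurrent users.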

PowerSync does all of this outside of the Postgres database (in the PowerSync Service) so that we don’t pollute Postgres. This architecture scales well, and is straightforward to use: Define sync rules and a client-side schema, and the data is automatically kept in sync. It’s also adaptable: If you change the sync rules, the entire new set of data is applied on the client as an atomic operation. 

Server authority instead of distributed/CRDT architecture

One of the (perhaps surprising) key decisions about the PowerSync architecture was not to use CRDTs to merge changes. 

A quick primer just in case: CRDTs are data structures where all operations are commutative, meaning they can be applied in any order on different replicas of the data, and each replica converges to the same state. This is useful for syncing data peer-to-peer, but the downside is that CRDTs come with significant overhead and more complexity. [5]

Broadly speaking, an alternative to CRDTs is to keep a global ordered list of operations/events [6]. By applying operations in the same order on each replica, they converge to the same state. Importantly, having an authoritative server in a system architecture means that a global ordering of operations can be enforced, typically without adding much complexity to the system.

While decentralized systems are an interesting area of innovation, we recognized that the vast majority of apps built around Postgres have some kind of centralized server architecture. Therefore, we decided to embrace this and design our system around the concept of server authority and a global order of operations. This has the advantage of keeping the architecture simple and robust, and offers developers the ability to control and customize how data gets written into their Postgres database. PowerSync makes use of an event sourcing pattern, whereby events/operations are sourced from Postgres via the logical replication stream.

With PowerSync, writes are sent through the developers’ own application backend, allowing them to apply their own business logic, fine-grained authorization, validations and server-side integrations. They can resolve conflicts using the techniques/algorithms of their choosing, trigger server-side workflows, and reject changes from clients if needed (and since the backend is authoritative, the new state will correctly and consistently replicate to clients).

While PowerSync does not use CRDTs for its internal protocol, it can leverage another important strength of CRDTs: very fine-grained collaboration like document/text editing. CRDTs can actually be implemented on top of PowerSync + Postgres for collaborative document editing: For example, using Yjs and storing its CRDT data structure in Postgres using blobs, and keeping it in sync between clients in real-time. See an example here.

Consistency & integrity

We designed PowerSync to provide causal+ consistency, using the concept of “checkpoints”. A checkpoint is a single point-in-time on the server with a consistent state — only fully committed transactions are part of the state. The client only updates its local state once it has all the data matching a checkpoint, and then it updates the state to exactly match that of the checkpoint. This works in line with the server authority mentioned above: The server decides how to merge changes, and the server state propagates to all clients.

Changes to the local SQLite database are applied on top of the last checkpoint received from the server, as well as being persisted into an upload queue. While changes are present in the upload queue, the client does not advance to a new checkpoint from the server. This means the client never has to resolve conflicts locally. Only once all the local changes have been acknowledged by the server, and the data for that new checkpoint is downloaded by the client, does the client advance to the next checkpoint. This ensures that operations are always ordered correctly on the client.
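The checkpoint rule above boils down to a small invariant, sketched here with hypothetical names (this is not the SDK's actual state machine): the client advances to a new checkpoint only when it has no unacknowledged local writes and the checkpoint's data has fully arrived.

```python
# Illustrative sketch of the checkpoint-advance rule described above.
class Client:
    def __init__(self):
        self.checkpoint = 0       # last consistent server state applied
        self.pending_uploads = [] # local writes not yet acknowledged

    def maybe_advance(self, new_checkpoint, downloaded):
        # Advance only when (a) every local write has been acknowledged by
        # the server and (b) all data for the new checkpoint is present.
        if not self.pending_uploads and downloaded:
            self.checkpoint = new_checkpoint  # atomic switch to consistent state
        return self.checkpoint

c = Client()
c.pending_uploads = ["local write"]
assert c.maybe_advance(5, downloaded=True) == 0  # held back: unacked local write
c.pending_uploads = []                           # server acknowledged the write
assert c.maybe_advance(5, downloaded=True) == 5  # now the client advances
```

Holding back the checkpoint while local writes are pending is what lets the client avoid local conflict resolution entirely: the server merges first, and the merged result arrives as part of the next checkpoint.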

Client-side schema architecture

We decided to create a degree of flexibility between the Postgres schema and the client-side SQLite schema. The Sync Rules configuration in PowerSync can optionally perform transformations on the Postgres data/schema, and the client-side SQLite schema can diverge from the Postgres schema if the developer prefers.

Another key decision was that the PowerSync protocol syncs “schemaless” data, and then applies the client-side schema on top of the schemaless data using SQLite views. One of the advantages of this approach is that there’s no need for client-side database migrations — these are handled automatically in almost all cases. The Sync Rules configuration can also be changed on-the-fly and the client-side database is automatically maintained accordingly. We have also put processes in place for handling Postgres schema changes while still supporting older client versions.
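The “schemaless data plus views” approach can be sketched with plain SQLite. The table and column names below are illustrative (not PowerSync's actual internal schema), and the sketch assumes an SQLite build with the JSON1 functions, which is the default in modern builds: synced rows live as JSON in a generic table, and the client-side schema is just a view over it, so changing the schema means redefining a view rather than migrating data.

```python
import sqlite3

# Sketch: synced rows stored schemaless (as JSON), with the client-side
# schema applied on top via a view. Assumes SQLite with JSON1 functions.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ps_data (id TEXT PRIMARY KEY, type TEXT, data TEXT)")
db.execute("""CREATE VIEW todos AS
              SELECT id,
                     json_extract(data, '$.description') AS description,
                     json_extract(data, '$.completed')   AS completed
              FROM ps_data WHERE type = 'todos'""")

db.execute("""INSERT INTO ps_data VALUES
              ('t1', 'todos', '{"description": "buy milk", "completed": 0}')""")

# App code queries the view as if it were an ordinary typed table.
row = db.execute("SELECT description, completed FROM todos").fetchone()
assert row == ("buy milk", 0)
```

Adding a column to the client-side schema then amounts to a `DROP VIEW` / `CREATE VIEW`, with no rewrite of the underlying synced data — which is why client-side migrations are largely unnecessary in this design.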

From beta to v1.0 stable

We launched a beta version in mid-2023, and have just reached our v1.0 stable release milestone. We are grateful for all the feedback we received from developers during the beta program. This has helped us to prioritize our roadmap. For example, a key short-term priority is to release a self-hostable version of the PowerSync Service. 

Open-source / source-availability

Making our source code open is also a cornerstone of our philosophy. Our client SDKs are open-source (Apache 2.0 license) and a source-available edition of the PowerSync Service will be released in 2024. Update May 31, 2024: our Open Edition has been released (a free, source-available, self-hosted version of PowerSync).

The future of PowerSync

A major focus is making PowerSync more stack-agnostic: We will add more SDKs (e.g. Kotlin, Swift, Electron, Capacitor, and more [2]) as well as support for more backend databases (initially MySQL and Microsoft SQL Server, though this could extend to NoSQL databases in the future).

There are a ton of interesting innovations going on in the database world and around SQL in particular. This dynamic environment is unveiling a wealth of innovative applications for PowerSync, guiding the evolution of our feature set. Stay tuned.

Footnotes

[1] We see “local-first” as the logical successor to “offline-first” app architecture. Local-first builds on the idea of offline-first with additional ideals. The industrial research lab Ink & Switch defined local-first in 2019 as a software application architecture where the cloud/server is merely a peer and not on the critical path to supporting client applications. Their vision is that local-first client applications should have “ultimate ownership and control” over their own data. Not all software described as local-first today adheres to all of the ideals espoused by Ink & Switch, but many developers aspire to get closer to those ideals over time, rather than merely providing capabilities equivalent to what would be considered “offline-first”.

[2] Update: Following the original publication of this blog post, client SDKs for Kotlin Multiplatform and Swift have also been released.

[3] With PowerSync, you can do local transactions in SQLite and then place the whole transaction into the upload queue as a batch, which your backend can then either reject or accept.

[4] Interestingly, Realm Sync was working on a Postgres connector before Realm was acquired by MongoDB and distribution of the Postgres connector was paused.

[5] Figma’s CTO Evan Wallace summarized this eloquently: “Figma isn't using true CRDTs […]. CRDTs are designed for decentralized systems where there is no single central authority to decide what the final state should be. There is some unavoidable performance and memory overhead with doing this. Since Figma is centralized (our server is the central authority), we can simplify our system by removing this extra overhead and benefit from a faster and leaner implementation.”

[6] Some systems don’t keep track of operation history at all. If you don’t store history, you sacrifice some transactional guarantees — you can’t accept or reject individual transactions.
