Real-Time Data Sync Isn't Unsolved. It's Just Unfashionable.

I read a terrific piece this week arguing that real-time data sync is “still unsolved” in 2026. The diagnosis is sharp: databases don’t track who’s watching what, reverse query matching is brutally hard, and Google spent thirteen years trying to bolt reactivity onto increasingly standard data models, losing reactivity each time it got closer to real SQL.
The article nails the problem. But I kept wanting to reach through the screen and say: there’s an entire architectural pattern that’s been solving this since before Firebase existed. It just doesn’t get invited to the frontend conversation very often.
It’s called Event Sourcing with CQRS. And it’s been quietly powering some of the most demanding systems in finance, logistics, and gaming for nearly two decades.
The core insight the article almost reaches
The article frames the problem as needing “tight coupling between database and UI” — and then correctly identifies that as a trap, because the ecosystem values composability. Couple harder? Developers rebel. Stay decoupled? Reactivity breaks.
Event sourcing says: you’re framing it wrong. The answer isn’t to couple the database to the UI more tightly. It’s to put an event log between them and let each side do what it’s good at.
Write something? That’s an event appended to a stream. An immutable fact: OrderPlaced, InvoiceApproved, CommentAdded. The event store is the source of truth.
Need to display something? Build a read model — a projection that subscribes to the events it cares about and maintains exactly the shape the UI needs. One projection for the dashboard. Another for the activity feed. Another for the search index. Each updated in real time as events arrive.
The read path and write path the article describes as “two unsolved halves”? Event Sourcing with CQRS separates them by design and solves each independently.
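That split can be shown in a few lines. This is a minimal in-memory Python sketch of the pattern, not the KurrentDB API: writes append immutable facts to a log, and a projection rebuilds the exact read shape the dashboard needs from those facts.

```python
from collections import defaultdict

event_log = []  # the source of truth: an append-only list of events

def append(stream, event_type, data):
    """Write path: append an immutable fact to the log."""
    event_log.append({"stream": stream, "type": event_type, "data": data})

def order_summary_projection(events):
    """Read path: a read model shaped for a dashboard, one row per order,
    derived entirely from the events it cares about."""
    summary = defaultdict(lambda: {"items": 0, "shipped": False})
    for e in events:
        if e["type"] == "OrderPlaced":
            summary[e["stream"]]  # create the row for this order
        elif e["type"] == "ItemAdded":
            summary[e["stream"]]["items"] += 1
        elif e["type"] == "OrderShipped":
            summary[e["stream"]]["shipped"] = True
    return dict(summary)

append("order-42", "OrderPlaced", {})
append("order-42", "ItemAdded", {"sku": "A1"})
append("order-42", "OrderShipped", {})
print(order_summary_projection(event_log))
# {'order-42': {'items': 1, 'shipped': True}}
```

Nothing couples the write side to the read side except the log itself: you can add a second projection (activity feed, search index) without touching the write path.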
“Databases don’t track who’s watching what”
KurrentDB does. That’s the whole point.
When you subscribe — to a single stream, a category of streams, a filtered selection of streams, or the global $all stream — KurrentDB tracks your position and pushes new events the instant they’re appended. No polling intervals. No manual cache invalidation. No reverse query matching needed.
The article describes this as the dream:
> The server tells your app the instant something changes. No polling interval. No manual invalidation. The data arrives because the server knows you’re watching.
That’s a built-in subscription in KurrentDB. We’ve shipped subscriptions since the database’s earliest versions — years before many of the tools the article surveys even existed.
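The mechanism is easy to sketch. Here is a toy Python event store, not the real server, that does the two things a catch-up subscription does: replay from the subscriber’s last known position, then push every future append immediately.

```python
class EventStore:
    """Toy event store: tracks who is watching and pushes on append."""

    def __init__(self):
        self.log = []
        self.subscribers = []

    def subscribe(self, handler, from_position=0):
        # Catch up from the subscriber's last known position...
        for event in self.log[from_position:]:
            handler(event)
        # ...then stay live: future appends are pushed, not polled for.
        self.subscribers.append(handler)

    def append(self, event):
        self.log.append(event)
        for handler in self.subscribers:
            handler(event)  # the instant the fact exists, watchers know

store = EventStore()
received = []
store.subscribe(received.append)       # declare interest once
store.append({"type": "OrderPlaced"})  # arrives with no poll loop
```

The server-side work is proportional to the number of subscribers on the stream, not to the number of arbitrary queries anyone has ever run.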
“Figuring out which queries are affected is the hard part”
It really is — if your architecture requires it. The article describes the nightmare of a row changing in Postgres and needing to figure out which of a thousand active clients care, based on their filters, joins, and aggregation windows.
Event Sourcing sidesteps this entirely. You don’t start from a changed row and work backwards to affected queries. You start from an event and work forwards to the projections that already told you they’re interested.
A projection that builds an order summary subscribes to OrderPlaced, ItemAdded, OrderShipped. When one of those events appears, the projection updates its read model. Done. No reverse query matching. No dependency graph across arbitrary SQL. Each read model knows exactly which events it cares about because it was built that way.
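The forward direction is what makes this cheap. A hedged sketch of the dispatch, with a hypothetical `subscribes_to` decorator: each projection registers the event types it consumes up front, so a new event maps directly to its interested handlers instead of being matched backwards against every active query.

```python
interest = {}  # event type -> handlers that declared interest

def subscribes_to(*event_types):
    """Register a projection's interest in specific event types."""
    def register(handler):
        for t in event_types:
            interest.setdefault(t, []).append(handler)
        return handler
    return register

order_summary = {"items": 0, "shipped": False}

@subscribes_to("OrderPlaced", "ItemAdded", "OrderShipped")
def update_order_summary(event):
    if event["type"] == "ItemAdded":
        order_summary["items"] += 1
    elif event["type"] == "OrderShipped":
        order_summary["shipped"] = True

def on_append(event):
    # Work is proportional to interested handlers, not to all clients.
    for handler in interest.get(event["type"], []):
        handler(event)

on_append({"type": "ItemAdded"})
on_append({"type": "PaymentTaken"})  # nobody registered: nothing to do
```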
“One change is about to silently disappear”
The article flags the write path as equally unsolved — two users edit the same record, last-write-wins, data loss.
KurrentDB handles this with optimistic concurrency on streams. When you append an event, you specify the expected version of the stream. If someone else wrote first, you get an explicit conflict — not a silent overwrite. Your application decides what to do: retry, merge, alert the user. The conflict is surfaced, never swallowed.
No CRDTs. No operational transform. Just an append-only log with version checks. For the vast majority of business domains — which aren’t collaborative text editors or design tools — this is exactly the right trade-off.
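The version check fits in a dozen lines. A minimal sketch of the idea (in-memory, not the KurrentDB append API): the writer states the stream version it last saw, and a mismatch raises a conflict instead of overwriting.

```python
class ConflictError(Exception):
    """The stream moved since this writer last read it."""

class Stream:
    def __init__(self):
        self.events = []

    @property
    def version(self):
        return len(self.events) - 1  # -1 means "no events yet"

    def append(self, event, expected_version):
        if expected_version != self.version:
            # Someone else wrote first: surface the conflict, never
            # silently overwrite. The caller decides: retry, merge, ask.
            raise ConflictError(
                f"expected {expected_version}, stream is at {self.version}"
            )
        self.events.append(event)

invoice = Stream()
invoice.append({"type": "InvoiceCreated"}, expected_version=-1)

# Two editors both read the stream at version 0; only one wins cleanly.
invoice.append({"type": "LineItemEdited", "by": "alice"}, expected_version=0)
try:
    invoice.append({"type": "LineItemEdited", "by": "bob"}, expected_version=0)
except ConflictError:
    pass  # Bob's client can re-read, rebase its change, and retry
```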
The Firebase timeline illustrates the problem perfectly
The article’s succinct walkthrough of Google’s thirteen-year journey is a great illustration of why starting from the relational model and adding reactivity is a losing strategy:
- Realtime DB (2012): Reactive, but no real data model.
- Firestore (2017): Reactive with a better model, but not relational.
- Data Connect (2025): Real Postgres, but the reactivity didn’t survive.
Each step traded reactivity for a more standard data model. Event sourcing says: you don’t have to choose. The event log gives you reactivity natively — it’s an append-only stream that clients subscribe to. And you derive whatever read models you need, including fully relational ones backed by Postgres, updated in real time by projections.
You get reactive. You get a real model. And your read models can absolutely be standard SQL. All three columns checked.
“But is it actually easy?”
Fair challenge. Event sourcing has a reputation for complexity, and some of that reputation is earned. But consider what the article describes as the current state of the art: stitching together TanStack Query, WebSocket layers, cache invalidation logic, conflict resolution strategies, and bespoke subscription management — all libraries that weren’t designed for each other.
With KurrentDB, appending an event is a few lines using our SDKs (available for JavaScript/Node.js, Python, .NET, Java, Go, and Rust) over gRPC. Subscribing to a stream is another call. Building a projection is a function that receives events and updates a read model. The last mile to the UI — WebSockets, SSE, whatever your frontend prefers — pushes from the read model, which already has exactly the data shape your component needs.
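The exact calls vary by SDK, so here is the shape of that pipeline with in-memory stand-ins (the channel and read model names are illustrative, not SDK APIs): an event arrives, the projection updates the read model, and the read model pushes the ready-to-render shape down the last mile.

```python
import json

ui_messages = []  # stand-in for a WebSocket/SSE channel to the browser

read_model = {"order-42": {"status": "new"}}

def push_to_ui(key):
    # The component receives exactly the shape it renders: no client-side
    # joins, no cache invalidation, just the fresh read model row.
    ui_messages.append(json.dumps({key: read_model[key]}))

def projection(event):
    """Receives events, updates the read model, notifies watchers."""
    if event["type"] == "OrderShipped":
        read_model[event["stream"]]["status"] = "shipped"
        push_to_ui(event["stream"])

projection({"stream": "order-42", "type": "OrderShipped"})
```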
Is it Firebase’s three-line onValue callback? No. But Firebase’s three-line callback couldn’t handle relations, joins, or aggregations either. That’s why Google spent thirteen years trying to fix it.
Don’t trust us — trust Motion
Motion is an AI-first workplace suite with 30+ integrated products spanning calendar, email, project management, and task automation. When they needed a sync engine to keep customer data continuously updated across all those products and all their clients in real time, they hit exactly the problem this article describes.
Their story gets even more interesting because their requirements kept changing. Motion’s CTO Chander Ramesh put it bluntly: “Every year, we think next year we’ll finally have clarity, and the systems will stop changing like crazy, and it just never happens. The entropy just magnifies.”
With KurrentDB as the event store backbone, Motion evolved through three major architectural pivots — from per-team streams, to user-specific streams, to a per-workspace model — with zero data migrations and zero data loss. Events from all three eras coexist in the same database. Each pivot would have been a high-risk schema migration with a traditional database. With event sourcing, events are facts — schema is just interpretation.
As Chander told us: “Whatever architectural decisions we make in the future, the database can handle it.”
That’s the sync engine the article says doesn’t exist. It’s in production. Across multiple products. Serving over a million users.
The AI angle makes this urgent
The article’s most prescient observation:
> As AI agents increasingly write data in the background, the set of apps that need this is about to get much larger.
This is the line that should keep every architect up at night. When it was just humans clicking buttons, you could get away with stale reads and optimistic UI tricks. But agents write data at machine speed, across multiple streams of work, often without a human in the loop.
An event-sourced system doesn’t just tell you what the current state is — it tells you how it got there. AI agents need what we call the four dimensions of context:
- Temporal: precise event sequencing and point-in-time state reconstruction.
- Causal: which events triggered which actions, and which agent decided what.
- Relational: cross-entity correlation patterns spanning system boundaries and agent handoffs.
- Semantic: rich business context explaining exactly what happened and why, in natural language.
KurrentDB preserves all four dimensions natively. Events carry temporal ordering by default. Built-in $correlationId and $causationId metadata provide causal and relational context as first-class features. And because events are written in domain language — LoanRequested, CreditChecked, InvoiceApproved — they carry semantic context that LLMs can consume directly.
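Here is what that metadata buys you in practice. A small Python sketch using the $correlationId/$causationId convention described above (the events and IDs are made up for illustration): walking the causation links backwards answers “why did this happen?” directly from the log.

```python
events = [
    {"id": "e1", "type": "LoanRequested",
     "meta": {"$correlationId": "loan-7", "$causationId": None}},
    {"id": "e2", "type": "CreditChecked",
     "meta": {"$correlationId": "loan-7", "$causationId": "e1"}},
    {"id": "e3", "type": "LoanApproved",
     "meta": {"$correlationId": "loan-7", "$causationId": "e2"}},
]

def causal_chain(events, event_id):
    """Follow $causationId links back to the root cause."""
    by_id = {e["id"]: e for e in events}
    chain = []
    while event_id is not None:
        event = by_id[event_id]
        chain.append(event["type"])
        event_id = event["meta"]["$causationId"]
    return list(reversed(chain))

print(causal_chain(events, "e3"))
# ['LoanRequested', 'CreditChecked', 'LoanApproved']
```

Because the event types are domain language, the chain itself is already an explanation an LLM, an auditor, or a debugging engineer can read.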
Traditional databases give AI a photograph. An event-native database gives AI the whole movie.
As Motion’s Chander Ramesh observed: “AI is going to be a big forcing function for audit log-ability.” When you need to explain what your AI agent saw, decided, and why — whether for debugging, compliance, or just building trust — an append-only event log with full causal context gives you that answer by default, not as an afterthought.
So why hasn’t Event Sourcing “won”?
Honestly? Marketing and developer experience.
The frontend ecosystem talks about state management in terms of React hooks and query caches. The event sourcing community talks about aggregates and bounded contexts. They’re solving the same problem from opposite ends and rarely meet in the middle.
But the underlying pattern — immutable event log, derived read models, push-based subscriptions, explicit conflict handling — is exactly what the article describes as the unsolved frontier.
It’s not unsolved. It’s been solving it in a different room. Time to knock on the door.
