Evaluating options for Collaborative Editing in a React Flow application
Introduction
I currently work on a team building a workflow builder, using React Flow.
It looks something like this, though this is just a "hello world" example rather than our actual product.
In our current implementation, when two users edit and save a workflow at the same time, the last save wins. There is also no way to tell that another user is editing the same flow. Users can therefore overwrite each other's work, causing confusion and reducing trust in the platform. It also discourages teams from working on flows concurrently, creating bottlenecks.
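To make the failure concrete, here is a small sketch of the "last save wins" behaviour (plain JavaScript with a hypothetical save API, not our actual code):

```javascript
// Sketch of "last save wins" (hypothetical API). Both users load version 1
// of the same flow, edit different parts, and save the whole document back.
const server = { flow: { nodes: ["start"], version: 1 } };

function load() {
  return structuredClone(server.flow);
}
function save(flow) {
  server.flow = structuredClone(flow); // blind overwrite, no version check
}

const alice = load();
const bob = load();

alice.nodes.push("alice-node");
bob.nodes.push("bob-node");

save(alice); // server now has ["start", "alice-node"]
save(bob);   // Bob's copy never saw Alice's node, so her work is gone

console.log(server.flow.nodes); // ["start", "bob-node"]
```

Bob never did anything wrong here; the save endpoint simply has no way to detect that his copy was stale.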
When tackling this, we considered two alternatives: a lock mechanism and multiplayer/collaborative editing.
Lock Mechanism: In this solution, a flow is locked once a user starts editing it. Any other user who opens the flow won't be able to edit it until the current editor leaves. Users can also see who currently has editing rights for the flow.
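As a rough sketch of those semantics, an in-memory lock manager could look like this (hypothetical helper names; a real implementation would persist locks server-side and expire stale ones):

```javascript
// Minimal in-memory lock manager sketch (hypothetical, not our real code).
// A flow is locked by the first editor; others can see who holds the lock.
class FlowLocks {
  constructor() {
    this.locks = new Map(); // flowId -> userId
  }
  acquire(flowId, userId) {
    const holder = this.locks.get(flowId);
    if (holder && holder !== userId) return { ok: false, holder };
    this.locks.set(flowId, userId);
    return { ok: true, holder: userId };
  }
  release(flowId, userId) {
    if (this.locks.get(flowId) === userId) this.locks.delete(flowId);
  }
  holder(flowId) {
    return this.locks.get(flowId) ?? null;
  }
}

const locks = new FlowLocks();
locks.acquire("flow-1", "alice");                // alice gets edit rights
const attempt = locks.acquire("flow-1", "bob");  // { ok: false, holder: "alice" }
locks.release("flow-1", "alice");
locks.acquire("flow-1", "bob");                  // now bob can edit
```

The `holder` field is what powers the "see who currently has editing rights" part of the experience.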
Collaborative/Multiplayer editing: In this solution, multiple users can edit the same flow in parallel, see others' changes in realtime and any conflicts are resolved automatically.
Here is a brief comparison of what these alternatives mean for our users.
| Current Experience | Locks | Collaborative editing |
|---|---|---|
| 😔 "Your changes were overwritten" — work lost | 🙂 Clear indication of who has edit rights | 😊 Multiple people edit simultaneously |
| 😔 No visibility of what others are doing | 🙂 No conflicts possible — only one editor at a time | 😊 See others' cursors and selections in real-time |
| | 😔 "Someone else is editing" — wait your turn | 😊 Changes merge automatically |
| | 😔 May need to wait for others to finish | 😊 Each person has their own undo history |
| | 🙂 "Request edit access" option to notify the current editor | 😊 Works even with poor internet |
I was responsible for exploring the alternatives we had for multiplayer editing, how each could fit into our implementation, and how they would compare to the lock mechanism. This blog post explains the alternatives I considered.
Alternatives
Yjs + WebSockets
This is the recommended solution in the React Flow pro examples. Yjs is an industry-standard CRDT library with custom WebSocket integrations for building collaborative web applications.
Pros:
- Backed by the React Flow team and Synergy Codes.
- Full control over the implementation, unlike managed services such as Liveblocks.
- No recurring subscription costs.
- Yjs has a large ecosystem of providers and integrations, and a big community.
- Can be self-hosted for data sovereignty.
- Automatic conflict resolution when multiple users edit the same node or fields.
Cons:
- Requires WebSocket infrastructure. We already had notifications set up, but those use Server-Sent Events (SSE), so implementing multiplayer this way would mean standing up new WebSocket infrastructure.
- High learning curve. Yjs introduces a peer-to-peer mental model that differs from the client-server approach used in the rest of the application. Data has to be modeled differently to get proper conflict resolution: we would need to decide which data belongs in Yjs at all, and of that data, which should go through conflict resolution and which should only be broadcast as awareness (presence) information. It is also another tool developers need to learn.
- More complex deployment, and we would need to handle scaling ourselves. Since every client has its own deployment, this also means a separate WebSocket infrastructure setup per deployment.
- More implementation effort than Liveblocks or the lock mechanism.
Adopting this meant:
- Rewriting undo/redo functionality to use the Yjs undo manager.
- Rethinking saving to decide when, where, and how often to persist data.
- Defining presence indicators for the node editor so users can see which fields another user is editing when both are in the same node.
While this option required more implementation effort than Liveblocks and more infrastructure changes than Yjs + SSE, it was my recommendation if we went with multiplayer: it has no licensing costs, unlike Liveblocks, and doesn't require building a custom Yjs transport, unlike the SSE approach.
Yjs + REST + Server Sent Events (SSE)
Since we already use SSE for notifications, combining it with Yjs was another option. Broadcasting an update would look like this:
User A makes a change → sends it via a REST API → server broadcasts via SSE → User B receives the update → User B's UI refreshes
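The relay shape can be sketched without any real network code: a hypothetical in-process "server" that accepts a POST-like call and fans updates out to SSE-like subscribers:

```javascript
// In-process sketch of the REST + SSE relay (hypothetical names, no real HTTP).
// postUpdate() stands in for the REST endpoint; subscribe() for an SSE stream.
function createRelay() {
  const subscribers = new Set();
  return {
    subscribe(onUpdate) {
      subscribers.add(onUpdate);
      return () => subscribers.delete(onUpdate); // like closing the EventSource
    },
    postUpdate(update) {
      for (const notify of subscribers) notify(update); // server broadcast
    },
  };
}

const relay = createRelay();
const received = [];
relay.subscribe((u) => received.push(u)); // User B's SSE connection

// User A makes a change and POSTs it; User B's UI refreshes on receipt.
relay.postUpdate({ type: "node-moved", nodeId: "n1", position: { x: 10, y: 20 } });

console.log(received.length); // 1
```

In a real implementation, `postUpdate` would carry an encoded Yjs update over HTTP and the subscribers would be `EventSource` connections, which is where the manual synchronization logic mentioned below comes in.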
Pros:
- Leverages existing SSE infrastructure, so no new server types are needed.
- Works with Valkey; the frontend talks to the backend locally since they are hosted on the same server.
- Simpler deployment (no changes to our pipelines).
Cons:
- Not truly realtime: HTTP round trips introduce higher latency than WebSockets.
- Manual synchronization logic required, as there is no standard Yjs binding for SSE.
- Requires designing new REST endpoints, increasing complexity.
Adopting this meant:
- Reimplementing undo/redo to use the Yjs undo manager, though I wasn't sure how the added latency would affect the user experience.
- We could keep our current save functionality.
- Defining presence data for the node editor to let users know who is editing what when they are in the same node.
Manual synchronization meant higher implementation effort than the WebSockets alternative, though the infrastructure requirements were lighter.
Let's see how Liveblocks compared to these two.
Liveblocks
Another alternative was a managed service like Liveblocks, where data flows through a third party that handles synchronization across all clients.
Other managed services that I didn't look into, but that are good alternatives to Liveblocks, include SuperViz, Electric, and PowerSync.
Pros:
- Fastest implementation time thanks to built-in features like cursors, presence, and an undo manager.
- Zero infrastructure to maintain.
- Existing React Flow integration and Zustand bindings.
- Good documentation, and support is available from the Liveblocks team.
Cons:
- Recurring subscription costs.
- Vendor dependency.
- Data privacy and procurement concerns: data flows through third-party servers, raising legal questions and potentially delaying license purchases.
- Less control over the implementation.
- Platform overhead: we would need three separate Liveblocks projects (staging, testing, production) per deployment.
Adopting this meant:
- Re-implementing undo/redo to use Liveblocks' built-in undo/redo functionality.
- Sending presence information to Liveblocks to broadcast who is editing what in the node editor.
- Keeping our current save, or implementing autosave (the backend would remain the single source of truth, even though draft data would be stored in both Liveblocks and our backend).
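For a sense of what the Zustand bindings look like, here is a sketch based on the Liveblocks Zustand middleware docs (the store fields, API key, and room id are hypothetical, and exact option names may vary across versions):

```javascript
// Sketch of wiring a Zustand store to Liveblocks (store fields and room id
// are hypothetical; middleware options follow the Liveblocks docs).
const { create } = require("zustand");
const { createClient } = require("@liveblocks/client");
const { liveblocks } = require("@liveblocks/zustand");

const client = createClient({ publicApiKey: "pk_dev_..." }); // placeholder key

const useFlowStore = create(
  liveblocks(
    (set) => ({
      nodes: [],
      edges: [],
      selectedNodeId: null, // which node this user has open in the node editor
      setNodes: (nodes) => set({ nodes }),
    }),
    {
      client,
      // Fields listed here are synced through Liveblocks Storage...
      storageMapping: { nodes: true, edges: true },
      // ...while presence fields broadcast ephemeral "who is editing what".
      presenceMapping: { selectedNodeId: true },
    }
  )
);

// On mount, join the room for the flow being edited:
// useFlowStore.getState().liveblocks.enterRoom("flow-123");
```

The appeal is that the existing store shape barely changes; the mappings are essentially configuration telling Liveblocks which slices to sync and which to treat as presence.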
Mistakes and Learnings
Even though we ended up with a clear winner (implementing locks), there are some areas I wish had gone better and a couple of lessons I learned along the way.
- Not having important metrics: Collaborative editing is a good way to let multiple users edit a workflow, but you need a good reason to build it: ideally data showing the need, or confidence that it will be as disruptive as it was for Figma. Without that data, you risk investing heavily in a low-impact feature. In our case, we lacked the data.
- Not doing POCs: Initially, I relied on theory to assess the impact on save, undo/redo, and node editor integration. While theory can answer questions like infrastructure cost reasonably well, code integration is best answered by getting your hands dirty. For example, the Liveblocks POC revealed significant details I had overlooked: on paper, our store was unusable in its current state, but once I read the docs and learned about the Zustand bindings, it was clear we could use it as it was.
- Limited time: I had three days to investigate, learn the technologies, assess fit, and write a proposal, all while being new to collaborative editing. I later found out that collaborative editing is a separate layer on top of your core functionality, with its own infrastructure and a new mental model. Making the right decision requires understanding the constraints of your current system and how the candidate technology can work around them; guesswork and theory are not the best ways to choose. Investing enough time in the assessment shouldn't be taken lightly.
- Don't fully spin up a POC with AI without guidance: During the Liveblocks POC, my first attempt was to use AI, but the solution was hacky. Given vague requirements like "integrate Liveblocks", it created a provider and a lot of boilerplate. When not properly guided, coding agents produce poor solutions, especially in existing, complex codebases. After reading the docs and learning that we could use the Zustand bindings and needed no provider, I was able to guide the agent to a much better solution. This showed me that reading isn't optional when evaluating new technologies, even in the age of AI.
Conclusion
It was clear that we should implement locks instead of multiplayer because:
- Lowest implementation effort: no changes to any existing features.
- No infrastructure changes required: we can implement locks with our existing SSE + REST setup.
- Low learning curve: we only needed to learn how SSE is used in the rest of the application, and there were established patterns to follow.
- Collaborative editing was a "nice to have" whose impact couldn't be justified.
Even though we didn't implement multiplayer, I had fun exploring CRDTs, Yjs, and Liveblocks, and learning more about the codebase, which is how I discovered the initiative to detangle the store. I now have a clear picture of what it would take to integrate any of the multiplayer options should our priorities change.