I'm just starting to build a web app using React.js, Next.js, Prisma, & PostgreSQL. In this web app, users can create "projects", which are represented by 10-15 rows in the database. When users make changes to a project on the client-side, what is the best way to synchronize those changes with the database?

Here's what I've considered so far:

Live, instant updates - In the future, I want users to be able to collaborate with other users on projects; however, this isn't necessary at this stage of development. If I were to pursue this, what is the best avenue to take? WebSockets, GraphQL, etc., could all contribute to the solution. I would like it to be a fairly scalable and simple mechanism.

Manual Push - User clicks a save button to push their updates to the database. If I were to go this route, which of the following strategies works best?

  1. Client receives data from the server using a GET request. The user makes changes to various components and clicks submit. The client builds an object containing all of the project data and POSTs it to the server. The server takes the received object and overwrites the database with the object's values.
    • This feels redundant, because even if the user only makes a small change and clicks submit, the client still sends the entire payload to the server. Creating and managing an object/state variable holding all of the data also seems cumbersome and bloated.
  2. Client receives data from the server using a GET request. The user makes changes, and as they do so, the client writes a change log of their modifications. When the user clicks save, the client sends this change log to the server, which applies only those modifications.
    • This feels better, but accurately tracking changes seems difficult. If there are any strategies for maintaining this change log, please let me know; a rough sketch of what I have in mind is below the list.
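
For concreteness, something like this is what I have in mind for option 2. The `/api/projects/[id]` route and the helper names are made up, not actual code from my app:

```typescript
// Minimal sketch of option 2: the client records per-field changes as the user
// edits, then sends only that change log on save. Field names and the route
// are placeholders.
type ChangeLog = Record<string, unknown>; // field name -> new value

const changes: ChangeLog = {};

// Call this from input handlers whenever a field is edited.
function recordChange(field: string, value: unknown) {
  changes[field] = value; // later edits to the same field overwrite earlier ones
}

// Called when the user clicks "Save".
async function save(projectId: string) {
  if (Object.keys(changes).length === 0) return; // nothing to send

  await fetch(`/api/projects/${projectId}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(changes),
  });

  // Clear the log once the server has accepted the changes.
  for (const key of Object.keys(changes)) delete changes[key];
}
```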

This seems like a very common task, but I cannot find the common pattern/solution for addressing it.

3 Answers

I'm new on this site, so go easy on me. I'll answer from a workflow perspective, not with specific code. Here is a solution workflow for live, instant updates:

I had a similar app and wanted an "indicator light" to appear in the browser tab's favicon to indicate that an update was waiting for the page to reload.

  • I simply kept a small text file on the server containing the current epoch time.
  • Whenever something changed on the server, I would update that file to the current epoch time.
  • A JavaScript function looped on the client, sending an AJAX request for the contents of that file.
  • If the file content had changed, JavaScript would rewrite the favicon to the version with the indicator light.

You could duplicate this, but change the last step to reload your page or send a larger AJAX request to refresh whatever you want updated. The trade-off is that the client constantly sends small AJAX pings just to see whether a 13-digit number has changed on the server.
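
Roughly, the client-side loop could look like the sketch below; the file path and interval are just examples, not code from my app:

```typescript
// Rough sketch of the polling loop described above.
let lastSeen = "";

async function checkForUpdates() {
  const res = await fetch("/last-updated.txt", { cache: "no-store" });
  const stamp = (await res.text()).trim(); // epoch timestamp written by the server

  if (lastSeen && stamp !== lastSeen) {
    // Something changed on the server: reload the page, or fetch just the data
    // you care about and update the UI / favicon here instead.
    window.location.reload();
  }
  lastSeen = stamp;
}

// Poll every few seconds.
setInterval(checkForUpdates, 5000);
```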

I'm curious whether others think this is a good approach, or whether I should change my workflow too.

Jesse

You wouldn't necessarily keep the whole data model (entire database?) at the client as that would only be sensible for very small data models (and even then it's debatable).

Instead, your application would request only the data required by the current view, via whatever means you feel comfortable with (RESTful API, WebSockets), and send data back to the server, ideally only what changed. The client already knows what changed, because that's where the user made the change, and sending only the changes means the server doesn't have to figure that out itself and can reduce the number and extent of its queries.
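
As a rough sketch (assuming a Next.js pages-style API route and a Prisma model called `Project` with a string ID; the field names are placeholders for whatever your schema actually defines), applying only the changed fields could look like this:

```typescript
// pages/api/projects/[id].ts - illustrative only; "name", "description" and
// "status" stand in for your real Project columns.
import type { NextApiRequest, NextApiResponse } from "next";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "PATCH") return res.status(405).end();

  const id = String(req.query.id);
  const { name, description, status } = req.body ?? {};

  // Build the update object from only the fields the client actually sent,
  // so untouched columns are never overwritten.
  const data = {
    ...(name !== undefined && { name }),
    ...(description !== undefined && { description }),
    ...(status !== undefined && { status }),
  };

  const updated = await prisma.project.update({ where: { id }, data });
  return res.status(200).json(updated);
}
```

Listing the patchable fields explicitly is just one way to keep a partial update from touching columns the client shouldn't control.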

The question of manual vs. automatic (live) updates is dictated by your use case. If you're unsure, start simple (REST) and change it later (similar to what you suggest). One thing to keep in mind is that live updates can be difficult when you do input validation before storage (assuming a validated data model), in particular when several values depend on each other and you'd like them to be validated atomically before they are stored. With live updates you may have to accept invalid data, because while the user is still editing, that data can be incomplete (and thus invalid).
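
To illustrate that last point with a sketch (the `startDate`/`endDate` fields are invented, purely to show dependent values being validated together before anything is written):

```typescript
// Sketch only: both values are validated together, before any write happens,
// which is awkward to do if every keystroke is saved live.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function saveSchedule(projectId: string, startDate: Date, endDate: Date) {
  if (endDate <= startDate) {
    throw new Error("endDate must be after startDate"); // reject before any write
  }

  await prisma.project.update({
    where: { id: projectId },
    data: { startDate, endDate }, // both fields written together, never one alone
  });
}
```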

orange

Stripping away the details, the choice is the Cartesian product of push vs. pull and streaming vs. batching.

Either the clients keep the server busy by constantly pinging it (pull-based polling), or the server keeps a list of active clients and notifies them itself (push-based).

Sending consistent(1) units of work to the server (batching), or flowing consistent(2) packets of data to the server (streaming).

Considering the servers' available bandwidth, computing power, and runtime storage, choose the programming technique: asynchronous or synchronous. Since hardware is cost-effective nowadays, the choice is really between asynchronous and synchronous. With the frameworks listed in the question, the natural option is an asynchronous solution, that is, streaming data to the server, which stores it and dispatches it to the other clients working on the same resources. The streamed data has to be atomic and consistent.
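
A rough sketch of that streaming option, assuming Node's "ws" package (not part of the question's stack, purely illustrative):

```typescript
// Minimal store-and-dispatch loop: each message from a client is persisted,
// then fanned out to every other connected client.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const update = raw.toString();

    // 1. Persist the update here (e.g. via Prisma) before fanning it out,
    //    so every client eventually sees the same committed state.

    // 2. Dispatch the update to every other connected client.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(update);
      }
    }
  });
});
```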


(1), (2) - consistent in the sense of ACID properties