This blog post series is about my attempt to implement The Web After Tomorrow that Nikita described in his blog post back in 2015.
In part 1 I talked about how we used DataScript, Datomic and Clojure(Script) to build a real-time app for our SaaS product Storrito.com, and about the performance challenge that the first implementation ran into. Part 2 was about some ideas I had to solve this performance challenge.
This part is about reconsidering some common trade-offs of web application development to arrive at a solution that has the potential to save a lot of effort and accidental complexity, especially for small teams of developers.
I must admit that for a couple of months I had no good idea how to solve the described performance issue without generating tons of effort for our team. Oftentimes it is valuable to take a pause, work on other topics and let your brain do some background processing.
One morning I woke up with the question of why it is still so much simpler to build a classic server-side web application than a single-page web application (SPA). Virtual DOM frameworks like React already made a step in the right direction by providing a very simple mental model (at its core) to create user interfaces. In essence they allow you to build your UI like a classic website: on every state change you describe the complete UI as if you started from a blank screen and the browser rendered the complete HTML. You do not have to think about what is currently shown on the screen or how to modify the DOM to reach the desired new UI state.
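This mental model can be sketched in a few lines. The following is a framework-agnostic illustration (the function and field names are made up for this example): the view is a pure function from application state to the complete UI, and a framework like React would diff the result against the real DOM and apply only the minimal changes.

```javascript
// The view is a pure function: application state in, complete UI out.
// On every state change we conceptually re-render everything from scratch;
// a virtual DOM framework takes care of the minimal DOM updates.
function view(state) {
  const rows = state.customers
    .map((c) => `<tr><td>${c.name}</td><td>${c.city}</td></tr>`)
    .join("");
  return `<table><tbody>${rows}</tbody></table>`;
}

// Each call describes the full UI -- no manual DOM surgery required.
const html = view({ customers: [{ name: "Ada", city: "London" }] });
console.log(html);
```

The point is not the string concatenation, but that the developer only ever thinks about the mapping from state to UI.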
A classic server-side web application often runs one or more database queries to generate the HTML string that is sent to the browser (say, to render an HTML table of customer addresses). Maybe there is another simple mental model waiting for us if we design the API for our SPA more like how things are done within a classic server-side web application.
Instead of serving the complete HTML for a webpage, we could just return the required data from the database query results via the API response and let the client do the HTML rendering.
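A minimal sketch of this idea, with a hypothetical page-specific endpoint (the route name, field names and the fake in-memory "database" are all assumptions for illustration): the response contains exactly the query results the customers page needs, nothing more.

```javascript
// Fake in-memory database standing in for real query results.
const db = {
  customers: [
    { id: 1, name: "Ada Lovelace", lastOrderDate: "2015-06-01" },
    { id: 2, name: "Alan Turing", lastOrderDate: "2015-05-12" },
  ],
};

// One endpoint per page: the response mirrors what this page renders,
// so there is no need for generic filtering, field selection etc.
function customersPageData() {
  return {
    customers: db.customers.map((c) => ({
      name: c.name,
      lastOrderDate: c.lastOrderDate,
    })),
  };
}

const data = customersPageData();
console.log(JSON.stringify(data));
```

Because the endpoint serves exactly one page, changing the page and changing the response shape is one and the same edit.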
Nothing new, you might think, in comparison to a REST-based API, but the key difference is that this API response is meant only for this single webpage (like the customers table UI). REST APIs often have endpoints like '/customers', where you get a JSON response with customer entries from the database. But this is where the accidental complexity starts to emerge: suddenly you have to care about pagination, filtering, authorization, versioning, caching and documentation for every consumer of the endpoint.
I could continue with more points for a while, but I guess you see the pattern: you need to take care of dozens of concerns that have nothing to do with returning the data needed to render your particular page. Furthermore, your API endpoint will tend to get more complex as the number of API consumers grows, since each consumer may need a different set of fields or related entities from the database. Your API endpoint stays simpler if you only use it for one particular page of your client-side web application.
Another point that leads to myriads of challenges is the 'n + 1' problem. Say you want to render a list of customers together with the date of each customer's last order, but the latter is not part of the '/customers' API endpoint. Therefore you need to make n additional requests to the '/orders' endpoint, one per customer, to receive this date. This introduces quite a lot of latency into your single-page application, since instead of 1 request you need to make n + 1 requests.
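The effect is easy to demonstrate with a fake request counter (all endpoints and data below are made up): fetching the last-order date separately costs one extra request per customer.

```javascript
// Fake API: a map from URL to canned responses, plus a request counter.
let requests = 0;
const fakeApi = {
  "/customers": [
    { id: 1, name: "Ada" },
    { id: 2, name: "Alan" },
    { id: 3, name: "Grace" },
  ],
  "/orders?customer=1": [{ date: "2015-06-01" }],
  "/orders?customer=2": [{ date: "2015-05-12" }],
  "/orders?customer=3": [{ date: "2015-04-23" }],
};

function request(url) {
  requests += 1;
  return fakeApi[url];
}

// 1 request for the list + n requests for the last order of each customer:
const customers = request("/customers");
const withDates = customers.map((c) => ({
  ...c,
  lastOrderDate: request(`/orders?customer=${c.id}`)[0].date,
}));

console.log(requests); // n + 1 = 4 requests for 3 customers
```

In a real browser each of those requests is a network round trip, which is where the latency comes from.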
Technologies like GraphQL and Falcor were designed to avoid this 'n + 1' problem: they allow you to fetch all the required data with a single HTTP request. Furthermore, the client developer can choose which fields of the requested resources (like '/customers') should be returned.
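For the customers example above, such a GraphQL query could look roughly like this (the schema with `customers`, `lastOrder` and `date` is an assumption for illustration): the client names exactly the fields it needs, and everything travels in one request.

```javascript
// One GraphQL query replaces the n + 1 round trips: customers and the
// date of each customer's last order, fetched together, with only the
// fields this page needs. The schema is hypothetical.
const query = `
  query CustomersPage {
    customers {
      name
      lastOrder {
        date
      }
    }
  }
`;

// The whole query is sent as a single POST request, e.g.:
// fetch("/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ query }),
// });
console.log(query);
```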
So why not just use GraphQL? As the name already implies, GraphQL was designed as a query language. Similar to how backend developers can query the database (with SQL, for example), frontend developers can use a GraphQL API to do queries. It was created by an organization (Facebook) where it is common that teams are divided into backend and frontend developers. A GraphQL API serves many different frontend teams, or rather their client applications (web, mobile apps etc.). Therefore all the challenges described above need to be addressed. The challenges of a public API arise as well: even if Facebook didn't offer a public GraphQL API for 3rd-party developers, the sheer number of frontend developer teams in their organization would require similar practices.
Don't get me wrong, GraphQL is a good solution for an organization like Facebook. The question is whether it is the right technology for a team of 1-5 developers in a small company. Normally I don't like to quote so-called "laws" like Conway's, but here it describes the situation very well:
organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations.
So GraphQL is a good fit for Facebook, since the separation of frontend and backend developers is baked into their organization and its communication structures.
We are a small team of 4 developers and have no other types of employees at the moment, so we also do all the business, marketing, product design, customer support etc. But more importantly, all of us do frontend and backend development. We do not need to file a Jira ticket into the backlog of a backend team to get an additional field into a REST API endpoint (like '/customers'); we can just do this on our own. All of our source code lives in a single Git repository, so the frontend and backend source code for a new feature is part of the same pull request. Even our build process always releases a new version of our frontend and backend at the same time. A technology like GraphQL probably does not provide the optimal trade-offs for a small organization like ours, since it was designed with organizations in mind that are magnitudes larger.
The next blog post will finally get a little more technical and show how to implement such a single-page web application whose API works more like that of a classic server-side web application.