JSX over the Wire

263 points, posted 5 days ago
by danabramov

54 Comments

mattbessey

5 days ago

This was a really compelling article Dan, and I say that as a long-time advocate of "traditional" server-side rendering like Rails of old.

I think your checklist of characteristics frames things well. It reminds me of Remix's introduction to the library:

https://remix.run/docs/en/main/discussion/introduction

> Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing "server side view" without changing how it works fundamentally.

It was this argument (and a lot of playing around with challengers like htmx and JSX-like syntax for Python / Go) that has brought me round to the idea that RSCs or something similar might well be the way to go.

Bit of a shame seeing how poor some of the engagement has been here and on Reddit, though. I thought the structure and length of the article were justified and helpful. Concerning how many people's responses are quite clearly covered in TFA they didn't read...

Vinnl

5 days ago

There are a couple of "red flag" quips that if I hear them coming out of my mouth (or feel the urge to do so), I have to do a quick double take and reconsider my stance. "Everything old is new again" is one of them — usually, that means I'm missing some of the progress that has happened in the meantime.

specialist

4 days ago

Sometimes I imagine "progress" as movement along a coil.

In 2D, it seems like you're just reinventing the wheel. But in 3D, you can see that some hack or innovation allowed you to take a new stab at the problem.

Other times I imagine trilemmas, as depicted in Scott McCloud's awesome book Understanding Comics.

There's a bounded design (solution) space, with concerns anchoring each corner. Like maybe fast, simple, and correct. Or functional, imperative, and declarative. Or weight, durability, and cost. Or...

Our job is to divine a solution that lands somewhere in that space, balancing those concerns, as best appropriate for the given context.

By extension, there's no one-size fits all perfect solution. (Though there are "good enough" general purpose solutions.)

The beauty of experiencing many, many different cuts at a problem, is that one can start to intuit things. Like quickly understand how a new product fits in the space. Like quickly narrowing the likely solution space for the current project. Comparing and contrasting stuff in an open-minded semi-informed way.

Blah, blah, blah.

parthdesai

4 days ago

Not aware of remix, but how do you manage connection pooling, read vs write queries in these use cases?

esprehn

5 days ago

The big challenge with the approach not touched on in the post is version skew. During a deploy you'll have some new clients talk to old servers and some old clients talk to new servers. The ViewModel is a minimal representation of the data and you can constrain it with backwards compatibility guarantees (ex. Protos or Thrift), while the UI component JSON and their associated JS must be compatible with the running client.

Vercel fixes this for a fee: https://vercel.com/docs/skew-protection

I do wonder how many people will use the new React features and then have short outages during deploys like the FOUC of the past. Even their Pro plan has only 12 hours of protection so if you leave a tab open for 24 hours and then click a button it might hit a server where the server components and functions are incompatible.

yawaramin

5 days ago

Wouldn't this be easy to fix by injecting a version number field into every JSON payload and, if the expected version doesn't match the received one, just forcing a redirect/reload?
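A minimal sketch of that idea (the `APP_VERSION` constant and payload shape are hypothetical, not from the article): the server stamps every JSON payload with the deploy version, and the client compares it against the version baked into its own bundle.

```typescript
// Hypothetical constant injected at build time (e.g. a git SHA or deploy id).
const APP_VERSION = "deploy-42";

interface VersionedPayload<T> {
  version: string;
  data: T;
}

// Pure check: does this payload come from a different deploy than the client?
function isSkewed(payload: VersionedPayload<unknown>): boolean {
  return payload.version !== APP_VERSION;
}

// A fetch wrapper would run this on every response and force a reload
// (window.location.reload()) whenever it detects skew.
function unwrap<T>(payload: VersionedPayload<T>, reload: () => void): T | null {
  if (isSkewed(payload)) {
    reload(); // old client, new server: resync by reloading the page
    return null;
  }
  return payload.data;
}
```

The tradeoff raised elsewhere in the thread still applies: a forced reload throws away in-progress client state such as form input, so a real implementation might defer the reload to the next navigation instead.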

pfhayes

5 days ago

Forcing a reload is a regression compared to the "standard" method proposed at the start of the article. If you have a REST API that returns attributes of a model, and the client is responsible for the presentation of that model, then it is much easier to support outdated clients (perhaps outdated by weeks or months, in the case of mobile apps) without interruption, because their pre-existing logic continues to work.

yawaramin

4 days ago

Arguable that it's a 'regression'...loading pages is kinda the normal behaviour in a web browser. You can try to paper over that basic truth but you can't abstract it away forever. Also, the original comment I replied to said it would be a 'big challenge', but if you accept that the web is the web and sometimes pages can load or even reload, then it's not really a 'challenge' any more at all.

presentation

4 days ago

Vercel's skew protection feature keeps old versions alive for a while and routes requests that come from an old client to that old version, with some API endpoints to forcibly kill old versions if need be, etc. I find it works reasonably well.

yawaramin

4 days ago

Wouldn't a solution that works perfectly be better than one that works 'reasonably well'?

presentation

31 minutes ago

Your solution doesn't work perfectly; it works "perfectly" in the sense that your engineers won't see errors related to this situation, but it does not work perfectly in that your users have a crappy experience. For example, if you have some long form and, after a user inputs a ton of stuff, you just refresh their browser for them and wipe it all out, that is a crappy experience. Or you refresh their browser when their internet connection is bad and then prevent them from using your app until the whole thing reloads.

Maybe that doesn’t matter for your use case or you’re willing to do a lot more legwork to prevent issues like that from occurring but there will always be tradeoffs.

tantalor

5 days ago

Thrashing is why

yawaramin

4 days ago

Sorry what do you mean by 'thrashing' in this context?

tantalor

3 days ago

Reload causes skew causes reload

yawaramin

3 days ago

How does reload cause skew? Reload will just load the latest version of the webapp. That's the point.

tantalor

3 days ago

If you force a reload before the rollout is complete, the user will still experience skew, because you haven't finished the rollout. The website will be completely unusable for a significant fraction of users. You might as well turn off the website during the rollout. This is the main concern of skew - how to keep the website usable at all times for all users across versions.

If your rollout times are very short then skew is not a big concern for you, because it will impact very few users. If it lasts hours, then you have to solve it.

After the rollout is complete, then reload is fine. It's a bit user hostile but they will reload into a usable state.

yawaramin

6 hours ago

If a webapp rollout lasts hours, you have a much bigger problem than skew which needs to be addressed urgently.

ricardobeat

3 days ago

Stickiness at the load balancer level helps mitigate these issues.

bastawhiz

5 days ago

This article doesn't mention "event handlers" a single time. Even if you get past the client and server getting out of sync and addressing each component by a unique id that's stable between deploys (unless it's been updated), this article doesn't show how you might make any of these components interactive. You can't add an onClick on the server. The best I can figure, you pass these in with a context?

Ultimately this really just smooshed around the interface without solving the problem it sets out to solve: it moves the formatting of the mail markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).

wonnage

5 days ago

you put interactivity in client components, that seemed pretty clear to me

bastawhiz

4 days ago

And you just never have any handlers on the server components? The problem is that if a component is populated with data from the server, it's sending the data down as JSX. Which means that component can't react to interactivity of client components within it. Unless, of course, you draw the line further up and make more stuff client components.

Consider making a list of posts from some sort of feed. If each item in the list is a server component, you can't have the component representing the item be a server component if you need to handle any events in that item. So now you're limited to just making the list component itself a server component. Well what good is that?

The whole point of this is to move stuff off of the client. But it's not even clear that you're saving any bytes at all in this scenario, because if there are any props duplicated across items in the list, you've got to duplicate the data in the JSON: the shallower the returned JSX, the more raw data you send instead of JSX data. Which completely defeats the point of going through all this trouble in the first place.

SebastianKra

4 days ago

You can...

...have a client component inside the post. For example, for each post, have a server component that contains a <ClientDeleteButton postId={...} />.

...have a wrapper client component that takes a server component as a child. E.g. if you want to show a hover-card for each post:

    <ClientHoverCard preview={<Preview />}>
        <ServerPost />
    </ClientHoverCard>

https://nextjs.org/docs/app/building-your-application/render...

> props duplicated across items in the list, you've got to duplicate the data in the JSON

I'm pretty sure gzip would just compress that.

bastawhiz

4 days ago

> I'm pretty sure gzip would just compress that.

Bytes on the wire aren't nearly as important in this case. That value still has to be decompressed into a string and that string needs to be parsed into objects and that's all before you pump it into the renderer.

> have a wrapper client component that takes a server components as a child.

That doesn't work for the model defined in this post. Because now each post is a request to the server instead of one single request that returns a rendered list of posts. That's literally the point of doing this whole roundabout thing: to offload as much work as possible to the server.

> For example, for each post, have a server component, that contains a <ClientDeleteButton postId={...} />.

And now only the delete button reacts to being pressed. You can't remove the post from the page. You can't make the post semi transparent. You can't disable the other buttons on the post.

Without making a mess with contexts, state and interactivity can only happen in the client component islands.

And you know what? If you're building a page that's mostly static on a site that sees almost no code changes or deployments, this probably works great for certain cases. But it's far from an ideal practice for anything that's even mildly interactive.

Even just rendering the root of your render tree is problematic, because you probably want to show loading indicators and update the page title or whatever, and that means loading client code to load server code that runs more client code. At least with good old fashioned SSR, by the time code in the browser starts running, everything is already ready to be fully interactive.

SebastianKra

4 days ago

> That doesn't work for the model defined in this post. Because now each post is a request to the server instead of one single request that returns a rendered list of posts.

That's where you're wrong. The JSX snippet that I posted above gets turned into:

    {
        type: "src/ClientHoverCard.js#ClientHoverCard",
        props: {
            preview: /* already rendered on the server */,
            children: /* already rendered on the server */
        }
    }
If you wanted to fade the entire post when pressing the delete button without contexts, you’d create a client component like this:

    "use client"
    function DeletablePost({ children }: { children: ReactNode }) {
        const [isDeleted, setDeleted] = useState(false)
        return <div style={{ opacity: isDeleted ? 0.5 : 1 }}>
            {children}
            <DeleteButton onChange={setDeleted} />
        </div>
    }
And pass it a server component like this:

    <DeletablePost>
        <ServerPost />
    </DeletablePost>

rwieruch

5 days ago

It's not really the scope of the article, but what about adding a client directive [0] and dropping in your event handler? Just like that, you're back in a familiar CSR React world, like in the "old" days.

[0] https://react.dev/reference/rsc/use-client

hu3

5 days ago

Random JSX nugget:

JSX is a descendant of a PHP extension called XHP [1] [2]

[1] https://legacy.reactjs.org/blog/2016/09/28/our-first-50000-s...

[2] https://www.facebook.com/notes/10158791323777200/

Ambroos

5 days ago

Internally at Facebook you could also just call React components from XHP. Not very relevant to what you see on Facebook now as a user, but in older internal tools built with XHP it made it very easy to just throw in React components.

When you'd annotate a React component with ReactXHP (if I remember correctly), some codegen would generate an equivalent XHP component that takes the same props and can be used anywhere in XHP. It worked very well when I last used it!

Slightly less related but still somewhat, they have an extension to GraphQL as well that allows you to call/require React components from within GraphQL. If you look at a random GraphQL response there's a good chance you will see things like `"__dr": "GroupsCometHighlightStoryAlbumAttachmentStyle.react"`. I never looked into the mechanics of how these worked.

lioeters

4 days ago

> you could also just call React components from XHP

Fascinating, I didn't know there was such a close integration between XHP and React. I imagined the history like XHP being a predecessor or prior art, but now I see there was an overlap of both being used together, long enough to have special language constructs to "bind" the two worlds.

"ReactXHP" didn't turn up anything, but XHP-JS sounds like it.

> We have a rapidly growing library of React components, but sometimes we’ll want to render the same thing from a page that is mostly static. Rewriting the whole page in React is not always the right decision, but duplicating the rendering code in XHP and React can lead to long-term pain.

> XHP-JS makes it convenient to construct a thin XHP wrapper around a client-side React element, avoiding both of these problems. This can also be combined with XHPAsync to prefetch some data, at the cost of slightly weakening encapsulation.

https://engineering.fb.com/2015/07/09/open-source/announcing...

This is from ten years ago, and it's asking some of the same big questions as the posted article, JSX over the Wire. How to efficiently serve a mixture of static and dynamic content, where the same HTML templates and partials are rendered on server and client side. How to fetch, refresh data, and re-hydrate those templates.

With this historical context, I can understand better the purpose of React Server Components, what it's supposed to accomplish. Using the same language for both client/server-side rendering solves a large swath of the problem space. I haven't finished reading the article, so I'll go enjoy the rest of it.

zarzavat

5 days ago

I'm annoyed to learn that even the original PHP version had `class=` working.

MrJohz

5 days ago

In fairness, `className` makes a lot of sense given that the native DOM uses the `className` property rather than the `class` attribute. In that sense, it's a consistent choice, just a consistent choice with the DOM rather than with HTML.

The bigger issue is the changes to events and how they get fired, some of which make sense, others of which just break people's expectations of how Javascript should work when they move to non-React projects.

littlecranky67

5 days ago

Preact fixed that years ago and you can just use class=

MrJohz

4 days ago

It's not about "fixing" it, it's about choosing what you want to be consistent with. You can either be consistent with the DOM API (e.g. `document.getElementById().className = "hello"`) or with HTML (i.e. `class=...`). Both are valid choices — I personally prefer className because this is Javascript, so consistency with the DOM makes more sense, but JSX is designed to be an HTML-like syntax so I can see both ways.

The bigger difference that React makes from other frameworks, and from the DOM, is when it comes to events, in particular with events like `onChange` actually behaving more like the `onInput` event.

littlecranky67

3 days ago

To be fair, a fairer choice would be to allow both in your JSX like Preact does. Usually I wouldn't bother, and I get your point about consistency. But from a practical standpoint, whenever you paste some HTML code from somewhere else, the first thing you need to do is search/replace `class=` to `className=`. Probably more relevant for Tailwind/Bootstrap users than MUI.

MrJohz

2 days ago

That's true, but there are various other syntax differences that mean that pasting HTML is always going to require some fixing up. For example, JSX requires all elements to have closing tags or use the /> syntax, whereas HTML has elements like input or img where that's not correct.

That said, "class" shows up a lot more in most html than "input", so I can see the advantage of being consistent with html there.

nsonha

3 days ago

The only reason I can think of is the dot-notation assignment (not clashing with the class keyword). No one cares about consistency with the DOM API in this context. Given the syntax, they most definitely expect consistency with HTML.

cadamsdotcom

5 days ago

Really like this pattern; it's a new point on the curve of "how much rendering do you give the client". In the described architecture, JSX-as-JSON provides versatility once you've already shipped all the behavior to the client (a bunch of React components in a static JS bundle that can be cached; the React Native example really demonstrated this well).

One way to decide if this architecture is for you, is to consider where your app lands on the curve of “how much rendering code should you ship to client vs. how much unhydrated data should you ship”. On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.

Fully server-rendered HTML is among the fastest to usefulness, relying only on the browser to render HTML. By contrast, in traditional React, server rendering is only half of the story, since after the layout is sent, a great many API calls have to happen to produce a fully hydrated page.

Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.

If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.

If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.

And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.

But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!

_heimdall

3 days ago

> from fully server-rendered HTML to REST APIs and everything in between

Fully server-rendered HTML is the REST API. Anything feeding back JSON is a form of RPC call; the consumer has to be deeply familiar with what is in the response and how it can be used.

modal-soul

5 days ago

I like this article a lot more than the previous one; not because of length.

In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.

The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.

My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?

I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".

acemarke

5 days ago

Sounds like another one of Dan's talks, "React from Another Dimension", where he imagines a world in which server-side React came first and then extracted client functionality:

- https://www.youtube.com/watch?v=zMf_xeGPn6s

rwieruch

5 days ago

Great talk, thanks for reminding me about this Mark!

hcarvalhoalves

5 days ago

> REST (or, rather, how REST is broadly used) encourages you to think in terms of Resources rather than Models or ViewModels. At first, your Resources start out as mirroring Models. But a single Model rarely has enough data for a screen, so you develop ad-hoc conventions for nesting Models in a Resource. However, including all the relevant Models (e.g. all Likes of a Post) is often impossible or impractical, so you start adding ViewModel-ish fields like friendLikes to your Resources.

So, let's assume the alternative universe, where we did not mess up and get REST wrong.

There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.

What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?

yawaramin

5 days ago

And this is exactly what we get with htmx.

h14h

5 days ago

Excellent read! This is the first time I feel like I finally have a good handle on the "what" & "why" of RSCs.

It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.

The distinction between RSCs sending "JSX" over the Wire, and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.

It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:

1. Do you update the client state optimistically?
2. If you do, what do you do if the server request fails?
3. If you don't, what do you do instead? Intermediate loading state?
4. What happens if some of your friends submit likes the same time you do?
5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?
6. What if a friend submitted a like right after you did, but theirs was persisted before yours?

(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))

Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.

Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.

[0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...> [1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>

rwieruch

5 days ago

I have used RSCs only in Next.js, but to answer your questions:

1./2.: You can update it optimistically. [0]

3.: Depends on the framework's implementation. In Next.js, you'd invalidate the cache. [1][2]

4.: In the case of the like button, it would be a "form button" [3] which would have different ways [4] to show a pending state. It can be done with useFormStatus, useTransition or useActionState depending on your other needs in this component.

5.: You block the double request with useTransition [5] to disable the button.

6.: In Next, you would invalidate the cache and would see your like and the like of the other user.

[0] https://react.dev/reference/react/useOptimistic

[1] https://nextjs.org/docs/app/api-reference/functions/revalida...

[2] https://nextjs.org/docs/app/api-reference/directives/use-cac...

[3] https://www.robinwieruch.de/react-form-button/

[4] https://www.robinwieruch.de/react-form-loading-pending-actio...

[5] https://react.dev/reference/react/useTransition
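To make 1. and 5. above concrete, the optimistic update can be expressed as a pure state transition. This is only a sketch of the idea (the types and function names are made up): in React you'd hand a reducer like this to useOptimistic, and wrap the actual server call in useTransition to disable the button while it's pending.

```typescript
// Optimistic "like" state as the client sees it before the server confirms.
interface LikeState {
  count: number;
  likedByMe: boolean;
}

// Toggling twice returns to the original state, which is what makes an
// accidental like + immediate unlike (question 5) safe to apply optimistically.
function toggleLike(state: LikeState): LikeState {
  return state.likedByMe
    ? { count: state.count - 1, likedByMe: false }
    : { count: state.count + 1, likedByMe: true };
}

// On server failure (question 2), React discards the optimistic state and
// re-renders from the last confirmed server state.
```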

kassner

5 days ago

I feel the article could have ended after Step 1. It makes the point that you don’t have to follow REST and can build your own session-dependent API endpoints, and use them to fetch data from a component.

I don’t see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don’t do SSR already).

One can argue that it's useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days), but that's only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that's a huge amount of extra complexity that you could avoid by just picking one, and adding SSR is mostly for SEO purposes, from which session-dependent content is excluded anyway.

My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.

barrkel

5 days ago

The point of doing a server-side render follows from two other ideas:

* that the code which fetches data required for UI is much more efficiently executed on the server-side, especially when there's data dependencies - when a later bit of data needs to be fetched using keys loaded in a previous load

* that the code which fetches and assembles data for the UI necessarily has the same structure as the UI itself; it is already tied to the UI semantically. It's made up of front-end concerns, and it changes in lockstep with the front end. Logically, if it makes life easier / faster, responsibility may migrate between the client and server, since this back end logic is part of the UI.

The BFF thing is a place to put this on the server. It's specifically a back end service which is owned by the front end UI engineers. FWIW, it's also a pattern that you see a lot in Google. Back end services serve up RPC endpoints which are consumed by front end services (or other back end services). The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.
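A sketch of what that front-end-owned assembly step might look like (all types, names, and the friendLikes shape here are invented for illustration): the BFF fans out to back-end services, then shapes the result for exactly one screen.

```typescript
// Hypothetical back-end service responses, normally fetched in parallel via RPC.
interface Post { id: string; title: string; likeCount: number }
interface Friend { name: string }

// A ViewModel shaped for one specific card on one specific screen,
// owned by the front-end team and free to change in lockstep with the UI.
interface PostCardViewModel {
  id: string;
  title: string;
  likedBy: string; // e.g. "Alice, Bob and 3 others"
}

function toPostCardViewModel(post: Post, friendLikes: Friend[]): PostCardViewModel {
  const names = friendLikes.slice(0, 2).map((f) => f.name);
  const others = post.likeCount - names.length;
  const likedBy = others > 0
    ? `${names.join(", ")} and ${others} others`
    : names.join(", ");
  return { id: post.id, title: post.title, likedBy };
}
```

Whether this function lives in a standalone BFF service or inside a React Server Component is then an implementation detail; either way it runs server-side and is owned by the UI team.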

moqizhengz

5 days ago

BFF is in practice a pain in the ass; it is a compromise for large enterprises like Google, but many people are trying to follow what Google does without Google's problem scope and well-developed infra.

Dan's post somehow reinforces the opinion that SSR frameworks are not full-stack: they can at most do some BFF jobs, and you still need an actual backend.

barrkel

4 days ago

The alternative really is as Dan says: you end up with a bunch of REST endpoints that either serve up too much, or have configuration flags to control how much they serve, simply to satisfy front end concerns while avoiding adding round trip latency. You see this in much smaller apps than Google scale. It's a genuine tension.

Usually the endpoints get too fat, then there's a performance push to speed them up, then you start thinking about fat and thin versions. I've seen it happen repeatedly.

kassner

4 days ago

> The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.

Congratulations, you reinvented GraphQL. /s

Jokes aside, I don't care much about the technology, but what exactly are we optimizing here? Does this BFF connect directly to the (relational/source-of-truth) DB to fetch the data with a massaged-up query, or does it just use the REST API that the backend team provides? If the latter, we're just shifting complexity around; if the former, even if it connects to a read replica, you still have to coordinate schema upgrades (which is harder than coordinating a JSON endpoint).

Just let the session-dependent endpoint live in the backend. If data structure needs changes, backend team is in the best position to keep it up to date, and they can do it without waiting for the front end team to be ready to handle it on their BFF. A strong contract between both ends (ideally with an OpenAPI spec) goes a really long way.