Here’s a quick summary of the different ways you can load a website.
SSR (Server Side Rendering): The classic way. Browser makes a request to the server, the server renders the HTML/CSS/JS for that page on the fly, and sends it back to the browser.
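A minimal sketch of the SSR flow using nothing but Node's built-in http module (the getProducts() data source is a made-up stand-in for a database or API call):

```ts
import { createServer } from "node:http";

// Hypothetical data source; a real app would hit a database or API here.
async function getProducts(): Promise<string[]> {
  return ["Keyboard", "Mouse", "Monitor"];
}

createServer(async (_req, res) => {
  // The HTML is rendered on the server, on every single request.
  const items = (await getProducts()).map((p) => `<li>${p}</li>`).join("");
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(`<!doctype html><html><body><ul>${items}</ul></body></html>`);
}).listen(3000);
```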
CSR (Client Side Rendering): The vanilla React way. Browser makes a request to the server, the server sends back JS code which runs in the browser, creating the HTML/CSS and triggering the browser to make further requests for data and assets.
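Roughly, the server ships an empty shell and a script like the sketch below does the rest in the browser (the /api/products endpoint and the #app element are assumptions for illustration):

```ts
// Runs in the browser after the (mostly empty) HTML shell and the JS bundle are downloaded.
async function render(): Promise<void> {
  // Assumed JSON endpoint; the client fetches the data and builds the markup itself.
  const products: string[] = await (await fetch("/api/products")).json();
  const list = document.createElement("ul");
  for (const p of products) {
    const li = document.createElement("li");
    li.textContent = p;
    list.appendChild(li);
  }
  document.getElementById("app")?.appendChild(list);
}

render();
```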
SSG (Static Site Generation): The “gotta go fast” way. The server creates an HTML/CSS/JS bundle for each page at build time. When the browser requests a page, the server just sends this pre-built bundle back.
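A bare-bones sketch of the build step, assuming a hard-coded page list (real generators pull pages from a CMS, markdown files, etc.):

```ts
import { mkdirSync, writeFileSync } from "node:fs";

// Hypothetical page list; with SSG this loop runs once at build time, not per request.
const pages = [
  { slug: "index", title: "Home" },
  { slug: "about", title: "About" },
];

mkdirSync("dist", { recursive: true });
for (const page of pages) {
  const html = `<!doctype html><html><body><h1>${page.title}</h1></body></html>`;
  writeFileSync(`dist/${page.slug}.html`, html);
}
// Any static file server or CDN can now answer requests without rendering anything.
```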
ISG (Incremental Static Generation): The “imma cache stuff” way. The server may create some HTML/CSS/JS bundles for pages at build time. When the browser requests a page, the server sends the pre-built bundle back if one exists; if it doesn’t, the server falls back to CSR while it builds the bundle for future requests. The server may also rebuild bundles after certain time intervals to support changing content.
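A rough sketch of one flavor of this policy (it builds on the first miss instead of falling back to CSR, but shows the “serve the cached copy, rebuild in the background after an interval” part; the cache, the 60s interval, and buildPage() are all illustrative):

```ts
import { createServer } from "node:http";

// Hypothetical in-memory cache; real setups persist built pages to disk or a CDN.
const cache = new Map<string, { html: string; builtAt: number }>();
const REVALIDATE_MS = 60_000; // rebuild at most once a minute

async function buildPage(path: string): Promise<string> {
  // Stand-in for the real build step (fetch data, render components, etc.).
  return `<!doctype html><html><body><h1>${path}</h1><p>Built ${new Date().toISOString()}</p></body></html>`;
}

createServer(async (req, res) => {
  const path = req.url ?? "/";
  const hit = cache.get(path);

  if (hit) {
    // Serve the pre-built page immediately, even if it is stale...
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(hit.html);
    // ...and rebuild it in the background once it is old enough.
    if (Date.now() - hit.builtAt > REVALIDATE_MS) {
      buildPage(path).then((html) => cache.set(path, { html, builtAt: Date.now() }));
    }
    return;
  }

  // First request for this page: build it now and cache it for future requests.
  const html = await buildPage(path);
  cache.set(path, { html, builtAt: Date.now() });
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(html);
}).listen(3000);
```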
ESR (Edge Slice Re-rendering): The “cutting edge, let’s get latency down so low it’s practically in hell” way. Server does SSG and tells the CDN to cache the bundles. Then, it instructs the CDN to update the bundle in the event that page content needs to change.
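In practice the “tell the CDN to cache it, then tell it when to let go” part looks roughly like this; the cache header values are one reasonable choice, and the purge call is purely illustrative since every CDN has its own purge API:

```ts
import { createServer } from "node:http";

createServer((_req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/html",
    // s-maxage tells the shared cache (the CDN) to keep this for a day;
    // stale-while-revalidate lets it serve the old copy while it refetches.
    "Cache-Control": "public, s-maxage=86400, stale-while-revalidate=60",
  });
  res.end("<!doctype html><html><body><h1>Cached at the edge</h1></body></html>");
}).listen(3000);

// When content changes, the origin tells the CDN to drop its copy.
// The URL and token below are placeholders, not a real CDN API.
async function purge(path: string): Promise<void> {
  await fetch("https://api.example-cdn.com/purge", {
    method: "POST",
    headers: { Authorization: "Bearer <token>", "Content-Type": "application/json" },
    body: JSON.stringify({ paths: [path] }),
  });
}
```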
In order of performance, usually: (SSG = ISG = ESR) > CSR > SSR
In order of SEO: (SSR = SSG = ISG = ESR) > CSR
In order of correctness (will users be shown “stale” information?): (SSR = CSR) > ESR > ISG > SSG
This is vague and misleading. Your 256-core server can probably create an HTML table with 1,000,000 rows faster than my 8-core laptop, but DataTables (a client-side tool for rendering large tables) can paint faster than a plain HTML file.
So what did you measure?
If you measure data transferred, then SSR usually loses because the server has to render the data into the HTML and then still ship the data again for hydration, resulting in larger payloads. In a CSR situation your entire app should be cached on a CDN, meaning you’ll get the best transfer speeds.
If you measure TTFI or time to first paint, you’re still incorrect. See the example I gave above with DataTables. Rendering a plain 1,000,000-row HTML table takes longer than rendering a DataTables table of equivalent size, because the browser is not designed to display 1,000,000 DOM elements at once.
There are plenty of exceptions to this as well. If you included DataTables just to display a 10-row table, the time spent downloading and interpreting DataTables would not be offset by avoiding the browser’s slow handling of huge tables.
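To make the million-row example concrete, here is a rough sketch of the trick tools like DataTables rely on (this is not DataTables’ code, just the pattern: keep the data in memory, only ever put one page of rows in the DOM; #big-table and the page size are illustrative assumptions):

```ts
const ROWS = 1_000_000;
const PAGE_SIZE = 50;

// A million rows of data is cheap to hold in a JS array...
const data: string[][] = Array.from({ length: ROWS }, (_, i) => [`Row ${i}`, `${i * 2}`]);

// ...what is expensive is a million <tr> elements, so we never create them.
function renderPage(table: HTMLTableElement, page: number): void {
  const body = table.tBodies[0] ?? table.createTBody();
  body.replaceChildren(); // drop the previous page's rows
  for (const row of data.slice(page * PAGE_SIZE, (page + 1) * PAGE_SIZE)) {
    const tr = body.insertRow();
    for (const cell of row) tr.insertCell().textContent = cell;
  }
}

const table = document.querySelector<HTMLTableElement>("#big-table");
if (table) renderPage(table, 0); // the DOM only ever holds 50 rows out of 1,000,000
```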
But if we can run Doom on a TI-83 powered by potatoes, you can probably generate HTML on your client’s device.