Every experienced developer knows how important it is to keep a front-end application efficient, and what a pain in the neck that can be. When it comes to loading time, the difference between a successful business and a disaster is a matter of seconds. A front-end engineer is therefore responsible for making sure the client side delivers a better user experience, higher conversions, and, ultimately, happier customers.
The 16-millisecond problem
The work that an event loop performs consists of discrete tasks (like an external script or a mouse move), microtasks, and the rendering of changes, which we’ll briefly discuss below.
Rendering never happens while another task is executing, so it's vital for a rich user experience that everything in the event loop happens in a timely manner. Rendering time is affected primarily by hardware characteristics, such as the screen refresh rate, and by software factors, such as energy-saver mode or browser settings.
While today’s browsers do their best to show something to the user as soon as possible, most modern monitors refresh at 60 Hz, that is, 60 frames per second. This leaves us only about 16 milliseconds (1000 ms / 60 frames) to finish any work whose result should appear in the next frame, so as not to disappoint the user with dropped frames.
Most JS tasks are simple enough to be executed in such a short time window. But modern web applications grow more complex daily, turning the client side into a feature-heavy extravaganza rich with calculations that far exceed our 16-millisecond budget.
Problem: processing big data arrays
Crunching piles of data can quickly exceed every possible limit and block the event loop, especially if we attempt to do everything in a single synchronous pass. In that case, the browser can’t render anything until our heavy data work is done. As you can imagine, that does not make for the optimal user experience.
Solution. Break the calculation into smaller chunks scheduled with setTimeout, so control returns to the event loop between chunks and the browser gets a chance to render.
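A minimal sketch of that chunking approach (the function name, chunk size, and callbacks are illustrative, not from any particular library):

```javascript
// Process a large array in chunks, yielding to the event loop between
// chunks so the browser can render and handle input in the gaps.
function processInChunks(items, processItem, chunkSize, onDone) {
  let index = 0;
  function nextChunk() {
    const end = Math.min(index + chunkSize, items.length);
    // Do a bounded amount of synchronous work...
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      // ...then hand control back; rendering can happen before the
      // next chunk is scheduled.
      setTimeout(nextChunk, 0);
    } else if (onDone) {
      onDone();
    }
  }
  nextChunk();
}
```

Tune the chunk size so each chunk stays comfortably under the 16-millisecond budget, and frames keep flowing while the data is processed.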
Solution. Another great option is web workers. Web workers run scripts in the background so that they do not block tasks on the main thread, giving the browser a chance to paint as soon as possible.
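A sketch of the hand-off between the page and a worker (the file name, `hugeArray`, `renderResults`, and `expensiveCalculation` are all placeholders invented for illustration):

```javascript
// main.js: offload the heavy computation to a background thread.
const worker = new Worker('heavy-work.js');
worker.postMessage({ items: hugeArray });
worker.onmessage = (event) => {
  // The main thread stayed free to render while the worker crunched data.
  renderResults(event.data);
};

// heavy-work.js: runs off the main thread.
self.onmessage = (event) => {
  const result = event.data.items.map(expensiveCalculation);
  self.postMessage(result);
};
```

Note that workers communicate only through messages; they have no access to the DOM, so the final rendering still happens on the main thread.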
Problem: overusing third-party libraries
Optimization is far from universal among third-party libraries, even popular ones. Take, for example, bcrypt hashing a string with a cost factor of 13 (that is, 2^13 internal rounds). A single hash at that cost can occupy the main thread for a second or more, blocking the execution of every other connection in the meantime.
Although not precisely a 16-millisecond problem, since this is a back-end process that does not affect rendering directly, encryption is an excellent example of how unoptimized libraries can wreak havoc on your application.
Solution. The best solution here is to choose optimized libraries. Look for libraries developed specifically for Node.js: they use C++ bindings that can run the heavy work in parallel threads, making calculations up to three times faster.
Problem: layout thrashing
This is a typical performance issue, especially for single-page applications that build and destroy views on the fly. Layout is the step in the browser’s rendering pipeline where it figures out where each element of a page should appear, what size it is, and how it relates to other elements.
Not surprisingly, the more DOM objects are on a page, the more time-consuming the process gets. The trickiest part, however, is that even the most insignificant change in a style invalidates previous calculations and triggers a whole new layout.
Solution. Be deliberate about how you order layout measuring (reading) and updating (writing) tasks. It’s good practice to group all reads together and all writes together so you don’t force layout multiple times in a row. In a big project this refactoring may take quite some time, but you’ll be surprised by how much it pays off.
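A sketch of the grouping idea (the element list and the height-doubling rule are invented for illustration):

```javascript
// Interleaving read, write, read, write forces a fresh synchronous
// layout before every read, because each write invalidates the last one.
// Instead: measure everything first, then apply all the style changes.
function doubleHeights(elements) {
  // Read phase: no writes yet, so one layout serves all measurements.
  const heights = elements.map((el) => el.offsetHeight);
  // Write phase: layout is invalidated once and recalculated once.
  elements.forEach((el, i) => {
    el.style.height = `${heights[i] * 2}px`;
  });
}
```

The interleaved version of the same loop would trigger one forced layout per element; the batched version triggers one in total.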
Problem: big bundles
Optimizing the size of your JS files is a crucial part of improving application performance. Use Webpack Bundle Analyzer to see the size of the output files and what they are composed of.
Solution. For React, the best solution is lazy loading. React.lazy lets you use a dynamic import, which loads code in chunks on demand instead of shipping the whole bundle at once.
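A sketch of what that looks like (the component name and path are placeholders; note that React.lazy requires a Suspense boundary to show a fallback while the chunk loads):

```javascript
import React, { Suspense } from 'react';

// The bundler splits HeavyChart into its own chunk, fetched only
// when the component is first rendered.
const HeavyChart = React.lazy(() => import('./HeavyChart'));

function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <HeavyChart />
    </Suspense>
  );
}
```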
Solution. If reducing the size of files is impossible, try to cache them so they won’t be reloaded each time your application needs them. When caching files, you’ll use four headers:
- ETag - an identifier that allows a web server to avoid resending a complete response if the content has not changed;
- Cache-Control - holds instructions you can use to control your cache;
- Expires - shows the lifetime of your cache; and
- Last-Modified - contains the date and the time when a file was last modified.
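As one possible setup, a static-asset location block in nginx could set these headers roughly like this (the path and lifetime are illustrative; nginx sends Last-Modified for static files by default):

```nginx
location /static/ {
    # Conditional requests via ETag / If-None-Match.
    etag on;
    # Sets both the Expires header and Cache-Control: max-age=31536000.
    expires 1y;
}
```

Long lifetimes like this are safe when file names are fingerprinted (e.g. app.3f9a1c.js), since a new build produces a new URL.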
Solution. Compress the file. While most browsers support both Gzip and Brotli compression formats, I advise you to use the latter, as it’s way more efficient.