How not to shoot yourself in the foot: the optimization process in JS

A Lead Front-end Developer at Fayrix shares his experience: what optimization is in general and what it means in terms of JS.

Hi everybody. Today I want to talk to you about optimization: what it is, what it’s for, and, last but not least, how to do it without hurting yourself.
First, let’s discuss what optimization is in general and what it means in terms of JS. Well, optimization is improving something according to some quantitative characteristic. In JS I can distinguish four such characteristics:

Code size — it’s commonly believed that the fewer lines the code contains, the faster and the better it is. My opinion is completely different: with a single line of code you can create a memory leak or an infinite loop that simply crashes your browser.

Speed (performance) — the so-called computational complexity, i.e. the number of operations the engine has to perform to execute an instruction.

Build performance — it’s no secret that nowadays build tools such as Webpack or Gulp are used in almost all projects, so this characteristic indicates how well-tuned the project’s build settings are. Believe me, when your build server is barely more powerful than a coffee grinder, it does become important.

Code reusability — this feature shows how skillfully the architecture is built to reuse functions, components, modules.
Let’s address each category in more detail, clarify which characteristics it contains and what they depend on.


Code size depends on:

  • Duplication. How much of the same code is written in different places;
  • Comments. Comments in code are good, but I’ve seen projects with more comments than actual code;
  • Lack of unification. A major case of such a problem would be similar functions that have small differences depending on some property.
  • Dead code. Debugging functions or functions that are not used at all are rather common.


Speed depends on:

  • Use of the browser caching mechanism;
  • Optimizing code for the environments where it is executed;
  • Memory leaks;
  • Use of Web Workers;
  • Use of references to DOM tree elements;
  • Use of global variables;
  • Recursive invocations;
  • Simplification of math calculations.


Build performance depends on:

  • Number of external dependencies;
  • Code transformation: the number of chunks and their size, CSS transformation, file merging, graphics optimization, and a whole lot more.


Code reusability depends on:

  • Number of components;
  • Frequency of occurrence of components;
  • Flexibility and customization.

Like I said in my previous articles, in order to make changes we have to determine a starting point and figure out how bad the state of things is. What should be our first steps in such a cumbersome process? Start with the simplest: speed up the build and delete unnecessary stuff from the project. You may ask, why these two? Because they depend on each other: decreasing the code size speeds up the build and, consequently, improves your efficiency.

Build time optimization inevitably introduces us to the ‘cold build’: a build that starts from scratch, resolving all external dependencies and recompiling all of the code. Do not confuse it with a rebuild, which recompiles only your own code without touching external dependencies and other stuff.

What helps in increasing the build speed:

  • Use up-to-date bundlers. Technology forges ahead, and if you have Webpack v1, then upgrade to v4 and you’ll see a nice boost without actually doing anything;
  • Get rid of all dead dependencies. Sometimes developers forget to clean up their own mess after experimenting. A colleague of mine once asked: ‘Is it true that dependencies listed in package.json, but not imported anywhere in the code, are not included in the built bundle?’ Correct, they are not included in the bundle, but the package is still downloaded and installed on every build. The question is: what for?
  • Divide the build into several profiles depending on requirements. At least two: prod and dev. Code obfuscation is an indicative example: for prod it’s mandatory, since smaller size = faster loading, while in dev obfuscation just gets in your way, making you spend build time on useless manipulations;
  • Run independent build stages in parallel;
  • Use npm clients that can cache.

Speeding up both rebuilds and ‘cold builds’ requires deleting excessive comments and dead code. But what if the project is huge and inspecting it all by yourself is simply impossible? In such cases code analyzers may help.

I personally sometimes use SonarQube. It’s not the best one, but it’s flexible: it can be taught project-specific rules, if there are any. Sometimes it does crazy things, but as with any tool, you have to learn how to use it. And don’t forget to be skeptical about its findings. Despite all of its drawbacks, it can marvelously find dead code, comments, cut-and-paste jobs, and other knickknacks such as a lack of strict comparisons.

The key difference between SonarQube and ESLint/TSLint/Prettier/etc. is that it inspects code quality: it finds duplication and overly complicated calculations and recommends the necessary modifications. The alternatives simply check the code for mistakes, bad syntax, or formatting.
I also had some hands-on experience with Codacy, a decent service with free and paid plans. It’s useful when you need to check something quickly without deploying the heavyweight stuff. It has an intuitive interface, detailed explanations of what’s wrong with the code, and much more.

In this article I am not touching upon configuring builds, chunks, etc., since it all depends on a specific project and bundler installed. I may talk about it in my later articles.
So, the actions performed have helped speed up the build. Great, but what’s next? Since analyzers can find duplicated code, it’s useful to move it into separate modules or components, thus increasing code reuse.

There is only one section still uncovered, and that’s the speed of the code itself. The mechanism for improving it is called refactoring, a word everyone hates. Let’s see what is worth doing during refactoring, and what isn’t.
In this process, don’t let yourself be governed by the general rule of thumb that says: ‘If it works, don’t touch it’. The first rule in IT: make a backup, you’ll thank yourself later. On the front end, take some measurements before making any changes, so you can later verify that performance hasn’t regressed. Then ask yourself: how do you measure load time and find leaks?

DevTools helps with this. Not only does it show memory leaks, page load time, and animation performance, it can also run an audit for you, but don’t trust that 100%. DevTools also has a useful network throttling feature, which helps you predict page load time on a poor Internet connection.
So, we have identified our problems, now let’s solve them!

For starters, let’s decrease the load time using the browser caching mechanism. A browser can cache almost everything and later serve the cached data to the user. You also have localStorage and sessionStorage: they let you store data locally, which speeds up subsequent page loads of an SPA and decreases the number of server queries.
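A sketch of that idea: a tiny expiring cache over any `getItem`/`setItem` storage, so it works with both localStorage and sessionStorage. The helper name and the TTL policy are my assumptions, not a standard API:

```javascript
// Hypothetical TTL cache over a Web Storage-like object
// (localStorage / sessionStorage). Stores { value, expires } as JSON.
function createTtlCache(storage, ttlMs) {
  return {
    set(key, value) {
      storage.setItem(
        key,
        JSON.stringify({ value, expires: Date.now() + ttlMs }),
      );
    },
    get(key) {
      const raw = storage.getItem(key);
      if (raw === null) return undefined;
      const { value, expires } = JSON.parse(raw);
      if (Date.now() > expires) {
        storage.removeItem(key); // stale entry: drop it and report a miss
        return undefined;
      }
      return value;
    },
  };
}

// Usage in the browser (assumed endpoint and key):
//   const cache = createTtlCache(localStorage, 60000);
//   cache.set('profile', data);
//   cache.get('profile'); // later: no extra server query needed
```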

It is deemed necessary to optimize code for the environments where it is executed. However, experience shows that this consumes too much time and effort without providing a tangible gain. I suggest treating it as a recommendation only.
Naturally, it is a good idea to eliminate all memory leaks. I will leave this out of scope of this article since everyone knows how to do it, if not — then just Google it.

Another assistant of ours is the Web Worker. Web Workers are browser-managed threads that can execute JS code without blocking the event loop. They allow performing CPU-intensive, time-consuming tasks without blocking the user interface thread. In fact, when using them, calculations are performed in parallel: this is true multithreading. There are three types of Web Workers:

  1. Dedicated Workers — instances are created by the main process, and only that process may exchange data with them.
  2. Shared Workers — accessible from any context that has the same origin as the Worker (e.g., different browser tabs, iframes, and other Shared Workers).
  3. Service Workers — event-driven Workers registered against an origin and path. They can control the web page they are associated with by intercepting and modifying navigation and resource requests, and by caching data in a very fine-grained way. All this gives us great control over application behavior in certain situations (e.g., when the network is unavailable).

You can easily find information on how to use them within the boundlessness of the Internet.
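Still, here is a minimal Dedicated Worker sketch. The heavy function is kept pure so it can run anywhere; the `worker.js` filename and the wiring are assumptions, and the browser-only part is guarded:

```javascript
// Deliberately slow recursive Fibonacci to simulate a CPU-heavy task.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// worker.js would contain (browser-only, shown as a sketch):
//   self.onmessage = (e) => self.postMessage(fib(e.data));

// Main thread (guarded so this file also runs outside the browser):
if (typeof Worker !== 'undefined') {
  const worker = new Worker('worker.js'); // path is an assumption
  worker.onmessage = (e) => console.log('fib =', e.data);
  worker.postMessage(35); // the UI thread stays responsive meanwhile
}
```

The point is the division of labor: the main thread only posts a message and receives the result, while the expensive computation happens in the Worker.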
So, now that we have an understanding of the approaches, bells and whistles, let’s talk about the code itself.

First, try not to query the DOM tree repeatedly, since it’s a CPU-intensive operation. Imagine that your code constantly manipulates a certain element. Instead of keeping a reference to that element and working with it, you keep yanking the DOM tree to search for it again every time. Save the reference once and reuse it: this is the caching pattern applied in code.
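A sketch of caching the reference instead of re-querying. The `cachedSelector` helper is hypothetical, and the document object is injected as a parameter so the idea is visible even outside a browser:

```javascript
// Hypothetical helper: query the DOM once, then reuse the reference.
function cachedSelector(doc, selector) {
  let element; // the cached reference lives in this closure
  return () => {
    if (element === undefined) {
      element = doc.querySelector(selector); // expensive: done only once
    }
    return element;
  };
}

// Usage in the browser:
//   const getHeader = cachedSelector(document, '#header');
//   getHeader().textContent = '...'; // no repeated DOM search
```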
Step two: get rid of global variables. ES6 gave us a great discovery of humanity called block-scoped variables; in simpler terms, declare variables with let and const instead of var.
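To see why block scoping matters, here is the classic loop-closure illustration: closures created in a loop all share a single function-scoped `var`, but each iteration gets a fresh `let` binding:

```javascript
// With `var` there is one function-scoped i, shared by all closures.
function withVar() {
  const fns = [];
  for (var i = 0; i < 3; i += 1) fns.push(() => i);
  return fns.map((f) => f()); // every closure sees the final value of i
}

// With `let` each iteration gets its own block-scoped binding.
function withLet() {
  const fns = [];
  for (let i = 0; i < 3; i += 1) fns.push(() => i);
  return fns.map((f) => f());
}

// withVar() -> [3, 3, 3]
// withLet() -> [0, 1, 2]
```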

And last, but not least. Unfortunately, not everyone has enough experience to appreciate this subtle point. I am strongly against using recursive functions. Yes, they decrease code size, but there is a catch: exit conditions for them are often simply forgotten. As they say, if you smash a finger with a hammer, it’s not the hammer’s problem, but the finger owner’s. Or like in that cat meme: recursive functions are not bad, you just have to cook them properly.
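If you do keep a recursive function, make the exit condition explicit and, when the input is untrusted, add a depth guard. A sketch (the function, tree shape, and depth limit are my own illustration):

```javascript
// Recursive tree walk with an explicit base case and a depth guard
// against malformed input (cyclic or absurdly deep trees).
function countLeaves(node, depth = 0, maxDepth = 1000) {
  if (depth > maxDepth) {
    throw new RangeError('countLeaves: max depth exceeded (possible cycle?)');
  }
  // Exit condition: a node with no children is a leaf.
  if (!node.children || node.children.length === 0) return 1;
  return node.children.reduce(
    (sum, child) => sum + countLeaves(child, depth + 1, maxDepth),
    0,
  );
}
```

The guard costs almost nothing and turns a browser-crashing infinite recursion into a clear, catchable error.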

Despite all the power of today’s front-end applications, don’t forget about the basics. A vivid example of wastefulness and irrationality is adding new elements at the beginning of an array. Those who know, know; for those who don’t, here’s the explanation. Every array element has its own index, and when we insert a new element at the beginning of an array, the sequence of actions is as follows:

  1. Identification of array length
  2. Enumeration of each element
  3. Shifting each array element
  4. Insertion of a new element into the array
  5. Re-indexation of array elements.
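The steps above are what `Array.prototype.unshift` has to do on every call. Appending with `push` and reversing once at the end (or simply building the array in the other order) avoids the repeated shifting; a small sketch:

```javascript
// O(n^2) overall: every unshift re-shifts all existing elements.
function buildWithUnshift(n) {
  const out = [];
  for (let i = 0; i < n; i += 1) out.unshift(i);
  return out;
}

// O(n) overall: append (cheap), then reverse once at the end.
function buildWithPush(n) {
  const out = [];
  for (let i = 0; i < n; i += 1) out.push(i);
  return out.reverse();
}

// Both produce [n-1, ..., 1, 0], but the push version does far less work
// for large n.
```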

It’s time to wrap it up, and for those of you who like checklists, here’s a list of steps to understand what stage of optimization you’re at and what to do next:

  1. Identify how good/bad everything looks, obtain measurements.
  2. Get rid of everything that is unnecessary: unused dependencies, dead code, useless comments.
  3. Configure and speed up build time, configure different environment profiles.
  4. Analyze the code and decide, which parts we are going to optimize and rewrite.
  5. Perform tests to keep up performance.
  6. Launch refactoring, get rid of global variables, memory leaks, duplicated code and other garbage, and do not forget about caching.
  7. Simplify calculations and move all that we possibly can into Web Workers.

So, it’s not as complicated as it might seem at first. Your sequence of actions will certainly differ from mine, if only because you have a mind of your own. You may add some steps or drop a few, but the basics of the list will be similar. I intentionally made the list in such a way that this work can go on in parallel with your main job. Quite often the customer is not willing to pay for rework, do you agree?

And the last thing.
I believe in you, and I believe you can do it. You think I’m naïve? You may be surprised, but since you found this article and read it from beginning to end, it means (I’ve got good news for you) you’ve got a brain and you’re trying to develop it.
So good luck in such a tedious task as front-end optimization!
