Most Popular Mistakes in Node.js When Writing REST APIs

We’ve reviewed a lot of code over the last decade, and we mean a lot! Whether we’re double-checking code for a client who’s asked us to develop a mobile app for an existing portfolio or going through the test project of an applicant eager to join the STRV team, we’ve seen our fair share of programming mistakes, especially when it comes to backend development.


Sometimes the structures, coding practices and approach to various problems are awe-inspiring, and other times we have slapped ourselves on our foreheads over the horrifying things we’ve uncovered in the codebase. We’d like to take a few moments to detail a couple of these horrifying moments and outline ways to avoid them.

Don’t block your JavaScript

tl;dr — Never ever use *Sync() methods outside of your app’s startup. They will block your process, and no other tasks can be done.

You have probably been told that Node.js is an “asynchronous, event-driven JavaScript runtime”. However, there is a chance you have not yet found out (the hard way) that it is not the JavaScript that is asynchronous — only the I/O operations Node.js performs as a result of your JavaScript calls. All JavaScript code is synchronous. No two JavaScript statements ever run at the same time. Never, ever. Not even with setTimeout() or setInterval(). You can test it yourself by trying this code in Node.js or any browser console:
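The snippet itself is missing from this version of the post; a sketch that reproduces the behavior described below might look like this (the 2.5-second delay before the busy-wait is an assumption):

```javascript
// Log a heartbeat every second via the event loop's timers phase.
const heartbeat = setInterval(() => console.log('ping...'), 1000);

// After ~2.5 seconds, enter an infinite busy-wait. All JavaScript runs
// on a single thread, so from that point on the event loop is stuck
// and no further 'ping...' callbacks can ever fire.
const blocker = setTimeout(() => {
  while (true) {} // blocks forever; kill the process manually
}, 2500);
```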

Note: If you run this, be prepared to kill the process manually.

At first glance, it would seem that the script writes ping... to the console every second while the while loop keeps going forever. However, only the first two ping logs will appear; the while loop then never allows Node’s event loop to finish the current “tick”, preventing any other scheduled tasks (the ping callback) from executing. The whole concept of the event loop and Node’s async behavior is well explained in this article about the Node.js event loop, a highly recommended and enlightening read for Node.js developers of all levels of experience.

So why is this bad?

Not all functions that perform I/O are automatically asynchronous. In fact, Node.js provides synchronous implementations of most (if not all) of its async APIs. For example, consider fs.readFile() and fs.readFileSync(). Both do the same thing, but the latter will not allow any other JavaScript code to execute until the I/O operation completes.

While sync functions can be quite handy during an app’s startup, using them in hot code paths, such as route handlers, is absolutely disastrous. Imagine an API server under load, and you accidentally (or intentionally) include a sync call in one of the route handlers (e.g. computing password hashes with bcrypt). These operations are designed to be slow and usually take between 50 ms and 300 ms. But since the hash is computed synchronously, your server cannot do anything else: it cannot continue handling existing requests, nor accept new ones. It is a “stop-the-world” task. This in turn leads to a spectacular increase in latency across the board and reduced request throughput. That is definitely not something you want from your API server!

Dear machine, your request has been successful.

tl;dr — Keep your responses (including errors) structured and machine-readable.

When developing RESTful APIs, it is important to realize who your consumers are: programs. As such, always return responses that are suitable for machine processing.

Consider the following:
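The example itself is missing here; a plausible stand-in for the anti-pattern, a human-oriented reply to a successful POST /users, would be:

```
HTTP/1.1 200 OK
Content-Type: text/plain

User was created successfully!
```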

From the viewpoint of your API consumer, this response would immediately trigger the following questions:

  • What’s the new user’s ID? How can I retrieve this user now?
  • Are there any fields which have been auto-added by the server? What are their values?
  • Was all my data valid? Did the server change anything to make it comply with its own restrictions/validations?

As you can see, this is problematic. A better approach would be to respond with the data that was just written to the database, so the API consumer can save it, present it to the user or generally do something meaningful with the data:
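For example (the field names here are hypothetical), POST /users could answer with the complete record as stored:

```
HTTP/1.1 201 Created
Content-Type: application/json

{
  "id": 42,
  "username": "john.doe",
  "email": "john@example.com",
  "createdAt": "2017-06-02T12:34:56Z"
}
```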

This is even more important in unexpected situations, i.e. errors. It is best for error responses to also have a stable structure, with a machine-readable reason for the error and, optionally, a programmer-readable explanation:
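A sketch of such an error response (the exact reason codes are up to you):

```
HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "error": "validation_failed",
  "message": "The 'email' field must be a valid email address."
}
```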

Also, you may be tempted to include a success: false field in the body. This is unnecessary: success or failure should be conveyed by the HTTP status code of the response.

These are some of the best practices we have been following at STRV for a while now, and they seem to have paid off so far:

  • Remember your consumers are machines
  • Keep your REST endpoints consistent, especially with regard to data structures
  • Do not return transactional data in the body (like success: true or something similar) — it is redundant
  • Use HTTP status codes to describe the resulting state of the record/transaction
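As a minimal illustration of the last two points (the in-memory user store and the reason code are hypothetical), a lookup can express success and failure purely through the status code:

```javascript
// A tiny in-memory "user store" standing in for a real database.
const users = new Map([[1, { id: 1, username: 'john.doe' }]]);

// Map a lookup result to an HTTP status and body. Note that there is
// no `success` flag anywhere: the status code carries that information.
function getUserResponse(id) {
  const user = users.get(id);
  if (!user) {
    return { status: 404, body: { error: 'user_not_found' } };
  }
  return { status: 200, body: user };
}
```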

md5 ought to suffice…

tl;dr — Always use bcrypt with at least 10 rounds for hashing passwords.

A golden rule in security is to always assume that your server will be compromised sooner or later. You, as a developer, need to think of all the ways an attacker could gain access to your server, while the attacker only needs to find one that works. This means you cannot put effort solely into protecting your servers and your app; you should also make sure that some data remains protected even if your whole database gets stolen.

Of course, it is not feasible to apply encryption to all of your data; some of it may not be sensitive enough to warrant such overhead. However, there is one piece of information that always requires special care: your users’ credentials.

When your customers put their usernames and passwords into your app or service, they are basically putting their trust into your hands that you will keep their credentials safe and secure. Such trust is a very fragile thing — keeping it is quite easy as long as everything works smoothly, but a simple misstep may end up costing you all of it. A lot of people reuse their credentials across different services, even though it is discouraged by many security researchers. Thus, if an attacker steals your users’ credentials, it is quite possible that they will gain access to other services as well.

You can avoid such a disaster relatively easily by employing a proper password hashing algorithm specifically designed for the job, like bcrypt, scrypt or argon2. With proper password hashing in place, an attacker will have a very hard time brute-forcing the hashes, because doing so is deliberately time- and resource-intensive. These algorithms also have a built-in mechanism for including salts in the computation, which defeats precomputed rainbow-table attacks.

In conclusion

There is so much more to password security than could possibly fit into this post. If you are interested in the topic, we highly recommend reading a Security SE answer about password hashing. It is thorough yet concise and should give you a good starting point.

Hopefully this post has helped you learn more about Node.js and some of the recommended practices when working with backend servers and RESTful APIs.

If you have any questions, corrections or other suggestions, please let us know and we’ll address them as soon as possible.
Thank you for reading.
