How to reach sub-200ms response time with a few lines of code: Node.js and MongoDB

Navid Rezaei
3 min read · Jul 24, 2023

I was working on API calls for Kiorios (an iOS parenting app focused on empowering parents and making their lives easier). Response times were degrading as the number of questions in the database grew. Luckily, the fix was easy, and I am sharing the process here in case it is helpful for you, too.

The secret to improving API response time lies in two simple steps:

  1. Pagination
  2. Indexing

Let’s first look at the initial code without any improvements. For simplicity, all error checks and authentication codes have been removed:

router.get("/questions/age/:age", async (req, res) => {
  const age = req.params.age;
  const questions = await Question.find({ "responses.age": age });
  res.json(questions);
});

Pagination

The first step is to reduce the amount of data sent back to the client in each API call. Think of a user scrolling through the content in the app (in our case, questions): only a portion of it needs to be loaded at a time, enough to fill the view plus a little more.

With this user perspective in mind, we can implement pagination in the API call. We need to specify the page and the number of items to be returned to the client. Here is the sample code:

router.get("/questions/age/:age", async (req, res) => {
  const age = req.params.age;
  const page = parseInt(req.query.page) || 1; // Specifies the page
  const limit = parseInt(req.query.limit) || 10; // Specifies the number of items to return per page

  const questions = await Question.find({ "responses.age": age })
    .skip((page - 1) * limit)
    .limit(limit);

  res.json(questions);
});

We added two lines to read the page and limit query parameters and two more to apply them to the query. As mentioned, error handling and authentication code have been removed for brevity.

The client can simply add these two query parameters to the GET call and implement the pagination logic on its side. Here is a sample API call:

api.domain.com/questions/age/3?page=1&limit=10
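To make the skip/limit arithmetic concrete, here is a small sketch of what the query above does to the full result set. This is plain JavaScript with no database involved, and paginate is a hypothetical helper name, not part of the app's code:

```javascript
// Sketch of the skip/limit arithmetic used in the route above.
// paginate() is a hypothetical helper; no database is involved.
function paginate(items, page, limit) {
  const skip = (page - 1) * limit; // same formula as .skip((page - 1) * limit)
  return items.slice(skip, skip + limit); // same effect as .limit(limit)
}

const questions = Array.from({ length: 25 }, (_, i) => `question-${i + 1}`);

console.log(paginate(questions, 1, 10)); // first 10 questions
console.log(paginate(questions, 3, 10)); // last 5 questions (a partial final page)
```

Note that page 3 returns only five items here: with 25 questions and a limit of 10, the final page is partial, which is exactly what the client's pagination logic should expect.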

Indexing

The other improvement is indexing. In our case, we create a single-field index on the age field. MongoDB essentially maintains a separate, sorted structure of all the age values along with pointers to the documents that contain them. That structure is a B-tree: a balanced tree whose leaf nodes are all at the same depth, which keeps search times low. A search starts at the root node and, at each step, discards the parts of the data that cannot contain the value, giving O(log(n)) search time complexity in both the average and worst cases.
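The core idea — searching a sorted structure by repeatedly discarding the irrelevant part — can be sketched with a binary search. This is a simplification (a real B-tree node holds many keys, not one), but the O(log(n)) behavior is the same:

```javascript
// Simplified sketch of searching a sorted index, written as a binary
// search. A real B-tree stores many keys per node, but the principle
// is identical: discard the irrelevant part of the data at each step.
function indexLookup(sortedAges, target) {
  let lo = 0;
  let hi = sortedAges.length - 1;
  let steps = 0;
  while (lo <= hi) {
    steps++;
    const mid = Math.floor((lo + hi) / 2);
    if (sortedAges[mid] === target) return { found: true, position: mid, steps };
    if (sortedAges[mid] < target) lo = mid + 1; // target is larger: ignore the left part
    else hi = mid - 1;                          // target is smaller: ignore the right part
  }
  return { found: false, position: -1, steps };
}

const ages = Array.from({ length: 1024 }, (_, i) => i); // a sorted "index" of 1024 ages
console.log(indexLookup(ages, 3)); // found in well under 1024 steps
```

For 1,024 values, the loop runs at most about log2(1024) + 1 = 11 times, versus up to 1,024 documents scanned without an index — that gap is where the response-time win comes from.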

We do not need to implement any of this ourselves; adding one line to our model class is enough to create the index:

// Define the index on the schema
responseSchema.index({ age: 1 }, { background: true });

Please note that building the index on an existing collection may take some time, so plan for that. There is always a trade-off; in this case, we reduce time complexity at the cost of some extra storage space.

Conclusion

We went through two simple methods to improve API response time when doing a query on a large MongoDB collection: pagination and indexing.

Please follow me on social media to keep in contact:

Twitter | LinkedIn | Medium
