6 September 2023
Protecting Next.js API: Evaluating Three Rate Limiting Approaches

Nextjs

Rate-limiting

API

Excessive or malicious API requests can overload the system and lead to unexpected costs. Here's where rate limiting comes into play.
Wojtek Wrotek
React/Next.js Frontend Developer

Overview

One of the greatest features of Next.js is the ability to effortlessly define API routes. Built on top of Node.js, these routes simplify server-side operations, making it easier for developers to fetch data, interact with databases, or perform any other server-side logic without a separate backend setup. It’s worth mentioning, however, that they aren’t full Node.js environments - there are some limitations on which modules can be used.


Given that these API routes are publicly accessible and Next.js doesn’t provide built-in authorization features, they are potentially vulnerable to excessive use, which can lead to unexpected costs. Excessive or malicious requests can also overload the system and cause database crashes. Here's where rate limiting comes into play.

What is rate limiting?

Rate limiting is a method of controlling the number of requests a user can make to an API endpoint within a defined time frame. When the limit is surpassed, the server typically returns an HTTP 429 status code, indicating "Too Many Requests."
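For illustration, here's a minimal sketch of how a client might detect that status code (the callApi helper is hypothetical):

// Hypothetical client: call an API route and react to a 429 response.
async function callApi(url: string): Promise<unknown> {
  const response = await fetch(url);
  if (response.status === 429) {
    // The server has throttled us; back off instead of retrying immediately.
    throw new Error("Rate limit exceeded - try again later");
  }
  return response.json();
}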

Importance and benefits:


  • Security: blocks excessive or malicious request floods before they overwhelm the system
  • Cost control: prevents unexpected bills caused by runaway usage
  • Resource management: protects the server and database from overload and crashes

What are the ways to implement it in Next.js?

There are many solutions to this problem. In this article, we’ll take a deep dive into the most popular ones, discussing their scalability potential and drawbacks. Although we'll primarily focus on identifying users via API key, the provided examples can easily be modified to limit requests based on other criteria, such as IP addresses or user IDs.

Method 1: Using Upstash Redis


Upstash is a serverless and scalable database platform that offers Redis as a service, allowing developers to deploy and scale Redis applications without managing the infrastructure.

Why choose Upstash?

  • Efficiency: Upstash offers a library, @upstash/ratelimit, designed specifically for rate limiting.
  • Scalability: the free tier grants up to 10,000 requests per day. For more extensive needs, a nominal fee of $0.20 is charged for every additional 100,000 requests.
  • Reliability: using Redis as the backbone ensures high performance and reliability for the rate-limiting system.


Code

import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
import { NextApiRequest, NextApiResponse } from "next";

// Connect to Upstash Redis using the credentials from your Upstash console.
const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

// Allow at most 2 requests per sliding window of 3 seconds.
const ratelimit = new Ratelimit({
  redis: redis,
  limiter: Ratelimit.slidingWindow(2, "3 s"),
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  // A constant identifier puts all requests under one shared limit.
  const identifier = "api";
  const result = await ratelimit.limit(identifier);

  // Expose the rate-limit state to the client via standard headers.
  res.setHeader("X-RateLimit-Limit", result.limit);
  res.setHeader("X-RateLimit-Remaining", result.remaining);

  if (!result.success) {
    return res.status(429).json({ error: "Rate limit exceeded" });
  }
  res.status(200).json({ name: "John Doe", rateLimitState: result });
}

Here’s a short explanation of what’s going on in this code:


1. Identifier: The constant identifier "api" groups all requests under one shared rate limit (see the per-caller sketch after this list).

2. Rate Limit Check: The ratelimit.limit(identifier) method determines the current rate-limiting state.

3. Response Headers for Rate Limiting:

  • X-RateLimit-Limit: Allowed requests in the set time frame.
  • X-RateLimit-Remaining: Remaining requests in the current window.

4. Rate Limit Exceeded Handling: If result.success is false, it indicates the rate limit has been surpassed. A response is sent back to inform the requester.
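Because of the constant identifier, all callers currently share a single bucket. Here's a minimal sketch of deriving a per-caller identifier instead (the x-api-key header name and the getIdentifier helper are assumptions for illustration):

import { NextApiRequest } from "next";

// Sketch: derive the identifier from the request instead of using a constant.
// The "x-api-key" header name is an assumed convention, not part of the library.
function getIdentifier(req: NextApiRequest): string {
  const apiKey = req.headers["x-api-key"];
  if (typeof apiKey === "string") return apiKey;
  // Fallback: the caller's IP (behind a proxy, prefer the x-forwarded-for header).
  return req.socket.remoteAddress ?? "anonymous";
}

// Inside the handler:
// const result = await ratelimit.limit(getIdentifier(req));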

Method 2: Using the express-rate-limit library (no database needed)


https://github.com/express-rate-limit/express-rate-limit


While it’s built primarily for Express.js, this library can (with some modifications) be integrated into Next.js. The Next.js documentation recommends wrapping the middleware function in a Promise to ensure the code waits until all checks are done. If the promise resolves, it simply means the rate limit has not been exceeded.


Additionally, the library provides flexibility in terms of where the hit count is stored. For instance, you can easily integrate it with Redis - see the sketch after the code below.

Code

import { NextApiRequest, NextApiResponse } from "next";
import rateLimit from "express-rate-limit";

// A constant key puts all requests under one shared limit;
// swap this for an API key or IP address for per-caller limits.
const getKey = () => "key";

// Adapts an Express-style middleware so it can be awaited in a Next.js API route.
const runMiddleware = (
  req: NextApiRequest,
  res: NextApiResponse,
  fn: Function
) => {
  return new Promise((resolve, reject) => {
    fn(req, res, (result: unknown) =>
      result instanceof Error ? reject(result) : resolve(result)
    );
  });
};

// Allow at most 5 requests per 1-minute window.
export const getRateLimitMiddleware = () =>
  rateLimit({ keyGenerator: getKey, windowMs: 60 * 1000, max: 5 });

const limiter = getRateLimitMiddleware();

const handler = async (req: NextApiRequest, res: NextApiResponse) => {
  try {
    await runMiddleware(req, res, limiter);
    res.status(200).json({ message: "Success!" });
  } catch {
    res.status(429).json({ error: "Rate limit exceeded" });
  }
};

export default handler;
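As mentioned above, the hit count doesn't have to live in memory. Here's a minimal sketch of swapping in Redis as the store, assuming the rate-limit-redis and redis packages (REDIS_URL is an assumed environment variable):

import rateLimit from "express-rate-limit";
import RedisStore from "rate-limit-redis";
import { createClient } from "redis";

// Connect a node-redis client; REDIS_URL is an assumed environment variable.
const client = createClient({ url: process.env.REDIS_URL });
client.connect().catch(console.error);

// Same limiter as before, but hits are now counted in Redis,
// so the count survives serverless instance recycling.
const limiter = rateLimit({
  keyGenerator: () => "key",
  windowMs: 60 * 1000,
  max: 5,
  store: new RedisStore({
    sendCommand: (...args: string[]) => client.sendCommand(args),
  }),
});

With a shared store like this, the count no longer depends on the memory of a single serverless instance, which addresses the persistence caveat discussed later in this article.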

Method 3: Using LRU Cache (no database needed)


https://github.com/vercel/next.js/tree/canary/examples/api-routes-rate-limit


A Least Recently Used (LRU) cache is an in-memory cache that evicts the entries that haven't been accessed for the longest time. It's often used in applications to keep frequently accessed data in memory for faster retrieval. Given its eviction behavior, lru-cache can also be cleverly used for rate-limiting purposes.


This method is slightly more unusual and interesting. Helpfully, Vercel provides an example implementation of this feature.


Since lru-cache removes the least recently used entries, it ensures that only active users are considered in the rate-limiting process. As entries get old (i.e., the user hasn't made a request for some time), they are automatically deleted, freeing up space for new users.


Let’s quickly dive into how the rateLimit function works in Vercel’s example:

Code

import type { NextApiResponse } from 'next'
import LRU from 'lru-cache'

type Options = {
  uniqueTokenPerInterval?: number
  interval?: number
}

export default function rateLimit(options?: Options) {
  // Each entry expires (ttl) after one interval, and at most
  // `max` distinct tokens are tracked at any one time.
  const tokenCache = new LRU({
    max: options?.uniqueTokenPerInterval || 500,
    ttl: options?.interval || 60000,
  })

  return {
    check: (res: NextApiResponse, limit: number, token: string) =>
      new Promise<void>((resolve, reject) => {
        // The count lives in a single-element array (a reference type) -
        // see the explanation below for why this matters.
        const tokenCount = (tokenCache.get(token) as number[]) || [0]
        if (tokenCount[0] === 0) {
          tokenCache.set(token, tokenCount)
        }
        tokenCount[0] += 1

        const currentUsage = tokenCount[0]
        const isRateLimited = currentUsage >= limit
        res.setHeader('X-RateLimit-Limit', limit)
        res.setHeader(
          'X-RateLimit-Remaining',
          isRateLimited ? 0 : limit - currentUsage
        )

        return isRateLimited ? reject() : resolve()
      }),
  }
}

The check method does the following:


  1. It retrieves the current request count for the given token from the tokenCache. If the token is not present, it initializes the count at 0.
  2. It then increments the count for that token.
  3. It checks whether the number of requests (currentUsage) for the token has reached the limit. If it has, the request is considered rate-limited.
  4. The response (res) is updated with headers indicating the rate limit (X-RateLimit-Limit) and the remaining number of allowed requests (X-RateLimit-Remaining).
  5. Finally, depending on whether the request is rate-limited, the promise is either rejected or resolved.
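For completeness, here's roughly how this limiter is wired into an API route, following the usage in Vercel's example (the import path and the limit of 10 requests per minute are taken from that example and may differ in your project):

import type { NextApiRequest, NextApiResponse } from 'next'
import rateLimit from '../../utils/rate-limit'

const limiter = rateLimit({
  interval: 60 * 1000, // one-minute window
  uniqueTokenPerInterval: 500, // max distinct tokens tracked at once
})

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    // At most 10 requests per minute for the shared 'CACHE_TOKEN' identifier.
    await limiter.check(res, 10, 'CACHE_TOKEN')
    res.status(200).json({ message: 'Success!' })
  } catch {
    res.status(429).json({ error: 'Rate limit exceeded' })
  }
}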


Simple, right? But there's one interesting detail in the rateLimit implementation. In JavaScript, objects and arrays are reference types. This means that when you retrieve an array from the cache and modify it, you're modifying the same instance of the array that's stored in the cache.


Contrast this with a primitive number; if you retrieve a number from the cache, increment it, and then forget to set it back in the cache, the value in the cache won't reflect the increment.


By using an array (or an object), you ensure that the count is always up-to-date in the cache without needing to manually re-insert it after every modification.
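A tiny illustration of the difference, using a plain Map to stand in for the cache:

const cache = new Map<string, unknown>()

// Primitive value: the increment is lost unless we set it back explicitly.
cache.set('hits', 0)
let hits = cache.get('hits') as number
hits += 1
console.log(cache.get('hits')) // still 0

// Array (reference type): mutating it updates the cached instance in place.
cache.set('hitsArr', [0])
const hitsArr = cache.get('hitsArr') as number[]
hitsArr[0] += 1
console.log(cache.get('hitsArr')) // [ 1 ]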

Bonus: How exactly is it possible to store data in serverless functions?

By default, serverless functions aren't designed for data persistence. This might raise the question of how the mentioned methods operate.


On platforms like Vercel, each serverless function has its unique runtime environment. While this environment can persist between calls, it might be discarded if the API route remains unused for extended periods. Here's the key: anything inside the handler function is executed every time the function is called, but anything outside of it runs only once during the initial, or "cold," start. This behavior is what enables the above methods to function.
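A minimal sketch to illustrate (the invocationCount variable is hypothetical): the counter below lives in module scope, so it persists across warm invocations of the same instance but resets on every cold start.

import { NextApiRequest, NextApiResponse } from "next";

// Module scope: runs once per cold start and persists between warm invocations.
let invocationCount = 0;

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Handler scope: runs on every request.
  invocationCount += 1;
  res.status(200).json({ invocationCount }); // resets when the instance is recycled
}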


However, since this all relies on the short-term memory of serverless functions, data such as the request count can be unexpectedly reset.

Conclusion

In summary, a solution like Upstash Redis stands out due to its scalability and reliability. For database-free setups, consider solutions like express-rate-limit or an LRU cache. They are faster to implement but less reliable in terms of data persistence, since they rely on the short-term memory of serverless functions.


Overall, each rate-limiting strategy has its own set of benefits and drawbacks, and the best solution will depend on your application's specific needs.
