Memory consumption of Next.js in Docker container

Why are they running fine now? Note: all of them use standalone output.

They are running fine now because standalone output keeps each process's footprint small, your apps likely see low traffic, and the Linux kernel shares and lazily allocates memory across them.

Here is why you haven't hit a "memory wall" yet:

  • Minimal Runtime Overhead: The standalone mode traces your dependencies and bundles only the files needed to run the server, instead of shipping the full node_modules. This can keep an idle Next.js process as low as 40–50MB. Across 14 apps, that’s roughly 630MB, which fits within your ~900MB of usable RAM.
  • Shared Node.js Binary: Since all 14 apps use the same Node.js runtime, the Linux kernel can share the read-only parts of the Node binary in physical memory across all processes, because every process memory-maps (mmap) the same executable file.
  • Linux Opportunistic Allocation: Linux doesn't actually hand a process all the memory it asks for up front. Through lazy allocation (demand paging), it only backs a virtual page with physical RAM when the app actually writes to it.
  • Low Traffic / No "Cold Starts": If these apps aren't processing heavy requests simultaneously, they aren't ballooning their heap size. They are staying in their "resting" state.
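You can watch the sharing and lazy allocation at work by comparing each process's RSS (which counts shared pages in full for every process) with its PSS (which splits shared pages among the processes mapping them). A sketch, assuming a Linux host with /proc/&lt;pid&gt;/smaps_rollup and that your apps show up as node processes:

```shell
# Print kB of resident (RSS) and proportional (PSS) memory for a PID.
rss_of() { awk '/^Rss:/ {print $2}' "/proc/$1/smaps_rollup"; }
pss_of() { awk '/^Pss:/ {print $2}' "/proc/$1/smaps_rollup"; }

# Summing PSS over all node processes gives their true combined footprint;
# summing RSS overstates it, because shared Node binary pages are counted
# once per process. A large RSS-minus-PSS gap means the kernel is sharing.
for pid in $(pgrep -x node || true); do
  echo "pid=$pid rss=$(rss_of "$pid")kB pss=$(pss_of "$pid")kB"
done
```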

What changes if you move to Docker?

If you package them into Docker containers, you might actually see a slight increase in total overhead because:

  1. Isolation Overhead: Each container has its own tiny file system and networking stack.
  2. Duplicate Binaries: If your Docker images use different base versions (e.g., one uses node:18 and another node:20), the kernel can't share their memory as easily as it does when all 14 apps run from the same local Node.js installation.
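If you do containerize, you can preserve most of that sharing by building every app on the exact same base image (ideally pinned to one digest): overlayfs then backs each container's Node binary with the same underlying file, so the kernel can still share its read-only pages. A minimal Dockerfile sketch for a standalone build (paths follow the standard Next.js standalone layout; the port is an assumption):

```dockerfile
# Use one identical base image across all 14 apps so the kernel can share pages.
FROM node:20-alpine

WORKDIR /app

# Standalone output: the traced server plus its pruned node_modules,
# and the static assets it serves.
COPY .next/standalone ./
COPY .next/static ./.next/static

ENV NODE_ENV=production
EXPOSE 3000

CMD ["node", "server.js"]
```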