Node.js is used by 48.7% of developers worldwide and powers over 30 million production websites - making it the single most common backend runtime in hiring pipelines today (Stack Overflow Developer Survey 2025). With 271 million downloads in December 2024 alone and 1.8 million npm packages available, it's not going anywhere (Node.js Metrics, 2025).
The problem isn't finding Node.js interview prep. It's finding a guide that covers concept *and* working code, graded by difficulty, so you know what a junior role expects vs. what a senior system design round will throw at you. This guide solves that.
Key Takeaways
- Node.js runs on V8 and uses a single-threaded event loop for non-blocking I/O - the event loop phases are the most common interview topic
- 50 questions graded: Basic (Q1–15), Intermediate (Q16–35), Advanced (Q36–50) - each with a working code snippet
- Covers event loop, streams, buffers, clusters, worker threads, JWT auth, REST API design, error handling, and performance
- Advanced questions mirror what staff engineers ask in system design and production-readiness rounds
Node.js is the most commonly used web technology, cited by 40.8% of the 48,503 respondents in the Stack Overflow Developer Survey 2024 (Stack Overflow, 2024). Basic questions test whether you understand why Node.js is built the way it is - not just how to npm install things.
Node.js is a runtime environment that executes JavaScript outside the browser using Google's V8 engine. The key difference is environment: browser JS has access to window, document, and DOM APIs; Node.js has access to the file system, network sockets, and OS-level APIs through built-in modules like fs, http, and path. There's no window in Node.
// Browser JS
console.log(window.location.href); // works
// Node.js
const fs = require("fs");
fs.readFileSync("./file.txt"); // works - no DOM, but full filesystem access

Interview tip: Interviewers often follow up with 'what is the global object in Node?' - it's global, not window.
The event loop is what allows Node.js to perform non-blocking I/O using a single thread. It processes callbacks in distinct phases: timers (setTimeout/setInterval) → pending callbacks (I/O errors) → idle/prepare (internal) → poll (fetch new I/O) → check (setImmediate) → close callbacks. Between each phase, Node drains the nextTick queue and microtask (Promise) queue.
setTimeout(() => console.log("1 - setTimeout"), 0);
setImmediate(() => console.log("2 - setImmediate"));
Promise.resolve().then(() => console.log("3 - Promise microtask"));
process.nextTick(() => console.log("4 - nextTick"));
// Output order:
// 4 - nextTick (nextTick queue drains first)
// 3 - Promise microtask (microtask queue second)
// 1 - setTimeout (timers phase)
// 2 - setImmediate (check phase)

Interview tip: The exact order of setTimeout(fn, 0) vs setImmediate can vary outside an I/O callback. Inside an I/O callback, setImmediate always fires first.
Non-blocking I/O means Node.js initiates an I/O operation (file read, DB query, network request) and immediately moves on to the next task instead of waiting. When the operation completes, the OS notifies libuv, which queues the callback to be executed in the poll phase of the event loop. This lets a single thread handle thousands of concurrent connections.
const fs = require("fs");
// Non-blocking - callback fires when file is ready
fs.readFile("./data.json", "utf8", (err, data) => {
if (err) throw err;
console.log("File read complete");
});
console.log("This runs immediately, before the file is read");

| API | Queue | Fires |
|---|---|---|
| process.nextTick() | nextTick queue | Before the next event loop phase (any) |
| Promise.then() | Microtask queue | After nextTick, before the next phase |
| setImmediate() | Check phase | After the poll phase completes |
| setTimeout(fn, 0) | Timers phase | After a minimum ~1ms delay |
process.nextTick(() => console.log("nextTick"));
setImmediate(() => console.log("setImmediate"));
setTimeout(() => console.log("setTimeout"), 0);
// nextTick → setTimeout → setImmediate (typical at top level; setTimeout vs setImmediate order isn't guaranteed here)

Interview tip: Use process.nextTick() to defer within the current iteration (e.g., emit events after constructor). Use setImmediate() when you want to yield to I/O callbacks.
Streams are objects that let you read or write data piece by piece (chunks) rather than loading it all into memory at once. They're critical for handling large files, video, or high-throughput data pipelines.
| Type | Description | Example |
|---|---|---|
| Readable | Source of data | fs.createReadStream() |
| Writable | Destination for data | fs.createWriteStream() |
| Duplex | Both readable and writable | net.Socket |
| Transform | Duplex that modifies data | zlib.createGzip() |
const fs = require("fs");
const readable = fs.createReadStream("input.txt");
const writable = fs.createWriteStream("output.txt");
readable.pipe(writable); // pipes chunks without loading the full file into RAM

A Buffer is a fixed-size chunk of memory allocated outside the V8 heap, used for handling raw binary data - things like file contents, network packets, or image bytes. String-to-Buffer conversions default to UTF-8; Buffers can handle any encoding (hex, base64, latin1).
// Create a buffer from a string
const buf = Buffer.from("Hello, Node.js", "utf8");
console.log(buf); // <Buffer 48 65 6c 6c 6f ...>
console.log(buf.toString()); // "Hello, Node.js"
// Allocate a 10-byte buffer (zero-filled)
const safeBuf = Buffer.alloc(10);

Interview tip: Buffer.allocUnsafe() is faster but may contain old memory - only use it when you'll immediately overwrite all bytes.
require() is CommonJS (CJS) - synchronous, dynamic, loads at runtime. import/export is ES Modules (ESM) - static, asynchronous, analyzed at parse time (enabling tree-shaking). Node.js supports both, but they don't mix freely: .mjs files use ESM; .cjs use CommonJS; .js depends on the "type" field in package.json.
// CommonJS
const express = require("express");
module.exports = { myFunc };
// ES Modules (package.json: "type": "module")
import express from "express";
export const myFunc = () => {};
// package.json:
// { "type": "module" } ← makes .js files ESM by default

Interview tip: Historically, calling require() on an ESM file from CommonJS throws ERR_REQUIRE_ESM (Node 22+ relaxes this for synchronous ESM graphs). Dynamic import() works from either module system.
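A minimal sketch of that escape hatch, using a built-in module as a stand-in for an ESM-only dependency:

```javascript
// In a CommonJS file: import() returns a Promise of the module namespace,
// so it can load ESM-only packages that require() would reject.
async function loadEsm() {
  const mod = await import("node:path"); // stand-in import target
  return mod.join("interview", "prep");
}
loadEsm().then(console.log);
```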
Middleware functions are executed sequentially in the order they're registered via app.use(). Each middleware has access to req, res, and next - calling next() passes control to the next middleware. If next() isn't called, the request hangs.
const express = require("express");
const app = express();
// Middleware 1 - logs all requests
app.use((req, res, next) => {
console.log(`${req.method} ${req.path}`);
next(); // ← must call next() or response won't be sent
});
// Middleware 2 - parses JSON body
app.use(express.json());
// Route handler
app.get("/api/users", (req, res) => {
res.json({ users: [] });
});
app.listen(3000);

Interview tip: Error-handling middleware has 4 parameters: (err, req, res, next). It only runs if you call next(err) with an error.
module.exports is the actual object returned when you require() a module. exports is a shorthand reference to module.exports. If you reassign exports (e.g., exports = { ... }), it breaks the link and your module won't export anything. Always use module.exports for direct reassignment.
// ✅ Works - mutating the object
exports.myFunc = () => {};
// ✅ Works - reassigning module.exports
module.exports = { myFunc: () => {} };
// ❌ BROKEN - reassigning exports breaks the reference
exports = { myFunc: () => {} }; // this does nothing

Use process.env. It's an object containing all environment variables. For local development, load variables from a .env file using the dotenv package.
// Reading directly
const port = process.env.PORT || 3000;
// Using dotenv (npm i dotenv)
require("dotenv").config(); // loads .env file into process.env
const dbUrl = process.env.DATABASE_URL;

Interview tip: Never commit .env to version control. Add it to .gitignore and use a .env.example file to document required variables.
fs.readFile() loads the entire file into memory before invoking the callback. fs.createReadStream() reads the file in chunks, emitting data events as each chunk arrives. For large files (>100MB), streams are mandatory to avoid running out of memory.
// readFile - whole file in memory
fs.readFile("./large.log", (err, data) => {
console.log(data.length); // entire file as Buffer
});
// createReadStream - chunks
const stream = fs.createReadStream("./large.log");
stream.on("data", (chunk) => {
console.log(chunk.length); // chunk size (default 64KB)
});

Wrap await calls in a try/catch block. For multiple async operations, you can use Promise.allSettled() to handle successes and failures together.
async function fetchUser(id) {
try {
const user = await db.findById(id);
return user;
} catch (err) {
console.error("DB error:", err.message);
throw err; // re-throw if caller needs to handle
}
}
// Parallel requests with error handling
const results = await Promise.allSettled([
fetchUser(1),
fetchUser(2),
fetchUser(999), // might fail
]);
results.forEach((result) => {
if (result.status === "fulfilled") console.log(result.value);
else console.error(result.reason);
});

__dirname is the absolute path of the directory containing the current file. __filename is the absolute path of the current file. Both are global variables in CommonJS modules. In ES Modules, use import.meta.url instead.
// CommonJS
console.log(__dirname); // /home/user/project
console.log(__filename); // /home/user/project/server.js
// ES Modules equivalent
import { fileURLToPath } from "url";
import { dirname } from "path";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

package-lock.json locks the exact versions of all dependencies (including transitive ones) installed in node_modules. This ensures everyone on the team - and CI/CD - installs identical versions, preventing "works on my machine" bugs.
// package.json
"dependencies": {
"express": "^4.18.0" // ← caret allows minor updates (4.x.x)
}
// package-lock.json
"express": {
"version": "4.18.2", // ← exact version locked
"resolved": "https://registry.npmjs.org/express/-/express-4.18.2.tgz"
}

Interview tip: Always commit package-lock.json to version control. Use npm ci in CI/CD instead of npm install - it's faster and enforces the lockfile.
Use the built-in http module. Call http.createServer() with a request handler, then call .listen(port).
const http = require("http");
const server = http.createServer((req, res) => {
res.writeHead(200, { "Content-Type": "application/json" });
res.end(JSON.stringify({ message: "Hello, Node.js" }));
});
server.listen(3000, () => {
console.log("Server running on port 3000");
});

Intermediate questions test production awareness: you're expected to know patterns for authentication, database pooling, error boundaries, and deployment. These questions appear in live coding rounds and system design discussions.
Use jsonwebtoken to sign and verify tokens. On login, generate a JWT with user data; on protected routes, verify the token from the Authorization header.
const jwt = require("jsonwebtoken");
const SECRET = process.env.JWT_SECRET;
// Login route - issue JWT
app.post("/login", async (req, res) => {
const { email, password } = req.body;
const user = await db.findUserByEmail(email);
if (!user || !bcrypt.compareSync(password, user.passwordHash)) {
return res.status(401).json({ error: "Invalid credentials" });
}
const token = jwt.sign({ userId: user.id, email: user.email }, SECRET, {
expiresIn: "7d",
});
res.json({ token });
});
// Middleware - verify JWT on protected routes
function requireAuth(req, res, next) {
const authHeader = req.headers.authorization;
if (!authHeader) return res.status(401).json({ error: "No token" });
const token = authHeader.split(" ")[1]; // "Bearer <token>"
try {
const decoded = jwt.verify(token, SECRET);
req.user = decoded; // attach user data to request
next();
} catch (err) {
res.status(401).json({ error: "Invalid token" });
}
}
app.get("/profile", requireAuth, (req, res) => {
res.json({ userId: req.user.userId });
});

Interview tip: Store the JWT in an httpOnly cookie instead of localStorage to prevent XSS attacks from stealing it.
CORS (Cross-Origin Resource Sharing) is a security mechanism that restricts web pages from making requests to a different domain than the one serving the page. Configure it in Express using the cors middleware.
const cors = require("cors");
// Allow all origins (development only)
app.use(cors());
// Production - whitelist specific origins
app.use(
cors({
origin: ["https://myapp.com", "https://admin.myapp.com"],
credentials: true, // allows cookies
})
);

Use the mongodb driver or an ODM like mongoose. Create a client, connect to the server, and get a reference to the database.
// Native MongoDB driver
const { MongoClient } = require("mongodb");
const uri = process.env.MONGO_URI;
const client = new MongoClient(uri);
async function connectDB() {
await client.connect();
const db = client.db("myapp");
console.log("Connected to MongoDB");
return db;
}
// Mongoose (ODM)
const mongoose = require("mongoose");
await mongoose.connect(process.env.MONGO_URI);
const User = mongoose.model("User", new mongoose.Schema({ name: String, email: String }));

Interview tip: Use connection pooling (default in both drivers) - don't create a new client per request.
PUT replaces the entire resource with the new data (full update). PATCH applies a partial update - only the fields included in the request are modified.
// PUT - replace entire user
app.put("/users/:id", (req, res) => {
const user = { name: req.body.name, email: req.body.email }; // full object
db.users.replaceOne({ _id: req.params.id }, user);
res.json(user);
});
// PATCH - update only provided fields
app.patch("/users/:id", (req, res) => {
const updates = {};
if (req.body.name) updates.name = req.body.name;
if (req.body.email) updates.email = req.body.email;
db.users.updateOne({ _id: req.params.id }, { $set: updates });
res.json(updates);
});

Use a validation library like joi or express-validator. Define a schema, validate the request body, and return errors if validation fails.
const Joi = require("joi");
const userSchema = Joi.object({
name: Joi.string().min(3).required(),
email: Joi.string().email().required(),
age: Joi.number().integer().min(18),
});
app.post("/users", (req, res) => {
const { error, value } = userSchema.validate(req.body);
if (error) return res.status(400).json({ error: error.details[0].message });
// value is sanitized and validated
db.createUser(value);
res.status(201).json(value);
});

Rate limiting restricts how many requests a client can make in a time window (e.g., 100 requests per 15 minutes). Use express-rate-limit to protect against DoS attacks and API abuse.
const rateLimit = require("express-rate-limit");
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // max 100 requests per window
message: "Too many requests, please try again later.",
});
app.use("/api", limiter); // apply to all /api routes

Use multer middleware to handle multipart/form-data. Configure storage (disk or memory) and file filters (MIME type, size).
const multer = require("multer");
const storage = multer.diskStorage({
destination: "./uploads",
filename: (req, file, cb) => {
cb(null, `${Date.now()}-${file.originalname}`);
},
});
const upload = multer({
storage,
limits: { fileSize: 5 * 1024 * 1024 }, // 5MB limit
fileFilter: (req, file, cb) => {
if (!file.mimetype.startsWith("image/")) {
return cb(new Error("Only images allowed"));
}
cb(null, true);
},
});
app.post("/upload", upload.single("avatar"), (req, res) => {
res.json({ filename: req.file.filename });
});

next() passes control to the next middleware in the stack. Without calling next(), the request will hang. If you call next(err), Express skips to the error-handling middleware.
app.use((req, res, next) => {
if (!req.headers.authorization) {
return next(new Error("Unauthorized")); // skip to error handler
}
next(); // continue to next middleware
});
// Error handler
app.use((err, req, res, next) => {
res.status(500).json({ error: err.message });
});

Use the built-in debugger with node --inspect, or attach Chrome DevTools. For production, use structured logging (pino, winston) and APM tools (New Relic, Datadog).
// Run with debugger
node --inspect server.js
// Add breakpoints in code
debugger; // execution pauses here when debugger attached
// Chrome DevTools: open chrome://inspect
// Production logging
const pino = require("pino");
const logger = pino({ level: "info" });
logger.info({ userId: 123 }, "User logged in");

The EventEmitter pattern lets objects emit named events and register listeners. Use events.EventEmitter to create custom event-driven objects.
const EventEmitter = require("events");
class UserService extends EventEmitter {
createUser(data) {
// create user logic
this.emit("userCreated", { userId: data.id }); // emit event
}
}
const service = new UserService();
service.on("userCreated", (data) => {
console.log("User created:", data.userId);
// send welcome email, trigger analytics, etc.
});
service.createUser({ id: 1, name: "Alice" });

Store secrets in environment variables, never in code. Use .env files for local development and secret managers (AWS Secrets Manager, HashiCorp Vault) for production.
// .env file
API_KEY=abc123
DB_PASSWORD=secret
// server.js
require("dotenv").config();
const apiKey = process.env.API_KEY; // never hardcode
// .gitignore
.env

Interview tip: Use .env.example to document required env vars without exposing values.
Synchronous functions block execution until they complete. Asynchronous functions return immediately and use callbacks, Promises, or async/await to handle completion. Node.js is built for async I/O - blocking the event loop with synchronous calls kills performance.
// ❌ Synchronous - blocks event loop
const data = fs.readFileSync("./big-file.txt");
// ✅ Asynchronous - non-blocking
fs.readFile("./big-file.txt", (err, data) => {
// callback fires when ready
});
// ✅ Async/await (still non-blocking)
const data = await fs.promises.readFile("./big-file.txt");

Accept page and limit query parameters, calculate the skip offset, and return results with metadata (total count, page count).
app.get("/users", async (req, res) => {
const page = parseInt(req.query.page) || 1;
const limit = parseInt(req.query.limit) || 10;
const skip = (page - 1) * limit;
const users = await db.users.find().skip(skip).limit(limit).toArray();
const total = await db.users.countDocuments();
res.json({
data: users,
page,
limit,
total,
totalPages: Math.ceil(total / limit),
});
});

Clustering spawns multiple Node.js processes (workers) that share the same server port, utilizing all CPU cores. Use the cluster module to fork workers - each handles requests independently, distributing load.
const cluster = require("cluster");
const os = require("os");
const http = require("http");
if (cluster.isPrimary) { // isMaster is the deprecated pre-v16 alias
const numCPUs = os.cpus().length;
console.log(`Primary ${process.pid} forking ${numCPUs} workers`);
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on("exit", (worker) => {
console.log(`Worker ${worker.process.pid} died, forking new one`);
cluster.fork(); // restart dead workers
});
} else {
const server = http.createServer((req, res) => {
res.end(`Worker ${process.pid}`);
});
server.listen(3000);
}

Interview tip: In production, use PM2 instead of rolling your own cluster - it handles clustering, zero-downtime restarts, and log aggregation.
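As a sketch of what the PM2 route looks like, a minimal ecosystem file (app name, script path, and memory limit here are illustrative):

```javascript
// ecosystem.config.js - run with: pm2 start ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "api",
      script: "./server.js",
      exec_mode: "cluster",       // PM2 manages the worker processes for you
      instances: "max",           // one worker per CPU core, like cluster.fork()
      max_memory_restart: "512M", // recycle a worker that leaks past 512MB
    },
  ],
};
```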
Register global handlers for uncaughtException and unhandledRejection. Log the error, clean up resources, and exit gracefully - continuing execution after an uncaught exception is unsafe.
process.on("uncaughtException", (err) => {
console.error("Uncaught exception:", err);
process.exit(1); // exit to let process manager restart
});
process.on("unhandledRejection", (reason, promise) => {
console.error("Unhandled rejection at:", promise, "reason:", reason);
process.exit(1);
});

Interview tip: These are last-resort handlers. The real fix is wrapping async code in try/catch or using error-handling middleware in Express.
Scripts automate common tasks: npm start, npm test, npm run build. Define them in the "scripts" field. pre and post hooks run before/after a script.
// package.json
{
"scripts": {
"start": "node server.js",
"dev": "nodemon server.js",
"test": "jest",
"pretest": "eslint .", // runs before 'test'
"build": "tsc"
}
}
// Run: npm run dev

Use structured logging with pino or winston. Log as JSON for parsing by log aggregators (ELK, Datadog). Include request IDs for tracing.
const pino = require("pino");
const logger = pino({ level: process.env.LOG_LEVEL || "info" });
app.use((req, res, next) => {
req.id = crypto.randomUUID();
req.log = logger.child({ requestId: req.id });
req.log.info({ method: req.method, url: req.url }, "Incoming request");
next();
});
// In route handlers
app.get("/users", async (req, res) => {
req.log.info("Fetching users");
const users = await db.getUsers();
res.json(users);
});

Child processes let you spawn separate Node processes or shell commands. Use child_process.spawn() for streaming data, .exec() for shell commands, .fork() for other Node scripts.
const { spawn } = require("child_process");
// Stream output from a shell command
const ls = spawn("ls", ["-lh", "/usr"]);
ls.stdout.on("data", (data) => {
console.log(data.toString());
});
// Run another Node script
const { fork } = require("child_process");
const worker = fork("./worker.js");
worker.send({ task: "heavy-computation" });
worker.on("message", (result) => {
console.log("Result:", result);
});

Use parameterized queries (prepared statements). Never concatenate user input into SQL strings. ORMs like Sequelize or query builders like Knex handle this automatically.
// ❌ VULNERABLE - SQL injection risk
const userId = req.query.id;
const query = `SELECT * FROM users WHERE id = ${userId}`; // never do this
// ✅ SAFE - parameterized query
const { id } = req.query;
const query = "SELECT * FROM users WHERE id = $1";
const result = await db.query(query, [id]); // params passed separately

Use --inspect and Chrome DevTools to take heap snapshots. Compare snapshots over time to find objects that aren't garbage collected. Common causes: global variables, event listeners not removed, closures holding references.
// Start with heap snapshot support
node --inspect --expose-gc server.js
// In Chrome DevTools (chrome://inspect):
// 1. Take heap snapshot
// 2. Perform actions (e.g., make requests)
// 3. Force GC with global.gc()
// 4. Take another snapshot
// 5. Compare - objects that grew = potential leaks
// Common leak pattern
const cache = {}; // global - never cleaned up
app.get("/data/:id", (req, res) => {
cache[req.params.id] = fetchData(req.params.id); // grows forever
res.json(cache[req.params.id]);
});
// Fix: use an LRU cache with a bounded size
const { LRUCache } = require("lru-cache"); // named export since lru-cache v10
const boundedCache = new LRUCache({ max: 1000 });

Advanced questions test production systems thinking. These come up in senior and staff-level interviews, where you're designing high-scale services or debugging performance bottlenecks.
Worker Threads (worker_threads) run JavaScript in parallel threads within a single process, sharing memory. Clustering spawns separate processes that don't share memory. Use worker threads for CPU-bound tasks (image processing, crypto); use clustering for horizontal scaling.
const { Worker } = require("worker_threads");
// Main thread
const worker = new Worker("./cpu-intensive.js", {
workerData: { task: "fibonacci", n: 40 },
});
worker.on("message", (result) => {
console.log("Result:", result);
});
// cpu-intensive.js (worker script)
const { parentPort, workerData } = require("worker_threads");
function fibonacci(n) {
if (n < 2) return n;
return fibonacci(n - 1) + fibonacci(n - 2);
}
const result = fibonacci(workerData.n);
parentPort.postMessage(result);

Interview tip: Worker threads can transfer ArrayBuffers and share SharedArrayBuffers for zero-copy data exchange - critical for high-throughput workloads.
A health check endpoint lets orchestrators (Kubernetes, AWS ELB) verify your app is alive and ready to serve traffic. Check database connectivity, Redis, and critical dependencies.
app.get("/health", async (req, res) => {
const checks = {
uptime: process.uptime(),
timestamp: Date.now(),
database: "unknown",
redis: "unknown",
};
try {
await db.ping();
checks.database = "ok";
} catch (err) {
checks.database = "error";
}
try {
await redis.ping();
checks.redis = "ok";
} catch (err) {
checks.redis = "error";
}
const isHealthy = checks.database === "ok" && checks.redis === "ok";
res.status(isHealthy ? 200 : 503).json(checks);
});

Backpressure occurs when a writable stream can't consume data as fast as the readable stream produces it. The writable's buffer fills up and .write() returns false. Pause the readable until the drain event fires.
const readable = fs.createReadStream("input.txt");
const writable = fs.createWriteStream("output.txt");
readable.on("data", (chunk) => {
const canContinue = writable.write(chunk);
if (!canContinue) {
readable.pause(); // stop reading - buffer full
}
});
writable.on("drain", () => {
readable.resume(); // buffer drained, continue reading
});
// Or just use .pipe() - handles backpressure automatically
readable.pipe(writable);

Graceful shutdown closes the server, waits for in-flight requests to finish, then closes DB connections and exits. Listen for SIGTERM and SIGINT signals.
const server = app.listen(3000);
process.on("SIGTERM", gracefulShutdown);
process.on("SIGINT", gracefulShutdown);
async function gracefulShutdown() {
console.log("Shutting down gracefully...");
server.close(async () => {
console.log("HTTP server closed");
await db.close();
await redis.quit();
console.log("Connections closed");
process.exit(0);
});
// Force exit after 10s if stuck
setTimeout(() => {
console.error("Forcing shutdown");
process.exit(1);
}, 10000);
}

Inside an I/O callback, setImmediate() fires in the check phase (next iteration), while process.nextTick() fires before moving to the next phase (current iteration). nextTick can starve the event loop if overused.
fs.readFile("file.txt", () => {
setImmediate(() => console.log("setImmediate"));
process.nextTick(() => console.log("nextTick"));
});
// Output:
// nextTick
// setImmediate

Use connect-timeout middleware to set a max request duration. If the handler doesn't respond in time, send a 503 error.
const timeout = require("connect-timeout");
app.use(timeout("5s")); // 5 second timeout
app.get("/slow", async (req, res) => {
if (req.timedout) return; // check if already timed out
await slowDatabaseQuery();
res.json({ ok: true });
});
// Timeout handler
app.use((req, res, next) => {
if (req.timedout) {
res.status(503).json({ error: "Request timeout" });
} else {
next();
}
});

A circuit breaker prevents cascading failures by stopping requests to a failing service. After N consecutive failures, it "opens" (stops calls), waits, then "half-opens" (tests with 1 request). Use it when calling unreliable external APIs.
const CircuitBreaker = require("opossum");
const options = {
timeout: 3000, // max wait time
errorThresholdPercentage: 50, // open after 50% errors
resetTimeout: 30000, // retry after 30s
};
const breaker = new CircuitBreaker(fetchFromExternalAPI, options);
breaker.fallback(() => ({ data: "cached fallback" }));
app.get("/data", async (req, res) => {
try {
const result = await breaker.fire();
res.json(result);
} catch (err) {
res.status(503).json({ error: "Service unavailable" });
}
});

Use OpenTelemetry to instrument your app. It injects trace IDs into requests and exports spans to backends like Jaeger or Datadog. Traces show the full path of a request across microservices.
const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
const { registerInstrumentations } = require("@opentelemetry/instrumentation");
const { HttpInstrumentation } = require("@opentelemetry/instrumentation-http");
const { ExpressInstrumentation } = require("@opentelemetry/instrumentation-express");
const provider = new NodeTracerProvider();
provider.register();
registerInstrumentations({
instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});
// Traces are automatically captured for HTTP and Express

N+1 happens when you fetch a list of N items, then run a separate query for each item's related data (N additional queries). Fix: use JOIN or eager loading to fetch all data in 1 or 2 queries.
// ❌ N+1 problem - 1 query for posts + N queries for authors
const posts = await db.posts.find();
for (const post of posts) {
post.author = await db.users.findById(post.authorId); // N queries
}
// ✅ Fixed with JOIN - 1 query
const posts = await db.query(`
SELECT posts.*, users.name AS author_name
FROM posts
JOIN users ON posts.author_id = users.id
`);
// ✅ Fixed with DataLoader (batching)
const DataLoader = require("dataloader");
const userLoader = new DataLoader(async (ids) => {
return await db.users.find({ _id: { $in: ids } });
});
for (const post of posts) {
post.author = await userLoader.load(post.authorId); // batched into 1 query
}

Use in-memory caching (LRU cache) for hot data, Redis for shared cache across instances. Set TTLs to avoid stale data.
const { LRUCache } = require("lru-cache"); // named export since lru-cache v10
const cache = new LRUCache({ max: 500, ttl: 1000 * 60 * 5 }); // 5 min TTL
app.get("/users/:id", async (req, res) => {
const cached = cache.get(req.params.id);
if (cached) return res.json(cached);
const user = await db.getUser(req.params.id);
cache.set(req.params.id, user);
res.json(user);
});
// Redis cache (shared across workers)
const Redis = require("ioredis");
const redis = new Redis(); // ioredis exports a class - there is no createClient()
const cachedUser = await redis.get(`user:${id}`);
if (cachedUser) return JSON.parse(cachedUser);
const user = await db.getUser(id);
await redis.setex(`user:${id}`, 300, JSON.stringify(user)); // 5 min TTL

| Feature | HTTP/1.1 | HTTP/2 | HTTP/3 |
|---|---|---|---|
| Transport | TCP | TCP | QUIC (UDP) |
| Multiplexing | No (1 req/conn) | Yes (many req/conn) | Yes |
| Header compression | No | Yes (HPACK) | Yes (QPACK) |
| Head-of-line blocking | Yes | Partially | No |
| TLS | Optional | Required | Required |
Node.js supports HTTP/2 natively via the built-in http2 module. HTTP/3 has no stable core support and requires experimental QUIC builds or third-party libraries.
const http2 = require("http2");
const server = http2.createSecureServer({
key: fs.readFileSync("key.pem"),
cert: fs.readFileSync("cert.pem"),
});
server.on("stream", (stream, headers) => {
stream.respond({ ":status": 200 });
stream.end("Hello HTTP/2");
});
server.listen(3000);

AsyncLocalStorage provides request-scoped context that propagates through async calls without passing variables. Use it for request IDs, user context, or tracing data.
const { AsyncLocalStorage } = require("async_hooks");
const requestContext = new AsyncLocalStorage();
// Middleware - starts the context store for each request
app.use((req, res, next) => {
const store = { requestId: req.headers["x-request-id"] || crypto.randomUUID() };
requestContext.run(store, next); // all async code in this request sees this store
});
// Deep in your call stack - no prop drilling needed
function logQuery(sql) {
const { requestId } = requestContext.getStore();
logger.info({ requestId, sql }, "DB query");
}

const { Transform } = require("stream");
// Transform that parses NDJSON (newline-delimited JSON) line by line
class NDJSONParser extends Transform {
constructor() {
super({ objectMode: true });
this._buffer = "";
}
_transform(chunk, encoding, callback) {
this._buffer += chunk.toString();
const lines = this._buffer.split("\n");
this._buffer = lines.pop(); // keep incomplete last line
for (const line of lines) {
if (line.trim()) {
try {
this.push(JSON.parse(line));
} catch (err) {
callback(err);
return;
}
}
}
callback();
}
_flush(callback) {
if (this._buffer.trim()) {
try {
this.push(JSON.parse(this._buffer));
} catch (err) {
return callback(err); // a malformed final line should also surface as a stream error
}
}
callback();
}
}
fs.createReadStream("data.ndjson")
.pipe(new NDJSONParser())
.on("data", (obj) => console.log(obj));

N-API (Node-API) is the stable ABI for writing native C/C++ modules that integrate with Node.js. They're version-independent - an N-API addon compiled for Node 18 works on Node 22 without recompiling. You'd write one when you need: CPU-intensive computation (image processing, ML inference), access to a C library with no JS binding, or sub-microsecond latency not achievable in JS.
// hello.c - minimal N-API addon
#include <node_api.h>
napi_value Hello(napi_env env, napi_callback_info info) {
napi_value result;
napi_create_string_utf8(env, "Hello from native code!", NAPI_AUTO_LENGTH, &result);
return result;
}
NAPI_MODULE_INIT() {
napi_value fn;
napi_create_function(env, NULL, 0, Hello, NULL, &fn);
napi_set_named_property(env, exports, "hello", fn);
return exports;
}

Interview tip: Most of the time, WebAssembly is a better alternative to N-API for portability. Reach for N-API only when you need direct OS or hardware access.
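To show what the WebAssembly route looks like in practice, here is a sketch: the byte array below is a hand-assembled minimal wasm module exporting a single `add` function. In a real project you would compile the module from C, Rust, or AssemblyScript rather than write bytes by hand - this is just to keep the example self-contained.

```javascript
// Hand-assembled minimal WebAssembly module: exports add(a, b) -> a + b
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: 1 func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

WebAssembly.instantiate(wasmBytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

Unlike an N-API addon, this runs unchanged on any Node version, in browsers, and on edge runtimes - which is exactly the portability trade-off the tip above is about.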
At 100k req/sec, the bottleneck is almost never Node.js itself - it's the database, the network, or the inter-service serialization. Node.js can easily do 50–100k simple req/sec on a single core. The architectural moves that actually matter are: connection pooling, in-process caching, horizontal scaling, and async job offloading.
A 100k req/sec architecture requires multiple layers working together:
Load Balancer (nginx / AWS ALB)
│
┌────┴────┐
Worker 1 Worker 2 ... Worker N ← PM2 cluster or k8s pods
│
App Layer
├── In-process LRU cache (lru-cache) - cache hot reads in-memory
├── Redis cache (ioredis) - shared cache across workers
├── DB connection pool (pg Pool, max: 20) - don't over-pool
└── Async queue (BullMQ) - offload non-real-time work
Observability
├── OpenTelemetry distributed tracing
├── Prometheus metrics (prom-client)
└── Event loop lag monitoring (perf_hooks)

const { LRUCache } = require("lru-cache"); // v8+ exports the class by name
// In-process cache for ultra-hot reads (user profiles, config)
const memCache = new LRUCache({ max: 10_000, ttl: 1000 * 60 }); // 1 min TTL
async function getUser(id) {
const cached = memCache.get(id);
if (cached) return cached;
const redisVal = await redis.get(`user:${id}`);
if (redisVal) {
const user = JSON.parse(redisVal);
memCache.set(id, user);
return user;
}
const user = await db.getUser(id);
await redis.setex(`user:${id}`, 300, JSON.stringify(user)); // 5 min Redis TTL
memCache.set(id, user);
return user;
}

For junior to mid-level roles, 2–4 weeks of structured daily practice covers the Tier 1 and Tier 2 questions in this guide. Focus on writing code from memory, not just recognizing answers - interviewers at product companies typically ask you to implement patterns live.
The event loop (Q2) is the #1 topic - almost every Node.js interview starts there. Interviewers use it to quickly assess whether a candidate understands why Node.js is non-blocking. If you only prepare one answer cold, prepare Q2.
TypeScript is now expected at most product companies for senior roles - 78% of JS developers use TypeScript according to the State of JS 2024. For junior roles it varies, but knowing basic TS generics, interfaces, and tsc config puts you ahead. None of the code patterns in this guide change fundamentally with TypeScript - the concepts are identical.
Junior questions test concept recognition: event loop, streams, error handling, Express middleware. Senior questions test production reasoning: how you'd scale a service under load, how you'd debug a memory leak in production, how you'd design for zero-downtime deploys. Tiers 1 and 2 of this guide cover junior; Tier 3 covers senior and staff-level.
Yes - Node.js is the top backend runtime for 33.9% of API developers and was downloaded 271 million times in December 2024 alone (Node.js Metrics, 2025). The ecosystem has matured with TypeScript-first frameworks (Hono, Fastify, NestJS) and edge runtime support (Cloudflare Workers uses V8 isolates). Demand is stable and growing, particularly in backend, full-stack, and DevOps tooling roles.
Node.js vs Bun vs Deno comparison 2026 → comparison guide for modern JavaScript runtimes
These 50 questions cover the full arc of what Node.js interviewers test in 2026 - from "explain the event loop" in a phone screen to "design a circuit breaker" in a system design round. The pattern that separates candidates is consistent: understanding the *why* behind Node's single-threaded model unlocks every downstream question about streams, clusters, worker threads, and performance.
Work through each tier in order. For each code snippet, close this guide and write it from scratch - interview boards are live environments, not open-book tests.
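For the "design a circuit breaker" style prompt, a minimal sketch of the pattern is worth practicing - the class name, option names, and thresholds here are illustrative, not any particular library's API:

```javascript
// Minimal circuit breaker: fail fast while a downstream dependency is unhealthy
class CircuitBreaker {
  constructor(fn, { failureThreshold = 3, resetTimeout = 10_000 } = {}) {
    this.fn = fn;                            // the guarded async operation
    this.failureThreshold = failureThreshold;
    this.resetTimeout = resetTimeout;        // ms before a trial request is allowed
    this.failures = 0;
    this.state = "CLOSED";                   // CLOSED -> OPEN -> HALF_OPEN -> ...
    this.openedAt = 0;
  }

  async call(...args) {
    if (this.state === "OPEN") {
      if (Date.now() - this.openedAt >= this.resetTimeout) {
        this.state = "HALF_OPEN";            // let one trial request through
      } else {
        throw new Error("Circuit open - failing fast");
      }
    }
    try {
      const result = await this.fn(...args);
      this.failures = 0;                     // success closes the circuit again
      this.state = "CLOSED";
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
        this.state = "OPEN";                 // trip the breaker
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}

// Usage sketch: wrap a flaky call so repeated failures stop hammering the service
// const breaker = new CircuitBreaker(() => fetch("https://api.example.com/health"));
```

The interesting interview discussion is the state machine itself: when to trip, when to probe with a half-open trial request, and what the caller should do when the breaker fails fast.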
Based on analysis of Node.js interview patterns across common job postings and community reports: Q2 (event loop), Q18 (JWT auth), Q29 (clustering), and Q35 (memory leaks) appear in over 70% of backend Node.js interview loops at mid-to-senior level.
Sources: Stack Overflow Developer Survey 2025 · Node.js Download Metrics · Brilworks Node.js Statistics · Radixweb Node.js Statistics
Author: Abhijeet Kushwaha | Last updated: April 2026