
© 2026 StackInterview. Built for engineers, by engineers.

Developed and Maintained by Abhijeet Kushwaha

nodejs · 22 min read

Top 50 Node.js Interview Questions and Answers (2026)

Node.js powers 48.7% of developer stacks in 2026. Master all 50 interview questions - event loop, streams, JWT auth, clusters, worker threads, and production patterns - each with working code.


Tags: nodejs, interview-questions, backend, nodejs-2026, coding-interview
On this page
  1. Top 50 Node.js Interview Questions and Answers (2026)
  2. Tier 1 - Basic Questions (Q1–Q15)
  3. Q1. What is Node.js and how does it differ from browser JavaScript?
  4. Q2. Explain the Node.js event loop - phases and order of execution.
  5. Q3. What is non-blocking I/O? How does it work in Node?
  6. Q4. What is the difference between `process.nextTick()`, `setImmediate()`, and `setTimeout()`?
  7. Q5. What are streams in Node.js? Name the four types.
  8. Q6. What is a Buffer? When would you use it?
  9. Q7. What is the difference between `require()` and ES Modules (`import/export`)?
  10. Q8. How does middleware work in Express.js?
  11. Q9. What is the purpose of `module.exports` vs `exports`?
  12. Q10. How do you read environment variables in Node.js?
  13. Q11. What is the difference between `fs.readFile()` and `fs.createReadStream()`?
  14. Q12. How do you handle errors in async/await?
  15. Q13. What is `__dirname` and `__filename`?
  16. Q14. What is `package-lock.json` and why is it important?
  17. Q15. How do you create a simple HTTP server in Node.js without frameworks?
  18. Tier 2 - Intermediate Questions (Q16–Q35)
  19. Q16. How do you implement JWT-based authentication in Node.js?
  20. Q17. What is CORS? How do you configure it in Express?
  21. Q18. How do you connect to MongoDB from Node.js?
  22. Q19. What is the difference between `PUT` and `PATCH` in REST APIs?
  23. Q20. How do you validate request data in Express?
  24. Q21. What is rate limiting and how do you implement it?
  25. Q22. How do you upload files in Node.js?
  26. Q23. What is the purpose of the `next()` function in middleware?
  27. Q24. How do you debug a Node.js application?
  28. Q25. What is the event emitter pattern? How do you use it?
  29. Q26. How do you secure sensitive data like API keys in Node.js?
  30. Q27. What is the difference between synchronous and asynchronous functions?
  31. Q28. How do you implement pagination in a REST API?
  32. Q29. What is clustering in Node.js? How does it improve performance?
  33. Q30. How do you handle uncaught exceptions and unhandled promise rejections?
  34. Q31. What is the purpose of `package.json` scripts?
  35. Q32. How do you implement logging in a production Node.js app?
  36. Q33. What are child processes? When would you use them?
  37. Q34. How do you prevent SQL injection in Node.js?
  38. Q35. How do you detect and fix memory leaks in Node.js?
  39. Tier 3 - Advanced Questions (Q36–Q50)
  40. Q36. What are Worker Threads? How do they differ from clustering?
  41. Q37. How do you implement a health check endpoint?
  42. Q38. What is backpressure in streams? How do you handle it?
  43. Q39. How do you implement graceful shutdown?
  44. Q40. What is the difference between `setImmediate()` and `process.nextTick()` in I/O callbacks?
  45. Q41. How do you implement request timeout handling in Express?
  46. Q42. What is a circuit breaker pattern? When would you use it?
  47. Q43. How do you implement distributed tracing in Node.js?
  48. Q44. What is the N+1 query problem? How do you fix it?
  49. Q45. How do you implement caching in Node.js?
  50. Q46. What are the differences between HTTP/1.1, HTTP/2, and HTTP/3?
  51. Q47. What is AsyncLocalStorage and when would you use it?
  52. Q48. How do you implement a custom stream Transform class?
  53. Q49. What are N-API addons and when would you write a native Node.js module?
  54. Q50. How would you architect a high-throughput Node.js service handling 100k req/sec?
  55. Frequently Asked Questions
  56. How long does it take to prepare for a Node.js interview?
  57. What is the most commonly asked Node.js interview question?
  58. Do I need to know TypeScript for Node.js interviews in 2026?
  59. What's the difference between junior and senior Node.js interview questions?
  60. Is Node.js still in demand in 2026?
  61. Conclusion

Node.js is used by 48.7% of developers worldwide and powers over 30 million production websites - making it the single most common backend runtime in hiring pipelines today (Stack Overflow Developer Survey 2025). With 271 million downloads in December 2024 alone and 1.8 million npm packages available, it's not going anywhere (Node.js Metrics, 2025).

The problem isn't finding Node.js interview prep. It's finding a guide that pairs every concept with working code, graded by difficulty, so you know what a junior role expects vs. what a senior system design round will throw at you. This guide solves that.

Key Takeaways

  • Node.js runs on V8 and uses a single-threaded event loop for non-blocking I/O - the event loop phases are the most common interview topic

  • 50 questions graded: Basic (Q1–15), Intermediate (Q16–35), Advanced (Q36–50) - each with a working code snippet

  • Covers event loop, streams, buffers, clusters, worker threads, JWT auth, REST API design, error handling, and performance

  • Advanced questions mirror what staff engineers ask in system design and production-readiness rounds

Related: backend interview prep roadmap → a comprehensive guide to preparing for backend developer interviews in 2026

Rows of server racks in a data center - the infrastructure Node.js powers at scale

Tier 1 - Basic Questions (Q1–Q15)

Node.js was the most commonly used web technology in the Stack Overflow Developer Survey 2024, cited by 40.8% of its 48,503 respondents (Stack Overflow, 2024). Basic questions test whether you understand why Node.js is built the way it is - not just how to npm install things.


Q1. What is Node.js and how does it differ from browser JavaScript?

Node.js is a runtime environment that executes JavaScript outside the browser using Google's V8 engine. The key difference is environment: browser JS has access to window, document, and DOM APIs; Node.js has access to the file system, network sockets, and OS-level APIs through built-in modules like fs, http, and path. There's no window in Node.

// Browser JS
console.log(window.location.href); // works

// Node.js
const fs = require("fs");
fs.readFileSync("./file.txt"); // works - no DOM, but full filesystem access

Interview tip: Interviewers often follow up with 'what is the global object in Node?' - it's global (or the standards-based globalThis, which works in both environments), not window.


Q2. Explain the Node.js event loop - phases and order of execution.

The event loop is what allows Node.js to perform non-blocking I/O using a single thread. It processes callbacks in distinct phases: timers (setTimeout/setInterval) → pending callbacks (I/O errors) → idle/prepare (internal) → poll (fetch new I/O) → check (setImmediate) → close callbacks. Between each phase, Node drains the nextTick queue and microtask (Promise) queue.

setTimeout(() => console.log("1 - setTimeout"), 0);
setImmediate(() => console.log("2 - setImmediate"));
Promise.resolve().then(() => console.log("3 - Promise microtask"));
process.nextTick(() => console.log("4 - nextTick"));

// Output order (top level; setTimeout vs setImmediate can swap - see tip below):
// 4 - nextTick           (nextTick queue drains first)
// 3 - Promise microtask  (microtask queue second)
// 1 - setTimeout         (timers phase)
// 2 - setImmediate       (check phase)

Interview tip: The exact order of setTimeout(fn, 0) vs setImmediate can vary outside an I/O callback. Inside an I/O callback, setImmediate always fires first.


Q3. What is non-blocking I/O? How does it work in Node?

Non-blocking I/O means Node.js initiates an I/O operation (file read, DB query, network request) and immediately moves on to the next task instead of waiting. When the operation completes, the OS notifies libuv, which queues the callback to be executed in the poll phase of the event loop. This lets a single thread handle thousands of concurrent connections.

const fs = require("fs");

// Non-blocking - callback fires when file is ready
fs.readFile("./data.json", "utf8", (err, data) => {
  if (err) throw err;
  console.log("File read complete");
});

console.log("This runs immediately, before the file is read");

Q4. What is the difference between `process.nextTick()`, `setImmediate()`, and `setTimeout()`?

| API | Queue | Fires |
| --- | --- | --- |
| process.nextTick() | nextTick queue | Before the next event loop phase (any phase) |
| Promise.then() | Microtask queue | After nextTick, before the next phase |
| setImmediate() | Check phase | After the poll phase completes |
| setTimeout(fn, 0) | Timers phase | After a minimum ~1ms delay |
process.nextTick(() => console.log("nextTick"));
setImmediate(() => console.log("setImmediate"));
setTimeout(() => console.log("setTimeout"), 0);
// nextTick fires first; setTimeout vs setImmediate order is not guaranteed at top level

Interview tip: Use process.nextTick() to defer within the current iteration (e.g., emit events after constructor). Use setImmediate() when you want to yield to I/O callbacks.


Q5. What are streams in Node.js? Name the four types.

Streams are objects that let you read or write data piece by piece (chunks) rather than loading it all into memory at once. They're critical for handling large files, video, or high-throughput data pipelines.

| Type | Description | Example |
| --- | --- | --- |
| Readable | Source of data | fs.createReadStream() |
| Writable | Destination for data | fs.createWriteStream() |
| Duplex | Both readable and writable | net.Socket |
| Transform | Duplex that modifies data as it passes through | zlib.createGzip() |
const fs = require("fs");

const readable = fs.createReadStream("input.txt");
const writable = fs.createWriteStream("output.txt");

readable.pipe(writable); // pipes chunks without loading full file into RAM

Q6. What is a Buffer? When would you use it?

A Buffer is a fixed-size chunk of memory allocated outside the V8 heap, used for handling raw binary data - things like file contents, network packets, or image bytes. Strings in Node are UTF-8 by default; Buffers handle any encoding (hex, base64, binary).

// Create a buffer from a string
const buf = Buffer.from("Hello, Node.js", "utf8");
console.log(buf);           // <Buffer 48 65 6c 6c 6f ...>
console.log(buf.toString()); // "Hello, Node.js"

// Allocate a 10-byte buffer (zero-filled)
const safeBuf = Buffer.alloc(10);

Interview tip: Buffer.allocUnsafe() is faster but may contain old memory - only use it when you'll immediately overwrite all bytes.


Q7. What is the difference between `require()` and ES Modules (`import/export`)?

require() is CommonJS (CJS) - synchronous, dynamic, loads at runtime. import/export is ES Modules (ESM) - static, asynchronous, analyzed at parse time (enabling tree-shaking). Node.js supports both, but they don't mix freely: .mjs files use ESM; .cjs use CommonJS; .js depends on the "type" field in package.json.

// CommonJS
const express = require("express");
module.exports = { myFunc };

// ES Modules (package.json: "type": "module")
import express from "express";
export const myFunc = () => {};

// package.json:
// { "type": "module" }  ← makes .js files ESM by default

Interview tip: Historically you couldn't require() an ESM file from a CommonJS module; Node.js 22+ can require() synchronous ESM, but dynamic import() remains the portable option.
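From a CommonJS file, the dynamic import() escape hatch looks like this (the built-in node:path module stands in for any ESM-only dependency):

```javascript
// CommonJS file: require() of an ESM-only package throws ERR_REQUIRE_ESM on
// older Node versions, but dynamic import() always works and returns a promise.
async function loadEsm() {
  const mod = await import("node:path"); // stands in for an ESM-only dependency
  return mod.join("a", "b");
}

loadEsm().then((p) => console.log(p));
```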


Q8. How does middleware work in Express.js?

Middleware functions are executed sequentially in the order they're registered via app.use(). Each middleware has access to req, res, and next - calling next() passes control to the next middleware. If next() isn't called, the request hangs.

const express = require("express");
const app = express();

// Middleware 1 - logs all requests
app.use((req, res, next) => {
  console.log(`${req.method} ${req.path}`);
  next(); // ← must call next() or response won't be sent
});

// Middleware 2 - parses JSON body
app.use(express.json());

// Route handler
app.get("/api/users", (req, res) => {
  res.json({ users: [] });
});

app.listen(3000);

Interview tip: Error-handling middleware has 4 parameters: (err, req, res, next). It only runs if you call next(err) with an error.


Q9. What is the purpose of `module.exports` vs `exports`?

module.exports is the actual object returned when you require() a module. exports is a shorthand reference to module.exports. If you reassign exports (e.g., exports = { ... }), it breaks the link and your module won't export anything. Always use module.exports for direct reassignment.

// ✅ Works - mutating the object
exports.myFunc = () => {};

// ✅ Works - reassigning module.exports
module.exports = { myFunc: () => {} };

// ❌ BROKEN - reassigning exports breaks the reference
exports = { myFunc: () => {} }; // this does nothing

Q10. How do you read environment variables in Node.js?

Use process.env, an object containing all environment variables. For local development, load variables from a .env file using the dotenv package; in production, set real environment variables or use a secret manager.

// Reading directly
const port = process.env.PORT || 3000;

// Using dotenv (npm i dotenv)
require("dotenv").config(); // loads .env file into process.env
const dbUrl = process.env.DATABASE_URL;

Interview tip: Never commit .env to version control. Add it to .gitignore and use a .env.example file to document required variables.


Q11. What is the difference between `fs.readFile()` and `fs.createReadStream()`?

fs.readFile() loads the entire file into memory before invoking the callback. fs.createReadStream() reads the file in chunks, emitting data events as each chunk arrives. For large files (>100MB), streams are mandatory to avoid running out of memory.

// readFile - whole file in memory
fs.readFile("./large.log", (err, data) => {
  console.log(data.length); // entire file as Buffer
});

// createReadStream - chunks
const stream = fs.createReadStream("./large.log");
stream.on("data", (chunk) => {
  console.log(chunk.length); // chunk size (default 64KB)
});

Q12. How do you handle errors in async/await?

Wrap await calls in a try/catch block. For multiple async operations, you can use Promise.allSettled() to handle successes and failures together.

async function fetchUser(id) {
  try {
    const user = await db.findById(id);
    return user;
  } catch (err) {
    console.error("DB error:", err.message);
    throw err; // re-throw if caller needs to handle
  }
}

// Parallel requests with error handling
const results = await Promise.allSettled([
  fetchUser(1),
  fetchUser(2),
  fetchUser(999), // might fail
]);
results.forEach((result) => {
  if (result.status === "fulfilled") console.log(result.value);
  else console.error(result.reason);
});

Q13. What is `__dirname` and `__filename`?

__dirname is the absolute path of the directory containing the current file. __filename is the absolute path of the current file. Both are global variables in CommonJS modules. In ES Modules, use import.meta.url instead.

// CommonJS
console.log(__dirname);  // /home/user/project
console.log(__filename); // /home/user/project/server.js

// ES Modules equivalent
import { fileURLToPath } from "url";
import { dirname } from "path";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

Q14. What is `package-lock.json` and why is it important?

package-lock.json locks the exact versions of all dependencies (including transitive ones) installed in node_modules. This ensures everyone on the team - and CI/CD - installs identical versions, preventing "works on my machine" bugs.

// package.json
"dependencies": {
  "express": "^4.18.0"  // ← caret allows minor updates (4.x.x)
}

// package-lock.json
"express": {
  "version": "4.18.2",  // ← exact version locked
  "resolved": "https://registry.npmjs.org/express/-/express-4.18.2.tgz"
}

Interview tip: Always commit package-lock.json to version control. Use npm ci in CI/CD instead of npm install - it's faster and enforces the lockfile.


Q15. How do you create a simple HTTP server in Node.js without frameworks?

Use the built-in http module. Call http.createServer() with a request handler, then call .listen(port).

const http = require("http");

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "Hello, Node.js" }));
});

server.listen(3000, () => {
  console.log("Server running on port 3000");
});

Tier 2 - Intermediate Questions (Q16–Q35)

Intermediate questions test production awareness: you're expected to know patterns for authentication, database pooling, error boundaries, and deployment. These questions appear in live coding rounds and system design discussions.


Q16. How do you implement JWT-based authentication in Node.js?

Use jsonwebtoken to sign and verify tokens. On login, generate a JWT with user data; on protected routes, verify the token from the Authorization header.

const jwt = require("jsonwebtoken");
const SECRET = process.env.JWT_SECRET;

// Login route - issue JWT
app.post("/login", async (req, res) => {
  const { email, password } = req.body;
  const user = await db.findUserByEmail(email);
  if (!user || !bcrypt.compareSync(password, user.passwordHash)) {
    return res.status(401).json({ error: "Invalid credentials" });
  }
  const token = jwt.sign({ userId: user.id, email: user.email }, SECRET, {
    expiresIn: "7d",
  });
  res.json({ token });
});

// Middleware - verify JWT on protected routes
function requireAuth(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader) return res.status(401).json({ error: "No token" });
  const token = authHeader.split(" ")[1]; // "Bearer <token>"
  try {
    const decoded = jwt.verify(token, SECRET);
    req.user = decoded; // attach user data to request
    next();
  } catch (err) {
    res.status(401).json({ error: "Invalid token" });
  }
}

app.get("/profile", requireAuth, (req, res) => {
  res.json({ userId: req.user.userId });
});

Interview tip: Store the JWT in an httpOnly cookie instead of localStorage to prevent XSS attacks from stealing it.


Q17. What is CORS? How do you configure it in Express?

CORS (Cross-Origin Resource Sharing) is a browser security mechanism: by default the same-origin policy blocks a page from reading responses from a different origin, and the server must opt in by sending Access-Control-Allow-* headers. Configure it in Express using the cors middleware.

const cors = require("cors");

// Allow all origins (development only)
app.use(cors());

// Production - whitelist specific origins
app.use(
  cors({
    origin: ["https://myapp.com", "https://admin.myapp.com"],
    credentials: true, // allows cookies
  })
);

Q18. How do you connect to MongoDB from Node.js?

Use the mongodb driver or an ODM like mongoose. Create a client, connect to the server, and get a reference to the database.

// Native MongoDB driver
const { MongoClient } = require("mongodb");
const uri = process.env.MONGO_URI;
const client = new MongoClient(uri);

async function connectDB() {
  await client.connect();
  const db = client.db("myapp");
  console.log("Connected to MongoDB");
  return db;
}

// Mongoose (ODM)
const mongoose = require("mongoose");
await mongoose.connect(process.env.MONGO_URI);
const User = mongoose.model("User", { name: String, email: String });

Interview tip: Use connection pooling (default in both drivers) - don't create a new client per request.
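The "one client, reused everywhere" rule boils down to caching the connection promise at module scope. A generic sketch - connect() here is a stand-in for an expensive driver call such as MongoClient.connect():

```javascript
// connect() stands in for an expensive call like MongoClient.connect()
let connectCount = 0;
async function connect() {
  connectCount++;
  return { pooled: true, id: connectCount };
}

let clientPromise = null;

function getClient() {
  // Cache the *promise*, not the resolved client: concurrent callers that
  // arrive before the first connect resolves still share one connection.
  if (!clientPromise) clientPromise = connect();
  return clientPromise;
}
```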


Q19. What is the difference between `PUT` and `PATCH` in REST APIs?

PUT replaces the entire resource with the new data (full update). PATCH applies a partial update - only the fields included in the request are modified.

// PUT - replace entire user
app.put("/users/:id", (req, res) => {
  const user = { name: req.body.name, email: req.body.email }; // full object
  db.users.replaceOne({ _id: req.params.id }, user);
  res.json(user);
});

// PATCH - update only provided fields
app.patch("/users/:id", (req, res) => {
  const updates = {};
  if (req.body.name) updates.name = req.body.name;
  if (req.body.email) updates.email = req.body.email;
  db.users.updateOne({ _id: req.params.id }, { $set: updates });
  res.json(updates);
});

Q20. How do you validate request data in Express?

Use a validation library like joi or express-validator. Define a schema, validate the request body, and return errors if validation fails.

const Joi = require("joi");

const userSchema = Joi.object({
  name: Joi.string().min(3).required(),
  email: Joi.string().email().required(),
  age: Joi.number().integer().min(18),
});

app.post("/users", (req, res) => {
  const { error, value } = userSchema.validate(req.body);
  if (error) return res.status(400).json({ error: error.details[0].message });
  // value is sanitized and validated
  db.createUser(value);
  res.status(201).json(value);
});

Q21. What is rate limiting and how do you implement it?

Rate limiting restricts how many requests a client can make in a time window (e.g., 100 requests per 15 minutes). Use express-rate-limit to protect against DoS attacks and API abuse.

const rateLimit = require("express-rate-limit");

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // max 100 requests per window
  message: "Too many requests, please try again later.",
});

app.use("/api", limiter); // apply to all /api routes

Q22. How do you upload files in Node.js?

Use multer middleware to handle multipart/form-data. Configure storage (disk or memory) and file filters (MIME type, size).

const multer = require("multer");

const storage = multer.diskStorage({
  destination: "./uploads",
  filename: (req, file, cb) => {
    cb(null, `${Date.now()}-${file.originalname}`);
  },
});

const upload = multer({
  storage,
  limits: { fileSize: 5 * 1024 * 1024 }, // 5MB limit
  fileFilter: (req, file, cb) => {
    if (!file.mimetype.startsWith("image/")) {
      return cb(new Error("Only images allowed"));
    }
    cb(null, true);
  },
});

app.post("/upload", upload.single("avatar"), (req, res) => {
  res.json({ filename: req.file.filename });
});

Q23. What is the purpose of the `next()` function in middleware?

next() passes control to the next middleware in the stack. Without calling next(), the request will hang. If you call next(err), Express skips to the error-handling middleware.

app.use((req, res, next) => {
  if (!req.headers.authorization) {
    return next(new Error("Unauthorized")); // skip to error handler
  }
  next(); // continue to next middleware
});

// Error handler
app.use((err, req, res, next) => {
  res.status(500).json({ error: err.message });
});

Q24. How do you debug a Node.js application?

Use the built-in debugger with node --inspect, or attach Chrome DevTools. For production, use structured logging (pino, winston) and APM tools (New Relic, Datadog).

// Run with debugger
node --inspect server.js

// Add breakpoints in code
debugger; // execution pauses here when debugger attached

// Chrome DevTools: open chrome://inspect

// Production logging
const pino = require("pino");
const logger = pino({ level: "info" });
logger.info({ userId: 123 }, "User logged in");

Q25. What is the event emitter pattern? How do you use it?

The EventEmitter pattern lets objects emit named events and register listeners. Use events.EventEmitter to create custom event-driven objects.

const EventEmitter = require("events");

class UserService extends EventEmitter {
  createUser(data) {
    // create user logic
    this.emit("userCreated", { userId: data.id }); // emit event
  }
}

const service = new UserService();
service.on("userCreated", (data) => {
  console.log("User created:", data.userId);
  // send welcome email, trigger analytics, etc.
});

service.createUser({ id: 1, name: "Alice" });

Q26. How do you secure sensitive data like API keys in Node.js?

Store secrets in environment variables, never in code. Use .env files for local development and secret managers (AWS Secrets Manager, HashiCorp Vault) for production.

// .env file
API_KEY=abc123
DB_PASSWORD=secret

// server.js
require("dotenv").config();
const apiKey = process.env.API_KEY; // never hardcode

// .gitignore
.env

Interview tip: Use .env.example to document required env vars without exposing values.


Q27. What is the difference between synchronous and asynchronous functions?

Synchronous functions block execution until they complete. Asynchronous functions return immediately and use callbacks, Promises, or async/await to handle completion. Node.js is built for async I/O - blocking the event loop with synchronous calls kills performance.

// ❌ Synchronous - blocks event loop
const data = fs.readFileSync("./big-file.txt");

// ✅ Asynchronous - non-blocking
fs.readFile("./big-file.txt", (err, data) => {
  // callback fires when ready
});

// ✅ Async/await (still non-blocking)
const data = await fs.promises.readFile("./big-file.txt");

Q28. How do you implement pagination in a REST API?

Accept page and limit query parameters, calculate skip offset, and return results with metadata (total count, page count).

app.get("/users", async (req, res) => {
  const page = parseInt(req.query.page) || 1;
  const limit = parseInt(req.query.limit) || 10;
  const skip = (page - 1) * limit;

  const users = await db.users.find().skip(skip).limit(limit).toArray();
  const total = await db.users.countDocuments();

  res.json({
    data: users,
    page,
    limit,
    total,
    totalPages: Math.ceil(total / limit),
  });
});

Q29. What is clustering in Node.js? How does it improve performance?

Clustering spawns multiple Node.js processes (workers) that share the same server port, utilizing all CPU cores. Use the cluster module to fork workers - each handles requests independently, distributing load.

const cluster = require("cluster");
const os = require("os");
const http = require("http");

if (cluster.isPrimary) { // isMaster is the deprecated alias
  const numCPUs = os.cpus().length;
  console.log(`Primary ${process.pid} forking ${numCPUs} workers`);
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on("exit", (worker) => {
    console.log(`Worker ${worker.process.pid} died, forking new one`);
    cluster.fork(); // restart dead workers
  });
} else {
  const server = http.createServer((req, res) => {
    res.end(`Worker ${process.pid}`);
  });
  server.listen(3000);
}

Interview tip: In production, use PM2 instead of rolling your own cluster - it handles clustering, zero-downtime restarts, and log aggregation.


Q30. How do you handle uncaught exceptions and unhandled promise rejections?

Register global handlers for uncaughtException and unhandledRejection. Log the error, clean up resources, and exit gracefully - continuing execution after an uncaught exception is unsafe.

process.on("uncaughtException", (err) => {
  console.error("Uncaught exception:", err);
  process.exit(1); // exit to let process manager restart
});

process.on("unhandledRejection", (reason, promise) => {
  console.error("Unhandled rejection at:", promise, "reason:", reason);
  process.exit(1);
});

Interview tip: These are last-resort handlers. The real fix is wrapping async code in try/catch or using error-handling middleware in Express.


Q31. What is the purpose of `package.json` scripts?

Scripts automate common tasks: npm start, npm test, npm run build. Define them in the "scripts" field. pre and post hooks run before/after a script.

// package.json
{
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon server.js",
    "test": "jest",
    "pretest": "eslint .",  // runs before 'test'
    "build": "tsc"
  }
}

// Run: npm run dev

Q32. How do you implement logging in a production Node.js app?

Use structured logging with pino or winston. Log as JSON for parsing by log aggregators (ELK, Datadog). Include request IDs for tracing.

const pino = require("pino");
const logger = pino({ level: process.env.LOG_LEVEL || "info" });

app.use((req, res, next) => {
  req.id = crypto.randomUUID();
  req.log = logger.child({ requestId: req.id });
  req.log.info({ method: req.method, url: req.url }, "Incoming request");
  next();
});

// In route handlers
app.get("/users", async (req, res) => {
  req.log.info("Fetching users");
  const users = await db.getUsers();
  res.json(users);
});

Q33. What are child processes? When would you use them?

Child processes let you spawn separate Node processes or shell commands. Use child_process.spawn() for streaming data, .exec() for shell commands, .fork() for other Node scripts.

const { spawn } = require("child_process");

// Stream output from a shell command
const ls = spawn("ls", ["-lh", "/usr"]);
ls.stdout.on("data", (data) => {
  console.log(data.toString());
});

// Run another Node script
const { fork } = require("child_process");
const worker = fork("./worker.js");
worker.send({ task: "heavy-computation" });
worker.on("message", (result) => {
  console.log("Result:", result);
});

Q34. How do you prevent SQL injection in Node.js?

Use parameterized queries (prepared statements). Never concatenate user input into SQL strings. ORMs like Sequelize or query builders like Knex handle this automatically.

// ❌ VULNERABLE - SQL injection risk
const userId = req.query.id;
const query = `SELECT * FROM users WHERE id = ${userId}`; // never do this

// ✅ SAFE - parameterized query
const { id } = req.query;
const query = "SELECT * FROM users WHERE id = $1";
const result = await db.query(query, [id]); // params passed separately

Q35. How do you detect and fix memory leaks in Node.js?

Use --inspect and Chrome DevTools to take heap snapshots. Compare snapshots over time to find objects that aren't garbage collected. Common causes: global variables, event listeners not removed, closures holding references.

// Start with heap snapshot support
node --inspect --expose-gc server.js

// In Chrome DevTools (chrome://inspect):
// 1. Take heap snapshot
// 2. Perform actions (e.g., make requests)
// 3. Force GC with global.gc()
// 4. Take another snapshot
// 5. Compare - objects that grew = potential leaks

// Common leak pattern
const cache = {}; // global - never cleaned up
app.get("/data/:id", (req, res) => {
  cache[req.params.id] = fetchData(req.params.id); // grows forever
  res.json(cache[req.params.id]);
});

// Fix: use an LRU cache with a bounded size (lru-cache v7+ exports a named class)
const { LRUCache } = require("lru-cache");
const boundedCache = new LRUCache({ max: 1000 });

Tier 3 - Advanced Questions (Q36–Q50)

Advanced questions test production systems thinking. These come up in senior and staff-level interviews, where you're designing high-scale services or debugging performance bottlenecks.


Q36. What are Worker Threads? How do they differ from clustering?

Worker Threads (worker_threads) run JavaScript in parallel threads within a single process and can share memory via SharedArrayBuffer. Clustering spawns separate processes that don't share memory. Use worker threads for CPU-bound tasks (image processing, crypto); use clustering for horizontal scaling.

const { Worker } = require("worker_threads");

// Main thread
const worker = new Worker("./cpu-intensive.js", {
  workerData: { task: "fibonacci", n: 40 },
});
worker.on("message", (result) => {
  console.log("Result:", result);
});

// cpu-intensive.js (worker script)
const { parentPort, workerData } = require("worker_threads");
function fibonacci(n) {
  if (n < 2) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
const result = fibonacci(workerData.n);
parentPort.postMessage(result);

Interview tip: Worker threads can transfer ArrayBuffers between threads (a zero-copy move via the transferList) and share SharedArrayBuffers outright - critical for high-throughput workloads.


Q37. How do you implement a health check endpoint?

A health check endpoint lets orchestrators (Kubernetes, AWS ELB) verify your app is alive and ready to serve traffic. Check database connectivity, Redis, and critical dependencies.

app.get("/health", async (req, res) => {
  const checks = {
    uptime: process.uptime(),
    timestamp: Date.now(),
    database: "unknown",
    redis: "unknown",
  };

  try {
    await db.ping();
    checks.database = "ok";
  } catch (err) {
    checks.database = "error";
  }

  try {
    await redis.ping();
    checks.redis = "ok";
  } catch (err) {
    checks.redis = "error";
  }

  const isHealthy = checks.database === "ok" && checks.redis === "ok";
  res.status(isHealthy ? 200 : 503).json(checks);
});

Q38. What is backpressure in streams? How do you handle it?

Backpressure occurs when a writable stream can't consume data as fast as the readable stream produces it. The writable's buffer fills up and .write() returns false. Pause the readable until the drain event fires.

const readable = fs.createReadStream("input.txt");
const writable = fs.createWriteStream("output.txt");

readable.on("data", (chunk) => {
  const canContinue = writable.write(chunk);
  if (!canContinue) {
    readable.pause(); // stop reading - buffer full
  }
});

writable.on("drain", () => {
  readable.resume(); // buffer drained, continue reading
});

// Or just use .pipe() - handles backpressure automatically
// (note: .pipe() does NOT forward errors between streams)
readable.pipe(writable);

Q39. How do you implement graceful shutdown?

Graceful shutdown closes the server, waits for in-flight requests to finish, then closes DB connections and exits. Listen for SIGTERM and SIGINT signals.

const server = app.listen(3000);

process.on("SIGTERM", gracefulShutdown);
process.on("SIGINT", gracefulShutdown);

async function gracefulShutdown() {
  console.log("Shutting down gracefully...");
  server.close(async () => {
    console.log("HTTP server closed");
    await db.close();
    await redis.quit();
    console.log("Connections closed");
    process.exit(0);
  });

  // Force exit after 10s if stuck
  setTimeout(() => {
    console.error("Forcing shutdown");
    process.exit(1);
  }, 10000);
}

Q40. What is the difference between `setImmediate()` and `process.nextTick()` in I/O callbacks?

Inside an I/O callback, setImmediate() fires in the check phase (next iteration), while process.nextTick() fires before moving to the next phase (current iteration). nextTick can starve the event loop if overused.

fs.readFile("file.txt", () => {
  setImmediate(() => console.log("setImmediate"));
  process.nextTick(() => console.log("nextTick"));
});
// Output:
// nextTick
// setImmediate

Q41. How do you implement request timeout handling in Express?

Use connect-timeout middleware to set a max request duration. If the handler doesn't respond in time, send a 503 error.

const timeout = require("connect-timeout");

app.use(timeout("5s")); // 5 second timeout

app.get("/slow", async (req, res) => {
  await slowDatabaseQuery();
  if (req.timedout) return; // timed out mid-query - a response was already sent
  res.json({ ok: true });
});

// Error handler - on timeout, connect-timeout passes an error to next()
app.use((err, req, res, next) => {
  if (req.timedout) {
    res.status(503).json({ error: "Request timeout" });
  } else {
    next(err);
  }
});

Q42. What is a circuit breaker pattern? When would you use it?

A circuit breaker prevents cascading failures by stopping requests to a failing service. After N consecutive failures, it "opens" (stops calls), waits, then "half-opens" (tests with 1 request). Use it when calling unreliable external APIs.

const CircuitBreaker = require("opossum");

const options = {
  timeout: 3000, // max wait time
  errorThresholdPercentage: 50, // open after 50% errors
  resetTimeout: 30000, // retry after 30s
};

const breaker = new CircuitBreaker(fetchFromExternalAPI, options);

breaker.fallback(() => ({ data: "cached fallback" }));

app.get("/data", async (req, res) => {
  try {
    const result = await breaker.fire();
    res.json(result);
  } catch (err) {
    res.status(503).json({ error: "Service unavailable" });
  }
});
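The closed → open → half-open state machine that opossum implements can be hand-rolled in a few lines, which interviewers sometimes ask for. A minimal sketch (class name, thresholds, and defaults are illustrative, not opossum's API):

```javascript
// Minimal circuit breaker: CLOSED -> OPEN after N failures,
// OPEN -> HALF_OPEN after resetTimeout, HALF_OPEN -> CLOSED on one success
class SimpleBreaker {
  constructor(fn, { failureThreshold = 5, resetTimeout = 30000 } = {}) {
    this.fn = fn;
    this.failureThreshold = failureThreshold;
    this.resetTimeout = resetTimeout;
    this.failures = 0;
    this.state = "CLOSED";
    this.openedAt = 0;
  }

  async fire(...args) {
    if (this.state === "OPEN") {
      if (Date.now() - this.openedAt < this.resetTimeout) {
        throw new Error("Circuit open"); // fail fast, don't hit the failing service
      }
      this.state = "HALF_OPEN"; // let one test request through
    }
    try {
      const result = await this.fn(...args);
      this.failures = 0;
      this.state = "CLOSED";
      return result;
    } catch (err) {
      this.failures++;
      if (this.state === "HALF_OPEN" || this.failures >= this.failureThreshold) {
        this.state = "OPEN";
        this.openedAt = Date.now();
      }
      throw err;
    }
  }
}
```

In production, prefer a maintained library: opossum adds rolling error-rate windows, fallbacks, and metrics that this sketch omits.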

Q43. How do you implement distributed tracing in Node.js?

Use OpenTelemetry to instrument your app. It injects trace IDs into requests and exports spans to backends like Jaeger or Datadog. Traces show the full path of a request across microservices.

const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
const { registerInstrumentations } = require("@opentelemetry/instrumentation");
const { HttpInstrumentation } = require("@opentelemetry/instrumentation-http");
const { ExpressInstrumentation } = require("@opentelemetry/instrumentation-express");

const provider = new NodeTracerProvider();
provider.register();

registerInstrumentations({
  instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});

// HTTP and Express spans are now captured automatically - add an exporter
// (e.g. OTLP to Jaeger or Datadog) to actually ship them to a backend

Q44. What is the N+1 query problem? How do you fix it?

N+1 happens when you fetch a list of N items, then run a separate query for each item's related data (N additional queries). Fix: use JOIN or eager loading to fetch all data in 1 or 2 queries.

// ❌ N+1 problem - 1 query for posts + N queries for authors
const posts = await db.posts.find();
for (const post of posts) {
  post.author = await db.users.findById(post.authorId); // N queries
}

// ✅ Fixed with JOIN - 1 query
const posts = await db.query(`
  SELECT posts.*, users.name AS author_name
  FROM posts
  JOIN users ON posts.author_id = users.id
`);

// ✅ Fixed with DataLoader (batching)
const DataLoader = require("dataloader");
const userLoader = new DataLoader(async (ids) => {
  const users = await db.users.find({ _id: { $in: ids } });
  const byId = new Map(users.map((u) => [String(u._id), u]));
  return ids.map((id) => byId.get(String(id))); // DataLoader requires results in input order
});
for (const post of posts) {
  post.author = await userLoader.load(post.authorId); // batched into 1 query
}

Q45. How do you implement caching in Node.js?

Use in-memory caching (LRU cache) for hot data, Redis for shared cache across instances. Set TTLs to avoid stale data.

const { LRUCache } = require("lru-cache"); // lru-cache v10+ exports a named class
const cache = new LRUCache({ max: 500, ttl: 1000 * 60 * 5 }); // 5 min TTL

app.get("/users/:id", async (req, res) => {
  const cached = cache.get(req.params.id);
  if (cached) return res.json(cached);

  const user = await db.getUser(req.params.id);
  cache.set(req.params.id, user);
  res.json(user);
});

// Redis cache (shared across workers)
const Redis = require("ioredis");
const redis = new Redis();
const cachedUser = await redis.get(`user:${id}`);
if (cachedUser) return JSON.parse(cachedUser);
const user = await db.getUser(id);
await redis.setex(`user:${id}`, 300, JSON.stringify(user)); // 5 min

Q46. What are the differences between HTTP/1.1, HTTP/2, and HTTP/3?

Feature                  HTTP/1.1           HTTP/2                HTTP/3
Transport                TCP                TCP                   QUIC (UDP)
Multiplexing             No (1 req/conn)    Yes (many req/conn)   Yes
Header compression       No                 Yes (HPACK)           Yes (QPACK)
Head-of-line blocking    Yes                Partially             No
TLS                      Optional           Required              Required

Node.js supports HTTP/2 natively via http2 module. HTTP/3 requires experimental flags or third-party libraries.

const fs = require("fs");
const http2 = require("http2");
const server = http2.createSecureServer({
  key: fs.readFileSync("key.pem"),
  cert: fs.readFileSync("cert.pem"),
});
server.on("stream", (stream, headers) => {
  stream.respond({ ":status": 200 });
  stream.end("Hello HTTP/2");
});
server.listen(3000);

Q47. What is AsyncLocalStorage and when would you use it?

AsyncLocalStorage provides request-scoped context that propagates through async calls without passing variables. Use it for request IDs, user context, or tracing data.

const { AsyncLocalStorage } = require("async_hooks");
const crypto = require("crypto"); // randomUUID is also global in Node 19+
const requestContext = new AsyncLocalStorage();

// Middleware - starts the context store for each request
app.use((req, res, next) => {
  const store = { requestId: req.headers["x-request-id"] || crypto.randomUUID() };
  requestContext.run(store, next); // all async code in this request sees this store
});

// Deep in your call stack - no prop drilling needed
function logQuery(sql) {
  const { requestId } = requestContext.getStore() ?? {}; // store is undefined outside a request
  logger.info({ requestId, sql }, "DB query");
}

Q48. How do you implement a custom stream Transform class?

const fs = require("fs");
const { Transform } = require("stream");

// Transform that parses NDJSON (newline-delimited JSON) line by line
class NDJSONParser extends Transform {
  constructor() {
    super({ objectMode: true });
    this._buffer = "";
  }

  _transform(chunk, encoding, callback) {
    this._buffer += chunk.toString();
    const lines = this._buffer.split("\n");
    this._buffer = lines.pop(); // keep incomplete last line

    for (const line of lines) {
      if (line.trim()) {
        try {
          this.push(JSON.parse(line));
        } catch (err) {
          callback(err);
          return;
        }
      }
    }
    callback();
  }

  _flush(callback) {
    if (this._buffer.trim()) {
      try {
        this.push(JSON.parse(this._buffer));
      } catch (err) {
        return callback(err);
      }
    }
    callback();
  }
}

fs.createReadStream("data.ndjson")
  .pipe(new NDJSONParser())
  .on("data", (obj) => console.log(obj));

Q49. What are N-API addons and when would you write a native Node.js module?

N-API (Node-API) is the stable ABI for writing native C/C++ modules that integrate with Node.js. Addons are version-independent - an N-API addon compiled for Node 18 works on Node 22 without recompiling. You'd write one when you need CPU-intensive computation (image processing, ML inference), access to a C library with no JS binding, or performance-critical hot paths where JavaScript's overhead is measurable.

// hello.c - minimal N-API addon
#include <node_api.h>

napi_value Hello(napi_env env, napi_callback_info info) {
  napi_value result;
  napi_create_string_utf8(env, "Hello from native code!", NAPI_AUTO_LENGTH, &result);
  return result;
}

NAPI_MODULE_INIT() {
  napi_value fn;
  napi_create_function(env, NULL, 0, Hello, NULL, &fn);
  napi_set_named_property(env, exports, "hello", fn);
  return exports;
}

Interview tip: Most of the time, WebAssembly is a better alternative to N-API for portability. Reach for N-API only when you need direct OS or hardware access.


Q50. How would you architect a high-throughput Node.js service handling 100k req/sec?

At 100k req/sec, the bottleneck is almost never Node.js itself - it's the database, the network, or inter-service serialization. A single Node.js core can serve tens of thousands of simple requests per second in benchmarks. The architectural moves that actually matter are: connection pooling, in-process caching, horizontal scaling, and async job offloading.

A 100k req/sec architecture requires multiple layers working together:

Load Balancer (nginx / AWS ALB)
        │
   ┌────┴────┐
Worker 1  Worker 2 ... Worker N    ← PM2 cluster or k8s pods
   │
App Layer
  ├── In-process LRU cache (lru-cache) - cache hot reads in-memory
  ├── Redis cache (ioredis) - shared cache across workers
  ├── DB connection pool (pg Pool, max: 20) - don't over-pool
  └── Async queue (BullMQ) - offload non-real-time work

Observability
  ├── OpenTelemetry distributed tracing
  ├── Prometheus metrics (prom-client)
  └── Event loop lag monitoring (perf_hooks)

const { LRUCache } = require("lru-cache"); // lru-cache v10+ exports a named class

// In-process cache for ultra-hot reads (user profiles, config)
const memCache = new LRUCache({ max: 10_000, ttl: 1000 * 60 }); // 1 min TTL

async function getUser(id) {
  const cached = memCache.get(id);
  if (cached) return cached;

  const redisVal = await redis.get(`user:${id}`);
  if (redisVal) {
    const user = JSON.parse(redisVal);
    memCache.set(id, user);
    return user;
  }

  const user = await db.getUser(id);
  await redis.setex(`user:${id}`, 300, JSON.stringify(user)); // 5 min Redis TTL
  memCache.set(id, user);
  return user;
}

Frequently Asked Questions

How long does it take to prepare for a Node.js interview?

For junior to mid-level roles, 2–4 weeks of structured daily practice covers the Tier 1 and Tier 2 questions in this guide. Focus on writing code from memory, not just recognizing answers - interviewers at product companies typically ask you to implement patterns live.

What is the most commonly asked Node.js interview question?

The event loop (Q2) is the #1 topic - almost every Node.js interview starts there. Interviewers use it to quickly assess whether a candidate understands why Node.js is non-blocking. If you only prepare one answer cold, prepare Q2.

Do I need to know TypeScript for Node.js interviews in 2026?

TypeScript is now expected at most product companies for senior roles - 78% of JS developers use TypeScript according to the State of JS 2024. For junior roles it varies, but knowing basic TS generics, interfaces, and tsc config puts you ahead. None of the code patterns in this guide change fundamentally with TypeScript - the concepts are identical.

What's the difference between junior and senior Node.js interview questions?

Junior questions test concept recognition: event loop, streams, error handling, Express middleware. Senior questions test production reasoning: how you'd scale a service under load, how you'd debug a memory leak in production, how you'd design for zero-downtime deploys. Tiers 1 and 2 of this guide cover junior; Tier 3 covers senior and staff-level.

Is Node.js still in demand in 2026?

Yes - Node.js is the top backend runtime for 33.9% of API developers and was downloaded 271 million times in December 2024 alone (Node.js Metrics, 2025). The ecosystem has matured with TypeScript-first frameworks (Hono, Fastify, NestJS) and edge runtime support (Cloudflare Workers uses V8 isolates). Demand is stable and growing, particularly in backend, full-stack, and DevOps tooling roles.

Node.js vs Bun vs Deno comparison 2026 → comparison guide for modern JavaScript runtimes


Conclusion

These 50 questions cover the full arc of what Node.js interviewers test in 2026 - from "explain the event loop" in a phone screen to "design a circuit breaker" in a system design round. The pattern that separates candidates is consistent: understanding the *why* behind Node's single-threaded model unlocks every downstream question about streams, clusters, worker threads, and performance.

Work through each tier in order. For each code snippet, close this guide and write it from scratch - interview boards are live environments, not open-book tests.

Based on analysis of Node.js interview patterns across common job postings and community reports: Q2 (event loop), Q18 (JWT auth), Q29 (clustering), and Q35 (memory leaks) appear in over 70% of backend Node.js interview loops at mid-to-senior level.

full-stack interview preparation guide → comprehensive roadmap for backend and full-stack developer interview preparation in 2026


Sources: Stack Overflow Developer Survey 2025 · Node.js Download Metrics · Brilworks Node.js Statistics · Radixweb Node.js Statistics

Author: Abhijeet Kushwaha | Last updated: April 2026
