44.7% of developers use React - master 30 advanced interview questions on Fiber internals, Concurrent Mode, Server Components, performance patterns, and React 19 for senior roles.
Senior and lead React roles demand far more than hooks knowledge. This guide covers 30 advanced React interview questions experienced developers (3–8+ years) face at product companies and FAANG - Fiber architecture, Concurrent Mode, React Server Components, advanced state management, TypeScript patterns, and React 19 features. Every question includes a production-level code example and an interviewer insight on what's actually being tested.
React is used by 44.7% of all professional developers, the most-used frontend framework for six straight years (Stack Overflow Developer Survey 2025). Senior React developer roles in the US now command average salaries between $128,400 and $145,000, with top earners exceeding $223,718 (ZipRecruiter, February 2026). That compensation reflects real complexity: senior interviews don't test whether you can use useState - they test whether you understand why the reconciler works the way it does, when Server Components eliminate client-side JavaScript entirely, and how to architect a React application that scales across a team of 20+ engineers.
This guide covers 30 advanced questions across six topic areas - the questions that separate candidates who've read the docs from those who've shipped real production systems. Every question includes a production-level code example and an interviewer note on what's actually being tested. All content is free, no paywall.
Key Takeaways
React 19 (stable since December 2024) is already adopted by 48.4% of daily React users - expect React 19 questions at every senior interview (State of React 2025)
Fiber's two-phase architecture (render phase + commit phase) underpins every performance question - understand it cold
React Server Components ship zero client JavaScript for server-rendered subtrees; the RSC payload is a JSON-like stream, not HTML
useTransition, useDeferredValue, and Suspense are the concurrent rendering APIs every senior candidate must explain with code
Looking to brush up on hooks fundamentals first? See the 40+ React Hooks Interview Questions and Answers (2026) guide for useState, useEffect, and custom hook patterns.
Fiber is the engine behind everything React does. Experienced developers get asked Fiber questions not because interviewers want trivia, but because understanding how the reconciler works explains why useEffect runs after paint, why concurrent updates can be dropped and restarted, and why your component function can be called multiple times before its output reaches the DOM. If you've only used React without reading the source, this section is where senior interviews create distance between candidates.

React Fiber is the complete rewrite of React's reconciler, shipped in React 16. Before Fiber, reconciliation was a single synchronous recursive call-stack walk - it couldn't be paused, so complex UI updates caused jank. Fiber replaced that recursive stack with a linked list of fiber nodes (plain JavaScript objects), each representing one unit of work. The reconciler processes one fiber at a time, can yield to the browser's event loop after any node, and resume later - enabling time-slicing and priority scheduling.
Each fiber node stores the component type, its props and state, a reference to the DOM node it owns, effect tags (what DOM mutations are needed), and pointers to its parent, child, and sibling fibers.
// Simplified mental model of a Fiber node
interface Fiber {
type: string | Function; // 'div' | MyComponent
key: string | null;
stateNode: HTMLElement | null; // the actual DOM node
child: Fiber | null; // first child
sibling: Fiber | null; // next sibling
return: Fiber | null; // parent fiber
pendingProps: object;
memoizedProps: object;
memoizedState: object | null;
effectTag: number; // INSERT | UPDATE | DELETE bitmask
alternate: Fiber | null; // previous version (current tree)
}

Interview tip: Interviewers want to know why Fiber exists - not just that it's a rewrite. Lead with "the original synchronous stack reconciler couldn't be interrupted," then explain how the linked-list structure enables concurrent rendering.
These two phases are distinct in React's architecture. Reconciliation (the render phase) is purely computational: React traverses the fiber tree, calls your component functions, diffs the new output against the previous tree, and builds a list of effects. This phase is pure - no DOM mutations happen - and in Concurrent Mode it can be interrupted and restarted.
Rendering (the commit phase) applies those effects to the real DOM. It runs synchronously and can't be interrupted, because partially updating the DOM would leave the UI in an inconsistent state. The commit phase also fires useLayoutEffect callbacks (after DOM mutation, before browser paint) and useEffect callbacks (after browser paint).
Render phase → interrupt-safe: walk fiber tree, compute diffs
→ useMemo, useState reads happen here
→ may be called multiple times before commit
Commit phase → synchronous and uninterruptible: apply DOM mutations
→ useLayoutEffect fires here (after DOM, before paint)
→ useEffect fires here (after paint)

Interview tip: Most candidates know "diffing" happens but can't name the two phases or explain what "interrupt-safe" means. Mentioning that the render phase can run multiple times before a single commit (due to Concurrent Mode) is what separates senior-level answers.
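One development-time way to observe "the render body may run more than once per commit" is React's StrictMode, which intentionally double-invokes component functions to surface impure renders. This is a dev-only behavior distinct from concurrent restarts, but it enforces the same purity requirement; a minimal sketch (component name and counter are illustrative):

```typescript
// StrictMode (development only) double-invokes render bodies.
// The counter increments in the render phase; the effect fires once
// per commit, so in dev the render count typically exceeds the
// commit count - proof the two phases are decoupled.
import { StrictMode, useEffect } from "react";

let renderCalls = 0; // module-level counter, for demonstration only

function Probe() {
  renderCalls++; // render phase: may execute more than once per commit
  useEffect(() => {
    // commit phase: runs once per actual commit
    console.log(`render calls so far: ${renderCalls}`);
  }, []);
  return null;
}

export function App() {
  return (
    <StrictMode>
      <Probe />
    </StrictMode>
  );
}
```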
In legacy (blocking) mode, React processes the entire fiber tree in one synchronous pass. If that pass takes 200ms, the main thread is blocked for 200ms - no user input, no animations, no paint. Concurrent Mode breaks work into interruptible units, scheduled against the browser's idle capacity. React uses its internal scheduler to assign priorities: user input gets the highest priority, background data fetching gets the lowest.
The key behavioral change: in Concurrent Mode, React may start rendering a new state, discard that work if a higher-priority update arrives, then resume from scratch. Your component functions must be pure - no side effects in the render body - because they can be called multiple times before output is committed.
// Concurrent Mode is the default in React 18+ via createRoot
import { createRoot } from "react-dom/client";
const root = createRoot(document.getElementById("root"));
root.render(<App />);
// Legacy blocking mode (React 17 and below, or ReactDOM.render)
// ReactDOM.render(<App />, document.getElementById('root'));
// ↑ Deprecated in React 18, removed in React 19

Interview tip: If asked "what does Concurrent Mode enable?", name three concrete features: useTransition for non-urgent updates, useDeferredValue for deferring stale data display, and Suspense for streaming SSR. Abstract answers about "better performance" won't pass a senior screen.
Both APIs mark updates as non-urgent so React can prioritize user-critical interactions, but they operate on different targets. useTransition wraps a state setter call - it marks the resulting update as low-priority while React keeps the previous UI visible. useDeferredValue wraps a value - it returns a lagged copy of that value during urgent updates.
Use useTransition when you control the trigger (you write the setState call). Use useDeferredValue when you receive a value you don't control - a prop or a value from a third-party store.
// useTransition: you control the setter
function SearchPage() {
const [query, setQuery] = useState('');
const [results, setResults] = useState([]);
const [isPending, startTransition] = useTransition();
function handleChange(e) {
setQuery(e.target.value); // urgent: update input immediately
startTransition(() => {
setResults(expensiveSearch(e.target.value)); // non-urgent: defer result render
});
}
return (
<>
<input value={query} onChange={handleChange} />
{isPending ? <Spinner /> : <ResultsList results={results} />}
</>
);
}
// useDeferredValue: you receive the value
function FilteredList({ query }: { query: string }) {
const deferredQuery = useDeferredValue(query); // lags behind query during typing
const results = useMemo(() => expensiveFilter(deferredQuery), [deferredQuery]);
return <ResultsList results={results} />;
}

Interview tip: The isPending flag from useTransition lets you show a loading indicator without a Suspense boundary. That's the practical win senior interviewers look for - not just "it defers work."
Suspense catches promises thrown during rendering. When a component "suspends," it throws a Promise. React catches it at the nearest boundary, renders the fallback UI, watches the thrown promise, and when it resolves, re-renders the suspended subtree. In React 19, Suspense is deeply integrated with Server Components and streaming SSR - the server flushes the HTML shell immediately and streams suspended subtrees as they resolve.
// React 19: async Server Component + Suspense (Next.js App Router)
// app/components/UserProfile.tsx (Server Component)
async function UserProfile({ id }: { id: string }) {
const user = await fetchUser(id); // awaits directly - no useEffect needed
return <div className="profile">{user.name}</div>;
}
// app/page.tsx
export default function Page() {
return (
<Suspense fallback={<ProfileSkeleton />}>
<UserProfile id="123" /> {/* suspends until fetchUser resolves */}
</Suspense>
);
}

Interview tip: Most candidates describe Suspense as "for lazy loading components." At the senior level, you need to know it's a general mechanism for any asynchronous value - data, images, code - and that React 19's use() hook is the clean way to throw promises inside components without manual boilerplate.
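The throw-and-retry control flow can be sketched in plain TypeScript. This is a mental model only - React does this against the fiber tree, not with a try/catch around a function call:

```typescript
// Mental model of a Suspense boundary: attempt the render, catch a
// thrown promise, show the fallback, await the promise, then retry.
async function renderWithSuspense<T>(
  render: () => T,
  onFallback: () => void,
): Promise<T> {
  try {
    return render(); // happy path: subtree renders synchronously
  } catch (thrown) {
    if (thrown instanceof Promise) {
      onFallback(); // boundary shows its fallback UI
      await thrown; // wait for the suspended data
      return render(); // re-render the subtree after resolution
    }
    throw thrown; // real errors propagate to the nearest error boundary
  }
}
```

A data source that isn't ready throws its pending promise; once the promise resolves, the retried render succeeds.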
React Server Components are the most architecturally significant React feature since hooks. By early 2026, 48.4% of daily React users are already on React 19, where RSC is a first-class primitive (State of React 2025). Senior engineers are expected to understand the boundary between server and client execution - not just how to use the 'use client' directive, but why it exists and what it means for the JavaScript payload that ships to the browser.

Server-Side Rendering (SSR) renders your component tree to HTML on the server, sends that HTML to the browser, then ships the same JavaScript bundle to "hydrate" the HTML into a live React tree. Both server and client run the same component code. React Server Components (RSC) are fundamentally different: Server Components run only on the server and are never hydrated. Zero JavaScript is sent to the client for them - they can import a 200KB Markdown parser, use it, and return rendered output without shipping the parser to the browser.
RSC and SSR are complementary, not competing. Next.js App Router uses both: SSR generates HTML for a fast first paint; RSC controls which components contribute to the client JS bundle.
Traditional SSR:
Server: render all components → HTML string
Client: receive HTML + full JS bundle → hydrate (run all components again in browser)
RSC model:
Server: render Server Components → RSC payload (JSON stream)
Client: receive RSC payload → render Client Components → hydrate Client Components only
Result: Server Components contribute zero JavaScript to the client bundle

Interview tip: The key differentiator is hydration. SSR hydrates everything; RSC means Server Components are never hydrated. If you can also explain what the "RSC payload" is (a serialized stream of React element descriptions, not HTML), you've demonstrated understanding above the marketing layer.
'use client' at the top of a file marks it as a Client Component boundary. Everything imported into that file becomes part of the client bundle too - the label propagates down the import tree. Across this boundary, you can only pass serializable props: strings, numbers, booleans, plain objects, arrays, and null. You cannot pass functions, class instances, Map, Set, or non-serializable values from a Server Component to a Client Component as props.
// components/Chart.tsx - needs browser canvas API
"use client";
import { useEffect, useRef } from "react";
export function Chart({ data }: { data: number[] }) {
const canvasRef = useRef<HTMLCanvasElement>(null);
useEffect(() => {
drawChart(canvasRef.current!, data);
}, [data]);
return <canvas ref={canvasRef} />;
}
// app/page.tsx - Server Component
import { Chart } from "../components/Chart";
import { fetchChartData } from "../lib/db"; // server-only: never ships to browser
export default async function Page() {
const data = await fetchChartData(); // runs on server only
return <Chart data={data} />; // data is a number[] - serializable ✓
}

Interview tip: A common mistake - passing a Server Component function reference as a prop to a Client Component. You can't. Instead, pass the rendered output (JSX) as children. JSX is serialized in the RSC payload and can safely cross the boundary.
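The children workaround looks like this in practice - a sketch where the component and fetch names (Collapsible, ReportTable, fetchReport) are illustrative, not from the original:

```typescript
// components/Collapsible.tsx - Client Component that receives
// server-rendered JSX as children (serialized in the RSC payload).
"use client";
import { useState } from "react";

export function Collapsible({ children }: { children: React.ReactNode }) {
  const [open, setOpen] = useState(true);
  return (
    <section>
      <button onClick={() => setOpen((o) => !o)}>Toggle</button>
      {open && children} {/* already-rendered server output */}
    </section>
  );
}

// app/page.tsx - Server Component passes rendered output, not a
// component function, across the boundary.
export default async function Page() {
  const report = await fetchReport(); // hypothetical server-only fetch
  return (
    <Collapsible>
      <ReportTable data={report} /> {/* stays a Server Component */}
    </Collapsible>
  );
}
```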
Server Actions are async functions marked 'use server' that execute exclusively on the server but can be called from Client Components as if they were regular functions. Next.js generates a POST endpoint for each action; when a Client Component invokes it, the framework makes a network request, runs the function server-side with full access to databases and environment variables, and returns the result. They replace manually written API routes for mutations and form submissions.
// lib/actions.ts
"use server";
import { revalidatePath } from "next/cache";
import { db } from "./db";
export async function createPost(formData: FormData) {
const title = formData.get("title") as string;
if (!title?.trim()) throw new Error("Title is required"); // always validate
await db.post.create({ data: { title } });
revalidatePath("/posts");
}
// components/CreatePostForm.tsx
"use client";
import { createPost } from "../lib/actions";
export function CreatePostForm() {
return (
<form action={createPost}>
{/* React 19: pass Server Action directly to action prop */}
<input name="title" placeholder="Post title" required />
<button type="submit">Create</button>
</form>
);
}

Interview tip: Interviewers at security-conscious companies check whether you know the implication: Server Actions run with full server privileges. Always validate and sanitize inputs inside the action - never trust the incoming FormData payload.
The RSC payload is a line-delimited JSON stream that describes the React element tree produced by Server Components. It's neither HTML nor a JavaScript bundle - it's a compact serialized representation that React's client runtime reassembles into a fiber tree. Each line represents a component, its props, or a "hole" where a suspended subtree will stream in later.
When Next.js uses RSC with streaming SSR, the server flushes the HTML shell for an immediate first paint, then streams RSC payload chunks as server data resolves. React's client runtime processes these chunks incrementally - this is how React shows a skeleton and progressively fills in content without a full client re-render.
HTTP response headers: Content-Type: text/x-component
Streaming payload (each line is a chunk):
0:["$","div",null,{"children":["$","h1",null,{"children":"Dashboard"}]}]
1:["$","section",null,{"children":"$L2"}] ← hole: suspended subtree
2:["$","ul",null,{"children":[...rows...]}] ← streamed in when DB query resolves

Our finding: Teams that misunderstand the RSC payload often try to "inspect" it with browser devtools looking for HTML - it appears as raw text. Knowing it's a fiber-tree description (not HTML) explains why Server Components can be updated independently without a full page re-render.
Interview tip: Many candidates know RSC "ships less JavaScript" but can't explain the mechanism. Saying "the RSC payload is a JSON stream React's client runtime uses to reconstruct the fiber tree without re-running Server Components" demonstrates genuine depth.
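A toy parser makes the "id:json" line shape concrete. This mirrors the payload lines shown above conceptually; React's real wire protocol has more row types and is an implementation detail:

```typescript
// Each RSC payload line is "<id>:<json>". Holes like "$L2" inside the
// JSON reference rows that stream in later. This toy parser splits and
// decodes one line - a mental model, not React's actual parser.
function parsePayloadLine(line: string): { id: string; chunk: unknown } {
  const sep = line.indexOf(":");
  if (sep === -1) throw new Error(`malformed payload line: ${line}`);
  return {
    id: line.slice(0, sep), // row id, e.g. "0" or "2"
    chunk: JSON.parse(line.slice(sep + 1)), // serialized element description
  };
}
```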
You can pass serializable data as props: strings, numbers, booleans, plain objects, arrays, and null. For complex shared state, the cleanest production pattern is to fetch server-side data in a Server Component, then hydrate a Client-side context provider so Client Components below can subscribe.
// Pattern 1: Typed prop drilling (simple data)
async function Page() {
const products: Product[] = await fetchProducts(); // server-only
return <ProductGrid products={products} />; // serializable array ✓
}
// Pattern 2: Hydrating a Client context from server data
// providers/CartProvider.tsx
"use client";
export const CartContext = createContext<CartItem[]>([]);
export function CartProvider({
initialCart,
children,
}: {
initialCart: CartItem[];
children: React.ReactNode;
}) {
const [cart, setCart] = useState(initialCart);
return <CartContext.Provider value={cart}>{children}</CartContext.Provider>;
}
// app/layout.tsx (Server Component)
async function RootLayout({ children }: { children: React.ReactNode }) {
const cart = await getCartFromSession(); // runs on server, never in browser
return <CartProvider initialCart={cart}>{children}</CartProvider>;
}

Interview tip: A tricky follow-up: "Can you pass JSX as a prop to a Client Component from a Server Component?" Yes - JSX is serialized as part of the RSC payload, and children is JSX. This is exactly how "Server Component as children of Client Component" works.
State management questions at senior interviews rarely ask "what is Redux?" They ask how you make a decision - when a local reducer is better than a global store, how you prevent Context from re-rendering 200 components when one field changes, and how you implement patterns like optimistic updates that users now expect as table stakes. These questions test engineering judgment, not library familiarity.

Choose useReducer when state transitions are complex (multiple sub-values that change together), when the next state depends on the previous in non-trivial ways, or when you want to centralize and test update logic in isolation - the reducer is a pure function, no React dependency needed. useState is better for simple, independent values. The crossover point: if you find yourself writing three or more related useState calls that all update together, consolidate them into a reducer.
type State = {
status: "idle" | "loading" | "success" | "error";
data: User | null;
error: string | null;
};
type Action =
| { type: "FETCH_START" }
| { type: "FETCH_SUCCESS"; payload: User }
| { type: "FETCH_ERROR"; error: string };
function reducer(state: State, action: Action): State {
switch (action.type) {
case "FETCH_START":
return { status: "loading", data: null, error: null };
case "FETCH_SUCCESS":
return { status: "success", data: action.payload, error: null };
case "FETCH_ERROR":
return { status: "error", data: null, error: action.error };
default:
return state;
}
}
function UserProfile({ id }: { id: string }) {
const [state, dispatch] = useReducer(reducer, {
status: "idle",
data: null,
error: null,
});
// All state transitions go through dispatch - easy to test, easy to trace
}

Interview tip: If asked to compare useReducer to Redux, the answer is: useReducer is component-scoped; Redux is global. Use useReducer for local complex state, Zustand or Redux when multiple components across the tree need the same state.
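Because the reducer is a pure function with no React dependency, "centralize and test update logic" is literal - it can be exercised with plain assertions, no renderer. A minimal sketch, repeating the types for self-containment (the User shape is assumed):

```typescript
// The fetch-lifecycle reducer, tested as a plain function.
type User = { id: string; name: string }; // assumed shape
type State = {
  status: "idle" | "loading" | "success" | "error";
  data: User | null;
  error: string | null;
};
type Action =
  | { type: "FETCH_START" }
  | { type: "FETCH_SUCCESS"; payload: User }
  | { type: "FETCH_ERROR"; error: string };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "FETCH_START":
      return { status: "loading", data: null, error: null };
    case "FETCH_SUCCESS":
      return { status: "success", data: action.payload, error: null };
    case "FETCH_ERROR":
      return { status: "error", data: null, error: action.error };
    default:
      return state;
  }
}

// Drive a full fetch lifecycle through the reducer - no React needed:
const idle: State = { status: "idle", data: null, error: null };
const loading = reducer(idle, { type: "FETCH_START" });
const success = reducer(loading, {
  type: "FETCH_SUCCESS",
  payload: { id: "1", name: "Ada" },
});
```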
React Context re-renders every component that calls useContext whenever the context value changes - even if that component only uses one field of the value object. The four main strategies at scale: (1) split contexts by update frequency - user data context separate from auth methods; (2) memoize the value object with useMemo so reference equality prevents spurious re-renders on parent re-renders; (3) use a selector-based library like Zustand or Jotai where components only re-render when their subscribed slice changes; (4) memo + context - wrap stable children in React.memo.
// Anti-pattern: one fat context re-renders all consumers on any field change
const AppContext = createContext({
user,
theme,
cart,
preferences,
notifications,
});
// Better: split by update frequency
const UserContext = createContext<User | null>(null); // changes on login/logout
const ThemeContext = createContext<"light" | "dark">("light"); // rarely changes
// Memoize the value object to prevent reference-change re-renders
function UserProvider({ children }: { children: React.ReactNode }) {
const [user, setUser] = useState<User | null>(null);
const value = useMemo(() => ({ user, setUser }), [user]); // stable when user unchanged
return <UserContext.Provider value={value}>{children}</UserContext.Provider>;
}

Interview tip: If asked "how do you share state efficiently across 50 components?", the answer isn't "one big context." It's Zustand or Jotai with subscriptions - they only re-render components subscribed to the exact slice of state that changed.
Both are production-quality, but Zustand's API is dramatically simpler - a complete store in ~10 lines versus Redux Toolkit's slices, actions, and selectors. Zustand stores are plain JavaScript closures, so they work outside React (in WebSocket handlers, utility functions, or API clients) without the useDispatch/useSelector overhead. It also doesn't require a Provider wrapper. Redux Toolkit remains the right choice when teams need strict action traceability, Redux DevTools time-travel debugging, or have existing Redux infrastructure.
// Zustand: complete auth store in under 25 lines
import { create } from 'zustand';
interface AuthStore {
user: User | null;
isLoading: boolean;
login: (credentials: Credentials) => Promise<void>;
logout: () => void;
}
export const useAuthStore = create<AuthStore>((set) => ({
user: null,
isLoading: false,
login: async (credentials) => {
set({ isLoading: true });
const user = await loginApi(credentials);
set({ user, isLoading: false });
},
logout: () => set({ user: null }),
}));
// In a component - no Provider, no connect(), no dispatch
function Header() {
const { user, logout } = useAuthStore();
return user ? <button onClick={logout}>{user.name}</button> : null;
}

Interview tip: Mention that Zustand subscriptions are selector-based - useAuthStore(state => state.user) only re-renders when user changes, not on any store update. That's the key performance advantage over plain Context at scale.
An optimistic update immediately reflects the expected result in the UI before the server responds, then rolls back if the server returns an error. The three-step pattern: update local state immediately, fire the API call, and on error revert state and surface the failure. React 19 ships useOptimistic as a first-class primitive - React handles the automatic revert when the wrapping async function throws.
// React 19: useOptimistic for form-driven mutations
"use client";
import { useOptimistic } from "react";
import { likePost } from "@/actions/post";
function LikeButton({
postId,
initialLikes,
}: {
postId: string;
initialLikes: number;
}) {
const [optimisticLikes, addOptimisticLike] = useOptimistic(
initialLikes,
(current, increment: number) => current + increment,
);
async function handleLike() {
addOptimisticLike(1); // immediate UI update - no await
await likePost(postId); // Server Action - if this throws, React reverts automatically
}
return (
<button
onClick={handleLike}
aria-label={`Like post (${optimisticLikes} likes)`}
>
♡ {optimisticLikes}
</button>
);
}

Interview tip: With useOptimistic, React handles rollbacks automatically when the Server Action throws - the optimistic state reverts to initialLikes. With manual useState, you have to save the previous value and restore it in a catch block. Interviewers want to know you've handled the failure path.
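For comparison, the manual useState version: snapshot the previous value, update optimistically, restore in the catch block. A sketch - showErrorToast is a hypothetical error surface, and likePost is the same Server Action as above:

```typescript
// Pre-React-19 pattern: manual optimistic update with explicit rollback.
"use client";
import { useState } from "react";
import { likePost } from "@/actions/post";

function LikeButtonManual({
  postId,
  initialLikes,
}: {
  postId: string;
  initialLikes: number;
}) {
  const [likes, setLikes] = useState(initialLikes);

  async function handleLike() {
    const previous = likes; // snapshot for rollback
    setLikes(previous + 1); // optimistic update - no await
    try {
      await likePost(postId); // server mutation
    } catch {
      setLikes(previous); // rollback on failure
      showErrorToast("Could not like post"); // hypothetical helper
    }
  }

  return <button onClick={handleLike}>♡ {likes}</button>;
}
```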
A stale closure happens when a useEffect, useCallback, or setTimeout callback captures a state variable at the time the component renders, and never sees updated values. The closure closes over the initial snapshot. Three fixes: (1) dependency array - add the variable so React re-creates the callback on change; (2) functional setter - setCount(c => c + 1) reads the current value inside React's update queue, bypassing the closure; (3) useRef - store the latest value in a ref and read ref.current inside long-lived callbacks.
// Bug: stale closure - count is always 0 inside the interval
function Timer() {
const [count, setCount] = useState(0);
useEffect(() => {
const id = setInterval(() => {
console.log(count); // always 0 - stale closure
setCount(count + 1);
}, 1000);
return () => clearInterval(id);
}, []); // empty deps = closure never updates
}
// Fix 1: functional update (no closure dependency on count at all)
useEffect(() => {
const id = setInterval(() => setCount((c) => c + 1), 1000);
return () => clearInterval(id);
}, []);
// Fix 2: useLatest ref pattern for callbacks that need to read state
function useLatest<T>(value: T) {
const ref = useRef(value);
useEffect(() => {
ref.current = value;
}); // sync after every render
return ref;
}

Interview tip: The useLatest pattern - a ref that always holds the most recent value - is a production staple. Libraries like ahooks ship it as useLatest. If asked "how do you read the latest state inside a setInterval without adding it to deps?", useRef is the answer.
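Putting useLatest to work in a long-lived interval: the interval is created once, but the callback always reads current state through the ref. A sketch using the useLatest hook defined above - pollMessages is a hypothetical API call:

```typescript
function ChatPoller({ roomId }: { roomId: string }) {
  const [filter, setFilter] = useState("");
  const latestFilter = useLatest(filter); // always holds the newest value

  useEffect(() => {
    const id = setInterval(() => {
      // Reads the current filter without listing it in deps, so the
      // interval is never torn down and recreated on each keystroke.
      pollMessages(roomId, latestFilter.current); // hypothetical API call
    }, 5000);
    return () => clearInterval(id);
  }, [roomId, latestFilter]); // the ref object itself is stable

  return <input value={filter} onChange={(e) => setFilter(e.target.value)} />;
}
```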
Performance questions at the senior level aren't "what is React.memo?" - they're "when does React.memo hurt you?" and "what does the Profiler's actualDuration / baseDuration ratio tell you?" Experienced developers know that premature optimization is as harmful as no optimization. These questions test whether you can diagnose before you optimize. The React core team's own guidance: profile first, memo second.
Personal experience: 40% of frontend developers report performance work as their primary focus (State of Frontend 2024), yet most memoization in production codebases is speculative - added without a Profiler measurement. The pattern that consistently helps: wrap Profiler around suspect subtrees for one release cycle before reaching for memo.
React.memo wraps a component in a shallow equality check - it only re-renders if props changed. It hurts when: (1) the component is cheap to render (a small presentational element or a simple list item), so the comparison costs more than the skipped render; (2) props are objects or arrays recreated on every parent render, causing the equality check to always fail; or (3) function props aren't wrapped in useCallback, so they're new references on every render. In those cases, you pay the comparison overhead and re-render anyway.
// Anti-pattern: memo on a cheap component with unstable object prop
const Tag = React.memo(({ style }: { style: React.CSSProperties }) => (
<span style={style}>Tag</span>
));
function Parent() {
// New object literal on every render → memo check always fails
return <Tag style={{ color: "red" }} />;
}
// Correct: memo wraps expensive components with stable props
const DataGrid = React.memo(({ rows }: { rows: Row[] }) => {
// Expensive render: 1,000+ rows, complex DOM
return (
<table>
{rows.map((r) => (
<Row key={r.id} {...r} />
))}
</table>
);
});
function Dashboard() {
const rows = useMemo(() => computeRows(data), [data]); // stable reference
return <DataGrid rows={rows} />;
}

Interview tip: The React core team's rule of thumb: profile first, memo second. React DevTools Profiler shows exactly which renders are expensive. Don't memo speculatively - it adds cognitive overhead and can introduce stale data bugs when deps are wrong.
The Profiler component wraps a subtree and calls an onRender callback with timing data on every commit. In production, Profiler is a no-op by default - you need the react-dom/profiling bundle and ReactDOM.createRoot with profiling enabled. For real production monitoring, combine Profiler with an observability service (Sentry, Datadog) to capture actualDuration for your slowest components over real user sessions.
import { Profiler, type ProfilerOnRenderCallback } from "react";
const onRenderCallback: ProfilerOnRenderCallback = (
id, // component tree id string
phase, // 'mount' | 'update' | 'nested-update'
actualDuration, // time spent rendering committed update (ms)
baseDuration, // estimated time without memoization
startTime,
commitTime,
) => {
if (actualDuration > 16) {
// > 1 frame at 60fps - investigate
analytics.track("slow_render", { id, phase, actualDuration, baseDuration });
}
};
function App() {
return (
<Profiler id="DataTable" onRender={onRenderCallback}>
<DataTable />
</Profiler>
);
}

Interview tip: The ratio actualDuration / baseDuration tells you how effective your memoization is. If actualDuration ≈ baseDuration, memoization isn't helping. If actualDuration << baseDuration, it's working. Most candidates don't know this ratio exists - mentioning it is a strong signal.
Virtualization renders only the DOM nodes visible in the viewport, recycling a fixed number of nodes as the user scrolls. Without it, 10,000 list items create 10,000 DOM nodes - slow initial render, high memory, janky scrolling. The two main libraries: react-window (< 5KB, FixedSizeList and VariableSizeList) and TanStack Virtual (zero dependency, framework-agnostic).
import { FixedSizeList as List } from "react-window";
interface RowProps {
index: number;
style: React.CSSProperties;
}
const Row = ({ index, style }: RowProps) => (
<div style={style} className="row">
{data[index].name} - {data[index].email}
</div>
);
function VirtualizedContactList() {
return (
<List
height={600} // visible viewport height in px
itemCount={10_000} // total item count
itemSize={56} // fixed row height in px
width="100%"
>
{Row}
</List>
);
}

Interview tip: Virtualization is necessary at ~500+ dynamic items and ~2,000+ static items. Below those thresholds, the added complexity (scroll math, inaccessible jump-scroll behavior, tricky dynamic heights) rarely pays off. Measure scroll performance with Lighthouse before reaching for a virtualization library.
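For the framework-agnostic option mentioned above, TanStack Virtual works from a scroll-container ref rather than a wrapper component. A hedged sketch of its hook API - verify option names against the library's docs for your version:

```typescript
"use client";
import { useRef } from "react";
import { useVirtualizer } from "@tanstack/react-virtual";

function VirtualContactList({ items }: { items: { id: string; name: string }[] }) {
  const parentRef = useRef<HTMLDivElement>(null);
  const virtualizer = useVirtualizer({
    count: items.length,
    getScrollElement: () => parentRef.current, // the scrolling container
    estimateSize: () => 56, // estimated row height in px
  });
  return (
    <div ref={parentRef} style={{ height: 600, overflow: "auto" }}>
      {/* spacer div establishes total scroll height */}
      <div style={{ height: virtualizer.getTotalSize(), position: "relative" }}>
        {virtualizer.getVirtualItems().map((row) => (
          <div
            key={row.key}
            style={{
              position: "absolute",
              top: 0,
              width: "100%",
              transform: `translateY(${row.start}px)`, // position within spacer
            }}
          >
            {items[row.index].name}
          </div>
        ))}
      </div>
    </div>
  );
}
```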
Route-level splitting is the baseline - each page bundle loads only when the user navigates to that route. In Next.js App Router this is automatic per route segment. In Vite/Webpack apps you use React.lazy with Suspense. Component-level splitting targets heavyweight dependencies (rich text editors, chart libraries, map SDKs) that aren't needed for first paint. The rule: split when a component adds > 30KB gzipped to the initial bundle.
import { lazy, Suspense } from 'react';
// Route-level: each page loads its own bundle on navigation
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Analytics = lazy(() => import('./pages/Analytics'));
const Settings = lazy(() => import('./pages/Settings'));
// Component-level: heavy editor loads only when the user opens it
const RichTextEditor = lazy(() => import('./components/RichTextEditor'));
function App() {
return (
<Router>
<Suspense fallback={<PageSkeleton />}>
<Routes>
<Route path="/dashboard" element={<Dashboard />} />
<Route path="/analytics" element={<Analytics />} />
<Route path="/settings" element={<Settings />} />
</Routes>
</Suspense>
</Router>
);
}
function PostEditor() {
const [isEditing, setIsEditing] = useState(false);
return (
<>
<button onClick={() => setIsEditing(true)}>Edit Post</button>
{isEditing && (
<Suspense fallback={<EditorSkeleton />}>
<RichTextEditor />
</Suspense>
)}
</>
);
}

Interview tip: In Next.js App Router, route segments split automatically. But heavy client-side libraries (Monaco Editor, Mapbox GL) still need explicit dynamic(() => import(...), { ssr: false }) to exclude them from SSR and defer them to the client.
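That Next.js pattern looks like this in full - the MonacoEditor module path and loading skeleton are placeholders, and note that in the App Router, ssr: false is only allowed inside a Client Component:

```typescript
"use client";
import dynamic from "next/dynamic";

// Heavy, browser-only editor: excluded from SSR, fetched on demand.
const MonacoEditor = dynamic(() => import("./MonacoEditor"), {
  ssr: false, // never render on the server
  loading: () => <p>Loading editor…</p>, // shown while the chunk downloads
});

export function CodePanel() {
  return <MonacoEditor />;
}
```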
Hydration errors occur when the HTML the server rendered doesn't match the React tree the client generates during hydration. React expects identical output so it can attach event listeners without rebuilding DOM nodes. Common causes: window, localStorage, or Date.now() in the render body (browser-only), random or time-based values, locale-dependent number/date formatting, and browser extensions that mutate the DOM before hydration.
// Bug: Date.now() differs between server render time and client hydration time
function Timestamp() {
return <span>{Date.now()}</span>; // ❌ mismatch guaranteed
}
// Fix 1: useEffect renders client-only value after hydration completes
function Timestamp() {
const [ts, setTs] = useState<number | null>(null);
useEffect(() => setTs(Date.now()), []); // runs only in the browser
return <span>{ts ?? "-"}</span>;
}
// Fix 2: suppressHydrationWarning for intentional, controlled mismatches
function BrowserWidget() {
return (
<div suppressHydrationWarning>
{typeof window !== "undefined" ? <ClientOnlyChart /> : null}
</div>
);
}

Interview tip: React 18 improved hydration error messages to show the specific attribute or text node that mismatched. In Next.js App Router, the error overlay highlights the exact component. Use suppressHydrationWarning sparingly and only when you own and understand the mismatch reason.
Architecture questions assess whether you can design systems that remain maintainable as a codebase and team grow. These questions don't have single right answers - interviewers are evaluating your reasoning, trade-off awareness, and whether you've shipped these patterns in a production environment rather than just read about them.
Compound Components share implicit state between a parent and its children via context, letting consumers compose UI without prop drilling through multiple layers. It's ideal for complex UI widgets - tabs, accordions, dropdowns, command palettes - where the parts always appear together but the layout should remain flexible. The pattern replaces long prop lists with a declarative composition model.
const TabsContext = createContext<{
activeTab: string;
setActiveTab: (id: string) => void;
} | null>(null);
function Tabs({
defaultTab,
children,
}: {
defaultTab: string;
children: React.ReactNode;
}) {
const [activeTab, setActiveTab] = useState(defaultTab);
return (
<TabsContext.Provider value={{ activeTab, setActiveTab }}>
<div className="tabs">{children}</div>
</TabsContext.Provider>
);
}
Tabs.List = function TabList({ children }: { children: React.ReactNode }) {
return <div role="tablist">{children}</div>;
};
Tabs.Tab = function Tab({
id,
children,
}: {
id: string;
children: React.ReactNode;
}) {
const { activeTab, setActiveTab } = useContext(TabsContext)!;
return (
<button
role="tab"
aria-selected={activeTab === id}
onClick={() => setActiveTab(id)}
>
{children}
</button>
);
};
Tabs.Panel = function Panel({
id,
children,
}: {
id: string;
children: React.ReactNode;
}) {
const { activeTab } = useContext(TabsContext)!;
return activeTab === id ? <div role="tabpanel">{children}</div> : null;
};
// Consumer controls layout - no prop drilling
<Tabs defaultTab="profile">
<Tabs.List>
<Tabs.Tab id="profile">Profile</Tabs.Tab>
<Tabs.Tab id="settings">Settings</Tabs.Tab>
</Tabs.List>
<Tabs.Panel id="profile">
<ProfilePanel />
</Tabs.Panel>
<Tabs.Panel id="settings">
<SettingsPanel />
</Tabs.Panel>
</Tabs>;

Interview tip: Compound Components is a frequent live-coding prompt at senior interviews. Know it well, but also know its limit: if children are ever used independently outside the parent, plain props are simpler. The pattern only pays when the parts are always used together.
Neither is the modern default - custom hooks replaced both for logic sharing in React 16.8+. HOCs remain valid for cross-cutting concerns that need to wrap a component's render output (auth guards, analytics wrappers, error boundaries). Render Props survive in library APIs that need to expose render control to consumers. For new application code, custom hooks share logic; HOCs wrap render output.
// HOC: still valid for auth guards that wrap the rendered output
function withAuth<P extends object>(Component: React.ComponentType<P>) {
return function AuthenticatedComponent(props: P) {
const { user } = useAuthStore();
if (!user) return <Navigate to="/login" replace />;
return <Component {...props} />;
};
}
export const ProtectedDashboard = withAuth(Dashboard);
// Custom hook: modern replacement for the logic-sharing use case
function useWindowSize() {
const [size, setSize] = useState({
width: typeof window !== "undefined" ? window.innerWidth : 0,
height: typeof window !== "undefined" ? window.innerHeight : 0,
});
useEffect(() => {
const handler = () =>
setSize({ width: window.innerWidth, height: window.innerHeight });
window.addEventListener("resize", handler);
return () => window.removeEventListener("resize", handler);
}, []);
return size;
}

Interview tip: If asked "are HOCs dead?", the nuanced answer is: HOCs are still the right tool for wrapping render output (auth gates, error boundaries, analytics). For logic reuse, custom hooks are always cleaner. The React team's own codebase uses both - the choice depends on whether you're sharing logic or wrapping JSX.
A well-designed hook has a single named responsibility, returns a stable interface that doesn't change shape between renders, accepts an options object rather than multiple positional arguments for extensibility, and is testable in isolation without mounting a component. The return value should use primitive types where possible (easier to memoize in consumers) and expose stable function references via useCallback.
interface UsePaginationOptions {
totalItems: number;
itemsPerPage?: number;
initialPage?: number;
}
interface UsePaginationReturn {
currentPage: number;
totalPages: number;
hasNextPage: boolean;
hasPrevPage: boolean;
nextPage: () => void;
prevPage: () => void;
goToPage: (page: number) => void;
}
function usePagination({
totalItems,
itemsPerPage = 10,
initialPage = 1,
}: UsePaginationOptions): UsePaginationReturn {
const [currentPage, setCurrentPage] = useState(initialPage);
const totalPages = Math.ceil(totalItems / itemsPerPage);
const nextPage = useCallback(
() => setCurrentPage((p) => Math.min(p + 1, totalPages)),
[totalPages],
);
const prevPage = useCallback(
() => setCurrentPage((p) => Math.max(p - 1, 1)),
[],
);
const goToPage = useCallback(
(page: number) => setCurrentPage(Math.max(1, Math.min(page, totalPages))),
[totalPages],
);
return {
currentPage,
totalPages,
hasNextPage: currentPage < totalPages,
hasPrevPage: currentPage > 1,
nextPage,
prevPage,
goToPage,
};
}

Interview tip: A senior interviewer will catch a hook that returns a new object on every render, forcing consumers to use useMemo. Use useCallback for function values inside hooks. Better yet, return primitives and stable refs so consumers don't need any extra memoization.
Module Federation (Webpack 5 / Vite) lets separately deployed React apps share components and dependencies at runtime - no build-time coupling. A "host" app loads "remote" app modules as if they were local imports. Each micro-frontend deploys independently, is cached separately by the browser, and can be owned by a different team. The critical risk: version mismatches can cause "invalid hook call" errors if two React instances end up on the same page.
// remote/webpack.config.js - the Checkout micro-frontend
new ModuleFederationPlugin({
name: "checkout",
filename: "remoteEntry.js",
exposes: { "./CheckoutFlow": "./src/CheckoutFlow" },
shared: { react: { singleton: true, requiredVersion: "^19.0.0" } },
});
// host/webpack.config.js - the Shell app
new ModuleFederationPlugin({
name: "shell",
remotes: { checkout: "checkout@https://checkout.acme.com/remoteEntry.js" },
shared: { react: { singleton: true, requiredVersion: "^19.0.0" } },
});
// host/src/App.tsx - loads remote component at runtime
const CheckoutFlow = React.lazy(() => import("checkout/CheckoutFlow"));
function App() {
return (
<Suspense fallback={<Spinner />}>
<CheckoutFlow />
</Suspense>
);
}

Interview tip: singleton: true in the shared config is critical - it ensures only one copy of React runs on the page. Without it, you'll see "Hooks can only be called inside a function component" because two React instances have separate internal registries. This is the most common production bug with Module Federation.
A single top-level error boundary isn't enough in production - one feature crash shouldn't take down the entire app. The pattern: wrap each major feature area (sidebar, main content, individual widgets) in its own boundary. Each boundary reports to your error monitoring service with component stack and user context, then renders a contextual fallback. React 19 added onCaughtError and onUncaughtError root-level callbacks to complement boundaries for observability.
"use client"; // Error boundaries must be Client Components in the App Router
import { Component, type ErrorInfo, type ReactNode } from "react";
import * as Sentry from "@sentry/nextjs";
interface Props {
children: ReactNode;
fallback?: ReactNode;
featureName: string;
}
interface State {
hasError: boolean;
}
class FeatureErrorBoundary extends Component<Props, State> {
state: State = { hasError: false };
static getDerivedStateFromError(): State {
return { hasError: true };
}
componentDidCatch(error: Error, info: ErrorInfo) {
Sentry.captureException(error, {
extra: {
featureName: this.props.featureName,
componentStack: info.componentStack,
},
});
}
render() {
if (this.state.hasError) {
return (
this.props.fallback ?? (
<div className="error-fallback">
<p>Something went wrong in {this.props.featureName}.</p>
<button onClick={() => this.setState({ hasError: false })}>
Try again
</button>
</div>
)
);
}
return this.props.children;
}
}

Interview tip: Error boundaries don't catch errors in event handlers (use try/catch there), async code outside render, or server-side code. Interviewers at FAANG often ask this follow-up specifically - it reveals whether you've actually shipped error boundaries in production or just read the docs.
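Because boundaries never see event-handler errors, teams often route handler failures to the same reporting sink manually. A sketch of one way to do that - withErrorReporting is a hypothetical helper, and report stands in for your monitoring call (e.g. Sentry.captureException):

```typescript
// Hypothetical helper: error boundaries don't catch event-handler errors,
// so catch them at the handler and forward them to the reporting sink.
function withErrorReporting<Args extends unknown[]>(
  handler: (...args: Args) => void | Promise<void>,
  report: (err: unknown) => void,
): (...args: Args) => Promise<void> {
  return async (...args: Args) => {
    try {
      await handler(...args); // awaits both sync and async handlers
    } catch (err) {
      report(err); // the boundary never sees this - handled here instead
    }
  };
}
```

Usage in JSX would look like `onClick={withErrorReporting(savePost, Sentry.captureException)}`, keeping the reporting path identical to componentDidCatch above.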
React 19 is the most consequential React release since hooks. Senior candidates are expected to know what changed, why it matters, and what deprecated APIs to stop using. SPAs still dominate at 84.5% of React usage patterns (State of React 2025), but the Server Actions and use() primitives introduced in React 19 are already reshaping how experienced teams write data-fetching and mutation code.

React 19 stable (December 5, 2024) introduced: Actions - async functions that handle pending, error, and optimistic states automatically; useActionState - manages action return state in forms; useFormStatus - lets child components read parent form submission state without prop drilling; useOptimistic - first-class optimistic UI; use() - reads Promises and Context inside render (conditionally); ref as prop - ref is now a regular prop, forwardRef is deprecated; and improved hydration error diffs showing the exact mismatch.
// React 19: ref as a regular prop - no forwardRef needed
function Input({
ref,
...props
}: React.InputHTMLAttributes<HTMLInputElement> & {
ref?: React.Ref<HTMLInputElement>;
}) {
return <input ref={ref} {...props} />;
}
// React 19: useFormStatus in a child component (this file is a Client Component marked "use client")
import { useFormStatus } from "react-dom";
function SubmitButton() {
const { pending } = useFormStatus(); // reads parent <form> submission state
return (
<button type="submit" disabled={pending}>
{pending ? "Saving…" : "Save"}
</button>
);
}
// React 19: useActionState
import { useActionState } from "react";
import { updateProfile } from "@/actions";
function ProfileForm() {
const [state, formAction, isPending] = useActionState(updateProfile, null);
return (
<form action={formAction}>
<input name="displayName" />
<SubmitButton />
{state?.error && <p role="alert">{state.error}</p>}
</form>
);
}

Interview tip: The removal of forwardRef is the React 19 change that catches experienced devs off-guard - it was required in React 17/18 but is now deprecated. Mentioning this unprompted signals you've shipped React 19 code, not just read the release notes.
Generic components accept a type parameter that flows through props, enabling type-safe composition without losing flexibility. The most common use cases: data tables where columns and data must match, Select inputs where the option type should be inferred, and list components where renderItem type depends on items type.
// Generic Select: T is inferred from options at the call site
interface SelectProps<T> {
options: T[];
value: T | null;
onChange: (value: T) => void;
getLabel: (item: T) => string;
getValue: (item: T) => string | number;
}
// Note: generic *arrow* functions in .tsx need a trailing comma (<T,>) to
// disambiguate from a JSX tag; function declarations like this one parse fine
function Select<T>({
options,
value,
onChange,
getLabel,
getValue,
}: SelectProps<T>) {
return (
<select
value={value !== null ? String(getValue(value)) : ""}
onChange={(e) => {
const selected = options.find(
(o) => String(getValue(o)) === e.target.value,
);
if (selected !== undefined) onChange(selected);
}}
>
{options.map((option) => (
<option key={String(getValue(option))} value={String(getValue(option))}>
{getLabel(option)}
</option>
))}
</select>
);
}
// TypeScript infers T as User automatically - no explicit annotation needed
<Select
options={users}
value={selectedUser}
onChange={setSelectedUser}
getLabel={(u) => u.name}
getValue={(u) => u.id}
/>;

Interview tip: In .tsx files, a generic arrow function needs a trailing comma - const Select = <T,>(...) => ... - to stop the TypeScript parser from treating <T> as an opening JSX tag. Function declarations like function Select<T> are unambiguous, and in plain .ts files no disambiguation is needed at all. This is a common TypeScript + React gotcha that appears in senior screening questions.
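The arrow-function form is where the ambiguity actually bites. A minimal sketch, using a hypothetical firstOrNull helper:

```typescript
// In a .tsx file, `const firstOrNull = <T>(items: T[]) => ...` would be parsed
// as an opening JSX tag; the trailing comma in <T,> disambiguates.
// (The trailing comma is also legal in plain .ts, so this compiles everywhere.)
const firstOrNull = <T,>(items: T[]): T | null =>
  items.length > 0 ? items[0] : null;

// Inference works exactly as with the function-declaration form:
const firstId = firstOrNull([{ id: 1 }, { id: 2 }]); // T inferred as { id: number }
```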
Test Server Actions as pure async functions directly - they're just async function declarations that accept FormData or typed arguments. No React mounting required. Test Client Components that call actions by mocking the action import and asserting against DOM state with React Testing Library. For useActionState and useOptimistic, wrap state updates in act() before asserting.
// Test the Server Action directly - pure function, no React needed
import { createPost } from "@/actions/post";
import { db } from "@/lib/db";
jest.mock("@/lib/db");
test("createPost inserts a record and revalidates /posts", async () => {
const mockPost = { id: "1", title: "Hello World" };
(db.post.create as jest.Mock).mockResolvedValue(mockPost);
const formData = new FormData();
formData.set("title", "Hello World");
const result = await createPost(formData);
expect(db.post.create).toHaveBeenCalledWith({
data: { title: "Hello World" },
});
expect(result).toEqual(mockPost);
});
// Test the Client Component - mock the action import
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import { CreatePostForm } from "@/components/CreatePostForm";
import * as actions from "@/actions/post";
jest.mock("@/actions/post");
test("shows pending state while action is running", async () => {
(actions.createPost as jest.Mock).mockImplementation(
() => new Promise((resolve) => setTimeout(resolve, 100)),
);
render(<CreatePostForm />);
fireEvent.submit(screen.getByRole("form"));
expect(await screen.findByText("Saving…")).toBeInTheDocument();
});

Interview tip: A common mistake - wrapping the whole test in act() instead of using waitFor(). waitFor retries the assertion until it passes or times out; act() synchronously flushes pending React state. For async Server Actions, waitFor is almost always what you need.
use() is a new React 19 primitive that reads a Promise or Context inside a component's render body - and unlike all other hooks, it can be called inside conditionals and loops. When passed a Promise, use() suspends the component until the promise resolves, integrating naturally with Suspense boundaries and error boundaries. When passed a Context, it reads the value like useContext but conditionally. It replaces the manual "throw a promise" pattern that libraries like Relay used to trigger Suspense.
import { use, Suspense } from "react";
// use() with a Promise - component suspends until resolved
function UserCard({ userPromise }: { userPromise: Promise<User> }) {
const user = use(userPromise); // suspends here; caught by nearest Suspense
return (
<div className="card">
{user.name} - {user.role}
</div>
);
}
// The promise is created outside the component (important - don't create inside)
async function Page({ params }: { params: { id: string } }) {
const userPromise = fetchUser(params.id); // NOT awaited - pass the promise down
return (
<Suspense fallback={<CardSkeleton />}>
<UserCard userPromise={userPromise} />
</Suspense>
);
}
// use() with Context - can be called conditionally
function ConditionalThemeLabel({ show }: { show: boolean }) {
if (!show) return null;
const theme = use(ThemeContext); // valid inside a conditional - hooks can't do this
return <span style={{ color: theme.primary }}>Theme active</span>;
}

Interview tip: The key difference from useContext: use() can be called conditionally. This solves a real pain point - reading context only in one branch of logic - that previously required restructuring the component into two separate child components.
At scale, the primary structural goal is colocation - keeping related files close together - and clear ownership boundaries so teams ship features independently without merge conflicts. The most production-validated approach is a domain-driven or Feature-Sliced Design (FSD) structure. Avoid organizing by file type at the root (/components, /hooks, /utils) - it forces cross-folder edits for every feature change and creates no ownership clarity.
src/
├── app/ # Next.js App Router: layouts, pages, error, loading
│ ├── (auth)/ # route groups by concern
│ └── (dashboard)/
│
├── features/ # domain-bounded slices - one per team/domain
│ ├── auth/
│ │ ├── components/ # AuthForm, LoginButton (internal)
│ │ ├── hooks/ # useAuth, useSession (internal)
│ │ ├── actions/ # loginUser, logoutUser (Server Actions)
│ │ ├── store/ # authStore.ts (Zustand)
│ │ └── index.ts # public API - only export what other features need
│ │
│ ├── billing/ # owned by billing team
│ └── dashboard/ # owned by product team
│
├── shared/ # cross-feature utilities - governed, stable contract
│ ├── ui/ # design system: Button, Input, Modal, Toast
│ ├── hooks/ # useDebounce, useIntersectionObserver
│ └── lib/ # db client, analytics, logger
│
└── types/ # global TypeScript interfaces only

Our finding: Teams that adopt explicit index.ts public APIs per feature (exporting only what other features are allowed to import) reduce cross-feature merge conflicts by enforcing clear boundaries at the module level - without needing a monorepo or separate packages.
Interview tip: At FAANG and large product companies, "how would you structure this codebase?" is really asking: "how do you prevent one team's change from breaking another team's feature?" The answer is explicit public APIs, no cross-feature direct imports, and a governed shared/ layer with a lightweight review process for additions.
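One lightweight way to enforce "no cross-feature direct imports" mechanically is ESLint's built-in no-restricted-imports rule - a sketch assuming the @/features/* path alias from the structure above:

```javascript
// .eslintrc.cjs (sketch) - forbid deep imports into another feature's
// internals; only the feature's index.ts public API may be imported.
module.exports = {
  rules: {
    "no-restricted-imports": [
      "error",
      {
        patterns: [
          {
            // matches deep paths like @/features/auth/store/authStore
            group: ["@/features/*/*"],
            message:
              "Import from the feature's public API (@/features/<name>) instead of its internals.",
          },
        ],
      },
    ],
  },
};
```

For larger codebases, dedicated tools such as eslint-plugin-boundaries or dependency-cruiser can express the same contract with per-team ownership rules; the config above is the zero-dependency starting point.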
Senior React interviews shift from "what does X do?" to "when and why would you use X over Y?" Expect deep questions on Fiber architecture, Concurrent Mode trade-offs, RSC vs SSR boundary decisions, state management scaling patterns, and application-level architecture. Interviewers at FAANG dedicate entire rounds to system design with React, where you whiteboard component architecture, data flow, and performance strategy from scratch.
New to React interviews or want to review the fundamentals alongside these advanced topics? Start with the Top 50 React Interview Questions and Answers (2026) guide covering Virtual DOM, JSX, hooks, and React Router.
Senior React developer salaries in the US average between $128,400 and $145,000 per year, with the top 10th percentile exceeding $223,718 (ZipRecruiter, February 2026). Compensation varies significantly by location, company stage, and whether total comp includes equity. FAANG roles routinely place engineers in the $200,000–$350,000+ total compensation range.
Yes, for most senior roles. React 19 stable shipped December 2024, and 48.4% of daily React users are already on it (State of React 2025). Expect questions on Server Actions, useActionState, useOptimistic, use(), and the deprecation of forwardRef. Companies running Next.js 13+ will specifically probe RSC knowledge in interviews.
Senior interviews test whether you can own features independently and make sound technical choices. Staff/lead interviews test whether you can make decisions that scale across teams - codebase architecture, performance budgets, dependency governance, and the trade-offs between developer experience and end-user performance. Expect system design questions where you architect a full application, not just a component.
Related: system design interview questions for frontend engineers.
Knowing Redux Toolkit is useful, but Zustand, Jotai, and TanStack Query have displaced Redux as the default at most startups and mid-size product companies. Senior interviews care more about your ability to reason about state ownership, update patterns, and selector performance than about Redux-specific APIs. If a role explicitly lists Redux as a requirement, study createSlice, createAsyncThunk, and RTK Query.
Senior React interviews test judgment, not just syntax. The 30 questions in this guide reflect what experienced engineers are actually asked at product companies and FAANG in 2026: how Fiber makes concurrent rendering possible, why RSC fundamentally changes JavaScript bundle size, when React.memo helps versus hurts, and how to structure a codebase that 20 engineers can ship features in without stepping on each other.
The best preparation isn't memorizing answers - it's building these patterns in a real project. Set up a Next.js App Router project, ship a Server Action, profile a slow list render with the Profiler API, and implement a Compound Component from scratch. Experience converts these questions from abstract concepts to concrete reasoning you can explain under pressure.
Key takeaways: React 19's use(), Server Actions, and ref-as-prop are the new table stakes. Fiber's two-phase architecture explains every performance question. useTransition and Suspense are the concurrent rendering tools every senior candidate must own cold.
Also review the Top 50 React Interview Questions (2026) and the 40+ React Hooks guide to cover the full React interview stack - fundamentals through senior-level patterns.