
StackInterview

StackInterview helps developers prepare for full-stack interviews with structured questions, real company interview insights, and modern technology coverage.


© 2026 StackInterview. Built for engineers, by engineers.

Developed and Maintained by Abhijeet Kushwaha

⚛️ React & Frontend - 18 min read

30 Golden Frontend Topics Every Senior Developer Must Master in 2026

Senior frontend roles pay $147,778 on average. Master these 30 interview topics - from RSC architecture to micro-frontends - and walk into any senior loop ready to own it.

Senior frontend developer salaries hit $147,778 on average in 2026 - but the interview bar has never been higher. This guide covers all 30 god-mode topics across rendering, performance, architecture, tooling, browser internals, system design, and production observability that separate mid-level devs from senior engineers at top product companies.

frontend, interview-questions, react, senior-developer, system-design, frontend-2026
On this page
  30 Golden Frontend Topics Every Senior Developer Must Master in 2026
  Rendering & Hydration (The Money Topics)
  1. Hydration Strategies
  2. Streaming SSR
  3. React Server Components Architecture
  4. Static vs Dynamic Rendering Decisions
  Performance That Actually Ships
  5. Core Web Vitals - Diagnosis, Not Definition
  6. Bundle Architecture
  7. Rendering Performance Patterns
  8. Image & Asset Strategy
  9. Network Performance
  Architecture Patterns (Interview Differentiators)
  10. Micro-Frontend Architecture
  11. State Machines & Finite Automata in UI
  12. Optimistic UI & Real-Time Patterns
  13. API Layer Architecture
  14. Error Boundaries & Resilience
  TIER 4 - DX & Tooling Mastery
  15. Monorepo Architecture
  16. Design Tokens & Theming Systems
  17. Testing for Seniors
  18. CI/CD for Frontend
  Advanced Browser & Runtime
  19. Web Workers & Off-Main-Thread
  20. Paint & Composite Optimization
  21. Advanced CSS Architecture
  22. Security in Practice
  System Design Scenarios (Asked in Every Senior Loop)
  23. Design an Infinite Scroll Feed
  24. Design an Autocomplete/Typeahead
  25. Design a Real-Time Dashboard
  26. Design a WYSIWYG/Rich Text Editor
  27. Design a Design System
  Observability & Production
  28. Frontend Observability
  29. A/B Testing Architecture
  30. Incremental Migration Strategies
  How to Use This List: A 4-Week Revision Plan
  Frequently Asked Questions
  What topics are most asked in senior frontend interviews in 2026?
  How long does it take to prepare for a senior frontend interview?
  Is React Server Components knowledge required for senior roles?
  Do senior frontend interviews include system design?
  What salary can I expect as a senior frontend developer in 2026?
  Conclusion

Senior frontend developers in the US earn $147,778 on average (Glassdoor, 2026). That salary doesn't come for knowing useState. It comes for knowing *why* hydration mismatches destroy TTI, how Module Federation handles shared singletons, and what you'd instrument first when a production LCP spike hits at 2am. This isn't a basics list. It's the separation layer - the 30 topics that interviewers use to sort mid-level candidates from engineers who've actually shipped at scale.


Key Takeaways

  • Senior frontend developer average US salary is $147,778 (Glassdoor, 2026) - up significantly from mid-level ranges

  • React powers 83.6% of JavaScript projects; only 29% of devs have used RSC despite 50%+ positive sentiment (State of React 2025)

  • 34% of enterprise apps use micro-frontend architecture; Module Federation holds 51.8% of that tooling market

  • Pages at Google position 1 are 10% more likely to pass all Core Web Vitals (BrightVessel, 2025)


Rendering & Hydration (The Money Topics)

Only 29% of developers have used React Server Components despite over 50% expressing positive sentiment toward them (State of React 2025). That gap is your opportunity. Rendering and hydration questions show up in nearly every senior loop because they expose whether you understand the browser's actual execution model - or just the framework's happy path.

1. Hydration Strategies

Hydration isn't a React feature - it's a reconciliation contract between the server and the browser, and every mismatch has a cost. Most mid-level devs know full hydration. Senior devs know *when* full hydration is the wrong call, what progressive hydration trades off, and why Astro's Islands Architecture eliminates hydration cost entirely for static sections. React 18's selective hydration means boundaries hydrate independently, so a slow sidebar no longer blocks the entire page's interactivity.

The mismatch trap is subtle. If the server renders a timestamp and the client re-renders it a millisecond later, React throws a hydration warning - or silently mismatches if you've used suppressHydrationWarning. That prop isn't a fix. It's a flag that your rendering logic is environment-aware.

// BAD: server and client produce different output - hydration mismatch
function LastSeen() {
  return <span>Last seen: {new Date().toLocaleTimeString()}</span>;
}

// GOOD: defer the client-only value until after hydration
"use client";
import { useState, useEffect } from "react";

function LastSeen() {
  const [time, setTime] = useState<string | null>(null);

  useEffect(() => {
    // runs only on client, after hydration - no mismatch
    setTime(new Date().toLocaleTimeString());
  }, []);

  if (!time) return <span>Last seen: -</span>; // matches server output
  return <span>Last seen: {time}</span>;
}
// Selective hydration in React 18 - slow data doesn't block fast UI
import { Suspense } from "react";

export default function ProductPage() {
  return (
    <>
      {/* hydrates immediately - fast, critical content */}
      <ProductHeader />

      {/* hydrates independently - slow fetch doesn't block above */}
      <Suspense fallback={<ReviewsSkeleton />}>
        <Reviews /> {/* fetches from slow third-party API */}
      </Suspense>
    </>
  );
}

Interview angle: Walk through a real hydration mismatch, explain why suppressHydrationWarning is a code smell, and describe when you'd reach for Islands Architecture (Astro) over React's selective hydration model.


2. Streaming SSR

Streaming SSR solves a specific problem: renderToString waits for *all* data before flushing any HTML. On a slow database query, the browser stares at a blank screen. renderToPipeableStream flushes the HTML shell immediately and streams chunks as data resolves. The user sees the page structure in milliseconds, not seconds.

But streaming has a tradeoff that trips up senior candidates: CDN caching. A fully static page caches at the edge. A streamed response is dynamic by nature - it can't be cached at the CDN layer unless you're very deliberate about what's inside and outside the stream boundaries.

// Node.js server - streaming with renderToPipeableStream
import { renderToPipeableStream } from "react-dom/server";
import { ServerApp } from "./App";

app.get("*", (req, res) => {
  res.setHeader("Content-Type", "text/html");

  const { pipe, abort } = renderToPipeableStream(<ServerApp url={req.url} />, {
    // flush the HTML shell immediately - browser starts parsing CSS/JS
    bootstrapScripts: ["/bundle.js"],

    onShellReady() {
      res.statusCode = 200;
      pipe(res); // stream starts here - shell is ready
    },

    onShellError(error) {
      // shell failed (layout broke) - send 500 before anything was flushed
      res.statusCode = 500;
      res.send("<h1>Something went wrong</h1>");
    },

    onError(error) {
      // non-shell errors - a <Suspense> boundary caught it
      console.error(error);
    },
  });

  // abort slow streams after 10s - don't hang the connection
  setTimeout(abort, 10_000);
});
// Next.js App Router - out-of-order streaming with Suspense
// The shell renders instantly. UserDashboard streams in when its data resolves.
import { Suspense } from "react";

export default function DashboardPage() {
  return (
    <main>
      <h1>Dashboard</h1>           {/* static - in the shell */}
      <QuickStats />                {/* static - in the shell */}

      <Suspense fallback={<Spinner />}>
        <UserDashboard />           {/* async - streams in when DB query resolves */}
      </Suspense>

      <Suspense fallback={<Spinner />}>
        <RecentActivity />          {/* async - streams in independently */}
      </Suspense>
    </main>
  );
}

// UserDashboard is an async Server Component - no useEffect, no loading state
async function UserDashboard() {
  const data = await db.query("SELECT * FROM dashboards WHERE user_id = $1", [
    getCurrentUserId(),
  ]);
  return <DashboardView data={data} />;
}

Interview angle: Explain when streaming *hurts* - when the shell itself depends on slow data (e.g., user-specific layout), streaming gains you nothing. That's the scenario seniors anticipate and design around.


3. React Server Components Architecture

React Server Components don't return HTML - they return a serialized component tree in RSC payload format, a custom wire protocol React uses to patch the client tree without a full page reload. This is fundamentally different from SSR. SSR generates HTML for initial load. RSCs generate a component description the client React runtime can apply incrementally.

The boundary rules are non-negotiable: Server Components can't use hooks, event handlers, or browser APIs. Client Components can't be async. The "use client" directive marks a boundary, not a file type - every import from that file becomes a Client Component too.

// app/products/page.tsx - Server Component (default in App Router)
// No "use client" - runs on server only, zero client bundle cost

export default async function ProductsPage() {
  // Direct DB access - no API route needed, no fetch overhead
  const products = await db.products.findMany({ take: 20 });

  return (
    <section>
      <h1>Products</h1>
      {products.map((product) => (
        // Server Component passes data DOWN to a Client Component
        // but cannot receive state or event handlers from it
        <ProductCard key={product.id} product={product}>
          <AddToCartButton productId={product.id} /> {/* Client Component */}
        </ProductCard>
      ))}
    </section>
  );
}

// components/AddToCartButton.tsx - Client Component
"use client";
import { useState } from "react";

export function AddToCartButton({ productId }: { productId: string }) {
  const [added, setAdded] = useState(false);

  return (
    <button onClick={() => setAdded(true)}>
      {added ? "Added!" : "Add to Cart"}
    </button>
  );
}
// WRONG: two sequential server fetches - creates a waterfall
async function ProductPage({ id }: { id: string }) {
  const product = await fetchProduct(id);         // 200ms
  const reviews = await fetchReviews(id);         // 300ms - waits for product!
  // total: 500ms
  return <View product={product} reviews={reviews} />;
}

// CORRECT: parallel fetches with Promise.all - no waterfall
async function ProductPage({ id }: { id: string }) {
  const [product, reviews] = await Promise.all([
    fetchProduct(id),   // 200ms
    fetchReviews(id),   // 300ms - runs in parallel
  ]);
  // total: 300ms (the longer of the two)
  return <View product={product} reviews={reviews} />;
}

Interview angle: Explain the RSC serialization rules - what *can't* cross the server-client boundary (functions, class instances) and why, while plain serializable values like Dates, Maps, and Sets can. Then describe the children prop pattern as a way to pass Server Components *through* Client Components without violating the boundary.
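The children pattern can be sketched like this - a Client Component receives server-rendered children as an opaque slot, so the server-only code never enters the client bundle (component and data-access names here are hypothetical):

```typescript
// components/ThemeProvider.tsx - a Client Component that accepts children
"use client";
import { useState, type ReactNode } from "react";

export function ThemeProvider({ children }: { children: ReactNode }) {
  const [dark, setDark] = useState(false);
  return (
    <div className={dark ? "dark" : ""}>
      <button onClick={() => setDark((d) => !d)}>Toggle theme</button>
      {/* children arrive already rendered on the server - this component
          can position them but never introspect or re-render their internals */}
      {children}
    </div>
  );
}

// app/page.tsx - a Server Component passes ANOTHER Server Component
// through the client boundary via the children slot
export default function Page() {
  return (
    <ThemeProvider>
      <ServerOnlyArticle /> {/* stays a Server Component - zero client JS cost */}
    </ThemeProvider>
  );
}

// ServerOnlyArticle keeps direct data access - no "use client"
async function ServerOnlyArticle() {
  const post = await db.posts.findFirst(); // hypothetical server-only DB call
  return <article>{post.body}</article>;
}
```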


4. Static vs Dynamic Rendering Decisions

Rendering strategy is one of the most consequential architectural decisions in a Next.js app - and one of the easiest to get wrong silently. A single cookies() or headers() call in a Server Component anywhere in the tree opts the *entire route* into dynamic rendering. No warning. No error. Just a page that used to be cached at the CDN edge and now isn't.

Senior engineers recognize the trigger patterns and design routes to keep static sections static - often by pushing the dynamic parts down to leaf components wrapped in <Suspense>.

// app/blog/[slug]/page.tsx

// OPTION 1: Full Static (SSG) - built at deploy time
// Best for: blog posts, docs, marketing pages
export async function generateStaticParams() {
  const posts = await getAllPostSlugs();
  return posts.map((slug) => ({ slug }));
}

export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  return <Article post={post} />;
}

// OPTION 2: ISR - revalidate every 60 seconds
// Best for: product pages, news articles, dashboards
export const revalidate = 60;

// OPTION 3: On-demand revalidation - revalidate when content changes
// In your CMS webhook handler:
import { revalidatePath, revalidateTag } from "next/cache";

export async function POST(req: Request) {
  const { slug } = await req.json();
  revalidatePath(`/blog/${slug}`);     // invalidate specific path
  revalidateTag("blog-posts");         // invalidate all tagged fetches
  return Response.json({ revalidated: true });
}

// OPTION 4: Dynamic - no cache, runs on every request
// Triggered automatically if you read cookies/headers/searchParams
export const dynamic = "force-dynamic"; // explicit opt-in

// The silent trap - this makes the WHOLE route dynamic:
import { cookies } from "next/headers";

async function LayoutComponent() {
  const theme = (await cookies()).get("theme"); // ← opts entire route into dynamic!
  // FIX: move this to a Client Component or a dynamic leaf node
}
// PPR (Partial Prerendering) - static shell + dynamic holes in one route
// Next.js 15 experimental feature
export const experimental_ppr = true;

export default function ProductPage() {
  return (
    <>
      {/* Static - rendered at build time, served from CDN edge */}
      <ProductInfo />
      <ProductImages />

      {/* Dynamic - rendered at request time, streamed in */}
      <Suspense fallback={<PriceSkeleton />}>
        <DynamicPrice />      {/* real-time pricing, user-specific discounts */}
      </Suspense>

      <Suspense fallback={<CartSkeleton />}>
        <CartStatus />        {/* reads cookies - must be dynamic */}
      </Suspense>
    </>
  );
}

Interview angle: Describe the PPR mental model - a single route can have a static CDN-cached shell and dynamic streamed holes simultaneously. Then explain how unstable_noStore() and cache: 'no-store' differ semantically, and when you'd use each.
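A minimal sketch of the contrast (the endpoint URL and component name are hypothetical):

```typescript
// Component-scoped opt-out: unstable_noStore() marks THIS component's render
// as dynamic, composing naturally with PPR's streamed dynamic holes
import { unstable_noStore as noStore } from "next/cache";

export async function LivePrice() {
  noStore(); // this component (and its subtree) now renders per-request

  // Fetch-scoped opt-out: only THIS request bypasses the data cache;
  // other fetches in the same route keep their own cache semantics
  const res = await fetch("https://api.example.com/price", { cache: "no-store" });
  const { amount } = await res.json();
  return <span>${amount}</span>;
}
```

Roughly: unstable_noStore() answers "should this render be cached?", while cache: 'no-store' answers "should this one piece of data be cached?".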


Citation Capsule - Tier 1

As of the State of React 2025 survey, only 29% of developers have actively used React Server Components in production, despite more than 50% expressing positive sentiment toward the feature (State of React 2025). This gap signals a significant hiring advantage for candidates who can demonstrate real RSC production experience.


Performance That Actually Ships

Pages ranking at Google position 1 are 10% more likely to pass all Core Web Vitals, and improving page load speed by just 0.1 seconds boosts retail conversions by 8.4% and average order value by 9.2% (BrightVessel, 2025; Deloitte/Google via Magnet.co). Performance isn't a nice-to-have at senior level. It's a business metric you're expected to own.

5. Core Web Vitals - Diagnosis, Not Definition

Defining LCP, CLS, and INP takes 30 seconds. Diagnosing a regression in production takes a senior engineer. The real question interviewers ask is: "An LCP regression appeared on our product page last week - walk me through how you'd find it." If your answer starts with "I'd open Lighthouse," you've already lost points. Lighthouse is lab data. CrUX is field data. Production regressions live in field data.

INP (Interaction to Next Paint) replaced FID in 2024 and is the hardest CWV to fix - it measures the worst interaction latency across the session, not just first input. Long tasks on the main thread are the primary culprit.

// Measuring and reporting Core Web Vitals in production
import { onLCP, onINP, onCLS, type Metric } from "web-vitals";

function sendToAnalytics(metric: Metric) {
  // Send to your analytics endpoint - Datadog, GA4, custom pipeline
  navigator.sendBeacon("/analytics", JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,   // "good" | "needs-improvement" | "poor"
    delta: metric.delta,
    id: metric.id,
    navigationType: metric.navigationType,
  }));
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);

// Breaking up a long task to improve INP - use scheduler.yield()
async function processLargeDataset(items: Item[]) {
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);

    // Yield to the browser every 50 items - prevents INP spikes
    if (i % 50 === 0) {
      await scheduler.yield(); // browser handles pending interactions here
    }
  }
}
<!-- LCP optimization - tell the browser this image is the LCP candidate -->
<!-- fetchpriority="high" bypasses the preload scanner delay -->
<img
  src="/hero.avif"
  fetchpriority="high"
  loading="eager"
  decoding="async"
  width="1200"
  height="630"
  alt="Hero image"
/>

<!-- Avoid CLS - always set width and height on images -->
<!-- Without these, the browser doesn't reserve space and layout shifts -->
<img src="/product.jpg" width="400" height="300" alt="Product" />

Interview angle: Describe how you'd diff CrUX field data against a Lighthouse lab report to isolate whether a CWV regression is device-specific, geography-specific, or a JS bundle issue. That kind of structured diagnostic thinking is what separates seniors from mid-levels here.
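As a concrete starting point for that diagnostic, field data can be pulled programmatically from the CrUX API. The endpoint is real; the helper names below are ours, and a Google API key is assumed:

```typescript
// Query the Chrome UX Report API for real-user field data on one URL.
// Segmenting by form factor is how you spot device-specific regressions.
type FormFactor = "PHONE" | "DESKTOP";

interface CruxQuery {
  url: string;
  formFactor: FormFactor;
  metrics: string[];
}

function buildCruxQuery(url: string, formFactor: FormFactor): CruxQuery {
  return {
    url,
    formFactor,
    metrics: ["largest_contentful_paint", "interaction_to_next_paint"],
  };
}

async function fetchFieldData(apiKey: string, query: CruxQuery): Promise<unknown> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(query),
    }
  );
  if (!res.ok) throw new Error(`CrUX query failed: ${res.status}`);
  return res.json(); // histogram bins + p75 per metric from real Chrome users
}
```

Diff the PHONE and DESKTOP responses against each other and against your Lighthouse lab run: a regression that only shows up in PHONE field data points at device/network class, not your bundle.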


6. Bundle Architecture

Bundle size isn't a vanity metric - it's a direct proxy for how much JavaScript the browser has to parse before the page becomes interactive. Mid-level devs add dynamic imports. Senior devs design chunk boundaries so that cache invalidation is surgical, shared dependencies don't bloat every route, and a single dependency update doesn't bust every user's cache simultaneously.

Route-based splitting is the default and a good start. But it's not always optimal: if two routes share a 150kb component, the shared chunk strategy matters more than per-route splitting.

// next.config.ts - advanced chunk splitting strategy
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  webpack(config, { isServer }) {
    if (!isServer) {
      config.optimization.splitChunks = {
        chunks: "all",
        cacheGroups: {
          // Separate React into its own long-lived chunk
          // React rarely updates - users keep this cached across deploys
          react: {
            test: /[\\/]node_modules[\\/](react|react-dom)[\\/]/,
            name: "react-vendor",
            chunks: "all",
            priority: 30,
          },
          // Heavy UI library in its own chunk
          ui: {
            test: /[\\/]node_modules[\\/](@radix-ui|framer-motion)[\\/]/,
            name: "ui-vendor",
            chunks: "all",
            priority: 20,
          },
          // Everything else from node_modules
          vendors: {
            test: /[\\/]node_modules[\\/]/,
            name: "vendors",
            chunks: "all",
            priority: 10,
          },
        },
      };
    }
    return config;
  },
};

export default nextConfig;
// Dynamic import - splits this component into its own chunk
// Only loads when the modal is opened, not on initial page load
import { useState } from "react";
import dynamic from "next/dynamic";

const HeavyChartModal = dynamic(
  () => import("@/components/HeavyChartModal"),
  {
    loading: () => <Skeleton className="h-96 w-full" />,
    ssr: false, // chart libraries often use window - skip SSR
  }
);

export function Dashboard() {
  const [open, setOpen] = useState(false);
  return (
    <>
      <button onClick={() => setOpen(true)}>View Analytics</button>
      {open && <HeavyChartModal />}
    </>
  );
}

Interview angle: Explain moduleIds: 'deterministic' - without it, Webpack assigns numeric IDs to modules based on order, so adding one new module shifts all IDs and busts every user's cache. Deterministic IDs are short hashes of the module path, so they stay stable across builds.
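In config form it's one line (a sketch - Webpack 5 already defaults to this in production mode, so the explicit setting mainly matters for custom or older setups):

```typescript
// webpack.config.ts - stable IDs so one new module doesn't shift every ID
export default {
  optimization: {
    moduleIds: "deterministic", // hash of the module path - stable across builds
    chunkIds: "deterministic",  // same idea at the chunk level
  },
};
```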


7. Rendering Performance Patterns

React.memo, useMemo, and useCallback are tools, not solutions. Applying them without profiling first is premature optimization - and it adds real cost: memo comparison runs on every render, and the closure capture in useCallback has its own overhead. The real skill is reading the React DevTools Profiler and identifying *actual* wasted renders, not hypothetical ones.

useTransition is the more interesting senior topic. It marks state updates as non-urgent, letting React defer them when the user is interacting with something else - keeping the UI at 60fps even during heavy re-renders.

"use client";
import { useState, useTransition, useDeferredValue, useMemo, memo } from "react";

// useTransition - mark the filter update as non-urgent
// The input stays responsive even if filtering 10,000 items is slow
function ProductSearch({ products }: { products: Product[] }) {
  const [query, setQuery] = useState("");
  const [isPending, startTransition] = useTransition();
  const [filteredProducts, setFilteredProducts] = useState(products);

  function handleChange(e: React.ChangeEvent<HTMLInputElement>) {
    const value = e.target.value;
    setQuery(value); // urgent - update input immediately

    startTransition(() => {
      // non-urgent - React can interrupt and restart this
      setFilteredProducts(products.filter((p) =>
        p.name.toLowerCase().includes(value.toLowerCase())
      ));
    });
  }

  return (
    <>
      <input value={query} onChange={handleChange} placeholder="Search..." />
      {isPending && <span>Filtering...</span>}
      <ProductList products={filteredProducts} />
    </>
  );
}

// useDeferredValue - alternative: defer the value, not the setter
function ProductSearchV2({ products }: { products: Product[] }) {
  const [query, setQuery] = useState("");
  const deferredQuery = useDeferredValue(query); // lags behind during transitions

  const filtered = useMemo(
    () => products.filter((p) =>
      p.name.toLowerCase().includes(deferredQuery.toLowerCase())
    ),
    [products, deferredQuery] // only recomputes when deferredQuery settles
  );

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ProductList products={filtered} />
    </>
  );
}

// memo - only re-renders if props actually changed
// Wrap expensive leaf components, not everything
const ProductCard = memo(function ProductCard({ product }: { product: Product }) {
  return <div>{product.name} - ${product.price}</div>;
});

Interview angle: Pull up the React DevTools Profiler in an interview demo and show a "wasted render" - a component that re-rendered with the same props because its parent didn't memoize a callback. That's 10x more convincing than talking about it.


8. Image & Asset Strategy

Images are the single biggest source of LCP failures and unnecessary bandwidth. Adding next/image fixes the easy cases. Senior engineers fix the hard cases: the LCP image that next/image lazy-loads by default, the responsive breakpoints that load a 1920px image on a 375px phone, and the AVIF/WebP format decision that CDN transform pipelines handle differently from Next.js's built-in optimizer.

import Image from "next/image";

// LCP image - must be eager, must have fetchPriority="high"
// next/image lazy-loads by default - that's wrong for the LCP candidate
export function HeroSection() {
  return (
    <Image
      src="/hero.jpg"
      alt="Hero banner showing dashboard analytics interface"
      width={1200}
      height={630}
      priority             // sets loading="eager" + fetchPriority="high"
      sizes="100vw"        // hint: this image spans full viewport width
      quality={85}
    />
  );
}

// Non-LCP image - lazy load, correct sizes for responsive breakpoints
export function ProductImage({ src }: { src: string }) {
  return (
    <Image
      src={src}
      alt="Product preview"
      width={400}
      height={300}
      loading="lazy"       // default - fine for below-the-fold images
      sizes="(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 400px"
      // ^ tells browser: full-width on mobile, 50% on tablet, 400px on desktop
      // without sizes, the browser downloads the full-size image on mobile
    />
  );
}
<!-- Native HTML - for non-Next.js projects -->
<!-- Art direction with <picture>: different image on mobile vs desktop -->
<picture>
  <source
    media="(max-width: 768px)"
    srcset="/hero-mobile.avif"
    type="image/avif"
  />
  <source
    media="(max-width: 768px)"
    srcset="/hero-mobile.webp"
    type="image/webp"
  />
  <source
    media="(min-width: 769px)"
    srcset="/hero-desktop.avif"
    type="image/avif"
  />
  <source
    media="(min-width: 769px)"
    srcset="/hero-desktop.webp"
    type="image/webp"
  />
  <!-- Fallback for browsers without AVIF/WebP support -->
  <img
    src="/hero-desktop.jpg"
    fetchpriority="high"
    loading="eager"
    width="1200"
    height="630"
    alt="Hero image"
  />
</picture>

<!-- Preload the LCP image in <head> - browser discovers it earlier -->
<link
  rel="preload"
  as="image"
  href="/hero.avif"
  imagesrcset="/hero-mobile.avif 768w, /hero-desktop.avif 1200w"
  imagesizes="(max-width: 768px) 100vw, 1200px"
/>

Interview angle: Describe the difference between priority (a Next.js prop that adds a <link rel="preload"> to the document head) and loading="eager" (a native HTML attribute that simply disables lazy-loading). Priority does more - it ensures the browser discovers the image in the preload scanner before the component tree renders.


9. Network Performance

The network layer is where architectural decisions become milliseconds. Most devs know preload exists. Senior devs know the *priority queue* - browsers download resources in a strict priority order, and a misplaced preload can actually push your LCP image down the queue. Service worker caching strategies have similar failure modes: a cache-first strategy that never invalidates is just a slow CDN you host yourself.

<!-- Resource hints - each has a specific job, not interchangeable -->

<!-- dns-prefetch: resolve DNS early for a domain you'll fetch from soon -->
<link rel="dns-prefetch" href="https://fonts.googleapis.com" />

<!-- preconnect: DNS + TCP + TLS handshake - use sparingly (max 2-3) -->
<!-- costs a TCP connection; wrong domains waste it -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />

<!-- preload: fetch this resource NOW, at high priority -->
<!-- use for LCP images, critical fonts, above-the-fold CSS -->
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin />

<!-- prefetch: fetch this resource at LOW priority during idle time -->
<!-- use for next-page resources the user will probably need -->
<link rel="prefetch" href="/dashboard/bundle.js" as="script" />

<!-- Speculation Rules API (document rules require Chrome 121+) - prerender the likely next page -->
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/checkout" },
    "eagerness": "moderate"
  }]
}
</script>
// Service Worker - three caching strategies with different tradeoffs
// Install in your Next.js app with next-pwa or a custom sw.ts

// Strategy 1: Cache First (fastest, stale risk)
// Use for: static assets, fonts, images that change rarely
self.addEventListener("fetch", (event: FetchEvent) => {
  if (event.request.destination === "image") {
    event.respondWith(
      caches.open("images-v1").then(async (cache) => {
        const cached = await cache.match(event.request);
        if (cached) return cached; // serve from cache immediately
        const response = await fetch(event.request);
        cache.put(event.request, response.clone()); // cache for next time
        return response;
      })
    );
  }
});

// Strategy 2: Stale-While-Revalidate (fast + fresh)
// Use for: API responses, HTML pages where slight staleness is OK
self.addEventListener("fetch", (event: FetchEvent) => {
  if (event.request.url.includes("/api/feed")) {
    event.respondWith(
      caches.open("api-v1").then(async (cache) => {
        const cached = await cache.match(event.request);
        const networkPromise = fetch(event.request).then((response) => {
          cache.put(event.request, response.clone()); // update cache in background
          return response;
        });
        return cached || networkPromise; // serve cache instantly, refresh behind the scenes
      })
    );
  }
});

// Strategy 3: Network First (always fresh, slow on bad connections)
// Use for: auth endpoints, cart, checkout - data must be current
self.addEventListener("fetch", (event: FetchEvent) => {
  if (event.request.url.includes("/api/cart")) {
    event.respondWith(
      fetch(event.request).catch(() =>
        caches.match(event.request) // only fall back to cache if offline
      )
    );
  }
});

Interview angle: Describe where stale-while-revalidate goes wrong in production: a deploy pushed new API response shapes, but users with a cached service worker see the old shape. The fix - cache versioning and a SKIP_WAITING activation pattern - is the kind of real-world detail that separates a senior who's shipped service workers from one who's read about them.
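One way to sketch that fix - versioned cache names plus an explicit activation handshake. The cache names and message string here are assumptions; the pure helper is separated out so the invalidation logic is testable:

```typescript
// sw.ts - bump CACHE_VERSION on every deploy that changes response shapes
const CACHE_VERSION = "v2";
const CACHE_NAME = `api-${CACHE_VERSION}`;

// Pure helper: which cache keys belong to older deploys and should be deleted
function staleCaches(keys: string[], keep: string): string[] {
  return keys.filter((key) => key !== keep);
}

// Browser-only wiring (sketch):
//
// self.addEventListener("activate", (event) =>
//   event.waitUntil(
//     caches.keys().then((keys) =>
//       Promise.all(staleCaches(keys, CACHE_NAME).map((k) => caches.delete(k)))
//     )
//   )
// );
//
// self.addEventListener("message", (event) => {
//   // the page posts "SKIP_WAITING" (e.g. after an "update available" toast)
//   // so the new worker activates immediately instead of waiting for all tabs to close
//   if (event.data === "SKIP_WAITING") self.skipWaiting();
// });
```

Old-shape responses live in `api-v1`; the activate handler deletes them the moment the new worker takes over.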


Citation Capsule - Tier 2

A 0.1-second improvement in page load speed increases retail conversions by 8.4% and average order value by 9.2%, according to Deloitte and Google research (Magnet.co). Meanwhile, over 50% of websites still fail Core Web Vitals thresholds (BrightVessel, 2025), making performance engineering a direct business differentiator.


Architecture Patterns (Interview Differentiators)

34% of enterprise applications now use micro-frontend architecture, with Module Federation holding 51.8% of the tooling share (TSH.io). Architecture questions don't have one right answer - interviewers are evaluating your reasoning process, your awareness of tradeoffs, and whether you've hit these problems in production or just read about them.

10. Micro-Frontend Architecture

Micro-frontends solve an org problem disguised as a technical one. When five teams own one frontend, deploy frequency drops to the slowest team's cadence. Module Federation solves this by letting each team build and deploy independently. But the hard problems aren't in the docs: React context doesn't work across federation boundaries unless you mark it as a singleton, and two teams deploying at different cadences means users sometimes load mismatched component versions mid-session.

// webpack.config.ts - Host App (the shell)
// The host loads remote modules at runtime - no build-time coupling
import { ModuleFederationPlugin } from "@module-federation/enhanced";

export default {
  plugins: [
    new ModuleFederationPlugin({
      name: "host",
      remotes: {
        // remote app URL - can point to a different deploy URL per environment
        checkout: "checkout@https://checkout.myapp.com/remoteEntry.js",
        catalog: "catalog@https://catalog.myapp.com/remoteEntry.js",
      },
      shared: {
        react: {
          singleton: true,      // CRITICAL: only one React instance across all apps
          requiredVersion: "^19.0.0",
          eager: true,
        },
        "react-dom": { singleton: true, requiredVersion: "^19.0.0", eager: true },
        // Shared auth context - all micro-frontends use the same instance
        "@myapp/auth-context": { singleton: true, eager: true },
      },
    }),
  ],
};

// webpack.config.ts - Remote App (e.g., Checkout team)
export default {
  plugins: [
    new ModuleFederationPlugin({
      name: "checkout",
      filename: "remoteEntry.js",   // served at /remoteEntry.js
      exposes: {
        "./CheckoutFlow": "./src/CheckoutFlow", // exposed to host
      },
      shared: {
        react: { singleton: true, requiredVersion: "^19.0.0" },
        "react-dom": { singleton: true, requiredVersion: "^19.0.0" },
      },
    }),
  ],
};
// Host app - lazy-loading a remote micro-frontend component
import { lazy, Suspense } from "react";

// This import resolves at runtime from checkout team's CDN
const CheckoutFlow = lazy(() => import("checkout/CheckoutFlow"));

export function CartPage() {
  return (
    <Suspense fallback={<CheckoutSkeleton />}>
      <CheckoutFlow />
    </Suspense>
  );
}

Interview angle: Describe the singleton: true failure scenario - two teams ship different React versions, the singleton flag is missing, and now you have two React instances in the same DOM. React hooks throw cryptic errors because useState from team A's React doesn't match team B's. That's the production incident that makes interviewers nod.


11. State Machines & Finite Automata in UI

Boolean flag soup is a mid-level pattern. isLoading && !isError && data conditionals accumulate until someone adds isPaused and breaks four things. State machines replace that with explicit states and transitions - it's impossible to be in the loading and error states simultaneously because the machine doesn't allow it.

XState is the standard, but the concept translates directly to useReducer. The interview point isn't the library - it's the mental model.

// BAD - boolean flag soup: impossible states are possible
const [isLoading, setIsLoading] = useState(false);
const [isError, setIsError] = useState(false);
const [isSuccess, setIsSuccess] = useState(false);
const [isRetrying, setIsRetrying] = useState(false);
// Can you be isLoading AND isError? The code doesn't prevent it.

// GOOD - useReducer as a lightweight state machine
type State =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: User }
  | { status: "error"; error: string }
  | { status: "retrying"; attempt: number };

type Action =
  | { type: "FETCH" }
  | { type: "SUCCESS"; data: User }
  | { type: "ERROR"; error: string }
  | { type: "RETRY" };

function reducer(state: State, action: Action): State {
  switch (state.status) {
    case "idle":
      if (action.type === "FETCH") return { status: "loading" };
      return state;
    case "loading":
      if (action.type === "SUCCESS") return { status: "success", data: action.data };
      if (action.type === "ERROR") return { status: "error", error: action.error };
      return state;
    case "error":
      if (action.type === "RETRY") return { status: "retrying", attempt: 1 };
      return state;
    default:
      return state;
  }
  // Impossible: "loading" + "error" simultaneously - the type system prevents it
}

function UserProfile({ userId }: { userId: string }) {
  const [state, dispatch] = useReducer(reducer, { status: "idle" });

  useEffect(() => {
    dispatch({ type: "FETCH" });
    fetchUser(userId)
      .then((data) => dispatch({ type: "SUCCESS", data }))
      .catch((err) => dispatch({ type: "ERROR", error: err.message }));
  }, [userId]);

  if (state.status === "idle") return null;
  if (state.status === "loading") return <Spinner />;
  if (state.status === "error") return (
    <ErrorView message={state.error} onRetry={() => dispatch({ type: "RETRY" })} />
  );
  if (state.status === "success") return <UserCard user={state.data} />;
  return <Spinner />; // "retrying" - without this, the component returns undefined and React throws
}

Interview angle: Show the type narrowing benefit - state.data is only accessible when state.status === "success", so the compiler prevents you from accidentally reading undefined. That's a compile-time guarantee that boolean flags can never provide.


12. Optimistic UI & Real-Time Patterns

Optimistic UI is easy to implement and hard to implement *correctly*. The happy path - update locally, confirm on the server - takes five minutes. The failure path - rollback on error, handle in-flight conflicts, avoid race conditions - is where the interview goes deep. Interviewers want to see that you treat the rollback as a first-class concern, not an afterthought.

"use client";
import { useMutation, useQueryClient } from "@tanstack/react-query";

interface Todo { id: string; text: string; completed: boolean; }

function TodoItem({ todo }: { todo: Todo }) {
  const queryClient = useQueryClient();

  const toggleMutation = useMutation({
    mutationFn: (id: string) => api.toggleTodo(id),

    // Step 1: Optimistically update the cache BEFORE the request
    onMutate: async (id) => {
      // Cancel any in-flight refetches - they'd overwrite our optimistic update
      await queryClient.cancelQueries({ queryKey: ["todos"] });

      // Snapshot current state for rollback
      const previousTodos = queryClient.getQueryData<Todo[]>(["todos"]);

      // Apply the optimistic update immediately
      queryClient.setQueryData<Todo[]>(["todos"], (old) =>
        old?.map((t) => t.id === id ? { ...t, completed: !t.completed } : t)
      );

      return { previousTodos }; // pass snapshot to onError
    },

    // Step 2: Rollback if the server returns an error
    onError: (err, id, context) => {
      queryClient.setQueryData(["todos"], context?.previousTodos);
      toast.error("Failed to update. Reverted.");
    },

    // Step 3: Sync with server truth - always refetch on settle
    onSettled: () => {
      queryClient.invalidateQueries({ queryKey: ["todos"] });
    },
  });

  return (
    <li onClick={() => toggleMutation.mutate(todo.id)}>
      <span style={{ textDecoration: todo.completed ? "line-through" : "none" }}>
        {todo.text}
      </span>
    </li>
  );
}
// WebSocket with exponential backoff reconnect
class RealtimeConnection {
  private ws: WebSocket | null = null;
  private attempt = 0;
  private maxAttempts = 8;

  connect(url: string) {
    this.ws = new WebSocket(url);

    this.ws.onopen = () => {
      this.attempt = 0; // reset on successful connection
    };

    this.ws.onclose = () => {
      if (this.attempt < this.maxAttempts) {
        const delay = Math.min(1000 * 2 ** this.attempt, 30_000); // cap at 30s
        setTimeout(() => this.connect(url), delay);
        this.attempt++;
      }
    };

    this.ws.onmessage = (event) => {
      const message = JSON.parse(event.data);
      store.dispatch({ type: "WS_MESSAGE", payload: message });
    };
  }

  send(data: unknown) {
    if (this.ws?.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify(data));
    }
  }
}

Interview angle: Describe the race condition when two mutations fire in rapid succession - the second onMutate snapshot captures the optimistically updated state from the first mutation, making the rollback snapshot stale. The fix is to always refetch on onSettled, not rely on the snapshot alone.
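The race is easy to reproduce without React Query at all. This toy cache (illustrative names, no library code) runs two overlapping optimistic updates and shows why rolling back to the first snapshot silently erases the second update:

```typescript
type Todo = { id: string; completed: boolean };

let cache: Todo[] = [
  { id: "1", completed: false },
  { id: "2", completed: false },
];

// What onMutate does: snapshot the cache, then apply the optimistic update
function optimisticToggle(id: string): Todo[] {
  const snapshot = cache;
  cache = cache.map((t) => (t.id === id ? { ...t, completed: !t.completed } : t));
  return snapshot;
}

const snapshotA = optimisticToggle("1"); // mutation A fires
optimisticToggle("2");                   // mutation B fires before A settles

// Mutation A fails - rolling back to its snapshot wipes B's update too:
cache = snapshotA;
// Both todos read completed: false again, even though B toggled todo 2.
// Refetching in onSettled resyncs with server truth instead of trusting snapshots.
```

This is why the `onSettled` refetch is non-negotiable: snapshots are only valid if nothing else mutated the cache in between.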


13. API Layer Architecture

The API layer is where frontend architecture decisions compound. A naive fetch-everywhere approach creates waterfalls, inconsistent error handling, and type drift between client and server. Senior engineers design the API layer deliberately: caching strategy per resource type, error normalization at a single boundary, and retry logic that distinguishes transient failures from permanent ones.
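The retry half of that claim is the part interviewers drill into. One way to make the transient/permanent split explicit is a small wrapper like this (a sketch, not a library API - the status classification is the interesting part):

```typescript
// Transient failures (no response, 429, 5xx) are worth retrying with backoff;
// permanent failures (other 4xx) should surface immediately.
function isTransient(status: number | null): boolean {
  if (status === null) return true;      // network failure, no response at all
  if (status === 429) return true;       // rate limited - retry after backoff
  return status >= 500 && status < 600;  // server errors
}

async function fetchWithRetry(url: string, attempts = 3): Promise<Response> {
  for (let i = 0; ; i++) {
    try {
      const res = await fetch(url);
      // Return on success, on a permanent failure, or when the budget is spent
      if (res.ok || !isTransient(res.status) || i === attempts - 1) return res;
    } catch (err) {
      if (i === attempts - 1) throw err; // network error - retry until budget runs out
    }
    await new Promise((r) => setTimeout(r, 200 * 2 ** i)); // exponential backoff
  }
}
```

Retrying a 404 wastes time; failing fast on a 503 wastes a request that would likely have succeeded a second later. The classification is the design decision.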

// tRPC - end-to-end type safety without a schema compilation step
// server/router/user.ts
import { z } from "zod";
import { router, publicProcedure, protectedProcedure } from "../trpc";

export const userRouter = router({
  getProfile: protectedProcedure
    .input(z.object({ userId: z.string() }))
    .query(async ({ input, ctx }) => {
      return await db.user.findUnique({ where: { id: input.userId } });
    }),

  updateProfile: protectedProcedure
    .input(z.object({
      name: z.string().min(1).max(100),
      bio: z.string().max(500).optional(),
    }))
    .mutation(async ({ input, ctx }) => {
      return await db.user.update({
        where: { id: ctx.user.id },
        data: input,
      });
    }),
});

// client - fully typed, no codegen, no schema file to sync
import { trpc } from "@/lib/trpc";

function ProfilePage() {
  // TypeScript knows the exact return type - inferred from the server definition
  const { data: profile } = trpc.user.getProfile.useQuery({ userId: "123" });
  const updateProfile = trpc.user.updateProfile.useMutation();

  return (
    <button onClick={() => updateProfile.mutate({ name: "Abhijeet" })}>
      Update
    </button>
  );
}
// BFF (Backend for Frontend) - aggregating slow APIs, reducing client waterfalls
// Without BFF: client makes 3 separate requests, waits for all 3
// With BFF: one request to your server, which fans out in parallel

// app/api/dashboard/route.ts
export async function GET(req: Request) {
  const userId = getUserId(req);

  // Fan out to 3 internal APIs in parallel - client pays for only 1 round trip
  const [user, orders, recommendations] = await Promise.all([
    fetch(`https://user-service/users/${userId}`).then((r) => r.json()),
    fetch(`https://order-service/orders?userId=${userId}`).then((r) => r.json()),
    fetch(`https://rec-service/recs/${userId}`).then((r) => r.json()),
  ]);

  return Response.json({ user, orders, recommendations });
}

Interview angle: Explain when tRPC's tight coupling is a liability - when the frontend and backend are owned by different teams or deployed independently, a schema-based approach (OpenAPI, GraphQL) is safer because it decouples the contracts from the implementations.


14. Error Boundaries & Resilience

Error boundaries are often added reactively - something broke in production, someone wraps the offending subtree in an ErrorBoundary, done. Senior engineers design error boundary *strategy* upfront: what granularity, what recovery paths, what gets reported vs silently swallowed, and how the boundary interacts with React's concurrent features.

The critical interview gotcha: error boundaries don't catch errors thrown in event handlers, async code (promises, timers), or during SSR. They catch errors thrown while rendering, in lifecycle methods, and in constructors of the tree below them.

// react-error-boundary - production-grade error boundary setup
import { ErrorBoundary, useErrorBoundary } from "react-error-boundary";

// Granular error boundary - wraps only the widget, not the whole page
function DashboardWidget({ widgetId }: { widgetId: string }) {
  return (
    <ErrorBoundary
      FallbackComponent={WidgetErrorFallback}
      onError={(error, info) => {
        // Report to Sentry - with component stack for root cause analysis
        Sentry.captureException(error, {
          extra: { componentStack: info.componentStack, widgetId },
        });
      }}
      resetKeys={[widgetId]} // auto-reset when widgetId changes
    >
      <WidgetContent widgetId={widgetId} />
    </ErrorBoundary>
  );
}

// Recoverable fallback - shows retry UI instead of blank space
function WidgetErrorFallback({
  error,
  resetErrorBoundary,
}: {
  error: Error;
  resetErrorBoundary: () => void;
}) {
  return (
    <div role="alert" className="widget-error">
      <p>Widget failed to load.</p>
      <button onClick={resetErrorBoundary}>Try again</button>
    </div>
  );
}

// useErrorBoundary - trigger boundary from async code (event handlers, etc.)
function DataFetcher() {
  const { showBoundary } = useErrorBoundary();

  async function loadData() {
    try {
      await fetchCriticalData();
    } catch (error) {
      // Manually throw into the nearest error boundary
      // This catches what class-based componentDidCatch can't
      showBoundary(error);
    }
  }

  return <button onClick={loadData}>Load Data</button>;
}

Interview angle: Explain the error boundary hierarchy - a top-level boundary catches everything but shows a blank page on failure. Granular boundaries isolate failure to one widget. The tradeoff is code verbosity vs user experience. Senior engineers design the hierarchy to match the product's tolerance for partial failure.

Citation Capsule - Tier 3

34% of enterprise applications now use micro-frontend architecture, with Module Federation capturing 51.8% of the micro-frontend tooling market (TSH.io). For senior frontend candidates, understanding the operational complexity of shared dependencies and cross-team deployment coordination has become a baseline expectation in architecture discussions.


TIER 4 - DX & Tooling Mastery

Vite has 84% usage and 98% satisfaction, while Webpack sits at 87% usage but only 26% satisfaction (State of JavaScript 2025). Tooling questions reveal whether you understand the build pipeline end-to-end, or just run scripts. Seniors are expected to own DX decisions, not delegate them.

15. Monorepo Architecture

Monorepos aren't a technical decision - they're an organizational one. The benefit is atomic commits across packages, shared tooling, and zero version drift between internal dependencies. The cost is CI complexity and the need for proper task orchestration to avoid rebuilding everything on every commit. Turborepo solves this with content-hash-based caching: if nothing in packages/ui changed, its build task is replayed from cache in milliseconds.
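Content-addressed caching is conceptually simple: the cache key is a hash over the task's inputs, including the keys of upstream tasks - which is exactly what `^build` feeds in. A toy version of the idea (not Turborepo's actual hashing):

```typescript
import { createHash } from "node:crypto";

// Toy cache key: hash the task name, its input file contents, and the keys of
// upstream tasks. Identical inputs -> identical key -> outputs replay from cache.
function cacheKey(task: string, inputs: string[], upstreamKeys: string[]): string {
  const h = createHash("sha256");
  h.update(task);
  for (const file of inputs) h.update(file);
  for (const dep of upstreamKeys) h.update(dep); // ^build: dependency outputs affect the key
  return h.digest("hex");
}
```

Because upstream keys feed the hash, a change in `packages/ui` invalidates the build of every app that depends on it - and nothing else.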

// turbo.json - task pipeline configuration
// (Turborepo 2.x names this key "tasks"; 1.x called it "pipeline")
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],  // build dependencies first (topological order)
      "outputs": [".next/**", "dist/**"],
      "cache": true
    },
    "lint": {
      "outputs": [],
      "cache": true             // cache lint results per file hash
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": ["coverage/**"],
      "cache": true
    },
    "dev": {
      "cache": false,           // never cache dev server
      "persistent": true
    }
  },
  "remoteCache": {
    "enabled": true             // share cache across CI runs and team members
  }
}
# Monorepo structure - internal packages as first-class citizens
apps/
  web/                   # Next.js app
  docs/                  # Astro docs site
packages/
  ui/                    # shared component library
    src/Button.tsx
    package.json         # name: "@acme/ui"
  config/
    eslint/              # shared ESLint config
    typescript/          # shared tsconfig
  utils/                 # shared utilities
// apps/web/package.json - consuming internal packages
{
  "dependencies": {
    "@acme/ui": "workspace:*",     // pnpm workspace protocol - always resolves to the local package
    "@acme/utils": "workspace:*"
  }
}

Interview angle: Explain the phantom dependency problem - without proper package boundaries, apps/web can import from node_modules that are hoisted from packages/ui, but that import breaks when packages/ui is published independently. Strict package.json boundaries prevent this.
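The boundary rule can be expressed as a tiny lint-style check. This helper is hypothetical - in practice tools like eslint-plugin-import's no-extraneous-dependencies rule enforce it - but it captures the logic:

```typescript
// An import is a phantom dependency if the package isn't declared in the
// importing package's own package.json. Relative/internal paths are exempt.
function isPhantomImport(specifier: string, declaredDeps: string[]): boolean {
  if (specifier.startsWith(".") || specifier.startsWith("#")) return false;
  const pkgName = specifier.startsWith("@")
    ? specifier.split("/").slice(0, 2).join("/") // scoped: "@acme/ui/button" -> "@acme/ui"
    : specifier.split("/")[0];                   // bare:   "lodash/get"      -> "lodash"
  return !declaredDeps.includes(pkgName);
}
```

The import works today because hoisting put the package in a shared node_modules; the check fails it anyway, because it will break the moment the package is installed in isolation.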


16. Design Tokens & Theming Systems

Design tokens decouple design decisions from implementation. When a brand color changes, you update one token - not hundreds of hardcoded hex values across a codebase. The three-tier model is what separates a token *system* from a token *file*: primitive tokens define raw values, semantic tokens give them meaning, component tokens apply semantics to specific UI contexts.

// tokens/primitive.json - raw values, no meaning attached
{
  "color": {
    "blue-500": { "value": "#3b82f6" },
    "blue-600": { "value": "#2563eb" },
    "gray-100": { "value": "#f3f4f6" },
    "gray-900": { "value": "#111827" }
  },
  "spacing": {
    "4": { "value": "1rem" },
    "8": { "value": "2rem" }
  }
}

// tokens/semantic.json - meaning attached to primitives
{
  "color": {
    "interactive": {
      "primary": { "value": "{color.blue-500}" },
      "primary-hover": { "value": "{color.blue-600}" }
    },
    "background": {
      "default": { "value": "{color.gray-100}" },
      "inverse": { "value": "{color.gray-900}" }
    }
  }
}
// style-dictionary - transforms token JSON to CSS custom properties
// build-tokens.ts
import StyleDictionary from "style-dictionary";

StyleDictionary.extend({
  source: ["tokens/**/*.json"],
  platforms: {
    css: {
      transformGroup: "css",
      buildPath: "src/tokens/",
      files: [{
        destination: "tokens.css",
        format: "css/variables",
        options: { outputReferences: true },
      }],
    },
    js: {
      transformGroup: "js",
      buildPath: "src/tokens/",
      files: [{ destination: "tokens.ts", format: "javascript/es6" }],
    },
  },
}).buildAllPlatforms();
/* Generated output - tokens.css */
:root {
  --color-blue-500: #3b82f6;
  --color-interactive-primary: var(--color-blue-500); /* outputReferences: true */
  --color-background-default: var(--color-gray-100);
}

/* Dark mode as a token set swap - not a .dark class with overrides */
[data-theme="dark"] {
  --color-background-default: var(--color-gray-900);
  --color-interactive-primary: var(--color-blue-400);
}

Interview angle: Explain why token set swapping scales better than a .dark CSS class that overrides specific values. Token sets mean you can have brand-A, brand-B, and dark themes that compose independently - critical for white-labeling.
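The `{token.path}` reference syntax above is what `outputReferences` preserves in the CSS output. Resolving it is a small recursive substitution - a toy version of what style-dictionary does at build time, using a flat map keyed by dotted path:

```typescript
// Flat token map; values may reference other tokens with {dotted.path} syntax.
const tokens: Record<string, string> = {
  "color.blue-500": "#3b82f6",
  "color.interactive.primary": "{color.blue-500}",
};

// Recursively substitute references (assumes no cycles - real tools detect them).
function resolve(value: string, map: Record<string, string>): string {
  return value.replace(/\{([^}]+)\}/g, (_, path: string) => {
    const target = map[path];
    if (target === undefined) throw new Error(`Unknown token reference: ${path}`);
    return resolve(target, map); // references can chain: component -> semantic -> primitive
  });
}
```

Chained resolution is what makes the three tiers composable: a component token points at a semantic token, which points at whichever primitive the active theme supplies.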


17. Testing for Seniors

Test coverage percentage is a vanity metric. 90% coverage on tests that assert implementation details - internal state, method calls - gives you false confidence and brittle tests that break on every refactor. Senior engineers test *behavior*, not implementation. The question isn't "did setIsLoading get called?" but "does the user see a spinner when the request is in flight?"

// BAD - testing implementation details (breaks on refactor)
it("sets isLoading to true when fetching", () => {
  const { result } = renderHook(() => useUserData("123"));
  expect(result.current.isLoading).toBe(true); // testing internal state
});

// GOOD - testing behavior the user observes
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { http, HttpResponse } from "msw";
import { server } from "@/mocks/server"; // MSW server

describe("UserProfile", () => {
  it("shows a loading spinner while fetching the user", async () => {
    // Delay the mock response - simulate slow network
    server.use(
      http.get("/api/users/123", async () => {
        await new Promise((r) => setTimeout(r, 100));
        return HttpResponse.json({ name: "Abhijeet" });
      })
    );

    render(<UserProfile userId="123" />);
    expect(screen.getByRole("status")).toBeInTheDocument(); // spinner
    expect(await screen.findByText("Abhijeet")).toBeInTheDocument(); // resolved
  });

  it("shows an error message when the request fails", async () => {
    server.use(
      http.get("/api/users/123", () => HttpResponse.error())
    );

    render(<UserProfile userId="123" />);
    expect(await screen.findByRole("alert")).toBeInTheDocument();
  });

  it("submits the form with keyboard navigation only", async () => {
    const user = userEvent.setup(); // userEvent over fireEvent - real events
    render(<LoginForm />);

    await user.tab();                        // focus email
    await user.keyboard("test@example.com");
    await user.tab();                        // focus password
    await user.keyboard("password123");
    await user.keyboard("{Enter}");          // submit

    expect(await screen.findByText("Welcome back!")).toBeInTheDocument();
  });
});
// msw/handlers.ts - API mocking at the network level
// Works in both Jest (Node) and Playwright (browser) - same handlers
import { http, HttpResponse } from "msw";

export const handlers = [
  http.get("/api/users/:id", ({ params }) => {
    return HttpResponse.json({
      id: params.id,
      name: "Test User",
      email: "test@example.com",
    });
  }),

  http.post("/api/auth/login", async ({ request }) => {
    const body = await request.json();
    if (body.password === "wrong") {
      return HttpResponse.json({ error: "Invalid credentials" }, { status: 401 });
    }
    return HttpResponse.json({ token: "fake-jwt-token" });
  }),
];

Interview angle: Describe why MSW is architecturally superior to jest.mock for API mocking - MSW intercepts at the network layer, so it works the same in unit tests, integration tests, and E2E tests. jest.mock couples your tests to your fetch implementation.
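In practice the portability argument is just two setup files wiring the same handlers array into different runtimes (assuming MSW v2's split entry points):

```typescript
// mocks/server.ts - Node runtime (Jest / Vitest)
import { setupServer } from "msw/node";
import { handlers } from "./handlers";

export const server = setupServer(...handlers);

// mocks/browser.ts - browser runtime (dev preview, Playwright)
// import { setupWorker } from "msw/browser";
// import { handlers } from "./handlers";
// export const worker = setupWorker(...handlers);
```

One handlers file, two thin adapters - change a mock once and every test layer sees it.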


18. CI/CD for Frontend

Frontend CI/CD in 2026 is more than "run tests, deploy on green." Senior engineers design pipelines that catch regressions before they reach production: bundle size regressions caught in PR review, visual regressions caught before QA, and performance regressions caught by automated Lighthouse runs. Every check that runs in CI is one fewer incident in production.

# .github/workflows/ci.yml
name: Frontend CI

on: [pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # pnpm must be on PATH before setup-node can use its lockfile cache
      - uses: pnpm/action-setup@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with: { node-version: "20", cache: "pnpm" }

      - run: pnpm install --frozen-lockfile

      - name: Type check
        run: pnpm tsc --noEmit

      - name: Lint
        run: pnpm lint

      - name: Unit tests
        run: pnpm test --coverage

      - name: Build
        run: pnpm build

      # Bundle size check - fails PR if main bundle exceeds budget
      - name: Check bundle size
        uses: andresz1/size-limit-action@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          # .size-limit.json defines per-chunk budgets
          # PR comment shows before/after diff automatically

      # Lighthouse CI - performance budget enforced per PR
      - name: Lighthouse CI
        uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            http://localhost:3000
            http://localhost:3000/products
          budgetPath: ./lighthouse-budget.json
          uploadArtifacts: true
// .size-limit.json - per-route bundle budgets
[
  {
    "path": ".next/static/chunks/pages/index-*.js",
    "limit": "120 kB",
    "name": "Home page bundle"
  },
  {
    "path": ".next/static/chunks/pages/products-*.js",
    "limit": "150 kB",
    "name": "Products page bundle"
  }
]

// lighthouse-budget.json - performance budgets
[
  {
    "resourceSizes": [{ "resourceType": "script", "budget": 300 }],
    "timings": [
      { "metric": "first-contentful-paint", "budget": 1500 },
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "cumulative-layout-shift", "budget": 0.1 }
    ]
  }
]

Interview angle: Describe the feature flag / deploy decoupling pattern - with LaunchDarkly or a home-built flag system, you can deploy code to production without releasing it. That separates deployment risk (can we ship safely?) from release risk (is this feature ready?). That distinction is a seniority signal most interviewers are watching for.
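A home-built version of that decoupling is small. The key property is deterministic per-user bucketing, so a percentage rollout is stable across sessions (hypothetical sketch, not LaunchDarkly's API):

```typescript
type FlagConfig = { enabled: boolean; rolloutPercent?: number };

// Deployed code calls isEnabled at runtime; releasing = flipping config,
// not shipping a new build.
function isEnabled(flag: FlagConfig, userId: string): boolean {
  if (!flag.enabled) return false;                 // kill switch always wins
  if (flag.rolloutPercent === undefined) return true;
  // Deterministic bucketing: the same user always lands in the same bucket
  let hash = 0;
  for (const ch of userId) hash = (Math.imul(hash, 31) + ch.charCodeAt(0)) >>> 0;
  return hash % 100 < flag.rolloutPercent;
}
```

Because bucketing hashes the user ID rather than rolling dice per request, a user in the 10% cohort stays in it as the rollout widens - which is what makes gradual releases debuggable.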

[CHART: Bar chart - Vite vs Webpack satisfaction scores (98% vs 26%) - Source: State of JavaScript 2025]


Advanced Browser & Runtime

Want to make an interviewer lean forward? Answer a browser internals question with a specific example from production. This tier separates engineers who've read the specs from those who've actually used them to solve real problems.

19. Web Workers & Off-Main-Thread

The main thread is a single-lane road. Rendering, event handling, and JavaScript execution all compete for it. A 200ms data processing task during a scroll event is a jank spike. Web Workers move that computation to a separate thread, keeping the main thread free for the one thing it must do: respond to users. The tricky part is that Workers can't touch the DOM, and data passed between threads is copied via the structured clone algorithm - functions, class instances, and DOM nodes can't cross the boundary.

// worker.ts - runs in a separate thread, no DOM access
self.onmessage = (event: MessageEvent<{ items: number[] }>) => {
  const { items } = event.data;

  // Heavy computation - won't block the main thread
  const sorted = items
    .filter((n) => n > 0)
    .sort((a, b) => a - b)
    .map((n) => n * 2);

  // Post result back to main thread
  self.postMessage({ sorted });
};

// main thread - using the worker
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  type: "module",
});

worker.postMessage({ items: largeArray }); // sends a copy - not a reference

worker.onmessage = (event: MessageEvent<{ sorted: number[] }>) => {
  setState(event.data.sorted); // update UI with result
};

// Transferable objects - zero-copy for ArrayBuffer (avoids clone overhead)
const buffer = new ArrayBuffer(1024 * 1024 * 10); // 10MB
worker.postMessage({ buffer }, [buffer]); // transfers ownership - buffer is detached (zero-length, unusable) in the main thread
// Comlink - RPC abstraction over postMessage (no manual message routing)
// worker.ts
import * as Comlink from "comlink";

export const api = { // exported so the main thread can import its type
  async processCSV(csv: string): Promise<ParsedRow[]> {
    return parseCSV(csv); // heavy parsing off main thread
  },

  async runImageFilter(imageData: ImageData, filter: string): Promise<ImageData> {
    return applyFilter(imageData, filter);
  },
};

Comlink.expose(api); // expose the API to the main thread

// main.ts - call worker functions like normal async functions
import * as Comlink from "comlink";
import type { api } from "./worker"; // type-only import - erased from the runtime bundle

const worker = new Worker(new URL("./worker.ts", import.meta.url), { type: "module" });
const workerApi = Comlink.wrap<typeof api>(worker);

// Looks like a local function call - Comlink handles postMessage under the hood
const rows = await workerApi.processCSV(rawCSVString);

Interview angle: Describe the SharedArrayBuffer pattern for high-frequency data sharing - instead of copying data on each message, multiple threads read/write a shared memory buffer using Atomics for synchronization. It requires cross-origin isolation - both Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp headers - which trips up many teams in production.
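The shape of that pattern, minus the worker plumbing - Atomics operations on a typed-array view of a SharedArrayBuffer (in a real app the buffer is posted to the worker once, then never copied again):

```typescript
// One shared 4-byte buffer viewed as a single Int32 counter.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

// A worker holding a view of the SAME memory would make identical calls:
Atomics.add(counter, 0, 1); // atomic increment - no torn writes across threads
Atomics.store(counter, 0, Atomics.load(counter, 0) + 41); // note: this load+store pair is NOT atomic as a whole

const value = Atomics.load(counter, 0); // 42
// Atomics.wait / Atomics.notify coordinate threads without busy-polling
// (browsers disallow Atomics.wait on the main thread - use it in workers).
```

The `load`-then-`store` line is the classic follow-up: each call is atomic, but the pair isn't, which is exactly why `Atomics.add` and `Atomics.compareExchange` exist.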


20. Paint & Composite Optimization

The browser's rendering pipeline has four stages: Style, Layout, Paint, Composite. Every animation or style change triggers some subset of them. Layout (reflow) is the most expensive - it recalculates the position of every affected element. Paint redraws pixels. Composite just moves already-painted layers on the GPU. Senior engineers design animations to hit only the Composite stage by sticking to transform and opacity.

/* BAD - triggers Layout + Paint + Composite on every frame */
/* Browser must recalculate position of surrounding elements */
@keyframes slide-bad {
  from { left: 0; }
  to { left: 300px; }
}

/* GOOD - triggers Composite only - runs on the compositor thread, stays smooth even when the main thread is busy */
@keyframes slide-good {
  from { transform: translateX(0); }
  to { transform: translateX(300px); }
}

/* will-change - promotes element to its own compositor layer BEFORE animation */
/* Use sparingly - each layer costs GPU memory */
.animated-card {
  will-change: transform; /* hint: this will animate transform */
  /* Don't apply to every element - only ones that genuinely animate */
}

/* content-visibility: auto - skip rendering off-screen sections entirely */
/* Massive performance win for long pages (news feeds, product lists) */
.article-section {
  content-visibility: auto;
  contain-intrinsic-size: 0 500px; /* estimated height prevents CLS on scroll */
}

/* CSS containment - limit style/layout recalculation scope */
.widget {
  contain: layout style; /* changes inside don't affect outside elements */
}
// requestAnimationFrame vs requestIdleCallback - right tool per job
// rAF: synchronize with browser paint cycle - for visual updates
// easeInOutQuad - the easing helper used below
const easeInOut = (t: number) => (t < 0.5 ? 2 * t * t : 1 - (-2 * t + 2) ** 2 / 2);

function smoothScroll(target: number) {
  const start = window.scrollY;
  const distance = target - start;
  let startTime: number;

  function step(timestamp: number) {
    if (!startTime) startTime = timestamp;
    const progress = Math.min((timestamp - startTime) / 500, 1);
    window.scrollTo(0, start + distance * easeInOut(progress));
    if (progress < 1) requestAnimationFrame(step);
  }

  requestAnimationFrame(step);
}

// rIC: do work when the browser is idle - for non-visual work
function prefetchNextPage(url: string) {
  requestIdleCallback(
    (deadline) => {
      // deadline.timeRemaining() tells you how long until next frame
      if (deadline.timeRemaining() > 10) {
        fetch(url); // prefetch when browser has spare time
      }
    },
    { timeout: 2000 } // force execution after 2s even if not idle
  );
}

Interview angle: Explain how to diagnose paint flashing in Chrome DevTools - enable "Paint flashing" in the Rendering panel and scroll. Green overlays show what's being repainted on every frame. A card grid that flashes green on scroll means something is triggering paint that should be composite-only.


21. Advanced CSS Architecture

BEM and CSS Modules are solved problems. What interviewers probe at senior level is cascade management in large codebases: how do you prevent a third-party library's styles from overriding your component styles? How do you ship dark mode without a specificity war? How do you make a sidebar component that responds to its container's width, not the viewport?

/* @layer - explicit cascade control, eliminates specificity arms race */
/* Lower layers lose to higher layers, regardless of specificity */
@layer reset, base, components, utilities, overrides;

@layer reset {
  * { box-sizing: border-box; margin: 0; }
}

@layer base {
  body { font-family: var(--font-sans); line-height: 1.5; }
  a { color: var(--color-interactive-primary); }
}

/* Third-party library styles - isolate them in their own layer.
   Note: @import must appear at the top of the stylesheet, before any other
   rules - it can't be nested inside an @layer block:
   @import url("some-library.css") layer(components); */
@layer components {
  .card { padding: 1rem; }
}

@layer utilities {
  .sr-only { position: absolute; width: 1px; height: 1px; overflow: hidden; }
}

/* Anything in `overrides` beats everything above, regardless of specificity */
@layer overrides {
  .high-priority-banner { background: var(--color-warning); }
}
/* Container queries - component responds to its container, not viewport */
/* The sidebar layout doesn't need to know about viewport breakpoints */
.card-container {
  container-type: inline-size;
  container-name: card;
}

/* Card changes layout when its container is narrow - not when viewport is narrow */
@container card (min-width: 400px) {
  .product-card {
    display: grid;
    grid-template-columns: 1fr 2fr;
  }
}

@container card (max-width: 399px) {
  .product-card {
    display: flex;
    flex-direction: column;
  }
}
/* View Transitions API - native page transition animations */
/* Works with Next.js App Router via unstable_ViewTransition */
@view-transition {
  navigation: auto;
}

::view-transition-old(root) {
  animation: 200ms ease-out fade-out;
}

::view-transition-new(root) {
  animation: 200ms ease-in fade-in;
}

/* Scroll-driven animations - no JavaScript needed */
.progress-bar {
  animation: grow-bar linear;
  animation-timeline: scroll(); /* linked to page scroll position */
  transform-origin: left;
}

@keyframes grow-bar {
  from { transform: scaleX(0); }
  to { transform: scaleX(1); }
}

Interview angle: Explain why container queries are a paradigm shift for design systems - a component previously needed viewport breakpoints documented separately for each layout context. Container queries make the component self-describing: it handles its own responsive behavior regardless of where it's placed.


22. Security in Practice

Frontend security is a layered problem. No single mechanism is sufficient. XSS, CSRF, clickjacking, third-party script supply chain attacks - each requires a different defense. Senior engineers think about CSP as a safety net when XSS does happen, not just a checklist item. The interview depth question is: "Your CSP is blocking a legitimate inline script from a third-party widget. How do you fix it without weakening the policy?"

// Next.js - Content Security Policy with nonce-based inline script allowance
// middleware.ts (runs on the Edge runtime - use the Web Crypto API, not Node's crypto module)
import { NextResponse, type NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  const nonce = btoa(crypto.randomUUID());
  const cspHeader = `
    default-src 'self';
    script-src 'self' 'nonce-${nonce}' 'strict-dynamic';
    style-src 'self' 'nonce-${nonce}';
    img-src 'self' blob: data: https://images.unsplash.com https://cdn.pixabay.com;
    font-src 'self';
    connect-src 'self' https://api.myapp.com;
    frame-ancestors 'none';
    base-uri 'self';
    form-action 'self';
  `.replace(/\s+/g, " ").trim();

  // Forward the nonce on the *request* headers - headers() in server components
  // reads request headers, so a response header would never reach them
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set("x-nonce", nonce);

  const response = NextResponse.next({ request: { headers: requestHeaders } });
  response.headers.set("Content-Security-Policy", cspHeader);
  return response;
}
// Passing the nonce to inline scripts via React (Server Component)
import { headers } from "next/headers";

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  const nonce = (await headers()).get("x-nonce") ?? ""; // headers() is async in Next.js 15+

  return (
    <html>
      <head>
        {/* nonce allows this specific inline script through the CSP */}
        <script nonce={nonce} dangerouslySetInnerHTML={{
          __html: `window.__THEME__ = "${getTheme()}";` // getTheme: your own theme helper
        }} />
      </head>
      <body>{children}</body>
    </html>
  );
}
// Subresource Integrity - verify CDN assets haven't been tampered with
// Generate the hash: openssl dgst -sha384 -binary file.js | openssl base64 -A

// In HTML:
// <script
//   src="https://cdn.example.com/lib.min.js"
//   integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
//   crossorigin="anonymous"
// ></script>

// DOMPurify - sanitize HTML before dangerouslySetInnerHTML
import DOMPurify from "dompurify";

function RichTextDisplay({ html }: { html: string }) {
  const clean = DOMPurify.sanitize(html, {
    ALLOWED_TAGS: ["p", "strong", "em", "a", "ul", "ol", "li"],
    // "target" must be allow-listed explicitly - DOMPurify strips it by default
    ALLOWED_ATTR: ["href", "rel", "target"],
  });

  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}

Interview angle: Describe Trusted Types - a browser API that requires all DOM sink writes (innerHTML, eval, setTimeout(string)) to go through a registered policy function. It makes XSS impossible at the DOM write point, not just at the input validation point. It's the defense-in-depth layer that CSP alone can't provide.
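The Trusted Types flow can be sketched as follows - a minimal, hedged illustration, not the full API surface. The names `safeSetInnerHTML` and the `"app-html"` policy are assumptions, and the regex-based `stripScripts` is a deliberately naive stand-in for a real sanitizer like DOMPurify:

```typescript
// Naive HTML stripper used as the policy body below. Illustrative only - in
// production delegate to a real sanitizer such as DOMPurify.
export function stripScripts(html: string): string {
  return html
    .replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, "") // <script> blocks
    .replace(/\son\w+="[^"]*"/gi, "");                  // inline event handlers
}

// Subset of the window.trustedTypes shape; real browsers expose more.
type TrustedTypesFactory = {
  createPolicy(
    name: string,
    rules: { createHTML?: (input: string) => string }
  ): { createHTML(input: string): unknown };
};

let cachedPolicy: { createHTML(input: string): unknown } | null = null;

// With the header `Content-Security-Policy: require-trusted-types-for 'script'`,
// assigning a raw string to innerHTML throws - only policy output is accepted.
export function safeSetInnerHTML(el: { innerHTML: unknown }, html: string): void {
  const tt = (globalThis as { trustedTypes?: TrustedTypesFactory }).trustedTypes;
  if (tt) {
    cachedPolicy ??= tt.createPolicy("app-html", { createHTML: stripScripts });
    el.innerHTML = cachedPolicy.createHTML(html); // a TrustedHTML object, not a string
  } else {
    el.innerHTML = stripScripts(html); // fallback where the API is unavailable
  }
}
```

The point of the pattern: every DOM sink write is funneled through one registered, auditable function, so a missed sanitization call fails loudly instead of silently shipping XSS.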

[IMAGE: Browser developer tools open with performance timeline]


System Design Scenarios (Asked in Every Senior Loop)

System design rounds appear in virtually every senior frontend interview. The format is open-ended: you're given a product feature, 30-45 minutes, and a whiteboard. Interviewers aren't looking for a single right answer - they're evaluating your ability to identify constraints, surface tradeoffs, and drive toward a defensible architecture.


Most candidates prepare system design by memorizing component diagrams. That's backwards. Interviewers reward candidates who start by asking clarifying questions: What's the scale target? Mobile or desktop primary? Real-time or eventually consistent? The first 5 minutes of your answer often matter more than the architecture itself.

23. Design an Infinite Scroll Feed

Infinite scroll has a deceptively simple happy path. The real design challenge is the failure modes: what happens at 500 loaded items when the DOM has 500 nodes and scroll performance degrades? What happens when the user hits back after scrolling 300 items deep? Interviewers are testing whether you've designed for scale, not just for the demo.

"use client";
import { useRef, useCallback } from "react";
import { useInfiniteQuery } from "@tanstack/react-query";
import { useVirtualizer } from "@tanstack/react-virtual";

interface Post { id: string; content: string; author: string; }

export function InfiniteFeed() {
  const parentRef = useRef<HTMLDivElement>(null);

  const { data, fetchNextPage, hasNextPage, isFetchingNextPage } =
    useInfiniteQuery({
      queryKey: ["feed"],
      queryFn: ({ pageParam }) =>
        fetch(`/api/feed?cursor=${pageParam}`).then((r) => r.json()),
      getNextPageParam: (lastPage) => lastPage.nextCursor, // cursor-based pagination
      initialPageParam: null,
      // Cursor-based > offset-based for feeds:
      // offset pagination breaks when new posts are inserted - items shift
      // cursor pagination is stable regardless of insertions
    });

  const allPosts = data?.pages.flatMap((page) => page.posts) ?? [];

  // Virtualization - only renders ~10 DOM nodes regardless of how many posts loaded
  const virtualizer = useVirtualizer({
    count: hasNextPage ? allPosts.length + 1 : allPosts.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 120, // estimated row height
    overscan: 5,             // render 5 items above/below viewport for smooth scroll
  });

  // Intersection observer - fetch next page when sentinel enters viewport
  // Note: returning a cleanup function from a callback ref requires React 19;
  // on React 18, store the observer in a ref and disconnect it in a useEffect cleanup.
  const lastItemRef = useCallback((node: HTMLDivElement | null) => {
    if (!node || isFetchingNextPage || !hasNextPage) return;
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) fetchNextPage();
    }, { threshold: 0.1 });
    observer.observe(node);
    return () => observer.disconnect();
  }, [fetchNextPage, hasNextPage, isFetchingNextPage]);

  return (
    <div ref={parentRef} style={{ height: "100vh", overflowY: "auto" }}>
      <div style={{ height: virtualizer.getTotalSize(), position: "relative" }}>
        {virtualizer.getVirtualItems().map((virtualRow) => {
          const post = allPosts[virtualRow.index];
          return (
            <div
              key={virtualRow.key}
              ref={virtualRow.index === allPosts.length - 1 ? lastItemRef : null}
              style={{
                position: "absolute",
                top: virtualRow.start,
                height: virtualRow.size,
                width: "100%",
              }}
            >
              {post ? <PostCard post={post} /> : <PostSkeleton />}
            </div>
          );
        })}
      </div>
    </div>
  );
}

Interview angle: Describe scroll position restoration - React Query persists the page data, but the virtualizer loses scroll offset on back-navigation. Fix it with sessionStorage scroll position saving on beforeunload, and restore it in useLayoutEffect after the virtualizer mounts.
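The save/restore half of that fix can be sketched framework-free - storage is injected for testability (pass `sessionStorage` in the browser), and the key names are illustrative:

```typescript
// Minimal scroll-offset persistence for back-navigation restore.
export interface KVStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

export function saveScrollOffset(storage: KVStorage, feedKey: string, offset: number): void {
  storage.setItem(`scroll:${feedKey}`, String(offset));
}

export function readScrollOffset(storage: KVStorage, feedKey: string): number | null {
  const raw = storage.getItem(`scroll:${feedKey}`);
  const parsed = raw === null ? NaN : Number(raw);
  return Number.isFinite(parsed) ? parsed : null; // null = nothing to restore
}

// In the feed component (sketch):
// - save on "beforeunload" and on unmount:
//     saveScrollOffset(sessionStorage, "feed", parentRef.current!.scrollTop)
// - restore in useLayoutEffect after the virtualizer mounts:
//     const offset = readScrollOffset(sessionStorage, "feed");
//     if (offset !== null) virtualizer.scrollToOffset(offset);
```

Restoring in `useLayoutEffect` matters: it runs before paint, so the user never sees the feed flash at the top before jumping back down.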


24. Design an Autocomplete/Typeahead

Autocomplete looks simple and hides a dozen failure modes. The visible complexity: debouncing, keyboard navigation, ARIA. The invisible complexity: request cancellation to prevent stale responses from appearing out of order, result caching to avoid re-fetching "re" when the user types "react" after typing "re", and the race condition when the user types faster than responses arrive.

"use client";
import { useState, useRef, useId, useCallback } from "react";
import { useRouter } from "next/navigation"; // App Router; Pages Router uses next/router
import { useCombobox } from "downshift"; // handles ARIA combobox pattern correctly

function SearchAutocomplete() {
  const router = useRouter(); // used to navigate on selection below
  const [items, setItems] = useState<string[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const abortRef = useRef<AbortController | null>(null);
  const cacheRef = useRef<Map<string, string[]>>(new Map());
  const inputId = useId(); // stable ID for ARIA

  // Debounce + AbortController - prevents stale results and race conditions
  const search = useCallback(
    debounce(async (query: string) => {
      if (!query.trim()) { setItems([]); return; }

      // Check cache first - no network request for repeated queries
      if (cacheRef.current.has(query)) {
        setItems(cacheRef.current.get(query)!);
        return;
      }

      // Cancel previous in-flight request
      abortRef.current?.abort();
      const controller = new AbortController();
      abortRef.current = controller;

      setIsLoading(true);
      try {
        const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`, {
          signal: controller.signal,
        });
        const data = await res.json();
        cacheRef.current.set(query, data.results); // cache the result
        setItems(data.results);
      } catch (err) {
        if ((err as Error).name !== "AbortError") {
          console.error(err); // don't log abort errors - they're intentional
        }
      } finally {
        // Only the latest request may clear the spinner - an aborted request's
        // finally block would otherwise hide its successor's loading state
        if (abortRef.current === controller) setIsLoading(false);
      }
    }, 250),
    []
  );

  const {
    isOpen, getMenuProps, getInputProps, getItemProps, highlightedIndex,
  } = useCombobox({
    // Combobox = input + listbox - correct ARIA pattern for accessible autocomplete
    items,
    onInputValueChange: ({ inputValue }) => search(inputValue ?? ""),
    onSelectedItemChange: ({ selectedItem }) => {
      router.push(`/search?q=${selectedItem}`);
    },
  });

  return (
    <div>
      <label htmlFor={inputId}>Search</label>
      <input
        id={inputId}
        {...getInputProps()}
        placeholder="Search topics..."
        aria-busy={isLoading}
      />
      <ul {...getMenuProps()} role="listbox">
        {isOpen && items.map((item, index) => (
          <li
            key={item}
            {...getItemProps({ item, index })}
            style={{ background: highlightedIndex === index ? "#eee" : "white" }}
          >
            {item}
          </li>
        ))}
      </ul>
    </div>
  );
}

// Reusable debounce with correct TypeScript typing - generic over the argument
// tuple, so `(query: string) => void` is accepted without unsound casts
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delay: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout>;
  return (...args: Args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

Interview angle: Explain why downshift's useCombobox beats a hand-rolled solution - the ARIA combobox pattern requires aria-expanded, aria-autocomplete, aria-activedescendant, role="combobox" on the input, role="listbox" on the list, and role="option" on items, all updating correctly on keyboard events. Getting it right from scratch takes longer than the interview allows.


25. Design a Real-Time Dashboard

Real-time data delivery has three mechanisms: WebSocket (bidirectional, expensive), SSE (server-push only, lighter), and polling (simple, wasteful). The right choice depends on the data direction and the server's infrastructure. Interviewers want to see you reason about this tradeoff, not just pick WebSocket by default.

// SSE - server-sent events for one-directional pushes
// Simpler than WebSocket when the client doesn't need to send data
// app/api/metrics/stream/route.ts
export async function GET(req: Request) {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      const interval = setInterval(async () => {
        const metrics = await getLatestMetrics();

        // SSE format: "data: ...\n\n"
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify(metrics)}\n\n`)
        );
      }, 1000);

      // Clean up when the client disconnects - stop the timer and end the stream
      req.signal.addEventListener("abort", () => {
        clearInterval(interval);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive",
    },
  });
}

// Client - EventSource with auto-reconnect (built into the spec)
import { useState, useEffect } from "react";

function useMetricsStream() {
  const [metrics, setMetrics] = useState<Metrics | null>(null); // Metrics: your API's payload type

  useEffect(() => {
    const es = new EventSource("/api/metrics/stream");

    es.onmessage = (event) => {
      setMetrics(JSON.parse(event.data));
    };

    es.onerror = () => {
      // EventSource auto-reconnects - no manual backoff needed
      console.warn("SSE connection lost, reconnecting...");
    };

    return () => es.close(); // close on unmount
  }, []);

  return metrics;
}
// Granular Zustand subscriptions - prevents dashboard-wide re-renders
// When CPU metric updates, only the CPU chart re-renders - not the whole dashboard
import { create } from "zustand";

interface DashboardStore {
  cpu: number;
  memory: number;
  requestRate: number;
  updateMetric: (key: keyof Omit<DashboardStore, "updateMetric">, value: number) => void;
}

const useDashboard = create<DashboardStore>((set) => ({
  cpu: 0, memory: 0, requestRate: 0,
  updateMetric: (key, value) => set({ [key]: value }),
}));

// Each chart subscribes to only its metric - no unnecessary re-renders
function CPUChart() {
  const cpu = useDashboard((state) => state.cpu); // only re-renders when cpu changes
  return <LineChart data={cpu} label="CPU %" />;
}

Interview angle: Describe the backgrounded tab problem - browsers throttle setInterval and WebSocket reconnects in backgrounded tabs to once per second or less. A dashboard that must stay current needs a visibilitychange listener that re-syncs state when the tab becomes active again.
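That visibilitychange re-sync can be sketched as a small helper - the document-like dependency is injected so the logic stays framework-free and testable; in the browser you'd pass `document`, and `resync` would be your own refetch function (both names are assumptions):

```typescript
// Re-fetch a fresh snapshot the moment the tab is foregrounded, instead of
// trusting a stream that the browser throttled while backgrounded.
export interface VisibilityDoc {
  visibilityState: "visible" | "hidden";
  addEventListener(type: "visibilitychange", cb: () => void): void;
  removeEventListener(type: "visibilitychange", cb: () => void): void;
}

export function onTabVisible(doc: VisibilityDoc, resync: () => void): () => void {
  const handler = () => {
    if (doc.visibilityState === "visible") resync(); // tab just came back
  };
  doc.addEventListener("visibilitychange", handler);
  return () => doc.removeEventListener("visibilitychange", handler); // for useEffect cleanup
}

// In the dashboard (sketch):
// useEffect(() => onTabVisible(document, refetchMetrics), []);
```

Returning the unsubscribe function makes the helper drop straight into a `useEffect` cleanup slot.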


26. Design a WYSIWYG/Rich Text Editor

Rich text editors are the hardest frontend system design prompt because they require you to design a document model, not just a component tree. contenteditable is a trap - it gives you free editing behavior but hands control of the DOM to the browser, making consistent state management impossible. ProseMirror and Tiptap use an immutable document model with explicit transactions, which is why they can support collaborative editing and undo history reliably.

// Tiptap (ProseMirror-based) - document model with plugin system
import { useEditor, EditorContent } from "@tiptap/react";
import StarterKit from "@tiptap/starter-kit";
import Collaboration from "@tiptap/extension-collaboration";
import CollaborationCursor from "@tiptap/extension-collaboration-cursor";
import * as Y from "yjs";
import { WebsocketProvider } from "y-websocket";

// Yjs CRDT - conflict-free real-time collaboration
// Two users typing simultaneously always converges to a consistent document
const ydoc = new Y.Doc();
const wsProvider = new WebsocketProvider(
  "wss://collab.myapp.com",
  "document-room-123",
  ydoc
);

function CollaborativeEditor({ documentId }: { documentId: string }) {
  const editor = useEditor({
    extensions: [
      StarterKit.configure({ history: false }), // disable local history - Yjs handles it
      Collaboration.configure({ document: ydoc }),
      CollaborationCursor.configure({
        provider: wsProvider,
        user: { name: "Abhijeet", color: "#3b82f6" },
      }),
    ],

    // Content is a ProseMirror document - not raw HTML
    // Schema-constrained: you define which nodes/marks are allowed
    content: {
      type: "doc",
      content: [{ type: "paragraph", content: [{ type: "text", text: "Hello" }] }],
    },
  });

  return (
    <div>
      <Toolbar editor={editor} />
      <EditorContent editor={editor} />
    </div>
  );
}

// Paste normalization - strip garbage from Word/Google Docs pastes
import { Extension } from "@tiptap/core";
import { Plugin } from "prosemirror-state";

const PasteNormalizer = Extension.create({
  addProseMirrorPlugins() {
    return [
      new Plugin({
        props: {
          transformPastedHTML(html) {
            // Strip Word/Google Docs metadata tags, keep only semantic HTML
            return html
              .replace(/<o:p>.*?<\/o:p>/gi, "")     // Word namespace tags
              .replace(/style="[^"]*"/gi, "")         // inline styles
              .replace(/class="[^"]*"/gi, "");        // class attributes
          },
        },
      }),
    ];
  },
});

Interview angle: Explain CRDT vs OT for collaborative editing - Operational Transform requires a central server to serialize concurrent operations and is complex to implement correctly. Yjs (a CRDT implementation) is commutative and associative: the order of operation application doesn't matter, so it works peer-to-peer without a central coordinator.
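The convergence property is easiest to see on a toy CRDT. A grow-only counter isn't how Yjs represents text - its structures are far richer - but the algebra is the same: merge is commutative, associative, and idempotent, so replicas converge no matter the arrival order:

```typescript
// Toy G-Counter CRDT: each replica increments only its own slot; merge takes
// the per-replica max. Order of merges never changes the result.
export type GCounter = Record<string, number>;

export function increment(counter: GCounter, replicaId: string): GCounter {
  return { ...counter, [replicaId]: (counter[replicaId] ?? 0) + 1 };
}

export function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [id, n] of Object.entries(b)) {
    out[id] = Math.max(out[id] ?? 0, n); // per-replica max: idempotent, order-free
  }
  return out;
}

export function value(counter: GCounter): number {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}
```

Because `merge(a, b)` equals `merge(b, a)` and re-merging the same state is a no-op, no central server needs to serialize operations - the property OT has to engineer around.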


27. Design a Design System

A design system is a product within a product. It has consumers (other engineers), versioning contracts, documentation, and a migration story for breaking changes. The teams that treat it as "just a component library" end up with a repo no one trusts and everyone forks. The interview point is governance, not implementation.

// Polymorphic component - the "as" prop pattern
// Renders as any HTML element or component while keeping TypeScript happy
// (cn and buttonVariants come from your class-variance-authority setup)
import { ComponentPropsWithRef, ElementType } from "react";

type ButtonProps<T extends ElementType = "button"> = {
  as?: T;
  variant?: "primary" | "secondary" | "ghost";
  size?: "sm" | "md" | "lg";
} & Omit<ComponentPropsWithRef<T>, "as">;

function Button<T extends ElementType = "button">({
  as,
  variant = "primary",
  size = "md",
  children,
  ...props
}: ButtonProps<T>) {
  const Component = as ?? "button";
  return (
    <Component
      className={cn(buttonVariants({ variant, size }))}
      {...props}
    >
      {children}
    </Component>
  );
}

// Usage - TypeScript enforces the correct props for each element type
<Button>Click me</Button>                  // renders <button>
<Button as="a" href="/about">Link</Button> // renders <a>, enforces href type
<Button as={Link} to="/about">Link</Button> // renders React Router Link
// Codemod for breaking changes - consumers run this, not manual find/replace
// transforms/rename-button-variant.ts
import { API, FileInfo } from "jscodeshift";

// Rename variant prop: "contained" → "primary" (v2 → v3 migration)
export default function transform(fileInfo: FileInfo, api: API) {
  const j = api.jscodeshift;
  const root = j(fileInfo.source);

  root
    .find(j.JSXOpeningElement, { name: { name: "Button" } })
    .find(j.JSXAttribute, { name: { name: "variant" } })
    .find(j.StringLiteral, { value: "contained" })
    .replaceWith(() => j.stringLiteral("primary"));

  return root.toSource();
}

// Run: npx jscodeshift -t transforms/rename-button-variant.ts src/
// Migrates 10,000 Button usages across a monorepo in seconds

Interview angle: Explain semver discipline for a design system - patch releases fix bugs without API changes, minor releases add new components or optional props, major releases contain breaking prop renames or removed components. The codemod strategy means major releases don't require consumers to do manual search-and-replace across their entire codebase.

Citation Capsule - Tier 6

Senior frontend interviews consistently include at least one system design scenario, according to engineering hiring managers at product companies. Candidates who ask clarifying questions about scale, device targets, and consistency requirements in the first five minutes score significantly higher than those who jump immediately to implementation details.


Observability & Production

Production observability is the seniority signal interviewers most underrate as a topic - and most underestimate in candidates. Anyone can write a component. Fewer engineers can tell you *exactly* what broke at 11pm last Tuesday, why it broke, and how they'd prevent it from breaking again.


The real test here isn't "do you know Sentry?" It's "can you describe your mental model for debugging a production issue you've never seen before?" Senior engineers think in layers: network first, then server response, then hydration, then runtime JS, then paint. That layered diagnostic process is what interviewers want to hear.

28. Frontend Observability

Production observability isn't a dashboard - it's a feedback loop. Error tracking, performance monitoring, and session replay are the three layers, and they answer different questions: "what broke?" (errors), "how slow is it?" (RUM), "what was the user doing?" (replay). Senior engineers instrument all three proactively, not after the first incident.

// Sentry - production error tracking with React integration
// app/layout.tsx (or _app.tsx)
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1,    // sample 10% of transactions for performance traces
  profilesSampleRate: 0.1,  // sample 10% for profiling (requires tracesSampleRate)
  replaysSessionSampleRate: 0.01,   // 1% of sessions get full replay
  replaysOnErrorSampleRate: 1.0,    // 100% of error sessions get replay - critical
  integrations: [
    Sentry.replayIntegration({
      maskAllText: true,    // GDPR: mask user input in replays
      blockAllMedia: false,
    }),
  ],

  beforeSend(event) {
    // Filter out known non-actionable errors before they hit your quota
    if (event.exception?.values?.[0]?.value?.includes("ResizeObserver loop")) {
      return null; // drop it
    }
    return event;
  },
});
// Custom performance marks - track critical user journeys, not just page load
// Measures the time from "user clicks search" to "results are visible"
performance.mark("search:start");

const results = await searchAPI(query);
renderResults(results);
// Wait until after the next paint - a single rAF callback fires *before*
// paint, so chain two to land on the other side of it
await new Promise((resolve) =>
  requestAnimationFrame(() => requestAnimationFrame(resolve))
);

performance.mark("search:end");
performance.measure("search:ttresults", "search:start", "search:end");

const measure = performance.getEntriesByName("search:ttresults")[0];
analytics.track("search_performance", {
  duration: measure.duration,
  query_length: query.length,
  result_count: results.length,
});
// Three error capture layers - they catch different things
// 1. window.onerror - catches uncaught synchronous errors
window.onerror = (message, source, line, col, error) => {
  Sentry.captureException(error);
};

// 2. unhandledrejection - catches uncaught promise rejections
window.addEventListener("unhandledrejection", (event) => {
  Sentry.captureException(event.reason);
  event.preventDefault(); // prevent console error in some browsers
});

// 3. React Error Boundary - catches render-phase errors in component tree
// (class components only, or react-error-boundary library)
// These are SEPARATE layers - you need all three for full coverage

Interview angle: Describe the tracesSampleRate cost model - at 100%, every user request generates a performance trace, which can cost thousands per month on high-traffic apps. 10% gives statistically significant data at 10% of the cost. The skill is knowing that it's a sampling rate, not a quality dial.
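A back-of-envelope for that cost model, with assumed traffic numbers purely for illustration - substitute your own volume and your vendor's per-trace pricing:

```typescript
// Monthly trace volume under sampling (30-day month assumed).
export function monthlyTraces(requestsPerDay: number, sampleRate: number): number {
  return Math.round(requestsPerDay * 30 * sampleRate);
}

// Assuming 5M requests/day:
//   100% sampling → 150M traces/month
//   10%  sampling → 15M traces/month - same statistical signal on aggregate
//   percentiles, one tenth of the ingestion bill.
```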


29. A/B Testing Architecture

Client-side A/B testing has a fundamental flaw: the page renders, then the variant loads, and the user sees a flash of the wrong content. Edge-side experimentation fixes this by determining the variant *before* the HTML response - no flicker, but it requires your routing layer to run at the edge (Vercel Edge Middleware, Cloudflare Workers).

// Edge Middleware A/B testing - variant assigned before HTML is sent
// middleware.ts
import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  // Get or assign variant - persisted in cookie for session consistency
  let variant = request.cookies.get("ab_checkout_variant")?.value;
  const isNewAssignment = !variant;

  if (!variant) {
    // Assign variant deterministically or randomly
    variant = Math.random() < 0.5 ? "control" : "treatment";
  }

  // Rewrite URL to variant-specific page - no redirect, no flicker
  const response =
    request.nextUrl.pathname === "/checkout" && variant === "treatment"
      ? NextResponse.rewrite(new URL("/checkout-v2", request.url))
      : NextResponse.next();

  // Set the cookie on whichever response is returned - setting it on a response
  // you then discard would re-randomize a fresh "treatment" user on every request
  if (isNewAssignment) {
    response.cookies.set("ab_checkout_variant", variant, {
      maxAge: 60 * 60 * 24 * 7, // 7 days - persist across sessions
      httpOnly: true,
      sameSite: "strict",
    });
  }

  return response;
}

export const config = {
  matcher: ["/checkout"],
};
// Feature flag client - experiment isolation and SSR consistency
// Using LaunchDarkly, but the pattern applies to any flag system
// (getLDClient: your own memoized wrapper that calls the SDK's init() once
// and waits for it to be ready)
import type { LDClient } from "launchdarkly-node-server-sdk";

// Server-side flag evaluation - consistent between server render and client hydration
export async function getCheckoutVariant(userId: string): Promise<"control" | "treatment"> {
  const client = await getLDClient();
  const variation = await client.variation(
    "checkout-redesign",   // flag key
    { key: userId },       // user context - ensures consistent assignment
    "control"              // default if flag service is unavailable
  );
  return variation as "control" | "treatment";
}

// Pass variant as a prop to avoid hydration mismatch
// Server determines variant → passes to component → client hydrates with same value
export default async function CheckoutPage() {
  const userId = await getCurrentUserId();
  const variant = await getCheckoutVariant(userId);

  // Track experiment exposure - required for valid statistical analysis
  await analytics.track("experiment_exposed", {
    experiment: "checkout-redesign",
    variant,
    userId,
  });

  return variant === "treatment" ? <CheckoutV2 /> : <CheckoutV1 />;
}

Interview angle: Explain experiment contamination - if two experiments both modify the checkout page simultaneously and share the same user cohort, one experiment's effect bleeds into the other's results. The fix is mutex groups: experiments in the same group are mutually exclusive, so a user can only be in one at a time.
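Mutex-group assignment can be sketched with a deterministic hash: each user gets one stable bucket per group, and experiments in that group claim disjoint slices of the bucket space, so a user lands in at most one. All names (FNV-1a as the hash, the `checkout` group) are illustrative assumptions, not any vendor's implementation:

```typescript
export interface Experiment {
  name: string;
  mutexGroup: string;
  trafficShare: number; // fraction of the group's users, e.g. 0.5
}

// Small deterministic string hash (FNV-1a), mapped to [0, 1)
function bucket(userId: string, mutexGroup: string): number {
  let h = 2166136261;
  const input = `${mutexGroup}:${userId}`;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) / 4294967296;
}

// All experiments passed in must share one mutex group.
export function assignExperiment(userId: string, experiments: Experiment[]): string | null {
  const group = experiments[0]?.mutexGroup;
  const b = group !== undefined ? bucket(userId, group) : 0;
  let cumulative = 0;
  for (const exp of experiments) {
    cumulative += exp.trafficShare;
    if (b < cumulative) return exp.name; // disjoint ranges → at most one match
  }
  return null; // holdout: user is in no experiment in this group
}
```

Because the bucket is keyed on the mutex group rather than the experiment, adding a second experiment to the group re-partitions the same bucket space instead of sampling users independently - which is exactly what prevents contamination.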


30. Incremental Migration Strategies

Full rewrites fail because they require a feature freeze, accumulate bugs, and take 3x longer than estimated. Incremental migration - the Strangler Fig pattern - ships value continuously while the legacy system is replaced route by route. The key is making old and new systems coexist without either blocking the other.

// Next.js rewrites - proxy legacy app routes during migration
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async rewrites() {
    return {
      // Routes handled by the new Next.js app - served normally
      beforeFiles: [
        { source: "/new-dashboard", destination: "/new-dashboard" },
      ],

      // Everything else - proxied to the legacy app (still running).
      // Users see one domain, but old routes hit the old server.
      // `fallback` rewrites apply only after pages, static files, AND dynamic
      // routes all fail to match - so new routes win automatically as they ship.
      fallback: [
        {
          source: "/:path*",
          destination: "https://legacy.myapp.com/:path*",
          // Migration: flip routes from legacy to new one at a time
          // No big-bang cutover - each route is an independent ship
        },
      ],
    };
  },
};

export default nextConfig;
// TypeScript migration - @ts-nocheck for files not yet migrated
// Allows ongoing feature work while migration proceeds in parallel

// Step 1: Add @ts-nocheck to legacy JS files converted to .ts
// @ts-nocheck
export function legacyHelper(data) {  // no types yet - won't block feature work
  return data.map(item => item.value);
}

// Step 2: Migrate files incrementally - remove @ts-nocheck one file at a time
// @ts-expect-error - for specific lines where types are temporarily wrong
const result = legacyHelper(data);
// @ts-expect-error - TODO: type this after auth module migration
const user = getUser();

// Step 3: tsconfig.json - strict mode enabled per-path as migration progresses
// {
//   "compilerOptions": {
//     "strict": true
//   },
//   "include": ["src/new/**/*"],    // strict on new code
//   "exclude": ["src/legacy/**/*"] // lenient on legacy code
// }
// Parallel run - shadow mode comparison for data-critical migrations
// Run old and new implementations simultaneously, compare outputs
async function getProductPrice(productId: string): Promise<number> {
  // Always use the old implementation for actual response
  const oldPrice = await legacyPricingEngine.calculate(productId);

  // Run new implementation in the background - compare but don't use
  newPricingEngine.calculate(productId)
    .then((newPrice) => {
      if (Math.abs(oldPrice - newPrice) > 0.01) {
        // Log discrepancy - investigate before cutover
        logger.warn("pricing_mismatch", { productId, oldPrice, newPrice });
        Sentry.captureMessage("Pricing engine mismatch", {
          extra: { productId, oldPrice, newPrice },
        });
      }
    })
    .catch((err) => Sentry.captureException(err)); // new impl errors don't affect users

  return oldPrice; // always serve the trusted value during parallel run
}

Interview angle: Describe the parallel run pattern as the critical safety net before cutover - it proves the new implementation is functionally equivalent in production conditions, with real data, real edge cases, and real load. No amount of unit tests replicates that fidelity.


How to Use This List: A 4-Week Revision Plan

Don't try to cover all 30 topics in a week. Work through this plan systematically - one tier per week for the first four weeks, with system design practice woven throughout.

Week | Focus | Topics | Practice Format
Week 1 | Rendering & Hydration + Performance | Topics 1-9 | Build a Next.js App Router demo covering RSC, streaming, and CWV fixes
Week 2 | Architecture + DX & Tooling | Topics 10-18 | Whiteboard a micro-frontend architecture; set up a Turborepo monorepo
Week 3 | Browser Internals + System Design | Topics 19-27 | Read browser rendering pipeline docs; mock 2 system design prompts per day
Week 4 | Observability + Full Mock Interviews | Topics 28-30 + review | Do 3 full mock interviews: 1 coding, 1 system design, 1 behavioral

One rule for the whole plan: after every topic, write a one-paragraph explanation as if you're explaining it to an interviewer. If you can't do that clearly, you don't know it yet.

React interview questions → /blog/react-interview-questions-2026

JavaScript interview questions → /blog/javascript-interview-questions-2026


Frequently Asked Questions

What topics are most asked in senior frontend interviews in 2026?

Rendering architecture, Core Web Vitals diagnosis, and frontend system design appear in nearly every senior loop. React Server Components, state management patterns, and micro-frontend architecture round out the top asks. React is used by 83.6% of JavaScript developers (State of JavaScript 2025), so React-specific depth is non-negotiable at senior level.

How long does it take to prepare for a senior frontend interview?

Four to eight weeks of focused preparation is typical for mid-level developers targeting senior roles. Web developer employment is projected to grow 23% from 2021 to 2031 (BLS), so the investment compounds. The 4-week plan in this guide covers the full breadth; add another 2-4 weeks if system design is new to you.

Is React Server Components knowledge required for senior roles?

At companies using Next.js App Router or React 19+, yes. Only 29% of developers have used RSC in production (State of React 2025), which means demonstrating real RSC experience is a competitive differentiator. Even if the role doesn't use RSC today, the architectural thinking it requires is directly applicable to any senior rendering discussion.

Do senior frontend interviews include system design?

Yes, consistently. Most senior loops at product companies include at least one 45-minute system design round. The 5 system design prompts in Tier 6 of this guide (infinite scroll, autocomplete, real-time dashboard, rich text editor, design system) cover the most commonly asked scenarios. Practice talking through constraints and tradeoffs before drawing any components.

What salary can I expect as a senior frontend developer in 2026?

The US average for senior frontend developers is $147,778/year (Glassdoor, 2026). Senior frontend job postings are growing approximately 15% annually (Motion Recruitment, 2026). Total compensation at FAANG and top-tier product companies is significantly higher, often in the $200K-$350K range when you include equity and bonus.


Conclusion

Thirty topics. Seven tiers. One goal: walk into a senior loop and own it.

The candidates who land these roles aren't necessarily the most experienced - they're the most *prepared*. They've thought through RSC boundaries, they've profiled a real LCP regression, and they have an opinion about Module Federation vs iframe isolation. That kind of depth doesn't come from reading one article. It comes from deliberate practice against each tier.

Start with Tier 1. Build a small App Router project that forces you to make real decisions about rendering strategy. Then move to performance. Then architecture. By the time you reach system design in Tier 6, the earlier tiers will be the vocabulary you use to answer those open-ended prompts.

The bar is high in 2026. But it's a known bar. These 30 topics are the map.

modern React stack guide → /blog/modern-react-stack-2026
