React SME Cookbook

React & Next.js Architecture Decisions

Common scenarios where React and Next.js developers must choose between patterns, hooks, or architectural approaches. Each decision presents the best, second-best, and third-best choice -- plus the wrong choice developers commonly make.


Decision 1: Fetching data for a page that displays a list of products

Scenario: You need to load product data on a Next.js page.

Rank | Choice | Approach
Best | Async Server Component | async function ProductsPage() { const products = await db.products.findMany(); return <ProductList products={products} />; }
2nd | Route handler + server fetch | Create a GET route handler and fetch from a Server Component. Works but adds an unnecessary network hop.
3rd | getServerSideProps (Pages Router) | Still works if you are on the Pages Router, but you lose streaming and RSC benefits.

Wrong choice: Using useEffect + fetch in a Client Component. This adds a client-server waterfall, exposes your API, and hurts SEO since content isn't in the initial HTML.

Why best is best: Async Server Components fetch data with zero client JS, stream HTML progressively, and access the database directly without an API layer.


Decision 2: Managing form submission with server-side validation

Scenario: A user submits a contact form that needs server validation and error display.

Rank | Choice | Approach
Best | Server Action + useActionState | Bind a Server Action to <form action={...}> and use useActionState for pending/error state.
2nd | Server Action + manual state | Call the action via startTransition and manage your own state with useState. More boilerplate but full control.
3rd | API route + client fetch | POST to a route handler from an onSubmit handler. Works but bypasses progressive enhancement.

Wrong choice: Client-only validation with no server validation. Never trust the client -- attackers bypass your UI.

Why best is best: useActionState gives you pending, error, and data states automatically, works without JS (progressive enhancement), and keeps validation logic on the server.
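A minimal sketch of the Server Action side of this pattern (the component would bind it with <form action={submitContact}> and read pending/error state from useActionState). The field names, error shape, and database call are illustrative, not a fixed API:

```typescript
// Hypothetical contact-form Server Action; in a real file, "use server"
// sits at the top. Validation is a pure function so it can be tested
// independently of React.

export interface ContactState {
  errors: Record<string, string>;
  success: boolean;
}

export function validateContact(data: { email: string; message: string }): ContactState {
  const errors: Record<string, string> = {};
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(data.email)) {
    errors.email = "Enter a valid email address";
  }
  if (data.message.trim().length < 10) {
    errors.message = "Message must be at least 10 characters";
  }
  return { errors, success: Object.keys(errors).length === 0 };
}

// The signature useActionState expects: (prevState, formData) => newState
export async function submitContact(
  _prev: ContactState,
  formData: FormData
): Promise<ContactState> {
  const state = validateContact({
    email: String(formData.get("email") ?? ""),
    message: String(formData.get("message") ?? ""),
  });
  if (!state.success) return state; // errors render next to the fields
  // await db.messages.create(...)  // persist only after validation passes
  return state;
}
```

Because validation runs on the server, it holds even when the form is submitted without JavaScript.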


Decision 3: Sharing UI layout across multiple routes

Scenario: Your dashboard has a sidebar and header that should persist across /dashboard/analytics, /dashboard/settings, etc.

Rank | Choice | Approach
Best | Nested layout.tsx | Place a layout.tsx in app/dashboard/ with shared UI. Child routes render inside {children}.
2nd | Template file | Use template.tsx instead. Same nesting, but re-mounts on navigation (useful for animations).
3rd | Wrapper component | Import a <DashboardShell> wrapper in each page. Works but duplicates the import and breaks automatic layout preservation.

Wrong choice: Putting layout logic in _app.tsx (Pages Router) or a context provider that re-renders the entire tree. This defeats the purpose of nested layouts.

Why best is best: layout.tsx is preserved across navigations -- the sidebar doesn't re-render when you switch tabs, preserving scroll state and avoiding unnecessary work.


Decision 4: Showing a loading state while a page loads

Scenario: Your dashboard page fetches heavy analytics data and you need a loading skeleton.

Rank | Choice | Approach
Best | loading.tsx | Add a loading.tsx file next to page.tsx. Next.js auto-wraps the page in a <Suspense> boundary with your loading UI.
2nd | Manual <Suspense> | Wrap specific async components in <Suspense fallback={<Skeleton />}> for granular control.
3rd | Client-side loading state | Use useState(true) and set it to false after the useEffect fetch completes.

Wrong choice: Showing nothing (blank screen) while data loads. Users think the app is broken after ~300ms of no feedback.

Why best is best: loading.tsx is automatic, requires zero client JS, and enables instant navigation via React's streaming architecture.


Decision 5: Choosing between useState and useReducer

Scenario: A component manages a multi-step form with 8 fields, validation state, and step navigation.

Rank | Choice | Approach
Best | useReducer | Define actions like SET_FIELD, NEXT_STEP, VALIDATE. State transitions are explicit and testable.
2nd | useState with an object | const [form, setForm] = useState({...}). Simpler, but spread-based updates get messy with complex logic.
3rd | Multiple useState calls | One per field. Fine for simple forms, but 8+ useState calls are hard to coordinate.

Wrong choice: Reaching for a global state library (Redux, Zustand) for form state that only lives in one component. Massive overkill.

Why best is best: useReducer centralizes complex state transitions into a pure function you can unit test independently of React.
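A sketch of what that reducer might look like (the field names, steps, and action set are illustrative). Because the reducer is a pure function, each transition can be asserted without rendering anything:

```typescript
// Multi-step form reducer: given a state and an action, return the next
// state. In the component: const [state, dispatch] = useReducer(formReducer, initialState);

type FormState = {
  step: number;
  fields: Record<string, string>;
  errors: Record<string, string>;
};

type FormAction =
  | { type: "SET_FIELD"; name: string; value: string }
  | { type: "NEXT_STEP" }
  | { type: "PREV_STEP" }
  | { type: "SET_ERRORS"; errors: Record<string, string> };

const initialState: FormState = { step: 0, fields: {}, errors: {} };

function formReducer(state: FormState, action: FormAction): FormState {
  switch (action.type) {
    case "SET_FIELD":
      return {
        ...state,
        fields: { ...state.fields, [action.name]: action.value },
        // clear a stale error as soon as the user edits the field
        errors: { ...state.errors, [action.name]: "" },
      };
    case "NEXT_STEP":
      return { ...state, step: state.step + 1 };
    case "PREV_STEP":
      return { ...state, step: Math.max(0, state.step - 1) };
    case "SET_ERRORS":
      return { ...state, errors: action.errors };
  }
}
```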


Decision 6: Passing data deeply through the component tree

Scenario: Theme, locale, and user preferences need to be accessible 5+ levels deep.

Rank | Choice | Approach
Best | React Context | Create a context provider near the top. Consumers read via useContext or React 19's use(ThemeContext).
2nd | Component composition | Pass the data as children or render props so intermediate components don't need the prop.
3rd | Prop drilling | Pass props through every level. Tedious but explicit and easy to trace.

Wrong choice: Installing a state management library just to avoid prop drilling. Context is built-in and sufficient for read-heavy, rarely-changing data like themes.

Why best is best: Context is zero-dependency, built into React, and optimized for data that changes infrequently but is read widely.


Decision 7: Optimistic UI for a like button

Scenario: User clicks "like" and you want the UI to update instantly before the server confirms.

Rank | Choice | Approach
Best | useOptimistic | const [optimisticLikes, addOptimistic] = useOptimistic(likes, (state, newLike) => [...state, newLike]);
2nd | Manual optimistic with useState | Set state immediately, revert on error in a try/catch. More code, same idea.
3rd | React Query's onMutate | Optimistic update via mutation callbacks. Good, but pulls in a library for something React 19 does natively.

Wrong choice: Waiting for the server response before updating the UI. The 200-500ms delay makes the app feel sluggish.

Why best is best: useOptimistic automatically reverts to the real state when the Server Action completes or fails -- no manual rollback logic needed.
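The second argument to useOptimistic is a pure updater that merges a pending, unconfirmed value into the last known server state. A sketch (the Like shape is illustrative), which also guards against double-clicks:

```typescript
// In a component:
//   const [optimisticLikes, addOptimistic] = useOptimistic(serverLikes, applyOptimisticLike);
// addOptimistic(newLike) shows the like immediately; when the Server Action
// settles, React re-renders from the real server state.

type Like = { userId: string; postId: string };

function applyOptimisticLike(state: Like[], newLike: Like): Like[] {
  // ignore duplicates so a double-click doesn't inflate the count
  const exists = state.some(
    (l) => l.userId === newLike.userId && l.postId === newLike.postId
  );
  return exists ? state : [...state, newLike];
}
```

The updater must not mutate its input: React may re-apply it against fresh server state.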


Decision 8: Error handling for a specific route segment

Scenario: The /dashboard/billing page can fail (payment API errors) and needs a graceful fallback.

Rank | Choice | Approach
Best | error.tsx | Add error.tsx next to the page. Catches rendering and data errors, offers a retry button via reset().
2nd | Error boundary component | Wrap in a custom <ErrorBoundary> for more control over error types and reporting.
3rd | Try/catch in Server Component | Catch errors in the async function and render fallback UI inline. No automatic retry.

Wrong choice: Letting errors bubble to a global error page that replaces the entire layout. The user loses all context and navigation state.

Why best is best: error.tsx is scoped to the route segment -- the rest of the layout stays interactive, and reset() re-attempts rendering without a full page reload.


Decision 9: When to add "use client"

Scenario: A component displays a product card with a name, price, image, and an "Add to Cart" button.

Rank | Choice | Approach
Best | Split: Server card + Client button | Keep ProductCard as a Server Component. Extract <AddToCartButton> as a Client Component.
2nd | Entire card as a Client Component | Add "use client" to the card. Simple, but ships more JS than needed.
3rd | Server Component with client-side hydration wrapper | Wrap the interactive part with a generic hydration boundary. Over-engineered for a button.

Wrong choice: Adding "use client" to every component "just to be safe." This defeats RSC benefits and ships unnecessary JavaScript.

Why best is best: Push the "use client" boundary as low as possible. Only the interactive button needs client JS -- the card, image, and text render with zero JS.


Decision 10: Caching and revalidation strategy for a blog

Scenario: Blog posts update infrequently. You want fast loads but fresh content within minutes.

Rank | Choice | Approach
Best | ISR with revalidate | export const revalidate = 60; in your page or layout. Serves cached HTML, revalidates in the background.
2nd | On-demand revalidation | Call revalidatePath('/blog/[slug]') or revalidateTag('posts') from a CMS webhook. Instant freshness.
3rd | Full static (generateStaticParams) | Pre-render all posts at build time. Fast, but stale until the next deploy.

Wrong choice: Making the blog page fully dynamic (export const dynamic = 'force-dynamic'). No caching means every request hits the database -- slow and wasteful for content that barely changes.

Why best is best: Time-based ISR is zero-config, serves stale content while revalidating, and balances freshness with performance automatically.
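A sketch showing the best and 2nd-best choices side by side: the one-line time-based export, plus the shape of an on-demand webhook route. The secret header name and tag are illustrative; the secret check is pulled into a pure helper so it can be tested without Next.js:

```typescript
// app/blog/[slug]/page.tsx -- serve cached HTML, refresh at most every 60s
export const revalidate = 60;

// Pure helper for the webhook route: never revalidate on an unauthenticated call.
export function isAuthorizedWebhook(
  providedSecret: string | null,
  expectedSecret: string
): boolean {
  return providedSecret !== null && providedSecret === expectedSecret;
}

// app/api/revalidate/route.ts (sketch, shown as comments so this file
// stays framework-free):
// import { revalidateTag } from "next/cache";
// export async function POST(req: Request) {
//   const secret = req.headers.get("x-webhook-secret"); // hypothetical header
//   if (!isAuthorizedWebhook(secret, process.env.REVALIDATE_SECRET!)) {
//     return new Response("Unauthorized", { status: 401 });
//   }
//   revalidateTag("posts");
//   return Response.json({ revalidated: true });
// }
```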


Decision 11: Protecting a route from unauthenticated users

Scenario: /dashboard should only be accessible to logged-in users.

Rank | Choice | Approach
Best | Middleware | Check the session in middleware.ts and redirect before the page even renders. Zero layout flash.
2nd | Server Component check | Verify auth in the page's Server Component and call redirect('/login'). Works, but the layout may flash.
3rd | Client-side guard | Check auth in useEffect and redirect. Shows protected content briefly before redirecting.

Wrong choice: Only hiding the navigation link to /dashboard. Security through obscurity -- anyone with the URL can access the page.

Why best is best: Middleware runs at the edge before any rendering. The user never sees a flash of protected content, and the check is centralized for all protected routes.
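A sketch of the centralized check. The protected prefixes and cookie name are illustrative, and a real app would verify the session token's validity, not just its presence; the path matching is a pure function so the routing logic is testable:

```typescript
// Which paths require a session? Exact match or a nested segment --
// careful not to over-match (e.g. "/dashboards" must stay public).
const PROTECTED_PREFIXES = ["/dashboard", "/settings"];

export function isProtectedPath(pathname: string): boolean {
  return PROTECTED_PREFIXES.some(
    (p) => pathname === p || pathname.startsWith(p + "/")
  );
}

// middleware.ts would use it like this (sketch):
// import { NextResponse, type NextRequest } from "next/server";
// export function middleware(req: NextRequest) {
//   if (isProtectedPath(req.nextUrl.pathname) && !req.cookies.get("session")) {
//     return NextResponse.redirect(new URL("/login", req.url));
//   }
//   return NextResponse.next();
// }
// export const config = { matcher: ["/dashboard/:path*", "/settings/:path*"] };
```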


Decision 12: Storing global client state (shopping cart, UI toggles)

Scenario: A cart needs to persist across page navigations and be accessible from the header and product pages.

Rank | Choice | Approach
Best | Zustand | Lightweight, no boilerplate. const useCart = create((set) => ({ items: [], add: (item) => set(...) })).
2nd | React Context + useReducer | Built-in, no dependency. Good enough for small apps, but re-renders all consumers on any change.
3rd | Redux Toolkit | Powerful, but heavy for a cart. Worth it only if the app already uses Redux for other state.

Wrong choice: Storing cart state in a Server Component or cookie-only approach with no client reactivity. The UI won't update when items are added without a full page refresh.

Why best is best: Zustand is ~1KB, requires no providers, supports selectors to avoid unnecessary re-renders, and works naturally with React 18/19 concurrent features.
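To see why selectors avoid re-renders, here is a hand-rolled sketch of the mechanism Zustand implements -- this is not the zustand API, just the underlying idea: a store outside React with subscribe, getState, and selector-based reads:

```typescript
type CartItem = { id: string; qty: number };
type CartState = { items: CartItem[] };

// Minimal external store: state lives outside React; listeners are
// notified on every update (Zustand's useStore hook layers selector
// comparison on top of exactly this).
function createStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<() => void>();
  return {
    getState: () => state,
    setState: (updater: (prev: T) => T) => {
      state = updater(state);
      listeners.forEach((l) => l());
    },
    subscribe: (l: () => void) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}

const cartStore = createStore<CartState>({ items: [] });

// A selector reads one slice; a component using it only re-renders when
// that slice's value changes, not on every store update.
const selectCount = (s: CartState) =>
  s.items.reduce((sum, i) => sum + i.qty, 0);

cartStore.setState((prev) => ({
  items: [...prev.items, { id: "sku-1", qty: 2 }],
}));
```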


Decision 13: Handling parallel data fetches on a dashboard

Scenario: A dashboard page needs user data, analytics, and notifications -- three independent API calls.

Rank | Choice | Approach
Best | Parallel async in Server Component | const [user, analytics, notifs] = await Promise.all([getUser(), getAnalytics(), getNotifs()]);
2nd | Parallel <Suspense> boundaries | Wrap each section in its own <Suspense>. They stream independently as they resolve.
3rd | Sequential awaits | const user = await getUser(); const analytics = await getAnalytics(); -- simple, but creates a waterfall.

Wrong choice: Fetching all three in a single useEffect sequentially. Three waterfalls plus client-side rendering makes the dashboard feel painfully slow.

Why best is best: Promise.all fires all three requests simultaneously. Total wait time = slowest request, not the sum of all three.
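A minimal sketch of the parallel fetch. The three fetchers are hypothetical stand-ins; in the real page they would hit your database or APIs:

```typescript
type Dashboard = { user: string; visits: number; notifs: string[] };

// Stand-in fetchers (illustrative data)
const getUser = async () => "ada";
const getAnalytics = async () => ({ visits: 1234 });
const getNotifs = async () => ["welcome"];

// All three promises are created BEFORE any await, so the requests run
// concurrently; total latency is the slowest call, not the sum of all three.
export async function loadDashboard(): Promise<Dashboard> {
  const [user, analytics, notifs] = await Promise.all([
    getUser(),
    getAnalytics(),
    getNotifs(),
  ]);
  return { user, visits: analytics.visits, notifs };
}
```

Note that `await getUser(); await getAnalytics();` would instead serialize the calls, because each promise is only created after the previous one resolves.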


Decision 14: Styling approach for a new Next.js project

Scenario: Starting a greenfield Next.js app and choosing a styling solution.

Rank | Choice | Approach
Best | Tailwind CSS | Utility-first, zero runtime, works perfectly with Server Components, huge ecosystem (shadcn/ui).
2nd | CSS Modules | Scoped styles, no runtime, built into Next.js. Good for teams that prefer traditional CSS.
3rd | Vanilla Extract | Type-safe CSS-in-TS with zero runtime. More setup, but great for design-system teams.

Wrong choice: Using a runtime CSS-in-JS library (styled-components, Emotion) with Server Components. They require client-side rendering and break RSC streaming.

Why best is best: Tailwind has zero runtime cost, works with RSC, and combined with shadcn/ui provides production-ready accessible components out of the box.


Decision 15: Image optimization

Scenario: Your e-commerce site has hundreds of product images that need to be fast and responsive.

Rank | Choice | Approach
Best | next/image | <Image src={url} width={400} height={300} alt="..." /> -- auto-optimizes format and size, and lazy-loads.
2nd | CDN with responsive <picture> | Manual <picture> with srcSet and a CDN like Cloudinary. Full control, but more work.
3rd | Plain <img> with lazy loading | <img loading="lazy" /> -- no optimization, manual sizing, potential CLS.

Wrong choice: Using unoptimized <img> tags with full-resolution source images. Pages load megabytes of images, killing Core Web Vitals.

Why best is best: next/image automatically serves WebP/AVIF, resizes for device width, lazy loads below the fold, and prevents CLS with required dimensions.


Decision 16: Database access pattern in a Next.js app

Scenario: Your app needs to query a PostgreSQL database from server-side code.

Rank | Choice | Approach
Best | Prisma in Server Components/Actions | Query directly: const users = await prisma.user.findMany(). Type-safe, no API layer needed.
2nd | Drizzle ORM | Lighter weight, SQL-like syntax, excellent TypeScript inference. Great for teams that prefer SQL.
3rd | Raw SQL via pg or postgres | Maximum control, no ORM overhead. Good for complex queries, but no type safety without extra tooling.

Wrong choice: Exposing database queries through API routes and fetching them from Client Components. Adds latency, complexity, and attack surface for no benefit when RSC can query directly.

Why best is best: Prisma in Server Components means type-safe queries with zero client exposure, automatic connection pooling, and migrations built in.


Decision 17: URL state vs. React state for filters

Scenario: A product listing page has filters (category, price range, sort) that users want to share and bookmark.

Rank | Choice | Approach
Best | Search params (useSearchParams + Server Component) | Read params in a Server Component: searchParams.category. Filter on the server, return only matching products.
2nd | nuqs library | Type-safe URL state management with useQueryState. Handles serialization and defaults elegantly.
3rd | Client-side useState | Fast UI updates, but filters are lost on refresh, can't be shared, and hurt SEO.

Wrong choice: Storing filters in useState only. Users can't share filtered views, back button doesn't work, and search engines can't index filtered pages.

Why best is best: URL search params make filters shareable, bookmarkable, SSR-friendly, and the server can optimize the database query based on filters.
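The round trip between filter state and the URL can be sketched with the standard URLSearchParams API (the param names and Filters shape are illustrative):

```typescript
export type Filters = { category?: string; maxPrice?: number; sort?: string };

// Client side: serialize filters into a query string, then e.g.
// router.push(`?${filtersToQuery(filters)}`)
export function filtersToQuery(f: Filters): string {
  const params = new URLSearchParams();
  if (f.category) params.set("category", f.category);
  if (f.maxPrice !== undefined) params.set("maxPrice", String(f.maxPrice));
  if (f.sort) params.set("sort", f.sort);
  return params.toString();
}

// Server side: the page's searchParams prop is parsed back into typed
// filters; invalid numbers are dropped rather than passed to the query.
export function parseFilters(sp: Record<string, string | undefined>): Filters {
  const maxPrice = sp.maxPrice ? Number(sp.maxPrice) : undefined;
  return {
    category: sp.category,
    maxPrice: Number.isFinite(maxPrice) ? maxPrice : undefined,
    sort: sp.sort,
  };
}
```

Because the URL is the single source of truth, refresh, share, and the back button all reproduce the same filtered view.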


Decision 18: Handling a modal/dialog

Scenario: Clicking "Edit Profile" should open a modal overlay.

Rank | Choice | Approach
Best | Intercepting route + parallel route | Two Next.js features combine: a parallel route (@modal) renders a slot alongside the page, and an intercepting route ((.)edit-profile) catches the navigation client-side and renders it inside that slot as a modal instead of a full page. See the file structure and explanation below.
2nd | Client Component with <dialog> | Use the native <dialog> element with useRef. Accessible, no library needed.
3rd | Headless UI / Radix Dialog | Library-managed focus trap, animations, and accessibility. Reliable, but adds a dependency.

How the intercepting route + parallel route pattern works:

app/
  layout.tsx              ← renders {children} AND {modal}
  @modal/
    default.tsx           ← returns null (no modal by default)
    (.)edit-profile/
      page.tsx            ← modal version of edit-profile
  edit-profile/
    page.tsx              ← full-page version of edit-profile

  1. Parallel route @modal — The @modal folder defines a named "slot." In layout.tsx you render it as a prop: export default function Layout({ children, modal }) { return <>{children}{modal}</>; }. By default it renders default.tsx (which returns null — no modal visible).

  2. Intercepting route (.)edit-profile — The (.) prefix means "intercept this route at the same level." When the user clicks a <Link href="/edit-profile"> (soft/client-side navigation), Next.js matches the intercepting route inside @modal instead of the real /edit-profile page. You render it as a modal overlay.

  3. Hard navigation (direct URL, refresh) — If someone pastes yoursite.com/edit-profile in the browser or refreshes, the interceptor doesn't activate. Next.js renders the full app/edit-profile/page.tsx as a regular page. The modal slot stays null.

The result: Clicking "Edit Profile" opens a modal (fast, no page change). Sharing the URL gives the recipient the full page. Refreshing the modal URL also shows the full page. One URL, two presentations.

// app/layout.tsx
export default function RootLayout({
  children,
  modal,
}: {
  children: React.ReactNode;
  modal: React.ReactNode;
}) {
  return (
    <html>
      <body>
        {children}
        {modal}
      </body>
    </html>
  );
}
 
// app/@modal/(.)edit-profile/page.tsx — modal version
export default function EditProfileModal() {
  return (
    <div className="fixed inset-0 z-50 flex items-center justify-center bg-black/50">
      <div className="rounded-lg bg-white p-6 shadow-xl">
        <h2>Edit Profile</h2>
        {/* form fields */}
      </div>
    </div>
  );
}
 
// app/edit-profile/page.tsx — full-page fallback
export default function EditProfilePage() {
  return (
    <main className="mx-auto max-w-lg p-8">
      <h1>Edit Profile</h1>
      {/* same form, full-page layout */}
    </main>
  );
}

Wrong choice: A div with display: none/block toggled via state. No focus trapping, no escape key handling, not accessible to screen readers, and the modal has no URL so it can't be shared or bookmarked.

Why best is best: Intercepting routes give the modal a real URL -- users can share it, refresh shows a full page, and the client-side modal avoids a full navigation. It's the only pattern that provides two presentations (modal vs. page) from a single URL with zero extra state management.


Decision 19: Implementing search over a large catalog

Scenario: Your app needs a search feature for a product catalog with 50,000+ items.

Rank | Choice | Approach
Best | Server-side search via search params | Debounce input, push to the URL: router.push(`?q=${term}`). A Server Component queries the database with ILIKE or full-text search.
2nd | Dedicated search service | Use Algolia, Meilisearch, or Elasticsearch via a Server Action. Better relevance and typo tolerance.
3rd | Client-side filter with useMemo | Load all products and filter client-side. Only viable for small datasets (under ~500 items).

Wrong choice: Loading 50,000 products into client memory and filtering with .filter(). Crashes mobile browsers and wastes bandwidth.

Why best is best: Server-side search leverages database indexes, sends only matching results over the wire, and keeps the search query in the URL for sharing.
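One detail worth sketching: the user's term must be escaped before it is embedded in an ILIKE pattern, or "%" and "_" in the input act as wildcards. The query below is illustrative (parameterized via a hypothetical sql tag, never string-concatenated):

```typescript
// Escape LIKE/ILIKE metacharacters, then wrap for a substring match.
export function toIlikePattern(term: string): string {
  const escaped = term.replace(/[\\%_]/g, (c) => "\\" + c);
  return `%${escaped}%`;
}

// In the Server Component (sketch; `sql` is a hypothetical tagged-template
// client that parameterizes values):
// const rows = await sql`
//   SELECT id, name FROM products
//   WHERE name ILIKE ${toIlikePattern(q)} ESCAPE '\\'
//   ORDER BY name
//   LIMIT 50`;
```

For 50,000+ rows, back the column with a trigram or full-text index so the search stays fast.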


Decision 20: When to use useMemo and useCallback

Scenario: A component renders a filtered list derived from props and passes a handler to child components.

Rank | Choice | Approach
Best | React Compiler (React 19) | Enable the React Compiler -- it auto-memoizes. No manual useMemo/useCallback needed.
2nd | Targeted useMemo/useCallback | Memoize only the expensive computation and the callback passed to memo()-wrapped children.
3rd | Memoize everything | Wrap every derived value and handler. Adds memory overhead and complexity for marginal gains.

Wrong choice: Never memoizing, even when you've measured a performance problem. If profiling shows a 200ms re-render from an expensive filter, useMemo is the right fix.

Why best is best: The React Compiler statically analyzes your code and inserts memoization exactly where needed -- no developer effort, no missed opportunities, no over-memoization.
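For codebases not yet on the compiler, the targeted 2nd choice looks like this sketch: keep the expensive work in a pure function (testable on its own) and memoize only at the call site. The Product shape is illustrative:

```typescript
type Product = { name: string; price: number };

// The expensive derivation, kept pure. In the component it would be wrapped
// only where profiling justifies it:
//   const visible = useMemo(() => filterProducts(products, query), [products, query]);
export function filterProducts(products: Product[], query: string): Product[] {
  const q = query.trim().toLowerCase();
  if (!q) return products;
  return products.filter((p) => p.name.toLowerCase().includes(q));
}

// Likewise, a handler passed to a memo()-wrapped child gets a stable identity:
//   const onSelect = useCallback((name: string) => setSelected(name), []);
```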


Decision 21: Handling environment-specific configuration

Scenario: Your app needs different API URLs for development, staging, and production.

Rank | Choice | Approach
Best | .env.local + NEXT_PUBLIC_ prefix | Server-only vars in .env.local. Client-exposed vars prefixed with NEXT_PUBLIC_. Next.js handles the rest.
2nd | Platform environment variables | Set vars in the Vercel/AWS dashboard. Same convention, managed externally.
3rd | Config file with environment switch | config.ts with process.env.NODE_ENV checks. Works, but duplicates what .env files already do.

Wrong choice: Hardcoding API URLs or committing .env files with secrets to git. Secrets leak, environment switching breaks.

Why best is best: Next.js .env convention is built-in, supports per-environment overrides (.env.production), and the NEXT_PUBLIC_ prefix makes the server/client boundary explicit.


Decision 22: Implementing pagination

Scenario: An admin table shows 10,000 user records and needs pagination.

Rank | Choice | Approach
Best | Server-side pagination via search params | Page and limit live in the URL (?page=2&limit=20). The Server Component reads them, queries the database with OFFSET/LIMIT (or equivalent), and returns only that page of rows. See the explanation and code below.
2nd | Cursor-based pagination | Use ?cursor=abc123 for stable pagination on frequently changing data. Better for real-time feeds.
3rd | Client-side pagination | Fetch all data, paginate in memory. Only works for small datasets.

How server-side pagination via search params works:

  1. URL is the source of truth — The current page and page size are search params (?page=2&limit=20). This means pagination state is shareable, bookmarkable, and survives refresh. The back/forward buttons navigate pages for free.

  2. Server Component reads params and queries — The page.tsx receives searchParams as a prop. It calculates OFFSET and LIMIT, queries only that slice from the database, and also fetches the total count for page controls.

  3. Client Component handles navigation — A small "use client" pagination bar uses useRouter or <Link> to update search params. No data fetching logic on the client.

  4. Streaming works automatically — Because the data fetch is in a Server Component, wrapping it in <Suspense> shows a loading skeleton while the query runs. Navigation between pages streams the new content.

// app/admin/users/page.tsx — Server Component
import { prisma } from "@/lib/db";
import { UserTable } from "./user-table";
import { PaginationBar } from "./pagination-bar";
 
interface Props {
  searchParams: Promise<{ page?: string; limit?: string }>;
}
 
export default async function UsersPage({ searchParams }: Props) {
  const { page: pageStr, limit: limitStr } = await searchParams;
  const page = Math.max(1, parseInt(pageStr ?? "1", 10));
  const limit = Math.min(100, Math.max(1, parseInt(limitStr ?? "20", 10)));
  const offset = (page - 1) * limit;
 
  // Single query for this page + total count
  const [users, total] = await Promise.all([
    prisma.user.findMany({ skip: offset, take: limit, orderBy: { createdAt: "desc" } }),
    prisma.user.count(),
  ]);
 
  const totalPages = Math.ceil(total / limit);
 
  return (
    <div>
      <h1>Users ({total})</h1>
      <UserTable users={users} />
      <PaginationBar currentPage={page} totalPages={totalPages} />
    </div>
  );
}
// app/admin/users/pagination-bar.tsx — Client Component
"use client";
 
import Link from "next/link";
 
interface PaginationBarProps {
  currentPage: number;
  totalPages: number;
}
 
export function PaginationBar({ currentPage, totalPages }: PaginationBarProps) {
  return (
    <div className="flex items-center gap-2 py-4">
      <Link
        href={`?page=${currentPage - 1}`}
        className={`rounded border px-3 py-1 ${currentPage <= 1 ? "pointer-events-none opacity-50" : ""}`}
      >
        Previous
      </Link>
      <span className="text-sm">
        Page {currentPage} of {totalPages}
      </span>
      <Link
        href={`?page=${currentPage + 1}`}
        className={`rounded border px-3 py-1 ${currentPage >= totalPages ? "pointer-events-none opacity-50" : ""}`}
      >
        Next
      </Link>
    </div>
  );
}

Key points: The database only returns 20 rows per request (not 10,000). The URL tells you exactly which page you're on. Changing pages is a server navigation -- no client-side data fetching code. <Link> with search params gives you prefetching and streaming for free.

Wrong choice: Infinite scroll that fetches all 10,000 records into memory. Browser crashes, accessibility nightmare, and users can't jump to page 50.

Why best is best: Server-side pagination puts page state in the URL (shareable, bookmarkable), fetches only the needed slice from the database, streams HTML via RSC, and requires zero client-side data fetching logic.


Decision 23: Choosing between Server Actions and API Routes

Scenario: A form needs to create a new record in the database.

Rank | Choice | Approach
Best | Server Action | A "use server" function called from <form action={...}>. Type-safe, progressively enhanced, no manual endpoint.
2nd | API Route (Route Handler) | app/api/records/route.ts with a POST handler. Better when external clients (mobile apps, webhooks) need the endpoint.
3rd | tRPC | End-to-end type safety with a client/server contract. Powerful, but heavy if Server Actions cover your needs.

Wrong choice: Creating API routes for mutations only used by your own Next.js frontend. Server Actions eliminate the boilerplate of managing endpoints, serialization, and error handling.

Why best is best: Server Actions are co-located with your UI, automatically handle serialization, support progressive enhancement, and integrate with useActionState for pending/error states.


Decision 24: Structuring a large Next.js project

Scenario: Your app has 30+ routes across multiple feature areas (auth, dashboard, settings, public marketing).

Rank | Choice | Approach
Best | Route groups + feature colocation | app/(auth)/login, app/(dashboard)/analytics, app/(marketing)/pricing. Shared layouts per group.
2nd | Feature folders outside app/ | src/features/auth/, src/features/dashboard/ with components, hooks, and utils. app/ only has thin route files.
3rd | Flat app/ structure | All routes at the top level. Works for small apps, but becomes unmanageable at 30+ routes.

Wrong choice: Organizing by file type (components/, hooks/, utils/) instead of feature. Developers jump between 5 directories to work on one feature.

Why best is best: Route groups let you share layouts and middleware per feature area, keep related files together, and the parenthesized names don't affect the URL.


Decision 25: Real-time updates (e.g., chat, notifications)

Scenario: Your app needs to show live notifications as they arrive.

Rank | Choice | Approach
Best | WebSockets (Socket.io or native) | Persistent connection, bidirectional, low latency. Use a Client Component to manage the connection.
2nd | Server-Sent Events (SSE) | One-way server-to-client push. Simpler than WebSockets, auto-reconnects, works through proxies.
3rd | Polling with setInterval | Fetch every N seconds. Simple, but wastes bandwidth and has inherent latency.

Wrong choice: Polling every second with useEffect + fetch. Hammers the server, drains mobile batteries, and still has up to 1 second of latency.

Why best is best: WebSockets deliver messages instantly with a single persistent connection, supporting both sending and receiving without repeated HTTP overhead.


Decision 26: Testing strategy for a React component

Scenario: A <CheckoutForm> component handles validation, submission, and error display.

Rank | Choice | Approach
Best | React Testing Library + Vitest | Test user behavior: fill fields, submit, assert error messages appear. render(<CheckoutForm />).
2nd | Playwright/Cypress E2E | Test the full flow in a real browser. Slower, but catches integration issues across the stack.
3rd | Snapshot tests | expect(tree).toMatchSnapshot(). Catches unexpected changes, but doesn't verify behavior.

Wrong choice: Testing implementation details (internal state, method calls, CSS classes). Tests break on every refactor even when behavior is unchanged.

Why best is best: RTL tests what users see and do -- if the test passes, the component works. Refactoring internals doesn't break tests, so they actually maintain confidence.


Decision 27: Handling metadata and SEO

Scenario: Each product page needs unique title, description, and Open Graph tags.

Rank | Choice | Approach
Best | generateMetadata function | Export an async function generateMetadata({ params }) that fetches product data and returns { title, description, openGraph }.
2nd | Static metadata export | export const metadata = { title: '...' }. Good for pages with fixed metadata.
3rd | <Head> from next/head | Pages Router only. Client-side, doesn't work with RSC.

Wrong choice: Not setting metadata at all or using the same title on every page. Search engines can't differentiate your pages, and social shares look broken.

Why best is best: generateMetadata runs on the server, can fetch data to generate dynamic titles, and is deduplicated with the page's data fetch via fetch cache.
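A sketch that keeps the metadata shape in a pure builder so it can be tested without Next.js. The Product fields and the "Acme Store" site name are illustrative:

```typescript
type Product = { name: string; summary: string; imageUrl: string };

// Pure builder: product in, metadata object out.
export function buildProductMetadata(product: Product) {
  return {
    title: `${product.name} | Acme Store`, // hypothetical site name
    description: product.summary,
    openGraph: {
      title: product.name,
      description: product.summary,
      images: [{ url: product.imageUrl }],
    },
  };
}

// In app/products/[slug]/page.tsx (sketch):
// export async function generateMetadata({ params }: { params: Promise<{ slug: string }> }) {
//   const { slug } = await params;
//   const product = await getProduct(slug); // deduped with the page's own fetch
//   return buildProductMetadata(product);
// }
```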


Decision 28: Handling file uploads

Scenario: Users upload profile avatars (max 5MB) to your app.

Rank | Choice | Approach
Best | Presigned URL upload to S3/R2 | A Server Action generates a presigned URL. The client uploads directly to storage. No server memory pressure.
2nd | Server Action with FormData | const file = formData.get('avatar'). The server receives the file and forwards it to storage. Simpler, but memory-bound.
3rd | API Route with multer-style parsing | Classic approach. More control, but more code and configuration.

Wrong choice: Storing uploaded files on the Next.js server's filesystem. Ephemeral containers (Vercel, Docker) lose files on redeploy. Files must go to durable storage.

Why best is best: Presigned URLs let the client upload directly to S3/R2 -- the server never touches the bytes, so it handles any file size without memory pressure.


Decision 29: Internationalization (i18n)

Scenario: Your marketing site needs to support English, Spanish, and Japanese.

Rank | Choice | Approach
Best | next-intl with App Router | Middleware detects the locale, route groups per language ([locale]/), server-side translations.
2nd | next-i18next (Pages Router) | Mature, well-documented. Best option if you're still on the Pages Router.
3rd | Manual locale routing | Build your own [locale] dynamic segment and translation loader. Full control, but reinvents the wheel.

Wrong choice: Client-side only translation (loading JSON bundles in useEffect). Content flashes in the default language before switching, and search engines only see the default.

Why best is best: next-intl integrates with App Router's middleware and Server Components, so translated content is in the initial HTML -- no flash, full SEO, and automatic locale detection.


Decision 30: Deploying a Next.js application

Scenario: Your team is ready to deploy a production Next.js app.

Rank | Choice | Approach
Best | Vercel | Zero-config deployment, edge functions, ISR support, analytics, preview deployments per PR.
2nd | Self-hosted with next start | Run the Next.js production server on any machine you control. Full control over infrastructure, but you manage everything yourself. See the explanation below.
3rd | Docker container | Build a Docker image with standalone output. Good for Kubernetes or existing container infrastructure.

How self-hosted with next start works:

Next.js includes a built-in Node.js production server. You build the app once, then run it as a long-lived process on any server with Node.js installed.

  1. Build step — Run next build on your CI or on the server. This compiles all pages, generates static assets, and produces the .next/ output directory. Server Components are pre-rendered, client bundles are optimized and code-split.

  2. Start step — Run next start (defaults to port 3000). This launches a Node.js HTTP server that handles routing, SSR, Server Actions, ISR revalidation, and middleware -- all the same features you get on Vercel, just running on your own machine.

  3. What you need to manage yourself:

Concern | What Vercel does for you | What you do self-hosted
Process management | Automatic | Use PM2 or systemd to keep the process alive and restart on crash
HTTPS / TLS | Automatic | Put Nginx or Caddy in front as a reverse proxy with TLS termination
Scaling | Auto-scales | Run multiple instances behind a load balancer (ALB, Nginx upstream)
CDN / static assets | Edge CDN built in | Configure CloudFront, Cloudflare, or serve from Nginx
ISR cache | Managed distributed cache | Works out of the box on a single server; for multi-server, configure a shared cache handler
Environment variables | Dashboard UI | Set in .env.production, a systemd unit file, or your deployment tool
Zero-downtime deploys | Automatic | Blue-green or rolling deploys via your own scripts

# Typical self-hosted deployment on an EC2/DigitalOcean instance
 
# 1. Build (often done in CI, then artifacts are copied to the server)
npm ci
next build
 
# 2. Start with PM2 for process management
pm2 start npm --name "my-app" -- start
# or directly:
pm2 start node_modules/.bin/next --name "my-app" -- start -p 3000
 
# 3. Nginx reverse proxy (simplified)
# /etc/nginx/sites-available/my-app
# server {
#   listen 443 ssl;
#   server_name myapp.com;
#   location / {
#     proxy_pass http://localhost:3000;
#     proxy_http_version 1.1;
#     proxy_set_header Upgrade $http_upgrade;
#     proxy_set_header Connection 'upgrade';
#     proxy_set_header Host $host;
#     proxy_cache_bypass $http_upgrade;
#   }
# }

When to choose self-hosted over Vercel: You need to stay within a specific cloud provider (e.g., everything in AWS VPC), your company policy prohibits third-party hosting, you need custom server middleware (e.g., WebSocket upgrades), or cost is a concern at high traffic volumes where Vercel pricing exceeds a dedicated server.

Wrong choice: Exporting as static HTML (next export / output: 'export') when your app uses Server Components, middleware, or ISR. These features require a Node.js runtime -- static export silently drops them.

Why best is best: Vercel is built by the Next.js team -- features like ISR, middleware, and Server Actions work out of the box with zero infrastructure configuration. Self-hosted is the right 2nd choice when you need that infrastructure control, but expect to own the ops burden.