React & Next.js Architecture Decisions
Common scenarios where React and Next.js developers must choose between patterns, hooks, or architectural approaches. Each decision presents the best, second-best, and third-best choice -- plus the wrong choice developers commonly make.
Decision 1: Fetching data for a page that displays a list of products
Scenario: You need to load product data on a Next.js page.
| Rank | Choice | Approach |
|---|---|---|
| Best | Async Server Component | async function ProductsPage() { const products = await db.products.findMany(); return <ProductList products={products} />; } |
| 2nd | Route handler + server fetch | Create a GET route handler and fetch from a Server Component. Works but adds an unnecessary network hop. |
| 3rd | getServerSideProps (Pages Router) | Still works if on Pages Router, but you lose streaming and RSC benefits. |
Wrong choice: Using useEffect + fetch in a Client Component. This adds a client-server waterfall, exposes your API, and hurts SEO since content isn't in the initial HTML.
Why best is best: Async Server Components fetch data with zero client JS, stream HTML progressively, and access the database directly without an API layer.
Decision 2: Managing form submission with server-side validation
Scenario: A user submits a contact form that needs server validation and error display.
| Rank | Choice | Approach |
|---|---|---|
| Best | Server Action + useActionState | Bind a Server Action to <form action={...}> and use useActionState for pending/error state. |
| 2nd | Server Action + manual state | Call the action via startTransition and manage your own state with useState. More boilerplate but full control. |
| 3rd | API route + client fetch | POST to a route handler from an onSubmit handler. Works but bypasses progressive enhancement. |
Wrong choice: Client-only validation with no server validation. Never trust the client -- attackers bypass your UI.
Why best is best: useActionState gives you pending, error, and data states automatically, works without JS (progressive enhancement), and keeps validation logic on the server.
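A minimal sketch of the action side of this pattern, assuming hand-rolled validation (no schema library) and with persistence elided. The `submitContact` name, field names, and validation rules are illustrative; the important part is the `(previousState, formData) => newState` signature that `useActionState` expects:

```typescript
// Sketch of a Server Action compatible with useActionState.
// Signature: (previousState, formData) => newState.
type ContactState = { error: string | null; success: boolean };

export async function submitContact(
  _prev: ContactState,
  formData: FormData
): Promise<ContactState> {
  "use server"; // marks this as a Server Action in a real Next.js app

  const email = String(formData.get("email") ?? "");
  const message = String(formData.get("message") ?? "");

  // Server-side validation -- never trust the client.
  if (!email.includes("@")) {
    return { error: "Please enter a valid email address.", success: false };
  }
  if (message.trim().length < 10) {
    return { error: "Message must be at least 10 characters.", success: false };
  }

  // Persistence elided in this sketch, e.g. a database insert would go here.
  return { error: null, success: true };
}
```

On the client, `const [state, formAction] = useActionState(submitContact, { error: null, success: false })` wires this to `<form action={formAction}>`, and `state.error` renders the validation message.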
Decision 3: Sharing UI layout across multiple routes
Scenario: Your dashboard has a sidebar and header that should persist across /dashboard/analytics, /dashboard/settings, etc.
| Rank | Choice | Approach |
|---|---|---|
| Best | Nested layout.tsx | Place a layout.tsx in app/dashboard/ with shared UI. Child routes render inside {children}. |
| 2nd | Template file | Use template.tsx instead. Same nesting, but re-mounts on navigation (useful for animations). |
| 3rd | Wrapper component | Import a <DashboardShell> wrapper in each page. Works but duplicates the import and breaks automatic layout preservation. |
Wrong choice: Putting layout logic in _app.tsx (Pages Router) or a context provider that re-renders the entire tree. This defeats the purpose of nested layouts.
Why best is best: layout.tsx is preserved across navigations -- the sidebar doesn't re-render when you switch tabs, preserving scroll state and avoiding unnecessary work.
Decision 4: Showing a loading state while a page loads
Scenario: Your dashboard page fetches heavy analytics data and you need a loading skeleton.
| Rank | Choice | Approach |
|---|---|---|
| Best | loading.tsx | Add a loading.tsx file next to page.tsx. Next.js auto-wraps the page in a <Suspense> boundary with your loading UI. |
| 2nd | Manual <Suspense> | Wrap specific async components in <Suspense fallback={<Skeleton />}> for granular control. |
| 3rd | Client-side loading state | Use useState(true) and set false after useEffect fetch completes. |
Wrong choice: Showing nothing (blank screen) while data loads. Users think the app is broken after ~300ms of no feedback.
Why best is best: loading.tsx is automatic, requires zero client JS, and enables instant navigation via React's streaming architecture.
Decision 5: Choosing between useState and useReducer
Scenario: A component manages a multi-step form with 8 fields, validation state, and step navigation.
| Rank | Choice | Approach |
|---|---|---|
| Best | useReducer | Define actions like SET_FIELD, NEXT_STEP, VALIDATE. State transitions are explicit and testable. |
| 2nd | useState with an object | const [form, setForm] = useState({...}). Simpler but spread-based updates get messy with complex logic. |
| 3rd | Multiple useState calls | One per field. Fine for simple forms but 8+ useState calls are hard to coordinate. |
Wrong choice: Reaching for a global state library (Redux, Zustand) for form state that only lives in one component. Massive overkill.
Why best is best: useReducer centralizes complex state transitions into a pure function you can unit test independently of React.
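A sketch of what that reducer could look like -- action and field names are illustrative. Because it is a pure function, it can be unit tested without rendering a component:

```typescript
// Pure reducer for a multi-step form; plug into useReducer(formReducer, initialState).
type FormState = {
  step: number;
  values: Record<string, string>;
  errors: Record<string, string>;
};

type FormAction =
  | { type: "SET_FIELD"; field: string; value: string }
  | { type: "NEXT_STEP" }
  | { type: "PREV_STEP" };

export function formReducer(state: FormState, action: FormAction): FormState {
  switch (action.type) {
    case "SET_FIELD":
      // Update one field without touching the rest of the form.
      return { ...state, values: { ...state.values, [action.field]: action.value } };
    case "NEXT_STEP":
      return { ...state, step: state.step + 1 };
    case "PREV_STEP":
      // Clamp so the user can't navigate before the first step.
      return { ...state, step: Math.max(0, state.step - 1) };
  }
}
```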
Decision 6: Passing data deeply through the component tree
Scenario: Theme, locale, and user preferences need to be accessible 5+ levels deep.
| Rank | Choice | Approach |
|---|---|---|
| Best | React Context | Create a context provider near the top. Consumers read via useContext or React 19's use(ThemeContext). |
| 2nd | Component composition | Pass the data as children or render props to avoid intermediate components needing the prop. |
| 3rd | Prop drilling | Pass props through every level. Tedious but explicit and easy to trace. |
Wrong choice: Installing a state management library just to avoid prop drilling. Context is built-in and sufficient for read-heavy, rarely-changing data like themes.
Why best is best: Context is zero-dependency, built into React, and optimized for data that changes infrequently but is read widely.
Decision 7: Optimistic UI for a like button
Scenario: User clicks "like" and you want the UI to update instantly before the server confirms.
| Rank | Choice | Approach |
|---|---|---|
| Best | useOptimistic | const [optimisticLikes, addOptimistic] = useOptimistic(likes, (state, newLike) => [...state, newLike]); |
| 2nd | Manual optimistic with useState | Set state immediately, revert on error in a try/catch. More code, same idea. |
| 3rd | React Query's onMutate | Optimistic update via mutation callbacks. Good but pulls in a library for something React 19 does natively. |
Wrong choice: Waiting for the server response before updating the UI. The 200-500ms delay makes the app feel sluggish.
Why best is best: useOptimistic automatically reverts to the real state when the Server Action completes or fails -- no manual rollback logic needed.
Decision 8: Error handling for a specific route segment
Scenario: The /dashboard/billing page can fail (payment API errors) and needs a graceful fallback.
| Rank | Choice | Approach |
|---|---|---|
| Best | error.tsx | Add error.tsx next to the page. Catches rendering and data errors, offers a retry button via reset(). |
| 2nd | Error boundary component | Wrap in a custom <ErrorBoundary> for more control over error types and reporting. |
| 3rd | Try/catch in Server Component | Catch errors in the async function and render fallback UI inline. No automatic retry. |
Wrong choice: Letting errors bubble to a global error page that replaces the entire layout. The user loses all context and navigation state.
Why best is best: error.tsx is scoped to the route segment -- the rest of the layout stays interactive, and reset() re-attempts rendering without a full page reload.
Decision 9: When to add "use client"
Scenario: A component displays a product card with a name, price, image, and an "Add to Cart" button.
| Rank | Choice | Approach |
|---|---|---|
| Best | Split: Server card + Client button | Keep ProductCard as a Server Component. Extract <AddToCartButton> as a Client Component. |
| 2nd | Entire card as Client Component | Add "use client" to the card. Simple but ships more JS than needed. |
| 3rd | Server Component with client-side hydration wrapper | Wrap the interactive part with a generic hydration boundary. Over-engineered for a button. |
Wrong choice: Adding "use client" to every component "just to be safe." This defeats RSC benefits and ships unnecessary JavaScript.
Why best is best: Push the "use client" boundary as low as possible. Only the interactive button needs client JS -- the card, image, and text render with zero JS.
Decision 10: Caching and revalidation strategy for a blog
Scenario: Blog posts update infrequently. You want fast loads but fresh content within minutes.
| Rank | Choice | Approach |
|---|---|---|
| Best | ISR with revalidate | export const revalidate = 60; in your page or layout. Serves cached HTML, revalidates in the background. |
| 2nd | On-demand revalidation | Call revalidatePath('/blog/[slug]') or revalidateTag('posts') from a CMS webhook. Instant freshness. |
| 3rd | Full static (generateStaticParams) | Pre-render all posts at build. Fast but stale until the next deploy. |
Wrong choice: Making the blog page fully dynamic (export const dynamic = 'force-dynamic'). No caching means every request hits the database -- slow and wasteful for content that barely changes.
Why best is best: Time-based ISR is zero-config, serves stale content while revalidating, and balances freshness with performance automatically.
Decision 11: Protecting a route from unauthenticated users
Scenario: /dashboard should only be accessible to logged-in users.
| Rank | Choice | Approach |
|---|---|---|
| Best | Middleware | Check the session in middleware.ts and redirect before the page even renders. Zero layout flash. |
| 2nd | Server Component check | Verify auth in the page's Server Component and call redirect('/login'). Works but the layout may flash. |
| 3rd | Client-side guard | Check auth in useEffect and redirect. Shows protected content briefly before redirecting. |
Wrong choice: Only hiding the navigation link to /dashboard. Security through obscurity -- anyone with the URL can access the page.
Why best is best: Middleware runs at the edge before any rendering. The user never sees a flash of protected content, and the check is centralized for all protected routes.
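A framework-free sketch of the decision logic you would run inside `middleware.ts` -- the real file would read the session cookie from `NextRequest` and call `NextResponse.redirect`. The prefix list and helper name are illustrative:

```typescript
// Pure routing decision: which URL (if any) should the user be redirected to?
const PROTECTED_PREFIXES = ["/dashboard", "/settings"];

export function authRedirectTarget(pathname: string, hasSession: boolean): string | null {
  const isProtected = PROTECTED_PREFIXES.some((p) => pathname.startsWith(p));
  if (isProtected && !hasSession) {
    // Preserve the destination so the login page can bounce the user back.
    return `/login?next=${encodeURIComponent(pathname)}`;
  }
  return null; // proceed as normal
}
```

Keeping the decision as a pure function makes it trivially testable; the middleware itself stays a thin wrapper.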
Decision 12: Storing global client state (shopping cart, UI toggles)
Scenario: A cart needs to persist across page navigations and be accessible from the header and product pages.
| Rank | Choice | Approach |
|---|---|---|
| Best | Zustand | Lightweight, no boilerplate. const useCart = create((set) => ({ items: [], add: (item) => set(...) })). |
| 2nd | React Context + useReducer | Built-in, no dependency. Good enough for small apps but re-renders all consumers on any change. |
| 3rd | Redux Toolkit | Powerful but heavy for a cart. Worth it only if the app already uses Redux for other state. |
Wrong choice: Storing cart state in a Server Component or cookie-only approach with no client reactivity. The UI won't update when items are added without a full page refresh.
Why best is best: Zustand is ~1KB, requires no providers, supports selectors to avoid unnecessary re-renders, and works naturally with React 18/19 concurrent features.
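To show why the pattern is so lightweight, here is a minimal framework-free store in the same shape as Zustand's vanilla core -- not Zustand's actual internals, just a sketch of the `create((set, get) => ...)` idea with a hypothetical cart:

```typescript
// Minimal store: state + set/get + subscribe, in the shape of zustand's vanilla API.
type Listener = () => void;

function createStore<T>(init: (set: (partial: Partial<T>) => void, get: () => T) => T) {
  let state: T;
  const listeners = new Set<Listener>();
  const set = (partial: Partial<T>) => {
    state = { ...state, ...partial }; // shallow-merge updates
    listeners.forEach((l) => l()); // notify subscribers (React re-renders)
  };
  const get = () => state;
  state = init(set, get);
  return {
    getState: get,
    subscribe: (l: Listener) => { listeners.add(l); return () => listeners.delete(l); },
  };
}

// Hypothetical cart store in that shape:
type CartItem = { id: string; qty: number };
type CartState = { items: CartItem[]; add: (item: CartItem) => void };

export const cartStore = createStore<CartState>((set, get) => ({
  items: [],
  add: (item) => set({ items: [...get().items, item] }),
}));
```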
Decision 13: Handling parallel data fetches on a dashboard
Scenario: A dashboard page needs user data, analytics, and notifications -- three independent API calls.
| Rank | Choice | Approach |
|---|---|---|
| Best | Parallel async in Server Component | const [user, analytics, notifs] = await Promise.all([getUser(), getAnalytics(), getNotifs()]); |
| 2nd | Parallel <Suspense> boundaries | Wrap each section in its own <Suspense>. They stream independently as they resolve. |
| 3rd | Sequential awaits | const user = await getUser(); const analytics = await getAnalytics(); -- simple but creates a waterfall. |
Wrong choice: Fetching all three in a single useEffect sequentially. Three waterfalls plus client-side rendering makes the dashboard feel painfully slow.
Why best is best: Promise.all fires all three requests simultaneously. Total wait time = slowest request, not the sum of all three.
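The same shape with stand-in fetchers (`getUser`, `getAnalytics`, `getNotifs` are hypothetical) -- all three promises are created before any is awaited, which is what makes them run in parallel:

```typescript
// Stand-in fetchers; in the real page these would hit a database or API.
const getUser = async () => ({ name: "Ada" });
const getAnalytics = async () => ({ views: 1200 });
const getNotifs = async () => [{ id: 1, text: "Welcome" }];

export async function loadDashboard() {
  // All three requests start immediately; total latency is roughly the slowest one.
  const [user, analytics, notifs] = await Promise.all([
    getUser(),
    getAnalytics(),
    getNotifs(),
  ]);
  return { user, analytics, notifs };
}
```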
Decision 14: Styling approach for a new Next.js project
Scenario: Starting a greenfield Next.js app and choosing a styling solution.
| Rank | Choice | Approach |
|---|---|---|
| Best | Tailwind CSS | Utility-first, zero runtime, works perfectly with Server Components, huge ecosystem (shadcn/ui). |
| 2nd | CSS Modules | Scoped styles, no runtime, built into Next.js. Good for teams that prefer traditional CSS. |
| 3rd | Vanilla Extract | Type-safe CSS-in-TS with zero runtime. More setup but great for design system teams. |
Wrong choice: Using a runtime CSS-in-JS library (styled-components, Emotion) with Server Components. They require client-side rendering and break RSC streaming.
Why best is best: Tailwind has zero runtime cost, works with RSC, and combined with shadcn/ui provides production-ready accessible components out of the box.
Decision 15: Image optimization
Scenario: Your e-commerce site has hundreds of product images that need to be fast and responsive.
| Rank | Choice | Approach |
|---|---|---|
| Best | next/image | <Image src={url} width={400} height={300} alt="..." /> -- auto-optimizes format, size, and lazy loads. |
| 2nd | CDN with responsive <picture> | Manual <picture> with srcSet and a CDN like Cloudinary. Full control but more work. |
| 3rd | Plain <img> with lazy loading | <img loading="lazy" /> -- no optimization, manual sizing, potential CLS. |
Wrong choice: Using unoptimized <img> tags with full-resolution source images. Pages load megabytes of images, killing Core Web Vitals.
Why best is best: next/image automatically serves WebP/AVIF, resizes for device width, lazy loads below the fold, and prevents CLS with required dimensions.
Decision 16: Database access pattern in a Next.js app
Scenario: Your app needs to query a PostgreSQL database from server-side code.
| Rank | Choice | Approach |
|---|---|---|
| Best | Prisma in Server Components/Actions | Query directly: const users = await prisma.user.findMany(). Type-safe, no API layer needed. |
| 2nd | Drizzle ORM | Lighter weight, SQL-like syntax, excellent TypeScript inference. Great for teams that prefer SQL. |
| 3rd | Raw SQL via pg or postgres | Maximum control, no ORM overhead. Good for complex queries but no type safety without extra tooling. |
Wrong choice: Exposing database queries through API routes and fetching them from Client Components. Adds latency, complexity, and attack surface for no benefit when RSC can query directly.
Why best is best: Prisma in Server Components means type-safe queries with zero client exposure, automatic connection pooling, and migrations built in.
Decision 17: URL state vs. React state for filters
Scenario: A product listing page has filters (category, price range, sort) that users want to share and bookmark.
| Rank | Choice | Approach |
|---|---|---|
| Best | Search params (useSearchParams + Server Component) | Read params in a Server Component: searchParams.category. Filter on the server, return only matching products. |
| 2nd | nuqs library | Type-safe URL state management with useQueryState. Handles serialization and defaults elegantly. |
| 3rd | Client-side useState | Fast UI updates but filters are lost on refresh, can't be shared, and hurt SEO. |
Wrong choice: Storing filters in useState only. Users can't share filtered views, back button doesn't work, and search engines can't index filtered pages.
Why best is best: URL search params make filters shareable, bookmarkable, SSR-friendly, and the server can optimize the database query based on filters.
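A sketch of the serialization layer with `URLSearchParams` -- field names (`category`, `minPrice`, `sort`) are illustrative. A Client Component would push `filtersToSearch(...)` into the URL; the Server Component parses it back:

```typescript
// Round-trip filter state through the URL query string.
export type Filters = { category?: string; minPrice?: number; sort?: string };

export function filtersToSearch(filters: Filters): string {
  const params = new URLSearchParams();
  if (filters.category) params.set("category", filters.category);
  if (filters.minPrice !== undefined) params.set("minPrice", String(filters.minPrice));
  if (filters.sort) params.set("sort", filters.sort);
  return params.toString();
}

export function searchToFilters(search: string): Filters {
  const params = new URLSearchParams(search);
  return {
    category: params.get("category") ?? undefined,
    minPrice: params.has("minPrice") ? Number(params.get("minPrice")) : undefined,
    sort: params.get("sort") ?? undefined,
  };
}
```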
Decision 18: Handling a modal/dialog
Scenario: Clicking "Edit Profile" should open a modal overlay.
| Rank | Choice | Approach |
|---|---|---|
| Best | Intercepting route + parallel route | Two Next.js features combine: a parallel route (@modal) renders a slot alongside the page, and an intercepting route ((.)edit-profile) catches the navigation client-side and renders it inside that slot as a modal instead of a full page. See file structure and explanation below. |
| 2nd | Client Component with <dialog> | Use native <dialog> element with useRef. Accessible, no library needed. |
| 3rd | Headless UI / Radix Dialog | Library-managed focus trap, animations, and accessibility. Reliable but adds a dependency. |
How the intercepting route + parallel route pattern works:
```
app/
  layout.tsx              ← renders {children} AND {modal}
  @modal/
    default.tsx           ← returns null (no modal by default)
    (.)edit-profile/
      page.tsx            ← modal version of edit-profile
  edit-profile/
    page.tsx              ← full-page version of edit-profile
```
- Parallel route `@modal` -- The `@modal` folder defines a named "slot." In `layout.tsx` you render it as a prop: `export default function Layout({ children, modal }) { return <>{children}{modal}</>; }`. By default it renders `default.tsx` (which returns `null` -- no modal visible).
- Intercepting route `(.)edit-profile` -- The `(.)` prefix means "intercept this route at the same level." When the user clicks a `<Link href="/edit-profile">` (soft/client-side navigation), Next.js matches the intercepting route inside `@modal` instead of the real `/edit-profile` page. You render it as a modal overlay.
- Hard navigation (direct URL, refresh) -- If someone pastes `yoursite.com/edit-profile` in the browser or refreshes, the interceptor doesn't activate. Next.js renders the full `app/edit-profile/page.tsx` as a regular page. The modal slot stays `null`.
The result: Clicking "Edit Profile" opens a modal (fast, no page change). Sharing the URL gives the recipient the full page. Refreshing the modal URL also shows the full page. One URL, two presentations.
```tsx
// app/layout.tsx
export default function RootLayout({
  children,
  modal,
}: {
  children: React.ReactNode;
  modal: React.ReactNode;
}) {
  return (
    <html>
      <body>
        {children}
        {modal}
      </body>
    </html>
  );
}
```

```tsx
// app/@modal/(.)edit-profile/page.tsx — modal version
export default function EditProfileModal() {
  return (
    <div className="fixed inset-0 z-50 flex items-center justify-center bg-black/50">
      <div className="rounded-lg bg-white p-6 shadow-xl">
        <h2>Edit Profile</h2>
        {/* form fields */}
      </div>
    </div>
  );
}
```

```tsx
// app/edit-profile/page.tsx — full-page fallback
export default function EditProfilePage() {
  return (
    <main className="mx-auto max-w-lg p-8">
      <h1>Edit Profile</h1>
      {/* same form, full-page layout */}
    </main>
  );
}
```

Wrong choice: A div with display: none/block toggled via state. No focus trapping, no escape key handling, not accessible to screen readers, and the modal has no URL so it can't be shared or bookmarked.
Why best is best: Intercepting routes give the modal a real URL -- users can share it, refresh shows a full page, and the client-side modal avoids a full navigation. It's the only pattern that provides two presentations (modal vs. page) from a single URL with zero extra state management.
Decision 19: Server-side vs. client-side search
Scenario: Your app needs a search feature for a product catalog with 50,000+ items.
| Rank | Choice | Approach |
|---|---|---|
| Best | Server-side search via search params | Debounce input, push to URL: router.push(?q=term). Server Component queries the database with ILIKE or full-text search. |
| 2nd | Dedicated search service | Use Algolia, Meilisearch, or Elasticsearch via a Server Action. Better relevance and typo tolerance. |
| 3rd | Client-side filter with useMemo | Load all products and filter client-side. Only viable for small datasets (under 500 items). |
Wrong choice: Loading 50,000 products into client memory and filtering with .filter(). Crashes mobile browsers and wastes bandwidth.
Why best is best: Server-side search leverages database indexes, sends only matching results over the wire, and keeps the search query in the URL for sharing.
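One detail worth sketching: if the search term is interpolated into an `ILIKE` pattern as a parameter, `%` and `_` in user input act as wildcards unless escaped. A small helper (the name `toILikePattern` is ours; this assumes backslash as the escape character, as in Postgres by default):

```typescript
// Escape LIKE/ILIKE wildcards in user input, then wrap in %...% for substring match.
export function toILikePattern(term: string): string {
  const escaped = term.replace(/[\\%_]/g, (ch) => `\\${ch}`);
  return `%${escaped}%`;
}
```

The result is passed as a bound query parameter, e.g. `WHERE name ILIKE $1` -- never string-concatenated into the SQL itself.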
Decision 20: When to use useMemo and useCallback
Scenario: A component renders a filtered list derived from props and passes a handler to child components.
| Rank | Choice | Approach |
|---|---|---|
| Best | Use React Compiler (React 19) | Enable the React Compiler -- it auto-memoizes. No manual useMemo/useCallback needed. |
| 2nd | Targeted useMemo/useCallback | Memoize only the expensive computation and the callback passed to memo()-wrapped children. |
| 3rd | Memoize everything | Wrap every derived value and handler. Adds memory overhead and complexity for marginal gains. |
Wrong choice: Never memoizing, even when you've measured a performance problem. If profiling shows a 200ms re-render from an expensive filter, useMemo is the right fix.
Why best is best: The React Compiler statically analyzes your code and inserts memoization exactly where needed -- no developer effort, no missed opportunities, no over-memoization.
Decision 21: Handling environment-specific configuration
Scenario: Your app needs different API URLs for development, staging, and production.
| Rank | Choice | Approach |
|---|---|---|
| Best | .env.local + NEXT_PUBLIC_ prefix | Server-only vars in .env.local. Client-exposed vars prefixed with NEXT_PUBLIC_. Next.js handles the rest. |
| 2nd | Platform environment variables | Set vars in Vercel/AWS dashboard. Same convention, managed externally. |
| 3rd | Config file with environment switch | config.ts with process.env.NODE_ENV checks. Works but duplicates what .env files already do. |
Wrong choice: Hardcoding API URLs or committing .env files with secrets to git. Secrets leak, environment switching breaks.
Why best is best: Next.js .env convention is built-in, supports per-environment overrides (.env.production), and the NEXT_PUBLIC_ prefix makes the server/client boundary explicit.
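A small guard helper (the name `requireEnv` is ours) that fails fast at startup when a variable is missing, rather than letting `undefined` leak into a fetch URL at request time:

```typescript
// Read a required environment variable or throw with a clear message.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Typical usage: `const apiUrl = requireEnv("NEXT_PUBLIC_API_URL");` at module scope, so a misconfigured deployment fails loudly on boot.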
Decision 22: Implementing pagination
Scenario: An admin table shows 10,000 user records and needs pagination.
| Rank | Choice | Approach |
|---|---|---|
| Best | Server-side pagination via search params | Page and limit live in the URL (?page=2&limit=20). The Server Component reads them, queries the database with OFFSET/LIMIT (or equivalent), and returns only that page of rows. See explanation and code below. |
| 2nd | Cursor-based pagination | Use ?cursor=abc123 for stable pagination on frequently changing data. Better for real-time feeds. |
| 3rd | Client-side pagination | Fetch all data, paginate in memory. Only works for small datasets. |
How server-side pagination via search params works:
- URL is the source of truth -- The current page and page size are search params (`?page=2&limit=20`). This means pagination state is shareable, bookmarkable, and survives refresh. The back/forward buttons navigate pages for free.
- Server Component reads params and queries -- The `page.tsx` receives `searchParams` as a prop. It calculates `OFFSET` and `LIMIT`, queries only that slice from the database, and also fetches the total count for page controls.
- Client Component handles navigation -- A small `"use client"` pagination bar uses `useRouter` or `<Link>` to update search params. No data fetching logic on the client.
- Streaming works automatically -- Because the data fetch is in a Server Component, wrapping it in `<Suspense>` shows a loading skeleton while the query runs. Navigation between pages streams the new content.
```tsx
// app/admin/users/page.tsx — Server Component
import { prisma } from "@/lib/db";
import { UserTable } from "./user-table";
import { PaginationBar } from "./pagination-bar";

interface Props {
  searchParams: Promise<{ page?: string; limit?: string }>;
}

export default async function UsersPage({ searchParams }: Props) {
  const { page: pageStr, limit: limitStr } = await searchParams;
  const page = Math.max(1, parseInt(pageStr ?? "1", 10));
  const limit = Math.min(100, Math.max(1, parseInt(limitStr ?? "20", 10)));
  const offset = (page - 1) * limit;

  // Single query for this page + total count
  const [users, total] = await Promise.all([
    prisma.user.findMany({ skip: offset, take: limit, orderBy: { createdAt: "desc" } }),
    prisma.user.count(),
  ]);
  const totalPages = Math.ceil(total / limit);

  return (
    <div>
      <h1>Users ({total})</h1>
      <UserTable users={users} />
      <PaginationBar currentPage={page} totalPages={totalPages} />
    </div>
  );
}
```

```tsx
// app/admin/users/pagination-bar.tsx — Client Component
"use client";

import Link from "next/link";

interface PaginationBarProps {
  currentPage: number;
  totalPages: number;
}

export function PaginationBar({ currentPage, totalPages }: PaginationBarProps) {
  return (
    <div className="flex items-center gap-2 py-4">
      <Link
        href={`?page=${currentPage - 1}`}
        className={`rounded border px-3 py-1 ${currentPage <= 1 ? "pointer-events-none opacity-50" : ""}`}
      >
        Previous
      </Link>
      <span className="text-sm">
        Page {currentPage} of {totalPages}
      </span>
      <Link
        href={`?page=${currentPage + 1}`}
        className={`rounded border px-3 py-1 ${currentPage >= totalPages ? "pointer-events-none opacity-50" : ""}`}
      >
        Next
      </Link>
    </div>
  );
}
```

Key points: The database only returns 20 rows per request (not 10,000). The URL tells you exactly which page you're on. Changing pages is a server navigation -- no client-side data fetching code. `<Link>` with search params gives you prefetching and streaming for free.
Wrong choice: Infinite scroll that fetches all 10,000 records into memory. Browser crashes, accessibility nightmare, and users can't jump to page 50.
Why best is best: Server-side pagination puts page state in the URL (shareable, bookmarkable), fetches only the needed slice from the database, streams HTML via RSC, and requires zero client-side data fetching logic.
Decision 23: Choosing between Server Actions and API Routes
Scenario: A form needs to create a new record in the database.
| Rank | Choice | Approach |
|---|---|---|
| Best | Server Action | "use server" function called from <form action={...}>. Type-safe, progressively enhanced, no manual endpoint. |
| 2nd | API Route (Route Handler) | app/api/records/route.ts with a POST handler. Better when external clients (mobile apps, webhooks) need the endpoint. |
| 3rd | tRPC | End-to-end type safety with a client/server contract. Powerful but heavy if Server Actions cover your needs. |
Wrong choice: Creating API routes for mutations only used by your own Next.js frontend. Server Actions eliminate the boilerplate of managing endpoints, serialization, and error handling.
Why best is best: Server Actions are co-located with your UI, automatically handle serialization, support progressive enhancement, and integrate with useActionState for pending/error states.
Decision 24: Structuring a large Next.js project
Scenario: Your app has 30+ routes across multiple feature areas (auth, dashboard, settings, public marketing).
| Rank | Choice | Approach |
|---|---|---|
| Best | Route groups + feature colocation | app/(auth)/login, app/(dashboard)/analytics, app/(marketing)/pricing. Shared layouts per group. |
| 2nd | Feature folders outside app/ | src/features/auth/, src/features/dashboard/ with components, hooks, and utils. app/ only has thin route files. |
| 3rd | Flat app/ structure | All routes at top level. Works for small apps but becomes unmanageable at 30+ routes. |
Wrong choice: Organizing by file type (components/, hooks/, utils/) instead of feature. Developers jump between 5 directories to work on one feature.
Why best is best: Route groups let you share layouts and middleware per feature area, keep related files together, and the parenthesized names don't affect the URL.
Decision 25: Real-time updates (e.g., chat, notifications)
Scenario: Your app needs to show live notifications as they arrive.
| Rank | Choice | Approach |
|---|---|---|
| Best | WebSockets (Socket.io or native) | Persistent connection, bidirectional, low latency. Use a Client Component to manage the connection. |
| 2nd | Server-Sent Events (SSE) | One-way server-to-client push. Simpler than WebSockets, auto-reconnects, works through proxies. |
| 3rd | Polling with setInterval | Fetch every N seconds. Simple but wastes bandwidth and has inherent latency. |
Wrong choice: Polling every second with useEffect + fetch. Hammers the server, drains mobile batteries, and still has up to 1 second of latency.
Why best is best: WebSockets deliver messages instantly with a single persistent connection, supporting both sending and receiving without repeated HTTP overhead.
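If you go with the 2nd choice (SSE), the wire format is simple enough to sketch: each event is a few `field: value` lines terminated by a blank line, per the HTML server-sent events spec. The helper name is ours:

```typescript
// Format one server-sent event frame: "event: <name>\ndata: <json>\n\n".
// The trailing blank line tells EventSource the event is complete.
export function formatSSE(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}
```

A route handler would stream these frames with `Content-Type: text/event-stream`, and the browser consumes them with `new EventSource(url)`.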
Decision 26: Testing strategy for a React component
Scenario: A <CheckoutForm> component handles validation, submission, and error display.
| Rank | Choice | Approach |
|---|---|---|
| Best | React Testing Library + Vitest | Test user behavior: fill fields, submit, assert error messages appear. render(<CheckoutForm />). |
| 2nd | Playwright/Cypress E2E | Test the full flow in a real browser. Slower but catches integration issues across the stack. |
| 3rd | Snapshot tests | expect(tree).toMatchSnapshot(). Catches unexpected changes but doesn't verify behavior. |
Wrong choice: Testing implementation details (internal state, method calls, CSS classes). Tests break on every refactor even when behavior is unchanged.
Why best is best: RTL tests what users see and do -- if the test passes, the component works. Refactoring internals doesn't break tests, so they actually maintain confidence.
Decision 27: Handling metadata and SEO
Scenario: Each product page needs unique title, description, and Open Graph tags.
| Rank | Choice | Approach |
|---|---|---|
| Best | generateMetadata function | Export async function generateMetadata({ params }) that fetches product data and returns { title, description, openGraph }. |
| 2nd | Static metadata export | export const metadata = { title: '...' }. Good for pages with fixed metadata. |
| 3rd | <Head> from next/head | Pages Router only. Client-side, doesn't work with RSC. |
Wrong choice: Not setting metadata at all or using the same title on every page. Search engines can't differentiate your pages, and social shares look broken.
Why best is best: generateMetadata runs on the server, can fetch data to generate dynamic titles, and is deduplicated with the page's data fetch via fetch cache.
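A sketch of the shape (the `getProduct` lookup and its fields are hypothetical; the returned object follows Next.js's metadata convention, and `params` is awaited as in recent App Router versions):

```typescript
// Hypothetical product lookup -- stands in for a real DB/API call.
type Product = { name: string; summary: string; imageUrl: string };

async function getProduct(slug: string): Promise<Product> {
  return { name: `Product ${slug}`, summary: "A fine product", imageUrl: "/og.png" };
}

// Next.js calls this on the server before rendering the page.
export async function generateMetadata({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
  const product = await getProduct(slug);
  return {
    title: product.name,
    description: product.summary,
    openGraph: { images: [product.imageUrl] },
  };
}
```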
Decision 28: Handling file uploads
Scenario: Users upload profile avatars (max 5MB) to your app.
| Rank | Choice | Approach |
|---|---|---|
| Best | Presigned URL upload to S3/R2 | Server Action generates a presigned URL. Client uploads directly to storage. No server memory pressure. |
| 2nd | Server Action with FormData | const file = formData.get('avatar'). Server receives the file and forwards to storage. Simpler but memory-bound. |
| 3rd | API Route with multer-style parsing | Classic approach. More control but more code and configuration. |
Wrong choice: Storing uploaded files on the Next.js server's filesystem. Ephemeral containers (Vercel, Docker) lose files on redeploy. Files must go to durable storage.
Why best is best: Presigned URLs let the client upload directly to S3/R2 -- the server never touches the bytes, so it handles any file size without memory pressure.
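Whichever upload path you pick, validate on the server before touching storage. A sketch of the check a Server Action would run before issuing a presigned URL -- the 5MB limit comes from the scenario; the helper name and allowed types are illustrative:

```typescript
// Validate an upload request before generating a presigned URL.
const MAX_BYTES = 5 * 1024 * 1024; // 5MB, per the scenario
const ALLOWED_TYPES = new Set(["image/png", "image/jpeg", "image/webp"]);

export function validateAvatar(file: { size: number; type: string }): string | null {
  if (file.size > MAX_BYTES) return "File exceeds 5MB limit";
  if (!ALLOWED_TYPES.has(file.type)) return "Unsupported image type";
  return null; // OK to request a presigned URL
}
```

Note the client-reported size and type are hints only -- the storage bucket should also enforce limits (e.g. presigned POST policies), since a malicious client can lie.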
Decision 29: Internationalization (i18n)
Scenario: Your marketing site needs to support English, Spanish, and Japanese.
| Rank | Choice | Approach |
|---|---|---|
| Best | next-intl with App Router | Middleware detects locale, route groups per language [locale]/, server-side translations. |
| 2nd | next-i18next (Pages Router) | Mature, well-documented. Best option if you're still on Pages Router. |
| 3rd | Manual locale routing | Build your own [locale] dynamic segment and translation loader. Full control but reinvents the wheel. |
Wrong choice: Client-side only translation (loading JSON bundles in useEffect). Content flashes in the default language before switching, and search engines only see the default.
Why best is best: next-intl integrates with App Router's middleware and Server Components, so translated content is in the initial HTML -- no flash, full SEO, and automatic locale detection.
Decision 30: Deploying a Next.js application
Scenario: Your team is ready to deploy a production Next.js app.
| Rank | Choice | Approach |
|---|---|---|
| Best | Vercel | Zero-config deployment, edge functions, ISR support, analytics, preview deployments per PR. |
| 2nd | Self-hosted with next start | Run the Next.js production server on any machine you control. Full control over infrastructure, but you manage everything yourself. See explanation below. |
| 3rd | Docker container | Build a Docker image with standalone output. Good for Kubernetes or existing container infrastructure. |
How self-hosted with next start works:
Next.js includes a built-in Node.js production server. You build the app once, then run it as a long-lived process on any server with Node.js installed.
- Build step -- Run `next build` on your CI or on the server. This compiles all pages, generates static assets, and produces the `.next/` output directory. Server Components are pre-rendered, client bundles are optimized and code-split.
- Start step -- Run `next start` (defaults to port 3000). This launches a Node.js HTTP server that handles routing, SSR, Server Actions, ISR revalidation, and middleware -- all the same features you get on Vercel, just running on your own machine.

What you need to manage yourself:
| Concern | What Vercel does for you | What you do self-hosted |
|---|---|---|
| Process management | Automatic | Use PM2 or systemd to keep the process alive and restart on crash |
| HTTPS / TLS | Automatic | Put Nginx or Caddy in front as a reverse proxy with TLS termination |
| Scaling | Auto-scales | Run multiple instances behind a load balancer (ALB, Nginx upstream) |
| CDN / static assets | Edge CDN built-in | Configure CloudFront, Cloudflare, or serve from Nginx |
| ISR cache | Managed distributed cache | Works out of the box on a single server; for multi-server, configure a shared cache handler |
| Environment variables | Dashboard UI | Set in .env.production, systemd unit file, or your deployment tool |
| Zero-downtime deploys | Automatic | Blue-green or rolling deploys via your own scripts |
```bash
# Typical self-hosted deployment on an EC2/DigitalOcean instance

# 1. Build (often done in CI, then artifacts are copied to the server)
npm ci
next build

# 2. Start with PM2 for process management
pm2 start npm --name "my-app" -- start
# or directly:
pm2 start node_modules/.bin/next --name "my-app" -- start -p 3000

# 3. Nginx reverse proxy (simplified)
# /etc/nginx/sites-available/my-app
# server {
#   listen 443 ssl;
#   server_name myapp.com;
#   location / {
#     proxy_pass http://localhost:3000;
#     proxy_http_version 1.1;
#     proxy_set_header Upgrade $http_upgrade;
#     proxy_set_header Connection 'upgrade';
#     proxy_set_header Host $host;
#     proxy_cache_bypass $http_upgrade;
#   }
# }
```

When to choose self-hosted over Vercel: You need to stay within a specific cloud provider (e.g., everything in an AWS VPC), your company policy prohibits third-party hosting, you need custom server middleware (e.g., WebSocket upgrades), or cost is a concern at high traffic volumes where Vercel pricing exceeds a dedicated server.
Wrong choice: Exporting as static HTML (next export / output: 'export') when your app uses Server Components, middleware, or ISR. These features require a Node.js runtime -- static export silently drops them.
Why best is best: Vercel is built by the Next.js team -- features like ISR, middleware, and Server Actions work out of the box with zero infrastructure configuration. Self-hosted is the right 2nd choice when you need that infrastructure control, but expect to own the ops burden.