React SME Cookbook

Tags: data-fetching, parallel, waterfall, caching, swr, tanstack-query, server-components, prefetch, n-plus-one

Data Fetching Performance — Eliminate waterfalls, parallelize fetches, and cache aggressively

Recipe

// Server Component — zero client JS for data fetching
// app/dashboard/page.tsx
import { db } from "@/lib/db";
 
export default async function DashboardPage() {
  // PARALLEL fetches — total time = max(50ms, 120ms, 80ms) = 120ms
  const [stats, orders, activity] = await Promise.all([
    db.stats.findFirst(),           // 50ms
    db.order.findMany({ take: 10 }),// 120ms
    db.activity.findMany({ take: 5 }), // 80ms
  ]);
 
  return (
    <div>
      <StatsPanel data={stats} />
      <OrderTable data={orders} />
      <ActivityFeed data={activity} />
    </div>
  );
}
 
// Compare with WATERFALL — total time = 50 + 120 + 80 = 250ms
// const stats = await db.stats.findFirst();
// const orders = await db.order.findMany({ take: 10 });
// const activity = await db.activity.findMany({ take: 5 });

When to reach for this: every page that fetches data. Default to Server Component fetching with Promise.all for parallel requests. Reach for client-side fetching (SWR, TanStack Query) only when you need real-time updates or user-specific data that changes after page load.

Working Example

// ---- BEFORE: Waterfall fetches — ~1150ms of sequential network time across 5 requests, plus hydration overhead ----
 
// app/products/[id]/page.tsx
"use client";
 
import { useState, useEffect } from "react";
 
export default function ProductPage({ params }: { params: { id: string } }) {
  const [product, setProduct] = useState(null);
  const [reviews, setReviews] = useState([]);
  const [related, setRelated] = useState([]);
  const [seller, setSeller] = useState(null);
  const [inventory, setInventory] = useState(null);
  const [loading, setLoading] = useState(true);
 
  useEffect(() => {
    async function load() {
      // WATERFALL: each fetch waits for the previous one
      const productRes = await fetch(`/api/products/${params.id}`);     // 200ms
      const productData = await productRes.json();
      setProduct(productData);
 
      // These depend on product but are fetched sequentially
      const reviewsRes = await fetch(`/api/reviews?productId=${params.id}`); // 300ms
      setReviews(await reviewsRes.json());
 
      const relatedRes = await fetch(`/api/products/related?category=${productData.category}`); // 400ms
      setRelated(await relatedRes.json());
 
      const sellerRes = await fetch(`/api/sellers/${productData.sellerId}`); // 150ms
      setSeller(await sellerRes.json());
 
      const inventoryRes = await fetch(`/api/inventory/${params.id}`);   // 100ms
      setInventory(await inventoryRes.json());
 
      setLoading(false);
      // Total: 200 + 300 + 400 + 150 + 100 = 1150ms network + hydration overhead
    }
    load();
  }, [params.id]);
 
  if (loading) return <div>Loading...</div>;
 
  return (
    <div>
      <h1>{product.name}</h1>
      <p>${product.price}</p>
      <p>Sold by: {seller.name}</p>
      <p>In stock: {inventory.quantity}</p>
      <ReviewList reviews={reviews} />
      <RelatedProducts products={related} />
    </div>
  );
}
 
// ---- AFTER: Parallel Server Component fetching — 600ms uncached, ~400ms cached (up to 65% faster) ----
 
// app/products/[id]/page.tsx — Server Component
import { Suspense } from "react";
import { db } from "@/lib/db";
import { notFound } from "next/navigation";
import { ReviewList } from "./ReviewList";
import { RelatedProducts } from "./RelatedProducts";
 
export default async function ProductPage({
  params,
}: {
  params: Promise<{ id: string }>;
}) {
  const { id } = await params;
 
  // First fetch: product data (needed for seller and related queries)
  const product = await db.product.findUnique({
    where: { id },
    include: { seller: true }, // JOIN instead of separate fetch — eliminates N+1
  });
 
  if (!product) notFound();
 
  // Second wave: PARALLEL fetches for independent data
  const [reviews, related, inventory] = await Promise.all([
    db.review.findMany({
      where: { productId: id },
      orderBy: { createdAt: "desc" },
      take: 20,
    }),
    db.product.findMany({
      where: { category: product.category, id: { not: id } },
      take: 6,
    }),
    db.inventory.findUnique({ where: { productId: id } }),
  ]);
  // Total: 200ms (product+seller) + max(300ms, 400ms, 100ms) = 200 + 400 = 600ms
  // But with caching: product cached from previous visits, so ~400ms
 
  return (
    <div>
      <h1>{product.name}</h1>
      <p>${product.price}</p>
      <p>Sold by: {product.seller.name}</p>
      <p>In stock: {inventory?.quantity ?? 0}</p>
 
      <Suspense fallback={<div className="h-64 animate-pulse" />}>
        <ReviewList reviews={reviews} productId={id} />
      </Suspense>
 
      <Suspense fallback={<div className="h-48 animate-pulse" />}>
        <RelatedProducts products={related} />
      </Suspense>
    </div>
  );
}
 
// Even faster: split into streaming sections
// Alternative version of app/products/[id]/page.tsx — maximum parallelism with
// streaming (this replaces the export above; a module has one default export)
export default async function ProductPageStreaming({
  params,
}: {
  params: Promise<{ id: string }>;
}) {
  const { id } = await params;
 
  const product = await db.product.findUnique({
    where: { id },
    include: { seller: true },
  });
 
  if (!product) notFound();
 
  return (
    <div>
      <h1>{product.name}</h1>
      <p>${product.price}</p>
      <p>Sold by: {product.seller.name}</p>
 
      {/* Each section streams independently */}
      <Suspense fallback={<InventorySkeleton />}>
        <InventoryBadge productId={id} />
      </Suspense>
 
      <Suspense fallback={<ReviewsSkeleton />}>
        <ReviewSection productId={id} />
      </Suspense>
 
      <Suspense fallback={<RelatedSkeleton />}>
        <RelatedSection category={product.category} excludeId={id} />
      </Suspense>
    </div>
  );
}
 
// Each section fetches its own data — all stream in parallel
async function InventoryBadge({ productId }: { productId: string }) {
  const inventory = await db.inventory.findUnique({ where: { productId } });
  return <p>In stock: {inventory?.quantity ?? 0}</p>;
}
 
async function ReviewSection({ productId }: { productId: string }) {
  const reviews = await db.review.findMany({
    where: { productId },
    orderBy: { createdAt: "desc" },
    take: 20,
  });
  return <ReviewList reviews={reviews} productId={productId} />;
}
 
async function RelatedSection({
  category,
  excludeId,
}: {
  category: string;
  excludeId: string;
}) {
  const related = await db.product.findMany({
    where: { category, id: { not: excludeId } },
    take: 6,
  });
  return <RelatedProducts products={related} />;
}

What this demonstrates:

  • Waterfall: 5 sequential fetches = 1150ms total
  • Parallel: Promise.all groups = 600ms total (48% faster)
  • Streaming: independent Suspense sections = 200ms first paint, 400ms full (65% faster)
  • Prisma include replaces separate seller fetch: eliminates 1 round trip (N+1 prevention)
  • Server Component: zero client JS for all data fetching, no loading/error state management

Deep Dive

How It Works

  • Server Component fetching runs on the server with direct database access, eliminating the API route round trip and sending zero JavaScript to the client. The data is serialized into the React Server Component payload.
  • Promise.all parallelism starts all promises simultaneously and resolves when the slowest one completes. Total time equals the maximum individual fetch time, not the sum.
  • Waterfall detection — A waterfall occurs when fetch B depends on the result of fetch A, but the dependency is artificial. If B does not actually need A's data, they can run in parallel.
  • N+1 queries — Fetching a list of products, then fetching the seller for each product individually, creates N+1 queries (1 for the list + N for each seller). Prisma's include or select with relations resolves this with a single JOIN query.
  • SWR and TanStack Query deduplication — Multiple components requesting the same endpoint within a time window receive the same response from a single network request. This prevents duplicate fetches in client-side rendering scenarios.
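The parallelism math above can be verified with plain promises — a minimal, self-contained timing sketch in which setTimeout delays stand in for network calls:

```typescript
// Simulated fetch: a setTimeout delay stands in for network latency.
const delay = <T>(ms: number, value: T): Promise<T> =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Waterfall: each await blocks the next — total is the SUM of delays.
async function sequentialTotal(): Promise<number> {
  const start = Date.now();
  await delay(50, "stats");    // 50ms
  await delay(120, "orders");  // then 120ms more
  await delay(80, "activity"); // then 80ms more
  return Date.now() - start;   // ~250ms
}

// Parallel: all promises start immediately — total is the MAX delay.
async function parallelTotal(): Promise<number> {
  const start = Date.now();
  await Promise.all([
    delay(50, "stats"),
    delay(120, "orders"),
    delay(80, "activity"),
  ]);
  return Date.now() - start; // ~120ms
}
```

Running both confirms the sum-versus-max difference: the sequential version takes roughly 250ms, the parallel version roughly 120ms.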

Variations

Caching with revalidation:

// Fetch with Next.js caching
async function getProducts() {
  const res = await fetch("https://api.example.com/products", {
    next: {
      revalidate: 3600, // Cache for 1 hour
      tags: ["products"], // Tag for targeted revalidation
    },
  });
  return res.json();
}
 
// Revalidate specific cache tag after mutation
import { revalidateTag } from "next/cache";
 
async function createProduct(data: ProductData) {
  await db.product.create({ data });
  revalidateTag("products"); // Invalidate products cache
}

SWR for client-side real-time data:

"use client";
 
import useSWR from "swr";
 
const fetcher = (url: string) => fetch(url).then((r) => r.json());
 
function LiveOrderCount() {
  const { data, error, isLoading } = useSWR("/api/orders/count", fetcher, {
    refreshInterval: 5000, // Poll every 5 seconds
    dedupingInterval: 2000, // Deduplicate requests within 2s
  });
 
  if (isLoading) return <span>...</span>;
  if (error) return <span>Error</span>;
  return <span>{data.count} orders</span>;
}

Prefetching for anticipated navigation:

// Prefetch on hover — data ready when user clicks
import { useRouter } from "next/navigation";
 
function ProductCard({ product }: { product: Product }) {
  const router = useRouter();
 
  return (
    <div
      onMouseEnter={() => router.prefetch(`/products/${product.id}`)}
      onClick={() => router.push(`/products/${product.id}`)}
    >
      {product.name}
    </div>
  );
}

Preventing N+1 with Prisma:

// BAD: N+1 — 1 query for orders + N queries for customers
const orders = await db.order.findMany();
const ordersWithCustomers = await Promise.all(
  orders.map(async (order) => ({
    ...order,
    customer: await db.customer.findUnique({ where: { id: order.customerId } }),
  }))
);
 
// GOOD: Single query with JOIN
const orders = await db.order.findMany({
  include: { customer: true },
});

TypeScript Notes

  • Promise.all preserves tuple types: Promise.all([fetchA(), fetchB()]) returns [TypeA, TypeB].
  • SWR and TanStack Query accept generic type parameters for response typing.
  • Server Component props with params are typed as Promise<{ key: string }> in Next.js 15+.

Gotchas

  • Sequential awaits that should be parallel — const a = await fetchA(); const b = await fetchB(); runs sequentially even if B does not depend on A. Fix: Use Promise.all([fetchA(), fetchB()]) for independent fetches.

  • Fetching in client components when server components work — Client-side useEffect + fetch adds hydration latency, loading states, and client JS. Fix: Default to Server Component data fetching. Only use client-side fetching for data that changes after initial page load.

  • Missing error handling in Promise.all — If one promise rejects, Promise.all rejects immediately, losing results from other promises. Fix: Use Promise.allSettled when partial results are acceptable, or wrap each promise in a try-catch.
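A minimal sketch of the allSettled fix, using stand-in promises instead of real fetches: one of three requests fails, and the page still renders the other two.

```typescript
// One of three "fetches" rejects; Promise.allSettled still returns all three
// outcomes instead of rejecting the whole group.
async function loadDashboard() {
  const [stats, orders, activity] = await Promise.allSettled([
    Promise.resolve({ visits: 1200 }),
    Promise.reject(new Error("orders service down")),
    Promise.resolve([{ id: 1, type: "login" }]),
  ]);

  // Narrow each result on its status; a failed section degrades to null
  // rather than crashing the page.
  return {
    stats: stats.status === "fulfilled" ? stats.value : null,
    orders: orders.status === "fulfilled" ? orders.value : null, // null here
    activity: activity.status === "fulfilled" ? activity.value : null,
  };
}
```

With Promise.all, the same rejection would have discarded the stats and activity results as well.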

  • Over-fetching data — Selecting all columns when you only need id and name wastes bandwidth and memory. Fix: Use Prisma select to fetch only the fields your component needs.

  • Cache key mismatches — Different query parameters for the same logical data create separate cache entries. Fix: Normalize query parameters and use consistent cache key patterns.
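One way to normalize — a sketch (the helper name is ours, not a library API) that builds keys from sorted, encoded query parameters so the same logical query always produces the same string:

```typescript
// Sort keys and encode values so { page: "1", sort: "price" } and
// { sort: "price", page: "1" } produce the identical cache key.
function cacheKey(path: string, params: Record<string, string>): string {
  const query = Object.keys(params)
    .sort()
    .map((k) => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`)
    .join("&");
  return query ? `${path}?${query}` : path;
}
```

Use the result as the SWR or TanStack Query key so reordered parameters hit the same cache entry instead of creating a duplicate one.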

  • N+1 in list pages — Rendering a list of items that each fetch their own data creates N+1 requests. Fix: Fetch all data in the parent component with a single query that includes relations.

Alternatives

Approach                   Trade-off
Server Component fetch     Zero client JS; no real-time updates
SWR                        Automatic revalidation, dedup; client JS overhead
TanStack Query             Powerful caching, pagination; heavier than SWR
Promise.all                Simple parallelism; all-or-nothing error handling
Promise.allSettled         Partial results on failure; more complex result handling
Streaming with Suspense    Progressive rendering; requires skeleton design
GraphQL                    Precise data fetching; schema overhead

FAQs

Why is Promise.all faster than sequential awaits for independent fetches?
  • Promise.all starts all promises simultaneously.
  • Total time equals the slowest individual fetch, not the sum of all fetches.
  • Example: three fetches of 50ms, 120ms, and 80ms complete in 120ms instead of 250ms.
What is an N+1 query and how do you fix it with Prisma?
  • Fetching a list (1 query) then fetching a related record for each item (N queries) = N+1 queries.
  • Fix: Use Prisma include or select with relations to resolve it in a single JOIN query.
// N+1: 1 + N queries
const orders = await db.order.findMany();
for (const o of orders) {
  o.customer = await db.customer.findUnique({ where: { id: o.customerId } });
}
 
// Fixed: 1 query with JOIN
const orders = await db.order.findMany({ include: { customer: true } });
When should you use client-side fetching (SWR/TanStack Query) instead of Server Components?
  • When data changes after initial page load (e.g., live order counts, real-time dashboards).
  • When user-specific data must update without a full page refresh.
  • Server Components are the default for initial data; client fetching is for post-load interactivity.
How does Suspense streaming improve perceived performance for data fetching?
  • Each <Suspense> boundary streams independently as its data resolves.
  • The user sees the page shell and fast sections immediately, while slower sections show skeletons.
  • First paint time equals the fastest section, not the slowest.
Gotcha: What happens if one promise in Promise.all rejects?
  • Promise.all rejects immediately, discarding results from other promises.
  • Fix: Use Promise.allSettled when partial results are acceptable, or wrap each promise in try-catch.
const [statsResult, ordersResult] = await Promise.allSettled([
  fetchStats(),
  fetchOrders(),
]);
const stats = statsResult.status === "fulfilled" ? statsResult.value : null;
Why is fetching data in a client useEffect worse than Server Component fetching?
  • Client-side: the browser downloads JS, hydrates, and only then starts the fetch, adding hundreds of milliseconds.
  • Server Component: data is fetched on the server and included in the initial HTML.
  • Client approach also requires managing loading/error states and ships extra JS to the client.
How do you prefetch data for anticipated navigation?
import { useRouter } from "next/navigation";
 
function ProductCard({ product }: { product: Product }) {
  const router = useRouter();
  return (
    <div
      onMouseEnter={() => router.prefetch(`/products/${product.id}`)}
      onClick={() => router.push(`/products/${product.id}`)}
    >
      {product.name}
    </div>
  );
}
How does Promise.all preserve tuple types in TypeScript?
// TypeScript infers [Stats, Order[], Activity[]]
const [stats, orders, activity] = await Promise.all([
  fetchStats(),       // Returns Promise<Stats>
  fetchOrders(),      // Returns Promise<Order[]>
  fetchActivity(),    // Returns Promise<Activity[]>
]);
// stats: Stats, orders: Order[], activity: Activity[]
How are params typed in Next.js 15+ Server Components with TypeScript?
  • In Next.js 15+, params is a Promise and must be awaited.
export default async function Page({
  params,
}: {
  params: Promise<{ id: string }>;
}) {
  const { id } = await params;
}
Gotcha: What is the risk of over-fetching data from the database?
  • Selecting all columns when you only need id and name wastes bandwidth and memory.
  • Fix: Use Prisma select to fetch only the fields your component renders.
  • This is especially impactful for list pages fetching dozens of records.
How does SWR deduplicate multiple requests for the same endpoint?
  • Multiple components calling useSWR with the same key within the dedupingInterval receive the same response from a single network request.
  • Default dedupingInterval is 2 seconds.
  • This prevents duplicate fetches when several components need the same data.
What is the difference between fetch-level and page-level caching in Next.js?
  • Fetch-level: next: { revalidate: 3600 } on a specific fetch() call caches that one response.
  • Page-level: export const revalidate = 3600 caches the entire route and all its fetches.
  • Fetch-level gives fine-grained control; page-level is simpler for uniform caching.
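The page-level variant is a route segment config export — a sketch (the route path and API URL are illustrative):

```tsx
// app/products/page.tsx — page-level caching via route segment config
export const revalidate = 3600; // the whole route re-renders at most once per hour

export default async function ProductsPage() {
  // No per-fetch options needed — this fetch participates in the route's cache
  const res = await fetch("https://api.example.com/products");
  const products: { id: string; name: string }[] = await res.json();
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>{p.name}</li>
      ))}
    </ul>
  );
}
```

Prefer the fetch-level options from the Variations section when different data on the same page needs different lifetimes.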