React SME Cookbook

Performance Audit Checklist — 30-point systematic review with CI enforcement

Recipe

# Run a complete performance audit in 3 steps:
 
# 1. Automated checks (CI/CD)
#    - Bundle analysis: ANALYZE=true npm run build
#    - Lighthouse CI: lhci autorun
#    - TypeScript + ESLint: npm run lint && npm run type-check
 
# 2. Manual checks (Developer)
#    - React DevTools Profiler: Record interactions, check flame chart
#    - Chrome Memory tab: Heap snapshots before/after navigation
#    - Core Web Vitals: Lighthouse audit on production URL
 
# 3. Monitoring (Production)
#    - web-vitals library: Real user metrics to analytics
#    - Error tracking: Sentry, Datadog, or New Relic
#    - Bundle size tracking: Compare against budget on every PR

When to reach for this: Before every major release, quarterly for established apps, and when users report performance issues. Use the CI workflow to catch regressions automatically on every pull request.

Working Example

# .github/workflows/performance.yml — GitHub Actions with performance gates
 
name: Performance Audit
 
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
 
jobs:
  bundle-analysis:
    name: Bundle Size Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"
      - run: npm ci
 
      # Build with bundle analysis
      - name: Analyze bundle
        run: ANALYZE=true npm run build
        env:
          NEXT_TELEMETRY_DISABLED: 1
 
      # Check bundle budgets
      - name: Check client bundle size
        run: |
          # Approximate gzipped size of the client JS chunks (budgets are gzipped)
          CLIENT_SIZE=$(find .next/static/chunks -name "*.js" -exec cat {} + | gzip -c | wc -c)
          CLIENT_SIZE_KB=$((CLIENT_SIZE / 1024))
          echo "Client bundle (gzipped): ${CLIENT_SIZE_KB}KB"
 
          # Fail if client bundle exceeds 300KB
          if [ "$CLIENT_SIZE_KB" -gt 300 ]; then
            echo "::error::Client bundle (${CLIENT_SIZE_KB}KB) exceeds 300KB budget"
            exit 1
          fi
 
      # Upload bundle stats for comparison
      - name: Upload bundle stats
        uses: actions/upload-artifact@v4
        with:
          name: bundle-stats
          path: .next/analyze/
 
  lighthouse:
    name: Lighthouse CI
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"
      - run: npm ci
      - run: npm run build
        env:
          NEXT_TELEMETRY_DISABLED: 1
 
      # Run Lighthouse CI
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v12
        with:
          configPath: ./lighthouserc.json
          uploadArtifacts: true
          temporaryPublicStorage: true
 
  type-and-lint:
    name: TypeScript & ESLint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"
      - run: npm ci
      - run: npm run type-check
      - run: npm run lint
// lighthouserc.json — Lighthouse CI configuration with performance budgets
{
  "ci": {
    "collect": {
      "startServerCommand": "npm start",
      "startServerReadyPattern": "ready on",
      "url": [
        "http://localhost:3000/",
        "http://localhost:3000/products",
        "http://localhost:3000/dashboard"
      ],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["warn", { "minScore": 0.9 }],
        "categories:best-practices": ["warn", { "minScore": 0.9 }],
        "first-contentful-paint": ["error", { "maxNumericValue": 1800 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["error", { "maxNumericValue": 200 }],
        "interactive": ["warn", { "maxNumericValue": 3500 }],
        "unused-javascript": ["warn", { "maxLength": 1 }]
      }
    },
    "upload": {
      "target": "temporary-public-storage"
    }
  }
}
// scripts/perf-audit.ts — Manual audit script
import { execSync } from "child_process";
 
interface AuditItem {
  id: string;
  category: string;
  severity: "CRITICAL" | "HIGH" | "MEDIUM";
  description: string;
  check: () => boolean | Promise<boolean>;
}
 
const auditItems: AuditItem[] = [
  // --- RENDERING (1-6) ---
  {
    id: "R1",
    category: "Rendering",
    severity: "CRITICAL",
    description: "No unnecessary re-renders on idle (React DevTools Profiler)",
    check: () => true, // Manual check
  },
  {
    id: "R2",
    category: "Rendering",
    severity: "CRITICAL",
    description: "React.memo on expensive list items with stable keys",
    check: () => true,
  },
  {
    id: "R3",
    category: "Rendering",
    severity: "HIGH",
    description: "No inline objects/functions passed to memoized children",
    check: () => true,
  },
  {
    id: "R4",
    category: "Rendering",
    severity: "HIGH",
    description: "State colocated with components that use it",
    check: () => true,
  },
  {
    id: "R5",
    category: "Rendering",
    severity: "MEDIUM",
    description: "useTransition for non-urgent updates (search, filters)",
    check: () => true,
  },
  {
    id: "R6",
    category: "Rendering",
    severity: "MEDIUM",
    description: "Virtualization for lists exceeding 100 items",
    check: () => true,
  },
 
  // --- COMPONENTS (7-11) ---
  {
    id: "C1",
    category: "Components",
    severity: "CRITICAL",
    description: "Default to Server Components, minimal use client boundaries",
    check: () => {
      const result = execSync(
        'grep -r "use client" app/ --include="*.tsx" --include="*.ts" -l | wc -l'
      ).toString().trim();
      const clientFiles = parseInt(result);
      console.log(`  Found ${clientFiles} "use client" files`);
      return clientFiles < 20; // Threshold depends on app size
    },
  },
  {
    id: "C2",
    category: "Components",
    severity: "HIGH",
    description: "No entire layouts marked as client components",
    check: () => {
      try {
        // Note: execSync runs under /bin/sh, where a glob like app/**/layout.tsx
        // does not recurse; use grep's --include filter instead.
        execSync('grep -rl "use client" --include="layout.tsx" app/ 2>/dev/null');
        return false; // Found "use client" in a layout
      } catch {
        return true; // grep exits non-zero when nothing matches
      }
    },
  },
  {
    id: "C3",
    category: "Components",
    severity: "HIGH",
    description: "Granular Suspense boundaries (not one per page)",
    check: () => true,
  },
  {
    id: "C4",
    category: "Components",
    severity: "MEDIUM",
    description: "Error boundaries around each Suspense boundary",
    check: () => true,
  },
  {
    id: "C5",
    category: "Components",
    severity: "MEDIUM",
    description: "Skeleton loading states match final layout dimensions",
    check: () => true,
  },
 
  // --- BUNDLE (12-17) ---
  {
    id: "B1",
    category: "Bundle",
    severity: "CRITICAL",
    description: "Client bundle under 150KB gzipped (landing), under 300KB (app)",
    check: () => {
      // Automated check in CI
      return true;
    },
  },
  {
    id: "B2",
    category: "Bundle",
    severity: "CRITICAL",
    description: "Heavy libraries dynamically imported (charts, editors, maps)",
    check: () => true,
  },
  {
    id: "B3",
    category: "Bundle",
    severity: "HIGH",
    description: "No moment.js (use date-fns), no full lodash (use lodash-es or native)",
    check: () => {
      try {
        // Read from the working directory; require("./package.json") would
        // resolve relative to scripts/, not the project root.
        const pkg = JSON.parse(require("fs").readFileSync("package.json", "utf8"));
        const deps = { ...pkg.dependencies, ...pkg.devDependencies };
        const banned = ["moment", "lodash"];
        const found = banned.filter((d) => d in deps);
        if (found.length > 0) {
          console.log(`  Found banned dependencies: ${found.join(", ")}`);
          return false;
        }
        return true;
      } catch {
        return true;
      }
    },
  },
  {
    id: "B4",
    category: "Bundle",
    severity: "HIGH",
    description: "Tree-shaking: named imports, no barrel file re-exports for large modules",
    check: () => true,
  },
  {
    id: "B5",
    category: "Bundle",
    severity: "MEDIUM",
    description: "Route groups splitting marketing and app bundles",
    check: () => true,
  },
  {
    id: "B6",
    category: "Bundle",
    severity: "MEDIUM",
    description: "No unused dependencies in package.json",
    check: () => true,
  },
 
  // --- DATA FETCHING (18-23) ---
  {
    id: "D1",
    category: "Data Fetching",
    severity: "CRITICAL",
    description: "No waterfall fetches — use Promise.all for independent queries",
    check: () => true,
  },
  {
    id: "D2",
    category: "Data Fetching",
    severity: "CRITICAL",
    description: "Caching strategy defined: revalidate times and tags for all data",
    check: () => true,
  },
  {
    id: "D3",
    category: "Data Fetching",
    severity: "HIGH",
    description: "No N+1 queries — use Prisma include/select for relations",
    check: () => true,
  },
  {
    id: "D4",
    category: "Data Fetching",
    severity: "HIGH",
    description: "Data fetching colocated in the Server Component that needs it",
    check: () => true,
  },
  {
    id: "D5",
    category: "Data Fetching",
    severity: "MEDIUM",
    description: "Revalidation triggered on all mutation paths",
    check: () => true,
  },
  {
    id: "D6",
    category: "Data Fetching",
    severity: "MEDIUM",
    description: "SWR or TanStack Query deduplication for client-side fetches",
    check: () => true,
  },
 
  // --- ASSETS & CWV (24-28) ---
  {
    id: "A1",
    category: "Assets",
    severity: "CRITICAL",
    description: "All images use next/image with width, height, and alt",
    check: () => {
      try {
        execSync('grep -r "<img " app/ components/ --include="*.tsx" 2>/dev/null');
        console.log("  Found raw <img> tags — use next/image instead");
        return false;
      } catch {
        return true; // No raw img tags found
      }
    },
  },
  {
    id: "A2",
    category: "Assets",
    severity: "CRITICAL",
    description: "LCP image has priority prop, all fonts use next/font",
    check: () => true,
  },
  {
    id: "A3",
    category: "Assets",
    severity: "HIGH",
    description: "LCP under 2.5s, INP under 200ms, CLS under 0.1",
    check: () => true, // Checked by Lighthouse CI
  },
  {
    id: "A4",
    category: "Assets",
    severity: "HIGH",
    description: "No external font CDN requests (Google Fonts link tags)",
    check: () => {
      try {
        execSync('grep -r "fonts.googleapis.com" app/ --include="*.tsx" --include="*.ts" 2>/dev/null');
        return false;
      } catch {
        return true;
      }
    },
  },
  {
    id: "A5",
    category: "Assets",
    severity: "MEDIUM",
    description: "Blur placeholders for above-the-fold images",
    check: () => true,
  },
 
  // --- MEMORY & MONITORING (29-30) ---
  {
    id: "M1",
    category: "Memory",
    severity: "HIGH",
    description: "All useEffect hooks have proper cleanup (listeners, timers, fetches)",
    check: () => true, // Manual review
  },
  {
    id: "M2",
    category: "Monitoring",
    severity: "MEDIUM",
    description: "web-vitals reporting to analytics in production",
    check: () => true,
  },
];
 
// Run automated checks
async function runAudit() {
  console.log("Performance Audit — 30-Point Checklist\n");
  console.log("=".repeat(60));
 
  let passed = 0;
  let failed = 0;
  let manual = 0;
 
  for (const item of auditItems) {
    try {
      const result = await item.check();
      const icon = result ? "[OK]" : "[!!]";
      console.log(`\n${icon} ${item.id} [${item.severity}] ${item.description}`);
 
      if (result) passed++;
      else failed++;
    } catch {
      console.log(`\n[--] ${item.id} [${item.severity}] ${item.description} (manual check)`);
      manual++;
    }
  }
 
  console.log("\n" + "=".repeat(60));
  console.log(`Results: ${passed} passed, ${failed} failed, ${manual} manual`);
  const scored = passed + failed;
  console.log(`Score: ${scored > 0 ? Math.round((passed / scored) * 100) : 100}%`);
 
  if (failed > 0) process.exit(1);
}
 
runAudit();

What this demonstrates:

  • 30 audit items across 6 categories with severity ratings
  • Automated CI checks: bundle size, Lighthouse scores, TypeScript, ESLint
  • Manual checks: Profiler, memory snapshots, skeleton review
  • GitHub Actions workflow that blocks PRs failing performance budgets
  • Lighthouse CI with specific numeric thresholds for each metric

Deep Dive

How It Works

  • Bundle budgets set maximum sizes for JavaScript bundles. The CI workflow measures the client bundle after build and fails if it exceeds the threshold. Landing pages should target under 150KB gzipped; app pages under 300KB gzipped.
  • Lighthouse CI runs Lighthouse in a CI environment, collecting performance scores across multiple URLs with multiple runs for statistical reliability. Assertions enforce minimum scores and maximum metric values.
  • Severity levels prioritize audit items. CRITICAL items cause immediate user-visible performance problems. HIGH items cause measurable degradation. MEDIUM items are best practices that prevent future issues.
  • Automated vs manual checks — Some items (bundle size, dependency scanning, lint rules) can be fully automated. Others (Profiler analysis, memory leak detection, skeleton layout matching) require human judgment.
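Several items above reference real-user metrics (the recipe's monitoring step and item M2). A minimal reporting sketch: the `Metric` interface mirrors the shape the web-vitals callbacks deliver, and the `/api/vitals` endpoint is an assumption; point it at your analytics sink.

```typescript
// lib/report-web-vitals.ts — sketch of real-user CWV reporting.
export interface Metric {
  name: "CLS" | "INP" | "LCP";
  value: number;
  id: string;
  rating: "good" | "needs-improvement" | "poor";
}

// Pure helper: turn a metric into a compact beacon payload.
// The injectable clock keeps it unit-testable.
export function toPayload(metric: Metric, now: () => number = Date.now): string {
  return JSON.stringify({
    name: metric.name,
    value: Math.round(metric.value * 1000) / 1000, // trim float noise
    id: metric.id,
    rating: metric.rating,
    ts: now(),
  });
}

// Browser wiring (sketch; requires the web-vitals package):
//   import { onCLS, onINP, onLCP } from "web-vitals";
//   const send = (m: Metric) =>
//     navigator.sendBeacon("/api/vitals", toPayload(m)) ||
//     fetch("/api/vitals", { method: "POST", body: toPayload(m), keepalive: true });
//   onCLS(send); onINP(send); onLCP(send);
```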

Variations

Quick 10-point checklist for every PR review:

## Performance PR Checklist
 
- [ ] No new "use client" without justification
- [ ] Images use next/image with dimensions
- [ ] No new waterfall fetches (parallel with Promise.all)
- [ ] Dynamic imports for components over 30KB
- [ ] useEffect hooks have cleanup functions
- [ ] No inline objects or functions passed to memoized components
- [ ] New data fetches have caching strategy (revalidate, tags)
- [ ] No banned dependencies (moment, lodash, axios)
- [ ] Suspense boundaries around async sections
- [ ] No console.log in production code
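The first item lends itself to automation in a PR bot. A minimal sketch, assuming the caller supplies the changed files' contents (e.g. from `git diff --name-only` plus file reads); it flags any changed file containing the directive, and a real bot would also diff against the base branch to distinguish new directives from pre-existing ones:

```typescript
// scripts/check-use-client.ts — flag "use client" directives in changed files.
const DIRECTIVE = /^\s*["']use client["']/m;

export function filesAddingUseClient(
  changed: Array<{ path: string; content: string }>
): string[] {
  return changed
    .filter((f) => f.path.endsWith(".tsx") || f.path.endsWith(".ts"))
    .filter((f) => DIRECTIVE.test(f.content))
    .map((f) => f.path);
}

// Example wiring (names illustrative): fail CI unless the PR is labeled.
// const offenders = filesAddingUseClient(changedFiles);
// if (offenders.length > 0 && !prHasLabel("client-component-approved")) {
//   console.error(`New "use client" in: ${offenders.join(", ")}`);
//   process.exit(1);
// }
```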

Bundle budget in next.config.ts:

// next.config.ts — Webpack-level bundle limits
const nextConfig = {
  experimental: {
    webpackBuildWorker: true,
  },
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.performance = {
        maxAssetSize: 300 * 1024,  // 300KB per asset
        maxEntrypointSize: 300 * 1024,
        hints: "error",            // Fail build if exceeded
      };
    }
    return config;
  },
};

Vercel Speed Insights integration:

// app/layout.tsx
import { SpeedInsights } from "@vercel/speed-insights/next";
import { Analytics } from "@vercel/analytics/react";
 
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        <SpeedInsights />  {/* Real user CWV monitoring */}
        <Analytics />       {/* Page view and custom event tracking */}
      </body>
    </html>
  );
}

TypeScript Notes

  • The audit script types are defined with the AuditItem interface for consistency.
  • Lighthouse CI configuration is JSON, not TypeScript, but can be validated with @lhci/cli types.
  • The web-vitals library provides TypeScript types for all metric objects.

Gotchas

  • Lighthouse CI scores vary between runs — Network conditions, CPU load, and container performance affect results. Fix: Run at least 3 iterations (numberOfRuns: 3) and use median scores for assertions.

  • Bundle size check counts all chunks, not just entry — The total client JS includes shared chunks, framework runtime, and page-specific code. A single page may load only a subset. Fix: Measure per-route bundle size using @next/bundle-analyzer treemap, not just total .next/static/chunks size.

  • CI performance differs from production — CI runners are shared infrastructure with variable performance. Lighthouse scores in CI may be 10-20 points lower than production. Fix: Set CI thresholds slightly lower than production targets, and validate with real production measurements.

  • Automated checks miss runtime issues — Bundle size and Lighthouse catch load-time problems but miss runtime performance issues like memory leaks, slow interactions, and accumulated CLS. Fix: Supplement automated checks with periodic manual profiling sessions.

  • Over-optimizing for Lighthouse score — Techniques like defer-loading everything to game the score can hurt real user experience. Fix: Prioritize real user metrics (CrUX, web-vitals) over lab scores.
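Following the second gotcha, a small Node sketch for ranking chunks by gzipped size rather than raw bytes (the `.next/static/chunks` path assumes a default Next.js build):

```typescript
// scripts/chunk-sizes.ts — rank build chunks by gzipped size.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { gzipSync } from "node:zlib";

// Pure helper: gzipped byte size of a buffer (what the wire actually carries).
export function gzippedSize(buf: Buffer): number {
  return gzipSync(buf).length;
}

export function rankChunks(dir: string): Array<{ file: string; kb: number }> {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".js"))
    .map((f) => ({
      file: f,
      kb: Math.round(gzippedSize(readFileSync(join(dir, f))) / 1024),
    }))
    .sort((a, b) => b.kb - a.kb);
}

// Usage (path assumes a default Next.js output directory):
// console.table(rankChunks(".next/static/chunks").slice(0, 10));
```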

Alternatives

| Approach                    | Trade-off                                           |
| --------------------------- | --------------------------------------------------- |
| Lighthouse CI               | Comprehensive lab testing; scores vary between runs |
| Vercel Speed Insights       | Zero-config for Vercel; vendor-specific             |
| WebPageTest                 | Detailed waterfall analysis; slower CI integration  |
| Calibre or SpeedCurve       | Continuous monitoring; paid services                |
| Custom web-vitals reporting | Real user data; requires analytics infrastructure   |
| Bundlewatch                 | PR-level bundle comparison; focused only on size    |

FAQs

What are the six categories in the 30-point performance audit?
  • Rendering (R1-R6): re-renders, memoization, transitions, virtualization
  • Components (C1-C5): Server Components, Suspense boundaries, skeletons
  • Bundle (B1-B6): bundle size, dynamic imports, tree-shaking, dependencies
  • Data Fetching (D1-D6): waterfalls, caching, N+1 queries, revalidation
  • Assets & CWV (A1-A5): images, fonts, LCP, CLS targets
  • Memory & Monitoring (M1-M2): effect cleanup, web-vitals reporting
What bundle size budgets should you enforce in CI?
  • Landing pages: under 150KB gzipped client JS.
  • Application pages: under 300KB gzipped client JS.
  • Enforce with a CI step that measures .next/static/chunks and fails the build if exceeded.
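These budgets can be encoded as a reusable check; the per-route size data would come from your bundle analyzer output (the input shape here is illustrative):

```typescript
// Sketch: flag routes that exceed their gzipped-JS budget.
type PageKind = "landing" | "app";

interface PageSize {
  route: string;
  gzippedKB: number;
  kind: PageKind;
}

// Budgets from this cookbook: 150KB landing, 300KB app.
const BUDGET_KB: Record<PageKind, number> = { landing: 150, app: 300 };

export function overBudget(pages: PageSize[]): string[] {
  return pages
    .filter((p) => p.gzippedKB > BUDGET_KB[p.kind])
    .map((p) => `${p.route}: ${p.gzippedKB}KB > ${BUDGET_KB[p.kind]}KB`);
}
```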
How does Lighthouse CI improve reliability over a single Lighthouse run?
  • It runs multiple iterations (recommended numberOfRuns: 3) and uses median scores.
  • Single runs vary due to CPU load, network, and container performance.
  • Assertions enforce minimum scores and maximum metric values per URL.
What should the quick 10-point PR checklist cover?
  • No new "use client" without justification
  • Images use next/image with dimensions
  • No waterfall fetches (use Promise.all)
  • Dynamic imports for components over 30KB
  • useEffect hooks have cleanup functions
How do you set webpack-level bundle limits in next.config.ts?
const nextConfig = {
  webpack: (config, { isServer }) => {
    if (!isServer) {
      config.performance = {
        maxAssetSize: 300 * 1024,
        maxEntrypointSize: 300 * 1024,
        hints: "error", // Fail build if exceeded
      };
    }
    return config;
  },
};
Gotcha: Why do Lighthouse CI scores in CI differ from production scores?
  • CI runners are shared infrastructure with variable CPU and network performance.
  • Scores can be 10-20 points lower than production.
  • Fix: Set CI thresholds slightly lower than production targets and validate with real production measurements.
What is the difference between automated and manual audit checks?
  • Automated: Bundle size, dependency scanning, lint rules, Lighthouse CI -- run on every PR.
  • Manual: React DevTools Profiler, heap snapshots, skeleton layout review -- require human judgment.
  • Both are necessary: automated catches regressions, manual catches runtime issues.
Gotcha: Why can over-optimizing for a Lighthouse score hurt real users?
  • Techniques like defer-loading everything to game the score can delay above-the-fold content.
  • Lab scores do not reflect real user devices and networks.
  • Fix: Prioritize real user metrics (CrUX, web-vitals) over lab scores.
How do you type the AuditItem interface for the audit script in TypeScript?
interface AuditItem {
  id: string;
  category: string;
  severity: "CRITICAL" | "HIGH" | "MEDIUM";
  description: string;
  check: () => boolean | Promise<boolean>;
}
How do you integrate Vercel Speed Insights for automatic CWV monitoring?
// app/layout.tsx
import { SpeedInsights } from "@vercel/speed-insights/next";
 
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        <SpeedInsights />
      </body>
    </html>
  );
}
What severity level should trigger an immediate fix vs a planned improvement?
  • CRITICAL: Immediate fix required -- causes user-visible performance problems (e.g., bundle over budget, waterfall fetches).
  • HIGH: Fix in current sprint -- causes measurable degradation (e.g., N+1 queries, missing image optimization).
  • MEDIUM: Plan for next sprint -- best practices that prevent future issues (e.g., unused dependencies, missing Suspense).
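This triage rule is simple enough to encode next to the audit items so reports can carry a recommended action (the mapping text is illustrative):

```typescript
// Sketch: map audit severity to a remediation window for report output.
type Severity = "CRITICAL" | "HIGH" | "MEDIUM";

const REMEDIATION: Record<Severity, string> = {
  CRITICAL: "fix immediately (blocks release)",
  HIGH: "fix in the current sprint",
  MEDIUM: "plan for the next sprint",
};

export function triage(severity: Severity): string {
  return REMEDIATION[severity];
}
```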
How does the automated check for raw img tags work in the audit script?
  • The script uses grep to search for <img in .tsx files under app/ and components/.
  • If found, the check fails -- all images should use next/image instead.
  • This catches accidental raw <img> tags that miss optimization, lazy loading, and CLS prevention.