# Parallel Promises & Promise.all
Fetch multiple data sources in parallel to eliminate waterfalls and speed up server rendering.
## Recipe
Quick-reference for parallel data fetching patterns.
```typescript
// Promise.all - fails fast if any promise rejects
const [users, posts, stats] = await Promise.all([
  getUsers(),
  getPosts(),
  getStats(),
]);

// Promise.allSettled - never rejects, returns status for each
const results = await Promise.allSettled([
  getUsers(),
  getPosts(),
  getStats(),
]);

// Promise.race - resolves/rejects with the first to settle
const fastest = await Promise.race([
  fetchFromPrimary(),
  fetchFromFallback(),
]);

// Promise.any - resolves with the first to fulfill (ignores rejections)
const firstSuccess = await Promise.any([
  fetchFromCDN1(),
  fetchFromCDN2(),
]);
```

**When to reach for this:** You have multiple independent data fetches in a Server Component and want them to run simultaneously instead of sequentially.
## Working Example
```tsx
// app/dashboard/page.tsx
async function getUser(id: string) {
  const res = await fetch(`https://api.example.com/users/${id}`);
  return res.json();
}

async function getOrders(userId: string) {
  const res = await fetch(`https://api.example.com/orders?userId=${userId}`);
  return res.json();
}

async function getNotifications(userId: string) {
  const res = await fetch(`https://api.example.com/notifications?userId=${userId}`);
  return res.json();
}

export default async function DashboardPage() {
  const userId = "user-123";

  // All three fetches start at the same time
  const [user, orders, notifications] = await Promise.all([
    getUser(userId),
    getOrders(userId),
    getNotifications(userId),
  ]);

  return (
    <div>
      <h1>Welcome, {user.name}</h1>
      <p>{notifications.length} unread notifications</p>
      <h2>Recent Orders</h2>
      <ul>
        {orders.map((order: { id: string; total: number }) => (
          <li key={order.id}>${order.total}</li>
        ))}
      </ul>
    </div>
  );
}
```

**What this demonstrates:**
- Three independent API calls execute in parallel
- Total time equals the slowest call (not the sum of all three)
- Clean destructuring of results
- Server Component with no client JavaScript
## Deep Dive

### How It Works
- `Promise.all` takes an array of promises and returns a single promise that resolves to an array of results
- All promises start executing immediately when created (not when awaited)
- The returned promise resolves when ALL promises fulfill
- If ANY promise rejects, `Promise.all` rejects immediately with that error
- Next.js `fetch` in Server Components is automatically deduped and cached
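These rules are easy to verify with plain promises, outside of any framework. A minimal sketch (the `delay` helper and the timings here are purely illustrative):

```typescript
// Minimal illustration of Promise.all semantics with plain promises.
function delay<T>(ms: number, value: T): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

const start = Date.now();

// All three promises are created (and start running) before any await
const [users, posts, stats] = await Promise.all([
  delay(100, "users"),
  delay(200, "posts"),
  delay(150, "stats"),
]);

// Elapsed time is ~200ms (the slowest), not 450ms (the sum)
console.log(users, posts, stats, `~${Date.now() - start}ms`);

// Fail-fast: a single rejection rejects the whole Promise.all
try {
  await Promise.all([delay(50, "ok"), Promise.reject(new Error("boom"))]);
} catch (err) {
  console.log((err as Error).message); // boom
}
```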
### Parameters & Return Values
| Method | Resolves When | Rejects When | Returns |
|---|---|---|---|
| `Promise.all` | All fulfill | Any rejects | Array of values |
| `Promise.allSettled` | All settle | Never | Array of `{status, value}` or `{status, reason}` |
| `Promise.race` | First settles | First rejects | Single value |
| `Promise.any` | First fulfills | All reject | Single value |
### Variations

**`Promise.allSettled` for partial failures:**
```tsx
export default async function DashboardPage() {
  const results = await Promise.allSettled([
    getUser("user-123"),
    getOrders("user-123"),
    getRecommendations("user-123"), // Might fail, non-critical
  ]);

  const user = results[0].status === "fulfilled" ? results[0].value : null;
  const orders = results[1].status === "fulfilled" ? results[1].value : [];
  const recs = results[2].status === "fulfilled" ? results[2].value : [];

  return (
    <div>
      {user && <h1>{user.name}</h1>}
      <OrderList orders={orders} />
      {recs.length > 0 && <Recommendations items={recs} />}
    </div>
  );
}
```

**Parallel fetches with Suspense (independent streaming):**
```tsx
// Each section loads independently - no Promise.all needed
import { Suspense } from "react";

export default function DashboardPage() {
  return (
    <div>
      <Suspense fallback={<Skeleton />}>
        <UserProfile id="user-123" />
      </Suspense>
      <Suspense fallback={<Skeleton />}>
        <OrderList userId="user-123" />
      </Suspense>
      <Suspense fallback={<Skeleton />}>
        <Notifications userId="user-123" />
      </Suspense>
    </div>
  );
}

// Each component fetches its own data
async function UserProfile({ id }: { id: string }) {
  const user = await getUser(id);
  return <h1>{user.name}</h1>;
}
```

**Dependent fetches (waterfall is correct):**
```tsx
// When fetch B depends on fetch A, sequential is correct
export default async function UserPage({ params }: { params: Promise<{ id: string }> }) {
  const { id } = await params;
  const user = await getUser(id);

  // These depend on user data, but are independent of each other
  const [posts, followers] = await Promise.all([
    getPostsByAuthor(user.id),
    getFollowers(user.id),
  ]);

  return <Profile user={user} posts={posts} followers={followers} />;
}
```

**Helper function for typed parallel fetches:**
```typescript
async function fetchParallel<T extends readonly Promise<unknown>[]>(
  ...promises: T
): Promise<{ -readonly [K in keyof T]: Awaited<T[K]> }> {
  return Promise.all(promises) as Promise<{ -readonly [K in keyof T]: Awaited<T[K]> }>;
}

// Usage - fully typed results
const [user, posts] = await fetchParallel(
  getUser("123"),  // User type
  getPosts("123"), // Post[] type
);
```

### TypeScript Notes
```typescript
// Promise.all preserves tuple types
const results = await Promise.all([
  getUser("1"),  // returns Promise<User>
  getPosts("1"), // returns Promise<Post[]>
  getCount(),    // returns Promise<number>
]);
// results is [User, Post[], number]

// Promise.allSettled result type
type SettledResult<T> =
  | { status: "fulfilled"; value: T }
  | { status: "rejected"; reason: unknown };

// Type guard for settled results
function isFulfilled<T>(
  result: PromiseSettledResult<T>
): result is PromiseFulfilledResult<T> {
  return result.status === "fulfilled";
}

const settled = await Promise.allSettled([getUsers(), getPosts()]);
const successfulResults = settled.filter(isFulfilled).map((r) => r.value);
```

### Gotchas
- **`Promise.all` fails fast.** If one promise rejects, you lose ALL results, even from promises that succeeded. Fix: use `Promise.allSettled` when some fetches are non-critical, or wrap individual promises in try/catch.
- **Promises start immediately, not when awaited.** `const p = fetch(url)` starts the fetch right away; `await` just waits for the result. If you create promises in a loop, they all start in parallel. Fix: this is usually what you want, but be aware of it for rate-limited APIs.
- **Unhandled rejection in `Promise.all`.** If you forget to catch the rejection from `Promise.all`, it becomes an unhandled promise rejection and may crash your server. Fix: always wrap in try/catch or use `Promise.allSettled`.
- **Not actually parallel.** If you accidentally `await` each fetch before starting the next, they run sequentially. Fix: create all promises first, then await them together.
```typescript
// Bad: sequential (each await blocks)
const users = await getUsers();
const posts = await getPosts();

// Good: parallel (both start immediately)
const [users, posts] = await Promise.all([getUsers(), getPosts()]);
```

- **Too many parallel requests.** Firing 50 fetches in parallel can overwhelm APIs or hit rate limits. Fix: use a concurrency limiter like `p-limit` or batch requests.
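The limiter pattern that fix refers to can be sketched in a few lines. This is a minimal, illustrative stand-in for what a library like `p-limit` provides, not its actual API:

```typescript
// Minimal concurrency limiter (illustrative, not the real p-limit API):
// at most `limit` tasks run at once; the rest queue up.
function createLimiter(limit: number) {
  let active = 0;
  const queue: (() => void)[] = [];

  const release = () => {
    active--;
    queue.shift()?.(); // wake the next queued task, if any
  };

  return async <T>(task: () => Promise<T>): Promise<T> => {
    if (active >= limit) {
      // Park this caller until a slot frees up
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      release();
    }
  };
}

// Usage: 50 fetches, but never more than 5 in flight at once
// const limit = createLimiter(5);
// const pages = await Promise.all(urls.map((url) => limit(() => fetch(url))));
```

You still collect everything through a single `Promise.all`; the limiter only caps how many tasks are running at the same moment.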
### Alternatives
| Alternative | Use When | Don't Use When |
|---|---|---|
| Sequential `await` | Fetches depend on each other (B needs A's result) | Independent fetches (creates waterfalls) |
| Suspense boundaries | Each section should stream independently | You need all data before rendering anything |
| `Promise.allSettled` | Some fetches are optional or may fail | All data is required to render |
| `Promise.race` | You want the fastest response from multiple sources | You need all results |
| `p-limit` | You need to limit concurrency (rate-limited APIs) | A few parallel fetches |
## Real-World Example
From a production Next.js 15 / React 19 SaaS application (SystemsArchitect.io).
```typescript
// Production example: Admin documents API with 6 parallel Prisma queries
// File: app/api/admin/documents/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { prisma } from '@/lib/prisma';

export async function GET(request: NextRequest) {
  const { searchParams } = request.nextUrl;
  const page = parseInt(searchParams.get('page') ?? '1', 10);
  const limit = parseInt(searchParams.get('limit') ?? '20', 10);
  const offset = (page - 1) * limit;

  const [
    documents,
    totalCount,
    categoryCounts,
    recentlyUpdated,
    publishedCount,
    storageUsed,
  ] = await Promise.all([
    // Main query with pagination trick: fetch limit+1 to detect if more pages exist
    prisma.document.findMany({
      take: limit + 1,
      skip: offset,
      orderBy: { updatedAt: 'desc' },
      include: { author: { select: { name: true, email: true } } },
    }),
    prisma.document.count(),
    prisma.document.groupBy({
      by: ['categoryId'],
      _count: { id: true },
    }),
    prisma.document.findMany({
      take: 5,
      orderBy: { updatedAt: 'desc' },
      where: { updatedAt: { gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } },
    }),
    prisma.document.count({ where: { status: 'PUBLISHED' } }),
    // Postgres-specific raw query for aggregation
    prisma.$queryRaw<[{ total: bigint }]>`
      SELECT COALESCE(SUM("file_size"), 0) as total FROM "Document"
    `,
  ]);

  const hasMore = documents.length > limit;
  const paginatedDocs = hasMore ? documents.slice(0, limit) : documents;
  const totalStorage = Number(storageUsed[0]?.total ?? 0n);

  return NextResponse.json({
    documents: paginatedDocs,
    hasMore,
    totalCount,
    categoryCounts,
    recentlyUpdated,
    publishedCount,
    totalStorage,
  });
}
```

**What this demonstrates in production:**
- The `take: limit + 1` pagination trick avoids a separate count query just to know if a next page exists. If you get back more rows than `limit`, there are more pages. Slice the extra row off before returning.
- `$queryRaw` is used for the Postgres-specific `COALESCE(SUM(...))` aggregation because Prisma's aggregation API does not support `COALESCE`. The result type must be annotated explicitly as `[{ total: bigint }]` because Postgres returns `bigint` for `SUM`.
- The `bigint` from Postgres cannot be serialized to JSON directly. `Number(storageUsed[0]?.total ?? 0n)` converts it to a regular number. For values that could exceed `Number.MAX_SAFE_INTEGER`, use `.toString()` instead.
- Six parallel queries execute simultaneously via `Promise.all`. The total time is the duration of the slowest query, not the sum of all six. On a cold start this brought the endpoint from around 1200ms (sequential) down to under 300ms.
- Connection pool sizing matters with parallel queries. Six simultaneous Prisma queries consume six connections from the pool. The default Prisma pool size is `num_cpus * 2 + 1`. On serverless platforms, keep parallel query count within pool limits or increase the pool size in the connection string.
## FAQs

### What happens if one promise rejects in Promise.all?

- `Promise.all` rejects immediately with that error
- All other results are lost, even from promises that already succeeded
- Use `Promise.allSettled` when some fetches are non-critical
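A middle ground is to keep `Promise.all` but attach a `.catch` fallback to each non-critical promise, so its rejection never reaches `Promise.all`. A small self-contained sketch (`flakyRecommendations` is a hypothetical stand-in for a fetch that can fail):

```typescript
// Per-promise fallback: the non-critical promise catches its own error,
// so Promise.all never sees the rejection.
// flakyRecommendations is a hypothetical stand-in for a real fetch.
const flakyRecommendations = (): Promise<string[]> =>
  Promise.reject(new Error("recommendation service down"));

const [user, recs] = await Promise.all([
  Promise.resolve({ name: "Ada" }),                   // critical: let failures propagate
  flakyRecommendations().catch(() => [] as string[]), // non-critical: fall back to []
]);

console.log(user.name, recs.length); // Ada 0
```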
### When should you use Promise.allSettled instead of Promise.all?

- When some fetches are optional or non-critical (e.g., recommendations)
- When you want partial data instead of a complete failure
- Each result has a `status` of `"fulfilled"` or `"rejected"`, so you can handle each individually
### Why do sequential await calls create a waterfall even though each promise starts immediately?

- Promises start executing when created, but `await` blocks the next line
- `const a = await fetchA(); const b = await fetchB();` waits for A to finish before creating B
- To run in parallel, create all promises first, then `await Promise.all([...])`
```typescript
// Bad: sequential
const users = await getUsers();
const posts = await getPosts();

// Good: parallel
const [users, posts] = await Promise.all([getUsers(), getPosts()]);
```

### What is the difference between Promise.race and Promise.any?

- `Promise.race` resolves or rejects with the first promise to settle (either fulfill or reject)
- `Promise.any` resolves with the first promise to fulfill, ignoring rejections
- `Promise.any` only rejects if all promises reject (with an `AggregateError`)
### How does Promise.all preserve TypeScript tuple types?

```typescript
const results = await Promise.all([
  getUser("1"),  // Promise<User>
  getPosts("1"), // Promise<Post[]>
  getCount(),    // Promise<number>
]);
// results is [User, Post[], number] -- fully typed
```

### How do you write a type guard for Promise.allSettled results?
```typescript
function isFulfilled<T>(
  result: PromiseSettledResult<T>
): result is PromiseFulfilledResult<T> {
  return result.status === "fulfilled";
}

const results = await Promise.allSettled([getUsers(), getPosts()]);
const successes = results.filter(isFulfilled).map((r) => r.value);
```

### When is a sequential waterfall actually correct?
- When fetch B depends on the result of fetch A (e.g., you need the user ID before fetching their posts)
- Combine: fetch the dependency first, then use `Promise.all` for independent sub-fetches
```typescript
const user = await getUser(id);
const [posts, followers] = await Promise.all([
  getPostsByAuthor(user.id),
  getFollowers(user.id),
]);
```

### What happens if you fire too many parallel requests at once?
- You can overwhelm APIs or hit rate limits
- Database connection pools may be exhausted (e.g., Prisma's default pool size is `num_cpus * 2 + 1`)
- Use a concurrency limiter like `p-limit`, or batch the requests
### How does the `take: limit + 1` pagination trick from the real-world example work?

- Fetch one extra row beyond your page size
- If you get back more rows than `limit`, there are more pages (`hasMore = true`)
- Slice the extra row off before returning to the client
- This avoids a separate `COUNT(*)` query just to check for the next page
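The trick is independent of Prisma; the post-processing step can be sketched with an ordinary array (the row data here is made up):

```typescript
// The limit+1 trick: the extra row tells you whether a next page exists,
// without a second COUNT(*) query.
function paginate<T>(rows: T[], limit: number) {
  const hasMore = rows.length > limit;            // extra row present => more pages
  const page = hasMore ? rows.slice(0, limit) : rows; // drop the extra row
  return { page, hasMore };
}

// Simulated result of a `take: limit + 1` query with limit = 3
const fetched = ["doc1", "doc2", "doc3", "doc4"]; // 4 rows came back
const { page, hasMore } = paginate(fetched, 3);
console.log(page.length, hasMore); // 3 true
```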
### Can you use Suspense boundaries instead of Promise.all for parallel data loading?

- Yes. Each async Server Component in its own `<Suspense>` boundary fetches independently
- Unlike `Promise.all`, sections render as they resolve rather than waiting for all to finish
- Use Suspense when sections should stream independently; use `Promise.all` when you need all data before rendering
### Why does the real-world example convert bigint to Number() before returning JSON?

- Postgres returns `bigint` for `SUM` aggregations
- `bigint` cannot be serialized to JSON directly (`JSON.stringify` throws a `TypeError`)
- `Number()` converts it to a regular number; use `.toString()` if the value could exceed `Number.MAX_SAFE_INTEGER`
### What is the fetchParallel helper function and why is it useful?

```typescript
async function fetchParallel<T extends readonly Promise<unknown>[]>(
  ...promises: T
): Promise<{ -readonly [K in keyof T]: Awaited<T[K]> }> {
  return Promise.all(promises) as Promise<{ -readonly [K in keyof T]: Awaited<T[K]> }>;
}
```

- It preserves individual types for each promise in the tuple
- Provides fully typed destructured results without manual type annotations
## Related
- Data Fetching - fetch in Server Components
- Streaming - Suspense-based parallel streaming
- Caching - request deduplication and caching
- Generators - async iteration for large datasets