Deploy Next.js on EC2 / Standalone Server
Deploying Next.js on a standalone Linux server (AWS EC2, DigitalOcean Droplet, Hetzner, etc.) using next build && next start. Everything Vercel handles for you automatically -- CDN, SSL, scaling, preview deploys, environment variables -- is now your responsibility. This guide walks you through every piece.
Recipe
Quick-reference recipe card -- copy-paste ready.
```shell
# On your server (Ubuntu 22.04+)

# 1. Build the production bundle
NODE_ENV=production npm ci
NODE_ENV=production npx next build

# 2. Start with PM2 (process manager)
npm install -g pm2
pm2 start npm --name "myapp" -- start
pm2 save
pm2 startup

# 3. Nginx reverse proxy (after installing nginx and creating the
#    site config from Step 5 in /etc/nginx/sites-available/myapp)
sudo apt install nginx certbot python3-certbot-nginx -y
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# 4. SSL certificate
sudo certbot --nginx -d myapp.com -d www.myapp.com

# 5. Verify
curl -I https://myapp.com
```

When to reach for this: Your company requires self-hosting, you need to control the server environment, you are deploying to a VPC with no public internet access, or you want predictable monthly costs instead of usage-based billing.
Working Example
A complete deployment walkthrough from a fresh Ubuntu server to a production-ready Next.js application.
Step 1: Provision the Server
Launch an EC2 instance (or equivalent) with:
- OS: Ubuntu 22.04 LTS or 24.04 LTS
- Instance type: t3.medium minimum (2 vCPU, 4 GB RAM) for most Next.js apps
- Storage: 30 GB+ SSD (builds consume disk space)
Configure the security group / firewall:
```shell
# If using ufw on the server directly
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP (Nginx)
sudo ufw allow 443/tcp   # HTTPS (Nginx)
sudo ufw enable
```

Install Node.js 20 via nvm:
```shell
# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc

# Install Node.js 20 LTS
nvm install 20
nvm alias default 20
node -v   # v20.x.x
```

Create a dedicated user (never run Node.js as root):

```shell
sudo adduser --disabled-password nextjs
sudo usermod -aG sudo nextjs   # optional: omit for strict least privilege
sudo su - nextjs
```

Step 2: Clone and Build
```shell
# Clone your repository
git clone https://github.com/your-org/your-app.git /home/nextjs/app
cd /home/nextjs/app

# Install dependencies (dev dependencies are needed for the build step)
npm ci

# Build for production
NODE_ENV=production npx next build
```

After the build completes, the .next/ directory contains:
```
.next/
├── cache/                # ISR cache, image optimization cache, build cache
├── server/               # Server-side bundles (App Router pages, API routes)
│   ├── app/              # Compiled App Router pages
│   ├── chunks/           # Shared server chunks
│   └── pages/            # Compiled Pages Router pages (if any)
├── static/               # Client-side JS/CSS bundles (fingerprinted)
│   └── chunks/           # Code-split client bundles
├── BUILD_ID              # Unique build identifier
├── build-manifest.json   # Maps routes to client bundles
└── trace                 # Build trace data
```

Start the production server to verify:

```shell
NODE_ENV=production npx next start -p 3000
# Visit http://<server-ip>:3000 to verify, then Ctrl+C
```

Step 3: Environment Variables
Create .env.production in your project root:
```shell
# .env.production

# Server-side only (not exposed to the browser)
DATABASE_URL="postgresql://user:pass@db-host:5432/mydb"
NEXTAUTH_SECRET="your-secret-key-here"
NEXTAUTH_URL="https://myapp.com"

# Client-side (embedded in the JS bundle at BUILD time)
NEXT_PUBLIC_API_URL="https://api.myapp.com"
NEXT_PUBLIC_POSTHOG_KEY="phc_xxxxxxxxxxxx"
```

The NEXT_PUBLIC_ prefix is critical to understand:
| Prefix | Available Where | When Resolved | Change Requires |
|---|---|---|---|
| NEXT_PUBLIC_ | Server + Client (browser) | Build time (baked into JS bundle) | Rebuild |
| No prefix | Server only | Runtime (read from process.env) | Restart |
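Because server-only variables are resolved at runtime, a missing one typically surfaces as a crash deep inside a request handler. A small guard run at startup fails fast instead. A sketch (the helper name requireEnv is ours, not a Next.js API; call it from e.g. instrumentation.ts):

```typescript
// requireEnv: fail fast at boot if a required server-side variable is unset.
export function requireEnv(names: string[]): Record<string, string> {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return Object.fromEntries(names.map((name) => [name, process.env[name] as string]));
}
```

Usage: `const { DATABASE_URL } = requireEnv(["DATABASE_URL", "NEXTAUTH_SECRET"]);` — a typo in the PM2 env block then kills the process at boot, where PM2 logs make it obvious, instead of failing on the first database query.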
For PM2-managed environment variables, use an ecosystem.config.js:
```javascript
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "myapp",
      script: "node_modules/.bin/next",
      args: "start -p 3000",
      cwd: "/home/nextjs/app",
      env: {
        NODE_ENV: "production",
        DATABASE_URL: "postgresql://user:pass@db-host:5432/mydb",
        NEXTAUTH_SECRET: "your-secret-key-here",
        NEXTAUTH_URL: "https://myapp.com",
      },
    },
  ],
};
```

Step 4: Process Management with PM2
PM2 keeps your Node.js process alive, restarts it on crash, and survives server reboots.
```shell
# Install PM2 globally
npm install -g pm2
```

Create a production-ready ecosystem.config.js with cluster mode:
```javascript
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "myapp",
      script: "node_modules/.bin/next",
      args: "start -p 3000",
      cwd: "/home/nextjs/app",
      instances: "max",            // Use all available CPU cores
      exec_mode: "cluster",        // Cluster mode for load balancing
      max_memory_restart: "512M",  // Restart if memory exceeds 512MB
      env: {
        NODE_ENV: "production",
        PORT: 3000,
      },
      // Logging
      error_file: "/home/nextjs/logs/err.log",
      out_file: "/home/nextjs/logs/out.log",
      log_date_format: "YYYY-MM-DD HH:mm:ss Z",
      merge_logs: true,
    },
  ],
};
```

Start and persist:
```shell
# Create logs directory
mkdir -p /home/nextjs/logs

# Start the application
pm2 start ecosystem.config.js

# Save the process list (so PM2 knows what to restart after reboot)
pm2 save

# Generate the startup script (run the command it outputs as root)
pm2 startup
# Copy-paste the generated command, e.g.:
# sudo env PATH=$PATH:/home/nextjs/.nvm/versions/node/v20.x.x/bin pm2 startup systemd -u nextjs --hp /home/nextjs

# Verify processes are running
pm2 ls
```

Step 5: Nginx Reverse Proxy
Nginx sits in front of Node.js to handle TLS termination, gzip compression, static asset caching, and security headers.
```shell
sudo apt install nginx -y
```

Create the site configuration:
```nginx
# /etc/nginx/sites-available/myapp

upstream nextjs_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}

# Pass Connection: upgrade only when the client actually requests an upgrade,
# so ordinary requests can reuse keepalive connections to the upstream.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ""      "";
}

server {
    listen 80;
    server_name myapp.com www.myapp.com;

    # Redirect all HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name myapp.com www.myapp.com;

    # SSL certificates (created by certbot in Step 6)
    ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Gzip compression
    gzip on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Serve Next.js static assets directly from Nginx (bypasses Node.js)
    location /_next/static/ {
        alias /home/nextjs/app/.next/static/;
        expires 365d;
        access_log off;
        add_header Cache-Control "public, immutable";
    }

    # Serve public directory assets directly
    location /public/ {
        alias /home/nextjs/app/public/;
        expires 30d;
        access_log off;
    }

    # Favicon and robots.txt
    location = /favicon.ico {
        alias /home/nextjs/app/public/favicon.ico;
        access_log off;
    }

    # Proxy all other requests to Next.js
    location / {
        proxy_pass http://nextjs_upstream;
        proxy_http_version 1.1;

        # Required headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;

        # WebSocket support (needed for dev-mode HMR and any WebSocket routes)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
    }
}
```

Enable the site and test:
```shell
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default   # Remove default site
sudo nginx -t                              # Test configuration
sudo systemctl reload nginx
```

Note: on a fresh server, nginx -t fails until the certificate files referenced in the 443 block exist. Temporarily comment out the 443 server block, reload, run certbot in Step 6, then restore it.

Step 6: SSL with Certbot
```shell
# Install certbot
sudo apt install certbot python3-certbot-nginx -y

# Obtain and install certificate (Nginx plugin auto-configures SSL)
sudo certbot --nginx -d myapp.com -d www.myapp.com

# Verify auto-renewal is set up
sudo certbot renew --dry-run

# Certbot installs a systemd timer automatically; verify it:
sudo systemctl list-timers | grep certbot
```

Step 7: Zero-Downtime Deploys
Create a deploy script that pulls the latest code, rebuilds, and gracefully reloads:
```shell
#!/bin/bash
# /home/nextjs/deploy.sh
set -euo pipefail

APP_DIR="/home/nextjs/app"
LOG_FILE="/home/nextjs/logs/deploy-$(date +%Y%m%d-%H%M%S).log"

echo "=== Deploy started at $(date) ===" | tee "$LOG_FILE"
cd "$APP_DIR"

# Pull latest code
echo "Pulling latest code..." | tee -a "$LOG_FILE"
git pull origin main 2>&1 | tee -a "$LOG_FILE"

# Install dependencies (ci for clean installs)
echo "Installing dependencies..." | tee -a "$LOG_FILE"
npm ci 2>&1 | tee -a "$LOG_FILE"

# Build the application
echo "Building..." | tee -a "$LOG_FILE"
NODE_ENV=production npx next build 2>&1 | tee -a "$LOG_FILE"

# Gracefully reload PM2 processes (zero-downtime)
echo "Reloading PM2 processes..." | tee -a "$LOG_FILE"
pm2 reload myapp 2>&1 | tee -a "$LOG_FILE"

echo "=== Deploy completed at $(date) ===" | tee -a "$LOG_FILE"
```

Why pm2 reload instead of pm2 restart:

- reload -- Starts new worker processes first, waits for them to accept connections, then gracefully shuts down old workers. Zero downtime.
- restart -- Kills all workers immediately, then starts new ones. Brief downtime while new processes boot.

Make the script executable:

```shell
chmod +x /home/nextjs/deploy.sh
```

Step 8: Health Checks
Create a health check endpoint for load balancer monitoring (ALB, ELB, or uptime services):
```typescript
// app/api/health/route.ts
import { NextResponse } from "next/server";

export const dynamic = "force-dynamic";

export async function GET() {
  return NextResponse.json({
    status: "ok",
    uptime: process.uptime(),
    timestamp: new Date().toISOString(),
    node_version: process.version,
    memory: {
      rss_mb: Math.round(process.memoryUsage().rss / 1024 / 1024),
      heap_used_mb: Math.round(process.memoryUsage().heapUsed / 1024 / 1024),
    },
  });
}
```

Configure your load balancer or monitoring service to poll https://myapp.com/api/health every 30 seconds. A 200 response with "status": "ok" means the server is healthy.
Step 9: Logging
Set up PM2 log rotation to prevent logs from consuming all disk space:
```shell
# Install the log rotation module
pm2 install pm2-logrotate

# Configure rotation settings
pm2 set pm2-logrotate:max_size 50M    # Rotate when log reaches 50MB
pm2 set pm2-logrotate:retain 30       # Keep 30 rotated files
pm2 set pm2-logrotate:compress true   # Gzip old logs
pm2 set pm2-logrotate:dateFormat YYYY-MM-DD_HH-mm-ss
```

For shipping logs to a centralized service:
Option A: CloudWatch (AWS)
```shell
# Install the CloudWatch agent
# (on Ubuntu the package may not be in the default repos; if apt cannot
#  find it, download the .deb installer from AWS instead)
sudo apt install amazon-cloudwatch-agent -y
```

Configure it to watch the PM2 log files in /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/home/nextjs/logs/out.log",
            "log_group_name": "myapp/stdout",
            "log_stream_name": "{instance_id}"
          },
          {
            "file_path": "/home/nextjs/logs/err.log",
            "log_group_name": "myapp/stderr",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

Option B: Structured logging with Pino (recommended for log aggregators like Datadog, Grafana Loki):
```typescript
// lib/logger.ts
import pino from "pino";

export const logger = pino({
  level: process.env.LOG_LEVEL ?? "info",
  formatters: {
    level: (label) => ({ level: label }),
  },
  timestamp: pino.stdTimeFunctions.isoTime,
});
```

Deep Dive
Vercel vs. Self-Hosted: What You Now Own
| Capability | Vercel (Managed) | Self-Hosted (Your Responsibility) |
|---|---|---|
| CDN / Edge caching | Automatic global CDN | Configure CloudFront, Cloudflare, or Fastly yourself |
| ISR revalidation | Distributed cache across edge nodes | Works on single server; multi-server needs shared cache (Redis, S3) |
| Preview deployments | Automatic per-PR preview URLs | Set up yourself (separate PM2 process per branch, or skip it) |
| Serverless scaling | Auto-scales to zero and up | Fixed server capacity; scale with more instances + ALB |
| Image optimization | Edge-optimized next/image | next/image works but uses your server CPU; offload to a CDN |
| Analytics / Web Vitals | Built-in dashboard | Add your own (Datadog, Grafana, PostHog, Vercel Speed Insights OSS) |
| Build cache | Remote build cache | Local .next/cache; persist across deploys by not deleting it |
| HTTPS | Automatic SSL | You manage certificates (certbot auto-renewal cron) |
| Environment variables | Dashboard UI, per-environment | No dashboard; manage via .env files, PM2 config, or AWS SSM |
| Middleware | Runs at the edge (V8 isolates) | Runs in Node.js (not edge); same API, different runtime characteristics |
ISR on a Standalone Server
Incremental Static Regeneration works out of the box on a single server because the regenerated pages are written to .next/cache/ on local disk. When a page is revalidated, Next.js:
- Serves the stale page immediately
- Regenerates the page in the background
- Writes the new page to .next/cache/
- Serves the new page on the next request
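That stale-while-revalidate flow can be sketched in plain TypeScript, independent of Next.js (illustrative only -- Next.js uses its own on-disk cache, not this code):

```typescript
// Minimal stale-while-revalidate cache mirroring the ISR flow above.
type Entry = { value: string; expiresAt: number };

export function makeSwrCache(
  regenerate: (key: string) => Promise<string>,
  ttlMs: number,
) {
  const cache = new Map<string, Entry>();

  return async function get(key: string): Promise<string> {
    const hit = cache.get(key);
    const now = Date.now();

    if (hit && now < hit.expiresAt) {
      return hit.value; // fresh: serve from cache
    }
    if (hit) {
      // stale: serve the old value immediately, regenerate in the background
      regenerate(key).then((value) =>
        cache.set(key, { value, expiresAt: Date.now() + ttlMs }),
      );
      return hit.value;
    }
    // miss: the first request pays the generation cost
    const value = await regenerate(key);
    cache.set(key, { value, expiresAt: now + ttlMs });
    return value;
  };
}
```

The key property is that a revalidation never blocks a request: once a page exists, every visitor gets an immediate response, at the cost of occasionally serving slightly stale content.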
The problem with multiple servers: If you scale to 2+ servers behind a load balancer, each server has its own .next/cache/. Server A might have a fresh page while Server B still serves a stale one. Users see inconsistent content.
Solutions:
- Sticky sessions -- Route users to the same server via ALB session affinity. Simplest but reduces load balancing effectiveness.
- Shared NFS mount -- Mount .next/cache/ from an EFS volume. All servers share the same cache. Adds latency but ensures consistency.
- Custom cache handler -- Use the cacheHandler option in next.config.ts to point to a Redis- or S3-backed cache:
```typescript
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  cacheHandler: "./cache-handler.js", // path to the compiled handler module
  cacheMaxMemorySize: 0, // Disable in-memory caching, use external store only
};

export default nextConfig;
```

A sketch of the handler itself -- a class implementing the get/set/revalidateTag contract (error handling and tag indexing are simplified here):

```typescript
// cache-handler.ts (compile to cache-handler.js before deploying)
import { createClient } from "redis";

const client = createClient({ url: process.env.REDIS_URL });
const ready = client.connect();

export default class RedisCacheHandler {
  async get(key: string) {
    await ready;
    const data = await client.get(key);
    return data ? JSON.parse(data) : null;
  }

  async set(key: string, data: unknown, ctx: { revalidate?: number }) {
    await ready;
    const ttl = ctx.revalidate ?? 60;
    await client.set(key, JSON.stringify(data), { EX: ttl });
  }

  async revalidateTag(tag: string) {
    await ready;
    // Simplified: find keys carrying this tag and delete them.
    // (KEYS blocks Redis; prefer SCAN or a tag index in production.)
    const keys = await client.keys(`*:${tag}:*`);
    if (keys.length > 0) {
      await client.del(keys);
    }
  }
}
```

The standalone Output Mode
Setting output: "standalone" in next.config.ts tells the build process to trace your application's imports and bundle only the required node_modules into .next/standalone/:
```typescript
// next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "standalone",
};

export default nextConfig;
```

After building, the .next/standalone/ directory contains:
```
.next/standalone/
├── node_modules/   # Only the dependencies your app actually uses (~50MB)
├── server.js       # Minimal Node.js server entry point
├── package.json
└── .next/
    └── server/     # Compiled server bundles
```

You must manually copy static assets:
```shell
# After building with output: "standalone"
cp -r public .next/standalone/public
cp -r .next/static .next/standalone/.next/static
```

Then start with:

```shell
cd .next/standalone
NODE_ENV=production node server.js
```

| Scenario | Use standalone | Use default output |
|---|---|---|
| Docker containers | Yes -- minimal image size | No |
| Minimal server footprint | Yes -- ~50MB vs hundreds | No |
| Lambda / serverless | Yes | No |
| Full server with node_modules available | No | Yes -- simpler |
| Monorepo with shared packages | Depends | Often easier without |
Monitoring and Reliability
PM2 auto-restart on crash:
PM2 automatically restarts crashed processes. Configure memory limits to catch memory leaks:
```shell
# Set memory limit (restarts if exceeded)
pm2 start ecosystem.config.js   # max_memory_restart already set in config

# Monitor in real time
pm2 monit
```

Health check endpoint for ALB:
Configure the AWS ALB target group health check:
- Path: /api/health
- Interval: 30 seconds
- Healthy threshold: 2 consecutive successes
- Unhealthy threshold: 3 consecutive failures
- Timeout: 10 seconds
Disk space management:
The .next/cache/ directory can grow unbounded, especially with ISR and image optimization:
```shell
# Cron job to prune ISR cache older than 7 days
# Add to crontab -e
0 3 * * * find /home/nextjs/app/.next/cache -type f -mtime +7 -delete 2>/dev/null
```

Node.js memory tuning:
If your app processes large payloads or datasets, increase the V8 heap limit:
```javascript
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "myapp",
      script: "node_modules/.bin/next",
      args: "start -p 3000",
      node_args: "--max-old-space-size=1024", // 1 GB heap limit
      max_memory_restart: "1200M", // PM2 restart threshold above V8 limit
      // ... rest of config
    },
  ],
};
```

Gotchas
- Forgetting to copy public/ and .next/static/ with standalone output. The output: "standalone" mode bundles only the server code. Static assets (public/, .next/static/) must be copied into .next/standalone/ manually, or Nginx will serve 404s for every CSS, JS, and image file.

- Running next start as root. If the Node.js process is compromised, the attacker has root access. Always create a dedicated nextjs user with minimal permissions. PM2 runs as that user, and Nginx (which does need ports 80/443) runs as its own www-data user.

- Not setting NODE_ENV=production. Next.js skips critical optimizations in development mode: no minification, no dead code elimination, verbose error pages with source maps exposed. Always set NODE_ENV=production in your PM2 config or shell environment before building and starting.

- Exposing port 3000 directly to the internet. Never let users hit the Node.js process directly. Nginx provides TLS termination, rate limiting, security headers, gzip compression, and protection against slowloris attacks. Do not open port 3000 in your security group or firewall at all; Nginx reaches Next.js over the loopback interface.

- ISR cache growing unbounded. On high-traffic sites with many dynamic pages (e.g., /product/[id] with 100k products), .next/cache/fetch-cache/ and .next/cache/images/ can fill the disk. Monitor disk usage and set up a cron job to prune old cache entries.

- Missing Upgrade headers in Nginx. Without proxy_set_header Upgrade $http_upgrade and a matching Connection header, WebSocket connections fail silently: the connection appears to work but data never arrives. This affects dev-mode HMR and any WebSocket features you add. (Streaming responses from Server Components and Server Actions use plain HTTP rather than WebSockets, but can be stalled by aggressive proxy buffering.)

- Certbot renewal not automated. Let's Encrypt certificates expire every 90 days. While certbot sets up a systemd timer by default, verify it is active: sudo systemctl list-timers | grep certbot. If the timer is missing, add 0 0 1 * * certbot renew --quiet to root's crontab.

- NEXT_PUBLIC_ vars baked at build time. Developers migrating from Vercel are used to changing environment variables in a dashboard and having them take effect on the next request. On a standalone server, NEXT_PUBLIC_ variables are embedded in the JavaScript bundle during next build. Changing them requires a full rebuild and redeploy, not just a PM2 restart. Server-side-only variables (without the NEXT_PUBLIC_ prefix) do take effect after a restart.

- Build failing with OOM on small instances. next build can consume 2+ GB of RAM on large apps. If you are building on a t3.micro (1 GB RAM), add swap space or build on a larger instance and copy the .next/ directory over.

- Forgetting to persist .next/cache/ across deploys. If your deploy script runs rm -rf .next before building, you lose the build cache and ISR cache. Builds take longer, and all ISR pages must regenerate. Instead, only remove .next/server/ and .next/static/ if needed, preserving .next/cache/.
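For the build-OOM gotcha above, adding a swap file is the quickest fix on a small instance. A sketch (requires root; the 2 GB size is an arbitrary example):

```shell
# Create and enable a 2 GB swap file (size is illustrative)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify
swapon --show
```

Swap makes the build slow but lets it finish; it is a stopgap, not a substitute for adequate RAM on the serving instance.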
Alternatives
| Alternative | Use When | Don't Use When |
|---|---|---|
| Vercel | You want zero ops, auto-scaling, preview deploys | Company policy requires self-hosting or specific cloud |
| Docker on ECS/EKS | You need container orchestration, auto-scaling, multi-region | Simple single-server deployment |
| Static export | Fully static site with no server features | You use SSR, ISR, middleware, or Server Actions |
| Coolify / Dokku | You want a self-hosted PaaS (Heroku-like) on your own server | Enterprise-scale with complex orchestration needs |
FAQs
Does ISR (Incremental Static Regeneration) still work on a standalone server?
Yes. ISR works out of the box on a single server because regenerated pages are written to .next/cache/ on local disk. The only complication is multi-server setups where each server has its own cache. In that case, use sticky sessions, a shared NFS mount (AWS EFS), or a custom cache handler backed by Redis or S3.
How do I do preview deployments without Vercel?
You have several options: (1) Run a separate PM2 process per branch on a different port, with Nginx routing by subdomain (pr-123.preview.myapp.com). (2) Use Coolify or Dokku, which provide automatic preview deploys. (3) Skip preview deploys and rely on staging environments. Most teams choose option 3 unless they have a dedicated DevOps engineer.
What about image optimization? Does next/image still work?
Yes, next/image works on a standalone server. The difference is that image optimization (resizing, format conversion to WebP/AVIF) runs on your server's CPU instead of Vercel's edge network. For high-traffic sites, this can be CPU-intensive. Mitigation: put a CDN (CloudFront, Cloudflare) in front of your server to cache optimized images, or use the loader prop to offload to a service like Cloudinary or Imgix.
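A custom loader is just a function mapping (src, width, quality) to a URL. A sketch for Cloudinary-style fetch URLs (the cloud name demo and the exact URL shape are assumptions; check your provider's docs):

```typescript
// Custom next/image loader sketch: delegate resizing and format conversion
// to an image CDN. Pass it via <Image loader={cloudinaryLoader} ... /> or
// configure it globally with the images.loaderFile option.
type LoaderParams = { src: string; width: number; quality?: number };

export function cloudinaryLoader({ src, width, quality }: LoaderParams): string {
  const transforms = [`w_${width}`, `q_${quality ?? 75}`, "f_auto"].join(",");
  return `https://res.cloudinary.com/demo/image/fetch/${transforms}/${encodeURIComponent(src)}`;
}
```

With a loader in place, your server never performs optimization work; it only serves the HTML referencing CDN URLs.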
How do I roll back a bad deployment?
Since you are deploying via git pull, roll back by checking out the previous commit and rebuilding:
```shell
cd /home/nextjs/app
git log --oneline -5         # Find the last good commit
git checkout <commit-hash>   # Check out that commit
npm ci && NODE_ENV=production npx next build
pm2 reload myapp
```

For faster rollbacks, keep the previous .next/ build directory as a backup before each deploy.
Can I use Server Actions on a standalone server?
Yes. Server Actions work identically on a standalone server. They execute as POST requests to the same Node.js server. The only difference from Vercel is that they run in a long-lived Node.js process instead of a serverless function, so be mindful of memory leaks in long-running processes.
Do I need a load balancer for a single server?
No. A single EC2 instance with Nginx as a reverse proxy is sufficient. You only need a load balancer (AWS ALB/NLB) when scaling to multiple servers. However, even with one server, placing it behind an ALB gives you health checks, easy SSL termination via ACM (no certbot needed), and a simpler migration path when you scale later.
How much does this cost compared to Vercel?
A t3.medium EC2 instance (2 vCPU, 4 GB RAM) costs approximately $30/month with a reserved instance or $34/month on-demand. This can handle moderate traffic that would cost $100+ on Vercel Pro. However, you are paying with your time for ops, monitoring, and security patches. For small teams, Vercel's managed service is often cheaper when you factor in engineering time.
Should I use the standalone output mode or the default?
Use output: "standalone" when deploying in Docker containers or when you want the smallest possible deployment artifact (~50 MB). Use the default output when you deploy with the full node_modules/ directory and want simpler deploys (just git pull && npm ci && next build && pm2 reload). Standalone adds a manual step of copying public/ and .next/static/.
How do I handle multiple environments (staging, production)?
Use separate .env.staging and .env.production files. Next.js loads .env.production automatically when NODE_ENV=production. Note that Next.js only recognizes development, production, and test as NODE_ENV values, so for staging keep NODE_ENV=production and distinguish environments with your own variables, or use PM2 ecosystem files with different env blocks per environment:
```javascript
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "myapp",
      script: "node_modules/.bin/next",
      args: "start",
      env_production: { NODE_ENV: "production", API_URL: "https://api.myapp.com" },
      env_staging: { NODE_ENV: "production", API_URL: "https://staging-api.myapp.com" },
    },
  ],
};

// Start with: pm2 start ecosystem.config.js --env staging
```

Does Middleware run the same way as on Vercel?
The API is identical, but the runtime is different. On Vercel, Middleware runs in V8 edge isolates (limited Web API). On a standalone server, Middleware runs in the full Node.js runtime, which means you have access to more Node.js APIs but lose the edge-location benefit. If your Middleware is latency-sensitive (e.g., geolocation redirects), consider placing a CDN in front of your server.
How do I set up a CI/CD pipeline for this?
Use GitHub Actions (or your CI tool) to SSH into the server and run the deploy script:
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to production
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.EC2_HOST }}
          username: nextjs
          key: ${{ secrets.EC2_SSH_KEY }}
          script: /home/nextjs/deploy.sh
```

What if my build takes too long and causes downtime?
The build runs while the old version is still serving traffic (PM2 keeps the old processes alive until pm2 reload). There is no downtime during the build itself. The only risk is if the build consumes so much CPU/RAM that it degrades the running app. Solutions: (1) Build on a separate CI server and rsync the .next/ directory over. (2) Use a larger instance during builds. (3) Add swap space.
Related
- Deployment -- Overview of Next.js deployment options
- Environment Variables -- Managing env vars in Next.js
- Deploy Next.js on ECS/EKS -- Container-based deployment