Most tutorials stop at `npm run build`. This one doesn't.
Getting a Next.js app running locally takes ten minutes. Getting it production-ready on a Linux VM — with a reverse proxy, process manager, TLS, and an automated deploy pipeline — is a different story. This guide covers the full path: from a fresh Ubuntu server to a site that stays up, redeploys on push, and serves over HTTPS.
## 0. If You're New: Why Self-Host Next.js at All?
If you've only ever deployed to Vercel or Netlify, this question is fair: why bother with a VM when one-click platforms exist?
The honest answer: most of the time, you shouldn't. Vercel is excellent for Next.js — it's literally built by the team behind the framework. But there are a few situations where renting a VM and running things yourself is the better call:
- Predictable, flat costs. Serverless can get expensive at scale, especially with image optimization and edge functions.
- Backend processes that don't fit serverless. Long-running jobs, websockets, custom binaries, or anything that needs more than ~10 seconds of execution time.
- Learning. Touching every layer — DNS, TLS, proxy, process manager — once is worth more than ten Vercel deploys when you're trying to understand how the web actually works.
- Full control. Logs, system tuning, custom networking, region choice — all yours.
If none of those apply, deploy to Vercel and close this tab. If any do, read on.
This guide assumes you're comfortable on the command line, can edit a config file in nano or vim, and have at least seen SSH before. You don't need to be a sysadmin — you'll be one by the end.
## 1. The Target Architecture
Before any commands, here's the picture we're building toward:
```
        Internet
           │
           ▼
┌────────────────────┐
│  Nginx :80, :443   │  ← TLS termination, gzip, static cache
└─────────┬──────────┘
          │ proxy_pass
          ▼
┌────────────────────┐
│  Next.js :3000     │  ← node process, managed by PM2
└─────────┬──────────┘
          │
          ▼
       systemd          ← restarts PM2 on reboot
```
Each piece has one job:
- Nginx is the public-facing door. It handles TLS, compresses responses, and caches static assets so Node never sees those requests.
- Next.js runs on `localhost:3000` — never exposed to the internet directly.
- PM2 keeps the Node process alive across crashes.
- systemd keeps PM2 alive across reboots.
The pattern is "layered guardians." Each layer assumes the one below it might fail, and revives it. Once you internalize this, every other Linux service feels familiar.
I'm using Ubuntu 22.04 LTS. The commands are the same on Debian; adjust the package manager on other distros.
## 2. Provision and Connect
Any cloud provider works — DigitalOcean, Hetzner, AWS EC2, GCP, Vultr. Spin up an instance with:
- Ubuntu 22.04 LTS
- At least 1 vCPU, 1 GB RAM (2 GB recommended for the build step)
- A static IP assigned
SSH in:
```bash
ssh root@your-server-ip
```

### Create a non-root user
Logging in as root is dangerous — every command you run has full system privileges. A typo or compromised process can wipe the box. The fix is a regular user with sudo rights.
```bash
adduser deploy
usermod -aG sudo deploy
# copy your ssh key over so the new user can log in
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
```

Switch to the deploy user — everything that follows runs as `deploy`:
```bash
su - deploy
```

## 3. Install Node.js (Use nvm, Not apt)

Ubuntu's apt repositories ship a badly outdated Node; nvm gives you current LTS releases and easy version switching.
```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install --lts
nvm use --lts
node -v   # should print v22.x or similar
```

## 4. Install and Configure PM2
PM2 is a process manager for Node. It restarts your app on crash, captures logs, and exposes a simple status dashboard.
```bash
npm install -g pm2
```

To survive a reboot, PM2 itself needs to be started by systemd:
```bash
pm2 startup
# It prints a sudo command — copy and run exactly what it gives you
```

`pm2 startup` doesn't run anything itself — it generates a systemctl command tailored to your OS, user, and Node path. You run that command (with sudo) to register PM2 as a systemd service. This is the layer that survives reboots.
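For reference, the generated command looks roughly like the following; the Node version, paths, and init system in yours will differ, so always run the one your machine prints rather than copying this:

```shell
# Example output only: run what YOUR `pm2 startup` prints, not this
sudo env PATH=$PATH:/home/deploy/.nvm/versions/node/v22.11.0/bin \
  /home/deploy/.nvm/versions/node/v22.11.0/lib/node_modules/pm2/bin/pm2 \
  startup systemd -u deploy --hp /home/deploy
```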
## 5. Clone and Build Your App
```bash
cd ~
git clone https://github.com/your-username/your-repo.git app
cd app
npm ci          # clean install from lockfile
npm run build
```

On a 1 GB RAM droplet, the build can run out of memory. If it does, raise Node's heap limit in your build script (e.g. `NODE_OPTIONS=--max-old-space-size=1024 next build`), or upgrade to 2 GB. Builds are the most memory-hungry part of the lifecycle — runtime is far lighter.
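Another common workaround, cheaper than upgrading the droplet, is a swap file: the build only needs the extra headroom for a few minutes. A typical recipe, assuming the server has no swap configured yet:

```shell
# One-time: add 1 GB of swap so the build survives memory spikes
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Swap is slow, but for a once-per-deploy build that's acceptable; steady-state traffic should still fit in RAM.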
## 6. Start the App with PM2
```bash
pm2 start npm --name "nextjs-app" -- start
pm2 save   # persist process list across reboots
```

Verify:
```bash
pm2 status
pm2 logs nextjs-app --lines 20
curl http://localhost:3000   # should return your app's HTML
```

Next.js is now listening on port 3000. The firewall (set up later) will block external access to that port — only Nginx, running on the same machine, will reach it.
## 7. Nginx as a Reverse Proxy
A reverse proxy sits in front of your app and decides what to do with each request before passing it on. We use it for three reasons:
- TLS termination — Nginx handles HTTPS so Node doesn't have to.
- Static asset caching — `_next/static/*` files have content hashes; serve them with a 1-year cache.
- gzip compression — smaller responses, faster pages.
```bash
sudo apt update
sudo apt install nginx -y
sudo systemctl enable nginx
```

Create a site config:
```bash
sudo nano /etc/nginx/sites-available/your-domain.com
```

```nginx
server {
    listen 80;
    server_name your-domain.com www.your-domain.com;

    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml;

    # Serve hashed static assets directly — never hit Node for these
    location /_next/static/ {
        alias /home/deploy/app/.next/static/;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Everything else proxies to Next.js
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Enable and reload:
```bash
sudo ln -s /etc/nginx/sites-available/your-domain.com /etc/nginx/sites-enabled/
sudo nginx -t                 # config test — must print "ok"
sudo systemctl reload nginx
```

## 8. Add HTTPS with Let's Encrypt
Point your domain's DNS A record to your server IP first — Certbot verifies ownership by serving a file from your domain.
```bash
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your-domain.com -d www.your-domain.com
```

Certbot will:
- Obtain a Let's Encrypt certificate
- Modify your Nginx config to listen on `:443` with TLS
- Add an HTTP → HTTPS redirect
- Set up automatic renewal (on Ubuntu this is a systemd timer rather than a cron job; Let's Encrypt certs last 90 days)
Confirm renewal works:
```bash
sudo certbot renew --dry-run
```

## 9. Automate Deploys with GitHub Actions
The goal: push to main → GitHub Actions SSHes into your server, pulls latest code, rebuilds, and restarts PM2.
### Create a dedicated deploy key
Don't reuse your personal SSH key — give the CI runner its own key, scoped to this server only.
```bash
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/github_deploy -N ""
cat ~/.ssh/github_deploy.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/github_deploy   # copy this — it's the private key for GitHub
```

### Add secrets to GitHub
Repo → Settings → Secrets and variables → Actions:
| Secret name | Value |
|---|---|
| `SERVER_HOST` | Your server IP |
| `SERVER_USER` | `deploy` |
| `SERVER_SSH_KEY` | The private key (the contents of `~/.ssh/github_deploy`) |
### The workflow file
Create `.github/workflows/deploy.yml`:

```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to server
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            cd ~/app
            git pull origin main
            npm ci
            npm run build
            pm2 restart nextjs-app
```

Push to `main`, watch the Actions tab — it should SSH in, build, and restart PM2.
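One caveat with inline scripts: depending on the remote shell's settings, a failing `git pull` or `npm ci` may not stop the later steps, so a broken build could still trigger a restart. A more defensive variant is to keep a small script on the server and have the workflow call it. Sketched here, using the path and app name from this guide:

```shell
#!/usr/bin/env bash
# /home/deploy/deploy.sh, called by CI; aborts at the first failure
set -euo pipefail

cd ~/app
git pull origin main
npm ci
npm run build          # on failure, abort before touching PM2
pm2 restart nextjs-app
```

The workflow's `script:` block then shrinks to `bash /home/deploy/deploy.sh`.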
Building on the server is the simplest setup but the most fragile — small VMs run out of memory mid-build. A more robust pattern is to build inside the GitHub Actions runner and rsync only the .next output and package.json to the server. That's a follow-up post.
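As a preview of that pattern, the workflow's shape changes to roughly this. A sketch only, not tested end-to-end; it assumes the same secrets as above and an SCP-style copy action on the runner:

```yaml
# Sketch: build in CI, ship artifacts, never compile on the VM
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: 22
  - run: npm ci && npm run build
  - name: Ship build artifacts to the server
    uses: appleboy/scp-action@v0.1.7
    with:
      host: ${{ secrets.SERVER_HOST }}
      username: ${{ secrets.SERVER_USER }}
      key: ${{ secrets.SERVER_SSH_KEY }}
      source: ".next,public,package.json,package-lock.json"
      target: "~/app"
```

A final SSH step would then run `npm ci --omit=dev` and `pm2 restart nextjs-app` on the server.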
## 10. Harden the Firewall
Allow only the ports you need:
```bash
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'   # ports 80 and 443
sudo ufw enable
sudo ufw status
```

UFW blocks everything else — including direct access to port 3000, which is exactly what you want.
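While you're hardening, SSH itself deserves the same treatment. Once you've confirmed that key-based login as `deploy` works, you can disable root login and password auth entirely. This goes beyond the firewall setup above, so treat it as an optional extra:

```shell
# DANGER: only run this after `ssh deploy@your-server-ip` works with your key,
# or you can lock yourself out of the box
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```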
## 11. Sanity Checklist
Before calling it done:
- App responds at `https://your-domain.com`
- HTTP redirects to HTTPS
- `pm2 status` shows `online`
- `sudo certbot renew --dry-run` passes
- A push to `main` triggers a deploy and the app comes back up
- `curl http://your-server-ip:3000` from outside the server fails (the firewall is doing its job)
## 12. OUR TAKE: When This Stack Is Right (and When It Isn't)
I deployed my portfolio this way, and after living with it for a while, here's the honest summary.
Where this shines:
- Costs are flat and predictable — a $5–10 VM handles a personal portfolio with room to spare.
- Every layer is debuggable with standard tools (`journalctl`, `pm2 logs`, `nginx -t`). No black-box platform.
- The skills transfer. Once you've done this, deploying any web app to any VM is the same pattern with different binaries.
Where it gets painful:
- You're on the hook for OS patches, log rotation, disk-full alerts, and certificate edge cases. Vercel handles all of this for you.
- Single-VM means single point of failure. No automatic failover, no global CDN. Your site goes down with the box.
- Build-on-server is brittle on small VMs. The "right" version of this eventually moves builds into CI.
My rule of thumb: if your project would cost more than $20/month on Vercel, or if you have any non-HTTP workload, the VM is worth the operational tax. If it's a static-leaning portfolio with light traffic, Vercel's free tier is genuinely hard to beat — and you should only do this for the learning.
What surprised me most: how little magic is involved. Strip away the marketing pages of every PaaS and you find something close to this stack underneath. Knowing that changed how I read every "deploy in one click" tutorial.
## What's Next
From here you can layer on:
- Environment variables — store in `/home/deploy/app/.env.local`, never commit them
- Zero-downtime deploys — run two PM2 instances behind Nginx with `pm2 reload` instead of `restart`
- Monitoring — `pm2 monit` for a live view, UptimeRobot for external pings
- Log rotation — `pm2 install pm2-logrotate` to stop logs from filling the disk
- Builds in CI — ship only the `.next` artifact, leave the VM with just Node + PM2 + Nginx
The stack described here (Ubuntu + Nginx + PM2 + Certbot + GitHub Actions) is boring in the best possible way. It's what a huge fraction of production Next.js deployments actually run on.
Although this blog is my original work, I used AI assistance to refine structure, improve clarity, and enhance readability.
