Tutorial

How to Host Multiple Websites on One VPS: The Complete Guide

March 31, 2026


Why Host Multiple Sites on One VPS?

Running each website on its own VPS sounds clean. Isolated, predictable, low blast radius if something goes wrong. But at $20-40 per VPS per month, ten websites means a $200-400 monthly bill before you've written a single line of code. For independent developers, freelancers, and small agencies, that math doesn't work.

The good news: a single well-configured VPS with 4 GB RAM can comfortably host 20 to 50 low-to-medium traffic websites. Here's what that looks like in practice — and when you should push back against it.

The Real Cost Comparison

  • 5 separate VPS: $100-200/month, 5 login portals, 5 maintenance windows, 5 certificate renewals
  • 1 VPS, 5 sites: $20-40/month, one admin overhead, centralized monitoring, shared SSL automation
  • Break-even point: Usually 3+ sites on the same hardware tier

When NOT to Consolidate

Multi-site hosting is not always the right answer. Avoid consolidating when:

  • Any single site receives more than 50,000 monthly visitors (resource contention becomes real)
  • Compliance requirements mandate physical separation (HIPAA, PCI-DSS)
  • A site has a history of being compromised (one bad actor can affect neighbors)
  • Sites have wildly different uptime requirements — one maintenance window affects all
  • Customers pay for dedicated resources and expect them

For everything else, a properly configured multi-site VPS is a smart, professional setup — not a corner-cutting shortcut.


Prerequisites

Before setting up multiple sites, make sure your VPS has the right foundation:

Minimum Hardware for Multi-Site Hosting

| Sites | RAM | CPU | Disk | Notes |
|-------|-----|-----|------|-------|
| 1-5 | 2 GB | 1-2 vCPU | 40 GB SSD | Low traffic, static or simple PHP |
| 5-20 | 4 GB | 2 vCPU | 80 GB SSD | Mixed traffic, WordPress, databases |
| 20-50 | 8 GB | 4 vCPU | 160 GB SSD | Active e-commerce, caching essential |
| 50+ | 16 GB+ | 8 vCPU+ | 320 GB+ SSD | Requires serious monitoring and tuning |

What You Need Before Starting

  • A VPS with root access running Ubuntu 22.04 or 24.04 (or Debian 12+)
  • Domain names pointed to your server's IP (DNS A records propagated)
  • Basic Linux command line familiarity (file editing, service management)
  • A web server installed: Nginx (recommended) or Apache
  • SSH key authentication set up — never use passwords on production

Directory Structure: Give Every Site Its Own Home

Before touching any web server config, establish a clean directory layout. This matters for permissions, backups, and your sanity at 3 AM when something breaks.

/var/www/
├── site1.com/
│   ├── public_html/          ← document root
│   ├── logs/                 ← per-site access and error logs
│   └── tmp/                  ← PHP session and temp files
├── site2.com/
│   ├── public_html/
│   ├── logs/
│   └── tmp/
└── site3.com/
    ├── public_html/
    ├── logs/
    └── tmp/

Create this structure with a script you can reuse:

#!/bin/bash
# setup-site.sh — create the per-site directory layout and system user
set -euo pipefail

DOMAIN=${1:?usage: setup-site.sh domain.com}

mkdir -p "/var/www/$DOMAIN"/{public_html,logs,tmp}

# Create a dedicated system user per site (security!)
useradd -r -s /usr/sbin/nologin -d "/var/www/$DOMAIN" "$DOMAIN"
chown -R "$DOMAIN":www-data "/var/www/$DOMAIN"
chmod -R 750 "/var/www/$DOMAIN"
chmod 755 "/var/www/$DOMAIN/public_html"

echo "$DOMAIN directory structure created."

Run it as root for each new site (it creates users and changes ownership): sudo bash setup-site.sh site1.com

The key principle here is one Linux user per site. When PHP-FPM runs as site1.com, it physically cannot read files owned by site2.com. This is proper isolation — not a config flag you can forget to set.
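You can spot-check that the layout enforces this. A small audit function (the function name and script are ours, not a standard tool) flags any site directory that grants the "other" permission class any access at all — with the one-user-per-site scheme, there should be none:

```shell
#!/bin/bash
# audit-perms.sh — flag site directories that "other" users can enter or read.
audit_world_access() {
    local base=$1
    # -perm /o=rwx matches if ANY of read/write/execute is set for "other"
    find "$base" -mindepth 1 -maxdepth 1 -type d -perm /o=rwx
}

# On a live server (prints nothing when every site is locked down):
# audit_world_access /var/www
```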


Nginx Virtual Hosts (Server Blocks)

Nginx is the recommended web server for multi-site setups. Its event-driven architecture handles many concurrent connections with minimal memory overhead — critical when dozens of sites share one server.

How Server Blocks Work

Nginx uses server blocks to decide which configuration handles each incoming request. When a request arrives for site1.com, Nginx reads the Host header and routes it to the matching server block. Multiple blocks can share port 80/443 — Nginx handles the sorting.
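Because routing is driven entirely by the Host header, you can test a server block before DNS propagates by forging that header against the server's IP. A sketch (the wrapper name and example IP are ours):

```shell
#!/bin/bash
# probe-vhost.sh — ask a server which status code it returns for a given
# virtual host, without touching DNS: curl sends the Host header explicitly.
probe_vhost() {
    local host=$1 ip=$2
    curl -s -o /dev/null -w '%{http_code}' -H "Host: $host" "http://$ip/"
}

# Against a live server:
# probe_vhost site1.com 203.0.113.10      # 200 → the right block answered
# probe_vhost unknown.example 203.0.113.10  # the catch-all should refuse this
```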

Configuration File per Site

Store each site's config in /etc/nginx/sites-available/, then symlink it to /etc/nginx/sites-enabled/:

sudo nano /etc/nginx/sites-available/site1.com

Basic HTTP configuration for site1.com:

server {
    listen 80;
    listen [::]:80;
    server_name site1.com www.site1.com;

    root /var/www/site1.com/public_html;
    index index.php index.html;

    access_log /var/www/site1.com/logs/access.log;
    error_log  /var/www/site1.com/logs/error.log;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.4-fpm-site1.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location ~ /\.ht {
        deny all;
    }
}

For site2.com, copy the file and change server_name, root, log paths, and fastcgi_pass socket.

Enable the site:

sudo ln -s /etc/nginx/sites-available/site1.com /etc/nginx/sites-enabled/
sudo nginx -t            # test config — ALWAYS do this first
sudo nginx -s reload     # graceful reload, zero downtime

Default Catch-All Block

Add a catch-all server block to handle requests for domains not configured on this server — otherwise Nginx serves the first enabled config to unknown hosts, which can leak information:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 444;   # close connection immediately, no response
}

Apache Virtual Hosts

If you prefer Apache — or need .htaccess support for legacy applications — the process is similar but uses a different syntax.

Virtual Host Configuration

sudo nano /etc/apache2/sites-available/site1.com.conf

<VirtualHost *:80>
    ServerName site1.com
    ServerAlias www.site1.com
    DocumentRoot /var/www/site1.com/public_html

    ErrorLog  /var/www/site1.com/logs/error.log
    CustomLog /var/www/site1.com/logs/access.log combined

    <Directory /var/www/site1.com/public_html>
        Options -Indexes +FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php8.4-fpm-site1.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>

Enable the site and reload:

sudo a2ensite site1.com.conf
sudo a2dissite 000-default.conf    # disable the default if it's in the way
sudo apache2ctl configtest
sudo systemctl reload apache2

Nginx vs Apache for Multi-Site: Quick Comparison

| Factor | Nginx | Apache |
|--------|-------|--------|
| Memory per idle site | ~2 MB | ~8 MB (prefork) |
| Static file serving | Excellent | Good |
| .htaccess support | No (must convert rules) | Yes (performance cost) |
| Config learning curve | Moderate | Gentler for beginners |
| PHP handling | Via PHP-FPM socket (faster) | Via mod_php or FPM proxy |
| Recommended for 10+ sites | Yes | Possible, but tune carefully |

PHP-FPM Per-Site Pools: The Right Way to Run PHP

This is the part most guides skip — and it's where real isolation happens. Running all sites through a shared PHP process is asking for trouble: one site's misconfigured script can read another site's files, a memory leak in one pool crashes all PHP processes, and you can't set different PHP versions per site.

The solution: one PHP-FPM pool per site.

Why Per-Site Pools Matter

  • Security: open_basedir restricts PHP file access to that site's directory only
  • Stability: One site's crashed pool doesn't affect other sites
  • Flexibility: site1.com can run PHP 8.1, site2.com can run PHP 8.4
  • Resource control: Set per-site memory limits, max processes, timeouts

Creating a Pool Config

Create /etc/php/8.4/fpm/pool.d/site1.com.conf:

[site1.com]
user  = site1.com
group = www-data

listen = /run/php/php8.4-fpm-site1.sock
listen.owner = www-data
listen.group = www-data
listen.mode  = 0660

pm = dynamic
pm.max_children      = 5
pm.start_servers     = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests      = 500

php_admin_value[open_basedir]      = /var/www/site1.com/:/tmp/
php_admin_value[upload_tmp_dir]    = /var/www/site1.com/tmp/
php_admin_value[session.save_path] = /var/www/site1.com/tmp/
php_admin_flag[display_errors]     = off
php_admin_value[error_log]         = /var/www/site1.com/logs/php_error.log
php_admin_value[memory_limit]      = 128M
php_admin_value[max_execution_time] = 30

Restart the relevant PHP-FPM version:

sudo systemctl restart php8.4-fpm

Your Nginx config's fastcgi_pass then points to unix:/run/php/php8.4-fpm-site1.sock.
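After a restart it's worth confirming each pool actually created its socket — a stale or missing socket is the classic cause of "502 Bad Gateway". A sketch (the helper name is ours; the paths match the pool configs above):

```shell
#!/bin/bash
# check-fpm-sockets.sh — verify that each expected PHP-FPM socket exists
# and really is a Unix socket (not a stale regular file).
check_php_sockets() {
    local path rc=0
    for path in "$@"; do
        if [ -S "$path" ]; then
            echo "OK      $path"
        else
            echo "MISSING $path"
            rc=1
        fi
    done
    return $rc
}

# On the server:
# check_php_sockets /run/php/php8.4-fpm-site1.sock /run/php/php8.4-fpm-site2.sock
```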

Different PHP Versions Per Site

If site1.com needs PHP 8.1 (legacy app) and site2.com needs PHP 8.4:

# Install both versions (Ubuntu/Debian with ondrej/php PPA)
sudo add-apt-repository ppa:ondrej/php
sudo apt install php8.1-fpm php8.4-fpm

# Create separate pool configs:
# /etc/php/8.1/fpm/pool.d/site1.com.conf  (runs as php8.1-fpm)
# /etc/php/8.4/fpm/pool.d/site2.com.conf  (runs as php8.4-fpm)

# Nginx site1.com points to:
fastcgi_pass unix:/run/php/php8.1-fpm-site1.sock;

# Nginx site2.com points to:
fastcgi_pass unix:/run/php/php8.4-fpm-site2.sock;

SSL Certificates: HTTPS for Every Site

Every site needs HTTPS in 2026 — not just for SEO, but because browsers actively warn users on HTTP sites. Certbot handles Let's Encrypt certificates automatically.

Installing Certbot

sudo apt install certbot python3-certbot-nginx -y

Issuing a Certificate

# Single domain + www
sudo certbot --nginx -d site1.com -d www.site1.com

# Multiple separate domains in one command
sudo certbot --nginx \
  -d site1.com -d www.site1.com \
  -d site2.com -d www.site2.com

Certbot automatically modifies your Nginx config to add SSL and redirect HTTP to HTTPS. Check what it added:

sudo nano /etc/nginx/sites-available/site1.com

Wildcard Certificates (DNS Challenge)

For *.site1.com covering all subdomains, use the DNS challenge:

sudo certbot certonly \
  --manual \
  --preferred-challenges=dns \
  -d site1.com \
  -d *.site1.com

Certbot will prompt you to add a _acme-challenge TXT record to your DNS. Add it, wait for the record to propagate (check with dig before continuing), then press Enter — the certificate is issued.

Auto-Renewal

Certbot installs a systemd timer or cron job automatically. Verify it:

sudo systemctl status certbot.timer
# or test the renewal process:
sudo certbot renew --dry-run

Let's Encrypt certificates expire after 90 days. Auto-renewal runs twice daily and renews if less than 30 days remain.


Database Per Site: Never Share Credentials

One of the most common — and most dangerous — shortcuts in multi-site hosting is sharing a single database user across multiple sites. If one site's WordPress installation is compromised, that database user has access to every other site's data.

The rule is simple: one database user, one database, one site.

MySQL Setup Per Site

sudo mysql -u root -p

-- For site1.com
CREATE DATABASE site1_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'site1_user'@'localhost' IDENTIFIED BY 'use-a-strong-random-password';
GRANT ALL PRIVILEGES ON site1_db.* TO 'site1_user'@'localhost';

-- For site2.com
CREATE DATABASE site2_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'site2_user'@'localhost' IDENTIFIED BY 'different-strong-password';
GRANT ALL PRIVILEGES ON site2_db.* TO 'site2_user'@'localhost';

FLUSH PRIVILEGES;
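Generating these statements from a template keeps the site/database/user naming consistent and the passwords random. A sketch (the helper name and the site1/site2 shorthand are ours):

```shell
#!/bin/bash
# gen-site-db.sh — emit the per-site MySQL provisioning statements with a
# random password, so the db/user/site naming never drifts apart.
gen_site_db_sql() {
    local site=$1                        # e.g. "site3"
    local pass
    pass=$(openssl rand -base64 24)      # 24 random bytes, base64-encoded
    cat <<SQL
CREATE DATABASE ${site}_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER '${site}_user'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${site}_db.* TO '${site}_user'@'localhost';
SQL
    echo "-- password for ${site}_user: ${pass}" >&2   # record it somewhere safe
}

# Pipe straight into mysql:
# gen_site_db_sql site3 | sudo mysql
```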

PostgreSQL Setup Per Site

sudo -u postgres psql

CREATE USER site1_user WITH PASSWORD 'use-a-strong-random-password';
CREATE DATABASE site1_db OWNER site1_user;

CREATE USER site2_user WITH PASSWORD 'different-strong-password';
CREATE DATABASE site2_db OWNER site2_user;

-- Revoke public schema access (security hardening)
\c site1_db
REVOKE ALL ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO site1_user;

Storing Credentials Safely

Never put database passwords in your Nginx config or anywhere web-accessible. Store them in the application's config file (e.g., wp-config.php, .env) and make sure that file is not readable by other site users:

chmod 600 /var/www/site1.com/public_html/.env
chown site1.com:site1.com /var/www/site1.com/public_html/.env

DNS Configuration: Pointing Domains to Your Server

For multi-site hosting, every domain needs to point to the same server IP — but with its own DNS records. The web server handles the routing once the request arrives.

Required DNS Records Per Site

| Record Type | Name | Value | Purpose |
|-------------|------|-------|---------|
| A | @ | your.server.ip | Root domain → server |
| A | www | your.server.ip | www subdomain → server |
| AAAA | @ | your::ipv6::address | IPv6 access (optional) |
| MX | @ | mail.site1.com | Email delivery (if hosting email) |
| TXT | @ | v=spf1 ip4:your.server.ip -all | SPF for email (if hosting email) |

If you use Cloudflare for DNS, the process is:

  1. Add each domain to Cloudflare
  2. Create A records pointing to your VPS IP (proxied orange cloud or DNS-only grey cloud — your choice)
  3. Update nameservers at your registrar to Cloudflare's
  4. Wait for propagation (minutes with Cloudflare, up to 48 hours with some registrars)

Important: When you first set up a site, disable Cloudflare proxying (grey cloud) until you have confirmed the site works and SSL is issued. Otherwise Certbot's HTTP challenge can fail.
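You can confirm propagation from a public resolver before running Certbot. A sketch (the wrapper name is ours; dig comes from the dnsutils package):

```shell
#!/bin/bash
# dns-check.sh — confirm that a public resolver already sees the expected
# A record before attempting an HTTP-01 certificate challenge.
dns_matches() {
    local domain=$1 expected_ip=$2
    # Ask Cloudflare's resolver directly, bypassing any local caching
    [ "$(dig +short @1.1.1.1 "$domain" A | tail -1)" = "$expected_ip" ]
}

# if dns_matches site1.com 203.0.113.10; then
#     echo "propagated — safe to run certbot"
# fi
```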


WordPress Multi-Site: The Right Approach

WordPress has a built-in multisite feature that runs multiple sites from one installation. Avoid it for independent client sites — it creates shared failure points and makes migrations painful.

Instead, use separate WordPress installations per site. Each installation gets:

  • Its own wp-config.php with unique database credentials and salts
  • Its own PHP-FPM pool with open_basedir isolation
  • Its own Linux user owning the files
  • Its own database user with no cross-site access

WordPress-Specific Configuration

# wp-config.php additions for per-site security
define('DB_NAME',     'site1_db');
define('DB_USER',     'site1_user');
define('DB_PASSWORD', 'your-password');
define('DB_HOST',     'localhost');

// Redis object cache (different DB number per site!)
define('WP_REDIS_HOST',     '127.0.0.1');
define('WP_REDIS_PORT',     6379);
define('WP_REDIS_DATABASE', 1);  // site2 uses 2, site3 uses 3, etc.

// Unique keys - generate at: https://api.wordpress.org/secret-key/1.1/salt/
define('AUTH_KEY',         'unique-random-string-per-site');
// ... (8 unique keys, never reuse across sites)

Redis Object Cache Per Site

If multiple WordPress sites use Redis for object caching, they must use different Redis databases (0-15) or separate key prefixes — otherwise their caches collide:

# Site 1: Redis DB 1
define('WP_REDIS_DATABASE', 1);
define('WP_REDIS_PREFIX', 'site1:');

# Site 2: Redis DB 2
define('WP_REDIS_DATABASE', 2);
define('WP_REDIS_PREFIX', 'site2:');

Security Isolation: Protecting Sites from Each Other

The worst-case scenario in multi-site hosting is a compromised site reading or modifying another site's files. Proper isolation prevents this at multiple layers.

Security Isolation Checklist

| Layer | What It Does | How to Implement |
|-------|--------------|------------------|
| Linux users | File ownership enforcement | One system user per site, home 700 |
| PHP open_basedir | Restricts PHP file access | Set in PHP-FPM pool config |
| PHP disable_functions | Blocks dangerous PHP functions | exec, shell_exec, passthru, system in pool config |
| Separate PHP pools | Process isolation | One socket/pool per site |
| Database users | Data isolation | One DB user per site, no cross-grants |
| SSH/SFTP jails | File access via SSH | chroot per user, SFTP-only for clients |
| Firewall (UFW/nftables) | Port-level protection | Allow 80, 443, your SSH port only |
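The firewall layer takes only a few UFW commands. A sketch, wrapped in a function so the SSH port is explicit — pass yours, or you will lock yourself out the moment the default-deny rule lands:

```shell
#!/bin/bash
# firewall.sh — default-deny inbound, then open only web + SSH.
setup_firewall() {
    local ssh_port=${1:-22}
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow "$ssh_port"/tcp comment 'SSH'
    sudo ufw allow 80/tcp comment 'HTTP'
    sudo ufw allow 443/tcp comment 'HTTPS'
    sudo ufw --force enable
}

# setup_firewall 2222   # if SSH listens on 2222 instead of 22
```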

Add dangerous function restrictions to each PHP-FPM pool:

php_admin_value[disable_functions] = exec,passthru,shell_exec,system,proc_open,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source,symlink

(Pool configs use INI syntax, which has no line continuation — keep the whole list on one line. And trim it to what your sites tolerate: disabling curl_exec, for example, breaks applications that make outbound HTTP requests, including many WordPress plugins.)

CageFS-Style Isolation

For maximum isolation — especially in shared hosting environments where you don't trust your users — consider a chroot-style setup where each user has a private filesystem view. This prevents even proc_open escapes and symlink attacks.

Setting this up manually is complex. Panels like Panelica implement this as a built-in 5-layer isolation system (cgroups v2, Linux namespaces, SSH chroot jails, per-user PHP-FPM pools, and Unix permission enforcement) that applies to every user automatically.


Resource Management: Don't Let One Site Starve the Others

Without resource limits, one site with a poorly optimized database query can consume all server RAM and make every other site unresponsive. This is resource contention — the main operational risk of multi-site hosting.

PHP-FPM Process Budgeting

The formula for pm.max_children per pool:

max_children = (available_RAM_for_PHP) / (avg_PHP_process_size_MB)

Example:
- 4 GB VPS: reserve 1 GB for OS + DB = 3 GB for PHP
- Average WordPress PHP process: 30-60 MB
- 3000 MB / 40 MB = 75 total PHP processes across all pools
- 10 sites: ~7-8 max_children per site (adjust based on traffic)
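That arithmetic is worth scripting so you can recompute it after every resize or when you add sites — a minimal sketch of the formula above (the function name is ours):

```shell
#!/bin/bash
# fpm-budget.sh — split a PHP RAM budget across N sites.
calc_max_children() {
    local php_ram_mb=$1 avg_proc_mb=$2 sites=$3
    local total=$(( php_ram_mb / avg_proc_mb ))
    echo "total PHP processes: $total"
    echo "per-site pm.max_children: $(( total / sites ))"
}

# The worked example from above — 3 GB for PHP, 40 MB per process, 10 sites:
# calc_max_children 3000 40 10
```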

Monitor actual process sizes:

# Average memory per PHP-FPM process
ps --no-headers -o "rss,cmd" -C php-fpm8.4 | \
  awk '{ sum+=$1; count++ } END { print sum/count/1024 " MB average" }'

Nginx Connection Limits Per Site

# In http {} block of nginx.conf:
limit_req_zone $binary_remote_addr zone=site1_limit:10m rate=10r/s;

# In the site1.com server block:
limit_req zone=site1_limit burst=20 nodelay;

Linux Cgroups for Per-Site Resource Limits

For hard CPU and memory limits on a site user's own processes (SSH sessions, cron jobs, WP-CLI runs):

# Limit site1.com user to 25% CPU and 512 MB RAM
systemctl set-property user-$(id -u site1.com).slice CPUQuota=25%
systemctl set-property user-$(id -u site1.com).slice MemoryMax=512M

One caveat: PHP-FPM workers run inside the PHP-FPM service's cgroup, not the user's slice, so for web traffic the per-pool pm.max_children and memory_limit settings above remain the effective cap. What the slice limits buy you is that a runaway cron job or shell session on site1.com can never starve the rest of the server.


Backup Strategy: Per-Site, Automated, Off-Server

With multiple sites sharing one server, a failed disk means everything goes down at once. Your backup strategy must account for this.

What to Back Up Per Site

  • Files: /var/www/site1.com/ — entire site directory
  • Database: Per-site mysqldump or pg_dump
  • Nginx config: /etc/nginx/sites-available/site1.com
  • PHP-FPM config: /etc/php/8.4/fpm/pool.d/site1.com.conf
  • SSL certs: /etc/letsencrypt/live/site1.com/

Automated Backup Script

#!/bin/bash
# backup-site.sh — run via cron for each site
set -euo pipefail

SITE=${1:?usage: backup-site.sh domain.com}
NAME=${SITE%%.*}                     # site1.com → site1 (matches the DB naming)
BACKUP_DIR=/backups/$SITE
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"

# Files
tar -czf "$BACKUP_DIR/files_$DATE.tar.gz" "/var/www/$SITE/public_html/"

# Database (MySQL) — credentials come from a root-only option file, never
# the command line, where any local user could read them via `ps`. The file
# (our naming convention) holds a [client] section with user= and password=.
mysqldump --defaults-extra-file="/root/.my.cnf-$SITE" "${NAME}_db" | \
  gzip > "$BACKUP_DIR/db_$DATE.sql.gz"

# Keep 7 daily backups
find "$BACKUP_DIR" -name "*.gz" -mtime +7 -delete

echo "Backup complete for $SITE: $DATE"

Add to cron (crontab -e):

# Back up each site at 2 AM
0 2 * * * /usr/local/bin/backup-site.sh site1.com
0 2 * * * /usr/local/bin/backup-site.sh site2.com

Then sync to an off-server location (S3, Hetzner Storage Box, Backblaze B2) with rclone or rsync. Local-only backups don't protect against server loss.
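A sketch of that off-server step with rclone ("offsite" is an assumed remote name — create yours first with `rclone config`):

```shell
#!/bin/bash
# offsite-sync.sh — push local backups to a remote configured in rclone.
# "offsite:vps-backups" is an assumed remote/bucket, not a default.
offsite_sync() {
    local site=$1 remote=${2:-offsite:vps-backups}
    rclone sync "/backups/$site" "$remote/$site" \
        --transfers 4 --checksum --log-level INFO
}

# Run after the local backup cron finishes, e.g.:
# offsite_sync site1.com
```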


Monitoring Multiple Sites

When you manage 10+ sites, you can't manually check each one. You need automated monitoring that tells you when something breaks before your client does.

What to Monitor

  • Uptime: HTTP check per site every 1-5 minutes (tools: UptimeRobot, Uptime Kuma self-hosted)
  • SSL expiry: Alert when certificates have less than 30 days remaining
  • Disk usage per site: du -sh /var/www/*/public_html in a cron alert
  • Server resources: CPU, RAM, disk I/O at the server level (Prometheus + Grafana, Netdata)
  • PHP error rates: Watch /var/www/*/logs/php_error.log for spikes
  • Failed login attempts: Fail2ban reports and SSH auth logs
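The SSL-expiry item can be scripted locally with openssl instead of relying on a third-party service. A sketch (the function name is ours; the paths match Certbot's standard layout):

```shell
#!/bin/bash
# cert-expiry.sh — days until a PEM certificate expires.
cert_days_left() {
    local end
    end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
    echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# Alert across every Certbot-managed site:
# for pem in /etc/letsencrypt/live/*/cert.pem; do
#     days=$(cert_days_left "$pem")
#     [ "$days" -lt 30 ] && echo "ALERT: $pem expires in $days days"
# done
```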

Per-Site Disk Usage Alert

#!/bin/bash
# Check disk quotas per site
LIMIT_GB=5
LIMIT=$((LIMIT_GB * 1024 * 1024 * 1024))
for SITE in /var/www/*/; do
    SITENAME=$(basename "$SITE")
    USAGE=$(du -sb "$SITE/public_html" 2>/dev/null | awk '{print $1}')
    [ -z "$USAGE" ] && continue    # skip dirs without a public_html
    if [ "$USAGE" -gt "$LIMIT" ]; then
        echo "ALERT: $SITENAME exceeds ${LIMIT_GB}GB ($(du -sh "$SITE/public_html" | cut -f1))"
    fi
done

When to Scale: Recognizing the Limits

Multi-site VPS hosting has a ceiling. Here are the signals that tell you it's time to move up:

Vertical Scaling Signals

  • Average CPU usage consistently above 60-70%
  • PHP-FPM queuing requests (check pm.status_path output)
  • Swap usage regularly above 20%
  • Response times above 500ms for static content
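The PHP-FPM queuing signal requires `pm.status_path = /status` in the pool config; you can then query a pool's socket directly with the cgi-fcgi tool (Debian/Ubuntu package libfcgi-bin) — a sketch, with our wrapper name:

```shell
#!/bin/bash
# fpm-status.sh — query a PHP-FPM pool's status page over its Unix socket.
# Requires `pm.status_path = /status` in the pool config and cgi-fcgi installed.
fpm_status() {
    local sock=$1
    SCRIPT_NAME=/status SCRIPT_FILENAME=/status REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect "$sock"
}

# fpm_status /run/php/php8.4-fpm-site1.sock
# Watch the "listen queue" line — anything persistently above 0 means
# requests are waiting for a free PHP worker in that pool.
```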

Vertical scaling (more RAM and CPU on the same server) is usually faster and cheaper than horizontal scaling for the first growth phase. Most VPS providers let you resize without reinstalling.

When to Give a Site Its Own Server

  • A site drives more than 30-40% of total server load on its own
  • A site has different maintenance windows than others (e.g., a client with SLA requirements)
  • A site has security requirements that mandate isolation (payment processing, healthcare)
  • Traffic spikes on one site cause degradation on others (check your monitoring)

Horizontal Scaling

When a single server isn't enough for a specific site, the path is:

  1. Put a load balancer (HAProxy or Nginx upstream) in front
  2. Move the database to a dedicated DB server
  3. Use shared storage (NFS or object storage) for uploaded files
  4. Add web server nodes as needed

This is infrastructure engineering territory, not VPS administration. Get there when you need to — not before.


Complete Setup Checklist

Use this for each new site you add to the server:

  • DNS A record created and propagated
  • Directory structure created: /var/www/sitename.com/{public_html,logs,tmp}
  • Dedicated Linux user created: useradd sitename.com
  • Correct file ownership: chown -R sitename.com:www-data /var/www/sitename.com
  • Nginx/Apache virtual host config created and enabled
  • PHP-FPM pool config created with open_basedir and disable_functions
  • PHP-FPM service restarted
  • Nginx config tested (nginx -t) and reloaded
  • SSL certificate issued with Certbot
  • HTTPS redirect verified (HTTP → HTTPS)
  • Database and user created with site-specific credentials
  • Database credentials stored securely (not world-readable)
  • Uptime monitoring added for this domain
  • Backup cron job added for files + database
  • Resource limits set (PHP-FPM pool, Nginx rate limit, cgroups if needed)

The Managed Panel Alternative

Everything described in this guide — virtual hosts, PHP-FPM pools, SSL, databases, users, isolation, backups, monitoring — is what a modern server management panel automates for you.

Setting it up manually once teaches you how it works, and that knowledge is genuinely valuable. But maintaining it manually across 20+ sites, with multiple team members, across multiple servers, is where the real cost shows up in human hours rather than dollars.

Panels like Panelica implement this entire stack — including the 5-layer security isolation described in the security section — as the default for every site you create. The PHP-FPM pool, the Linux user, the SSL certificate, the Nginx config: all generated and kept in sync automatically, with per-user resource monitoring and scheduled backups built in.

Whether you go the manual route or the managed route, understanding what happens under the hood is what separates a server administrator from someone who just clicks buttons. Now you know both.
