Introduction: The Economics of Multi-Site Hosting
Running each website on its own dedicated server is the most isolated approach, but it is also the most expensive. A basic VPS costs $5-20 per month, meaning 10 websites would cost $50-200 monthly. In contrast, a single $20-40 VPS with 4 GB RAM can comfortably host 20-50 lightweight websites — reducing costs by 80% or more.
Multi-site hosting is the backbone of the web hosting industry. Every shared hosting provider, from budget hosts to premium services, runs hundreds or thousands of websites on shared servers. Understanding how this works gives you the power to build your own hosting infrastructure, save money, and maintain full control.
In this guide, we will cover everything: Nginx server blocks, Apache virtual hosts, PHP-FPM isolation per site, SSL certificates for each domain, file permissions, resource management, and security considerations for running multiple sites safely on a single server.
How Multi-Site Hosting Works
The fundamental mechanism is name-based virtual hosting. When a browser connects to your server, it sends the domain name in the HTTP Host header (or the SNI extension in TLS). The web server examines this header and routes the request to the correct site configuration and document root.
(Diagram: the browser resolves example.com to your server's IP, sends the request with a Host: example.com header, the server matches that name to a site configuration and its document root /var/www/example.com, and returns the response.)
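You can exercise this routing directly with curl, overriding DNS so the request reaches your server with the right Host header (and, for HTTPS, the right SNI) even before a domain's records propagate. The IP below is the example server address used throughout this guide:

```shell
$ curl -H "Host: site1.com" http://198.51.100.50/
# For HTTPS, --resolve sets both SNI and the Host header correctly:
$ curl --resolve site1.com:443:198.51.100.50 https://site1.com/
```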
Nginx Server Blocks
Nginx uses "server blocks" (analogous to Apache's virtual hosts) to define how each domain is handled. Here is a complete setup for hosting multiple sites:
Directory Structure
$ sudo mkdir -p /var/www/site1.com/public_html
$ sudo mkdir -p /var/www/site2.com/public_html
$ sudo mkdir -p /var/www/site3.com/public_html
# Create separate log directories
$ sudo mkdir -p /var/log/nginx/site1.com
$ sudo mkdir -p /var/log/nginx/site2.com
$ sudo mkdir -p /var/log/nginx/site3.com
Server Block Configuration
# /etc/nginx/sites-available/site1.com
server {
    listen 80;
    listen [::]:80;
    server_name site1.com www.site1.com;
    root /var/www/site1.com/public_html;
    index index.php index.html;

    access_log /var/log/nginx/site1.com/access.log;
    error_log /var/log/nginx/site1.com/error.log;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.3-fpm-site1.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Enable the Site
$ sudo ln -s /etc/nginx/sites-available/site1.com /etc/nginx/sites-enabled/
# Test configuration
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# Reload Nginx
$ sudo nginx -s reload
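Repeating this for every domain gets tedious. As a minimal sketch, a small shell helper can stamp out server blocks from a template; `render_vhost` is a hypothetical function (not a standard tool), and the paths follow the layout used in this guide. Extend the template with the PHP location blocks shown above as needed:

```shell
# Print a bare-bones HTTP server block for the given domain.
render_vhost() {
    local domain="$1"
    cat <<EOF
server {
    listen 80;
    listen [::]:80;
    server_name ${domain} www.${domain};
    root /var/www/${domain}/public_html;
    index index.php index.html;
    access_log /var/log/nginx/${domain}/access.log;
    error_log /var/log/nginx/${domain}/error.log;
}
EOF
}

# Usage:
# render_vhost site4.com | sudo tee /etc/nginx/sites-available/site4.com
```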
Apache Virtual Hosts
If you prefer Apache, the equivalent setup uses VirtualHost directives:
# /etc/apache2/sites-available/site1.com.conf
<VirtualHost *:80>
    ServerName site1.com
    ServerAlias www.site1.com
    DocumentRoot /var/www/site1.com/public_html

    ErrorLog /var/log/apache2/site1.com-error.log
    CustomLog /var/log/apache2/site1.com-access.log combined

    <Directory /var/www/site1.com/public_html>
        AllowOverride All
        Require all granted
    </Directory>

    <FilesMatch \.php$>
        SetHandler "proxy:unix:/var/run/php/php8.3-fpm-site1.sock|fcgi://localhost"
    </FilesMatch>
</VirtualHost>
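On Debian/Ubuntu, enable the FastCGI proxy modules and the vhost itself before testing (the file is assumed to be saved as /etc/apache2/sites-available/site1.com.conf):

```shell
$ sudo a2enmod proxy_fcgi setenvif
$ sudo a2ensite site1.com
```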
$ sudo apache2ctl configtest
Syntax OK
$ sudo apache2ctl graceful
DNS Setup for Each Domain
Every domain needs an A record pointing to your server's IP address. If you are using multiple domains on the same server, they all point to the same IP — the web server uses the Host header to differentiate them.
| Domain | Record Type | Value |
|---|---|---|
| site1.com | A | 198.51.100.50 |
| www.site1.com | CNAME | site1.com |
| site2.com | A | 198.51.100.50 |
| www.site2.com | CNAME | site2.com |
| site3.com | A | 198.51.100.50 |
| www.site3.com | CNAME | site3.com |
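After updating DNS, verify propagation with dig; for a CNAME name, dig prints the target followed by the resolved address:

```shell
$ dig +short site1.com A
198.51.100.50
$ dig +short www.site1.com
site1.com.
198.51.100.50
```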
SSL Certificates Per Domain
Each domain needs its own SSL certificate. Let's Encrypt makes this free and automated with Certbot:
$ sudo apt install certbot python3-certbot-nginx -y
# Issue certificates for each domain
$ sudo certbot --nginx -d site1.com -d www.site1.com
$ sudo certbot --nginx -d site2.com -d www.site2.com
$ sudo certbot --nginx -d site3.com -d www.site3.com
# Auto-renewal is configured automatically
$ sudo certbot renew --dry-run
Congratulations, all simulated renewals succeeded
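On Debian/Ubuntu, the apt package installs a systemd timer that runs the renewal check automatically; you can confirm it is scheduled:

```shell
$ systemctl list-timers certbot.timer
```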
PHP-FPM Pool Isolation
Running all sites under the same PHP-FPM pool is a security risk. If one site is compromised, the attacker can access files from all other sites. The solution is to create a separate PHP-FPM pool for each site, running under a dedicated system user.
# Create one dedicated, non-login system user per site
$ sudo useradd -m -s /usr/sbin/nologin site1user
$ sudo useradd -m -s /usr/sbin/nologin site2user
# Set ownership
$ sudo chown -R site1user:site1user /var/www/site1.com
$ sudo chown -R site2user:site2user /var/www/site2.com
; /etc/php/8.3/fpm/pool.d/site1.conf
[site1]
user = site1user
group = site1user
listen = /var/run/php/php8.3-fpm-site1.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
pm.max_requests = 500
; Security: restrict file access to this site only
php_admin_value[open_basedir] = /var/www/site1.com:/tmp
php_admin_value[disable_functions] = exec,passthru,shell_exec,system,proc_open,popen
The open_basedir directive restricts which directories PHP may access. Without it, a compromised PHP script on site1 could read files belonging to site2 or site3, or even system files. Always set it in multi-tenant environments.
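After saving each pool file, restart PHP-FPM and confirm that every pool's socket exists with the expected ownership (www-data must be able to connect, per the listen.owner settings above):

```shell
$ sudo systemctl restart php8.3-fpm
$ ls -l /var/run/php/
```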
File Permissions and Security
| Path | Owner | Permission | Purpose |
|---|---|---|---|
| /var/www/site1.com/ | site1user:site1user | 750 | Site root, no other users can access |
| /var/www/site1.com/public_html/ | site1user:site1user | 755 | Web-accessible directory |
| PHP files | site1user:site1user | 644 | Readable by web server, not executable |
| wp-config.php | site1user:site1user | 640 | Sensitive config, restricted access |
| uploads/ | site1user:site1user | 755 | Writable by PHP for uploads |
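The table above can be applied with find, which sets directory and file modes separately (the wp-config.php path assumes WordPress is installed directly in public_html):

```shell
$ sudo chmod 750 /var/www/site1.com
$ sudo find /var/www/site1.com/public_html -type d -exec chmod 755 {} +
$ sudo find /var/www/site1.com/public_html -type f -exec chmod 644 {} +
$ sudo chmod 640 /var/www/site1.com/public_html/wp-config.php
```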
Resource Management: How Many Sites Can One Server Handle?
The answer depends on the type of sites and the server specifications. Here are practical guidelines based on real-world experience:
| Server Spec | Static Sites | WordPress (low traffic) | WordPress (medium traffic) | E-commerce |
|---|---|---|---|---|
| 1 vCPU, 1 GB RAM | 50-100 | 5-10 | 2-3 | 1 |
| 2 vCPU, 4 GB RAM | 200+ | 20-40 | 8-15 | 3-5 |
| 4 vCPU, 8 GB RAM | 500+ | 50-100 | 20-40 | 8-15 |
| 8 vCPU, 16 GB RAM | 1000+ | 100-200 | 50-100 | 20-40 |
Using Cgroups for Resource Limits
In environments with many sites from different customers, you want to prevent one site from consuming all server resources. Linux cgroups (control groups) allow you to set per-user CPU, memory, and I/O limits:
# Limit: 50% of one CPU core (50 ms of every 100 ms), 512 MB RAM
$ sudo mkdir -p /sys/fs/cgroup/user.slice/user-site1.slice
$ echo "50000 100000" | sudo tee /sys/fs/cgroup/user.slice/user-site1.slice/cpu.max
$ echo "536870912" | sudo tee /sys/fs/cgroup/user.slice/user-site1.slice/memory.max
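Writing into /sys/fs/cgroup directly does not survive a reboot. On systemd systems, the same limits can be applied persistently to the user's slice; 1001 below is a placeholder for site1user's UID (check with `id -u site1user`):

```shell
$ sudo systemctl set-property user-1001.slice CPUQuota=50% MemoryMax=512M
```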
Security Considerations
Cross-Site Access Prevention
- Separate system users per site
- Per-user PHP-FPM pools
- open_basedir restrictions
- File permissions set to 750 on site roots
- disable_functions in PHP for dangerous calls
Web Application Firewall
- ModSecurity with OWASP Core Rule Set
- Rate limiting per domain in Nginx
- Fail2ban for brute force protection
- Regular security updates and patching
- Automated malware scanning
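As a sketch of per-domain rate limiting in Nginx, define a shared zone once and reference it from each server block; the zone name `persite` and the limits are illustrative:

```shell
$ sudo tee /etc/nginx/conf.d/ratelimit.conf <<'EOF'
# 10 MB zone keyed by client IP, allowing 10 requests/second
limit_req_zone $binary_remote_addr zone=persite:10m rate=10r/s;
EOF
```

Then add `limit_req zone=persite burst=20;` inside each server block and reload Nginx.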
Monitoring Multiple Sites
$ tail -f /var/log/nginx/site1.com/access.log
# Monitor PHP-FPM pool status (requires pm.status_path = /fpm-status in the pool config, plus a matching Nginx location)
$ curl "http://localhost/fpm-status?full"
# Check overall server resource usage
$ htop
$ free -h
$ df -h
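With the per-site log layout used above, a quick loop shows which sites receive the most requests:

```shell
$ for log in /var/log/nginx/*/access.log; do printf '%8d  %s\n' "$(wc -l < "$log")" "$log"; done | sort -rn
```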
How Panelica Handles Multi-Site Hosting
When you create a domain in Panelica, the system automatically provisions:
- Nginx server block with optimized configuration for the site type (static, PHP, Node.js, Python)
- Per-user PHP-FPM pool running under a dedicated system user with open_basedir isolation
- Let's Encrypt SSL certificate with automatic renewal
- DNS zone with all required records (A, CNAME, MX, SPF, DKIM, DMARC)
- Cgroups v2 resource limits (CPU, memory, I/O, process count) per user
- Namespace isolation with chroot for SFTP and SSH access
- Per-site access and error logs viewable from the web-based log viewer
The 5-layer isolation architecture (Cgroups, Namespaces, SSH Chroot, PHP-FPM, Unix Permissions) ensures that one site cannot affect another — whether through resource consumption, file access, or process interference. This is the same level of isolation that enterprise hosting providers charge premium prices for.
Conclusion
Hosting multiple websites on a single server is a practical, cost-effective approach that powers most of the internet. With proper configuration — separate Nginx server blocks or Apache virtual hosts, per-site PHP-FPM pools, individual SSL certificates, correct file permissions, and resource monitoring — you can safely run dozens or even hundreds of sites on a single VPS. The key is isolation: each site should be unable to see or affect other sites on the same server. Whether you configure this manually or use a panel like Panelica to automate it, the principles remain the same: isolate, limit, monitor, and secure.