
Core Web Vitals: Server-Side Optimization Guide for Better Google Rankings

March 31, 2026


Introduction: Why Server-Side Optimization Is the Missing Piece

Since 2021, Google has used Core Web Vitals as direct ranking factors. Thousands of articles have been written about optimizing images, reducing JavaScript bundle sizes, and minimizing render-blocking resources. Most of that advice is sound — but it addresses only half the problem.

The other half lives on your server.

A site can have perfectly optimized frontend assets and still score poorly because the server takes 800ms to respond. No amount of lazy-loading or code-splitting will help when the browser is waiting for the first byte. This guide focuses on what your server is doing — and what you can do to make it faster.

We will cover TTFB, Nginx configuration, PHP tuning, database optimization, caching architecture, TLS improvements, and how to measure everything. Each section is actionable. By the end, you will have a concrete checklist you can apply to any production server.

1. What Are Core Web Vitals?

Core Web Vitals are a set of user experience metrics that Google measures in the field using real Chrome users (the Chrome User Experience Report, or CrUX). They are not synthetic lab numbers — they reflect how actual visitors experience your site.

There are three metrics:

  • LCP (Largest Contentful Paint): How long it takes for the largest visible element — typically a hero image or heading — to appear on screen. Good: under 2.5s. Needs improvement: 2.5s–4s. Poor: over 4s.
  • INP (Interaction to Next Paint): Introduced in 2024, replacing FID. Measures the full delay from a user interaction (click, tap, key press) to the next visual update. Good: under 200ms. Needs improvement: 200ms–500ms. Poor: over 500ms.
  • CLS (Cumulative Layout Shift): Measures unexpected visual shifts — content jumping around as the page loads. Good: under 0.1. Needs improvement: 0.1–0.25. Poor: over 0.25.

Metric   Good      Needs Improvement   Poor      Server Impact
LCP      < 2.5s    2.5s – 4s           > 4s      Very High
INP      < 200ms   200ms – 500ms       > 500ms   Medium (via API speed)
CLS      < 0.1     0.1 – 0.25          > 0.25    Low (mostly frontend)

LCP is where server-side optimization delivers the most impact. The path from user request to painted content runs directly through your server's response time.

2. TTFB — The Server's Responsibility

Time to First Byte (TTFB) is the time elapsed from the moment a user's browser sends an HTTP request to the moment it receives the first byte of the response. It is not a Core Web Vital itself, but Google uses it as a diagnostic metric, and it directly determines your LCP ceiling.

If TTFB is 1.5 seconds, LCP cannot be better than 1.5 seconds no matter how fast the rest of the page loads.

TTFB Thresholds

  • Good: under 200ms
  • Moderate: 200ms – 500ms
  • Poor: over 500ms

What Makes Up TTFB?

TTFB is the sum of several components:

DNS resolution time
+ TCP connection time
+ TLS handshake time
+ Server processing time  ← This is what you control
+ Network transit time

DNS, TCP, and network time depend on geography and infrastructure. TLS time can be improved with configuration. Server processing time is entirely your responsibility.

Measuring TTFB

You can measure TTFB from the command line using curl's built-in timing variables:

curl -o /dev/null -s -w "
DNS lookup:     %{time_namelookup}s
TCP connect:    %{time_connect}s
TLS handshake:  %{time_appconnect}s
TTFB:           %{time_starttransfer}s
Total time:     %{time_total}s
" https://example.com/

Run this from multiple geographic locations to understand where time is being spent. A high time_namelookup points to DNS. A high difference between time_appconnect and time_starttransfer points to server processing — PHP, database queries, or application logic.
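The subtraction can be scripted. A small sketch (the URL is a placeholder — point it at your own site) that turns curl's cumulative timing variables into per-phase durations:

```shell
#!/bin/sh
# curl's timing variables are cumulative, so subtracting adjacent values
# yields the time spent in each phase. URL is a placeholder.
url="${1:-https://example.com/}"
curl -o /dev/null -s -w '%{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}\n' "$url" |
awk '{
    printf "DNS lookup:        %.3fs\n", $1
    printf "TCP connect:       %.3fs\n", $2 - $1
    printf "TLS handshake:     %.3fs\n", $3 - $2
    printf "Server processing: %.3fs\n", $4 - $3   # the part you control
    printf "Content transfer:  %.3fs\n", $5 - $4
}'
```

The "Server processing" line is the number the rest of this guide is about shrinking.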

3. Nginx Configuration for LCP

Nginx is the first layer in your stack after the network. Getting it right is foundational. These are the configurations that have measurable impact on LCP.

Gzip Compression

Compression reduces the bytes transferred over the network. Text-based assets (HTML, CSS, JS, JSON, SVG) compress well — typically 60–80% size reduction.

gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_proxied any;
gzip_types
    text/plain
    text/css
    text/javascript
    application/javascript
    application/json
    application/xml
    image/svg+xml
    font/woff2;

Compression level 5 is a good balance between CPU usage and compression ratio. Levels 6–9 yield diminishing returns for significantly more CPU.
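You can verify the ratio on a live page by requesting it with and without compression and comparing on-the-wire sizes (the URL is a placeholder):

```shell
#!/bin/sh
# Fetch the same page twice and compare transferred bytes.
# URL is a placeholder — point it at your own HTML or CSS.
URL="https://example.com/"
raw=$(curl -s -o /dev/null -w '%{size_download}' -H 'Accept-Encoding: identity' "$URL")
gz=$(curl -s -o /dev/null -w '%{size_download}' -H 'Accept-Encoding: gzip' "$URL")
echo "uncompressed: ${raw} bytes, gzip: ${gz} bytes"
awk -v r="$raw" -v g="$gz" 'BEGIN { if (r > 0) printf "savings: %.0f%%\n", (1 - g / r) * 100 }'
```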

Brotli Compression

Brotli (developed by Google) consistently outperforms Gzip by 15–25% on text assets. Every modern browser supports it. If your Nginx build includes the brotli module:

brotli on;
brotli_comp_level 6;
brotli_static on;
brotli_types
    text/plain
    text/css
    application/javascript
    application/json
    image/svg+xml
    font/woff2;

Feature              Gzip   Brotli
Compression ratio    Good   15–25% better
CPU usage            Low    Slightly higher (worth it)
Browser support      100%   97%+
Decompression speed  Fast   Fast

Static File Caching

Static assets that do not change should be cached aggressively in the browser. The trick is to use fingerprinted URLs (e.g., style.a1b2c3.css) so the cache can be busted when the file actually changes.

location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
    expires 365d;
    add_header Cache-Control "public, immutable";
    add_header Vary "Accept-Encoding";
    access_log off;
}

FastCGI Cache for PHP

For PHP-generated pages, full-page caching at the Nginx level eliminates PHP execution entirely for cached requests. This is one of the highest-impact optimizations available — it can reduce TTFB from 300ms to 5ms for anonymous users.

# In http {} block
fastcgi_cache_path /var/cache/nginx/fastcgi
    levels=1:2
    keys_zone=PHPCACHE:100m
    max_size=1g
    inactive=60m
    use_temp_path=off;

# In server {} block
fastcgi_cache_key "$scheme$request_method$host$request_uri";

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;

    fastcgi_cache PHPCACHE;
    fastcgi_cache_valid 200 60m;
    fastcgi_cache_valid 404 1m;
    fastcgi_cache_bypass $cookie_session $http_authorization;
    fastcgi_no_cache $cookie_session $http_authorization;
    add_header X-Cache-Status $upstream_cache_status;
}

The X-Cache-Status header lets you confirm whether a response came from cache (HIT) or was generated fresh (MISS). Check it with curl -I https://yoursite.com/ | grep X-Cache.
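A quick sketch for verifying the cache warms correctly — hit the same URL twice; the first response should be MISS, the second HIT (URL is a placeholder):

```shell
#!/bin/sh
# Two identical requests: expect X-Cache-Status: MISS then HIT.
URL="https://example.com/"
for i in 1 2; do
    status=$(curl -s -o /dev/null -D - "$URL" | awk -F': ' 'tolower($1) == "x-cache-status" { print $2 }' | tr -d '\r')
    echo "request $i: ${status:-no X-Cache-Status header}"
done
```

If both requests say MISS, check the bypass conditions — a stray session cookie is the usual culprit.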

Keep-Alive Connections

HTTP keep-alive reuses TCP connections across multiple requests, saving the overhead of establishing a new connection for every asset.

keepalive_timeout 65;
keepalive_requests 100;

Worker Configuration

worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

worker_processes auto sets workers equal to CPU cores. epoll is the most efficient I/O multiplexing on Linux.

4. PHP Performance

PHP is often the largest contributor to server processing time. Two areas have the most impact: OPcache and PHP-FPM worker configuration.

OPcache

PHP compiles scripts to bytecode on every request — unless OPcache is enabled. OPcache stores compiled bytecode in shared memory, eliminating parse and compile time on subsequent requests. This is a free, zero-side-effect performance improvement that should be enabled on every production PHP server.

[opcache]
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0   ; 0 = skip file change checks (production only)
opcache.save_comments=1
opcache.revalidate_freq=0

Setting validate_timestamps=0 prevents OPcache from checking if files have changed on every request. In production, you manually clear OPcache after deployment with opcache_reset() or by restarting PHP-FPM.

PHP-FPM Worker Pools

If PHP-FPM runs out of workers, new requests queue up and TTFB spikes. Calculate your worker count based on available memory:

Average PHP process memory: ~50MB
Available RAM for PHP: 4GB (4096MB)
Worker count: 4096 / 50 = ~80 workers

[www]
pm = dynamic
pm.max_children = 80
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30
pm.max_requests = 500

pm.max_requests = 500 recycles workers after 500 requests, preventing memory leaks from accumulating indefinitely in long-running PHP processes.
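The ~50MB figure above is an assumption — measure it on your own server. A sketch that averages the resident memory of live php-fpm workers and derives a max_children suggestion (avail_mb is a placeholder for the RAM you dedicate to PHP):

```shell
#!/bin/sh
# Average RSS of running php-fpm workers, then divide the RAM budget by it.
avail_mb=4096   # placeholder — RAM you are willing to dedicate to PHP, in MB
ps --no-headers -o rss -C php-fpm 2>/dev/null |
awk -v avail="$avail_mb" '
    { total += $1; n++ }
    END {
        if (n == 0) { print "no php-fpm processes found"; exit 1 }
        avg_mb = total / n / 1024    # ps reports RSS in KB
        printf "avg worker: %.0f MB, suggested pm.max_children = %d\n", avg_mb, avail / avg_mb
    }'
```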

5. Database Optimization

Slow database queries are invisible at the network level but completely dominate server processing time. A 200ms query adds 200ms to every TTFB — and most sites run multiple queries per page.

Finding Slow Queries

Enable the MySQL slow query log to identify bottlenecks:

# In my.cnf
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 0.5
log_queries_not_using_indexes = 1

For PostgreSQL, enable log_min_duration_statement:

log_min_duration_statement = 500  # ms

EXPLAIN ANALYZE

Once you have identified slow queries, use EXPLAIN to understand the execution plan:

-- PostgreSQL
EXPLAIN ANALYZE SELECT * FROM posts WHERE user_id = 42 ORDER BY created_at DESC;

-- MySQL
EXPLAIN SELECT * FROM posts WHERE user_id = 42 ORDER BY created_at DESC;

Look for sequential scans (Seq Scan in PostgreSQL, ALL in MySQL) on large tables. These indicate missing indexes.

Indexing Strategy

-- Add index on frequently filtered column
CREATE INDEX idx_posts_user_id ON posts(user_id);

-- Composite index for filter + sort
CREATE INDEX idx_posts_user_created ON posts(user_id, created_at DESC);

A rule of thumb: index every column that appears in a WHERE, ORDER BY, or JOIN ON clause on a large table. But do not over-index — each index slows down writes.

Connection Pooling

Opening a new database connection is expensive — it involves authentication, memory allocation, and process creation. Connection poolers maintain a pool of open connections and reuse them.

  • PgBouncer (PostgreSQL): transaction-mode pooling, widely used, minimal overhead
  • ProxySQL (MySQL): advanced routing, read/write splitting, query caching

For high-traffic sites, a connection pool can reduce database connection overhead by 90%.
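For PostgreSQL, a minimal PgBouncer sketch — the database name, paths, and pool sizes here are illustrative, not a recommended production config:

```ini
; /etc/pgbouncer/pgbouncer.ini — transaction-mode pooling sketch
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
```

The application connects to port 6432 instead of 5432; PgBouncer multiplexes up to 500 client connections onto 20 real server connections.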

The N+1 Query Problem

The most common application-level database performance problem. An N+1 query happens when code fetches a list of records and then queries the database once per record:

-- 1 query to fetch 100 posts
SELECT * FROM posts LIMIT 100;

-- Then 100 separate queries for each post's author
SELECT * FROM users WHERE id = 1;
SELECT * FROM users WHERE id = 2;
... (98 more)

The fix is eager loading — JOIN the related data in a single query, or batch the IDs into an IN clause.
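A sketch of the eager-loaded version, using the same hypothetical posts/users tables (the author_name column is illustrative):

```sql
-- One query replaces all 101: join the author onto each post.
SELECT p.*, u.name AS author_name
FROM posts p
JOIN users u ON u.id = p.user_id
ORDER BY p.created_at DESC
LIMIT 100;

-- Or, if batching via an IN clause instead of joining:
-- SELECT * FROM users WHERE id IN (1, 2, ..., 100);
```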

6. Caching Architecture (Multi-Layer)

Caching is not a single technique — it is a layered system. Each layer serves a different purpose. Understanding where each layer fits prevents over-caching (serving stale data) and under-caching (unnecessary server load).

Request
    ↓
Browser Cache          ← Prevents network request entirely
    ↓ (miss)
CDN Edge Cache         ← Serves from geographic edge node
    ↓ (miss)
Nginx FastCGI Cache    ← Serves full PHP response from disk
    ↓ (miss)
Redis / Object Cache   ← Serves cached query results
    ↓ (miss)
PHP + OPcache          ← Bytecode cached, application logic runs
    ↓
Database               ← Actual data source

Browser Cache Headers

The Cache-Control header instructs browsers and CDNs how to cache responses:

# Static assets with fingerprinted URLs — cache forever
Cache-Control: public, max-age=31536000, immutable

# HTML pages — always revalidate
Cache-Control: public, max-age=0, must-revalidate

# API responses — short cache, shared
Cache-Control: public, max-age=60, s-maxage=300

# Private/user-specific content
Cache-Control: private, no-cache

# Never cache
Cache-Control: no-store

CDN Configuration

A CDN places your content at edge nodes close to your users, reducing the physical distance a request must travel. For global audiences, this can reduce TTFB by hundreds of milliseconds.

Key CDN configuration points:

  • Set appropriate cache TTLs — static assets: 1 year, HTML: 5 minutes, API: depends on content
  • Configure cache bypass rules for authenticated or personalized content
  • Use cache purging on deployment to invalidate outdated content immediately
  • Enable HTTP/2 and HTTP/3 at the CDN edge

Redis for Application Caching

Redis is an in-memory data store ideal for caching database query results, computed values, and session data. Retrieving from Redis typically takes under 1ms — compared to 10–300ms for a database query.

# Cache a query result for 5 minutes (PHP pseudocode)
$key = 'user_posts_' . $userId;
$cached = $redis->get($key);
if ($cached) {
    return json_decode($cached);
}

$posts = $db->query('SELECT * FROM posts WHERE user_id = ?', [$userId]);
$redis->setex($key, 300, json_encode($posts));
return $posts;

ETags and Conditional Requests

For resources that change infrequently, ETags allow the browser to verify whether cached content is still current without downloading it again:

location / {
    etag on;
    if_modified_since exact;
}

With ETags, a cache validation request returns 304 Not Modified (no body) if the content has not changed, saving bandwidth while keeping cache fresh.
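To confirm this works end to end, capture the ETag from a first response and replay it with If-None-Match (URL is a placeholder); the second request should return 304:

```shell
#!/bin/sh
# Fetch once to learn the ETag, then replay it as a conditional request.
URL="https://example.com/css/main.css"
etag=$(curl -s -D - -o /dev/null "$URL" | awk -F': ' 'tolower($1) == "etag" { print $2 }' | tr -d '\r')
echo "etag: $etag"
curl -s -o /dev/null -w '%{http_code}\n' -H "If-None-Match: $etag" "$URL"   # expect 304
```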

7. Image Optimization at the Server Level

Images are consistently the largest contributors to LCP. The browser cannot paint the LCP element until the image has finished downloading. Server-side image optimization reduces that download time.

Serving Modern Formats with Content Negotiation

WebP provides 25–35% better compression than JPEG at equivalent quality. AVIF provides 50% better compression than JPEG. Both are supported by all modern browsers.

Nginx can serve WebP or AVIF versions of images automatically, falling back to JPEG/PNG for older clients:

map $http_accept $webp_suffix {
    default   "";
    "~*webp"  ".webp";
}

server {
    location ~* ^/images/(.+)\.(jpe?g|png)$ {
        set $img_path $1;
        add_header Vary Accept;
        try_files /images/${img_path}${webp_suffix} $uri =404;
    }
}

Pre-generate WebP versions during your deployment or image upload process using tools like cwebp or imagemagick.
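A deploy-time sketch using cwebp (assumes cwebp is installed; the image root is a placeholder). It writes photo.jpg → photo.webp, the naming the map above serves:

```shell
#!/bin/sh
# Generate a .webp sibling for every JPEG/PNG, skipping up-to-date files.
IMG_ROOT=/var/www/images   # placeholder — your image directory
find "$IMG_ROOT" -type f \( -name '*.jpg' -o -name '*.jpeg' -o -name '*.png' \) |
while IFS= read -r img; do
    out="${img%.*}.webp"                     # photo.jpg -> photo.webp
    if [ ! -f "$out" ] || [ "$img" -nt "$out" ]; then
        cwebp -quiet -q 82 "$img" -o "$out"
    fi
done
```

Run it on upload or as a deployment step so the .webp files exist before Nginx tries to serve them.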

On-the-Fly Image Resizing

Serving a 3000×2000px image for a 300×200px thumbnail wastes bandwidth. Self-hosted image processing tools:

  • imgproxy: Fast image processing server, Docker-friendly, built in Go
  • Thumbor: Python-based, feature-rich, with smart cropping
  • Nginx image_filter: Built into Nginx, basic resize/crop without extra software

# Nginx image_filter (resize to 300px wide)
location /thumbnail/ {
    alias /var/www/images/;
    image_filter resize 300 -;
    image_filter_jpeg_quality 85;
    image_filter_buffer 10M;
}

Lazy Loading Headers

While loading="lazy" is a frontend HTML attribute, your server can influence it by generating the correct HTML via server-side rendering — ensuring that only above-the-fold images are marked as eager and below-the-fold images are lazy-loaded.

8. SSL/TLS Optimization

The TLS handshake adds latency to every new connection. Modern TLS configuration reduces this overhead significantly.

TLS 1.3

TLS 1.3 reduces the handshake from 2 round trips (TLS 1.2) to 1 round trip. For users far from your server, this can save 100–300ms per new connection.

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_ecdh_curve X25519:prime256v1:secp384r1;

OCSP Stapling

Without OCSP stapling, the browser must make a separate request to the certificate authority's OCSP server to verify your certificate has not been revoked. This adds 50–200ms of latency. OCSP stapling moves this responsibility to your server, which caches the OCSP response and includes it in the TLS handshake.

ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /path/to/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;

Session Resumption

TLS session resumption allows clients to reconnect without a full handshake, reusing previously negotiated parameters.

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;  # Disable tickets for security; use cache instead

HTTP/2

HTTP/2 multiplexes multiple requests over a single TCP connection, eliminating head-of-line blocking and reducing connection overhead. Enable it at the server level:

server {
    listen 443 ssl;
    http2 on;   # Nginx 1.25.1+; older versions use "listen 443 ssl http2;"
    ...
}

103 Early Hints

103 Early Hints is a relatively new HTTP status code that allows the server to start sending preload hints to the browser before the full response is ready — while PHP is still processing. This gives the browser a head start on fetching critical CSS and fonts. Note that stock Nginx does not emit 103 responses on its own; the common pattern is to set Link headers on the response and let a CDN that supports Early Hints (such as Cloudflare) generate the 103 from them:

location / {
    add_header Link "</css/main.css>; rel=preload; as=style" always;
    add_header Link "</fonts/inter.woff2>; rel=preload; as=font; crossorigin" always;
}

9. DNS Optimization

DNS resolution is the first step in every request. A slow DNS provider adds latency before a single byte of your content is requested.

  • Use a fast authoritative DNS provider: Cloudflare DNS, Google Cloud DNS, or Route 53 all answer in a few milliseconds from most locations. (1.1.1.1 and 8.8.8.8 are public resolvers, not services that host your zone — the slow link is usually a registrar's default nameservers.)
  • Set appropriate TTLs: For stable production records, use 3600s (1 hour) or higher. Low TTL means more DNS queries per visitor.
  • Minimize external domains: Every third-party domain (analytics, fonts, ads) requires an additional DNS lookup. Self-host or consolidate where possible.
  • DNS prefetch: For unavoidable third-party resources, add prefetch hints to your HTML to resolve DNS early

<!-- In your HTML <head> -->
<link rel="dns-prefetch" href="//fonts.googleapis.com">
<link rel="dns-prefetch" href="//cdn.example.com">
<link rel="preconnect" href="//api.example.com" crossorigin>

10. Server Hardware and Kernel Configuration

Beyond software configuration, the underlying hardware and OS configuration matter at scale.

Storage

NVMe SSDs are 10–100x faster than HDDs for random reads. Database performance in particular is heavily I/O bound. If your server is still on spinning disk, storage is likely your biggest bottleneck.

Network Stack Tuning

# /etc/sysctl.conf additions
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15

# Increase file descriptor limits
fs.file-max = 2097152

File Descriptor Limits

Nginx opens a file descriptor for every open connection and every static file being served. Default system limits (1024 per process) are too low for production.

# /etc/security/limits.conf
www-data soft nofile 65535
www-data hard nofile 65535
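Note that limits.conf applies to login sessions; if Nginx is started by systemd, set LimitNOFILE in the service unit instead. Either way, verify against the running workers — this sketch reads the limit actually in effect:

```shell
#!/bin/sh
# Read the "Max open files" limit applied to each running nginx worker —
# the ground truth, regardless of where it was configured.
for pid in $(pgrep -f 'nginx: worker'); do
    awk -v pid="$pid" '/^Max open files/ { print "pid " pid ": " $4 }' "/proc/$pid/limits"
done
```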

11. WordPress-Specific Server Optimizations

WordPress is the most common CMS on hosted servers. Several server-side optimizations are specific to WordPress deployments.

Object Cache with Redis

WordPress has a built-in object cache API. By default, it only caches within a single request. The Redis Object Cache plugin persists this cache between requests, dramatically reducing database queries.

Once the plugin is installed, add to wp-config.php:

define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);

Disable wp-cron

WordPress's built-in wp-cron runs on every page load if tasks are due — adding latency to real user requests. Replace it with real server cron:

# In wp-config.php
define('DISABLE_WP_CRON', true);

# In crontab — run the scheduler every 5 minutes
*/5 * * * * curl -s "https://yoursite.com/wp-cron.php?doing_wp_cron" > /dev/null 2>&1

PHP-FPM Tuning for WordPress

WordPress typically uses 30–80MB of memory per PHP-FPM worker. Calculate your worker count accordingly:

# For a server with 4GB RAM, 2GB allocated to PHP
pm.max_children = 40
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20

Database Cleanup

-- Remove post revisions (keeps only the 5 newest revisions site-wide)
DELETE FROM wp_posts WHERE post_type = 'revision'
AND ID NOT IN (
    SELECT * FROM (
        SELECT ID FROM wp_posts WHERE post_type = 'revision'
        ORDER BY post_date DESC LIMIT 5
    ) as keep
);

-- Remove transients
DELETE FROM wp_options WHERE option_name LIKE '_transient_%';
DELETE FROM wp_options WHERE option_name LIKE '_site_transient_%';

Also audit the wp_options autoloaded values. Large autoloaded option sets (over 1MB) slow down every page load:

SELECT option_name, LENGTH(option_value) as size
FROM wp_options
WHERE autoload = 'yes'
ORDER BY size DESC
LIMIT 20;
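To get the total autoloaded payload as a single number (the ~1MB threshold above is the rule of thumb):

```sql
-- Total size of all autoloaded options, in KB.
SELECT ROUND(SUM(LENGTH(option_value)) / 1024) AS autoload_kb
FROM wp_options
WHERE autoload = 'yes';
```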

12. Measuring and Monitoring Performance

Optimization without measurement is guesswork. Use these tools to establish baselines, identify bottlenecks, and verify improvements.

Lab Testing Tools

  • Google PageSpeed Insights: Combines lab data (Lighthouse) with real-world field data from CrUX. Start here.
  • WebPageTest: Detailed waterfall charts, filmstrip view, testing from multiple locations. Use it to diagnose specific assets.
  • Chrome DevTools: Performance tab for JavaScript profiling, Network tab for request waterfalls, Lighthouse for full audits.
  • GTmetrix: Combines Lighthouse and WebPageTest-style analysis with historical tracking.

Field Data (Real Users)

  • Google Search Console: Core Web Vitals report shows field data aggregated by URL group — the same data Google uses for ranking.
  • CrUX Dashboard: Looker Studio template using the Chrome User Experience Report API. Tracks your metrics over 28-day windows.

Server-Side Response Time Monitoring

Set up a simple cron job to monitor TTFB from the server itself:

#!/bin/bash
TTFB=$(curl -o /dev/null -s -w "%{time_starttransfer}" https://yoursite.com/)
TIMESTAMP=$(date +%s)
echo "$TIMESTAMP $TTFB" >> /var/log/ttfb.log

# Alert if TTFB exceeds 500ms
if (( $(echo "$TTFB > 0.5" | bc -l) )); then
    echo "TTFB alert: ${TTFB}s" | mail -s "Slow TTFB" [email protected]
fi

For production monitoring, track response time percentiles (p50, p95, p99) rather than averages. P99 latency reveals the worst-case experience for 1% of users — often the metric that indicates real problems.
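A sketch that derives those percentiles from the ttfb.log written by the cron job above (nearest-rank method; assumes the second column holds seconds):

```shell
#!/bin/sh
# Nearest-rank percentiles over the logged TTFB samples.
sort -k2 -n /var/log/ttfb.log | awk '
    function idx(p) { i = int(NR * p); return (i < 1) ? 1 : i }
    { v[NR] = $2 }
    END {
        if (NR == 0) exit 1
        printf "p50=%ss p95=%ss p99=%ss\n", v[idx(0.50)], v[idx(0.95)], v[idx(0.99)]
    }'
```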

Nginx Access Log Analysis

# Find slowest requests (assumes $request_time is the last field in your log_format)
awk '{print $NF, $7}' /var/log/nginx/access.log | sort -rn | head -20

# Count requests by status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

13. Complete Optimization Checklist

Use this as a pre-deployment and post-deployment audit checklist:

Optimization                  TTFB Impact  LCP Impact              Effort  Done?
OPcache enabled and tuned     High         High                    Low     [ ]
Gzip/Brotli compression       Medium       High                    Low     [ ]
Nginx FastCGI page cache      Very High    Very High               Medium  [ ]
Redis object/query cache      High         High                    Medium  [ ]
CDN for static and full-page  Very High    Very High               Low     [ ]
TLS 1.3 enabled               Medium       Medium                  Low     [ ]
OCSP stapling enabled         Low          Low                     Low     [ ]
HTTP/2 enabled                Low          Medium                  Low     [ ]
WebP/AVIF image serving       None         Very High               Medium  [ ]
Database query optimization   High         High                    High    [ ]
Database connection pooling   Medium       Medium                  Medium  [ ]
PHP-FPM worker sizing         High         High                    Low     [ ]
Static file cache headers     None         High (repeat visitors)  Low     [ ]
Keep-alive connections        Medium       Medium                  Low     [ ]
Server-side TTFB monitoring   N/A          N/A                     Low     [ ]

Quick Reference: Metrics Thresholds

Metric  Good      Needs Improvement  Poor
LCP     < 2.5s    2.5s – 4s          > 4s
INP     < 200ms   200ms – 500ms      > 500ms
CLS     < 0.1     0.1 – 0.25         > 0.25
TTFB    < 200ms   200ms – 500ms      > 500ms
FCP     < 1.8s    1.8s – 3s          > 3s

Conclusion: Server Speed Is the Foundation

Core Web Vitals optimization is often framed as a frontend problem. It is not. The server's response time sets the floor for everything that follows. If TTFB is 600ms, no amount of JavaScript optimization will get your LCP below 600ms.

The good news is that server-side optimizations tend to have clear, measurable impact. Enable OPcache — measure the TTFB drop. Add FastCGI caching — watch TTFB go from 300ms to under 10ms for cached pages. Each optimization in this guide is discrete, testable, and reversible.

Start with the low-effort, high-impact items: OPcache, gzip/Brotli, and static file caching. These require minimal configuration and deliver immediate results. Then work toward FastCGI caching and Redis for the biggest TTFB improvements. Add CDN last — it amplifies all the other optimizations.

Panelica ships with Nginx, PHP-FPM, Redis, and HTTP/2 configured and running out of the box. OPcache is enabled by default. Brotli compression is available. The infrastructure layer is handled — so you can focus on tuning your application rather than assembling the stack.

Measure first, optimize second, verify third. Core Web Vitals are not a one-time fix — they are a continuous process. Set up monitoring, track your p95 TTFB, and treat regressions as bugs.
