Tutorial

Redis Explained: Caching, Sessions, and Queues for Web Applications

April 07, 2026

Introduction: The Swiss Army Knife of Data

At some point in every web developer's career, there comes a moment when your application starts showing its seams. Pages slow down. The database groans under repeated identical queries. Sessions break in a load-balanced environment. A background job queue becomes a tangled mess of cron files.

The solution to all of these problems often has the same answer: Redis.

Redis (Remote Dictionary Server) is an open-source, in-memory data structure store that serves as a database, cache, message broker, and queue all at once. It was created by Salvatore Sanfilippo in 2009 and has become one of the most widely deployed software components in the world. Twitter uses it for timelines. GitHub uses it for caching. StackOverflow runs millions of requests per day through it. Pinterest stores billions of relationships in it.

The reason Redis is everywhere is simple: it is extraordinarily fast. Because it holds all data in memory rather than on disk, it can execute 100,000+ read/write operations per second on modest hardware. That speed, combined with a rich set of built-in data structures, makes it suitable for a surprisingly wide range of use cases.

This guide covers everything you need to know: what Redis is, how to install and configure it, all the core data structures, caching patterns, session storage, job queues, persistence, performance tuning, security, and monitoring. By the end, you will understand not just how to use Redis, but why it works the way it does.


1. What Is Redis?

Redis is best described as a data structure server. Unlike traditional databases that store data on disk and bring it into memory only when queried, Redis keeps all data in memory at all times. Disk persistence is optional and happens asynchronously in the background.

This design choice has profound implications:

  • Speed: Memory access is orders of magnitude faster than disk I/O. Redis operations typically take under 1 millisecond.
  • Predictability: No disk seek latency means response times are consistent and low.
  • Versatility: Redis is not just a key-value store. It natively understands strings, hashes, lists, sets, sorted sets, streams, bitmaps, and hyperloglogs.

Why Redis Is Fast: The Single-Threaded Event Loop

One of Redis's most counterintuitive design decisions is that it uses a single-threaded event loop for command processing. There is no locking, no context switching overhead, and no mutex contention. Every command is processed atomically in sequence.

This means Redis commands are inherently atomic — you never have to worry about two commands interleaving. It also means Redis scales by running multiple instances (Redis Cluster or separate instances per use case), not by adding threads.
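The practical consequence shows up in client code: a read-modify-write done from the client (GET, compute, SET) is not atomic even though each individual command is. A minimal Python sketch of the lost-update interleaving, using a plain dict as a stand-in for Redis so it runs without a server:

```python
# Two clients increment a counter. Each reads first, then writes back
# read + 1 — the classic lost update that separate GET/SET allows.
store = {"page_views": 0}

snapshot_a = store["page_views"]        # client A: GET page_views -> 0
snapshot_b = store["page_views"]        # client B: GET page_views -> 0
store["page_views"] = snapshot_a + 1    # client A: SET page_views 1
store["page_views"] = snapshot_b + 1    # client B: SET page_views 1 (A's update lost)
assert store["page_views"] == 1         # should be 2 — one increment vanished

# With INCR, the read-modify-write runs inside Redis's single command
# thread, so two increments can never interleave. Simulated sequentially:
store["page_views"] = 0
store["page_views"] += 1                # client A: INCR page_views
store["page_views"] += 1                # client B: INCR page_views
assert store["page_views"] == 2         # both increments counted
```

This is why the counter examples later in this guide use INCR rather than GET followed by SET.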

Redis 6+ introduced I/O threading for network reads and writes, which helps with high-connection-count workloads, but command processing remains single-threaded.

Redis vs Memcached

Feature | Redis | Memcached
Data structures | Strings, hashes, lists, sets, sorted sets, streams, bitmaps | Strings only
Persistence | RDB snapshots + AOF append log | None (restart = data loss)
Replication | Master-replica, Sentinel, Cluster | None built-in
Pub/Sub | Yes | No
Lua scripting | Yes (atomic scripts) | No
TTL granularity | Millisecond precision | Second precision
Memory efficiency | Good (type-aware compression) | Slightly better for pure strings
Multi-threading | I/O threads (Redis 6+) | Multi-threaded

For most modern applications, Redis is the right choice. Memcached's advantage is simplicity and marginally better raw throughput for simple string-only workloads at extreme scale.


2. Installation and Basic Setup

Installing on Ubuntu/Debian

# Install Redis
apt update && apt install redis-server -y

# Start and enable
systemctl enable --now redis-server

# Test
redis-cli ping
# Output: PONG

For production servers, consider installing from the official Redis repository to get the latest stable version:

# Add Redis repository
curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" > /etc/apt/sources.list.d/redis.list
apt update && apt install redis -y

Essential Configuration

The main configuration file is at /etc/redis/redis.conf. Key settings to review:

# Network binding — restrict to localhost for security
bind 127.0.0.1 ::1

# Port (default 6379)
port 6379

# Password authentication
requirepass your-strong-password-here

# Memory limit — ALWAYS set this to prevent OOM
maxmemory 2gb

# What to do when memory is full (see eviction policies section)
maxmemory-policy allkeys-lru

# Enable persistence (see persistence section)
appendonly yes
appendfsync everysec

# Unix socket (faster than TCP for local connections)
unixsocket /var/run/redis/redis.sock
unixsocketperm 770

Unix Socket vs TCP: Performance Comparison

For applications running on the same server as Redis, Unix sockets are significantly faster than TCP loopback because they bypass the entire network stack:

Connection Type | Latency | Throughput | Best For
TCP 127.0.0.1 | ~0.1ms | ~80K ops/sec | Remote connections, Docker
Unix Socket | ~0.05ms | ~120K ops/sec | Same-server applications

Panelica configures Redis on a Unix socket at var/run/redis.sock by default, which gives backend services and PHP-FPM pools the fastest possible access.

Using redis-cli

# Connect to Redis
redis-cli

# With password
redis-cli -a your-password

# Connect to socket
redis-cli -s /var/run/redis/redis.sock

# One-liner
redis-cli ping
redis-cli set mykey "hello"
redis-cli get mykey

3. Data Structures — Redis's Superpower

The feature that separates Redis from every other cache is its rich set of native data structures. Understanding these is the key to using Redis effectively.

Strings

The most basic type. A Redis string can hold any binary data up to 512MB — text, JSON, serialized objects, images, anything.

# Basic set and get
SET user:name "Alice"
GET user:name           # "Alice"

# Set with expiry (seconds)
SET session:abc123 "user_data" EX 3600

# Set with expiry (milliseconds)
SET rate:user:42 1 PX 60000

# Atomic increment — thread-safe counter
SET page_views 0
INCR page_views         # 1
INCR page_views         # 2
INCRBY page_views 10    # 12

# Set only if not exists (lock/mutex pattern)
SET lock:resource unique_id NX EX 30

# Get remaining TTL
TTL session:abc123      # 3597 (seconds remaining)
PTTL session:abc123     # milliseconds remaining

Use cases: Caching HTML fragments, API responses, counters, rate limiting, distributed locks, feature flags.

Hashes

A hash is a field-value map, like a dictionary or object. Perfect for storing structured data without serializing to JSON.

# Store user profile
HSET user:1001 name "Alice" email "[email protected]" plan "business" logins 47

# Get one field
HGET user:1001 name         # "Alice"

# Get all fields
HGETALL user:1001
# name    Alice
# email   [email protected]
# plan    business
# logins  47

# Increment a hash field
HINCRBY user:1001 logins 1  # 48

# Check if field exists
HEXISTS user:1001 email     # 1 (true)

# Get multiple fields
HMGET user:1001 name email  # ["Alice", "[email protected]"]

# Delete a field
HDEL user:1001 plan

Use cases: User profiles, configuration objects, shopping cart items, application settings per user.

Lists

An ordered list of strings, implemented internally as a quicklist (a linked list of compact nodes). Supports pushing/popping from both ends in O(1) time.

# Push to list (left/right)
LPUSH tasks "send_email"
LPUSH tasks "resize_image"
RPUSH tasks "generate_report"

# Pop from list (blocking — waits up to 30 seconds)
BRPOP tasks 30          # Pops "generate_report" (the rightmost item), waits if empty

# View list contents
LRANGE tasks 0 -1       # All items (0 = first, -1 = last)
LRANGE tasks 0 4        # First 5 items

# List length
LLEN tasks              # Number of items

# Reliable queue: move from queue to processing list atomically
RPOPLPUSH tasks processing_queue   # superseded by LMOVE in Redis 6.2+

Use cases: Job queues, activity feeds, recent items (keep last 100), undo history, event logs.
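The "recent items" use case above is the LPUSH + LTRIM pattern: prepend the newest item, then trim the list to a fixed cap. A sketch of the semantics in Python, with a plain list standing in for the Redis list so it runs without a server:

```python
def record_recent(store, key, item, cap=100):
    """LPUSH key item, then LTRIM key 0 cap-1: newest first, at most cap kept."""
    items = store.setdefault(key, [])
    items.insert(0, item)   # LPUSH — prepend the newest item
    del items[cap:]         # LTRIM 0 cap-1 — drop everything past the cap

store = {}
for i in range(150):
    record_recent(store, "recent:user:42", f"page-{i}", cap=100)

assert len(store["recent:user:42"]) == 100        # capped at 100 entries
assert store["recent:user:42"][0] == "page-149"   # newest item first
```

In real Redis the two commands would be sent in a pipeline or MULTI/EXEC block so the list never grows past the cap between them.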

Sets

An unordered collection of unique strings. Constant-time add, remove, and membership check.

# Add members
SADD tags:post:42 "linux" "redis" "tutorial" "caching"

# Check membership
SISMEMBER tags:post:42 "redis"  # 1 (yes)
SISMEMBER tags:post:42 "php"    # 0 (no)

# Get all members
SMEMBERS tags:post:42

# Count members
SCARD tags:post:42      # 4

# Set operations
SADD tags:post:99 "redis" "docker" "linux"
SINTER tags:post:42 tags:post:99    # Intersection: redis, linux
SUNION tags:post:42 tags:post:99    # Union: all tags
SDIFF tags:post:42 tags:post:99     # In 42 but not 99: tutorial, caching

# Remove member
SREM tags:post:42 "tutorial"

Use cases: Unique visitor tracking, tag systems, social connections (who follows who), online users, spam filtering (seen message IDs).

Sorted Sets

Like sets, but every member has a floating-point score. Members are stored in score order. This makes sorted sets perfect for leaderboards, priority queues, and time-series data.

# Add members with scores
ZADD leaderboard 1500 "alice"
ZADD leaderboard 2300 "bob"
ZADD leaderboard 1800 "charlie"

# Get range by rank (low to high)
ZRANGE leaderboard 0 -1 WITHSCORES
# alice 1500
# charlie 1800
# bob 2300   (sorted ascending by score)

# Get range (high to low — top 10)
ZREVRANGE leaderboard 0 9 WITHSCORES

# Get rank of member (0-indexed, ascending)
ZRANK leaderboard "alice"       # 0
ZREVRANK leaderboard "alice"    # 2 (last in top scores)

# Get score
ZSCORE leaderboard "bob"        # 2300

# Increment score
ZINCRBY leaderboard 100 "alice" # Alice now has 1600

# Range by score
ZRANGEBYSCORE leaderboard 1000 2000 WITHSCORES

# Count members in score range
ZCOUNT leaderboard 1000 2000

Use cases: Leaderboards, priority queues, autocomplete (sorted by frequency), rate limiting with sliding windows, geospatial data (Redis GEO uses sorted sets internally).

Streams (Redis 5+)

Streams are an append-only log data structure, similar to Kafka topics. They support consumer groups for parallel processing with message acknowledgment.

# Add entry to stream (auto-generate ID with *)
XADD events * action "page_view" user_id "42" page "/dashboard"

# Read entries (from beginning)
XREAD COUNT 10 STREAMS events 0

# Read new entries (blocking, wait up to 5 seconds)
XREAD COUNT 10 BLOCK 5000 STREAMS events $

# Consumer groups — parallel processing
XGROUP CREATE events workers $ MKSTREAM
XREADGROUP GROUP workers consumer1 COUNT 5 STREAMS events >
XACK events workers <message-id>   # Acknowledge a processed message by its ID

Use cases: Event sourcing, real-time activity feeds, audit logs, reliable message processing (when you need delivery guarantees that Pub/Sub cannot provide).

Data Structure Selection Guide

Use Case | Best Structure | Key Reason
Cache a value | String | Simple, TTL support
Cache an object | Hash | Update individual fields without deserializing
Job queue | List | Blocking pop, FIFO/LIFO
Reliable job queue | Stream | Acknowledgment, consumer groups
Unique items | Set | Automatic deduplication, O(1) membership
Ranked data | Sorted Set | Score-ordered with range queries
Counter | String (INCR) | Atomic, no race conditions
Real-time feed | Stream or List | Ordered, supports blocking reads
Social graph | Set | SINTER/SUNION for mutual friends
Session storage | Hash or String | TTL-based expiry, fast access

4. Caching Patterns

Caching is the most common Redis use case, but it is easy to do wrong. These patterns define the relationship between your cache and your database.

Cache-Aside (Lazy Loading)

The most common pattern. The application manages the cache explicitly.

function getUser(userId):
    # 1. Check cache
    cached = redis.get("user:" + userId)
    if cached:
        return deserialize(cached)          # Cache hit — fast path

    # 2. Cache miss — query database
    user = db.query("SELECT * FROM users WHERE id = ?", userId)

    # 3. Store in cache with TTL
    redis.set("user:" + userId, serialize(user), EX=3600)

    return user

Pros: Only requested data gets cached (memory efficient). Works well with a cold cache. Cache failure does not break the application.

Cons: First request always misses (cold start penalty). Stale data possible if DB is updated without invalidating the cache.
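The pattern is easy to exercise end-to-end. A runnable Python sketch, with dicts standing in for both Redis and the database (all names here are hypothetical, for illustration only):

```python
import json

db = {"42": {"name": "Alice", "plan": "business"}}   # stand-in for SQL
cache = {}                                            # stand-in for Redis
db_queries = 0                                        # counts real "database" hits

def get_user(user_id):
    global db_queries
    cached = cache.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)                 # cache hit — fast path
    db_queries += 1
    user = db[user_id]                            # cache miss — query "database"
    cache[f"user:{user_id}"] = json.dumps(user)   # real Redis: SET ... EX 3600
    return user

get_user("42")          # miss: hits the database, fills the cache
get_user("42")          # hit: served entirely from the cache
assert db_queries == 1  # the database was queried exactly once
```

The serialize/deserialize step matters: storing JSON (or another explicit format) keeps the cached value language-agnostic and safe to share across services.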

Write-Through

Every write goes to both the cache and the database simultaneously.

function updateUser(userId, data):
    # 1. Write to database
    db.query("UPDATE users SET ... WHERE id = ?", userId, data)

    # 2. Update cache immediately
    redis.set("user:" + userId, serialize(data), EX=3600)

Pros: Cache is always consistent with the database. No stale reads.

Cons: Write latency increases. Unused data still gets cached (memory waste).

Write-Behind (Write-Back)

Writes go to Redis first, and a background process flushes to the database asynchronously.

function updateUser(userId, data):
    # Write to Redis only — immediate response
    redis.hset("user:" + userId, data)
    redis.rpush("dirty_keys", "user:" + userId)

# Background worker
function flushDirtyKeys():
    while key = redis.lpop("dirty_keys"):
        data = redis.hgetall(key)
        db.query("UPDATE ...")   # Async flush

Pros: Extremely fast writes. Database write pressure reduced.

Cons: Data loss risk if Redis crashes before flush. Complex to implement correctly.

TTL Strategies and Cache Stampede Prevention

A cache stampede happens when many requests simultaneously find a cache miss and all hit the database at once. This typically happens when a popular cached value expires.

# Static TTL — simplest approach
SET product:42 "data" EX 3600

# Add jitter to prevent synchronized expiry
ttl = 3600 + random(0, 300)   # 3600–3900 seconds
SET product:42 "data" EX ttl

# Probabilistic early expiration (recalculate before expiry)
# When TTL < 10% of original, start background refresh
remaining_ttl = redis.ttl("product:42")
if remaining_ttl < 360:        # 10% of 3600
    refresh_in_background()

# Locking to prevent stampede
lock_acquired = redis.set("lock:product:42", 1, NX=True, EX=10)
if lock_acquired:
    data = db.query(...)
    redis.set("product:42", data, EX=3600)
    redis.delete("lock:product:42")  # production code should verify lock ownership first
else:
    time.sleep(0.05)
    return redis.get("product:42")  # Wait for lock holder
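One subtlety the sketch above glosses over: if the lock expires while its holder is still working, a plain delete can remove a lock that now belongs to someone else. Storing a unique token and checking it before release avoids this. A dict-backed Python sketch (TTL omitted; in real Redis the check-then-delete must run inside a single Lua script to stay atomic):

```python
import uuid

def acquire_lock(store, key):
    """SET key token NX — returns the token on success, None if already held."""
    token = uuid.uuid4().hex
    if key in store:        # NX: only set if the key does not already exist
        return None
    store[key] = token      # real Redis: SET key token NX EX 10
    return token

def release_lock(store, key, token):
    """Delete the lock only if we still own it."""
    if store.get(key) == token:   # real Redis: GET + DEL inside one Lua script
        del store[key]
        return True
    return False                  # someone else holds it now — leave it alone

store = {}
mine = acquire_lock(store, "lock:product:42")
assert mine is not None
assert acquire_lock(store, "lock:product:42") is None          # second caller blocked
assert release_lock(store, "lock:product:42", "stale") is False  # wrong token rejected
assert release_lock(store, "lock:product:42", mine) is True
```

This token-checked release is the core idea behind safe single-instance Redis locks; the Lua scripting section later in this guide shows the atomic compare-and-swap that real implementations use.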

5. Session Storage

HTTP is stateless, so web applications need a place to store session data between requests. Redis is ideal: it is fast, supports automatic expiry, and works across multiple application servers.

Why Redis for Sessions

  • Speed: Sub-millisecond reads on every page load
  • Automatic expiry: TTL handles session cleanup automatically
  • Horizontal scaling: Multiple app servers share the same session store
  • Atomic operations: No race conditions on concurrent requests

PHP Sessions with Redis

; php.ini or .htaccess
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=password"

; Or via Unix socket (faster)
session.save_path = "unix:///var/run/redis/redis.sock?auth=password"

// PHP code — no change needed, sessions work the same
session_start();
$_SESSION['user_id'] = 42;
$_SESSION['role'] = 'admin';

Laravel

# .env
SESSION_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=your-password
REDIS_PORT=6379

// config/session.php
'driver' => env('SESSION_DRIVER', 'redis'),
'lifetime' => 120,
'encrypt' => true,    // Encrypt session data at rest

Node.js with Express

const session = require('express-session');
const RedisStore = require('connect-redis').default;
const { createClient } = require('redis');

const redisClient = createClient({ url: 'redis://localhost:6379' });
await redisClient.connect();

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,       // HTTPS only
    httpOnly: true,     // No JS access
    maxAge: 86400000    // 24 hours in milliseconds
  }
}));

WordPress Redis Object Cache

# Install via WP-CLI
wp plugin install redis-cache --activate
wp redis enable

# wp-config.php
define('WP_CACHE_KEY_SALT', 'your-unique-site-salt');
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_REDIS_PASSWORD', 'your-password');

With Redis object cache, WordPress stores all database query results, transients, and computed values in Redis. On a busy WordPress site, this typically reduces database queries by 80-90% and page generation time by 50-70%.

Session Security

  • Always set requirepass — an unauthenticated Redis instance is a complete session hijacking vector
  • Bind to 127.0.0.1 or Unix socket — never expose Redis on a public IP
  • Encrypt sensitive session data in your application layer
  • Set session TTL to match your application's session lifetime
  • Use separate Redis databases (SELECT 1) or separate instances for sessions vs cache

6. Job Queues and Pub/Sub

Lists as Simple Queues

The simplest job queue pattern uses Redis lists with BRPOP (blocking right pop):

# Producer (your web application)
import json
job = json.dumps({"type": "send_email", "to": "[email protected]", "template": "welcome"})
redis.lpush("jobs:email", job)

# Consumer (background worker process)
while True:
    _, job_json = redis.brpop("jobs:email", timeout=0)  # Block forever
    job = json.loads(job_json)
    send_email(job['to'], job['template'])
    print(f"Processed email job for {job['to']}")

The BRPOP command blocks until a job is available, which is more efficient than polling with RPOP in a loop.

Reliable Queues with RPOPLPUSH

A simple BRPOP queue loses the job if the worker crashes mid-processing. The reliable queue pattern uses an atomic move:

# Atomically move job from queue to "in-progress" list
job = redis.rpoplpush("jobs:email", "jobs:email:processing")

# Process job
try:
    process_job(job)
    redis.lrem("jobs:email:processing", 0, job)  # Remove from in-progress
except Exception as e:
    # On failure: move this specific job to a dead-letter list for inspection
    redis.lrem("jobs:email:processing", 0, job)
    redis.lpush("jobs:email:failed", job)
    log_error(e)

Pub/Sub for Real-Time Messaging

# Subscriber (runs continuously)
import redis
r = redis.Redis()
pubsub = r.pubsub()
pubsub.subscribe('notifications:user:42')

for message in pubsub.listen():
    if message['type'] == 'message':
        notify_user(message['data'])

# Publisher (from any part of your application)
redis.publish('notifications:user:42', json.dumps({
    'type': 'new_comment',
    'post_id': 123,
    'commenter': 'alice'
}))

Important limitation: Redis Pub/Sub does not persist messages. If the subscriber is offline when a message is published, the message is lost. For reliable messaging, use Streams instead.

Streams: The Reliable Alternative

# Producer
redis.xadd('jobs:email', {
    'to': '[email protected]',
    'template': 'welcome',
    'user_id': '42'
})

# Consumer group — multiple workers, each gets different messages
redis.xgroup_create('jobs:email', 'email_workers', '$', mkstream=True)

# Worker 1
messages = redis.xreadgroup('email_workers', 'worker1', {'jobs:email': '>'}, count=5)
for stream, msgs in messages:
    for msg_id, fields in msgs:
        process_email(fields)
        redis.xack('jobs:email', 'email_workers', msg_id)  # Acknowledge

Streams give you message persistence, consumer groups for parallel processing, and acknowledgment to prevent message loss — all the capabilities of a proper message broker built into Redis.


7. Persistence: Protecting Your Data

By default Redis stores everything in memory, which means a server restart loses all data. For session storage and queues, this is unacceptable. Redis offers two persistence mechanisms.

RDB (Redis Database Snapshotting)

# redis.conf — snapshot triggers
save 900 1    # Save if 1+ key changed in 900 seconds (15 min)
save 300 10   # Save if 10+ keys changed in 300 seconds (5 min)
save 60 10000 # Save if 10000+ keys changed in 60 seconds

# Snapshot file location
dbfilename dump.rdb
dir /var/lib/redis

RDB creates a compact binary snapshot of the entire dataset at a point in time. It is great for backups and fast restarts, but data changed since the last snapshot is lost on crash.

AOF (Append-Only File)

# redis.conf
appendonly yes
appendfilename "appendonly.aof"

# Fsync policy:
# always    — fsync after every write (safest, slowest: ~10K ops/sec)
# everysec  — fsync every second (recommended: ~100K ops/sec, max 1s data loss)
# no        — OS decides (fastest, most data at risk)
appendfsync everysec

# Rewrite AOF when it becomes 100% larger than last rewrite
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

AOF logs every write operation. On restart, Redis replays the log to reconstruct the dataset. With appendfsync everysec, you risk losing at most 1 second of data on a crash.

Using Both (Recommended for Production)

# redis.conf — use both mechanisms
save 900 1
save 300 10
appendonly yes
appendfsync everysec

When both are enabled, Redis uses AOF for recovery on restart (more complete data). RDB remains useful for backups. This is the recommended production configuration.

Manual Backup

# Trigger a background save
redis-cli BGSAVE
# Redis replies: Background saving started

# Copy the dump file
cp /var/lib/redis/dump.rdb /backup/redis-$(date +%Y%m%d).rdb

# Automated backup cron
# /etc/cron.d/redis-backup
0 2 * * * redis redis-cli BGSAVE && sleep 5 && cp /var/lib/redis/dump.rdb /backup/redis-daily.rdb

8. Performance Tuning

Memory Limits and Eviction Policies

Always set maxmemory. Without it, Redis will consume all available RAM and potentially trigger the OOM killer.

# redis.conf
maxmemory 4gb
maxmemory-policy allkeys-lru

Policy | Description | Best Use Case
noeviction | Return error when memory full | Critical data that must not be lost
allkeys-lru | Evict least recently used keys | General-purpose cache (most common)
volatile-lru | LRU eviction, only keys with TTL | Mix of persistent and cached data
allkeys-lfu | Evict least frequently used keys | Workloads with hot and cold data
volatile-lfu | LFU, only keys with TTL | Cached data with varying popularity
volatile-ttl | Evict keys with shortest remaining TTL first | Time-sensitive data
allkeys-random | Evict random keys | Uniform access patterns (rare)

Pipelining: Batch Commands for 10x Throughput

Every Redis command involves a round-trip: send command → receive response. Pipelining batches multiple commands into a single round-trip:

# Without pipelining: 100 round trips
for user_id in user_ids:
    redis.set(f"user:{user_id}", data[user_id])

# With pipelining: 1 round trip
pipe = redis.pipeline()
for user_id in user_ids:
    pipe.set(f"user:{user_id}", data[user_id])
pipe.execute()    # Send all commands at once

Pipelining can improve throughput by 10x or more for bulk operations. It does not guarantee atomicity — use MULTI/EXEC transactions or Lua scripts for that.
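Back-of-envelope arithmetic shows why the gain is so large: latency is dominated by the network round trip, not by command execution. Assuming a 0.1 ms loopback RTT and ignoring server-side time:

```python
rtt_us = 100        # assumed loopback round trip: 100 microseconds (0.1 ms)
n_commands = 100

sequential_us = n_commands * rtt_us   # one round trip per command
pipelined_us = rtt_us                 # all 100 commands share one round trip

assert sequential_us == 10_000              # 10 ms, spent almost entirely waiting
assert sequential_us // pipelined_us == 100  # pipelining removes 99 round trips
```

Actual Redis execution time per command is on the order of a few microseconds, so on loopback the round trips account for nearly all of the elapsed time.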

Lua Scripting for Atomic Operations

-- check_and_set.lua: atomic check-and-set (compare-and-swap)
local current = redis.call('GET', KEYS[1])
if current == ARGV[1] then
    redis.call('SET', KEYS[1], ARGV[2])
    return 1
else
    return 0
end

# Execute from command line
redis-cli EVAL "$(cat check_and_set.lua)" 1 mykey old_value new_value

Diagnosing Performance Issues

# Find slow commands (commands taking > 10ms)
redis-cli SLOWLOG GET 10

# Set slowlog threshold (microseconds)
redis-cli CONFIG SET slowlog-log-slower-than 10000

# Real-time command stream (DEBUGGING ONLY — never leave on in production)
redis-cli MONITOR

# Live stats
redis-cli --stat        # Refreshes every second

# Latency measurement
redis-cli --latency     # Measure round-trip latency
redis-cli --latency-history --interval 60    # Every 60 seconds

# Run a benchmark
redis-benchmark -h 127.0.0.1 -p 6379 -n 100000 -c 50 -q

9. Security

A Redis instance exposed to the internet without authentication is a critical vulnerability. Redis has a history of being found open on the internet and used for cryptomining, data exfiltration, and server takeover via the CONFIG SET dir / CONFIG SET dbfilename trick.

Essential Security Configuration

# redis.conf — security hardening

# 1. Bind to localhost only (NEVER 0.0.0.0 in production)
bind 127.0.0.1 ::1

# 2. Require password authentication
requirepass use-a-long-random-password-here-minimum-32-chars

# 3. Disable dangerous commands by renaming them to empty string
# (note: this also blocks legitimate runtime use, e.g. CONFIG SET for tuning)
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command DEBUG ""
rename-command CONFIG ""
rename-command SLAVEOF ""

# 4. Use protected mode (enabled by default)
protected-mode yes

ACL System (Redis 6+)

Redis 6 introduced Access Control Lists, allowing per-user permissions:

# Create a read-only cache user
ACL SETUSER cache_reader on >cache_password ~* &* +@read

# Create a write user for a specific key prefix
ACL SETUSER app_user on >app_password ~session:* +SET +GET +DEL +EXPIRE

# List all users
ACL LIST

# Show which user the current connection is authenticated as
ACL WHOAMI

TLS Encryption (Redis 6+)

# redis.conf
tls-port 6380
tls-cert-file /etc/ssl/redis/redis.crt
tls-key-file /etc/ssl/redis/redis.key
tls-ca-cert-file /etc/ssl/redis/ca.crt
tls-auth-clients yes

TLS is recommended when Redis is accessed over any network, even an internal LAN. For same-server communication, Unix sockets are both faster and more secure than TLS.


10. Monitoring Redis

The INFO Command

The single most useful Redis command for understanding what is happening:

# All stats
redis-cli INFO

# Specific sections
redis-cli INFO server       # Version, config, uptime
redis-cli INFO clients      # Connected clients, blocked clients
redis-cli INFO memory       # Memory usage breakdown
redis-cli INFO stats        # Ops/sec, hits, misses, evictions
redis-cli INFO replication  # Master/replica status
redis-cli INFO keyspace     # Keys per database, TTL stats

Key Metrics to Watch

# Memory
used_memory_human:          # Current memory usage
maxmemory_human:            # Memory limit
mem_fragmentation_ratio:    # > 1.5 indicates fragmentation problem

# Performance
instantaneous_ops_per_sec:  # Current commands/second
keyspace_hits:              # Cache hits (want high)
keyspace_misses:            # Cache misses (want low)
# Hit ratio = hits / (hits + misses) — aim for > 90%

# Connections
connected_clients:          # Currently connected clients
blocked_clients:            # Clients waiting on BLPOP etc.

# Eviction
evicted_keys:               # Keys evicted due to maxmemory — should be low
expired_keys:               # Keys expired via TTL — normal
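The hit ratio noted above is worth computing explicitly from keyspace_hits and keyspace_misses. A small helper, with the example figures below being hypothetical:

```python
def cache_hit_ratio(keyspace_hits, keyspace_misses):
    """Fraction of lookups served from cache; 1.0 when there is no traffic yet."""
    total = keyspace_hits + keyspace_misses
    return keyspace_hits / total if total else 1.0

# Hypothetical INFO stats figures: 920,000 hits, 80,000 misses
ratio = cache_hit_ratio(920_000, 80_000)
assert ratio == 0.92   # 92% — above the 90% target
```

A ratio drifting downward over time usually means TTLs are too short, the working set no longer fits in maxmemory, or keys are being evicted (check evicted_keys alongside it).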

Prometheus Monitoring

# Install redis_exporter
wget https://github.com/oliver006/redis_exporter/releases/latest/download/redis_exporter-linux-amd64.tar.gz
tar xzf redis_exporter-linux-amd64.tar.gz
./redis_exporter --redis.addr redis://localhost:6379 --redis.password your-password

Panelica includes Prometheus and Grafana pre-configured. You can add the Redis exporter dashboard (Grafana Dashboard ID 11835) to get a full Redis monitoring view including memory usage, hit rate, connected clients, evictions, and command throughput — all visible in the Panelica monitoring dashboard.


11. Common Real-World Patterns

API Rate Limiting

# Fixed-window rate limit: 100 requests per minute per IP
function check_rate_limit(ip_address):
    key = f"rate:{ip_address}"
    current = redis.incr(key)

    if current == 1:
        redis.expire(key, 60)   # Set 60-second window on first request

    if current > 100:
        raise RateLimitExceeded(f"Too many requests. Try again in {redis.ttl(key)}s")

    return current
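The INCR-based pattern above counts within a fixed window, so a burst straddling a window boundary can briefly see up to double the allowed rate. A true sliding window keeps one timestamp per request in a sorted set and trims entries older than the window (ZADD + ZREMRANGEBYSCORE + ZCARD). A Python sketch with a plain list standing in for the sorted set, so it runs without a server (helper names are illustrative):

```python
import time

def allow_request(log, limit=100, window=60.0, now=None):
    """Sliding-window limiter: log holds one timestamp per accepted request."""
    now = time.time() if now is None else now
    # ZREMRANGEBYSCORE key 0 (now - window): drop timestamps outside the window
    log[:] = [t for t in log if t > now - window]
    if len(log) >= limit:   # ZCARD key — requests still inside the window
        return False
    log.append(now)         # ZADD key now request_id
    return True

log = []
# 100 requests at t=0 are allowed, the 101st is rejected...
assert all(allow_request(log, now=0.0) for _ in range(100))
assert allow_request(log, now=0.0) is False
# ...and capacity frees up only once old timestamps age out of the window
assert allow_request(log, now=61.0) is True
```

In real Redis the trim, count, and add would be wrapped in a MULTI/EXEC block or a Lua script so concurrent requests cannot slip past the limit between steps.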

Full-Page Cache with Nginx + Redis

// In your PHP application
$cache_key = 'page:' . md5($_SERVER['REQUEST_URI']);
$cached_html = $redis->get($cache_key);

if ($cached_html) {
    header('X-Cache: HIT');
    echo $cached_html;
    exit;
}

// Generate page...
ob_start();
render_page();
$html = ob_get_clean();

// Cache for 5 minutes
$redis->setex($cache_key, 300, $html);
echo $html;

Shopping Cart

# Cart stored as a hash
HSET cart:user:42 product:101 2    # 2 units of product 101
HSET cart:user:42 product:207 1    # 1 unit of product 207
EXPIRE cart:user:42 86400          # Expire in 24 hours if abandoned

HGET cart:user:42 product:101      # Get quantity
HGETALL cart:user:42               # Get entire cart
HDEL cart:user:42 product:101      # Remove item
DEL cart:user:42                   # Clear cart on checkout

Real-Time Leaderboard

# Update score when game ends
ZINCRBY leaderboard:weekly 250 "player:alice"

# Get top 10
ZREVRANGE leaderboard:weekly 0 9 WITHSCORES

# Get player rank
ZREVRANK leaderboard:weekly "player:alice"

# Reset weekly leaderboard every Sunday
# (scheduled job: DEL leaderboard:weekly)

12. Redis vs Alternatives

Feature | Redis 7.x | Memcached | KeyDB | DragonflyDB
Speed (simple ops) | ~100K ops/sec | ~150K ops/sec | ~150K ops/sec | ~300K ops/sec
Data structures | Extensive (11 types) | Strings only | All Redis types | All Redis types
Persistence | RDB + AOF | None | RDB + AOF | Snapshot
Replication | Master-replica, Cluster | None built-in | Active-active replication | Master-replica
Redis compatibility | Native | N/A | Full | Full
Threads | I/O threads (6+) | Fully multi-threaded | Fully multi-threaded | Fully multi-threaded
License | RSAL (Redis Source Available) | BSD | MIT | BSL
Production maturity | Excellent (15+ years) | Excellent | Good | Early (2022+)

For the vast majority of use cases, Redis remains the right choice due to its maturity, ecosystem support, and comprehensive documentation. KeyDB and DragonflyDB are interesting alternatives if you need higher single-instance throughput, but Redis's community and tooling are unmatched.


Quick Reference Cheat Sheet

Essential Commands

Command | Description
SET key value EX 3600 | Set with 1-hour TTL
GET key | Get string value
DEL key [key...] | Delete keys
EXISTS key | 1 if exists, 0 if not
TTL key | Remaining TTL in seconds
EXPIRE key seconds | Set TTL on existing key
INCR / INCRBY key [n] | Atomic increment
KEYS pattern | Find keys (NEVER in production — use SCAN)
SCAN 0 MATCH prefix:* COUNT 100 | Safe key iteration
FLUSHDB | Delete all keys in current DB
INFO memory | Memory usage stats
DEBUG SLEEP 0 | Test connection

Production Config Tuning

Setting | Recommended Value | Reason
maxmemory | 50-70% of server RAM | Leave headroom for OS and apps
maxmemory-policy | allkeys-lru | Best general-purpose eviction
appendfsync | everysec | Balance safety vs performance
tcp-keepalive | 300 | Detect dead connections
timeout | 0 | Keep connections (managed by app)
hz | 20 | Background tasks frequency (default 10)
lazyfree-lazy-eviction | yes | Non-blocking key deletion
activerehashing | yes | Incremental hash table resizing

Redis and Panelica

Panelica ships with Redis 7 pre-configured and fully integrated. It handles WordPress object caching automatically (one click in the WordPress toolkit), stores PHP sessions for all hosted applications, and powers the real-time monitoring dashboards that show you live resource usage across your server.

Redis runs isolated within Panelica's service architecture at /opt/panelica/var/run/redis.sock — accessible to all services but not exposed to hosted user processes. The built-in monitoring dashboard shows Redis memory usage, hit rate, connected clients, and operations per second without any additional configuration.

If you are running a WordPress site that generates database queries on every page load, enabling the Redis object cache through Panelica's WordPress toolkit is one of the highest-impact performance optimizations you can make — typically reducing page generation time from seconds to under 200ms on the first visit, and under 50ms for cached pages.

Redis is one of those tools that, once you understand it, you start seeing opportunities to use it everywhere. Slow database queries, session scaling problems, job queue chaos, real-time features — Redis solves all of them with a consistent, fast, reliable interface. Master the data structures, choose the right caching pattern, and set your memory limits — the rest follows naturally.
