The temptation to self-host a load balancer
When you need to distribute traffic across multiple servers, the instinct is to set up your own load balancer: HAProxy, Nginx, or Traefik on a dedicated VPS.
It works. But should you do it?
The problem with self-hosted load balancers
Adding a load balancer means adding another layer of infrastructure:
- Another server to maintain
- Security patches and updates
- Monitoring and alerting
- Backup and disaster recovery
- SSL certificate management
Your load balancer becomes a critical single point of failure. If it goes down, everything behind it is unreachable.
High availability requires more complexity
A single load balancer defeats the purpose. You need:
- Two load balancers for redundancy
- Keepalived or similar for failover
- Floating IPs or DNS failover
Now you’re maintaining multiple servers just to route traffic.
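To give a sense of that complexity, here is a minimal (illustrative, not production-ready) keepalived sketch for the failover piece alone. Both load balancer nodes run keepalived; the MASTER holds a floating IP until it fails, at which point the BACKUP takes over. The interface name and IP are placeholders:

```
# /etc/keepalived/keepalived.conf -- illustrative sketch only
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the second node
    interface eth0          # your public-facing interface
    virtual_router_id 51
    priority 100            # set lower on the second node
    virtual_ipaddress {
        203.0.113.10/24     # the floating IP clients connect to
    }
}
```

And that still leaves DNS, health checks between the two nodes, and keeping both HAProxy/Nginx configs in sync.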
Geographic limitations
A self-hosted load balancer sits in one location. Users on the other side of the world route through that single point, adding latency.
The Cloudflare alternative
Cloudflare offers load balancing as a service. For a fraction of what you’d spend on load balancer infrastructure, you get:
Global anycast network
Cloudflare has data centers in over 300 cities worldwide. Your users connect to the nearest one automatically. No configuration required.
Health checks and automatic failover
Cloudflare monitors your origin servers and routes around failures automatically. If a server goes down, traffic shifts to healthy servers within seconds.
Geographic steering
Route users to the server closest to them:
- European users → Amsterdam server
- US users → Las Vegas or Dallas server
- Asian users → Singapore server
This happens automatically based on user location.
Simple configuration
No HAProxy config files. No Nginx upstream blocks. Just a web dashboard where you:
- Add your origin servers
- Configure health checks
- Set routing rules
That’s it.
What it costs
Cloudflare’s load balancing pricing is straightforward:
- Load Balancing: Starts at $5/month for basic features
- Additional origins: Small additional cost per origin server
- Health checks: Included
Compare that to:
- Two VPS instances for redundant load balancers: $10-40/month
- Your time maintaining them: priceless (or expensive, depending on how you look at it)
How we use it at ServerPoint
We practice what we preach. Our own infrastructure uses Cloudflare’s global load balancer:
- ServerPoint’s Client Portal: Load balanced across multiple backend servers
- APIs: Geographic routing to the nearest data center
When you log into portal.serverpoint.com, Cloudflare routes you to the best available backend automatically.
Setting it up
Step 1: Add your domain to Cloudflare
If you haven’t already, add your domain to Cloudflare and point your nameservers to them.
Step 2: Create a load balancer
In the Cloudflare dashboard:
- Go to Traffic → Load Balancing
- Click Create Load Balancer
- Enter the hostname (e.g., app.yourdomain.com)
Step 3: Create origin pools
Origin pools are groups of servers. Create one or more:
Example: Single pool with multiple servers
- Pool name: web-servers
- Origins:
  - server1.yourdomain.com (your first VPS)
  - server2.yourdomain.com (your second VPS)
Example: Geographic pools
- Pool us-servers: Dallas and Las Vegas VPS
- Pool eu-servers: Amsterdam VPS
- Pool asia-servers: Singapore VPS
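The dashboard is the easiest way to do this, but pools can also be created through Cloudflare's v4 REST API. The sketch below builds the pool body for the two-server example above; the endpoint path in the comment reflects my reading of Cloudflare's Load Balancing API (verify against the current API reference), and the account ID and token are placeholders:

```javascript
// Build the JSON body for a Cloudflare origin pool.
// Field names (name, origins[].address, origins[].enabled) follow
// Cloudflare's Load Balancing pool schema as I understand it.
function buildPoolPayload(name, hostnames) {
  return {
    name,
    origins: hostnames.map((host) => ({
      name: host.split('.')[0], // e.g. "server1"
      address: host,
      enabled: true,
    })),
  };
}

// The two-server pool from the example above.
const payload = buildPoolPayload('web-servers', [
  'server1.yourdomain.com',
  'server2.yourdomain.com',
]);

// POST it with an API token that has Load Balancing edit rights:
// fetch(`https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/load_balancers/pools`, {
//   method: 'POST',
//   headers: { Authorization: `Bearer ${API_TOKEN}`, 'Content-Type': 'application/json' },
//   body: JSON.stringify(payload),
// });
```

Scripting this is handy once you have more than a couple of pools to manage.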
Step 4: Configure health checks
Set up health checks to monitor your servers:
- Type: HTTP or HTTPS
- Path: /health or any endpoint that returns 200 OK
- Interval: how often to check (60 seconds is common)
Create a simple health endpoint in your application:
```javascript
app.get('/health', (req, res) => {
  res.status(200).send('OK')
})
```
Step 5: Set traffic steering
Choose how traffic is distributed:
- Off: Random distribution
- Geo Steering: Route to the nearest pool based on user location
- Dynamic Steering: Route based on latency measurements
- Proximity Steering: Route to the geographically closest pool
For most setups, Geo Steering or Proximity Steering works best.
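If you configure this via the API instead of the dashboard, the steering mode is a single field on the load balancer object. The sketch below builds a load balancer body with geo steering; field names follow my reading of Cloudflare's Load Balancing API (check the current API reference), and the pool IDs are placeholders you would get back when creating the pools:

```javascript
// Build the JSON body for a Cloudflare load balancer.
// steering_policy values per Cloudflare's API as I understand it:
//   'off', 'geo', 'dynamic_latency', 'proximity'
function buildLoadBalancer(hostname, poolIds, fallbackPoolId) {
  return {
    name: hostname,               // e.g. 'app.yourdomain.com'
    default_pools: poolIds,       // pool IDs in priority order
    fallback_pool: fallbackPoolId, // used when all default pools are down
    steering_policy: 'geo',       // route to the nearest pool by region
    proxied: true,                // traffic flows through Cloudflare's edge
  };
}
```

Note that the fallback pool (covered in the next step) is part of the same object, so the whole routing policy lives in one place.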
Step 6: Configure fallback
Set a fallback pool in case your primary pools are all unhealthy. This ensures traffic always has somewhere to go.
What about SSL?
Cloudflare handles SSL termination at their edge. You can:
- Use Cloudflare’s free SSL certificate for the frontend
- Use origin certificates for the connection between Cloudflare and your servers
No more managing Let’s Encrypt renewals on a load balancer.
When you might still want self-hosted
There are edge cases where self-hosted load balancing makes sense:
- Internal traffic that shouldn’t route through Cloudflare
- Non-HTTP protocols that Cloudflare doesn’t support
- Ultra-low latency requirements where every millisecond matters
- Regulatory requirements that prohibit third-party routing
For most web applications and APIs, Cloudflare handles it better than you could yourself.
The bottom line
Every layer of infrastructure you add is a layer you have to maintain. Load balancers are critical infrastructure. If they fail, everything fails.
Cloudflare gives you:
- Global presence you couldn’t build yourself
- Automatic failover without complex configuration
- Geographic routing out of the box
- Less to maintain so you can focus on your application
For a few dollars a month, you get enterprise-grade load balancing without the enterprise-grade complexity.
Don’t build what you can buy cheaply, especially when the bought solution is better than what you’d build.
Explore our VPS plans across multiple data centers for geographic redundancy.