Hands-On Load Balancer Setup: Distribute Traffic Across Servers
When you’re building scalable applications, it’s important to make sure your backend can handle increased traffic and stay available even if a server fails.
In this guide, you’ll learn how to set up load balancing on DigitalOcean using multiple API servers and two load balancers for high availability.
Let’s break this down into three simple steps.
Step 1: Set Up Multiple API Servers
Before you can distribute traffic, you need multiple servers running the same API.
If you already have a working droplet (virtual server) on DigitalOcean:
Create a Snapshot of your existing droplet
- A snapshot is just a frozen image of your server at a specific point in time.
- Go to your droplet, click Snapshots, and select Take Live Snapshot.
Create a New Droplet from that snapshot
- Click “More” on the snapshot and select Create Droplet from Snapshot.
- Choose the same region and data center as your original droplet.
- (Optional) Rename the new droplet, e.g. `user-management-api-2`.
- Downsize the resources if needed.
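If you prefer the command line, the snapshot-and-clone flow above can also be sketched with `doctl`, DigitalOcean's official CLI. The droplet ID, snapshot name, and size slug below are placeholders, and the commands are printed rather than executed, since both would call the live DigitalOcean API:

```shell
# Placeholder values -- look up your own with `doctl compute droplet list`
DROPLET_ID="111111"
REGION="sfo3"

# Both commands act on the live DigitalOcean API, so they are printed
# here for reference rather than executed.
SNAP_CMD="doctl compute droplet-action snapshot $DROPLET_ID --snapshot-name user-management-api-base"
CLONE_CMD="doctl compute droplet create user-management-api-2 --image <snapshot-id> --region $REGION --size s-1vcpu-1gb"
echo "$SNAP_CMD"
echo "$CLONE_CMD"
```

Replace `<snapshot-id>` with the ID reported by `doctl compute snapshot list` once the snapshot finishes.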
Start the API on the new server
- Launch the console for the new droplet.
- Allow port `3000` using `sudo ufw allow 3000`.
- Go to the project folder and start the app with `npm start` (or whatever command your app uses).

Verify everything works by visiting the new droplet's IP at port `3000`.
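That verification is easy to script with `curl`. The snippet below demonstrates the check against a local stand-in server (Python's built-in HTTP server playing the role of the Node API on port 3000); on a real droplet you would skip the stand-in and point `curl` at the droplet's public IP:

```shell
# On the droplet you would first run:
#   sudo ufw allow 3000
#   npm start
# Here a local stand-in server plays the role of the API on port 3000.
python3 -m http.server 3000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# The actual check: does port 3000 answer with HTTP 200?
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:3000/)
echo "status=$STATUS"

kill $SERVER_PID
```

Anything other than a `200` means the app isn't running or the firewall is still blocking the port.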
Step 2: Create Load Balancers
Now that we have multiple API servers, it’s time to distribute the traffic.
Navigate to the “Networking” tab in DigitalOcean and click Load Balancers
Choose between:
- Regional: Best for distributing traffic within a specific geographic area.
- Global: Centralized, but may cause latency if your users are far away.
- For most projects, go with Regional, especially if your droplets are in the same data center.
Select the same region/data center as your droplets (e.g., San Francisco, datacenter 3)
Set the load balancer visibility to External so it’s accessible via the internet
Enable high availability by creating at least 2 load balancers
- This removes a single point of failure.
- DigitalOcean handles failover automatically.
Step 3: Configure the Load Balancer
With the load balancer created, connect it to your API servers.
Connect Droplets
- Select the droplets running your API.
- Ensure they’re in the same region as the load balancer; otherwise, they won’t show up.
Set Forwarding Rules
- Forward requests from Port 80 (default HTTP port) to Port 3000 (your API server port).
Create the Load Balancer
- Choose a name or accept the default.
- Once created, traffic is routed to your droplets based on their health.
Final Result
You now have a scalable, reliable infrastructure with:
- Multiple servers running the same Node.js API.
- Load balancer(s) that split traffic and check for server health.
- Automatic failover, meaning if one server goes down, traffic is rerouted automatically.
You can visit the load balancer’s IP in the browser (or, ideally, attach a domain) and it will automatically route to one of the healthy API servers.
Every refresh may connect to a different instance, and you can monitor performance, active connections, and server health via DigitalOcean’s dashboard.
Watch the full walkthrough on YouTube here:
👉 Watch the video version
If you want to go deeper into system design and build real-world web infrastructure step by step:
👉 Join the Dev Mastery