Proxying Distant HTTP Services for Latency Wins
My Rails app, Booko, uses Meilisearch for text search - it's great, and so easy to set up and use.
Booko runs in Hetzner's Singapore location, but the Meilisearch instance runs on a physical host in Hetzner's Helsinki region. Hetzner is hard to beat for value: €39 + tax for 64GB of RAM and 20 CPU cores.
When users search for books, the Rails app creates a new Meilisearch::Client object to perform the query. Unfortunately, that means opening a new connection to Meilisearch for every search, because the Meilisearch client objects aren't persisted between requests. Due to the latency between Singapore and Helsinki, searches take ~ 770ms - which is quite a long time. Booko then needs to look up the search result records in the database and render the results.
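Roughly, the pattern looks like this (a sketch rather than Booko's actual code - the index name and ENV keys are stand-ins):

# app/services/book_search.rb - illustrative sketch
class BookSearch
  def self.search(query)
    # A new client object - and therefore a new TCP connection to Helsinki -
    # is created on every call, because nothing is persisted between requests.
    client = Meilisearch::Client.new(
      ENV.fetch("MEILISEARCH_URL"),     # the remote Meilisearch instance
      ENV.fetch("MEILISEARCH_API_KEY")
    )
    client.index("books").search(query)
  end
end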
The raw latency between Singapore and Helsinki is ~ 182ms as measured by ping.
I got to wondering if there was any way to reduce the latency when searching Meilisearch by removing the need to open a new connection for every request.
There are two main ways to reduce this latency. Firstly, I could create a global Meilisearch client - I'd have to research whether it's thread-safe, and maybe wrap it in mutexes / semaphores or a connection pool (a sketch of that approach is below). Or, I could use an external connection pool - we'd still have the cost of opening connections, but opening connections to localhost is fast. Caddy provides a connection pool for its reverse proxy implementation and it's super easy to set up. Let's do that!
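For completeness, the in-app option I didn't pursue might look something like this, using the connection_pool gem (a sketch - the pool size and ENV keys are stand-ins, and whether a reused client actually keeps its TCP connection open depends on the gem's HTTP layer):

# config/initializers/meilisearch_pool.rb - illustrative sketch
require "connection_pool"

MEILISEARCH_POOL = ConnectionPool.new(size: 5, timeout: 5) do
  Meilisearch::Client.new(
    ENV.fetch("MEILISEARCH_URL"),
    ENV.fetch("MEILISEARCH_API_KEY")
  )
end

# Each search checks a client out of the pool instead of building a new one.
MEILISEARCH_POOL.with do |client|
  client.index("books").search("dune")
end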
We'll add a reverse proxy on each of the VPSs the Rails app runs on, and we'll configure Caddy to reverse proxy to the remote Meilisearch server.
I ran 20 random searches, using both direct connections and proxied connections. In both cases, we're creating a new Meilisearch::Client for every search.
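The timing loop was along these lines - a sketch, with stand-in queries, URLs and key names:

# bench_search.rb - illustrative sketch
require "meilisearch"

SEARCH_TERMS = ["dune", "neuromancer", "hyperion"] # stand-in queries

def time_search(url, query)
  client = Meilisearch::Client.new(url, ENV.fetch("MEILISEARCH_API_KEY"))
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  client.index("books").search(query)
  (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000 # milliseconds
end

direct  = SEARCH_TERMS.map { |q| time_search("http://100.2.3.4:7718", q) }  # straight to Helsinki
proxied = SEARCH_TERMS.map { |q| time_search("http://127.0.0.1:7718", q) }  # via the local Caddy proxy

puts "Direct  average: #{(direct.sum / direct.size).round(2)}ms"
puts "Proxied average: #{(proxied.sum / proxied.size).round(2)}ms"

Here are the results: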
Direct Client
Average response time: 769.03ms
Min response time: 749.52ms
Max response time: 800.4ms
Successful requests: 20/20
Errors: 0
Proxied Client
Average response time: 586.24ms
Min response time: 568.96ms
Max response time: 599.82ms
Successful requests: 20/20
Errors: 0
That's a pretty easy win! With a latency reduction of 182.79ms per request, the proxied client is 23.77% faster.
Why is it so much faster? Opening a TCP connection requires a three-way handshake: the client sends a SYN packet, waits for the server's SYN-ACK response, then sends an ACK to complete the handshake. Only then can application data flow. With a 182ms round-trip time, that handshake adds roughly one full round trip of overhead to every new connection - which lines up almost exactly with the 182.79ms reduction we measured.
The proxy eliminates this by maintaining persistent connections to Meilisearch. Instead of establishing a new connection for each search, the Rails app connects to the local proxy (sub-millisecond latency), which forwards requests over pre-established connections to Helsinki.
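On the Rails side, the only change is the URL handed to the client - something like this (the ENV key is a stand-in):

# The client now points at the local Caddy proxy rather than Helsinki directly.
# Opening a connection to 127.0.0.1 is sub-millisecond; Caddy forwards the
# request over its pre-established upstream connections.
client = Meilisearch::Client.new(
  "http://127.0.0.1:7718",            # local reverse proxy
  ENV.fetch("MEILISEARCH_API_KEY")
)
client.index("books").search("dune")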
It's very simple to set up a connection pool like this - it's a straightforward compose file:
# compose.yaml
services:
  meilisearch-proxy:
    image: caddy:latest
    container_name: meilisearch-proxy
    restart: unless-stopped
    ports:
      - "127.0.0.1:7718:7718"
    volumes:
      - ./conf:/etc/caddy
And a simple Caddyfile:
# conf/Caddyfile
http://127.0.0.1:7718 {
    reverse_proxy 100.2.3.4:7718 {
        transport http {
            keepalive 1800s # 30 minutes
            keepalive_idle_conns 20
        }
    }
}
Caddy's default keepalive for idle upstream connections is 2 minutes; I've asked it to hold them open for 30 minutes, but that's network and server dependent, so there's no guarantee it'll happen. You can read more about how to tune Caddy's reverse proxy here.
And there you have it - a reduction in search page load times of 182ms.