In this blog post, we will show you a zero-touch method for integrating HAProxy with Consul by using DNS for service discovery available in HAProxy 1.8.

HAProxy is the most widely used software load balancer in the world, well known for being extremely fast and resource-efficient while minimizing latencies in microservices environments. It also includes a continuously expanding list of features for improved integration with orchestration systems and service discovery tools, such as hitless reloads, dynamic configuration without reloading via the HAProxy Runtime API, and DNS for service discovery.

In a previous blog post, “Dynamic Scaling for Microservices with the HAProxy Runtime API”, we explained how to integrate HAProxy with Consul using the HAProxy Runtime API. Here we will show how to use DNS for service discovery instead of the Runtime API, which has the advantage of involving fewer “moving parts” overall.

HAProxy 1.8 has DNS support (SRV/EDNS), so its now zero touch when used with Consul! Pretty slick. https://t.co/5dcud6n5yF

— Armon Dadgar (@armon) September 29, 2017

Consul’s purpose is to centralize and manage information related to service and application locations. The key points relevant to our setup can be summarized as follows:

  • Consul runs in a distributed client/server model

  • Consul clients run on application servers and register locally running services with Consul servers

  • Consul clients also perform health checks on the local services

  • Consul servers maintain a list of endpoints (IPs and ports) for each registered service

  • Consul servers can be queried to return current service endpoints

  • Querying can be done via JSON/REST API or via DNS
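To make the two query paths concrete, here is a small sketch with hypothetical helper functions that build the DNS SRV name and the REST URL for a given service. It assumes Consul's default ports (8600 for DNS, 8500 for HTTP); note that the Consul server started later in this post overrides the HTTP port to 80.

```shell
# Hypothetical helpers that build the two query targets for a service name.
consul_srv_name() { printf '_%s._tcp.service.consul\n' "$1"; }
consul_rest_url() { printf 'http://127.0.0.1:8500/v1/catalog/service/%s\n' "$1"; }

# With a local agent running, either interface can then be queried, e.g.:
#   dig @127.0.0.1 -p 8600 "$(consul_srv_name www)" SRV
#   curl -s "$(consul_rest_url www)"
```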

For more information, please refer to the excellent Consul documentation or Consul guides.

HAProxy and Consul provide a reliable solution for discovering services and routing requests across your infrastructure.

Microservices Architecture

We will describe a microservices architecture setup based on the following:

  • Consul is used for the service registry and monitoring. When an application endpoint is spawned on the network (whether a VM, a bare-metal server, or a container), the local Consul client will register the available services with the Consul server

  • HAProxy is used for load balancing and routing HTTP traffic to services

  • Consul-template running on HAProxy instances is used for generating the HAProxy configuration and reloading HAProxy when services (HAProxy backends) are added or removed

  • HAProxy uses DNS for querying Consul and dynamically scaling service nodes (HAProxy backend servers)

  • Consul’s key-value store is used for storing some of the HAProxy configuration settings

Architecture diagram


This microservices architecture setup with HAProxy and Consul provides a reliable solution for service discovery and request routing.

All of this could be hosted on bare metal servers, VMs, cloud infrastructures (AWS etc.), or coupled to container orchestrators such as Nomad. (Even Kubernetes, but Kubernetes also provides its own service registry.)

Terminology in HAProxy & Consul

The table below explains HAProxy terminology and its equivalents in Consul:

HAProxy          Consul
backend          service
backend server   service node

Consul Server

In our example microservices architecture we will be running one Consul server.

The Consul server will be started with the following command line:

consul agent -server -ui -bootstrap-expect=1 -data-dir=/var/lib/consul \
-node=server -bind=0.0.0.0 -config-dir=/etc/consul.d -client=0.0.0.0 \
-http-port=80 -domain=consul.itchy.local

In the directory /etc/consul.d/ there will be two config files: basic_config.json, used to configure the local Consul server:

{
  "dns_config": {
    "enable_truncate": true,
    "udp_answer_limit": 100
  }
}

consulgui.json, used to register the Consul dashboard with Consul:

{
  "service": {
    "ID": "consul-server",
    "Name": "gui",
    "Address": "dashboard.consul.itchy.local",
    "Port": 80,
    "check": {
      "http": "http://dashboard.consul.itchy.local",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}

HAProxy Server

On the HAProxy server, we will start two services:

  • Consul agent in client mode:

consul agent -data-dir=/var/lib/consul -node=$HOSTNAME -node-id=$(uuidgen) \
-bind=0.0.0.0 -config-dir=/etc/consul.d \
-retry-join <IP addr or hostname of consul agent server>
  • Consul-template:

consul-template -template="/haproxy.conf.tmpl:/haproxy.conf:/haproxy_reload.sh" -log-level=debug

Consul-template starts by reading and parsing the template file (‘/haproxy.conf.tmpl’ in our example). For each dynamic part of the configuration it detects, it watches the corresponding endpoints on the Consul agent API. Whenever a change occurs, consul-template regenerates the file ‘/haproxy.conf’ and runs the script /haproxy_reload.sh.

The script /haproxy_reload.sh ensures that a reload is triggered only when a backend is created or removed (i.e. when a service has been registered or unregistered in the Consul server), and not when individual service nodes are added or removed.

Since HAProxy will perform application scaling using DNS, we have to configure HAProxy for this purpose in our /haproxy.conf.tmpl template file:

  • We need a “resolvers” section:

resolvers consul
  nameserver consul 127.0.0.1:8600
  accepted_payload_size 8192
  • In the backend configuration, we use HAProxy’s “server-template” directive and we ensure it uses SRV record types to query the local Consul client:

{{range services}}{{$servicename := .Name}}{{$nbsrvkeyname := printf "service/haproxy/backend/%s/nbsrv" $servicename}}
backend b_{{$servicename}}.{{key "service/haproxy/domainname"}}
server-template {{$servicename}} {{keyOrDefault $nbsrvkeyname "10"}} _{{$servicename}}._tcp.service.consul resolvers consul resolve-prefer ipv4 check
{{end}}
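To illustrate what this template produces: for two registered services “www” and “api”, with the key service/haproxy/domainname set to consul.itchy.local and no nbsrv keys set (so the default of 10 applies), the rendered backends would look roughly like this (a sketch, not output captured from the live setup):

```
backend b_www.consul.itchy.local
    server-template www 10 _www._tcp.service.consul resolvers consul resolve-prefer ipv4 check

backend b_api.consul.itchy.local
    server-template api 10 _api._tcp.service.consul resolvers consul resolve-prefer ipv4 check
```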

The full contents of the template file can be found in the blog post’s code repository.
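The reload script itself is not shown in the post; a minimal sketch of the idea behind /haproxy_reload.sh (a hypothetical implementation, with a made-up function name) is to reload only when the set of "backend" lines differs between the newly rendered configuration and the previously applied one:

```shell
# Print "reload" (where a real script would reload HAProxy) only when the
# set of "backend" lines differs between the new and the old configuration.
reload_if_backends_changed() {
    new_conf=$1   # freshly rendered configuration
    old_conf=$2   # copy of the last applied configuration
    new_b=$(grep '^backend ' "$new_conf" 2>/dev/null | sort)
    old_b=$(grep '^backend ' "$old_conf" 2>/dev/null | sort)
    if [ "$new_b" != "$old_b" ]; then
        echo reload   # a real script would run e.g. "systemctl reload haproxy"
    fi
    cp "$new_conf" "$old_conf"
}
```

A real script would trigger an actual HAProxy reload instead of printing; the point is that server additions and removals inside an existing backend never cause a reload, since those are handled via DNS.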

Application servers

On the application servers, we will run two main processes:

  • Consul agent in client mode:

consul agent -data-dir=/var/lib/consul -node=$HOSTNAME -node-id=$(uuidgen) \
-bind=0.0.0.0 -config-dir=/etc/consul.d \
-retry-join <IP addr or hostname of consul agent server>
  • The application server itself

In the Consul configuration folder (/etc/consul.d/) on each application server, we should create a JSON file describing the services provided by that server:

{
  "service": {
    "ID": "<your server hostname>",
    "Name": "<the name of the service delivered by this server>",
    "Address": "<local IP address where the service is available>",
    "Port": 80,
    "check": {
      "http": "http://<local IP address where the service is available>:80",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}

Note that the “Name” parameter is very important. All application servers sharing the same name will be grouped under the same service by Consul, and HAProxy will configure them in the same backend. From here, scaling the application simply consists of adding new servers whose “Name” parameter in the JSON file above matches an existing service name. The magic then happens automatically: Consul clients notify the Consul server of the new nodes for the service, and HAProxy finds out about them via DNS queries to Consul.
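As a concrete example, a third node of the “www” service could register itself with a file like the one below (the server ID and address are hypothetical values chosen for illustration):

```
{
  "service": {
    "ID": "app-www-3",
    "Name": "www",
    "Address": "10.0.0.13",
    "Port": 80,
    "check": {
      "http": "http://10.0.0.13:80",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```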

Routing client traffic to services

After the registration of services has been taken care of, client traffic has to be sent to HAProxy, and HAProxy has to route that traffic to the correct backend servers. There are three possible ways to route the traffic to backend servers:

  1. Using the Host header. For example, “www.domain.com” for service “www”, and “api.domain.com” for service “api”. In this case you should configure two DNS entries, www.domain.com and api.domain.com respectively, both pointing to your HAProxy server.

  2. Using URLs. For example, “www.domain.com” for service “www”, and “www.domain.com/api” for service “api”. In this case you should set up a single DNS entry “www.domain.com” to point to your HAProxy server.

  3. Using a combination of Host headers and URLs. In this case, the corresponding HAProxy Consul template file (‘/haproxy.conf.tmpl’) may become more complicated and also involve Consul’s key-value store.

In any case, to send client traffic to HAProxy, one or more DNS records simply have to be updated to point to the server running HAProxy. (In a high availability scenario, you would want HAProxy to be reachable on a virtual IP address managed by VRRP.)

In this blog post, we will be using option #1 above: routing requests based on the Host header for two services named “www” and “api”. The corresponding Host header values will be “www.consul.itchy.local” and “api.consul.itchy.local”.

In the HAProxy configuration, we can route traffic to backends dynamically by using the “use_backend” rule. This rule can take information from live traffic (here, the Host header) and apply the needed transformations. For example, starting from the value of the Host header, we convert it to lowercase and keep only the part before any port number. This is done using a rule such as:

use_backend b_%[req.hdr(Host),lower,word(1,:)]
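The converter chain can be illustrated outside of HAProxy; the sketch below (with a made-up helper name) mimics what the lower and word(1,:) converters do to a Host header before the “b_” prefix is applied:

```shell
# Mimic HAProxy's %[req.hdr(Host),lower,word(1,:)] fetch/converter chain:
# lowercase the header, keep only the part before any ":" (i.e. strip the
# port), then prepend "b_" to obtain the backend name.
host_to_backend() {
    h=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
    printf 'b_%s\n' "${h%%:*}"
}
host_to_backend 'WWW.consul.itchy.local:8080'   # -> b_www.consul.itchy.local
```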

Also, the HAProxy configuration template file must create one backend per service found in Consul:

{{range services}}{{$servicename := .Name}}
backend b_{{$servicename}}.{{key "service/haproxy/domainname"}}
[...]
{{end}}

Please note that the exact accepted domain names are stored in Consul’s key-value store. It is also possible to configure HAProxy to be more permissive and allow more domain names such as “api.otherdomain.com”. This would be done by updating the “use_backend” rule above to match only the string before the first dot and by removing the “key” used to build the backend name.

If you later create additional services, consul-template will create the appropriate backends with the service “Name” as the name and the rule above will also match the traffic automatically. For example, “newservice.consul.itchy.local” would be routed to the Consul service called “newservice”.

What happens if HAProxy receives traffic for an unknown service? It returns HTTP 503 (Service Unavailable); that error page can also be customized if needed.
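For example, a custom 503 page could be served by pointing HAProxy at an error file with the “errorfile” directive (the file path below is an assumption for illustration):

```
defaults
    errorfile 503 /etc/haproxy/errors/503-custom.http
```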

The Whole Setup

We are going to run the following servers:

  • One Consul agent in server mode

  • One HAProxy server

  • Two application servers delivering service “api”

  • Four application servers delivering service “www”

Services

Below is a screenshot of the services list as seen on the Consul server:


List of services as seen on the Consul server

We can see our “www” and “api” services as well as three others:

  • “consul” is the Consul server itself

  • “consul-dashboard” is the Consul Server GUI (the one we have used to take these screenshots)

  • “haproxystats” is the HAProxy statistics page

If we zoom into the “www” and “api” services, we can see the number of nodes associated with each of them:


Nodes associated with the “api” service


Nodes associated with the “www” service

HAProxy Statistics Page

Let’s have a look at the built-in HAProxy statistics page to verify that the services have been configured as expected and to check how many nodes are in each of them:


The statistics page within HAProxy

We can see the backends for our two services “www” and “api”, respectively named “b_www.consul.itchy.local” and “b_api.consul.itchy.local”.

In each backend, we can see that HAProxy has found the correct number of servers associated with each service in Consul: 4 for “www” and 2 for “api”.

You might be wondering why there are so many red servers for both “www” and “api”.

This is because we have used the “server-template” directive in HAProxy to provision server slots that can be used later to scale the backend servers. The number of server slots can be configured in Consul’s key-value store; if unspecified, it defaults to 10.
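For instance, the number of server slots for the “api” backend could be raised to 20 with the Consul CLI (the key path follows the layout used by our template file):

```
consul kv put service/haproxy/backend/api/nbsrv 20
```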

Since the administrative services (“consul-dashboard”, “haproxystats”, and “consul”) are also registered in the Consul server, consul-template has generated a dedicated backend for them in the HAProxy configuration as well. In practice, however, no traffic for those services will pass through HAProxy, since administrators will connect to them directly.

Scaling Backend Servers

Now let’s say our service named “api” is receiving more and more traffic and we need to increase its processing capacity.

First, we will spawn two new API application servers. At startup, their local Consul agents will register the additional service nodes with Consul, as we can confirm in the Consul server GUI:


Local Consul agents register additional service nodes with Consul

After that, HAProxy will also discover the new service nodes through DNS and scale the backend servers for service “api”:


After adding the application servers to Consul, they also become visible in the HAProxy statistics page

And that’s all there is to it!

Conclusion

HAProxy 1.8 was released with many new features enabling fully dynamic behavior and allowing configuration changes to be applied at runtime without reloading. Leveraging those features in HAProxy, coupled with Consul and consul-template, allows for easily building flexible microservices architectures.

If you would like to build your microservices architecture based on HAProxy Enterprise and backed by professional support from HAProxy Technologies, please see our HAProxy Enterprise – Trial Version or contact us for expert advice.

Happy scaling and stay tuned!
