Service Discovery

In the current context of sharing APIs, it’s easy to think of service discovery as a developer finding an API service to use. For this chapter, that is more of a downstream event (pun intended) of service publishing. Instead, we’re going to focus on service discovery in the context of seeking out available and healthy endpoints for our API.

To achieve the levels of scalability that event-driven architecture offers, we naturally run many instances of our services across the unpredictable landscape of the cloud. So, when we expose one of our APIs as a single unified service, it's almost certain that it is backed by multiple instances of the same runtime across almost any combination of infrastructure configurations. How do we make sure that requests to our API are directed to a healthy service instance?
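Conceptually, this is the problem service discovery solves: maintain an up-to-date view of which instances exist and which are healthy, and only ever hand out healthy addresses. A minimal Python sketch of that idea follows; the registry snapshot, addresses, and health flags are illustrative, and in practice a system like Consul maintains this state for us via health checks.

```python
import itertools

# Hypothetical registry snapshot: each entry is (address, healthy?).
# A discovery system keeps this current by running periodic health checks.
instances = [
    ("10.5.0.11:8080", True),
    ("10.5.0.12:8080", False),  # failed its health check
    ("10.5.0.13:8080", True),
]

def healthy_round_robin(registry):
    """Yield only healthy instance addresses, cycling round-robin."""
    healthy = [addr for addr, ok in registry if ok]
    if not healthy:
        raise RuntimeError("no healthy instances available")
    yield from itertools.cycle(healthy)

picker = healthy_round_robin(instances)
targets = [next(picker) for _ in range(4)]
# The unhealthy instance is never selected; the two healthy ones alternate.
```

The key point is that callers never see the unhealthy instance at all; the selection logic filters it out before any load-balancing decision is made.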

It’s not quite the load-balancing...

This might appear to be a job for a load balancer. However, load balancers come with their own challenges.

First, they’re not designed for fast addition and removal of backend pool endpoints. This is a problem when we might have many different infrastructure components spinning up and shutting down instances of our services at a rapid pace.

Secondly, load balancers are not traditionally designed for high availability. They become a choke point for the service: if the load balancer fails, all services behind it fail.

Now, cloud platform load-balancing services are catching up on these problems, but only by adopting the same mechanics we'll explore in service discovery: they support rapid service registration and deregistration and are implemented as a peer-to-peer network of load-balancing services.

We’ll explore one of the leading solutions for service discovery: Consul by HashiCorp.

Exploring Consul

Consul is a service mesh and discovery platform that helps abstract the availability, reliability, and security aspects of how one service communicates with another. In a complete deployment, not only does it know where all instances of a service are and what their health status is, but it also handles the encrypted communication between them and can discover new services with no application code changes required.
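As a concrete illustration of registration, a service instance can be described to its local Consul agent with a service definition. The sketch below is hedged: the service name, port, and health-check URL are hypothetical values for an API instance, not taken from this chapter's configuration.

```json
{
  "service": {
    "name": "orders-api",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "2s"
    }
  }
}
```

Once registered, the agent polls the health-check endpoint on the stated interval, and only instances passing their checks are returned when the service name is resolved through Consul (for example, as `orders-api.service.consul` via its DNS interface).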

For the scope of this chapter, we want to focus only on service discovery. Our desired end state is a highly available Consul cluster that’s aware of all our API service instances and their health statuses and provides a way to address these services that abstracts any logic away from the calling application.

Setting up the Docker network

As we’re using Docker containers with docker-compose, there are some limitations to how Consul can operate. The Consul cluster will eventually act as a DNS server so that services can be addressed using a predictable hostname. Consul will manage the DNS responses so that traffic is balanced across service instances and only healthy instances are addressed. In a full production configuration, DNS binding and forwarding would be used to integrate the Consul DNS into the network setup. For the purposes of this chapter, Consul will be the only DNS used by our service instances. To support this, we must assign static IP addresses to all the containers within the docker-compose configuration.

At the end of the docker-compose.yml file, we define a virtual network and address space.
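A sketch of what that configuration might look like follows. The subnet, addresses, and service names here are illustrative assumptions, not the chapter's exact file; the essential pieces are the `ipam` subnet, the static `ipv4_address` per container, and the `dns` entry pointing service containers at the Consul agent.

```yaml
services:
  consul-server:
    image: hashicorp/consul
    networks:
      consul-net:
        ipv4_address: 10.5.0.2

  orders-api:
    build: .
    dns: 10.5.0.2          # resolve *.service.consul through the Consul agent
    networks:
      consul-net:
        ipv4_address: 10.5.0.11

networks:
  consul-net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
```

Pinning the Consul container to a known address is what makes the `dns:` setting on the other services workable, since docker-compose has no built-in way to reference another container's dynamically assigned IP in that field.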
