Tasks and Swarm’s Scaling Model

Learn how Swarm acts as an orchestrator.

Swarm’s self-regulation

Services running on Swarm are self-regulating. We define the desired state for a service in terms of the number of containers that should run for it, and Swarm acts to ensure that this state is achieved and maintained.

This self-regulation is key to how scaling is implemented, and it is also the basis of Swarm's self-healing properties.
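To make this concrete, here is a minimal sketch using the Docker CLI. The service name (web), image (nginx), and replica count are placeholders of our own, and we create a standalone service here rather than a stack purely for illustration:

```
# Declare the desired state: three replicas of a web service.
docker service create --name web --replicas 3 nginx

# Simulate a failure by forcibly removing one of the service's containers.
docker rm -f "$(docker ps -q --filter name=web | head -n 1)"

# Moments later, Swarm reconciles actual state with desired state:
# listing the service's tasks shows a replacement being started.
docker service ps web
```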

Swarm as orchestrator

Swarm is made up of several components; one of these is the orchestrator. Once we have told Swarm to deploy a service, the orchestrator is responsible for determining whether the service is in its desired state and, if not, for taking action to correct this.

If the orchestrator sees that an additional container is needed, it creates a task, which represents the desire for a container to exist. It then checks which nodes are available in the cluster and allocates the task to one of them.
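Tasks are objects we can inspect directly. Assuming we have a task ID copied from the output of docker service ps (the ID here is a placeholder), we can view the orchestrator's allocation decision:

```
# Inspect a task by its ID; the JSON output includes DesiredState
# and NodeID fields, recording where the task was allocated.
docker inspect <task-id>
```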

In our single-node cluster, the orchestrator has no choice but to allocate all of the tasks to our only node. However, you can imagine that in a multi-node setup, tasks could be allocated evenly across the nodes in the cluster.
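We can see the pool of nodes the orchestrator has to choose from with the command below; in our single-node cluster, it will show just one entry:

```
# List the nodes in the Swarm (must be run on a manager node).
docker node ls
```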

Nodes in the cluster check in with the orchestrator to learn which tasks they have been allocated. They are then responsible for creating one container per task, thus achieving the desired state of the system.

An example should make this clearer. When our web service was deployed, the number of replica containers was not specified, so the default of one was assumed.

The orchestrator sees that we want one web service container running, but that, initially, none are. It creates a task for the web service and assigns it to our node. The node sees that it has been allocated a web service task, and it launches a web container, meeting our desired state.
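Once the node has launched the container, we can confirm that the actual state matches the desired state. Assuming the service is named web (a placeholder), docker service ls reports replicas as running/desired, so a healthy service shows 1/1:

```
# The REPLICAS column shows running/desired counts, e.g. "web  1/1".
docker service ls
```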

We can list the tasks in a stack with the following command:
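Assuming our stack was deployed under the name example (a placeholder for the actual stack name):

```
# List all tasks in a stack, one row per task; the output shows the
# service each task belongs to, the node it was allocated to, and its
# desired versus current state.
docker stack ps example
```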
