Assessing Architectural Structures and Paradigms
Explore how to establish architectural baselines for resilient event-driven microservices in .NET 7. Understand key patterns like event sourcing, producer-consumer, and CQRS, and how they integrate with components such as cloud-hosted services, IoT devices, and Kubernetes to build scalable distributed applications.
Establishing an architectural baseline helps drive decisions about how the application and its components will ultimately be implemented. It also provides an opportunity to evaluate different patterns and practices with the goal of selecting a path forward. This lesson covers the overall architectural design of the sample application and some core tenets that enable the creation and consumption of events.
A high-level logical architecture
The solution is predicated on the use of hardware interfaces (such as turnstile equipment) that can communicate with hosted services in the cloud over a standard network connection. A hardware gateway (such as a Raspberry Pi) hosts simple write-only services, which integrate with the relevant domain services to record turnstile usage, facial recognition hits, and possible malfunctions of the turnstile or camera. Any user interface can interact with a common API gateway layer, which allows for data exchange without needing to know the particulars of the available APIs. The backend runtime is managed by Kubernetes.
The following reference illustration shows the logical construction of the application:
The application uses the Producer-Consumer pattern to produce events, which are later consumed by components that need to know about them. We might also see this pattern referred to as Publish-Subscribe or pub-sub. The key point to take away from the use of this pattern is that any number of components can produce events containing relevant domain information, and any number of components can consume those events and act accordingly. We’ll dive into the producer-consumer pattern in much more detail in The Producer-Consumer Pattern chapter.
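The decoupling described above can be shown with a minimal, in-memory sketch. This is a language-agnostic Python illustration of the idea, not the .NET implementation; the `EventBus` class, topic name, and event shape are all assumptions made for the example.

```python
from collections import defaultdict


class EventBus:
    """Minimal in-memory publish-subscribe bus: any number of producers
    publish events to a topic; any number of consumers react to them."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Consumers register interest in a topic without knowing the producers.
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Producers emit events without knowing who (if anyone) is listening.
        for handler in self._subscribers[topic]:
            handler(event)


# Two independent consumers of the same turnstile events.
seen = []
bus = EventBus()
bus.subscribe("turnstile", lambda e: seen.append(("audit", e)))
bus.subscribe("turnstile", lambda e: seen.append(("billing", e)))
bus.publish("turnstile", {"device_id": "T-01", "action": "rotation"})
```

Note that neither consumer knows about the other, and the producer knows about neither; adding a third consumer requires no change to existing code, which is the essential property the pattern provides.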
In particular, there are two technology architecture specifications that we’ll be using. One is for the device board inside the turnstile unit, which hosts the Equipment domain service. The other is the layout of the cloud components, as mentioned in the reference architecture in the above illustration. The high-level flow between the turnstile device and the cloud components is as follows:
On the turnstile, after completing one turn, a message is sent to the equipment service indicating a completed rotation.
The equipment service will send an event to the IoT hub with the results of the turnstile action.
Using Kafka Connect, the message will be forwarded to Kafka, implemented within the Kubernetes cluster using the Confluent Platform.
The event will be written to the appropriate stream.
Any relevant event handlers will process the event.
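The five-step flow above can be condensed into a small sketch. This is a hypothetical Python outline of the data path, not the actual equipment service; the function names (`record_rotation`, `forward_to_stream`) and the list standing in for a Kafka topic are assumptions made for illustration.

```python
event_stream = []   # stands in for the Kafka topic the event lands in (step 4)
handlers = []       # registered event handlers (step 5)


def forward_to_stream(event):
    """Models the IoT hub -> Kafka Connect -> Kafka hop (steps 2-3)."""
    event_stream.append(event)      # event written to the appropriate stream
    for handler in handlers:        # relevant handlers process the event
        handler(event)


def record_rotation(device_id):
    """Equipment service: turns a completed turnstile rotation (step 1)
    into a domain event and forwards it toward the cloud."""
    event = {"type": "TurnstileRotationCompleted", "device_id": device_id}
    forward_to_stream(event)
    return event


processed = []
handlers.append(lambda e: processed.append(e["device_id"]))
record_rotation("T-01")
```

The point of the sketch is the shape of the flow: the device-side service only produces the event; everything downstream (stream storage, handler dispatch) happens without the producer's involvement.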
A more detailed illustration of the technology architecture can be seen below, where both the turnstile unit and the cloud components are represented.
Next, we’ll explore the design of the event sourcing technique.
Event sourcing
Event sourcing is a technique that allows an application to append data to a log or stream in order to capture a definitive list of changes related to an object. One benefit of event sourcing over traditional create, read, update, and delete (CRUD) methods against relational databases is performance: because writes are simple appends, the service avoids the update-in-place and locking overhead that CRUD operations incur. It also facilitates a separation of concerns and the single responsibility principle, as outlined by the SOLID principles.
Another benefit of event sourcing is its ability to achieve high message throughput while maintaining a high degree of resiliency. Technologies such as Kafka inherently support multiple message brokers and multiple partitions within topics. Running several brokers helps ensure that at least one is always available to communicate with, and splitting a topic into partitions lets multiple producers and consumers read and write data in parallel. For redundancy, Kafka replicates each partition across brokers according to the topic’s replication factor.
Event stores with streaming capabilities also let us inspect point-in-time data and replay events to aid in debugging. For example, if an event contains data that causes an error in the service code, we can go back to the point in time before the error was thrown and replay events to help identify the bug. Replay can also be used to perform “what if” testing. In some cases, normal use cases have related edge cases that could cause issues or introduce complexities the code was not originally designed for. “What if” testing allows us to go to a certain point in time and begin issuing new events that correlate to the edge case while monitoring application performance and potential failures.
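The append-and-replay idea can be demonstrated in a few lines. This is a minimal Python sketch under assumed names (`EventStore`, the event `type` strings); a real store such as Kafka would replay from a topic offset rather than a list index.

```python
class EventStore:
    """Append-only log; current state is always derived by replaying events."""

    def __init__(self):
        self.log = []

    def append(self, event):
        # Events are never updated or deleted, only appended.
        self.log.append(event)

    def replay(self, upto=None):
        """Rebuild the turnstile usage count from the log. Passing `upto`
        (an offset into the log) reconstructs state as of that point in
        time, which is what enables point-in-time debugging."""
        count = 0
        for event in self.log[:upto]:
            if event["type"] == "RotationCompleted":
                count += 1
        return count


store = EventStore()
store.append({"type": "RotationCompleted"})
store.append({"type": "CameraFault"})
store.append({"type": "RotationCompleted"})
```

Here `store.replay()` yields the current state, while `store.replay(upto=1)` yields the state as it was after only the first event, which is the mechanism behind both debugging replays and “what if” testing: rewind to an offset, then issue new events from there.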
Command-Query Responsibility Segregation (CQRS)
Command-Query Responsibility Segregation (CQRS) is a design pattern introduced by Greg Young that describes the logical and physical separation of concerns for reading and writing data. Normally, we’ll see specific functionality implemented that only writes to an event store (commands) or only reads from it (queries). This allows for the independent scaling of read and write operations depending on the needs of the application or of a presentation layer, whether that’s business intelligence software, such as Power BI, or web applications accessible from desktop and mobile clients.
Details about how CQRS impacts the design of the application’s domain services are covered in the next section. It’s important to note that having that distinct separation of concerns is vital to leverage the pattern effectively.
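To make the command/query split concrete before the next section, here is a minimal Python sketch. The class names, event shapes, and the plain list standing in for the event store are all assumptions for illustration; the point is only that the write side and read side are separate objects that could be deployed and scaled independently.

```python
class TurnstileWriteModel:
    """Command side: only appends events to the store; never serves reads."""

    def __init__(self, store):
        self._store = store

    def handle_rotation(self, device_id):
        self._store.append({"type": "RotationCompleted", "device": device_id})


class TurnstileReadModel:
    """Query side: a separately scalable projection built from the event
    stream; never writes to the store."""

    def __init__(self):
        self.counts = {}

    def apply(self, event):
        if event["type"] == "RotationCompleted":
            device = event["device"]
            self.counts[device] = self.counts.get(device, 0) + 1

    def usage(self, device_id):
        return self.counts.get(device_id, 0)


store = []  # stands in for the event store
writer = TurnstileWriteModel(store)
reader = TurnstileReadModel()

writer.handle_rotation("T-01")
writer.handle_rotation("T-01")
for event in store:  # the projection is kept up to date from the stream
    reader.apply(event)
```

Because the read model is just a projection of the event stream, we could run many read replicas against one write path (or vice versa), which is exactly the independent scaling the pattern promises.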