Twitter, PayPal, and Netflix encountered profound challenges inherent in their monolithic architectures, grappling with scaling problems, quality issues, and frequent downtime. While monolithic systems are effective initially and can operate seamlessly with good coding practices, they can evolve into what is commonly referred to as 'monolith hell.' This phase arises when complexity grows to the point where implementing new features becomes increasingly time-consuming, resulting in a surge of bugs and user dissatisfaction.

The evolution of applications over time can turn them into burdens, especially amid changes in developer skill sets, business priorities, and user expectations. Such applications can devolve into chaotic 'spaghetti code,' hurting development speed, testing, and deployment efficiency. Addressing these challenges is where the move to a microservices architecture becomes pivotal.

This shift allows development teams to extract functionalities from monolithic applications, enabling a more focused approach to feature development and streamlined deployment schedules. Embracing a microservices architecture facilitates a faster pace of development while resolving issues associated with monolithic structures.

This post delves deeper into the advantages of adopting a microservices architecture while navigating the complexities of architectural change. It examines the stark differences between monolithic and microservices architectures, covering essential aspects such as microservices patterns, messaging, testing methodologies, deployment strategies, and the structural considerations associated with cross-cutting concerns within the architecture.

Benefits

For large applications suffering from “monolith hell,” there are several reasons to convert to a microservice architecture. Development teams can focus more on business processes, code quality, and deployment schedules. Microservices scale separately, allowing efficient use of infrastructure resources. When communication issues and other faults occur, isolation helps keep the system highly available. Lastly, with architectural boundaries defined and maintained, the system can adapt to change with greater ease. Each benefit is described in the following sections.

Team Autonomy

One of the primary advantages of a microservices architecture is the empowerment it provides through team autonomy. This approach divides the architectural concerns, enabling development teams to function independently of others. This autonomy allows teams to work at their own pace, accelerating time to market, a crucial factor in gaining a competitive edge.

One significant flexibility offered by microservices is the option, though not the requirement, to use different programming languages. Unlike monoliths, which usually mandate a single language for the entire codebase, microservices allow diverse languages tailored to specific tasks. For instance, Python is often preferred for data analytics, mobile and front-end development might leverage other languages, and C# might be used for back-end business logic.

Teams dedicated to microservices are accountable only for their respective services, focusing on their own code without needing extensive knowledge of other areas. Effective communication through API endpoints becomes crucial, detailing HTTP verbs, payload models, and return data structures. Utilizing an API specification such as the OpenAPI Initiative (https://www.openapis.org/) is highly recommended to structure APIs effectively.
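
To make the contract concrete, below is a minimal sketch of what such an endpoint might look like, assuming an ASP.NET Core minimal API with the Swashbuckle.AspNetCore package generating the OpenAPI document; the OrderRequest and OrderResponse models are hypothetical placeholders.

// A minimal sketch of a microservice API contract, assuming the ASP.NET Core
// web SDK (implicit usings) and the Swashbuckle.AspNetCore package for
// OpenAPI generation. The Order models are hypothetical examples.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();
app.UseSwagger();     // serves the OpenAPI document at /swagger/v1/swagger.json
app.UseSwaggerUI();   // interactive documentation for consuming teams

// The HTTP verb, payload model, and return structure are all part of the contract.
app.MapPost("/orders", (OrderRequest request) =>
{
    var response = new OrderResponse(Guid.NewGuid(), request.Sku, request.Quantity);
    return Results.Created($"/orders/{response.OrderId}", response);
});

app.MapGet("/orders/{orderId:guid}", (Guid orderId) =>
    Results.Ok(new OrderResponse(orderId, "SKU-EXAMPLE", 1)));

app.Run();

// Payload models shared only through the published contract, not shared code.
public record OrderRequest(string Sku, int Quantity);
public record OrderResponse(Guid OrderId, string Sku, int Quantity);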

Service Autonomy

Team autonomy focuses on the development teams and their responsibilities, while service autonomy emphasizes separating concerns at the service layer. Adhering to the 'Single Responsibility Principle,' each microservice should have a singular purpose, avoiding multiple reasons for change. For instance, an Order Management microservice should solely handle order-related processes without integrating unrelated business logic, like Account Management. This segregation enables microservices dedicated to distinct business processes, fostering their independent evolution.
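
As a rough illustration of that boundary, the sketch below shows a hypothetical Order Management surface that exposes only order-related operations. The interface and types are made up for the example; the point is simply what does not belong there.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical models owned by the Order Management microservice.
public record OrderLine(string Sku, int Quantity, decimal UnitPrice);
public record Order(Guid OrderId, Guid CustomerId, decimal Total);

// A single-responsibility service surface: order operations only.
public interface IOrderManagementService
{
    Task<Order> PlaceOrderAsync(Guid customerId, IReadOnlyList<OrderLine> lines);
    Task CancelOrderAsync(Guid orderId);
    Task<Order?> GetOrderAsync(Guid orderId);

    // No account creation, password resets, or other Account Management
    // operations belong here; those live in their own microservice.
}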

Furthermore, not all microservices exclusively handle business logic; many interact with others based on data processing requirements. Despite this inter-microservice communication, loose coupling keeps the code flexible and able to evolve. The benefits range from easier microservice upgrades with minimal impact on others to the independent evolution of features and business processes at varying paces.

The autonomy between microservices allows resilience and availability to be tailored to each service's specific needs. For instance, a credit card payment microservice might require higher availability than an account management microservice. Clients can apply distinct retry and error handling policies based on the services they are calling.

Service autonomy facilitates streamlined deployment of microservices through Continuous Integration/Continuous Deployment (CI/CD) tools like Azure DevOps, Jenkins, and CircleCI. Individual deployments allow frequent releases with minimal impact on other services, often with zero downtime. Configurable deployment strategies enable a seamless transition to updated versions without disrupting existing services.

Scalability

Scalability is a significant advantage of microservices: service instances can be scaled independently of the rest of the application. Monolithic setups often demand ever-larger servers, whereas microservices allow multiple instances spread across servers, which also aids fault isolation.

The microservice architecture allows for diverse server utilization, accommodating varied processing needs. Each microservice can allocate resources according to specific requirements—CPU, RAM, or in-memory processing capabilities. Furthermore, hosting microservices on separate servers from the monolith fosters diversity in programming languages. For example, if the monolith operates on .NET Framework, microservices can be developed in different languages, potentially cutting costs by running on Linux, avoiding operating system license fees.

Fault Isolation

Fault isolation is about handling failures without them taking down an entire system. When a monolith instance goes down, all services in that instance also go down. There is no isolation of services when failures occur. Several things can cause failure:

  • Coding or data issues
  • Extreme CPU and RAM utilization
  • Network
  • Server hardware
  • Downstream systems

With a microservice architecture, services with any of the preceding conditions will not take down other parts of the system. Think of this as a logical grouping. In one group are services and dependent systems that pertain to a business function. The functionality is separate from those in another group. If a failure occurs in one group, the effects do not spread to another group.

As with any application that relies on remote processing, opportunities for failures are always present. When microservices either restart or are upgraded, any existing connections will be cut. Always consider microservices ephemeral. They will die and need to be restarted at some point. This may be from prolonged CPU or RAM usage exceeding a threshold. Orchestrators like Kubernetes will “evict” a pod that contains an instance of the microservice in those conditions. This is a self-preservation mechanism, so a runaway condition does not take down the server/node.

Expecting a microservice to have 100% (or even 99.999%) uptime is an unreasonable goal. If a monolithic application or another microservice is calling a microservice, then retry policies must be in place to handle the absence or disappearance of that microservice. This is no different from a monolithic application connecting to SQL Server. It is the responsibility of the calling code to handle the various associated exceptions and react accordingly.

Retry policies combined with the circuit breaker pattern help tremendously in handling issues when calling microservices. Libraries such as Polly (http://www.thepollyproject.org) provide circuit breakers, retry policies, and other resilience policies. These allow calling code to react to connection issues by retrying with progressive wait periods, then using an alternative code path if calls to the microservice fail a certain number of times.
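
The sketch below shows one way this might look using Polly's v7-style syntax, wrapping a retry policy around a circuit breaker. The endpoint URL, retry counts, and fallback behavior are illustrative assumptions rather than recommendations.

using System;
using System.Net.Http;
using Polly;
using Polly.CircuitBreaker;

// A minimal sketch: a retry policy with progressive waits wrapped around a
// circuit breaker. The URL and thresholds below are hypothetical.
var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(
        retryCount: 3,
        // Progressive (exponential) waits between attempts: 2s, 4s, 8s.
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var circuitBreakerPolicy = Policy
    .Handle<HttpRequestException>()
    // After 5 consecutive failures, stop calling the service for 30 seconds.
    .CircuitBreakerAsync(exceptionsAllowedBeforeBreaking: 5, durationOfBreak: TimeSpan.FromSeconds(30));

// The retry wraps the circuit breaker, so each failed attempt also counts
// toward tripping the circuit.
var resiliencePolicy = Policy.WrapAsync(retryPolicy, circuitBreakerPolicy);

using var client = new HttpClient();
try
{
    var response = await resiliencePolicy.ExecuteAsync(
        () => client.GetAsync("https://orders.internal.example/api/orders/123"));
    Console.WriteLine($"Status: {response.StatusCode}");
}
catch (BrokenCircuitException)
{
    // Alternative code path once the circuit is open, e.g. cached data or a
    // graceful "try again later" response.
    Console.WriteLine("Order service unavailable; using fallback path.");
}
catch (HttpRequestException ex)
{
    // Retries exhausted before the circuit opened; log and surface the failure.
    Console.WriteLine($"Call failed after retries: {ex.Message}");
}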

Data Autonomy

Up to this point, the reasons favoring a microservice architecture have mostly focused on business processes. However, the significance of data cannot be overstated. Monolithic applications, as described earlier, rely heavily on a data store, where data integrity is paramount to any business's sustained operation. The hypothetical scenario of a bank 'guessing' your account balance highlights how critical data integrity is.

Microservices, characterized by loose coupling, enable independent deployment of changes, often including alterations to the data schema. Yet, challenges arise when one team's schema change impacts others, necessitating backward-compatible changes. Moreover, data isolation within each microservice allows for independent modifications with minimal interference, promoting faster business deployment.

Starting a new feature in a dedicated microservice with a fresh data store is easy to implement and accelerates the path to production. Additionally, employing separate databases offers the advantage of utilizing diverse data store technologies. This strategy allows a mix of relational databases such as SQL Server and non-relational databases like MongoDB, Azure Cosmos DB, and Azure Table Storage, ensuring the right tool is used for each specific task.
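
As a rough sketch of that idea, the example below shows two hypothetical services, each owning its own store: an Orders service persisting to a relational database and a Catalog service persisting to MongoDB. The connection strings, table and collection names, and model shapes are assumptions for illustration only.

using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;   // relational store for the hypothetical Orders service
using MongoDB.Driver;             // document store for the hypothetical Catalog service

// Each microservice owns its data and picks the technology that fits its workload.
public record Order(Guid OrderId, decimal Total);

public class Product
{
    public string Sku { get; set; } = "";
    public string Name { get; set; } = "";
}

// Orders service: transactional workload, relational database (e.g., SQL Server).
public class SqlOrderRepository
{
    private readonly string _connectionString;
    public SqlOrderRepository(string connectionString) => _connectionString = connectionString;

    public async Task SaveAsync(Order order)
    {
        using var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync();
        using var command = new SqlCommand(
            "INSERT INTO Orders (OrderId, Total) VALUES (@id, @total)", connection);
        command.Parameters.AddWithValue("@id", order.OrderId);
        command.Parameters.AddWithValue("@total", order.Total);
        await command.ExecuteNonQueryAsync();
    }
}

// Catalog service: flexible documents, non-relational database (e.g., MongoDB).
public class MongoProductRepository
{
    private readonly IMongoCollection<Product> _products;
    public MongoProductRepository(string connectionString) =>
        _products = new MongoClient(connectionString)
            .GetDatabase("catalog")
            .GetCollection<Product>("products");

    public Task SaveAsync(Product product) => _products.InsertOneAsync(product);
}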

Challenges to Consider

Transitioning to a microservice architecture involves complexities beyond monolithic systems, demanding a flexible approach that allows for failure. Implementing even a small microservice may require multiple iterations and significant refactoring of the monolith to facilitate the shift. The process demands a new mindset regarding architectural considerations, such as development time costs and infrastructure adjustments impacting networks and servers.

Coming from a monolithic background, interfacing with a new microservice requires more than a simple method call; it involves network-based communication and often a messaging broker. Application and team size also matter: small applications, or large applications with limited teams, might not immediately reap the benefits of a microservice architecture. The true advantages emerge when the complexity of the monolith has become overwhelming.

Despite the temptation to host monolithic applications on additional servers, this stopgap solution usually proves short-lived, echoing the challenges faced by companies like PayPal and Twitter. Embracing microservices might also face resistance from developers due to a steep learning curve and the feeling of managing two projects concurrently. Balancing quality with meeting production goals is crucial, as shortcuts contribute to code fragility and technical debt, potentially delaying success.

Teams developing microservices encounter challenges beyond coding proficiency. Understanding distributed system design and learning new communication patterns between microservices are pivotal. Cross-cutting concerns affect every developer, who must understand, and potentially help build, the components that report on the system's health. Testing the entire system, not just individual microservices, becomes integral and may require additional time in the development cycle.

Microservice Beginnings

Before microservices, interactions between a primary system and its associated systems created challenges, requiring the primary system, typically the main application, to understand the specific communication intricacies of each connected system. This close-knit arrangement, termed a 'tightly coupled' architecture, demanded deep knowledge of how each connected system stored information, what services it provided, and how it communicated. Any change in these connected systems required significant alterations, prompting the emergence of Service-Oriented Architecture (SOA) to alleviate these complexities. SOA established a standardized communication approach, minimizing the interdependencies between systems.

The introduction of the Enterprise Service Bus (ESB) in 2002 facilitated system communication by implementing a 'Publish/Subscribe' model, enabling systems to selectively engage or ignore broadcasted messages. An ESB managed aspects like security, routing, and assured message delivery.

In contrast, the shift to microservices addresses scalability concerns by enabling independent scaling for each service. Transitioning from ESB to protocols like HTTP empowers endpoints to make informed communication decisions, following the 'Smart Endpoints, Dumb Pipes' principle, as articulated by Martin Fowler.

The recent surge in microservices' attention can be attributed to cost-effective infrastructure, improved computing power, and the rise of accessible programming languages and platforms. The present landscape offers powerful computational capabilities, economical network resources, and a variety of programming languages suitable for microservices. Platforms like Kubernetes, Service Fabric, and others adeptly manage microservices within Docker containers, further catalyzing their adoption.

However, despite this cost-effectiveness and the availability of supportive technologies, developing quality software remains a significant challenge. The convenience of creating microservices might mislead developers into believing the process is straightforward, yet the reality demands substantial time, skill, and meticulous attention to ensure quality.
