
Monday, November 15, 2021

The fundamental principles for using microservices for modernization

Over the last few years I have spent a lot of time building new applications on microservices and moving parts of monoliths to microservices. I have researched and shared my practical experience in several articles on this topic.

This week my second blog post, on some foundational principles of microservices, was published on the Capgemini website.

https://www.capgemini.com/se-en/2021/11/the-fundamental-principles-for-using-microservices-for-modernization/

Thursday, July 15, 2021

The Coevolution of API Centric and Event-based Architecture

When evaluating communication between different systems, there is always an argument of choosing between an API-first approach and an event-first approach. In a distributed ecosystem, it’s not one or the other, but the combination of both these strategies that can solve data transmission between one or more systems.

APIs are the de facto way of interacting for synchronous operations, that is, performing tasks one at a time in sequential order. When designing systems with a specific responsibility, APIs shield the underlying systems from being accessed directly and expose only the reusable data, ensuring no duplication of data happens elsewhere. When using simple APIs, all that is needed is a readable API structure and systems that follow a request/response pattern. APIs are beneficial in the case of real-time integration, where the requesting system needs information on demand.
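
As a minimal sketch of this request/response pattern (the order endpoint below is hypothetical), a synchronous API call in Python might look like this:

import requests

# Synchronous request/response: the caller blocks until the order
# service answers or the timeout fires.
def fetch_order(order_id: str) -> dict:
    response = requests.get(
        f"https://orders.example.com/api/orders/{order_id}",
        timeout=5,  # never wait forever on a synchronous call
    )
    response.raise_for_status()  # surface HTTP errors to the caller
    return response.json()

if __name__ == "__main__":
    print(fetch_order("1001"))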

However, designing and scaling APIs can also get intricate. In a high-transaction microservices architecture, throttling and caching of APIs are not simple, as APIs need to scale on demand. Also, in such integrations, an API gateway becomes necessary to keep the systems loosely coupled.
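
As one illustration of the caching concern (a deliberately naive sketch; production systems would use a gateway or a distributed cache), a TTL cache in front of an upstream API call could look like this:

import time

_cache: dict = {}          # key -> (expiry_timestamp, value)
CACHE_TTL_SECONDS = 30

def cached_fetch(key: str, fetch_fn):
    """Naive TTL cache: avoid hitting the upstream API on every request."""
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]          # still fresh: serve from cache
    value = fetch_fn()         # expired or missing: call the upstream API
    _cache[key] = (now + CACHE_TTL_SECONDS, value)
    return value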

The below example depicts a reporting system that creates different reports based on Customer, Order, and Catalog data. Each source system exposes an API. The reporting system fetches the data via these APIs and sends the information to the underlying destination systems.

API First Architecture

This architecture looks fine if there are no changes to the information from the source systems. But if the order information has properties that keep getting updated, then the Reporting system needs the capability to ensure that the changed state gets propagated to the subsequent systems.


Handling Cascading Failures

In a chain of systems that interact using APIs, handling errors or failures can become cumbersome. Similarly, if there are multiple dependent API calls between two systems, cascading failures become complex. The complexity further increases when systems need to react to dynamic state changes. This is where event-based architecture can help address some of these issues.
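
One common API-side mitigation (sketched here with illustrative parameters) is to bound every dependent call with a timeout and a limited, backed-off retry, so one slow downstream system does not stall the whole chain:

import time
import requests

def call_with_retry(url: str, retries: int = 3, backoff_s: float = 0.5) -> dict:
    """Bounded retry with exponential backoff for a dependent API call."""
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=2)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # give up; let the caller degrade gracefully
            time.sleep(backoff_s * (2 ** attempt))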

The basis of an event-based strategy is asynchronous communication. An intermediate system decouples the source and destination service interfaces. This strategy is apt for applications that need near-real-time communication and where scalability is a bottleneck.




With an event-based architecture, all the source system has to do is adhere to a contract and, on any state change, trigger a message to the intermediate broker system. One or more destination systems can subscribe to the broker system to receive messages on any state changes. Also, since the source system only triggers an event, the scalability of APIs is not an issue.

Event First Architecture
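
A minimal sketch of this publish/subscribe contract (using a tiny in-process stand-in for the broker; in practice this would be Kafka, RabbitMQ, or similar):

class Broker:
    """In-process stand-in for a message broker, just to show the contract."""

    def __init__(self):
        self._subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic: str, callback) -> None:
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic: str, event: dict) -> None:
        for callback in self._subscribers.get(topic, []):
            callback(event)

broker = Broker()

# Destination systems subscribe to state changes...
broker.subscribe("order.updated", lambda e: print("reporting got", e))
broker.subscribe("order.updated", lambda e: print("audit got", e))

# ...and the source system only has to honour the event contract.
broker.publish("order.updated", {"order_id": "1001", "status": "SHIPPED"})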


With a pure event-based architecture, as the number of messages increases, the architecture can get complicated. Tracking whether a message has been processed or not becomes tricky. In this case, every order needs to be tracked to its latest state, and error handling needs to be robust. Also, the entire process is slow, and there is high latency across the end-to-end systems.

Another way of simplifying the architecture is by combining the API and event designs. The below diagram illustrates the Reporting system interacting with the Order system using both APIs and events. The Order system sends the state-change notification to the broker. The Reporting system reads the state change and then triggers an API call to fetch the updated Order information. The Reporting system makes API calls to the Catalog and Customer systems to fetch the static data. It can further publish the resulting destination messages for consumption via the event broker.
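
A hedged sketch of that combined flow (the event shape and order endpoint are illustrative): the Reporting system receives a lightweight state-change event and then calls the Order API to fetch the full, current state.

import requests

def on_order_updated(event: dict) -> None:
    """Event handler: the broker says *that* an order changed..."""
    order_id = event["order_id"]
    # ...and the API call says *what* it looks like now.
    response = requests.get(
        f"https://orders.example.com/api/orders/{order_id}", timeout=5
    )
    response.raise_for_status()
    build_report(response.json())

def build_report(order: dict) -> None:
    # Hand the fresh state to the reporting logic.
    print("updating report for", order)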




In conclusion, both APIs and events have their pros and cons, and each solves a specific problem. They are not replacements for one another, and the architecture can be made less complex if they co-exist. In a modern microservices architecture, having both of them at hand can ease the complexities of distributed system interaction.

Friday, June 19, 2020

Fundamental Principles one needs to evaluate before breaking the monolith platform

          Below are some of the key principles that need to be evaluated when one starts to break out services from a monolithic platform.


1.    Target Core Services or Fringe Services First?

          Target functionality that doesn’t require changes to the end-customer application and, ideally, doesn’t need any core database migration or changes either. Building the CI/CD pipelines, the required alerting and monitoring systems, testing strategies, and version control for this first service makes subsequent services easier.

2.     Split Schema or Code First?

          If the core services are clear, then always split out the schema first and keep the services together before splitting the application code out into microservices. If the services are too coarse-grained, they will later be split into smaller services, creating another data migration. Also, two services accessing the same database results in tight coupling between those services.

3.    Moving out Services Vertically or Horizontally?

        Moving out services can happen either vertically or horizontally. Try to move out a single core service at a time by first moving the database, then the functionality, and then the front end. This technique avoids costly and repeated data migrations and makes it easier to adjust the service granularity when needed.

4.     Building Micro or Macro or Mini services?
       
          When creating a service, first identify the core services and define clear bounded contexts. Until the core services are clearly demarcated, start with a macro service. Once the demarcations are clear, it is easy to split it further into microservices.

5.     Outside in or Inside Out Creation of Services?
        
          The easiest way to create services is from the outside in, understanding how the various integrations need to talk to various applications. However, this can lead to data inconsistencies and data integrity issues. Designing services inside-out is more time-consuming but cleaner, with clearly defined boundaries for each service. If approached properly, this will reduce possible data integrity issues.

6.     Where to build New functionalities?

       Build any new functionality as new microservices, targeting services that are business-centric. Do not add a dependency on the monolithic platform. Ensure that the new services do not call the monolithic application directly and always access it via an anti-corruption layer (see the sketch after this list).

7.     Rewriting Code or Capability?
       
          When building new functionality, try to rewrite the capability and not the code. This may be time-consuming, but the monolithic platform already has a lot of redundant code. Rewriting the capability gives an opportunity to improve the granularity of the service, revisit business functionality, and maintain a clean codebase.

8.    Incremental or Radical updates?

          Target decoupling modules or services that reduce traffic toward the monolithic application; this will improve the performance of the application as well as help in decommissioning infrastructure and reducing cost (licenses).

9.   Versioning Services Incrementally or Concurrently?

          Having multiple versions of the same code leads to issues concerning maintainability and cost. But until the microservices and surrounding integrations have matured, maintaining multiple versions of a service endpoint at any given time reduces failure risk and the dependence on external systems (see the versioning sketch after this list).
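
Referring back to point 6, here is a minimal anti-corruption-layer sketch in Python (the legacy endpoint and field names are hypothetical): the new service talks only to a thin gateway that translates the monolith’s legacy shape into the service’s own domain model.

import requests

class MonolithOrderGateway:
    """Anti-corruption layer: the only place that knows the monolith's
    URL and its legacy field names."""

    def __init__(self, base_url: str = "https://legacy.example.com"):
        self.base_url = base_url

    def get_order(self, order_id: str) -> dict:
        raw = requests.get(
            f"{self.base_url}/legacyOrderService",
            params={"id": order_id},
            timeout=5,
        ).json()
        # Translate the legacy shape into the new service's domain model,
        # so legacy naming never leaks into the microservice.
        return {
            "order_id": raw["ORD_NO"],
            "status": raw["STAT_CD"],
            "total": raw["TOT_AMT"],
        }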

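And for point 9, a rough sketch of concurrent endpoint versions, assuming a small Flask service (routes and payloads are illustrative): v1 keeps existing consumers working while v2 rolls out, and v1 is retired once everyone has migrated.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/orders/<order_id>")
def get_order_v1(order_id):
    # Old contract, kept alive until all consumers have migrated.
    return jsonify({"id": order_id, "status": "SHIPPED"})

@app.route("/v2/orders/<order_id>")
def get_order_v2(order_id):
    # New contract with a richer status object, rolled out alongside v1.
    return jsonify({"id": order_id,
                    "status": {"code": "SHIPPED", "updatedAt": "2020-06-19"}})

if __name__ == "__main__":
    app.run(port=8080)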



Thursday, June 11, 2020

The myth of Sharing State when breaking large applications


One of the complex puzzles in a microservices journey is how and when to break up the database. When thinking about breaking up a legacy monolith application, the first low-risk thought that comes to mind is to decompose the platform module by module into standalone microservices using multiple ORMs that hit the same database.

For an application with limited tables and modules, this would be the simplest approach. If there is a firm partition between each microservice's data, with few dependencies, then it becomes fairly easy to adopt services and maintain one large database with several schemas.

However, legacy applications are seldom that portable, and sharing data or state is, to all intents and purposes, convoluted. Below are some of the typical concerns that need to be evaluated when building or maintaining applications with a single shared state.


Tight Coupling of Services

One of the key principles architects strive for is a loosely coupled application that can cater to future, unknown requirements. In data terms, that essentially means building new functionality using new ways of persisting state without impacting the existing application or state.

Most legacy applications have been built and maintained for years and have very tight coupling between out-of-the-box and custom modules and libraries. This results in huge state dependencies between modules. Any new requirement, whether to make a module event-driven or to introduce a non-SQL database to address certain quality attributes, is no easy task and requires a complete revamp of several services.

Weak Cohesion

The basic principle of building microservices is separation of concerns, i.e. each service, or group of services, has its own dedicated state.

Large legacy applications generally have one large database with several schemas. Each database schema is accessed by several services, so any change to logic that requires a database change impacts all the corresponding services. If a database table changes, all the related services have to change, and this creates huge dependencies between development teams, along with a huge sunk-cost fallacy.





Sunday, November 24, 2019

Changing Organization Perception in a Microservices journey - Series Part 6

A very core principle of building microservices is enabling teams to deliver at a constant, incremental pace rather than in a big-bang or spasmodic approach. Teams must be trained and prepared for the disruption of adopting agile methodologies and processes.
Microservices require teams to follow the core agile principle of working at a constant pace, which in turn enables them to deliver at a constant pace.

DevOps and DataOps culture needs to be embedded into teams. Teams must be enabled to have greater control over their environment and make multiple releases into production with a fail fast and fail forward approach.
Don’t get into a situation where certain teams are more agile than others. This can lead to a lot of slow down, especially with cross-team communication and integration.
Containers should be a standard part of any agile infrastructure, especially when working on legacy platforms that require specific infrastructure and lengthy installations. Build service containerization to take advantage of the flexibility, speed of deployment, configurability, and automation.
Do spend resources and time on the monolith and its improvement. Too often, teams concentrate on building new-age technology solutions and hardly spend time understanding the legacy system. Without understanding the monolith, it is foolhardy to even attempt breaking it.

Building Microservices by decreasing Entropy and increasing Negentropy - Series Part 5

A microservices journey is all about gradual overhaul: every time you make a change, you need to leave the system in a better state than, or at least the same state as, before.
Moving out services can happen either vertically or horizontally. Clearly nail down the horizontal services that need to be common overall. Then identify the vertical services; the best way is typically to migrate them by first moving the data, then the business logic, and later the front end.


Always target the first step of creating a macro service until the core services are demarcated. Once the demarcations are clear, it is easy to split it further into microservices.
In the beginning, teams have less operational maturity; during this phase, minimize the number of microservices and reduce the cognitive load.
The simplest services to begin with, in my experience, are read-only applications, especially the ones that are business-centric and change often. This helped win the business team's confidence that the teams could move faster and deliver those features rapidly.
The team that I was initially part of depended a lot on the monolith application. The services were deployed multiple times a day, whereas the monolith application was deployed once a week. The services should never have to wait for the monolith to be deployed.
Also, every time changes went into the monolith, we ensured there were feature flags. This gave the developers the leeway to revert and test changes in case all hell broke loose (see the sketch after these points).
Do not add a dependency to the monolithic platform. Ensure that the new services do not call the monolithic application directly and always access it via the anti-corruption layer.
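
As a minimal sketch of the feature-flag idea mentioned above (the flag store and pricing functions are hypothetical; real teams would use configuration or a feature-flag service), a risky change can be guarded so it is switched off without a redeploy:

# Hypothetical in-memory flag store; in practice this would be configuration,
# a database, or a feature-flag service.
FLAGS = {"use_new_pricing": False}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def calculate_price(order: dict) -> float:
    if is_enabled("use_new_pricing"):
        return new_pricing(order)    # new path; can be switched off instantly
    return legacy_pricing(order)     # safe fallback: the existing behaviour

def new_pricing(order: dict) -> float:
    return order["amount"] * 0.9     # illustrative new business rule

def legacy_pricing(order: dict) -> float:
    return order["amount"]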

Concluding Thoughts

A microservices journey is complex, and organizations have seldom been successful at it. If you have gone ahead and started that journey, do take intermittent checks on where you stand and correlate it to where you started from.
  • If after building services you are in a situation where all developers congregate to fix production issues
  • If teams require several developers and an umpteen number of days to fix issues
  • If your applications have several hours of downtime
  • If your services cannot be tested as a single entity
  • If the teams fear to make changes to the code when adding new features
  • If you are reverting code and releases instead of failing fast or failing forward
  • If you are building services that access the same database
  • If functionality is spread across multiple services and teams
  • If your applications take multiple teams and several people to deploy changes to production
  • If there are too many services depending on each other
  • If your teams are still writing and performing manual tests
  • If you are building services with hundreds of classes, each having hundreds of lines of code
  • If you have several services that have not been modified for several months

Maybe, you have ended up building another monolith.


Dividing and Conquering Microservices using Concern Based Modularization - Series Part 4

Modularization is another indispensable prerequisite when designing microservices. Without modules defined, I have seen teams building services in a wild goose chase, fulfilling the anti-patterns of complex data interoperability across distributed systems, tight coupling of services by maximizing dependencies, minimal composability and abstraction, and the list goes on…
Uncle Bob once famously said
“Gather together the things that change for the same reasons. Separate those things that change for different reasons”.
It’s essentially an abstraction of concerns of business capabilities by understanding and defining the exact core services or modules of an application landscape. It’s the same basic concept followed when designing any monolithic application.
Where teams fail to understand is that with an improper composition of core components or modules, all you are doing is shifting complexities into more gray areas. This is the whole reason the monolithic application ended up as a haphazardly structured application in the first place, otherwise famously described as a “Big Ball of Mud”.
“Deciphering the concern based modularization of an enterprise is an essential requisite to amalgamate the intricacies of the microservices system.”


In one of the microservices journeys, I was part of a team that wanted to jump-start creating services while defining their core modules in parallel. In doing so, they ended up with the same core source information in multiple systems, which resulted in pretty complex data interoperability across multiple services. Fixing things at that point is tedious and impacts several applications.
Another issue seen is the over-creation of core services, which leads to several unnecessary layers. It just adds complications to the application landscape, where every service depends on another without any clear realm of responsibility.
Get the core and sub-core concern components baselined at the earliest.
Domain-driven design (DDD) needs to be adopted to help choose domain boundaries or business contexts.
Applications need to be divided and conquered to identify organically sized chunks of these components.
Core services, by definition, are services that mainly focus on persistence and are purely data-centric. Each isolated core service will be the future master of its information and the discrete source of truth.
Once the core services are nailed down, detailing the peripheral and composite services becomes much easier.

Concorde Effect Fallacy Conundrum when building Microservices - Series Part 3

One major point of argument that I have come across in every team when building microservices is whether to use the existing code or rewrite it.
When building new functionality the majority of the time it makes sense to rewrite capability and not the code. This may be time-consuming to build, but the monolithic platform already has a lot of redundant code.

By rewriting capability it gives an opportunity to improve the granularity of the service, revisit business functionality, and maintain a clean codebase.
Business and IT teams always spend a lot of money on solutions that are deemed to be the right way. But many a time, the only reason these extravagant solutions keep running is that a lot of money is being spent on them.
In one of the teams I was part of, the business spent a huge amount of licensing money every year on a proprietary solution and was hesitant to change.
However, the software was pretty tedious, and building functionality with it was very slow. The team started with a POC and realized that in a few weeks they could build the same core functionalities with open-source software. They went ahead and did so, and it opened up a surfeit of opportunities for the business team to improve upon.
Be rational in decision making and avoid sunk-cost-fallacy situations.
Stay practical about what needs to be rewritten and what needs to be reused. Sometimes the latter may be the better choice, as not everything in the monolith is a throwaway.
In another instance, one of the teams was struggling to re-architect complex business functionality and was moving very slowly. The business knew that this was going beyond the scheduled budget and time and did not yield many benefits, yet struggled to decide to dismantle the team.
It’s never too late to pull the plug when building services. In situations where certain core functionalities don’t change much, just reuse them; maybe a good code review is all that is required.

Socio-Technical Approach to building Microservices - Series Part 2

Once the initial approach is selected, the next challenge is defining teams and managing the key hurdles of velocity and sizing within them.
A socio-technical framework is used more and more when building complex systems, especially those that deal with the principles of working in teams and agile environments where adaptability is key. Below are a few common questions that teams need to dwell on and ask themselves:
How big does a microservices application have to be? Should it be a micro, a macro, or even a mini service?
How do you keep context without swapping it out, referring to documentation, or human intervention?
How do you embrace changes in the application? How do you build software with a faster time to market?
How do you bring in a new person and have them developing without any wait time?
The concept of “cognitive load” is used quite often when sizing microservices. The concept is borrowed from psychology, where cognitive load refers to the information one keeps in one’s head without referring to something else. It’s a universal concept used for learning and for maintaining an optimum load on the working entity. It can be thought of as temporary working memory, like a cache or RAM.
“Producing maintainable software depends on readable information be it code, modules, or documents, and readable information depends on controlling Cognitive Load.”
Three types of cognitive load are defined:


  • Intrinsic load relates to the complexity of the information a team is paying attention to and processing. It covers the base skills and demands of the specific task or technology being grasped, and the difficulty of learning the task itself. In building microservices, this can be the initial load required to create an end-to-end service.
  • Extraneous load is completely unrelated to the learning task. These are all the negative distractions and additional effort imposed on the team or system.
  • Germane load is the mental processing effort that goes directly toward building schemas and long-term memory. In building software, it is the repeated declarative knowledge of how a thing plays into a more complicated system.
Intrinsic and germane load are considered “good”, and extraneous load is “bad”.
Within teams, always try to find and offload the extraneous loads that affect them.
The extraneous loads identified in the teams I was part of included environmental issues, complicated subsystems, unnecessarily complex business logic, complex code, misfit teams, team meetings, and so on.
Many a time, it could also be as simple as spending time on unnecessary documentation that no one reads, over-complicating the code, or even spending hours contemplating method and object names.
Too much Germane load makes learning a new task harder and slower.
Microservices have many moving parts at the same time, and the more moving parts there are to begin with, the more complicated it gets. Systematic tradeoffs need to be made by teams when building services. Teams have to communicate and learn from each other.
The Intrinsic load should reduce over time as new lower-level knowledge and experience are formed and documented.
Always try to complete that initial simple microservice end to end and deploy it to production with a database, CI/CD, alerting and monitoring systems, testing strategies, and version control in place. This makes subsequent services easier to carve out. After a few services, spinning up new ones becomes a cakewalk.
As the experience increases handling cognitive load gets more mature and achievable.
