Tuesday, December 17, 2019

Product Oriented Development Methodology

In several organizations, the focus of project execution has recently shifted from project-specific deliveries to product-specific deliveries. This requires an organization-wide change so that every internal and external project runs on Agile ways of working.

What this essentially means is that every project needs to be viewed through an MVP (Minimum Viable Product) approach. This requires:


  1. Changes in the organization and team structures

Teams are defined around specific goals. Teams are asked to think more in terms of product innovation and to deliver specific features to evolve the product. There is no one-time delivery system, and the business teams are aligned directly with these teams to incorporate faster feedback into the product roadmaps. Each team has a dedicated product owner who helps create the specific goals required by the business teams.

  2. Changes to the ways of working

Different stakeholders (both IT and business) meet every few weeks and define the product goals. Teams are more cross-functional so that they can be moved around once the specific goal is achieved. From the business perspective, funding for these products is incremental and allocated quarter by quarter, rather than estimated as long-term projects. If an idea is not flying, it can be terminated at any point in time. Teams are strengthened with an Agile and DevOps culture. The structure is flatter: individual teams mostly interact with an Agile coach, a product owner (PO), and a PMO in the hierarchy.

  3. Product Goal Definition

The product goal definition is based on the lifecycle of an idea from inception to production, with the idea split into a week-by-week roadmap. Goals are created around objectives tied to measurable parameters, which can be real-time customer behavior or any quantifiable metric. Each team presents the product's progress in an incremental demo every second week. Several of the goals are also based on market competition, and the main idea is faster velocity.

Sunday, December 15, 2019

Simple way to create automatic alerts for your application using Slack API

A simple Spring Boot application for Slack notifications

Slack has become one of the most widely used channels for communication between teams in many organizations. It has an open API that can be used to send alerts and event notifications to different channels and stakeholders.

Below is a simple Spring Boot application that uses a JPA repository to fetch database contents and communicates with Slack via the JSlack API.
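A minimal sketch of that flow, assuming an incoming-webhook URL and illustrative entity and repository names (Alert, AlertRepository); the complete application is in the GitHub repository linked below:

```java
// Minimal sketch (not the exact repo code): a scheduled job reads unsent alerts
// through a Spring Data JPA repository and pushes each one to Slack via JSlack's
// incoming-webhook API. WEBHOOK_URL, Alert, and AlertRepository are illustrative.
import java.io.IOException;
import com.github.seratch.jslack.Slack;
import com.github.seratch.jslack.api.webhook.Payload;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component // requires @EnableScheduling on the application class
public class SlackNotifier {

    private static final String WEBHOOK_URL =
            "https://hooks.slack.com/services/XXX/YYY/ZZZ"; // placeholder URL

    private final AlertRepository alertRepository; // hypothetical JPA repository

    public SlackNotifier(AlertRepository alertRepository) {
        this.alertRepository = alertRepository;
    }

    @Scheduled(fixedDelay = 60_000) // poll the database every minute
    public void notifyNewAlerts() {
        for (Alert alert : alertRepository.findByNotifiedFalse()) {
            try {
                Payload payload = Payload.builder()
                        .channel("#alerts")
                        .text("Alert: " + alert.getMessage())
                        .build();
                Slack.getInstance().send(WEBHOOK_URL, payload); // HTTP POST to Slack
                alert.setNotified(true);
                alertRepository.save(alert);
            } catch (IOException e) {
                // leave the alert unsent; it will be retried on the next poll
            }
        }
    }
}
```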



The Spring Boot application is available at the following path on GitHub:

https://github.com/shailendrabhatt/spring-boot-slack-notification

Saturday, December 7, 2019

Quick 5 Point Website Inspection for Peak Holiday Traffic

The holiday season is around the corner, and with code freezes in place, teams are wary of risking any major production changes that could hamper the most critical business time of the year.
Below are five quick non-code checks that can help prepare the website for holiday traffic and sales.

1) Examine the Infrastructure Sizing

If on-premise servers are running critical applications, recalculate the approximate capacity, or how much traffic these applications can hold.
This is based on different data points, including the number of requests per second, average application-server response times, and the instances and their CPU details (cores, threads, sockets, etc.). All the required information can typically be gathered from analytics or monitoring tools.

Calculating the number of cores

To do this, all that is needed is to load the CPU information (lscpu) and view the values for threads per core, cores per socket, and the number of sockets.
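As a worked example (expanded in the capacity-planning post further down): a VM where lscpu reports 1 socket, 6 cores per socket, and 1 thread per core contributes 6 * 1 * 1 = 6 cores, and four such VMs give 4 * 6 = 24 cores in total.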

Calculating the maximum load or throughput of the system

The next step is to calculate the average number of requests to the application servers. This can again be calculated from the monitoring tool's data. The information needs to be fetched for the peak (or expected peak) traffic of the application or website.

Calculating the average response time

The next value that needs to be calculated is the average response time of the application server. This is also available from any monitoring tool, or an expected average value in seconds can be used.
Now, with the required information in place, the number of cores can be calculated using the formula:
Number of cores = Requests per second * average response time in seconds
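As a quick worked example (the same numbers are used in the capacity-planning post further down): at roughly 66.66 requests per second and a 0.25-second average response time, 66.66 * 0.25 ≈ 16.7, so approximately 17 cores are needed at peak.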

2) Tweak the Enterprise Search Relevancy

Research articles show that browse and search pages are both the most visited pages and the pages where most customers are lost. Websites spend a lot of money and time on external search engine optimization, yet one key feature that is often overlooked is the relevancy ranking of products in the site's own browse and search.
Below are some tips that can help boost the most relevant products to the top and bury the least preferred products on the site.

Prioritization of Attributes

Attribute precedence is key to relevancy. There needs to be a clear definition of the data associated with these attributes. Special consideration has to be given to data such as category name, product name, descriptions, and other keywords, as these attributes form the basis of the search engine index. Custom attributes can be created as required and added to the prioritization list.

Monitoring Search Results

Use analytics to monitor user behavior on the search and browse results. Information such as the most clicked products and categories, the most popular searches for a product or category, error results, zero-result searches, searches served with auto spell correction, and page behavior clearly indicates how relevant the displayed results are to the user. This data forms the basis for boosting the required results and suppressing irrelevant ones.

Enterprise Search Engines

Core search functionalities such as synonyms, thesaurus, stemming, spell correction, and "Did you mean?" play a vital role in displaying relevant results. For sites with multiple locales, update the custom stemming files with the top key search words. It is also essential to keep building the dictionary as new data sets are added.

3) Keep a Close Vigil on the Browser Load times

Page load times matter a lot during peak traffic. While application health is one aspect of evaluation, end-customer load times also need to be closely monitored.
Below are a few areas that should be evaluated so that browser load times are not hampered.

Evaluate the Caching TTL

Retrieving static content over the public network is not cheap: the larger the file, the more bandwidth it consumes, the higher the cost, and the slower the response. Caching static content in the browser plays a critical role in reducing those server calls, resulting in faster performance and quicker page loads.
Ensure HTTP caching is enabled, which lets servers direct the browser cache to serve content for a longer duration.
If a CDN is used, re-evaluate the TTL for static content, especially images.
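As a concrete illustration, long-lived cache headers for static assets can be set in a Spring Boot application; a minimal sketch with illustrative paths and max-age values:

```java
// Sketch: instruct browsers (and CDNs) to cache static assets for 30 days.
// The path pattern and max-age are illustrative; tune them per asset type.
import java.util.concurrent.TimeUnit;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.CacheControl;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class StaticCacheConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**")
                .addResourceLocations("classpath:/static/")
                // emits "Cache-Control: max-age=2592000, public" on responses
                .setCacheControl(CacheControl.maxAge(30, TimeUnit.DAYS).cachePublic());
    }
}
```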

Avoid those Content Publishes

Every content push to the live site requires clearing caches. Avoid the risk of content pushes during peak times, and if one is required, ensure that only the cache for that specific content is cleared.

Avoid those External Scripts Calls

Review all the third-party integrations the website references, especially those loaded in the browser. During peak traffic hours, every external script is a point of vulnerability that can drag down browser load times and, in some unfortunate cases, the website as well.

4) Tune Database Performance

Evaluate relational database performance to prevent long-running queries. Below are a few basic tips that can help optimize the database quickly.

Evaluate the Database Statistics

One critical aspect of tuning a database is collecting and understanding its statistics. While monitoring tools give some information on the most time-consuming queries, the statistics collected from the database server give detailed information about specific tables, index distribution, database sizing, and so on.

Optimize Indexes

It is vital to have the right insight into the indexes on tables; this is the most common issue found when dealing with an RDBMS. It is essential to understand the fine line between no indexes and too many, along with indexing strategies for inserting, reading, and updating data.
Most modern databases provide information about columns that should be indexed, their priority order, and missing or incorrect indexes, and also offer suggestions for improving them.
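As one hedged illustration in a Spring Boot/JPA application, a frequently filtered column can be declared as an index on the entity itself; table and column names here are illustrative:

```java
// Sketch: declare an index on a column frequently used in WHERE clauses.
// Table and column names are illustrative; verify against real query patterns
// first, since every extra index slows down inserts and updates.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Table;

@Entity
@Table(name = "orders",
       indexes = @Index(name = "idx_orders_customer", columnList = "customer_id"))
public class Order {

    @Id
    private Long id;

    @Column(name = "customer_id")
    private Long customerId; // frequently filtered: a good index candidate

    @Column(name = "status")
    private String status;
}
```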

5) Network Optimization

The cost of serving a website over the enterprise network is often underestimated. Responses within the internal organization network require many round trips between the enterprise internet gateway, firewalls, load balancers, security gateways, and the web and application servers placed in different data centers. Below are some suggestions that can help optimize the internal network.
  • Remove additional firewall rules, which can gain a few milliseconds per request.
  • Keep the internal network free from excessive internal traffic.
  • Identify infrastructure with a single point of failure and monitor it closely.
  • Look out for bots and hostile IP addresses clogging the network bandwidth.

Thursday, December 5, 2019

How to develop a quick load test for your microservice application

Ensuring the performance and scalability of your applications is crucial. Load testing is a key practice for identifying bottlenecks and optimizing your Spring Boot microservices. This article walks through developing a quick load test for a Spring Boot microservice using Apache JMeter.

Load Test Setup:

Follow the below steps to set up a quick load test:

Step 1: Install JMeter Plugins (Optional)
If you haven't installed the JMeter Plugins Manager, do so by following the instructions on the official website: https://jmeter-plugins.org/wiki/PluginsManager/

Once installed, open JMeter, go to "Options," and select "Plugins Manager" to install the necessary plugins for your load test.

Step 2: Create a New JMeter Test Plan
Open JMeter and create a new Test Plan:

Right-click on the Test Plan node.
Choose "Add" > "Threads (Users)" > "Thread Group."
Set the number of threads (virtual users) and loop count according to your testing requirements.

Step 3: Add HTTP Request
Right-click on the Thread Group.
Choose "Add" > "Sampler" > "HTTP Request."
Configure the HTTP Request sampler with your microservice's endpoint details.
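For reference, a minimal Spring Boot endpoint that the HTTP Request sampler could target might look like the following (host, port, and path are illustrative):

```java
// A trivial endpoint to load-test; point the JMeter HTTP Request sampler at
// http://localhost:8080/api/ping (host, port, and path are illustrative).
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class PingController {

    @GetMapping("/ping")
    public String ping() {
        return "pong";
    }
}
```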

Step 4: Add Listeners for Monitoring
To analyze the test results, add listeners:

Right-click on the Thread Group.
Choose "Add" > "Listener."
Add listeners such as "View Results Tree," "Summary Report," and "Response Times Over Time."

Step 5: Configure Timers (Optional)
To simulate realistic user behavior, add timers:

Right-click on the Thread Group.
Choose "Add" > "Timer."
Configure timers like "Constant Timer" or "Gaussian Random Timer" to introduce delays between requests.

Step 6: Run the Load Test
Save your test plan and run the load test:

Click on the "Run" menu and choose "Start" or use the green "Play" button.
Monitor the results in real-time using the listeners you added. Pay attention to response times, throughput, and error rates.

Step 7: Analyze and Optimize
After the test completes, analyze the results to identify performance bottlenecks. Optimize your microservice accordingly and iterate the load testing process.

Conclusion:

Load testing with JMeter is a powerful way to ensure the performance and scalability of your Spring Boot microservices. By following these steps, you can quickly set up a load test, identify potential issues, and fine-tune your microservices for optimal performance in production. Remember to conduct load tests regularly to catch performance issues early in the development lifecycle.

Sunday, November 24, 2019

Changing Organization Perception in a Microservices journey - Series Part 6

A core principle of building microservices is enabling teams to deliver at a constant, incremental pace rather than in a big-bang or spasmodic approach. Teams must be trained and prepared for the disruption that comes with adopting agile methodologies and processes.
Microservices require teams to follow the core agile principle of working at a constant pace, which in turn enables them to deliver at a constant pace.

A DevOps and DataOps culture needs to be embedded into the teams. Teams must be given greater control over their environment so they can make multiple releases into production with a fail-fast, fail-forward approach.
Don't get into a situation where certain teams are more agile than others. This can lead to a lot of slowdown, especially in cross-team communication and integration.
Containers should be a standard part of any agile infrastructure, especially when working on legacy platforms that require specific infrastructure and lengthy installations. Use service containerization to take advantage of flexibility, speed of deployment, configurability, and automation.
Do spend resources and time on the monolith and its improvement. Too often, teams concentrate solely on building new-age technology solutions and hardly spend time understanding the legacy system. Without understanding the monolith, it is foolhardy to even attempt breaking it.

Building Microservices by decreasing Entropy and increasing Negentropy - Series Part 5

A microservices journey is all about gradual overhaul: every time you make a change, you need to leave the system in a better state, or at least the state it was in before.
Moving services out can happen either vertically or horizontally. Clearly nail down the horizontal services that need to be common across the board. For the vertical services, the best approach is typically to migrate the data first, then the business logic, and later the front end.


As a first step, always target creating a macro service until the core services are demarcated. Once the demarcations are clear, it is easy to split further into microservices.
In the beginning, teams have less operational maturity; during this phase, minimize the number of microservices and reduce the cognitive load.
In my experience, the simplest services to begin with are read-only applications, especially business-centric ones that change often. This earned the business team's confidence that the teams could move faster and deliver features rapidly.
The team I was initially part of depended heavily on the monolith application. The services were deployed multiple times a day, whereas the monolith was deployed once a week; the services should never have to wait for the monolith to be deployed.
Also, every change that went into the monolith was put behind a feature flag. This gave the developers the leeway to revert and test changes if hell broke loose.
Do not add dependencies on the monolithic platform. Ensure that the new services never call the monolithic application directly and always access it via an anti-corruption layer, as sketched below.
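A minimal sketch of such an anti-corruption layer, with all type names hypothetical: the new service depends only on its own model and port, and a single adapter translates the monolith's legacy representation:

```java
// All type names here are hypothetical. Only the adapter knows the legacy
// shape, so the monolith's model never leaks into the new service.

// Stand-ins for the monolith's API
class LegacyCustomerRecord {
    final String custNo;
    final String emailAddr;

    LegacyCustomerRecord(String custNo, String emailAddr) {
        this.custNo = custNo;
        this.emailAddr = emailAddr;
    }
}

class LegacyCrmClient { // wraps the actual calls into the monolith
    LegacyCustomerRecord fetchCustomerRecord(String id) {
        return new LegacyCustomerRecord(id, "jane@example.com"); // canned data
    }
}

// The new service's own model and port
class Customer {
    final String id;
    final String email;

    Customer(String id, String email) {
        this.id = id;
        this.email = email;
    }
}

interface CustomerProvider {
    Customer findById(String id);
}

// The anti-corruption layer: translates the legacy shape into the new model
class LegacyCustomerAdapter implements CustomerProvider {
    private final LegacyCrmClient legacy;

    LegacyCustomerAdapter(LegacyCrmClient legacy) {
        this.legacy = legacy;
    }

    @Override
    public Customer findById(String id) {
        LegacyCustomerRecord rec = legacy.fetchCustomerRecord(id);
        return new Customer(rec.custNo, rec.emailAddr);
    }
}
```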

Concluding Thoughts

A microservices journey is complex, and organizations have seldom been successful at it. If you have gone ahead and started that journey, take intermittent checks on where you stand and correlate it to where you started from:
  • If, after building services, all developers still congregate to fix production issues
  • If teams require several developers and an umpteen number of days to fix issues
  • If your applications have several hours of downtime
  • If your services cannot be tested as single entities
  • If teams fear making changes to the code when adding new features
  • If you are reverting code and releases instead of failing fast or failing forward
  • If you are building services that access the same database
  • If the same functionality is spread across multiple services and teams
  • If your applications take multiple teams and several people to deploy changes to production
  • If there are too many services depending on each other
  • If your teams are still writing and performing manual tests
  • If you are building services with hundreds of classes having hundreds of lines of code
  • If you have several services that have not been modified for several months

Maybe, you have ended up building another monolith.


Dividing and Conquering Microservices using Concern Based Modularization - Series Part 4

Modularization is another indispensable prerequisite when designing microservices. Without modules defined, I have seen teams build services in a wild goose chase, fulfilling the anti-patterns of complex data interoperability across distributed systems, tight coupling of services through maximized dependencies, minimal composability and abstraction, and the list goes on…
Uncle Bob once famously said
“Gather together the things that change for the same reasons. Separate those things that change for different reasons”.
It is essentially an abstraction of the concerns of business capabilities, achieved by understanding and defining the exact core services or modules of an application landscape. It's the same basic concept followed when designing any monolithic application.
What teams fail to understand is that with an improper composition of core components or modules, all you are doing is shifting complexity into more gray areas. This is the whole reason why the monolithic application ended up as a haphazardly structured application in the first place, otherwise famously described as a "Big Ball of Mud".
“Deciphering the concern-based modularization of an enterprise is an essential requisite to amalgamate the intricacies of a microservices system.”


In one microservices journey, I was part of a team that wanted to jump-start creating services and define their core modules in parallel. In doing so, they ended up with the same core source information in multiple systems, which resulted in pretty complex data interoperability across multiple services. Fixing things at that point is tedious and impacts several applications.
Another issue is the over-creation of core services, which is a mess: it leads to several unnecessary layers and adds complication to the application landscape, where every service depends on another without any clear realm of responsibility.
Get the core and sub-core concern components baselined at the earliest.
Domain-driven design (DDD) should be adopted to help choose domain boundaries or business contexts.
Applications need to be divided and conquered to identify organically sized chunks of these components.
Core services, by definition, mainly focus on persistence and are purely data-centric. Each isolated core service will be the future master of its information and the discrete source of truth.
Once the core services are nailed down, detailing the peripheral and composite services becomes much easier.

Concorde Effect Fallacy Conundrum when building Microservices - Series Part 3

One major point of argument I have come across in every team building microservices is: use the existing code, or rewrite it?
When building new functionality, the majority of the time it makes sense to rewrite the capability and not the code. This may be time-consuming, but the monolithic platform already has a lot of redundant code.

Rewriting a capability gives an opportunity to improve the granularity of the service, revisit the business functionality, and maintain a clean codebase.
Business and IT teams always spend a lot of money on solutions that are deemed to be the right way. But many a time, the only reason these extravagant solutions keep running is that a lot of money is being spent on them.
In one of the teams I was part of, the business spent a huge amount on licensing every year for a proprietary solution and was hesitant to change.
However, the software was pretty tedious, and building functionality on it was very slow. The team started with a POC and realized that in a few weeks they could build the same core functionality with open-source software. They went ahead and did so, and it opened up a surfeit of opportunities for the business team to improve upon.
Be rational in decision-making and avoid sunk-cost-fallacy situations.
Stay practical about what needs to be rewritten and what needs to be reused. Sometimes the latter is the better choice, as not everything in the monolith is a throwaway.
In another instance, one of the teams was struggling to re-architect complex business functionality and moving very slowly. The business knew this was going beyond the scheduled budget and time and did not yield many benefits, but they struggled with the decision to dismantle the team.
It's never too late to pull the plug when building services. In situations where certain core functionalities don't change much, just reuse them; maybe a good code review is all that is required.

Socio-Technical Approach to building Microservices - Series Part 2

Once the initial approach is selected, the next challenge is defining teams and managing the key challenges of attaining velocity and sizing within teams.
A socio-technical framework is increasingly used when building complex systems, especially those that deal with the principles of working in teams and agile environments where adaptability is key. Below are a few common questions that teams need to dwell upon and ask themselves:
How big does a microservice application have to be? Should it be a micro, macro, or even a mini service?
How do we keep context in our heads without swapping out, referring to documentation, or human intervention?
How do we embrace changes in the application? How do we build software with a faster time to market?
How do we bring in a new person who can develop without any wait time?
The concept of "cognitive load" is being used quite often when sizing microservices. The concept is borrowed from psychology, where cognitive load refers to the information one keeps in one's head without referring to something else. It is a universal concept used in learning to help maintain an optimum load on the working entity; it can be thought of as temporary working memory, like a cache or RAM.
“Producing maintainable software depends on readable information be it code, modules, or documents, and readable information depends on controlling Cognitive Load.”
There are three types of cognitive load:


  • Intrinsic load relates to the complexity of the information a team is paying attention to and processing: the base skills and demands of the specific task or technology being grasped, and the difficulty of learning the task itself. In building microservices, this can be the initial load required to create an end-to-end service.
  • Extraneous load is completely unrelated to the learning task: all the negative distractions and additional effort imposed on the team or system.
  • Germane load is the mental processing effort that goes directly toward building schemas and long-term memory. In building software, it is the repeated, declarative knowledge of how a component plays into a more complicated system.
Intrinsic and germane load are considered "good", and extraneous load is "bad".
Within teams, always try to find and offload the extraneous loads that affect them.
The extraneous loads identified in the teams I was part of included environmental issues, complicated subsystems, unnecessarily complex business logic, complex code, misfit teams, team meetings, and so on.
Many a time, it can be as simple as spending time on unnecessary documentation that no one reads, over-complicating the code, or even spending hours contemplating method and object names.
Too much germane load makes learning a new task harder and slower.
Microservices have a handful of moving parts at any given time, and the more moving parts there are to begin with, the more complicated it gets. Systematic trade-offs need to be made by teams when building services, and teams have to communicate and learn from each other.
The intrinsic load should reduce over time as new lower-level knowledge and experience are formed and documented.
Always try to complete that initial simple microservice end to end and deploy it to production with a database, CI/CD, alerting and monitoring, testing strategies, and version control in place. This makes the subsequent services easier to carve out; after a few services, spinning up new ones becomes a cakewalk.
As experience increases, handling cognitive load becomes more mature and achievable.

Unraveling the Plethora of Gaffes in a Microservices Odyssey - Series Part 1


Microservices have, in the last few years, become a hackneyed concept and the modus operandi for building applications to drive digital transformations. Adopting a modern Hydra with many tentacles to replace the legacy one-eyed Cyclops seems to be what most organizations strive for when strategizing modernization, cloud adoption journeys, or Agile and DevOps ways of working.
After spending a few years learning and understanding a platform revamp that moved away from a bulky monolithic commerce platform to a service-based architecture, I have learned the myriad ways in which IT organizations bungle such a journey and at times come full circle.
There is no direct method to this madness, but there certainly are best practices, lessons from earlier blunders, and ways to circumvent hidden traps and pitfalls.

The Onset Enigma

The inception of a microservices journey is typically to combat the imponderable conundrums pertaining to “slow time to market”, “complex and heavily customized applications”, “testing nightmares”, “scalability and stability issues”, “velocity”, “technology binding”, “adopting modern stacks”, “continuous integration and continuous delivery”, “cost factors”, “locking vendor licensing models”, and the list goes on.
Microservices approaches differ from organization to organization, and there is no ready-made panacea. I have time and again seen teams perform a mere replication of these approaches and incur insurmountable technical debt as a result.
Before embarking on a microservices journey, it is essential to comprehend what issues one is trying to deal with and where one needs to start.
A lot of initial time has to be spent understanding the “complexities of the existing monolith”, “core issues with the legacy”, “technology forecast”, “business limitations”, “future business requirements”, “organization processes”, “strengths and weaknesses”, “team sizes”, “operational readiness”, “ways of working”, “cultural standpoint”, “integration dependencies”, and several other factors before choosing an approach.
It is pretty common to see modern IT teams yearn to re-architect existing monolithic applications and move away from the mundane system to a more service-oriented modern stack and technology.
But before even attempting the journey, teams need to evaluate whether it is worth the endeavor. Typical questions teams need to ask themselves:
  • Maybe the release cycles of the monolith are so infrequent that just leaving it alone is the best solution?
  • Maybe the monolith is of a specific size and estimated not to grow further?
  • Maybe the organization is not ready yet to begin the journey from an operational, technology, process, or even cultural point of view?
  • Maybe there are way too many features in the monolith that are more crucial from a business and cost standpoint?
  • Maybe the monolith has a short lifespan and is getting replaced?
  • Maybe the organization is not yet agile or has not yet adopted DevOps, which is pivotal for a microservices journey?
  • Maybe breaking the monolith is way too complex, and it is easier to rewrite the code from scratch using new software?
  • Maybe building new features is a higher priority than breaking out new services?


Wednesday, August 7, 2019

Sizing an Application

Capacity planning for a web application.

Recently, we observed that the website I have been part of was running at roughly constant average response times across different peak loads. By this time, we had moved a lot of the functionality to different applications on the cloud, where scalability is auto-managed. This made us revisit the number of instances we were running in production on our on-premise servers: were all of these servers required for the peak load, and could we save some money and maintenance on the infrastructure by reducing the number of servers?

This required recalculating the capacity plan for the different environments. Calculating the approximate capacity, or how much traffic an application can hold, is based on different data points.

These data points include the number of requests per second, average application-server response times, the number of instances, and their CPU details (cores, threads, sockets, etc.). All the required information can typically be gathered from analytics tools like Google Analytics or Adobe Analytics, or monitoring tools like New Relic or Dynatrace.

Calculating the number of cores

All we need to do is load the CPU information (lscpu) and view the values for threads per core, cores per socket, and the number of sockets. In the case below, the number of cores = 6 * 1 * 1 = 6.

This value is for a specific instance or virtual machine; the total number of cores is calculated by adding up the cores of all the virtual machines. For example, if there are 4 virtual machines, the total number of cores in the infrastructure, based on the above calculation, is 4 * 6 = 24.



Calculating the maximum load or throughput of the system

The next step is to calculate the average number of requests to the application servers. This can be calculated from the monitoring tool's data, fetched for the peak (or expected peak) traffic of the application or website. For example, if the peak throughput for an application is, say, 4,000 requests per minute, then the value in RPS (requests per second) is 4000/60 = 66.66.


Calculating the average response time


The next value that needs to be calculated is the average response time of the application server. This is also available from any monitoring tool, or an expected average value in seconds can be used. For example, assume 250 ms to be the average app-server response time.



Now, with the required information in place, the number of cores can be calculated using the formula

Number of cores = Requests per second * average response time in seconds

For example, the number of cores for peak traffic = 0.250 seconds * 66.66 requests per second = 16.67 cores (approximately 17 cores).
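To make the arithmetic explicit, a throwaway snippet using the example numbers above:

```java
// Back-of-the-envelope estimate: cores ≈ requests per second × average
// response time in seconds, using the example numbers from this post.
public class CapacityEstimate {

    public static void main(String[] args) {
        double requestsPerMinute = 4000;    // peak throughput from the monitoring tool
        double avgResponseSeconds = 0.250;  // average app-server response time
        double requestsPerSecond = requestsPerMinute / 60;           // ≈ 66.66 RPS
        double coresNeeded = requestsPerSecond * avgResponseSeconds; // ≈ 16.67
        System.out.printf("Cores needed at peak: %.2f (round up to %.0f)%n",
                coresNeeded, Math.ceil(coresNeeded));
    }
}
```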
