Friday, September 3, 2021

The advent of Observability Driven Development

A distributed application landscape with high-cardinality data makes it difficult for dedicated operations teams to monitor system behavior through dashboards or react promptly to alerts and notifications. In a microservices architecture with many moving parts, detecting failures becomes cumbersome, and developers searching for the root cause of an error are often left looking for a needle in a haystack.

What is Observability?

Observability is more than a quality attribute and sits one level above monitoring. The focus is on cultivating ways of working within development teams so that solving system issues starts from a holistic, data-driven mindset.

An observability thought process enables development teams to embed monitoring concerns right from the earliest stages of development and testing.

Observability in a DevSecOps ecosystem

Many organizations are adopting a DevSecOps culture, and it has become essential for development teams to be self-reliant and take a proactive approach to identifying, healing, and preventing system faults. DevOps focuses on giving development teams the ability to make rapid decisions and more control over infrastructure assets. Observability builds on this by empowering development teams to be far quicker at pinpointing and diagnosing system faults.

Furthermore, modern ways of working with Agile, Test Driven Development, and automation enable development teams to get deep insights into the parts of the system that are prone to failure.

Observability on Cloud platforms

Applications deployed on the cloud provide development teams with a myriad of out-of-the-box system measurements. Developers can gauge and derive the quality attributes of a system even before the code goes into production. Cloud services make it easy to collate metrics, diagnostics, logs, and traces for analysis, and all of it is available to developers on demand. AI-based automated diagnostics, combined with real-time data, give developers deep insight into their system's semantics and characteristics.
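As a rough illustration, a Spring Boot service using Micrometer (assuming it is on the classpath) might emit custom metrics alongside the business logic; the service, metric, and tag names below are made up for the sketch:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

// Hypothetical checkout service emitting custom metrics next to the business logic.
@Service
public class CheckoutService {

    private final MeterRegistry registry;

    public CheckoutService(MeterRegistry registry) {
        this.registry = registry;
    }

    public void placeOrder(String market) {
        Timer.Sample sample = Timer.start(registry);
        try {
            // ... business logic ...
            registry.counter("orders.placed", "market", market).increment();
        } finally {
            // Record how long the operation took, tagged by market.
            sample.stop(registry.timer("orders.latency", "market", market));
        }
    }
}

The metrics then flow into whatever backend the cloud platform or monitoring tool provides, so the signals are in place long before the code reaches production.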

Conclusion

Observability is an open-ended process of inculcating modern development principles to increase the reliability of complex distributed systems. An observability mindset helps organizations resolve production issues quickly and reduces the dependency on, and cost of, manual operations. It also helps development teams build dependable systems that give end customers a seamless user experience.

Thursday, July 15, 2021

The Coevolution of API Centric and Event-based Architecture

When evaluating communication between different systems, there is always a debate about choosing between an API-first approach and an event-first approach. In a distributed ecosystem, it's not one or the other; it is the combination of both strategies that solves data transmission across systems.

APIs are the de facto way of interacting for synchronous operations, that is, performing tasks one at a time in sequential order. When designing systems with a specific responsibility, APIs shield the underlying systems from being accessed directly and expose only the reusable data, thus ensuring no duplication of data happens elsewhere. When using simple APIs, all that is needed is a readable API structure and systems that follow a request and response pattern. APIs are beneficial in the case of real-time integration, where the requesting system needs the information immediately.

However, designing and scaling APIs can also get intricate. In a high-transaction microservices architecture, throttling and caching of APIs are not simple, as the APIs need to scale on demand. Also, in such integrations, an API gateway becomes necessary to keep the systems loosely coupled.

The below example depicts a reporting system that creates different reports based on Customer, Order, and Catalog data. Each source system exposes an API; the reporting system fetches the data via these APIs and sends the information on to the underlying destination systems.

API First Architecture

This architecture looks fine if there are no changes to the information from the source systems. But if the order information has properties that keep getting updated, the Reporting system needs the capability to ensure that the changed state gets propagated to the subsequent systems.


Handling Cascading Failures

In a chain of systems that interact using APIs, handling errors or failures can also become cumbersome. Similarly, if there are multiple dependent API calls between two systems, cascading failures become complex to handle. The complexity increases further when systems need to react to dynamic state changes. This is where event-based architecture can help address some of the issues.
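One way to contain such cascading failures is to wrap each dependent API call in a circuit breaker. A rough sketch using Resilience4j (the client interface and order payload below are hypothetical, purely for illustration):

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

import java.time.Duration;
import java.util.function.Supplier;

public class OrderApiCaller {

    // Hypothetical downstream client and payload, just for the sketch.
    interface OrderClient { Order getOrder(String orderId); }
    record Order(String id, String status) {}

    private final CircuitBreaker circuitBreaker = CircuitBreaker.of("orderApi",
            CircuitBreakerConfig.custom()
                    .failureRateThreshold(50)                        // open the breaker after 50% failures
                    .waitDurationInOpenState(Duration.ofSeconds(30)) // give the downstream API time to recover
                    .build());

    public Order fetchOrder(OrderClient orderClient, String orderId) {
        // When the breaker is open, this fails fast instead of letting the
        // failure cascade up through every dependent API call.
        Supplier<Order> decorated =
                CircuitBreaker.decorateSupplier(circuitBreaker, () -> orderClient.getOrder(orderId));
        return decorated.get();
    }
}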

The basis of an event-based strategy is asynchronous communication. An intermediate system decouples the source and destination service interfaces. This strategy is apt for applications that need near real-time communication and where scalability is a bottleneck.

With an event-based architecture, all the source system has to do is adhere to a contract and, on any state change, publish a message to the intermediate broker system. One or more destination systems can subscribe to the broker to receive messages on state changes. Also, since the source system only triggers events, API scalability is no longer a concern.
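As a rough sketch of the source side, assuming Kafka as the broker and a hypothetical order-events topic, the Order system could publish its state changes like this with Spring Kafka:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical publisher in the Order system: on any state change it emits an
// event to the broker instead of being called synchronously by consumers.
@Component
public class OrderEventPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishStateChange(String orderId, String newStatus) {
        // Key by orderId so all events for one order land on the same partition, preserving order.
        kafkaTemplate.send("order-events", orderId,
                "{\"orderId\":\"" + orderId + "\",\"status\":\"" + newStatus + "\"}");
    }
}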

Event First Architecture


With a pure event-based architecture, as the number of messages grows, the architecture can get complicated. Tracking whether a given message has been processed or not becomes tricky. In this case, every order needs to be tracked against its latest state, and error handling needs to be robust. The end-to-end flow is also slower, with considerable latency between the systems.

Another way of simplifying the architecture is to combine the API and event designs. The below diagram illustrates the Reporting system interacting with the Order system using both APIs and events. The Order system sends the state change notification to the broker. The Reporting system reads the state change and then triggers an API call to fetch the updated Order information. The Reporting system makes API calls to the Catalog and Customer systems to fetch the static data. It can then push the messages it creates for the destination systems onto the event broker for them to consume.
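A sketch of the Reporting side of this combined approach might look like the following, assuming the same hypothetical order-events topic and an assumed Order API endpoint: the event only signals that something changed, and the API call fetches the full order.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Hypothetical consumer in the Reporting system combining events and APIs:
// the thin event says "order X changed", the API returns the full record.
@Component
public class OrderChangeListener {

    private final RestTemplate restTemplate = new RestTemplate();

    @KafkaListener(topics = "order-events", groupId = "reporting")
    public void onOrderChanged(String event) {
        String orderId = extractOrderId(event);            // parse the thin event payload
        OrderDto order = restTemplate.getForObject(
                "https://orders.internal/api/orders/{id}", // assumed Order API endpoint
                OrderDto.class, orderId);
        // ... update the report with the latest order state from 'order' ...
    }

    private String extractOrderId(String event) {
        // Simplified parsing for the sketch; a real consumer would use a JSON mapper.
        return event.replaceAll(".*\"orderId\":\"([^\"]+)\".*", "$1");
    }

    record OrderDto(String id, String status) {}           // hypothetical payload
}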

In conclusion, both APIs and events have their pros and cons, and each solves a specific problem. They are not replacements for one another, and an architecture can be made less complex when they co-exist. In a modern microservices architecture, having both at hand helps ease the complexities of distributed system interaction.

Monday, July 5, 2021

Driving Digital Transformation using Sustainable Software Development


Over the last decade or so, Digital Transformation has become a well-known strategy in various organizations. Businesses across every domain are reviving their traditional operations to adapt to a more modern digital marketplace.

But in the last few years, sustainable development has become one of the essential mainstays of a successful digital transformation journey. The Covid pandemic has also pushed organizations across different domains to rethink and emphasize environmental factors, climate change, and human well-being in order to attract consumers.

Embracing a cloud-first model is one of the critical constituents of both the digital transformation and sustainability journeys. More and more organizations are speeding up their cloud computing journeys and investing in modern SaaS/PaaS services, thus reducing environmental impact and eliminating major infrastructure expenses. Organizations need to be wary and invest wisely in sustainable software-building methodologies so that software implementations and cloud migrations proceed smoothly.

Organizations that strive to be data-driven have a better ability to monitor operations and analyze system behavior accurately. Real-time analysis of information results in better usage of devices and improves the defined sustainability characteristics. Companies that invest in AI/ML can deliver a substantial benefit to sustainability. Reliable prediction in the digital realm can bridge gaps in system information interchange, drive towards zero wastage, improve storage and distribution mechanisms, and enable eco-friendly products, better delivery methods, reusable infrastructure, and so on. All of this directly helps in reducing environmental consequences.

In conclusion, the principles of building next-generation digital software and of sustainable development go hand in hand. In the modern agile world, both journeys share the common goal of not jeopardizing the ability to meet future needs. This applies to systems as much as it does to human well-being. Adaptable working methods such as Extreme Programming, Agile, Lean, and Kanban help teams strive for rapid, focused execution. These ways of working improve communication between distributed teams and systems, their collaboration, their usage, and their velocity. All of this indirectly contributes to energy-efficient software development.

Sunday, May 23, 2021

Tips for preparing for the AWS Solutions Architect Professional Exam

I recently cleared my AWS Solutions Architect Professional Exam with a total of 948/1000 and thoroughly enjoyed preparing for it. I spent a total of 6 months preparing, in spite of the fact that I got 1000/1000 in the Associate Architect exam last year.

The exam itself is really tough. It not only evaluates one's knowledge of and experience with AWS; one also has to strategize for reading lengthy questions, time each question, and be prepared to sit continuously for 190 minutes to finish the exam.

Below are some of the learnings and tips I can share so that one can make good use of them and benefit while studying for the exam. Preparation for the exam can be divided into basically 3 phases.

Phase 1 Preparation

To start with, the exam requires considerable experience on the platform; I would say at least 2 years of hands-on experience with core AWS services. I would definitely recommend passing the Associate exam first, as the Professional one is far tougher.

a) Plan on taking a course and stick to it. Select a course with a good rating on popular training sites like Udemy, Coursera, or Udacity. Try out different courses for a few days and choose one where you are comfortable with the language and flow. The basic content of all the highly rated courses is more or less the same. Also, choose a course that has practical samples on the topics you are not comfortable with or have not worked on.

b) Plan and book the exam date. Choose a date somewhere between 2 and 3 months out. AWS allows you to reschedule a booked exam twice.

c) Create a personal AWS account to practice with, as the exam covers far too many services, many of which one may not have implemented in day-to-day professional work.

d) The exam is not theoretical and requires broad experience with the services. There are several real-world, scenario-based questions, and there are multiple ways to solve a specific problem. Read through a lot of use cases from different organizations, especially the ones from the latest AWS re:Invent.

Phase 2 Preparation

In this phase, get deeper into the course and use the points below to structure your study and get to know the services better.

a) AWS adds new services very frequently, and one has to be well versed in every service that exists, especially the new ones. AWS lists all the latest services in the whitepaper below.

AWS overview - https://d1.awsstatic.com/whitepapers/aws-overview.pdf 



b) Each area has several services that can perform the same task. Try to analyze which services are the best fit when considering non-functional requirements such as cost optimization, scalability, performance, durability, automation, availability, reliability, and security.

For example, S3 buckets are the most durable and cost-optimized option for storage, but when it comes to performance, EBS/EFS is better. Another example is databases: DynamoDB gives near real-time performance but has limited data-model support, while Aurora works well for multi-region relational databases but does not scale as elastically as DynamoDB.

c) Try to understand what combination of all of the services is the best fit for requirements.

For example, how to migrate on-premises systems and data to the cloud: it could be using a physical device such as Snowball, the Server Migration Service, or the Database Migration Service, or transforming and moving content using AWS Transform, AWS DataSync, or Storage Gateway.

d) Start attempting practice tests and get a feel for the exam's complexity. Slowly improve your ability to attempt more and more questions against a stopwatch.

Phase 3 Preparation

In this phase, ensure that you have gone through the course, have a very good hold on the fundamentals of all the areas, and are well versed in all the services.

a) It is very difficult to master each and every service in depth. So, it is absolutely ok if one knows just the basics of certain services.  

b) During this phase ensure you are at ease writing practice tests and are able to attempt 45-50 questions in a single sitting.  

c) By now your accuracy should have improved, and so should your reading speed. When attempting questions, you should be more confident about eliminating the wrong options.

d) By this time you will know whether you are confident and have a good hold on the exam material. If time is not a barrier, and depending on your comfort level, consider postponing the exam by a week or two and pushing yourself to keep preparing. The extra time lets you revise multiple times and improves your chances of clearing the exam.

Sunday, November 29, 2020

Extracting running data out of NRC/Nike+ (Nike Run Club) using APIs

For the past few weeks, I have been struggling to see my running kilometers getting updated in the Nike+ app. It could be a bug or a weird feature of the app, and since this was kind of demotivating, I decided to go ahead and create my own dashboard to calculate the results. Also, for some reason, Nike discontinued viewing and editing activities on the web.

Considering I had about 8 years of data, and you never know when apps like this stop existing or switch to a paid model, it's always better to persist your data to a source you control and, if required, feed it into any other application. I also went ahead and uploaded my data to Under Armour's "MapMyFitness" app, which has much better open API documentation.

It turns out that the NRC app captures a lot of additional information that is typically not shown in the mobile app. Some of it includes:

  1. Total steps during the workout, including a detailed split between intervals
  2. Weather details during the workout
  3. The amount of time the workout was paused for
  4. Location details, including latitude and longitude information that can help you plot your own map

Coming to the API part, I could not get hold of any official Nike documentation, but I came across some older blogs, such as https://gist.github.com/niw/858c1ecaef89858893681e46db63db66, which mentioned a few API endpoints for fetching historic activities. I ended up creating a Spring Boot application that fetches the activities and stores them in CSV format in my Google Drive.

The code can be downloaded here ->  https://github.com/shailendrabhatt/Nike-run-stats

The code also includes a Postman collection that can be used to fetch one's activities. Just update the {{access_token}} and run the GET requests.

While the blog that documented the API was good enough, here are a few tips that can be helpful:

  • Fetching the authorization token can be tricky, and it has an expiry time. You will need an account at https://www.nike.com/se/en/nrc-app and can then grab the authorization token from the XHR request headers for requests going to api.nike.com. There are a few requests hitting this URL, and the token can be fetched from any of them.
  • The API described in the link shows details for after_time; one can also fetch before_time information (a minimal call example follows after this list):
/sport/v3/me/activities/after_time/${time}
/sport/v3/me/activities/before_time/${time}
  • Pagination can be easily achieved using before_id and after_id. These ids come in different formats, ranging from GUIDs to single-digit numbers, which can be confusing.
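As a rough illustration of the endpoints above, a plain Java HttpClient call might look like this; the access token is whatever you copied from the browser request headers, and the timestamp value is just a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NikeActivitiesFetcher {

    public static void main(String[] args) throws Exception {
        String accessToken = System.getenv("NIKE_ACCESS_TOKEN"); // token copied from the browser request headers
        long afterTime = 0L;                                     // epoch millis; 0 fetches from the beginning

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.nike.com/sport/v3/me/activities/after_time/" + afterTime))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body()); // JSON with activities, paging ids, metrics, etc.
    }
}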

Thursday, October 1, 2020

Building Composite Architectures

Recently, after Gartner highlighted "Composite Architecture" or "Composable Architecture" in a report as one of the five emerging trends in innovation and technology for the next 10 years, I started coming across this topic in various technical forums.

"Composability" as such is not a new topic, as we have used it frequently in object-oriented programming to achieve polymorphism. In software architecture terms, it is defined as combining software systems to produce a new system. In other words, it is directly connected to the goals of agility and reusability, and the whole crux of it is responding to a changing business landscape.

Domain-Driven Design to build Composable Application

If we take a step back and look at the way a simple application is created using domain-driven design and an onion architecture, the orchestration layer plays a pivotal role in making the application composable by interacting directly with the repository or service layers.

The orchestration layer can be a webhooks API, a data importer, an API controller, a messaging service, or a simple REST or SOAP request handler.

This kind of atomic structure, done properly, results in a system that can change its external integrations seamlessly and keep up with the changing business landscape.
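A minimal sketch of what such an orchestration layer can look like, with hypothetical names: any adapter (REST controller, webhook, importer, message listener) talks to a small use-case interface, and the implementation composes the domain pieces, so the core never depends on the integration.

// Hypothetical inbound port: any adapter (REST controller, webhook,
// message listener, data importer) calls this interface rather than the domain directly.
public interface PlaceOrderUseCase {
    void placeOrder(String customerId, String productId);
}

// The orchestration layer composes domain services and repositories,
// so swapping an external integration does not touch the core.
class PlaceOrderService implements PlaceOrderUseCase {

    private final OrderRepository orderRepository;   // hypothetical repository abstraction
    private final PricingService pricingService;     // hypothetical domain service

    PlaceOrderService(OrderRepository orderRepository, PricingService pricingService) {
        this.orderRepository = orderRepository;
        this.pricingService = pricingService;
    }

    @Override
    public void placeOrder(String customerId, String productId) {
        var price = pricingService.priceFor(productId);
        orderRepository.save(customerId, productId, price);
    }

    interface OrderRepository { void save(String customerId, String productId, double price); }
    interface PricingService { double priceFor(String productId); }
}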

Atomic Architecture

If we take the earlier example and apply it in a larger context, the below visualization depicts a circular relationship between different layers in a typical business domain.


Here the applications are interconnected in an atomic way, making it easier for the organization to plug systems in and out of the landscape. With the advent of native SaaS-based platforms, this type of "composable architecture" is becoming more and more noticeable.

Elements of Composable Architecture

The basic building blocks of a composable system are still founded on containerization, microservices, cloud, APIs, headless architecture, and so on.

Conclusion

With a Composable mindset, organizations can uplift isolated business operating models and move towards a more practical loosely coupled technology landscape where systems can be plugged in and out flexibly.

This kind of model perfectly fits with organizations adopting agile ways of working or building modern omnichannel integrations with different types of native Cloud-based SaaS platforms.

This model can also be applied to bridge gaps across the entire ecosystem of legacy and modern applications, including unified experience, operations, transformation, infrastructure, and external and internal system integrations.

Friday, June 19, 2020

10 Fundamental Principles to evaluate before breaking the monolith platform

          Below are some of the key principles that need to be evaluated when one starts to break out services from a monolithic platform.


1.    Target Core Services or Fringe Services First?

          Target functionality that doesn't require changes to the end-customer application and ideally doesn't need any core database migration or changes. Starting at the fringes makes things easier for subsequent services, because the CI/CD pipelines, the required alerting and monitoring systems, testing strategies, and version control are already in place.

2.     Split Schema or Code First?

          If the core services are clear, always split out the schema first and keep the services together before splitting the application code out into microservices. If the services are too coarse-grained, they will later be split into smaller services, creating another data migration. Also, two services accessing the same database results in tight coupling between those services.

3.    Moving out Services Vertically or Horizontally?

        Moving services out can happen either vertically or horizontally. Try to move out a single core service at a time, vertically: first the database, then the functionality, and then the front end. This technique avoids costly and repeated data migrations and makes it easier to adjust the service granularity when needed.

4.     Building Micro or Macro or Mini services?
       
          When creating a service, first identify the core services and define clear bounded contexts. Until the core services are clearly demarcated, start with a macro service. Once the demarcations are clear, it is easy to split it further into microservices.

5.     Outside in or Inside Out Creation of Services?
        
          The easiest way to create services is from the outside in, understanding how the various integrations need to talk to the various applications. However, this can lead to data inconsistencies and data integrity issues. Designing services inside out is more time-consuming but cleaner, with clearly defined boundaries for each service. If approached properly, this will reduce possible data integrity issues.

6.     Where to build New functionalities?

       Build any new functionality as new microservices, targeting services that are business-centric. Do not add dependencies back to the monolithic platform. Ensure that the new services do not call the monolithic application directly and always access it via an anti-corruption layer, as sketched below.
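A minimal sketch of such an anti-corruption layer, with hypothetical endpoint and field names: the new service owns a clean domain interface, and a thin adapter is the only code that knows about the monolith's API and data shape.

import org.springframework.web.client.RestTemplate;

// Domain-facing port owned by the new microservice.
public interface CustomerDirectory {
    Customer findCustomer(String customerId);
}

// Anti-corruption layer: the only place that knows about the monolith's
// endpoint and data shape, translating it into the new service's model.
class LegacyCustomerAdapter implements CustomerDirectory {

    private final RestTemplate restTemplate = new RestTemplate();

    @Override
    public Customer findCustomer(String customerId) {
        LegacyCustomerDto dto = restTemplate.getForObject(
                "https://legacy.internal/customers/{id}",   // assumed legacy endpoint
                LegacyCustomerDto.class, customerId);
        // Translate the legacy naming and shape into the new domain model.
        return new Customer(dto.cust_no(), dto.cust_name());
    }

    record LegacyCustomerDto(String cust_no, String cust_name) {}
}

record Customer(String id, String name) {}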

7.     Rewriting Code or Capability?
       
          When building new functionality, try to rewrite the capability and not the code. This may be more time-consuming to build, but the monolithic platform already has a lot of redundant code. Rewriting the capability gives an opportunity to improve the granularity of the service, revisit the business functionality, and maintain a clean codebase.

8.    Incremental or Radical updates?

          Target decoupling the modules or services that reduce traffic towards the monolithic application; this will improve the application's performance as well as help in decommissioning infrastructure and reducing cost (licenses).

9.   Versioning Services Incrementally or Concurrently?

          Having multiple versions of the same code leads to maintainability and cost issues, but until the microservices and surrounding integrations have matured, maintaining multiple versions of a service endpoint at any given time reduces the risk of failure and the dependency on external systems; a sketch of concurrent endpoint versions follows below.
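One simple way to keep multiple versions alive concurrently is versioned endpoints on the same service while callers migrate; a sketch with hypothetical order resources:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical example of keeping two endpoint versions alive while callers migrate.
@RestController
public class OrderController {

    @GetMapping("/v1/orders/{id}")
    public OrderV1 getOrderV1(@PathVariable String id) {
        return new OrderV1(id, "SHIPPED");                   // shape the old consumers expect
    }

    @GetMapping("/v2/orders/{id}")
    public OrderV2 getOrderV2(@PathVariable String id) {
        return new OrderV2(id, "SHIPPED", "2021-07-05");     // richer shape for new consumers
    }

    record OrderV1(String id, String status) {}
    record OrderV2(String id, String status, String lastModified) {}
}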




Thursday, June 11, 2020

The myth of Sharing State when breaking large applications


One of the complex puzzles in a microservices journey is how and when to break the database. When thinking about breaking a legacy monolith application, the first low-risk thought that comes to mind is to decompose the platform module by module into standalone microservices using multiple ORMs, all hitting the same database.

If it were an application with a limited number of tables and modules, this would have been the simplest approach. If there is a firm partition between each microservice's data, with few dependencies, then it becomes fairly easy to adopt services while maintaining one large database with several schemas.

However, legacy applications are seldom portable, and sharing data or state is, for all intents and purposes, convoluted. Below are some of the typical concerns that need to be evaluated when building or maintaining applications with a single shared state.


Tight Coupling of Services

One of the key principles architects strive for is to build a loosely coupled application that can cater to future, unknown requirements. In data terms, that essentially means building new functionality with new ways of persisting state without impacting the existing application or its state.

Most legacy applications have been built and maintained for years and have very tight coupling between out-of-the-box and custom modules and libraries. This results in huge state dependencies between modules. Any new requirement, whether moving a module to an event-driven design or introducing a NoSQL database to solve certain quality attributes, is no easy task and requires a complete revamp of several services.

 Weak Cohesion

A basic principle of building microservices is separation of concerns, i.e. each service, or group of services, has its own dedicated state.

Large legacy applications generally have a large database with several schemas. Each schema is accessed by several services, so any change in logic that requires a database change impacts all the corresponding services. If a database table changes, all the related services have to change, and this creates huge dependencies between development teams and a strong sunk-cost effect.





Friday, January 10, 2020

Dealing with Concurrency Issues in large applications

The last few days have been hectic dealing with concurrency issues with our monolith application during the peak traffic period.

Concurrency issues are not easy to resolve, especially when you have an application with thousands of files. The error was in the order pipeline during checkout, when hundreds of custom pipelines execute in parallel. When the error occurred, all the preceding transactions were rolled back.

Since the issue had happened for the first time, we initially just ignored the error, hoping it would not crop up again. As traffic increased, the errors increased with it, and every error in the log pointed to a concurrency exception.

We did not have much logging, and that's when we started evaluating every table in the transaction and their relationships. We got the list of all the tables, and there were close to 100 tables being accessed. We decided to split the tables into read-only and write sets. Once we knew which tables were being updated, we tried pinpointing the ones that had foreign-key relationships. That further narrowed down the tables where the issue could potentially be present.

Lastly, on further analysis, we came across a table where locking was a possibility. Meanwhile, the newly enabled logs gave details about concurrency errors on the same set of tables. The first thing we noticed was that there was no last-modified timestamp column on these tables. We then went back to the application code, added explicit locking, and added a check validating the last-modified timestamp.
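The fix described above is close to what JPA offers out of the box with optimistic locking; a rough sketch with a hypothetical entity, where a conflicting concurrent update fails with an OptimisticLockException instead of silently overwriting state:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;
import java.time.Instant;

// Hypothetical order entity: the @Version column makes JPA reject conflicting
// concurrent updates instead of letting the last writer silently win.
@Entity
public class OrderEntity {

    @Id
    private String id;

    private String status;

    private Instant lastModified;   // the column we found missing in our tables

    @Version
    private long version;           // incremented on every successful update

    public void updateStatus(String newStatus) {
        this.status = newStatus;
        this.lastModified = Instant.now();
    }
}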

All this took a week to resolve, and it made me realize how difficult it is to eradicate concurrency issues in systems. Years later, when I look back at this article, it will be a surprise if I have not come across the same issues again.



Tuesday, December 17, 2019

Product Oriented Development Methodology

In several organizations, the focus of project execution has recently shifted, with product-specific deliveries replacing project-specific deliveries. This requires a complete organization-wide change to make every internal and external project run on agile ways of working.

What this essentially means is that every project needs to be viewed from an MVP approach, i.e. a Minimum Viable Product. This requires:


  1. Changes in the organization and team structures

Teams are defined around specific goals. They are asked to think more in terms of product innovation and to deliver specific features that evolve the product. There is no one-time delivery, and the business teams are aligned directly with these teams to incorporate faster feedback into the product roadmaps. Each team is attached to a dedicated product owner who helps define the specific goals required by the business.

  2. Changes to the ways of working.

Different stakeholders (both IT and business) meet every few weeks and define the product goals. Teams are more cross-functional so that they can be moved around once the specific goal is achieved. From the business perspective, funding for these products is allocated incrementally, on a quarter-by-quarter basis, rather than by estimating long-term projects. If an idea is not flying, it can be terminated at any point in time. Teams are strengthened with an Agile and DevOps culture. It's a flatter structure where individual teams mostly interact with an agile coach, a product owner, and a PMO in the hierarchy.

  3. Product Goal Definition

The product goal definition is based on the lifecycle of the idea from start to production, and the idea is split into a week-by-week roadmap. The goals are created based on different objectives tied to parameters such as real-time customer behavior or any other quantifiable metric. Each team is asked to present the product's progress in incremental demos every second week. Several of the goals are also based on market competition, and the main idea is faster velocity.

Sunday, December 15, 2019

Simple way to create automatic alerts for your application using Slack API

A simple Spring Boot application for Slack notifications

Slack has become one of the most widely used channels for communication between teams in many organizations. It has an open API that can be used to send alerts and events to different channels and stakeholders.

Below is a simple Spring Boot application that uses a JPA repository to fetch database contents and communicates with Slack via the JSlack API.
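The notification piece essentially boils down to posting a JSON payload to a Slack incoming-webhook URL. A minimal sketch of just that part (the repository linked below goes through the JSlack library instead) could look like:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

import java.util.Map;

// Minimal sketch: post an alert to a Slack channel through an incoming webhook.
@Component
public class SlackNotifier {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String webhookUrl = System.getenv("SLACK_WEBHOOK_URL"); // webhook configured per channel

    public void notify(String message) {
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        // Slack incoming webhooks accept a simple {"text": "..."} payload.
        restTemplate.postForEntity(webhookUrl,
                new HttpEntity<>(Map.of("text", message), headers), String.class);
    }
}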



The Spring Boot application is available at the following path on GitHub:

https://github.com/shailendrabhatt/spring-boot-slack-notification

Saturday, December 7, 2019

Quick 5 Point Website Inspection for Peak Holiday Traffic

The holiday season is around the corner, and with code freezes in place, teams don't want to risk any major changes to production that could hamper the most critical business period of the year.
Below are five quick non-code checks that can help a website prepare for holiday traffic and sales.

1) Examine the Infrastructure Sizing

If critical applications are running on on-premises servers, recalculate their approximate capacity, i.e. how much traffic these applications can hold up under.
This is based on several data points, including the number of requests per second, average application server response times, and instance details such as CPU, cores, threads, and sockets. All the required information can typically be gathered from analytics or monitoring tools.

Calculating the number of Cores?

To do this, all that is needed is to load the CPU information (lscpu) and view the values for threads per core, cores per socket, and the number of sockets.

Calculate the maximum load or throughput of the system?

The next step is to calculate the average number of requests to the application servers. This can again be derived from the monitoring tool's data. The information should be taken from the highest observed peak traffic, or the expected peak traffic, for the application or website.

Calculate the Average response times?

The next value that needs to be calculated is the average response time of the application server. This information is also available in any monitoring tool, or an expected average value in seconds can be used.
Now, with the required information in place, the number of cores can be calculated using the formula:
Number of cores = requests per second * average response time in seconds
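For example (illustrative numbers only): at a peak of 200 requests per second and an average response time of 250 ms, the formula gives 200 * 0.25 = 50 cores, which can then be compared against the total cores reported by lscpu across the instances to see whether there is enough headroom.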

2) Tweak the Enterprise Search Relevancy

Research articles show that browse and search pages are the most visited pages, as well as the pages where most customers are lost. Many websites spend a lot of money and time on external search engine optimization, yet one key feature that is often overlooked is the relevancy ranking of products in the website's own browse and search.
Below are some tips that can help boost the most relevant products to the top and bury the least preferred products on the site.

Prioritization of Attributes

Attribute precedence is key for relevancy. There needs to be a clear definition of the data associated with these attributes. Special consideration has to be given to data like category name, product name, descriptions, and other keywords, as these attributes form the basis of the search engine index. Custom attributes can be created as required and added to the prioritization list.

Monitoring Search Results

Use analytics to monitor user behavior on search and browse results. Information like the most-clicked products and categories, the most popular searches per product and category, error results, zero-result searches, searches served with auto spell correction, page behavior, and so on clearly indicates how relevant the results shown to the user are. This data forms the basis for boosting the required results and burying irrelevant ones.

Enterprise Search Engines

The configuration of core search functionalities like synonyms, thesaurus, stemming, spell correction, and "did you mean?" plays a vital role in displaying relevant results. For sites with multiple locales, update the custom stemming files with the top search keywords. Also, it is essential to keep building the dictionary as new data sets are added.

3) Keep a Close Vigil on the Browser Load times

Page load times matter a lot during peak traffic. While application health is one aspect to evaluate, end-customer load times also need to be closely monitored.
Below are a few areas that need to be evaluated so that browser load times are not hampered.

Evaluate the Caching TTL

Retrieving static content over the public network is not cheap: the larger the file size, the more bandwidth is used, the higher the cost, and the slower the response. Caching static content in the browser plays a critical role in reducing those server calls, resulting in faster performance and quicker page loads.
Ensure HTTP caching is enabled, which lets servers direct the browser cache to serve the content for a longer duration.
If a CDN is used, re-evaluate the TTL for static content, especially the images.
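As a rough example, in a Spring Boot application the browser-cache TTL for static assets could be set along these lines (the paths and the 30-day max-age are just placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.http.CacheControl;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

import java.util.concurrent.TimeUnit;

// Minimal sketch: serve static assets with a long Cache-Control max-age so
// browsers (and the CDN) keep reusing them instead of hitting the origin.
@Configuration
public class StaticCacheConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**")
                .addResourceLocations("classpath:/static/")
                .setCacheControl(CacheControl.maxAge(30, TimeUnit.DAYS).cachePublic());
    }
}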

Avoid those Content Publishes

Every content publish to the live site requires clearing caches. Try to avoid the risk of these content pushes during peak times, and if one is required, ensure that only the specific content's cache is cleared.

Avoid those External Scripts Calls

All the third-party integrations the website references, especially those loaded from the browser, have to be looked at. During peak traffic hours, every external script is a point of vulnerability and can drag down browser load times and, in some unfortunate cases, the website as well.

4) Tune Database Performance

Evaluate relational database performance to prevent long-running queries. Below are a few basic tips that can help optimize the database quickly.

Evaluate the Database Statistics

One of the critical aspects of tuning the database is collecting and understanding the statistics. While the monitoring tools give certain information on the most time-consuming queries, the statistics collected from the database server can give detailed information about specific tables, the index distribution, details related to database sizing, etc.

Optimize Indexes

It is vital to have the right insight into the indexes on tables; this is the most common issue found when dealing with an RDBMS. It is essential to understand the fine line between no indexes and too many indexes, and the indexing strategies for inserting, reading, and updating data.
Most modern databases provide information about columns that should be indexed, their priority order, missing or incorrect indexes, etc., and also provide suggestions on improving them.

5) Network Optimization

The cost of serving a website over the enterprise network is often underestimated. Responses within the internal organization network require many round trips between the enterprise internet gateway, firewalls, load balancers, security gateways, and the web and application servers placed in different data centers. Below are some suggestions that can help optimize the internal network.
  • Try to remove unnecessary firewall rules, which can gain a few milliseconds for every request.
  • Try to keep the internal network free from excessive internal traffic.
  • Identify infrastructure with a single point of failure and monitor it closely.
  • Look out for bots and hostile IP addresses that are clogging the network bandwidth.
