Wednesday, August 7, 2019

Sizing an Application

Capacity planning for a web application.

Recently we observed that the website I have been a part of was running at roughly constant average response times across different peak loads. By this time we had moved many of the functionalities to different applications on the cloud, where scalability is auto-managed. This made us revisit the number of instances we run in production on our on-premise servers: were all of these servers really required for the peak load, and could we save some money and maintenance on the infrastructure by reducing the number of servers?

This required recalculating the capacity plan for the different environments. Estimating the approximate capacity, i.e. how much traffic an application can hold up to, is based on several data points.

These data points include the number of requests per second, the average application server response time, and the number of instances along with their CPU details (cores, threads, sockets, etc.). All of the required information can typically be gathered from an analytics tool such as Google Analytics or Adobe Analytics, or from a monitoring tool such as New Relic or Dynatrace.

Calculating the number of Cores?

To do this, all we need is to view the CPU information (lscpu) and note the values for Thread(s) per core, Core(s) per socket, and Socket(s). For example, with 6 cores per socket, 1 socket, and 1 thread per core, the number of cores = 6 * 1 * 1 = 6.

This value is for a specific instance or virtual machine; the total number of cores is calculated by adding up the cores of all the virtual machines. For example, if there are 4 such virtual machines, then the total number of cores in the infrastructure, based on the above calculation, is 4 * 6 = 24.
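As a rough illustration, below is a minimal Python sketch (assuming a Linux host where the lscpu command is available) that parses these three values and computes the core count of one VM; the 4-VM total above is then just the sum across machines.

```python
import subprocess

def cores_per_vm() -> int:
    """Derive the logical core count of this VM from lscpu output."""
    out = subprocess.check_output(["lscpu"], text=True)
    info = {}
    for line in out.splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    # cores = threads per core * cores per socket * sockets
    return (int(info["Thread(s) per core"])
            * int(info["Core(s) per socket"])
            * int(info["Socket(s)"]))

# e.g. 4 identical VMs with 6 cores each -> 24 cores in total
total_cores = 4 * cores_per_vm()
```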



Calculate the maximum load or throughput of the system?

The next step is to calculate the number of requests hitting the application servers. This can be read from the monitoring tool's data, fetched for the observed or expected peak traffic of the application or website. For example, if the peak throughput for an application is 4000 requests per minute, then the value in RPS (requests per second) is 4000 / 60 = 66.66.


Calculate the average response time?


The next value that needs to be calculated is the average response time of the application server. This is also available from any monitoring tool, or an expected average value in seconds can be used. For example, assume 250 ms (0.250 seconds) to be the average app server response time.



Now, with the required information in place, the number of cores can be calculated using the formula:

Number of cores = Requests per second * average response time in seconds

For example, number of cores for peak traffic = 66.66 * 0.250 seconds = 16.67 cores (approximately 17 cores).
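This formula is essentially Little's Law: the average number of requests in flight equals the arrival rate multiplied by the average time each request spends being served, and the sizing assumes one core per in-flight request. Below is a minimal Python sketch using the illustrative numbers from this post:

```python
import math

def required_cores(peak_rpm: float, avg_response_sec: float) -> float:
    """Little's Law: in-flight requests = arrival rate * avg service time."""
    rps = peak_rpm / 60.0            # peak requests per minute -> per second
    return rps * avg_response_sec    # assumes one core per in-flight request

cores = required_cores(peak_rpm=4000, avg_response_sec=0.250)
print(f"{cores:.2f} cores at peak -> provision {math.ceil(cores)} cores")
# prints: 16.67 cores at peak -> provision 17 cores
```

Set against the 24 cores counted across the 4 virtual machines earlier, the 17-core estimate suggests there is headroom to reduce the number of servers while still covering the peak load, which was the original motivation for this exercise.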

Tuesday, August 6, 2019

Key Architectural Considerations to Evaluate when moving applications to cloud - Part 2


  Ø  If an application migration requires a lot of integration or coordination between internal and external environments on top of the cloud services, any layer built between the cloud provider and the in-house applications will struggle to keep up with the rate of innovation in the cloud provider's services. Cloud providers already offer numerous services that are portable. Organizations should not build or acquire layers of insulation on top of a cloud provider's native features merely for perceived portability.
  Ø  Modern cloud service providers can auto-scale in order to create resilient and highly available applications, and they offer different solutions for storing and replicating data. If a legacy application is critical enough to require fault tolerance, moving such an application to the cloud can make it easier to manage.

  Ø  Cloud is a better fit if speed and agility are the primary business drivers for an organization. This requires applications to have continuous and direct access to the cloud provider's fast pace of innovation; only by building directly upon provider-native features will the desired business agility and rate of improvement be achieved. Organizations that instead prioritize easily porting applications across cloud providers end up sacrificing speed, agility and innovation.


  Ø  Another area to consider is repeatability. Typical scheduled deployments of legacy applications require downtime, along with human intervention to perform the same manual tasks repeatedly. Likewise, during disaster recovery or an outage, most of the tasks carried out are manual. Cloud services excel at executing the same tasks many times without failure: most application recovery and deployment is auto-managed and involves little to no human intervention.



  Ø  Cloud services generally provide high flexibility and testability. Applications can be run on an as-needed basis, so test environments are good candidates to move to the cloud, especially for load or stress testing. Applications can be made available on the fly with different hardware configurations, operating systems and regions, and scaled up or down as needed. This gets even easier as cloud providers excel at containerized applications and provide seamless continuous integration and deployment.

  Ø  If high performance, monitoring, volatility and high volume are the key requirements, then the application needs quick development and a high rate of innovation. Cloud vendors provide ready-made solutions to meet all such requirements. Performance benchmarks can be met with solutions that address the key concerns of caching, sharding, archiving and storage, and ready-made tools can be configured for in-depth monitoring, logging and analysis. Cloud providers have rich support for state-of-the-art agile development models, including DevOps, containers and microservices, and will be the first to have mature support for upcoming methods such as serverless computing. Different pricing and tenancy models are also available to keep costs to a minimum.
