Wednesday, July 31, 2019

An approach to which pages to move first in a microservices journey

Every organization's microservices journey is different, shaped by its core competencies. However, a few of the basics of where to start and where to end are more or less similar. I came across the diagram below on multiple websites; it illustrates the typical migration model. Read-only pages or static content pages are much easier to move to newer platforms, especially cloud services like SaaS, than core business components.

Tuesday, July 30, 2019

Key Architectural Considerations to Evaluate when moving applications to cloud - Part 1

  Ø  Elasticity is one of the major benefits of moving to the cloud. It essentially means that servers can be scaled in or out as needed. While the cloud offers both horizontal and vertical scaling of applications, it is the horizontal scaling that reaps the major benefits. If elasticity is not a key concern, the application's readiness for the cloud needs to be evaluated, as it could be a better fit to be managed on premise or on a hosted solution.


  Ø  Most legacy applications scale vertically: the application depends on core infrastructure and is tightly coupled to in-house hardware (low latency + high intensity, or high bandwidth) and to a specific software and technology stack. Moving such applications to the cloud can create a lot of complexity and require a lot of rearchitecting. It also tends to turn the result into a hosting solution rather than a loosely coupled cloud solution.

  Ø  Modern cloud services rely mainly on databases that follow the BASE properties (Basically Available, Soft state, Eventually consistent) and are designed with the CAP theorem (Consistency, Availability, Partition tolerance) in mind: when a partition or failure occurs, availability is preserved and the data becomes consistent eventually. Legacy applications are typically monolithic, and the underlying data is mainly designed for ACID transactions (Atomicity, Consistency, Isolation, Durability), i.e. a transaction is not complete until the data is committed. Applications that cannot work toward eventual consistency and partition tolerance become complex to run in the cloud and to adapt to core cloud features.



  Ø  Another property that needs to be evaluated when moving an application to the cloud is application state. The cloud is well suited to stateless applications, i.e. those where the server retains no client state between requests. Stateless applications are also easier to size and cache on the cloud. If the legacy application is stateful, i.e. it has a lot of dependency on infrastructure, moving it to the cloud gets complex given the different requirements around sizing, capacity planning, caching, etc.

  Ø  If applications require a lot of security and compliance, moving them to the cloud means the organization shares responsibility for much of the IT management with the cloud vendor. The organization not only needs to maintain adequate governance but also remains responsible for meeting the required compliance and audit standards. Most major cloud vendors do provide extensive IT security to meet regulatory standards, including intrusion prevention systems, web application firewalls, runtime application self-protection, converged application performance and security monitoring, and botnet and DDoS mitigation. Even so, moving such applications to the cloud can be more of a hassle in terms of management, quality adherence, maintenance, etc. than keeping them in house.

  Ø  Connectivity and interoperability are other key considerations when deciding to move applications to the cloud. Every major cloud provider can connect either directly or via a virtual private network, but this requires the organization to close all the critical loopholes for such connections in the targeted and dependent applications. This can be a very tedious task and can lead to several challenges if the organization is not ready.
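The elasticity point above can be made concrete with a toy autoscaler. This is a minimal sketch, assuming invented CPU thresholds; it is not any cloud vendor's actual scaling policy:

```python
# Sketch of horizontal scaling: a threshold-based autoscaler decision.
# The thresholds (75% / 25%) are illustrative assumptions only.

def desired_instances(current, avg_cpu, scale_out_at=0.75, scale_in_at=0.25):
    """Return the new instance count for a simple threshold autoscaler."""
    if avg_cpu > scale_out_at:
        return current + 1          # scale out: add an instance
    if avg_cpu < scale_in_at and current > 1:
        return current - 1          # scale in: remove an instance
    return current                  # load is in the comfortable band

n = desired_instances(current=2, avg_cpu=0.9)   # under load, grow to 3
```

Vertical scaling, by contrast, would change the size of the one instance rather than the count, which is exactly why tightly coupled legacy applications struggle to benefit.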
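The ACID-versus-BASE contrast above can be illustrated with a minimal, hypothetical Python sketch: an all-or-nothing ledger next to replicas that accept writes locally and converge later. The classes are invented for illustration:

```python
# Hypothetical sketch: ACID-style commit vs. BASE-style eventual consistency.

class AcidLedger:
    """All-or-nothing: readers never see a partially applied transaction."""
    def __init__(self):
        self.balance = 100

    def transfer(self, amount):
        staged = self.balance - amount
        if staged < 0:                 # any failure aborts the whole transaction
            raise ValueError("insufficient funds, transaction rolled back")
        self.balance = staged          # commit point

class BaseReplica:
    """Soft state: each replica applies writes locally, then syncs."""
    def __init__(self):
        self.balance = 100
        self.pending = []              # writes not yet replicated

    def write(self, amount):
        self.balance -= amount
        self.pending.append(amount)    # replication happens later

    def sync_to(self, other):
        for amount in self.pending:    # anti-entropy: push pending writes
            other.balance -= amount
        self.pending.clear()

a, b = BaseReplica(), BaseReplica()
a.write(30)
stale_read = b.balance                 # b has not converged yet
a.sync_to(b)
converged_read = b.balance             # replicas are now consistent
```

The stale read in the middle is the "eventually" in eventual consistency; a monolith built on `AcidLedger`-style semantics has no place for that window.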
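For the statelessness consideration, a common remediation is to externalize session state so any instance can serve any request. A minimal sketch, where an in-memory dict stands in for an external store such as Redis (an assumption for illustration, not a prescription):

```python
# Sketch: stateful vs. stateless request handling.
# SESSION_STORE stands in for an external store (e.g. Redis);
# that choice is illustrative only.

SESSION_STORE = {}   # shared, lives outside any one server instance

class StatefulServer:
    """Holds sessions in local memory: requests are pinned to one instance."""
    def __init__(self):
        self.sessions = {}

    def handle(self, session_id, item):
        cart = self.sessions.setdefault(session_id, [])
        cart.append(item)
        return cart

class StatelessServer:
    """Keeps no local state: any instance behind a load balancer will do."""
    def handle(self, session_id, item):
        cart = SESSION_STORE.setdefault(session_id, [])
        cart.append(item)
        return cart

# With stateless servers, two different instances can serve the same user.
s1, s2 = StatelessServer(), StatelessServer()
s1.handle("u42", "book")
cart = s2.handle("u42", "pen")   # s2 sees the item s1 stored
```

This is also why stateless services are easier to size: instances are interchangeable, so capacity planning reduces to instance count.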


Thursday, July 4, 2019

What type of private cloud should an organization invest in?


Private cloud computing is a form of cloud computing used by only one organization, isolated from others. A private cloud is single-tenant; it prevents exposure of data to the external world or the internet, and results in better performance with fewer security risks.
A private cloud, or enterprise cloud, resides in the company's intranet (internal data center) or in a hosted data center where a firewall protects the data. Hence, private clouds fall into two categories:
    
       a)    On Premise Hosted Private Cloud
       b)    Off Premise Hosted Private Cloud

In an On Premise Hosted Private Cloud, the organization has to take care of all the infrastructure, including maintenance, licensing costs, etc. In the Off Premise Hosted Private Cloud, all of these overheads are outsourced to the managing vendor.

In either category, one major advantage of a private cloud is that it offers dedicated, secured infrastructure that is not accessible to others, providing enhanced security and the option to implement various levels of authentication and security for the infrastructure. Further, organizations can choose one or more data centers, either on premise or with dedicated service providers.
An On Premise Hosted Private Cloud requires an initial investment in dedicated hardware infrastructure, and hence suits organizations that have determined their cloud priorities against existing infrastructure costs, or that need to invest little in new infrastructure. Organizations with existing data centers can look at this option, as they can use that infrastructure to move to a private cloud.

Since the organization maintains the infrastructure, licensing costs, upgrades, installations, maintenance, and administration must all be handled in house. This can be expensive, and it will not suit organizations for which setting up and administering an on-premise or service-provider-based data center requires additional investment.

One of the other advantages of an On Premise Hosted Private Cloud is that it allows organizations to customize their cloud infrastructure to business needs, increasing scalability and flexibility. However, this is also a disadvantage for organizations for which scaling virtual computing services on demand is a challenge.

A private cloud also suits organizations that want to capitalize on virtualization technology: it lets them improve infrastructure cost and performance and make better use of underutilized hardware.

Hence, investing in an On Premise Hosted Private Cloud is a good option for organizations with considerable predictability and consistency in infrastructure costs and demands. Organizations with existing services that can produce data on infrastructure usage and statistics are in a better position to handle the demand and avoid paying for under-utilized infrastructure. This option also suits organizations for which control and security are the top priority.

An Off Premise, or externally hosted, private cloud can be an option if the organization wants to remain single-tenant but cannot afford to scale, maintain, and administer an On Premise Private Cloud.

Monday, July 1, 2019

Getting website content relevance right

If you have come across a search results page where the top results are irrelevant to the search term entered the majority of the time, it's usually due to an incorrect relevancy-ranking setup. Relevancy ranking of products on search and browse pages is a critical search feature that easily gets ignored on many retail sites.


Most modern enterprise search engines provide an easier way of handling relevance using business tools. However, understanding the algorithm and the fit behind these results is not simple. Online businesses generally apply a standard retail strategy for relevance, but this may not be applicable to every domain. As complexity increases with data coming from multiple sources, locales, and applications, the relevancy ranking requires some fine-tuning.


Below are some of the measures that need to be factored in to improve website product relevancy ranking:


Monitoring Search Results

Use analytics to monitor user behavior on search and browse results. Information like most-clicked products, most popular searches, error results, spell corrections, and page behavior gives an indication of how relevant the results shown to the user are. This data forms the basis for boosting the right results and demoting irrelevant ones.
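As a hypothetical illustration, click-through data can be aggregated per query to spot searches whose top results are being ignored. The event fields ("query", "clicked") and the threshold are invented for the example:

```python
# Sketch: flag search queries whose results rarely get clicked.
# Event field names and the CTR threshold are illustrative assumptions.

from collections import defaultdict

events = [
    {"query": "red shoes", "clicked": True},
    {"query": "red shoes", "clicked": True},
    {"query": "red shoes", "clicked": False},
    {"query": "hdmi cable", "clicked": False},
    {"query": "hdmi cable", "clicked": False},
]

def low_ctr_queries(events, threshold=0.2):
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for e in events:
        shown[e["query"]] += 1
        if e["clicked"]:
            clicked[e["query"]] += 1
    # queries whose click-through rate falls below the threshold
    return [q for q in shown if clicked[q] / shown[q] < threshold]

suspect = low_ctr_queries(events)   # candidates for relevancy tuning
```

Queries that surface on such a report are the natural starting point for boosting or demoting results.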


Interpreting the data

As an online business grows, many retailers fail to understand the importance of data updates. With huge numbers of product-specific properties being created, understanding the attributes associated with the right data helps display relevant products in the right order. Applications need the real-time capability to refresh this data in a timely manner and get it to end users as quickly as possible.


Prioritization of Attributes

Attribute precedence is key to relevancy. There needs to be a clear definition of the data associated with these attributes. Special consideration has to be given to data like category name, product name, descriptions, and other keywords, as these attributes form the basis of the search engine index. Custom attributes can be created as required and added to the prioritization list.
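A toy sketch of attribute-weighted scoring follows; the attribute names and weights are made up for illustration, and real engines expose this through their own boosting configuration:

```python
# Sketch: rank products by weighted matches across prioritized attributes.
# Attribute names and weights are illustrative assumptions.

WEIGHTS = {"name": 5.0, "category": 3.0, "description": 1.0}

products = [
    {"name": "running shoes", "category": "footwear",
     "description": "lightweight shoes for daily runs"},
    {"name": "shoe rack", "category": "furniture",
     "description": "holds up to 12 pairs of shoes"},
]

def score(product, term):
    # Each attribute that contains the term contributes its weight,
    # so a name match outranks a description-only match.
    return sum(w for attr, w in WEIGHTS.items() if term in product[attr])

def rank(products, term):
    return sorted(products, key=lambda p: score(p, term), reverse=True)

top = rank(products, "shoes")[0]["name"]
```

The weighting is the whole point of the prioritization list: a product that matches only in its description should not beat one that matches in its name.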


Relevancy Ranking Algorithm

Teams responsible for creating product data have to be well versed in the relevancy-ranking modules and their impact on the end user. Each ranking module is executed in turn during the relevancy-ranking process and ranks the results. Teams need to ensure that when a module's execution ends in a tie, the tied results are segregated and ranked by the remaining modules in order of precedence.
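The cascade of ranking modules with tie-breaks can be sketched as sorting on a tuple of module scores applied in precedence order. The module names and scores here are invented for illustration:

```python
# Sketch: ranking modules applied in precedence order; a later module
# only matters when every earlier one ties. Module names are hypothetical.

results = [
    {"id": "A", "text_match": 2, "sales_rank": 10},
    {"id": "B", "text_match": 3, "sales_rank": 50},
    {"id": "C", "text_match": 2, "sales_rank": 40},
]

# Precedence: strongest text match first; sales rank breaks ties.
MODULES = [
    lambda r: -r["text_match"],   # higher text match ranks earlier
    lambda r: -r["sales_rank"],   # tie-break: higher sales rank earlier
]

def rank(results):
    # Python sorts tuples lexicographically, which is exactly the
    # "next module only on a tie" behavior described above.
    return sorted(results, key=lambda r: tuple(m(r) for m in MODULES))

order = [r["id"] for r in rank(results)]
```

Here A and C tie on text match, so the sales-rank module decides between them, while B wins outright on the first module.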


Enterprise Search Engines

The setup of core search functionality like synonyms, thesaurus, stemming, spell corrections, "did you mean", most popular, etc. plays a vital role in displaying relevant results. Sites with multiple locales have to maintain custom stemming files. It's also essential to keep building the dictionary as and when new data sets are added. Many modern search engines provide all of the above features out of the box through an administration tool.
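Synonym expansion, for example, can be sketched as rewriting a query into the union of its configured equivalents before it hits the index. The synonym table below is an illustrative stand-in for what a business tool would maintain:

```python
# Sketch: expand a query term with configured synonyms before searching.
# The synonym table is an illustrative stand-in for an admin-tool config.

SYNONYMS = {
    "sofa": {"couch", "settee"},
    "tv": {"television"},
}

def expand(term):
    # Search for the original term plus any configured equivalents.
    return {term} | SYNONYMS.get(term, set())

docs = ["grey couch three-seater", "oak dining table", "wall-mounted television"]

def search(term):
    terms = expand(term)
    return [d for d in docs if any(t in d for t in terms)]

hits = search("sofa")   # matches via the "couch" synonym
```

Stemming, spell correction, and thesaurus lookups slot into the same place: they all rewrite or widen the query before matching, which is why the dictionary has to grow with the catalog.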

Who should invest in a private cloud?


Private cloud computing is a form of cloud computing used by only one organization, isolated from others. A private cloud, or enterprise cloud, resides in the company's intranet or a hosted data center where a firewall protects the data. This prevents exposure of data to the external world or the internet and results in better performance with fewer security risks.

One major advantage of a private cloud is that it offers dedicated, secured infrastructure that is not accessible to others, providing enhanced security and the option to implement various levels of authentication and security for the infrastructure. Organizations can choose a data center either on premise or with dedicated service providers.

Another advantage organizations should consider is that a private cloud allows them to customize their cloud infrastructure to business needs, increasing scalability and flexibility. A private cloud also suits organizations that want to capitalize on virtualization technology: it lets them improve infrastructure cost and performance and make better use of underutilized hardware.

A private cloud requires an initial investment in dedicated hardware infrastructure, and hence suits organizations that have determined their cloud priorities against existing infrastructure costs, or that need to invest little in new infrastructure. Organizations with existing data centers can look at this option, as they can use that infrastructure to move to a private cloud.

Since the organization maintains the infrastructure, upgrades, installations, and administration must be managed in house. This can be expensive, and it will not suit organizations that would need additional investment to set up an on-premise or service-provider-based data center and hire dedicated administrators.

Hence, investing in a private cloud is a good option for organizations with considerable predictability and consistency in infrastructure costs and demands. Organizations with existing services that can produce data on infrastructure usage and statistics are in a better position to handle the demands of a private cloud.

Friday, February 22, 2019

How to decide when to move code out of a legacy application

As an Architect, one of the most challenging aspects of decision-making is when to start disintegrating a legacy platform. Working on a legacy e-commerce platform, I have had my share of struggles getting the business to understand the long-term goals of a system and deciding how to build around a legacy platform. For years, business teams have relied on legacy systems to run all critical business processes and operations. They are least bothered about the development and operational struggles, as their focus has been on getting features implemented.


Also, as the system has aged, it has become increasingly difficult to maintain and update: the applications have outdated technology stacks, no documentation, and have grown unwieldy over time. In this article, I have tried to capture some of the key points that we as Architects can use to convince the business and make them realize the thresholds at which a platform has to change.


Business Value

One of the primary factors to consider when deciding whether to move out code from a legacy application is the business value that the code provides. It's very important to question and understand the business values in terms of numbers. If the code is critical to the organization's operations and generates significant revenue, it may be worth investing in modernizing the codebase to ensure its long-term viability. On the other hand, if the code is not generating significant business value and can be easily replaced by a newer solution, it may make sense to retire the legacy application and move on.


Scalability

Legacy applications may not be designed to handle the scale of modern business operations. In our case, the licensing model tied to the infrastructure made it even more difficult. If the application is struggling to keep up with growing demand or is frequently experiencing downtime, this is the time to get the business's attention: it may be time to move the code out of the legacy application and into a more scalable, modern environment.


Technical Debt

Technical debt has become my go-to term in every meeting. It refers to the cost of maintaining and updating software that has not been built to current best practices or standards. Legacy applications often carry significant technical debt, as they may be built on outdated technology stacks with years of patching and no proper documentation or testing. If the technical debt associated with the legacy application is making it difficult to maintain and update, create a holistic list of it; it may be time to move the code to a modern platform.


Skillset

Legacy applications may rely on outdated technologies and programming languages, which is one of the main reasons it is hard to find developers with the necessary skill set to maintain and update the codebase. When an organization is struggling to find developers with the necessary expertise, it is a clear indication that it may be time to move the code out of the legacy application and onto a modern platform that is more widely supported.


Security

Very few business teams pay attention to the security aspects of a platform; it is very often seen as IT's problem. Legacy applications are bound to have security vulnerabilities that make them more susceptible to cyber-attacks. If a legacy application is not being properly maintained and updated, it is putting the organization's data and operations at risk. It is very important for us Architects to explain these security vulnerabilities and risks to the stakeholders.


In conclusion, deciding when to move out code from a legacy application can be a complex decision that requires careful consideration of multiple factors. By assessing some of the above critical points, businesses can make informed decisions about when to modernize their codebase and move to a more sustainable platform. Ultimately, this decision has to tie in with the organization's technology infrastructure to support its current and future business needs.

Sunday, February 10, 2019

What would it take to release your legacy application multiple times a week?


Building a simple DB monitoring tool using Spring Boot


How to optimize page load time in the browser in various ways


How to prepare for holiday traffic when on a legacy ecommerce platform


The advantages of API documentation


Are technical design documents really helpful, and how do we adapt to a minimal-documentation approach?

As Architects and Developers, we have all been on projects writing high-level and low-level design documents. Design documents are a common artifact in software development projects, but their usefulness and effectiveness in an Agile organization have been the subject of debate among IT teams. The main intention of design documents is to ensure that all stakeholders understand the project goals, technical blueprint, and requirements at a granular level. But limited audiences read these documents, and they very seldom get updated. Hence, these documents tend to become cumbersome, time-consuming, and ineffective.


The Case for writing Technical Design Documents


In a waterfall project, the software lifecycle moves from one stage to the next, and writing design documents made a lot of sense, as they were deliverables and inputs to the following stage. They also served as a reference for various teams throughout the project lifecycle without many changes to the document. Additionally, in large projects where the development team expanded rapidly in different stages, design documents became helpful for onboarding new team members.


However, developers still complained if the document became too long or too technical for business stakeholders. Additionally, in a fast-paced project where requirements changed abruptly, design documents quickly became out-of-date or irrelevant as the project progressed, rendering them ineffective as a reference.


The Case for Minimal Documentation


In recent years, with several organizations adopting the Agile work model, there has been a growing trend toward minimal documentation. This approach typically emphasizes lightweight and flexible documentation, with more focus on communication and collaboration among team members. Facilitation happens via regular meetings, syncs, retrospectives, tech guilds, stand-ups, and other forms of communication. Different Agile teams adopt different strategies:


One of the minimal documentation approaches is to use lightweight documentation tools, such as Wiki or shared Google Docs, to document project goals, requirements, design, Architecture artifacts, decisions, diagrammatic representation of solutions, etc.


Some teams that involve business teams closely follow a BDD/TDD approach and use that as the starting reference for any requirement or even design-specific decision. The development teams also follow a strategic approach toward committing code and documenting every release into production.


Development teams that hate to write documentation can use code-documentation tools like Doxygen, Javadoc, etc., or template-based tools like Markdown, AsciiDoc, etc., that generate documentation from source-code annotations, structures, or comments using automatic scripts.
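As a small illustration of the docs-from-code idea, a docstring kept next to the code plays the same role as a Javadoc comment, and standard tooling (pydoc, Sphinx, and the like) can render it into browsable documentation. The function below is invented purely for the example:

```python
# Sketch: documentation kept next to the code it describes.
# apply_discount is a made-up example function; tools like pydoc can
# render such docstrings into reference documentation automatically.

def apply_discount(price: float, percent: float) -> float:
    """Return ``price`` reduced by ``percent``.

    Args:
        price: Original price; expected to be non-negative.
        percent: Discount in the range 0-100.

    Raises:
        ValueError: If ``percent`` is outside 0-100.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# The documentation travels with the code and is introspectable at runtime:
summary = apply_discount.__doc__.splitlines()[0]
```

Because the text lives in the source file, it goes through the same review and versioning as the code, which is exactly the maintenance property that standalone design documents lack.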


Conclusion


Most development teams in today's Agile organizations hate design documents, as they are time-consuming and carry maintenance overhead. They can slow down the development process, leading to analysis paralysis. Also, many tools and techniques are now available to automate the creation of diagrammatic representations and technical blueprints during development.



How organizations complicate price information changes


Importance of Logging the right messages in your application


Building Microservices by decreasing Entropy and increasing Negentropy - Series Part 5

A microservices journey is all about gradual overhaul; every time you make a change, you need to keep the system in a better state or the ...