I have been part of an interesting case study on AWS Managed Blockchain and am glad to have helped author the new AWS blog post on it.
Monday, April 4, 2022
Sunday, March 20, 2022
The Sustainable Enterprise - Why cloud is key to business sustainability
I have been writing several articles on this topic and am pleased to contribute to this newly released white paper, "How Enterprises can achieve sustainable IT via the cloud," produced in collaboration with Microsoft. It was great to share an architect's view and work with some of the market-leading experts on this topic.
Download the white paper here:
Friday, February 4, 2022
Harnessing Green Cloud computing to achieve Sustainable IT evolution
A few months back, I wrote an article explaining what Sustainability means for software development. Since then, I have come across this topic in several forums, including discussions with multiple client organizations that have pledged to quantify and improve on this subject.
Organizations that move their applications to cloud services tremendously improve their IT environmental footprint and their sustainability goals. However, there are several factors an enterprise has to consider, beyond just selecting a cloud provider, to be considered environmentally sustainable.
Focusing on the following six areas can help organizations kick-start their Green IT revolution on the cloud.
a) Cost Aware Architecture thinking
Applications built on cloud infrastructure have several moving parts spread across innumerable services. Organizations that have moved to the cloud often find it very difficult to stay cost-aware and ensure optimal usage of these services.
They are so engrossed in building their core business applications that they don't invest in cost-aware architecture teams that focus on optimizing spend by eliminating unused infrastructure, resizing or terminating underutilized resources, and using lifecycle management. Practices like energy audits, alerts, and IT cloud analysis help identify costs and pinpoint the systems that need to be greened.
Cloud services like Azure Advisor and AWS Trusted Advisor help optimize and reduce overall cloud expenditure by recommending solutions to improve cost-effectiveness. Services like Azure Cost Management and Billing, AWS Cost Explorer, and AWS Budgets can be used to analyze, understand, monitor, and forecast costs.
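As a concrete illustration of using AWS Cost Explorer programmatically, the sketch below builds the parameters for its get_cost_and_usage call, grouping spend by service to surface the biggest cost drivers. The function name and the window of 30 days are my own illustrative choices, not something prescribed by AWS.

```python
from datetime import date, timedelta

def monthly_cost_query(days_back=30):
    """Build parameters for Cost Explorer's get_cost_and_usage call,
    grouping unblended cost by service to spot the biggest spenders."""
    end = date.today()
    start = end - timedelta(days=days_back)
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }

# With AWS credentials configured, the query could be executed like this:
# import boto3
# ce = boto3.client("ce")
# result = ce.get_cost_and_usage(**monthly_cost_query())
```

Feeding the grouped results into a recurring report or alert is one simple way a cost-aware architecture team can keep spend visible.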
b) Sustainable development
Building applications using modern technologies and cloud services helps optimize development code and ensures faster deployments. It also helps reduce redundant storage and end users' energy consumption.
Sustainable development on the cloud has many parts. It involves an end-to-end view of how data traverses the system holistically. Improving load times by optimizing caching strategies reduces data size, data transfer volume, and bandwidth. With innovative new edge services, and by serving content from the appropriate systems, energy-efficient applications can be built that reduce the distance data travels.
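One lightweight caching lever is plain HTTP caching: letting browsers and CDN edge nodes reuse a response instead of re-fetching it cuts both transfer volume and distance travelled. The helper below is a minimal sketch of my own; the function name and defaults are illustrative, not from any particular framework.

```python
def cache_headers(max_age_seconds, shared=True):
    """Build HTTP response headers that let browsers and CDN edge nodes
    reuse a cached copy instead of re-fetching the full payload."""
    scope = "public" if shared else "private"
    return {"Cache-Control": f"{scope}, max-age={max_age_seconds}"}

# Static catalog data can safely be cached at the edge for an hour:
headers = cache_headers(3600)
```

Attaching such headers to static or slowly changing content is usually the cheapest first step before investing in more elaborate edge solutions.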
c) Agile Architecture
One of the core Agile principles is to promote sustainable development and improve ways of working by having development teams deliver at a consistent pace.
Cloud providers offer tools like Azure DevOps and AWS DevOps services, which give development teams a common place to organize, plan, collaborate on code, and build and deploy applications. They allow organizations to create and improve products faster than traditional software development approaches.
d) Increase Observability
There is a direct correlation between an organization's Observability maturity and Sustainability. In Observability, the focus is to cultivate ways of working within development teams to have a holistic data-driven mindset when solving system issues. The concept of Observability is becoming more and more prominent with the emergence and improvement of AI and ML-based services.
Services for automated diagnostics and automatic infrastructure healing, along with the advent of myriad services for deep code and infrastructure drill-downs, real-time analysis, debugging and profiling, alerts and notifications, logging and tracing, etc., indirectly improve an organization's return on investment and productivity.
e) Consumption-based Utilization
Right-sized applications, enhanced deployment strategies, automated backup plans, and designing systems using the cloud providers' well-architected frameworks result in better utilization of the underlying hardware and its energy. This also serves the organization's long-term goals of reducing consumption and power usage, improving network efficiency, and securing systems. Utilizing the right cloud computing service also helps applications scale up or out appropriately.
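Scaling out only when demand requires it is one of the most direct ways to avoid idle, energy-wasting capacity. The sketch below builds the configuration shape used by AWS target-tracking scaling policies (here for an ECS service's average CPU); the function name and the 60% target are my own illustrative assumptions.

```python
def cpu_target_tracking_policy(target_cpu_percent=60.0):
    """Sketch of a target-tracking scaling configuration: keep average
    CPU near the target so instances are neither idle nor overloaded."""
    return {
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingScalingPolicyConfiguration": {
            "TargetValue": target_cpu_percent,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    }

# With credentials configured, this could be registered via
# boto3.client("application-autoscaling").put_scaling_policy(...).
```

A target around the middle of the utilization range tends to balance headroom for spikes against running hardware hot enough to be energy efficient.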
Cloud-provided carbon tracking calculators help gauge which systems or applications require better optimization in terms of performance or infrastructure.
Conclusion
Monday, November 15, 2021
The fundamental principles for using microservices for modernization
Over the last few years, I have spent a lot of time building new applications on microservices and moving parts of monoliths to microservices. I have researched and shared my practical experience in several articles on this topic.
This week, my second blog, on some foundational principles of microservices, was published on the Capgemini website.
Wednesday, October 20, 2021
How to manage the move to microservices in a mature way
Over the last few years, I have spent a lot of time building new applications on microservices and moving parts of monoliths to microservices. I have researched and shared my practical experience in several articles on this topic.
This week, my very first blog on this topic was published on the Capgemini website.
https://www.capgemini.com/se-en/2021/10/how-to-manage-the-move-to-microservices-in-a-mature-way/
Friday, September 3, 2021
The advent of Observability Driven Development
A distributed application landscape with high cardinality makes it difficult for dedicated operations teams to monitor system behavior via a dashboard or react promptly to system alerts and notifications. In a microservices architecture with several moving parts, detecting failures becomes cumbersome, and developers end up looking for errors like finding a needle in a haystack.
What is Observability?
Observability is more than a quality attribute and sits one level above monitoring: the focus is on cultivating ways of working within development teams to have a holistic, data-driven mindset when solving system issues.
Observability in a DevSecOps ecosystem
Several organizations are adopting a DevSecOps culture, and it has become essential for development teams to become self-reliant and take a proactive approach to identifying, healing, and preventing system faults. DevOps focuses on giving development teams the ability to make rapid decisions and more control over infrastructure assets. Observability enhances this by empowering development teams to be more instinctive when defining system faults.
Furthermore, the modern ways of working with Agile, Test Driven Development, and Automation enable development teams to get deep insights into operations that can potentially be prone to failures.
Observability on Cloud platforms
Applications deployed on the cloud provide development teams with a myriad of out-of-the-box system measurements. Developers can gauge and derive quality attributes of a system even before the code goes into production. Cloud services make it easy to collate information like metrics, diagnostics, logs, and traces for analysis, and they are available at the developer's behest. AI-based automated diagnostics along with real-time data give developers deep insight into their system's semantics and characteristics.
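A small habit that makes logs analyzable alongside metrics and traces is emitting them as structured events rather than free text. The sketch below uses only the standard library; the function name, field names, and the "checkout" logger are illustrative choices of mine, not any particular observability product's API.

```python
import json
import logging
import time

def emit_event(logger, name, **fields):
    """Emit one structured log event as JSON so logs can be filtered,
    aggregated, and correlated with traces by downstream tools."""
    record = {"event": name, "ts": time.time(), **fields}
    logger.info(json.dumps(record))
    return record

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")
emit_event(log, "order_placed", order_id="o-123", latency_ms=42)
```

In practice the same idea is usually applied through a tracing or logging SDK, but the principle is identical: consistent, machine-readable fields over prose.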
Conclusion
Observability is an open-ended process of inculcating modern development principles to increase the reliability of complex distributed systems. An Observability mindset helps organizations resolve production issues speedily and reduces the dependency on, and cost of, manual operations. It also helps development teams build dependable systems, giving end customers a seamless user experience.
Thursday, July 15, 2021
The Coevolution of API Centric and Event-based Architecture
When evaluating communication between different systems, there is always an argument of choosing between an API-first approach and an event-first approach. In a distributed ecosystem, it’s not one or the other, but the combination of both these strategies that can solve data transmission between one or more systems.
APIs are the de facto way of interacting for synchronous operations, that is, performing tasks one at a time in sequential order. When designing systems with a specific responsibility, APIs shield the underlying systems from being accessed directly and expose only reusable data, ensuring no duplication of data happens elsewhere. For simple APIs, all that is needed is a readable API structure and systems that follow a request-response pattern. APIs are beneficial for real-time integrations where the requesting system needs information immediately.
However, designing and scaling APIs can get intricate. In a high-transaction microservices architecture, throttling and caching of APIs are not simple, as APIs need to scale on demand. In such integrations, an API gateway also becomes necessary to keep the systems loosely coupled.
The example below depicts a reporting system that creates different reports based on Customer, Order, and Catalog data. Each source system exposes an API. The reporting system fetches the data via the APIs and sends the information to the underlying destination systems.
This architecture looks fine if there are no changes to the information from the source systems. But if the order information has properties that keep getting updated, the reporting system needs to ensure that the changed state gets propagated to subsequent systems.
Handling Cascading Failures
In a chain of systems that interact using APIs, handling errors or failures can become cumbersome. Similarly, if there are multiple dependent API calls between two systems, cascading failures become complex. The complexity increases further when systems need to react to dynamic state changes. This is where event-based architecture can help address some of these issues.
The basis of an event-based strategy is asynchronous communication. An intermediate system decouples the source and destination service interfaces. This strategy is apt for applications that need near-real-time communication and where scalability is a bottleneck.
With an event-based architecture, all the source system has to do is adhere to a contract and, on any state change, trigger a message to the intermediate broker system. One or more destination systems can subscribe to the broker to receive messages on state changes. And since the source system merely triggers an event, API scalability is no longer an issue.
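The publish/subscribe contract above can be sketched with a toy in-memory broker. Real brokers (Kafka, SNS/SQS, Service Bus, etc.) add durability and true asynchrony; here delivery is synchronous for simplicity, and all names and the "order.updated" topic are illustrative.

```python
from collections import defaultdict

class Broker:
    """Toy event broker: source systems publish state changes to a
    topic, and every subscribed destination system receives them."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
received = []
broker.subscribe("order.updated", received.append)  # the reporting system
broker.publish("order.updated", {"order_id": "o-1", "status": "SHIPPED"})
```

Note that the Order system never learns who consumes its events; adding a second destination is just another subscribe call, which is exactly the loose coupling the pattern is after.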
Event-First Architecture
With a pure event-based architecture, the design can get complicated as the number of messages grows. Tracking whether a message has been processed becomes tricky. In this example, every order needs to be tracked for its latest state, and error handling needs to be robust. The entire process is also slower, with high end-to-end latency between systems.
Another way of simplifying the architecture is to combine the API and event designs. The diagram below illustrates the Reporting system interacting with the Order system using both APIs and events. The Order system sends state change notifications to the broker. The Reporting system reads the state change and then triggers an API call to update the order information. The Reporting system makes API calls to the Catalog and Customer systems to fetch the static data. It can further push the created destination messages for consumption via the event broker.
In conclusion, both APIs and events have their pros and cons, and each solves a specific problem. They are not replacements for one another, and an architecture can be made less complex if they co-exist. In a modern microservices architecture, having both of them handy helps ease distributed system interaction complexities.
Monday, July 5, 2021
Driving Digital Transformation using Sustainable Software Development
The term Digital Transformation has, in the last decade or so, become a well-known strategy in various organizations. Businesses across every domain are reviving their traditional businesses to adapt to a more modern digital marketplace.
But in the last few years, sustainable development has become one of the essential mainstays of a successful digital transformation journey. The Covid pandemic has also pushed organizations across different domains to rethink and emphasize environmental factors, climate change, and human well-being to attract consumers.
Embracing a cloud-first model is one of the critical constituents of digital transformation and sustainability journeys. More and more organizations are speeding up their cloud computing journeys and investing in modern SaaS/PaaS services, thus reducing environmental impacts and eliminating major infrastructure expenses. Organizations need to invest wisely in sustainable software-building methodologies to carry out software implementations and cloud migrations seamlessly.
Organizations that strive to be data-driven have a better ability to monitor operations and analyze system behaviors accurately. Real-time analysis of information results in better usage of devices and improves the defined sustainability characteristics. Companies that invest in AI/ML can gain a very substantial sustainability benefit. The science of reliable predictability in the digital realm can bridge gaps in system information interchange, reduce wastage, improve storage and distribution mechanisms, and enable eco-friendly products, efficient delivery methods, reusable infrastructure, etc. All of these directly help subdue environmental consequences.
In conclusion, the principles of building next-generation digital software and sustainable development go hand in hand. In the modern agile world, both journeys share the common goal of not jeopardizing the capability to meet future needs. They apply to systems as much as to human well-being. Adaptable ways of working such as Extreme Programming, Agile, Lean, and Kanban help teams strive for rapid, focused execution. These ways of working improve communication, collaboration, and velocity across distributed systems and teams. All of this indirectly contributes to energy-efficient software development.
Monday, June 14, 2021
My Capgemini Cloud Expert Profile
I joined Capgemini as a Cloud Solutions Architect and am happy to share that my profile is on the experts page of Capgemini Sweden.
Sunday, May 23, 2021
Tips for preparing for the AWS Solutions Architect Professional exam
I recently cleared the AWS Solutions Architect Professional exam with a score of 948/1000 and thoroughly enjoyed preparing for it. I spent a total of six months on preparation, in spite of having scored 1000/1000 in the Associate Architect exam last year.
The exam as such is really tough. It not only evaluates one's knowledge of and experience with AWS; one also has to strategize for reading lengthy questions, time each question, and be prepared to sit continuously for 190 minutes to finish the exam.
Below are some of the learnings and tips I can share so that one can make good use of and benefit from studying for the exam. Preparation can be divided into basically three phases.
Phase 1 Preparation
To start with, the exam requires considerable experience on the platform; I would say at least two years of hands-on experience with core AWS services. I would definitely recommend passing the Associate exam first, as the Professional one is much tougher.
a) Plan on taking a course and stick to it. Select a course with a good rating on popular training sites like Udemy, Coursera, or Udacity. Try out different courses for a few days and choose one where you are comfortable with the language and flow. The basic content of all the highly rated courses is more or less the same. Also, choose a course that has practical samples of the topics you are not comfortable with or have not worked on.
b) Plan a date and book the exam, somewhere between two and three months out. AWS allows you to change the date twice for a booked exam.
c) Create a personal AWS account to practice with, as the exam covers way too many services, which one may not have implemented in day-to-day professional work.
d) The exam is not theoretical and requires vast experience with the services. There are several real-world, scenario-based questions, and there are multiple ways to solve a specific problem. Read through many use cases from different organizations, especially the ones from the latest AWS re:Invent.
Phase 2 Preparation
In this phase, get deeper into the course and practice the points below to structure your study and get to know the services better.
a) AWS adds new services very frequently, and one has to be well versed in every service present, especially the new ones. AWS lists all the latest services in the white paper below.
AWS overview - https://d1.awsstatic.com/whitepapers/aws-overview.pdf
b) Each area has several services that can perform the same task. Try to analyze which services are the best fit when considering the non-functional requirements of cost optimization, scalability, performance, duration, automation, availability, reliability, and security.
Phase 3 Preparation
a) It is very difficult to master each and every service in depth, so it is absolutely OK if one knows just the basics of certain services.
b) During this phase, ensure you are at ease writing practice tests and are able to attempt 45-50 questions in a single sitting.
c) By now your accuracy should have improved, and so should your reading speed. When attempting questions, you should be more confident in eliminating the wrong options.
d) By this time you will have more confidence and a better hold on the exam. If time is not a barrier, and based on your comfort level, consider pushing yourself to postpone the exam by a week or two. This will let you revise multiple times and improve your chances of clearing the exam.
Monday, December 28, 2020
Sunday, November 29, 2020
Extracting running data out of NRC (Nike Run Club) using APIs
For the past few weeks, I have been struggling to see my running kilometers getting updated in my Nike+ app. It could be a bug or a weird feature of the app, and since this was demotivating, I decided to create my own dashboard to calculate the results. Also, for some reason, Nike discontinued viewing and editing activities on the web.
Considering I had about eight years of data, and you never know when apps like this cease to exist or become paid versions, it's always better to persist your data to a known source and, if required, feed it into another application. I also went ahead and uploaded my data to Under Armour's MapMyFitness app, which has much better open API documentation.
It turns out there is a lot of additional information the NRC app captures that is typically not shown in the mobile app, including:
- Total Steps during the workout including detail split between intervals
- Weather Details during the workout
- The amount of time the workout was halted
- Location details including latitude and longitude information that can help you plot your own Map
Coming to the API part, I could not get hold of any official Nike documentation, but I came across an older write-up (https://gist.github.com/niw/858c1ecaef89858893681e46db63db66) that mentioned a few API endpoints for fetching historical activities. I ended up creating a Spring Boot version that fetches the activities and stores them in CSV format in my Google Drive.
The code can be downloaded here -> https://github.com/shailendrabhatt/Nike-run-stats
The code also includes a Postman repository containing a Collection that can be used to fetch one's activities. Just update the {{access_token}} and run the GET requests.
While the blog that had details of the API was good enough, a few additional tips can be helpful:
- Fetching the authorization token can be tricky, and it has an expiry time. You will need a https://www.nike.com/se/en/nrc-app account; then fetch the authorization token from the XHR request headers for requests to api.nike.com. There are a few requests hitting this URL, and the token can be fetched from any of them.
- The API described in the link shows details of after_time; one can also fetch before_time information:
/sport/v3/me/activities/after_time/${time}
/sport/v3/me/activities/before_time/${time}
- Pagination can be easily achieved using before_id and after_id. These ids come in different formats, ranging from GUIDs to single-digit numbers, which can be confusing.
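Putting the pieces above together, a minimal fetch of one page of activities might look like the sketch below. It uses only the standard library and the after_time endpoint and bearer token described above; the function names are my own, and since the API is undocumented, treat the whole thing as a best-effort assumption.

```python
import json
import urllib.request

API_ROOT = "https://api.nike.com"

def activities_url(after_time=0):
    """URL for one page of NRC activities recorded after `after_time`
    (epoch milliseconds); 0 starts from the oldest activity."""
    return f"{API_ROOT}/sport/v3/me/activities/after_time/{after_time}"

def fetch_activities(access_token, after_time=0):
    """Fetch one page of activities using the bearer token captured
    from the nike.com request headers (it expires, so refresh it)."""
    req = urllib.request.Request(
        activities_url(after_time),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

From here, walking the before_id/after_id pagination fields in the response lets you iterate through the full history and persist it wherever you like.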
Wednesday, November 11, 2020
Thursday, October 1, 2020
Building Composite Architectures
Gartner, in a recent report, highlighted "Composite Architecture" (or "Composable Architecture") as one of the five emerging trends in modern innovation and technology for the next 10 years. Since then, I have been coming across this topic in various technical forums.
"Composability" as such is not a new topic; we have long used it in object-oriented programming, favoring composition over inheritance. In software architecture terms, it is defined as the combination of software systems to produce a new system. In other words, it is directly connected to the goals of agility and reusability, and the whole crux of it is to respond to the changing business spectrum.
Domain-Driven Design to build Composable Application
If we take a step back and look at a simple application created using domain-driven design with an onion architecture, the orchestration layer plays a pivotal role in making the application composable by interacting directly with the repository or service layers.
The orchestration layer can be a WebHooks API, a data importer, an API controller, a messaging service, or a simple REST or SOAP endpoint.
This kind of atomic structure, done properly, can result in a system that is open to changing its external integrations seamlessly while meeting the changing business landscape.
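The layering above can be sketched in a few lines: the domain core stays behind an interface, and the orchestration layer composes it for whatever edge the integration needs. All class and method names below are illustrative, a minimal sketch of the onion idea rather than a full DDD implementation.

```python
class OrderRepository:
    """Inner ring of the onion: persistence behind a small interface."""

    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def get(self, order_id):
        return self._orders[order_id]

class OrderOrchestrator:
    """Orchestration layer: composes the repository behind whatever
    edge (REST controller, webhook, message handler) is plugged in."""

    def __init__(self, repository):
        self.repository = repository

    def place_order(self, order_id, items):
        order = {"id": order_id, "items": items, "status": "PLACED"}
        self.repository.save(order_id, order)
        return order

orchestrator = OrderOrchestrator(OrderRepository())
order = orchestrator.place_order("o-1", ["book"])
```

Because the orchestrator depends only on the repository's interface, swapping the edge from a REST controller to a message consumer, or the store from in-memory to a database, touches neither the domain core nor the composition logic.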
Atomic Architecture
If we take the earlier example and apply it in a larger context, the visualization below depicts a circular relationship between the different layers in a typical business domain.
Elements of Composable Architecture
The basic building blocks of a composable system are still founded on containerization, microservices, cloud, APIs, headless architecture, etc.
Conclusion
With a composable mindset, organizations can uplift isolated business operating models and move towards a more practical, loosely coupled technology landscape where systems can be plugged in and out flexibly.
This kind of model perfectly fits with organizations adopting agile ways of working or building modern omnichannel integrations with different types of native Cloud-based SaaS platforms.
This model can also be applied to bridge gaps across an entire ecosystem of legacy and modern applications, including areas of unified experience, operations, transformations, infrastructure, and external and internal system integrations.
Thursday, July 30, 2020
Friday, June 19, 2020
10 Fundamental Principles to evaluate before breaking the monolith platform
Below are some of the key principles that need to be evaluated when one starts to break out services from a monolithic platform.
Thursday, June 11, 2020
The myth of Sharing State when breaking large applications
However, legacy applications are seldom portable, and sharing data or state is, to all intents and purposes, convoluted. Below are some of the typical concerns that need to be evaluated when building or maintaining applications with a single shared state.