Sunday, December 11, 2022

Fine Tuning a WAF to avoid False Positives - Part 2

This week has been action-packed, with some high-volume DDoS attacks on one of our web applications, and we have spent a lot of time appreciating the importance of having a WAF in front of all our client-facing public domains. In today's cloud architectures, Web Application Firewalls (WAFs) are a crucial part of any organization's security posture. They protect web applications from DoS and DDoS attacks as well as application-layer attacks such as SQL injection, cross-site scripting (XSS), and other malicious activity. However, WAFs need to be fine-tuned regularly to ensure they provide maximum protection without causing false positives. In this article, we will discuss some best practices we followed to fine-tune a WAF and prevent multiple attacks on our application.

1. Understand the web application. The first step in fine-tuning a WAF is to understand the web application it is protecting. This includes identifying the application's components, such as the web server, application server, and database. Additionally, it is essential to identify the web application's behavior, including the type of traffic it receives, the HTTP methods it uses, and the expected user behavior. Understanding the web application will help to identify which rules should be enabled or disabled in the WAF.

2. Configure WAF logging. WAF logging is a critical component of fine-tuning. It allows security teams to analyze WAF events and understand which rules generate false positives. WAF logs should be enabled for all rules, and log data should be retained for an extended period, such as 90 days or more.
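Once logging is in place, the first question is usually which rules are doing most of the blocking. A minimal sketch of that analysis is below; the log entries and field names are illustrative stand-ins, loosely modeled on the shape of WAF diagnostic logs, not the exact schema of any product.

```python
from collections import Counter

# Hypothetical WAF log entries; field names are illustrative, not the
# exact schema emitted by any particular WAF product.
waf_logs = [
    {"ruleName": "SQLI-942100", "action": "Block", "clientIp": "203.0.113.7"},
    {"ruleName": "XSS-941100",  "action": "Block", "clientIp": "203.0.113.9"},
    {"ruleName": "SQLI-942100", "action": "Block", "clientIp": "198.51.100.4"},
    {"ruleName": "SQLI-942100", "action": "Log",   "clientIp": "198.51.100.4"},
]

def blocks_per_rule(logs):
    """Count blocked requests per rule to surface false-positive candidates."""
    return Counter(e["ruleName"] for e in logs if e["action"] == "Block")

print(blocks_per_rule(waf_logs).most_common())
```

Rules that block far more often than their peers are a natural starting point for the false-positive review in the later steps.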

3. Start with a default configuration. WAFs come with a default configuration that provides a good starting point for fine-tuning. Start with the default configuration and enable or disable rules as necessary. Additionally, some WAFs have pre-built templates for specific applications, such as WordPress or Drupal. These templates can be an excellent starting point for fine-tuning.

4. Test the WAF. Once the WAF is configured, it is essential to test it thoroughly. The WAF should be tested with a variety of traffic, including both legitimate and malicious traffic. This will help identify any false positives or false negatives generated by the WAF.
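One way to keep such a test run honest is to record, for each test case, the verdict you expected against the verdict the WAF actually produced, then tally the mismatches. The sketch below assumes a hand-built list of cases; the names and verdicts are made up for illustration.

```python
# Each test case pairs a request label with the expected verdict and the
# verdict the WAF actually produced during the test run (illustrative data).
test_cases = [
    {"name": "normal search",     "expected": "allow", "observed": "allow"},
    {"name": "sqli probe",        "expected": "block", "observed": "block"},
    {"name": "rich-text comment", "expected": "allow", "observed": "block"},  # false positive
    {"name": "xss probe",         "expected": "block", "observed": "allow"},  # false negative
]

def summarize(cases):
    """Split mismatches into false positives (legit traffic blocked)
    and false negatives (attacks let through)."""
    fp = [c["name"] for c in cases if c["expected"] == "allow" and c["observed"] == "block"]
    fn = [c["name"] for c in cases if c["expected"] == "block" and c["observed"] == "allow"]
    return {"false_positives": fp, "false_negatives": fn}

print(summarize(test_cases))
```

The false-positive list feeds directly into the tuning step, while false negatives point at rules that need to be enabled or strengthened.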

5. Tune the WAF. Based on the results of testing, the WAF should be fine-tuned. This may include enabling or disabling rules, adjusting rule thresholds, or creating custom rules to address specific attack vectors. Additionally, WAFs may have machine learning or AI capabilities that can help to reduce false positives.
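A common tuning move is an exclusion: suppressing one managed rule for one specific request field that keeps tripping it. Below is a simplified sketch of that idea; the dictionary keys echo the spirit of managed-rule exclusions in products like Azure WAF, but the schema here is invented for illustration.

```python
# Simplified, illustrative exclusion: suppress rule 941100 (an XSS rule id
# in the OWASP CRS numbering) for a single request argument that kept
# generating false positives.
exclusion = {
    "matchVariable": "RequestArgNames",
    "selector": "comment_body",        # hypothetical field name
    "excludedRuleIds": ["941100"],
}

def rule_applies(rule_id, arg_name, exclusions):
    """Return False if an exclusion suppresses this rule for this argument."""
    for ex in exclusions:
        if rule_id in ex["excludedRuleIds"] and ex["selector"] == arg_name:
            return False
    return True

print(rule_applies("941100", "comment_body", [exclusion]))  # excluded
print(rule_applies("941100", "q", [exclusion]))             # still enforced
```

Scoping the exclusion to a single field keeps the rule active everywhere else, which is far safer than disabling the rule globally.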

6. Monitor the WAF. After fine-tuning, the WAF should be monitored regularly to ensure it is providing maximum protection without causing false positives. WAF logs should be analyzed regularly, and any anomalies should be investigated immediately.
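"Investigate anomalies" can be made concrete with even a crude baseline: flag any rule whose latest daily block count sits far above its historical average. This is a toy sketch with invented numbers, not a production anomaly detector.

```python
from statistics import mean, pstdev

# Daily block counts per WAF rule id; the last entry is "today" (toy data).
daily_blocks = {
    "942100": [3, 4, 2, 3],
    "941100": [2, 3, 2, 250],   # sudden spike worth investigating
}

def rules_to_investigate(history, z=3.0):
    """Flag rules whose latest count exceeds baseline mean + z * stddev."""
    flagged = []
    for rule, counts in history.items():
        baseline = mean(counts[:-1])
        spread = pstdev(counts[:-1]) or 1.0  # avoid a zero threshold
        if counts[-1] > baseline + z * spread:
            flagged.append(rule)
    return flagged

print(rules_to_investigate(daily_blocks))
```

A spike like this can mean an attack in progress, but just as often it means a recent deployment changed request shapes and a rule has started misfiring, which loops back into the tuning step.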

In conclusion, fine-tuning a WAF is a critical component of any organization's security posture. It requires a thorough understanding of the web application, careful configuration, and extensive testing, followed by ongoing monitoring and re-tuning. By following these best practices, organizations can ensure their WAFs provide maximum protection against web application attacks without generating false positives.


Thursday, December 8, 2022

Demystifying the hidden costs after moving to the Cloud

The web application at a client was hosted using a combination of services on Azure. The architecture was quite simple and used the following services: Front Door, API Management, App Service, SQL Database, Service Bus, Redis Cache, and Azure Functions. As the application matured, we realized how little thought we had given at the start of the project to the hidden costs of the cloud.

Azure Front Door was used for efficient load balancing, WAF, Content Delivery Network, and DNS. However, the global routing of requests through Microsoft's network incurred data transfer and routing costs. What started as a seamless solution for enhanced user experience turned into a realization that global accessibility came at a price. Also, the complexity of configuring backend pools, health probes, and routing rules can lead to unintended expenses if they are not optimized.

App Service had a modest cost to begin with on low-scale Premium servers. But as the application garnered more hits, the number of users grew and, subsequently, so did the resources consumed. The need for auto-scaling to handle the increased traffic, along with custom domains, brought unforeseen expenses, turning the initially reasonable hosting costs into a growing concern. So, keep an eye on the server configuration and the frequency of scaling events.

Azure SQL Database brought both power and complexity. Scaling to meet performance demands led to increased DTU consumption and storage requirements. The once manageable monthly expenses now reflected the intricate dance between database size, transaction units, and backup storage. Not scaling down backup retention also incurred costs, especially for databases with high transaction rates. Inefficient queries and suboptimal indexing can further increase resource consumption, driving up DTU usage and costs.
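That "intricate dance" is easier to reason about as a back-of-the-envelope sum of the three cost drivers. The rates below are hypothetical placeholders, not Azure's actual prices; substitute figures from the current pricing sheet for your region.

```python
def monthly_sql_cost(dtus, storage_gb, backup_gb,
                     dtu_rate=1.50, storage_rate=0.12, backup_rate=0.10):
    """Rough monthly estimate: compute + storage + backup storage.
    All rates are made-up placeholders, not real Azure pricing."""
    return dtus * dtu_rate + storage_gb * storage_rate + backup_gb * backup_rate

# e.g. 100 DTUs, 250 GB of data, 500 GB of retained backups
print(round(monthly_sql_cost(100, 250, 500), 2))
```

Even a crude model like this makes the trade-offs visible: halving backup retention or fixing a query that forces a DTU tier bump shows up immediately in the estimate.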

Azure Service Bus, the messenger between the application's distributed components, began with reasonable costs for message ingress and egress. Yet, as the communication patterns grew, the charges for additional features like transactions and dead-lettering added expenses to the budget. Also, long message TTLs can lead to increased storage costs. 

Azure Cache for Redis, used for in-memory data storage, initially provided high-performance benefits. However, as the application scaled to accommodate larger datasets, the costs associated with caching capacity and data transfer began to rise, challenging the notion that performance came without a price. Eviction of data from the cache may also result in increased data transfer costs, especially if the cache is frequently repopulated from the data source. Also, fine-tuning cache expiration policies is crucial to avoid unnecessary storage costs for stale or rarely accessed data.
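The expiration-policy idea can be illustrated without a Redis server at all: each entry carries an expiry timestamp and is dropped once it passes, so stale data stops occupying paid capacity. This is a toy in-memory model, not the redis client API.

```python
# Toy TTL cache to illustrate expiration policy; timestamps are passed in
# explicitly so the behavior is deterministic (no wall clock involved).
class TTLCache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl, now):
        """Store a value that expires `ttl` seconds after `now`."""
        self._store[key] = (value, now + ttl)

    def get(self, key, now):
        """Return the value, or None (and evict) if it has expired."""
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if now >= expires_at:
            del self._store[key]  # expired: stop paying for stale data
            return None
        return value

cache = TTLCache()
cache.set("profile:42", {"name": "Ada"}, ttl=60, now=0)
print(cache.get("profile:42", now=30))   # still fresh
print(cache.get("profile:42", now=61))   # expired, evicted
```

In real Redis the same effect comes from setting per-key TTLs (e.g. the EXPIRE command); the tuning question is picking TTLs long enough to keep hit rates up but short enough that rarely accessed data doesn't force a larger, pricier cache tier.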

Lastly, Azure Functions, with its pay-as-you-go model, was expected to be the cheapest of all the services, since it allowed us to invoke functions only as needed. But the cumulative charges for executions, execution time, and additional resources reminded me that serverless, too, had its hidden costs. Including unnecessary dependencies in a function can inflate execution times and costs.

Demystifying the expenses after moving to Azure required a keen understanding of its pricing models and a strategic approach to balancing innovation with fiscal responsibility.
