Whether you are new to
Cloud or already have workloads deployed, it never hurts to review
Security – an area that evolves constantly as the threats around us change.
The recent
Global Pandemic has expanded the Threat Landscape, giving "bad actors" more opportunities to attack systems. With staff working from home, the attack surface has grown. Isolated decision making also comes into play: the detachment of "working from home" can lead to risky decisions that the average cyber-threat actor is just waiting for!
The following sections outline Security areas to focus on while organizations are at their most vulnerable.
Let’s begin by looking
at who is responsible for what in Cloud Infrastructure:
Cloud Provider and
Customer Security Responsibilities
Cloud Providers such as AWS and Microsoft are responsible for Facilities, Physical Hardware Security, Virtualization and Network Infrastructure.
Customer Security Responsibilities include:
- OS and Patching
- Firewall Rules
- Network zones
- Application Stacks
- Applications and Code
One word of advice is to ensure that your systems are architected in accordance with the Cloud Provider's framework – I have seen many instances where infrastructure was set up incorrectly in the first place and facilitated security breaches.
Vulnerability Scanning and Compliance
Environments should be monitored for vulnerabilities
on an ongoing basis. A vulnerability scan detects and classifies system
weaknesses and predicts the effectiveness of countermeasures.
It’s key to know how to interpret the results
and prepare for the Vulnerability Scan so you can get the most out of the reports.
The demand for Vulnerability Scanning is
growing, and reports are often requested as part of compliance when dealing with
larger clients (suppliers are often asked to confirm whether they perform regular scans of their environments).
Compliance scans can also assess adherence to a
specific compliance framework, e.g. PCI DSS.
Network and Firewall Security
Network security is a key area while staff are working from home – to prevent unauthorized
access to your systems and computers being compromised. Consider the following:
- Firewall – blocks or allows network traffic based on type, IP and port
- Packet Filtering / Inspection – stops traffic based on the content of packets
- Deep Packet Inspection firewalls – examine packet data and can detect
application-layer attacks
- Network zone design e.g. private, public, IP whitelisting
Perimeter Security can be strengthened by introducing a next-generation firewall
with sophisticated rule-based access controls and reporting, as well as by strictly
requiring client access via encrypted tunnels (VPN).
Geo-based protection and content filtering – this can be provided for
Content Delivery Networks (CDN), where specific (or all) content can be restricted
(included or excluded) based on a list of countries.
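A minimal sketch of that country-based include/exclude logic (the mode names are my own labels for illustration, not a specific CDN's API):

```python
def geo_allowed(country_code: str, mode: str, countries: set[str]) -> bool:
    """'include' mode: only listed countries may access the content;
    'exclude' mode: listed countries are blocked, everyone else passes."""
    if mode == "include":
        return country_code in countries
    return country_code not in countries
```

For example, restricting content to `{"GB", "IE"}` in include mode admits only requests geolocated to those two countries; the same set in exclude mode would block them and admit everyone else.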
Threat detection and mitigation (IDS / IPS) – services and tools are
available that will detect potential security threats in (near) real time.
These threats, once identified, can be reported upon in a timely manner and potentially mitigated automatically.
DNS Routing – in order to protect your online presence from DNS attacks,
it is critical that you have a trusted DNS service that is reliable, highly
available and responsive.
Firewalls (OWASP, Custom Rules, Geo Filters)
A Web Application Firewall (WAF) filters, monitors and blocks HTTP traffic
to and from a Web Application. Consider this an additional layer of defense in
your overall strategy. Many of the larger Cloud providers offer rule-based
protection through services such as AWS WAF and Azure WAF.
Geo-location blocking can be implemented as part of the WAF design, and WAFs
typically operate in one of two modes – Protection (blocking matched requests) and Detection (logging matches only).
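The two modes can be illustrated with a toy rule engine. The rules and pattern names below are hypothetical simplifications for illustration, not actual AWS or Azure managed rules:

```python
import re

# Hypothetical rule set: each entry pairs a rule name with a regex
# applied to the request URI
RULES = {
    "sql_injection": re.compile(r"('|--|\bUNION\b)", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect(request_uri: str, mode: str = "detection") -> tuple[bool, list[str]]:
    """Return (blocked, matched_rules).

    In detection mode matches are only recorded, never blocked;
    in protection mode any match blocks the request."""
    matched = [name for name, pattern in RULES.items()
               if pattern.search(request_uri)]
    blocked = bool(matched) and mode == "protection"
    return blocked, matched
```

In Detection mode the match list can be reviewed in logs before a rule is switched to Protection, which reduces the risk of a new rule blocking legitimate traffic.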
OS Vulnerability protection
Operating System Hardening – disabling and removing any non-essential services is the first level of defense when it comes to OS-level
protection. Standard OS images for Windows and Linux should be replaced with custom hardened images wherever possible.
Regular Automated Patching – it is imperative that you develop a
strategy for maintaining an effective patching process. This strategy should consider testing of patches with the associated applications that need to be running, as well as the balance between the requirement to apply the very latest patches and the risk of causing an application failure.
Security Updates – security patches often need to be considered ahead of
other patches, since this is often driven by vulnerability scanning systems
attempting to reduce the security risk level.
Emergency patching – zero-day attacks often result in the release of an
emergency patch for the particular OS and version. It is important to have a process in place that can handle these events, to reduce the time during which a system could potentially be exposed to the attack.
Patching consistency for applications within the Systems Development Life cycle (SDLC) should be part of your regular patching process. What this means is that a group of patches can be applied in turn to servers associated with each stage of the SDLC. In this way any issue arising from a patch can be caught early in the cycle (e.g. at the development stage).
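That staged approach can be sketched as a rollout loop that halts at the first verification failure. The stage names mirror the SDLC described above; the server names, patch identifier and `apply_and_verify` callback are placeholders for whatever patch tooling you use:

```python
STAGES = ["development", "test", "staging", "production"]

def staged_rollout(patch: str, servers_by_stage: dict, apply_and_verify) -> str:
    """Apply `patch` stage by stage through the SDLC.

    Halts at the first stage where post-patch verification fails,
    so a bad patch is caught early (e.g. in development) rather
    than in production."""
    for stage in STAGES:
        for server in servers_by_stage.get(stage, []):
            if not apply_and_verify(server, patch):
                return f"halted at {stage}"
    return "completed"
```

The key design point is the ordering: production servers are only touched once every earlier stage has applied the same patch group and passed verification.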
Security Information and Event Management (SIEM)
The combination of Security Information Management
and Security Event Management provides real-time analysis of security alerts and
events generated by applications and network hardware. Core capabilities include:
- Centralized Event Log Collection and Reporting
- Anomaly Scoring
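As a toy illustration of anomaly scoring over a centralized event log, the snippet below flags source IPs with an unusual volume of failed logins. The event schema and threshold are assumptions for the sketch, not any specific SIEM's format:

```python
from collections import Counter

def anomaly_scores(events: list[dict], threshold: int = 5) -> dict:
    """Score each source IP by its volume of failed logins in the
    centralized event log; sources at or above the threshold are
    flagged, with higher scores indicating more suspicious volume."""
    failures = Counter(e["src"] for e in events if e["type"] == "auth_failure")
    return {src: round(n / threshold, 2)
            for src, n in failures.items() if n >= threshold}
```

Production SIEMs score across many correlated signals (time of day, geography, privilege level), but the principle is the same: aggregate centralized events and surface statistical outliers for review.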
Threat Protection Layers
The idea behind this is a network security approach using multiple
levels of security measures. Because each defense component has a backup, the combined layers have a better chance of stopping intruders than any single solution.
Example Layers include:
- Patch management incl. Virtual Patching
- Anti-virus software
- Anti-spam filters
- Digital certificates
- Privacy controls
- Data encryption
- Vulnerability assessment
- Web protection
Virtual patching is a term first used by Intrusion Prevention System (IPS) vendors which has now evolved towards Web Application Firewalls (WAF), often referred to as External Patching/ Just-in-time Patching.
The value in Virtual Patching lies where organizations cannot simply change application source code in order to mitigate a newly identified web vulnerability.
Effective Virtual Patching platforms add value by:
– Facilitating installation in a limited number of locations (e.g. WAFs) instead of on all hosts
– Reducing the risk introduced by vendor-supplied patches, including code dependencies or potential library conflicts
– Allowing a mission-critical application to remain online
– Often lowering costs by reducing the need for emergency patching and thereby maintaining normal patching cycles
Reasons for not being able to apply an application source code patch include lack of patch availability, the extended time required to install a patch, the cost of installing a patch, legacy applications, unsupported deployments, or outsourced applications.
OS Vulnerability Assessments and Compliance
A vulnerability assessment helps identify, classify, and prioritize vulnerabilities in network infrastructure, computer systems, and applications.
The first step is to identify any vulnerabilities, and this is typically done by means of automated processes; often involving scanners that can detect potential flaws in networks, hosts and applications.
Once vulnerabilities have been identified they need to be evaluated against various risk ratings and scores such as Common Vulnerability Scoring System (CVSS).
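For reference, CVSS v3.x maps the numeric base score to a qualitative severity rating, which is often what drives prioritization in practice:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative
    severity rating as defined by the CVSS v3 specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Many organizations set remediation SLAs per bucket, e.g. patch Critical findings within days and Medium findings within a normal patching cycle.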
After evaluation the vulnerabilities can either be remediated by means of an applied patch for example, or mitigated, often by a virtual patching system. If the cost of either remediation or mitigation is too great when compared to the risk of the vulnerability, then an option can be to do nothing and thereby accept the risk.
It is vital to perform the vulnerability assessments continuously to lower the risk of threats to the organization’s IT systems. Efficient and timely reporting that can be customized to various levels of stakeholder (e.g., DevOps engineer, CTO or partner / investor) is a cornerstone to this and can improve long-term resilience.
Compliance for PCI DSS: All external IPs and domains exposed in the cardholder data environment (CDE) are required to be scanned by a PCI Approved Scanning Vendor (ASV) at least quarterly.
Compliance for SOC 2: Governed by the AICPA; Vulnerability Assessment (VA) scans can support certain prescriptive controls that may be required for compliance.
Application Code Vulnerabilities
Applications, especially those that are cloud native, are a gateway to servers and networks. These applications can present an ideal attack vector for malicious actors who continue to refine their methods to penetrate software. For this reason it is critical that security is an ongoing activity that’s deeply embedded in the development process.
Application security best practices help uncover vulnerabilities before attackers can use them to breach networks and data.
For 2021 OWASP identified the top 10 Web vulnerabilities as:
1. Broken Access Control
2. Cryptographic Failures
3. Injection
4. Insecure Design
5. Security Misconfiguration
6. Vulnerable and Outdated Components
7. Identification and Authentication Failures
8. Software and Data Integrity Failures
9. Security Logging and Monitoring Failures
10. Server-Side Request Forgery
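As one concrete example, OWASP's injection category is typically mitigated by parameterized queries rather than string concatenation. A minimal sketch using Python's built-in sqlite3 (the table and data are made up for illustration):

```python
import sqlite3

# Toy database for the example
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Parameterized query: user input is bound as data, never
    # interpolated into the SQL text, which neutralises classic
    # injection payloads such as "' OR 1=1 --"
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The same principle applies with any database driver that supports bound parameters: user input travels as data, never as SQL.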
PCI-DSS 3.0 directs software organizations to comply with secure guidelines for developing applications and requires that custom application code can be adequately scanned for potential vulnerabilities. Because PCI security requirements apply both to software in development and software in production, enterprises may need solutions to test many public-facing web applications that are already running.
Enhanced WAF Protection
The WAF services provided by AWS and Azure can easily be configured with a set of rules to mitigate common vulnerabilities (e.g. the OWASP Top 10). However, in many cases, and particularly when protecting legacy applications, these rules do not provide full coverage.
For those situations the WAF service allows the provision of custom rules that can be tailored to the exact application vulnerability identified. As with the common rules, these custom rules can be quickly turned off or on and can also run in monitoring mode where a potential attack is only logged and not blocked. This is useful when there is a risk of the WAF rule actually causing an application issue or outage.
Custom rules are very useful for known situations that lie outside the common vulnerabilities. Where protection needs to be enhanced beyond this (for example, where attacks are more sophisticated and rules need to be formulated in near real time, often with the help of Machine Learning), the WAF services offered by the top security vendors are usually the best, and in some cases the only, option.
These services either integrate with the cloud provider's WAF service or work independently, as instance/VM-based appliances or as a separate cloud that filters the traffic before it reaches the AWS or Azure environment that needs protection.
Machine Learning Vulnerabilities
Malicious compromise of data or models is intended to cause degradation or failure of ML systems.
These ML systems can be vulnerable to direct data corruption attacks, including data poisoning and data evasion attacks.
Data poisoning occurs when an adversary injects or manipulates data in the training dataset leading to incorrect predictions or misclassifications. This can be mitigated by means of various data protection strategies.
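One simple data-protection strategy is to screen training data for gross statistical outliers before fitting. The z-score filter below is a minimal sketch of that idea, not a complete defense against a careful adversary who poisons within normal-looking ranges:

```python
from statistics import mean, stdev

def filter_outliers(values: list[float], z: float = 3.0) -> list[float]:
    """Drop training points more than `z` standard deviations from
    the mean - a coarse screen for grossly poisoned values."""
    m, s = mean(values), stdev(values)
    if s == 0:
        return values  # all points identical, nothing to filter
    return [v for v in values if abs(v - m) <= z * s]
```

In practice this would be applied per feature (and per label), combined with provenance checks on where training data originated.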
Data evasion is where an adversary sends imperceptible perturbations of input data to an ML endpoint to cause an intentional misclassification or skewed prediction. Since data evasion attacks often require detailed knowledge of the deployed model, any mitigations would need to be aimed at protecting this information.
The ML models themselves are also vulnerable to adversarial attack. Algorithms can be manipulated or reprogrammed, or sensitive information about the model can be leaked, enabling development of more sophisticated attacks. These models should be made resilient by using techniques to identify anomalous behavior and prevent manipulation outside of normal boundaries.
Additionally, it is important to establish if a human-in-the-loop is needed to validate the system design, operation, and/or output. In these instances, it is necessary to recognize that humans may also perform malicious or inadvertent actions that compromise the system.
In summary, ensuring that you have covered these areas in your Cloud Infrastructure Security Plan should provide you with a robust defense – but bear in mind that threats continue to evolve! Plans should be reviewed on a regular basis, and for some environments third-party, arms-length security reviews are recommended.