In this article we’ll walk through the tactics and procedures observed by Akamai researchers and security teams as they work through the operational response to a mass exploitation campaign. Mass exploitation is a term used to describe attackers launching an attack campaign at large scale, using CDN services or mass-mailing services to reach more victims in less time.
The idea starts with an attacker who has a well-formed payload or exploit, which they need to use at scale in a relatively short period of time.
A common example is a zero-day vulnerability that is posted to an underground forum and purchased by an attacker. Once that code starts being used in the wild, there is a finite window before research and incident response teams notice the attack and try to mitigate it. Vendor timelines for issuing a patch also vary. This limited and unpredictable window in which the exploit remains “exploitable” is what drives mass exploitation.
There are three main categories of problems related to identifying mass exploitation attempts:
Category One (False Positives): Identifying mass exploitation among the noise in your environment can be daunting. For instance, if you are trying to filter on Remote File Inclusion (RFI) attempts and executed payloads, there is a tremendous amount of noise from security vendors running application scans against your environment looking for risks. A very common example is an RFI test from WhiteHat Security: if the system being scanned is vulnerable, it will download and execute the test script, a process that will show up in monitoring and get added to the list of things you need to fix for that system.
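When triaging, it helps to separate authorized scanner tests from genuine RFI attempts. Below is a minimal sketch in Python; the log filename, the allowlist of scanner payload hosts, and the RFI pattern are all illustrative assumptions, not details from the original article.

```python
import re

# Hypothetical allowlist of hosts serving authorized scanner test
# payloads (e.g., a vendor's RFI probe); adjust for your environment.
SCANNER_PAYLOAD_HOSTS = {"rfi-test.scanner-vendor.example"}

# Toy RFI signature: a query-string parameter whose value is a URL.
RFI_PATTERN = re.compile(r"[?&][\w\-]+=(?:https?)://([^/&\s]+)", re.IGNORECASE)

def triage(log_line):
    """Classify a log line as scanner noise, an RFI suspect, or neither."""
    match = RFI_PATTERN.search(log_line)
    if not match:
        return None
    host = match.group(1).lower()
    return "scanner-noise" if host in SCANNER_PAYLOAD_HOSTS else "rfi-suspect"

with open("access.log") as f:
    for line in f:
        if triage(line) == "rfi-suspect":
            print(line.rstrip())
```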
Category Two: Further complicating the identification of attack campaigns is purposeful obfuscation by attackers, who use techniques such as fast flux domains, or reference content hosted on sites like Google or GitHub, which usually doesn’t raise much attention unless you know what you’re looking for.
Example 1: raw.githubusercontent.com
By utilizing web shells that are hosted on GitHub, the attacker doesn’t need to manage multiple copies of the files, as there is little chance that site/URL reputation services will blacklist the GitHub domain. Here the attacker is calling a PHP shell and using GitHub’s CDN to benefit from caching and accelerated delivery of that shell to the target system.
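As a rough illustration of how a defender might surface this pattern, the sketch below flags request URIs whose query parameters point at payloads hosted on trusted domains. The domain list and the example URI are illustrative assumptions.

```python
from urllib.parse import urlparse, parse_qsl

# Trusted-looking hosting domains that attackers abuse for payloads;
# an illustrative list, not exhaustive.
ABUSED_HOSTING = {"raw.githubusercontent.com", "gist.githubusercontent.com"}

def urls_in_query(request_uri):
    """Yield any URL-valued query parameters in a request URI."""
    for _, value in parse_qsl(urlparse(request_uri).query):
        if value.startswith(("http://", "https://")):
            yield value

def flag_trusted_host_payloads(request_uri):
    """Return payload URLs hosted on abused trusted domains."""
    return [u for u in urls_in_query(request_uri)
            if urlparse(u).hostname in ABUSED_HOSTING]

# Example: a request attempting to include a PHP shell from GitHub.
uri = "/index.php?page=https://raw.githubusercontent.com/x/y/master/shell.php"
print(flag_trusted_host_payloads(uri))
```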
Example 2: googleusercontent.com
Attackers have taken advantage of other types of cloud services to distribute their attack traffic as well, one of which is Google Apps Script. By creating scripts in Google Sheets that contain scripted commands, an attacker can make requests look like they’re coming from a “trusted” location such as Google. Without context, this may go unnoticed, or at least make it harder to distinguish good requests from bad ones, given where the requests originate. It goes unnoticed, that is, until you start to see the volumes of traffic that can be generated from this infrastructure, or until the requests take advantage of an application vulnerability to perform mass exploitation.
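One contextual signal worth checking, sketched below: requests issued by Apps Script’s UrlFetchApp reportedly carry a user agent containing “Google-Apps-Script”; treat that marker and the volume threshold as assumptions to verify against your own logs.

```python
from collections import Counter

# Assumption to verify against your own logs: Google Apps Script's
# UrlFetchApp sends a user agent containing "Google-Apps-Script".
UA_MARKER = "Google-Apps-Script"
VOLUME_THRESHOLD = 100  # requests per path; tune for your traffic

def apps_script_hotspots(records):
    """records: iterable of (path, user_agent) pairs from access logs.
    Returns paths receiving unusually many Apps Script-originated requests."""
    counts = Counter(path for path, ua in records if UA_MARKER in ua)
    return {path: n for path, n in counts.items() if n >= VOLUME_THRESHOLD}

# Example with toy records:
records = [("/login", "Mozilla/5.0 (compatible; Google-Apps-Script)")] * 150
print(apps_script_hotspots(records))  # {'/login': 150}
```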
Category Three: The third category covers “timely” threat intelligence challenges and large-scale visibility challenges. If blue teams can only see threat information on traffic that reaches their own environment, it’s difficult to spot larger-scale trends or patterns efficiently.
We frequently see malicious actors upload their malware/exploit code to more than one third-party website or CDN in an attempt to reuse it, usually for redundancy purposes or in case security devices add the URL to a blacklist.
When tracking samples, we identified that they were being accessed from more than one originating IP/ASN, which tells us that attackers are proxying through multiple IPs/ASNs when attempting large-scale RFI attacks. This technique usually indicates the use of fast flux domains.
Fast flux is a DNS technique used by botnets to hide phishing and malware delivery sites behind an ever-changing network of compromised hosts acting as proxies.
Fast flux usage is becoming common for two reasons (a simple detection heuristic is sketched after this list):
- Compromised hosts storing exploit code are being discovered and taken offline quickly, before the attackers can make full use of them.
- Because attackers are launching exploits at many systems as part of large-scale exploitation attempts, they require payloads to be highly available. For this reason, we are seeing an increase in the use of low-cost CDNs to deploy/host attack exploit data.
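As a rough heuristic, not taken from the article: fast flux domains tend to return many distinct A records with very short TTLs across repeated lookups. A minimal sketch using the dnspython library follows; the lookup count and thresholds are illustrative assumptions.

```python
import time
import dns.resolver  # pip install dnspython

def looks_like_fast_flux(domain, lookups=5, interval=2.0,
                         ttl_max=300, ip_min=10):
    """Repeatedly resolve a domain; many distinct A records combined
    with short TTLs suggest fast flux. Thresholds are illustrative
    and should be tuned against known-good domains."""
    ips, ttls = set(), []
    for _ in range(lookups):
        answer = dns.resolver.resolve(domain, "A")
        ttls.append(answer.rrset.ttl)
        ips.update(rr.address for rr in answer)
        time.sleep(interval)
    return len(ips) >= ip_min and max(ttls) <= ttl_max

print(looks_like_fast_flux("example.com"))  # expected: False
```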
Several defensive practices help against these campaigns:
Interstitial inspection: Use reverse proxies or some form of interstitial device/process to inspect and validate requests before sending traffic to back-end application servers.
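A minimal sketch of the idea, not the article’s implementation: a WSGI middleware that rejects requests whose query strings embed remote URLs before they ever reach the back-end application.

```python
from urllib.parse import parse_qsl

class RFIInspectionMiddleware:
    """Toy interstitial check: block requests whose query-string values
    embed a remote URL (a common RFI signature) before the request
    reaches the back-end application."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        for _, value in parse_qsl(environ.get("QUERY_STRING", "")):
            if value.startswith(("http://", "https://", "ftp://")):
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Request blocked by inspection layer.\n"]
        return self.app(environ, start_response)

# Usage: wrap any WSGI app, e.g. app = RFIInspectionMiddleware(app)
```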
Large-scale data / threat correlation: The more data you have about an inbound request (application workflow-wise) or about an IP address via threat intelligence sources, the better. The main thing to focus on here is having as much information as possible, which usually involves threat feeds and data-sharing relationships with partners.
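A minimal sketch of the enrichment step, assuming a hypothetical feed file with one malicious IP per line:

```python
import ipaddress

def load_feed(path):
    """Load a hypothetical threat feed: one IP address per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def enrich(client_ip, feeds):
    """Return the names of every feed that lists this inbound IP."""
    ipaddress.ip_address(client_ip)  # validate; raises on garbage input
    return [name for name, ips in feeds.items() if client_ip in ips]

feeds = {"partner-feed": load_feed("partner_feed.txt")}
print(enrich("203.0.113.7", feeds))  # e.g. ['partner-feed'] if listed
```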
Layered defenses: It’s highly valuable to have both perimeter and forward-facing defenses as well as defenses inside or behind the firewall, watching for malicious activity inside the corporate network and for requests going out to the internet.