Do you want your Agent Tesla in the 300 MB or 8 kB package?, (Fri, Dec 31st)

This post was originally published on this site

Since today is the last day of 2021, I decided to take a closer look at malware that got caught by my malspam trap over the course of the year.

Of the several hundred unique samples that were collected, probably the most interesting one turned out to be a fairly sizable .NET executable caught in October, which "weighs in" at 300 MB and which had a 26/64 detection rating on VT at the time of writing[1].

As you may see from the following image, the sample was obfuscated using multiple different tools.

The size of the file was, however, so significant not because of any complex obfuscation, but because the executable had a sizable null byte overlay (i.e., a large number of null bytes added after the end of the file).

Without the overlay, the file would have been less than 700 kB in size.

Using null bytes to inflate a malicious executable past the point where anti-malware tools will analyze it is nothing new[2]; AV tools on endpoints, as well as on e-mail and web gateways, have a set limit on the maximum file size they can scan. Yet, as the fairly low VT score of this sample shows, the technique can still be quite effective, especially when one considers that further analysis revealed the executable to be nothing more than a sample of the Agent Tesla infostealer[3]…
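To illustrate the technique, the following minimal Python sketch (hypothetical helper names, not a tool from this diary) measures and strips such a trailing null-byte overlay. Note that rstrip would also remove any legitimate trailing nulls, so this is a rough heuristic rather than a precise carver.

```python
def overlay_size(data: bytes) -> int:
    """Return the number of trailing null bytes appended to the file."""
    stripped = data.rstrip(b"\x00")
    return len(data) - len(stripped)

def strip_overlay(data: bytes) -> bytes:
    """Drop the trailing null-byte overlay before further analysis."""
    return data.rstrip(b"\x00")
```

Run against the 300 MB sample described above, a helper like this would reduce it back to its "real" sub-700 kB size before handing it to tools that enforce file-size limits.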

Two other files I found in my “2021 collection” deserve a short mention in connection with the large executable described above.

They were, again, .NET PE files, and, again, were part of an Agent Tesla infection chain[4].

In almost every other respect, however, they were the complete opposite of the sample mentioned before: each was only about 8 kB in size, no obfuscation was used to protect them, and their VT detection ratings were noticeably higher (37/68[5] and 53/68[6], respectively). I mention them together because, although there are slight differences in their code, as the following images show, both were very similar, and one can clearly see that they were only supposed to download and run additional code from the internet.

As the preceding text mentions, although all three samples were used in infection chains of the same malware, the ability of anti-malware tools to detect them varies widely. And since the malware family in question is a rather common one, and its samples are often spread by untargeted malspam messages, it goes to show (if anyone still needs that pointed out to them at the end of 2021) that depending only on traditional anti-malware tools for (not just) endpoint protection is simply not enough at this point in time…

Nevertheless, since I would like to end this post on a slightly more positive note, let me conclude by wishing you – on behalf of all of us at the SANS Internet Storm Center – a Happy New Year 2022, with as few malware (and other) infections and serious incidents as possible.


Jan Kopriva
Alef Nula

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Agent Tesla Updates SMTP Data Exfiltration Technique, (Thu, Dec 30th)


Agent Tesla is a Windows-based keylogger and RAT that commonly uses SMTP or FTP to exfiltrate stolen data.  This malware has been around since 2014, and SMTP is its most common method for data exfiltration.

Earlier today, I reviewed post-infection traffic from a recent sample of Agent Tesla.  This activity revealed a change in Agent Tesla's SMTP data exfiltration technique.

Through November 2021, Agent Tesla samples sent their emails to compromised or possibly fraudulent email accounts on mail servers established through hosting providers.  Since December 2021, Agent Tesla instead uses those compromised email accounts to send the stolen data to Gmail addresses.

Shown above:  Flow chart of recent change in Agent Tesla SMTP data exfiltration.

SMTP exfiltration before the change

Agent Tesla is typically distributed through email, and the following sample was likely an attachment from malicious spam (malspam) sent on 2021-11-28.

SHA256 hash: bdae21952c4e6367fe534a9e5a3b3eb30d045dcb93129c6ce0435c3f0c8d90d3

  • File size: 523,919 bytes
  • File name: Purchase Order Pending
  • Earliest Contents Modification: 2021-11-28 19:55:50 UTC

SHA256 hash: aa4ea361f1f084b054f9871a9845c89d68cde259070ea286babeadc604d6658c

  • File size: 557,056 bytes
  • File name: Purchase Order Pending Quantity.exe
  • Any.Run analysis from 2021-11-29: link

The packet capture (pcap) from Any.Run's analysis shows a typical SMTP data exfiltration path.  The infected Windows host sent a message with stolen data to an email address, and that address was on a mail server established through a hosting provider.

Shown above:  Traffic from the Any.Run analysis filtered in Wireshark.

Shown above:  TCP stream of SMTP traffic shows stolen data sent to the compromised email account.

Example after the change

The following Agent Tesla sample was likely an attachment from malspam sent on 2021-12-01.

SHA256 hash: 6f85cd9df964afc56bd2aed7af28cbc965ea56e49ce84d4f4e91f4478d378f94

  • File size: 375,734 bytes
  • File name: unknown
  • Earliest Contents Modification: 2021-12-01 05:02:06 UTC

SHA256 hash: ff34c1fd26b699489cb814f93a2801ea4c32cc33faf30f32165b23425b0780c7

  • File size: 537,397 bytes
  • File name: Partial Shipment.exe
  • Any.Run analysis from 2021-12-01: link

The pcap from Any.Run's analysis of this malware sample shows a new data exfiltration path.  The infected Windows host sent a message with stolen data to a Gmail address using a compromised email account from a mail server established through a hosting provider.

Shown above:  Traffic from the Any.Run analysis filtered in Wireshark.

Shown above:  TCP stream shows stolen data sent to Gmail address using the compromised email account.

Final words

The basic tactics of Agent Tesla have not changed.  However, post-infection traffic from samples since 2021-12-01 indicates that Agent Tesla samples using SMTP for data exfiltration now send to Gmail addresses.  Based on the names of these addresses, I believe they are fraudulent Gmail accounts, or they were specifically established to receive data from Agent Tesla.
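The change described above can also be hunted for in your own traffic. The following Python sketch (illustrative names, not from this diary) scans a decoded SMTP command stream for recipients at gmail.com, which would be unusual for legitimate relaying through a small hosted mail server:

```python
import re

# Match the recipient address in an SMTP "RCPT TO:<...>" command.
RCPT_RE = re.compile(r"RCPT TO:\s*<([^>]+)>", re.IGNORECASE)

def gmail_recipients(smtp_stream: str) -> list[str]:
    """Return recipient addresses in a decoded SMTP command stream
    that belong to gmail.com (possible Agent Tesla exfiltration)."""
    rcpts = RCPT_RE.findall(smtp_stream)
    return [r for r in rcpts if r.lower().endswith("@gmail.com")]
```

In practice you would feed this the reassembled TCP streams (e.g., exported from Wireshark), and treat any hit from a server-to-server SMTP session as worth a closer look.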

Brad Duncan
brad [at]

Log4j 2 Security Vulnerabilities Update Guide, (Wed, Dec 29th)

As Apache Log4j 2 security vulnerabilities continue to surface, and are quickly addressed by the Log4j Security Team, keeping track of specific CVEs, severity, and affected versions can be a bit of a task on the fly. As such, herein is a quick table version of update guidance. The current supported version of Log4j2 for Java 8 is 2.17.1 as of this writing.

Note: Log4j 1 is end of life and no longer supported. Java 7 and 6 are end of life and no longer supported. Please upgrade to current, supported versions accordingly.

Log4j 2 Security Vulnerabilities Update Guide Reference:
Severity | CVE fixed      | Description                                                                                                           | CVSS | Java 8 | Java 7 | Java 6 | Versions affected
Moderate | CVE-2021-44832 | Apache Log4j2 vulnerable to RCE via JDBC Appender when attacker controls configuration                                | 6.6  | 2.17.1 | 2.12.4 | 2.3.2  | 2.0-alpha7 to 2.17.0, excluding 2.3.2 and 2.12.4
Moderate | CVE-2021-45105 | Apache Log4j2 does not always protect from infinite recursion in lookup evaluation                                    | 5.9  | 2.17.0 | 2.12.3 | 2.3.1  | All versions from 2.0-beta9 to 2.16.0, excluding 2.12.3
Critical | CVE-2021-45046 | Apache Log4j2 Thread Context Lookup Pattern vulnerable to remote code execution in certain non-default configurations | 9.0  | 2.16.0 | 2.12.2 | -      | All versions from 2.0-beta9 to 2.15.0, excluding 2.12.2
Critical | CVE-2021-44228 | Apache Log4j2 JNDI features do not protect against attacker controlled LDAP and other JNDI related endpoints          | 10.0 | 2.15.0 | -      | -      | All versions from 2.0-beta9 to 2.14.1
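For scripting upgrade checks, the table above can be encoded as a small lookup. The helper below is an illustrative sketch, not an official tool; None means no fixed release exists for that Java generation.

```python
# First fixed Log4j 2 release per CVE and Java generation,
# transcribed from the update-guide table above.
FIXES = {
    "CVE-2021-44832": {"8": "2.17.1", "7": "2.12.4", "6": "2.3.2"},
    "CVE-2021-45105": {"8": "2.17.0", "7": "2.12.3", "6": "2.3.1"},
    "CVE-2021-45046": {"8": "2.16.0", "7": "2.12.2", "6": None},
    "CVE-2021-44228": {"8": "2.15.0", "7": None,     "6": None},
}

def fixed_version(cve: str, java: str):
    """Return the first fixed Log4j 2 release for the given CVE and
    Java generation ("8", "7", or "6"); None if no fix was released."""
    return FIXES[cve][java]
```

Since Java 7 and 6 are end of life, the Java 8 column (currently 2.17.1) is the one that matters for supported deployments.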

LotL Classifier tests for shells, exfil, and miners, (Tue, Dec 28th)

A supervised learning approach to Living off the Land attack classification from Adobe SI


Happy Holidays, readers!
First, a relevant quote from a preeminent author in the realm of intelligence analysis, Richards J. Heuer, Jr.:
“When inferring the causes of behavior, too much weight is accorded to personal qualities and dispositions of the actor and not enough to situational determinants of the actor’s behavior.”
Please consider Mr. Heuer’s Psychology of Intelligence Analysis required reading.
The security intelligence team from Adobe’s Security Coordination Center (SCC) has sought to apply deeper analysis of situational determinants per adversary behaviors as they pertain to living-off-the-land (LotL) techniques. As the authors indicate, “bad actors have been using legitimate software and functions to target systems and carry out malicious attacks for many years… LotL is still one of the preferred approaches even for highly skilled attackers.” While we, as security analysts, are party to adversary and actor group qualities and dispositions, the use of LotL techniques (situational determinants) poses challenges for us. Given that classic LotL detection is rife with false positives, Adobe’s SI team used open-source and representative incident data to develop a dynamic, high-confidence LotL Classifier, and open-sourced it. Please treat their Medium post, Living off the Land (LotL) Classifier Open-Source Project, and the related GitHub repo as mandatory reading before proceeding here. I’ll not repeat what they’ve quite capably already documented.

Their LotL Classifier includes two components: feature extraction and an ML classifier algorithm. Again, read their post on these components, but I do want to focus a bit on their use of the random forest classifier for this project. As LotL Classifier is written in Python, the project utilizes the sklearn.ensemble.RandomForestClassifier class from scikit-learn, a library of simple and efficient tools for predictive data analysis and machine learning in Python. Caie, Dimitriou and Arandjelovic (2021), in their contribution to the book Artificial Intelligence and Deep Learning in Pathology, state that random forest classifiers are part of the broad umbrella of ensemble-based learning methods, are simple to implement, fast in operation, and successful in a variety of domains. The random forest approach constructs many “simple” decision trees during the training stage and takes the majority vote (mode) across them in the classification stage (Caie et al., 2021). Of particular benefit, this voting strategy corrects for the undesirable tendency of decision trees to overfit training data (Caie et al., 2021). Cotaie, Boros, Vikramjeet, and Malik, the Living off the Land Classifier authors, found that, though they tried a variety of different classifiers during testing, their best results in terms of accuracy and speed were achieved with the RandomForest classifier. This was driven in large part by their use of test data representative of “real world” situations during the training stage.
The authors include two datasets with LotL Classifier: bash_huge.known (Linux) and cmd_huge.known (Windows). Each contains hundreds of commands known to represent LotL attacks, as provided via the likes of GTFOBins, a curated list of Unix binaries that can be used to bypass local security restrictions in misconfigured systems, and Living Off The Land Binaries, Scripts and Libraries, a.k.a. LOLBAS. Referring again to LotL Classifier’s two components, feature extraction and a classifier algorithm, vectorization ensues as a step in feature extraction, where distinct features are pulled from the text in these datasets for the model to train on. The classifier utilizes the training data to better understand how given input variables relate to the class.
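To make the feature-extraction step concrete, here is an illustrative Python sketch (not the project's actual code; the keyword set is assumed) of the kinds of features pulled from a command line: the binary, filesystem paths, network indicators, and suspicious keywords.

```python
import re

# Assumed keyword list for illustration; the real project derives its
# vocabulary from the GTFOBins/LOLBAS-based training datasets.
KEYWORDS = {"curl", "wget", "nc", "bash", "chmod", "base64"}

def extract_features(cmd: str) -> dict:
    """Pull simple LotL-style features from a single command line."""
    tokens = cmd.split()
    return {
        "binary": tokens[0] if tokens else "",
        "paths": [t for t in tokens if t.startswith("/")],
        "ips": re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", cmd),
        "keywords": sorted(KEYWORDS.intersection(t.strip("-") for t in tokens)),
    }
```

A feature dictionary like this would then be vectorized and fed to the random forest; the real classifier adds pattern, network, and similarity features on top.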
Your choices regarding implementation and utilization of LotL Classifier vary. You can clone the repo, establish a Python virtual environment, and run it easily as such. You may find, as I do, that Jupyter notebooks are the most productive means by which to utilize models of this nature. The project includes an examples directory to enable your immediate use and testing via the 01_quick_start.ipynb Jupyter notebook. This notebook includes the same two example scripts as provided in the project readme, one for Linux commands and one for Windows. Note the important definable parameters in these scripts, PlatformType.LINUX and PlatformType.WINDOWS, as you begin your own use and create your own scripts. Cross-pollination won’t yield fruit. 😉 After testing the Quick Start notebook, I created a notebook that extends the project quick start examples to three distinct scenarios (categories) derived from GTFOBins, LOLBAS, and real-world analysis. These include Linux reverse shells, Linux file uploads (a.k.a. exfil), and Windows coin miners. Figure 1 represents the categorized scenarios.

Jupyter notebook

Figure 1: Jupyter notebook: LotL reverse shells, file uploads, coin miners

You can experiment with this notebook for yourselves, via GitHub.

Let’s explore the results. Findings are scored GOOD, NEUTRAL, or BAD; I intentionally selected LotL strings that would be scored as BAD. Per the authors’ use of feature extraction coupled with secondary validation courtesy of the BiLingual Evaluation Understudy (BLEU) metric, consider the results of our Linux reverse shell examples as seen in Figure 2.

Reverse shells

Figure 2: LotL reverse shells results

The authors employ labels for the same class of features, including binaries, keywords, patterns, paths, networks, and similarity, followed by a BLEU score to express the functional similarity of two command lines that share common patterns in the parameters. As a result, the GTFOBins examples for Netcat and bash are scored BAD, along with Gimp, and are further bestowed with LOOKS_LIKE_KNOWN_LOL. Indeed it does. Note the model triggering on numerous keywords, commands, and paths.
Please, again, read the authors’ related work for a RTFM view into their logic and approach. My Linux file upload examples followed suit with the reverse shells, so no need to rinse and repeat here, but you can on your own with the notebook or the individual Python scripts.
The coin miner samples worked as intended and were again scored as one would hope, BAD with a dash of LOOKS_LIKE_KNOWN_LOL for good measure, as seen in Figure 3.

Coin miners

Figure 3: LotL coin miner results

Again, we note keyword and command matches, and the full treatment for the regsvr32 example.
As the authors say: “If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.”

The Adobe SI crew’s work has piqued my interest such that I intend to explore their One Stop Anomaly Shop (OSAS) next. From their research paper, A Principled Approach to Enriching Security-related Data for Running Processes through Statistics and Natural Language Processing, Boros et al. (2021) propose a different approach to cloud-based anomaly detection in running processes:

  1. enrich the data with labels
  2. automatically analyze the labels to establish their importance and assign weights
  3. score events and instances using these weights

I’ll certainly have opportunities to test this framework at scale; if findings are positive, I’ll share results here.
This has been an innovative offering to explore; I’ve thoroughly enjoyed the effort and highly recommend your pursuit of same.
Cheers…until next time.

Boros T., Cotaie A., Vikramjeet K., Malik V., Park L. and Pachis N. (2021). A Principled Approach to Enriching Security-related Data for Running Processes through Statistics and Natural Language Processing. In Proceedings of the 6th International Conference on Internet of Things, Big Data and Security – Volume 1: IoTBDS, ISBN 978-989-758-504-3, pages 140-147. DOI: 10.5220/0010381401400147

Caie P., Dimitriou N., Arandjelovic O., Chapter 8 – Precision medicine in digital pathology via image analysis and machine learning, Editor(s): Stanley Cohen, Artificial Intelligence and Deep Learning in Pathology, Elsevier, 2021, Pages 149-173, ISBN 9780323675383.

Attackers are abusing MSBuild to evade defenses and implant Cobalt Strike beacons, (Mon, Dec 27th)

Microsoft Build Engine is the platform for building applications on Windows, mainly used in environments where Visual Studio is not installed. Also known as MSBuild, the engine provides an XML schema for a project file that controls how the build platform processes and builds software [1]. The project file element named ‘Tasks’ designates independent executable components to run during the project build. Tasks are meant to perform build operations but are being abused by attackers to run malicious code under the MSBuild disguise. The technique is mapped in MITRE ATT&CK as “Trusted Developer Utilities Proxy Execution” (T1127.001).

This is the second malicious campaign abusing MSBuild that I’ve come across in less than a week. Usually, it starts with RDP access using a valid account, spreads over the network via remote Windows Services (SCM), and pushes a Cobalt Strike beacon to corporate hosts by abusing the MSBuild task feature, as described in today’s diary.

Abusing MSBuild

To make it easier to understand how attackers are abusing MSBuild, look at Figure 1. It shows the XML file for a simple project (HelloWorld.csproj) with a task named HelloWorld prepared to compile and execute the custom C# code during project building.

Figure 1 – MSBuild HelloWorld Project

When building the project using MSBuild, as seen in Figure 2, the task HelloWorld will be executed, which in turn will call the ‘Execute()’ and ‘Start()’ methods, which will finally print the “Hello World” message on the console. The ‘Execute()’ method comes from the interface ‘ITask’ implemented by the ‘HelloWorld’ class [2].

Figure 2 – Building HelloWorld with MSBuild

Now, let’s look at the malicious MSBuild project file in Figure 3. Using the same principle, when called by MSBuild, it will compile and execute custom C# code that decodes and executes the Cobalt Strike beacon on the victim’s machine.

Figure 3 – Malicious MSBuild project file

Figure 4  – MSBuild executing Cobalt Strike beacon

In Figure 5, it’s possible to see the beacon connected to the C2 server (

Figure 5 – Cobalt Strike beacon connected to the C2 server

Analyzing the Cobalt Strike beacon

To analyze the code executed by the malicious MSBuild project, it is first necessary to decrypt the variable ‘buff’ (refer to Figure 3). The variable is decoded during MSBuild execution by the "for" loop marked in Figure 6, which runs an XOR function between each byte of the ‘buff’ and ‘key_code’ arrays. By the end, the ‘buff’ byte array will store the decrypted malicious content. The rest of the code allocates memory and executes the payload using Marshal.GetDelegateForFunctionPointer.

Figure 6 – Decrypting malicious payload

I implemented the same decryption function in Python to decrypt the code, as seen in Figure 7. Before the decryption loop, the script reads the contents of ‘buff’ and ‘key_code’ from the MSBuild project file and copies them to the corresponding variables in the Python script. The script code is available here

Figure 7 – Decrypt function in Python
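The decryption loop boils down to a few lines. This hedged sketch assumes a repeating XOR key, mirroring the description above; the variable names follow the article, but the values used in the test are hypothetical, not the sample's real key.

```python
def xor_decrypt(buff: bytes, key_code: bytes) -> bytes:
    """XOR each byte of 'buff' with the repeating 'key_code',
    mirroring the loop in the malicious MSBuild project file."""
    return bytes(b ^ key_code[i % len(key_code)] for i, b in enumerate(buff))
```

Because XOR is symmetric, the same function encrypts and decrypts, which is also why the sample's decoded payload can be recovered directly from the project file once 'buff' and 'key_code' are extracted.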

To profile the resulting binary, I started by looking up its hash on VirusTotal, which returned no matches. Continuing the low-hanging-fruit approach, I ran ‘strings’ and found interesting strings that I have already seen in other Cobalt Strike beacons, like “could not create remote thread in %d: %d” and “IEX (New-Object Net.Webclient). DownloadString(''); %s”.

To confirm, I used the CobaltStrikeParser tool from SentinelOne [3]. This tool parses the Cobalt Strike beacon and extracts its configuration, as seen in Figure 8.

Figure 8 – Extracting Cobalt Strike beacon's configuration

The configuration says that the C2 server ( will be contacted via HTTPS (encrypted traffic) on port TCP/8888. It also shows the endpoint /dpixel for GET requests and /submit.php for POST requests, and that the spawn process is rundll32.exe; this is the process used to run commands requested by the C2 on the victim’s machine.

One usual next step when I have a C2 up and running is to analyze the traffic and try to discover more about the campaign. To do so in this case, I checked whether the private key for the Cobalt Strike beacon is known using Didier Stevens’ 1768 tool [4]. This project not only parses the Cobalt Strike beacon’s configuration but also indicates whether the corresponding private key was found by Didier on malicious Cobalt Strike servers on the internet. Read more about this project at [5].

After running the 1768 tool, I found that the private key for the analyzed Cobalt Strike beacon is known, as seen in Figure 9.

Figure 9 – Cobalt Strike's beacon Known Private Key

But, before using the private key to decrypt the Cobalt Strike traffic, remember that the communication with the C2 is SSL encrypted. Figure 10 shows a sample of the captured traffic.

Figure 10 – Encrypted C2 traffic

One way to decrypt the SSL traffic is to use a man-in-the-middle approach. To this end, I used the mitmproxy project. The communication scheme when using a tool like this is to have the client (the Cobalt Strike beacon) talk to the SSL proxy, and the SSL proxy talk to the C2 server. In the middle, at the proxy, we have the traffic unencrypted.

Below is the command I used to run mitmproxy:

$ SSLKEYLOGFILE="~/.mitmproxy/sslkeylogfile.txt" mitmproxy -k --mode transparent

The SSLKEYLOGFILE variable tells mitmproxy to store SSL/TLS master keys in the file sslkeylogfile.txt. This file can be used to decrypt the traffic in external tools like Wireshark. The ‘-k’ flag tells mitmproxy not to verify upstream server SSL/TLS certificates, and transparent mode is used when the client does not know about, or is not configured to use, a proxy.

Before running mitmproxy, remember to enable ‘IP forwarding’ and create the necessary NAT rules to redirect the SSL traffic from the client machine (where the Cobalt Strike beacon is running) to the mitmproxy port. In my case, the commands were:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A PREROUTING -i <interface> -p tcp --dport 8888 -j REDIRECT --to-port 8080

Another important thing is installing the mitmproxy certificate on the Windows machine running the beacon. The default certificate is located in the ~/.mitmproxy/mitmproxy-ca-cert.cer file. Copy it to Windows and install the certificate in the Trusted Root Certification Authorities and Trusted Publishers stores, as seen in Figure 11.

Figure 11 – MITMPROXY certificate installation

Once the prerequisites were met, running mitmproxy allowed the unencrypted SSL traffic to be collected, as seen in Figure 12 and Figure 13.

Figure 12 – C2 traffic collection

Figure 13 – C2 traffic details

Remember from the Cobalt Strike beacon configuration (Figure 8) that the HTTP GET metadata is stored in the ‘Cookie’ header, encoded with base64. So, the content marked in red in Figure 13 is the content to be decrypted using the Cobalt Strike private key.

To decrypt the content, I used another excellent tool from Didier Stevens called cs-decrypt-metadata [6], as seen in Figure 14.

Figure 14 – Decrypting metadata

Finally, if you want to see the traffic unencrypted in Wireshark, you can use the SSL/TLS keys stored in “~/.mitmproxy/sslkeylogfile.txt”. Import the file using the Wireshark menu Edit->Preferences->Protocols->TLS->(Pre)-Master-Secret log file name. After that, it’s possible to see the traffic unencrypted, as in Figure 15.

Figure 15 – Decrypted TLS traffic


MSBuild is on the list of applications signed by Microsoft that can allow the execution of other code. According to Microsoft's recommendations [7], these applications should be blocked by the Windows Defender Application Control (WDAC) policy.

There is a note for MSBuild.exe, though: if the system is used in a development context to build managed applications, the recommendation is to allow msbuild.exe in the code integrity policies.



Renato Marinho
Morphus Labs| LinkedIn|Twitter

Quicktip: TShark's Options -e and -T, (Sun, Dec 26th)

When you use TShark's option -e to display a field value, you need to include option -Tfields.

You don't actually have to memorize this; TShark will help you when you use option -e without option -T:

And by doing so, you learn about other output formats, like JSON:


Didier Stevens
Senior handler
Microsoft MVP

TShark Tip: Extracting Field Values From Capture Files, (Sat, Dec 25th)

TShark is Wireshark's console program: it's like Wireshark, but with a command-line interface instead of a GUI.

TShark can process a capture file: use option -r to read and process the capture file, like this:

Option -e can be used to display the value of a field, like ip.src. You have to combine option -e with option -Tfields to display field values:

When I want to produce a list of User Agent Strings found in a capture file, I select field http.user_agent. Piping the output of TShark into my tool allows me to produce a list of unique values (with counter):
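The counting tool itself isn't shown here, but a rough stand-in (illustrative, not the actual tool used above) is a few lines of Python over the field values TShark emits, one per line:

```python
from collections import Counter

def count_unique(lines) -> list[tuple[str, int]]:
    """Count unique field values, e.g. the http.user_agent lines
    produced by: tshark -r capture.pcap -Tfields -e http.user_agent
    Empty lines (packets without the field) are skipped."""
    counts = Counter(line.strip() for line in lines if line.strip())
    return counts.most_common()
```

Piping the TShark output into a script built around this gives the same "unique values with counts" view, sorted with the most frequent User Agent Strings first.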

Didier Stevens
Senior handler
Microsoft MVP

Example of how attackers are trying to push crypto miners via Log4Shell, (Fri, Dec 24th)

While following Log4Shell exploit attempts hitting our honeypots, I came across another campaign trying to push a crypto miner onto victims’ machines. The previous campaign I analyzed used a simple post-exploitation PowerShell script to download and launch the coin miner xmrig. The new one uses a .NET launcher to download, decrypt, and execute the binaries.

The diagram in Figure 1 outlines the steps that would occur in an application vulnerable to Log4Shell if successfully targeted by this campaign. Follow the numbers in blue and their descriptions below.

Figure 1 – Log4Shell exploitation to implant a Monero crypto miner

1. The attacker sends JNDI strings in an HTTP request to the web application, both in the URL as ‘GET’ parameters and in the ‘user-agent’ header. This is a random, untargeted attack strategy that tries to find applications anywhere on the internet that log the URL or the user-agent using a vulnerable Log4j library. The request is shown in Figure 2.

Figure 2 – Attacker request

2. The application sends the data to be logged, including the JNDI address, to a vulnerable Log4j library (version < 2.17.0);

3. The Log4j library, in vulnerable versions, allows variables to be retrieved via JNDI (Java Naming and Directory Interface). To do so, the library looks for “${jndi…” addresses in the data to be logged and, if it finds one, it will look up the object using LDAP/S, RMI, or DNS. The lookup will return a reference to a remote class, which will then be downloaded and executed. In this campaign, the class is named “App.class” and it is hosted at the address hxxp://185[.]18.52.155:8080/App.class;

4. The malicious payload “App.class” will, depending on the operating system it is running on, download and run additional payloads, as seen in Figure 3. If on Linux, it will download and run the xmrig binary. If on Windows, it will use a launcher tool written in .NET to download and execute xmrig, as described in the next steps.

Figure 3 – Malicious class downloading and executing additional artifacts

5. The Windows version of ‘Launcher’ is written in .NET. Analyzing its code, it is possible to see functions to download and decrypt additional artifacts, as shown in Figure 4 and Figure 5, respectively.

Figure 4 – Function to download additional artifacts

Figure 5 – Decrypt function

To avoid having to reverse the decrypt function and use it on the downloaded payloads, I debugged the code using the dnSpy tool up to the point where the files were decrypted, and then saved them.

6. The downloaded and decrypted files are described below:

  1. LC_KEY_L: the key to decrypt the payloads;
  2. LC_DATA1_L: an xmrig binary;
  3. LC_DATA2_L: the binary WinRing0.sys, part of xmrig;
  4. LC_DLL_L: a DLL loaded by Launcher.exe. It is written in .NET and uses a protector called Confuser, which makes it harder to simply decompile and analyze the code. By statically analyzing the code, it was possible to see calls to NetFwTypeLib, a library to manage Windows Firewall.

7. The next step is the execution of the xmrig binary, which is the Monero crypto miner itself. The config.json used by the attacker will connect the victim’s machine to a mining pool at


So far, we’ve been able to identify a few different types of attacks against our honeypots trying to take advantage of Log4Shell. Crypto miner implants are prevalent, but we’ve seen attempts to deploy the Dridex banking trojan and Meterpreter on Linux boxes as well.
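Requests like the one shown in Figure 2 can be hunted for in web and application logs. The sketch below is illustrative and limited to known obfuscation variants; a regex can never catch every evasion, so treat it as a triage aid, not a complete detection:

```python
import re

# Match "${ ... j ... n ... d ... i ... :" with up to 30 filler chars
# between letters, to also catch nested-lookup obfuscations such as
# ${${lower:j}ndi:ldap://...}.
JNDI_RE = re.compile(r"\$\{.{0,30}j.{0,30}n.{0,30}d.{0,30}i.{0,30}:", re.IGNORECASE)

def looks_like_log4shell(line: str) -> bool:
    """Flag a log line that appears to contain a JNDI lookup string."""
    return bool(JNDI_RE.search(line))
```

Run over URL and User-Agent fields in access logs, hits like these should be exceedingly rare in legitimate traffic and worth investigating.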

See below the IOCs for the campaign covered in this diary.

IOCs (MD5/SHA256 hashes)







data1 (xmrig)


data2 (WinRing0.sys)


LAUNCH_L (xmrig)




Renato Marinho
Morphus Labs| LinkedIn|Twitter

Defending Cloud IMDS Against log4shell (and more), (Thu, Dec 23rd)

Thanks to Mick Douglas for contributing these notes on defending against log4shell exploits targeting cloud Internal Meta Data Services (IMDS)

A new twist on using the Log4j vulnerability was discovered that could allow attackers access to the IMDS. This allows them to gather detailed and current information about your cloud infrastructure. Typically, attackers cannot easily do this because cloud providers have protections that prevent this type of access. With Log4j, if your system is vulnerable, attackers can make requests that appear to come from trusted infrastructure within your cloud.

Before we get into defenses, it's imperative that you understand patching is your path of least resistance. Please patch if at all possible. If you cannot patch, I personally suggest the IMMA (Isolate, Minimize, Monitor, Active Defense) model. Here's how IMMA would work in this instance.

Isolate:

  • Set up a WAF for all internet-facing systems. There are detections for the Log4j attack in all the main providers.
  • If you can, disable the IMDS web access. Typically this is a bad idea since it breaks almost all automation, but if you don't need it, turn it off for now.

Minimize:

  • Prevent attacker callbacks by implementing strong egress controls. Only allow the protocols out that are needed.
  • Review network trust boundaries. Consider reviewing all access into your systems, and pay careful attention to third parties which have special network linkages. Don't allow your MSSP or cloud VAR to be the source of your attack!
  • Limit access for tokens used by infrastructure. At a minimum, review what your defaults are.

Monitor:

  • Review logs on your applications. You are looking for JNDI requests; these are exceedingly rare.
  • Look for unplanned configuration changes.
  • Look for network enumeration. Only auditors and attackers ask to see network setups; everything else knows what to talk to!
  • Look for token use from outside your cloud provider's ASN. Tokens should only be coming from the provider that issued them. The only exception will be admins who use tokens from CLI shells. Those you can make an exception list for.
  • Detect new and unexpected connections. Cloud systems have a very predictable interaction mapping. When this changes, you should be able to see it.

Active Defense:

  • Token lifetimes: If you create tokens with non-standard lifetimes, then when someone creates a new token with the maximum lifetime (typically 6 hours), that's highly noticeable.
  • Honey tokens: create fake tokens and see if anyone tries to use them anywhere.
  • Set up honey environment variables: IMDS supports almost any data. Why not give attackers something juicy and see if they try to use it anywhere? One of my favorite use cases for this is "AutomationAdminAccessToken=068f073e79fd3b69be0b0a1338537aff" See if they try using it!

Mick Douglas,
Principal SANS Instructor
IANS Faculty
Founder and Managing Partner, InfoSec Innovations

log4shell and cloud provider internal meta data services (IMDS), (Thu, Dec 23rd)

Are you running an application in a cloud environment like Amazon's EC2? Cloud providers usually offer a nifty way to manage credentials: Internal Meta Data Services (IMDS). For AWS, the IMDS listens on a link-local address in 169.254/16, so only code running on the host itself can reach it, and your code can use this service to retrieve credentials. Or, well, any code running on the host can.

As Mick Douglas pointed out in his tweet here:

Kat Traxler's blog post, referred to by Mick, has more details.

In Amazon's case, two versions of the IMDS are offered. Exploitation is trivial for IMDSv1: all it takes is an SSRF vulnerability, and as JNDI is all about sending requests to arbitrary hosts, this is covered. Amazon hardened its IMDS implementation in version 2 (which uses a PUT request to retrieve an access token first). But with log4j you get arbitrary code execution, so sending a PUT and the following GET with the proper authentication header is pretty straightforward.
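The IMDSv2 flow described above (a PUT for a session token, then a GET carrying it) can be sketched as follows. This only builds the requests, to show the mechanics; do not run lookups like this against infrastructure you do not own.

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl: int = 21600) -> urllib.request.Request:
    """Step 1: PUT request that asks IMDSv2 for a session token
    valid for 'ttl' seconds (21600 = the 6-hour maximum)."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )

def metadata_request(token: str,
                     path: str = "meta-data/iam/security-credentials/") -> urllib.request.Request:
    """Step 2: GET request for a metadata path, authenticated with
    the session token from step 1."""
    return urllib.request.Request(
        f"{IMDS}/latest/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

The hardening is visible in the shape of the exchange: a plain SSRF can usually only issue a GET, but code execution (as with log4j) can issue the PUT and replay the token header just as easily.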

This issue isn't just a log4j issue. It is common to all remote code execution vulnerabilities in cloud environments (and, again, to some SSRF issues).

Can you do something other than patching all your log4j instances?

  • You may disable the IMDS service if not needed (careful!)
  • Verify all your cloud permissions (right… patching is likely easier)
  • most importantly: rotate credentials on systems you suspect are compromised (= all unpatched systems at this point)

Let us know if there is better advice to monitor access to the IMDS.


Johannes B. Ullrich, Ph.D., Dean of Research
