YARA v4.3.0-rc1 --print-xor-key, (Sat, Dec 31st)

This post was originally published on this site

YARA release candidate 1 for version 4.3.0 brings a new option for XOR strings: --print-xor-key

This option prints out the XOR key with which an XOR string matched (0x41 in this example).

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SPF and DMARC use on GOV domains in different ccTLDs, (Fri, Dec 30th)


Although e-mail is one of the cornerstones of modern interpersonal communication, its underlying Simple Mail Transfer Protocol (SMTP) is far from what we might call "robust" or "secure"[1]. By itself, the protocol lacks any security features for ensuring (among other things) the integrity or authenticity of transferred data or the identity of its sender, and creating a "spoofed" e-mail is therefore quite easy. This poses a significant issue, especially when one considers that most ordinary people don't tend to question the validity of official-looking messages that appear to have been sent from a respectable/well-known domain.

Opening the Door for a Knock: Creating a Custom DShield Listener, (Thu, Dec 29th)


There are a variety of services listening for connections on DShield honeypots [1]. Different systems scanning the internet can connect to these listening services due to exceptions in the firewall. Any attempted connections blocked by the firewall are logged and can be analyzed later. This can be useful to see TCP port connection attempts, but its usefulness is limited. Without the ability to complete the SYN, SYN-ACK, ACK handshake, other protocol data may never be sent.
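
A custom listener only needs to accept the connection, so the handshake completes, and then capture whatever the client sends. Here is a minimal loopback sketch in Python (an illustration of the concept, not the actual DShield implementation):

```python
import socket
import threading

captured = []  # (client_ip, first_payload) pairs

def serve_once(srv):
    # accept() returns only after the SYN, SYN-ACK, ACK handshake completed,
    # so the client is now willing to send its protocol data
    conn, peer = srv.accept()
    captured.append((peer[0], conn.recv(4096)))
    conn.close()
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# simulate a scanner that only speaks after a completed handshake
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
client.close()
t.join(timeout=5)
print(captured)
```

In a real honeypot the listener would of course log to disk and loop over many connections; the point is simply that a completed handshake coaxes out payloads that a firewall-logged SYN never would.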

Playing with Powershell and JSON (and Amazon and Firewalls), (Wed, Dec 28th)


In this post we'll take a look at parsing and manipulating JSON in PowerShell.

Taking a look at the problem at hand, my client was using AWS Route53 load balancers, and wanted to permit access to (only) the appropriate services so that the load balancers could do "health checks" on them.  

Simple enough, I said, the address ranges are all public (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/route-53-ip-addresses.html).  The download (https://ip-ranges.amazonaws.com/ip-ranges.json) is a JSON file, and looks like this:

{
  "syncToken": "1670257989",
  "createDate": "2022-12-05-16-33-09",
  "prefixes": [
    {
      "ip_prefix": "3.2.34.0/26",
      "region": "af-south-1",
      "service": "AMAZON",
      "network_border_group": "af-south-1"
    },
    {
      "ip_prefix": "3.5.140.0/22",
      "region": "ap-northeast-2",
      "service": "AMAZON",
      "network_border_group": "ap-northeast-2"
    },
    {
      "ip_prefix": "13.34.37.64/27",
      "region": "ap-southeast-4",
      "service": "AMAZON",
      "network_border_group": "ap-southeast-4"
    },
    { … (and so on) …

It was at that point that I realized a "grep" wasn't going to work for this, and either I had to do this manually (which would mean I'd miss one for sure) or import this JSON file into something I could use – in my case lately that's PowerShell. We won't be writing a full-on script; we'll be using PowerShell as a working environment / scratch pad and working towards our solution, rather than treating it as a programming language.

Easy enough. Let's start by importing the file and converting the JSON into PowerShell objects:

$a = get-content -raw -path .\ip-ranges.json | convertfrom-json

Looking at a sample entry, the data (and solving our problem) is now much simpler:

$a.prefixes[3]

ip_prefix      region       service network_border_group
---------      ------       ------- --------------------
13.34.65.64/27 il-central-1 AMAZON  il-central-1

To just get the requisite services and regions (us-east and us-west):

$a.prefixes | where-object {$_.region -like "us-*" -and $_.service -eq "ROUTE53_HEALTHCHECKS"}

ip_prefix         region    service              network_border_group
---------         ------    -------              --------------------
107.23.255.0/26   us-east-1 ROUTE53_HEALTHCHECKS us-east-1
54.243.31.192/26  us-east-1 ROUTE53_HEALTHCHECKS us-east-1
54.183.255.128/26 us-west-1 ROUTE53_HEALTHCHECKS us-west-1
54.241.32.64/26   us-west-1 ROUTE53_HEALTHCHECKS us-west-1
54.244.52.192/26  us-west-2 ROUTE53_HEALTHCHECKS us-west-2
54.245.168.0/26   us-west-2 ROUTE53_HEALTHCHECKS us-west-2

What happened?  Well, nothing – we got a lot of "access denied" messages on the firewall ACL. It turns out that, at this point in time, none of the load balancer health checks are actually coming from either us-east or us-west; they're all in the "GLOBAL" range – none of the services we were playing with are datacenter-specific (yet).

So I updated my one-liner to look like:

$a.prefixes | where-object {($_.region -like "us-*" -or $_.region -like "GLOBAL") -and $_.service -eq "ROUTE53_HEALTHCHECKS"}

ip_prefix         region    service              network_border_group
---------         ------    -------              --------------------
15.177.0.0/18     GLOBAL    ROUTE53_HEALTHCHECKS GLOBAL
107.23.255.0/26   us-east-1 ROUTE53_HEALTHCHECKS us-east-1
54.243.31.192/26  us-east-1 ROUTE53_HEALTHCHECKS us-east-1
54.183.255.128/26 us-west-1 ROUTE53_HEALTHCHECKS us-west-1
54.241.32.64/26   us-west-1 ROUTE53_HEALTHCHECKS us-west-1
54.244.52.192/26  us-west-2 ROUTE53_HEALTHCHECKS us-west-2
54.245.168.0/26   us-west-2 ROUTE53_HEALTHCHECKS us-west-2

To just get the subnets to put into the firewall rule:

($a.prefixes | where-object {($_.region -like "us-*" -or $_.region -like "GLOBAL") -and $_.service -eq "ROUTE53_HEALTHCHECKS"}).ip_prefix
15.177.0.0/18
107.23.255.0/26
54.243.31.192/26
54.183.255.128/26
54.241.32.64/26
54.244.52.192/26
54.245.168.0/26

Success!

The final firewall rules now look like:

object-group network AWS_ROUTE53_HEALTHCHECKS
    network-object 54.183.255.128 255.255.255.192
    network-object 54.241.32.64 255.255.255.192
    network-object 54.244.52.192 255.255.255.192
    network-object 54.245.168.0 255.255.255.192
    network-object 107.23.255.0 255.255.255.192
    network-object 54.243.31.192 255.255.255.192
    network-object 15.177.0.0 255.255.192.0

object-group network ADFS-TARGETS
    network-object object SITE01-ADFS
    network-object object SITE02-ADFS

access-list outside_acl line 37 remark AWS Route53 Load Balancer Health Checks
access-list outside_acl line 38 extended permit tcp object-group AWS_ROUTE53_HEALTHCHECKS object-group ADFS-TARGETS eq https

This would have taken roughly the same time to do manually, except (knowing myself) I'd be sure to miss one line or drop a digit during a copy/paste. Plus, this is something we'll want to revisit periodically – as the AWS subnets change, we'll need to update the firewall rules of course. And this is also a code snippet I can use going forward for all kinds of JSON data, not just this specific task.
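
For readers who live in Python rather than PowerShell, the same filter is just as short there. A sketch against an inline, trimmed sample in the shape of ip-ranges.json (the real file has many more entries and fields):

```python
import json

# trimmed, inline stand-in for https://ip-ranges.amazonaws.com/ip-ranges.json
doc = json.loads("""{
  "prefixes": [
    {"ip_prefix": "3.2.34.0/26",     "region": "af-south-1", "service": "AMAZON"},
    {"ip_prefix": "15.177.0.0/18",   "region": "GLOBAL",     "service": "ROUTE53_HEALTHCHECKS"},
    {"ip_prefix": "107.23.255.0/26", "region": "us-east-1",  "service": "ROUTE53_HEALTHCHECKS"}
  ]
}""")

# same logic as the where-object one-liner: US or GLOBAL health-check ranges
wanted = [
    p["ip_prefix"]
    for p in doc["prefixes"]
    if p["service"] == "ROUTE53_HEALTHCHECKS"
    and (p["region"].startswith("us-") or p["region"] == "GLOBAL")
]
print(wanted)
```

Point json.load at the downloaded file instead of the inline sample and the output is the same subnet list the PowerShell pipeline produced.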

I hope this is a useful addition to your toolbox – if you've got a similar situation with a different slant (JSON, XML, or some other format), by all means share it in our comment form below!

===============
Rob VandenBrink
rob@coherentsecurity.com


Exchange OWASSRF Exploited for Remote Code Execution, (Thu, Dec 22nd)


According to a post by Rapid7, they have observed Exchange Server 2013, 2016, and 2019 being actively exploited via "a chaining of CVE-2022-41080 and CVE-2022-41082 to bypass URL rewrite mitigations that Microsoft provided for ProxyNotShell allowing for remote code execution (RCE) via privilege escalation via Outlook Web Access (OWA)."[1]

Can you please tell me what time it is? Adventures with public NTP servers., (Wed, Dec 21st)


Keeping accurate time has never been easier. In the early days of my computing experience, the accuracy of computer clocks was always questionable. Many of them kept worse time than a $5 wristwatch. Having a cheap quartz oscillator on a motherboard with widely varying temperatures just didn't work all that well.

Along came NTP, and now almost all operating systems, even many IoT devices, come preconfigured with some reasonable NTP server. In addition, "pool.ntp.org" makes a large number of public servers available to choose from. Currently, "pool.ntp.org" claims to consist of about 4,000 servers provided by volunteers. But how good are they? That question often comes up with volunteer projects like this. Pretty much anybody may join "the pool", and of course, there is no guarantee that the times are accurate. So I ran a quick test and wrote a little Python script to figure out how good they are.

Spoiler alert: They are actually pretty good.

I used various public NTP server lists, and lists for pool.ntp.org, to find as many servers as possible. Overall, I came up with 1,159 IP addresses for publicly advertised servers. Next, I used the Python NTP library to determine the offset of these servers relative to my own desktop. I realize that my desktop doesn't have a perfect clock, but it should be pretty good: I use two internal GPS-synchronized NTP servers. Still, I wouldn't trust anything better than maybe 10 ms.

Among the 1,158 datapoints, only 5 showed offsets well above one second. 

+-----------------+------------+
| IP Address.     | lastoffset |
+-----------------+------------+
| 85.204.137.77   | 2147483647 | - looks like a consumer IP in Denmark
| 128.4.1.1       |    1175530 | - rackety.udel.edu. Probably the oddest one. A well known time server. 
| 140.203.204.77  |       6999 | - Irish University
| 148.216.0.30    | 2147483647 | - Mexican University
| 199.249.223.123 |       1414 | - ntp.quintex.com
+-----------------+------------+

Note that 2147483647 is 2^31-1, so these servers were not in sync and returned an empty response. The others need a bit of additional investigation to rule out a "fluke" or a network connectivity issue.
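
The offsets themselves come from NTP's standard clock-offset calculation over the four packet timestamps, which is what libraries like ntplib compute under the hood. A worked Python sketch with hypothetical timestamps (T1 = client transmit, T2 = server receive, T3 = server transmit, T4 = client receive, all in seconds):

```python
def ntp_offset(t1, t2, t3, t4):
    """Clock offset theta = ((T2 - T1) + (T3 - T4)) / 2."""
    return ((t2 - t1) + (t3 - t4)) / 2

def ntp_delay(t1, t2, t3, t4):
    """Round-trip delay, excluding server processing time."""
    return (t4 - t1) - (t3 - t2)

# hypothetical exchange: client clock 250 ms behind, ~10 ms latency each way
t1, t2, t3, t4 = 100.000, 100.260, 100.261, 100.021
print(ntp_offset(t1, t2, t3, t4), ntp_delay(t1, t2, t3, t4))
```

Averaging the outbound and inbound differences cancels the network delay as long as it is symmetric, which is why even far-away pool servers can report offsets in the low milliseconds.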

Here is a quick frequency distribution:

But overall, these public NTP servers are well suited for your average home or small business network. Don't use them as the time source for a 5G network, though. More sophisticated time servers usually provide not just an accurate absolute time but also a frequency standard. For not too much money, you can either build your own with a relatively cheap GPS receiver and a small computer like a Raspberry Pi, or buy a ready-made appliance from companies like Centerclick.com or timemachinescorp.com. These appliances typically use GPS as a source. Even if you use an external NTP server, try making one machine in your network the "time source" and sync your other machines to that one NTP server. This will also take a bit of load off the public time servers.

NTP also has a nice "OS Fingerprinting" side effect: Many operating systems use specific NTP servers (like time.apple.com for Apple). In some cases, you may even be able to pick up on different IoT vendors based on the DNS lookup for the NTP service they are using. Use an internal DNS server to direct these requests to the IP address of your internal NTP server. 

Lately, as a replacement for the old "ntpd" NTP server, some Linux operating systems have started using "chrony". Chrony, which Facebook notably adopted and helped popularize, promises better accuracy. But resource requirements are similar to ntpd's, and both use the same network protocol. There are also options to authenticate NTP requests and responses via a simple shared key, or, more recently, Network Time Security (NTS), which protects NTP with TLS-based key establishment and is currently supported by Cloudflare's NTP servers.

For a list of NTP servers we are tracking, see https://isc.sans.edu/api/threatlist/ntpservers?json . The list is currently updated once a day.

 


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Linux File System Monitoring & Actions, (Tue, Dec 20th)


There can be multiple reasons to keep an eye on a critical/suspicious file or directory. For example, you could be tracking an attacker and waiting for access to the credentials captured by a phishing kit installed on a compromised server. You could deploy an EDR solution or an OSSEC agent that implements FIM ("File Integrity Monitoring")[1]: upon a file change, an action can be triggered. Nice, but what if you'd like a quick but agentless solution (in the scope of an incident, for example)?
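
As a taste of what such an agentless check might look like, here is a hedged Python sketch that polls a file's SHA-256 hash and fires an action when it changes (inotify-based tools like inotifywait are the event-driven alternative; the watched file name below is hypothetical):

```python
import hashlib
import os
import tempfile
import threading
import time
from pathlib import Path

def file_digest(path):
    """SHA-256 of the file's contents, or None if it doesn't exist."""
    p = Path(path)
    return hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else None

def watch(path, interval=0.1, rounds=20, on_change=print):
    """Poll a file; call on_change whenever its hash differs from last time."""
    last = file_digest(path)
    for _ in range(rounds):
        time.sleep(interval)
        current = file_digest(path)
        if current != last:
            on_change(f"{path} changed")
            last = current

# demo against a temporary "captured credentials" file (hypothetical target)
events = []
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "creds.txt")
    Path(target).write_text("before")
    watcher = threading.Thread(target=watch, args=(target,),
                               kwargs={"on_change": events.append})
    watcher.start()
    time.sleep(0.3)
    Path(target).write_text("after")   # the modification we want to catch
    watcher.join()
print(events)
```

Polling plus hashing is crude compared to kernel-level notification, but it needs nothing installed on the target beyond a Python interpreter, which is exactly the point during an incident.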

Hunting for Mastodon Servers, (Mon, Dec 19th)


Since Elon Musk took control of Twitter, there has been considerable interest in alternative platforms to the micro-blogging network. Without certainty about Twitter's future, many people have switched to the Mastodon[1] network. Most of the ISC Handlers are now present on this decentralized network. For example, I'm reachable via @xme@infosec.exchange[2]. You can find our addresses on the Contact page[3].

Infostealer Malware with Double Extension, (Sun, Dec 18th)


I got this file attachment this week pretending to be from HSBC Global Payments and Cash Management. The attachment payment_copy.pdf.z is a RAR archive – somewhat unusual for this type of lure – and when extracted, it yields a file with the double extension pdf.exe. The file is an infostealer trojan and is detected by multiple scanning engines.