SmartApeSG campaign uses ClickFix page to push Remcos RAT, (Sat, Mar 14th)

This post was originally published on this site

Introduction

This diary describes a Remcos RAT infection that I generated in my lab on Thursday, 2026-03-11. This infection was from the SmartApeSG campaign that used a ClickFix-style fake CAPTCHA page.

My previous in-depth diary about SmartApeSG (also tracked as ZPHP or HANEYMANEY) was in November 2025, when I saw NetSupport Manager RAT. Since then, I've fairly consistently seen what appears to be Remcos RAT from this campaign.

Finding SmartApeSG Activity

As previously noted, I find SmartApeSG indicators from the Monitor SG account on Mastodon, and I use URLscan to pivot on those indicators to find compromised websites with injected SmartApeSG script.

Details

Below is an image of HTML in a page from a legitimate but compromised website that shows the injected SmartApeSG script.


Shown above: Page from a legitimate but compromised site that highlights the injected SmartApeSG script.

The injected SmartApeSG script generates a fake CAPTCHA-style "verify you are human" page, which displays ClickFix-style instructions after the user checks a box on the page. A screenshot from this infection is shown below; it highlights the ClickFix-style script injected into the user's clipboard. Users are instructed to open a Run window, paste the script into it, and hit the Enter key.


Shown above: Fake CAPTCHA page generated by a legitimate but compromised site, showing the ClickFix-style command.

I used Fiddler to reveal URLs from the HTTPS traffic, and I recorded the traffic and viewed it in Wireshark. Traffic from the infection chain is shown in the image below.


Shown above: Traffic from the infection in Fiddler and Wireshark.

After the victim ran the ClickFix-style instructions, the malware was sent as a ZIP archive and saved to disk with a .pdf file extension. This appears to be Remcos RAT in a malicious package that uses DLL side-loading to run the malware. The infection was made persistent with an update to the Windows Registry.


Shown above: Malware from the infection persistent on an infected Windows host.

Indicators of Compromise

SmartApeSG script injected into a page from a legitimate but compromised site:

  • hxxps[:]//cpajoliette[.]com/d.js

Traffic to domain hosting the fake CAPTCHA page:

  • hxxps[:]//retrypoti[.]top/endpoint/signin-cache.js
  • hxxps[:]//retrypoti[.]top/endpoint/login-asset.php?Iah0QU0N
  • hxxps[:]//retrypoti[.]top/endpoint/handler-css.js?00109a4cb788daa811

Traffic generated by running the ClickFix-style script:

  • hxxp[:]//forcebiturg[.]com/boot  <– 302 redirect to HTTPS URL
  • hxxps[:]//forcebiturg[.]com/boot  <– returned HTA file
  • hxxp[:]//forcebiturg[.]com/proc  <– 302 redirect to HTTPS URL
  • hxxps[:]//forcebiturg[.]com/proc  <– returned ZIP archive with files for Remcos RAT
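The indicators above are defanged (hxxp schemes, bracketed dots) so they cannot be clicked or fetched accidentally. When pivoting on them in lookups or blocklists, a small helper can refang them first; this is a quick sketch of my own, not a tool from the diary:

```python
import re

def refang(ioc: str) -> str:
    """Convert a defanged indicator back to its live form."""
    ioc = ioc.replace("[.]", ".").replace("[:]", ":")
    return re.sub(r"^hxxp", "http", ioc, flags=re.IGNORECASE)

print(refang("hxxps[:]//forcebiturg[.]com/boot"))  # https://forcebiturg.com/boot
```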

Post-infection traffic for Remcos RAT:

  • 193.178.170[.]155:443 – TLSv1.3 traffic using self-signed certificate

Example of ZIP archive for Remcos RAT:

  • SHA256 hash: b170ffc8612618c822eb03030a8a62d4be8d6a77a11e4e41bb075393ca504ab7
  • File size: 92,273,195 bytes
  • File type: Zip archive data, at least v2.0 to extract, compression method=deflate
  • Example of saved file location: C:\Users\[username]\AppData\Local\Temp\594653818\594653818.pdf
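To check a retrieved sample against the hash above, you can compute its SHA256 locally. A minimal sketch (the helper name and chunked-read approach are my own; the file path would be whatever you saved the sample as):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA256 so large samples never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the returned hexdigest to the published IOC; a mismatch means you have a different (possibly newer) sample.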

Of note, the files, URLs and domains for SmartApeSG activity change on a near-daily basis, and the indicators described in this article are likely no longer current. However, the overall patterns of activity for SmartApeSG have remained fairly consistent over the past several months.


Bradley Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Twenty years of Amazon S3 and building what’s next

This post was originally published on this site

Twenty years ago today, on March 14, 2006, Amazon Simple Storage Service (Amazon S3) quietly launched with a modest one-paragraph announcement on the What’s New page:

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.

Even Jeff Barr’s blog post was only a few paragraphs, written before catching a plane to a developer event in California. No code examples. No demo. Very low fanfare. Nobody knew at the time that this launch would shape our entire industry.

The early days: Building blocks that just work
At its core, S3 introduced two straightforward primitives: PUT to store an object and GET to retrieve it later. But the real innovation was the philosophy behind it: create building blocks that handle the undifferentiated heavy lifting, which freed developers to focus on higher-level work.

From day one, S3 was guided by five fundamentals that remain unchanged today.

Security means your data is protected by default. Durability is designed for 11 nines (99.999999999%), and we operate S3 to be lossless. Availability is designed into every layer, with the assumption that failure is always present and must be handled. Performance is optimized to store virtually any amount of data without degradation. Elasticity means the system automatically grows and shrinks as you add and remove data, with no manual intervention required.
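To make the 11-nines figure concrete, a durability of 99.999999999% implies an expected annual loss rate of about 10^-11 per object. This back-of-the-envelope calculation is my own illustration, not an AWS formula:

```python
durability = 0.99999999999           # 11 nines, per object per year
annual_loss_prob = 1 - durability    # ~1e-11

objects = 10_000_000                 # say you store ten million objects
expected_losses_per_year = objects * annual_loss_prob
print(f"{expected_losses_per_year:.4f}")  # 0.0001
```

In other words, at that design goal a customer with ten million objects would expect, on average, a single object loss only once in roughly 10,000 years.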

When we get these things right, the service becomes so straightforward that most of you never have to think about how complex these concepts are.

S3 today: Scale beyond imagination
Throughout 20 years, S3 has remained committed to its core fundamentals even as it’s grown to a scale that’s hard to comprehend.

When S3 first launched, it offered approximately one petabyte of total storage capacity across about 400 storage nodes in 15 racks spanning three data centers, with 15 Gbps of total bandwidth. We designed the system to store tens of billions of objects, with a maximum object size of 5 GB. The initial price was 15 cents per gigabyte.

S3 key metrics illustration

Today, S3 stores more than 500 trillion objects and serves more than 200 million requests per second globally across hundreds of exabytes of data in 123 Availability Zones in 39 AWS Regions, for millions of customers. The maximum object size has grown from 5 GB to 50 TB, a 10,000-fold increase. If you stacked all of the tens of millions of S3 hard drives on top of each other, they would reach the International Space Station and almost back.

Even as S3 has grown to support this incredible scale, the price you pay has dropped. Today, AWS charges slightly over 2 cents per gigabyte. That’s a price reduction of approximately 85% since launch in 2006. In parallel, we’ve continued to introduce ways to further optimize storage spend with storage tiers. For example, our customers have collectively saved more than $6 billion in storage costs by using Amazon S3 Intelligent-Tiering as compared to Amazon S3 Standard.

Over the past two decades, the S3 API has been adopted and used as a reference point across the storage industry. Multiple vendors now offer S3 compatible storage tools and systems, implementing the same API patterns and conventions. This means skills and tools developed for S3 often transfer to other storage systems, making the broader storage landscape more accessible.

Despite all of this growth and industry adoption, perhaps the most remarkable achievement is this: the code you wrote for S3 in 2006 still works today, unchanged. Your data went through 20 years of innovation and technical advances. We migrated the infrastructure through multiple generations of disks and storage systems. All the code to handle a request has been rewritten. But the data you stored 20 years ago is still available today, and we’ve maintained complete API backward compatibility. That’s our commitment to delivering a service that continually “just works.”

The engineering behind the scale
What makes S3 possible at this scale? Continuous innovation in engineering.

Much of what follows is drawn from a conversation between Mai-Lan Tomsen Bukovec, VP of Data and Analytics at AWS, and Gergely Orosz of The Pragmatic Engineer. The in-depth interview goes further into the technical details for those who want to go deeper. In the following paragraphs, I share some examples:

At the heart of S3 durability is a system of microservices that continuously inspect every single byte across the entire fleet. These auditor services examine data and automatically trigger repair systems the moment they detect signs of degradation. S3 is designed to be lossless: the 11 nines design goal reflects how the replication factor and re-replication fleet are sized, but the system is built so that objects aren’t lost.

S3 engineers use formal methods and automated reasoning in production to mathematically prove correctness. When engineers check in code to the index subsystem, automated proofs verify that consistency hasn’t regressed. This same approach proves correctness in cross-Region replication or for access policies.

Over the past 8 years, AWS has been progressively rewriting performance-critical code in the S3 request path in Rust. Blob movement and disk storage have been rewritten, and work is actively ongoing across other components. Beyond raw performance, Rust’s type system and memory safety guarantees eliminate entire classes of bugs at compile time. This is an essential property when operating at S3 scale and correctness requirements.

S3 is built on a design philosophy: “Scale is to your advantage.” Engineers design systems so that increased scale improves attributes for all users. The larger S3 gets, the more de-correlated workloads become, which improves reliability for everyone.

Looking forward
The vision for S3 extends beyond being a storage service to becoming the universal foundation for all data and AI workloads. Our vision is simple: you store any type of data one time in S3, and you work with it directly, without moving data between specialized systems. This approach reduces costs, eliminates complexity, and removes the need for multiple copies of the same data.

Here are a few standout launches from recent years:

  • S3 Tables – Fully managed Apache Iceberg tables with automated maintenance that optimize query efficiency and reduce storage cost over time.
  • S3 Vectors – Native vector storage for semantic search and RAG, supporting up to 2 billion vectors per index with sub-100ms query latency. In only 5 months (July–December 2025), you created more than 250,000 indices, ingested more than 40 billion vectors, and performed more than 1 billion queries.
  • S3 Metadata – Centralized metadata for instant data discovery, removing the need to recursively list large buckets for cataloging and significantly reducing time-to-insight for large data lakes.

Each of these capabilities operates at S3's cost structure. Workloads that traditionally required expensive databases or specialized systems are now economically feasible at scale.

From 1 petabyte to hundreds of exabytes. From 15 cents to 2 cents per gigabyte. From simple object storage to the foundation for AI and analytics. Through it all, our five fundamentals–security, durability, availability, performance, and elasticity–remain unchanged, and your code from 2006 still works today.

Here’s to the next 20 years of innovation on Amazon S3.

— seb

A React-based phishing page with credential exfiltration via EmailJS, (Fri, Mar 13th)

This post was originally published on this site

On Wednesday, a phishing message made its way into our handler inbox. It carried a fairly typical low-quality lure but turned out to be quite interesting nonetheless: the accompanying credential-stealing web page was dynamically constructed with React and used a legitimate e-mail service for credential collection.

Introducing account regional namespaces for Amazon S3 general purpose buckets

This post was originally published on this site

Today, we’re announcing a new feature of Amazon Simple Storage Service (Amazon S3) that you can use to create general purpose buckets in your own account regional namespace, simplifying bucket creation and management as your data storage needs grow in size and scope. You can create general purpose bucket names across multiple AWS Regions with assurance that your desired bucket names will always be available for you to use.

With this feature, you can predictably name and create general purpose buckets in your own account regional namespace by appending your account’s unique suffix in your requested bucket name. For example, I can create the bucket mybucket-123456789012-us-east-1-an in my account regional namespace. mybucket is the bucket name prefix that I specified, then I add my account regional suffix to the requested bucket name: -123456789012-us-east-1-an. If another account tries to create buckets using my account’s suffix, their requests will be automatically rejected.

Your security teams can use AWS Identity and Access Management (AWS IAM) policies and AWS Organizations service control policies to enforce that your employees only create buckets in their account regional namespace using the new s3:x-amz-bucket-namespace condition key, helping teams adopt the account regional namespace across your organization.
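As a sketch of what such a guardrail might look like, here is a hypothetical IAM policy statement (expressed as a Python dict) that denies CreateBucket unless the request targets the account regional namespace. The s3:x-amz-bucket-namespace condition key comes from the announcement, but the exact policy shape here is my assumption; verify it against the S3 documentation before deploying:

```python
import json

# Hypothetical deny-by-default guardrail: block bucket creation outside
# the account regional namespace. The condition key is from the launch
# announcement; confirm the precise semantics in the S3 User Guide.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireAccountRegionalNamespace",
            "Effect": "Deny",
            "Action": "s3:CreateBucket",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-bucket-namespace": "account-regional"
                }
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```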

Create your S3 bucket with account regional namespace in action
To get started, choose Create bucket in the Amazon S3 console. To create your bucket in your account regional namespace, choose Account regional namespace. If you choose this option, you can create your bucket with any name that is unique to your account and region.

This configuration supports all of the same features as general purpose buckets in the global namespace. The only difference is that only your account can use bucket names with your account’s suffix. The bucket name prefix and the account regional suffix combined must be between 3 and 63 characters long.
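Since the prefix plus the account regional suffix must total 3 to 63 characters, a client-side check can catch invalid names before the API call. This helper is my own sketch, assuming the standard general purpose bucket naming rules (lowercase letters, digits, hyphens):

```python
import re

SUFFIX_TEMPLATE = "-{account_id}-{region}-an"

def valid_account_regional_name(prefix: str, account_id: str, region: str) -> bool:
    """Check prefix + account regional suffix against general purpose bucket
    naming rules: 3-63 chars, lowercase letters/digits/hyphens, starting and
    ending with a letter or digit."""
    name = prefix + SUFFIX_TEMPLATE.format(account_id=account_id, region=region)
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9-]*[a-z0-9]", name) is not None
```

For example, an 8-character prefix like mybucket yields a 34-character name, well within the limit; very long prefixes fail the check.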

Using the AWS Command Line Interface (AWS CLI), you can create a bucket with account regional namespace by specifying the x-amz-bucket-namespace:account-regional request header and providing a compatible bucket name.

$ aws s3api create-bucket --bucket mybucket-123456789012-us-east-1-an \
   --bucket-namespace account-regional \
   --region us-east-1

You can use the AWS SDK for Python (Boto3) to create a bucket with account regional namespace using CreateBucket API request.

import boto3

class AccountRegionalBucketCreator:
    """Creates S3 buckets using account-regional namespace feature."""
    
    ACCOUNT_REGIONAL_SUFFIX = "-an"
    
    def __init__(self, s3_client, sts_client):
        self.s3_client = s3_client
        self.sts_client = sts_client
    
    def create_account_regional_bucket(self, prefix):
        """
        Creates an account-regional S3 bucket with the specified prefix.
        Resolves caller AWS account ID using the STS GetCallerIdentity API.
        Format: {prefix}-{account_id}-{region}-an
        """
        account_id = self.sts_client.get_caller_identity()['Account']
        region = self.s3_client.meta.region_name
        bucket_name = self._generate_account_regional_bucket_name(
            prefix, account_id, region
        )
        
        params = {
            "Bucket": bucket_name,
            "BucketNamespace": "account-regional"
        }
        if region != "us-east-1":
            params["CreateBucketConfiguration"] = {
                "LocationConstraint": region
            }
        
        return self.s3_client.create_bucket(**params)
    
    def _generate_account_regional_bucket_name(self, prefix, account_id, region):
        return f"{prefix}-{account_id}-{region}{self.ACCOUNT_REGIONAL_SUFFIX}"


if __name__ == '__main__':
    s3_client = boto3.client('s3')
    sts_client = boto3.client('sts')
    
    creator = AccountRegionalBucketCreator(s3_client, sts_client)
    response = creator.create_account_regional_bucket('test-python-sdk')
    
    print(f"Bucket created: {response}")

You can update your infrastructure as code (IaC) tools, such as AWS CloudFormation, to simplify creating buckets in your account regional namespace. AWS CloudFormation offers the pseudo parameters, AWS::AccountId and AWS::Region, making it easy to build CloudFormation templates that create account regional namespace buckets.

The following example demonstrates how you can update your existing CloudFormation templates to start creating buckets in your account regional namespace:

BucketName: !Sub "amzn-s3-demo-bucket-${AWS::AccountId}-${AWS::Region}-an"
BucketNamespace: "account-regional"

Alternatively, you can use the BucketNamePrefix property to update your CloudFormation template. With BucketNamePrefix, you provide only the customer-defined portion of the bucket name, and the account regional namespace suffix is added automatically based on the requesting AWS account and the specified Region.

BucketNamePrefix: 'amzn-s3-demo-bucket'
BucketNamespace: "account-regional"

Using these options, you can build a custom CloudFormation template to easily create general purpose buckets in your account regional namespace.

Things to know
You can’t rename your existing global buckets to bucket names with account regional namespace, but you can create new general purpose buckets in your account regional namespace. Also, the account regional namespace is only supported for general purpose buckets. S3 table buckets and vector buckets already exist in an account-level namespace and S3 directory buckets exist in a zonal namespace.

To learn more, visit Namespaces for general purpose buckets in the Amazon S3 User Guide.

Now available
Creating general purpose buckets in your account regional namespace in Amazon S3 is now available in 37 AWS Regions including the AWS China and AWS GovCloud (US) Regions. You can create general purpose buckets in your account regional namespace at no additional cost.

Give it a try in the Amazon S3 console today and send feedback to AWS re:Post for Amazon S3 or through your usual AWS Support contacts.

Channy

When your IoT Device Logs in as Admin, It's too Late! [Guest Diary], (Wed, Mar 11th)

This post was originally published on this site

[This is a Guest Diary by Adam Thorman, an ISC intern as part of the SANS.edu BACS program]

Introduction

Have you ever installed a new device on your home or company router? Even when setup instructions are straightforward, end users often skip the step that matters most: changing default credentials. The excitement of deploying a new device frequently outweighs the discipline of securing it.
This diary shares a short real-world story, then walks through my own internship observations from overseeing a honeypot and a vulnerability assessment, which demonstrate just how quickly default credentials are discovered and abused.

Default Credentials in a Real-World Example

Default usernames and passwords remain the most exploited attack vector for Internet of Things (IoT) devices. Whether installation is performed by an end user or a contracted vendor, organizations must have a defined process to ensure credentials are changed immediately. Without that process, compromise is often a matter of when, not if.
During a routine vulnerability assessment at work, I identified multiple IP addresses that were accessible using default credentials. These IPs belonged to a newly installed security system monitoring sensitive material. The situation was worse than expected:

  • The system was not placed on the proper VLAN
  • Basic end user machines could reach it
  • The username “root” remained unchanged and the password “password” was changed to “admin”

This configuration was still trivial to guess and exploit, regardless of whether access was internal or external, as Figure 1 below illustrates.


Figure 1 – Meme of Easily Bypassed Security Controls

What the Logs Showed

To better understand how common this issue is, I analyzed SSH and Telnet traffic across an eight-day period (January 18–25) and compared it with more recent data. This ties into the story above: many devices keep their default settings, or change them only slightly to common, trivial combinations. The graphs were pulled from the Internet Storm Center (ISC) My SSH Reports page [2], while the comparison was generated with ChatGPT [3].

JANUARY 27TH, 2026

FEBRUARY 17TH, 2026

COMPARISON

Across both datasets:

  • The username “root” remained dominant at ~39%
  • The password “123456” increased from 15% to 27%
  • These combinations strongly resembled automated botnet scanning behavior

This aligns with publicly known credential lists that attackers use for large scale reconnaissance.

Successful Connections

During the analysis window, I observed:

  • 44,269 failed connection attempts
  • 1,286 successful logins
  • A success rate of only 2.9%

That percentage may appear low, but it still resulted in over a thousand compromised sessions.
To perform this analysis, I parsed Cowrie JSON logs using jq, converted them to CSV files, and consolidated them into a single spreadsheet.

From the 1,286 successful connections:

  • 621 used the username root
  • 154 used admin as the password
  • 406 shared the same HASSH fingerprint 2ec37a7cc8daf20b10e1ad6221061ca5
  • 47 sessions matched all three indicators

The session matching that hash is shown in APPENDIX A.
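The jq-to-CSV tallying described above can also be done directly in Python. This is a minimal sketch over Cowrie's JSON-lines logs; the eventid/username/password field names follow Cowrie's cowrie.login.* events, and the sample lines here are made up for illustration:

```python
import json
from collections import Counter

def tally_logins(lines):
    """Count usernames and passwords in Cowrie login events, plus success/failure totals."""
    users, passwords = Counter(), Counter()
    ok = fail = 0
    for line in lines:
        event = json.loads(line)
        eid = event.get("eventid", "")
        if eid == "cowrie.login.success":
            ok += 1
        elif eid == "cowrie.login.failed":
            fail += 1
        else:
            continue  # skip non-login events
        users[event.get("username", "")] += 1
        passwords[event.get("password", "")] += 1
    return users, passwords, ok, fail

# Illustrative log lines (not real capture data):
sample = [
    '{"eventid": "cowrie.login.failed", "username": "root", "password": "123456"}',
    '{"eventid": "cowrie.login.success", "username": "root", "password": "admin"}',
]
users, passwords, ok, fail = tally_logins(sample)
print(users.most_common(1), ok, fail)
```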

What Attackers Did After Logging In

Four session IDs stood out during review of the full report:
1. eee64da853a9
2. f62aa78aca0b
3. 308d24ec1d36
4. f0bc9f078bdd

Sessions 1 and 4 focused on reconnaissance, executing commands to gather system details such as CPU, uptime, architecture, and GPU information.

Using ChatGPT [3], I compared each session and the commands the attacker attempted. Sessions 1 and 4 performed reconnaissance and shared the topmost HASSH digital fingerprint; they ran the same commands, just with different timestamps. Refer to APPENDIX B for the command outputs of Session IDs 1 and 4.

Sessions 2 and 3 demonstrated more advanced behavior:

  • SSH key persistence
  • Credential manipulation
  • Attempts to modify account passwords

Session 308d24ec1d36 ranked as the most severe due to attempted password changes and persistence mechanisms that could have resulted in long-term control if attempted on a real-world system. Refer to APPENDIX C for the command outputs of Session IDs 2 and 3.

Failed Attempts Tell a Bigger Story

Failed authentication attempts revealed even more.

One digital fingerprint alone accounted for 18,846 failed attempts, strongly suggesting botnet driven scanning activity.

On January 19, 2026, there were 14,057 failed attempts in a single day — a significant spike compared to surrounding dates.

From a Security Operations Center (SOC) analyst’s perspective, this level of activity represents a serious exposure risk.  It could mean a botnet scanning campaign like the one observed by GreyNoise in late August 2025 [4]. 

Below is a visual of the top usernames, passwords, and hashes across the analyzed timeframe.


Figure 2 – Top Usernames, Passwords, and Digital Fingerprints

By comparison, no other day in the period reached even half of that 14k figure; Figure 3 below shows the spread.

Figure 3 – Failed Connection Attempts Over Time

Best Practices to Follow Towards Resolving Default Credentials

The SANS Cybersecurity Policy Template for Password Construction Standard states that it “applies to all passwords including but not limited to user-level accounts, system-level accounts, web accounts, e-mail accounts, screen saver protection, voicemail, and local router logins.” More specifically, the document notes that strong passwords are long (“the more characters a password has the stronger it is”) and recommends “a minimum of 16 characters in all work-related passwords” [6].

Establish an immediate policy to change the default passwords of IoT devices; a network printer that ships with default usernames and passwords is one such example [7].

Practical Experience Without the Real-World Disaster

Having access to a controlled sandbox environment, such as a honeypot lab, provides valuable hands-on experience for cybersecurity practitioners.
Sometimes you need to experience a real-world disaster in a controlled environment to understand the ripple effects it can produce.

Why Might This Apply to You?

MITRE ATT&CK explicitly documents adversary use of manufacturer-set default credentials on control systems, and stresses that they must be changed as soon as possible [8].
This isn’t just an enterprise issue. The same risks apply to:

  • Home routers
  • Networked cameras
  • Printers
  • NAS devices

For hiring managers, even job postings that disclose specific infrastructure details can unintentionally assist attackers searching for default credentials.
Ultimately, it’s important to deliberately implement data security measures to protect yourself from data breaches at your home or workplace. 

Who Can Gain Valuable Insight on this Information?

Anyone with an internet presence or digital footprint. More specifically, organization leadership and management, when it comes to training your workforce and training your replacements.
A client-tech department, where a team is dedicated to testing the software and devices on the network, including validating versions through a patch management tool or reference library so outdated versions are known. Routine “unauthorized” or “prohibited” software reports are an absolute must-have in your workplace.
System administrators and SOC analysts are essential not just to know this information, but to maintain it. Cybersecurity students and professionals, such as red and blue teams [5], will also gain significant value from it.

Moving Forward Even with Good Defense

Defense in depth remains critical:

  • Strong, unique credentials
  • Multi factor authentication where possible [7]
  • Device fingerprinting
  • Continuous monitoring

SANS also encourages the use of passphrases, passwords made up of multiple words [6].
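As an illustration of that advice, a passphrase generator needs only the secrets module and a wordlist. The tiny list below is a placeholder for illustration; a real deployment should draw from a large curated list such as the EFF diceware words:

```python
import secrets

# Placeholder wordlist for illustration only; use a large curated list
# (e.g., the EFF diceware list) in practice.
WORDS = ["copper", "lantern", "orbit", "velvet", "glacier", "pepper",
         "summit", "willow", "cobalt", "harbor", "meadow", "quartz"]

def passphrase(words: int = 4, sep: str = "-") -> str:
    """Build a passphrase from randomly chosen words, using a CSPRNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(words))

print(passphrase())  # e.g. "glacier-pepper-orbit-willow"
```

Four words from this list already exceed the 16-character minimum cited above, while remaining far easier to remember than a random string.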

A common saying in Cybersecurity is, “the more secure the data is, the less convenient the data is—the less secure, the more convenient.” 
Organizations should also maintain a Business Impact Analysis (BIA) within their cybersecurity program. Even with strong defensive measures, organizations must assume that some security controls may eventually fail. A BIA helps organizations prioritize which assets require the strongest protection by identifying critical assets, operational dependencies, and acceptable downtime thresholds.

Tying it all together: when combined with a defense-in-depth strategy, the BIA ensures that the most important systems receive multiple layers of protection, such as network segmentation, strong authentication controls, continuous monitoring, and incident response planning. Without this structured approach, organizations may struggle to recover from a compromise or minimize operational disruption.


Figure 4 – Examples of Enterprise Business Asset Types [9]

Appendix A – Log Sample

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/ 
[2] https://isc.sans.edu/mysshreports/
[3] https://chatgpt.com/
[4] https://eclypsium.com/blog/cisco-asa-scanning-surge-cyberattack/
[5] https://www.techtarget.com/searchsecurity/tip/Red-team-vs-blue-team-vs-purple-team-Whats-the-difference
[6] https://www.sans.org/information-security-policy/password-construction-standard
[7] https://owasp.org/www-project-top-10-infrastructure-security-risks/docs/2024/ISR07_2024-Insecure_Authentication_Methods_and_Default_Credentials
[8] https://attack.mitre.org/techniques/T0812/
[9] https://csrc.nist.gov/pubs/ir/8286/d/upd1/final (PDF: Using Business Impact Analysis to Inform Risk Prioritization)

———–
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Analyzing "Zombie Zip" Files (CVE-2026-0866), (Wed, Mar 11th)

This post was originally published on this site

A new vulnerability (CVE-2026-0866) has been published: Zombie Zip.

It's a method to create a malformed ZIP file that will bypass detection by most anti-virus engines.

The malformed ZIP file cannot be opened with a ZIP utility; a custom loader is required.

The trick is to change the compression method to STORED while the content is still DEFLATED: a flag in the ZIP file header states the content is not compressed, while in reality, it is.

I will show you how to use my tools to analyze such a malformed ZIP file.

Simple Method

Just run my tool search-for-compression.py on the ZIP file (you can download the Zombie ZIP file here, it contains an EICAR file):

The largest compression blob is number 2, it is 77 bytes long. Let's select it:

That's the EICAR file.

Complex Method

We can use the latest version of zipdump.py to analyze the file:

Just using the tool fails (because the zip file is malformed):

Using option -f to bypass the Python ZIP library for parsing, and using custom code to look for ZIP records (-f l) shows us this is a ZIP file, containing a file with the name eicar.com:

Selecting the FILE record (at position 0x00000000, PK0304 fil) shows us all the metadata:

The compressiontype is 0 (STORED), this means that the content of the file is just stored inside the ZIP file, not compressed.

But notice that the compressedsize and uncompressedsize are different (70 and 68). They should be the same for a STORED file.

Let's select the raw file content (-s data) and perform an asciidump (-a):

This does not look like the EICAR file.

Let's force the decompression of the data: -s forcedecompress:

This reveals the EICAR file content.

Option forcedecompress is a new option that I just coded in zipdump.py version 0.0.35. Option decompress will only decompress if the compression type is DEFLATED, thus it can't be used on this malformed ZIP file. Option forcedecompress will always try to decompress (and potentially fail), regardless of the compression type.
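The same red flag (method claims STORED but the sizes differ) can be checked with a few lines of Python. This sketch builds a miniature "zombie" local file header in memory and then recovers the content with a raw inflate, mirroring what forcedecompress does; the construction is my own simplification for illustration, not the actual CVE sample:

```python
import struct
import zlib

LOCAL_HEADER = "<4sHHHHHIIIHH"  # PK\x03\x04 local file header layout (30 bytes)

def make_zombie_record(name: bytes, payload: bytes) -> bytes:
    """Build a local file header claiming STORED (method 0) over DEFLATE data."""
    c = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw deflate stream
    deflated = c.compress(payload) + c.flush()
    header = struct.pack(
        LOCAL_HEADER, b"PK\x03\x04", 20, 0,
        0,                       # compression method: 0 = STORED (the lie)
        0, 0, zlib.crc32(payload),
        len(deflated),           # compressed size
        len(payload),            # uncompressed size: differs -> red flag
        len(name), 0,
    )
    return header + name + deflated

def recover(record: bytes) -> bytes:
    """Detect the STORED/size mismatch and force a raw inflate anyway."""
    fields = struct.unpack_from(LOCAL_HEADER, record)
    method, csize, usize, fnlen = fields[3], fields[7], fields[8], fields[9]
    data = record[30 + fnlen:30 + fnlen + csize]
    if method == 0 and csize != usize:
        return zlib.decompress(data, -15)  # force decompression
    return data
```

Running recover() on such a record returns the original payload even though the header claims the data is stored uncompressed.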

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Weekly Roundup: Amazon Connect Health, Bedrock AgentCore Policy, GameDay Europe, and more (March 9, 2026)

This post was originally published on this site

Fiti AWS Student Community Kenya!

Last week was an incredible whirlwind: a round of meetups, hands-on workshops, and career discussions across Kenya that culminated with the AWS Student Community Day at Meru University of Science and Technology, with keynotes from my colleagues Veliswa and Tiffany, and sessions on everything from GitOps to cloud-native engineering, and a whole lot of AI agent building.

JAWS Days 2026 is the largest AWS Community Day in the world, with over 1,500 attendees on March 7th. This event started with a keynote speech on building an AI-driven development team by Jeff Barr, and included over 100 technical and community experience sessions, lightning talks, and workshops such as Game Days, Builders Card Challenges, and networking parties.

Now, let’s get into this week’s AWS news…

Last week’s launches
Here are some launches and updates from this past week that caught my attention:

  • Introducing Amazon Connect Health, Agentic AI Built for Healthcare — Amazon Connect Health is now generally available with five purpose-built AI agents for healthcare: patient verification, appointment management, patient insights, ambient documentation, and medical coding. All features are HIPAA-eligible and deployable within existing clinical workflows in days.
  • Policy in Amazon Bedrock AgentCore is now generally available — You can now use centralized, fine-grained controls for agent-tool interactions that operate outside your agent code. Security and compliance teams can define tool access and input validation rules using natural language that automatically converts to Cedar, the AWS open-source policy language.
  • Introducing OpenClaw on Amazon Lightsail to run your autonomous private AI agents — You can deploy a private AI assistant on your own cloud infrastructure with built-in security controls, sandboxed agent sessions, one-click HTTPS, and device pairing authentication. Amazon Bedrock serves as the default model provider, and you can connect to Slack, Telegram, WhatsApp, and Discord.
  • AWS announces pricing for VPC Encryption Controls — Starting March 1, 2026, VPC Encryption Controls transitions from free preview to a paid feature. You can audit and enforce encryption-in-transit of all traffic flows within and across VPCs in a region, with monitor mode to detect unencrypted traffic and enforce mode to prevent it.
  • Database Savings Plans now supports Amazon OpenSearch Service and Amazon Neptune Analytics — You can save up to 35% on eligible serverless and provisioned instance usage with a one-year commitment. Savings Plans automatically apply regardless of engine, instance family, size, or AWS Region.
  • AWS Elastic Beanstalk now offers AI-powered environment analysis — When your environment health is degraded, Elastic Beanstalk can now collect recent events, instance health, and logs and send them to Amazon Bedrock for analysis, providing step-by-step troubleshooting recommendations tailored to your environment’s current state.
  • AWS simplifies IAM role creation and setup in service workflows — You can now create and configure IAM roles directly within service workflows through a new in-console panel, without switching to the IAM console. The feature supports Amazon EC2, Lambda, EKS, ECS, Glue, CloudFormation, and more.
  • Accelerate Lambda durable functions development with new Kiro power — You can now build resilient, long-running multi-step applications and AI workflows faster with AI agent-assisted development in Kiro. The power dynamically loads guidance on replay models, step and wait operations, concurrent execution patterns, error handling, and deployment best practices.
  • Amazon GameLift Servers launches DDoS Protection — You can now protect session-based multiplayer games against DDoS attacks with a co-located relay network that authenticates client traffic using access tokens and enforces per-player traffic limits, at no additional cost to GameLift Servers customers.

For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.

From AWS community
Here are my personal favorite posts from AWS community and my colleagues:

  • I Built a Portable AI Memory Layer with MCP, AWS Bedrock, and a Chrome Extension — Learn how to build a persistent memory layer for AI agents using MCP and Amazon Bedrock, packaged as a Chrome extension that carries context across sessions and applications.
  • When the Model Is the Machine — Mike Chambers built an experimental app where an AI agent generates a complete, interactive web application at runtime from a single prompt — no codebase, no framework, no persistent state. A thought-provoking exploration of what happens when the model becomes the runtime.

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

  • AWS Community GameDay Europe — Think you know AWS? Prove it at the AWS Community GameDay Europe on March 17, a gamified learning event where teams compete to solve real-world technical challenges using AWS services.
  • AWS at NVIDIA GTC 2026 — Join us for AWS sessions, booths, demos, and ancillary events at NVIDIA GTC 2026, March 16–19, 2026, in San Jose. You can receive 20% off event passes through AWS and request a 1:1 meeting at GTC.
  • AWS Summits — Join AWS Summits in 2026: free in-person events where you can explore emerging cloud and AI technologies, learn best practices, and network with industry peers and experts. Upcoming Summits include Paris (April 1), London (April 22), and Bengaluru (April 23–24).
  • AWS Community Days — Community-led conferences where content is planned, sourced, and delivered by community leaders. Upcoming events include Slovakia (March 11), Pune (March 21), and the AWSome Women Summit LATAM in Mexico City (March 28).

Browse here for upcoming AWS led in-person and virtual events, startup events, and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— seb

Encrypted Client Hello: Ready for Prime Time?, (Mon, Mar 9th)

Last week, two related RFCs were published: 

RFC 9848: Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings
RFC 9849: TLS Encrypted Client Hello

These TLS extensions have been discussed quite a bit already, and Cloudflare, one of the early implementers and proponents, has had them in use for a while.

Amid increased concern about threats to privacy from government and commercial interests, the "encrypt everything" movement has been on the rise for a while. The community has made several improvements, such as TLS 1.3, the QUIC protocol, the deprecation of OCSP, and encrypted DNS modes, to better protect the privacy of network traffic.

There was one data leak left: For a client to establish a TLS connection, it needs to send a "TLS Client Hello" message. This message contains several sensitive items, most notably the hostname of the site the client attempts to connect to ("Server Name Indication"). One of the early proposals was just to encrypt the Server Name Indication extension. But this does not solve the entire problem, allowing for fingerprinting and other attacks. The same basic principles proposed for encrypting the server name extension can also be applied to encrypt most of the client hello message, resulting in a more complete solution.

One of the basic problems is exchanging key material. The client hello message is the first message sent during the TLS handshake. There is no opportunity for the server and client to negotiate an encryption key, and doing so would require a second handshake. Instead, encrypted client hellos leverage the HTTPS DNS record. The HTTPS record is already used to negotiate HTTP3/QUIC. It is now also used to transmit the keys required for Encrypted Client Hello (ECH). 

Enabling ECH is trivial if you are using Cloudflare. Just "flip the switch" in Cloudflare's edge certificate settings. However, I do not believe this is available on the free plan.

Cloudflare setting for encrypted client hello

To test if a domain supports ECH, use a tool like "dig" to retrieve the HTTPS record:

# dig -t HTTPS dshield.org +short
1 . alpn="h2" ipv4hint=104.26.2.15,104.26.3.15,172.67.70.195 ech=AEX+DQBBawAgACBRVO1kCb5b2znHUOTe+L42PHgEjBSNt4LD/qDNxffkAgAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA= ipv6hint=2606:4700:20::681a:20f,2606:4700:20::681a:30f,2606:4700:20::ac43:46c3

Note the "ech=" part. Without ECH support, you may still see an HTTPS record, but it will not contain the "ech=" part. The base64 encoded string following "ech=" contains the public encryption key used to encrypt the client hello. A good test is cloudflare-ech.com, which will show whether your browser is using ECH. You can also use that domain to check if you are seeing the HTTPS record.

When using "dig", make sure you are using a version that supports HTTPS records. Older versions may not, and even the latest version of dig for macOS does not support HTTPS records. You will see a warning (which, as I found out, is easily missed), and you may still get "A" record responses:

% dig -t HTTPS dshield.org +short
;; Warning, ignoring invalid type HTTPS

For all the network traffic analysts grinding their teeth: you could block HTTPS records. This will also prevent QUIC from being used, which may work in your favor. But whether that is appropriate for your network is a question you must answer based on your business needs.


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


YARA-X 1.14.0 Release, (Sat, Mar 7th)

YARA-X's 1.14.0 release brings 4 improvements and 2 bugfixes.

One of the improvements is a new CLI command: deps.

This command shows you the dependencies of rules.

Here is an example. Rule rule1 has no dependencies, rule rule2 depends on rule rule1, and rule rule3 depends on rule rule2:

Running the deps command on these rules gives you the dependencies:

Didier Stevens
Senior handler
blog.DidierStevens.com

 
