More Scans for SMS Gateways and APIs, (Tue, Apr 29th)

This post was originally published on this site

Last week, I wrote about scans for Teltonika Networks SMS Gateways. Attackers are always looking for cheap (or free) ways to send SMS messages and to gain access to numbers that have not yet been blocklisted. So, I took a closer look at similar scans we have seen. 

There are numerous ways to send SMS messages; using a hardware SMS gateway is probably one of the fancier options. Most websites use messaging services instead. For example, we do see scans for SMS plugins for WordPress:

These scans look for style sheet files (.css) that are part of the respective plugins. It is fair to assume that if the respective style sheet is present, the attacker will attempt to obtain access to the site:

/wp-content/plugins/sms-alert/css/admin.css
/wp-content/plugins/mediaburst-email-to-sms/css/clockwork.css
/wp-content/plugins/verification-sms-targetsms/css/targetvr-style.css
/wp-content/plugins/wp-sms/assets/css/admin-bar.css
/wp-content/plugins/textme-sms-integration/css/textme.css

 

We also got a few odd, maybe broken, scans like:

/api/v1/livechat/sms-incoming/%%target%%/wp-content/themes/twentytwentyone/assets/css/print.css
/api/v1/livechat/sms-incoming/%%target%%/wp-content/themes/twentytwentyone/assets/js/responsive-embeds.js

 

The "%%target%%" part was likely supposed to be replaced with a directory name.

And we have scans for some configuration files that may contain credentials for SMS services like:

/twilio/.config/bin/aws/lib/.env
/twilio-labs/configure-env
/twillio_creds.php
/twilio/.secrets
/sms_config.json
/sms/actuator/env
/sms/env

And many similar URLs.
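
If you want to check whether your own web servers are receiving these probes, a quick pass over the access logs is usually enough. Below is a minimal sketch in Python; the log file location and the indicator list are assumptions that you would adapt to your own environment, for example by adding the URLs listed above.

# Minimal sketch: flag requests for SMS-related paths in a web server access log.
# The log path and indicator list are examples only; extend them with the URLs
# observed in your own logs or honeypots.
INDICATORS = [
    "/wp-content/plugins/sms-alert/",
    "/wp-content/plugins/wp-sms/",
    "/twilio",
    "/sms_config.json",
    "/sms/env",
    "sms_bomber",
]

LOG_FILE = "/var/log/nginx/access.log"  # adjust to your web server

with open(LOG_FILE, encoding="utf-8", errors="replace") as log:
    for line in log:
        if any(indicator in line.lower() for indicator in INDICATORS):
            print(line.rstrip())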

Scans also look for likely API endpoints used to send SMS messages. I am not sure if they are associated with a particular product or software:

/sms/api/
/api/v1/livechat/sms-incoming/twilio
/sms.py
/Sms_Bomber.exe

SMS_Bomber is a script designed to send SMS messages in bulk [1]. The scans may be attempting to identify copies left behind by other attackers.

SMS_Bomber also suggests the use of a proxy, and we see some scans looking for proxies that can be used to reach websites that send SMS messages, such as:

https://sms-activate.org
 

Not properly securing SMS gateways or credentials used to connect to SMS services could result in significant costs if an attacker abuses the service. It may also make the phone number unusable, as telecom providers and end users will block it. This may also result in reputational damage, and you will likely have to use a different phone number after it has been abused.

 

[1] https://github.com/iAditya-Nanda/SMS_Bomber


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Llama 4 models from Meta now available in Amazon Bedrock serverless

This post was originally published on this site

The newest AI models from Meta, Llama 4 Scout 17B and Llama 4 Maverick 17B, are now available as a fully managed, serverless option in Amazon Bedrock. These new foundation models (FMs) deliver natively multimodal capabilities with early fusion technology that you can use for precise image grounding and extended context processing in your applications.

Llama 4 uses an innovative mixture-of-experts (MoE) architecture that provides enhanced performance across reasoning and image understanding tasks while optimizing for both cost and speed. This architectural approach enables Llama 4 to offer improved performance at lower cost compared to Llama 3, with expanded language support for global applications.

The models were already available on Amazon SageMaker JumpStart, and you can now use them in Amazon Bedrock to streamline building and scaling generative AI applications with enterprise-grade security and privacy.

Llama 4 Maverick 17B – A natively multimodal model featuring 128 experts and 400 billion total parameters. It excels in image and text understanding, making it suitable for versatile assistant and chat applications. The model supports a 1 million token context window, giving you the flexibility to process lengthy documents and complex inputs.

Llama 4 Scout 17B – A general-purpose multimodal model with 16 experts, 17 billion active parameters, and 109 billion total parameters that delivers superior performance compared to all previous Llama models. Amazon Bedrock currently supports a 3.5 million token context window for Llama 4 Scout, with plans to expand in the near future.

Use cases for Llama 4 models
You can use the advanced capabilities of Llama 4 models for a wide range of use cases across industries:

Enterprise applications – Build intelligent agents that can reason across tools and workflows, process multimodal inputs, and deliver high-quality responses for business applications.

Multilingual assistants – Create chat applications that understand images and provide high-quality responses across multiple languages, making them accessible to global audiences.

Code and document intelligence – Develop applications that can understand code, extract structured data from documents, and provide insightful analysis across large volumes of text and code.

Customer support – Enhance support systems with image analysis capabilities, enabling more effective problem resolution when customers share screenshots or photos.

Content creation – Generate creative content across multiple languages, with the ability to understand and respond to visual inputs.

Research – Build research applications that can integrate and analyze multimodal data, providing insights across text and images.

Using Llama 4 models in Amazon Bedrock
To use these new serverless models in Amazon Bedrock, I first need to request access. In the Amazon Bedrock console, I choose Model access from the navigation pane to toggle access to Llama 4 Maverick 17B and Llama 4 Scout 17B models.

Console screenshot.

The Llama 4 models can be easily integrated into your applications using the Amazon Bedrock Converse API, which provides a unified interface for conversational AI interactions.

Here’s an example of how to use the AWS SDK for Python (Boto3) with Llama 4 Maverick for a multimodal conversation:

import boto3
import json
import os

AWS_REGION = "us-west-2"
MODEL_ID = "us.meta.llama4-maverick-17b-instruct-v1:0"
IMAGE_PATH = "image.jpg"


def get_file_extension(filename: str) -> str:
    """Get the file extension."""
    extension = os.path.splitext(filename)[1].lower()[1:] or 'txt'
    if extension == 'jpg':
        extension = 'jpeg'
    return extension


def read_file(file_path: str) -> bytes:
    """Read a file in binary mode."""
    try:
        with open(file_path, 'rb') as file:
            return file.read()
    except Exception as e:
        raise Exception(f"Error reading file {file_path}: {str(e)}")

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name=AWS_REGION
)

request_body = {
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "text": "What can you tell me about this image?"
                },
                {
                    "image": {
                        "format": get_file_extension(IMAGE_PATH),
                        "source": {"bytes": read_file(IMAGE_PATH)},
                    }
                },
            ],
        }
    ]
}

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=request_body["messages"]
)

print(response["output"]["message"]["content"][-1]["text"])

This example demonstrates how to send both text and image inputs to the model and receive a conversational response. The Converse API abstracts away the complexity of working with different model input formats, providing a consistent interface across models in Amazon Bedrock.

For more interactive use cases, you can also use the streaming capabilities of the Converse API:

response_stream = bedrock_runtime.converse_stream(
    modelId=MODEL_ID,
    messages=request_body['messages']
)

stream = response_stream.get('stream')
if stream:
    for event in stream:

        if 'messageStart' in event:
            print(f"\nRole: {event['messageStart']['role']}")

        if 'contentBlockDelta' in event:
            print(event['contentBlockDelta']['delta']['text'], end="")

        if 'messageStop' in event:
            print(f"\nStop reason: {event['messageStop']['stopReason']}")

        if 'metadata' in event:
            metadata = event['metadata']
            if 'usage' in metadata:
                print(f"Usage: {json.dumps(metadata['usage'], indent=4)}")
            if 'metrics' in metadata:
                print(f"Metrics: {json.dumps(metadata['metrics'], indent=4)}")

With streaming, your applications can provide a more responsive experience by displaying model outputs as they are generated.

Things to know
The Llama 4 models are available today with a fully managed, serverless experience in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) AWS Regions. You can also access Llama 4 in US East (Ohio) via cross-region inference.

As usual with Amazon Bedrock, you pay for what you use. For more information, see Amazon Bedrock pricing.

These models support 12 languages for text (English, French, German, Hindi, Italian, Portuguese, Spanish, Thai, Arabic, Indonesian, Tagalog, and Vietnamese) and English when processing images.

To start using these new models today, visit the Meta Llama models section in the Amazon Bedrock User Guide. You can also explore how our Builder communities are using Amazon Bedrock in their solutions in the generative AI section of our community.aws site.

Danilo



Reduce your operational overhead today with Amazon CloudFront SaaS Manager

This post was originally published on this site

Today, I’m happy to announce the general availability of Amazon CloudFront SaaS Manager, a new feature that helps software-as-a-service (SaaS) providers, web development platform providers, and companies with multiple brands and websites efficiently manage delivery across multiple domains. Customers already use CloudFront to securely deliver content with low latency and high transfer speeds. CloudFront SaaS Manager addresses a critical challenge these organizations face: managing tenant websites at scale, each requiring TLS certificates, distributed denial-of-service (DDoS) protection, and performance monitoring.

With CloudFront SaaS Manager, web development platform providers and enterprise SaaS providers who manage a large number of domains can use simple APIs and reusable configurations that use CloudFront edge locations worldwide, AWS WAF, and AWS Certificate Manager. CloudFront SaaS Manager can dramatically reduce operational complexity while providing high-performance content delivery and enterprise-grade security for every customer domain.

How it works
In CloudFront, you can use multi-tenant SaaS deployments, a strategy where a single CloudFront distribution serves content for multiple distinct tenants (users or organizations). CloudFront SaaS Manager uses a new template-based distribution model called a multi-tenant distribution to serve content across multiple domains while sharing configuration and infrastructure. However, if you are serving a single website or application, a standard distribution is the recommended choice.

A template distribution defines the base configuration that will be used across domains, such as origin configurations, cache behaviors, and security settings. Each template distribution has distribution tenants that represent domain-specific settings such as origin paths or origin domain names, as well as web access control list (ACL) overrides and custom TLS certificates.

Optionally, multiple distribution tenants can use the same connection group that provides the CloudFront routing endpoint that serves content to viewers. DNS records point to the CloudFront endpoint of the connection group using a Canonical Name Record (CNAME).

To learn more, visit Understand how multi-tenant distributions work in the Amazon CloudFront Developer Guide.

CloudFront SaaS Manager in action
I’d like to give you an example to help you understand the capabilities of CloudFront SaaS Manager. You have a company called MyStore, a popular e-commerce platform that helps your customers easily set up and manage an online store. MyStore’s tenants already enjoy outstanding customer service, security, reliability, and ease of use with little setup required to get a store up and running, resulting in 99.95 percent uptime over the last 12 months.

Customers of MyStore are unevenly distributed across three different pricing tiers: Bronze, Silver, and Gold, and each customer is assigned a persistent mystore.app subdomain. You can apply these tiers to different customer segments, customized settings, and operational Regions. For example, you can add AWS WAF service in the Gold tier as an advanced feature. In this example, MyStore has decided not to maintain their own web servers to handle TLS connections and security for a growing number of applications hosted on their platform. They are evaluating CloudFront to see if that will help them reduce operational overhead.

Let’s look at how, as MyStore, you would configure your customers’ websites across multiple tiers with CloudFront SaaS Manager. To get started, you can create a multi-tenant distribution that acts as a template for each of the three pricing tiers that MyStore offers: Bronze, Silver, and Gold, shown under Multi-tenant distribution in the SaaS menu of the Amazon CloudFront console.

To create a multi-tenant distribution, choose Create distribution and select Multi-tenant architecture if you have multiple websites or applications that will share the same configuration. Follow the steps to provide basic details such as a name for your distribution, tags, and a wildcard certificate; specify the origin type and location for your content, such as a website or app; and enable security protections with the AWS WAF web ACL feature.

When the multi-tenant distribution is created successfully, you can create a distribution tenant by choosing Create tenant in the Distribution tenants menu in the left navigation pane. For example, you can create a distribution tenant for an active customer associated with the Bronze tier.

Each tenant can be associated with up to one multi-tenant distribution. You can add one or more domains of your customers to a distribution tenant and assign custom parameter values such as origin domains and origin paths. A distribution tenant can inherit the TLS certificate and security configuration of its associated multi-tenant distribution. You can also attach a new certificate specifically for the tenant, or you can override the tenant security configuration.

When the distribution tenant is created successfully, you can finalize this step by updating a DNS record to route traffic to the domain in this distribution tenant and creating a CNAME pointed to the CloudFront application endpoint. To learn more, visit Create a distribution in the Amazon CloudFront Developer Guide.
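
If your DNS is hosted in Amazon Route 53, that CNAME can also be created programmatically. Here is a minimal sketch using the AWS SDK for Python (Boto3); the hosted zone ID, tenant domain, and connection group endpoint are placeholder values, not actual MyStore resources.

import boto3

route53 = boto3.client("route53")

# Placeholder values: replace with your own hosted zone, the tenant's domain,
# and the CloudFront routing endpoint of the tenant's connection group.
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"
TENANT_DOMAIN = "store1.mystore.app"
CONNECTION_GROUP_ENDPOINT = "d111111abcdef8.cloudfront.net"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Route the tenant domain to the CloudFront connection group",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": TENANT_DOMAIN,
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": CONNECTION_GROUP_ENDPOINT}],
                },
            }
        ],
    },
)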

Now you can see all of your customers’ distribution tenants and the multi-tenant distributions they are associated with.

As your customers’ business needs grow, you can upgrade them from the Bronze to the Silver tier by moving their distribution tenants to the appropriate multi-tenant distribution.

During the monthly maintenance process, you identify domains associated with inactive customer accounts that can be safely decommissioned. If you’ve decided to deprecate the Bronze tier and migrate all remaining Bronze customers to the Silver tier, you can then delete the multi-tenant distribution associated with the Bronze tier. To learn more, visit Update a distribution or Distribution tenant customizations in the Amazon CloudFront Developer Guide.

By default, your AWS account has one connection group that handles all your CloudFront traffic. You can enable Connection group in the Settings menu in the left navigation pane to create additional connection groups, giving you more control over traffic management and tenant isolation.

To learn more, visit Create custom connection group in the Amazon CloudFront Developer Guide.

Now available
Amazon CloudFront SaaS Manager is available today. To learn more, visit the CloudFront SaaS Manager product page and documentation page. To learn about SaaS on AWS, visit AWS SaaS Factory.

Give CloudFront SaaS Manager a try in the CloudFront console today and send feedback to AWS re:Post for Amazon CloudFront or through your usual AWS Support contacts.

Veliswa.
_______________________________________________


Writer Palmyra X5 and X4 foundation models are now available in Amazon Bedrock

This post was originally published on this site

One thing we’ve witnessed in recent months is the expansion of context windows in foundation models (FMs), with many now handling sequence lengths that would have been unimaginable just a year ago. However, building AI-powered applications that can process vast amounts of information while maintaining the reliability and security standards required for enterprise use remains challenging.

For these reasons, we’re excited to announce that Writer Palmyra X5 and X4 models are available today in Amazon Bedrock as a fully managed, serverless offering. AWS is the first major cloud provider to deliver fully managed models from Writer. Palmyra X5 is a new model launched today by Writer. Palmyra X4 was previously available in Amazon Bedrock Marketplace.

Writer Palmyra models offer robust reasoning capabilities that support complex agent-based workflows while maintaining enterprise security standards and reliability. Palmyra X5 features a one million token context window, and Palmyra X4 supports a 128K token context window. With these extensive context windows, these models remove some of the traditional constraints for app and agent development, enabling deeper analysis and more comprehensive task completion.

With this launch, Amazon Bedrock continues to bring access to the most advanced models and the tools you need to build generative AI applications with security, privacy, and responsible AI.

As a pioneer in FM development, Writer trains and fine-tunes its industry-leading models on Amazon SageMaker HyperPod. With its optimized distributed training environment, Writer reduces training time and brings its models to market faster.

Palmyra X5 and X4 use cases
Writer Palmyra X5 and X4 are designed specifically for enterprise use cases, combining powerful capabilities with stringent security measures, including System and Organization Controls (SOC) 2, Payment Card Industry Data Security Standard (PCI DSS), and Health Insurance Portability and Accountability Act (HIPAA) compliance certifications.

Palmyra X5 and X4 models excel in various enterprise use cases across multiple industries:

Financial services – Palmyra models power solutions across investment banking and asset and wealth management, including deal transaction support, 10-Q, 10-K and earnings transcript highlights, fund and market research, and personalized client outreach at scale.

Healthcare and life science – Payors and providers use Palmyra models to build solutions for member acquisition and onboarding, appeals and grievances, case and utilization management, and employer request for proposal (RFP) response. Pharmaceutical companies use these models for commercial applications, medical affairs, R&D, and clinical trials.

Retail and consumer goods – Palmyra models enable AI solutions for product description creation and variation, performance analysis, SEO updates, brand and compliance reviews, automated campaign workflows, and RFP analysis and response.

Technology – Companies across the technology sector implement Palmyra models for personalized and account-based marketing, content creation, campaign workflow automation, account preparation and research, knowledge support, job briefs and candidate reports, and RFP responses.

Palmyra models support a comprehensive suite of enterprise-grade capabilities, including:

Adaptive thinking – Hybrid models combining advanced reasoning with enterprise-grade reliability, excelling at complex problem-solving and sophisticated decision-making processes.

Multistep tool-calling – Support for advanced tool-calling capabilities that can be used in complex multistep workflows and agentic actions, including interaction with enterprise systems to perform tasks like updating systems, executing transactions, sending emails, and triggering workflows.

Enterprise-grade reliability – Consistent, accurate results while maintaining strict quality standards required for enterprise use, with models specifically trained on business content to align outputs with professional standards.

Using Palmyra X5 and X4 in Amazon Bedrock
As with all new serverless models in Amazon Bedrock, I first need to request access. In the Amazon Bedrock console, I choose Model access from the navigation pane to enable access to the Palmyra X5 and Palmyra X4 models.

Console screenshot

When I have access to the models, I can start building applications with any AWS SDKs using the Amazon Bedrock Converse API. The models use cross-Region inference with these inference profiles:

  • For Palmyra X5: us.writer.palmyra-x5-v1:0
  • For Palmyra X4: us.writer.palmyra-x4-v1:0

Here’s a sample implementation with the AWS SDK for Python (Boto3). In this scenario, there is a new version of an existing product. I need to prepare a detailed comparison of what’s new. I have the old and new product manuals. I use the large input context of Palmyra X5 to read and compare the two versions of the manual and prepare a first draft of the comparison document.

import sys
import os
import boto3
import re

AWS_REGION = "us-west-2"
MODEL_ID = "us.writer.palmyra-x5-v1:0"
DEFAULT_OUTPUT_FILE = "product_comparison.md"

def create_bedrock_runtime_client(region: str = AWS_REGION):
    """Create and return a Bedrock client."""
    return boto3.client('bedrock-runtime', region_name=region)

def get_file_extension(filename: str) -> str:
    """Get the file extension."""
    return os.path.splitext(filename)[1].lower()[1:] or 'txt'

def sanitize_document_name(filename: str) -> str:
    """Sanitize document name."""
    # Remove extension and get base name
    name = os.path.splitext(filename)[0]
    
    # Replace invalid characters with space
    name = re.sub(r'[^a-zA-Z0-9\s\-\(\)\[\]]', ' ', name)
    
    # Replace multiple spaces with single space
    name = re.sub(r'\s+', ' ', name)
    
    # Strip leading/trailing spaces
    return name.strip()

def read_file(file_path: str) -> bytes:
    """Read a file in binary mode."""
    try:
        with open(file_path, 'rb') as file:
            return file.read()
    except Exception as e:
        raise Exception(f"Error reading file {file_path}: {str(e)}")

def generate_comparison(client, document1: bytes, document2: bytes, filename1: str, filename2: str) -> str:
    """Generate a markdown comparison of two product manuals."""
    print(f"Generating comparison for {filename1} and {filename2}")
    try:
        response = client.converse(
            modelId=MODEL_ID,
            messages=[
                {
                    "role": "user",
                    "content": [
                        {
                            "text": "Please compare these two product manuals and create a detailed comparison in markdown format. Focus on comparing key features, specifications, and highlight the main differences between the products."
                        },
                        {
                            "document": {
                                "format": get_file_extension(filename1),
                                "name": sanitize_document_name(filename1),
                                "source": {
                                    "bytes": document1
                                }
                            }
                        },
                        {
                            "document": {
                                "format": get_file_extension(filename2),
                                "name": sanitize_document_name(filename2),
                                "source": {
                                    "bytes": document2
                                }
                            }
                        }
                    ]
                }
            ]
        )
        return response['output']['message']['content'][0]['text']
    except Exception as e:
        raise Exception(f"Error generating comparison: {str(e)}")

def main():
    if len(sys.argv) < 3 or len(sys.argv) > 4:
        cmd = sys.argv[0]
        print(f"Usage: {cmd} <manual1_path> <manual2_path> [output_file]")
        sys.exit(1)

    manual1_path = sys.argv[1]
    manual2_path = sys.argv[2]
    output_file = sys.argv[3] if len(sys.argv) == 4 else DEFAULT_OUTPUT_FILE
    paths = [manual1_path, manual2_path]

    # Check each file's existence
    for path in paths:
        if not os.path.exists(path):
            print(f"Error: File does not exist: {path}")
            sys.exit(1)

    try:
        # Create Bedrock client
        bedrock_runtime = create_bedrock_runtime_client()

        # Read both manuals
        print("Reading documents...")
        manual1_content = read_file(manual1_path)
        manual2_content = read_file(manual2_path)

        # Generate comparison directly from the documents
        print("Generating comparison...")
        comparison = generate_comparison(
            bedrock_runtime,
            manual1_content,
            manual2_content,
            os.path.basename(manual1_path),
            os.path.basename(manual2_path)
        )

        # Save comparison to file
        with open(output_file, 'w') as f:
            f.write(comparison)

        print(f"Comparison generated successfully! Saved to {output_file}")

    except Exception as e:
        print(f"Error: {str(e)}")
        sys.exit(1)

if __name__ == "__main__":
    main()

To learn how to use Amazon Bedrock with AWS SDKs, browse the code samples in the Amazon Bedrock User Guide.

Things to know
Writer Palmyra X5 and X4 models are available in Amazon Bedrock today in the US West (Oregon) AWS Region with cross-Region inference. For the most up-to-date information on model support by Region, refer to the Amazon Bedrock documentation. For information on pricing, visit Amazon Bedrock pricing.

These models support English, Spanish, French, German, Chinese, and multiple other languages, making them suitable for global enterprise applications.

Using the expansive context capabilities of these models, developers can build more sophisticated applications and agents that can process extensive documents, perform complex multistep reasoning, and handle sophisticated agentic workflows.

To start using Writer Palmyra X5 and X4 models today, visit the Writer model section in the Amazon Bedrock User Guide. You can also explore how our Builder communities are using Amazon Bedrock in their solutions in the generative AI section of our community.aws site.

Let us know what you build with these powerful new capabilities!

Danilo



AWS Weekly Roundup: Amazon Q Developer, AWS Account Management updates, and more (April 28, 2025)

This post was originally published on this site

Summit season is in full swing! If you haven’t been to an AWS Summit, I highly recommend you check one out that’s nearby. They are large-scale, all-day events where you can attend talks, watch interesting demos and activities, connect with AWS and industry people, and more. Best of all, they are free—so all you need to do is register! You can find a list of them on the AWS Events page. Incidentally, you can also discover other AWS events going on in your area on that same page; just use the filters on the side to find something that interests you.

Speaking of AWS Summits, this week is the AWS Summit London (April 30). It’s local for me, and I have been heavily involved in the planning. You do not want to miss this! Make sure to check it out and hopefully I’ll be seeing you there.

Ready to find out some highlights from last week’s exciting AWS launches? Let’s go!

New features and capabilities highlights
Let’s start by looking at some of the enhancements launched last week.

  • Amazon Q Developer releases state of the art agent for feature development — AWS has announced an update to Amazon Q Developer’s software development agent, which achieves state-of-the-art performance on industry benchmarks and can generate multiple candidate solutions for coding problems. This new agent provides more reliable suggestions, helping to reduce debugging time and enabling developers to focus on higher-level design and innovation.
  • Amazon Cognito now supports refresh token rotation — Amazon Cognito now supports OAuth 2.0 refresh token rotation, allowing user pool clients to automatically replace existing refresh tokens with new ones at regular intervals, enhancing security without requiring users to re-authenticate. This feature helps customers achieve both seamless user experience and improved security by automatically updating refresh tokens frequently, rather than having to choose between long-lived tokens for convenience, or short-lived tokens for security.
  • Amazon Bedrock Intelligent Prompt Routing is now generally available — Amazon Bedrock’s Intelligent Prompt Routing, now generally available, automatically routes prompts to different foundation models within a model family to optimize response quality and cost. The service now offers increased configurability across multiple model families including Claude (Anthropic), Llama (Meta), and Nova (Amazon), allowing users to choose any two models from a family and set custom routing criteria.
  • Upgrades to Amazon Q Business integrations for M365 Word and Outlook — Amazon Q Business integrations for Microsoft Word and Outlook now have the ability to search company knowledge bases, support image attachments, and handle larger context windows for more detailed prompts. These enhancements enable users to seamlessly access indexed company data and incorporate richer content while working on documents and emails, without needing to switch between different applications or contexts.

Security
There were a few new security improvements released last week, but these are the ones that caught my eye:

  • AWS Account Management now supports account name update via authorized IAM principals — AWS now allows IAM principals to update account names, removing the previous requirement for root user access. This applies to both standalone accounts and member accounts within AWS Organizations, where authorized IAM principals in management and delegated admin accounts can manage account names centrally.
  • AWS Resource Explorer now supports AWS PrivateLink — AWS Resource Explorer now supports AWS PrivateLink across all commercial Regions, enabling secure resource discovery and search capabilities across AWS Regions and accounts within your VPC, without requiring public internet access.
  • Amazon SageMaker Lakehouse now supports attribute based access control — Amazon SageMaker Lakehouse now supports attribute-based access control (ABAC), allowing administrators to manage data access permissions using dynamic attributes associated with IAM identities rather than creating individual policies. This simplifies access management by enabling permissions to be automatically granted to any IAM principal with matching tags, making it more efficient to handle access control as teams grow.

Networking
As you may be aware, there is a growing industry push to adopt IPv6 as the default protocol for new systems while migrating existing infrastructure where possible. This week, two more services have added their support to help customers towards that goal:

Capacity and costs
Customers using Amazon Kinesis Data Streams can enjoy higher default quotas, while Amazon Redshift Serverless customers get a new cost saving opportunity.

For a full list of AWS announcements, be sure to visit the What’s New with AWS? page.

Recommended Learning Resources
Everyone’s talking about MCP recently! Here are two great blog posts that I think will help you catch up and learn more about the possibilities of how to use MCP on AWS.

Our Weekly Roundup is published every Monday to help you keep up with AWS launches, so don’t forget to check it again next week for more exciting news!

Enjoy the rest of your day!



SRUM-DUMP Version 3: Uncovering Malware Activity in Forensics, (Sun, Apr 27th)

This post was originally published on this site



For digital forensics and incident response professionals, extracting precise evidence from Windows systems is critical to understanding and mitigating threats. I’m excited to introduce SRUM-DUMP Version 3, a powerful forensic tool I’ve developed to analyze the Windows System Resource Usage Monitor (SRUM) database. Available on GitHub at SRUM-DUMP Repository, this version offers significant improvements, including a user-friendly GUI and customizable output. In this post, I’ll guide you through using SRUM-DUMP v3’s GUI to investigate a scenario where malware (malware.exe) exfiltrates intellectual property over a wireless network. We’ll explore the 3-step wizard, customize the analysis to highlight malware.exe, and examine where it appears in the output spreadsheet and what each tab reveals about the incident.

What is SRUM-DUMP Version 3?

SRUM-DUMP v3 is designed to extract and analyze data from the SRUM database (C:\Windows\System32\sru\SRUDB.dat), which logs system resource usage for up to 30 days. This database is a treasure trove for incident response, capturing details about application executions and network activity. Key features of v3 include:

  • 3-step Wizard for Rapid Analysis: Select the output directory, SRUDB.dat, and the SOFTWARE registry hive, and you’re off!
  • Customizable Configuration: A short analysis generates a srum_dump_config.json file, allowing you to highlight suspicious terms, map network interfaces, and format output.
  • Automated Artifact Detection: Editing the srum_dump_config.json lets you tag suspect processes, users, and networks before the analysis begins.
  • XLSX Analysis: All of the artifacts are tagged, colorized, calculated, filtered, and placed into an XLSX file for easy analysis.

Scenario: Malware Exfiltrating Intellectual Property

Imagine an attacker compromises a Windows workstation, deploying malware.exe to steal sensitive documents over a wireless network. The malware runs as an application, quietly exfiltrating data to a remote server. There is no EDR or application logging to be found, but you must determine what was stolen and how. The incident response team acquires SRUDB.dat and the SOFTWARE registry hive (C:\Windows\System32\config\SOFTWARE) and uses SRUM-DUMP v3 to analyze the evidence.

Using SRUM-DUMP v3’s GUI: Step-by-Step

SRUM-DUMP v3’s GUI streamlines the analysis process through a 3-step wizard, followed by configuration customization and result generation.

Step 1: Launch the 3-Step Wizard  

  1. Launch the Tool: Run the prebuilt executable, available from the Releases page.
  2. Select an Output Directory: Choose an empty directory where the tool will save the Excel spreadsheet and configuration file.
  3. Select the SRUDB.DAT File: Locate SRUDB.dat, either in your forensic image or at C:\Windows\System32\sru\SRUDB.dat on a live system.
  4. Select the SOFTWARE Registry Hive (Optional): Provide the SOFTWARE hive to enrich network data, such as mapping interface LUIDs to SSIDs (e.g., “CorporateWiFi”).

If you selected files that are locked by the OS on a live system, srum-dump will extract the locked files through Volume Shadow Copies. The files are analyzed, and a configuration file is built containing all of the users, networks, and processes from the selected files.

Step 2: Customize the Configuration

  • After selecting files, SRUM-DUMP processes the SRUM database and generates an srum_dump_config.json file.
  • Click “EDIT” to open the configuration file.
  • Modify the "dirty_words" section to highlight suspect processes (malware.exe in this example):

{
    "dirty_words": {
        "malware.exe": "highlight-red"
    }
}
    
  • This ensures any instance of malware.exe in the output is highlighted in red.
  • Optionally, add tags to other suspicious users, processes, and applications. For example, if (markb) was a compromised user and "CorporateWiFi" was a suspicious Wi-Fi network, you could add tags to the tables in the srum_dump_config.json file.

{
    "SRUDbIdMapTable": {
        "3": "S-1-5-21-1234567890-0987654321-1234567890-1001 (markb) - CompromisedUser"
    },
    "network_interfaces": {
        "268435498": "CorporateWiFi - SuspectWifi"
    }
}
    
  • Save the configuration file and click “CONFIRM”.

Step 3: Generate and Review the Spreadsheet

  • Click “CONTINUE” to run the analysis with the customized configuration.
  • A progress dialog appears, and once complete, the tool saves an updated Excel spreadsheet in the output directory.
  • Open the spreadsheet to examine the results.

Where Does malware.exe Appear?

The Excel spreadsheet contains multiple tabs, each corresponding to a SRUM database table. For this scenario, we will examine just two of the locations where malware.exe will appear:

  • Application Timeline: Logs application executions, including executable names, user SIDs, timestamps, and resource usage. Directly lists malware.exe in the AppId column, highlighted if configured.
  • Network Data: Records network activity, including bytes sent/received, interface LUIDs, and timestamps. Indirectly relevant by showing network activity during malware.exe’s execution.

Application Timeline Tab

  • Content: Each row represents an application execution event over the past 30 days.
  • Where malware.exe Appears: In the AppId column, rows containing malware.exe will be highlighted in red (based on the “dirty_words” configuration).
  • Key Columns:
    • AppId: The application’s identifier (e.g., malware.exe).
    • UserSid: The security identifier of the user running the application, mappable to a username (e.g., “CompromisedUser”).
    • TimeStamp: The UTC date and time of execution (e.g., 2025-04-15 02:00:00).
    • CycleTime: CPU usage, indicating the malware’s processing intensity.
    • WorkingSetSize: Memory usage, which may reveal unusual patterns.
  • Insights for the Incident:
    • Confirms malware.exe was executed, providing a timeline of its activity.
    • Identifies the user account involved, aiding in attribution.
    • Reveals resource consumption, suggesting whether the malware was performing tasks like data encryption or exfiltration.

Network Data Tab

  • Content: Each row represents a network activity event, detailing data transfers across interfaces.
  • Relation to malware.exe: While malware.exe isn’t listed directly, you can correlate timestamps with the Application Timeline tab to identify network activity during its execution.
  • Key Columns:
    • InterfaceLuid: Identifies the network interface (e.g., wireless adapter). With the SOFTWARE hive, this may be mapped to an SSID like “CorporateWiFi.”
    • BytesSent and BytesRecvd: Quantities of data transferred (e.g., 500 MB sent).
    • TimeStamp: When the activity occurred (e.g., 2025-04-15 02:00:00).
  • Insights for the Incident:
    • High BytesSent values during malware.exe’s execution suggest data exfiltration.
    • The SSID mapping confirms the use of a specific wireless network, aligning with the scenario.
    • Timestamps link network activity to the malware’s runtime, strengthening evidence of its role.

Correlating Evidence

To reconstruct the incident:

  1. Identify malware.exe Activity: In the Application Timeline tab, note timestamps when malware.exe was active (e.g., 2025-04-15 02:00:00).
  2. Check Network Activity: In the Network Data tab, look for high BytesSent on the wireless interface at matching timestamps.
  3. Build the Timeline: Combine these findings to show that malware.exe executed and simultaneously sent large amounts of data, confirming intellectual property theft.

For example:

  • Application Timeline: malware.exe ran at 2025-04-15 02:00:00 with high CycleTime.
  • Network Data: 500 MB of BytesSent on “CorporateWiFi” at 2025-04-15 02:00:00.

This correlation provides compelling evidence of the malware’s actions.
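
If you prefer to script this correlation rather than filter in Excel, a few lines of pandas can do it. This is only a rough sketch: the output file name, sheet names, and column names are assumed to match the tabs and columns described above, and may need adjusting to the spreadsheet SRUM-DUMP actually produces for your case.

import pandas as pd

XLSX = "SRUM_DUMP_OUTPUT.xlsx"  # assumed output file name

# Load the two tabs discussed above (sheet names assumed to match the tab names).
apps = pd.read_excel(XLSX, sheet_name="Application Timeline")
net = pd.read_excel(XLSX, sheet_name="Network Data")

# Timestamps of malware.exe executions from the Application Timeline tab.
malware_times = pd.to_datetime(
    apps.loc[
        apps["AppId"].astype(str).str.contains("malware.exe", case=False, regex=False, na=False),
        "TimeStamp",
    ]
)

# Network records within one hour of any malware.exe execution.
net["TimeStamp"] = pd.to_datetime(net["TimeStamp"])
window = pd.Timedelta(hours=1)
hits = net[net["TimeStamp"].apply(
    lambda t: bool((abs(malware_times - t) <= window).any())
)]

print(hits[["TimeStamp", "InterfaceLuid", "BytesSent", "BytesRecvd"]]
      .sort_values("BytesSent", ascending=False))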

Getting Started

Download the prebuilt executable from the Releases page and follow the GUI steps outlined above. For advanced configuration options, consult the Configuration File Documentation.

SRUM-DUMP v3 empowers you to tackle malware investigations, insider threats, and system anomalies with precision, making it an indispensable tool for modern incident response.

Learn More

I'm teaching at the following events. Come check it out!

  • SEC673 ADVANCED Python in Miami, FL June 2, 2025
  • SEC573 at SANSFire in Baltimore, MD July 14, 2025
  • SEC573 in Melbourne, VIC AU August 17, 2025
  • SEC573 in Las Vegas, NV September 22, 2025

 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Steganography Analysis With pngdump.py, (Sat, Apr 26th)

This post was originally published on this site

I like it when a diary entry like "Example of a Payload Delivered Through Steganography" is published: it gives me an opportunity to test my tools, in particular pngdump.py, a tool to analyze PNG files.

A PNG file consists of a header followed by chunks. pngdump.py shows this (sample c2219ddbd3456e3df0a8b10c7bbdf018da031d8ba5e9b71ede45618f50f2f4b6):

The IHDR chunk gives us information about the image: it's 31744 pixels wide and 1 pixel high. That's already unusual!

The bit depth is 8 bits and the color type is 6, which is RGBA (Red, Green, Blue, and Alpha/Transparency). So there are 4 channels in total with a resolution of 8 bits each, thus 4 bytes per pixel.

The IDAT chunk contains the compressed (zlib) image: there's only one IDAT chunk in this image. And the line filter is None: this means that the pixel data is not encoded.

So let's take a look at the decompressed raw pixel data:

We see the letters M and Z: these letters are characteristic for the header of a PE file.

And a bit further we see the letters "This program …".

So it's clear that a PE file is embedded in the pixel data.

Every fourth byte is a byte of the PE file, and the first byte of the PE file is the second byte of the pixel data: so the PE file is embedded in the second channel of the pixel data.

We can select these bytes with a tool like translate.py, that takes a Python function to manipulate the bytes it receives.

data[1::4] is a Python slice that starts with the second byte (position 1), ends when there is no more data, and selects every fourth byte (a step of 4). This allows us to extract the second channel:

pngdump.py's option -d performs a binary dump, piping the raw pixel data into translate.py. translate.py's option -f reads all the data in one go (by default, translate.py operates byte per byte). lambda data: data[1::4] is a Python function that takes the raw pixel data as input (data) and returns the bytes of the second channel via a Python slice that selects every fourth byte starting with the second byte of the raw pixel data.
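
If you do not have translate.py at hand, the same channel extraction can be reproduced in a few lines of plain Python. This is just a sketch: it assumes you have already saved the decompressed raw pixel data to a file (for example, by redirecting the binary dump produced with option -d), and the file names are placeholders.

# Minimal sketch: carve the second channel out of raw RGBA pixel data.
# "raw_pixels.bin" is assumed to contain the decompressed pixel data and
# "carved.bin" will receive the embedded PE file.
with open("raw_pixels.bin", "rb") as f:
    data = f.read()

# Every fourth byte, starting at the second byte (offset 1) = second channel.
embedded = data[1::4]

with open("carved.bin", "wb") as f:
    f.write(embedded)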

Finally, the extracted PE file is piped into file-magic.py to identify the file type: it is a .NET file, as Xavier explained.

We can also check that it is a PE file with a tool like pecheck.py:

And we can calculate the hashes with hash.py to look up the file on VirusTotal:

This is the SHA256 of the extracted PE file: 8f4cea5d602eaa4e705ef62e2cf00f93ad4b03fb222c35ab39f64c24cdb98462.
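
If hash.py is not available, the same SHA256 can be computed with Python's standard library; "carved.bin" again refers to the hypothetical output file from the sketch above.

import hashlib

# Compute the SHA256 of the carved PE file to look it up on VirusTotal.
with open("carved.bin", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())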

It's clear that the steganography works in this example: 0 detections for the PNG file on VirusTotal, and 49 detections for the embedded PE file.


Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Example of a Payload Delivered Through Steganography, (Fri, Apr 25th)

This post was originally published on this site

In this diary, I’ll show you a practical example of how steganography is used to hide payloads (or other suspicious data) from security tools and Security Analysts’ eyes. Steganography can be defined like this: It is the art and science of concealing a secret message, file, or image within an ordinary-looking carrier—such as a digital photograph, audio clip, or text—so that the very existence of the hidden data is undetectable to casual observers (read: security people). Many online implementations of basic steganography allow you to embed a message (a string) into a picture[1].

In the works – New Availability Zone in Maryland for US East (Northern Virginia) Region

This post was originally published on this site

The US East (Northern Virginia) Region was the first Region launched by Amazon Web Services (AWS), and it has seen tremendous growth and customer adoption over the past several years. Now hosting active customers ranging from startups to large enterprises, AWS has steadily expanded the US East (Northern Virginia) Region infrastructure and capacity. The US East (Northern Virginia) Region consists of six Availability Zones, providing customers with enhanced redundancy and the ability to architect highly available applications.

Today, we’re announcing that a new Availability Zone located in Maryland will be added to the US East (Northern Virginia) Region, which is expected to open in 2026. This new Availability Zone will be connected to other Availability Zones by high-bandwidth, low-latency network connections over dedicated, fully redundant fiber. The upcoming Availability Zone in Maryland will also be instrumental in supporting the rapid growth of generative AI and advanced computing workloads in the US East (Northern Virginia) Region.

All Availability Zones are physically separated in a Region by a meaningful distance, many kilometers (km) from any other Availability Zone, although all are within 100 km (60 miles) of each other. The network performance is sufficient to accomplish synchronous replication between Availability Zones in Maryland and Virginia within the US East (Northern Virginia) Region. If your application is partitioned across multiple Availability Zones, your workloads are better isolated and protected from issues such as power outages, lightning strikes, tornadoes, earthquakes, and more.

With this announcement, AWS now has four new Regions in the works—New Zealand, Kingdom of Saudi Arabia, Taiwan, and the AWS European Sovereign Cloud—and 13 upcoming new Availability Zones.

Geographic information for the new Availability Zone
In March, we provided more granular visibility into the geographic location information of all AWS Regions and Availability Zones. We have updated the AWS Regions and Availability Zones page to reflect the new geographic information for this upcoming Availability Zone in Maryland. As shown in the following screenshot, the infrastructure for the upcoming Availability Zone will be located in Maryland, United States of America, for the US East (Northern Virginia) us-east-1 Region.

You can continue to use this geographic information to choose Availability Zones that align with your regulatory, compliance, and operational requirements.

After the new Availability Zone is launched, it will be available along with other Availability Zones in the US East (Northern Virginia) Region through the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

Stay tuned
We plan to make this new Availability Zone in the US East (Northern Virginia) Region generally available in 2026. As usual, check out the Regional news of the AWS News Blog so that you’ll be among the first to know when the new Availability Zone is open!

To learn more, visit the AWS Global Infrastructure Regions and Availability Zones page or AWS Regions and Availability Zones in the AWS documentation and send feedback to AWS re:Post or through your usual AWS Support contacts.

Channy



Enhance real-time applications with AWS AppSync Events data source integrations

This post was originally published on this site

Today, we are announcing that AWS AppSync Events now supports data source integrations for channel namespaces, enabling developers to create more sophisticated real-time applications. With this new capability you can associate AWS Lambda functions, Amazon DynamoDB tables, Amazon Aurora databases, and other data sources with channel namespace handlers. With AWS AppSync Events, you can build rich, real-time applications with features like data validation, event transformation, and persistent storage of events.

With these new capabilities, developers can create sophisticated event processing workflows by transforming and filtering events using Lambda functions or save batches of events to DynamoDB using the new AppSync_JS batch utilities. The integration enables complex interactive flows while reducing development time and operational overhead. For example, you can now automatically persist events to a database without writing complex integration code.

First look at data source integrations

Let’s walk through how to set up data source integrations using the AWS Management Console. First, I’ll navigate to AWS AppSync in the console and select my Event API (or create a new one).

Screenshot of the AWS Console

Persisting event data directly to DynamoDB

There are multiple kinds of data source integrations to choose from. For this first example, I’ll create a DynamoDB table as a data source. I’m going to need a DynamoDB table first, so I head over to DynamoDB in the console and create a new table called event-messages. For this example, all I need to do is create the table with a Partition Key called id. From here, I can click Create table and accept the default table configuration before I head back to AppSync in the console.

Screenshot of the AWS Console for DynamoDB

Back in the AppSync console, I return to the Event API I set up previously, select Data Sources from the tabbed navigation panel and click the Create data source button.

Screenshot of the AWS Console

After giving my Data Source a name, I select Amazon DynamoDB from the Data source drop down menu. This will reveal configuration options for DynamoDB.

Screenshot of the AWS Console

Once my data source is configured, I can implement the handler logic. Here’s an example of a Publish handler that persists events to DynamoDB:

import * as ddb from '@aws-appsync/utils/dynamodb'
import { util } from '@aws-appsync/utils'

const TABLE = 'event-messages' // must match the name of the DynamoDB table created earlier

export const onPublish = {
  request(ctx) {
    const channel = ctx.info.channel.path
    const timestamp = util.time.nowISO8601()
    return ddb.batchPut({
      tables: {
        [TABLE]: ctx.events.map(({id, payload}) => ({
          channel, id, timestamp, ...payload,
        })),
      },
    })
  },
  response(ctx) {
    return ctx.result.data[TABLE].map(({ id, ...payload }) => ({ id, payload }))
  },
}

To add the handler code, I go to the tabbed navigation for Namespaces, where I find a new default namespace already created for me. If I click to open the default namespace, I find the button that allows me to add an Event handler just below the configuration details.

Screenshot of the AWS Console

Clicking on Create event handlers brings me to a new dialog where I choose Code with data source as my configuration, and then select the DynamoDB data source as my publish configuration.

Screenshot of the AWS Console

After saving the handler, I can test the integration using the built-in testing tools in the console. The default values here should work, and as you can see below, I’ve successfully written two events to my DynamoDB table.

Screenshot of the AWS Console

Here’s all my messages captured in DynamoDB!

Screenshot of the AWS Console

Error handling and security

The new data source integrations include comprehensive error handling capabilities. For synchronous operations, you can return specific error messages that will be logged to Amazon CloudWatch, while maintaining security by not exposing sensitive backend information to clients. For authorization scenarios, you can implement custom validation logic using Lambda functions to control access to specific channels or message types.

Available now

AWS AppSync Events data source integrations are available today in all AWS Regions where AWS AppSync is available. You can start using these new features through the AWS AppSync console, AWS Command Line Interface (AWS CLI), or AWS SDKs. There is no additional cost for using data source integrations – you pay only for the underlying resources you use (such as Lambda invocations or DynamoDB operations) and your existing AppSync Events usage.

To learn more about AWS AppSync Events and data source integrations, visit the AWS AppSync Events documentation and get started building more powerful real-time applications today.

— Micah;

