From a Regular Infostealer to its Obfuscated Version, (Sat, Nov 30th)

There are many malicious scripts available on the Internet. GitHub hosts plenty of infostealers and RATs made available “for testing or research purposes”. Here is one that I found recently: Trap-Stealer[1]. Scripts like these are often well obfuscated to slip past security controls and make security analysts' lives harder. Let's review a practical example.

Quickie: Mass BASE64 Decoding, (Fri, Nov 29th)

I was asked how one can decode a bunch of BASE64 encoded IOCs with my tools.

I'm going to illustrate my method using the phishing SVG samples I found on VirusTotal (see "Increase In Phishing SVG Attachments").

In these phishing SVG files, the victim's email address is encoded in BASE64:

With grep, I can select all these lines with BASE64 encoded email addresses:

Then I can pipe this into base64dump.py, my tool to handle BASE64 (and other encodings):
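
The actual commands appear as screenshots in the original diary. As a minimal sketch of the idea, assuming the encoded addresses sit on lines containing a recognizable marker such as atob (a hypothetical placeholder; adapt the pattern to your samples):

# Select the lines carrying BASE64-encoded email addresses and pipe them
# into base64dump.py to get an overview of all detected BASE64 items:
grep 'atob' *.svg | base64dump.py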

You can see the email address in the "Decoded" column (they are redacted to protect the victims).

To get just this info (decoded email addresses), you can use option -s a to select all decoded items, and option -d to dump the decoded values to stdout, like this:
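
As a sketch, reusing the hypothetical grep pattern from above:

# -s a selects all decoded items, -d dumps the decoded values to stdout:
grep 'atob' *.svg | base64dump.py -s a -d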

The problem now is that all the email addresses are concatenated together. To add a newline (or carriage return plus newline on Windows) after each email address, use option -s A (uppercase a):
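
Again as a sketch under the same assumptions:

# Uppercase A appends a newline after each dumped item:
grep 'atob' *.svg | base64dump.py -s A -d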


Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Amazon FSx for Lustre increases throughput to GPU instances by up to 12x

Today, we are announcing support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect Storage (GDS) on Amazon FSx for Lustre. EFA is a network interface for Amazon EC2 instances that makes it possible to run applications requiring high levels of inter-node communication at scale. GDS is a technology that creates a direct data path between local or remote storage and GPU memory. With these enhancements, Amazon FSx for Lustre with EFA/GDS support provides up to 12 times higher per-client throughput (up to 1,200 Gbps) compared to the previous FSx for Lustre version.

You can use FSx for Lustre to build and run the most performance-demanding applications, such as deep learning training, drug discovery, financial modeling, and autonomous vehicle development. As datasets grow and new technologies emerge, you can adopt increasingly powerful GPU and HPC instances such as Amazon EC2 P5, Trn1, and Hpc7a. This adoption is driving the need for FSx for Lustre file systems to provide the performance necessary to fully utilize the increasing network bandwidth of these cutting-edge EC2 instances when accessing large datasets. Until now, however, the use of traditional TCP networking limited FSx for Lustre throughput to 100 Gbps for individual client instances.

With EFA and GDS support in FSx for Lustre, you can now achieve up to 1,200 Gbps throughput per client instance (twelve times more throughput than previously) when using P5 GPU instances and NVIDIA CUDA in your applications.

With this new capability, you can fully utilize the network bandwidth of the most powerful compute instances and accelerate your machine learning (ML) and HPC workloads. EFA enhances performance by bypassing the operating system and using the AWS Scalable Reliable Datagram (SRD) protocol to optimize data transfer. GDS further improves performance by enabling direct data transfer between the file system and GPU memory, bypassing the CPU and eliminating redundant memory copies.

Let’s see how this works in practice.

Creating an Amazon FSx for Lustre file system with EFA enabled
To get started, in the Amazon FSx console, I choose Create file system and then Amazon FSx for Lustre.

I enter a name for the file system. In the Deployment and storage type section, I select Persistent, SSD and the new with EFA enabled option. I select 1000 MB/s/TiB in the Throughput per unit of storage section. With these settings, I enter 4.8 TiB for Storage capacity, which is the minimum supported with these settings.

Console screenshot.

For networking, I use the default virtual private cloud (VPC) and an EFA-enabled security group. I leave all other options at their default values.

Console screenshot.

I review all the options and proceed to create the file system. After a few minutes, the file system is ready to be used.
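
If you prefer to script the deployment, a roughly equivalent AWS CLI call could look like the sketch below. The subnet and security group IDs are placeholders, and the EfaEnabled key is my assumption about how the CLI exposes the new option, so verify it against the current create-file-system reference.

# Create a Persistent 2, SSD, EFA-enabled file system with 1000 MB/s/TiB
# throughput and the 4.8 TiB (4800 GiB) minimum capacity for these settings:
aws fsx create-file-system \
    --file-system-type LUSTRE \
    --storage-capacity 4800 \
    --storage-type SSD \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --lustre-configuration "DeploymentType=PERSISTENT_2,PerUnitStorageThroughput=1000,EfaEnabled=true"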

Mounting an Amazon FSx for Lustre file system with EFA enabled from an Amazon EC2 instance
In the Amazon EC2 console, I choose Launch instance, enter a name for the instance, and select the Ubuntu Amazon Machine Image (AMI). For Instance type, I select trn1.32xlarge.

Console screenshot.

In Network settings, I edit the default settings and select the same subnet used by the FSx for Lustre file system. In Firewall (security groups), I select three existing security groups: the EFA-enabled security group used by the FSx for Lustre file system, the default security group, and a security group that provides Secure Shell (SSH) access.

Console screenshot.

In Advanced network configuration, I select ENA and EFA as Interface type. Without this setting, the instance would use traditional TCP networking and the connection with the FSx for Lustre file system would still be limited to 100 Gbps in throughput.

Console screenshot.

To have more throughput, I can add more EFA network interfaces, depending on the instance type.

I launch the instance and, when the instance is ready, I connect using EC2 Instance Connect and follow the instructions for installing the Lustre client in the FSx for Lustre User Guide and configuring EFA clients.

Then, I follow the instructions for mounting an FSx for Lustre file system from an EC2 instance.

I create a folder to use as a mount point:

sudo mkdir -p /fsx

I select the file system in the FSx console and look up the DNS name and Mount name. Using these values, I mount the file system:

sudo mount -t lustre -o relatime,flock file_system_dns_name@tcp:/mountname /fsx

EFA is automatically used when you access an EFA-enabled file system from client instances that support EFA and are using Lustre version 2.15 or higher.
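
As a quick sanity check on the client, you can confirm the Lustre version and the mount. This assumes the lctl utility that ships with the Lustre client packages:

# The EFA data path requires a Lustre 2.15 or higher client:
sudo lctl get_param version

# List mounted Lustre file systems:
mount -t lustre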

Things to know
EFA and GDS support is available today at no additional cost on new Amazon FSx for Lustre file systems in all AWS Regions where the Persistent 2 deployment type is offered. FSx for Lustre automatically uses EFA when customers access an EFA-enabled file system from client instances that support EFA, without requiring any additional configuration. For a list of EC2 client instances that support EFA, see supported instance types in the Amazon EC2 User Guide. The network specifications table there describes network bandwidths and EFA support for instance types in the accelerated computing category.

To use EFA-enabled instances with FSx for Lustre file systems, you must use Lustre 2.15 clients on Ubuntu 22.04 with kernel 6.8 or higher.

Note that your client instances and your file systems must be located in the same subnet within your Amazon Virtual Private Cloud (Amazon VPC).

GDS is automatically supported on EFA-enabled file systems. To use GDS with your FSx for Lustre file systems, you need the NVIDIA Compute Unified Device Architecture (CUDA) package, the open source NVIDIA driver, and the NVIDIA GPUDirect Storage driver installed on your client instance. These packages come preinstalled on the AWS Deep Learning AMI. Your CUDA-enabled application can then use GPUDirect Storage for data transfer between your file system and GPUs.

When planning your deployment, note that EFA-enabled file systems have larger minimum storage capacity increments than file systems that are not EFA-enabled. For instance, if you choose the 1,000 MB/s/TiB throughput tier, the minimum storage capacity for EFA-enabled file systems starts at 4.8 TiB, compared to 1.2 TiB for FSx for Lustre file systems without EFA. If you’re looking to migrate your existing workloads, you can use AWS DataSync to move your data from an existing file system to a new one that supports EFA and GDS.

For maximum flexibility, FSx for Lustre maintains compatibility with both EFA and non-EFA workloads. When accessing an EFA-enabled file system, traffic from non-EFA client instances automatically flows over traditional TCP/IP networking using Elastic Network Adapter (ENA), allowing seamless access for all workloads without any additional configuration.

To learn more about EFA and GDS support on FSx for Lustre, including detailed setup instructions and best practices, visit the Amazon FSx for Lustre documentation. Get started today and experience the fastest storage performance available for your GPU instances in the cloud.

Danilo

Update 11/27: post updated to reflect 12x throughput

SANS ISC Internship Setup: AWS DShield Sensor + DShield SIEM [Guest Diary], (Tue, Nov 26th)

[This is a Guest Diary by John Paul Zaguirre, an ISC intern as part of the SANS.edu BACS program]

Introduction

This blog post documents how to set up the DShield Sensor in AWS and the DShield SIEM locally, and how to connect the two. I initially set up a Raspberry Pi 5 to use as a DShield Sensor, but ultimately switched to AWS after a while. I then added a DShield SIEM hosted locally on my network and connected the AWS sensor to it for my attack observations. This walkthrough is based on multiple pieces of documentation compiled into one; I have linked their GitHub pages under the “References and Resources” section. You can find the full walkthrough on this page – https://github.com/15HzMonitor/Internship-Blog-Post

Requirements – Hardware, Software, and Accounts

Hardware
1.    A machine to host your DShield SIEM. This can be a laptop, a desktop, or a virtual machine running on your desktop. If you're planning to run the SIEM 24/7, I suggest using something other than your daily production machine. In my case, I reused an old desktop as my SIEM so that it could run 24/7. Below are the minimum specs required to run the SIEM, based on Guy's documentation for the Ubuntu setup [1]:
    o    Minimum 8+ GB RAM
        –    If the amount of RAM assigned to each container (see below) is more than 2GB, consider increasing the server RAM capacity.
    o    4-8 Cores
    o    Add 2 partitions, one for the OS, the other for docker
    o    Minimum 300 GB partition assigned to /var/lib/docker (I used a 1TB SATA drive)
2.    USB flash drive(s) to hold the Ubuntu ISO that you will install on your repurposed machine. Use a drive of at least 16GB, and make sure it holds no data you need, since Rufus will format the USB stick when creating a boot drive.

Software
1.    Ubuntu 22.04 LTS Live Server 64-bit for your DShield SIEM. You can download the ISO on the Ubuntu Server Page [2].
2.    Rufus Software to put your Ubuntu ISO into a bootable USB. You can find the latest Rufus version on the Rufus downloads page [3].
3.    (Optional) Software such as Parted Magic or DBAN to wipe the machine you will be using.
4.    Your router's software to assign a static IP to your SIEM and to set up port forwarding.

Accounts
1.    A SANS Internet Storm Center (ISC) account for the API key you will enter into your sensor. You can sign up on the ISC sign-up page [4].
2.    An AWS Account to deploy your DShield Sensor using the Free Tier offer.
If you don't have one yet, you can sign up on the AWS sign up page [5].
3.    An AlienVault OTX account for generating the API code to link to your DShield SIEM. You can sign up on the AlienVault OTX sign up page [6].

Setup Process

1. Set up your DShield Sensor.

  1. Sign up for an AWS account.
  2. Set up an EC2 instance.
  3. Install and set up the DShield Sensor.
  4. Configure EC2 security.

2. Set up your DShield SIEM.

  1. Install Ubuntu Server to a physical machine.
  2. (Optional) Install Ubuntu Server virtually through VMware.
  3. Build a Docker Partition.
  4. Install Docker.
  5. Install and Configure DShield ELK.
    •  An optional setup for using Raspberry Pi as a SIEM has been written by another SANS Student and can be found on their GitHub page [7].

3. Configure Filebeat and connect your DShield Sensor to your DShield SIEM.

  1. Install and configure Filebeat on your DShield sensor and connect it to your ELK.
  2. Troubleshoot and test Filebeat.
  3. Start Filebeat, Elastic-agent, and Softflowd.
  4. Check the status of Filebeat, Elastic-agent, and Softflowd (see the sketch after this list).
  5. Access your dashboards and logs.
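
As a sketch of step 4 in the list above, the checks on a systemd-based sensor could look like this (the service names are assumptions based on default installs; adjust them to your setup):

# Verify the log shipper, agent, and flow exporter are running on the sensor:
sudo systemctl status filebeat
sudo systemctl status elastic-agent
sudo systemctl status softflowd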

4. Harden your DShield SIEM.

  1. Add non-root user(s), install updates, enable unattended upgrades, lock down OpenSSH, and set up Fail2ban.
  2. 10 Tips for Hardening your Linux Servers.

[1] https://github.com/bruneaug/DShield-SIEM#ubuntu-setup
[2] https://ubuntu.com/download/server
[3] https://rufus.ie/downloads/
[4] https://isc.sans.edu/register.html
[5] https://signin.aws.amazon.com/signup?request_type=register
[6] https://otx.alienvault.com/
[7] Installing DShield SIEM on a Raspberry Pi 5 – 8 GB RAM
[8] https://www.sans.edu/cyber-security-programs/bachelors-degree/

Special thanks to the writers of these GitHub documents:

dshield: DShield Raspberry Pi Sensor
Installing DShield SIEM on a Raspberry Pi 5 – 8 GB RAM
DShield-SIEM: DShield Sensor Log Collection with ELK
Dshield-ELK

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Time-based snapshot copy for Amazon EBS

You can now specify a desired completion duration (15 minutes to 48 hours) when you copy an Amazon Elastic Block Store (Amazon EBS) snapshot within or between AWS Regions and/or accounts. This will help you to meet time-based compliance and business requirements for critical workloads. For example:

Testing – Distribute fresh data on a timely basis as part of your Test Data Management (TDM) plan.

Development – Provide your developers with updated snapshot data on a regular and frequent basis.

Disaster Recovery – Ensure that critical snapshots are copied in order to meet a Recovery Point Objective (RPO).

Regardless of your use case, this new feature gives you consistent and predictable copies. This does not affect the performance or reliability of standard copies—you can choose the option and timing that works best for each situation.

Creating a Time-Based Snapshot Copy
I can create time-based snapshot copies from the AWS Management Console, CLI (copy-snapshot), or API (CopySnapshot). While working on this post I created two EBS volumes (100 GiB and 1 TiB), filled each one with files, and created snapshots:

To create a time-based snapshot, I select the source as usual and choose Copy snapshot from the Action menu. I enter a description for the copy, choose the us-east-1 AWS Region as the destination, select Enable time-based copy, and (because this is a time-critical snapshot) enter a 15-minute Completion duration:
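
The same request can be scripted. Here is a hedged sketch of the equivalent CLI call; I believe the duration is exposed as --completion-duration-minutes (mirroring the CompletionDurationMinutes API parameter), but treat the flag name as an assumption and check the current copy-snapshot reference. Note that copy-snapshot is called in the destination region:

# Copy a snapshot from us-west-2 to us-east-1 with a 15-minute deadline:
aws ec2 copy-snapshot \
    --region us-east-1 \
    --source-region us-west-2 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Time-based copy for DR" \
    --completion-duration-minutes 15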

When I click Copy snapshot, the request is accepted (and the copy becomes Pending) only if my account’s throughput quota is not already exhausted by the throughput consumed by other active copies to the destination region. If the account-level throughput quota is already exceeded, the console displays an error.

I can click Launch copy duration calculator to get a better idea of the minimum achievable copy duration for the snapshot. I open the calculator, enter my account’s throughput limit, and choose an evaluation period:

The calculator then uses historical data collected over the course of previous snapshot copies to tell me the minimum achievable completion duration. In this example I copied 1,800,000 MiB in the last 24 hours; with time-based copy and my current account throughput quota of 2,000 MiB/second, I can copy this much data in 15 minutes (1,800,000 MiB ÷ 2,000 MiB/s = 900 seconds).

While the copy is in progress, I can monitor it using the console or by calling DescribeSnapshots and examining the progress field of the result. I can also use the following Amazon EventBridge events to take action (if the copy operation crosses regions, the event is sent in the destination region):

copySnapshot – Sent after the copy operation completes.

copyMissedCompletionDuration – Sent if the copy is still pending when the deadline has passed.
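
To act on these events, you can point an Amazon EventBridge rule at them in the destination region. The pattern below is a sketch: the detail-type and detail.event values are my assumptions based on the usual EBS snapshot notification format, so verify them against the events your account actually receives.

# Create a rule that matches copies that missed their completion duration:
aws events put-rule \
    --name ebs-copy-missed-deadline \
    --event-pattern '{"source":["aws.ec2"],"detail-type":["EBS Snapshot Notification"],"detail":{"event":["copyMissedCompletionDuration"]}}'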

Things to Know
And that’s just about all there is to it! Here’s what you need to know about time-based snapshot copies:

CloudWatch Metrics – The SnapshotCopyBytesTransferred metric is emitted in the destination region and reflects the amount of data transferred between the source and destination regions, in bytes.

Duration – The duration can range from 15 minutes to 48 hours in 15-minute increments, and is specified on a per-copy basis.

Concurrency – If a snapshot is being copied and I initiate a second copy of the same snapshot to the same destination, the duration for the second one starts when the first one is completed.

Throughput – There is a default per-account limit of 2,000 MiB/second between each source and destination pair. If you need additional throughput in order to meet your RPO, you can request an increase via the AWS Support Center. Maximum per-snapshot throughput is 500 MiB/second and cannot be increased.

Pricing – Refer to the Amazon EBS Pricing page for complete pricing information.

Regions – Time-based snapshot copies are available in all AWS Regions.

Jeff;

Announcing future-dated Amazon EC2 On-Demand Capacity Reservations

Customers use Amazon Elastic Compute Cloud (Amazon EC2) to run every type of workload imaginable, including web hosting, big data processing, high-performance computing (HPC), virtual desktops, live event streaming, and databases. Some of these workloads are so critical that customers asked for the ability to reserve capacity for them.

To help customers flexibly reserve capacity, we launched EC2 On-Demand Capacity Reservations (ODCRs) in 2018. Since then, customers have used capacity reservations (CRs) to run critical applications like hosting consumer websites, streaming live sporting events, and processing financial transactions.

Today, we’re announcing the ability to get capacity for future workloads using CRs. Many customers have future events such as product launches, large migrations, or end-of-year sales events like Cyber Monday or Diwali. These events are critical, and customers want to ensure they have the capacity when and where they need it.

While CRs helped customers reserve capacity for these events, they were only available just in time. So customers either needed to provision the capacity ahead of time and pay for it, or plan with precision to provision CRs just in time at the start of the event.

Now you can plan and schedule your CRs up to 120 days in advance. To get started, you specify the capacity you need, the start date, the delivery preference, and the minimum duration you commit to use the capacity reservation. There are no upfront charges to schedule a capacity reservation. After Amazon EC2 evaluates and approves the request, it activates the reservation on the start date, and customers can use it to immediately launch instances.
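
As a sketch, a future-dated request from the CLI might look like the following. The create-capacity-reservation command itself exists today, but the --start-date, --commitment-duration (in seconds), and --delivery-preference flags are my assumptions about how the new options surface, so check the current CLI reference before relying on them.

# Reserve 10 instances (1,920 vCPUs) about two months out, with the
# recommended 14-day minimum commitment (1209600 seconds):
aws ec2 create-capacity-reservation \
    --instance-type c7i.48xlarge \
    --instance-platform Linux/UNIX \
    --availability-zone us-east-1a \
    --instance-count 10 \
    --start-date 2025-02-01T09:00:00Z \
    --commitment-duration 1209600 \
    --delivery-preference fixed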

Getting started with future-dated capacity reservations
To reserve future-dated capacity, choose Capacity Reservations in the Amazon EC2 console, select Create On-Demand Capacity Reservation, and choose Get started.

To create a capacity reservation, specify the instance type, platform, Availability Zone, tenancy, and number of instances you are requesting.

Console screenshot.

In the Capacity Reservation details section, choose At a future date in the Capacity Reservation starts option and choose your start date and commitment duration.

Console screenshot.

You can also choose to end the capacity reservation at a specific time or manually. If you select Manually, the reservation has no end date. It will remain active in your account and continue to be billed until you manually cancel it. To reserve this capacity, choose Create.

Console screenshot.

After you create your capacity request, it appears in the dashboard with an Assessing status. In this state, AWS determines whether your request is supportable, which usually takes up to 5 days. Once AWS determines that the request is supportable, the status changes to Scheduled. In rare cases, your request may be unsupportable.

On your scheduled date, the capacity reservation will change to an Active state, the total instance count will be increased to the amount requested, and you can immediately launch instances.

After activation, you must hold the reservation for at least the commitment duration. After the commitment duration elapses, you can continue to hold and use the reservation if you’d like or cancel it if no longer needed.

Things to know
Here are some things that you should know about the future-dated CRs:

  • Evaluation – Amazon EC2 considers multiple factors when evaluating your request. Along with forecasted supply, Amazon EC2 considers how long you plan to hold the capacity, how early you create the Capacity Reservation relative to your start date, and the size of your request. To improve the chances that Amazon EC2 can support your request, create your reservation at least 56 days (8 weeks) before the start date. Requests must be for at least 100 vCPUs and are currently supported only for C, M, R, T, and I instance types. The recommended minimum commitment for most requests is 14 days.
  • Notification – We recommend monitoring the status of your request through the console or Amazon EventBridge. You can use these notifications to trigger automation or send an email or text update. To learn more, visit Send an email when events happen using Amazon EventBridge in the Amazon EventBridge User Guide.
  • Pricing – Future-dated capacity reservations are billed just like regular CRs. You are charged the equivalent On-Demand rate whether you run instances in the reserved capacity or not. For example, if you create a future-dated CR for 20 instances and run 15 instances, you will be charged for 15 active instances and for 5 unused instances in the reservation, including during the minimum commitment duration. Savings Plans apply to both unused reservations and instances running on the reservation. To learn more, visit Capacity Reservation pricing and billing in the Amazon EC2 User Guide.

Now available
Future-dated EC2 Capacity Reservations are available today in all AWS Regions where Amazon EC2 Capacity Reservations are available.

Give Amazon EC2 Capacity Reservations a try in the Amazon EC2 console. To learn more, visit On-Demand Capacity Reservations in the Amazon EC2 User Guide and send feedback to AWS re:Post for Amazon EC2 or through your usual AWS Support contacts.

Channy

AWS Weekly Roundup: multiple new launches, AI training partnership with Anthropic, and join AWS re:Invent virtually (Nov 25, 2024)

Last week, I saw an astonishing number of new feature and service launches from AWS. This means we are getting closer to AWS re:Invent 2024! Our News Blog team is also finalizing blog posts for re:Invent to introduce some awesome launches from service teams for your reading pleasure.

The most interesting news is that we’re expanding our strategic collaboration with Anthropic as our primary training partner for development of our AWS Trainium chips. This is in addition to being their primary cloud provider for deploying Anthropic’s Claude models in Amazon Bedrock. We’ll keep pushing the boundaries of what customers can achieve with generative AI technologies with these kinds of collaborations.

Last week’s launches
Here are some AWS bundled feature launches:

Amazon Aurora – Amazon Aurora Serverless v2 now supports scaling down to 0 Aurora Capacity Units (ACUs) instead of stopping at 0.5 ACUs, so you can save costs during periods of database inactivity. Amazon Aurora is now compatible with MySQL 8.0.39 and PostgreSQL 17.0 in the Amazon RDS Database preview environment.

Amazon Bedrock – You can quickly build and run complex generative AI workflows without writing code with the general availability of Amazon Bedrock Flows (previously known as Prompt Flows). Amazon Bedrock Knowledge Bases now supports binary vector embeddings for building Retrieval Augmented Generation (RAG) applications. Amazon Bedrock also introduces a preview of Prompt Optimization, which rewrites prompts for higher-quality responses from foundation models (FMs). You can use the AWS Amplify AI kit to get customized responses from Bedrock AI models based on your own data and build web apps with AI capabilities such as chat, conversational search, and summarization.

Amazon CloudFront – You can now use gRPC applications with Amazon CloudFront, which allows bidirectional communication between a client and a server over HTTP/2 connections. Amazon CloudFront introduces Virtual Private Cloud (VPC) origins to deliver content from applications hosted in VPC private subnets, and Anycast Static IPs to provide you with a dedicated list of IP addresses for connecting to all CloudFront edge locations worldwide. You can also conditionally change or update origin servers on each request with origin modification within CloudFront Functions, and use new log configuration and delivery options.

Amazon CloudWatch – You can use field indexes and log transformation to improve log analytics at scale in CloudWatch Logs. You can also use an enhanced search and analytics experience and runtime metrics support with CloudWatch Application Signals, and percentile aggregation and simplified events-based troubleshooting directly from web vitals anomalies in CloudWatch Real User Monitoring (RUM).

Amazon Cognito – You can secure user access to your applications with passwordless authentication, including sign-in with passkeys, email, and text message. Amazon Cognito introduces Managed Login, a hosted sign-in and sign-up experience that customers can personalize to align with their company or application branding. Cognito also launches new user pool feature tiers, Essentials and Plus, as well as a new developer-focused console experience. To learn more, visit Donnie’s blog post.

Amazon Connect – You can use new customer profiles and outbound campaigns to help you proactively address customer needs before they become potential issues. Amazon Connect Contact Lens now supports creating custom dashboards, as well as adding or removing widgets from existing dashboards. With new Amazon Connect Email, you can receive and respond to emails sent by customers to business addresses or submitted via web forms on your website or mobile app.

Amazon EC2 – You can shift the launches of EC2 instances in an Auto Scaling Group (ASG) away from an impaired Availability Zone (AZ) to quickly recover your unhealthy application in another AZ with Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift. Application Load Balancer (ALB) now supports HTTP request and response header modification, giving you greater control over your application’s traffic and security posture without having to alter your application code.

AWS End User Messaging (aka Amazon Pinpoint) – You can now track feedback for messages sent through the SMS and MMS channels, explicitly block or allow messages to individual phone numbers (overriding your country rule settings), and use cost allocation tags for SMS resources to track spend for each tag associated with a resource. AWS End User Messaging also now supports integration with Amazon EventBridge.

AWS Lambda – You can use Lambda SnapStart for Python and .NET functions to deliver sub-second startup performance. AWS Lambda now supports Amazon S3 as a failed-event destination for asynchronous invocations and Amazon CloudWatch Application Signals to easily monitor the health and performance of serverless applications built using Lambda. You can also use the new Node.js 22 runtime and Provisioned Mode for event source mappings (ESMs) that subscribe to Apache Kafka event sources.

Amazon OpenSearch Service – You can scale a single cluster to 1000 data nodes (1000 hot nodes and/or 750 warm nodes) to manage 25 petabytes of data. Amazon OpenSearch Service introduces Custom Plugins, a new plugin management option to extend the search and analysis functions in OpenSearch.

Amazon Q Business – You can use tabular search to extract answers from tables embedded in documents ingested in Q Business. You can drag and drop files to upload them, and reuse any recently uploaded files in new conversations without uploading them again. Amazon Q Business now supports integrations with Smartsheet in general availability, and with Asana and Google Calendar in preview, to automatically sync your index with your selected data sources. You can also use Q Business browser extensions for Google Chrome, Mozilla Firefox, and Microsoft Edge.

Amazon Q Developer – You can ask questions directly related to the AWS Management Console page you’re viewing, eliminating the need to specify the service or resource in your query. You can also securely connect Q Developer in the IDE to your private codebases to receive more precise, customized chat responses. Finally, you can use voice input and output capabilities in the AWS Console Mobile App, along with conversational prompts, to list resources in your AWS account.

Amazon QuickSight – You can use Layer Map to visualize custom geographic boundaries, such as sales territories or user-defined regions, and Image Component to upload your own images for a variety of use cases, such as adding company logos. Amazon QuickSight also provides the ability to import visuals from an existing dashboard or analysis into your current analysis, and Highcharts visuals (in preview) to create custom visualizations using the Highcharts Core library.

Amazon Redshift – You can ingest data from a wider range of streaming sources, including Confluent Managed Cloud and self-managed Apache Kafka clusters on Amazon EC2 instances. You can also use enhanced security defaults, which help you adhere to best practices in data security and reduce the risk of potential misconfigurations.

AWS Systems Manager – You can use a new and improved version of AWS Systems Manager that brings a highly requested cross-account and cross-Region experience for managing nodes at scale. AWS Systems Manager now supports instances running Windows Server 2025, Ubuntu Server 24.04, and Ubuntu Server 24.10.

Amazon S3 – You can configure S3 Lifecycle rules for S3 Express One Zone to expire objects on your behalf and append data to objects in S3 Express One Zone. You can also use Amazon S3 Express One Zone as a high performance read cache with Mountpoint for Amazon S3. Amazon S3 Connector for PyTorch now supports Distributed Checkpoint (DCP), improving the time to write checkpoints to Amazon S3.

Amazon VPC – You can use Block Public Access (BPA) for VPC, a new centralized declarative control that enables network and security administrators to authoritatively block Internet traffic for their VPCs. Amazon VPC Lattice now provides native integration with Amazon ECS, making it easier to deploy, manage, and scale containerized applications.

There’s a lot more launch news that I haven’t covered here. See AWS What’s New for more details.

See you virtually at AWS re:Invent
Next week we’ll hear the latest news from AWS, learn from experts, and connect with the global cloud community in Las Vegas. If you’re attending in person, check out the agenda, session catalog, and attendee guides before your departure.

If you’re not able to attend re:Invent in person, we’re offering the option to livestream our Keynotes and Innovation Talks. With an online pass registration, you will have access to on-demand Keynotes, Innovation Talks, and selected breakout sessions after the event. You can also register with an AWS Builder ID, a personal account that enables one-click event registration and provides access to many AWS tools and services.

Stay tuned for more next week!

Channy


The strange case of disappearing Russian servers, (Mon, Nov 25th)

A few months ago, I noticed that something strange was happening with the number of servers seen by Shodan in Russia…

In order to identify any unusual changes on the internet that might be worth a closer look, I put together a simple script a few years ago. It periodically goes over data gathered from the Shodan search engine by my TriOp tool[1] and looks for significant changes in the number of public IP addresses with various services enabled on them. The script alerts me any time there seems to be something unusual – i.e., if Shodan detects more than a 10 % increase in the number of HTTPS servers during the course of a week, or more than a 20 % decrease in the number of e-mail servers in a specific country over the course of a month.
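
My script is built around TriOp, but the core idea fits in a few lines of shell. The sketch below uses the official Shodan CLI (shodan count) instead of TriOp, and the file-based state handling is my own simplification:

#!/bin/bash
# Alert on a >10 % week-over-week change in the number of Russian HTTPS
# servers. Assumes the Shodan CLI is installed and initialized with an API key.
current=$(shodan count 'country:RU port:443')
previous=$(cat last_week_https_ru.txt 2>/dev/null || echo "$current")
delta=$(( (current - previous) * 100 / previous ))
# ${delta#-} strips the sign, so both increases and decreases trigger alerts:
if [ "${delta#-}" -gt 10 ]; then
    echo "ALERT: RU HTTPS count changed by ${delta}% ($previous -> $current)"
fi
echo "$current" > last_week_https_ru.txt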

Around the beginning of August, the script started alerting me to a decrease in the number of basically all types of servers that Shodan detected in Russia.

Since the internet-wide scanning and service identification performed by Shodan, Censys, and similar search engines is hardly an exact science, the number of systems they detect can oscillate significantly in the short term, and a single alert from my script therefore seldom means that a real change is occurring. Nevertheless, the alerts kept coming for multiple days and weeks in a row, so I decided to take a closer look at the underlying data… And, indeed, from the point of view of Shodan, it looked as if significant portions of the Russian internet were disappearing.

My theory was that it might have been caused by the introduction of some new functionality into the internet filtering technology that Russia uses to censor internet traffic and block access to various external services[2], which started interfering with Shodan probes. And while I still believe that this might be the case, looking at the data now, when the number of Russian servers has been more or less stable for about six weeks, it seems that the cause of the decrease was at least partially different.

Although it is true that the decrease was spread over basically all types of services and ports (i.e., there was a significant drop in the number of accessible HTTP/S servers, SSH servers, DNS servers, etc.), and some overall filtering affecting Shodan therefore seems to be in place, there was one port on which the drop was much more significant than on any other.

That port was TCP/7547[3].

The aforementioned port is associated with the CPE WAN Management Protocol (CWMP), sometimes also called TR-069[4]. CWMP is an HTTP-based protocol that enables ISPs to perform remote management and provisioning of routers and other devices that give their clients direct access to the internet (e.g., SOHO routers that are provided to customers by ISPs).

This protocol is used by many ISPs around the world, and although it has not always been implemented securely, and some vulnerable implementations (well, mostly of it and TR-064) have historically been used in successful attacks[5], it is – in general – considered secure.

Nevertheless, it seems that Russian ISPs either suddenly decided to stop using it, or – much more likely – decided to severely limit access to the corresponding port. This was most significantly visible in IP ranges that are part of AS12389, which is assigned to the national Russian ISP, Rostelecom[6,7].

While CWMP accounts for a significant portion of the overall decrease in the number of Russian servers seen by Shodan, it is far from its only cause, as we can see from the second chart.

And since it is unlikely that large numbers of servers of various types would have suddenly been removed from the internet, it is much more likely that Shodan has lost some visibility into Russian IP space in the past few months due to increased filtering of network traffic. In any case, it will certainly be interesting to see how things develop in the future… Especially given the continued Russian attempts at creating a “sovereign internet”[8].

[1] https://isc.sans.edu/diary/27034
[2] https://en.wikipedia.org/wiki/Internet_censorship_in_Russia
[3] https://trends.shodan.io/search?query=country%3A%22RU%22#facet/port
[4] https://en.wikipedia.org/wiki/TR-069
[5] https://censys.com/the-most-common-protocol-youve-never-heard-of/
[6] https://trends.shodan.io/search?query=ASN%3AAS12389#facet/port
[7] https://ipinfo.io/AS12389
[8] https://www.scientificamerican.com/article/russia-is-trying-to-leave-the-internet-and-build-its-own/

———–
Jan Kopriva
@jk0pr | LinkedIn
Nettles Consulting

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.