Category Archives: AWS

New AWS Billing Transfer for centrally managing AWS billing and costs across multiple organizations

Today, we’re announcing the general availability of Billing Transfer, a new capability to centrally manage and pay bills across multiple organizations by transferring payment responsibility to other billing administrators, such as company affiliates and Amazon Web Services (AWS) Partners. This feature provides customers operating across multiple organizations with comprehensive visibility of cloud costs across their multi-organization environment, while organization administrators maintain security management autonomy over their accounts.

Customers use AWS Organizations to centrally administer and manage billing for their multi-account environment. However, when they operate in a multi-organization environment, billing administrators must access the management account of each organization separately to collect invoices and pay bills. This decentralized approach to billing management creates unnecessary complexity for enterprises managing costs and paying bills across multiple AWS organizations. The feature is also useful for AWS Partners who resell AWS products and solutions and assume responsibility for paying AWS for the consumption of their multiple customers.

With Billing Transfer, customers operating in multi-organization environments can now use a single management account to manage aspects of billing, such as invoice collection, payment processing, and detailed cost analysis. This makes billing operations more efficient and scalable, while individual management accounts maintain complete security and governance autonomy over their accounts. Billing Transfer also helps protect proprietary pricing data by integrating with AWS Billing Conductor, so billing administrators can control cost visibility.

Getting started with Billing Transfer
To set up Billing Transfer, an external management account sends a billing transfer invitation to another management account, called the bill-source account. If the invitation is accepted, the external account becomes the bill-transfer account, managing and paying the bill-source account’s consolidated bill starting on the date specified in the invitation.

To get started, go to the Billing and Cost Management console, choose Preferences and Settings in the left navigation pane and choose Billing transfer. Choose Send invitation from a management account you’ll use to centrally manage billing across your multi-organization environment.

Now, you can send a billing transfer invitation by entering the email address or account ID of the bill-source accounts whose billing you want to manage. Choose the monthly billing period in which invoicing and payment will begin, and a pricing plan from AWS Billing Conductor to control the cost data visible to the bill-source accounts.

When you choose Send invitation, the bill-source accounts will get a billing transfer notice in the Outbound billing tab.

Choose View details, review the invitation page, and choose Accept.

After the transfer is accepted, all usage from the bill-source accounts will be billed to the bill-transfer account using its billing and tax settings, and invoices will no longer be sent to the bill-source accounts. Any party (bill-source accounts and bill-transfer account) can withdraw the transfer at any time.

After your billing transfer begins, the bill-transfer account will receive a bill at the end of the month for each of your billing transfers. To view transferred invoices reflecting the usage of the bill-source accounts, choose the Invoices tab on the Bills page.

You can identify the transferred invoices by bill-source account IDs. You can also find the payments for the bill-source accounts invoices in the Payments menu. These appear only in the bill-transfer account.

The bill-transfer account can use billing views to access the cost data of the bill-source accounts in AWS Cost Explorer, AWS Cost and Usage Report, AWS Budgets, and the Bills page. When enabling billing view mode, you can choose your desired billing view for each bill-source account.
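
For programmatic analysis, the bill-transfer account can query cost data scoped to a billing view. Here’s a minimal boto3 sketch; it assumes the BillingViewArn parameter on GetCostAndUsage, and the billing view ARN and account ID are hypothetical placeholders:

import boto3

ce = boto3.client("ce")  # Cost Explorer

# Hypothetical billing view ARN for one bill-source account
view_arn = "arn:aws:billing::111122223333:billingview/example-view"

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-11-01", "End": "2025-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    BillingViewArn=view_arn,  # scopes results to that billing view
)

for result in response["ResultsByTime"]:
    print(result["TimePeriod"], result["Total"]["UnblendedCost"])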

The bill-source accounts will experience these changes:

  • Historical cost data will no longer be available and should be downloaded before accepting
  • Cost and Usage Reports should be reconfigured after transfer

Transferred bills in the bill-transfer account always use the tax and payment settings of the account to which they’re delivered. Therefore, all the invoices reflecting the usage of the bill-source accounts and the member accounts in their AWS Organizations will contain taxes (if applicable) calculated based on the tax settings of the bill-transfer account.

Similarly, the seller of record and payment preferences are also based on the configuration of the bill-transfer account. You can customize tax and payment settings by creating invoice units in the Invoice Configuration functionality.

To learn more, visit Billing Transfer in the AWS documentation.

Now available
Billing Transfer is available today in all commercial AWS Regions. To learn more, visit the AWS Cloud Financial Management Services product page.

Give Billing Transfer a try today and send feedback to AWS re:Post for AWS Billing or through your usual AWS Support contacts.

Channy

Monitor network performance and traffic across your EKS clusters with Container Network Observability

Organizations are increasingly expanding their Kubernetes footprint by deploying microservices to incrementally innovate and deliver business value faster. This growth places increased reliance on the network, creating increasingly complex challenges for platform teams monitoring network performance and traffic patterns in EKS. As a result, organizations struggle to maintain operational efficiency as their container environments scale, often delaying application delivery and increasing operational costs.

Today, I’m excited to announce Container Network Observability in Amazon Elastic Kubernetes Service (Amazon EKS), a comprehensive set of network observability features that you can use to measure network performance and dynamically visualize the landscape and behavior of network traffic in EKS.

Here’s a quick look at Container Network Observability in Amazon EKS:

Container Network Observability in EKS addresses observability challenges by providing enhanced visibility of workload traffic. It offers performance insights into network flows within the cluster and those with cluster-external destinations. This makes your EKS cluster network environment more observable while providing built-in capabilities for more precise troubleshooting and investigative efforts.

Getting started with Container Network Observability in EKS

I can enable this new feature for a new or existing EKS cluster. For a new EKS cluster, during the Configure observability setup, I navigate to the Configure network observability section. Here, I select Edit container network observability. I can see there are three included features: Service map, Flow table, and Performance metric endpoint, which are enabled by Amazon CloudWatch Network Flow Monitor.

On the next page, I need to install the AWS Network Flow Monitor Agent.

After it’s enabled, I can navigate to my EKS cluster and select Monitor cluster.

This will bring me to my cluster observability dashboard. Then, I select the Network tab.


Comprehensive observability features
Container Network Observability in EKS provides several key features, including performance metrics, service map, and flow table with three views: AWS service view, cluster view, and external view.

With Performance metrics, you can now scrape network-related system metrics for pods and worker nodes directly from the Network Flow Monitor agent and send them to your preferred monitoring destination. Available metrics include ingress/egress flow counts, packet counts, bytes transferred, and various allowance exceeded counters for bandwidth, packets per second, and connection tracking limits. The following screenshot shows an example of how you can use Amazon Managed Grafana to visualize the performance metrics scraped using Prometheus.
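
Because the metrics are exposed in OpenMetrics format, any Prometheus-compatible tooling can consume them outside Grafana as well. Here’s a minimal Python sketch using the prometheus_client parser; the endpoint URL and the metric name filters are illustrative assumptions, so substitute the values from your agent deployment:

import urllib.request

from prometheus_client.parser import text_string_to_metric_families

# Hypothetical metrics endpoint; use the host/port exposed by your
# Network Flow Monitor agent deployment.
url = "http://localhost:9400/metrics"

raw = urllib.request.urlopen(url).read().decode("utf-8")
for family in text_string_to_metric_families(raw):
    # Illustrative filter: keep byte counters and allowance-exceeded counters.
    if "bytes" in family.name or "allowance_exceeded" in family.name:
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)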


With the Service map feature, you can dynamically visualize intercommunication between workloads in your cluster, making it straightforward to understand your application topology with a quick look. The service map helps you quickly identify performance issues by highlighting key metrics such as retransmissions, retransmission timeouts, and data transferred for network flows between communicating pods.

Let me show you how this works with a sample e-commerce application. The service map provides both high-level and detailed views of your microservices architecture. In this e-commerce example, we can see three core microservices working together: the GraphQL service acts as an API gateway, orchestrating requests between the frontend and backend services.

When a customer browses products or places an order, the GraphQL service coordinates communication with both the products service (for catalog data, pricing, and inventory) and the orders service (for order processing and management). This architecture allows each service to scale independently while maintaining clear separation of concerns.

For deeper troubleshooting, you can expand the view to see individual pod instances and their communication patterns. The detailed view reveals the complexity of microservices communication. Here, you can see multiple pod instances for each service and the network of connections between them.

This granular visibility is crucial for identifying issues like uneven load distribution, pod-to-pod communication bottlenecks, or when specific pod instances are experiencing higher latency. For example, if one GraphQL pod is making disproportionately more calls to a particular products pod, you can quickly spot this pattern and investigate potential causes.

Flow table – Monitor the top talkers across Kubernetes workloads in your cluster from three different perspectives, each providing unique insights into your network traffic patterns:

  • AWS service view shows which workloads generate the most traffic to Amazon Web Services (AWS) services such as Amazon DynamoDB and Amazon Simple Storage Service (Amazon S3), so you can optimize data access patterns and identify potential cost optimization opportunities.
  • The Cluster view reveals the heaviest communicators within your cluster (east-west traffic), which means you can spot chatty microservices that might benefit from optimization or colocation strategies.
  • External view identifies workloads with the highest traffic to destinations outside AWS (internet or on premises), which is useful for security monitoring and bandwidth management.

The flow table provides detailed metrics and filtering capabilities to analyze network traffic patterns. In this example, we can see the flow table displaying cluster view traffic between our e-commerce services. The table shows that the orders pod is communicating with multiple products pods, transferring significant amounts of data. This pattern suggests the orders service is making frequent product lookups during order processing.

The filtering capabilities are useful for troubleshooting, for example, to focus on traffic from a specific orders pod. This granular filtering helps you quickly isolate communication patterns when investigating performance issues. For instance, if customers are experiencing slow checkout times, you can filter to see if the orders service is making too many calls to the products service, or if there are network bottlenecks between specific pod instances.

Additional things to know
Here are key points to note about Container Network Observability in EKS:

  • Pricing – For network monitoring, you pay standard Amazon CloudWatch Network Flow Monitor pricing.
  • Availability – Container Network Observability in EKS is available in all commercial AWS Regions where Amazon CloudWatch Network Flow Monitor is available.
  • Export metrics to your preferred monitoring solution – Metrics are available in OpenMetrics format, compatible with Prometheus and Grafana. For configuration details, refer to Network Flow Monitor documentation.

Get started with Container Network Observability in Amazon EKS today to improve network observability in your cluster.

Happy building!
Donnie

New Amazon Bedrock service tiers help you match AI workload performance with cost

Today, Amazon Bedrock introduces new service tiers that give you more control over your AI workload costs while maintaining the performance levels your applications need.

Working with customers building AI applications, I’ve seen firsthand how different workloads require different performance and cost trade-offs. Many organizations running AI workloads face challenges balancing performance requirements with cost optimization. Some applications need rapid response times for real-time interactions, whereas others can process data more gradually. With these challenges in mind, today we’re announcing additional pricing options that give you more flexibility in matching your workload requirements with cost optimization.

Amazon Bedrock now offers three service tiers for workloads: Priority, Standard, and Flex. Each tier is designed to match specific workload requirements. Applications have varying response time requirements based on the use case. Some applications—such as financial trading systems—demand the fastest response times, others need rapid response times to support business processes like content generation, and applications such as content summarization can process data more gradually.

The Priority tier processes your requests ahead of other tiers, providing preferential compute allocation for mission-critical applications like customer-facing chat-based assistants and real-time language translation services, though at a premium price point. The Standard tier provides consistent performance at regular rates for everyday AI tasks, ideal for content generation, text analysis, and routine document processing. For workloads that can handle longer latency, the Flex tier offers a more cost-effective option with lower pricing, which is well suited for model evaluations, content summarization, and multistep analysis and agentic workflows.

You can now optimize your spending by matching each workload to the most appropriate tier. For example, if you’re running a customer service chat-based assistant that needs quick responses, you can use the Priority tier to get the fastest processing times. For content summarization tasks that can tolerate longer processing times, you can use the Flex tier to reduce costs while maintaining reliable performance. For most models that support the Priority tier, customers can realize up to 25% better output tokens per second (OTPS) latency compared to the Standard tier.

Check the Amazon Bedrock documentation for an up-to-date list of models supported for each service tier.

Choosing the right tier for your workload

Here is a mental model to help you choose the right tier for your workload.

  • Mission-critical (Priority tier) – Requests are handled ahead of other tiers, with lower latency responses for user-facing apps (for example, customer service chat assistants, real-time language translation, interactive AI assistants).
  • Business-standard (Standard tier) – Responsive performance for important workloads (for example, content generation, text analysis, routine document processing).
  • Business-noncritical (Flex tier) – Cost-efficient for less urgent workloads (for example, model evaluations, content summarization, multistep agentic workflows).

Start by reviewing your current usage patterns with application owners. Next, identify which workloads need immediate responses and which ones can process data more gradually. You can then begin routing a small portion of your traffic through different tiers to test performance and cost benefits.

The AWS Pricing Calculator helps you estimate costs for different service tiers by entering your expected workload for each tier. You can estimate your budget based on your specific usage patterns.

To monitor your usage and costs, you can use the AWS Service Quotas console or turn on model invocation logging in Amazon Bedrock and observe the metrics with Amazon CloudWatch. These tools provide visibility into your token usage and help you track performance across different tiers.

Amazon Bedrock invocations observability
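
If you prefer to query these metrics programmatically, here’s a minimal boto3 sketch against the AWS/Bedrock CloudWatch namespace. It assumes the OutputTokenCount metric with a ModelId dimension; a per-tier dimension isn’t assumed here:

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

# Hourly output token usage for one model over the last 24 hours
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",
    MetricName="OutputTokenCount",
    Dimensions=[{"Name": "ModelId", "Value": "openai.gpt-oss-20b-1:0"}],
    StartTime=end - timedelta(hours=24),
    EndTime=end,
    Period=3600,
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])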

You can start using the new service tiers today. You choose the tier on a per-API call basis. Here is an example using the ChatCompletions OpenAI API, but you can pass the same service_tier parameter in the body of the InvokeModel, InvokeModelWithResponseStream, Converse, and ConverseStream APIs (for supported models):

from openai import OpenAI

client = OpenAI(
    base_url="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1",
    api_key="$AWS_BEARER_TOKEN_BEDROCK" # Replace with actual API key
)

completion = client.chat.completions.create(
    model="openai.gpt-oss-20b-1:0",
    messages=[
        {
            "role": "developer",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "Hello!"
        }
    ],
    service_tier="priority"  # options: "priority" | "default" | "flex"
)

print(completion.choices[0].message)
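
The same parameter works with InvokeModel. Here’s a minimal boto3 sketch; the request body schema shown (OpenAI-style chat messages) is an assumption for the openai.gpt-oss models, so check the model’s documented schema before relying on it:

import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# service_tier rides along in the model's native request body (assumed schema)
body = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "service_tier": "flex",  # options: "priority" | "default" | "flex"
}

response = bedrock.invoke_model(
    modelId="openai.gpt-oss-20b-1:0",
    body=json.dumps(body),
    contentType="application/json",
)

print(json.loads(response["body"].read()))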

To learn more, check out the Amazon Bedrock User Guide or contact your AWS account team for detailed planning assistance.

I’m looking forward to hearing how you use these new pricing options to optimize your AI workloads. Share your experience with me online on social networks or connect with me at AWS events.

— seb

Accelerate large-scale AI applications with the new Amazon EC2 P6-B300 instances

Today, we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, our next-generation GPU platform accelerated by NVIDIA Blackwell Ultra GPUs. These instances deliver 2 times the networking bandwidth and 1.5 times the GPU memory of previous-generation instances, creating a balanced platform for large-scale AI applications.

With these improvements, P6-B300 instances are ideal for training and serving large-scale AI models, particularly those employing sophisticated techniques such as Mixture of Experts (MoE) and multimodal processing. For organizations working with trillion-parameter models and requiring distributed training across thousands of GPUs, these instances provide the perfect balance of compute, memory, and networking capabilities.

Improvements made compared to predecessors
The P6-B300 instances deliver 6.4 Tbps Elastic Fabric Adapter (EFA) networking bandwidth, supporting efficient communication across large GPU clusters. These instances feature 2.1 TB of GPU memory, allowing large models to reside within a single NVLink domain, which significantly reduces model sharding and communication overhead. When combined with EFA networking and the advanced virtualization and security capabilities of the AWS Nitro System, these instances provide unprecedented speed, scale, and security for AI workloads.

The specs for the EC2 P6-B300 instances are as follows.

  • Instance size: P6-B300.48xlarge
  • vCPUs: 192
  • System memory: 4 TB
  • GPUs: 8x B300 GPU
  • GPU memory: 2144 GB HBM3e
  • GPU-GPU interconnect: 1800 GB/s
  • EFA network bandwidth: 6.4 Tbps
  • ENA bandwidth: 300 Gbps
  • EBS bandwidth: 100 Gbps
  • Local storage: 8x 3.84 TB

Good to know
In terms of persistent storage, AI workloads primarily use a combination of high-performance persistent storage options such as Amazon FSx for Lustre, Amazon S3 Express One Zone, and Amazon Elastic Block Store (Amazon EBS), depending on price performance considerations. For illustration, the dedicated 300 Gbps Elastic Network Adapter (ENA) networking on P6-B300 enables high-throughput hot storage access with S3 Express One Zone, supporting large-scale training workloads. If you’re using FSx for Lustre, you can now use EFA with GPUDirect Storage (GDS) to achieve up to 1.2 Tbps of throughput to the Lustre file system on the P6-B300 instances to quickly load your models.

Available now
The P6-B300 instances are now available through Amazon EC2 Capacity Blocks for ML and Savings Plans in the US West (Oregon) AWS Region.
For on-demand reservation of P6-B300 instances, please reach out to your account manager. As usual with Amazon EC2, you pay only for what you use. For more information, refer to Amazon EC2 Pricing. Check out the full collection of accelerated computing instances to help you start migrating your applications.

To learn more, visit our Amazon EC2 P6-B300 instances page. Send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

– Veliswa

AWS Weekly Roundup: AWS Lambda, load balancers, Amazon DCV, Amazon Linux 2023, and more (November 17, 2025)

In the weeks before AWS re:Invent, my team is full steam ahead preparing content for the conference. I can’t wait to meet you at one of my three talks: CMP346: Supercharge AI/ML on Apple Silicon with EC2 Mac, CMP344: Speed up Apple application builds with CI/CD on EC2 Mac, and DEV416: Develop your AI Agents and MCP Tools in Swift.

Last week, AWS announced three new AWS Heroes. The AWS Heroes program recognizes a vibrant, worldwide group of AWS experts whose enthusiasm for knowledge-sharing has a real impact within the community. Welcome to the community, Dimple, Rola, and Vivek.

We also opened the GenAI Loft in Tel Aviv, Israel. AWS Gen AI Lofts are collaborative spaces and immersive experiences for startups and developers. The Loft content is tailored to address local customer needs – from startups and enterprises to public sector organizations, bringing together developers, investors, and industry experts under one roof.

GenAI Loft - TLV

The loft is open in Tel Aviv until Wednesday, November 19. If you’re in the area, check the list of sessions, workshops, and hackathons today.

If you are a serverless developer, last week was really rich with news. Let’s start with these.

Last week’s launches
Here are the launches that got my attention this week:

Additional updates
Here are some additional projects, blog posts, and news items that I found interesting:

  • Amazon Elastic Kubernetes Service gets independent affirmation of its zero operator access design – Amazon EKS offers a zero operator access posture. AWS personnel cannot access your content. This is achieved through a combination of AWS Nitro System-based instances, restricted administrative APIs, and end-to-end encryption. An independent review by NCC Group confirmed the effectiveness of these security measures.
  • Make your web apps hands-free with Amazon Nova Sonic – Amazon Nova Sonic, a foundation model from Amazon Bedrock, provides you with the ability to create natural, low-latency, bidirectional speech conversations for applications. This provides users with the ability to collaborate with applications through voice and embedded intelligence, unlocking new interaction patterns and enhancing usability. This blog post demonstrates a reference app, Smart Todo App. It shows how voice can be integrated to provide a hands-free experience for task management.
  • AWS X-Ray SDKs & Daemon migration to OpenTelemetry – AWS X-Ray is transitioning to OpenTelemetry as its primary instrumentation standard for application tracing. OpenTelemetry-based instrumentation solutions are recommended for producing traces from applications and sending them to AWS X-Ray. X-Ray’s existing console experience and functionality continue to be fully supported and remain unchanged by this transition.
  • Powering the world’s largest events: How Amazon CloudFront delivers at scale – Amazon CloudFront achieved a record-breaking peak of 268 terabits per second on November 1, 2025, during major game delivery workloads—enough bandwidth to simultaneously stream live sports in HD to approximately 45 million concurrent viewers. This milestone demonstrates CloudFront’s massive scale, powered by 750+ edge locations across 440+ cities globally and 1,140+ embedded PoPs within 100+ ISPs, with the latest generation delivering 3x the performance of previous versions.

Upcoming AWS events
Check your calendars so that you can sign up for these upcoming events:

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person events, developer-focused events, and events for startups.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— seb

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

AWS Lambda enhances event processing with provisioned mode for SQS event-source mapping

Today, we’re announcing the general availability of provisioned mode for AWS Lambda with Amazon Simple Queue Service (Amazon SQS) Event Source Mapping (ESM), a new feature that customers can use to optimize the throughput of their event-driven applications by configuring dedicated polling resources. Using this new capability, which provides 3x faster scaling and 16x higher concurrency, you can process events with lower latency, handle sudden traffic spikes more effectively, and maintain precise control over your event processing resources.

Modern applications increasingly rely on event-driven architectures where services communicate through events and messages. Amazon SQS is commonly used as an event source for Lambda functions, so developers can build loosely coupled, scalable applications. Although the SQS ESM automatically handles queue polling and function invocation, customers with stringent performance requirements have asked for more control over the polling behavior to handle spiky traffic patterns and maintain low processing latency.

Provisioned mode for SQS ESM addresses these needs by introducing event pollers, dedicated resources that remain ready to handle expected traffic patterns. These event pollers can automatically scale at up to 1,000 concurrent executions per minute, more than three times faster than before, to handle sudden spikes in event traffic, and provide up to 20,000 concurrency (16 times higher capacity) to process millions of events with Lambda functions. This enhanced scaling behavior helps customers maintain predictable low latency even during traffic surges.

Enterprises across various industries, from financial services to gaming companies, are using AWS Lambda with Amazon SQS to process real-time events for their mission-critical applications. These organizations, which include some of the largest online gaming platforms and financial institutions, require consistent subsecond processing times for their event-driven workloads, particularly during periods of peak usage. Provisioned mode for SQS ESM is a capability you can use to meet your stringent performance requirements while maintaining cost controls.

Enhanced control and performance

With provisioned mode, you can configure both minimum and maximum numbers of event pollers for your SQS ESM. Each event poller represents a unit of compute that handles queue polling, event batching, and filtering before invoking Lambda functions. Each event poller can handle up to 1 MB/sec of throughput, up to 10 concurrent invokes, or up to 10 SQS polling API calls per second. By setting a minimum number of event pollers, you enable your application to maintain a baseline processing capacity that can immediately handle sudden traffic increases. We recommend that you set the minimum event pollers required to handle your known peak workload requirements. The optional maximum setting helps prevent overloading downstream systems by limiting the total processing throughput.
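
The per-poller limits above give you a quick way to estimate a minimum poller count. Here’s a back-of-the-envelope sizing helper in Python; the peak numbers in the example are hypothetical:

import math

# Per-poller limits stated above: 1 MB/s of throughput and 10 concurrent
# invokes (there is also a limit of 10 SQS polling API calls per second).
def min_pollers(peak_mb_per_sec: float, peak_concurrent_invokes: int) -> int:
    return max(
        math.ceil(peak_mb_per_sec / 1.0),         # throughput-bound
        math.ceil(peak_concurrent_invokes / 10),  # concurrency-bound
    )

# Example: 5 MB/s peak throughput and 40 concurrent invokes -> 5 pollers
print(min_pollers(peak_mb_per_sec=5, peak_concurrent_invokes=40))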

The new mode delivers significant improvements in how your event-driven applications handle varying workloads. When traffic increases, your ESM detects the growing backlog within seconds and dynamically scales event pollers between your configured minimum and maximum values three times faster than before. This enhanced scaling capability is complemented by a substantial increase in processing capacity, with support for up to 2 GBps of aggregate traffic, and up to 20K concurrent requests—16x higher than previously possible. By maintaining a minimum number of ready-to-use event pollers, your application achieves predictable performance, handling sudden traffic spikes without the delay typically associated with scaling up resources. During low traffic periods, your ESM automatically scales down to your configured minimum number of event pollers, which means you can optimize costs while maintaining responsiveness.

Let’s try it out

Enabling provisioned mode is straightforward in the AWS Management Console. You need to already have an SQS queue configured and a Lambda function. To get started, in the Configuration tab for your Lambda function, choose Triggers, then Add trigger. This will bring up a user interface where you can configure your trigger. Choose SQS from the dropdown menu for source and then select the SQS queue you want to use.

Under Event poller configuration, you will now see a new option called Provisioned mode. Select Configure to reveal settings for Minimum event pollers and Maximum event pollers, each with defaults and minimum and maximum values displayed.

Configuration panel for SQS provisioned Mode

After you have configured Provisioned mode, you can save your trigger. If you need to make changes later, you can find the current configuration under the Triggers tab in the AWS Lambda configuration section, and you can modify your current settings there.

SQS Provisioned Poller config
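
You can configure the same settings programmatically. Here’s a minimal boto3 sketch; the queue and function names are hypothetical, and it assumes the ProvisionedPollerConfig parameter on event source mappings:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical queue and function names for illustration
response = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",
    FunctionName="process-orders",
    BatchSize=10,
    ProvisionedPollerConfig={
        "MinimumPollers": 5,    # baseline sized for known peak workload
        "MaximumPollers": 100,  # cap to protect downstream systems
    },
)

print(response["UUID"])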

Monitoring and observability

You can monitor your provisioned mode usage through Amazon CloudWatch metrics. The ProvisionedPollers metric shows the number of active event pollers processing events in one-minute windows.
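
If you’re unsure which namespace and dimensions the ProvisionedPollers metric is published under in your account, you can discover it with a minimal boto3 sketch:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Find every namespace/dimension combination publishing ProvisionedPollers
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(MetricName="ProvisionedPollers"):
    for metric in page["Metrics"]:
        print(metric["Namespace"], metric["Dimensions"])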

Now available

Provisioned mode for Lambda SQS ESM is available today in all commercial AWS Regions. You can start using this feature through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. Pricing is based on the number of event pollers provisioned and the duration they’re provisioned for, measured in Event Poller Units (EPUs). Each EPU supports up to 1 MB per second of throughput capacity per event poller, with a minimum of 2 event pollers per ESM. See the AWS pricing page for more information on EPU charges.

To learn more about provisioned mode for SQS ESM, visit the AWS Lambda documentation. Start building more responsive event-driven applications today with enhanced control over your event processing resources.

Introducing AWS IoT Core Device Location integration with Amazon Sidewalk

Today, I’m happy to announce a new capability to resolve location data for Amazon Sidewalk enabled devices with the AWS IoT Core Device Location service. This feature removes the requirement to install GPS modules in a Sidewalk device and also simplifies the developer experience of resolving location data. Devices powered by small coin cell batteries, such as smart home sensor trackers, use Sidewalk to connect. Supporting built-in GPS modules for products that move around is not only expensive, it can also create challenges in ensuring optimal battery life and longevity.

With this launch, Internet of Things (IoT) device manufacturers and solution developers can build asset tracking and location monitoring solutions using Sidewalk-enabled devices by sending Bluetooth Low Energy (BLE), Wi-Fi, or Global Navigation Satellite System (GNSS) information to AWS IoT for location resolution. They can then send the resolved location data to an MQTT topic or AWS IoT rule and route the data to other Amazon Web Services (AWS) services through AWS IoT Core. This simplifies software development and provides more options to choose the optimal location source, improving product performance.

This launch addresses previous challenges and architecture complexity. You don’t need location sensing on network-based devices, because the Sidewalk network infrastructure itself can determine device location, eliminating the need for power-hungry and costly GPS hardware on the device. This feature also allows devices to efficiently measure and report location data from GNSS and Wi-Fi, extending product battery life. With these enhancements, you can build more compelling asset tracking and location-aware IoT applications.

For those unfamiliar with Amazon Sidewalk and the AWS IoT Core Device Location service, I’ll briefly explain their history and context. If you’re already familiar with them, you can skip to the section on how to get started.

AWS IoT Core integrations with Amazon Sidewalk
Amazon Sidewalk is a shared network that helps devices work better through improved connectivity options. It’s designed to support a wide range of customer devices with capabilities ranging from locating pets or valuables, to smart home security and lighting control and remote diagnostics for appliances and tools.

Amazon Sidewalk is a secure community network that uses Amazon Sidewalk Gateways (also called Sidewalk Bridges), such as compatible Amazon Echo and Ring devices, to provide cloud connectivity for IoT endpoint devices. Amazon Sidewalk enables low-bandwidth and long-range connectivity at home and beyond using BLE for short-distance communication and LoRa and frequency-shift keying (FSK) radio protocols at 900MHz frequencies to cover longer distances.

Sidewalk now provides coverage to more than 90% of the US population and supports long-range connected solutions for communities and enterprises. Users with Ring cameras or Alexa devices that act as a Sidewalk Bridge can choose to contribute a small portion of their internet bandwidth, which is pooled to create a shared network that benefits all Sidewalk-enabled devices in a community.

In March 2023, AWS IoT Core deepened its integration with Amazon Sidewalk to seamlessly provision, onboard, and monitor Sidewalk devices with qualified hardware development kits (HDKs), SDKs, and sample applications. As of this writing, AWS IoT Core is the only way for customers to connect to the Sidewalk network.

In the AWS IoT Core console, you can add your Sidewalk device, provision and register your devices, and connect your Sidewalk endpoint to the cloud. To learn more about onboarding your Sidewalk devices, visit Getting started with AWS IoT Core for Amazon Sidewalk in the AWS IoT Wireless Developer Guide.

In November 2022, we announced the AWS IoT Core Device Location service, a new feature that you can use to get the geo-coordinates of your IoT devices even when the devices don’t have a GPS module. You can use the Device Location service as a simple request and response HTTP API, or you can use it with IoT connectivity pathways like MQTT, LoRaWAN, and now Amazon Sidewalk.

In the AWS IoT Core console, you can test the Device Location service to resolve the location of your device by importing device payload data. The resolved location is reported as a GeoJSON payload. To learn more, visit AWS IoT Core Device Location in the AWS IoT Core Developer Guide.

Customers across multiple industries like automotive, supply chain, and industrial tools have requested a simplified solution such as the Device Location service to extract location data from Sidewalk products. This streamlines customer software development and gives them more options to choose the optimal location source, thereby improving their products.

Get started with a Device Location integration with Amazon Sidewalk
To enable Device Location for Sidewalk devices, go to the AWS IoT Core for Amazon Sidewalk section under LPWAN devices in the AWS IoT Core console. Choose Provision device, or choose an existing device to edit its settings, and select Activate positioning under the Geolocation option when creating or updating your Sidewalk devices.

When activating positioning, you need to specify a destination where you want to send your location data. The destination can be either an AWS IoT rule or an MQTT topic.

Here is a sample AWS Command Line Interface (AWS CLI) command to enable position while provisioning a new Sidewalk device:

$ aws iotwireless create-wireless-device --type Sidewalk \
  --name "demo-1" --destination-name "New-1" \
  --positioning Enabled
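
To turn on positioning for a device that already exists, here’s a minimal boto3 sketch; the device ID is a hypothetical placeholder, and the Positioning parameter is assumed to be available on UpdateWirelessDevice as it is at creation:

import boto3

iotwireless = boto3.client("iotwireless")

# Hypothetical wireless device ID from your account
iotwireless.update_wireless_device(
    Id="0b5d8f93-1234-5678-9abc-def012345678",
    Positioning="Enabled",  # assumed parameter, mirrors create-wireless-device
)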

After your Sidewalk device establishes a connection to the Amazon Sidewalk network, the device SDK will send the GNSS-, Wi-Fi-, or BLE-based information to AWS IoT Core for Amazon Sidewalk. If you have enabled positioning, AWS IoT Core Device Location will resolve the location data and send it to the specified destination. After your Sidewalk device transmits location measurement data, the resolved geographic coordinates and a map pin will also be displayed in the Position section for the selected device.

You will also get location information delivered to your destination in GeoJSON format, as shown in the following example:

{
    "coordinates": [
        13.376076698303223,
        52.51823043823242
    ],
    "type": "Point",
    "properties": {
        "verticalAccuracy": 45,
        "verticalConfidenceLevel": 0.68,
        "horizontalAccuracy": 303,
        "horizontalConfidenceLevel": 0.68,
        "country": "USA",
        "state": "CA",
        "city": "Sunnyvale",
        "postalCode": "91234",
        "timestamp": "2025-11-18T12:23:58.189Z"
    }
}
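
Note that GeoJSON lists coordinates in [longitude, latitude] order. Here’s a minimal Python sketch for consuming a payload like the one above (trimmed to the fields it reads):

import json

message = """{"coordinates": [13.376076698303223, 52.51823043823242],
"type": "Point",
"properties": {"horizontalAccuracy": 303, "horizontalConfidenceLevel": 0.68}}"""

payload = json.loads(message)
longitude, latitude = payload["coordinates"]  # GeoJSON order is [lon, lat]
accuracy_m = payload["properties"]["horizontalAccuracy"]
print(f"lat={latitude}, lon={longitude}, accuracy={accuracy_m} m")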

You can monitor the Device Location data between your Sidewalk devices and AWS Cloud by enabling Amazon CloudWatch Logs for AWS IoT Core. To learn more, visit the AWS IoT Core for Amazon Sidewalk in the AWS IoT Wireless Developer Guide.

Now available
AWS IoT Core Device Location integration with Amazon Sidewalk is now generally available in the US East (N. Virginia) Region. To learn more about use cases, documentation, sample code, and partner devices, visit the AWS IoT Core for Amazon Sidewalk product page.

Give it a try in the AWS IoT Core console and send feedback to AWS re:Post for AWS IoT Core or through your usual AWS Support contacts.

Channy

Introducing Our Final AWS Heroes of 2025

With AWS re:Invent approaching, we’re celebrating three exceptional AWS Heroes whose diverse journeys and commitment to knowledge sharing are empowering builders worldwide. From advancing women in tech and rural communities to bridging academic and industry expertise and pioneering enterprise AI solutions, these leaders exemplify the innovative spirit that drives our community forward. Their stories showcase how technical excellence, combined with passionate advocacy and mentorship, strengthens the global AWS community.

Dimple Vaghela – Ahmedabad, India

Community Hero Dimple Vaghela leads both the AWS User Group Ahmedabad and AWS User Group Vadodara, where she drives cloud education and technical growth across the region. Her impact spans organizing numerous AWS meetups, workshops, and AWS Community Days that have helped thousands of learners advance their cloud careers. Dimple launched the “Cloud for Her” project to empower girls from rural areas in technology careers and serves as co-organizer of the Women in Tech India User Group. Her exceptional leadership and community contributions were recognized at AWS re:Invent 2024 with the AWS User Group Leader Award in the Ownership category, while she continues building a more inclusive cloud community through speaking, mentoring, and organizing impactful tech events.

Rola Dali – Montreal, Canada

Community Hero Rola Dali is a senior Data, ML, and AI expert specializing in AWS cloud, bringing unique perspective from her PhD in neuroscience and bioinformatics with expertise in human genomics. As co-organizer of the AWS Montreal User Group and a former AWS Community Builder, her commitment to the cloud community earned her the prestigious Golden Jacket recognition in 2024. She actively shapes the tech community by architecting AWS solutions, sharing knowledge through blogs and lectures, and mentoring women entering tech, academics transitioning to industry, and students starting their careers.

Vivek Velso – Toronto, Canada

Machine Learning Hero Vivek Velso is a seasoned technology leader with over 27 years of IT industry experience, specializing in helping organizations modernize their cloud infrastructure for generative AI workloads. His deep AWS expertise earned him the prestigious Golden Jacket award for completing all AWS certifications, and he actively contributes to the AWS Subject Matter Expert (SME) program for multiple certification exams. A former AWS Community Builder and AWS Ambassador, he continues to share his knowledge through more than 100 technical blogs, articles, conference engagements, and AWS livestreams, helping the community confidently embrace cloud innovation.

Learn More

Visit the AWS Heroes webpage if you’d like to learn more about the AWS Heroes program, or to connect with a Hero near you.

Taylor

Secure EKS clusters with the new support for Amazon EKS in AWS Backup

Today, we’re announcing support for Amazon EKS in AWS Backup, giving you the capability to secure Kubernetes applications using the same centralized platform you trust for your other Amazon Web Services (AWS) services. This integration eliminates the complexity of protecting containerized applications while providing enterprise-grade backup capabilities for both cluster configurations and application data. AWS Backup is a fully managed service to centralize and automate data protection across AWS and on-premises workloads. Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service to manage the availability and scalability of Kubernetes clusters. With this new capability, you can centrally manage and automate data protection across your Amazon EKS environments alongside other AWS services.

Until now, customers relied on custom solutions or third-party tools to back up their EKS clusters, requiring complex scripting and maintenance for each cluster. The support for Amazon EKS in AWS Backup eliminates this overhead by providing a single, centralized, policy-driven solution that protects both EKS clusters (Kubernetes deployments and resources) and stateful data (stored in Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), and Amazon Simple Storage Service (Amazon S3) only) without the need to manage custom scripts across clusters. For restores, customers were previously required to restore their EKS backups to a target EKS cluster, either the source cluster or a new one, which meant cluster infrastructure had to be provisioned ahead of the restore. With this new capability, customers can instead have AWS Backup create a new EKS cluster based on the previous cluster configuration settings and restore to it, with AWS Backup managing the provisioning on the customer’s behalf.

This support includes policy-based automation for protecting single or multiple EKS clusters. A single data protection policy provides a consistent experience across all services AWS Backup supports. It allows creation of immutable backups to prevent malicious or inadvertent changes, helping customers meet their regulatory compliance needs. In the event of data loss or cluster downtime, customers can recover their EKS cluster data from encrypted, immutable backups using an easy-to-use interface and maintain business continuity while running their EKS clusters at scale.

How it works
Here’s how I set up support for on-demand backup of my EKS cluster in AWS Backup. First, I’ll show a walkthrough of the backup process, then demonstrate a restore of the EKS cluster.

Backup
In the AWS Backup console, in the left navigation pane, I choose Settings and then Configure resources to opt in to enable protection of EKS clusters in AWS Backup.

Now that I’ve enabled Amazon EKS, in Protected resources I choose Create on-demand backup to create a backup for my existing EKS cluster floral-electro-unicorn.

Enabling EKS in Settings ensures that it shows up as a Resource type when I create an on-demand backup for the EKS cluster. I proceed to select the EKS resource type and the cluster.

I leave the rest of the information as default, then select Choose an IAM role to select a role (test-eks-backup) that I’ve created and customized with the necessary permissions for AWS Backup to assume when creating and managing backups on my behalf. I choose Create on-demand backup to finalize the process.


The job is initiated, and it will start running to back up both the EKS cluster state and the persistent volumes. If Amazon S3 buckets are attached to the backup, you’ll need to add the additional Amazon S3 backup permissions policy AWSBackupServiceRolePolicyForS3Backup to your role. This policy contains the permissions necessary for AWS Backup to back up any Amazon S3 bucket, including access to all objects in a bucket and any associated AWS KMS key.
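
Attaching that managed policy takes a single call. Here’s a minimal boto3 sketch using the role from this walkthrough; verify the exact managed policy ARN path in the IAM console before using it:

import boto3

iam = boto3.client("iam")

# Grant AWS Backup the S3 backup permissions it needs (ARN path assumed)
iam.attach_role_policy(
    RoleName="test-eks-backup",
    PolicyArn="arn:aws:iam::aws:policy/AWSBackupServiceRolePolicyForS3Backup",
)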


The job is completed successfully, and now the EKS cluster floral-electro-unicorn is backed up by AWS Backup.


Restore
Using the AWS Backup Console, I choose the EKS backup composite recovery point to start the process of restoring the EKS cluster backups, then choose Restore.


I choose Restore full EKS cluster to restore the full EKS backup. To restore to an existing cluster, I select Choose an existing cluster, then pick the cluster from the drop-down list. I choose the Default order as the order in which individual Kubernetes resources will be restored.

I then configure the restore for the persistent storage resources that will be restored alongside my EKS cluster.


Next, I Choose an IAM role to execute the restore action. The Protected resource tags checkbox is selected by default and I’ll leave it as is, then choose Next.

I review all the information before I finalize the process by choosing Restore, to start the job.


Selecting the drop-down arrow gives details of the restore status for both the EKS cluster state and the persistent volumes attached. In this walkthrough, all the individual recovery points are restored successfully. If portions of the backup fail, it’s possible to restore the successfully backed up persistent stores (for example, Amazon EBS volumes) and cluster configuration settings individually; however, it’s not possible to restore the full EKS backup. The successfully backed up resources will be available for restore, listed as nested recovery points under the EKS cluster recovery point. If there’s a partial failure, there will be a notification of the portion(s) that failed.


Benefits
Here are some of the benefits provided by the support for Amazon EKS in AWS Backup:

  • A fully managed multi-cluster backup experience, removing the overhead associated with managing custom scripts and third-party solutions.
  • Centralized, policy-based backup management that simplifies backup lifecycle management and makes it seamless to back up and recover your application data across AWS services, including EKS.
  • The ability to store and organize your backups with backup vaults. You assign policies to the backup vaults to grant access to users to create backup plans and on-demand backups but limit their ability to delete recovery points after they’re created.

Good to know
The following are some helpful facts to know:

  • Use either the AWS Backup Console, API, or AWS Command Line Interface (AWS CLI) to protect EKS clusters using AWS Backup. You can also create an on-demand backup of a cluster after it has been created (see the sketch after this list).
  • You can create secondary copies of your EKS backups across different accounts and AWS Regions to minimize risk of accidental deletion.
  • Restoration of EKS backups is available using the AWS Backup Console, API, or AWS CLI.
  • Restoring to an existing cluster will not overwrite the Kubernetes version or any data, because restores are non-destructive. Instead, there will be a restore of the delta between the backup and the source resource.
  • Namespaces can only be restored to an existing cluster to ensure a successful restore as Kubernetes resources may be scoped at the cluster level.
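
As mentioned in the first bullet, you can start an on-demand EKS backup from the API as well. Here’s a minimal boto3 sketch using the cluster and role names from this walkthrough; the vault name and account ID are hypothetical:

import boto3

backup = boto3.client("backup")

response = backup.start_backup_job(
    BackupVaultName="Default",  # hypothetical vault
    ResourceArn="arn:aws:eks:us-east-1:123456789012:cluster/floral-electro-unicorn",
    IamRoleArn="arn:aws:iam::123456789012:role/test-eks-backup",
)

print(response["BackupJobId"])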

Voice of the customer

Srikanth Rajan, Sr. Director of Engineering at Salesforce, said, “Losing a Kubernetes control plane because of software bugs or unintended cluster deletion can be catastrophic without a solid backup and restore plan. That’s why it’s exciting to see AWS rolling out the new EKS Backup and Restore feature, it’s a big step forward in closing a critical resiliency gap for Kubernetes platforms.”

Now available
Support for Amazon EKS in AWS Backup is available today in all AWS commercial Regions (except China) and in the AWS GovCloud (US) Regions where AWS Backup and Amazon EKS are available. Check the full Region list for future updates.

To learn more, check out the AWS Backup product page and the AWS Backup pricing page.

Try out this capability for protecting your EKS clusters in AWS Backup and let us know what you think by sending feedback to AWS re:Post for AWS Backup or through your usual AWS Support contacts.

Veliswa.

AWS Weekly Roundup: Amazon S3, Amazon EC2, and more (November 10, 2025)

AWS re:Invent 2025 is only 3 weeks away and I’m already looking forward to the new launches and announcements at the conference. Last year brought 60,000 attendees from across the globe to Las Vegas, Nevada, and the atmosphere was amazing. Registration is still open for AWS re:Invent 2025. We hope you’ll join us in Las Vegas December 1–5 for keynotes, breakout sessions, chalk talks, interactive learning opportunities, and networking with cloud practitioners from around the world.

AWS and OpenAI announced a multi-year strategic partnership that provides OpenAI with immediate access to AWS infrastructure for running advanced AI workloads. The $38 billion agreement spans 7 years and includes access to AWS compute resources comprising hundreds of thousands of NVIDIA GPUs, with the ability to scale to tens of millions of CPUs for agentic workloads. The infrastructure deployment that AWS is building for OpenAI features a sophisticated architectural design optimized for maximum AI processing efficiency and performance. Clustering the NVIDIA GPUs—both GB200s and GB300s—using Amazon EC2 UltraServers on the same network enables low-latency performance across interconnected systems, allowing OpenAI to efficiently run workloads with optimal performance. The clusters are designed to support various workloads, from serving inference for ChatGPT to training next generation models, with the flexibility to adapt to OpenAI’s evolving needs.

AWS committed $1 million through its Generative AI Innovation Fund to digitize the Jane Goodall Institute’s 65 years of primate research archives. The project will transform handwritten field notes, film footage, and observational data on chimpanzees and baboons from analog to digital formats using Amazon Bedrock and Amazon SageMaker. The digital transformation will employ multimodal large language models (LLMs) and embedding models to make the research archives searchable and accessible to scientists worldwide for the first time. AWS is collaborating with Ode to build the user experience, helping the Jane Goodall Institute adopt AI technologies to advance research and conservation efforts. I was deeply saddened when I heard that world-renowned primatologist Jane Goodall had passed away. Learning that this project will preserve her life’s work and make it accessible to researchers around the world brought me comfort. It’s a fitting tribute to her remarkable legacy.

Transforming decades of research through cloud and AI. Dr. Jane Goodall and field staff observe Goblin at Gombe National Park, Tanzania. CREDIT: the Jane Goodall Institute

Last week’s launches
Let’s look at last week’s new announcements:

  • Amazon S3 now supports tags on S3 Tables – Amazon S3 now supports tags on S3 Tables for attribute-based access control (ABAC) and cost allocation. You can use tags for ABAC to automatically manage permissions for users and roles accessing table buckets and tables, eliminating frequent AWS Identity and Access Management (IAM) or S3 Tables resource-based policy updates and simplifying access governance at scale. Additionally, tags can be added to individual tables to track and organize AWS costs using AWS Billing and Cost Management.
  • Amazon EC2 R8a Memory-Optimized Instances now generally available – R8a instances feature 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, and they deliver up to 30% higher performance and up to 19% better price-performance compared to R7a instances, with 45% more memory bandwidth. Built on the AWS Nitro System using sixth-generation Nitro Cards, these instances are designed for high-performance, memory-intensive workloads, including SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and electronic design automation (EDA) applications. R8a instances are SAP certified and offer 12 sizes, including two bare metal sizes.
  • EC2 Auto Scaling announces warm pool support for mixed instances policies – EC2 Auto Scaling groups now support warm pools for Auto Scaling groups configured with mixed instances policies. Warm pools create a pool of pre-initialized EC2 instances ready to quickly serve application traffic, improving application elasticity. The feature benefits applications with lengthy initialization processes, such as writing large amounts of data to disk or running complex custom scripts. By combining warm pools with instance type flexibility, Auto Scaling groups can rapidly scale out to maximum size while deploying applications across multiple instance types to enhance availability. The feature works with Auto Scaling groups configured for multiple On-Demand Instance types through manual instance type lists or attribute-based instance type selection.
  • Amazon Bedrock AgentCore Runtime now supports direct code deployment – Amazon Bedrock AgentCore Runtime now offers two deployment methods for AI agents: container-based deployment and direct code upload. You can choose between direct code–zip file upload for rapid prototyping and iteration or container-based options for complex use cases requiring custom configurations. AgentCore Runtime provides a serverless framework and model agnostic runtime for running agents and tools at scale. The direct code–zip upload feature includes drag-and-drop functionality, enabling faster iteration cycles for prototyping while maintaining enterprise security and scaling capabilities for production deployments.
  • AWS Capabilities by Region now available for Regional planning – AWS Capabilities by Region helps discover and compare AWS services, features, APIs, and AWS CloudFormation resources across Regions. This planning tool provides an interactive interface to explore service availability, compare multiple Regions side by side, and view forward-looking roadmap information. You can search for specific services or features, view API operations availability, verify CloudFormation resource type support, and check EC2 instance type availability including specialized instances. The tool displays availability states including Available, Planning, Not Expanding, and directional launch planning by quarter. The AWS Capabilities by Region data is also accessible through the AWS Knowledge MCP server, enabling automation of Region expansion planning and integration into development workflows and continuous integration and continuous delivery (CI/CD) pipelines.

Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:

  • AWS re:Invent 2025 – Join us in Las Vegas December 1–5 as cloud pioneers gather from across the globe for the latest AWS innovations, peer-to-peer learning, expert-led discussions, and invaluable networking opportunities. Don’t forget to explore the event catalog.
  • AWS Builder Loft – A tech hub in San Francisco where builders share ideas, learn, and collaborate. The space offers industry expert sessions, hands-on workshops, and community events covering topics from AI to emerging technologies. Browse the upcoming sessions and join the events that interest you.
  • AWS Skills Center Seattle 4th Anniversary Celebration – A free, public event on November 20 with a keynote, learning panels, recruiter insights, raffles, and virtual participation options.

Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development. Browse here for upcoming AWS led in-person and virtual events, developer-focused events, and events for startups.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Esra

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!