Category Archives: AWS

Announcing Amazon EC2 M4 and M4 Pro Mac instances

As someone who has been using macOS since 2001 and Amazon EC2 Mac instances since their launch 4 years ago, I’ve helped numerous customers scale their continuous integration and delivery (CI/CD) pipelines on AWS. Today, I’m excited to share that Amazon EC2 M4 and M4 Pro Mac instances are now generally available.

Development teams building applications for Apple platforms need powerful computing resources to handle complex build processes and run multiple iOS simulators simultaneously. As development projects grow larger and more sophisticated, teams require increased performance and memory capacity to maintain rapid development cycles.

Apple M4 Mac mini at the core
EC2 M4 Mac instances (known as mac-m4.metal in the API) are built on Apple M4 Mac mini computers and the AWS Nitro System. They feature Apple silicon M4 chips with a 10-core CPU (four performance and six efficiency cores), 10-core GPU, 16-core Neural Engine, and 24 GB unified memory, delivering enhanced performance for iOS and macOS application build workloads. When building and testing applications, M4 Mac instances deliver up to 20 percent better application build performance compared to EC2 M2 Mac instances.

EC2 M4 Pro Mac (mac-m4pro.metal in the API) instances are powered by Apple silicon M4 Pro chips with 14-core CPU, 20-core GPU, 16-core Neural Engine, and 48 GB unified memory. These instances offer up to 15 percent better application build performance compared to EC2 M2 Pro Mac instances. The increased memory and computing power make it possible to run more tests in parallel using multiple device simulators.

Each M4 and M4 Pro Mac instance now comes with 2 TB of local storage, providing low-latency storage for improved caching and build and test performance.

Both instance types support macOS Sequoia version 15.6 and later as Amazon Machine Images (AMIs). The AWS Nitro System provides up to 10 Gbps of Amazon Virtual Private Cloud (Amazon VPC) network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth through high-speed Thunderbolt connections.

Amazon EC2 Mac instances integrate seamlessly with other AWS services, so you can use them as part of your existing CI/CD pipelines on AWS.

Let me show you how to get started
You can launch EC2 M4 or M4 Pro Mac instances through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.

For this demo, let’s start an M4 Pro instance from the console. I first allocate a dedicated host to run my instances. On the AWS Management Console, I navigate to EC2, then Dedicated Hosts, and I select Allocate Dedicated Host.

Then, I enter a Name tag and I select the Instance family (mac-m4pro) and an Instance type (mac-m4pro.metal). I choose one Availability Zone and I clear Host maintenance.

EC2 Mac M4 - Dedicated hosts

Alternatively, I can use the command line interface:

aws ec2 allocate-hosts \
        --availability-zone-id "usw2-az4" \
        --auto-placement "off" \
        --host-recovery "off" \
        --host-maintenance "off" \
        --quantity 1 \
        --instance-type "mac-m4pro.metal"
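
Before launching instances onto the host, you can also check from the CLI that the allocation succeeded and that the host is available. This is a minimal sketch; the filter and the output fields are assumptions about how you might want to display the results:

# List dedicated Mac hosts in the account and show their state and Availability Zone;
# a host must be "available" before you can launch instances onto it
aws ec2 describe-hosts \
    --filter "Name=instance-type,Values=mac-m4pro.metal" \
    --query 'Hosts[].{HostId:HostId,State:State,AZ:AvailabilityZone}' \
    --output table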

After the dedicated host is allocated to my account, I select the host I just allocated, then I select the Actions menu and choose Launch instance(s) onto host.

Notice the console gives you, among other information, the Latest supported macOS versions for this type of host. In this case, it’s macOS 15.6.

EC2 Mac M4 - Dedicated hosts Launch 

On the Launch an instance page, I enter a Name. I select a macOS Sequoia Amazon Machine Image (AMI). I make sure the Architecture is 64-bit Arm and the Instance type is mac-m4pro.metal.

The rest of the parameters, the network and storage configuration, aren’t specific to Amazon EC2 Mac. When starting an instance for development use, make sure you select a volume of at least 200 GB. The default 100 GB volume size isn’t sufficient to download and install Xcode.

EC2 Mac M4 - Dedicated hosts Launch Details

When ready, I select the orange Launch instance button at the bottom of the page. The instance quickly appears as Running in the console. However, it might take up to 15 minutes before you can connect over SSH.

Alternatively, I can use this command:

# The AMI ID depends on the Region; the security group ID and host ID depend on your configuration
aws ec2 run-instances \
    --image-id "ami-000420887c24e4ac8" \
    --instance-type "mac-m4pro.metal" \
    --key-name "my-ssh-key-name" \
    --network-interfaces '{"AssociatePublicIpAddress":true,"DeviceIndex":0,"Groups":["sg-0c2f1a3e01b84f3a3"]}' \
    --tag-specifications '{"ResourceType":"instance","Tags":[{"Key":"Name","Value":"My Dev Server"}]}' \
    --placement '{"HostId":"h-0e984064522b4b60b","Tenancy":"host"}' \
    --private-dns-name-options '{"HostnameType":"ip-name","EnableResourceNameDnsARecord":true,"EnableResourceNameDnsAAAARecord":false}' \
    --count "1"
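
After the run-instances call returns, you can wait for the instance to pass its status checks and then look up its public IP address for SSH. A minimal sketch; the instance ID below is hypothetical, and as noted earlier it can take up to 15 minutes before SSH is reachable on a Mac instance:

# Wait until the instance passes both status checks (this can take several minutes on Mac instances)
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0

# Retrieve the public IP address to use with SSH
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text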

Install Xcode from the Terminal
After the instance is reachable, I can connect to it using SSH and install my development tools. I use xcodeinstall to download and install Xcode 16.4.

From my laptop, I open a session with my Apple developer credentials:

# on my laptop, with permissions to access AWS Secret Manager
» xcodeinstall authenticate -s eu-central-1                                                                                               

Retrieving Apple Developer Portal credentials...
Authenticating...
🔐 Two factors authentication is enabled, enter your 2FA code: 067785
✅ Authenticated with MFA.

I connect to the EC2 Mac instance I just launched. Then, I download and install Xcode:

» ssh ec2-user@44.234.115.119                                                                                                                                                                   

Warning: Permanently added '44.234.115.119' (ED25519) to the list of known hosts.
Last login: Sat Aug 23 13:49:55 2025 from 81.49.207.77

    ┌───┬──┐   __|  __|_  )
    │ ╷╭╯╷ │   _|  (     /
    │  └╮  │  ___|___|___|
    │ ╰─┼╯ │  Amazon EC2
    └───┴──┘  macOS Sequoia 15.6

ec2-user@ip-172-31-54-74 ~ % brew tap sebsto/macos
==> Tapping sebsto/macos
Cloning into '/opt/homebrew/Library/Taps/sebsto/homebrew-macos'...
remote: Enumerating objects: 227, done.
remote: Counting objects: 100% (71/71), done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 227 (delta 22), reused 63 (delta 14), pack-reused 156 (from 1)
Receiving objects: 100% (227/227), 37.93 KiB | 7.59 MiB/s, done.
Resolving deltas: 100% (72/72), done.
Tapped 1 formula (13 files, 61KB).

ec2-user@ip-172-31-54-74 ~ % brew install xcodeinstall 
==> Fetching downloads for: xcodeinstall
==> Fetching sebsto/macos/xcodeinstall
==> Downloading https://github.com/sebsto/xcodeinstall/releases/download/v0.12.0/xcodeinstall-0.12.0.arm64_sequoia.bottle.tar.gz
Already downloaded: /Users/ec2-user/Library/Caches/Homebrew/downloads/9f68a7a50ccfdc479c33074716fd654b8528be0ec2430c87bc2b2fa0c36abb2d--xcodeinstall-0.12.0.arm64_sequoia.bottle.tar.gz
==> Installing xcodeinstall from sebsto/macos
==> Pouring xcodeinstall-0.12.0.arm64_sequoia.bottle.tar.gz
🍺  /opt/homebrew/Cellar/xcodeinstall/0.12.0: 8 files, 55.2MB
==> Running `brew cleanup xcodeinstall`...
Disable this behaviour by setting `HOMEBREW_NO_INSTALL_CLEANUP=1`.
Hide these hints with `HOMEBREW_NO_ENV_HINTS=1` (see `man brew`).
==> No outdated dependents to upgrade!

ec2-user@ip-172-31-54-74 ~ % xcodeinstall download -s eu-central-1 -f -n "Xcode 16.4.xip"
                        Downloading Xcode 16.4
100% [============================================================] 2895 MB / 180.59 MBs
[ OK ]
✅ Xcode 16.4.xip downloaded

ec2-user@ip-172-31-54-74 ~ % xcodeinstall install -n "Xcode 16.4.xip"
Installing...
[1/6] Expanding Xcode xip (this might take a while)
[2/6] Moving Xcode to /Applications
[3/6] Installing additional packages... XcodeSystemResources.pkg
[4/6] Installing additional packages... CoreTypes.pkg
[5/6] Installing additional packages... MobileDevice.pkg
[6/6] Installing additional packages... MobileDeviceDevelopment.pkg
[ OK ]
✅ file:///Users/ec2-user/.xcodeinstall/download/Xcode%2016.4.xip installed

ec2-user@ip-172-31-54-74 ~ % sudo xcodebuild -license accept

ec2-user@ip-172-31-54-74 ~ % 

EC2 Mac M4 - install xcode

Things to know
Select an EBS volume of at least 200 GB for development purposes. The 100 GB default volume size is not sufficient to install Xcode. I usually select 500 GB. When you increase the EBS volume size after launching the instance, remember to resize the APFS file system.
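
As an illustration, here is a minimal sketch of growing the APFS container from within macOS after the EBS volume has been resized, using the standard diskutil commands. The disk identifiers below are hypothetical; check the output of diskutil list on your instance before running the resize:

# Identify the physical disk and the APFS container backing the EBS volume
diskutil list physical external

# Hypothetical identifiers taken from the output above (yours may differ)
PDISK=/dev/disk0
APFSCONT=/dev/disk0s2

# Repair the disk, then grow the APFS container to use all available space (0 means maximum)
yes | sudo diskutil repairDisk "$PDISK"
sudo diskutil apfs resizeContainer "$APFSCONT" 0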

Alternatively, you can install your development tools and frameworks on the low-latency 2 TB local SSD available in the Mac mini. Keep in mind that the content of that volume is bound to the instance lifecycle, not the dedicated host. This means everything on the internal SSD storage is deleted when you stop and restart the instance.

The mac-m4.metal and mac-m4pro.metal instances support macOS Sequoia 15.6 and later.

You can migrate your existing EC2 Mac instances as long as they run macOS 15 (Sequoia). Create a custom AMI from your existing instance and launch an M4 or M4 Pro instance from this AMI.
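
A minimal sketch of that path from the CLI; the instance ID and image name are hypothetical:

# Create a custom AMI from the existing EC2 Mac instance running macOS 15
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-sequoia-build-image"

# Then launch an M4 or M4 Pro Mac instance from the returned AMI ID onto a dedicated host,
# using run-instances with --instance-type "mac-m4.metal" or "mac-m4pro.metal" as shown earlier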

Finally, I suggest checking the tutorials I wrote to help you get started with Amazon EC2 Mac.

Pricing and availability
EC2 M4 and M4 Pro Mac instances are currently available in US East (N. Virginia) and US West (Oregon), with additional Regions planned for the future.

Amazon EC2 Mac instances are available for purchase as Dedicated Hosts through the On-Demand and Savings Plans pricing models. Billing for EC2 Mac instances is per second with a 24-hour minimum allocation period to comply with the Apple macOS Software License Agreement. At the end of the 24-hour minimum allocation period, the host can be released at any time with no further commitment.

As someone who works closely with Apple developers, I’m curious to see how you’ll use these new instances to accelerate your development cycles. The combination of increased performance, enhanced memory capacity, and integration with AWS services opens new possibilities for teams building applications for iOS, macOS, iPadOS, tvOS, watchOS, and visionOS platforms. Beyond application development, Apple silicon’s Neural Engine makes these instances cost-effective candidates for running machine learning (ML) inference workloads. I’ll be discussing this topic in detail at AWS re:Invent 2025, where I’ll share benchmarks and best practices for optimizing ML workloads on EC2 Mac instances.

To learn more about EC2 M4 and M4 Pro Mac instances, visit the Amazon EC2 Mac Instances page or refer to the EC2 Mac documentation. You can start using these instances today to modernize your Apple development workflows on AWS.

— seb

Accelerate serverless testing with LocalStack integration in VS Code IDE

Today, we’re announcing LocalStack integration in the AWS Toolkit for Visual Studio Code that makes it easier than ever for developers to test and debug serverless applications locally. This enhancement builds upon our recent improvements to the AWS Lambda development experience, including the console to IDE integration and remote debugging capabilities we launched in July 2025, continuing our commitment to simplify serverless development on Amazon Web Services (AWS).

When building serverless applications, developers typically focus on three key areas to streamline their testing experience: unit testing, integration testing, and debugging resources running in the cloud. Although the AWS Serverless Application Model Command Line Interface (AWS SAM CLI) provides excellent local unit testing capabilities for individual Lambda functions, developers working with event-driven architectures that involve multiple AWS services, such as Amazon Simple Queue Service (Amazon SQS), Amazon EventBridge, and Amazon DynamoDB, need a comprehensive solution for local integration testing. Although LocalStack provides local emulation of AWS services, developers previously had to manage it as a standalone tool, requiring complex configuration and frequent context switching between multiple interfaces, which slowed down the development cycle.

LocalStack integration in AWS Toolkit for VS Code
To address these challenges, we’re introducing LocalStack integration so developers can connect AWS Toolkit for VS Code directly to LocalStack endpoints. With this integration, developers can test and debug serverless applications without switching between tools or managing complex LocalStack setups. Developers can now emulate end-to-end event-driven workflows involving services such as Lambda, Amazon SQS, and EventBridge locally, without needing to manage multiple tools, perform complex endpoint configurations, or deal with service boundary issues that previously required connecting to cloud resources.

The key benefit of this integration is that AWS Toolkit for VS Code can now connect to custom endpoints such as LocalStack, something that wasn’t possible before. Previously, to point AWS Toolkit for VS Code to their LocalStack environment, developers had to perform manual configuration and context switching between tools.

Getting started with LocalStack in VS Code is straightforward. Developers can begin with the LocalStack Free version, which provides local emulation for core AWS services ideal for early-stage development and testing. Using the guided application walkthrough in VS Code, developers can install LocalStack directly from the toolkit interface, which automatically installs the LocalStack extension and guides them through the setup process. When it’s configured, developers can deploy serverless applications directly to the emulated environment and test their functions locally, all without leaving their IDE.

Let’s try it out
First, I’ll update my copy of the AWS Toolkit for VS Code to the latest version. Once I’ve done this, I can see a new option when I go to Application Builder and click on Walkthrough of Application Builder. This allows me to install LocalStack with a single click.

Once I’ve completed the setup for LocalStack, I can start it up from the status bar and then I’ll be able to select LocalStack from the list of my configured AWS profiles. In this illustration, I am using Application Composer to build a simple serverless architecture using Amazon API Gateway, Lambda, and DynamoDB. Normally, I’d deploy this to AWS using AWS SAM. In this case, I’m going to use the same AWS SAM command to deploy my stack locally.

I just run `sam deploy --guided --profile localstack` from the command line and follow the usual prompts. Deploying to LocalStack using the AWS SAM CLI provides the exact same experience I’m used to when deploying to AWS. In the screenshot below, I can see the standard output from AWS SAM, as well as my new LocalStack resources listed in the AWS Toolkit Explorer.
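
If you want to set up the profile yourself instead of relying on the guided walkthrough, a minimal sketch looks like the following. It assumes LocalStack’s default edge endpoint on localhost:4566 and the AWS CLI v2 endpoint_url setting; the dummy test credentials are a common LocalStack convention, and the toolkit’s setup may create an equivalent profile for you automatically:

# Hypothetical "localstack" profile that points the AWS CLI and SAM CLI at the local endpoint
aws configure set aws_access_key_id test --profile localstack
aws configure set aws_secret_access_key test --profile localstack
aws configure set region us-east-1 --profile localstack
aws configure set endpoint_url http://localhost:4566 --profile localstack

# Deploy the SAM application against LocalStack with the same workflow as a cloud deployment
sam deploy --guided --profile localstack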

I can even go into a Lambda function and edit the function code I’ve deployed locally!

Over on the LocalStack website, I can log in and take a look at all the resources I have running locally. In the screenshot below, you can see the local DynamoDB table I just deployed.

Enhanced development workflow
These new capabilities complement our recently launched console-to-IDE integration and remote debugging features, creating a comprehensive development experience that addresses different testing needs throughout the development lifecycle. AWS SAM CLI provides excellent local testing for individual Lambda functions, handling unit testing scenarios effectively. For integration testing, the LocalStack integration enables testing of multiservice workflows locally without the complexity of AWS Identity and Access Management (IAM) permissions, Amazon Virtual Private Cloud (Amazon VPC) configurations, or service boundary issues that can slow down development velocity.

When developers need to test using AWS services in development environments, they can use our remote debugging capabilities, which provide full access to Amazon VPC resources and IAM roles. This tiered approach frees up developers to focus on business logic during early development phases using LocalStack, then seamlessly transition to cloud-based testing when they need to validate against AWS service behaviors and configurations. The integration eliminates the need to switch between multiple tools and environments, so developers can identify and fix issues faster while maintaining the flexibility to choose the right testing approach for their specific needs.

Now available
You can start using these new features through the AWS Toolkit for VS Code by updating to v3.74.0. The LocalStack integration is available in all commercial AWS Regions except AWS GovCloud (US) Regions. To learn more, visit the AWS Toolkit for VS Code and Lambda documentation.

For developers who need broader service coverage or advanced capabilities, LocalStack offers additional tiers with expanded features. There are no additional costs from AWS for using this integration.

These enhancements represent another significant step forward in our ongoing commitment to simplifying the serverless development experience. Over the past year, we’ve focused on making VS Code the tool of choice for serverless developers, and this LocalStack integration continues that journey by providing tools for developers to build and test serverless applications more efficiently than ever before.

Now Open — AWS Asia Pacific (New Zealand) Region

Kia ora! Today, I’m pleased to share the general availability of the AWS Asia Pacific (New Zealand) Region with three Availability Zones and API name ap-southeast-6. With the new Region, customers can now run workloads and securely store data in New Zealand while serving end users with even lower latency.

The new AWS Asia Pacific (New Zealand) Region will help organizations run their applications and serve end users while maintaining data residency in New Zealand. The NZD $7.5 billion Amazon Web Services (AWS) investment to establish an AWS Region in New Zealand is expected to contribute NZD $10.8 billion to New Zealand’s gross domestic product (GDP), create an estimated 1,000 new jobs annually, and enable Kiwi organizations of all sizes to innovate and scale faster using the most secure and resilient infrastructure.

AWS in New Zealand
Since we opened our first office in New Zealand in 2013, we’ve been continuously expanding our infrastructure to better serve Kiwi customers:

Connectivity to the global AWS network – In 2016, AWS enhanced New Zealand’s connectivity to the AWS Global Infrastructure by establishing diverse, high-capacity subsea cable connections, improving network reliability and performance for customers.

Amazon CloudFront – In 2020, AWS expanded its infrastructure footprint in New Zealand by adding two Amazon CloudFront edge locations in Auckland.

AWS Local Zones – To further enhance its infrastructure offerings in New Zealand, AWS introduced an AWS Local Zone in Auckland in 2023 helping customers deliver applications that require single-digit millisecond latency.

AWS Direct Connect – In the same year, AWS also added a Direct Connect location in Auckland to help customers securely link their on-premises networks to AWS resulting in lower networking costs and improved application performance. With this Region launch, AWS is adding another Direct Connect location in Auckland.

Let’s take a look at how AWS customers are leveraging AWS capabilities for diverse needs.

Security and compliance
The New Zealand government has a cloud first policy to encourage cloud adoption across the public sector. AWS supports 143 security standards and compliance certifications, including Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA) and Health Information Technology for Economic and Clinical Health (HITECH), Federal Risk and Authorization Management Program (FedRAMP), General Data Protection Regulation (GDPR), Federal Information Processing Standard (FIPS) 140-3, and National Institute of Standards and Technology (NIST) 800-171, helping customers satisfy compliance requirements around the globe and providing a secure cloud infrastructure.

MATTR, a New Zealand-based organization providing infrastructure and digital trust services to businesses and governments, sees significant benefits from the new Region. To learn more about how MATTR and other organizations like Kiwibank and Deloitte plan to use the AWS New Zealand Region, visit this news article.

Accelerating AI innovation in New Zealand
AWS delivers the most comprehensive set of capabilities for generative AI at every layer of the stack, including a choice of cutting-edge large language models (LLMs) for implementing generative AI with Amazon Bedrock, and the most capable generative AI assistant to transform how work gets done with Amazon Q.

New Zealand customers are already benefiting from the generative AI capabilities offered by AWS.

Thematic is a New Zealand-based global leader in customer intelligence and feedback analysis. Thematic uses generative AI to turn customer feedback data from multiple channels into curated, accurate, and reliable customer intelligence.

“Using Amazon Bedrock is just so incredibly easy that it just makes sense. Whenever we design a solution, we do test more than 10 large language models (LLMs). Consistently the ones offered by AWS are winning those competitions,” said Nathan Holmberg, CTO and Co-Founder, Thematic.

To learn more about how other customers like One NZ are using generative AI, visit this article.

Building cloud skills together
Since signing a memorandum of understanding (MoU) with the New Zealand government in 2022, Amazon has trained more than 50,000 Kiwis toward our goal of 100,000. Amazon is committed to continuing to invest in cloud education through programs including AWS Academy, AWS Skills Builder, AWS Educate, and AWS re/Start. Organizations are using AWS to scale globally while investing in local talent development, supporting New Zealand’s growing demand for cloud expertise.

Xero, a global small business platform, helps customers supercharge their business by bringing together the most important small business tools, including accounting, payroll, and payments, on one platform. Using AWS since 2016, Xero has scaled its platform globally, enhancing its features and enabling continual innovation.

“Amazon’s commitment to the New Zealand tech industry through their NZD $7.5B investment is promising. It’s a significant vote of confidence that will help connect New Zealand tech exporters with new global opportunities across the AWS ecosystem and the broader Amazon network,” says Bridget Snelling, Xero Country Manager, Aotearoa New Zealand.

Sustainable digital transformation
Through The Climate Pledge, Amazon is committed to reaching net-zero carbon across its business by 2040. AWS is committed to supporting New Zealand’s sustainability goals with efficient and responsible operations of its data centers in the country. The AWS Asia Pacific (New Zealand) Region is underpinned by renewable energy from day one through its agreement with Mercury New Zealand.

Sharesies, a wealth development platform, is using AWS to modernize operations while advancing sustainability goals.

“Sharesies is very supportive of storing customer data in-country and being able to use renewable energy,” says Sharesies Chief Technical Officer Richard Clark. “To do this in New Zealand on the AWS Cloud and have it fully powered by Mercury’s wind energy is a huge step forward. And very exciting!”

AWS partners in New Zealand
The AWS Partner Network (APN) in New Zealand includes a growing ecosystem of consulting and technology partners helping customers of all sizes design, architect, build, migrate, and manage their workloads on AWS. AWS Partners like Custom D, Grant Thornton Digital, MongoDB, and Parallo are actively supporting customers to deliver innovative solutions tailored to the unique needs of New Zealand organizations across various industries. With the new Region, these partners can now leverage the full capabilities of AWS cloud services locally.

AWS community in New Zealand
New Zealand is also home to one AWS Hero, 26 AWS Community Builders, 6 AWS User Groups and almost 9,000 community members across AWS User Groups in Auckland, Wellington, and Christchurch. If you’re interested in joining AWS User Groups New Zealand, visit their Meetup and social media pages.

Here’s what AWS Hero Arshad Zackeriya says about the new Region:

“The launch of the AWS Region in New Zealand is a game-changer for our country. It’s not just about a new set of data centers; it’s about unlocking the potential of New Zealand’s businesses and developer communities, allowing us to build a better, more connected Aotearoa for all.”

Available now
The AWS Asia Pacific (New Zealand) Region is the first infrastructure Region in New Zealand and sixteenth Region in Asia Pacific. With this launch, AWS now spans 120 Availability Zones within 38 geographic Regions around the world, with announced plans for 10 more Availability Zones and three more AWS Regions in the Kingdom of Saudi Arabia, Chile, and the European Sovereign Cloud.

The new Asia Pacific (New Zealand) Region is ready to support your business, and you can find a detailed list of the services available in this Region on the AWS Services by Region page. To learn more, visit the AWS Global Infrastructure page, and start building on ap-southeast-6!
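
If your account treats newly launched Regions as opt-in (the default for Regions opened in recent years), a minimal sketch for enabling the Region and verifying its Availability Zones from the CLI looks like this; whether the opt-in step is required for ap-southeast-6 depends on your account settings:

# Opt in to the new Region (only needed if it is not already enabled for your account)
aws account enable-region --region-name ap-southeast-6

# Verify the Region and list its Availability Zones
aws ec2 describe-availability-zones --region ap-southeast-6 --output table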

Happy building!
Donnie

AWS Weekly Roundup: Amazon EC2, Amazon Q Developer, IPv6 updates, and more (September 1, 2025)

My LinkedIn feed was absolutely packed this week with pictures from the AWS Heroes Summit event in Seattle. It was heartwarming to see so many familiar faces and new Heroes coming together.

AWS Heroes Summit 2025

For those not familiar with the AWS Heroes program, it’s a global community recognition initiative that honors individuals who make outstanding contributions to the AWS community. These Heroes share their deep AWS knowledge through content creation, speaking at events, organizing community gatherings, and contributing to open-source projects.

The AWS Heroes Summit brings these exceptional community leaders together, providing a unique platform for knowledge exchange, networking, and collaboration. As someone who regularly interacts with Heroes through our AWS initiatives, I always find these summits invaluable – they offer deep technical discussions, early access to AWS roadmaps, and opportunities to provide direct feedback to AWS service teams. The insights and connections made at these events often translate into better resources and guidance for the broader AWS community.

Last week’s launches

In addition to this inspiring community, here are some AWS launches that caught my attention:

  • AWS expands Internet Protocol v6 (IPv6) support to AWS App Runner, AWS Client VPN, and RDS Data API — Three more AWS services now support IPv6 connectivity, helping you meet compliance requirements and removing the need to handle address translation between IPv4 and IPv6. AWS App Runner now supports IPv6-based inbound and outbound traffic on both public and private App Runner service endpoints. AWS Client VPN announced support for remote access to IPv6 workloads, allowing you to establish secure VPN connections to your IPv6-enabled VPC resources. Finally, RDS Data API now supports IPv6, enabling dual-stack (IPv4 and IPv6) connectivity for your Aurora databases.
  • We launched two new instance families this week: the new storage-optimized I8ge and the general-purpose M8i instances — Our I8ge instances, powered by AWS Graviton4 processors, deliver up to 60% better compute performance compared to their Graviton2-based predecessors. These instances feature third-generation AWS Nitro SSDs, providing up to 55% better real-time storage performance per TB and significantly lower I/O latency. With 120 TB of storage and sizes up to 48xlarge (including two metal options), they offer the highest storage density among AWS Graviton-based storage optimized instances. We also launched M8i and M8i-flex instances with custom Intel Xeon 6 processors. These instances deliver up to 15% better price-performance and 2.5x more memory bandwidth than their predecessors. M8i-flex instances are ideal for general-purpose workloads, available from large to 16xlarge. For demanding applications, you can choose from our SAP-certified M8i instances in 13 sizes, including 2 bare metal options and a new 96xlarge size.
  • Amazon EC2 Mac Dedicated hosts now support Host Recovery and Reboot-based host maintenance — you can enable two new capabilities for your EC2 Mac Dedicated Hosts: Host Recovery and Reboot-based Host Maintenance. Host Recovery automatically detects potential hardware issues on Mac Dedicated Hosts and seamlessly migrates Mac instances to a new replacement host, minimizing disruption to workloads. Reboot-based Host Maintenance automatically stops and restarts instances on replacement hosts when scheduled maintenance events occur, eliminating the need for manual intervention during planned maintenance windows.
  • Amazon Q Developer now supports MCP admin control — Administrators now have the ability to enable or disable MCP functionality for all Q Developer clients in their organization. When an administrator disables the functionality, users will not be allowed to add any MCP servers, nor will any previously defined servers be initialized.

Other AWS news

Here are some additional projects and blog posts that you might find interesting:

  • Mastering Amazon Q Developer with Rules — I read an interesting article about Amazon Q Developer’s rules feature this weekend that I want to share with you. What caught my attention is how it solves a pain point I often encounter when working with AI assistants – having to repeatedly explain my coding preferences and standards. With rules, you define your preferences once in Markdown files, and Amazon Q Developer automatically follows them for every interaction. I particularly like how transparent the system is, showing which rules it’s following, and how it helps maintain consistency across teams. Since implementing rules in my projects, I’ve seen more consistent code quality, all while reducing the cognitive load of having to repeatedly explain our standards.
  • Strategies for excelling across all four exam domains of the AWS Certified Machine Learning – Specialty certification. The AWS Training & Certification team, where I spent my first three years at AWS, shared how to prepare for the AWS Certified Machine Learning – Specialty certification, whether you’re starting from scratch or building upon existing AWS Certifications. They share the prerequisites and guidance to help you get ready for this certification and demonstrate your expertise in building ML solutions with AWS.
  • As is now our tradition after Prime Day, we shared the impressive metrics showing how AWS services scaled to support one of the world’s largest shopping events. Amazon Prime Day 2025 was the biggest ever, setting records for both sales volume and total items sold during the 4-day event. This year was particularly special as we saw a significant transformation in the Prime Day experience through advancements in our generative AI offerings, with customers using Alexa+, Rufus, and AI Shopping Guides to discover deals and get product information. The numbers are staggering – Amazon DynamoDB handled tens of trillions of API calls while maintaining high availability, delivering single-digit millisecond responses and peaking at 151 million requests per second. Amazon API Gateway processed over 1 trillion internal service requests—a 30 percent increase in requests on average per day compared to Prime Day 2024.

Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events:

  • AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Toronto (September 4), Los Angeles (September 17), and Bogotá (October 9).
  • AWS re:Invent 2025 — This flagship annual conference is coming to Las Vegas from December 1–5. The event catalog is now available. Mark your calendars for this not to be missed gathering of the AWS community.
  • AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Adria (September 5), Baltic (September 10), Aotearoa (September 18), South Africa (September 20), Bolivia (September 20), Portugal (September 27).

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— seb

New general-purpose Amazon EC2 M8i and M8i Flex instances are now available

Today, we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) general-purpose M8i and M8i-Flex instances powered by custom Intel Xeon 6 processors available only on AWS with sustained all-core 3.9 GHz turbo frequency. These instances deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They also deliver up to 15 percent better price performance, up to 20 percent higher performance, and 2.5 times more memory bandwidth compared to previous generation M7i and M7i-Flex instances.

M8i and M8i-flex instances are ideal for running general purpose workloads such as general web application servers, virtual desktops, batch processing, microservices, databases, and enterprise applications. In terms of performance, these instances are specifically up to 60 percent faster for NGINX web applications, up to 30 percent faster for PostgreSQL database workloads, and up to 40 percent faster for AI deep learning recommendation models compared to M7i and M7i-Flex instances.

Like the R8i and R8i-Flex instances, these instances use the new sixth-generation AWS Nitro Cards, delivering up to two times more network and Amazon Elastic Block Store (Amazon EBS) bandwidth compared to the previous generation instances. This greatly improves network throughput for workloads handling small packets, such as web, application, and gaming servers. They also support bandwidth configuration with 25 percent allocation adjustments between network and Amazon EBS bandwidth, enabling better database performance, query processing, and logging speeds.

M8i instances
M8i instances provide up to 384 vCPUs and 1.5 TB memory including bare metal instances that provide dedicated access to the underlying physical hardware. These SAP-certified instances help you to run large application servers and databases, gaming servers, CPU-based inference, and video streaming that need the largest instance sizes or high CPU continuously.

Here are the specs for M8i instances:

Instance size vCPUs Memory (GiB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
m8i.large 2 8 Up to 12.5 Up to 10
m8i.xlarge 4 16 Up to 12.5 Up to 10
m8i.2xlarge 8 32 Up to 15 Up to 10
m8i.4xlarge 16 64 Up to 15 Up to 10
m8i.8xlarge 32 128 15 10
m8i.12xlarge 48 192 22.5 15
m8i.16xlarge 64 256 30 20
m8i.24xlarge 96 384 40 30
m8i.32xlarge 128 512 50 40
m8i.48xlarge 192 768 75 60
m8i.96xlarge 384 1536 100 80
m8i.metal-48xl 192 768 75 60
m8i.metal-96xl 384 1536 100 80
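
Availability of individual sizes varies by Region. If you want to confirm the sizes and their vCPU and memory specs directly from the CLI, here is a minimal sketch (run it in a Region where M8i is available):

# List all M8i sizes with their vCPU count and memory
aws ec2 describe-instance-types \
    --filters "Name=instance-type,Values=m8i.*" \
    --query 'InstanceTypes[].{Type:InstanceType,vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB}' \
    --output table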

M8i-Flex instances
M8i-Flex instances are a lower-cost variant of the M8i instances, with 5 percent better price performance at 5 percent lower prices. They’re designed for workloads that benefit from the latest generation performance but don’t fully utilize all compute resources. These instances can reach up to the full CPU performance 95 percent of the time.

Here are the specs for the M8i-Flex instances:

Instance size vCPUs Memory (GiB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
m8i-flex.large 2 8 Up to 12.5 Up to 10
m8i-flex.xlarge 4 16 Up to 12.5 Up to 10
m8i-flex.2xlarge 8 32 Up to 15 Up to 10
m8i-flex.4xlarge 16 64 Up to 15 Up to 10
m8i-flex.8xlarge 32 128 Up to 15 Up to 10
m8i-flex.12xlarge 48 192 Up to 22.5 Up to 15
m8i-flex.16xlarge 64 256 Up to 30 Up to 20

If you’re currently using earlier generations of general-purpose instances, you can adopt M8i-Flex instances without having to make changes to your application or your workload.
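One way to do that for an existing workload is to stop the instance and change its instance type in place. This is a minimal sketch; the instance ID is hypothetical, and it assumes the instance can be stopped and that the target size is available in its Availability Zone:

# Stop the instance and wait until it is fully stopped
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Change the instance type to an M8i-Flex size, then start it again
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"m8i-flex.xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0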

Now available
Amazon EC2 M8i and M8i-Flex instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain) AWS Regions. M8i and M8i-Flex instances can be purchased as On-Demand, Savings Plan, and Spot instances. M8i instances are also available in Dedicated Instances and Dedicated Hosts. To learn more, visit the Amazon EC2 Pricing page.

Give M8i and M8i-Flex instances a try in the Amazon EC2 console. To learn more, visit the Amazon EC2 M8i instances page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy

AWS services scale to new heights for Prime Day 2025: key metrics and milestones

Amazon Prime Day 2025 was the biggest Amazon Prime Day shopping event ever, setting records for both sales volume and total items sold during the 4-day event. Prime members saved billions while shopping Amazon’s millions of deals during the event.

This year marked a significant transformation in the Prime Day experience through advancements in the generative AI offerings from Amazon and AWS. Customers used Alexa+, Amazon’s next-generation personal assistant now available in early access to millions of customers, along with the AI-powered shopping assistant, Rufus, and AI Shopping Guides. These features, built on more than 15 years of cloud innovation and machine learning expertise from AWS, combined with deep retail and consumer experience from Amazon, helped customers quickly discover deals and get product information, complementing the fast, free delivery that Prime members enjoy year-round.

As part of our annual tradition to tell you about how AWS powered Prime Day for record-breaking sales, I want to share the services and chart-topping metrics from AWS that made your amazing shopping experience possible.


Prime Day 2025 – all the numbers
During the weeks leading up to big shopping events like Prime Day, Amazon fulfillment centers and delivery stations work to get ready and ensure operations run efficiently and safely. For example, the Amazon automated storage and retrieval system (ASRS) operates a global fleet of industrial mobile robots that move goods around Amazon fulfillment centers.

AWS Outposts, a fully managed service that extends the AWS experience on-premises, powers software applications that manage the command-and-control of Amazon ASRS and supports same-day and next-day deliveries through low-latency processing of critical robotic commands.

During Prime Day 2025, AWS Outposts at one of the largest Amazon fulfillment centers sent more than 524 million commands to over 7,000 robots, reaching peak volumes of 8 million commands per hour—a 160 percent increase compared to Prime Day 2024.

Here are some more interesting, mind-blowing metrics:

  • Amazon Elastic Compute Cloud (Amazon EC2) – During Prime Day 2025, AWS Graviton, a family of processors designed to deliver the best price performance for cloud workloads running in Amazon EC2, powered more than 40 percent of the Amazon EC2 compute used by Amazon.com. Amazon also deployed over 87,000 AWS Inferentia and AWS Trainium chips – custom silicon chips for deep learning and generative AI training and inference – to power Amazon Rufus for Prime Day.
  • Amazon SageMaker AI — Amazon SageMaker AI, a fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning (ML), processed more than 626 billion inference requests during Prime Day 2025.
  • Amazon Elastic Container Service (Amazon ECS) and AWS Fargate – Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that works seamlessly with AWS Fargate, a serverless compute engine for containers. During Prime Day 2025, Amazon ECS launched an average of 18.4 million tasks per day on AWS Fargate, representing a 77 percent increase from the previous year’s Prime Day average.
  • AWS Fault Injection Service (AWS FIS) – We ran over 6,800 AWS FIS experiments—over eight times more than we conducted in 2024—to test resilience and ensure Amazon.com remains highly available on Prime Day. This significant increase was made possible by two improvements: new Amazon ECS support for network fault injection experiments on AWS Fargate, and the integration of FIS testing in continuous integration and continuous delivery (CI/CD) pipelines.
  • AWS Lambda – AWS Lambda, a serverless compute service that lets you run code without managing infrastructure, handled over 1.7 trillion invocations per day during Prime Day 2025.
  • Amazon API Gateway – During Prime Day 2025, Amazon API Gateway, a fully managed service that makes it easy to create, maintain, and secure APIs at any scale, processed over 1 trillion internal service requests—a 30 percent increase in requests on average per day compared to Prime Day 2024.
  • Amazon CloudFront – Amazon CloudFront, a content delivery network (CDN) service that securely delivers content with low latency and high transfer speeds, delivered over 3 trillion HTTP requests during the global week of Prime Day 2025, a 43 percent increase in requests compared to Prime Day 2024.
  • Amazon Elastic Block Store (Amazon EBS) – During Prime Day 2025, Amazon EBS, our high-performance block storage service, peaked at 20.3 trillion I/O operations, moving up to an exabyte of data daily.
  • Amazon Aurora – On Prime Day, Amazon Aurora, a relational database management system (RDBMS) built for high performance and availability at global scale for PostgreSQL, MySQL, and DSQL, processed 500 billion transactions, stored 4,071 terabytes of data, and transferred 999 terabytes of data.
  • Amazon DynamoDB – Amazon DynamoDB, a serverless, fully managed, distributed NoSQL database, powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made tens of trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 151 million requests per second.
  • Amazon ElastiCache – During Prime Day, Amazon ElastiCache, a fully managed caching service delivering microsecond latency, peaked at serving over 1.5 quadrillion daily requests and over 1.4 trillion requests in a minute.
  • Amazon Kinesis Data Streams – Amazon Kinesis Data Streams, a fully managed serverless data streaming service, processed a peak of 807 million records per second during Prime Day 2025.
  • Amazon Simple Queue Service (Amazon SQS) – During Prime Day 2025, Amazon SQS – a fully managed message queuing service for microservices, distributed systems, and serverless applications – set a new peak traffic record of 166 million messages per second.
  • Amazon GuardDuty – During Prime Day 2025, Amazon GuardDuty, an intelligent threat detection service, monitored an average of 8.9 trillion log events per hour, a 48.9 percent increase from last year’s Prime Day.
  • AWS CloudTrail – AWS CloudTrail, which tracks user activity and API usage on AWS, as well as in hybrid and multicloud environments, processed over 2.5 trillion events during Prime Day 2025, compared to 976 billion events in 2024.

Prepare to scale
If you’re preparing for similar business-critical events, product launches, and migrations, I recommend that you take advantage of our newly branded AWS Countdown (formerly known as AWS Infrastructure Event Management, or IEM). This comprehensive support program helps assess operational readiness, identify and mitigate risks, and plan capacity, using proven playbooks developed by AWS experts. We’ve expanded to include: generative AI implementation support to help you confidently launch and scale AI initiatives; migration and modernization support, including mainframe modernization; and infrastructure optimization for specialized sectors including election systems, retail operations, healthcare services, and sports and gaming events.

I look forward to seeing what other records will be broken next year!

Channy

AWS Weekly Roundup: Amazon Aurora 10th anniversary, Amazon EC2 R8 instances, Amazon Bedrock and more (August 25, 2025)

As I was preparing for this week’s roundup, I couldn’t help but reflect on how database technology has evolved over the past decade. It’s fascinating to see how architectural decisions made years ago continue to shape the way we build modern applications. This week brings a special milestone that perfectly captures this evolution in cloud database innovation as Amazon Aurora celebrated 10 years of database innovation.

Birthday cake with words Happy Birthday Amazon Aurora!

Amazon Web Services (AWS) Vice President Swami Sivasubramanian reflected on LinkedIn about his journey with Amazon Aurora, calling it “one of the most interesting products” he’s worked on. When Aurora launched in 2015, it shifted the database landscape by separating compute and storage. Now trusted by hundreds of thousands of customers across industries, Aurora has grown from a MySQL-compatible database to a comprehensive platform featuring innovations such as Aurora DSQL, serverless capabilities, I/O-Optimized pricing, zero-ETL integrations, and generative AI support. Last week’s celebration on August 21 highlighted this decade-long transformation that continues to simplify database scaling for customers.

Last week’s launches

In addition to the inspiring celebrations, here are some AWS launches that caught my attention:

  • AWS Billing and Cost Management introduces customizable Dashboards — This new feature consolidates cost data into visual dashboards with multiple widget types and visualization options, combining information from Cost Explorer, Savings Plans, and Reserved Instance reports to help organizations track spending patterns and share standardized cost reporting across accounts.
  • Amazon Bedrock simplifies access to OpenAI open weight models — AWS has streamlined access to OpenAI’s open weight models (gpt-oss-120b and gpt-oss-20b), making them automatically available to all users without manual activation while maintaining administrator control through IAM policies and service control policies.
  • Amazon Bedrock adds batch inference support for Claude Sonnet 4 and GPT-OSS models — This feature provides asynchronous processing of multiple inference requests with 50 percent lower pricing compared to on-demand inference, optimizing high-volume AI tasks such as document analysis, content generation, and data extraction, with Amazon CloudWatch metrics for tracking batch workload progress.
  • AWS launches Amazon EC2 R8i and R8i-flex memory-optimized instances — Powered by custom Intel Xeon 6 processors, these new instances deliver up to 20 percent better performance and 2.5 times higher memory throughput than R7i instances, making them ideal for memory-intensive workloads like databases and big data analytics, with R8i-flex offering additional cost savings for applications that don’t fully utilize compute resources.
  • Amazon S3 introduces batch data verification feature — A new capability in S3 Batch Operations that offers efficient verification of billions of objects using multiple checksum algorithms without downloading or restoring data, generating detailed integrity reports for compliance and audit purposes regardless of storage class or object size.

Other AWS news

Here are some additional projects and blog posts that you might find interesting:

  • Amazon introduces DeepFleet foundation models for multirobot coordination — Trained on millions of hours of data from Amazon fulfillment and sortation centers, these pioneering models predict future traffic patterns for robot fleets, representing the first foundation models specifically designed for coordinating multiple robots in complex environments.
  • Building Strands Agents with a few lines of code — A new blog demonstrates how to build multi-agent AI systems with a few lines of code, enabling specialized agents to collaborate seamlessly, handle complex workflows, and share information through standardized protocols for creating distributed AI systems beyond individual agent capabilities.
  • AWS Security Incident Response introduces ITSM integrations — New integrations with Jira and ServiceNow provide bidirectional synchronization of security incidents, comments, and attachments, streamlining response while maintaining existing processes, with open source code available on GitHub for customization and extension to additional IT service management (ITSM) platforms.
  • Finding root-causes using a network digital twin graph and agentic AI — A detailed blog post shows how AWS collaborated with NTT DOCOMO to build a network digital twin using graph databases and autonomous AI agents, helping telecom operators to move beyond correlation to identify true root causes of complex network issues, predict future problems, and improve overall service reliability.

Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events:

  • AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Toronto (September 4), Los Angeles (September 17), and Bogotá (October 9).
  • AWS re:Invent 2025 — This flagship annual conference is coming to Las Vegas from December 1–5. The event catalog is now available. Mark your calendars for this not to be missed gathering of the AWS community.
  • AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Adria (September 5), Baltic (September 10), Aotearoa (September 18), South Africa (September 20), Bolivia (September 20), Portugal (September 27).

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Betty

Best performance and fastest memory with the new Amazon EC2 R8i and R8i-flex instances

Today, we’re announcing general availability of the new eighth generation, memory optimized Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances powered by custom Intel Xeon 6 processors, available only on AWS. They deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. These instances deliver up to 15 percent better price performance, 20 percent higher performance, and 2.5 times more memory throughput compared to previous generation instances.

With these improvements, R8i and R8i-flex instances are ideal for a variety of memory intensive workloads such as SQL and NoSQL databases, distributed web scale in-memory caches (Memcached and Redis), in-memory databases such as SAP HANA, and real-time big data analytics (Apache Hadoop and Apache Spark clusters). For a majority of the workloads that don’t fully utilize the compute resources, the R8i-flex instances are a great first choice to achieve an additional 5 percent better price performance and 5 percent lower prices.

Improvements made to both instances compared to their predecessors
In terms of performance, R8i and R8i-flex instances offer 20 percent better performance than R7i instances, with even higher gains for specific workloads. These instances are up to 30 percent faster for PostgreSQL databases, up to 60 percent faster for NGINX web applications, and up to 40 percent faster for AI deep learning recommendation models compared to previous generation R7i instances, with sustained all-core turbo frequency now reaching 3.9 GHz (compared to 3.2 GHz in the previous generation). They also feature a 4.6x larger L3 cache and significantly better memory throughput, offering 2.5 times higher memory bandwidth than the seventh generation. With this higher performance across all the vectors, you can run a greater number of workloads while keeping costs down.

R8i instances now scale up to 96xlarge with up to 384 vCPUs and 3 TB of memory (versus 48xlarge in the seventh generation), helping you scale up database applications. R8i instances are SAP certified to deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for your mission-critical SAP workloads. R8i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. Both R8i and R8i-flex instances use the latest sixth-generation AWS Nitro Cards, delivering up to two times more network and Amazon Elastic Block Store (Amazon EBS) bandwidth compared to the previous generation, which greatly improves network throughput for workloads handling small packets, such as web, application, and gaming servers.

R8i and R8i-flex instances also support bandwidth configuration with 25 percent allocation adjustments between network and Amazon EBS bandwidth, enabling better database performance, query processing, and logging speeds. Additional enhancements include FP16 datatype support for Intel AMX to support workloads such as deep learning training and inference and other artificial intelligence and machine learning (AI/ML) applications.

The specs for the R8i instances are as follows.

Instance size vCPUs Memory (GiB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
r8i.large 2 16 Up to 12.5 Up to 10
r8i.xlarge 4 32 Up to 12.5 Up to 10
r8i.2xlarge 8 64 Up to 15 Up to 10
r8i.4xlarge 16 128 Up to 15 Up to 10
r8i.8xlarge 32 256 15 10
r8i.12xlarge 48 384 22.5 15
r8i.16xlarge 64 512 30 20
r8i.24xlarge 96 768 40 30
r8i.32xlarge 128 1024 50 40
r8i.48xlarge 192 1536 75 60
r8i.96xlarge 384 3072 100 80
r8i.metal-48xl 192 1536 75 60
r8i.metal-96xl 384 3072 100 80

The specs for the R8i-flex instances are as follows.

Instance size vCPUs Memory (GiB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
r8i-flex.large 2 16 Up to 12.5 Up to 10
r8i-flex.xlarge 4 32 Up to 12.5 Up to 10
r8i-flex.2xlarge 8 64 Up to 15 Up to 10
r8i-flex.4xlarge 16 128 Up to 15 Up to 10
r8i-flex.8xlarge 32 256 Up to 15 Up to 10
r8i-flex.12xlarge 48 384 Up to 22.5 Up to 15
r8i-flex.16xlarge 64 512 Up to 30 Up to 20

When to use the R8i-flex instances
As stated earlier, R8i-flex instances are more affordable versions of the R8i instances, offering up to 5 percent better price performance at 5 percent lower prices. They’re designed for workloads that benefit from the latest generation performance but don’t fully use all compute resources. These instances can reach up to the full CPU performance 95 percent of the time and work well for in-memory databases, distributed web scale cache stores, mid-size in-memory analytics, real-time big data analytics, and other enterprise applications. R8i instances are recommended for more demanding workloads that need sustained high CPU, network, or EBS performance such as analytics, databases, enterprise applications, and web scale in-memory caches.

Available now
R8i and R8i-flex instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain) AWS Regions. As usual with Amazon EC2, you pay only for what you use. For more information, refer to Amazon EC2 Pricing. Check out the full collection of memory optimized instances to help you start migrating your applications.

To learn more, visit our Amazon EC2 R8i instances page and Amazon EC2 R8i-flex instances page. Send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

– Veliswa

AWS Weekly Roundup: Single GPU P5 instances, Advanced Go Driver, Amazon SageMaker HyperPod and more (August 18, 2025)

This post was originally published on this site

Let me start this week’s update with something I’m especially excited about – the upcoming BeSA (Become a Solutions Architect) cohort. BeSA is a free mentoring program that I host along with a few other AWS employees on a volunteer basis to help people excel in their cloud careers. Last week, the instructors’ lineup was finalized for the 6-week cohort starting September 6. The cohort will focus on migration and modernization on AWS. Visit the BeSA website to learn more.

Another highlight for me last week was the announcement of six new AWS Heroes for their technical leadership and exceptional contributions to the AWS community. Read the full announcement to learn more about these community leaders.

Last week’s launches
Here are some launches from last week that got my attention:

  • Amazon EC2 Single GPU P5 instances are now generally available — You can right-size your machine learning (ML) and high performance computing (HPC) resources cost-effectively with the new Amazon Elastic Compute Cloud (Amazon EC2) P5 instance size with one NVIDIA H100 GPU.
  • AWS Advanced Go Driver is generally available — You can now use the AWS Advanced Go Driver with Amazon Relational Database Service (Amazon RDS) and Amazon Aurora PostgreSQL-Compatible and MySQL-Compatible database clusters for faster switchover and failover times, Federated Authentication, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM). You can install the PostgreSQL and MySQL packages for Windows, Mac, or Linux, by following the installation guides in GitHub.
  • Expanded support for Cilium with Amazon EKS Hybrid Nodes — Cilium is a Cloud Native Computing Foundation (CNCF) graduated project that provides core networking capabilities for Kubernetes workloads. Now, you can receive support from AWS for a broader set of Cilium features when using Cilium with Amazon EKS Hybrid Nodes, including application ingress, in-cluster load balancing, Kubernetes network policies, and kube-proxy replacement mode.
  • Amazon SageMaker AI now supports P6e-GB200 UltraServers — You can accelerate training and deployment of foundation models (FMs) at trillion-parameter scale by using up to 72 NVIDIA Blackwell GPUs under one NVLink domain with the new P6e-GB200 UltraServer support in Amazon SageMaker HyperPod and Model Training.
  • Amazon SageMaker HyperPod now supports fine-grained quota allocation of compute resources, topology-aware scheduling of LLM tasks, and custom Amazon Machine Images (AMIs) — You can allocate fine-grained compute quota for GPU, Trainium accelerator, vCPU, and vCPU memory within an instance to optimize compute resource distribution. With topology-aware scheduling, you can schedule your large language model (LLM) tasks on an optimal network topology to minimize network communication and enhance training efficiency. Using custom AMIs, you can deploy clusters with pre-configured, security-hardened environments that meet your specific organizational requirements.

Additional updates
Here are some additional news items and blog posts that I found interesting:

Upcoming AWS events
Check your calendars and sign up for upcoming AWS and AWS Community events:

  • AWS re:Invent 2025 (December 1-5, 2025, Las Vegas) — The AWS flagship annual conference offering collaborative innovation through peer-to-peer learning, expert-led discussions, and invaluable networking opportunities.
  • AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Coming up soon are summits in Johannesburg (August 20) and Toronto (September 4).
  • AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Adria (September 5), Baltic (September 10), Aotearoa (September 18), and South Africa (September 20).

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Prasad

AWS named as a Leader in 2025 Gartner Magic Quadrant for Strategic Cloud Platform Services for 15 years in a row

This post was originally published on this site

On August 4, 2025, Gartner published its Magic Quadrant for Strategic Cloud Platform Services (SCPS). Amazon Web Services (AWS) is the longest-running Magic Quadrant Leader, with Gartner naming AWS a Leader for the fifteenth consecutive year.

In the report, Gartner once again placed AWS highest on the “Ability to Execute” axis. We believe this reflects our ongoing commitment to giving customers the broadest and deepest set of capabilities to accelerate innovation as well as unparalleled security, reliability, and performance they can trust for their most critical applications.

Here is the graphical representation of the 2025 Magic Quadrant for Strategic Cloud Platform Services.

Gartner recognized AWS strengths as:

  • Largest cloud community – AWS has built a strong global community of cloud professionals, providing significant opportunities for learning and engagement.
  • Cloud-inspired silicon – AWS has used its cloud computing experience to develop custom silicon designs, including AWS Graviton, AWS Inferentia, and AWS Trainium, which enable tighter integration between hardware and software, improved power efficiency, and greater control over supply chains.
  • Global scale and operational execution – AWS’s significant share of global cloud market revenue has enabled it to build a larger and more robust network of integration partners than some other providers in this analysis, which in turn helps organizations successfully adopt cloud.

The most common feedback I hear from customers is that AWS has the largest and most dynamic cloud community, making it easy to ask questions and learn from millions of active customers and tens of thousands of partners globally. We recently launched our community hub, AWS Builder Center, to connect directly with AWS Heroes and AWS Community Builders. You can also explore and join AWS User Groups and AWS Cloud Clubs in a city near you.

We have also focused on facilitating the digital transformation of enterprise customers through a number of enterprise programs, such as the AWS Migration Acceleration Program. Applying generative AI to migration and modernization, we introduced AWS Transform, the first agentic AI service developed to accelerate enterprise modernization of mission-critical business workloads such as .NET, mainframe, and VMware.

Access the full Gartner report to learn more. It outlines the methodology and evaluation criteria Gartner used to assess each cloud service provider included in the report, and it can serve as a guide when choosing a cloud provider that helps you innovate on behalf of your customers.

Channy

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.