Tag Archives: AWS

Join AWS Cloud Infrastructure Day to learn cutting-edge innovations building global cloud infrastructure

I want to introduce AWS Cloud Infrastructure Day, a comprehensive showcase of the latest innovations in AWS cloud infrastructure. The event will highlight cutting-edge advances across compute, artificial intelligence and machine learning (AI/ML), storage, networking, serverless and accelerated technologies, and global infrastructure.

Join us for AWS Cloud Infrastructure Day, a free-to-attend one-day virtual event on May 22, 2025, starting at 11:00 AM PDT (2:00 PM EDT). We will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch.

Here are some of the highlights you can expect from this event:

Willem Visser, VP of EC2 Technology, will open with an introduction to the AWS journey since 2006, when Amazon Elastic Compute Cloud (Amazon EC2) launched with a goal of customer-obsessed innovation. He will speak about the progress made over nearly two decades of cloud infrastructure to support both startup and enterprise workloads with the scale, capacity, and flexibility they need.

You can learn how AWS evolved beyond compute instances to build a complete cloud infrastructure, including the parallel evolution of services such as storage and networking.

Todd Kennedy, Principal Engineer at GoDaddy, will share GoDaddy’s Graviton adoption journey and the benefits it has realized. Todd will walk through an example of moving Rust workloads to Graviton. Learn how GoDaddy achieved 40 percent compute cost savings and over 20 percent performance gains.

This event covers a variety of topics related to AWS Cloud infrastructure. Here are a few that caught my interest:

  • Generative AI at the edge – You can learn how to select, fine-tune, and deploy small language models (SLMs) for on-premises and edge use cases with data residency requirements, using AWS hybrid and edge services.
  • Serverless for agentic AI auditability – You can learn how AWS Step Functions and AWS Lambda transform opaque agentic AI system operations into transparent, auditable workflows.
  • Accelerated computing – You can get a close look at AWS innovation across silicon, server, and data centers and learn how customers are using AI chips. Learn how you can get started and reduce your generative AI costs.
  • Networking capability – You can learn how AWS infrastructure—from physical fiber to software-defined networking—enables unparalleled performance and reliability at global scale. The session covers modern application networking patterns while emphasizing secure connectivity solutions for hybrid environments.

This event is perfect for technical decision-makers and developers and offers deep technical insights and hands-on demonstrations of the latest AWS Cloud infrastructure solutions.

To learn more, review the event schedule and register for AWS Cloud Infrastructure Day.

Channy


Amazon Inspector enhances container security by mapping Amazon ECR images to running containers

When running container workloads, you need to understand how software vulnerabilities create security risks for your resources. Until now, you could identify vulnerabilities in your Amazon Elastic Container Registry (Amazon ECR) images, but couldn’t determine whether these images were active in containers or track their usage. Without visibility into whether these images were being used on running clusters, you had limited ability to prioritize fixes based on actual deployment and usage patterns.

Starting today, Amazon Inspector offers two new features that enhance vulnerability management, giving you a more comprehensive view of your container images. First, Amazon Inspector now maps Amazon ECR images to running containers, enabling security teams to prioritize vulnerabilities based on containers currently running in your environment. With these new capabilities, you can analyze vulnerabilities in your Amazon ECR images and prioritize findings based on whether the images are currently running and when they last ran in your container environment. Additionally, you can see the cluster Amazon Resource Name (ARN) and the number of EKS pods or ECS tasks where an image is deployed, helping you prioritize fixes based on usage and severity.

Second, we’re extending vulnerability scanning support to minimal base images, including scratch, distroless, and Chainguard images, and to additional ecosystems, including the Go toolchain, Oracle JDK and JRE, Amazon Corretto, Apache Tomcat, Apache httpd, WordPress (core, themes, and plugins), and Puppeteer, helping teams maintain robust security even in highly optimized container environments.

Through continual monitoring of images running on containers, Amazon Inspector helps teams identify which container images are actively running in their environment and where they’re deployed, detecting Amazon ECR images running on containers in Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS), along with any associated vulnerabilities. This capability supports teams managing Amazon ECR images across single AWS accounts, cross-account scenarios, and AWS Organizations with delegated administrator capabilities, enabling centralized vulnerability management based on container image usage patterns.

Let’s see it in action
Amazon ECR image scanning helps identify vulnerabilities in your container images through enhanced scanning, which integrates with Amazon Inspector to provide automated, continual scanning of your repositories. To use this new feature, you have to enable enhanced scanning through the Amazon ECR console by following the steps on the Configuring enhanced scanning for images in Amazon ECR documentation page. I already have Amazon ECR enhanced scanning enabled, so I don’t need to take any action.
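
If you prefer to script this prerequisite, you can also enable enhanced scanning with the AWS CLI. Here’s a minimal sketch, assuming you want continuous enhanced scanning for every repository in your registry; adjust the repository filter to match your own naming.

# Enable Amazon Inspector enhanced scanning for all repositories in the registry
aws ecr put-registry-scanning-configuration \
    --scan-type ENHANCED \
    --rules '[{"scanFrequency":"CONTINUOUS_SCAN","repositoryFilters":[{"filter":"*","filterType":"WILDCARD"}]}]'

# Verify the current registry scanning configuration
aws ecr get-registry-scanning-configuration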

In the Amazon Inspector console, I navigate to General settings and select ECR scanning settings from the navigation panel. Here, I can configure the new Image re-scan mode settings by choosing between Last in-use date and Last pull date. I keep the default of Last in-use date and set Image last in use date to 14 days. With these settings, Amazon Inspector monitors my images based on whether they have run in my Amazon ECS or Amazon EKS environments within the last 14 days. After applying these settings, Amazon Inspector starts tracking information about images running on containers and incorporating it into vulnerability findings, helping me focus on images actively running in containers in my environment.

After it’s configured, I can view information about images running on containers in the Details menu, where I can see the last in-use and pull dates, along with the EKS pod or ECS task count.

When selecting the number of Deployed ECS Tasks/EKS Pods, I can see the cluster ARN, last use dates, and Type for each image.

To demonstrate cross-account visibility, I have a repository with EKS pods deployed in two accounts. In the Resources coverage menu, I navigate to Container repositories, select my repository name, and choose the Image tag. As before, I can see the number of deployed EKS pods/ECS tasks.

When I select the number of deployed EKS pods/ECS tasks, I can see that it is running in a different account.

In the Findings menu, I can review any vulnerabilities, and by selecting one, I can find the Last in use date and Deployed ECS Tasks/EKS Pods involved in the vulnerability under Resource affected data, helping me prioritize remediation based on actual usage.

In the All Findings menu, you can now search for vulnerabilities across the accounts you manage, using filters such as Account ID, Image in use count, and Image last in use at.
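
These console filters correspond to filter criteria in the Amazon Inspector findings API. Here’s a minimal AWS CLI sketch of a similar query; the account ID is a placeholder, and the ecrImageInUseCount and ecrImageLastInUseAt filter keys are my assumptions based on the console filter names, so check aws inspector2 list-findings help for the exact names.

aws inspector2 list-findings \
    --filter-criteria '{
        "awsAccountId": [{"comparison": "EQUALS", "value": "111122223333"}],
        "ecrImageInUseCount": [{"lowerInclusive": 1}],
        "ecrImageLastInUseAt": [{"startInclusive": "2025-05-01T00:00:00Z"}]
    }' \
    --max-results 50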

Key features and considerations
Monitoring based on container image lifecycle – Amazon Inspector now determines image activity based on the image push date (14, 30, 60, 90, or 180 days, or lifetime), the image pull date (14, 30, 60, 90, or 180 days), the stopped duration (from never to 14, 30, 60, 90, or 180 days), and whether the image is running on a container. This flexibility lets organizations tailor their monitoring strategy based on actual container image usage rather than only repository events. For Amazon EKS and Amazon ECS workloads, the last in-use, push, and pull durations are set to 14 days, which is now the default for new customers.

Image runtime-aware finding details – To help prioritize remediation efforts, each finding in Amazon Inspector now includes the lastInUseAt date and InUseCount, indicating when an image was last running on containers and the number of deployed EKS pods or ECS tasks currently using it. Amazon Inspector monitors both Amazon ECR last pull date data and data about images running on Amazon ECS tasks or Amazon EKS pods for all accounts, updating this information at least once daily. Amazon Inspector integrates these details into all findings reports and works seamlessly with Amazon EventBridge (see the sketch after this list). You can filter findings based on the lastInUseAt field using rolling window or fixed range options, and you can filter images based on their last running date within the last 14, 30, 60, or 90 days.

Comprehensive security coverage – Amazon Inspector now provides unified vulnerability assessments for both traditional Linux distributions and minimal base images including scratch, distroless, and Chainguard images through a single service. This extended coverage eliminates the need for multiple scanning solutions while maintaining robust security practices across your entire container ecosystem, from traditional distributions to highly optimized container environments. The service streamlines security operations by providing comprehensive vulnerability management through a centralized platform, enabling efficient assessment of all container types.

Enhanced cross-account visibility – Security management across single accounts, cross-account setups, and AWS Organizations is now supported through delegated administrator capabilities. Amazon Inspector shares information about images running on containers within the same organization, which is particularly valuable for accounts maintaining golden image repositories. Amazon Inspector provides the ARNs of the Amazon EKS and Amazon ECS clusters where images are running for resources that belong to accounts in the organization, providing comprehensive visibility across multiple AWS accounts. The system updates deployed EKS pod or ECS task information at least once daily and automatically maintains accuracy as accounts join or leave the organization.

Availability and pricing – The new container mapping capabilities are available now, at no additional cost, in all AWS Regions where Amazon Inspector is offered. To get started, visit the Amazon Inspector documentation. For pricing details and Regional availability, refer to the Amazon Inspector pricing page.
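
As mentioned above, these runtime-aware findings integrate with Amazon EventBridge, so you can route them into your own workflows. Here’s a minimal sketch of a rule that matches Amazon Inspector finding events; the severity filter is just an example, the SQS queue ARN is a placeholder, and the source and detail-type values reflect the Amazon Inspector EventBridge integration as I understand it, so verify them against your event bus.

# Create a rule that matches Amazon Inspector finding events
aws events put-rule \
    --name inspector-runtime-findings \
    --event-pattern '{
        "source": ["aws.inspector2"],
        "detail-type": ["Inspector2 Finding"],
        "detail": {"severity": ["CRITICAL", "HIGH"]}
    }'

# Route matching events to an SQS queue (placeholder ARN)
aws events put-targets \
    --rule inspector-runtime-findings \
    --targets 'Id=findings-queue,Arn=arn:aws:sqs:us-east-1:111122223333:inspector-findings'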

PS: Writing a blog post at AWS is always a team effort, even when you see only one name under the post title. In this case, I want to thank Nirali Desai for her generous help, technical guidance, and expertise, which made this overview possible and comprehensive.

— Eli


New Amazon EC2 P6-B200 instances powered by NVIDIA Blackwell GPUs to accelerate AI innovations

Today, we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B200 instances powered by NVIDIA B200 GPUs to address customer needs for high performance and scalability in artificial intelligence (AI), machine learning (ML), and high performance computing (HPC) applications.

Amazon EC2 P6-B200 instances accelerate a broad range of GPU-enabled workloads but are especially well-suited for large-scale distributed AI training and inferencing for foundation models (FMs) with reinforcement learning (RL) and distillation, multimodal training and inference, and HPC applications such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling.

When combined with Elastic Fabric Adapter (EFAv4) networking, hyperscale clustering with EC2 UltraClusters, and the advanced virtualization and security capabilities of the AWS Nitro System, you can train and serve FMs with increased speed, scale, and security. These instances also deliver up to two times the performance for AI training (time to train) and inference (tokens/sec) compared to EC2 P5en instances.

You can accelerate time-to-market for training FMs and deliver faster inference throughput, which lowers inference cost and helps increase adoption of generative AI applications, and you can also gain increased processing performance for HPC applications.

EC2 P6-B200 instances specifications
New EC2 P6-B200 instances provide eight NVIDIA B200 GPUs with 1440 GB of high bandwidth GPU memory, 5th Generation Intel Xeon Scalable processors (Emerald Rapids), 2 TiB of system memory, and 30 TB of local NVMe storage.

Here are the specs for EC2 P6-B200 instances:

  • Instance size: p6-b200.48xlarge
  • GPUs: 8 x NVIDIA B200
  • GPU memory: 1440 GB HBM3e
  • vCPUs: 192
  • GPU peer-to-peer bandwidth: 1800 GB/s
  • Instance storage: 8 x 3.84 TB NVMe SSD
  • Network bandwidth: 8 x 400 Gbps
  • EBS bandwidth: 100 Gbps

These instances feature up to 125 percent improvement in GPU TFLOPs, 27 percent increase in GPU memory size, and 60 percent increase in GPU memory bandwidth compared to P5en instances.

P6-B200 instances in action
You can use P6-B200 instances in the US West (Oregon) AWS Region through EC2 Capacity Blocks for ML. To reserve your EC2 Capacity Blocks, choose Capacity Reservations on the Amazon EC2 console.

Select Purchase Capacity Blocks for ML, choose your total capacity, and specify how long you need the EC2 Capacity Block for p6-b200.48xlarge instances. You can reserve EC2 Capacity Blocks for 1 to 14 days, 21 days, 28 days, or other multiples of 7 up to 182 days, and you can choose a start date up to 8 weeks in advance.
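
If you prefer the AWS CLI, you can find and purchase a Capacity Block programmatically. Here’s a minimal sketch, assuming a p6-b200.48xlarge offering is available in your Region for the duration you request; the offering ID in the second command is a placeholder you would copy from the output of the first.

# Find Capacity Block offerings for one p6-b200.48xlarge instance for 24 hours
aws ec2 describe-capacity-block-offerings \
    --instance-type p6-b200.48xlarge \
    --instance-count 1 \
    --capacity-duration-hours 24 \
    --region us-west-2

# Purchase the offering you selected (placeholder offering ID)
aws ec2 purchase-capacity-block \
    --capacity-block-offering-id cbo-0123456789abcdef0 \
    --instance-platform Linux/UNIX \
    --region us-west-2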

Your EC2 Capacity Block is then scheduled. The total price of an EC2 Capacity Block is charged up front, and the price doesn’t change after purchase. The payment is billed to your account within 12 hours after you purchase the EC2 Capacity Block. To learn more, visit Capacity Blocks for ML in the Amazon EC2 User Guide.

When launching P6-B200 instances, you can use AWS Deep Learning AMIs (DLAMI), which provide ML practitioners and researchers with the infrastructure and tools to quickly build scalable, secure, distributed ML applications in preconfigured environments.

To launch instances, you can use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.
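
For example, here’s a minimal AWS CLI sketch that launches a P6-B200 instance into a Capacity Block reservation; the AMI ID and Capacity Reservation ID are placeholders, and I’m assuming the capacity-block market type, which is how EC2 Capacity Blocks are launched today.

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type p6-b200.48xlarge \
    --instance-market-options MarketType=capacity-block \
    --capacity-reservation-specification "CapacityReservationTarget={CapacityReservationId=cr-0123456789abcdef0}" \
    --region us-west-2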

You can integrate EC2 P6-B200 instances seamlessly with various AWS managed services such as Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Simple Storage Service (Amazon S3), and Amazon FSx for Lustre. Support for Amazon SageMaker HyperPod is also coming soon.

Now available
Amazon EC2 P6-B200 instances are available today in the US West (Oregon) Region and can be purchased as EC2 Capacity Blocks for ML.

Give Amazon EC2 P6-B200 instances a try in the Amazon EC2 console. To learn more, refer to the Amazon EC2 P6 instance page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy


Accelerate CI/CD pipelines with the new AWS CodeBuild Docker Server capability

Starting today, you can use the AWS CodeBuild Docker Server capability to provision a dedicated, persistent Docker server directly within your CodeBuild project. With the Docker Server capability, you can accelerate your Docker image builds by centralizing image building on a remote host, which reduces wait times and increases overall efficiency.

In my benchmark with the Docker Server capability, I reduced the total build time by 98 percent, from 24 minutes and 54 seconds to 16 seconds. Here’s a quick look at this feature in my AWS CodeBuild projects.

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. Building Docker images is one of the most common use cases for CodeBuild customers, and the service has progressively improved this experience over time by releasing features such as Docker layer caching and reserved capacity to improve Docker build performance.

With the new Docker Server capability, you can reduce build time for your applications by providing a persistent Docker server with consistent caching. When enabled in a CodeBuild project, a dedicated Docker server is provisioned with persistent storage that maintains your Docker layer cache. This server can handle multiple concurrent Docker build operations, with all builds benefiting from the same centralized cache.

Using AWS CodeBuild Docker Server
Let me walk you through a demonstration that showcases the benefits with the new Docker Server capability.

For this demonstration, I’m building a complex, multi-layered Docker image based on the official AWS CodeBuild curated Docker images repository, specifically the Dockerfile for building a standard Ubuntu image. This image contains numerous dependencies and tools required for modern continuous integration and continuous delivery (CI/CD) pipelines, making it a good example of the type of large Docker builds that development teams regularly perform.


# Copyright 2020-2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#    http://aws.amazon.com/asl/
#
# or in the "license" file accompanying this file.
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied.
# See the License for the specific language governing permissions and limitations under the License.
FROM public.ecr.aws/ubuntu/ubuntu:20.04 AS core

ARG DEBIAN_FRONTEND="noninteractive"

# Install git, SSH, Git, Firefox, GeckoDriver, Chrome, ChromeDriver,  stunnel, AWS Tools, configure SSM, AWS CLI v2, env tools for runtimes: Dotnet, NodeJS, Ruby, Python, PHP, Java, Go, .NET, Powershell Core,  Docker, Composer, and other utilities
COMMAND REDACTED FOR BREVITY
# Activate runtime versions specific to image version.
RUN n $NODE_14_VERSION
RUN pyenv  global $PYTHON_39_VERSION
RUN phpenv global $PHP_80_VERSION
RUN rbenv  global $RUBY_27_VERSION
RUN goenv global  $GOLANG_15_VERSION

# Configure SSH
COPY ssh_config /root/.ssh/config
COPY runtimes.yml /codebuild/image/config/runtimes.yml
COPY dockerd-entrypoint.sh /usr/local/bin/dockerd-entrypoint.sh
COPY legal/bill_of_material.txt /usr/share/doc/bill_of_material.txt
COPY amazon-ssm-agent.json /etc/amazon/ssm/amazon-ssm-agent.json

ENTRYPOINT ["/usr/local/bin/dockerd-entrypoint.sh"]

This Dockerfile creates a comprehensive build environment with multiple programming languages, build tools, and dependencies – exactly the type of image that would benefit from persistent caching.

In the build specification (buildspec), I use the docker buildx build . command:

version: 0.2
phases:
  build:
    commands:
      - cd ubuntu/standard/5.0
      - docker buildx build -t codebuild-ubuntu:latest .
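
In a real pipeline, you would typically push the image you just built. Here’s a minimal sketch of extra commands you could add to the build or post_build phase, assuming an Amazon ECR repository named codebuild-ubuntu already exists and the build role is allowed to push to it; the account ID and Region are placeholders.

# Authenticate the Docker client to Amazon ECR (placeholder account ID and Region)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the freshly built image
docker tag codebuild-ubuntu:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/codebuild-ubuntu:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/codebuild-ubuntu:latest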

To enable the Docker Server capability, I navigate to the AWS CodeBuild console and select Create project. I can also enable this capability when editing existing CodeBuild projects.

I fill in all details and configuration. In the Environment section, I select Additional configuration.

Then, I scroll down and find Docker server configuration and select Enable docker server for this project. When I select this option, I can choose a compute type configuration for the Docker server. When I’m finished with the configurations, I create this project.

Now, let’s see the Docker Server capability in action.

The initial build takes approximately 24 minutes and 54 seconds to complete because it needs to download and compile all dependencies from scratch. This is expected for the first build of such a complex image.

For subsequent builds with no code changes, the build takes only 16 seconds, a 98 percent reduction in build time.

Looking at the logs, I can see that with Docker Server, most layers are pulled from the persistent cache:

The persistent caching provided by the Docker Server maintains all layers between builds, which is particularly valuable for large, complex Docker images with many layers. This demonstrates how Docker Server can dramatically improve throughput for teams running numerous Docker builds in their CI/CD pipelines.

Additional things to know
Here are a couple of things to note:

  • Architecture support – The feature is available for both x86 (Linux) and ARM builds.
  • Pricing – To learn more about pricing for Docker Server capability, refer to the AWS CodeBuild pricing page.
  • Availability – This feature is available in all AWS Regions where AWS CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page.

You can learn more about the Docker Server feature in the AWS CodeBuild documentation.

Happy building!

— Donnie Prakoso


Accelerate the modernization of Mainframe and VMware workloads with AWS Transform

Generative AI has brought many new possibilities to organizations. It has equipped them with new abilities to retire technical debt, modernize legacy systems, and build agile infrastructure to help unlock the value that is trapped in their internal data. However, many enterprises still rely heavily on legacy IT infrastructure, particularly mainframes and VMware-based systems. These platforms have been the backbone of critical operations for decades, but they hinder organizations’ ability to innovate, scale effectively, and reduce technical debt in an era where cloud-first strategies dominate. The need to modernize these workloads is clear, but the journey has traditionally been complex and risky.

The complexity spans multiple dimensions. Financially, organizations face mounting licensing costs and expensive migration projects. Technically, they must untangle legacy dependencies while meeting compliance requirements. Organizationally, they must manage the transition of teams who’ve built careers around legacy systems and navigate undocumented institutional knowledge.

AWS Transform directly addresses these challenges with purpose-built agentic AI that accelerates and de-risks your legacy modernization. It automates the assessment, planning, and transformation of both mainframe and VMware workloads into cloud-based architectures, streamlining the entire process. Through intelligent insights, automated code transformation, and human-in-the-loop workflows, organizations can now tackle even the most challenging modernization projects with greater confidence and efficiency.

Mainframe workload migration
AWS Transform for mainframe is the first agentic AI service for modernizing mainframe workloads at scale. The specialized mainframe agent accelerates mainframe modernization by automating complex, resource-intensive tasks across every phase of modernization — from initial assessment to final deployment. It streamlines the migration of legacy applications built on IBM z/OS, including COBOL, CICS, DB2, and VSAM, to modern cloud environments, cutting modernization timelines from years to months.

Let’s look at a few examples of how AWS Transform can help you through different aspects of the migration process.

Code analysis – AWS Transform provides comprehensive insights into your codebase, automatically examining mainframe code, creating detailed dependency graphs, measuring code complexity, and identifying component relationships.

Documentation – AWS Transform for mainframe creates comprehensive technical and functional documentation of mainframe applications, preserving critical knowledge about features, program logic, and data flows. You can interact with the generated documentation through an AI-powered chat interface to discover and retrieve information quickly.

Business rule extraction – AWS Transform extracts and presents complex logic in plain language so you can gain visibility into business processes embedded within legacy applications. This enables both business and technical stakeholders to gain a greater understanding of application functionality.

Code decomposition – AWS Transform offers sophisticated code decomposition tools, including interactive dependency graphs and domain separation capabilities, enabling users to visualize and modify relationships between components while identifying key business functions. The solution also streamlines migration planning through an interactive wave sequence planner that considers user preferences to generate optimized migration strategies.

Modernization wave planning – With its specialized agent, AWS Transform for mainframe creates prioritized modernization wave sequences based on code and data dependencies, code volume, and business priorities. It enables modernization teams to make data-driven, customized migration plans that align with their specific organizational needs.

Code refactoring – AWS Transform can refactor millions of lines of mainframe code in minutes, converting COBOL, VSAM, and DB2 systems into modern Java Spring Boot applications while maintaining functional equivalence and transforming CICS transactions into web services and JCL batch processes into Groovy scripts. The solution provides high-quality output through configurable settings and bundled runtime capabilities, producing Java code that emphasizes readability, maintainability, and technical excellence.

Deployments – AWS Transform provides customizable deployment templates that streamline the deployment process through user-defined inputs. For added efficiency, the solution bundles the selected runtime version with the migrated application, enabling seamless deployment as a complete package.

By integrating intelligent documentation analysis, business rules extraction, and human-in-the-loop collaboration capabilities, AWS Transform helps organizations accelerate their mainframe transformation while reducing risk and maintaining business continuity.

VMware modernization
With rapid changes to the VMware licensing and support model, organizations are increasingly exploring alternatives despite the difficulties associated with migrating and modernizing VMware workloads. This is aggravated by the fact that accumulated technical debt typically creates complex, poorly documented environments managed by multiple teams, leading to vendor lock-in and collaboration challenges that further hinder migration efforts.

AWS Transform is the first agentic AI service of its kind for VMware modernization, and it helps you overcome those difficulties. It can reduce risk and accelerate the modernization of VMware workloads by automating application discovery, dependency mapping, migration planning, network conversion, and EC2 instance optimization, reducing manual effort and accelerating cloud adoption.

The process is organized into four phases: inventory discovery, wave planning, network conversion, and server migration. It uses agentic AI capabilities to analyze and map complex VMware environments, converting network configurations into native AWS constructs and helping you orchestrate dependency-aware migration waves for seamless cutovers. It also provides a collaborative web interface that keeps AWS teams, partners, and customers aligned throughout the modernization journey.

Let’s take a quick tour to see how this works.

Setting up
Before you can start using the service, you must first enable it by navigating to the AWS Transform console. AWS Transform requires AWS IAM Identity Center (IdC) to manage users and set up appropriate permissions. If you don’t yet have IdC set up, you’ll be asked to configure it first and then return to the AWS Transform console to continue the process.

With IdC available, you can then proceed to choosing the encryption settings. AWS Transform gives you the option to use a default AWS managed key or you can use your own custom keys through AWS Key Management Service (AWS KMS).

After completing this step, AWS Transform will be enabled. You can manage admin access to the console by navigating to Users and using the search box to find them. You must create users or groups in IdC first if they don’t already exist. The service console helps admins provision users who will get access to the web app. Each provisioned user receives an email with a link to set a password and a personalized URL for the web app.

You interact with AWS Transform through a dedicated web experience. To get the URL, navigate to Settings, where you can check your configurations and copy the links to the AWS Transform web experience where you and your teams can start using the service.

Discovery
AWS Transform can discover your VMware environment either automatically through AWS Application Discovery Service collectors or you can provide your own data by importing existing RVTools export files.

To get started, choose the Create or select connectors task and provide the account IDs for one or more AWS accounts that will be used for discovery. This will generate links that you can follow to authorize each account for usage within AWS Transform. You can then move on to the Perform discovery task, where you can choose to install AWS Application Discovery Service collectors or upload your own files such as exports from RVTools.

Provisioning
The steps for the provisioning phase are similar to the ones described earlier for discovery. You connect target AWS accounts by entering their account IDs and validating the authorization requests, which then enables the next steps, such as the Generate VPC configuration step. Here, you can import your RVTools files or NSX exports, if applicable, so that AWS Transform can understand your networking requirements.

You should then continue working through the job plan until you reach the point where it’s ready to deploy your Amazon Virtual Private Cloud (Amazon VPC). All the infrastructure as code (IaC) is stored in Amazon Simple Storage Service (Amazon S3) buckets in the target AWS account.

Review the proposed changes and, if you’re happy, start the deployment process of the AWS resources to the target accounts.

Deployment
AWS Transform requires you to set up AWS Application Migration Service (MGN) in the target AWS accounts to automate the migration process. Choose the Initiate VM migration task and use the link to navigate to the service console, then follow the instructions to configure it.

After setting up service permissions, you’ll proceed to the implementation phase of the waves created by AWS Transform and start the migration process. For each wave, you’ll first be asked to make various choices such as setting the sizing preference and tenancy for the Amazon Elastic Compute Cloud (Amazon EC2) instances. Confirm your selections and continue following the instructions given by AWS Transform until you reach the Deploy replication agents stage, where you can start the migration for that wave.

After you start the wave migration process, you can switch to the dashboard at any time to check on progress.

With its agentic AI capabilities, AWS Transform offers a powerful solution for accelerating and de-risking mainframe and VMware modernization. By automating complex assessment and transformation processes, AWS Transform reduces the time associated with legacy system migration while minimizing the potential for errors and business disruption, enabling more agile, efficient, and future-ready IT environments within your organization.

Things to know
Availability –  AWS Transform for mainframe is available in US East (N. Virginia) and Europe (Frankfurt) Regions. AWS Transform for VMware offers different availability options for data collection and migrations. Please refer to the AWS Transform for VMware FAQ for more details.

Pricing –  Currently, we offer our core features—including assessment and transformation—at no cost to AWS customers.

Here are a few links for further reading.

Dive deeper into mainframe modernization and learn more about AWS Transform for mainframe.

Explore more about VMware modernization and how to get started with your VMware migration journey.

Check out this interactive demo of AWS Transform for mainframe and this interactive demo of AWS Transform for VMware.

Matheus Guimaraes | @codingmatheus


AWS Transform for .NET, the first agentic AI service for modernizing .NET applications at scale

I started my career as a .NET developer and have seen .NET evolve over the last couple of decades. Like many of you, I also developed multiple enterprise applications in .NET Framework that ran only on Windows. I fondly remember building my first enterprise application with .NET Framework. Although it served us well, the technology landscape has significantly shifted. Now that there is an open source and cross-platform version of .NET that can run on Linux, these legacy enterprise applications built on .NET Framework need to be ported and modernized.

The benefits of porting to Linux are compelling: applications cost 40 percent less to operate because they save on Windows licensing costs, run 1.5–2 times faster with improved performance, and handle growing workloads with 50 percent better scalability. Having helped port several applications, I can say the effort is worth the rewards.

However, porting .NET Framework applications to cross-platform .NET is a labor-intensive and error-prone process. You have to perform multiple steps, such as analyzing the codebase, detecting incompatibilities, implementing fixes while porting the code, and then validating the changes. For enterprises, the challenge becomes even more complex because they might have hundreds of .NET Framework applications in their portfolio.

At re:Invent 2024, we previewed this capability as Amazon Q Developer transformation capabilities for .NET to help port your .NET applications at scale. The experience is available as a unified web experience for at-scale transformation and within your integrated development environment (IDE) for individual project and solution porting.

Now that we’ve incorporated your valuable feedback and suggestions, we’re excited to announce today the general availability of AWS Transform for .NET. We’ve also added new capabilities to support projects with private NuGet packages, port model-view-controller (MVC) Razor views to ASP.NET Core Razor views, and execute the ported unit tests.

I’ll expand on the key new capabilities in a moment, but let’s first take a quick look at the two porting experiences of AWS Transform for .NET.

Large-scale porting experience for .NET applications
Enterprise digital transformation is typically driven by central teams responsible for modernizing hundreds of applications across multiple business units. Different teams have ownership of different applications and their respective repositories. Success requires close coordination between these teams and the application owners and developers across business units. To accelerate this modernization at scale, AWS Transform for .NET provides a web experience that enables teams to connect directly to source code repositories and efficiently transform multiple applications across the organization. For select applications requiring dedicated developer attention, the same agent capabilities are available to developers as an extension for Visual Studio IDE.

Let’s start by looking at how the web experience of AWS Transform for .NET helps port hundreds of .NET applications at scale.

Web experience of AWS Transform for .NET
To get started with the web experience of AWS Transform, I onboard using the steps outlined in the documentation, sign in using my credentials, and create a job for .NET modernization.

Create a new job for .NET Transformation

AWS Transform for .NET creates a job plan, which is a sequence of steps that the agent will execute to assess, discover, analyze, and transform applications at scale. It then waits for me to set up a connector to connect to my source code repositories.

Setup connector to connect to source code repository

After the connector is in place, AWS Transform begins discovering repositories in my account. It conducts an assessment focused on three key areas: repository dependencies, required private packages and third-party libraries, and supported project types within the repositories.

Based on this assessment, it generates a recommended transformation plan. The plan orders repositories according to their last modification dates, dependency relationships, private package requirements, and the presence of supported project types.

AWS Transform for .NET then prepares for the transformation process by requesting specific inputs, such as the target branch destination, target .NET version, and the repositories to be transformed.

To select the repositories to transform, I have two options: use the recommended plan or customize the transformation plan by selecting repositories manually. For selecting repositories manually, I can use the UI or download the repository mapping and upload the customized list.

select the repositories to transform

AWS Transform for .NET automatically ports the application code, builds the ported code, executes unit tests, and commits the ported code to a new branch in my repository. It provides a comprehensive transformation summary, including modified files, test outcomes, and suggested fixes for any remaining work.

While the web experience helps accelerate large-scale porting, some applications may require developer attention. For these cases, the same agent capabilities are available in the Visual Studio IDE.

Visual Studio IDE experience of AWS Transform for .NET
Now, let’s explore how AWS Transform for .NET works within Visual Studio.

To get started, I install the latest version of AWS Toolkit extension for Visual Studio and set up the prerequisites.

I open a .NET Framework solution, and in the Solution Explorer, I see the context menu item Port project with AWS Transform for an individual project.

Context menu for Port project with AWS Transform in Visual Studio

I provide the required inputs, such as the target .NET version and the approval for the agents to autonomously transform code, execute unit tests, generate a transformation summary, and validate Linux-readiness.

Transformation summary after the project is transformed in Visual Studio

I can review the code changes made by the agents locally and continue updating my codebase.

Let’s now explore some of the key new capabilities added to AWS Transform for .NET.

Support for projects with private NuGet package dependencies 
During preview, only projects with public NuGet package dependencies were supported. With general availability, we now support projects with private NuGet package dependencies. This has been one of the most requested features during the preview.

The feature I really love is that AWS Transform can detect cross-repository dependencies. If it finds the source code of my private NuGet package, it automatically transforms that as well. However, if it can’t locate the source code, in the web experience, it provides me the flexibility to upload the required NuGet packages.

AWS Transform displays the missing package dependencies that need to be resolved. There are two ways to do this: I can either use the provided PowerShell script to create and upload packages, or I can build the application locally and upload the NuGet packages from the packages folder in the solution directory.

Upload packages to resolve missing dependencies

After I upload the missing NuGet packages, AWS Transform is able to resolve the dependencies. It’s best to provide both the .NET Framework and cross-platform .NET versions of the NuGet packages. If the cross-platform .NET version is not available, then at a minimum the .NET Framework version is required for AWS Transform to add it as an assembly reference and proceed with the transformation.

Unit test execution
During preview, we supported porting unit tests from .NET Framework to cross-platform .NET. With general availability, we’ve also added support for executing unit tests after the transformation is complete.

After the transformation is complete and the unit tests are executed, I can see the results in the dashboard and view the status of the tests at each individual test project level.

Dashboard after successful transformation in the web experience, showing executed unit tests

Transformation visibility and summary
After the transformation is complete, I can download a detailed report in JSON format that gives me a list of transformed repositories, details about each repository, and the status of the transformation actions performed for each project within a repository. I can view the natural language transformation summary at the project level to understand AWS Transform output with project-level granularity. The summary provides me with an overview of updates along with key technical changes to the codebase.

Detailed report of transformed projects, highlighting the transformation summary of one of the projects

Other new features
Let’s have a quick look at other new features we’ve added with general availability:

  • Support for porting UI layer – During preview, you could only port the business logic layers of MVC applications using AWS Transform, and you had to port the UI layer manually. With general availability, you can now use AWS Transform to port MVC Razor views to ASP.NET Core Razor views.
  • Expanded connector support – During preview, you could connect only to GitHub repositories. Now with general availability, you can connect to GitHub, GitLab, and Bitbucket repositories.
  • Cross repository dependency – When you select a repository for transformation, dependent repositories are automatically selected for transformation.
  • Download assessment report – You can download a detailed assessment report of the identified repositories in your account and private NuGet packages referenced in these repositories.
  • Email notifications with deep links – You’ll receive email notifications when a job’s status changes to completed or stopped. These notifications include deep links to the transformed code branches for review and continued transformation in your IDE.

Things to know
Some additional things to know are:

  • Regions – AWS Transform for .NET is generally available today in the Europe (Frankfurt) and US East (N. Virginia) Regions.
  • Pricing – Currently, there is no additional charge for AWS Transform. Any resources you create or continue to use in your AWS account using the output of AWS Transform will be billed according to their standard pricing. For limits and quotas, refer to the documentation.
  • .NET versions supported – AWS Transform for .NET supports transforming applications written using .NET Framework versions 3.5+, .NET Core 3.1, and .NET 5+ to cross-platform .NET 8.
  • Application types supported – AWS Transform for .NET supports porting C# code projects of the following types: console application, class library, unit tests, WebAPI, Windows Communication Foundation (WCF) service, MVC, and single-page application (SPA).
  • Getting started – To get started, visit AWS Transform for .NET User Guide.
  • Webinar – Join the webinar Accelerate .NET Modernization with Agentic AI to experience AWS Transform for .NET through a live demonstration.

– Prasad


AWS Weekly Roundup: South America expansion, Q Developer in OpenSearch, and more (May 12, 2025)

I’ve always been fascinated by how quickly we’re able to stand up new Regions and Availability Zones at AWS. Today there are 36 launched Regions and 114 launched Availability Zones. That’s amazing!

This past week at AWS was marked by significant expansion to our global infrastructure. The announcement of a new Region in the works for South America means customers will have more options for meeting their low latency and data residency requirements. Alongside the expansion, AWS announced the availability of numerous instance types in additional Regions.

In addition to the infrastructure expansion, AWS is also expanding the reach of Amazon Q Developer into Amazon OpenSearch Service.

Last week’s launches

Instance announcements

AWS expanded instance availability for an array of instance types across additional Regions.

Additional updates

Upcoming events

We are in the middle of AWS Summit season! AWS Summits run throughout the summer in cities all around the world. Be sure to check the calendar to find out when an AWS Summit is happening near you. Here are the remaining Summits for May 2025.


In the works – AWS South America (Chile) Region

Today, Amazon Web Services (AWS) announced plans to launch a new AWS Region in Chile by the end of 2026. The AWS South America (Chile) Region will consist of three Availability Zones at launch, bringing AWS infrastructure and services closer to customers in Chile. This new Region joins the AWS South America (São Paulo) and AWS Mexico (Central) Regions as our third AWS Region in Latin America. Each Availability Zone is separated by a meaningful distance to support applications that need low latency while significantly reducing the risk of a single event impacting availability.

Skyline of Santiago de Chile with modern office buildings in the financial district in Las Condes

The new AWS Region will bring advanced cloud technologies, including artificial intelligence (AI) and machine learning (ML), closer to customers in Latin America. Through high-bandwidth, low-latency network connections over dedicated, fully redundant fiber, the Region will support applications requiring synchronous replication while giving you the flexibility to run workloads and store data locally to meet data residency requirements.

AWS in Chile
In 2017, AWS established an office in Santiago de Chile to support local customers and partners. Today, there are business development teams, solutions architects, partner managers, professional services consultants, support staff, and personnel in various other roles working in the Santiago office.

As part of our ongoing commitment to Chile, AWS has invested in several infrastructure offerings throughout the country. In 2019, AWS launched an Amazon CloudFront edge location in Chile. This provides a highly secure and programmable content delivery network that accelerates the delivery of data, videos, applications, and APIs to users worldwide with low latency and high transfer speeds.

AWS strengthened its presence in 2021 with two significant additions. First, an AWS Ground Station antenna location in Punta Arenas, offering a fully managed service for satellite communications, data processing, and global satellite operations scaling. Second, AWS Outposts in Chile, bringing fully managed AWS infrastructure and services to virtually any on-premises or edge location for a consistent hybrid experience.

In 2023, AWS further enhanced its infrastructure with two key developments, an AWS Direct Connect location in Chile that lets you create private connectivity between AWS and your data center, office, or colocation environment, and AWS Local Zones in Santiago, placing compute, storage, database, and other select services closer to large population centers and IT hubs. The AWS Local Zone in Santiago helps customers deliver applications requiring single-digit millisecond latency to end users.

The upcoming AWS South America (Chile) Region represents our continued commitment to fueling innovation in Chile. Beyond building infrastructure, AWS plays a crucial role in developing Chile’s digital workforce through comprehensive cloud education initiatives. Through AWS Academy, AWS Educate, and AWS Skill Builder, AWS provides essential cloud computing skills to diverse groups—from students and developers to business professionals and emerging IT leaders. Since 2017, AWS has trained more than two million people across Latin America on cloud skills, including more than 100,000 in Chile.

AWS customers in Chile
AWS customers in Chile have been increasingly moving their applications to AWS and running their technology infrastructure in AWS Regions around the world. With the addition of this new AWS Region, customers will be able to provide even lower latency to end users and use advanced technologies such as generative AI, Internet of Things (IoT), mobile services, and more to drive innovation across industries such as banking. This Region will give AWS customers the ability to run their workloads and store their content in Chile.

Here are some examples of customers in Chile using AWS to drive innovation:

The Digital Government Secretariat (SGD) is the Chilean government institution responsible for proposing and coordinating the implementation of the Digital Government Strategy, providing an integrated government approach. SGD coordinates, advises, and provides cross-sector support in the strategic use of digital technologies, data, and public information to improve state administration and service delivery. To fulfill this mission, SGD relies on AWS to operate critical digital platforms including Clave Única (single sign-on), FirmaGob (digital signature), the State Electronic Services Integration Platform (PISEE), DocDigital, SIMPLE, and the Administrative Procedures and Services Catalog (CPAT), among others.

Transbank, Chile’s largest payment solutions ecosystem managing the largest percentage of national transactions, used AWS to significantly reduce time-to-market for new products. Moreover, Transbank implemented multiple AWS-powered solutions, enhancing team productivity and accelerating innovation. These initiatives showcase how financial technology companies can use AWS to drive innovation and operational efficiency. “The new AWS Region in Chile will be very important for us,” said Jorge Rodríguez M., Chief Architecture and Technology Officer (CA&TO) of Transbank. “It will further reduce latency, improve security and expand the possibilities for innovation, allowing us to serve our customers with new and better services and products.”

To learn more about AWS customers in Chile, visit AWS Customer Success Stories.

AWS sustainability efforts in Chile
AWS is committed to water stewardship in Chile through innovative conservation projects. In the Maipo Basin, which provides essential water for the Metropolitan Santiago and Valparaiso regions, AWS has partnered with local farmers and climate-tech company Kilimo to implement water-saving initiatives. The project involves converting 67 hectares of agricultural land from flood to drip irrigation, which will save approximately 200 million liters of water annually.

This water conservation effort supports the AWS commitment to be water positive by 2030 and demonstrates our dedication to environmental sustainability in the communities where AWS operates. The project uses efficient drip irrigation systems that deliver water directly to plant root systems through a specialized pipe network, maximizing water efficiency for agricultural use. To learn more about this initiative, read our blog post AWS expands its water replenishment program to China and Chile—and adds projects in the US and Brazil.

AWS community in Chile
The AWS community in Chile is one of the most active in the region, comprising AWS Community Builders, two AWS User Groups (AWS User Group Chile and AWS Girls Chile), and an AWS Cloud Club. These groups hold monthly events and have organized two AWS Community Days. At the first Community Day, held in 2023, we had the honor of having Jeff Barr as the keynote speaker.

Chile AWS Community Day 2023

Stay tuned
We’ll announce the opening of this and the other Regions in future blog posts, so be sure to stay tuned! To learn more, visit the AWS Region in Chile page.

Eli

Thanks to Leonardo Vilacha for the Chile AWS Community Day 2023 photo.


Accelerate the transfer of data from an Amazon EBS snapshot to a new EBS volume

Today we are announcing the general availability of Amazon Elastic Block Store (Amazon EBS) Provisioned Rate for Volume Initialization, a feature that accelerates the transfer of data from an EBS snapshot (a highly durable backup of a volume, stored in Amazon Simple Storage Service (Amazon S3)) to a new EBS volume.

With Amazon EBS Provisioned Rate for Volume Initialization, you can create fully performant EBS volumes within a predictable amount of time. You can use this feature to speed up the initialization of hundreds of concurrent volumes and instances, or when you need to recover from an existing EBS snapshot and need your EBS volume to be created and initialized as quickly as possible. You can also use it to quickly create copies of EBS volumes from EBS snapshots in a different Availability Zone, AWS Region, or AWS account. Provisioned Rate for Volume Initialization is charged for each volume based on the full snapshot size and the specified volume initialization rate.

This new feature expedites the volume initialization process by fetching the data from an EBS snapshot to an EBS volume at a consistent rate that you specify, between 100 MiB/s and 300 MiB/s. This volume initialization rate is the rate at which the snapshot blocks are downloaded from Amazon S3 to the volume.

By specifying the volume initialization rate, you can create a fully performant volume in a predictable time, increasing operational efficiency and visibility into the expected time of completion. If you run utilities such as fio or dd to expedite volume initialization for workflows like application recovery or volume copies for testing and development, this feature removes the operational burden of managing such scripts while bringing consistency and predictability to your workflows.

Get started with specifying the volume initialization rate
To get started, you can choose the volume initialization rate when you launch your EC2 instance or create your volume from a snapshot.

1. Create a volume in the EC2 launch wizard
When launching new EC2 instances in the launch wizard of the EC2 console, you can enter the desired Volume initialization rate in the Storage (volumes) section.

You can also set the volume initialization rate when creating or modifying EC2 launch templates, as in the sketch that follows.
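As a minimal sketch, you can include the rate in the launch template data passed to the create-launch-template command. The template name below is hypothetical, the snapshot ID is reused from the create-volume example later in this post, and you should confirm the VolumeInitializationRate field for launch template block device mappings in the CLI reference:

aws ec2 create-launch-template \
    --launch-template-name my-fast-restore-template \
    --launch-template-data '{
        "BlockDeviceMappings": [{
            "DeviceName": "/dev/sdh",
            "Ebs": {
                "SnapshotId": "snap-07f411eed12ef613a",
                "VolumeType": "gp3",
                "VolumeInitializationRate": 300
            }
        }]
    }'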

Similarly, in the AWS Command Line Interface (AWS CLI), you can add the VolumeInitializationRate parameter to the block device mappings when calling the run-instances command.

aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --subnet-id subnet-08fc749671b2d077c \
    --security-group-ids sg-0b0384b66d7d692f9 \
    --key-name MyKeyPair \
    --block-device-mappings file://mapping.json

Contents of mapping.json. This example adds /dev/sdh, a gp3 EBS volume created from a snapshot and hydrated at 300 MiB/s; note that the initialization rate applies only to volumes created from a snapshot, so the mapping references the same snapshot used elsewhere in this post.

[
    {
        "DeviceName": "/dev/sdh",
        "Ebs": {
            "SnapshotId": "snap-07f411eed12ef613a",
            "VolumeType": "gp3",
            "VolumeInitializationRate": 300
        }
    }
]

To learn more, visit block device mapping options, which define the EBS volumes and instance store volumes to attach to the instance at launch.

2. Create a volume from a snapshot
When you create a volume from a snapshot, you can choose Create volume in the EC2 console and specify the Volume initialization rate.

Confirm that your new volume was created with the specified initialization rate.

In the AWS CLI, you can use the VolumeInitializationRate parameter when calling the create-volume command.

aws ec2 create-volume --region us-east-1 --cli-input-json '{
    "AvailabilityZone": "us-east-1a",
    "VolumeType": "gp3",
    "SnapshotId": "snap-07f411eed12ef613a",
    "VolumeInitializationRate": 300
}'

If the command runs successfully, you will receive a result like the one below.

{
    "AvailabilityZone": "us-east-1a",
    "CreateTime": "2025-01-03T21:44:53.000Z",
    "Encrypted": false,
    "Size": 100,
    "SnapshotId": "snap-07f411eed12ef613a",
    "State": "creating",
    "VolumeId": "vol-0ba4ed2a280fab5f9",
    "Iops": 300,
    "Tags": [],
    "VolumeType": "gp2",
    "MultiAttachEnabled": false,
    "VolumeInitializationRate": 300
}
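To check the rate on a volume you have already created, you can query it with the describe-volumes command. This is a sketch that assumes the VolumeInitializationRate field is also returned by DescribeVolumes, as the create-volume response above suggests:

aws ec2 describe-volumes \
    --volume-ids vol-0ba4ed2a280fab5f9 \
    --query "Volumes[0].VolumeInitializationRate"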

You can also set the volume initialization rate when replacing root volumes of EC2 instances and provisioning EBS volumes using the EBS Container Storage Interface (CSI) driver.

After the volume is created, EBS keeps track of the hydration progress and publishes an Amazon EventBridge notification to your account when hydration completes, so you can be certain when your volume is fully performant.
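Amazon EBS publishes these events to EventBridge with the source aws.ec2. As a minimal sketch, the following rule matches EBS volume notifications; the rule name is hypothetical, and you should narrow the pattern to the specific hydration-completion event documented in the Amazon EBS User Guide:

aws events put-rule \
    --name ebs-hydration-complete-rule \
    --event-pattern '{
        "source": ["aws.ec2"],
        "detail-type": ["EBS Volume Notification"]
    }'

You can then route matching events to a target such as an Amazon SNS topic or AWS Lambda function with the put-targets command.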

To learn more, visit Create an Amazon EBS volume and Initialize Amazon EBS volumes in the Amazon EBS User Guide.

Now available
Amazon EBS Provisioned Rate for Volume Initialization is available today and supported for all EBS volume types. You will be charged based on the full snapshot size and the specified volume initialization rate. To learn more, visit the Amazon EBS Pricing page.

To learn more about Amazon EBS, including this feature, take the free digital course on the AWS Skill Builder portal. The course includes use cases, architecture diagrams, and demos.

Give this feature a try in the Amazon EC2 console today and send feedback to AWS re:Post for Amazon EBS or through your usual AWS Support contacts.

— Channy


Amazon Q Developer in GitHub (in preview) accelerates code generation

This post was originally published on this site

Starting today, you can use Amazon Q Developer in GitHub in preview! This is fantastic news for the millions of developers who use GitHub on a daily basis, whether at work or for personal projects. They can now use Amazon Q Developer for feature development, code reviews, and Java code migration directly within the GitHub interface.

To demonstrate, I'm going to use Amazon Q Developer to help me create an application from zero called StoryBook Teller. I want this to be an ASP.NET Core website using .NET 9 that takes three images from the user and uses Amazon Bedrock with Anthropic's Claude to generate a story based on them.

Let me show you how this works.

Installation

The first thing you need to do is install the Amazon Q Developer application in GitHub, and you can begin using it immediately without connecting to an AWS account.

You’ll then be presented with a choice to add it to all your repositories or select specific ones. In this case, I want to add it to my storybook-teller-demo repo, so I choose Only selected repositories and type in the name to find it.

This is all you need to do to make the Amazon Q Developer app ready to use inside your selected repos. You can verify that the app is installed by navigating to your GitHub account Settings, where the app should be listed on the Applications page.

You can choose Configure to view permissions and add Amazon Q Developer to repositories or remove it at any time.

Now let’s use Amazon Q Developer to help us build our application.

Feature development
When Amazon Q Developer is installed into a repository, you can assign GitHub issues to the Amazon Q development agent to develop features for you. It then generates code using the whole codebase in your repository as context, as well as the issue's description. This is why it's important to list your requirements as accurately and clearly as possible in your GitHub issues, as you should always strive to do anyway.

I have created five issues in my StoryBook Teller repository that cover all my requirements for this app, from creating a skeleton .NET 9 project to implementing frontend and backend.

Let’s use Amazon Q Developer to develop the application from scratch and help us implement all these features!

To begin with, I want Amazon Q Developer to help me create the .NET project. To do this, I open the first issue, and in the Labels section, I find and select Amazon Q development agent.

That's all there is to it! The issue is now assigned to Amazon Q Developer. After the label is added, the Amazon Q development agent automatically starts working behind the scenes, providing progress updates through the comments, starting with one saying, I'm working on it.

As you might expect, the amount of time it takes will depend on the complexity of the feature. When it’s done, it will automatically create a pull request with all the changes.

The next thing I want to do is make sure that the generated code works, so I’m going to download the code changes and run the app locally on my computer.

I go to my terminal and type git fetch origin pull/6/head:pr-6 to get the code for the pull request it created. I double-check the contents and I can see that I do indeed have an ASP.NET Core project generated using .NET 9, as I expected.

I then run dotnet run and open the app with the URL given in the output.
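Put together, a minimal sketch of that local check looks like the following, assuming the generated ASP.NET Core project sits at the root of the repository:

# Fetch the changes from pull request #6 into a local branch and switch to it
git fetch origin pull/6/head:pr-6
git checkout pr-6

# Build and run the generated project, then open the URL printed in the output
dotnet run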

Brilliant, it works! Amazon Q Developer took care of implementing this one exactly as I wanted based on the requirements I provided in the GitHub issue. Now that I have tested that the app works, I want to review the code itself before I accept the changes.

Code review
I go back to GitHub and open the pull request. I immediately notice that Amazon Q Developer has performed some automatic checks on the generated code.

This is great! It has already done quite a bit of the work for me. However, I want to review it before I merge the pull request. To do that, I navigate to the Files changed tab.

I review the code, and I like what I see! However, looking at the contents of .gitignore, I notice something that I want to change. I can see that Amazon Q Developer made good assumptions and added exclusion rules for Visual Studio (VS) Code files. However, JetBrains Rider is my favorite integrated development environment (IDE) for .NET development, so I want to add rules for it, too.

You can ask Amazon Q Developer to iterate and make changes by using the normal code review flow in the GitHub interface. In this case, I add a comment to the .gitignore code saying, add patterns to ignore Rider IDE files. I then choose Start a review, which will queue the change in the review.

I select Finish your review and Request changes.

Soon after I submit the review, I’m redirected to the Conversation tab. Amazon Q Developer starts working on it, resuming the same feedback loop and encouraging me to continue with the review process until I’m satisfied.

Every time Q Developer makes changes, it will run the automated checks on the generated code. In this case, the code was somewhat straightforward, so it was expected that the automatic code review wouldn’t raise any issues. But what happens if we have more complex code?

Let's take another example and use Amazon Q Developer to implement the feature for enabling image uploads on the website. I use the same flow I described in the previous section. However, I notice that the automated checks on the pull request flagged a warning this time, stating that the API generated to support image uploads on the backend is missing authorization checks, effectively allowing direct public access. It explains the security risk in detail and provides useful links.

It then automatically generates a suggested code fix.

When it's done, you can review the code and choose Commit changes if you're happy with them.

After fixing this and testing it, I'm happy with the code for this issue and move on, applying the same process to the other ones. I assign the Amazon Q development agent to each of my remaining issues, wait for it to generate the code, and go through the iterative review process, asking it to fix any issues for me along the way. I then test my application at the end of that software cycle and am very pleased to see that Amazon Q Developer managed to handle all the issues, from project setup, to boilerplate code, to more complex backend and frontend work. A true full-stack developer!

I did notice some things that I wanted to change along the way. For example, it defaulted to using the Invoke API to send the uploaded images to Amazon Bedrock instead of the Converse API. However, because I didn't state this in my requirements, it had no way of knowing. This highlights the importance of being as precise as possible in your issue titles and descriptions to give Q Developer the necessary context and make the development process as efficient as possible.

Having said that, it’s still straightforward to review the generated code on the pull requests, add comments, and let the Amazon Q Developer agent keep working on changes until you’re happy with the final result. Alternatively, you can accept the changes in the pull request and create separate issues that you can assign to Q Developer later when you’re ready to develop them.

Code transformation
You can also transform legacy Java codebases to modern versions with Q Developer. Currently, it can update applications from Java 8 or Java 11 to Java 17, with more options coming in future releases.

The process is very similar to the one I demonstrated earlier in this post, except for a few things.

First, you need to create an issue within a GitHub repository containing a Java 8 or Java 11 application. The title and description don’t really matter in this case. It might even be a short title such as “Migration,” leaving the description empty. Then, on Labels, you assign the Amazon Q transform agent label to the issue.

Much like before, Amazon Q Developer will start working immediately behind the scenes before generating the code on a pull request that you can review. This time, however, the work is done by the Amazon Q transform agent, which is specialized in code migration and takes all the necessary steps to analyze and migrate the code from Java 8 to Java 17.

Notice that it also needs a workflow to be created, as per the documentation. If you don’t have it enabled yet, it will display clear instructions to help you get everything set up before trying again.

As expected, the amount of time needed to perform a migration depends on the size and complexity of your application.

Conclusion
Using Amazon Q Developer in GitHub is like having a full-stack developer that you can collaborate with to develop new features, accelerate the code review process, and rely on to enhance the security posture and quality of your code. You can also use it to automate the migration of Java 8 and 11 applications to Java 17, making it much easier to get started on that migration project you might have been postponing for a while. Best of all, you can do all this from the comfort of your own GitHub environment.

Now available
You can start using Amazon Q Developer in GitHub today for free, with no AWS account setup needed.

Amazon Q Developer in GitHub is currently in preview.

Matheus Guimaraes | codingmatheus
