New Amazon EC2 P6-B200 instances powered by NVIDIA Blackwell GPUs to accelerate AI innovations

Today, we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B200 instances powered by NVIDIA B200 GPUs to address customer needs for high performance and scalability in artificial intelligence (AI), machine learning (ML), and high performance computing (HPC) applications.

Amazon EC2 P6-B200 instances accelerate a broad range of GPU-enabled workloads but are especially well-suited for large-scale distributed AI training and inferencing for foundation models (FMs) with reinforcement learning (RL) and distillation, multimodal training and inference, and HPC applications such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling.

When combined with Elastic Fabric Adapter (EFAv4) networking, hyperscale clustering through EC2 UltraClusters, and the advanced virtualization and security capabilities of the AWS Nitro System, you can train and serve FMs with increased speed, scale, and security. These instances also deliver up to two times the performance for AI training (time to train) and inference (tokens/sec) compared to EC2 P5en instances.

You can accelerate time-to-market for training FMs and deliver higher inference throughput, which lowers inference cost and helps drive adoption of generative AI applications, while also increasing processing performance for HPC applications.

EC2 P6-B200 instance specifications
New EC2 P6-B200 instances provide eight NVIDIA B200 GPUs with 1440 GB of high bandwidth GPU memory, 5th Generation Intel Xeon Scalable processors (Emerald Rapids), 2 TiB of system memory, and 30 TB of local NVMe storage.

Here are the specs for EC2 P6-B200 instances:

| Instance size | GPUs (NVIDIA B200) | GPU memory (GB) | vCPUs | GPU peer-to-peer (GB/s) | Instance storage (TB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps) |
|---|---|---|---|---|---|---|---|
| p6-b200.48xlarge | 8 | 1440 HBM3e | 192 | 1800 | 8 x 3.84 NVMe SSD | 8 x 400 | 100 |

These instances deliver up to a 125 percent improvement in GPU TFLOPs, a 27 percent increase in GPU memory size, and a 60 percent increase in GPU memory bandwidth compared to P5en instances.

P6-B200 instances in action
You can use P6-B200 instances in the US West (Oregon) AWS Region through EC2 Capacity Blocks for ML. To reserve your EC2 Capacity Blocks, choose Capacity Reservations in the Amazon EC2 console.

Select Purchase Capacity Blocks for ML, then choose your total capacity and specify how long you need the EC2 Capacity Block for p6-b200.48xlarge instances. You can reserve an EC2 Capacity Block for 1–14 days or in multiples of 7 days (21 days, 28 days, and so on) up to 182 days, and you can choose a start date up to 8 weeks in advance.
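If you prefer to script the reservation, the same flow is available through the EC2 API. Here’s a minimal sketch using the AWS SDK for Python (Boto3) that searches for a 7-day offering and purchases it; the instance count, duration, and date range are illustrative values, not recommendations:

import datetime

import boto3

# Capacity Blocks for P6-B200 are offered in US West (Oregon).
ec2 = boto3.client("ec2", region_name="us-west-2")

now = datetime.datetime.now(datetime.timezone.utc)

# Find 7-day Capacity Block offerings for one p6-b200.48xlarge instance,
# starting any time within the next 8 weeks.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p6-b200.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=7 * 24,
    StartDateRange=now,
    EndDateRange=now + datetime.timedelta(weeks=8),
)["CapacityBlockOfferings"]

# Purchase the first returned offering; the total price is charged up front.
if offerings:
    reservation = ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
    print(reservation["CapacityReservation"]["CapacityReservationId"])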

Your EC2 Capacity Block is then scheduled. The total price of an EC2 Capacity Block is charged up front, and the price doesn’t change after purchase. The payment is billed to your account within 12 hours after you purchase the EC2 Capacity Block. To learn more, visit Capacity Blocks for ML in the Amazon EC2 User Guide.

When launching P6-B200 instances, you can use AWS Deep Learning AMIs (DLAMI), which provide ML practitioners and researchers with the infrastructure and tools to quickly build scalable, secure, distributed ML applications in preconfigured environments.

To run instances, you can use the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs.
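For example, here’s a minimal Boto3 sketch that launches an instance into a purchased Capacity Block; the AMI ID and reservation ID are placeholders you would replace with your DLAMI of choice and your own Capacity Block reservation:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a DLAMI ID for your Region
    InstanceType="p6-b200.48xlarge",
    MinCount=1,
    MaxCount=1,
    # Capacity Block launches use the capacity-block market type
    # and target the reservation you purchased.
    InstanceMarketOptions={"MarketType": "capacity-block"},
    CapacityReservationSpecification={
        "CapacityReservationTarget": {
            "CapacityReservationId": "cr-0123456789abcdef0"  # placeholder
        }
    },
)
print(response["Instances"][0]["InstanceId"])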

You can integrate EC2 P6-B200 instances seamlessly with various AWS managed services such as Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Simple Storage Service (Amazon S3), and Amazon FSx for Lustre. Support for Amazon SageMaker HyperPod is also coming soon.

Now available
Amazon EC2 P6-B200 instances are available today in the US West (Oregon) Region and can be purchased as EC2 Capacity Blocks for ML.

Give Amazon EC2 P6-B200 instances a try in the Amazon EC2 console. To learn more, refer to the Amazon EC2 P6 instance page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy



Accelerate CI/CD pipelines with the new AWS CodeBuild Docker Server capability

Starting today, you can use the AWS CodeBuild Docker Server capability to provision a dedicated, persistent Docker server directly within your CodeBuild project. With the Docker Server capability, you can accelerate your Docker image builds by centralizing image building on a remote host, which reduces wait times and increases overall efficiency.

In my benchmark, the Docker Server capability reduced the total build time by 98 percent, from 24 minutes and 54 seconds to 16 seconds. Here’s a quick look at this feature from my AWS CodeBuild projects.

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. Building Docker images is one of the most common use cases for CodeBuild customers, and the service has progressively improved this experience over time by releasing features such as Docker layer caching and reserved capacity to improve Docker build performance.

With the new Docker Server capability, you can reduce build time for your applications by providing a persistent Docker server with consistent caching. When enabled in a CodeBuild project, a dedicated Docker server is provisioned with persistent storage that maintains your Docker layer cache. This server can handle multiple concurrent Docker build operations, with all builds benefiting from the same centralized cache.

Using AWS CodeBuild Docker Server
Let me walk you through a demonstration that showcases the benefits of the new Docker Server capability.

For this demonstration, I’m building a complex, multi-layered Docker image based on the official AWS CodeBuild curated Docker images repository, specifically the Dockerfile for building a standard Ubuntu image. This image contains numerous dependencies and tools required for modern continuous integration and continuous delivery (CI/CD) pipelines, making it a good example of the type of large Docker builds that development teams regularly perform.


# Copyright 2020-2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Amazon Software License (the "License"). You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#    http://aws.amazon.com/asl/
#
# or in the "license" file accompanying this file.
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied.
# See the License for the specific language governing permissions and limitations under the License.
FROM public.ecr.aws/ubuntu/ubuntu:20.04 AS core

ARG DEBIAN_FRONTEND="noninteractive"

# Install SSH, Git, Firefox, GeckoDriver, Chrome, ChromeDriver, stunnel, AWS tools, SSM, AWS CLI v2, and env tools for runtimes (NodeJS, Ruby, Python, PHP, Java, Go, .NET, PowerShell Core), plus Docker, Composer, and other utilities
COMMAND REDACTED FOR BREVITY
# Activate runtime versions specific to image version.
RUN n $NODE_14_VERSION
RUN pyenv global $PYTHON_39_VERSION
RUN phpenv global $PHP_80_VERSION
RUN rbenv global $RUBY_27_VERSION
RUN goenv global $GOLANG_15_VERSION

# Configure SSH
COPY ssh_config /root/.ssh/config
COPY runtimes.yml /codebuild/image/config/runtimes.yml
COPY dockerd-entrypoint.sh /usr/local/bin/dockerd-entrypoint.sh
COPY legal/bill_of_material.txt /usr/share/doc/bill_of_material.txt
COPY amazon-ssm-agent.json /etc/amazon/ssm/amazon-ssm-agent.json

ENTRYPOINT ["/usr/local/bin/dockerd-entrypoint.sh"]

This Dockerfile creates a comprehensive build environment with multiple programming languages, build tools, and dependencies – exactly the type of image that would benefit from persistent caching.

In the build specification (buildspec), I use the docker buildx build . command:

version: 0.2
phases:
  build:
    commands:
      - cd ubuntu/standard/5.0
      - docker buildx build -t codebuild-ubuntu:latest .

To enable the Docker Server capability, I navigate to the AWS CodeBuild console and select Create project. I can also enable this capability when editing existing CodeBuild projects.

I fill in all details and configuration. In the Environment section, I select Additional configuration.

Then, I scroll down to Docker server configuration and select Enable docker server for this project. When I select this option, I can choose a compute type configuration for the Docker server. When I’m finished with the configurations, I create the project.
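You can also enable the capability programmatically. The following Boto3 sketch updates an existing project; the project name is hypothetical, and the dockerServer environment field below mirrors the console option, so treat its exact shape as an assumption and confirm it against the CreateProject/UpdateProject API reference:

import boto3

codebuild = boto3.client("codebuild")

# Enable a dedicated, persistent Docker server on an existing build project.
codebuild.update_project(
    name="codebuild-ubuntu",  # hypothetical project name
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_LARGE",
        # Assumed field shape: corresponds to the console's
        # "Enable docker server for this project" option.
        "dockerServer": {"computeType": "BUILD_GENERAL1_LARGE"},
    },
)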

Now, let’s see the Docker Server capability in action.

The initial build takes approximately 24 minutes and 54 seconds to complete because it needs to download and compile all dependencies from scratch. This is expected for the first build of such a complex image.

For subsequent builds with no code changes, the build takes only 16 seconds, a 98 percent reduction in build time.

Looking at the logs, I can see that with Docker Server, most layers are pulled from the persistent cache.

The persistent caching provided by the Docker Server maintains all layers between builds, which is particularly valuable for large, complex Docker images with many layers. This demonstrates how Docker Server can dramatically improve throughput for teams running numerous Docker builds in their CI/CD pipelines.

Additional things to know
Here are a few things to note:

  • Architecture support – The feature is available for both x86 (Linux) and ARM builds.
  • Pricing – To learn more about pricing for Docker Server capability, refer to the AWS CodeBuild pricing page.
  • Availability – This feature is available in all AWS Regions where AWS CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page.

You can learn more about the Docker Server feature in the AWS CodeBuild documentation.

Happy building!

— Donnie Prakoso



Accelerate the modernization of Mainframe and VMware workloads with AWS Transform

Generative AI has brought many new possibilities to organizations. It has equipped them with new abilities to retire technical debt, modernize legacy systems, and build agile infrastructure to help unlock the value that is trapped in their internal data. However, many enterprises still rely heavily on legacy IT infrastructure, particularly mainframes and VMware-based systems. These platforms have been the backbone of critical operations for decades, but they hinder organizations’ ability to innovate, scale effectively, and reduce technical debt in an era where cloud-first strategies dominate. The need to modernize these workloads is clear, but the journey has traditionally been complex and risky.

The complexity spans multiple dimensions. Financially, organizations face mounting licensing costs and expensive migration projects. Technically, they must untangle legacy dependencies while meeting compliance requirements. Organizationally, they must manage the transition of teams who’ve built careers around legacy systems and navigate undocumented institutional knowledge.

AWS Transform directly addresses these challenges with purpose-built agentic AI that accelerates and de-risks your legacy modernization. It automates the assessment, planning, and transformation of both mainframe and VMware workloads into cloud-based architectures, streamlining the entire process. Through intelligent insights, automated code transformation, and human-in-the-loop workflows, organizations can now tackle even the most challenging modernization projects with greater confidence and efficiency.

Mainframe workload migration
AWS Transform for mainframe is the first agentic AI service for modernizing mainframe workloads at scale. The specialized mainframe agent accelerates mainframe modernization by automating complex, resource-intensive tasks across every phase of modernization, from initial assessment to final deployment. It streamlines the migration of legacy applications built on IBM z/OS, including COBOL, CICS, Db2, and VSAM, to modern cloud environments, cutting modernization timelines from years to months.

Let’s look at a few examples of how AWS Transform can help you through different aspects of the migration process.

Code analysis – AWS Transform provides comprehensive insights into your codebase: it automatically examines the code, creates detailed dependency graphs, measures code complexity, and identifies component relationships.

Documentation – AWS Transform for mainframe creates comprehensive technical and functional documentation of mainframe applications, preserving critical knowledge about features, program logic, and data flows. You can interact with the generated documentation through an AI-powered chat interface to discover and retrieve information quickly.

Business rule extraction – AWS Transform extracts and presents complex logic in plain language so you can gain visibility into business processes embedded within legacy applications. This enables both business and technical stakeholders to gain a greater understanding of application functionality.

Code decomposition – AWS Transform offers sophisticated code decomposition tools, including interactive dependency graphs and domain separation capabilities, enabling users to visualize and modify relationships between components while identifying key business functions. The solution also streamlines migration planning through an interactive wave sequence planner that considers user preferences to generate optimized migration strategies.

Modernization wave planning – With its specialized agent, AWS Transform for mainframe creates prioritized modernization wave sequences based on code and data dependencies, code volume, and business priorities. It enables modernization teams to make data-driven, customized migration plans that align with their specific organizational needs.

Code refactoring – AWS Transform can refactor millions of lines of mainframe code in minutes, converting COBOL, VSAM, and DB2 systems into modern Java Spring Boot applications while maintaining functional equivalence and transforming CICS transactions into web services and JCL batch processes into Groovy scripts. The solution provides high-quality output through configurable settings and bundled runtime capabilities, producing Java code that emphasizes readability, maintainability, and technical excellence.

Deployments – AWS Transform provides customizable deployment templates that streamline the deployment process through user-defined inputs. For added efficiency, the solution bundles the selected runtime version with the migrated application, enabling seamless deployment as a complete package.

By integrating intelligent documentation analysis, business rules extraction, and human-in-the-loop collaboration capabilities, AWS Transform helps organizations accelerate their mainframe transformation while reducing risk and maintaining business continuity.

VMware modernization
With rapid changes in VMware licensing and support models, organizations are increasingly exploring alternatives despite the difficulties associated with migrating and modernizing VMware workloads. This is aggravated by the accumulation of technical debt, which typically creates complex, poorly documented environments managed by multiple teams, leading to vendor lock-in and collaboration challenges that further hinder migration efforts.

AWS Transform is the first agentic AI service of its kind for VMware modernization, helping you overcome those difficulties. It can reduce risk and accelerate the modernization of VMware workloads by automating application discovery, dependency mapping, migration planning, network conversion, and EC2 instance optimization, reducing manual effort and accelerating cloud adoption.

The process is organized into four phases: inventory discovery, wave planning, network conversion, and server migration. AWS Transform uses agentic AI capabilities to analyze and map complex VMware environments, converts network configurations into AWS built-in constructs, and helps you orchestrate dependency-aware migration waves for seamless cutovers. It also provides a collaborative web interface that keeps AWS teams, partners, and customers aligned throughout the modernization journey.

Let’s take a quick tour to see how this works.

Setting up
Before you can start using the service, you must first enable it by navigating to the AWS Transform console. AWS Transform requires AWS IAM Identity Center (IdC) to manage users and set up appropriate permissions. If you don’t yet have IdC set up, you’ll be asked to configure it first and return to the AWS Transform console later to continue the process.

With IdC available, you can then choose your encryption settings. AWS Transform gives you the option to use a default AWS managed key, or you can use your own keys through AWS Key Management Service (AWS KMS).

After completing this step, AWS Transform will be enabled. You can manage admin access from the console by navigating to Users and using the search box to find the users you want; you must first create users or groups in IdC if they don’t already exist. The service console helps admins provision users who will get access to the web app. Each provisioned user receives an email with a link to set a password and get their personalized URL for the web app.

You interact with AWS Transform through a dedicated web experience. To get the URL, navigate to Settings, where you can check your configurations and copy the links to the AWS Transform web experience where you and your teams can start using the service.

Discovery
AWS Transform can discover your VMware environment either automatically through AWS Application Discovery Service collectors or you can provide your own data by importing existing RVTools export files.

To get started, choose the Create or select connectors task and provide the account IDs for one or more AWS accounts that will be used for discovery. This will generate links that you can follow to authorize each account for usage within AWS Transform. You can then move on to the Perform discovery task, where you can choose to install AWS Application Discovery Service collectors or upload your own files such as exports from RVTools.

Provisioning
The steps for the provisioning phase are similar to the ones described earlier for discovery. You connect target AWS accounts by entering their account IDs and validating the authorization requests, which then enables the next steps, such as the Generate VPC configuration step. Here, you can import your RVTools files or NSX exports, if applicable, so AWS Transform can understand your networking requirements.

You should then continue working through the job plan until it’s ready to deploy your Amazon Virtual Private Cloud (Amazon VPC). All the infrastructure as code (IaC) is stored in Amazon Simple Storage Service (Amazon S3) buckets in the target AWS account.

Review the proposed changes and, if you’re happy, start the deployment process of the AWS resources to the target accounts.

Deployment
AWS Transform requires you to set up AWS Application Migration Service (MGN) in the target AWS accounts to automate the migration process. Choose the Initiate VM migration task and use the link to navigate to the service console, then follow the instructions to configure it.

After setting up service permissions, you’ll proceed to the implementation phase of the waves created by AWS Transform and start the migration process. For each wave, you’ll first be asked to make various choices such as setting the sizing preference and tenancy for the Amazon Elastic Compute Cloud (Amazon EC2) instances. Confirm your selections and continue following the instructions given by AWS Transform until you reach the Deploy replication agents stage, where you can start the migration for that wave.

After you start the wave migration process, you can switch to the dashboard at any time to check on progress.

With its agentic AI capabilities, AWS Transform offers a powerful solution for accelerating and de-risking mainframe and VMware modernization. By automating complex assessment and transformation processes, AWS Transform reduces the time associated with legacy system migration while minimizing the potential for errors and business disruption, enabling more agile, efficient, and future-ready IT environments within your organization.

Things to know
Availability – AWS Transform for mainframe is available in the US East (N. Virginia) and Europe (Frankfurt) Regions. AWS Transform for VMware offers different availability options for data collection and migrations. Please refer to the AWS Transform for VMware FAQ for more details.

Pricing – Currently, we offer our core features, including assessment and transformation, at no cost to AWS customers.

Here are a few links for further reading.

Dive deeper into mainframe modernization and learn more about AWS Transform for mainframe.

Explore more about VMware modernization and how to get started with your VMware migration journey.

Check out this interactive demo of AWS Transform for mainframe and this interactive demo of AWS Transform for VMware.

Matheus Guimaraes | @codingmatheus



AWS Transform for .NET, the first agentic AI service for modernizing .NET applications at scale

I started my career as a .NET developer and have seen .NET evolve over the last couple of decades. Like many of you, I also developed multiple enterprise applications in .NET Framework that ran only on Windows. I fondly remember building my first enterprise application with .NET Framework. Although it served us well, the technology landscape has significantly shifted. Now that there is an open source and cross-platform version of .NET that can run on Linux, these legacy enterprise applications built on .NET Framework need to be ported and modernized.

The benefits of porting to Linux are compelling: applications cost 40 percent less to operate because they save on Windows licensing costs, run 1.5–2 times faster, and handle growing workloads with 50 percent better scalability. Having helped port several applications, I can say the effort is worth the rewards.

However, porting .NET Framework applications to cross-platform .NET is a labor-intensive and error-prone process. You have to perform multiple steps, such as analyzing the codebase, detecting incompatibilities, implementing fixes while porting the code, and then validating the changes. For enterprises, the challenge becomes even more complex because they might have hundreds of .NET Framework applications in their portfolio.

At re:Invent 2024, we previewed this capability as Amazon Q Developer transformation capabilities for .NET to help port your .NET applications at scale. The experience is available as a unified web experience for at-scale transformation and within your integrated development environment (IDE) for individual project and solution porting.

Now that we’ve incorporated your valuable feedback and suggestions, we’re excited to announce today the general availability of AWS Transform for .NET. We’ve also added new capabilities to support projects with private NuGet packages, port model-view-controller (MVC) Razor views to ASP.NET Core Razor views, and execute the ported unit tests.

I’ll expand on the key new capabilities in a moment, but let’s first take a quick look at the two porting experiences of AWS Transform for .NET.

Large-scale porting experience for .NET applications
Enterprise digital transformation is typically driven by central teams responsible for modernizing hundreds of applications across multiple business units. Different teams have ownership of different applications and their respective repositories. Success requires close coordination between these teams and the application owners and developers across business units. To accelerate this modernization at scale, AWS Transform for .NET provides a web experience that enables teams to connect directly to source code repositories and efficiently transform multiple applications across the organization. For select applications requiring dedicated developer attention, the same agent capabilities are available to developers as an extension for Visual Studio IDE.

Let’s start by looking at how the web experience of AWS Transform for .NET helps port hundreds of .NET applications at scale.

Web experience of AWS Transform for .NET
To get started with the web experience of AWS Transform, I onboard using the steps outlined in the documentation, sign in using my credentials, and create a job for .NET modernization.

Create a new job for .NET Transformation

AWS Transform for .NET creates a job plan, which is a sequence of steps that the agent will execute to assess, discover, analyze, and transform applications at scale. It then waits for me to set up a connector to connect to my source code repositories.

Set up a connector to connect to the source code repository

After the connector is in place, AWS Transform begins discovering repositories in my account. It conducts an assessment focused on three key areas: repository dependencies, required private packages and third-party libraries, and supported project types within your repositories.

Based on this assessment, it generates a recommended transformation plan. The plan orders repositories according to their last modification dates, dependency relationships, private package requirements, and the presence of supported project types.

AWS Transform for .NET then prepares for the transformation process by requesting specific inputs, such as the target branch destination, target .NET version, and the repositories to be transformed.

To select the repositories to transform, I have two options: use the recommended plan or customize the transformation plan by selecting repositories manually. For selecting repositories manually, I can use the UI or download the repository mapping and upload the customized list.

select the repositories to transform

AWS Transform for .NET automatically ports the application code, builds the ported code, executes unit tests, and commits the ported code to a new branch in my repository. It provides a comprehensive transformation summary, including modified files, test outcomes, and suggested fixes for any remaining work.

While the web experience helps accelerate large-scale porting, some applications may require developer attention. For these cases, the same agent capabilities are available in the Visual Studio IDE.

Visual Studio IDE experience of AWS Transform for .NET
Now, let’s explore how AWS Transform for .NET works within Visual Studio.

To get started, I install the latest version of AWS Toolkit extension for Visual Studio and set up the prerequisites.

I open a .NET Framework solution, and in the Solution Explorer, I see the context menu item Port project with AWS Transform for an individual project.

Context menu for Port project with AWS Transform in Visual Studio

I provide the required inputs, such as the target .NET version and the approval for the agents to autonomously transform code, execute unit tests, generate a transformation summary, and validate Linux-readiness.

Transformation summary after the project is transformed in Visual Studio

I can review the code changes made by the agents locally and continue updating my codebase.

Let’s now explore some of the key new capabilities added to AWS Transform for .NET.

Support for projects with private NuGet package dependencies 
During preview, only projects with public NuGet package dependencies were supported. With general availability, we now support projects with private NuGet package dependencies. This has been one of the most requested features during the preview.

The feature I really love is that AWS Transform can detect cross-repository dependencies. If it finds the source code of my private NuGet package, it automatically transforms that as well. However, if it can’t locate the source code, in the web experience, it provides me the flexibility to upload the required NuGet packages.

AWS Transform displays the missing package dependencies that need to be resolved. There are two ways to do this: I can either use the provided PowerShell script to create and upload packages, or I can build the application locally and upload the NuGet packages from the packages folder in the solution directory.

Upload packages to resolve missing dependencies

After I upload the missing NuGet packages, AWS Transform is able to resolve the dependencies. It’s best to provide both the .NET Framework and cross-platform .NET versions of the NuGet packages. If the cross-platform .NET version is not available, then at a minimum the .NET Framework version is required for AWS Transform to add it as an assembly reference and proceed with the transformation.

Unit test execution
During preview, we supported porting unit tests from .NET Framework to cross-platform .NET. With general availability, we’ve also added support for executing unit tests after the transformation is complete.

After the transformation is complete and the unit tests are executed, I can see the results in the dashboard and view the status of the tests at each individual test project level.

Dashboard after successful transformation in web showing executed unit tests

Transformation visibility and summary
After the transformation is complete, I can download a detailed report in JSON format that gives me a list of transformed repositories, details about each repository, and the status of the transformation actions performed for each project within a repository. I can view the natural language transformation summary at the project level to understand AWS Transform output with project-level granularity. The summary provides me with an overview of updates along with key technical changes to the codebase.

detailed report of transformed project highlighting transformation summary of one of the project

Other new features
Let’s have a quick look at other new features we’ve added with general availability:

  • Support for porting UI layer – During preview, you could only port the business logic layers of MVC applications using AWS Transform, and you had to port the UI layer manually. With general availability, you can now use AWS Transform to port MVC Razor views to ASP.NET Core Razor views.
  • Expanded connector support – During preview, you could connect only to GitHub repositories. Now with general availability, you can connect to GitHub, GitLab, and Bitbucket repositories.
  • Cross-repository dependency – When you select a repository for transformation, dependent repositories are automatically selected for transformation.
  • Download assessment report – You can download a detailed assessment report of the identified repositories in your account and private NuGet packages referenced in these repositories.
  • Email notifications with deep links – You’ll receive email notifications when a job’s status changes to completed or stopped. These notifications include deep links to the transformed code branches for review and continued transformation in your IDE.

Things to know
Some additional things to know are:

  • Regions – AWS Transform for .NET is generally available today in the Europe (Frankfurt) and US East (N. Virginia) Regions.
  • Pricing – Currently, there is no additional charge for AWS Transform. Any resources you create or continue to use in your AWS account using the output of AWS Transform will be billed according to their standard pricing. For limits and quotas, refer to the documentation.
  • .NET versions supported – AWS Transform for .NET supports transforming applications written using .NET Framework versions 3.5+, .NET Core 3.1, and .NET 5+ to the cross-platform .NET version, .NET 8.
  • Application types supported – AWS Transform for .NET supports porting C# code projects of the following types: console application, class library, unit tests, WebAPI, Windows Communication Foundation (WCF) service, MVC, and single-page application (SPA).
  • Getting started – To get started, visit AWS Transform for .NET User Guide.
  • Webinar – Join the webinar Accelerate .NET Modernization with Agentic AI to experience AWS Transform for .NET through a live demonstration.

– Prasad



Web Scanning SonicWall for CVE-2021-20016 – Update, (Wed, May 14th)

On 29 Apr 2025, I published a diary [1] on scanning activity looking for SonicWall, and since that publication this activity has grown 10-fold. Over the past 14 days, several BACS students have reported activity related to SonicWall scans, all related to the same two URLs [4][5] previously mentioned in my last diary. My own DShield sensor was probed by 25 separate IPs during those last 14 days. The three most active IPs were all from the same subnet: 141.98.80.0/24.
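As a quick sketch of how you might spot this on your own sensor, the following Python snippet tallies hits from that subnet in a standard common/combined-format web access log (the log path is hypothetical; adjust for your setup):

import ipaddress
from collections import Counter

subnet = ipaddress.ip_network("141.98.80.0/24")
hits = Counter()

with open("access.log") as log:  # hypothetical log location
    for line in log:
        # In common/combined log format, the source IP is the first field.
        field = line.split(" ", 1)[0]
        try:
            if ipaddress.ip_address(field) in subnet:
                hits[field] += 1
        except ValueError:
            continue  # skip malformed lines

for ip, count in hits.most_common(3):
    print(ip, count)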

Microsoft Patch Tuesday: May 2025, (Tue, May 13th)

Today, Microsoft released its expected Patch Tuesday update for May. This update fixes 78 vulnerabilities: 11 are rated as critical, and 66 as important. Five of the vulnerabilities have already been exploited, and two were publicly known but not yet exploited. 70 of the vulnerabilities were patched today; 8 had patches delivered earlier this month.

Apple Updates Everything: May 2025 Edition, (Mon, May 12th)

Apple released its expected update for all its operating systems. The update, in addition to providing new features, patches 65 different vulnerabilities. Many of these vulnerabilities affect multiple operating systems within the Apple ecosystem.

Of note is CVE-2025-31200. This vulnerability is already exploited in "targeted attacks". Apple released patches for this vulnerability in mid-April for its current operating systems (iOS 18, macOS 15, tvOS 18, and visionOS 2). This update includes patches for older versions of macOS and iPadOS/iOS.

 

The update covers iOS 18.5 and iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6, watchOS 11.5, tvOS 18.5, and visionOS 2.5. The vulnerabilities addressed, with the affected component and the releases that include each fix (iOS/iPadOS 18.5 below means iOS 18.5 and iPadOS 18.5):

  • CVE-2025-24097 (AirDrop) – An app may be able to read arbitrary file metadata. Fixed in: iPadOS 17.7.7
  • CVE-2025-24111 (Display) – An app may be able to cause unexpected system termination. Fixed in: iPadOS 17.7.7
  • CVE-2025-24142 (Notification Center) – An app may be able to access sensitive user data. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-24144 (Kernel) – An app may be able to leak sensitive kernel state. Fixed in: iPadOS 17.7.7, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-24155 (WebContentFilter) – An app may be able to disclose kernel memory. Fixed in: macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-24213 (WebKit) – A type confusion issue could lead to memory corruption. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-24220 (Sandbox Profiles) – An app may be able to read a persistent device identifier. Fixed in: iPadOS 17.7.7
  • CVE-2025-24222 (BOM) – Processing maliciously crafted web content may lead to an unexpected process crash. Fixed in: macOS Sequoia 15.5
  • CVE-2025-24223 (WebKit) – Processing maliciously crafted web content may lead to memory corruption. Fixed in: macOS Sequoia 15.5
  • CVE-2025-24225 (Mail Addressing) – Processing an email may lead to user interface spoofing. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7
  • CVE-2025-24258 (DiskArbitration) – An app may be able to gain root privileges. Fixed in: macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-24259 (Parental Controls) – An app may be able to retrieve Safari bookmarks without an entitlement check. Fixed in: iPadOS 17.7.7
  • CVE-2025-24274 (Mobile Device Service) – A malicious app may be able to gain root privileges. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-30440 (Libinfo) – An app may be able to bypass ASLR. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-30442 (SoftwareUpdate) – An app may be able to gain elevated privileges. Fixed in: macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-30443 (Found in Apps) – An app may be able to access user-sensitive data. Fixed in: macOS Sequoia 15.5
  • CVE-2025-30448 (iCloud Document Sharing) – An attacker may be able to turn on sharing of an iCloud folder without authentication. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sonoma 14.7.6, macOS Ventura 13.7.6, visionOS 2.5
  • CVE-2025-30453 (DiskArbitration) – A malicious app may be able to gain root privileges. Fixed in: macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31196 (CoreGraphics) – Processing a maliciously crafted file may lead to a denial-of-service or potentially disclose memory contents. Fixed in: iPadOS 17.7.7, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31200 (CoreAudio) – Processing an audio stream in a maliciously crafted media file may result in code execution. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS released before iOS 18.4.1. Fixed in: watchOS 11.5
  • CVE-2025-31204 (WebKit) – Processing maliciously crafted web content may lead to memory corruption. Fixed in: iOS/iPadOS 18.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31205 (WebKit) – A malicious website may exfiltrate data cross-origin. Fixed in: iOS/iPadOS 18.5, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31206 (WebKit) – Processing maliciously crafted web content may lead to an unexpected Safari crash. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31207 (FrontBoard) – An app may be able to enumerate a user's installed apps. Fixed in: iOS/iPadOS 18.5
  • CVE-2025-31208 (CoreAudio) – Parsing a file may lead to an unexpected app termination. Fixed in: all listed releases
  • CVE-2025-31209 (CoreGraphics) – Parsing a file may lead to disclosure of user information. Fixed in: all listed releases
  • CVE-2025-31210 (FaceTime) – Processing web content may lead to a denial-of-service. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7
  • CVE-2025-31212 (Core Bluetooth) – An app may be able to access sensitive user data. Fixed in: iOS/iPadOS 18.5, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31213 (Security) – An app may be able to access associated usernames and websites in a user's iCloud Keychain. Fixed in: iPadOS 17.7.7, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31214 (Baseband) – An attacker in a privileged network position may be able to intercept network traffic. Fixed in: iOS/iPadOS 18.5
  • CVE-2025-31215 (WebKit) – Processing maliciously crafted web content may lead to an unexpected process crash. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31217 (WebKit) – Processing maliciously crafted web content may lead to an unexpected Safari crash. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31218 (NetworkExtension) – An app may be able to observe the hostnames of new network connections. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31219 (Kernel) – An attacker may be able to cause unexpected system termination or corrupt kernel memory. Fixed in: all listed releases
  • CVE-2025-31220 (Weather) – A malicious app may be able to read sensitive location information. Fixed in: iPadOS 17.7.7, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31221 (Security) – A remote attacker may be able to leak memory. Fixed in: all listed releases
  • CVE-2025-31222 (mDNSResponder) – A user may be able to elevate privileges. Fixed in: iOS/iPadOS 18.5, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31224 (Sandbox) – An app may be able to bypass certain Privacy preferences. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31225 (Call History) – Call history from deleted apps may still appear in spotlight search results. Fixed in: iOS/iPadOS 18.5
  • CVE-2025-31226 (ImageIO) – Processing a maliciously crafted image may lead to a denial-of-service. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31227 (Notes) – An attacker with physical access to a device may be able to access a deleted call recording. Fixed in: iOS/iPadOS 18.5
  • CVE-2025-31228 (Notes) – An attacker with physical access to a device may be able to access notes from the lock screen. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7
  • CVE-2025-31232 (Installer) – A sandboxed app may be able to access sensitive user data. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31233 (CoreMedia) – Processing a maliciously crafted video file may lead to unexpected app termination or corrupt process memory. Fixed in: all listed releases
  • CVE-2025-31234 (Pro Res) – An attacker may be able to cause unexpected system termination or corrupt kernel memory. Fixed in: iOS/iPadOS 18.5, macOS Sequoia 15.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31235 (Audio) – An app may be able to cause unexpected system termination. Fixed in: iPadOS 17.7.7, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31236 (Finder) – An app may be able to access sensitive user data. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31237 (afpfs) – Mounting a maliciously crafted AFP network share may lead to system termination. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31238 (WebKit) – Processing maliciously crafted web content may lead to memory corruption. Fixed in: iOS/iPadOS 18.5, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31239 (CoreMedia) – Parsing a file may lead to an unexpected app termination. Fixed in: all listed releases
  • CVE-2025-31241 (Kernel) – A remote attacker may cause an unexpected app termination. Fixed in: all listed releases
  • CVE-2025-31242 (StoreKit) – An app may be able to access sensitive user data. Fixed in: iPadOS 17.7.7, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31244 (quarantine) – An app may be able to break out of its sandbox. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31245 (Pro Res) – An app may be able to cause unexpected system termination. Fixed in: iOS/iPadOS 18.5, iPadOS 17.7.7, macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6, tvOS 18.5, visionOS 2.5
  • CVE-2025-31246 (afpfs) – Connecting to a malicious AFP server may corrupt kernel memory. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6
  • CVE-2025-31247 (SharedFileList) – An attacker may gain access to protected parts of the file system. Fixed in: macOS Sequoia 15.5, macOS Sonoma 14.7.6, macOS Ventura 13.7.6
  • CVE-2025-31249 (Sandbox) – An app may be able to access sensitive user data. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31250 (TCC) – An app may be able to access sensitive user data. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31251 (AppleJPEG) – Processing a maliciously crafted media file may lead to unexpected app termination or corrupt process memory. Fixed in: all listed releases
  • CVE-2025-31253 (FaceTime) – Muting the microphone during a FaceTime call may not result in audio being silenced. Fixed in: iOS/iPadOS 18.5
  • CVE-2025-31256 (Notes) – Hot corner may unexpectedly reveal a user's deleted notes. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31257 (WebKit) – Processing maliciously crafted web content may lead to an unexpected Safari crash. Fixed in: iOS/iPadOS 18.5, macOS Sequoia 15.5, watchOS 11.5, tvOS 18.5, visionOS 2.5
  • CVE-2025-31258 (RemoteViewServices) – An app may be able to break out of its sandbox. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31259 (SoftwareUpdate) – An app may be able to gain elevated privileges. Fixed in: macOS Sequoia 15.5
  • CVE-2025-31260 (Apple Intelligence Reports) – An app may be able to access sensitive user data. Fixed in: macOS Sequoia 15.5


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Weekly Roundup: South America expansion, Q Developer in OpenSearch, and more (May 12, 2025)

I’ve always been fascinated by how quickly we’re able to stand up new Regions and Availability Zones at AWS. Today there are 36 launched Regions and 114 launched Availability Zones. That’s amazing!

This past week at AWS was marked by significant expansion of our global infrastructure. The announcement of a new Region in the works for South America means customers will have more options for meeting their low latency and data residency requirements. Alongside the expansion, AWS announced the availability of numerous instance types in additional Regions.

In addition to the infrastructure expansion, AWS is also expanding the reach of Amazon Q Developer into Amazon OpenSearch Service.

Last week’s launches

Instance announcements

AWS expanded instance availability for an array of instance types across additional Regions.

Additional updates

Upcoming events

We are in the middle of AWS Summit season! AWS Summits run throughout the summer in cities all around the world. Be sure to check the calendar to find out when an AWS Summit is happening near you. Here are the remaining Summits for May 2025.



It Is 2025, And We Are Still Dealing With Default IoT Passwords And Stupid 2013 Router Vulnerabilities, (Mon, May 12th)

Unipi Technologies is a company developing programmable logic controllers for a number of different applications like home automation, building management, and industrial controls. The modules produced by Unipi are likely to appeal to a more professional audience. All modules are based on the "Mervis" platform, a customized Linux distribution maintained by Unipi.