Tag Archives: AWS

Create Snapshots From Any Block Storage Using EBS Direct APIs

I am excited to announce that you can now create Amazon Elastic Block Store (EBS) snapshots from any block storage data, such as on-premises volumes, volumes from another cloud provider, existing block data stored on Amazon Simple Storage Service (S3), or even your own laptop 🙂

AWS customers using the cloud for disaster recovery of on-premises infrastructure all have the same question: how can I transfer my on-premises volume data to the cloud efficiently and at low cost? You usually create temporary Amazon Elastic Compute Cloud (EC2) instances, attach Amazon Elastic Block Store (EBS) volumes, transfer the data at block level from on-premises to these new EBS volumes, take a snapshot of every EBS volume created, and tear down the temporary infrastructure. Some of you choose to use CloudEndure to simplify this process. Or maybe you just gave up and did not copy your on-premises volumes to the cloud because of the complexity.

To simplify this, we are announcing today three new APIs that are part of the EBS direct APIs, a set of APIs we announced at re:Invent 2019. We initially launched read and diff APIs; today we are extending them with write capabilities. These three new APIs allow you to create Amazon Elastic Block Store (EBS) snapshots from your on-premises volumes, or from any block storage data that you want to be able to store and recover in AWS.

With the addition of write capabilities to the EBS direct APIs, you can now create new snapshots from your on-premises volumes, create incremental snapshots, and delete them. Once a snapshot is created, it has all the benefits of snapshots created from Amazon Elastic Block Store (EBS) volumes. You can copy them, share them between AWS accounts, use them with Fast Snapshot Restore, or create Amazon Elastic Block Store (EBS) volumes from them.

Creating Amazon Elastic Block Store (EBS) snapshots from any volume, without having to spin up Amazon Elastic Compute Cloud (EC2) instances and EBS volumes, simplifies the creation and management of your disaster recovery copies in the cloud and lowers their cost.

Let’s have a closer look at the API
You first call StartSnapshot to create a new snapshot. When the snapshot is incremental, you pass the ID of the parent snapshot. You can also pass additional tags to apply to the snapshot, or encrypt these snapshots and manage the key, just like usual. If you choose to encrypt snapshots, be sure to check our technical documentation to understand the nuances and options.

Then, for each block of data, you call PutSnapshotBlock. This API has 6 mandatory parameters: snapshot-id, block-index, block-data, block-length, checksum, and checksum-algorithm. The API supports block lengths of 512 KB. You can send your blocks in any order, and in parallel; block-index keeps them in the correct order.

After you have sent all the blocks, you call CompleteSnapshot, with the changed-blocks-count parameter set to the number of blocks you sent.

Let’s put all these together
Here is the pseudo code you would write to create a snapshot.

AmazonEBS amazonEBS = AmazonEBSClientBuilder.standard()
   .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpointName, awsRegion))
   .withCredentials(credentialsProvider)
   .build();

// Create the snapshot and keep its ID for the following calls
StartSnapshotResult response = amazonEBS.startSnapshot(startSnapshotRequest);
String snapshotId = response.getSnapshotId();

// Send each block of data; blocks can be sent in any order, and in parallel
for (Block block : changeset) {
    PutSnapshotBlockResult putResponse = amazonEBS.putSnapshotBlock(putSnapshotBlockRequest);
}

// Tell EBS that all blocks have been sent
amazonEBS.completeSnapshot(completeSnapshotRequest);
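
To make this more concrete, here is a minimal sketch of the full flow, assuming the AWS SDK for Java v1 used above. The readBlock() helper, the list of block indexes, and the volume size are placeholders to replace with your own logic, and the setter names follow the SDK’s fluent conventions, so double-check them against the SDK reference.

import java.io.ByteArrayInputStream;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.List;

import com.amazonaws.services.ebs.AmazonEBS;
import com.amazonaws.services.ebs.model.CompleteSnapshotRequest;
import com.amazonaws.services.ebs.model.PutSnapshotBlockRequest;
import com.amazonaws.services.ebs.model.StartSnapshotRequest;
import com.amazonaws.services.ebs.model.StartSnapshotResult;

public class SnapshotWriter {

    // Creates a snapshot from 512 KB blocks read from a local volume.
    static String createSnapshot(AmazonEBS amazonEBS, long volumeSizeGiB,
                                 List<Integer> blockIndexes) throws Exception {

        StartSnapshotResult started = amazonEBS.startSnapshot(new StartSnapshotRequest()
                .withVolumeSize(volumeSizeGiB)
                .withDescription("Snapshot of my on-premises volume"));
        String snapshotId = started.getSnapshotId();

        int blocksSent = 0;
        for (int blockIndex : blockIndexes) {
            byte[] data = readBlock(blockIndex);

            // EBS direct APIs expect a Base64-encoded SHA256 checksum of each block
            String checksum = Base64.getEncoder()
                    .encodeToString(MessageDigest.getInstance("SHA-256").digest(data));

            amazonEBS.putSnapshotBlock(new PutSnapshotBlockRequest()
                    .withSnapshotId(snapshotId)
                    .withBlockIndex(blockIndex)
                    .withBlockData(new ByteArrayInputStream(data))
                    .withDataLength(data.length)
                    .withChecksum(checksum)
                    .withChecksumAlgorithm("SHA256"));
            blocksSent++;
        }

        amazonEBS.completeSnapshot(new CompleteSnapshotRequest()
                .withSnapshotId(snapshotId)
                .withChangedBlocksCount(blocksSent));
        return snapshotId;
    }

    // Hypothetical helper: read 512 KB from the local volume at the given block index
    static byte[] readBlock(int blockIndex) {
        return new byte[512 * 1024];
    }
}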

As usual, when using this code, you must have an appropriate IAM policy allowing calls to the new APIs. For example:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ebs:StartSnapshot",
            "ebs:PutSnapshotBlock",
            "ebs:CompleteSnapshot"
        ],
        "Resource": "arn:aws:ec2:<Region>::snapshot/*"
    }]
}

You also need to include some KMS-related permissions when creating encrypted snapshots.
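
As a sketch, the additional statement could look like the following. The key ARN here is a placeholder, and the exact set of kms: actions you need depends on how your snapshots are encrypted, so check the technical documentation for the definitive list.

{
    "Effect": "Allow",
    "Action": [
        "kms:DescribeKey",
        "kms:Decrypt",
        "kms:GenerateDataKeyWithoutPlaintext"
    ],
    "Resource": "arn:aws:kms:<Region>:<AccountId>:key/<KeyId>"
}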

In addition to the storage cost for snapshots, there is a charge per API call when you call PutSnapshotBlock.

These new snapshot APIs are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

You can start to use them today.

— seb

AWS IoT SiteWise – Now Generally Available

At AWS re:Invent 2018, we announced AWS IoT SiteWise in preview, a fully managed AWS IoT service that you can use to collect, organize, and analyze data from industrial equipment at scale.

Getting performance metrics from industrial equipment is challenging because data is often locked into proprietary on-premises data stores and typically requires specialized expertise to retrieve and place in a format that is useful for analysis. AWS IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data.

With AWS IoT SiteWise, you can easily monitor equipment across your industrial facilities to identify waste, such as breakdown of equipment and processes, production inefficiencies, and defects in products.

Last year at AWS re:Invent 2019, a number of new features were launched, including SiteWise Monitor. Today, I am excited to announce that AWS IoT SiteWise is now generally available in the US East (N. Virginia), Europe (Ireland), Europe (Frankfurt), and US West (Oregon) Regions. Let’s see how AWS IoT SiteWise works!

AWS IoT SiteWise – Getting Started
You can easily explore AWS IoT SiteWise by creating a demo wind farm with a single click on the AWS IoT SiteWise console. The demo deploys an AWS CloudFormation template to create assets and generate sample data for up to a week under AWS Free Tier.

You can find the SiteWise demo in the upper-right corner of the AWS IoT SiteWise console, and choose Create demo. The demo takes around 3 minutes to create demo models and assets for representing a wind farm.

Once you see the created assets in the console, you can create virtual representations of your industrial operations with AWS IoT SiteWise assets. An asset can represent a device, a piece of equipment, or a process that uploads one or more data streams to the AWS Cloud. For example, a wind turbine can send air temperature, propeller rotation speed, and power output time-series measurements to asset properties in AWS IoT SiteWise.

Also, you can securely collect data from the plant floor (from sensors, equipment, or a local on-premises gateway) and upload it to the AWS Cloud using gateway software called the AWS IoT SiteWise Connector. It runs on common industrial gateway devices running AWS IoT Greengrass, and reads data directly from servers and historians over the OPC Unified Architecture protocol. AWS IoT SiteWise also accepts MQTT data through AWS IoT Core, and direct ingestion through its REST API.

You can learn how to collect data using the AWS IoT SiteWise Connector in Part 1 of the AWS IoT blog series and in the service documentation.

SiteWise Monitor – Creating Managed Web Applications
Once data is stored in AWS IoT SiteWise, you can stream live data in near real-time and query historical data to build downstream IoT applications, but we also provide a no-code alternative with SiteWise Monitor. With it, you can explore your library of assets, and create and share operational dashboards with plant operators for real-time monitoring and visualization of equipment health and output.

In the SiteWise Monitor console, choose Create portal to create a web application that is accessible from a web browser on any web-enabled desktop, tablet, or phone, with sign-in through your corporate credentials via the AWS Single Sign-On (SSO) experience.

Administrators can create one or more web applications to easily share access to asset data with any team in your organization to accelerate insights.

If you click a given portal link and sign in with your AWS SSO credentials, you can visualize and monitor your device, process, and equipment data to quickly identify issues and improve operational efficiency with SiteWise Monitor.

You can create a dashboard in a new project for your team so they can visualize and understand your project data. Choose a visualization type that best displays your data, then rearrange and resize visualizations to create a layout that fits your team’s needs.

The dashboard shows asset data and computed metrics in near real time, or you can compare and analyze historical time series data from multiple assets and different time periods. A new dashboard feature lets you specify thresholds on the charts and have the charts change color when those thresholds are exceeded.

Also, you can learn how to monitor key measurements and metrics of your assets in near-real time using SiteWise Monitor in Part 2 of the AWS IoT blog series.

Furthermore, you can subscribe to AWS IoT SiteWise modeled data via the AWS IoT Core rules engine, enable condition monitoring and send notifications or alerts in near-real time using AWS IoT Events, and enable Business Intelligence (BI) reporting on historical data using Amazon QuickSight. For more detail, please refer to the hands-on guide in Part 3 of the AWS IoT blog series.

Now Available!
With AWS IoT SiteWise, you only pay for what you use with no minimum fees or mandatory service usage. You are billed separately for usage of messaging, data storage, data processing, and SiteWise Monitor. This approach provides you with billing transparency because you only pay for the specific AWS IoT SiteWise resources you use. Please visit the pricing page to learn more and estimate your monthly bill using the AWS IoT SiteWise Calculator.

You can watch interesting talks about business cases and solutions in ‘Driving Overall Equipment Effectiveness (OEE) Across Your Industrial Facilities’ and ‘Building an End-to-End Industrial IoT (IIoT) Solution with AWS IoT’. To learn more, please visit the AWS IoT SiteWise website, the tutorials, and the developer guide.

Explore AWS IoT SiteWise with Bill Vass and Cherie Wong!

Please send us feedback either in the forum for AWS IoT SiteWise or through your usual AWS support contacts.

Channy;

AWS Well-Architected Framework – Updated White Papers, Tools, and Best Practices

We want to make sure that you are designing and building AWS-powered applications in the best possible way. Back in 2015 we launched AWS Well-Architected to make sure that you have all of the information that you need to do this right. The framework is built on five pillars:

Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Cost Optimization – The ability to run systems to deliver business value at the lowest price point.

Whether you are a startup, a unicorn, or an enterprise, the AWS Well-Architected Framework will point you in the right direction and then guide you along the way as you build your cloud applications.

Lots of Updates
Today we are making a host of updates to the Well-Architected Framework! Here’s an overview:

Well-Architected Framework – This update includes new and updated questions, best practices, and improvement plans, plus additional examples and architectural considerations. We have added new best practices in operational excellence (organization), reliability (workload architecture), and cost optimization (practice Cloud Financial Management). We are also making the framework available in eight additional languages (Spanish, French, German, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese). Read the Well-Architected Framework (PDF, Kindle) to learn more.

Pillar White Papers & Labs – We have updated the white papers that define each of the five pillars with additional content, including new & updated questions, real-world examples, additional cross-references, and a focus on actionable best practices. We also updated the labs that accompany each pillar.

Well-Architected Tool – We have updated the AWS Well-Architected Tool to reflect the updates that we made to the Framework and to the White Papers.

Learning More
In addition to the documents that I linked above, you should also watch these videos.

In this video, AWS customer Cox Automotive talks about how they are using AWS Well-Architected to deliver results across over 200 platforms:

In this video, my colleague Rodney Lester tells you how to build better workloads with the Well-Architected Framework and Tool:

Get Started Today
If you are like me, a lot of interesting services and ideas are stashed away in a pile of things that I hope to get to “someday.” Given the importance of the five pillars that I mentioned above, I’d suggest that Well-Architected does not belong in that pile, and that you should do all that you can to learn more and to become well-architected as soon as possible!

Jeff;

New – Label Videos with Amazon SageMaker Ground Truth

Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth is a capability of Amazon SageMaker that makes it easy to annotate machine learning datasets. Customers can efficiently and accurately label image, text, and 3D point cloud data with built-in workflows, or any other type of data with custom workflows. Data samples are automatically distributed to a workforce (private, third-party, or MTurk), and annotations are stored in Amazon Simple Storage Service (S3). Optionally, automated data labeling may also be enabled, reducing both the amount of time required to label the dataset and the associated costs.

As models become more sophisticated, AWS customers are increasingly applying machine learning prediction to video content. Autonomous driving is perhaps the most well-known use case, as safety demands that road conditions and moving objects be correctly detected and tracked in real time. Video prediction is also a popular application in sports, tracking players or racing vehicles to compute all kinds of statistics that fans are so fond of. Healthcare organizations also use video prediction to identify and track anatomical objects in medical videos. Manufacturing companies do the same to track objects on the assembly line, parcels for logistics, and more. The list goes on, and amazing applications keep popping up in many different industries.

Of course, this requires building and labeling video datasets, where objects of interest need to be labeled manually. At 30 frames per second, one minute of video translates to 1,800 individual images, so the amount of work can quickly become overwhelming. In addition, specific tools have to be built to label images, manage workflows, and so on. All this work takes valuable time and resources away from an organization’s core business.

AWS customers have asked us for a better solution, and today I’m very happy to announce that Amazon SageMaker Ground Truth now supports video labeling.

Customer use case: the National Football League
The National Football League (NFL) has already put this new feature to work. Says Jennifer Langton, SVP of Player Health and Innovation, NFL: “At the National Football League (NFL), we continue to look for new ways to use machine learning (ML) to help our fans, broadcasters, coaches, and teams benefit from deeper insights. Building these capabilities requires large amounts of accurately labeled training data. Amazon SageMaker Ground Truth was truly a force multiplier in accelerating our project timelines. We leveraged the new video object tracking workflow in addition to other existing computer vision (CV) labeling workflows to develop labels for training a computer vision system that tracks all 22 players as they move on the field during plays. Amazon SageMaker Ground Truth reduced the timeline for developing a high quality labeling dataset by more than 80%”.

Courtesy of the NFL, here are a couple of predicted frames, showing helmet detection in a Seattle Seahawks video. This particular video has 353 frames. This first picture is frame #100.

Object tracking

This second picture is frame #110.

Object tracking

Introducing Video Labeling
With the addition of video task types, customers can now use Amazon SageMaker Ground Truth for:

  • Video clip classification
  • Video multi-frame object detection
  • Video multi-frame object tracking

The multi-frame task types support multiple labels, so that you may label different object classes present in the video frames. You can create labeling jobs to annotate frames from scratch, as well as adjustment jobs to review and fine-tune frames that have already been labeled. These jobs may be distributed either to a private workforce, or to a vendor workforce you pick on AWS Marketplace.

Using the built-in GUI, workers can then easily label and track objects across frames. Once they’ve annotated a frame, they can use an assistive labeling feature to predict the location of bounding boxes in the next frame, as you will see in the demo below. This significantly simplifies labeling work, saves time, and improves the quality of annotations. Last but not least, work is saved automatically.

Preparing Input Data for Video Object Detection and Tracking
As you would expect, input data must be located in S3. You may bring either video files, or sequences of video frames.

The first option is the simplest, as Amazon SageMaker Ground Truth includes a tool that automatically extracts frames from your video files. Optionally, you can sample frames (1 in ‘n’), in order to reduce the amount of labeling work. The extraction tool also builds a manifest file describing sequences and frames. You can learn more about it in the documentation.

The second option requires two steps: extracting frames, and building the manifest file. Extracting frames can easily be performed with the popular ffmpeg open source tool. Here’s how you could convert the first 60 seconds of a video to a frame sequence.

$ ffmpeg -ss 00:00:00.00 -t 00:01:00.00 -i basketball.mp4 frame%04d.jpg

Each frame sequence should be uploaded to S3 under a different prefix, for example s3://my-bucket/my-videos/sequence1, s3://my-bucket/my-videos/sequence2, and so on, as explained in the documentation.

Once you have uploaded your frame sequences, you may then either bring your own JSON files to describe them, or let Ground Truth crawl your sequences and build the JSON files and the manifest file for you automatically. Please note that a video sequence cannot be longer than 2,000 frames, which corresponds to about a minute of video at 30 frames per second.

Each sequence should be described by a simple sequence file:

  • A sequence number, an S3 prefix, and a number of frames.
  • A list of frames: number, file name, and creation timestamp.

Here’s an example of a sequence file.

{"version": "2020-06-01",
"seq-no": 1, "prefix": "s3://jsimon-smgt/videos/basketball", "number-of-frames": 1800, 
	"frames": [
		{"frame-no": 1, "frame": "frame0001.jpg", "unix-timestamp": 1594111541.71155},
		{"frame-no": 2, "frame": "frame0002.jpg", "unix-timestamp": 1594111541.711552},
		{"frame-no": 3, "frame": "frame0003.jpg", "unix-timestamp": 1594111541.711553},
		{"frame-no": 4, "frame": "frame0004.jpg", "unix-timestamp": 1594111541.711555},
. . .
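
If you decide to build the sequence files yourself, a small program can generate one from a directory of extracted frames. Here is a minimal sketch in Java, assuming frames named frame0001.jpg, frame0002.jpg, and so on; the S3 prefix and the seq-no are placeholders, and the timestamps are simply derived from the files’ modification times.

import java.io.File;
import java.util.Arrays;

public class SequenceFileGenerator {

    // Prints a Ground Truth sequence file for the .jpg frames found in a local directory.
    public static void main(String[] args) {
        File frameDir = new File(args.length > 0 ? args[0] : "frames");
        File[] frames = frameDir.listFiles((dir, name) -> name.endsWith(".jpg"));
        if (frames == null || frames.length == 0) {
            System.err.println("No frames found in " + frameDir.getAbsolutePath());
            return;
        }
        Arrays.sort(frames);

        StringBuilder json = new StringBuilder();
        json.append("{\"version\": \"2020-06-01\", \"seq-no\": 1, ")
            .append("\"prefix\": \"s3://my-bucket/my-videos/sequence1/\", ")   // placeholder prefix
            .append("\"number-of-frames\": ").append(frames.length)
            .append(", \"frames\": [");

        for (int i = 0; i < frames.length; i++) {
            json.append(String.format(
                "{\"frame-no\": %d, \"frame\": \"%s\", \"unix-timestamp\": %d}",
                i + 1, frames[i].getName(), frames[i].lastModified() / 1000));
            if (i < frames.length - 1) {
                json.append(", ");
            }
        }
        json.append("]}");

        // Upload this output as the sequence file referenced by your manifest
        System.out.println(json);
    }
}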

Finally, the manifest file should point at the sequence files you’d like to include in the labeling job. Here’s an example.

{"source-ref": "s3://jsimon-smgt/videos/seq1.json"}
{"source-ref": "s3://jsimon-smgt/videos/seq2.json"}
. . .

Just like for other task types, the augmented manifest is available in S3 once labeling is complete. It contains annotations and labels, which you can then feed to your machine learning training job.

Labeling Videos with Amazon SageMaker Ground Truth
Here’s a sample video where I label the first ten frames of a sequence. You can see a screenshot below.

I first use the Ground Truth GUI to carefully label the first frame, drawing bounding boxes for basketballs and basketball players. Then, I use the “Predict next” assistive labeling tool to predict the location of the boxes in the next nine frames, applying only minor adjustments to some boxes. Although this was my first try, I found the process easy and intuitive. With a little practice, I could certainly go much faster!

Getting Started
Now, it’s your turn. You can start labeling videos with Amazon SageMaker Ground Truth today in the following regions:

  • US East (N. Virginia), US East (Ohio), US West (Oregon),
  • Canada (Central),
  • Europe (Ireland), Europe (London), Europe (Frankfurt),
  • Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo).

We’re looking forward to reading your feedback. You can send it through your usual support contacts, or in the AWS Forum for Amazon SageMaker.

– Julien

New – Create Amazon RDS DB Instances on AWS Outposts

Late last year I told you about AWS Outposts and invited you to Order Yours Today. As I told you at the time, this is a comprehensive, single-vendor compute and storage offering that is designed to meet the needs of customers who need local processing and very low latency in their data centers and on factory floors. Outposts uses the same hardware that we use in AWS public Regions.

I first told you about Amazon RDS back in 2009. This fully managed service makes it easy for you to launch, operate, and scale a relational database. Over the years we have added support for multiple open source and commercial databases, along with tons of features, all driven by customer requests.

DB Instances on AWS Outposts
Today I am happy to announce that you can now create RDS DB Instances on AWS Outposts. We are launching with support for MySQL and PostgreSQL, with plans to add other database engines in the future (as always, let us know what you need so that we can prioritize it).

You can make use of important RDS features including scheduled backups to Amazon Simple Storage Service (S3), built-in encryption at rest and in transit, and more.

Creating a DB Instance
I can create a DB Instance using the RDS Console, API (CreateDBInstance), CLI (create-db-instance), or CloudFormation (AWS::RDS::DBInstance).
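
If you prefer the API, here is a minimal sketch using the AWS SDK for Java. The identifiers, credentials, and subnet group name are placeholders; the key point is that the DB subnet group must contain the subnet that lives on the Outpost.

import com.amazonaws.services.rds.AmazonRDS;
import com.amazonaws.services.rds.AmazonRDSClientBuilder;
import com.amazonaws.services.rds.model.CreateDBInstanceRequest;
import com.amazonaws.services.rds.model.DBInstance;

public class CreateOutpostDatabase {
    public static void main(String[] args) {
        // Use the AWS Region that serves as "home base" for the Outpost
        AmazonRDS rds = AmazonRDSClientBuilder.standard()
                .withRegion("us-east-1")
                .build();

        DBInstance instance = rds.createDBInstance(new CreateDBInstanceRequest()
                .withDBInstanceIdentifier("jb-database-2")
                .withEngine("mysql")
                .withEngineVersion("8.0.17")
                .withDBInstanceClass("db.m5.large")
                .withAllocatedStorage(100)                        // GiB of SSD storage on the Outpost
                .withMasterUsername("admin")
                .withMasterUserPassword("change-me")              // placeholder credentials
                .withDBSubnetGroupName("my-outpost-subnet-group") // must contain the Outpost subnet
                .withStorageEncrypted(true));

        System.out.println("Creating " + instance.getDBInstanceIdentifier()
                + ", status: " + instance.getDBInstanceStatus());
    }
}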

I’ll use the Console, taking care to select the AWS Region that serves as “home base” for my Outpost. I open the Console and click Create database to get started:

I select On-premises for the Database location, and RDS on Outposts for the On-premises database option:

Next, I choose the Virtual Private Cloud (VPC). The VPC must already exist, and it must have a subnet for my Outpost. I also choose the Security Group and the Subnet:

Moving forward, I select the database engine and version. We’re launching with support for MySQL 8.0.17 and PostgreSQL 12.2-R1, with plans to add more engines and versions based on your feedback:

I give my DB Instance a name (jb-database-2), and enter the credentials for the master user:

Then I choose the size of the instance. I can select between Standard classes (db.m5):

and Memory Optimized classes (db.r5):

Next, I configure the desired amount of SSD storage:

One thing to keep in mind is that each Outpost has a large, but finite amount of compute power and storage. If there’s not enough of either one free when I attempt to create the database, the request will fail.

Within the Additional configuration section I can set up several database options, customize my backups, and set up the maintenance window. Once everything is ready to go, I click Create database:

As usual when I use RDS, the state of my instance starts out as Creating and transitions to Available when my DB Instance is ready:

After the DB instance is ready, I simply configure my code (running in my VPC or in my Outpost) to use the new endpoint:

Things to Know
Here are a couple of things to keep in mind about this new way to use Amazon RDS:

Operations & Functions – Much of what you already know about RDS works as expected and is applicable. You can rename, reboot, stop, start, tag DB instances, and you can make use of point-in-time recovery; you can scale the instance up and down, and automatic minor version upgrades work as expected. You cannot make use of read replicas or create highly available clusters.

Backup & Recover – Automated backups work as expected, and are stored in S3. You can use them to create a fresh DB Instance in the cloud or in any of your Outposts. Manual snapshots also work, and are stored on the Outpost. They can be used to create a fresh DB Instance on the same Outpost.

Encryption – The storage associated with your DB instance is encrypted, as are your DB snapshots, both with KMS keys.

Pricing – RDS on Outposts pricing is based on a management fee that is charged on an hourly basis for each database that is managed. For more information, check out the RDS on Outposts pricing page.

Available Now
You can start creating RDS DB Instances on your Outposts today.

Jeff;

 

Announcing the Porting Assistant for .NET

.NET Core is the future of .NET! Version 4.8 of the .NET Framework is the last major version to be released, and Microsoft has stated it will receive only bug-, reliability-, and security-related fixes going forward. For applications where you want to continue to take advantage of future investments and innovations in the .NET platform, you need to consider porting your applications to .NET Core. There are also additional reasons to consider porting to .NET Core, such as benefiting from innovation in Linux and open source, improved application scaling and performance, and reduced licensing spend. Porting can, however, entail significant manual effort, some of which is undifferentiated, such as updating references to project dependencies.

When porting .NET Framework applications, developers need to search for compatible NuGet packages and update those package references in the application’s project files, which also need to be updated to the .NET Core project file format. Additionally, they need to discover replacement APIs since .NET Core contains a subset of the APIs available in the .NET Framework. As porting progresses, developers have to wade through long lists of compile errors and warnings to determine the best or highest priority places to continue chipping away at the task. Needless to say, this is challenging, and the added friction could be a deterrent for customers with large portfolios of applications.

Today we announced the Porting Assistant for .NET, a new tool that helps customers analyze and port their .NET Framework applications to .NET Core running on Linux. The Porting Assistant for .NET assesses both the application source code and the full tree of public API and NuGet package dependencies to identify those incompatible with .NET Core and guides developers to compatible replacements when available. The suggestion engine for API and package replacements is designed to improve over time as the assistant learns more about the usage patterns and frequency of missing packages and APIs.

The Porting Assistant for .NET differs from other tools in that it is able to assess the full tree of package dependencies, not just incompatible APIs. It also uses solution files as the starting point, which makes it easier to assess monolithic solutions containing large numbers of projects, instead of having to analyze and aggregate information on individual binaries. These and other abilities give developers a jump start in the porting process.

Analyzing and porting an application
Getting started with porting applications using the Porting Assistant for .NET is easy, with just a couple of prerequisites. First, I need to install the .NET Core 3.1 SDK. Second, I need a credential profile (compatible with the AWS Command Line Interface (CLI), although the CLI is not used or required). The credential profile is used to collect compatibility information on the public APIs and packages (from NuGet and core Microsoft packages) used in your application and the public NuGet packages that it references. With those prerequisites taken care of, I download and run the installer for the assistant.

With the assistant installed, I check out my application source code and launch the Porting Assistant for .NET from the Start menu. If I’ve previously assessed some solutions, I can view and open those from the Assessed solutions screen, enabling me to pick up where I left off. Or I can select Get started, as I’ll do here, from the home page to begin assessing my application’s solution file.

I’m asked to select the credential profile I want to use, and here I can also elect to opt-in to share my telemetry data. Sharing this data helps to further improve on suggestion accuracy for all users as time goes on, and is useful in identifying issues, so we hope you consider opting-in.

I click Next, browse to select the solution file that I want, and then click Assess to begin the analysis. For this post I’m going to use the open source NopCommerce project.

When analysis is complete I am shown the overall results – the number of incompatible packages the application depends on, APIs it uses that are incompatible, and an overall Portability score. This score is an estimation of the effort required to port the application to .NET Core, based on the number of incompatible APIs it uses. If I’m working on porting multiple applications I can use this to identify and prioritize the applications I want to start on first.

Let’s dig into the assessment overview to see what was discovered. Clicking on the solution name takes me to a more detailed dashboard and here I can see the projects that make up the application in the solution file, and for each the numbers of incompatible package and API dependencies, along with the portability score for each particular project. The current port status of each project is also listed, if I’ve already begun porting the application and have reopened the assessment.

Note that with no project selected in the Projects tab the data shown in the Project references, NuGet packages, APIs, and Source files tabs is solution-wide, but I can scope the data if I wish by first selecting a project.

The Project references tab shows me a graphical view of the package dependencies and I can see where the majority of the dependencies are consumed, in this case the Nop.Core, Nop.Services, and Nop.Web.Framework projects. This view can help me decide where I might want to start first, so as to get the most ‘bang for my buck’ when I begin. I can also select projects to see the specific dependencies more clearly.

The NuGet packages tab gives me a look at the compatible and incompatible dependencies, and suggested replacements if available. The APIs tab lists the incompatible APIs, what package they are in, and how many times they are referenced. Source files lists all of the source files making up the projects in the application, with an indication of how many incompatible API calls can be found in each file. Selecting a source file opens a view showing me where the incompatible APIs are being used and suggested package versions to upgrade to, if they exist, to resolve the issue. If there is no suggested replacement by simply updating to a different package version then I need to crack open a source editor and update the code to use a different API or approach. Here I’m looking at the report for DependencyRegistrar.cs, that exists in the Nop.Web project, and uses the Autofac NuGet package.

Let’s start porting the application, starting with the Nop.Core project. First, I navigate back to the Projects tab, select the project, and then click Port project. During porting the tool will help me update project references to NuGet packages, and also updates the project files themselves to the newer .NET Core formats. I have the option of either making a copy of the application’s solution file, project files, and source files, or I can have the changes made in-place. Here I’ve elected to make a copy.

Clicking Save copies the application source code to the selected location, and opens the Port projects view where I can set the new target framework version (in this case netcoreapp3.1), and the list of NuGet dependencies for the project that I need to upgrade. For each incompatible package the Porting Assistant for .NET gives me a list of possible version upgrades and for each version, I am shown the number of incompatible APIs that will either remain, or will additionally become incompatible. For the package I selected here there’s no difference but for cases where later versions potentially increase the number of incompatible APIs that I would then need to manually fix up in the source code, this indication helps me to make a trade-off decision around whether to upgrade to the latest versions of a package, or stay with an older one.

Once I select a version, the Deprecated API calls field alongside the package will give me a reminder of what I need to fix up in a code editor. Clicking on the value summarizes the deprecated calls.

I continue with this process for each package dependency and when I’m ready, click Port to have the references updated. Using my IDE, I can then go into the source files and work on replacing the incompatible API calls using the Porting Assistant for .NET‘s source file and deprecated API list views as a reference, and follow a similar process for the other projects in my application.

Improving the suggestion engine
The suggestion engine behind the Porting Assistant for .NET is designed to learn and give improved results over time, as customers opt-in to sharing their telemetry. The data models behind the engine, which are the result of analyzing hundreds of thousands of unique packages with millions of package versions, are available on GitHub. We hope that you’ll consider helping improve accuracy and completeness of the results by contributing your data. The user guide gives more details on how the data is used.

The Porting Assistant for .NET is free to use and is available now.

— Steve

AWS App2Container – A New Containerizing Tool for Java and ASP.NET Applications

Our customers are increasingly developing their new applications with containers and serverless technologies, and are using modern continuous integration and delivery (CI/CD) tools to automate the software delivery life cycle. They also maintain a large number of existing applications that are built and managed manually or using legacy systems. Maintaining these two sets of applications with disparate tooling adds to operational overhead and slows down the pace of delivering new business capabilities. As much as possible, they want to be able to standardize their management tooling and CI/CD processes across both their existing and new applications, and see the option of packaging their existing applications into containers as the first step towards accomplishing that goal.

However, containerizing existing applications requires a long list of manual tasks such as identifying application dependencies, writing Dockerfiles, and setting up build and deployment processes for each application. These manual tasks are time-consuming, error-prone, and can slow down the modernization effort.

Today, we are launching AWS App2Container, a new command-line tool that helps containerize existing applications that are running on-premises, in Amazon Elastic Compute Cloud (EC2), or in other clouds, without needing any code changes. App2Container discovers applications running on a server, identifies their dependencies, and generates relevant artifacts for seamless deployment to Amazon ECS and Amazon EKS. It also provides integration with AWS CodeBuild and AWS CodeDeploy to enable a repeatable way to build and deploy containerized applications.

AWS App2Container generates the following artifacts for each application component: application artifacts such as application files/folders, Dockerfiles, container images in Amazon Elastic Container Registry (ECR), ECS task definitions, Kubernetes deployment YAML, CloudFormation templates to deploy the application to Amazon ECS or EKS, and templates to set up a build/release pipeline in AWS CodePipeline, which also leverages AWS CodeBuild and CodeDeploy.

Starting today, you can use App2Container to containerize ASP.NET (.NET 3.5+) web applications running in IIS 7.5+ on Windows, and Java applications running on Linux—standalone JBoss, Apache Tomcat, and generic Java applications such as Spring Boot, IBM WebSphere, Oracle WebLogic, etc.

By modernizing existing applications using containers, you can make them portable, increase development agility, standardize your CI/CD processes, and reduce operational costs. Now let’s see how it works!

AWS App2Container – Getting Started
AWS App2Container requires that the following prerequisites be installed on the server(s) hosting your application: AWS Command Line Interface (CLI) version 1.14 or later, Docker tools, and (in the case of ASP.NET) Powershell 5.0+ for applications running on Windows. Additionally, you need to provide appropriate IAM permissions to App2Container to interact with AWS services.

For example, let’s look at how you containerize your existing Java applications. The App2Container CLI for Linux is packaged as a tar.gz archive. The archive provides an interactive shell script, install.sh, to install the App2Container CLI. Running the script guides users through the install steps and also updates the user’s path to include the App2Container CLI commands.

First, you can begin by running a one-time initialization on the installed server for the App2Container CLI with the init command.

$ sudo app2container init
Workspace directory path for artifacts[default:  /home/ubuntu/app2container/ws]:
AWS Profile (configured using 'aws configure --profile')[default: default]:  
Optional S3 bucket for application artifacts (Optional)[default: none]: 
Report usage metrics to AWS? (Y/N)[default: y]:
Require images to be signed using Docker Content Trust (DCT)? (Y/N)[default: n]:
Configuration saved

This sets up a workspace to store application containerization artifacts (a minimum of 20 GB of disk space should be available). You can also extract the artifacts into an Amazon Simple Storage Service (S3) bucket, using the AWS profile you configured to access AWS services.

Next, you can view Java processes that are running on the application server by using the inventory command. Each Java application process has a unique identifier (for example, java-tomcat-9e8e4799) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

$ sudo app2container inventory
{
    "java-jboss-5bbe0bec": {
        "processId": 27366,
        "cmdline": "java ... /home/ubuntu/wildfly-10.1.0.Final/modules org.jboss.as.standalone -Djboss.home.dir=/home/ubuntu/wildfly-10.1.0.Final -Djboss.server.base.dir=/home/ubuntu/wildfly-10.1.0.Final/standalone ",
        "applicationType": "java-jboss"
    },
    "java-tomcat-9e8e4799": {
        "processId": 2537,
        "cmdline": "/usr/bin/java ... -Dcatalina.home=/home/ubuntu/tomee/apache-tomee-plume-7.1.1 -Djava.io.tmpdir=/home/ubuntu/tomee/apache-tomee-plume-7.1.1/temp org.apache.catalina.startup.Bootstrap start ",
        "applicationType": "java-tomcat"
    }
}

You can also run App2Container for ASP.NET applications in an administrator-run PowerShell session on Windows Servers with IIS version 7.0 or later. Note that Docker tools and container support are available on Windows Server 2016 and later versions. You can choose to run all app2container operations on the application server with Docker tools installed, or use a worker machine with Docker tools, such as one based on the Amazon ECS-optimized Windows Server AMIs.

PS> app2container inventory
{
    "iis-smarts-51d2dbf8": {
        "siteName": "nopCommerce39",
        "bindings": "http/*:90:",
        "applicationType": "iis"
    }
}

The inventory command displays all IIS websites on the application server that can be containerized. Each IIS website process has a unique identifier (for example, iis-smarts-51d2dbf8) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

You can choose a specific application by referring to its application ID and generate an analysis report for the application by using the analyze command.

$ sudo app2container analyze --application-id java-tomcat-9e8e4799
Created artifacts folder /home/ubuntu/app2container/ws/java-tomcat-9e8e4799
Generated analysis data in /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/analysis.json
Analysis successful for application java-tomcat-9e8e4799
Please examine the same, make appropriate edits and initiate containerization using "app2container containerize --application-id java-tomcat-9e8e4799"

You can use the analysis.json template generated by the application analysis to gather information on the analyzed application: the analysisInfo section helps identify all system dependencies, and the containerParameters section lets you update containerization parameters to customize the container images generated for the application.

$ cat java-tomcat-9e8e4799/analysis.json
{
    "a2CTemplateVersion": "1.0",
	"createdTime": "2020-06-24 07:40:5424",
    "containerParameters": {
        "_comment1": "*** EDITABLE: The below section can be edited according to the application requirements. Please see the analyisInfo section below for deetails discoverd regarding the application. ***",
        "imageRepository": "java-tomcat-9e8e4799",
        "imageTag": "latest",
        "containerBaseImage": "ubuntu:18.04",
        "coopProcesses": [ 6446, 6549, 6646]
    },
    "analysisInfo": {
        "_comment2": "*** NON-EDITABLE: Analysis Results ***",
        "processId": 2537
        "appId": "java-tomcat-9e8e4799",
		"userId": "1000",
        "groupId": "1000",
        "cmdline": [...],
        "os": {...},
        "ports": [...]
    }
}

Also, you can run the $ app2container extract --application-id java-tomcat-9e8e4799 command to generate an application archive for the analyzed application. This depends on the analysis.json file generated earlier in the workspace folder for the application, and adheres to any containerization parameter updates specified in there. By using the extract command, you can continue the workflow on a worker machine after running the first set of commands on the application server.

Now you can run the containerize command to generate Docker images for the selected application.

$ sudo app2container containerize --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Extracted container artifacts for application
Entry file generated
Dockerfile generated under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated dockerfile.update under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated deployment file at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json
Containerization successful. Generated docker image java-tomcat-9e8e4799
You're all set to test and deploy your container image.

Next Steps:
1. View the container image with "docker images" and test the application.
2. When you're ready to deploy to AWS, please edit the deployment file as needed at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json.
3. Generate deployment artifacts using app2container generate app-deployment --application-id java-tomcat-9e8e4799

After running this command, you can view the generated container images using the docker images command on the machine where the containerize command was run. You can use the docker run command to launch the container and test application functionality.

Note that in addition to generating container images, the containerize command also generates a deployment.json template file that you can use with the next command, generate app-deployment. You can edit the parameters in the deployment.json template file to change the image repository name to be registered in Amazon ECR, the ECS task definition parameters, or the Kubernetes App name.

$ cat java-tomcat-9e8e4799/deployment.json
{
       "a2CTemplateVersion": "1.0",
       "applicationId": "java-tomcat-9e8e4799",
       "imageName": "java-tomcat-9e8e4799",
       "exposedPorts": [
              {
                     "localPort": 8090,
                     "protocol": "tcp6"
              }
       ],
       "environment": [],
       "ecrParameters": {
              "ecrRepoTag": "latest"
       },
       "ecsParameters": {
              "createEcsArtifacts": true,
              "ecsFamily": "java-tomcat-9e8e4799",
              "cpu": 2,
              "memory": 4096,
              "dockerSecurityOption": "",
              "enableCloudwatchLogging": false,
              "publicApp": true,
              "stackName": "a2c-java-tomcat-9e8e4799-ECS",
              "reuseResources": {
                     "vpcId": "",
                     "cfnStackName": "",
                     "sshKeyPairName": ""
              },
              "gMSAParameters": {
                     "domainSecretsArn": "",
                     "domainDNSName": "",
                     "domainNetBIOSName": "",
                     "createGMSA": false,
                     "gMSAName": ""
              }
       },
       "eksParameters": {
              "createEksArtifacts": false,
              "applicationName": "",
              "stackName": "a2c-java-tomcat-9e8e4799-EKS",
              "reuseResources": {
                     "vpcId": "",
                     "cfnStackName": "",
                     "sshKeyPairName": ""
              }
       }
 }

At this point, the application workspace where the artifacts are generated serves as an iteration sandbox. You can choose to edit the Dockerfile generated here to make changes to your application, and use the docker build command to build new container images as needed. You can generate the artifacts needed to deploy the application containers in Amazon EKS by using the generate app-deployment command.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
EKS Cloudformation templates and additional deployment artifacts generated successfully for application java-tomcat-9e8e4799

You're all set to use AWS Cloudformation to manage your application stack.
Next Steps:
1. Edit the cloudformation template as necessary.
2. Create an application stack using the AWS CLI or the AWS Console. AWS CLI command:

       aws cloudformation deploy --template-file /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml --capabilities CAPABILITY_NAMED_IAM --stack-name java-tomcat-9e8e4799

3. Setup a pipeline for your application stack:

       app2container generate pipeline --application-id java-tomcat-9e8e4799

This command works from the deployment.json template file produced when you run the containerize command. App2Container now also generates the ECS/EKS CloudFormation templates and offers an option to deploy those stacks.

The command registers the container image in the user-specified ECR repository and generates CloudFormation templates for Amazon ECS and EKS deployments. You can register the ECS task definition with Amazon ECS, and use kubectl to launch the containerized application on an existing Amazon EKS or self-managed Kubernetes cluster using the App2Container-generated amazon-eks-master.template.deployment.yaml.

Alternatively, you can deploy the containerized application directly to Amazon EKS by adding the --deploy option.

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799 --deploy
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
Initiated Cloudformation stack creation. This may take a few minutes. Please visit the AWS Cloudformation Console to track progress.
Deploying application to EKS

Handling ASP.NET Applications with Windows Authentication
Containerizing ASP.NET applications is almost the same process as for Java applications, but Windows containers cannot be directly domain-joined. They can, however, still use Active Directory (AD) domain identities to support various authentication scenarios.

App2Container detects if a site is using Windows authentication, accordingly makes the IIS site’s application pool run as the network service identity, and generates new CloudFormation templates for Windows-authenticated IIS applications. Creating the gMSA and AD security group, domain-joining the ECS nodes, and making the containers use this gMSA are all taken care of by those templates.

It also provides two PowerShell scripts as output of the $ app2container containerize command, along with an instruction file on how to use them.

The following is an example output:

PS C:\Windows\system32> app2container containerize --application-id iis-SmartStoreNET-a726ba0b
Running AWS pre-requisite check...
Running Docker pre-requisite check...
Container build complete. Please use "docker images" to view the generated container images.
Detected that the Site is using Windows Authentication.
Generating powershell scripts into C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts required to setup Container host with Windows Authentication
Please look at C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts\WindowsAuthSetupInstructions.md for setup instructions on Windows Authentication.
A deployment file has been generated under C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b
Please edit the same as needed and generate deployment artifacts using "app2container generate-deployment"

The first PowerShell script, DomainJoinAddToSecGroup.ps1, joins the container host to the domain and adds it to an Active Directory security group. The second script, CreateCredSpecFile.ps1, creates a Group Managed Service Account (gMSA), grants access to the Active Directory security group, generates the credential spec for this gMSA, and stores it locally on the container host. You can execute these PowerShell scripts on the ECS host. The following is an example usage of the scripts:

PS C:\Windows\system32> .\DomainJoinAddToSecGroup.ps1 -ADDomainName Dominion.com -ADDNSIp 10.0.0.1 -ADSecurityGroup myIISContainerHosts -CreateADSecurityGroup:$true
PS C:\Windows\system32> .\CreateCredSpecFile.ps1 -GMSAName MyGMSAForIIS -CreateGMSA:$true -ADSecurityGroup myIISContainerHosts

Before executing the app2container generate app-deployment command, edit the deployment.json file to change the value of dockerSecurityOption to the name of the CredentialSpec file that the CreateCredSpecFile script generated. For example:
"dockerSecurityOption": "credentialspec:file://dominion_mygmsaforiis.json"

Effectively, any access to a network resource made by the IIS server inside the container for the site will now use the above gMSA to authenticate. The final step is to authorize this gMSA account on the network resources that the IIS server will access. A common example is authorizing this gMSA inside the SQL Server.

Finally, if the application must connect to a database to be fully functional and you run the container in Amazon ECS, ensure that the application container created from the Docker image generated by the tool has connectivity to the same database. You can refer to this documentation for options on migrating: MS SQL Server from Windows to Linux on AWS, Database Migration Service, and backup and restore your MS SQL Server to Amazon RDS.

Now Available
AWS App2Container is offered at no additional charge. You only pay for the actual usage of AWS services like Amazon EC2, ECS, EKS, and S3, based on your usage. For details, please refer to the App2Container FAQs and documentation. Give it a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for ECS, the AWS Forum for EKS, or on the container roadmap on GitHub.

Channy;

Amazon RDS Proxy – Now Generally Available

At AWS re:Invent 2019, we launched the preview of Amazon RDS Proxy, a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Following the preview for the MySQL engine, we extended it with PostgreSQL compatibility. Today, I am pleased to announce that RDS Proxy is now generally available for both engines.

Many applications, including those built on modern serverless architectures using AWS Lambda, Fargate, Amazon ECS, or EKS, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources.

Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency, application scalability, and security. With RDS Proxy, failover times for Amazon Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).

Amazon RDS Proxy can be enabled for most applications with no code changes. You don’t need to provision or manage any additional infrastructure, and you only pay per vCPU of the database instance for which the proxy is enabled.

Amazon RDS Proxy – Getting started
You can get started with Amazon RDS Proxy in just a few clicks by going to the AWS Management Console and creating an RDS Proxy endpoint for your RDS databases. In the navigation pane, choose Proxies and Create proxy. You can also see the proxy panel below.

To create your proxy, specify the Proxy identifier, a unique name of your choosing, and choose the database engine, either MySQL or PostgreSQL. Choose the encryption setting if you want the proxy to enforce TLS/SSL for all connections between the application and the proxy, and specify the period of time that a client connection can be idle before the proxy closes it.

A client connection is considered idle when the application doesn’t submit a new request within the specified time after the previous request completed. The underlying connection between the proxy and database stays open and is returned to the connection pool. Thus, it’s available to be reused for new client connections.

Next, choose one RDS DB instance or Aurora DB cluster in Database to access through this proxy. The list only includes DB instances and clusters with compatible database engines, engine versions, and other settings.

Specify Connection pool maximum connections, a value between 1 and 100. This setting represents the percentage of the max_connections value that RDS Proxy can use for its connections. If you only intend to use one proxy with this DB instance or cluster, you can set it to 100. For details about how RDS Proxy uses this setting, see Connection Limits and Timeouts.

Choose at least one Secrets Manager secret associated with the RDS DB instance or Aurora DB cluster that you intend to access with this proxy, and select an IAM role that has permission to access the Secrets Manager secrets you chose. If you don’t have an existing secret, please click Create a new secret before setting up the RDS proxy.

After setting the VPC subnets and a security group, click Create proxy. For more detailed settings, please refer to the documentation.

After a few minutes, you can see the new RDS Proxy. Point your application to the RDS Proxy endpoint. That’s it!
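
To illustrate, here is a minimal sketch of pointing a Java application at the proxy endpoint instead of the database endpoint. The database name, the credentials, and the MySQL Connector/J driver on the classpath are assumptions I’m making for this example; only the host name in the JDBC URL is specific to the proxy.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProxyConnectionExample {
    public static void main(String[] args) throws Exception {
        // Only the host name changes: use the RDS Proxy endpoint instead of the DB endpoint.
        // The database name, user, and password below are placeholders.
        String url = "jdbc:mysql://channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com:3306/mydb";
        try (Connection conn = DriverManager.getConnection(url, "admin_user", System.getenv("DB_PASSWORD"));
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT @@aurora_server_id")) {
            while (rs.next()) {
                System.out.println("Connected through the proxy to " + rs.getString(1));
            }
        }
    }
}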

You can also create an RDS Proxy easily with the AWS CLI:

aws rds create-db-proxy \
    --db-proxy-name channy-proxy \
    --role-arn iam_role \
    --engine-family { MYSQL|POSTGRESQL } \
    --vpc-subnet-ids space_separated_list \
    [--vpc-security-group-ids space_separated_list] \
    [--auth ProxyAuthenticationConfig_JSON_string] \
    [--require-tls | --no-require-tls] \
    [--idle-client-timeout value] \
    [--debug-logging | --no-debug-logging] \
    [--tags comma_separated_list]

How RDS Proxy works
Let’s see an example that demonstrates how open connections continue working during a failover when you reboot a database or it becomes unavailable due to a problem. This example uses a proxy named channy-proxy and an Aurora DB cluster with DB instances instance-8898 and instance-9814. When the failover-db-cluster command is run from the Linux command line, the writer instance that the proxy is connected to changes to a different DB instance. You can see that the DB instance associated with the proxy changes while the connection remains open.

$ mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
Enter password:
...
mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814      |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-8898      |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814      |
+--------------------+
1 row in set (0.01 sec)

With RDS Proxy, you can build applications that can transparently tolerate database failures without needing to write complex failure handling code. RDS Proxy automatically routes traffic to a new database instance while preserving application connections.

You can review the demo for an overview of RDS Proxy and the steps you need to take to access RDS Proxy from a Lambda function.

If you want to know how your serverless applications maintain excellent performance even at peak loads, please read this blog post. For a deeper dive into using RDS Proxy for MySQL with serverless, visit this post.

The following are a few things that you should be aware of:

  • Currently, RDS Proxy is available for the MySQL and PostgreSQL engine families. These include RDS for MySQL 5.6 and 5.7, and RDS for PostgreSQL 10.11 and 11.5.
  • In an Aurora cluster, all of the connections in the connection pool are handled by the Aurora primary instance. To perform load balancing for read-intensive workloads, you still use the reader endpoint directly for the Aurora cluster.
  • Your RDS Proxy must be in the same VPC as the database. Although the database can be publicly accessible, the proxy can’t be.
  • Proxies don’t support compressed mode. For example, they don’t support the compression used by the --compress or -C options of the mysql command.

Now Available!
Amazon RDS Proxy is generally available in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) and Asia Pacific (Tokyo) regions for Aurora MySQL, RDS for MySQL, Aurora PostgreSQL, and RDS for PostgreSQL, and it includes support for Aurora Serverless and Aurora Multi-Master.

Take a look at the product page, pricing, and the documentation to learn more. Please send us feedback either in the AWS forum for Amazon RDS or through your usual AWS support contacts.

Channy;

Find Your Most Expensive Lines of Code – Amazon CodeGuru Is Now Generally Available

This post was originally published on this site

Bringing new applications into production, maintaining their code base as they grow and evolve, and responding to operational issues at the same time, is a challenging task. For this reason, you can find many ideas on how to structure your teams, which methodologies to apply, and how to safely automate your software delivery pipeline.

At re:Invent last year, we introduced in preview Amazon CodeGuru, a developer tool powered by machine learning that helps you improve your applications and troubleshoot issues with automated code reviews and performance recommendations based on runtime data. During the last few months, many improvements have been launched, including a more cost-effective pricing model, support for Bitbucket repositories, and the ability to start the profiling agent using a command line switch, so that you no longer need to modify the code of your application, or add dependencies, to run the agent.

You can use CodeGuru in two ways:

  • CodeGuru Reviewer uses program analysis and machine learning to detect potential defects that are difficult for developers to find, and recommends fixes in your Java code. The code can be stored in GitHub (now also in GitHub Enterprise), AWS CodeCommit, or Bitbucket repositories. When you submit a pull request on a repository that is associated with CodeGuru Reviewer, it provides recommendations for how to improve your code. Each pull request corresponds to a code review, and each code review can include multiple recommendations that appear as comments on the pull request.
  • CodeGuru Profiler provides interactive visualizations and recommendations that help you fine-tune your application performance and troubleshoot operational issues using runtime data from your live applications. It currently supports applications written in Java virtual machine (JVM) languages such as Java, Scala, Kotlin, Groovy, Jython, JRuby, and Clojure. CodeGuru Profiler can help you find the most expensive lines of code, in terms of CPU usage or introduced latency, and suggest ways you can improve efficiency and remove bottlenecks. You can use CodeGuru Profiler in production, and when you test your application with a meaningful workload, for example in a pre-production environment.

Today, Amazon CodeGuru is generally available with the addition of many new features.

In CodeGuru Reviewer, we included the following:

  • Support for GitHub Enterprise – You can now scan your pull requests and get recommendations against your source code on GitHub Enterprise on-premises repositories, together with a description of what’s causing the issue and how to remediate it.
  • New types of recommendations to solve defects and improve your code – For example, checking input validation, to avoid issues that can compromise security and performance, and looking for multiple copies of code that do the same thing.

In CodeGuru Profiler, you can find these new capabilities:

  • Anomaly detection – We automatically detect anomalies in the application profile for those methods that represent the highest proportion of CPU time or latency.
  • Lambda function support – You can now profile AWS Lambda functions just like applications hosted on Amazon Elastic Compute Cloud (EC2) and containerized applications running on Amazon ECS and Amazon Elastic Kubernetes Service, including those using AWS Fargate.
  • Cost of issues in the recommendation report – Recommendations contain actionable resolution steps which explain what the problem is, the CPU impact, and how to fix the issue. To help you better prioritize your activities, you now have an estimation of the savings introduced by applying the recommendation.
  • Color-my-code – In the visualizations, to help you easily find your own code, we are coloring your methods differently from frameworks and other libraries you may use.
  • CloudWatch metrics and alerts – To track and monitor the efficiency issues that have been discovered.

Let’s see some of these new features at work!

Using CodeGuru Reviewer with a Lambda Function
I create a new repo in my GitHub account, and leave it empty for now. Locally, where I am developing a Lambda function using the Java 11 runtime, I initialize my Git repo and add only the README.md file to the master branch. In this way, I can add all the code as a pull request later and have it go through a code review by CodeGuru.

git init
git add README.md
git commit -m "First commit"

Now, I add the GitHub repo as origin, and push my changes to the new repo:

git remote add origin https://github.com/<my-user-id>/amazon-codeguru-sample-lambda-function.git
git push -u origin master

I associate the repository in the CodeGuru console:

When the repository is associated, I create a new dev branch, add all my local files to it, and push it remotely:

git checkout -b dev
git add .
git commit -m "Code added to the dev branch"
git push --set-upstream origin dev

In the GitHub console, I open a new pull request by comparing changes across the two branches, master and dev. I verify that the pull request is able to merge, then I create it.

Since the repository is associated with CodeGuru, a code review is listed as Pending in the Code reviews section of the CodeGuru console.

After a few minutes, the code review status is Completed, and CodeGuru Reviewer issues a recommendation on the same GitHub page where the pull request was created.

Oops! I am creating the Amazon DynamoDB service object inside the function invocation method. In this way, it cannot be reused across invocations. This is not efficient.

To improve the performance of my Lambda function, I follow the CodeGuru recommendation, and move the declaration of the DynamoDB service object to a static final attribute of the Java application object, so that it is instantiated only once, during function initialization. Then, I follow the link in the recommendation to learn more best practices for working with Lambda functions.
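
To make the change concrete, here is a minimal sketch of the corrected pattern, assuming the AWS SDK for Java v2. The handler signature and the Tasks table name are illustrative rather than the actual code of my function; the point is that the DynamoDB client is created once, during initialization, and reused on every invocation.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;
import software.amazon.awssdk.services.dynamodb.model.ScanResponse;

public class TaskHandler implements RequestHandler<String, Integer> {

    // Created once during function initialization and reused across invocations,
    // instead of being instantiated inside handleRequest on every call.
    private static final DynamoDbClient DYNAMO_DB = DynamoDbClient.create();

    @Override
    public Integer handleRequest(String input, Context context) {
        ScanResponse response = DYNAMO_DB.scan(ScanRequest.builder()
                .tableName("Tasks") // illustrative table name
                .build());
        return response.count();
    }
}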

Using CodeGuru Profiler with a Lambda Function
In the CodeGuru console, I create a MyServerlessApp-Development profiling group and select the Lambda compute platform.

Next, I give the AWS Identity and Access Management (IAM) role used by my Lambda function permissions to submit data to this profiling group.

Now, the console is giving me all the info I need to profile my Lambda function. To configure the profiling agent, I use a couple of environment variables:

  • AWS_CODEGURU_PROFILER_GROUP_ARN to specify the ARN of the profiling group to use.
  • AWS_CODEGURU_PROFILER_ENABLED to enable (TRUE) or disable (FALSE) profiling.

I follow the instructions (for Maven and Gradle) to add a dependency, and include the profiling agent in the build. Then, I update the code of the Lambda function to wrap the handler function inside the LambdaProfiler provided by the agent.

To generate some load, I start a few scripts invoking my function using the Amazon API Gateway as trigger. After a few minutes, the profiling group starts to show visualizations describing the runtime behavior of my Lambda function.

For example, I can see how much CPU time is spent in the different methods of my function. At the bottom, there are the entry point methods. As I scroll up, I find methods that are called deeper in the stack trace. I right-click and hide the LambdaRuntimeClient methods to focus on my code. Note that my methods are colored differently than those in the packages I am using, such as the AWS SDK for Java.

I am mostly interested in what happens in the handler method invoked by the Lambda platform. I select the handler method, and now it becomes the new “base” of the visualization.

As I move my pointer on each of my methods, I get more information, including an estimation of the yearly cost of running that specific part of the code in production, based on the load experienced by the profiling agent during the selected time window. In my case, the handler function cost is estimated to be $6. If I select the two main functions above, I have an estimation of $3 each. The cost estimation works for code running on Lambda functions, EC2 instances, and containerized applications.

Similarly, I can visualize Latency, to understand how much time is spent inside the methods in my code. I keep the Lambda function handler method selected to drill down into what is under my control, and see where time is being spent the most.

The CodeGuru Profiler is also providing a recommendation based on the data collected. I am spending too much time (more than 4%) in managing encryption. I can use a more efficient crypto provider, such as the open source Amazon Corretto Crypto Provider, described in this blog post. This should lower the time spent to what is expected, about 1% of my profile.
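
As a rough sketch of how that swap can look, assuming the Amazon Corretto Crypto Provider dependency is already on the classpath, the provider is registered once, early in the application lifecycle, before any cryptographic operations run:

import com.amazon.corretto.crypto.provider.AmazonCorrettoCryptoProvider;
import javax.crypto.Cipher;

public class CryptoProviderSetup {
    public static void main(String[] args) throws Exception {
        // Register ACCP as the highest-priority JCA provider and verify it works on this host.
        AmazonCorrettoCryptoProvider.install();
        AmazonCorrettoCryptoProvider.INSTANCE.assertHealthy();

        // Subsequent Cipher, MessageDigest, and SecureRandom lookups now resolve to ACCP.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        System.out.println("AES/GCM provider: " + cipher.getProvider().getName());
    }
}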

Finally, I edit the profiling group to enable notifications. In this way, if CodeGuru detects an anomaly in the profile of my application, I am notified in one or more Amazon Simple Notification Service (SNS) topics.

Available Now
Amazon CodeGuru is available today in 10 regions, and we are working to add more regions in the coming months. For regional availability, please see the AWS Region Table.

CodeGuru helps you improve your application code and reduce compute and infrastructure costs with an automated code reviewer and application profiler that provide intelligent recommendations. Using visualizations based on runtime data, you can quickly find the most expensive lines of code of your applications. With CodeGuru, you pay only for what you use. Pricing is based on the lines of code analyzed by CodeGuru Reviewer, and on sampling hours for CodeGuru Profiler.

To learn more, please see the documentation or check out the video by Jeff!

Danilo

Introducing Amazon Honeycode – Build Web & Mobile Apps Without Writing Code

This post was originally published on this site

VisiCalc was launched in 1979, and I purchased a copy (shown at right) for my Apple II. The spreadsheet model was clean, easy to use, and most of all, easy to teach. I was working in a retail computer store at that time, and knew that this product was a big deal when customers started asking to purchase the software, and for whatever hardware that was needed to run it.

Today’s spreadsheets fill an important gap between mass-produced packaged applications and custom-built code created by teams of dedicated developers. Every tool has its limits, however. Sharing data across multiple users and multiple spreadsheets is difficult, as is dealing with large amounts of data. Integration & automation are also challenging, and require specialized skills. In many cases, those custom-built apps would be a better solution than a spreadsheet, but a lack of developers or other IT resources means that these apps rarely get built.

Introducing Amazon Honeycode
Today we are launching Amazon Honeycode in beta form. This new fully-managed AWS service gives you the power to build powerful mobile & web applications without writing any code. It uses the familiar spreadsheet model and lets you get started in minutes. If you or your teammates are already familiar with spreadsheets and formulas, you’ll be happy to hear that just about everything you know about sheets, tables, values, and formulas still applies.

Amazon Honeycode includes templates for some common applications that you and other members of your team can use right away:

You can customize these apps at any time and the changes will be deployed immediately. You can also start with an empty table, or by importing some existing data in CSV form. The applications that you build with Honeycode can make use of a rich palette of user interface objects including lists, buttons, and input fields:

You can also take advantage of a repertoire of built-in, trigger-driven actions that can generate email notifications and modify tables:

Honeycode also includes a lengthy list of built-in functions. The list includes many functions that will be familiar to users of existing spreadsheets, along with others that are new to Honeycode. For example, FindRow is a more powerful version of the popular Vlookup function.

Getting Started with Honeycode
It is easy to get started. I visit the Honeycode Builder, and create my account:

After logging in I see My Drive, with my workbooks & apps, along with multiple search, filter, & view options:

I can open & explore my existing items, or I can click Create workbook to make something new. I do that, and then select the Simple To-do template:

The workbook, tables, and the apps are created and ready to use right away. I can simply clear the sample data from the tables and share the app with the users, or I can inspect and customize it. Let’s inspect it first, and then share it!

After I create the new workbook, the Tasks table is displayed and I can see the sample data:

Although this looks like a traditional spreadsheet, there’s a lot going on beneath the surface. Let’s go through, column-by-column:

A (Task) – Plain text.

B (Assignee) – Text, formatted as a Contact.

C (First Name) – Text, computed by a formula:

In the formula, Assignee refers to column B, and First Name refers to the first name of the contact.

D (Due) – A date, with multiple formatting options:

E (Done) – A picklist that pulls values from another table, and that is formatted as a Honeycode rowlink. Together, this restricts the values in this column to those found in the other table (Done, in this case, with the values Yes and No), and also makes the values from that table visible within the context of this one:

F (Remind On) – Another picklist, this one taking values from the ReminderOptions table:

G (Notification) – Another date.

This particular table uses just a few of the features and options that are available to you.

I can use the icons on the left to explore my workbook:

I can see the tables:

I can also see the apps. A single Honeycode workbook can contain multiple apps that make use of the same tables:

I’ll return to the apps and the App Builder in a minute, but first I’ll take a look at the automations:

Again, all of the tables and apps in the workbook can use any of the automations in the workbook.

The Honeycode App Builder
Let’s take a closer look at the app builder. As was the case with the tables, I will show you some highlights and let you explore the rest yourself. Here’s what I see when I open my Simple To-do app in the builder:

This app contains four screens (My Tasks, All Tasks, Edit, and Add Task). All screens have both web and mobile layouts. Newly created screens, and also those in this app, have the layouts linked, so that changes to one are reflected in the other. I can unlink the layouts if I want to exercise more control over the controls, the presentation, or to otherwise differentiate the two:

Objects within a screen can reference data in tables. For example, the List object on the My Task screen filters rows of the Tasks table, selecting the undone tasks and ordering them by the due date:

Here’s the source expression:

=Filter(Tasks,"Tasks[Done]<>% ORDER BY Tasks[Due]","Yes")

The “%”  in the filter condition is replaced by the second parameter (“Yes”) when the filter is evaluated. This substitution system makes it easy for you to create interesting & powerful filters using the FILTER() function.
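
For example, a hypothetical variation of the same expression, with the condition flipped, would select only the completed tasks:

=Filter(Tasks,"Tasks[Done]=%","Yes")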

When the app runs, the objects within the List are replicated, one per task:

Objects on screens can run automations and initiate actions. For example, the ADD TASK button navigates to the Add Task screen:

The Add Task screen prompts for the values that specify the new task, and the ADD button uses an automation that writes the values to the Tasks table:

Automations can be triggered in four different ways. Here’s the automation that generates reminders for tasks that have not been marked as done. The automation runs once for each row in the Tasks table:

The notification runs only if the task has not been marked as done, and could also use the FILTER() function:

While I don’t have the space to show you how to build an app from scratch, here’s a quick overview.

Click Create workbook and Import CSV file or Start from scratch:

Click the Tables icon and create reference and data tables:

Click the Apps icon and build the app. You can select a wizard that uses your tables as a starting point, or you can start from an empty canvas.

Click the Automations icon and add time-driven or data-driven automations:

Share the app, as detailed in the next section.

Sharing Apps
After my app is ready to go, I can share it with other members of my team. Each Honeycode user can be a member of one or more teams:

To share my app, I click Share app:

Then I search for the desired team members and share the app:

They will receive an email that contains a link, and can start using the app immediately. Users with mobile devices can install the Honeycode Player (iOS, Android) and make use of any apps that have been shared with them. Here’s the Simple To-do app:

Amazon Honeycode APIs
External applications can also use the Honeycode APIs to interact with the applications you build with Honeycode. The functions include:

GetScreenData – Retrieve data from any screen of a Honeycode application.

InvokeScreenAutomation – Invoke an automation or action defined in the screen of a Honeycode application.
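
As a minimal sketch of calling GetScreenData from an external Java application, assuming the Honeycode module of the AWS SDK for Java v2, and with placeholder workbook, app, and screen IDs that you would copy from the Honeycode Builder:

import software.amazon.awssdk.services.honeycode.HoneycodeClient;
import software.amazon.awssdk.services.honeycode.model.GetScreenDataRequest;
import software.amazon.awssdk.services.honeycode.model.GetScreenDataResponse;

public class HoneycodeScreenReader {
    public static void main(String[] args) {
        // The IDs below are placeholders; copy the real values for your app from the Honeycode Builder.
        try (HoneycodeClient honeycode = HoneycodeClient.create()) {
            GetScreenDataResponse response = honeycode.getScreenData(GetScreenDataRequest.builder()
                    .workbookId("workbook-id")
                    .appId("app-id")
                    .screenId("screen-id")
                    .build());
            System.out.println(response);
        }
    }
}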

Check it Out
As you can see, Amazon Honeycode is easy to use, with plenty of power to let you build apps that help you and your team to be more productive. Check it out, build something cool, and let me know what you think! You can find out more in the announcement video from Larry Augustin here:

Jeff;

PS – The Amazon Honeycode Forum is the place for you to ask questions, learn from other users, and to find tutorials and other resources that will help you to get started.