Category Archives: AWS

Using Amazon CloudWatch Lambda Insights to Improve Operational Visibility

To balance costs while ensuring the service levels needed to meet business requirements, some customers continuously monitor and optimize their AWS Lambda functions. They collect and analyze metrics and logs to monitor performance and to isolate errors for troubleshooting. They also seek to right-size function configurations by measuring function duration, CPU usage, and memory allocation. Using various tools and sources of data to do this can be time-consuming, and some customers go so far as to build their own customized dashboards to surface and analyze this data.

We announced Amazon CloudWatch Lambda Insights as a public preview this past October for customers looking to gain deeper operational oversight and visibility into the behavior of their Lambda functions. Today, I’m pleased to announce that CloudWatch Lambda Insights is now generally available. CloudWatch Lambda Insights provides clearer and simpler operational visibility of your functions by automatically collating and summarizing Lambda performance metrics, errors, and logs in prebuilt dashboards, saving you from time-consuming, manual work.

Once enabled on your functions, CloudWatch Lambda Insights automatically starts collecting and summarizing performance metrics and logs, and, from a convenient dashboard, provides you with a one-click drill-down into metrics and errors for Lambda function requests, simplifying analysis and troubleshooting.

Exploring CloudWatch Lambda Insights
To get started, I need to enable Lambda Insights on my functions. In the Lambda console, I navigate to my list of functions, and then select the function I want to enable for Lambda Insights by clicking on its name. From the function’s configuration view I then scroll to the Monitoring tools panel, click Edit, enable Enhanced monitoring, and click Save. If you want to enable enhanced monitoring for many functions, you may find it more convenient to use AWS Command Line Interface (CLI), AWS Tools for PowerShell, or AWS CloudFormation approaches instead. Note that once enhanced monitoring has been enabled, it can take a few minutes before data begins to surface in CloudWatch.
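For example, here’s a minimal AWS CLI sketch of enabling Lambda Insights on a single function. It assumes a function named MyFunction with an execution role named MyFunctionRole; the Lambda Insights extension layer ARN (including the account ID and version) varies by Region, so check the Lambda Insights documentation for the exact value to use in yours:

# Allow the function to publish Lambda Insights data
aws iam attach-role-policy \
    --role-name MyFunctionRole \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy

# Add the Lambda Insights extension layer
# (note: --layers replaces the function's existing layer list, so include any layers it already uses)
aws lambda update-function-configuration \
    --function-name MyFunction \
    --layers arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14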

Screenshot showing enabling of Lambda Insights

In the Amazon CloudWatch Console, I start by selecting Performance monitoring beneath Lambda Insights in the navigation panel. This takes me to the Multi-function view. Metrics for all functions on which I have enabled Lambda Insights are graphed in the view. At the foot of the page there’s also a table listing the functions, summarizing some data in the graphs and adding Cold starts. The table gives me the ability to sort the data based on the metric I’m interested in.

Screenshot of metric graphs on the Lambda Insights Multi-function view

Screenshot of the Lambda Insights Multi-function view summary list

An interesting graph on this page, especially if you are trying to balance cost with performance, is Function Cost. This graph shows the direct cost of your functions in terms of megabyte milliseconds (MB-MS), which is how Lambda computes the financial charge of a function’s invocation. Hovering over the graph at a particular point in time shows more details.

Screenshot of function cost graph

Let’s examine my ExpensiveFunction further. Moving to the summary list at the bottom of the page, I click on the function name, which takes me to the Single function view (from here I can switch to my other functions using the controls at the top of the page, without needing to return to the multi-function view). The graphs show me metrics for invocations and errors, duration, any throttling, and memory, CPU, and network usage for the selected function. To add to the detail available, the most recent 1,000 invocations are also listed in a table that I can sort as needed.

Clicking View in the Trace column of a request in the invocations list takes me to the ServiceLens trace view, showing where my function spent its time on that particular invocation request. I could use this to determine whether changes to the business logic of the function might improve performance by reducing function duration, which has a direct effect on cost. If I’m troubleshooting, I can view the Application or Performance logs for the function using the View logs button. Application logs are those that existed before Lambda Insights was enabled on the function, whereas Performance logs are those that Lambda Insights has collated across all my enabled functions. The log views enable me to run queries, and in the case of the Performance logs I can run queries across all enabled functions in my account, for example to perform a top-N analysis to determine my most expensive functions, or to see how one function compares to another.
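To sketch what such a query might look like with the AWS CLI, here’s a CloudWatch Logs Insights query against the /aws/lambda-insights log group that Lambda Insights writes performance events to. The field names used here (function_name, billed_duration) are assumptions based on typical Lambda Insights performance log events, so inspect one of your own events to confirm them before relying on the results:

# Sum billed duration per function over the last hour (GNU date syntax for the timestamps)
aws logs start-query \
    --log-group-name /aws/lambda-insights \
    --start-time $(date -d '1 hour ago' +%s) \
    --end-time $(date +%s) \
    --query-string 'stats sum(billed_duration) as total_billed_ms by function_name | sort total_billed_ms desc | limit 10'

# Fetch the results once the query completes, using the queryId returned above
aws logs get-query-results --query-id <queryId>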

Here’s how I can make use of Lambda Insights to check if I’m ‘moving the needle’ in the correct direction when attempting to right-size a function, by examining the effect of changes to memory allocation on function cost. The starting point for my ExpensiveFunction is 128MB. By moving from 128MB to 512MB, the data shows me that function cost, duration, and concurrency are all reduced – this is shown at (1) in the graphs. Moving from 512MB to 1024MB, (2), has no impact on function cost, but it further reduces duration by 50% and also reduces the maximum concurrency. I ran two further experiments: first moving from 1024MB to 2048MB, (3), which resulted in a further reduction in duration, but the function cost started to increase, so the needle is starting to swing in the wrong direction. Finally, moving from 2048MB to 3008MB, (4), significantly increased the cost but had no effect on duration. With the aid of Lambda Insights I can infer that the sweet spot for this function (assuming latency is not a consideration) lies between 1024MB and 2048MB. All these experiments are shown in the graphs below (the concurrency graph lags slightly, as earlier invocations finish up while configuration changes are made).
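Each of those experiments is just a change to the function’s memory setting, which can be made in the console or with a single AWS CLI call; this sketch assumes the function is named ExpensiveFunction:

# Step the memory allocation and observe the effect on cost and duration in Lambda Insights
aws lambda update-function-configuration --function-name ExpensiveFunction --memory-size 512
aws lambda update-function-configuration --function-name ExpensiveFunction --memory-size 1024
aws lambda update-function-configuration --function-name ExpensiveFunction --memory-size 2048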

Screenshot of function cost experiments

CloudWatch Lambda Insights gives simple and convenient operational oversight and visibility into the behavior of my AWS Lambda functions, and is available today in all regions where AWS Lambda is present.

Learn more about Amazon CloudWatch Lambda Insights in the documentation and get started today.

— Steve

New – Fully Serverless Batch Computing with AWS Batch Support for AWS Fargate

We launched AWS Batch in December 2016 as a fully managed batch computing service that enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. With AWS Batch, you no longer need to install and manage batch computing software or server clusters to run your jobs. AWS Batch is designed to remove the heavy lifting of batch workload management by creating compute environments, managing queues, and launching the appropriate compute resources to run your jobs quickly and efficiently.

Today, we are happy to introduce the ability to specify AWS Fargate as a computing resource for AWS Batch jobs. AWS Fargate is a serverless computing engine for containers that eliminates the need to provision and manage your own servers. With this enhancement, customers now have a way to run their jobs on serverless computing resources: simply submit your analysis, ML inference, map-reduce, and other batch workloads, and let Batch and Fargate handle the rest.

Basic Concept
Customers running batch workloads in the cloud have a variety of orchestration needs: workloads need to be queued, submitted to a compute resource, and given priorities; dependencies and retries need to be handled; compute needs to be scalable and available; and users need to account for utilization and resource management. While AWS Batch simplifies all the queuing, scheduling, and lifecycle management for customers, and even provisions and manages compute in the customer account, customers are looking for even more simplicity, where they can get up and running in minutes. Time spent on image maintenance, right-sizing of compute, and monitoring is time not spent on applications. These customer needs have led us to develop the Fargate integration, which we are pleased to announce today.

How It Works
Simply specify Fargate or Fargate Spot as the resource type in Batch and submit a Fargate job definition, and you can take advantage of the benefits of serverless computing without having to patch images, manage VM isolation boundaries, or calculate the correct compute size.

To start, access the AWS Batch console in the AWS Management Console. Select Compute environments and then Create.

Getting started

We now have two new options for Provisioning model: Fargate and Fargate Spot.

Selecting Fargate

With Fargate or Fargate Spot, you don’t need to worry about Amazon EC2 instances or Amazon Machine Images. Just set Fargate or Fargate Spot, your subnets, and the maximum total vCPU of the jobs running in the compute environment, and you have a ready-to-go Fargate computing environment. With Fargate Spot, you can take advantage of up to a 70% discount for your fault-tolerant, time-flexible jobs.

vCPU for Fargate

Select Create compute environment. Then, Batch will create your Fargate-based compute environment.

Created Computing environment

The next step is to create the Job Queue, which is where your jobs live while waiting to be run. Then, connect it to your Fargate compute environment.

After you have finished setting up the job queue, the next step is to create Job definitions for your Fargate jobs. Select Job definitions from the left pane, and click the Create button.

Setting up job definition

Once you’ve selected Fargate for the job definition, you are now ready to submit your job. Batch will handle queueing, submission, and job lifecycle for you! You can access Job definitions by clicking Job definitions in the left pane. After selecting Job Definition, click Submit new job.

Submitting Job

You need to select the Job queue previously set up for your Fargate compute environment.

Submitting new job

You can now submit your new job by pressing the Submit button at the bottom.

Follow the steps below to set up your Fargate-based compute environment using the AWS CLI.

1. Creating Compute Environment

aws batch create-compute-environment --cli-input-json file://below_sample.json

{
    "computeEnvironmentName": "FargateComputeEnvironment",
    "type": "MANAGED",
    "state": "ENABLED",
    "computeResources": {
        "type": "FARGATE",
        "maxvCpus": 40,
        "subnets": [
            "subnet-xxxxxxxx","subnet-xxxxxxxx","subnet-xxxxxxxx"
        ],
        "securityGroupIds": ["sg-xxxxxxxxxxxxxxxx"],
        "tags": {
            "KeyName": "fargate"
        }
    },
    "serviceRole": "arn:aws:iam::xxxxxxxxxxxx:role/service-role/AWSBatchServiceRole"
}

To use Fargate Spot capacity instead, set the computeResources type to FARGATE_SPOT.

2. Creating Job Queue

aws batch create-job-queue --cli-input-json file://below_job_queue.json

{
  "jobQueueName": "FargateJobQueue",
  "state": "ENABLED",
  "priority": 1,
  "computeEnvironmentOrder": [
    {
      "order": 1,
      "computeEnvironment": "FargateComputeEnvironment"
    }
  ]
}

3. Creating and Registering Job Definitions
aws batch register-job-definition --cli-input-json file://below_job_definition.json

{
    "jobDefinitionName": "FargateJobDefinition",
    "type": "container",
    "propagateTags": true,
    "containerProperties": {
        "image": "xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/test:latest",
        "networkConfiguration": {
            "assignPublicIp": "ENABLED"
        },
        "fargatePlatformConfiguration": {
            "platformVersion": "LATEST"
        },
        "resourceRequirements": [
            {
                "value": "0.25",
                "type": "VCPU"
            },
            {
                "value": "512",
                "type": "MEMORY"
            }
        ],
        "jobRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
        "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole",
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/sleepenv",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "ecs"
            }
        }
    },
    "platformCapabilities": [
        "FARGATE"
    ],
    "tags": {
        "Service": "Batch",
        "Name": "JobDefinitionTag",
        "Expected": "MergeTag"
    }
}

You can also use other container image registries like Docker Hub in addition to Amazon Elastic Container Registry.

4. Submitting Job
aws batch submit-job --job-name fargateJob --job-queue FargateJobQueue --job-definition FargateJobDefinition
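Once the job is submitted, you can check its progress with the usual AWS Batch CLI calls; this sketch assumes the jobId returned by submit-job has been captured in JOB_ID:

# List jobs in the queue by status
aws batch list-jobs --job-queue FargateJobQueue --job-status RUNNABLE
aws batch list-jobs --job-queue FargateJobQueue --job-status RUNNING

# Describe a specific job (JOB_ID is the jobId returned by submit-job)
aws batch describe-jobs --jobs $JOB_ID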

Generally Available Today
AWS Batch support for AWS Fargate is generally available today for all AWS Regions where AWS Batch and AWS Fargate are available. Please visit the AWS Batch page and technical documentation for more details.

– Kame

New – SaaS Lens in AWS Well-Architected Tool

To help you build secure, high-performing, resilient, and efficient solutions on AWS, in 2015 we publicly launched the AWS Well-Architected Framework. It started as a single whitepaper but has expanded to include domain-specific lenses, hands-on labs, and the AWS Well-Architected Tool (available at no cost in the AWS Management Console) that provides a mechanism for regularly evaluating your workloads, identifying high risk issues, and recording your improvements.

To offer more workload-specific advice, in 2017 we extended the framework with the concept of “lens” to go beyond a general perspective and enter specific technology domains. Now, to help accelerate building Software-as-a-Service (SaaS) solutions, the AWS SaaS Factory team has led an effort to build a new AWS Well-Architected SaaS Lens.

SaaS is a licensing and delivery model by which software is centrally managed and hosted by a provider and available to customers on a subscription basis. In this way, software providers can innovate rapidly, optimize their costs, and gain operational efficiencies. At the same time, customers benefit from simplified IT management, speed, and a pay-for-what-you-use business model.

The Well-Architected SaaS Lens adds questions to the tool that are tailored to SaaS workloads and intended to drive critical thinking for developing and operating SaaS workloads. Each question has a list of best practices, and each best practice has a list of improvement plans to help guide you in implementing them. AWS Solution Architects from the AWS SaaS Factory Program, having worked with thousands of software developers and AWS Partners, view these well-architected patterns as a key component of building and operating a SaaS architecture on AWS.

Using the SaaS Lens in the Well-Architected Tool
In the Well-Architected Tool console, I start by defining my workload. Today, I’m reviewing a pre-production environment of a SaaS application. It’s just a minimum viable product (MVP) version of what I want to build, with just enough features to be usable and to gather initial feedback.

Now, I can choose which lenses to apply. The AWS Well-Architected Framework is there by default. I select the SaaS Lens. This adds a set of additional questions that help me understand how to design, deploy, and architect my SaaS workload following the framework’s best practices. Other lenses are available in the tool, for example the Serverless Lens described here.

Now, I start my review. Many questions in the SaaS Lens are focused on how you are managing a multi-tenant application. This is the first question for the Operational Excellence pillar. I can also add some notes to explain my answer better or take note of what I want to improve.

I don’t need to answer all questions to start improving my SaaS application. For example, this is the improvement plan based on my answer to the previous question. For each point here, I can click and get more information on how to implement that on AWS.

Moving to the Reliability pillar, I feel more confident because of the techniques I used to separate individual tenants of my SaaS application in their own “sandbox” environment.

As I expected, no risks are detected this time!

When I finish reviewing the SaaS Lens for my workload, I get an overview of the detected risks. Here, I can also save a milestone that I can use later to compare my status and estimate my improvements.

Just below that, I get a suggestion on what to focus on next. Again, I can click and get in-depth suggestions on how to mitigate the risk.

As often happens in IT services, this is an iterative process. The AWS Well-Architected Tool helps quantify the risks and gives me a path to follow to continuously improve my SaaS application.

Available Now
The SaaS Lens is available today in all regions where the AWS Well-Architected Tool is offered, as described in the AWS Regional Services List. It can be applied to existing workloads, or used for new workloads you define in the tool.

There are no costs in using the AWS Well-Architected Tool; you can use it to improve the application you are working on, or to get visibility into multiple workloads used by the department or area you are working with.

Learn more about the new SaaS Lens and get started today with the AWS Well-Architected Tool!

Danilo

AWS Marketplace Now Offers Professional Services

Now with AWS Marketplace, customers can find and buy not only third-party software but also the professional services needed to support the full lifecycle of those products, including planning, deployment, and support. This simplifies the software supply chain, including tasks like managing provider relationships and procurement processes, and consolidates billing and invoices in one place.

Until today, customers have used AWS Marketplace to buy software and then used a separate process to contract professional services. Many customers need extra professional services when they purchase third-party software, such as premium support, implementation, or training. The additional effort to support different procurement processes impacts customers’ project timelines and adds a lot of complexity to the customer’s organization.

Last year we announced AWS IQ, a service that helps you engage with AWS Certified third-party experts for AWS project work. This year we want to go one step further and help you find professional services for all those third-party software solutions you currently buy from AWS Marketplace.

For the Buyers
Buyers can now discover professional services using AWS Marketplace from multiple trusted sellers, manage the invoices and payments from software and services together and reduce procurement time, accelerating the process from months to days.

This new feature allows buyers to choose from a selection of professional services, such as assessments, implementation, premium support, managed services, and training, from consulting partners, managed service providers, and independent software vendors.

To get started finding and buying professional services, first you need to find the right service for you. If you are looking for a professional service associated with a particular piece of software, you can search for the software using the search tool in AWS Marketplace, and the related professional services will appear in the search results. Use the delivery method filter to narrow the results to just professional services.

Screenshot of searching for professional services

After you find the service you are looking for, you can visit the service details page and learn more information about the listing. If you want to buy the service, just click continue.

Screenshot of service page

That will open the request service form where you can connect to the seller and request the service. The seller will receive a notification and then they can contact you to agree on the scope of the work including deliverables, milestones, pricing, payment schedules, and service terms.

Screenshot of request service form

Once you agree with the seller on all the specific details of the contract, the seller sends you a private offer. Now the offer page will show the private offer details instead of a request for service form. You can review the pricing, payment schedule, and contract terms and create the contract.

Screenshot of private offer

The service subscription starts after you review and accept the private offer on AWS Marketplace. Also, you will receive an invoice from AWS Marketplace and you can track your subscriptions in the buyers management console. The purchases of the services are itemized on your AWS invoice, simplifying payments and cost management.

For the Sellers
This new feature of AWS Marketplace enables you, the seller, to grow your business and reach new customers by listing your professional service offerings. You can list professional services offerings as individual products or alongside existing software products in AWS Marketplace using pricing, payment schedule, and service terms that are independent from the software.

In AWS Marketplace you will create your seller page, where all your information as a seller will be displayed to the potential buyers.

Public professional service listings are discoverable by search and visible in your seller profile. You will receive customer requests for each of the services listed. Agree with the customer on the details of the service contract and then send a private offer to them.

Screenshot for creating a professional service

AWS Marketplace will invoice and collect the payments from the customers and distribute the funds to your bank account after the customers pay. AWS Marketplace also offers you seller reports that are updated daily to understand how your business is doing.

Availability
To learn more about buying and selling professional services in AWS Marketplace, visit the AWS Marketplace service page.

Marcia

Managed Entitlements in AWS License Manager Streamlines License Tracking and Distribution for Customers and ISVs

AWS License Manager is a service that helps you easily manage software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across your Amazon Web Services (AWS) and on-premises environments. You can define rules based on your licensing agreements to help prevent license violations, such as using more licenses than are available, or to notify you of breaches. AWS License Manager also offers automated discovery of bring-your-own-license (BYOL) usage that keeps you informed of all software installations and uninstallations across your environment and alerts you of licensing violations.

License Manager can manage licenses purchased in AWS Marketplace, a curated digital catalog where you can easily find, purchase, deploy, and manage third-party software, data, and services to build solutions and run your business. Marketplace lists thousands of software listings from independent software vendors (ISVs) in popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps.

Managed entitlements for AWS License Manager
Starting today, you can use managed entitlements, a new feature of AWS License Manager that lets you distribute licenses across your AWS Organizations, automate software deployments quickly and track licenses – all from a single, central account. Previously, each of your users would have to independently accept licensing terms and subscribe through their own individual AWS accounts. As your business grows and scales, this becomes increasingly inefficient.

Customers can use managed entitlements to manage more than 8,000 listings available for purchase from more than 1,600 vendors in AWS Marketplace. Today, AWS License Manager automates license entitlement distribution for Amazon Machine Image, container, and machine learning products purchased in the Marketplace.

How It Works
Managed entitlements provides built-in controls that allow only authorized users and workloads to consume a license within vendor-defined limits. This new license management mechanism also eliminates the need for ISVs to maintain their own licensing systems and conduct costly audits.

overview

Each time a customer purchases licenses from AWS Marketplace or a supported ISV, the license is activated based on AWS IAM credentials, and the details are registered to License Manager.

list of granted license

Administrators distribute licenses to AWS accounts. They can manage a list of grants for each license.
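For illustration, here is a hedged AWS CLI sketch of how an administrator might list received licenses and grant one to another account in the organization. The license ARN, grant name, and account IDs are placeholders, and the exact parameters (for example the allowed operations and the idempotency token) should be checked against the License Manager CLI reference:

# List licenses granted to this account (for example, from an AWS Marketplace purchase)
aws license-manager list-received-licenses

# Distribute a license to another account by creating a grant
aws license-manager create-grant \
    --grant-name ShareWithDevAccount \
    --license-arn arn:aws:license-manager::111122223333:license:l-EXAMPLE1234567890 \
    --principals arn:aws:iam::444455556666:root \
    --home-region us-east-1 \
    --allowed-operations CheckoutLicense CheckInLicense ExtendConsumptionLicense \
    --client-token example-idempotency-token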

list of grants

Benefits for ISVs
AWS License Manager managed entitlements provides several benefits to ISVs, simplifying automatic license creation and distribution as part of their transactional workflow. License entitlements can be distributed to end users with or without AWS accounts. Managed entitlements streamlines upgrades and renewals by removing expensive license audits, and provides customers with a self-service tool with built-in license tracking capabilities. There are no fees for this feature.

Managed entitlements provides the ability to distribute licenses to end users who do not have AWS accounts. In conjunction with AWS License Manager, ISVs create a unique long-term token to identify the customer. The token is generated and shared with the customer. When the software is launched, the customer enters the token to activate the license. The software exchanges the long-term customer token for a short-term token that is passed to the API, and license activation is completed. For on-premises workloads that are not connected to the internet, ISVs can generate a host-specific license file that customers can use to run the software on that host.

Now Available
This new enhancement to AWS License Manager is available today for US East (N. Virginia), US West (Oregon), and Europe (Ireland) with other AWS Regions coming soon.

Licenses purchased on AWS Marketplace are automatically created in AWS License Manager and no special steps are required to use managed entitlements. For more details about the new feature, see the managed entitlement pages on AWS Marketplace, and the documentation. For ISVs to use this new feature, please visit our getting started guide.

Get started with AWS License Manager and the new managed entitlements feature today.

– Kame

re:Invent 2020 Liveblog: Partner Keynote

Join us Wednesday, Dec. 3, from 7:45-9:30 a.m., for the AWS Partner Keynote with Doug Yeum, head of Worldwide Channels and Alliances; Sandy Carter, vice president, Global Public Sector Partners and Programs; and Dave McCann, vice president, AWS Migration, Marketplace, and Control Services. Developer Advocates Steve Roberts and Martin Beeby will be liveblogging all the announcements and discussion.

See you at 7:45 a.m. (PST) Wednesday!

— Steve & Martin

Amazon S3 Update – Strong Read-After-Write Consistency

When we launched S3 back in 2006, I discussed its virtually unlimited capacity (“…easily store any number of blocks…”), the fact that it was designed to provide 99.99% availability, and that it offered durable storage, with data transparently stored in multiple locations. Since that launch, our customers have used S3 in an amazingly diverse set of ways: backup and restore, data archiving, enterprise applications, web sites, big data, and (at last count) over 10,000 data lakes.

One of the more interesting (and sometimes a bit confusing) aspects of S3 and other large-scale distributed systems is commonly known as eventual consistency. In a nutshell, after a call to an S3 API function such as PUT that stores or modifies data, there’s a small time window where the data has been accepted and durably stored, but not yet visible to all GET or LIST requests.

This aspect of S3 can become very challenging for big data workloads (many of which use Amazon EMR) and for data lakes, both of which require access to the most recent data immediately after a write. To help customers run big data workloads in the cloud, Amazon EMR built EMRFS Consistent View and open source Hadoop developers built S3Guard, which provided a layer of strong consistency for these applications.

S3 is Now Strongly Consistent
After that overly-long introduction, I am ready to share some good news!

Effective immediately, all S3 GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata, are now strongly consistent. What you write is what you will read, and the results of a LIST will be an accurate reflection of what’s in the bucket. This applies to all existing and new S3 objects, works in all regions, and is available to you at no extra charge! There’s no impact on performance, you can update an object hundreds of times per second if you’d like, and there are no global dependencies.
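As a quick illustration of what strong read-after-write consistency means in practice, the sequence below now always returns the object you just wrote and shows it in listings immediately; the bucket and key names are placeholders:

# Write (or overwrite) an object...
aws s3api put-object --bucket my-example-bucket --key reports/daily.csv --body daily.csv

# ...an immediate read returns exactly what was just written...
aws s3api get-object --bucket my-example-bucket --key reports/daily.csv /tmp/daily.csv

# ...and an immediate listing reflects the new object
aws s3api list-objects-v2 --bucket my-example-bucket --prefix reports/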

This improvement is great for data lakes, but other types of applications will also benefit. Because S3 now has strong consistency, migration of on-premises workloads and storage to AWS should now be easier than ever before.

We’ve been working with the Amazon EMR team and developers in the open-source community to ensure that customers can take advantage of this update with their big data workloads. As a result, you no longer need to use EMRFS Consistent View or S3Guard, further reducing the cost of running big data workloads in AWS.

To learn more about S3 strong consistency, visit the feature page here.

A Word From Dropbox
Long-time AWS customer Dropbox recently migrated a 34 PB analytics data lake from on-premises Hadoop clusters to S3. Watch this video to learn more about strong consistency and how it has allowed Dropbox to simplify their data lake:

Jeff;

New – Amazon S3 Replication Adds Support for Multiple Destination Buckets

Amazon Simple Storage Service (S3) supports many types of replication, including S3 Same-Region Replication (SRR), which launched in 2019, and S3 Cross-Region Replication (CRR), which has been around since 2015. Today, we are happy to announce S3 Replication support for multiple destination buckets. S3 Replication now gives you the ability to replicate data from one source bucket to multiple destination buckets. With S3 Replication (multi-destination) you can replicate data within the same AWS Region using S3 SRR, across different AWS Regions using S3 CRR, or a combination of both.

Before this launch, if you needed to have multiple copies of your data in different S3 buckets, you had to build your own S3 replication service by monitoring S3 events, identifying created objects, and using AWS Lambda functions to copy objects to each destination bucket.

This launch removes the need for you to develop your own solutions to replicate the data across multiple destinations. You can use the flexibility of S3 Replication (multi-destination) to store multiple copies of your data in different storage classes, with different encryption types, or across different accounts depending on its intended use. Additionally, when replicating to multiple destinations, you can use CloudWatch metrics to track replication progress for each region pair.

S3 Replication (multi-destination) is an extension to S3 Replication, and it supports all existing S3 Replication features like Replication Time Control (RTC) and delete marker replication. If you need a predictable replication time backed by a Service Level Agreement, you can use RTC to replicate objects in less than 15 minutes.

How to Get Started With S3 Replication (multi-destination)
In order to get S3 Replication working, all the buckets involved in the replication (source and destinations) must have bucket versioning enabled.

To set up S3 Replication (multi-destination), you need to define replication rules. You can create a new rule in the bucket Management page, under Replication Rules.

Screenshot of adding a rule

When creating a new replication rule, one very important step is to set up permissions for replication, as S3 will need to replicate objects on your behalf. To do that, you can follow the instructions available in the S3 documentation page.

To create the replication rule, just follow the steps in the console. You can specify which objects in the bucket the rule applies to, the destination bucket, whether you want to change the storage class of the replicated objects, and many other preferences for your replicated objects.
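If you prefer to configure replication programmatically, here is a hedged sketch of a configuration with two destination rules, applied with the AWS CLI; the bucket names and IAM role are placeholders, and the source and destination buckets must all have versioning enabled:

aws s3api put-bucket-replication --bucket my-source-bucket --replication-configuration file://replication.json

replication.json:

{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateToBackup",
      "Priority": 1,
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "DeleteMarkerReplication": {"Status": "Enabled"},
      "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket"}
    },
    {
      "ID": "ReplicateToAnalytics",
      "Priority": 2,
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "DeleteMarkerReplication": {"Status": "Disabled"},
      "Destination": {"Bucket": "arn:aws:s3:::my-analytics-bucket", "StorageClass": "STANDARD_IA"}
    }
  ]
}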

Screenshot configuring the replication rule

One thing to keep in mind when activating a rule is that replication starts only for new objects added to the bucket from that moment on. Objects uploaded to the bucket before the rule was created need to be copied using one-time operations like S3 Batch Operations or an S3 copy.

If you want to monitor the progress of your replication using CloudWatch metrics, don’t forget to click the Replication metrics and notifications checkbox.

Screenshot of configuring replication rules metrics

Now that we support multiple destinations for replication, rule priorities are used when there are two or more rules with the same destination. When that happens, the rule with the highest priority will be applied. For the same destination bucket, a lower priority rule will not be applied when the replication configuration has two or more rules with overlapping scope. If there are two or more rules with the same scope and different destinations, both rules will be applied.

You can see a summary of all your rules in the Replication rules listing under the bucket Management page.

Screenshot of replication rules listing

Monitoring Replication
When you have all the rules configured, you can start uploading objects to the source bucket and monitor how they get replicated in all the different destinations.

To know the replication status of an object in the source bucket, you can see the Replication status in the object Details. The status types are:

  • COMPLETED: The replication was successful in all the destinations.
  • PENDING: The replication is still in progress.
  • FAILED: Replication failed for at least one of the destinations. When there is a failure in replication, the only way to fix it is by uploading the object again.

screenshot of object metadata

For replicated objects, you will see the REPLICA status under the Replication status.

You can also use CloudWatch metrics to monitor the replication. First, you need to enable metrics for each of the rules. Then, in the bucket Metrics section, you can choose which rules you want to see metrics for and view the charts for each of them; the metrics are also available in the CloudWatch console.
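As a sketch, the same replication metrics can also be retrieved with the CLI. The metric and dimension names below (ReplicationLatency in the AWS/S3 namespace, with SourceBucket, DestinationBucket, and RuleId dimensions) are assumptions based on S3 Replication metrics, so verify them against the metrics actually emitted in your account:

# Average replication latency (seconds) for one rule over the last hour (GNU date syntax)
aws cloudwatch get-metric-statistics \
    --namespace AWS/S3 \
    --metric-name ReplicationLatency \
    --dimensions Name=SourceBucket,Value=my-source-bucket Name=DestinationBucket,Value=my-backup-bucket Name=RuleId,Value=ReplicateToBackup \
    --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ) \
    --end-time $(date -u +%Y-%m-%dT%H:%M:%SZ) \
    --period 300 \
    --statistics Average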

Screenshot of replication metrics

Availability
S3 Replication (multi-destination) is available today in all AWS Regions. To get started, you can use the AWS Management Console, SDKs, S3 API, or AWS CloudFormation to create replication rules from one source bucket to multiple destination buckets.

Pricing for S3 Replication (multi-destination) applies for each rule. For pricing information, please visit the Amazon S3 pricing page.

For more information about this new feature visit the S3 Replication page.

Marcia

New AWS Amplify Admin UI Helps You Develop App Backends, No Cloud Experience Required

Today AWS Amplify announces its new Admin UI, which lets you configure an application backend and manage app users and content outside the AWS console. This new feature makes it easier to use AWS services and accelerates the development and management of full-stack web and mobile apps.

We launched AWS Amplify in November 2018, and since then it has been helping front-end web and mobile developers to quickly develop and deploy cloud-connected web and mobile applications. In order to stay ahead of the curve and deliver innovation to customers, businesses need to ship features fast. However, developers and non-developers who are unfamiliar with AWS fundamentals require training, which slows the entire process down.

AWS Amplify today launches a new Admin UI that enables team members to interface with AWS without requiring an AWS account (only the first deployment requires an AWS account).

The Admin UI provides simple yet powerful tools to model database tables, add authentication and authorization, and manage app content, users, and groups. The AWS Amplify Admin UI focuses on data types rather than backend infrastructure. All the backend resources generate infrastructure as code (IaC) templates that can be committed in the team repository and integrated with the AWS Amplify continuous deployment workflow to manage the different environments.

Let’s Look at an Example Using the New AWS Amplify Admin UI
Imagine that you are a front-end web developer creating a website for a local restaurant. The restaurant owner wants to have a website where they can show their daily menu, and wants a simple way to update the content of the page every day.

There are many ways to solve this problem. You can spin up a server and install a CMS for the restaurant owner to manage the menu. For this particular use case, having a server exclusively to do this is just over-provisioning resources. Or, you can create the CMS yourself using serverless tools; however, this adds a lot of complexity and extra time to the development cycle.

Another option is to use the new AWS Amplify Admin UI that allows you to take advantage of many AWS managed services to create the backend quickly and also provides the ability to manage the application users and content.

The first thing you need to do is to create a new AWS Amplify app backend in the AWS Console. AWS Amplify will create a backend environment called staging. When your app backend is ready, open the new Admin UI. If you would like to get another developer working on this application who doesn’t have experience with AWS, nor access to the AWS account, you can now grant them access so they can continue the work on the UI. But for now, let’s imagine that you are going to do all the development.

Screenshot of opening the admin ui

The Admin UI contains all the tools that application developers need to configure the application backend and that content managers need to update the application content.

In the sidebar of the Admin UI (as shown in the following illustration), you can find all the different options for setting up your application.

To get started with the restaurant website, you need a menu data model. For that, first go to Data (1), then create a new data model called Menu (2), add the necessary fields, and Save and deploy (3) the model. Saving and deploying the model will create all the needed AWS resources in the backend, like an AWS AppSync API and an Amazon DynamoDB table to host the menu items. Deploying takes a few minutes.

Screenshot for data modeling

After your model is deployed, you can start working on your website. For this example I will be using React, one of the web frameworks supported by AWS Amplify, but you can do the same example with any of the supported frameworks.

First, you need to install the AWS Amplify CLI:

npm install -g @aws-amplify/cli

Then create a new React application:

npx create-react-app react-amplified
cd react-amplified

When your application is created, you can configure it with the AWS Amplify application we just created. For that, go back to the Admin UI and select Local setup instructions (1), and execute the amplify command (2) in the directory where the web application is stored in your computer.

Screenshot of pulling amplify configuration

When you execute that command, a browser window will open that asks you if you are sure that you want to log in to the AWS Amplify Admin UI. Selecting yes will grant the AWS Amplify CLI access to deploy updates to the backend directly from your local desktop. The CLI will prompt you with a few questions about your local environment, and finally will ask if you plan to modify this backend locally. Choose yes.

When that process ends, you will notice some changes in your web application directory: a couple of new directories were created (amplify and src/models) and also a new file (aws-exports.js). These files and directories hold all the configuration for your AWS Amplify application.

Now it’s time to develop your application. To access the menu data model you created in the first steps, you will use the DataStore library from AWS Amplify. DataStore allows you to connect to your deployed database and perform CRUD, sort and filter operations from your UI to manipulate backend data. In the Admin UI, you can see some examples on how to create, update, delete and query the model.

Screenshot of using the data model

When the website is ready, it’s time to add some content. The restaurant owner is the one adding the menu items. In order for them to be able to add items, they need to have permissions to access the Admin UI for this application.

To do this, you need to create a new Admin UI account for the restaurant owner with the correct permissions. Go to the AWS Amplify console for your application and then to the Admin UI management and invite users.

When adding new users to the Admin UI, you can define their permission scope. If you want to grant them full access, they will be able to configure and manage the application backend environment; if you want them only to be able to edit the content, you can give them the manage only access scope. For the restaurant owner, grant manage only permissions.

Screenshot for inviting new users to the AdminUI

After sending the invite, the restaurant owner will receive an email with a link to access the Admin UI and a username and password to log in. When they log in, they can go to the Content tab (1), start adding items to their menu (2), and see the available items in the table on the screen (3).

Screenshot adding new content

From this screen, the restaurant owner can add, delete and edit items in their menu whenever they want to. These changes are reflected in the website immediately after they save.

The use cases for Admin UI are endless, such as blogs, e-commerce sites, planning apps, etc. Developers can build complex and feature-rich apps by focusing on their domain-specific data model instead of spending hours deploying and stitching together cloud infrastructure. AWS Amplify gives front-end developers the fastest and easiest way to develop mobile and web apps. And all accessible to developers that are not familiar with the cloud and without the need to give AWS access to everybody in the team.

Availability
AWS Amplify Admin UI is available at launch in: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (London).

For more information, visit the Amplify service page. Get started building a data model without an AWS account in the sandbox experience.

Marcia

Amazon EKS Distro: The Kubernetes Distribution Used by Amazon EKS

Our customers have told us that they want to focus on building innovative solutions for their customers, and focus less on the heavy lifting of managing Kubernetes infrastructure. That is why Amazon Elastic Kubernetes Service (EKS) has been so popular; we remove the burden of managing Kubernetes while our customers glean the benefits.

However, not all customers choose to use Amazon EKS. For example, they may have existing infrastructure investments, data residency requirements or compliance obligations that lead them to operate Kubernetes on-premises. Customers in these situations tell us that they spend a lot of effort to track updates, figure out compatible versions of Kubernetes and the complicated matrix of underlying components, test them for compatibility, and keep pace with the Kubernetes release cadence, which can be as frequent as every three to four months. If customers are not able to maintain pace for testing and qualifying new versions, they risk breaking changes, version compatibility issues, and running unsupported versions of Kubernetes lacking critical security patches.

We have learned a lot while providing Amazon EKS at AWS and have developed a deep understanding of how to deliver Kubernetes with operational security, stability, and reliability. Today we are sharing Amazon EKS Distro, which we built using that knowledge.

EKS Distro is a distribution of the same version of Kubernetes deployed by Amazon EKS, which you can use to manually create your own Kubernetes clusters anywhere you choose. EKS Distro provides the installable builds and code of open source Kubernetes used by Amazon EKS, including the dependencies and AWS-maintained patches. Using a choice of cluster creation and management tooling, you can create EKS Distro clusters in AWS on Amazon Elastic Compute Cloud (EC2), in other clouds, and on your on-premises hardware.

EKS Distro includes upstream open source Kubernetes components and third-party tools including configuration database, network, and storage components necessary for cluster creation. They include Kubernetes control plane components (kube-controller-manager, etcd, and CoreDNS) and Kubernetes worker node components (kubelet, CNI plugins, CSI Sidecar images, Metrics Server and AWS-IAM-authenticator).

Building a Cluster
The EKS Distro repository has everything you need to build and create Kubernetes clusters. The repository contains the raw documentation for EKS Distro, and it has been built and published at https://distro.eks.amazonaws.com.

To create a new cluster, I follow this section of the documentation. The guide explains how I can build all of the parts and ultimately deploy a cluster to some EC2 instances on AWS using the open source tool kOps. EKS Distro works with many other tools besides kOps. You can find the details in the partner section of the documentation, and many partners have released blogs today that explain how you can deploy using their tooling.

The guide explains that before I can build my cluster, I need to get several container images. I can get them from the EKS Distro Container repository, download them as a tarball, or build them from scratch. I opt to build my containers from scratch and follow the Build Guide. An hour later, I have managed to create twenty containers and have pushed them into Amazon Elastic Container Registry.
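As an illustrative sketch (the repository name, image tag, account ID, and Region are placeholders), pushing one of those locally built images into Amazon Elastic Container Registry looks like this:

# Authenticate Docker to the private registry
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Create a repository, then tag and push the locally built image
aws ecr create-repository --repository-name eks-distro/coredns
docker tag coredns:v1.7.0 111122223333.dkr.ecr.us-east-1.amazonaws.com/eks-distro/coredns:v1.7.0
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/eks-distro/coredns:v1.7.0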

The instructions detail several prerequisites that are required by both the build and deploy stages. I follow the guide and install all of the tools suggested.

Next, as per the guide, I locate the kops.sh script in the development folder of the EKS Distro repository. After running the script, it prompts me to enter a Fully Qualified Domain Name (FQDN). I provide newsblog.thebeebs.net.

This script does several things, including creating an S3 bucket in my account to store artifacts required by kOps. It also creates a file called newsblog.thebeebs.net.yaml. I edit this file and replace the container image URLs with ones that point to my images in Elastic Container Registry.

I continue to follow the guide, which now instructs me to run some kOps commands to create my cluster. These commands use the newsblog.thebeebs.net.yaml file, which was an output of the previous step.

CLUSTER_NAME=newsblog.thebeebs.net
kops create -f ./$CLUSTER_NAME.yaml
kops create secret --name $CLUSTER_NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster $CLUSTER_NAME --yes
kops validate cluster --wait 10m
cat << EOF > aws-iam-authenticator.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-iam-authenticator
  namespace: kube-system
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    clusterID: $CLUSTER_NAME
EOF

One of these commands creates a file called aws-iam-authenticator.yaml. I will apply this file to my Kubernetes cluster so that it works correctly with the aws-iam-authenticator.

kubectl apply -f aws-iam-authenticator.yaml

I can now verify that my Kubernetes cluster is using the EKS Distro images by using kubectl to list the container images of the pods running in all namespaces.

kubectl get po --all-namespaces -o json | jq -r .items[].spec.containers[].image | sort

Lastly, I delete my cluster by using kOps and issuing a delete command.

kops delete -f ./newsblog.thebeebs.net.yaml --yes

Updates
New versions of EKS Distro will be released soon after we make releases to Amazon EKS. The source code, open source tools, and settings are provided for reproducible builds so you can be assured EKS Distro matches what is deployed by Amazon EKS.

Things to Know
EKS Distro supports the same versions of Kubernetes and point releases that Amazon EKS uses. EKS Distro provides the same upstream versions of Kubernetes and dependencies that operating system vendors have tested and confirmed work with Kubernetes. This means that EKS Distro already works with common operating systems, such as CentOS, Canonical Ubuntu, Red Hat Enterprise Linux, Suse, and more.

Pricing and Support
EKS Distro is an open source project and will be distributed for free. Please collaborate with us on GitHub to make it even better. For example, if you find any issues, please submit them or create a pull request and we will fix them on a best effort basis. Partners will receive support through the Amazon Partner Network program and customers that adopt EKS Distro through partners will receive support from those providers.

What is Coming Next?
In 2021 we will be launching EKS Anywhere, which will provide an installable software package for creating and operating Kubernetes clusters on-premises, plus automation tooling for cluster lifecycle support. It will enable you to centrally back up, recover, patch, and upgrade your production clusters with minimal disruption. EKS Anywhere creates clusters based on EKS Distro, so you will have version consistency with Amazon EKS. This version and tooling consistency will reduce support costs and eliminate the redundant effort of using multiple tools for managing your on-premises and Amazon EKS clusters.

Available Now
EKS Distro is available today for download and you can get the source and builds from GitHub. To help you get started, check out the documentation.

Happy Deploying!

— Martin