
Meet the Newest AWS News Bloggers!


I wrote my first post for this blog way back in 2004! Over the course of the first decade, the amount of time that I devoted to the blog grew from a small fraction of my day to a full day. In the early days my email inbox was my primary source of information about upcoming launches, and also my primary tool for managing my work backlog. When that proved to be unscalable, Ana came onboard and immediately built a ticketing system and set up a process for teams to request blog posts. Today, a very capable team (Greg, Devin, and Robin) takes care of tickets, platforms, comments, metrics, and so forth so that I can focus on what I like to do best: using new services and writing about them!

Over the years we have experimented with a couple of different strategies to scale the actual writing process. If you are a long-time reader you may have seen posts from Mike, Jinesh, Randall, Tara, Shaun, and a revolving slate of guest bloggers.

News Bloggers
I would like to introduce you to our current lineup of AWS News Bloggers. Like me, the bloggers have a technical background and are prepared to go hands-on with every new service and feature. Here’s our roster:

Steve Roberts (@bellevuesteve) – Steve focuses on .NET tools and technologies.

Julien Simon (@julsimon) – Julien likes to help developers and enterprises to bring their ideas to life.

Brandon West (@bwest) – Brandon leads our developer relations team in the Americas, and has written a book on the topic.

Martin Beeby (@thebeebs) – Martin focuses on .NET applications, and has worked as a C# and VB developer since 2001.

Danilo Poccia (@danilop) – Danilo works with companies of any size to support innovation. He is the author of AWS Lambda in Action.

Sébastien Stormacq (@sebesto) – Sébastien works with builders to unlock the value of the AWS cloud, using his secret blend of passion, enthusiasm, customer advocacy, curiosity, and creativity.

We are already gearing up for re:Invent 2019, and can’t wait to bring you a rich set of blog posts. Stay tuned!

Jeff;

Learn about AWS Services & Solutions – July AWS Online Tech Talks


AWS Tech Talks

Join us this July to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain

July 24, 2019 | 11:00 AM – 12:00 PM PT Building System of Record Applications with Amazon QLDB – Dive deep into the features and functionality of our first-of-its-kind, purpose-built ledger database, Amazon QLDB.

Containers

July 31, 2019 | 11:00 AM – 12:00 PM PT Machine Learning on Amazon EKS – Learn how to use KubeFlow and TensorFlow on Amazon EKS for your machine learning needs.

Data Lakes & Analytics

July 31, 2019 | 1:00 PM – 2:00 PM PT How to Build Serverless Data Lake Analytics with Amazon Athena – Learn how to use Amazon Athena for serverless SQL analytics on your data lake, transform data with AWS Glue, and manage access with AWS Lake Formation.

August 1, 2019 | 11:00 AM – 12:00 PM PT Enhancing Your Apps with Embedded Analytics – Learn how to add powerful embedded analytics capabilities to your applications, portals and websites with Amazon QuickSight.

Databases

July 25, 2019 | 9:00 AM – 10:00 AM PT MySQL Options on AWS: Self-Managed, Managed, Serverless – Understand different self-managed and managed MySQL deployment options on AWS, and watch a demonstration of creating a serverless MySQL-compatible database using Amazon Aurora.

DevOps

July 30, 2019 | 9:00 AM – 10:00 AM PT Build a Serverless App in Under 20 Minutes with Machine Learning Functionality Using AWS Toolkit for Visual Studio Code – Get a live demo on how to create a new, ready-to-deploy serverless application.

End-User Computing

July 23, 2019 | 1:00 PM – 2:00 PM PT A Security-First Approach to Delivering End User Computing Services – Learn how AWS improves security and reduces cost by moving data to the cloud while providing secure, fast access to desktop applications and data.

IoT

July 30, 2019 | 11:00 AM – 12:00 PM PT Security Spotlight: Best Practices for Edge Security with Amazon FreeRTOS – Learn best practices for building a secure embedded IoT project with Amazon FreeRTOS.

Machine Learning

July 23, 2019 | 9:00 AM – 10:00 AM PT Get Started with Machine Learning: Introducing AWS DeepLens, 2019 Edition – Learn the basics of machine learning through building computer vision apps with the new AWS DeepLens.

August 1, 2019 | 9:00 AM – 10:00 AM PT Implementing Machine Learning Solutions with Amazon SageMaker – Learn how machine learning with Amazon SageMaker can be used to solve industry problems.

Mobile

July 31, 2019 | 9:00 AM – 10:00 AM PT Best Practices for Android Authentication on AWS with AWS Amplify – Learn the basics of Android authentication on AWS and leverage the built in AWS Amplify Authentication modules to provide user authentication in just a few lines of code.

Networking & Content Delivery

July 23, 2019 | 11:00 AM – 12:00 PM PT Simplify Traffic Monitoring and Visibility with Amazon VPC Traffic Mirroring – Learn to easily mirror your VPC traffic to monitor and secure traffic in real-time with monitoring appliances of your choice.

Productivity & Business Solutions

July 30, 2019 | 1:00 PM – 2:00 PM PT Get Started in Minutes with Amazon Connect in Your Contact Center – See how easy it is to get started with Amazon Connect, based on the same technology used by Amazon Customer Service to power millions of customer conversations.

Robotics

July 25, 2019 | 11:00 AM – 12:00 PM PT Deploying Robotic Simulations Using Machine Learning with Nvidia JetBot and AWS RoboMaker – Learn how to deploy robotic simulations (and find dinosaurs along the way) using machine learning with Nvidia JetBot and AWS RoboMaker.

Security, Identity & Compliance

July 24, 2019 | 9:00 AM – 10:00 AM PT Deep Dive on AWS Certificate Manager Private CA – Creating and Managing Root and Subordinate Certificate Authorities – Learn how to quickly and easily create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs.

Serverless

July 24, 2019 | 1:00 PM – 2:00 PM PT Getting Started with AWS Lambda and Serverless Computing – Learn how to run code without provisioning or managing servers with AWS Lambda.

AWS New York Summit 2019 – Summary of Launches & Announcements


The AWS New York Summit just wrapped up! Here’s a quick summary of what we launched and announced:

Amazon EventBridge – This new service builds on the event-processing model that forms the basis for Amazon CloudWatch Events, and makes it easy for you to integrate your AWS applications with SaaS applications such as Zendesk, Datadog, SugarCRM, and OneLogin. Read my blog post, Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications, to learn more.

Werner announces EventBridge – Photo by Serena

Cloud Development Kit – CDK is now generally available, with support for TypeScript and Python. Read Danilo’s post, AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available, to learn more.

Fluent Bit Plugins for AWS – Fluent Bit is a multi-platform, open source log processor and forwarder that is compatible with Docker and Kubernetes environments. You can now build a container image that includes new Fluent Bit plugins for Amazon CloudWatch and Amazon Kinesis Data Firehose. The plugins route logs to CloudWatch, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service. Read Centralized Container Logging with Fluent Bit to learn more.


Nicki, Randall, Robert, and Steve – Photo by Deepak

AWS Toolkit for VS Code – This toolkit lets you develop and test locally (including step-through debugging) in a Lambda-like environment, and then deploy to the AWS Region of your choice. You can invoke Lambda functions locally or remotely, with full control of the function configuration, including the event payload and environment variables. To learn more, read Announcing AWS Toolkit for Visual Studio Code.

Amazon CloudWatch Container Insights (preview) – You can now create CloudWatch Dashboards that monitor the performance and health of your Amazon ECS and AWS Fargate clusters, tasks, containers, and services. Read Using Container Insights to learn more.

CloudWatch Anomaly Detection (preview) – This cool addition to CloudWatch uses machine learning to continuously analyze system and application metrics, determine a nominal baseline, and surface anomalies, all without user intervention. It adapts to trends, and helps to identify unexpected changes in performance or behavior. Read the CloudWatch Anomaly Detection documentation to learn more.

Amazon SageMaker Managed Spot Training (coming soon) – You will soon be able to use Amazon EC2 Spot Instances to lower the cost of training your machine learning models. This upcoming enhancement to SageMaker will lower your training costs by up to 70%, and can be used in conjunction with Automatic Model Tuning.

Jeff;

 

AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available


Managing your Infrastructure as Code provides great benefits and is often a stepping stone for a successful application of DevOps practices. Instead of relying on manually performed steps, both administrators and developers can automate the provisioning of the compute, storage, network, and application services required by their applications, using configuration files.

For example, defining your Infrastructure as Code makes it possible to:

  • Keep infrastructure and application code in the same repository
  • Make infrastructure changes repeatable and predictable across different environments, AWS accounts, and AWS regions
  • Replicate production in a staging environment to enable continuous testing
  • Replicate production in a performance test environment that you use just for the time required to run a stress test
  • Release infrastructure changes using the same tools as code changes, so that deployments include infrastructure updates
  • Apply software development best practices to infrastructure management, such as code reviews, or deploying small changes frequently

Configuration files used to manage your infrastructure are traditionally implemented as YAML or JSON text files, but that approach gives up most of the advantages of modern programming languages. With YAML in particular, it can be very difficult to detect a file that was truncated while being transferred to another system, or a line that went missing when copying and pasting from one template to another.

Wouldn’t it be better if you could use the expressive power of your favorite programming language to define your cloud infrastructure? That is why, last year, we introduced the AWS Cloud Development Kit (CDK) in developer preview: an extensible, open-source software development framework to model and provision your cloud infrastructure using familiar programming languages.

I am super excited to share that the AWS CDK for TypeScript and Python is generally available today!

With the AWS CDK you can design, compose, and share your own custom components that incorporate your unique requirements. For example, you can create a component setting up your own standard VPC, with its associated routing and security configurations. Or a standard CI/CD pipeline for your microservices using tools like AWS CodeBuild and CodePipeline.

Personally, I really like that by using the AWS CDK you can build your application, including the infrastructure, in your IDE, using the same programming language and with the autocompletion and parameter suggestions that modern IDEs have built in, without having to switch mentally between one tool or technology and another. The AWS CDK makes it really fun to quickly code up your AWS infrastructure, configure it, and tie it together with your application code!

How the AWS CDK works
Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic; a static website; or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs into stacks, which you can deploy into an AWS environment, and apps, collections of one or more stacks.
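As a rough illustration, here is a minimal CDK app in TypeScript (the stack name and the S3 bucket are placeholders I chose, assuming the v1 module layout with @aws-cdk/core and @aws-cdk/aws-s3):

import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';

// A stack is itself a construct that groups other constructs
class StorageStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // A single-resource construct: an S3 bucket with versioning enabled
    new s3.Bucket(this, 'MyBucket', { versioned: true });
  }
}

// An app is a collection of one or more stacks
const app = new cdk.App();
new StorageStack(app, 'StorageStack');
app.synth();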

How to use the AWS CDK
We continuously add new features based on the feedback of our customers. That means that when creating an AWS resource, you often have to specify many options and dependencies. For example, if you create a VPC you have to think about how many Availability Zones (AZs) to use and how to configure subnets to give private and public access to the resources that will be deployed in the VPC.

To make it easy to define the state of AWS resources, the AWS Construct Library exposes the full richness of many AWS services with sensible defaults that you can customize as needed. In the case above, the VPC construct creates by default public and private subnets for all the AZs in the VPC, using 3 AZs if not specified.
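For example, here is a hedged sketch (the stack name is illustrative) of keeping all the defaults while limiting the VPC to two Availability Zones by overriding a single property:

import * as cdk from '@aws-cdk/core';
import * as ec2 from '@aws-cdk/aws-ec2';

class NetworkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    // The defaults create public and private subnets in up to 3 AZs;
    // override only the properties you need to change.
    new ec2.Vpc(this, 'MyCustomVpc', {
      maxAzs: 2
    });
  }
}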

For creating and managing CDK apps, you can use the AWS CDK Command Line Interface (CLI), a command-line tool that requires Node.js and can be installed quickly with:

npm install -g aws-cdk

After that, you can use the CDK CLI with different commands:

  • cdk init to initialize in the current directory a new CDK project in one of the supported programming languages
  • cdk synth to print the CloudFormation template for this app
  • cdk deploy to deploy the app in your AWS Account
  • cdk diff to compare what is in the project files with what has been deployed

Just run cdk to see more of the available commands and options.

You can easily include the CDK CLI in your deployment automation workflow, for example using Jenkins or AWS CodeBuild.

Let’s use the AWS CDK to build two sample projects, using different programming languages.

An example in TypeScript
For the first project I am using TypeScript to define the infrastructure:

cdk init app --language=typescript

Here’s a simplified view of what I want to build, leaving out the details of the public/private subnets in the VPC. There is an online frontend that writes messages to a queue, and an asynchronous backend that consumes messages from the queue:

Inside the stack, the following TypeScript code defines the resources I need, and their relations:

  • First I define the VPC and an Amazon ECS cluster in that VPC. By using the defaults provided by the AWS Construct Library, I don’t need to specify any parameter here.
  • Then I use an ECS pattern that in a few lines of code sets up an Amazon SQS queue and an ECS service running on AWS Fargate to consume the messages in that queue.
  • The ECS pattern library provides higher-level ECS constructs which follow common architectural patterns, such as load balanced services, queue processing, and scheduled tasks.
  • A Lambda function receives the name of the queue, created by the ECS pattern, as an environment variable, and is granted permission to send messages to the queue.
  • The code of the Lambda function and the Docker image are passed as assets. Assets allow you to bundle files or directories from your project and use them with Lambda or ECS.
  • Finally, an Amazon API Gateway endpoint provides an HTTPS REST interface to the function.
const myVpc = new ec2.Vpc(this, "MyVPC");

const myCluster = new ecs.Cluster(this, "MyCluster", {
  vpc: myVpc
});

const myQueueProcessingService = new ecs_patterns.QueueProcessingFargateService(
  this, "MyQueueProcessingService", {
    cluster: myCluster,
    memoryLimitMiB: 512,
    image: ecs.ContainerImage.fromAsset("my-queue-consumer")
  });

const myFunction = new lambda.Function(
  this, "MyFrontendFunction", {
    runtime: lambda.Runtime.NODEJS_10_X,
    timeout: Duration.seconds(3),
    handler: "index.handler",
    code: lambda.Code.asset("my-front-end"),
    environment: {
      QUEUE_NAME: myQueueProcessingService.sqsQueue.queueName
    }
  });

myQueueProcessingService.sqsQueue.grantSendMessages(myFunction);

const myApi = new apigateway.LambdaRestApi(
  this, "MyFrontendApi", {
    handler: myFunction
  });

I find this code much more readable and easier to maintain than the corresponding JSON or YAML. By the way, cdk synth in this case outputs more than 800 lines of plain CloudFormation YAML.
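To give an idea of the application code that goes with this stack, here is a hedged sketch of what the my-front-end handler might look like (the event shape assumes the API Gateway proxy integration, and looking up the queue URL by name is just one way to do it):

// my-front-end/index.ts – compiled to index.js for the NODEJS_10_X runtime
import * as AWS from 'aws-sdk';

const sqs = new AWS.SQS();

export const handler = async (event: any) => {
  // The stack passes the queue name in through an environment variable
  const queueName = process.env.QUEUE_NAME!;

  // Resolve the queue URL from its name, then enqueue the request body
  const { QueueUrl } = await sqs.getQueueUrl({ QueueName: queueName }).promise();
  await sqs.sendMessage({
    QueueUrl: QueueUrl!,
    MessageBody: event.body || '{}'
  }).promise();

  return { statusCode: 202, body: 'queued' };
};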

An example in Python
For the second project I am using Python:

cdk init app --language=python

I want to build a Lambda function that is executed every 10 minutes:

When you initialize a CDK project in Python, a virtualenv is set up for you. You can activate the virtualenv and install your project requirements with:

source .env/bin/activate

pip install -r requirements.txt

Note that Python autocompletion may not work with some editors, like Visual Studio Code, if you don’t start the editor from an active virtualenv.

Inside the stack, here’s the Python code defining the Lambda function and the CloudWatch Event rule to invoke the function periodically as target:

myFunction = aws_lambda.Function(
    self, "MyPeriodicFunction",
    code=aws_lambda.Code.asset("src"),
    handler="index.main",
    timeout=core.Duration.seconds(30),
    runtime=aws_lambda.Runtime.PYTHON_3_7,
)

myRule = aws_events.Rule(
    self, "MyRule",
    schedule=aws_events.Schedule.rate(core.Duration.minutes(10)),
)
myRule.add_target(aws_events_targets.LambdaFunction(myFunction))

Again, this is easy to understand even if you don’t know the details of AWS CDK. For example, durations include the time unit and you don’t have to wonder if they are expressed in seconds, milliseconds, or days. The output of cdk synth in this case is more than 90 lines of plain CloudFormation YAML.

Available Now
There is no charge for using the AWS CDK; you pay only for the AWS resources that it deploys.

To quickly get hands-on with the CDK, start with this awesome step-by-step online tutorial!

More examples of CDK projects, using different programming languages, are available in this repository:

https://github.com/aws-samples/aws-cdk-examples

You can find more information on writing your own constructs here.

The AWS CDK is open source and we welcome your contribution to make it an even better tool:

https://github.com/awslabs/aws-cdk

Check out our source code on GitHub, start building your infrastructure today using TypeScript or Python, or try different languages in developer preview, such as C# and Java, and give us your feedback!

Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications


Many AWS customers also make great use of SaaS (Software as a Service) applications. For example, they use Zendesk to manage customer service & support tickets, PagerDuty to handle incident response, and SignalFx for real-time monitoring. While these applications are quite powerful on their own, they are even more so when integrated into a customer’s own systems, databases, and workflows.

New Amazon EventBridge
In order to support this increasingly common use case, we are launching Amazon EventBridge today. Building on the powerful event processing model that forms the basis for CloudWatch Events, EventBridge makes it easy for our customers to integrate their own AWS applications with SaaS applications. The SaaS applications can be hosted anywhere, and simply publish events to an event bus that is specific to each AWS customer. The asynchronous, event-based model is fast, clean, and easy to use. The publisher (SaaS application) and the consumer (code running on AWS) are completely decoupled, and are not dependent on a shared communication protocol, runtime environment, or programming language. You can use simple Lambda functions to handle events that come from a SaaS application, and you can also route events to a wide variety of other AWS targets. You can store incident or ticket data in Amazon Redshift, train a machine learning model on customer support queries, and much more.

Everything that you already know (and hopefully love) about CloudWatch Events continues to apply, with one important change. In addition to the existing default event bus that accepts events from AWS services, from calls to PutEvents, and from other authorized accounts, each partner application that you subscribe to will also create an event source that you can then associate with an event bus in your AWS account. You can select any of your event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.

As part of today’s launch we are also opening up a partner program. The integration process is simple and straightforward, and generally requires less than one week of developer time.

All About Amazon EventBridge
Here are some terms that you need to know in order to understand how to use Amazon EventBridge:

Partner – An organization that has integrated their SaaS application with EventBridge.

Customer – An organization that uses AWS, and that has subscribed to a partner’s SaaS application.

Partner Name – A unique name that identifies an Amazon EventBridge partner.

Partner Event Bus – An Event Bus that is used to deliver events from a partner to AWS.

EventBridge can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), or via the AWS SDKs. There are distinct commands and APIs for partners and for customers. Here are some of the most important ones:

Partners – CreatePartnerEventSource, ListPartnerEventSourceAccounts, ListPartnerEventSources, PutPartnerEvents.

Customers – ListEventSources, ActivateEventSource, CreateEventBus, ListEventBuses, PutRule, PutTargets.

Amazon EventBridge for Partners & Customers
As I noted earlier, the integration process is simple and straightforward. You need to allow your customers to enter an AWS account number and to select an AWS region. With that information in hand, you call CreatePartnerEventSource in the desired region, inform the customer of the event source name and tell them that they can accept the invitation to connect, and wait for the status of the event source to change to ACTIVE. Then, each time an event of interest to the customer occurs, you call PutPartnerEvents and reference the event source.

The process is just as simple on the customer side. You accept the invitation to connect by calling CreateEventBus to create an event bus associated with the event source. You add rules and targets to the event bus, and prepare your Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events. You can use DeActivateEventSource and ActivateEventSource to control the flow.
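To make the customer side a bit more concrete, here is a hedged sketch using the AWS SDK for JavaScript (the partner event source name, rule name, and Lambda ARN are placeholders I made up; check the EventBridge API reference for the full parameter set):

import * as AWS from 'aws-sdk';

const eventbridge = new AWS.EventBridge({ region: 'us-east-1' });

async function connectPartnerEventSource() {
  // Placeholder partner event source name, as shared by the partner
  const sourceName = 'aws.partner/example.com/123456789012/my-app';

  // Accepting the invitation: create an event bus whose name matches the event source
  await eventbridge.createEventBus({
    Name: sourceName,
    EventSourceName: sourceName
  }).promise();

  // Route every event from this partner source to a Lambda function (placeholder ARN);
  // the function also needs a resource policy that lets events.amazonaws.com invoke it.
  await eventbridge.putRule({
    Name: 'my-partner-rule',
    EventBusName: sourceName,
    EventPattern: JSON.stringify({ source: [sourceName] }),
    State: 'ENABLED'
  }).promise();

  await eventbridge.putTargets({
    Rule: 'my-partner-rule',
    EventBusName: sourceName,
    Targets: [{
      Id: 'process-partner-events',
      Arn: 'arn:aws:lambda:us-east-1:123456789012:function:ProcessPartnerEvent'
    }]
  }).promise();
}

connectPartnerEventSource().catch(console.error);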

Here’s the overall flow (diagram created using SequenceDiagram):

Each partner has the freedom to choose the events that are relevant to their application, and to define the data elements that are included with each event.

Using EventBridge
Starting from the EventBridge Console, I click Partner event sources, find the partner of interest, and click it to learn more:

Each partner page contains additional information about the integration. I read the info, and click Set up to proceed:

The page provides me with a simple, three-step procedure to set up my event source:


After the partner creates the event source, I return to Partner event sources and I can see that the Zendesk event source is Pending:

I click the pending event source, review the details, and then click Associate with event bus:

I have the option to allow other AWS accounts, my Organization, or another Organization to access events on the event bus that I am about to create. After I have confirmed that I trust the origin and have added any additional permissions, I click Associate:

My new event bus is now available, and is listed as a Custom event bus:

I click Rules, select the event bus, and see the rules (none so far) associated with it. Then I click Create rule to make my first rule:

I enter a name and a description for my first rule:

Then I define a pattern, choosing Zendesk as the Service name:

Next, I select a Lambda function as my target:

I can also choose from many other targets:

After I create my rule, it will be activated in response to activities that occur within my Zendesk account. The initial set of events includes TicketCreated, CommentCreated, TagsChanged, AgentAssignmentChanged, GroupAssignmentChanged, FollowersChanged, EmailCCsChanged, CustomFieldChanged, and StatusChanged. Each event includes a rich set of properties; you’ll need to consult the documentation to learn more.
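For example, a rule that only reacts to new tickets could use a pattern along these lines (the event source name and detail-type string are illustrative placeholders; check the partner documentation for the exact values):

{
  "source": ["aws.partner/zendesk.com/123456789012/default"],
  "detail-type": ["TicketCreated"]
}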

Partner Event Sources
We are launching with ten partner event sources, with more to come:

  • Datadog
  • Zendesk
  • PagerDuty
  • Whispir
  • Saviynt
  • Segment
  • SignalFx
  • SugarCRM
  • OneLogin
  • Symantec

If you have a SaaS application and you are ready to integrate, read more about EventBridge Partner Integration.

Now Available
Amazon EventBridge is available now and you can start using it today in all public AWS regions in the aws partition. Support for the AWS regions in China, and for the Asia Pacific (Osaka) Local Region, is in the works.

Pricing is based on the number of events published to the event buses in your account, billed at $1 for every million events. There is no charge for events published by AWS services.

Jeff;

PS – As you can see from this post, we are paying even more attention to the overall AWS event model, and have a lot of interesting goodies on the drawing board. With this launch, CloudWatch Events has effectively earned a promotion to a top-level service, and I’ll have a lot more to say about that in the future!

Amazon Aurora PostgreSQL Serverless – Now Generally Available


The database is usually the most critical part of a software architecture and managing databases, especially relational ones, has never been easy. For this reason, we created Amazon Aurora Serverless, an auto-scaling version of Amazon Aurora that automatically starts up, shuts down and scales up or down based on your application workload.

The MySQL-compatible edition of Aurora Serverless has been available for some time now. I am pleased to announce that the PostgreSQL-compatible edition of Aurora Serverless is generally available today.

Before moving on with details, I take the opportunity to congratulate the Amazon Aurora development team that has just won the 2019 Association for Computing Machinery’s (ACM) Special Interest Group on Management of Data (SIGMOD) Systems Award!

When you create a database with Aurora Serverless, you set the minimum and maximum capacity. Your client applications transparently connect to a proxy fleet that routes the workload to a pool of resources that are automatically scaled. Scaling is very fast because resources are “warm” and ready to be added to serve your requests.

 

Aurora Serverless does not change how Aurora manages storage. The storage layer is independent of the compute resources used by the database, and there is no need to provision storage in advance. The minimum storage is 10 GB and, based on database usage, Aurora storage automatically grows, up to 64 TB, in 10 GB increments with no impact on database performance.

Creating an Aurora Serverless PostgreSQL Database
Let’s start an Aurora Serverless PostgreSQL database and see the automatic scalability at work. From the Amazon RDS console, I choose to create a database using Amazon Aurora as the engine. Currently, Aurora Serverless is compatible with PostgreSQL version 10.5. When I select that version, the serverless option becomes available.

I give the new DB cluster an identifier, choose my master username, and let Amazon RDS generate a password for me. I will be able to retrieve my credentials during database creation.

I can now select the minimum and maximum capacity for my database, in terms of Aurora Capacity Units (ACUs), and in the additional scaling configuration I choose to pause compute capacity after 5 minutes of inactivity. Based on my settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory.
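The same configuration can also be expressed through the RDS API. Here is a hedged sketch using the AWS SDK for JavaScript (the cluster identifier and credentials are placeholders, and you may also need to set a serverless-compatible EngineVersion and networking options for your account):

import * as AWS from 'aws-sdk';

const rds = new AWS.RDS({ region: 'us-east-1' });

async function createServerlessCluster() {
  await rds.createDBCluster({
    DBClusterIdentifier: 'my-serverless-postgres',    // placeholder identifier
    Engine: 'aurora-postgresql',
    EngineMode: 'serverless',
    MasterUsername: 'postgres',
    MasterUserPassword: 'REPLACE_WITH_A_SECRET',      // placeholder – use a generated secret in practice
    ScalingConfiguration: {
      MinCapacity: 2,                 // minimum Aurora Capacity Units
      MaxCapacity: 16,                // maximum Aurora Capacity Units
      AutoPause: true,                // pause compute capacity when idle
      SecondsUntilAutoPause: 300      // after 5 minutes of inactivity
    }
  }).promise();
}

createServerlessCluster().catch(console.error);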

Testing Some Load on the Database
To generate some load on the database I am using sysbench on an EC2 instance. There are a couple of Lua scripts bundled with sysbench that can help generate an online transaction processing (OLTP) workload:

  • The first script, parallel_prepare.lua, generates 100,000 rows per table for 24 tables.
  • The second script, oltp.lua, generates workload against those data using 64 worker threads.

By using those scripts, I start generating load on my database cluster. As you can see from this graph, taken from the RDS console monitoring tab, the serverless database capacity grows and shrinks to follow my requirements. The metric shown on this graph is the number of ACUs used by the database cluster. First it scales up to accommodate the sysbench workload. When I stop the load generator, it scales down and then pauses.

Available Now
Aurora Serverless PostgreSQL is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). With Aurora Serverless, you pay on a per-second basis for the database capacity you use when the database is active, plus the usual Aurora storage costs.

For more information on Amazon Aurora, I recommend this great post explaining why and how it was created:

Amazon Aurora ascendant: How we designed a cloud-native relational database

It’s never been so easy to use a relational database in production. I am so excited to see what you are going to use it for!

OpsCenter – A New Feature to Streamline IT Operations


The AWS teams are always listening to customers and trying to understand how they can improve services to make customers more productive. A new feature in AWS Systems Manager called OpsCenter exemplifies this approach by enabling customers to aggregate issues, events, and alerts across services, so they can go to one place to view, investigate, and remediate issues, reducing the need to navigate across multiple AWS services.

Issues, events, and alerts appear as operations items (OpsItems) in this new console and provide contextual information, historical guidance, and quick solution steps. The feature aims to reduce the mean time to resolution, making engineers more productive by ensuring key investigation data is available in one place.

Engineers working on an OpsItem get access to information such as:

  • Event, resource and account details
  • Past OpsItems with similar characteristics
  • Related AWS Config changes and relationships
  • AWS CloudTrail logs
  • Amazon CloudWatch alarms
  • AWS CloudFormation Stack information
  • Other quick-links to access logs and metrics
  • List of runbooks and recommended runbooks
  • Additional information passed to OpsCenter through AWS services

This information helps engineers to investigate and remediate operational issues faster. Engineers can use OpsCenter to view and address problems using the Systems Manager console or via the Systems Manager OpsCenter APIs.

I’ll spend the rest of this blog exploring the capabilities of this new feature. To get started, I open the Systems Manager Console, make sure that I am in the region of interest, and click OpsCenter inside the Operations Management menu which is on the far left of the screen.

After arriving at the OpsCenter screen for the first time and clicking on “Getting Started” I am prompted with a configure sources screen. This screen sets up the system with some example CloudWatch rules that will create OpsItems when they trigger. For example, one of the CloudWatch rules will alert if an AutoScaling EC2 instance is stopped or terminated. On this screen, you need to configure and add the ARN of an IAM role that has permission to create OpsItems. This security role is used by the CloudWatch rules to create the OpsItems. You can, of course, create your OpsItems through the API or by creating custom CloudWatch rules.
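As a hedged sketch of the API route, here is what creating an OpsItem from your own tooling could look like with the AWS SDK for JavaScript (the title, source, and operational data values are placeholders I invented):

import * as AWS from 'aws-sdk';

const ssm = new AWS.SSM({ region: 'us-east-1' });

async function createCustomOpsItem() {
  const response = await ssm.createOpsItem({
    Title: 'Auto Scaling group below desired capacity',        // placeholder title
    Source: 'my-monitoring-tool',                               // placeholder source name
    Description: 'BookingsAppASG is running 2 of 4 desired instances.',
    Priority: 2,
    // Optional structured context that shows up in the OpsItem's operational data
    OperationalData: {
      asgName: { Value: 'BookingsAppASG', Type: 'String' }
    }
  }).promise();

  console.log('Created OpsItem:', response.OpsItemId);
}

createCustomOpsItem().catch(console.error);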

Now that the system has set up some CloudWatch rules for me, I thought I would test it out by triggering an alert. In the EC2 console, I will intentionally deregister (delete) the Amazon Machine Image that is associated with my AutoScaling Group. I will then increase the Desired Capacity of my AutoScaling group from 2 to 4. The AutoScaling group will later try to create new instances; however, it will fail because I have deleted the AMI.

As I expected this triggered the CloudWatch rule to create an OpsItem in OpsCenter console. There is now one item open in the OpsItem status summary dashboard. I click on this to get more detail on the open OpsItems.

This gives me a list of all the open OpsItems, and I can see that I have one with the title “Auto Scaling EC2 instance launch failed”, which was created by the CloudWatch rule because I deleted the AMI associated with the AutoScaling group. Clicking on that OpsItem takes me to its details.

From this overview screen I can start to explore the item. Looking around, I can find out more information about this OpsItem and see that it is collecting data from numerous services and presenting it in one place.

Further down the screen I can see other Similar OpsItems and can explore them. In a real situation, this might give me contextual information as to how similar problems were solved in the past, ensuring that operations teams learn from their previous collective experience. I can also manually add a relationship between OpsItems if they are connected. Importantly the Operational data section gives me information about the cause. The status message is particularly useful since it’s calling out the issue: that the AMI does not exist.

On the related resources details screen, I can find out more information about this OpsItem. For example, I can see tag information about the resources alongside relevant CloudWatch alarms. I can explore details from AWS config as well as drill into CloudTrail logs. I can even see if the resources are associated with any CloudFormation stacks.

Earlier on, I created a CloudWatch alarm that will alert when the number of instances on my AutoScaling group falls below the desired instance threshold (4 Instances). As you can see, I don’t need to go into the CloudWatch console to view this, I can see right from this screen that I have an Alarm State for Booking App Instance Count Low.

The Runbooks section is fascinating; it offers automated ways to resolve this issue. There are several built-in runbooks; however, I have a custom one which, luckily enough, automates the fix for this exact problem. It will create a new AMI based upon one of the healthy instances in my AutoScaling Group and then update the configuration to use that new AMI when it creates instances. To run this automation, I select the runbook and press Execute.

It asks me to provide some parameters for the automation job. I paste the AutoScaling Group Name (BookingsAppASG) as the only required parameter and press Execute.

After a minute or so a green success signifier appears in the Latest Status column of the runbook and I am now able to view the logs and even save the output to operation data on the OpsItem so that other engineers can clearly see what I have done.

 

Back in the OpsCenter OpsItem related resource details screen, I can now see that my CloudWatch alarm is green and in an OK state, signifying that my AutoScaling group currently has four instances running and I am safe to resolve the OpsItem.

This feature is available now, and you can start using it today in all public AWS Regions, so why not open up the console and start exploring all the ways it can save you and your team valuable time?

EC2 Instance Update – Two More Sizes of M5 & R5 Instances


When I introduced the Nitro system last year I said:

The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Today I am happy to make good on that promise, with the introduction of two additional sizes of the Intel and AMD-powered M5 and R5 instances, including optional NVMe storage. These additional sizes will make it easier for you to find an instance size that is a perfect match for your workload.

M5 Instances
These instances are designed for general-purpose workloads such as web servers, app servers, dev/test environments, gaming, logging, and media processing. Here are the specs:

Instance Name | vCPUs | RAM | Storage | EBS-Optimized Bandwidth | Network Bandwidth
m5.8xlarge | 32 | 128 GiB | EBS Only | 5 Gbps | 10 Gbps
m5.16xlarge | 64 | 256 GiB | EBS Only | 10 Gbps | 20 Gbps
m5a.8xlarge | 32 | 128 GiB | EBS Only | 3.5 Gbps | Up to 10 Gbps
m5a.16xlarge | 64 | 256 GiB | EBS Only | 7 Gbps | 12 Gbps
m5d.8xlarge | 32 | 128 GiB | 2 x 600 GB NVMe SSD | 5 Gbps | 10 Gbps
m5d.16xlarge | 64 | 256 GiB | 4 x 600 GB NVMe SSD | 10 Gbps | 20 Gbps

If you are currently using m4.10xlarge or m4.16xlarge instances, you now have several upgrade paths.

To learn more, read M5 – The Next Generation of General-Purpose EC2 Instances, New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances, and EC2 Instance Update – M5 Instances with Local NVMe Storage.

R5 Instances
These instances are designed for data mining, in-memory analytics, caching, simulations, and other memory-intensive workloads. Here are the specs:

Instance Name | vCPUs | RAM | Storage | EBS-Optimized Bandwidth | Network Bandwidth
r5.8xlarge | 32 | 256 GiB | EBS Only | 5 Gbps | 10 Gbps
r5.16xlarge | 64 | 512 GiB | EBS Only | 10 Gbps | 20 Gbps
r5a.8xlarge | 32 | 256 GiB | EBS Only | 3.5 Gbps | Up to 10 Gbps
r5a.16xlarge | 64 | 512 GiB | EBS Only | 7 Gbps | 12 Gbps
r5d.8xlarge | 32 | 256 GiB | 2 x 600 GB NVMe SSD | 5 Gbps | 10 Gbps
r5d.16xlarge | 64 | 512 GiB | 4 x 600 GB NVMe SSD | 10 Gbps | 20 Gbps

If you are currently using r4.8xlarge or r4.16xlarge instances, you now have several easy and powerful upgrade paths.

To learn more, read Amazon EC2 Instance Update – Faster Processors and More Memory.

Things to Know
Here are a couple of things to keep in mind when you use these new instances:

Processor Choice – You can choose between Intel and AMD EPYC processors (instance names include an “a”). Read my post, New Lower-Cost AMD-Powered M5a and R5a EC2 Instances, to learn more.

AMIs – You can use the same AMIs that you use with your existing M5 and R5 instances.

Regions – The new sizes are available in all AWS Regions where the existing sizes are already available.

Local NVMe Storage – On “d” instances with local NVMe storage, the devices are encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated. The local devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

Available Now
The new sizes are available in On-Demand, Spot, and Reserved Instance form and you can start using them today!

Jeff;

 

New – VPC Traffic Mirroring – Capture & Inspect Network Traffic


Running a complex network is not an easy job. In addition to simply keeping it up and running, you need to keep an ever-watchful eye out for unusual traffic patterns or content that could signify a network intrusion, a compromised instance, or some other anomaly.

VPC Traffic Mirroring
Today we are launching VPC Traffic Mirroring. This is a new feature that you can use with your existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale. This will allow you to:

Detect Network & Security Anomalies – You can extract traffic of interest from any workload in a VPC and route it to the detection tools of your choice. You can detect and respond to attacks more quickly than is possible with traditional log-based tools.

Gain Operational Insights – You can use VPC Traffic Mirroring to get the network visibility and control that will let you make security decisions that are better informed.

Implement Compliance & Security Controls – You can meet regulatory & compliance requirements that mandate monitoring, logging, and so forth.

Troubleshoot Issues – You can mirror application traffic internally for testing and troubleshooting. You can analyze traffic patterns and proactively locate choke points that will impair the performance of your applications.

You can think of VPC Traffic Mirroring as a “virtual fiber tap” that gives you direct access to the network packets flowing through your VPC. As you will soon see, you can choose to capture all traffic or you can use filters to capture the packets that are of particular interest to you, with an option to limit the number of bytes captured per packet. You can use VPC Traffic Mirroring in a multi-account AWS environment, capturing traffic from VPCs spread across many AWS accounts and then routing it to a central VPC for inspection.

You can mirror traffic from any EC2 instance that is powered by the AWS Nitro system (A1, C5, C5d, M5, M5a, M5d, R5, R5a, R5d, T3, and z1d as I write this).

Getting Started with VPC Traffic Mirroring
Let’s review the key elements of VPC Traffic Mirroring and then set it up:

Mirror Source – An AWS network resource that exists within a particular VPC, and that can be used as the source of traffic. VPC Traffic Mirroring supports the use of Elastic Network Interfaces (ENIs) as mirror sources.

Mirror Target – An ENI or Network Load Balancer that serves as a destination for the mirrored traffic. The target can be in the same AWS account as the Mirror Source, or in a different account for implementation of the central-VPC model that I mentioned above.

Mirror Filter – A specification of the inbound or outbound (with respect to the source) traffic that is to be captured (accepted) or skipped (rejected). The filter can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination. Rules are numbered, and processed in order within the scope of a particular Mirror Session.

Traffic Mirror Session – A connection between a mirror source and target that makes use of a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.

You can set this up using the VPC Console, EC2 CLI, or the EC2 API, with CloudFormation support in the works. I’ll use the Console.

I already have the ENIs that I will use as my mirror source and destination (in a real-world use case I would probably use an NLB as the destination):

The MirrorTestENI_Source and MirrorTestENI_Destination ENIs are already attached to suitable EC2 instances. I open the VPC Console and scroll down to the Traffic Mirroring items, then click Mirror Targets:

I click Create traffic mirror target:

I enter a name and description, choose the Network Interface target type, and select my ENI from the menu. I add a Blog tag to my target, as is my practice, and click Create:

My target is created and ready to use:

Now I click Mirror Filters and Create traffic mirror filter. I create a simple filter that captures inbound traffic on three ports (22, 80, and 443), and click Create:

Again, it is created and ready to use in seconds:

Next, I click Mirror Sessions and Create traffic mirror session. I create a session that uses MirrorTestENI_Source, MainTarget, and MyFilter, allow AWS to choose the VXLAN network identifier, and indicate that I want the entire packet mirrored:

And I am all set. Traffic from my mirror source that matches my filter is encapsulated as specified in RFC 7348 and delivered to my mirror target. I can then use tools like Suricata to capture, analyze, and visualize it.
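If you prefer to script the setup rather than use the console, the same steps map directly to the EC2 API. Here is a hedged sketch with the AWS SDK for JavaScript (the ENI IDs are placeholders, and I have reduced the filter to a single inbound TCP/443 rule):

import * as AWS from 'aws-sdk';

const ec2 = new AWS.EC2({ region: 'us-east-1' });

async function setUpTrafficMirroring() {
  // 1. Create the mirror target (placeholder destination ENI)
  const target = await ec2.createTrafficMirrorTarget({
    NetworkInterfaceId: 'eni-0destination0000000',
    Description: 'Mirror target for inspection instance'
  }).promise();

  // 2. Create a filter and one inbound rule that accepts TCP traffic on port 443
  const filter = await ec2.createTrafficMirrorFilter({
    Description: 'Capture inbound HTTPS'
  }).promise();
  const filterId = filter.TrafficMirrorFilter!.TrafficMirrorFilterId!;

  await ec2.createTrafficMirrorFilterRule({
    TrafficMirrorFilterId: filterId,
    TrafficDirection: 'ingress',
    RuleNumber: 100,
    RuleAction: 'accept',
    Protocol: 6,                         // TCP
    DestinationPortRange: { FromPort: 443, ToPort: 443 },
    SourceCidrBlock: '0.0.0.0/0',
    DestinationCidrBlock: '0.0.0.0/0'
  }).promise();

  // 3. Create the session from the source ENI (placeholder) to the target
  await ec2.createTrafficMirrorSession({
    NetworkInterfaceId: 'eni-0source00000000000',
    TrafficMirrorTargetId: target.TrafficMirrorTarget!.TrafficMirrorTargetId!,
    TrafficMirrorFilterId: filterId,
    SessionNumber: 1                     // sessions are evaluated in order
  }).promise();
}

setUpTrafficMirroring().catch(console.error);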

Things to Know
Here are a couple of things to keep in mind:

Sessions Per ENI – You can have up to three active sessions on each ENI.

Cross-VPC – The source and target ENIs can be in distinct VPCs as long as they are peered to each other or connected through Transit Gateway.

Scaling & HA – In most cases you should plan to mirror traffic to a Network Load Balancer and then run your capture & analysis tools on an Auto Scaled fleet of EC2 instances behind it.

Bandwidth – The replicated traffic generated by each instance will count against the overall bandwidth available to the instance. If traffic congestion occurs, mirrored traffic will be dropped first.

Now Available
VPC Traffic Mirroring is available now and you can start using it today in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia). Support for those regions will be added soon. You pay an hourly fee (starting at $0.015 per hour) for each mirror source; see the VPC Pricing page for more info.

Jeff;

 

AWS Security Hub Now Generally Available


I’m a developer, or at least that’s what I tell myself while coming to terms with being a manager. I’m definitely not an infosec expert. I’ve been paged more than once in my career because something I wrote or configured caused a security concern. When systems enable frequent deploys and remove gatekeepers for experimentation, sometimes a non-compliant resource is going to sneak by. That’s why I love tools like AWS Security Hub, a service that enables automated compliance checks and aggregated insights from a variety of services. With guardrails like these in place to make sure things stay on track, I can experiment more confidently. And with a single place to view compliance findings from multiple systems, infosec feels better about letting me self-serve.

With cloud computing, we have a shared responsibility model when it comes to compliance and security. AWS handles the security of the cloud: everything from the security of our data centers up to the virtualization layer and host operating system. Customers handle security in the cloud: the guest operating system, configuration of systems, and secure software development practices.

Today, AWS Security Hub is out of preview and available for general use to help you understand the state of your security in the cloud. It works across AWS accounts and integrates with many AWS services and third-party products. You can also use the Security Hub API to create your own integrations.

Getting Started

When you enable AWS Security Hub, permissions are automatically created via IAM service-linked roles. Automated, continuous compliance checks begin right away. Compliance standards determine these compliance checks and rules. The first compliance standard available is the Center for Internet Security (CIS) AWS Foundations Benchmark. We’ll add more standards this year.

The results of these compliance checks are called findings. Each finding tells you the severity of the issue, which system reported it, which resources it affects, and a lot of other useful metadata. For example, you might see a finding that lets you know that multi-factor authentication should be enabled for a root account, or that there are credentials that haven’t been used for 90 days that should be revoked.

Findings can be grouped into insights using aggregation statements and filters.
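For example, here is a hedged sketch of pulling active findings from failed compliance checks with the AWS SDK for JavaScript (these are the filter fields I would reach for; see the GetFindings API reference for the full list):

import * as AWS from 'aws-sdk';

const securityhub = new AWS.SecurityHub({ region: 'us-east-1' });

async function listFailedComplianceFindings() {
  // Pull only active findings whose compliance checks failed
  const response = await securityhub.getFindings({
    Filters: {
      ComplianceStatus: [{ Value: 'FAILED', Comparison: 'EQUALS' }],
      RecordState: [{ Value: 'ACTIVE', Comparison: 'EQUALS' }]
    },
    MaxResults: 10
  }).promise();

  for (const finding of response.Findings) {
    console.log(finding.Severity.Normalized, finding.Title);
  }
}

listFailedComplianceFindings().catch(console.error);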

Integrations

In addition to the Compliance standards findings, AWS Security Hub also aggregates and normalizes data from a variety of services. It is a central resource for findings from Amazon GuardDuty, Amazon Inspector, Amazon Macie, and 30 AWS partner security solutions.

AWS Security Hub also supports importing findings from custom or proprietary systems. Findings must be formatted as AWS Security Finding Format JSON objects. Here’s an example of an object I created that meets the minimum requirements for the format. To make it work for your account, switch out the AwsAccountId and the ProductArn. To get your ProductArn for custom findings, replace REGION and ACCOUNT_ID in the following string: arn:aws:securityhub:REGION:ACCOUNT_ID:product/ACCOUNT_ID/default.

{
    "Findings": [{
        "AwsAccountId": "12345678912",
        "CreatedAt": "2019-06-13T22:22:58Z",
        "Description": "This is a custom finding from the API",
        "GeneratorId": "api-test",
        "Id": "us-east-1/12345678912/98aebb2207407c87f51e89943f12b1ef",
        "ProductArn": "arn:aws:securityhub:us-east-1:12345678912:product/12345678912/default",
        "Resources": [{
            "Type": "Other",
            "Id": "i-decafbad"
        }],
        "SchemaVersion": "2018-10-08",
        "Severity": {
            "Product": 2.5,
            "Normalized": 11
        },
        "Title": "Security Finding from Custom Software",
        "Types": [
            "Software and Configuration Checks/Vulnerabilities/CVE"
        ],
        "UpdatedAt": "2019-06-13T22:22:58Z"
    }]
}

Then I wrote a quick node.js script that I named importFindings.js to read this JSON file and send it off to AWS Security Hub via the AWS JavaScript SDK.

const fs    = require('fs');        // For file system interactions
const util  = require('util');      // To wrap fs API with promises
const AWS   = require('aws-sdk');   // Load the AWS SDK

AWS.config.update({region: 'us-east-1'});

// Create our Security Hub client
const sh = new AWS.SecurityHub();

// Wrap readFile so it returns a promise and can be awaited 
const readFile = util.promisify(fs.readFile);

async function getFindings(path) {
    try {
        // wait for the file to be read...
        let fileData = await readFile(path);

        // ...then parse it as JSON and return it
        return JSON.parse(fileData);
    }
    catch (error) {
        console.error(error);
    }
}

async function importFindings() {
    // load the findings from our file
    const findings = await getFindings('./findings.json');

    try {
        // call the AWS Security Hub BatchImportFindings endpoint
        const response = await sh.batchImportFindings(findings).promise();
        console.log(response);
    }
    catch (error) {
        console.error(error);
    }
}

// Engage!
importFindings();

A quick run of node importFindings.js results in { FailedCount: 0, SuccessCount: 1, FailedFindings: [] }. And now I can see my custom finding in the Security Hub console:

Custom Actions

AWS Security Hub can integrate with response and remediation workflows through the use of custom actions. With custom actions, a batch of selected findings is used to generate CloudWatch events. With CloudWatch Rules, these events can trigger other actions such as sending notifications via a chat system or paging tool, or sending events to a visualization service.

First, we open Settings from the AWS Security Hub console, and select Custom Actions. Add a custom action and note the ARN.

Then we create a CloudWatch Rule using the custom action we created as a resource in the event pattern, like this:

{
  "source": [
    "aws.securityhub"
  ],
  "detail-type": [
    "Security Hub Findings - Custom Action"
  ],
  "resources": [
    "arn:aws:securityhub:us-west-2:123456789012:action/custom/DoThing"
  ]
}

Our CloudWatch Rule can have many different kinds of targets, such as Amazon Simple Notification Service (SNS) Topics, Amazon Simple Queue Service (SQS) Queues, and AWS Lambda functions. Once our action and rule are in place, we can select findings, and then choose our action from the Actions dropdown list. This will send the selected findings to Amazon CloudWatch Events. Those events will match our rule, and the event targets will be invoked.

Important Notes

  • AWS Config must be enabled for Security Hub compliance checks to run.
  • AWS Security Hub is available in 15 regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Paris), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai).
  • AWS Security Hub does not transfer data outside of the regions where it was generated. Data is not consolidated across multiple regions.

AWS Security Hub is already the type of service that I’ll enable on the majority of the AWS accounts I operate. As more compliance standards become available this year, I expect it will become a standard tool in many toolboxes. A 30-day free trial is available so you can try it out and get an estimate of what your costs would be. As always, we want to hear your feedback and understand how you’re using AWS Security Hub. Stay in touch, and happy building!

— Brandon