CloudWatch Metric Streams – Send AWS Metrics to Partners and to Your Apps in Real Time

This post was originally published on this site

When we launched Amazon CloudWatch back in 2009 (New Features for Amazon EC2: Elastic Load Balancing, Auto Scaling, and Amazon CloudWatch), it tracked performance metrics (CPU load, Disk I/O, and network I/O) for EC2 instances, rolled them up at one-minute intervals, and stored them for two weeks. At that time it was used to monitor instance health and to drive Auto Scaling. Today, CloudWatch is a far more comprehensive and sophisticated service. Some of the most recent additions include metrics with 1-minute granularity for all EBS volume types, CloudWatch Lambda Insights, and the Metrics Explorer.

AWS Partners have used CloudWatch metrics to create all sorts of monitoring, alerting, and cost management tools. In order to access the metrics, the partners created polling fleets that call the ListMetrics and GetMetricData functions for each of their customers.

These fleets must scale in proportion to the number of AWS resources created by each of the partners’ customers and the number of CloudWatch metrics that are retrieved for each resource. This polling is simply undifferentiated heavy lifting that each partner must do. It adds no value, and takes precious time that could be better invested in other ways.

New Metric Streams
CloudWatch Side Menu (Dashboards, Alarms, Metrics, and more)

In order to make it easier for AWS Partners and others to gain access to CloudWatch metrics faster and at scale, we are launching CloudWatch Metric Streams. Instead of polling (which can result in 5 to 10 minutes of latency), metrics are delivered to a Kinesis Data Firehose stream. This is highly scalable and far more efficient, and supports two important use cases:

Partner Services – You can stream metrics to a Kinesis Data Firehose that writes data to an endpoint owned by an AWS Partner. This allows partners to scale down their polling fleets substantially, and lets them build tools that can respond more quickly when key cost or performance metrics change in unexpected ways.

Data Lake – You can stream metrics to a Kinesis Data Firehose of your own. From there you can apply any desired data transformations, and then push the metrics into Amazon Simple Storage Service (S3) or Amazon Redshift. You then have the full array of AWS analytics tools at your disposal: S3 Select, Amazon SageMaker, Amazon EMR, Amazon Athena, Amazon Kinesis Data Analytics, and more. Our customers do this to combine billing and performance data in order to measure & improve cost optimization, resource performance, and resource utilization.

CloudWatch Metric Streams are fully managed and very easy to set up. Streams can scale to handle any volume of metrics, with delivery to the destination within two or three minutes. You can choose to send all available metrics to each stream that you create, or you can opt in to any of the available AWS (EC2, S3, and so forth) or custom namespaces.

Once a stream has been set up, metrics start to flow within a minute or two. The flow can be stopped and restarted later if necessary, which can be handy for testing and debugging. When you set up a stream, you choose between the binary OpenTelemetry 0.7 format and the human-readable JSON format.

Each Metric Stream resides in a particular AWS region and delivers metrics to a single destination. If you want to deliver metrics to multiple partners, you will need to create a Metric Stream for each one. If you are creating a centralized data lake that spans multiple AWS accounts and/or regions, you will need to set up some IAM roles (see Controlling Access with Amazon Kinesis Data Firehose for more information).

Creating a Metric Stream
Let’s take a look at two ways to use a Metric Stream. First, I will use the Quick S3 setup option to send data to a Kinesis Data Firehose and from there to S3. Second, I will use a Firehose that writes to an endpoint at AWS Partner New Relic.

I open the CloudWatch Console, select the desired region, and click Streams in the left-side navigation. I review the page, and click Create stream to proceed:

CloudWatch Metric Streams Home Page in Console

I choose the metrics to stream. I can select All metrics and then exclude those that I don’t need, or I can click Selected namespaces and include those that I need. I’ll go for All, but exclude Firehose:

Create a Metric Stream - Part 1, All or Selected

I select Quick S3 setup, and leave the other configuration settings in this section unchanged (I expanded it so that you could see all of the options that are available to you):

Create a Metric Stream - Options for S3 bucket and IAM roles

Then I enter a name (MyMetrics-US-East-1A) for my stream, confirm that I understand the resources that will be created, and click Create metric stream:

Set name for Metric Stream and confirm resources to be created

My stream is created and active within seconds:

Metric Stream is Active

Objects begin to appear in the S3 bucket within a minute or two:

Objects appear in S3 Bucket

I can analyze my metrics using any of the tools that I listed above, or I can simply look at the raw data:

A single Metrics object, shown in NotePad++, JSON format
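If you want to explore the raw data from code, here is a minimal Python sketch that downloads one of the delivered objects and prints a few fields. The bucket and object key are placeholders, and it assumes the JSON output format, where each delivered object carries a batch of newline-delimited metric records (field names such as namespace and metric_name reflect my reading of the format, so adjust as needed):

import json
import boto3

s3 = boto3.client('s3')

# Placeholder bucket and key -- use the names that your Firehose delivery
# stream actually wrote to.
obj = s3.get_object(
    Bucket='my-metric-stream-bucket',
    Key='2021/03/31/20/MyMetrics-US-East-1A-example-object')

# Assumption: the JSON output format delivers newline-delimited records.
for line in obj['Body'].read().decode('utf-8').splitlines():
    if not line.strip():
        continue
    record = json.loads(line)
    print(record.get('namespace'), record.get('metric_name'), record.get('value'))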

Each Metric Stream generates its own set of CloudWatch metrics:

Metrics for the metrics stream

I can stop a running stream:

How to stop a running stream

And then start it:

How to start a stopped stream

I can also create a Metric Stream using a CloudFormation template. Here’s an excerpt:

AWSTemplateFormatVersion: 2010-09-09
Resources:
  MetricStream:
    Type: 'AWS::CloudWatch::MetricStream'
    Properties:
      OutputFormat: json
      RoleArn: !GetAtt
        - MetricStreamToFirehoseRole
        - Arn
      FirehoseArn: !GetAtt
        - FirehoseToS3Bucket
        - Arn
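If you would rather script the setup than click through the console, here is a minimal boto3 sketch of the same configuration; the Firehose delivery stream and IAM role ARNs are placeholders:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Placeholder ARNs -- substitute your own Firehose delivery stream and the
# IAM role that allows CloudWatch to write to it.
cloudwatch.put_metric_stream(
    Name='MyMetrics-US-East-1A',
    FirehoseArn='arn:aws:firehose:us-east-1:123456789012:deliverystream/MyFirehose',
    RoleArn='arn:aws:iam::123456789012:role/MyMetricStreamRole',
    OutputFormat='json',
    # Stream everything except the Firehose namespace, mirroring the walk-through above.
    ExcludeFilters=[{'Namespace': 'AWS/Firehose'}],
)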

Now let’s take a look at the Partner-style use case! The team at New Relic set me up with a CloudFormation template that created the necessary IAM roles and the Metric Stream. I simply entered my API key and an S3 bucket name and the template did all of the heavy lifting. Here’s what I saw:

Metrics displayed in the New Relic UI

Things to Know
And that’s about it! Here are a couple of things to keep in mind:

Regions – Metric Streams are now available in all commercial AWS Regions, excluding the AWS China (Beijing) Region and the AWS China (Ningxia) Region. As noted earlier, you will need to create a Metric Stream in each desired account and region (this is a great use case for CloudFormation StackSets).

Pricing – You pay $0.003 for every 1,000 metric updates, plus any charges associated with the Kinesis Data Firehose. To learn more, check out the pricing page.

Metrics – CloudWatch Metric Streams is compatible with all CloudWatch metrics, but does not send metrics that have a timestamp that is more than two hours old. This includes S3 daily storage metrics and some of the billing metrics.

Partner Services
We designed this feature with the goal of making it easier & more efficient for AWS Partners including Datadog, Dynatrace, New Relic, Splunk, and Sumo Logic to get access to metrics so that the partners can build even better tools. We’ve been working with these partners to help them get started with CloudWatch Metric Streams. Here are some of the blog posts that they wrote in order to share their experiences. (I am updating this article with links as they are published.)

Now Available
CloudWatch Metric Streams is available now and you can use it to stream metrics to a Kinesis Data Firehose of your own or to one owned by an AWS Partner. For more information, check out the documentation and send feedback to the AWS forum for Amazon CloudWatch.

Jeff;

Troubleshoot Boot and Networking Issues with New EC2 Serial Console

This post was originally published on this site

Fixing production issues is one of the key responsibilities of system and network administrators. In fact, I’ve always found it to be one of the most interesting parts of infrastructure engineering. Diving as deep as needed into the problem at hand, not only do you (eventually) have the satisfaction of solving the issue, you also learn a lot of things along the way, which you probably wouldn’t have been exposed to under normal circumstances.

Operating systems certainly present such opportunities. Over time, they’ve grown ever more complex, forcing administrators to master a zillion configuration files and settings. Although infrastructure as code and automation have greatly improved provisioning and managing servers, there’s always room for mistakes and breakdowns that prevent a system from starting correctly. The list is endless: missing hardware drivers, misconfigured file systems, invalid network configuration, incorrect permissions, and so on. To make things worse, many issues can effectively lock administrators out of a system, preventing them from logging in, diagnosing the problem and applying the appropriate fix. The only option is to have an out-of-band connection to your servers, and although customers could view the console output of an EC2 instance, they couldn’t interact with it – until now.

Today, I’m extremely happy to announce the EC2 Serial Console, a simple and secure way to troubleshoot boot and network connectivity issues by establishing a serial connection to your Amazon Elastic Compute Cloud (EC2) instances.

Introducing the EC2 Serial Console
EC2 Serial Console access is available for EC2 instances based on the AWS Nitro System. It supports all major Linux distributions, FreeBSD, NetBSD, Microsoft Windows, and VMware.

Without any need for a working network configuration, you can connect to an instance using either a browser-based shell in the AWS Management Console, or an SSH connection to a managed console server. No need for an sshd server to be running on your instance: the only requirement is that the root account has been assigned a password, as this is the one you will use to log in. Then, you can enter commands as if you had a keyboard and monitor directly attached to one of the instance’s serial ports.

In addition, you can trigger operating system specific procedures:

  • On Linux, you can trigger a Magic SysRq command to generate a crash dump, kill processes, and so on.
  • On Windows, you can interrupt the boot process, and boot in safe mode using Emergency Management Service (EMS) and Special Admin Console (SAC).

Getting access to an instance’s console is a privileged operation that should be tightly controlled, which is why EC2 Serial Console access is not permitted by default at the account level. Once you permit access in your account, it applies to all instances in this account. Administrators can also apply controls at the organization level thanks to Service Control Policies, and at instance level thanks to AWS Identity and Access Management (IAM) permissions. As you would expect, all communication with the EC2 Serial Console is encrypted, and we generate a unique key for each session.
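The account-level setting can also be managed programmatically. Here is a minimal boto3 sketch that checks the current status and then permits access; run it with credentials that are allowed to change this account-wide setting:

import boto3

ec2 = boto3.client('ec2')

# Check whether EC2 Serial Console access is currently allowed for the account.
status = ec2.get_serial_console_access_status()
print('Serial console access enabled:', status['SerialConsoleAccessEnabled'])

# Permit access for all instances in the account; use
# disable_serial_console_access() to turn it back off.
if not status['SerialConsoleAccessEnabled']:
    ec2.enable_serial_console_access()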

Let’s do a quick demo with Linux. The process is similar with other operating systems.

Connecting to the EC2 Serial Console with the AWS Management Console
First, I launch an Amazon Linux 2 instance. Logging in to it, I decide to mangle the network configuration for its Ethernet network interface (/etc/sysconfig/network-scripts/ifcfg-eth0), setting a completely bogus static IP address. PLEASE do not try this on a production instance!

Then, I reboot the instance. A few seconds later, although the instance is up and running in the EC2 console and port 22 is open in its Security Group, I’m unable to connect to it with SSH.

$ ssh -i ~/.ssh/mykey.pem ec2-user@ec2-3-238-8-46.compute-1.amazonaws.com
ssh: connect to host ec2-3-238-8-46.compute-1.amazonaws.com port 22: Operation timed out

EC2 Serial Console to the rescue!

First, I need to allow console access in my account. All it takes is ticking a box in the EC2 settings.

Enabling the console

Then, right-clicking on the instance’s name in the EC2 console, I select Monitor and troubleshoot, then EC2 Serial Console.

This opens a new window confirming the instance ID and the serial port number to connect to. I simply click Connect.

This opens a new tab in my browser. Hitting Enter, I see the familiar login prompt.

Amazon Linux 2
Kernel 4.14.225-168.357.amzn2.x86_64 on an x86_64
ip-172-31-67-148 login:

Logging in as root, I’m relieved to get a friendly shell prompt.

Enabling Magic SysRq for this session (sysctl -w kernel.sysrq=1), I first list available commands (CTRL-0 + h), and then ask for a memory report (CTRL-0 + m). You can click on the image below to get a larger view.

Connecting to the console

Pretty cool! This would definitely come in handy to troubleshoot complex issues. No need for this here: I quickly restore a valid configuration for the network interface, and I restart the network stack.

Trying to connect to the instance again, I can see that the problem is solved.

$ ssh -i ~/.ssh/mykey.pem ec2-user@ec2-3-238-8-46.compute-1.amazonaws.com

__|   __|_  )
_|   (    / Amazon Linux 2 AMI
___|___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-172-31-67-148 ~]$

Now, let me quickly show you the equivalent commands using the AWS command line interface.

Connecting to the EC2 Serial Console with the AWS CLI
This is equally simple. First, I send the SSH public key for the instance key pair to the serial console. Please make sure to add the file:// prefix.

$ aws ec2-instance-connect send-serial-console-ssh-public-key --instance-id i-003aecec198b537b0 --ssh-public-key file://~/.ssh/mykey.pub --serial-port 0 --region us-east-1

Then, I ssh to the serial console, using <instance id>.port<port number> as user name, and I’m greeted with a login prompt.

$ ssh -i ~/.ssh/mykey.pem i-003aecec198b537b0.port0@serial-console.ec2-instance-connect.us-east-1.aws

Amazon Linux 2
Kernel 4.14.225-168.357.amzn2.x86_64 on an x86_64
ip-172-31-67-148 login:

Once I’ve logged in, Magic SysRq is available, and I can trigger it with ~B+command. I can also terminate the console session with ~..
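If you prefer to push the one-time SSH key from code rather than the CLI, the same call is available in the SDKs. Here is a minimal boto3 sketch; the instance ID and key path are placeholders taken from the walk-through:

import boto3

client = boto3.client('ec2-instance-connect', region_name='us-east-1')

# Read the public half of the key pair used above (placeholder path).
with open('/home/me/.ssh/mykey.pub') as f:
    public_key = f.read()

# Push the key to serial port 0 of the instance; it is valid for a short
# window, during which you can open the SSH session shown above.
client.send_serial_console_ssh_public_key(
    InstanceId='i-003aecec198b537b0',
    SerialPort=0,
    SSHPublicKey=public_key,
)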

Get Started with EC2 Serial Console
As you can see, the EC2 Serial Console makes it much easier to debug and fix complex boot and network issues happening on your EC2 instances. You can start using it today in the following AWS regions, at no additional cost:

  • US East (N. Virginia), US West (Oregon), US East (Ohio)
  • Europe (Ireland), Europe (Frankfurt)
  • Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore)

Please give it a try, and let us know what you think. We’re always looking forward to your feedback! You can send it through your usual AWS Support contacts, or on the AWS Forum for Amazon EC2.

– Julien

 

 

Red Hat OpenShift Service on AWS Now GA

This post was originally published on this site

Our customers come to AWS with many different experiences and skill sets, and we are continually thinking about how we can make them feel at home on AWS. In this spirit, we are proud to launch Red Hat OpenShift Service on AWS (ROSA), which allows customers familiar with Red Hat OpenShift tooling and APIs to extend easily from their datacenter into AWS.

ROSA provides a fully managed OpenShift service with joint support from AWS and Red Hat. It has an AWS integrated experience for cluster creation, a consumption-based billing model, and a single invoice for AWS deployments.

We have built this service in partnership with Red Hat to ensure you have a fast and straightforward way to move some or all of your existing deployments over to AWS, confident in the knowledge that you can turn to both Red Hat and AWS for support.

If you’re already familiar with Red Hat OpenShift, you can accelerate your application development process by leveraging standard APIs and existing Red Hat OpenShift tools for your AWS deployments. You’ll also be able to take advantage of the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to improve customer agility, rate of innovation, and scalability.

To demonstrate how ROSA works, let me show you how I create a new cluster.

Building a Cluster With ROSA
First, I need to enable ROSA in my account. To do this, I head over to the console and click the Enable OpenShift button.

Screenshot of Red Hat OpenShift Console

Once enabled, I then check out the Getting Started section, which provides instructions on downloading and installing the ROSA CLI.

Once the CLI is installed, I run the verify command to ensure I have the necessary permissions:

rosa verify permissions

I already have my AWS credentials set up on my machine, so now I need to log into my Red Hat account. To do that, I issue the login command and follow the instructions on setting up a token.

rosa login

To confirm that I am logged into the correct account and all set up, I run the whoami command:

rosa whoami

This lists all my AWS and Red Hat account details and confirms what AWS Region I am currently in.

AWS Account ID:               4242424242
AWS Default Region:           us-east-2
AWS ARN:                      arn:aws:iam::4242424242:user/thebeebs
OCM API:                      https://api.openshift.com
OCM Account ID:               42hhgttgFTW
OCM Account Name:             Martin Beeby
OCM Account Username:         thebeebs@domain.com
OCM Account Email:            thebeebs@domain.com
OCM Organization ID:          42hhgttgFTW
OCM Organization Name:        AWS News Blog
OCM Organization External ID: 424242

To prepare my account for cluster deployment, I need a CloudFormation stack to be created in my account. This takes 1-2 minutes to complete, and I kick the process off by issuing the init command.

rosa init

Now that I am set up, I can create a cluster. It takes around 40 minutes for the cluster creation to complete.

rosa create cluster --cluster-name=newsblogcluster

To check the status of my cluster, I use the describe command. During cluster creation, the State field will transition from pending to installing, and finally reach ready.

rosa describe cluster newsblogcluster

I can now list all of my clusters using the list command.

rosa list clusters

My newly created cluster is now also visible within the Red Hat OpenShift Cluster Manager console. I can now configure an identity provider in the console, create admins, and generally administer the cluster.

To learn more about how to configure an identity provider and deploy applications, check out the documentation.

Available Now
Red Hat OpenShift Service on AWS is now available in GA in the following regions: Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), South America (São Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

Pricing
While this is a service jointly managed and supported by Red Hat and AWS, you will only receive a bill from AWS. Each AWS service supporting your cluster components and application requirements will be a separate billing line item just like it is currently, but now with the addition of your OpenShift subscription.

Our customers tell us that a consumption-based model is one of the main reasons they moved to the cloud in the first place. Consumption-based pricing allows them to experiment and fail fast, and customers have told us they want to align their Red Hat OpenShift licensing consumption with how they plan to operate in AWS. As a result, we provide both an hourly, pay-as-you-go model and annual commitments; check out the pricing page for more information.

To get started, create a cluster from the Managed Red Hat OpenShift service on the AWS Console. To learn more, visit the product page. Happy containerizing.

— Martin

Introducing Amazon S3 Object Lambda – Use Your Code to Process Data as It Is Being Retrieved from S3

This post was originally published on this site

When you store data in Amazon Simple Storage Service (S3), you can easily share it for use by multiple applications. However, each application has its own requirements and may need a different view of the data. For example, a dataset created by an e-commerce application may include personally identifiable information (PII) that is not needed when the same data is processed for analytics and should be redacted. On the other hand, if the same dataset is used for a marketing campaign, you may need to enrich the data with additional details, such as information from the customer loyalty database.

To provide different views of data to multiple applications, there are currently two options. You either create, store, and maintain additional derivative copies of the data, so that each application has its own custom dataset, or you build and manage infrastructure as a proxy layer in front of S3 to intercept and process data as it is requested. Both options add complexity and costs, so the S3 team decided to build a better solution.

Today, I’m very happy to announce the availability of S3 Object Lambda, a new capability that allows you to add your own code to process data retrieved from S3 before returning it to an application. S3 Object Lambda works with your existing applications and uses AWS Lambda functions to automatically process and transform your data as it is being retrieved from S3. The Lambda function is invoked inline with a standard S3 GET request, so you don’t need to change your application code.

In this way, you can easily present multiple views from the same dataset, and you can update the Lambda functions to modify these views at any time.

Architecture diagram.

There are many use cases that can be simplified by this approach, for example:

  • Redacting personally identifiable information for analytics or non-production environments.
  • Converting across data formats, such as converting XML to JSON.
  • Augmenting data with information from other services or databases.
  • Compressing or decompressing files as they are being downloaded.
  • Resizing and watermarking images on the fly using caller-specific details, such as the user who requested the object.
  • Implementing custom authorization rules to access data.

You can start using S3 Object Lambda with a few simple steps:

  1. Create a Lambda Function to transform data for your use case.
  2. Create an S3 Object Lambda Access Point from the S3 Management Console.
  3. Select the Lambda function that you created above.
  4. Provide a supporting S3 Access Point to give S3 Object Lambda access to the original object.
  5. Update your application configuration to use the new S3 Object Lambda Access Point to retrieve data from S3.

To get a better understanding of how S3 Object Lambda works, let’s put it in practice.

How to Create a Lambda Function for S3 Object Lambda
To create the function, I start by looking at the syntax of the input event the Lambda function receives from S3 Object Lambda:

{
    "xAmzRequestId": "1a5ed718-5f53-471d-b6fe-5cf62d88d02a",
    "getObjectContext": {
        "inputS3Url": "https://myap-123412341234.s3-accesspoint.us-east-1.amazonaws.com/s3.txt?X-Amz-Security-Token=...",
        "outputRoute": "io-iad-cell001",
        "outputToken": "..."
    },
    "configuration": {
        "accessPointArn": "arn:aws:s3-object-lambda:us-east-1:123412341234:accesspoint/myolap",
        "supportingAccessPointArn": "arn:aws:s3:us-east-1:123412341234:accesspoint/myap",
        "payload": "test"
    },
    "userRequest": {
        "url": "/s3.txt",
        "headers": {
            "Host": "myolap-123412341234.s3-object-lambda.us-east-1.amazonaws.com",
            "Accept-Encoding": "identity",
            "X-Amz-Content-SHA256": "e3b0c44297fc1c149afbf4c8995fb92427ae41e4649b934ca495991b7852b855"
        }
    },
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "...",
        "arn": "arn:aws:iam::123412341234:user/myuser",
        "accountId": "123412341234",
        "accessKeyId": "..."
    },
    "protocolVersion": "1.00"
}

The getObjectContext property contains some of the most useful information for the Lambda function:

  • The inputS3Url is a presigned URL that the function can use to download the original object from the supporting Access Point. In this way, the Lambda function doesn’t need to have S3 read permissions to retrieve the original object and can only access the object processed by each invocation.
  • The outputRoute and the outputToken are two parameters that are used to send back the modified object using the new WriteGetObjectResponse API.

The configuration property contains the Amazon Resource Name (ARN) of the Object Lambda Access Point and of the supporting Access Point.

The userRequest property gives more information about the original request, such as the path in the URL, and the HTTP headers.

Finally, the userIdentity section returns the details of who made the original request and can be used to customize access to the data.

Now that I know the syntax of the event, I can create the Lambda function. To keep things simple, here’s a function written in Python that changes all text in the original object to uppercase:

import boto3
import requests

def lambda_handler(event, context):
    print(event)

    object_get_context = event["getObjectContext"]
    request_route = object_get_context["outputRoute"]
    request_token = object_get_context["outputToken"]
    s3_url = object_get_context["inputS3Url"]

    # Get object from S3
    response = requests.get(s3_url)
    original_object = response.content.decode('utf-8')

    # Transform object
    transformed_object = original_object.upper()

    # Write object back to S3 Object Lambda
    s3 = boto3.client('s3')
    s3.write_get_object_response(
        Body=transformed_object,
        RequestRoute=request_route,
        RequestToken=request_token)

    return {'status_code': 200}

Looking at the code of the function, there are three main sections:

  • First, I use the inputS3Url property of the input event to download the original object. Since the value is a presigned URL, the function doesn’t need permissions to read from S3.
  • Then, I transform the text to be all uppercase. To customize the behavior of the function for your use case, this is the part you need to change. For example, to detect and redact personally identifiable information (PII), I can use Amazon Comprehend to locate PII entities with the DetectPiiEntities API and replace them with asterisks or a description of the redacted entity type.
  • Finally, I use the new WriteGetObjectResponse API to send the result of the transformation back to S3 Object Lambda. In this way, the transformed object can be much larger than the maximum size of the response returned by a Lambda function. For larger objects, the WriteGetObjectResponse API supports chunked transfer encoding to implement a streaming data transfer. The Lambda function only needs to return the status code (200 OK in this case), any errors, and can optionally customize the metadata of the returned object as described in the S3 GetObject API.

I package the function, including the dependencies, and upload it to Lambda. Note that the maximum duration for a Lambda function used by S3 Object Lambda is 60 seconds, and that the Lambda function needs AWS Identity and Access Management (IAM) permissions to call the WriteGetObjectResponse API.
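The function above only handles the happy path. As a complement, here is a hedged sketch of how a function could report an error back to the caller through the same WriteGetObjectResponse API, using its StatusCode, ErrorCode, and ErrorMessage parameters (the .txt check is a hypothetical rule, purely for illustration):

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    object_get_context = event["getObjectContext"]
    request_route = object_get_context["outputRoute"]
    request_token = object_get_context["outputToken"]

    # Hypothetical rule: only serve .txt objects through this access point.
    if not event["userRequest"]["url"].endswith(".txt"):
        # Return an error to the caller instead of a transformed body.
        s3.write_get_object_response(
            RequestRoute=request_route,
            RequestToken=request_token,
            StatusCode=403,
            ErrorCode="AccessDenied",
            ErrorMessage="Only .txt objects are served by this access point.")
        return {'status_code': 403}

    # ...otherwise download, transform, and return the object as shown above...
    return {'status_code': 200}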

How to Create an S3 Object Lambda Access Point from the Console
In the S3 console, I create an S3 Access Point on one of my S3 buckets:

S3 console screenshot.

Then, I create an S3 Object Lambda Access Point using the supporting Access Point I just created. The Lambda function is going to use the supporting Access Point to download the original objects.

S3 console screenshot.

During the configuration of the S3 Object Lambda Access Point as shown below, I select the latest version of the Lambda function I created above. Optionally, I can enable support for requests using a byte range, or using part numbers. For now, I leave them disabled. To understand how to use byte range and part numbers with S3 Object Lambda, please see the documentation.

S3 console screenshot.

When configuring the S3 Object Lambda Access Point, I can set up a string as a payload that is passed to the Lambda function in all invocations coming from that Access Point, as you can see in the configuration property of the sample event I described before. In this way, I can configure the same Lambda function for multiple S3 Object Lambda Access Points, and use the value of the payload to customize the behavior for each of them.

S3 console screenshot.

Finally, I can set up a policy, similar to what I can do with normal S3 Access Points, to provide access to the objects accessible through this Object Lambda Access Point. For now, I keep the policy empty. Then, I leave the default option to block all public access and create the Object Lambda Access Point.
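If you would rather script these two steps, here is a minimal boto3 sketch of the same configuration. The account ID, names, and Lambda function ARN are placeholders, and the parameter shapes reflect my reading of the s3control API:

import boto3

s3control = boto3.client('s3control')
account_id = '123412341234'  # placeholder

# Supporting Access Point on the bucket (the first console step above).
s3control.create_access_point(
    AccountId=account_id,
    Name='myap',
    Bucket='danilop-data',
)

# Object Lambda Access Point that fronts it with the transforming function.
s3control.create_access_point_for_object_lambda(
    AccountId=account_id,
    Name='myolap',
    Configuration={
        'SupportingAccessPoint': f'arn:aws:s3:us-east-1:{account_id}:accesspoint/myap',
        'TransformationConfigurations': [{
            'Actions': ['GetObject'],
            'ContentTransformation': {
                'AwsLambda': {
                    # Placeholder function name.
                    'FunctionArn': f'arn:aws:lambda:us-east-1:{account_id}:function:uppercase-transform'
                }
            }
        }]
    },
)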

Now that the S3 Object Lambda Access Point is ready, let’s see how I can use it.

How to Use the S3 Object Lambda Access Point
In the S3 console, I select the newly created Object Lambda Access Point. In the properties, I copy the ARN to have it available later.

S3 console screenshot.

With the AWS Command Line Interface (CLI), I upload a text file containing a few sentences to the S3 bucket behind the S3 Object Lambda Access Point:

aws s3 cp s3.txt s3://danilop-data/

Using S3 Object Lambda with my existing applications is very simple. I just need to replace the S3 bucket with the ARN of the S3 Object Lambda Access Point and update the AWS SDKs to accept the new syntax using the S3 Object Lambda ARN.

For example, this is a Python script that downloads the text file I just uploaded: first, straight from the S3 bucket, and then from the S3 Object Lambda Access Point. The only difference between the two downloads is the value of the Bucket parameter.

import boto3

s3 = boto3.client('s3')

print('Original object from the S3 bucket:')
original = s3.get_object(
  Bucket='danilop-data',
  Key='s3.txt')
print(original['Body'].read().decode('utf-8'))

print('Object processed by S3 Object Lambda:')
transformed = s3.get_object(
  Bucket='arn:aws:s3-object-lambda:us-east-1:123412341234:accesspoint/myolap',
  Key='s3.txt')
print(transformed['Body'].read().decode('utf-8'))

I start the script on my laptop:

python3 read_original_and_transformed_object.py

And this is the result I get:

Original object from the S3 bucket:
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

Object processed by S3 Object Lambda:
AMAZON SIMPLE STORAGE SERVICE (AMAZON S3) IS AN OBJECT STORAGE SERVICE THAT OFFERS INDUSTRY-LEADING SCALABILITY, DATA AVAILABILITY, SECURITY, AND PERFORMANCE. THIS MEANS CUSTOMERS OF ALL SIZES AND INDUSTRIES CAN USE IT TO STORE AND PROTECT ANY AMOUNT OF DATA FOR A RANGE OF USE CASES, SUCH AS DATA LAKES, WEBSITES, MOBILE APPLICATIONS, BACKUP AND RESTORE, ARCHIVE, ENTERPRISE APPLICATIONS, IOT DEVICES, AND BIG DATA ANALYTICS.

The first output is downloaded straight from the source bucket, and I see the original content as expected. The second time, the object is processed by the Lambda function as it is being retrieved and, as the result, all text is uppercase!

More Use Cases for S3 Object Lambda
When retrieving an object using S3 Object Lambda, there is no need for an object with the same name to exist in the S3 bucket. The Lambda function can use information in the name of the file or in the HTTP headers to generate a custom object.

For example, if you ask to use an S3 Object Lambda Access Point for an image with name sunset_600x400.jpg, the Lambda function can look for an image named sunset.jpg and resize it to fit the maximum width and height as described in the file name. In this case, the Lambda function would need access permission to read the original image, because the object key is different from what was used in the presigned URL.

Another interesting use case would be to retrieve JSON or CSV documents, such as order.json or items.csv, that are generated on the fly based on the content of a database. The metadata in the request HTTP headers can be used to pass the orderId to use. As usual, I expect our customers’ creativity to far exceed the use cases I described here.

Here’s a short video describing how S3 Object Lambda works and how you can use it:

Availability and Pricing
S3 Object Lambda is available today in all AWS Regions with the exception of the Asia Pacific (Osaka), AWS GovCloud (US-East), AWS GovCloud (US-West), China (Beijing), and China (Ningxia) Regions. You can use S3 Object Lambda with the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. Currently, the AWS CLI high-level S3 commands, such as aws s3 cp, don’t support objects from S3 Object Lambda Access Points, but you can use the low-level S3 API commands, such as aws s3api get-object.

With S3 Object Lambda, you pay for the AWS Lambda compute and request charges required to process the data, and for the data S3 Object Lambda returns to your application. You also pay for the S3 requests that are invoked by your Lambda function. For more pricing information, please see the Amazon S3 pricing page.

This new capability makes it much easier to share and convert data across multiple applications.

Start using S3 Object Lambda to simplify your storage architecture today.

Danilo

AA21-076A: TrickBot Malware

This post was originally published on this site

Original release date: March 17, 2021

Summary

This Advisory uses the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK®) framework. See the ATT&CK for Enterprise for all referenced threat actor tactics and techniques.

The Cybersecurity and Infrastructure Security Agency (CISA) and Federal Bureau of Investigation (FBI) have observed continued targeting through spearphishing campaigns using TrickBot malware in North America. A sophisticated group of cybercrime actors is luring victims, via phishing emails, with a traffic infringement phishing scheme to download TrickBot.

TrickBot—first identified in 2016—is a Trojan developed and operated by a sophisticated group of cybercrime actors. Originally designed as a banking Trojan to steal financial data, TrickBot has evolved into highly modular, multi-stage malware that provides its operators a full suite of tools to conduct a myriad of illegal cyber activities.

To secure against TrickBot, CISA and FBI recommend implementing the mitigation measures described in this Joint Cybersecurity Advisory, which include blocking suspicious Internet Protocol addresses, using antivirus software, and providing social engineering and phishing training to employees.


Technical Details

TrickBot is an advanced Trojan that malicious actors spread primarily by spearphishing campaigns using tailored emails that contain malicious attachments or links, which—if enabled—execute malware (Phishing: Spearphishing Attachment [T1566.001], Phishing: Spearphishing Link [T1566.002]). CISA and FBI are aware of recent attacks that use phishing emails, claiming to contain proof of a traffic violation, to steal sensitive information. The phishing emails contain links that redirect to a website hosted on a compromised server that prompts the victim to click on photo proof of their traffic violation. In clicking the photo, the victim unknowingly downloads a malicious JavaScript file that, when opened, automatically communicates with the malicious actor’s command and control (C2) server to download TrickBot to the victim’s system.

Attackers can use TrickBot to:

  • Drop other malware, such as Ryuk and Conti ransomware, or
  • Serve as an Emotet downloader.[1]

TrickBot uses person-in-the-browser attacks to steal information, such as login credentials (Man in the Browser [T1185]). Additionally, some of TrickBot’s modules spread the malware laterally across a network by abusing the Server Message Block (SMB) Protocol. TrickBot operators have a toolset capable of spanning the entirety of the MITRE ATT&CK framework, from actively or passively gathering information that can be used to support targeting (Reconnaissance [TA0043]), to trying to manipulate, interrupt, or destroy systems and data (Impact [TA0040]).

TrickBot is capable of data exfiltration, cryptomining, and host enumeration (e.g., reconnaissance of Unified Extensible Firmware Interface or Basic Input/Output System [UEFI/BIOS] firmware).[2] For host enumeration, operators deliver TrickBot in modules containing a configuration file with specific tasks.

Figure 1 lays out TrickBot’s use of enterprise techniques.

Figure 1: MITRE ATT&CK enterprise techniques used by TrickBot

 

MITRE ATT&CK Techniques

According to MITRE, TrickBot [S0266] uses the ATT&CK techniques listed in table 1.

Table 1: TrickBot ATT&CK techniques for enterprise

Initial Access [TA0001]

Technique Title ID Use
Phishing: Spearphishing Attachment T1566.001 TrickBot has used an email with an Excel sheet containing a malicious macro to deploy the malware.
Phishing: Spearphishing Link T1566.002 TrickBot has been delivered via malicious links in phishing emails.

Execution [TA0002]

Technique Title ID Use
Scheduled Task/Job: Scheduled Task T1053.005 TrickBot creates a scheduled task on the system that provides persistence.
Command and Scripting Interpreter: Windows Command Shell T1059.003 TrickBot has used macros in Excel documents to download and deploy the malware on the user’s machine.
Native API T1106 TrickBot uses the Windows Application Programming Interface (API) call, CreateProcessW(), to manage execution flow.
User Execution: Malicious Link T1204.001 TrickBot has sent spearphishing emails in an attempt to lure users to click on a malicious link.
User Execution: Malicious File T1204.002 TrickBot has attempted to get users to launch malicious documents to deliver its payload.

Persistence [TA0003]

Technique Title ID Use
Scheduled Task/Job: Scheduled Task T1053.005 TrickBot creates a scheduled task on the system that provides persistence.
Create or Modify System Process: Windows Service T1543.003 TrickBot establishes persistence by creating an autostart service that allows it to run whenever the machine boots.

Privilege Escalation [TA0004]

Technique Title ID Use
Scheduled Task/Job: Scheduled Task T1053.005 TrickBot creates a scheduled task on the system that provides persistence.
Process Injection: Process Hollowing T1055.012 TrickBot injects into the svchost.exe process.
Create or Modify System Process: Windows Service T1543.003 TrickBot establishes persistence by creating an autostart service that allows it to run whenever the machine boots.

Defense Evasion [TA0005]

Technique Title ID Use
Obfuscated Files or Information T1027 TrickBot uses non-descriptive names to hide functionality and uses an AES CBC (256 bits) encryption algorithm for its loader and configuration files.
Obfuscated Files or Information: Software Packing T1027.002 TrickBot leverages a custom packer to obfuscate its functionality.
Masquerading T1036 The TrickBot downloader has used an icon to appear as a Microsoft Word document.
Process Injection: Process Hollowing T1055.012 TrickBot injects into the svchost.exe process.
Modify Registry T1112 TrickBot can modify registry entries.
Deobfuscate/Decode Files or Information T1140 TrickBot decodes the configuration data and modules.
Subvert Trust Controls: Code Signing T1553.002 TrickBot has come with a signed downloader component.
Impair Defenses: Disable or Modify Tools T1562.001 TrickBot can disable Windows Defender.

Credential Access [TA0006]

Technique Title ID Use
Input Capture: Credential API Hooking T1056.004 TrickBot has the ability to capture Remote Desktop Protocol credentials by capturing the CredEnumerateA API.
Unsecured Credentials: Credentials in Files T1552.001 TrickBot can obtain passwords stored in files from several applications such as Outlook, Filezilla, OpenSSH, OpenVPN and WinSCP. Additionally, it searches for the .vnc.lnk affix to steal VNC credentials.
Unsecured Credentials: Credentials in Registry T1552.002 TrickBot has retrieved PuTTY credentials by querying the Software\SimonTatham\PuTTY\Sessions registry key.
Credentials from Password Stores T1555 TrickBot can steal passwords from the KeePass open-source password manager.
Credentials from Password Stores: Credentials from Web Browsers T1555.003 TrickBot can obtain passwords stored in files from web browsers such as Chrome, Firefox, Internet Explorer, and Microsoft Edge, sometimes using esentutl.

Discovery [TA0007]

Technique Title ID Use
System Service Discovery T1007 TrickBot collects a list of install programs and services on the system’s machine.
System Network Configuration Discovery T1016 TrickBot obtains the IP address, location, and other relevant network information from the victim’s machine.
Remote System Discovery T1018 TrickBot can enumerate computers and network devices.
System Owner/User Discovery T1033 TrickBot can identify the user and groups the user belongs to on a compromised host.
Permission Groups Discovery T1069 TrickBot can identify the groups the user on a compromised host belongs to.
System Information Discovery T1082 TrickBot gathers the OS version, machine name, CPU type, and amount of RAM available from the victim’s machine.
File and Directory Discovery T1083 TrickBot searches the system for all of the following file extensions: .avi, .mov, .mkv, .mpeg, .mpeg4, .mp4, .mp3, .wav, .ogg, .jpeg, .jpg, .png, .bmp, .gif, .tiff, .ico, .xlsx, and .zip. It can also obtain browsing history, cookies, and plug-in information.
Account Discovery: Local Account T1087.001 TrickBot collects the users of the system.
Account Discovery: Email Account T1087.003 TrickBot collects email addresses from Outlook.
Domain Trust Discovery T1482 TrickBot can gather information about domain trusts by utilizing Nltest.

Collection [TA0009]

Technique Title ID Use
Data from Local System T1005 TrickBot collects local files and information from the victim’s local machine.
Input Capture: Credential API Hooking T1056.004 TrickBot has the ability to capture Remote Desktop Protocol credentials by capturing the CredEnumerateA API.
Person in the Browser T1185 TrickBot uses web injects and browser redirection to trick the user into providing their login credentials on a fake or modified webpage.

Command and Control [TA0011]

Technique Title ID Use
Fallback Channels T1008 TrickBot can use secondary command and control (C2) servers for communication after establishing connectivity and relaying victim information to primary C2 servers.
Application Layer Protocol: Web Protocols T1071.001 TrickBot uses HTTPS to communicate with its C2 servers, to get malware updates, modules that perform most of the malware logic and various configuration files.
Ingress Tool Transfer T1105 TrickBot downloads several additional files and saves them to the victim’s machine.
Data Encoding: Standard Encoding T1132.001 TrickBot can Base64-encode C2 commands.
Non-Standard Port T1571 Some TrickBot samples have used HTTP over ports 447 and 8082 for C2.
Encrypted Channel: Symmetric Cryptography T1573.001 TrickBot uses a custom crypter leveraging Microsoft’s CryptoAPI to encrypt C2 traffic.

Exfiltration [TA0010]

Technique Title ID Use
Exfiltration Over C2 Channel T1041 TrickBot can send information about the compromised host to a hardcoded C2 server.

Detection

Signatures

CISA developed the following Snort signatures for use in detecting network activity associated with TrickBot.

 

alert tcp any [443,447] -> any any (msg:"TRICKBOT:SSL/TLS Server X.509 Cert Field contains 'example.com' (Hex)"; sid:1; rev:1; flow:established,from_server; ssl_state:server_hello; content:"|0b|example.com"; fast_pattern:only; content:"Global Security"; content:"IT Department"; pcre:"/(?:\x09\x00\xc0\xb9\x3b\x93\x72\xa3\xf6\xd2|\x00\xe2\x08\xff\xfb\x7b\x53\x76\x3d)/"; classtype:bad-unknown; metadata:service ssl,service and-ports;)

alert tcp any any -> any $HTTP_PORTS (msg:"TRICKBOT_ANCHOR:HTTP URI GET contains '/anchor'"; sid:1; rev:1; flow:established,to_server; content:"/anchor"; http_uri; fast_pattern:only; content:"GET"; nocase; http_method; pcre:"/^\/anchor_?.{3}\/[\w_-]+\.[A-F0-9]+\/?$/U"; classtype:bad-unknown; priority:1; metadata:service http;)

alert tcp any $SSL_PORTS -> any any (msg:"TRICKBOT:SSL/TLS Server X.509 Cert Field contains 'C=XX, L=Default City, O=Default Company Ltd'"; sid:1; rev:1; flow:established,from_server; ssl_state:server_hello; content:"|31 0b 30 09 06 03 55 04 06 13 02|XX"; nocase; content:"|31 15 30 13 06 03 55 04 07 13 0c|Default City"; nocase; content:"|31 1c 30 1a 06 03 55 04 0a 13 13|Default Company Ltd"; nocase; content:!"|31 0c 30 0a 06 03 55 04 03|"; classtype:bad-unknown; reference:url,www.virustotal.com/gui/file/e9600404ecc42cf86d38deedef94068db39b7a0fd06b3b8fb2d8a3c7002b650e/detection; metadata:service ssl;)

alert tcp any any -> any $HTTP_PORTS (msg:"TRICKBOT:HTTP Client Header contains 'boundary=Arasfjasu7'"; sid:1; rev:1; flow:established,to_server; content:"boundary=Arasfjasu7|0d 0a|"; http_header; content:"name=|22|proclist|22|"; http_header; content:!"Referer"; content:!"Accept"; content:"POST"; http_method; classtype:bad-unknown; metadata:service http;)

alert tcp any any -> any $HTTP_PORTS (msg:"TRICKBOT:HTTP Client Header contains 'User-Agent|3a 20|WinHTTP loader/1.'"; sid:1; rev:1; flow:established,to_server; content:"User-Agent|3a 20|WinHTTP loader/1."; http_header; fast_pattern:only; content:".png|20|HTTP/1."; pcre:"/^Host\x3a\x20(?:\d{1,3}\.){3}\d{1,3}(?:\x3a\d{2,5})?$/mH"; content:!"Accept"; http_header; content:!"Referer|3a 20|"; http_header; classtype:bad-unknown; metadata:service http;)

alert tcp any $HTTP_PORTS -> any any (msg:"TRICKBOT:HTTP Server Header contains 'Server|3a 20|Cowboy'"; sid:1; rev:1; flow:established,from_server; content:"200"; http_stat_code; content:"Server|3a 20|Cowboy|0d 0a|"; http_header; fast_pattern; content:"content-length|3a 20|3|0d 0a|"; http_header; file_data; content:"/1/"; depth:3; isdataat:!1,relative; classtype:bad-unknown; metadata:service http;)

alert tcp any any -> any $HTTP_PORTS (msg:"TRICKBOT:HTTP URI POST contains C2 Exfil"; sid:1; rev:1; flow:established,to_server; content:"Content-Type|3a 20|multipart/form-data|3b 20|boundary=------Boundary"; http_header; fast_pattern; content:"User-Agent|3a 20|"; http_header; distance:0; content:"Content-Length|3a 20|"; http_header; distance:0; content:"POST"; http_method; pcre:"/^\/[a-z]{3}\d{3}\/.+?\.[A-F0-9]{32}\/\d{1,3}\//U"; pcre:"/^Host\x3a\x20(?:\d{1,3}\.){3}\d{1,3}$/mH"; content:!"Referer|3a|"; http_header; classtype:bad-unknown; metadata:service http;)

alert tcp any any -> any $HTTP_PORTS (msg:"HTTP URI GET/POST contains '/56evcxv' (Trickbot)"; sid:1; rev:1; flow:established,to_server; content:"/56evcxv"; http_uri; fast_pattern:only; classtype:bad-unknown; metadata:service http;)

alert icmp any any -> any any (msg:"TRICKBOT_ICMP_ANCHOR:ICMP traffic contains 'hanc'"; sid:1; rev:1; itype:8; content:"hanc"; offset:4; fast_pattern; classtype:bad-unknown;)

alert tcp any any -> any $HTTP_PORTS (msg:"HTTP Client Header contains POST with 'host|3a 20|*.onion.link' and 'data=' (Trickbot/Princess Ransomware)"; sid:1; rev:1; flow:established,to_server; content:"POST"; nocase; http_method; content:"host|3a 20|"; http_header; content:".onion.link"; nocase; http_header; distance:0; within:47; fast_pattern; file_data; content:"data="; distance:0; within:5; classtype:bad-unknown; metadata:service http;)

alert tcp any any -> any $HTTP_PORTS (msg:"HTTP Client Header contains 'host|3a 20|tpsci.com' (trickbot)"; sid:1; rev:1; flow:established,to_server; content:"host|3a 20|tpsci.com"; http_header; fast_pattern:only; classtype:bad-unknown; metadata:service http;)

Mitigations

CISA and FBI recommend that network defenders—in federal, state, local, tribal, territorial governments, and the private sector—consider applying the following best practices to strengthen the security posture of their organization’s systems. System owners and administrators should review any configuration changes prior to implementation to avoid negative impacts.

  • Provide social engineering and phishing training to employees.
  • Consider drafting or updating a policy addressing suspicious emails that specifies users must report all suspicious emails to the security and/or IT departments.
  • Mark external emails with a banner denoting the email is from an external source to assist users in detecting spoofed emails.
  • Implement Group Policy Object and firewall rules.
  • Implement an antivirus program and a formalized patch management process.
  • Implement filters at the email gateway and block suspicious IP addresses at the firewall.
  • Adhere to the principle of least privilege.
  • Implement a Domain-Based Message Authentication, Reporting & Conformance (DMARC) validation system.
  • Segment and segregate networks and functions.
  • Limit unnecessary lateral communications between network hosts, segments, and devices.
  • Consider using application allowlisting technology on all assets to ensure that only authorized software executes, and all unauthorized software is blocked from executing on assets. Ensure that such technology only allows authorized, digitally signed scripts to run on a system.
  • Enforce multi-factor authentication.
  • Enable a firewall on agency workstations configured to deny unsolicited connection requests.
  • Disable unnecessary services on agency workstations and servers.
  • Implement an Intrusion Detection System, if not already used, to detect C2 activity and other potentially malicious network activity.
  • Monitor web traffic. Restrict user access to suspicious or risky sites.
  • Maintain situational awareness of the latest threats and implement appropriate access control lists.
  • Disable the use of SMBv1 across the network and require at least SMBv2 to harden systems against network propagation modules used by TrickBot.
  • Visit the MITRE ATT&CK Techniques pages (linked in table 1 above) for additional mitigation and detection strategies.
  • See CISA’s Alert on Technical Approaches to Uncovering and Remediating Malicious Activity for more information on addressing potential incidents and applying best practice incident response procedures.

For additional information on malware incident prevention and handling, see the National Institute of Standards and Technology Special Publication 800-83, Guide to Malware Incident Prevention and Handling for Desktops and Laptops.

Resources

References

Revisions

  • March 17, 2021: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

IAM Access Analyzer Update – Policy Validation

This post was originally published on this site

AWS Identity and Access Management (IAM) is an important and fundamental part of AWS. You can create IAM policies and service control policies (SCPs) that define the desired level of access to specific AWS services and resources, and then attach the policies to IAM principals (users and roles), groups of users, or to AWS resources. With the fine-grained control that you get with IAM comes the responsibility to use it properly, almost always seeking to establish least privilege access. The IAM tutorials will help you to learn more, and the IAM Access Analyzer will help you to identify resources that are shared with an external entity. We recently launched an update to IAM Access Analyzer that allows you to Validate Access to Your S3 Buckets Before Deploying Permissions Changes.

New Policy Validation
Today I am happy to announce that we are adding policy validation to IAM Access Analyzer. This powerful new feature will help you to construct IAM policies and SCPs that take advantage of time-tested AWS best practices.

Designed for use by developers and security teams, validation takes place before policies are attached to IAM principals. Over 100 checks are performed, each designed to improve your security posture and to help you simplify policy management at scale. The findings from each check include detailed information and concrete recommendations.

Validation is accessible from the JSON Policy Editor in the IAM Console, as well as from the command line (aws accessanalyzer validate-policy) and your own code (ValidatePolicy). You can use the CLI and API options to perform programmatic validation as part of your CI/CD workflows.
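As a sketch of the programmatic path, here is roughly what a boto3 call could look like; the policy document is a deliberately sloppy example, and the response fields shown reflect my understanding of the API:

import json
import boto3

analyzer = boto3.client('accessanalyzer')

# A deliberately loose policy, just to exercise the checks.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "*"
    }]
}

response = analyzer.validate_policy(
    policyDocument=json.dumps(policy),
    policyType='IDENTITY_POLICY',
)

for finding in response['findings']:
    print(finding['findingType'], finding['issueCode'], '-', finding['findingDetails'])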

In the IAM Console, policy validation takes place in real-time whenever you create or edit a customer-managed policy, with findings broken down by severity; here are some examples:

Security – Policy elements that are overly permissive, and that may be a security risk. This includes use of iam:PassRole in conjunction with NotResource or with "*" (wildcard) as the resource:

Error – Policy elements that stop the policy from functioning. This includes many types of syntax errors, missing actions, invalid constructs, and so forth:

Warning – Policy elements that don’t conform to AWS best practices, such as references to deprecated global condition keys or invalid users, and the use of ambiguous dates:

Suggestion – Policy elements that are missing, empty, or redundant:

Things to Know
As I noted earlier, we are launching with a set of over 100 checks. We have plans to add more over time, and welcome your suggestions.

In the Amazon spirit of drinking our own Champagne, we routinely validate the Amazon-managed IAM policies and fine-tune them when appropriate. From time to time we mark existing managed policies as deprecated, issue notifications to our customers via email, and make updated replacements available. To learn more about our process, read Deprecated AWS Managed Policies.

As you may know, there are already several open source policy linters available for AWS, including the well-known Parliament from Duo Labs. Our customers told us that these tools are useful, but that they wanted an AWS-native validation feature that was active while they were editing policies. A group of developers on the IAM team responded to this feedback and implemented policy validation from the ground up.

You can use this feature now in all AWS regions at no charge.

Jeff;

New Amazon EC2 X2gd Instances – Graviton2 Power for Memory-Intensive Workloads

This post was originally published on this site

We launched the first Graviton-powered EC2 instances in late 2018 and announced the follow-on Graviton2 processor just a year later. The dual SIMD units, support for int8 and fp16 instructions, and other architectural improvements between generations combine to make the Graviton2 a highly cost-effective workhorse processor.

Today, you can choose between General Purpose (M6g and M6gd), Compute-Optimized (C6g, C6gn, and C6gd), Memory-Optimized (R6g and R6gd), and Burstable (T4g) instances, all powered by fast, efficient Graviton2 processors. Our customers use these instances to run application servers, gaming servers, HPC workloads, video encoding, ad servers, and more. Multiple benchmarks (including this one and this one) have shown that these Graviton2-based instances deliver better price-performance than existing EC2 instances.

New X2gd Instances
I’m happy to announce the latest in our ever-growing roster of Graviton2-powered instances! The new X2gd instances have twice as much memory per vCPU as the memory-optimized R6g instances, and are designed for your memory-hungry workloads. This includes in-memory databases (Redis and Memcached), open source relational databases, Electronic Design Automation design & verification, real-time analytics, caching services, and containers.

X2gd instances are available in eight sizes, and also in bare metal form. Here are the specs:

Name vCPUs Memory (GiB) Local NVMe Storage Network Bandwidth (Gbps) EBS Throughput (Gbps)
x2gd.medium 1 16 1 x 59 GiB Up to 10 Up to 4.750
x2gd.large 2 32 1 x 118 GiB Up to 10 Up to 4.750
x2gd.xlarge 4 64 1 x 237 GiB Up to 10 Up to 4.750
x2gd.2xlarge 8 128 1 x 475 GiB Up to 10 Up to 4.750
x2gd.4xlarge 16 256 1 x 950 GiB Up to 10 4.750
x2gd.8xlarge 32 512 1 x 1900 GiB 12 9.500
x2gd.12xlarge 48 768 2 x 1425 GiB 20 14.250
x2gd.16xlarge 64 1024 2 x 1900 GiB 25 19.000
x2gd.metal 64 1024 2 x 1900 GiB 25 19.000

When compared to the existing X1 instances, the new X2gd instances offer 55% better price/performance. The X2gd instances also offer the lowest price per GiB of memory of any current EC2 instance.

On the compute side, the X2gd instances provide the same amount of CPU power as the other Graviton2-powered instances (M6g, C6g, R6g, and T4g). Each vCPU is an entire physical core, which speeds compute-heavy EDA and financial service workloads, and also encourages denser packing of containers onto an instance of a particular size.

In addition to the fast SSD-based local NVMe storage, X2gd instances support Elastic Network Adapter (ENA), and can deliver up to 25 Gbps of network bandwidth between instances when launched within a Placement Group.

X2gd instances are built on the AWS Nitro System, and you can use your existing Arm-compatible EC2 AMIs.
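
If you want to try one out from code rather than from the console, launching an X2gd instance works just like launching any other instance type. Here is a minimal boto3 sketch; the AMI ID and subnet ID are placeholders that you would replace with your own, and the AMI must be built for arm64:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single x2gd.medium from an arm64 AMI.
# The AMI ID and subnet ID below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # an arm64 AMI such as Amazon Linux 2 (arm64)
    InstanceType="x2gd.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "x2gd-test"}],
        }
    ],
)

print(response["Instances"][0]["InstanceId"])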

Now Available
X2gd instances are available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions, with more regions to come. You can launch them today in On-Demand and Spot form, and you can purchase Savings Plans or Reserved Instances.

For more information, check out the X2 Instance page and the AWS Graviton2 page.

Jeff;

AWS Fault Injection Simulator – Use Controlled Experiments to Boost Resilience

This post was originally published on this site

AWS gives you the components that you need to build systems that are highly reliable: multiple Regions (each with multiple Availability Zones), Amazon CloudWatch (metrics, monitoring, and alarms), Auto Scaling, Load Balancing, several forms of cross-region replication, and lots more. When you put them together in line with the guidance provided in the Well-Architected Framework, your systems should be able to keep going even if individual components fail.

However, you won’t know that this is indeed the case until you perform the right kinds of tests. The relatively new field of Chaos Engineering (based on pioneering work done by “Master of Disaster” Jesse Robbins in the early days of Amazon.com, and then taken into high gear by the Netflix Chaos Monkey) focuses on adding stress to an application by creating disruptive events, observing how the system responds, and implementing improvements. In addition to pointing out the areas for improvements, Chaos Engineering helps to discover blind spots that deserve additional monitoring & alarming, uncovers once-hidden implementation issues, and gives you an opportunity to improve your operational skills with an eye toward improving recovery time. To learn a lot more about this topic, start with Chaos Engineering – Part 1 by my colleague Adrian Hornsby.

Introducing AWS Fault Injection Simulator (FIS)
Today we are introducing AWS Fault Injection Simulator (FIS). This new service will help you to perform controlled experiments on your AWS workloads by injecting faults and letting you see what happens. You will learn how your system reacts to various types of faults and you will have a better understanding of failure modes. You can start by running experiments in pre-production environments and then step up to running them as part of your CI/CD workflow and ultimately in your production environment.

Each AWS Fault Injection Simulator (FIS) experiment targets a specific set of AWS resources and performs a set of actions on them. We’re launching with support for Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Amazon Relational Database Service (RDS), with additional resources and actions on the roadmap for 2021. You can select the target resources by type, tag, ARN, or by querying for specific attributes. You also have the ability to stop the experiment if one or more stop conditions (as defined by CloudWatch Alarms) are met. This allows you to quickly terminate the experiment if it has an unexpected impact on a crucial business or operational metric.

Using AWS Fault Injection Simulator (FIS)
Let’s create an experiment template and run an experiment! I will use four EC2 instances, all tagged with a Mode of Test:

Four EC2 instances

I open the FIS Console and click Create experiment template to get started:

FIS Console Home Page

I enter a Description and choose an IAM Role. The role grants the permissions that FIS needs to act on the selected resources during the experiment:

Set up Description and IAM Role

Next, I define the action(s) that comprise the experiment. I click on Add action to get started:

Ready to add an action

Then I define my first action: I want to stop some of my EC2 instances (tagged with a Mode of Test for this example) for five minutes and make sure that my system stays running. I make my choices and click Save.

Next, I choose the target resources (EC2 instances in this case) for the experiment. I click Add target, give my target a name, and indicate that it consists of all of my EC2 instances (in the current region) that have the tag Mode with the value Test. I can also choose a random instance or a percentage of all instances that match the tag or the Resource filter. Again, I make my choices and click Save:

Setting up a target

I can choose one or more stop conditions (CloudWatch Alarms) for the experiment. If an alarm is triggered, the experiment stops. This is a safety mechanism that allows me to make sure that a local failure does not cascade into a full-scale outage.

Setting a stop condition
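
For the API-based workflow shown later in this post, a stop condition is simply a reference to a CloudWatch alarm ARN. Here is a small sketch that assumes an existing alarm named HighErrorRate; the alarm name is a placeholder, and the stop-condition source string should be verified against the FIS documentation:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Look up the ARN of an existing alarm on a key business or operational metric.
# The alarm name "HighErrorRate" is a placeholder.
alarm_arn = cloudwatch.describe_alarms(
    AlarmNames=["HighErrorRate"]
)["MetricAlarms"][0]["AlarmArn"]

# A stop condition entry for an experiment template: if the alarm goes into the
# ALARM state while the experiment is running, FIS halts the experiment.
stop_conditions = [{"source": "aws:cloudwatch:alarm", "value": alarm_arn}]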

Finally, I tag my experiment and click Create experiment template:

Add tags and create experiment

My template is ready to be used as the basis for an experiment:

Experiment templates

To run an experiment, I select a template and choose Start experiment from the Actions menu.

Then I click Start experiment (I also decided to add a tag).

I confirm my intent, since it can affect my AWS resources:

Confirm effect on AWS resources

My experiment starts to run, and I can watch the actions:

Experiment is running

As expected, the target instances are stopped.

My experiment runs to conclusion, and I now know that my system can keep on going if those instances are stopped.

I can also create, run, and review experiments using the FIS API and the FIS CLI. You could, for example, run different experiments against the same target, or run the same experiment against different targets.
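
For example, here is a rough boto3 sketch of the same experiment that I built in the console: stop every instance tagged Mode=Test for five minutes, then start it again. The role ARN is a placeholder, and the action ID and parameter names should be double-checked against the FIS action documentation:

import uuid
import boto3

fis = boto3.client("fis", region_name="us-east-1")

# Create a template that stops all EC2 instances tagged Mode=Test for five minutes.
template = fis.create_experiment_template(
    clientToken=str(uuid.uuid4()),
    description="Stop Mode=Test instances for five minutes",
    roleArn="arn:aws:iam::123456789012:role/FISExperimentRole",   # placeholder
    stopConditions=[{"source": "none"}],   # or aws:cloudwatch:alarm plus an alarm ARN
    targets={
        "TestInstances": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {"Mode": "Test"},
            "selectionMode": "ALL",
        }
    },
    actions={
        "StopInstances": {
            "actionId": "aws:ec2:stop-instances",
            "parameters": {"startInstancesAfterDuration": "PT5M"},
            "targets": {"Instances": "TestInstances"},
        }
    },
)

# Run an experiment based on the new template and check its state.
experiment = fis.start_experiment(
    clientToken=str(uuid.uuid4()),
    experimentTemplateId=template["experimentTemplate"]["id"],
)
print(experiment["experiment"]["state"])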

Available Now
AWS Fault Injection Simulator (FIS) is available now, and you can use it to run controlled experiments today. It is available in all commercial AWS Regions except Asia Pacific (Osaka) and the two Regions in China; those three Regions are on the roadmap.

Pricing is based on the number of minutes that your actions run, with no extra charge when two or more actions run in parallel.

We’ll be adding support for additional services and additional actions throughout 2021, so stay tuned!

Jeff;

SecretStore Release Candidate 3

This post was originally published on this site

The SecretStore release candidate 3 (RC3) module is now available on the PowerShell Gallery. It contains an exciting new feature that allows users to non-interactively create and configure a SecretStore. This feature was added to support CI systems and other automated scenarios.

SecretStore is an extension vault module for PowerShell SecretManagement, and it works on all supported PowerShell platforms on Windows, Linux, and macOS. For more context on this module and the SecretManagement module, refer to the previous blog posts.

Before installing this module, please uninstall the current preview versions of the module and restart your PowerShell session.

To install these updates run the following commands:

Uninstall-Module Microsoft.PowerShell.SecretStore -Force
# Restart your PowerShell session
Install-Module -Name Microsoft.PowerShell.SecretStore -Repository PSGallery
Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault -AllowClobber

SecretStore Updates

Previously, Set-SecretStoreConfiguration required manual password confirmation to make changes to the SecretStore configuration. This update adds a -Password parameter to Set-SecretStoreConfiguration to enable automated creation and configuration of a SecretVault.

Breaking Change

  • The -Force parameter was removed from the Set-SecretStoreConfiguration command; use -Confirm:$false instead to suppress PowerShell confirmation prompting in automation scripts.

New Feature

  • The Set-SecretStoreConfiguration command now takes a -Password parameter, so there is no need to prompt for a password.

How to non-interactively create and configure a SecretStore

This is an example of an automation script that installs and configures the Microsoft.PowerShell.SecretStore module without user prompting. The configuration requires a password, which is passed in as a SecureString object, and sets user interaction to None so that SecretStore will never prompt the user. The -Confirm:$false parameter is used so that PowerShell will not prompt for confirmation.

The SecretStore password must be provided in a secure fashion. Here the password is being imported from an encrypted file using Windows Data Protection API, but this is a Windows only solution. Another option is to use a CI system mechanism such as secure variables.

Next, the SecretManagement module is installed and the SecretStore module registered so that the SecretStore secrets can be managed.

The Unlock-SecretStore cmdlet is used to unlock the SecretStore for this session. The password timeout was configured for 1 hour and SecretStore will remain unlocked in the session for that amount of time, after which it will need to be unlocked again before secrets can be accessed.

# Install SecretStore and read the password from a CliXml file created earlier
# with Export-CliXml (protected by the Windows Data Protection API)
Install-Module -Name Microsoft.PowerShell.SecretStore -Repository PSGallery -Force
$password = Import-CliXml -Path $securePasswordPath

# Configure the store for password authentication, a 1 hour timeout, and no prompting
Set-SecretStoreConfiguration -Scope CurrentUser -Authentication Password -PasswordTimeout 3600 -Interaction None -Password $password -Confirm:$false

# Install SecretManagement and register SecretStore as the default vault
Install-Module -Name Microsoft.PowerShell.SecretManagement -Repository PSGallery -Force
Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault

# Unlock the store for this session
Unlock-SecretStore -Password $password

General Availability (GA)

This is a “go live” release, which means that we feel that this RC is feature complete and supported in production.

If no bugs are identified through this release, we will increment the versioning and declare the module as GA in late March. If any high-risk bugs are identified we will continue to release RCs until the quality bar is met for a GA release.

Feedback and Support

Community feedback has been essential to the iterative development of these modules. Thank you to everyone who has contributed issues, and feedback thus far! To file issues or get support for the SecretManagement interface or vault development experience please use the SecretManagement repository. For issues which pertain specifically to the SecretStore and its cmdlet interface please use the SecretStore repository.

Sydney Smith

PowerShell Team

Amazon S3 Glacier Price Reduction

This post was originally published on this site

The Amazon S3 Glacier storage class is ideal for data archiving and long-term backup of information that will be accessed at least once per quarter (Amazon S3 Glacier Deep Archive is a better fit for data that is seldom accessed).

Amazon S3 Glacier stores your data across three Availability Zones (AZs), each physically separated from the others by a meaningful distance, but no more than 100 km (60 miles). You can store your archives and backups for as little as $4 per terabyte per month, and then choose between three retrieval options (Expedited, Standard, and Bulk) when you need access.
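
When you need an archived object back, you ask S3 to restore a temporary copy and pick one of those retrieval tiers. Here is a minimal boto3 sketch; the bucket name, key, and tier are illustrative:

import boto3

s3 = boto3.client("s3")

# Request a temporary copy of an archived object using the Expedited tier;
# the restored copy remains available for the number of days requested.
s3.restore_object(
    Bucket="my-archive-bucket",
    Key="backups/2020-12-31.tar.gz",
    RestoreRequest={
        "Days": 2,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)

# The Restore header on a HEAD request reports the progress of the restore.
head = s3.head_object(Bucket="my-archive-bucket", Key="backups/2020-12-31.tar.gz")
print(head.get("Restore"))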

Our customers use this storage class for media asset workflows, archiving of healthcare and financial services data that is subject to retention requirements, scientific data storage, digital preservation, and as a replacement for magnetic tape. To learn more, check out our case studies from Nasdaq, Reuters, Teespring, BandLab, and Celgene.

Now More Cost-Effective
In addition to being durable and secure, the S3 Glacier storage class is now even more cost-effective than before. Effective March 1, 2021, we are lowering the charges for PUT and Lifecycle requests to S3 Glacier by 40% for all AWS Regions. This includes the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. Check out the Amazon S3 Pricing page for more information.

You can use the S3 PUT API to directly store compliance and backup data in S3 Glacier. You can also use S3 Lifecycle policies to save on storage costs for data that is rarely accessed:

Setting up an S3 lifecycle rule to transition objects to Glacier 30 days after creation.
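
Both paths take only a few lines of code. Here is a minimal boto3 sketch that writes an object directly to the S3 Glacier storage class and then attaches a lifecycle rule like the one shown above; the bucket name, keys, and prefix are placeholders:

import boto3

s3 = boto3.client("s3")

# Store an object directly in the S3 Glacier storage class.
# The bucket, key, and local file name are placeholders.
s3.put_object(
    Bucket="my-archive-bucket",
    Key="backups/2021-03-01.tar.gz",
    Body=open("2021-03-01.tar.gz", "rb"),
    StorageClass="GLACIER",
)

# Or let a lifecycle rule transition objects under the backups/ prefix to
# Glacier 30 days after creation, as in the console example above.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)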

If you are ready to build a comprehensive backup or archiving system, be sure to check out our Backup and Restore page and don’t forget to take a look at the products & services offered by the AWS Storage Competency Partners.

Jeff;