Apple Releases Security Updates for AirPort Extreme, AirPort Time Capsule

This post was originally published on this site

Original release date: May 30, 2019

Apple has released AirPort Base Station Firmware Update 7.9.1 to address vulnerabilities in AirPort Extreme and AirPort Time Capsule wireless routers. A remote attacker could exploit some of these vulnerabilities to take control of an affected system.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the Apple security page for AirPort Base Station Firmware Update 7.9.1 and apply the necessary updates.


This product is provided subject to this Notification and this Privacy & Use policy.

Hurricane-Related Scams

This post was originally published on this site

Original release date: May 30, 2019

As the 2019 hurricane season approaches, the Cybersecurity and Infrastructure Security Agency (CISA) warns users to remain vigilant for malicious cyber activity targeting disaster victims and potential donors. Fraudulent emails commonly appear after major natural disasters and often contain links or attachments that direct users to malicious websites. Users should exercise caution in handling any email with a hurricane-related subject line, attachments, or hyperlinks. In addition, users should be wary of social media pleas, texts, or door-to-door solicitations relating to severe weather events.

To avoid becoming victims of malicious activity, users and administrators should review the following resources and take preventative measures:

If you believe you have been a victim of cybercrime, file a complaint with the Federal Bureau of Investigation Internet Crime Complaint Center at www.ic3.gov.


This product is provided subject to this Notification and this Privacy & Use policy.

Amazon Managed Streaming for Apache Kafka (MSK) – Now Generally Available

This post was originally published on this site

I am always amazed at how our customers are using streaming data. For example, Thomson Reuters, one of the world’s most trusted news organizations for businesses and professionals, built a solution to capture, analyze, and visualize analytics data to help product teams continuously improve the user experience. Supercell, the social game company providing games such as Hay Day, Clash of Clans, and Boom Beach, is delivering in-game data in real-time, handling 45 billion events per day.

Since we launched Amazon Kinesis at re:Invent 2013, we have continually expanded the ways in which customers work with streaming data on AWS. Some of the available tools are:

  • Kinesis Data Streams, to capture, store, and process data streams with your own applications.
  • Kinesis Data Firehose, to transform and load streaming data into destinations such as Amazon S3, Amazon Elasticsearch Service, and Amazon Redshift.
  • Kinesis Data Analytics, to continuously analyze data using SQL or Java (via Apache Flink applications), for example to detect anomalies or for time series aggregation.
  • Kinesis Video Streams, to simplify processing of media streams.

At re:Invent 2018, we introduced Amazon Managed Streaming for Apache Kafka (MSK) in open preview. It is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data.

I am excited to announce that Amazon MSK is generally available today!

How it works

Apache Kafka (Kafka) is an open-source platform that enables customers to capture streaming data like click stream events, transactions, IoT events, application and machine logs, and have applications that perform real-time analytics, run continuous transformations, and distribute this data to data lakes and databases in real time. You can use Kafka as a streaming data store to decouple applications producing streaming data (producers) from those consuming streaming data (consumers).

While Kafka is a popular enterprise data streaming and messaging framework, it can be difficult to set up, scale, and manage in production. Amazon MSK takes care of these management tasks and makes it easy to set up, configure, and run Kafka, along with Apache ZooKeeper, in an environment following best practices for high availability and security.

Your MSK clusters always run within an Amazon VPC managed by the MSK service. Your MSK resources are made available to your own VPC, subnet, and security group through elastic network interfaces (ENIs) which will appear in your account, as described in the following architectural diagram:

Customers can create a cluster in minutes, use AWS Identity and Access Management (IAM) to control cluster actions, authorize clients using TLS private certificate authorities fully managed by AWS Certificate Manager (ACM), encrypt data in-transit using TLS, and encrypt data at rest using AWS Key Management Service (KMS) encryption keys.

Amazon MSK continuously monitors server health and automatically replaces servers when they fail, automates server patching, and operates highly available ZooKeeper nodes as a part of the service at no additional cost. Key Kafka performance metrics are published in the console and in Amazon CloudWatch. Amazon MSK is fully compatible with Kafka versions 1.1.1 and 2.1.0, so that you can continue to run your applications, use Kafka’s admin tools, and use Kafka-compatible tools and frameworks without having to change your code.
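If you want to watch those metrics outside the console, here is a minimal boto3 sketch that reads one broker metric from CloudWatch. I'm assuming the AWS/Kafka namespace and its Cluster Name and Broker ID dimensions, and the cluster name is a placeholder:

import boto3
from datetime import datetime, timedelta

# Sketch: read a key MSK broker metric from CloudWatch. The namespace,
# dimension names, and cluster name below are assumptions/placeholders.
cloudwatch = boto3.client('cloudwatch')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/Kafka',
    MetricName='BytesInPerSec',
    Dimensions=[{'Name': 'Cluster Name', 'Value': 'demo-cluster'},
                {'Name': 'Broker ID', 'Value': '1'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'])

for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])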

Based on our customer feedback during the open preview, Amazon MSK added many features such as:

  • Encryption in-transit via TLS between clients and brokers, and between brokers
  • Mutual TLS authentication using ACM private certificate authorities
  • Support for Kafka version 2.1.0
  • 99.9% availability SLA
  • HIPAA eligible
  • Cluster-wide storage scale up
  • Integration with AWS CloudTrail for MSK API logging
  • Cluster tagging and tag-based IAM policy application
  • Defining custom, cluster-wide configurations for topics and brokers

AWS CloudFormation support is coming in the next few weeks.

Creating a cluster

Let’s create a cluster using the AWS management console. I give the cluster a name, select the VPC I want to use the cluster from, and choose the Kafka version.

I then choose the Availability Zones (AZs) and the corresponding subnets to use in the VPC. In the next step, I select how many Kafka brokers to deploy in each AZ. More brokers allow you to scale the throughput of a cluster by allocating partitions to different brokers.

I can add tags to search and filter my resources, apply IAM policies to the Amazon MSK API, and track my costs. For storage, I leave the default storage volume size per broker.

I choose to use encryption within the cluster and to allow both TLS and plaintext traffic between clients and brokers. For data at rest, I use the AWS-managed customer master key (CMK), but you can select a CMK in your account, using KMS, to have further control. You can use private TLS certificates to authenticate the identity of clients that connect to your cluster. This feature uses private certificate authorities (CAs) from ACM. For now, I leave this option unchecked.

In the advanced settings, I leave the default values. For example, I could have chosen a different instance type for my brokers here. Some of these settings can be updated using the AWS CLI.

I create the cluster and monitor the status from the cluster summary, including the Amazon Resource Name (ARN) that I can use when interacting via CLI or SDKs.
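If I wanted to script that instead of clicking through the console, a minimal boto3 sketch might look like this; the subnets, security group, and storage size are placeholder values, and leaving out EncryptionAtRest falls back to the AWS-managed key:

import boto3

# Sketch: create a three-broker MSK cluster. Subnet IDs, the security group,
# and the volume size are placeholders; adjust them for your own VPC.
kafka = boto3.client('kafka')

response = kafka.create_cluster(
    ClusterName='demo-cluster',
    KafkaVersion='2.1.0',
    NumberOfBrokerNodes=3,  # one broker per subnet/AZ in this example
    BrokerNodeGroupInfo={
        'InstanceType': 'kafka.m5.large',
        'ClientSubnets': ['subnet-11111111', 'subnet-22222222', 'subnet-33333333'],
        'SecurityGroups': ['sg-44444444'],
        'StorageInfo': {'EbsStorageInfo': {'VolumeSize': 1000}}  # GiB per broker
    },
    EncryptionInfo={
        # Allow both TLS and plaintext between clients and brokers,
        # and encrypt traffic between brokers.
        'EncryptionInTransit': {'ClientBroker': 'TLS_PLAINTEXT', 'InCluster': True}
    }
)
print(response['ClusterArn'])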

When the status is active, the client information section provides specific details to connect to the cluster, such as:

  • The bootstrap servers I can use with Kafka tools to connect to the cluster.
  • The ZooKeeper connect string, a list of hosts and ports.

I can get similar information using the AWS CLI:

  • aws kafka list-clusters to see the ARNs of your clusters in a specific region
  • aws kafka get-bootstrap-brokers --cluster-arn <ClusterArn> to get the Kafka bootstrap servers
  • aws kafka describe-cluster --cluster-arn <ClusterArn> to see more details on the cluster, including the Zookeeper connect string
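The SDKs expose the same operations. For example, a quick boto3 sketch (the cluster ARN is a placeholder):

import boto3

kafka = boto3.client('kafka')

# List the ARN and state of every cluster in the current region
for cluster in kafka.list_clusters()['ClusterInfoList']:
    print(cluster['ClusterArn'], cluster['State'])

# Bootstrap brokers and ZooKeeper connect string for one cluster (placeholder ARN)
arn = 'arn:aws:kafka:us-east-1:123456789012:cluster/demo-cluster/abcd1234'
print(kafka.get_bootstrap_brokers(ClusterArn=arn)['BootstrapBrokerString'])
print(kafka.describe_cluster(ClusterArn=arn)['ClusterInfo']['ZookeeperConnectString'])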

Quick demo of using Kafka

To start using Kafka, I create two EC2 instances in the same VPC: one will be a producer and one a consumer. To set them up as client machines, I download and extract the Kafka tools from the Apache website or any mirror. Kafka requires Java 8 to run, so I install Amazon Corretto 8.

On the producer instance, in the Kafka directory, I create a topic to send data from the producer to the consumer:

bin/kafka-topics.sh --create --zookeeper <ZookeeperConnectString> \
  --replication-factor 3 --partitions 1 --topic MyTopic

Then I start a console-based producer:

bin/kafka-console-producer.sh --broker-list <BootstrapBrokerString> \
  --topic MyTopic

On the consumer instance, in the Kafka directory, I start a console-based consumer:

bin/kafka-console-consumer.sh --bootstrap-server <BootstrapBrokerString> \
  --topic MyTopic --from-beginning
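Application code connects to the same endpoints as the console tools. Here's a minimal sketch using the kafka-python package (a third-party client that you install separately, for example with pip), assuming the plaintext bootstrap broker string from the cluster's client information:

from kafka import KafkaConsumer, KafkaProducer

bootstrap = '<BootstrapBrokerString>'  # from the cluster's client information

# Send a few messages to MyTopic
producer = KafkaProducer(bootstrap_servers=bootstrap)
for i in range(3):
    producer.send('MyTopic', value=('hello %d' % i).encode('utf-8'))
producer.flush()

# Read them back from the beginning of the topic
consumer = KafkaConsumer('MyTopic',
                         bootstrap_servers=bootstrap,
                         auto_offset_reset='earliest',
                         consumer_timeout_ms=10000)
for message in consumer:
    print(message.value.decode('utf-8'))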

Here’s a recording of a quick demo where I create the topic and then send messages from a producer (top terminal) to a consumer of that topic (bottom terminal):

Pricing and availability

Pricing is per Kafka broker-hour and per provisioned storage-hour. There is no cost for the Zookeeper nodes used by your clusters. AWS data transfer rates apply for data transfer in and out of MSK. You will not be charged for data transfer within the cluster in a region, including data transfer between brokers and data transfer between brokers and ZooKeeper nodes.

You can migrate your existing Kafka cluster to MSK using tools like MirrorMaker (which is included with open-source Kafka) to replicate data from your clusters into an MSK cluster.

Upstream compatibility is a core tenet of Amazon MSK. Our code changes to the Kafka platform are released back to open source.

Amazon MSK is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), EU (Frankfurt), EU (Ireland), EU (Paris), and EU (London).

I look forward to seeing how you are going to use Amazon MSK to simplify building and migrating streaming applications to the cloud!

Now Available – AWS IoT Things Graph

This post was originally published on this site

We announced AWS IoT Things Graph last November and described it as a tool to let you build IoT applications visually. Today I am happy to let you know that the service is now available and ready for you to use!

As you will see in a moment, you can represent your business logic in a flow composed of devices and services. Each web service and each type of device (sensor, camera, display, and so forth) is represented in Things Graph as a model. The models hide the implementation details that are peculiar to a particular brand or model of device, and allow you to build flows that can evolve along with your hardware. Each model has a set of actions (inputs), events (outputs), and states (attributes). Things Graph includes a set of predefined models, and also allows you to define your own. You can also use mappings as part of your flow to convert the output from one device into the form expected by other devices. After you build your flow, you can deploy it to the AWS Cloud or an AWS IoT Greengrass-enabled device for local execution. The flow, once deployed, orchestrates interactions between locally connected devices and web services.

Using AWS IoT Things Graph
Let’s take a quick walk through the AWS IoT Things Graph Console!

The first step is to make sure that I have models which represent the devices and web services that I plan to use in my flow. I click Models in the console navigation to get started:

The console outlines the three steps that I must follow to create a model, and also lists my existing models:

The presence of aws/examples in the URN for each of the devices listed above indicates that they are predefined, and part of the public AWS IoT Things Graph namespace. I click on Camera to learn more about this model; I can see the Properties, Actions, and Events:

The model is defined using GraphQL; I can view it, edit it, or upload a file that contains a model definition. Here’s the definition of the Camera:

This model defines an abstract Camera device. The model, in turn, can reference definitions for one or more actual devices, as listed in the Devices section:

Each of the devices is also defined using GraphQL. Of particular interest is the use of MQTT topics & messages to define actions:

Earlier, I mentioned that models can also represent web services. When a flow that references a model of this type is deployed, activating an action on the model invokes a Greengrass Lambda function. Here’s how a web service is defined:

Now I can create a flow. I click Flows in the navigation, and click Create flow:

I give my flow a name and enter a description:

I start with an empty canvas, and then drag nodes (Devices, Services, or Logic) to it:

For this demo (which is fully explained in the AWS IoT Things Graph User Guide), I’ll use a MotionSensor, a Camera, and a Screen:

I connect the devices to define the flow:

Then I configure and customize it. There are lots of choices and settings, so I’ll show you a few highlights, and refer you to the User Guide for more info. I set up the MotionSensor so that a change of state initiates this flow:

I also (not shown) configure the Camera to perform the Capture action, and the Screen to display it. I could also make use of the predefined Services:

I can also add Logic to my flow:

Like the models, my flow is ultimately defined in GraphQL (I can view and edit it directly if desired):

At this point I have defined my flow, and I click Publish to make it available for deployment:

The next steps are:

Associate – This step assigns an actual AWS IoT Thing to a device model. I select a Thing, and then choose a device model, and repeat this step for each device model in my flow:

Deploy – I create a Flow Configuration, target it at the Cloud or Greengrass, and use it to deploy my flow (read Creating Flow Configurations to learn more).

Things to Know
I’ve barely scratched the surface here; AWS IoT Things Graph provides you with a lot of power and flexibility and I’ll leave you to discover more on your own!

Here are a couple of things to keep in mind:

Pricing – Pricing is based on the number of steps executed (for cloud deployments) or deployments (for edge deployments), and is detailed on the AWS IoT Things Graph Pricing page.

API Access – In addition to console access, you can use the AWS IoT Things Graph API to build your models and flows.
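As a rough illustration, here is a small boto3 sketch that lists published flow templates; I'm assuming the 'iotthingsgraph' client and the shape of the search_flow_templates response, so treat it as a starting point rather than a reference:

import boto3

# Sketch: list flow templates in the current namespace.
thingsgraph = boto3.client('iotthingsgraph')

for summary in thingsgraph.search_flow_templates().get('summaries', []):
    print(summary['id'], summary['arn'])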

Regions – AWS IoT Things Graph is available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Jeff;


New – Data API for Amazon Aurora Serverless

This post was originally published on this site

If you have ever written code that accesses a relational database, you know the drill. You open a connection, use it to process one or more SQL queries or other statements, and then close the connection. You probably used a client library that was specific to your operating system, programming language, and your database. At some point you realized that creating connections took a lot of clock time and consumed memory on the database engine, and soon after found out that you could (or had to) deal with connection pooling and other tricks. Sound familiar?

The connection-oriented model that I described above is adequate for traditional, long-running programs where the setup time can be amortized over hours or even days. It is not, however, a great fit for serverless functions that are frequently invoked and that run for time intervals that range from milliseconds to minutes. Because there is no long-running server, there’s no place to store a connection identifier for reuse.

Aurora Serverless Data API
In order to resolve this mismatch between serverless applications and relational databases, we are launching a Data API for the MySQL-compatible version of Amazon Aurora Serverless. This API frees you from the complexity and overhead that come along with traditional connection management, and gives you the power to quickly and easily execute SQL statements that access and modify your Amazon Aurora Serverless Database instances.

The Data API is designed to meet the needs of both traditional and serverless apps. It takes care of managing and scaling long-term connections to the database and returns data in JSON form for easy parsing. All traffic runs over secure HTTPS connections. It includes the following functions:

ExecuteStatement – Run a single SQL statement, optionally within a transaction.

BatchExecuteStatement – Run a single SQL statement across an array of data, optionally within a transaction.

BeginTransaction – Begin a transaction, and return a transaction identifier. Transactions are expected to be short (generally 2 to 5 minutes).

CommitTransaction – End a transaction and commit the operations that took place within it.

RollbackTransaction – End a transaction without committing the operations that took place within it.

Each function must run to completion within 1 minute, and can return up to 1 megabyte of data.

Using the Data API
I can use the Data API from the Amazon RDS Console, the command line, or by writing code that calls the functions that I described above. I’ll show you all three in this post.

The Data API is really easy to use! The first step is to enable it for the desired Amazon Aurora Serverless database. I open the Amazon RDS Console, find & select the cluster, and click Modify:

Then I scroll down to the Network & Security section, click Data API, and Continue:

On the next page I choose to apply the settings immediately, and click Modify cluster:

Now I need to create a secret to store the credentials that are needed to access my database. I open the Secrets Manager Console and click Store a new secret. I leave Credentials for RDS selected, enter a valid database user name and password, optionally choose a non-default encryption key, and then select my serverless database. Then I click Next:

I name my secret and tag it, and click Next to configure it:

I use the default values on the next page, click Next again, and now I have a brand new secret:

Now I need two ARNs, one for the database and one for the secret. I fetch both from the console, first for the database:

And then for the secret:

The pair of ARNs (database and secret) provides me with access to my database, and I will protect them accordingly!
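If you prefer to look the ARNs up programmatically, here's a minimal boto3 sketch; the cluster identifier and secret name are placeholders for the resources created above:

import boto3

# Sketch: fetch the cluster ARN from RDS and the secret ARN from Secrets Manager.
# 'aurora-sl-1' and the secret name are placeholders for your own resources.
rds = boto3.client('rds')
cluster_arn = rds.describe_db_clusters(
    DBClusterIdentifier='aurora-sl-1')['DBClusters'][0]['DBClusterArn']

secretsmanager = boto3.client('secretsmanager')
secret_arn = secretsmanager.describe_secret(
    SecretId='aurora-serverless-data-api-sl-admin')['ARN']

print(cluster_arn)
print(secret_arn)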

Using the Data API from the Amazon RDS Console
I can use the Query Editor in the Amazon RDS Console to run queries that call the Data API. I open the console and click Query Editor, and create a connection to the database. I select the cluster, enter my credentials, and pre-select the table of interest. Then I click Connect to database to proceed:

I enter a query and click Run, and view the results within the editor:

Using the Data API from the Command Line
I can exercise the Data API from the command line:

$ aws rds-data execute-statement \
  --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL" \
  --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1" \
  --database users \
  --sql "show tables" \
  --output json

I can use jq to pick out the part of the result that is of interest to me:

... | jq .records
[
  {
    "values": [
      {
        "stringValue": "users"
      }
    ]
  }
]

I can query the table and get the results (the SQL statement is "select * from users where userid='jeffbarr'"):

... | jq .records
[
  {
    "values": [
      {
        "stringValue": "jeffbarr"
      },
      {
        "stringValue": "Jeff"
      },
      {
        "stringValue": "Barr"
      }
    ]
  }
]

If I specify --include-result-metadata, the query also returns data that describes the columns of the result (I’ll show only the first one in the interest of frugality):

... | jq .columnMetadata[0]
{
  "type": 12,
  "name": "userid",
  "label": "userid",
  "nullable": 1,
  "isSigned": false,
  "arrayBaseColumnType": 0,
  "scale": 0,
  "schemaName": "",
  "tableName": "users",
  "isCaseSensitive": false,
  "isCurrency": false,
  "isAutoIncrement": false,
  "precision": 15,
  "typeName": "VARCHAR"
}

The Data API also allows me to wrap a series of statements in a transaction, and then either commit or rollback. Here’s how I do that (I’m omitting --secret-arn and --resource-arn for clarity):

$ ID=`aws rds-data begin-transaction --database users --output json | jq -r .transactionId`
$ echo $ID
ATP6Gz88GYNHdwNKaCt/vGhhKxZs2QWjynHCzGSdRi9yiQRbnrvfwF/oa+iTQnSXdGUoNoC9MxLBwyp2XbO4jBEtczBZ1aVWERTym9v1WVO/ZQvyhWwrThLveCdeXCufy/nauKFJdl79aZ8aDD4pF4nOewB1aLbpsQ==

$ aws rds-data execute-statement --transaction-id "$ID" --database users --sql "..."
$ ...
$ aws rds-data execute-statement --transaction-id "$ID" --database users --sql "..."
$ aws rds-data commit-transaction --transaction-id "$ID"

If I decide not to commit, I invoke rollback-transaction instead.
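The same transaction flow is available from the SDKs as well; here's a minimal boto3 sketch (the ARNs and the sample INSERT statement are placeholders):

import boto3

# Sketch: run a statement inside a Data API transaction. The ARNs and the
# INSERT statement are placeholders.
rds_data = boto3.client('rds-data')
cluster_arn = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1'
secret_arn = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL'

tx = rds_data.begin_transaction(
    resourceArn=cluster_arn, secretArn=secret_arn, database='users')

try:
    rds_data.execute_statement(
        resourceArn=cluster_arn, secretArn=secret_arn, database='users',
        transactionId=tx['transactionId'],
        sql="insert into users values ('carmenbarr', 'Carmen', 'Barr')")
    rds_data.commit_transaction(
        resourceArn=cluster_arn, secretArn=secret_arn,
        transactionId=tx['transactionId'])
except Exception:
    rds_data.rollback_transaction(
        resourceArn=cluster_arn, secretArn=secret_arn,
        transactionId=tx['transactionId'])
    raise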

Using the Data API with Python and Boto
Since this is an API, programmatic access is easy. Here’s some very simple Python / Boto code:

import boto3

client = boto3.client('rds-data')

response = client.execute_statement(
    secretArn   = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL',
    database    = 'users',
    resourceArn = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1',
    sql         = 'select * from users'
)

for user in response['records']:
  userid     = user[0]['stringValue']
  first_name = user[1]['stringValue']
  last_name  = user[2]['stringValue']
  print(userid + ' ' + first_name + ' ' + last_name)

And the output:

$ python data_api.py
jeffbarr Jeff Barr
carmenbarr Carmen Barr

Genuine, production-quality code would reference the table columns symbolically using the metadata that is returned as part of the response.
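For example, here's a sketch of that approach, using includeResultMetadata to map column names to values (the ARNs are the same placeholders as above):

import boto3

# Sketch: build a dict per row keyed by column name instead of using
# positional indexes.
client = boto3.client('rds-data')

response = client.execute_statement(
    secretArn   = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-serverless-data-api-sl-admin-2Ir1oL',
    resourceArn = 'arn:aws:rds:us-east-1:123456789012:cluster:aurora-sl-1',
    database    = 'users',
    sql         = 'select * from users',
    includeResultMetadata = True
)

columns = [col['name'] for col in response['columnMetadata']]
for record in response['records']:
    row = {name: list(value.values())[0] for name, value in zip(columns, record)}
    print(row)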

By the way, my Amazon Aurora Serverless cluster was configured to scale capacity all the way down to zero when not active. Here’s what the scaling activity looked like while I was writing this post and running the queries:

Now Available
You can make use of the Data API today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. There is no charge for the API, but you will pay the usual price for data transfer out of AWS.

Jeff;

New – AWS IoT Events: Detect and Respond to Events at Scale

This post was originally published on this site

As you may have been able to tell from many of the announcements that we have made over the last four or five years, we are working to build a wide-ranging set of Internet of Things (IoT) services and capabilities. Here’s a quick recap:

October 2015 – AWS IoT Core – A fundamental set of Cloud Services for Connected Devices.

June 2017 – AWS Greengrass – The ability to Run AWS Lambda Functions on Connected Devices.

November 2017 – AWS IoT Device Management – Onboarding, Organization, Monitoring, and Remote Management of Connected Devices.

November 2017 – AWS IoT Analytics – Advanced Data Analysis for IoT Devices.

November 2017 – Amazon FreeRTOS – An IoT Operating System for Microcontrollers.

April 2018 – Greengrass ML Inference – The power to do Machine Learning Inference at the Edge.

August 2018 – AWS IoT Device Defender – A service that helps to Keep Your Connected Devices Safe.

Last November we also announced our plans to launch four new IoT Services:

You can use these services individually or together to build all sorts of powerful, connected applications!

AWS IoT Events Now Available
Today we are making AWS IoT Events available in production form in four AWS Regions. You can use this service to monitor and respond to events (patterns of data that identify changes in equipment or facilities) at scale. You can detect a misaligned robot arm, a motion sensor that triggers outside of business hours, an unsealed freezer door, or a motor that is running outside of tolerance, all with the goal of driving faster and better-informed decisions.

As you will see in a moment, you can easily create detector models that represent your devices, their states, and the transitions (driven by sensors and events, both known as inputs) between the states. The models can trigger actions when critical events are detected, allowing you to build robust, highly automated systems. Actions can, for example, send a text message to a service technician or invoke an AWS Lambda function.

You can access AWS IoT Events from the AWS IoT Event Console or by writing code that calls the AWS IoT Events API functions. I’ll use the Console, and I will start by creating a detector model. I click Create detector model to begin:

I have three options; I’ll go with the demo by clicking Launch demo with inputs:

This shortcut creates an input and a model, and also enables some “demo” functionality that sends data to the model. The model looks like this:

Before examining the model, let’s take a look at the input. I click on Inputs in the left navigation to see them:

I can see all of my inputs at a glance; I click on the newly created input to learn more:

This input represents the battery voltage measured from a device that is connected to a particular powerwallId:

Ok, let’s return to (and dissect) the detector model! I return to the navigation, click Detector models, find my model, and click it:

There are three Send options at the top; each one sends data (an input) to the detector model. I click on Send data for Charging to get started. This generates a message that looks like this; I click Send data to do just that:

Then I click Send data for Charged to indicate that the battery is fully charged. The console shows me the state of the detector:

Each time an input is received, the detector processes it. Let’s take a closer look at the detector. It has three states (Charging, Charged, and Discharging):

The detector starts out in the Charging state, and transitions to Charged when the Full_charge event is triggered. Here’s the definition of the event, including the trigger logic:

The trigger logic is evaluated each time an input is received (your IoT app must call BatchPutMessage to inform AWS IoT Events). If the trigger logic evaluates to a true condition, the model transitions to the new (destination) state, and it can also initiate an event action. This transition has no actions; I can add one (or more) by clicking Add action. My choices are:

  • Send MQTT Message – Send a message to an MQTT topic.
  • Send SNS Message – Send a message to an SNS target, identified by an ARN.
  • Set Timer – Set, reset, or destroy a timer. Times can be expressed in seconds, minutes, hours, days, or months.
  • Set Variable – Set, increment, or decrement a variable.

Returning (once again) to the detector, I can modify the states as desired. For example, I could fine-tune the Discharging aspect of the detector by adding a LowBattery state:

After I create my inputs and my detector, I Publish the model so that my IoT devices can use and benefit from it. I click Publish and fill in a few details:

The Detector generation method has two options. I can Create a detector for each unique key value (if I have a bunch of devices), or I can Create a single detector (if I have one device). If I choose the first option, I need to choose the key that separates one device from another.

Once my detector has been published, I can send data to it using AWS IoT Analytics, IoT Core, or from a Lambda function.
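For example, here's a minimal boto3 sketch that feeds a reading to the input from a script; the input name and payload fields are placeholders standing in for the demo's battery-voltage input:

import json
import uuid

import boto3

# Sketch: send one battery-voltage reading to an AWS IoT Events input.
# The input name and payload fields are placeholders for the demo input.
iotevents_data = boto3.client('iotevents-data')

response = iotevents_data.batch_put_message(
    messages=[{
        'messageId': str(uuid.uuid4()),
        'inputName': 'PowerwallBatteryVoltage',
        'payload': json.dumps({'powerwallId': 'pw-001', 'batteryVoltage': 42}).encode('utf-8')
    }]
)
print(response['BatchPutMessageErrorEntries'])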

Get Started Today
We are launching AWS IoT Events in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions and you can start using it today!

Jeff;


MS-ISAC Highlights Verizon Data Breach Report Release

This post was originally published on this site

Original release date: May 29, 2019

The Multi-State Information Sharing & Analysis Center (MS-ISAC) has released a Cybersecurity Spotlight on the 2019 Verizon Data Breach Report to raise awareness of data breach incidents and provide recommended best practices for election officials. The report—produced annually by the Verizon Threat Research Advisory Center (VTRAC)—provides analysis on data breach trends affecting a variety of sectors, including public administration, healthcare, and education.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages election officials to review MS-ISAC’s Cybersecurity Spotlight and Verizon’s 2019 Data Breach Investigations Report for more information and recommendations.


This product is provided subject to this Notification and this Privacy & Use policy.

Tips for a Cyber Safe Vacation

This post was originally published on this site

Original release date: May 24, 2019

As summer nears, many people will soon be taking vacations. When planning vacations, users should be aware of potential rental scams and “free” vacation ploys. Travelers should also keep in mind risks related to traveling with mobile devices.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages travelers to review the following suggested tips and security practices to keep their vacation cyber safe:


This product is provided subject to this Notification and this Privacy & Use policy.

New – Opt-in to Default Encryption for New EBS Volumes

This post was originally published on this site

My colleagues on the AWS team are always looking for ways to make it easier and simpler for you to protect your data from unauthorized access. This work is visible in many different ways, and includes the AWS Cloud Security page, the AWS Security Blog, a rich collection of AWS security white papers, an equally rich set of AWS security, identity, and compliance services, and a wide range of security features within individual services. As you might recall from reading this blog, many AWS services support encryption at rest & in transit, logging, IAM roles & policies, and so forth.

Default Encryption
Today I would like to tell you about a new feature that makes the use of encrypted Amazon EBS (Elastic Block Store) volumes even easier. This launch builds on some earlier EBS security launches including:

You can now specify that you want all newly created EBS volumes to be created in encrypted form, with the option to use the default key provided by AWS, or a key that you create. Because keys and EC2 settings are specific to individual AWS regions, you must opt-in on a region-by-region basis.

This new feature will let you reach your protection and compliance goals by making it simpler and easier for you to ensure that newly created volumes are created in encrypted form. It will not affect existing unencrypted volumes.

If you use IAM policies that require the use of encrypted volumes, you can use this feature to avoid launch failures that would occur if unencrypted volumes were inadvertently referenced when an instance is launched. Your security team can enable encryption by default without having to coordinate with your development team, and with no other code or operational changes.

Encrypted EBS volumes deliver the specified instance throughput, volume performance, and latency, at no extra charge. I open the EC2 Console, make sure that I am in the region of interest, and click Settings to get started:

Then I select Always encrypt new EBS volumes:

I can click Change the default key and choose one of my keys as the default:

Either way, I click Update to proceed. One thing to note here: This setting applies to a single AWS region; I will need to repeat the steps above for each region of interest, checking the option and choosing the key.

Going forward, all EBS volumes that I create in this region will be encrypted, with no additional effort on my part. When I create a volume, I can use the key that I selected in the EC2 Settings, or I can select a different one:

Any snapshots that I create are encrypted with the key that was used to encrypt the volume:

If I use the volume to create a snapshot, I can use the original key or I can choose another one:

Things to Know
Here are some important things that you should know about this important new AWS feature:

Older Instance Types – After you enable this feature, you will not be able to launch any more C1, M1, M2, or T1 instances or attach newly encrypted EBS volumes to existing instances of these types. We recommend that you migrate to newer instance types.

AMI Sharing – As I noted above, we recently gave you the ability to share encrypted AMIs with other AWS accounts. However, you cannot share them publicly, and you should use a separate account to create community AMIs, Marketplace AMIs, and public snapshots. To learn more, read How to Share Encrypted AMIs Across Accounts to Launch Encrypted EC2 Instances.

Other AWS Services – AWS services such as Amazon Relational Database Service (RDS) and Amazon WorkSpaces that use EBS for storage perform their own encryption and key management and are not affected by this launch. Services such as Amazon EMR that create volumes within your account will automatically respect the encryption setting, and will use encrypted volumes if the always-encrypt feature is enabled.

API / CLI Access – You can also access this feature from the EC2 CLI and the API.
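For example, here's a quick boto3 sketch of the relevant calls; the KMS key ARN is a placeholder, and the calls need to be repeated in every region of interest:

import boto3

# Sketch: opt in to EBS encryption by default in one region and set a
# customer managed CMK as the default key (the key ARN is a placeholder).
ec2 = boto3.client('ec2', region_name='us-east-1')

ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId='arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555')

print(ec2.get_ebs_encryption_by_default()['EbsEncryptionByDefault'])
print(ec2.get_ebs_default_kms_key_id()['KmsKeyId'])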

No Charge – There is no charge to enable or use encryption. If you are using encrypted AMIs and create a separate one for each AWS account, you can now share the AMI with other accounts, leading to a reduction in storage utilization and charges.

Per-Region – As noted above, you can opt-in to default encryption on a region-by-region basis.

Available Now
This feature is available now and you can start using it today in all public AWS regions and in GovCloud. It is not available in the AWS regions in China.

Jeff;


Privacy Awareness Week

This post was originally published on this site

Original release date: May 22, 2019

The Federal Trade Commission (FTC) has released an announcement promoting Privacy Awareness Week (PAW). PAW is an annual event fostering awareness of privacy issues and the importance of protecting personal information. This year’s theme, “Protecting Privacy is Everyone’s Responsibility,” focuses on promoting privacy awareness for consumers and businesses.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages consumers and organizations to review FTC’s post and consider the following practices to protect privacy and safeguard data:


This product is provided subject to this Notification and this Privacy & Use policy.