Tag Archives: AWS

New – Gigabit Connectivity Options for Amazon Direct Connect

This post was originally published on this site

AWS Direct Connect gives you the ability to create private network connections between your datacenter, office, or colocation environment and AWS. The connections start at your network and end at one of 91 AWS Direct Connect locations and can reduce your network costs, increase throughput, and deliver a more consistent experience than an Internet-based connection. In most cases you will need to work with an AWS Direct Connect Partner to get your connection set up.

As I prepared to write this post, I learned that my understanding of AWS Direct Connect was incomplete, and that the name actually encompasses three distinct models. Here’s a summary:

Dedicated Connections are available with 1 Gbps and 10 Gbps capacity. You use the AWS Management Console to request a connection, after which AWS will review your request and either follow up via email to request additional information or provision a port for your connection. Once AWS has provisioned a port for you, the AWS Direct Connect Partner typically needs days to weeks to complete the connection. A Dedicated Connection is a physical Ethernet port dedicated to you; each one supports up to 50 Virtual Interfaces (VIFs). To get started, read Creating a Connection.

Hosted Connections are available with 50 to 500 Mbps capacity, and connection requests are made via an AWS Direct Connect Partner. After the AWS Direct Connect Partner establishes a network circuit to your premises, capacity to AWS Direct Connect can be added or removed on demand by adding or removing Hosted Connections. Each Hosted Connection supports a single VIF; you can obtain multiple VIFs by acquiring multiple Hosted Connections. The AWS Direct Connect Partner provisions the Hosted Connection and sends you an invite, which you must accept (with a click) in order to proceed.

Hosted Virtual Interfaces are also set up via AWS Direct Connect Partners. A Hosted Virtual Interface has access to all of the available capacity on the network link between the AWS Direct Connect Partner and an AWS Direct Connect location. The network link between the AWS Direct Connect Partner and the AWS Direct Connect location is shared by multiple customers and could possibly be oversubscribed. Due to the possibility of oversubscription in the Hosted Virtual Interface model, we no longer allow new AWS Direct Connect Partner service integrations using this model and recommend that customers with workloads sensitive to network congestion use Dedicated or Hosted Connections.

Higher Capacity Hosted Connections
Today we are announcing Hosted Connections with 1, 2, 5, or 10 Gbps of capacity. These capacities will be available through a select set of AWS Direct Connect Partners who have been specifically approved by AWS. We are also working with AWS Direct Connect Partners to implement additional monitoring of the network link between the AWS Direct Connect Partners and AWS.

Most AWS Direct Connect Partners support adding or removing Hosted Connections on demand. Suppose that you archive a massive amount of data to Amazon Glacier at the end of every quarter, and that you already have a pair of resilient 10 Gbps circuits from your AWS Direct Connect Partner for use by other parts of your business. You can then create a pair of resilient 1, 2, 5, or 10 Gbps Hosted Connections at the end of the quarter, upload your data to Glacier, and delete the Hosted Connections.

You pay AWS for the port-hour charges while the Hosted Connections are in place, along with any associated data transfer charges (see the Direct Connect Pricing page for more info). Check with your AWS Direct Connect Partner for the charges associated with their services. You get a cost-effective, elastic way to move data to the cloud while creating Hosted Connections only when needed.
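To make the elasticity concrete, here is a back-of-the-envelope cost sketch for a short-lived Hosted Connection. The rates used are hypothetical placeholders, not actual AWS prices; see the Direct Connect Pricing page for real port-hour and data transfer rates.

```python
# Back-of-the-envelope cost sketch for a temporary Hosted Connection.
# The rates below are HYPOTHETICAL placeholders -- check the AWS Direct
# Connect pricing page for the actual port-hour and data transfer rates.

HYPOTHETICAL_PORT_RATE_PER_HOUR = 2.25    # USD per port-hour (placeholder)
HYPOTHETICAL_TRANSFER_RATE_PER_GB = 0.02  # USD per GB (placeholder)

def hosted_connection_cost(hours_active, gb_transferred):
    """Estimate the AWS-side cost of a short-lived Hosted Connection."""
    port_cost = hours_active * HYPOTHETICAL_PORT_RATE_PER_HOUR
    transfer_cost = gb_transferred * HYPOTHETICAL_TRANSFER_RATE_PER_GB
    return port_cost + transfer_cost

# A quarterly archive job: connection up for 48 hours, 20 TB moved.
print(hosted_connection_cost(48, 20_000))
```

Because you delete the Hosted Connection after the upload, the port-hour charges stop accruing, which is the whole point of the on-demand model.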

Available Now
The new higher capacity Hosted Connections are available through select AWS Direct Connect Partners after they are approved by AWS.

Jeff;

PS – As part of this launch, we are reducing the prices for the existing 200, 300, 400, and 500 Mbps Hosted Connection capacities by 33.3%, effective March 1, 2019.

AWS Heroes: Putting AWS security services to work for you

Guest post by AWS Community Hero Mark Nunnikhoven. Mark is the Vice President of Cloud Research at long-time APN Advanced Technology Partner Trend Micro. In addition to helping educate the AWS community about modern security and privacy, he has spearheaded Trend Micro’s launch-day support of most of the AWS security services and attended every AWS re:Invent!

Security is a pillar of the AWS Well-Architected Framework. It’s critical to the success of any workload. But it’s also often misunderstood. It’s steeped in jargon and talked about in terms of threats and fear. This has led to security getting a bad reputation. It’s often thought of as a roadblock and something to put up with.

Nothing could be further from the truth.

At its heart, cybersecurity is simple. It’s a set of processes and controls that work to make sure that whatever I’ve built works as intended… and only as intended. How do I make that happen in the AWS Cloud?

Shared responsibility

It all starts with the shared responsibility model. The model defines the line where responsibility for day-to-day operations shifts from AWS to me, the user. AWS provides the security of the cloud and I am responsible for security in the cloud. As I move from infrastructure services such as Amazon EC2 toward more managed and abstracted services, more and more of my responsibilities shift to AWS.

My tinfoil hat would be taken away if I didn’t mention that everyone needs to verify that AWS is holding up their end of the deal (#protip: they are and at world-class levels). This is where AWS Artifact enters the picture. It is an easy way to download the evidence that AWS is fulfilling their responsibilities under the model.

But what about my responsibilities under the model? AWS offers help there in the form of various services under the Security, Identity, & Compliance category.

Security services

The trick is understanding how all of these security services fit together to help me meet my responsibilities. Based on conversations I’ve had around the world and my experience teaching these services at various AWS Summits, I’ve found that grouping them into five subcategories makes things clearer: authorization, protected stores, authentication, enforcement, and visibility.

A few of these categories are already well understood.

  • Authentication services help me identify my users.
  • Authorization services allow me to determine what they—and other services—are allowed to do and under what conditions.
  • Protected stores allow me to encrypt sensitive data and regulate access to it.

Two subcategories aren’t as well understood: enforcement and visibility. I use the services in these categories daily in my security practice and they are vital to ensuring that my apps are working as intended.

Enforcement

Teams struggle with how to get the most out of enforcement controls and it can be difficult to understand how to piece these together into a workable security practice. Most of these controls detect issues, essentially raising their hand when something might be wrong. To protect my deployments, I need a process to handle those detections.

By remembering the goal of ensuring that whatever I build works as intended and only as intended, I can better frame how each of these services helps me.

AWS CloudTrail logs nearly every API action in an account but mining those logs for suspicious activity is difficult. Enter Amazon GuardDuty. It continuously scours CloudTrail logs—as well as Amazon VPC flow logs and DNS logs—for threats and suspicious activity at the AWS account level.
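GuardDuty’s actual detection logic is proprietary and far more sophisticated (machine learning, threat intelligence feeds, VPC flow and DNS logs), but as a toy illustration of the kind of rule it automates, here is a sketch that flags CloudTrail-style events that often warrant a closer look. The watch list and event shapes are illustrative, not GuardDuty’s real criteria.

```python
# Toy illustration only: flag CloudTrail-style events that often warrant
# a closer look. Real GuardDuty detections are far more sophisticated.

SUSPICIOUS_EVENTS = {"DeleteTrail", "StopLogging", "CreateAccessKey"}

def flag_suspicious(events):
    """Return the events whose eventName is on the watch list."""
    return [e for e in events if e.get("eventName") in SUSPICIOUS_EVENTS]

sample = [
    {"eventName": "DescribeInstances", "userIdentity": "alice"},
    {"eventName": "StopLogging", "userIdentity": "mallory"},
]
print(flag_suspicious(sample))
```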

Amazon EC2 instances have the biggest potential for security challenges as they are running a full operating system and applications written by various third parties. All that complexity added up to over 13,000 reported vulnerabilities last year. Amazon Inspector runs on-demand assessments of your instances and raises findings related to the operating system and installed applications that include recommended mitigations.

Despite starting from a locked-down state, teams often make mistakes and sometimes accidentally expose sensitive data in an Amazon S3 bucket. Amazon Macie continuously scans targeted buckets looking for sensitive information and misconfigurations. This augments additional protections like S3 Block Public Access and Trusted Advisor checks.

AWS WAF and AWS Shield work on AWS edge locations and actively stop attacks that they are configured to detect. AWS Shield targets DDoS activity and AWS WAF takes aim at layer seven or web attacks.

Each of these services supports the work teams do in hardening configurations and writing quality code. They are designed to highlight areas of concern so that teams can take action. The challenge is prioritizing those actions.

Visibility

Prioritization is where the visibility services step in. As previously mentioned, AWS Artifact provides visibility into AWS’ activities under the shared responsibility model. The new AWS Security Hub helps me understand the data generated by the other AWS security, identity, and compliance services along with data generated by key APN Partner solutions.

The goal of AWS Security Hub is to be the first stop for any security activity. All data sent to the hub is normalized into the AWS Security Finding Format, which includes a standardized severity rating. This provides context for each finding and helps me determine which actions to take first.
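To show how a normalized severity rating enables prioritization, here is a minimal sketch that sorts findings by severity, highest first. The findings below are simplified stand-ins for real finding documents; only the nested severity field is modeled.

```python
# Sort Security Hub-style findings by normalized severity (0-100),
# highest first. These dicts are simplified stand-ins for real findings.

def prioritize(findings):
    return sorted(findings, key=lambda f: f["Severity"]["Normalized"], reverse=True)

findings = [
    {"Title": "S3 bucket publicly readable", "Severity": {"Normalized": 70}},
    {"Title": "Unused IAM credential",       "Severity": {"Normalized": 25}},
    {"Title": "EC2 instance port probe",     "Severity": {"Normalized": 40}},
]
for f in prioritize(findings):
    print(f["Severity"]["Normalized"], f["Title"])
```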

This prioritized list of findings quickly translates into a set of responses to undertake. At first, these might be manual responses, but as with anything in the AWS Cloud, automation is the key to success.

Using AWS Lambda to react to AWS Security Hub findings is a wildly successful and simple way of modernizing an approach to security. This automated workflow sits atop a pyramid of security controls:

• Core AWS security services and APN Partner solutions at the bottom
• The AWS Security Hub providing visibility in the middle
• Automation as the crown jewel on top
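As a rough illustration of the automation at the top of that pyramid, here is a hedged sketch of a Lambda handler invoked with Security Hub findings (for example, via a CloudWatch Events rule). The severity thresholds and action names are hypothetical; a real handler would call AWS APIs to remediate or notify.

```python
# Hedged sketch of a Lambda handler reacting to Security Hub findings.
# Thresholds and action names are HYPOTHETICAL; a real handler would
# invoke AWS APIs (SNS, SSM Automation, etc.) to act on each finding.

def route_finding(finding):
    """Pick a response action based on normalized severity (0-100)."""
    severity = finding.get("Severity", {}).get("Normalized", 0)
    if severity >= 70:
        return "page-oncall"     # hypothetical action name
    if severity >= 40:
        return "open-ticket"     # hypothetical action name
    return "log-only"

def handler(event, context):
    # Security Hub events carry their findings under detail.findings.
    findings = event.get("detail", {}).get("findings", [])
    return [route_finding(f) for f in findings]

sample_event = {"detail": {"findings": [{"Severity": {"Normalized": 85}}]}}
print(handler(sample_event, None))
```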

What’s next?

In this post, I described my high-level approach to security success in the AWS Cloud. This aligns directly with the AWS Well-Architected Framework and thousands of customer success stories. When you understand the shared responsibility model and the value of each service, you’re well on your way to demystifying security and building better in the AWS Cloud.

New – Open Distro for Elasticsearch

Elasticsearch is a distributed, document-oriented search and analytics engine. It supports structured and unstructured queries, and does not require a schema to be defined ahead of time. Elasticsearch can be used as a search engine, and is often used for web-scale log analytics, real-time application monitoring, and clickstream analytics.

Originally launched as a true open source project, some of the more recent additions to Elasticsearch are proprietary. My colleague Adrian explains our motivation to start Open Distro for Elasticsearch in his post, Keeping Open Source Open. As strong believers in, and supporters of, open source software, we believe this project will help continue to accelerate open source Elasticsearch innovation.

Open Distro for Elasticsearch
Today we are launching Open Distro for Elasticsearch. This is a value-added distribution of Elasticsearch that is 100% open source (Apache 2.0 license) and supported by AWS. Open Distro for Elasticsearch leverages the open source code for Elasticsearch and Kibana. This is not a fork; we will continue to send our contributions and patches upstream to advance these projects.

In addition to Elasticsearch and Kibana, the first release includes a set of advanced security, event monitoring & alerting, performance analysis, and SQL query features (more on those in a bit). In addition to the source code repo, Open Distro for Elasticsearch and Kibana are available as RPM packages and Docker containers, with separate downloads for the SQL JDBC driver and the PerfTop CLI. You can run this code on your laptop, in your data center, or in the cloud.

Contributions are welcome, as are bug reports and feature requests.

Inside Open Distro for Elasticsearch
Let’s take a quick look at the features that we are including in Open Distro for Elasticsearch. Some of these are currently available in Amazon Elasticsearch Service; others will become available in future updates.

Security – This plugin supports node-to-node encryption, five types of authentication (basic, Active Directory, LDAP, Kerberos, and SAML), role-based access control at multiple levels (clusters, indices, documents, and fields), audit logging, and cross-cluster search so that any node in a cluster can run search requests across other nodes in the cluster. Learn More

Event Monitoring & Alerting – This feature notifies you when data from one or more Elasticsearch indices meets certain conditions. You could, for example, notify a Slack channel if an application logs more than five HTTP 503 errors in an hour. Monitoring is based on jobs that run on a defined schedule, checking indices against trigger conditions, and raising alerts when a condition has been triggered. Learn More
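The scheduled-job-plus-trigger-condition model can be sketched in a few lines. This is an illustration of the concept, not the plugin’s implementation: the log records stand in for documents in an index, and the threshold mirrors the “more than five HTTP 503 errors in an hour” example above.

```python
# Sketch of the kind of trigger condition an alerting monitor evaluates
# on a schedule: alert when more than five HTTP 503s occur in an hour.
# This illustrates the concept; it is not the plugin's implementation.
from datetime import datetime, timedelta

def should_alert(log_records, now, threshold=5, window=timedelta(hours=1)):
    recent_503s = [
        r for r in log_records
        if r["status"] == 503 and now - r["timestamp"] <= window
    ]
    return len(recent_503s) > threshold

now = datetime(2019, 3, 1, 12, 0)
records = [{"status": 503, "timestamp": now - timedelta(minutes=i)} for i in range(6)]
print(should_alert(records, now))
```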

Deep Performance Analysis – This is a REST API that allows you to query a long list of performance metrics for your cluster. You can access the metrics programmatically or visualize them using the PerfTop CLI and other tools. Learn More

SQL Support – This feature allows you to query your cluster using SQL statements. It is an improved version of the elasticsearch-sql plugin, and supports a rich set of statements.

This is just the beginning; we have more in the works, and also look forward to your contributions and suggestions!

Jeff;

Building serverless apps with components from the AWS Serverless Application Repository

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium on both business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), an infrastructure-as-code, YAML-based language for templating AWS resources.
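As a sketch of what such a template looks like, here is a minimal AWS SAM template for a single function behind an API endpoint. The handler name and code location are placeholders, not part of any published component.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Minimal SAM template sketch (placeholder names).
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler   # placeholder handler
      Runtime: nodejs8.10
      CodeUri: ./src           # placeholder code location
      Events:
        Api:
          Type: Api
          Properties:
            Path: /hello
            Method: GET
```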

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time.

Broadly speaking, I built:

  • A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
  • A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page.

Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that created custom IAM roles or resource policies, and the search returned several matching applications.

I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page.

Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

  1. Check component permissions.
  2. Inspect the component implementation.
  3. Deploy and run the component in a restricted environment.
  4. Monitor the component’s behavior and cost before using in production.

This might appear to negate the quick delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company.

Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private. Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t give unnecessary permissions that could potentially compromise security.

In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

  CreateStripeCharge:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Policies:
        - SNSCrudPolicy:
            TopicName: !GetAtt SNSTopic.TopicName
        - Statement:
            Effect: Allow
            Action:
              - ssm:GetParameters
            Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component uses a predefined AWS SAM policy template and a custom one. The predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team; using them to specify resource permissions is a recommended practice for serverless components on AWS Serverless Application Repository. The custom IAM policy allows the function to retrieve AWS Systems Manager parameters, which are the best practice for storing secure values, such as the Stripe secret key.

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic to simply invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

return paymentProcessor.createCharge(token, amount, currency, description)
    .then(chargeResponse => {
      createdCharge = chargeResponse;
      return pubsub.publish(createdCharge, TOPIC_ARN);
    })
    .then(() => createdCharge)
    .catch((err) => {
      console.log(err);
      throw err;
    });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place.

I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long.

I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.

After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

Resources:
  ShowProduct:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /product/{productId}
            Method: GET
  apilambdastripecharge:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
        SemanticVersion: 3.0.0
      Parameters:
        # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
        CorsOrigin: YOUR_VALUE
        # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
        # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value
Outputs:
  ApiUrl:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
    Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters.

  • I set CorsOrigin to “*” for demonstration purposes.
  • I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command:

aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value YOUR_STRIPE_SECRET_KEY --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too.

To finish, I ran the following two commands:

aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME

aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed.

My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value.

In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component. That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too.

Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.

Learn about AWS Services & Solutions – March AWS Online Tech Talks

AWS Tech Talks

Join us this March to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute

March 26, 2019 | 11:00 AM – 12:00 PM PT – Technical Deep Dive: Running Amazon EC2 Workloads at Scale – Learn how you can optimize your workloads running on Amazon EC2 for cost and performance, all while handling peak demand.

March 27, 2019 | 9:00 AM – 10:00 AM PT – Introduction to AWS Outposts – Learn how you can run AWS infrastructure on-premises with AWS Outposts for a truly consistent hybrid experience.

March 28, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on OpenMPI and Elastic Fabric Adapter (EFA) – Learn how OpenMPI and the Elastic Fabric Adapter (EFA) can help you scale tightly coupled HPC workloads on Amazon EC2.

Containers

March 21, 2019 | 11:00 AM – 12:00 PM PT – Running Kubernetes with Amazon EKS – Learn how to run Kubernetes on AWS with Amazon EKS.

March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Data Lakes & Analytics

March 19, 2019 | 9:00 AM – 10:00 AM PT – Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation – Learn how to use ML Transforms for AWS Glue to link and de-duplicate matching records.

March 20, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Perform Real-time ETL from IoT Devices into your Data Lake with Amazon Kinesis – Learn best practices for how to perform real-time extract-transform-load into your data lake with Amazon Kinesis.

March 20, 2019 | 11:00 AM – 12:00 PM PT – Machine Learning Powered Business Intelligence with Amazon QuickSight – Learn how Amazon QuickSight leverages powerful ML and natural language capabilities to generate insights that help you discover the story behind the numbers.

Databases

March 18, 2019 | 9:00 AM – 10:00 AM PT – What’s New in PostgreSQL 11 – Find out what’s new in PostgreSQL 11, the latest major version of the popular open source database, and learn about AWS services for running highly available PostgreSQL databases in the cloud.

March 19, 2019 | 1:00 PM – 2:00 PM PT – Introduction on Migrating your Oracle/SQL Server Databases over to the Cloud using AWS’s New Workload Qualification Framework – Get an introduction on how AWS’s Workload Qualification Framework can help you with your application and database migrations.

March 20, 2019 | 1:00 PM – 2:00 PM PT – What’s New in MySQL 8 – Find out what’s new in MySQL 8, the latest major version of the world’s most popular open source database, and learn about AWS services for running highly available MySQL databases in the cloud.

March 21, 2019 | 9:00 AM – 10:00 AM PT – Building Scalable & Reliable Enterprise Apps with AWS Relational Databases – Learn how AWS Relational Databases can help you build scalable & reliable enterprise apps.

DevOps

March 19, 2019 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Corretto: A No-Cost Distribution of OpenJDK – Learn about Amazon Corretto, a no-cost, production-ready distribution of OpenJDK.

End-User Computing

March 28, 2019 | 9:00 AM – 10:00 AM PT – Fireside Chat: Enabling Today’s Workforce with Cloud Desktops – Learn about the tools and best practices for enabling today’s workforce with cloud desktops.

Enterprise

March 26, 2019 | 1:00 PM – 2:00 PM PT – Speed Your Cloud Computing Journey With the Customer Enablement Services of AWS: ProServe, AMS, and Support – Learn how to accelerate your cloud journey with AWS’s Customer Enablement Services.

IoT

March 26, 2019 | 9:00 AM – 10:00 AM PT – How to Deploy AWS IoT Greengrass Using Docker Containers and Ubuntu-snap – Learn how to bring cloud services to the edge using containerized microservices by deploying AWS IoT Greengrass to your device using Docker containers and Ubuntu snaps.

Machine Learning

March 18, 2019 | 1:00 PM – 2:00 PM PT – Orchestrate Machine Learning Workflows with Amazon SageMaker and AWS Step Functions – Learn about how ML workflows can be orchestrated with the rich features of Amazon SageMaker and AWS Step Functions.

March 21, 2019 | 1:00 PM – 2:00 PM PT – Extract Text and Data from Any Document with No Prior ML Experience – Learn how to extract text and data from any document with no prior machine learning experience.

March 22, 2019 | 11:00 AM – 12:00 PM PT – Build Forecasts and Individualized Recommendations with AI – Learn how you can build accurate forecasts and individualized recommendation systems using our new AI services, Amazon Forecast and Amazon Personalize.

Management Tools

March 29, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Inventory Management and Configuration Compliance in AWS – Learn how AWS helps with effective inventory management and configuration compliance management of your cloud resources.

Networking & Content Delivery

March 25, 2019 | 1:00 PM – 2:00 PM PT – Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield – Learn how to secure and accelerate your applications using AWS’s Edge services in this demo-driven tech talk.

Robotics

March 28, 2019 | 11:00 AM – 12:00 PM PT – Build a Robot Application with AWS RoboMaker – Learn how to improve your robotics application development lifecycle with AWS RoboMaker.

Security, Identity, & Compliance

March 27, 2019 | 11:00 AM – 12:00 PM PT – Remediating Amazon GuardDuty and AWS Security Hub Findings – Learn how to build and implement remediation automations for Amazon GuardDuty and AWS Security Hub.

March 27, 2019 | 1:00 PM – 2:00 PM PT – Scaling Accounts and Permissions Management – Learn how to scale your accounts and permissions management efficiently as you continue to move your workloads to AWS Cloud.

Serverless

March 18, 2019 | 11:00 AM – 12:00 PM PT – Testing and Deployment Best Practices for AWS Lambda-Based Applications – Learn best practices for testing and deploying AWS Lambda based applications.

Storage

March 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing a New Cost-Optimized Storage Class for Amazon EFS – Come learn how the new Amazon EFS storage class and Lifecycle Management automatically reduces cost by up to 85% for infrequently accessed files.

New – RISC-V Support in the FreeRTOS Kernel

FreeRTOS is a popular operating system designed for small, simple processors often known as microcontrollers. It is available under the MIT open source license and runs on many different Instruction Set Architectures (ISAs). Amazon FreeRTOS extends FreeRTOS with a collection of IoT-oriented libraries that provide additional networking and security features including support for Bluetooth Low Energy, Over-the-Air Updates, and Wi-Fi.

RISC-V is a free and open ISA that was designed to be simple, extensible, and easy to implement. The simplicity of the RISC-V model, coupled with its permissive BSD license, makes it ideal for a wide variety of processors, including low-cost microcontrollers that can be manufactured without incurring license costs. The RISC-V model can be implemented in many different ways, as you can see from the RISC-V cores page. Development tools, including simulators, compilers, and debuggers, are also available.

Today I am happy to announce that we are now providing RISC-V support in the FreeRTOS kernel. The kernel supports the RISC-V I profile (RV32I and RV64I) and can be extended to support any RISC-V microcontroller. It includes preconfigured examples for the OpenISA VEGAboard, QEMU emulator for SiFive’s HiFive board, and Antmicro’s Renode emulator for the Microchip M2GL025 Creative Board.

You now have a powerful new option for building smart devices that are more cost-effective than ever before!

Jeff;

Podcast #299: February 2019 Updates

Simon guides you through lots of new features, services, and capabilities that you can take advantage of, including the new AWS Backup service, more powerful GPU capabilities, new SLAs, and much, much more!

Chapters:

Service Level Agreements 0:17
Storage 0:57
Media Services 5:08
Developer Tools 6:17
Analytics 9:54
AI/ML 12:07
Database 14:47
Networking & Content Delivery 17:32
Compute 19:02
Solutions 21:57
Business Applications 23:38
AWS Cost Management 25:07
Migration & Transfer 25:39
Application Integration 26:07
Management & Governance 26:32
End User Computing 29:22

About the AWS Podcast

The AWS Podcast is a cloud platform podcast for developers, DevOps engineers, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives, and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Subscribe in your favorite podcast app.

Like the Podcast?

Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!