Category Archives: AWS

New – Low-Cost HDD Storage Option for Amazon FSx for Windows File Server

This post was originally published on this site

You can use Amazon FSx for Windows File Server to create file systems that can be accessed from a wide variety of sources and that use your existing Active Directory environment to authenticate users. Last year we added a ton of features including Self-Managed Directories, Native Multi-AZ File Systems, Support for SQL Server, Fine-Grained File Restoration, On-Premises Access, a Remote Management CLI, Data Deduplication, Programmatic File Share Configuration, Enforcement of In-Transit Encryption, and Storage Quotas.

New HDD Option
Today we are adding a new HDD (Hard Disk Drive) storage option to Amazon FSx for Windows File Server. While the existing SSD (Solid State Drive) storage option is designed for the highest performance latency-sensitive workloads like databases, media processing, and analytics, HDD storage is designed for a broad spectrum of workloads including home directories, departmental shares, and content management systems.

Single-AZ HDD storage is priced at $0.013 per GB-month and Multi-AZ HDD storage is priced at $0.025 per GB-month (this makes Amazon FSx for Windows File Server the lowest cost file storage for Windows applications and workloads in the cloud). Even better, if you use this option in conjunction with Data Deduplication and use 50% space savings as a reasonable reference point, you can achieve an effective cost of $0.0065 per GB-month for a single-AZ file system and $0.0125 per GB-month for a multi-AZ file system.
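For those who like to check the arithmetic, here's a quick back-of-the-envelope sketch in Python (the prices and the 50% deduplication savings figure are the ones quoted above, and may change over time):

```python
# Back-of-the-envelope cost estimate for HDD storage with Data Deduplication.
# Prices are the per GB-month figures quoted above; actual savings depend on your data.

def effective_cost(price_per_gb_month, dedup_savings):
    """Effective price per GB-month after deduplication space savings."""
    return price_per_gb_month * (1 - dedup_savings)

single_az = effective_cost(0.013, 0.50)  # Single-AZ HDD with 50% savings
multi_az = effective_cost(0.025, 0.50)   # Multi-AZ HDD with 50% savings

print(f"Single-AZ: ${single_az:.4f} per GB-month")
print(f"Multi-AZ:  ${multi_az:.4f} per GB-month")
```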

You can choose the HDD option when you create a new file system:

If you have existing SSD-based file systems, you can create new HDD-based file systems and then use AWS DataSync or robocopy to move the files. Backups taken from newly created SSD or HDD file systems can be restored to either type of storage, and with any desired level of throughput capacity.

Performance and Caching
The HDD storage option is designed to deliver 12 MB/second of throughput per TiB of storage, with the ability to handle bursts of up to 80 MB/second per TiB of storage. When you create your file system, you also specify the throughput capacity:

The amount of throughput that you provision also controls the size of a fast, in-memory cache for your file share; higher levels of throughput come with larger amounts of cache. As a result, Amazon FSx for Windows File Server file systems can be provisioned to deliver over 3 GB/s of network throughput and hundreds of thousands of network IOPS, even with HDD storage. This lets you create cost-effective file systems that can handle many different use cases, including those where a modest subset of a large amount of data is accessed frequently. To learn more, read Amazon FSx for Windows File Server Performance.
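To get a feel for the per-TiB numbers, here's a simplified model in Python. It uses only the disk throughput figures quoted above (12 MB/s baseline and 80 MB/s burst per TiB); actual file system performance also depends on the throughput capacity you provision and on caching:

```python
# Simplified model of HDD disk throughput for an FSx for Windows file system,
# based only on the per-TiB figures quoted above. Real performance also
# depends on provisioned throughput capacity and the in-memory cache.
BASELINE_MBPS_PER_TIB = 12
BURST_MBPS_PER_TIB = 80

def hdd_disk_throughput(storage_tib):
    """Return (baseline, burst) disk throughput in MB/s for a given storage size."""
    return (storage_tib * BASELINE_MBPS_PER_TIB,
            storage_tib * BURST_MBPS_PER_TIB)

baseline, burst = hdd_disk_throughput(5)  # e.g. a 5 TiB file system
print(f"Baseline: {baseline} MB/s, burst: {burst} MB/s")
```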

Now Available
HDD file systems are available in all regions where Amazon FSx for Windows File Server is available and you can start creating them today.

Jeff;

BuildforCOVID19 Global Online Hackathon

The COVID-19 Global Hackathon is an opportunity for builders to create software solutions that drive social impact with the aim of tackling some of the challenges related to the current coronavirus (COVID-19) pandemic.

We’re encouraging YOU – builders around the world – to #BuildforCOVID19 using technologies of your choice across a range of suggested themes and challenge areas, some of which have been sourced through health partners like the World Health Organization. The hackathon welcomes locally and globally focused solutions and is open to all developers.

AWS is partnering with technology companies like Facebook, Giphy, Microsoft, Pinterest, Slack, TikTok, Twitter, and WeChat to support this hackathon. We will be providing technical mentorship and credits for all participants.

Join BuildforCOVID19 and chat with fellow participants and AWS mentors in the COVID19 Global Hackathon Slack channel.

Jeff;

Working From Home? Here’s How AWS Can Help

Just a few weeks and so much has changed. Old ways of living, working, meeting, greeting, and communicating are gone for a while. Friendly handshakes and warm hugs are not healthy or socially acceptable at the moment.

My colleagues and I are aware that many people are dealing with changes in their work, school, and community environments. We’re taking measures to support our customers, communities, and employees to help them to adjust and deal with the situation, and will continue to do more.

Working from Home
With people in many cities and countries now being asked to work or learn from home, we believe that some of our services can help to make the transition from the office or the classroom to the home just a bit easier. Here’s an overview of our solutions:

Amazon WorkSpaces lets you launch virtual Windows and Linux desktops that can be accessed anywhere and from any device. These desktops can be used for remote work, remote training, and more.

Amazon WorkDocs makes it easy for you to collaborate with others, also from anywhere and on any device. You can create, edit, share, and review content, all stored centrally on AWS.

Amazon Chime supports online meetings with up to 100 participants (growing to 250 later this month), including chats and video calls, all from a single application.

Amazon Connect lets you set up a call or contact center in the cloud, with the ability to route incoming calls and messages to tens of thousands of agents. You can use this to provide emergency information or personalized customer service, while the agents are working from home.

Amazon AppStream lets you deliver desktop applications to any computer. You can deliver enterprise, educational, or telemedicine apps at scale, including those that make use of GPUs for computation or 3D rendering.

AWS Client VPN lets you set up secure connections to your AWS and on-premises networks from anywhere. You can give your employees, students, or researchers the ability to “dial in” (as we used to say) to your existing network.

Some of these services have special offers designed to make it easier for you to get started at no charge; others are already available to you under the AWS Free Tier. You can learn more on the home page for each service, and on our new Remote Working & Learning page.

You can sign up for and start using these services without talking to us, but we are here to help if you need more information or help choosing the right service(s) for your needs. Here are some points of contact:

If you are already an AWS customer, your Technical Account Manager (TAM) and Solutions Architect (SA) will be happy to help.

Some Useful Content
I am starting a collection of other AWS-related content that will help you use these services and work from home as efficiently as possible. Here’s what I have so far:

If you create something similar, share it with me and I’ll add it to my list.

Please Stay Tuned
This is, needless to say, a dynamic and unprecedented situation and we are all learning as we go.

I do want you to know that we’re doing our best to help. If there’s something else that you need, please do not hesitate to reach out. Go through your normal AWS channels first, but contact me if you are in a special situation and I’ll do my best!

Jeff;

Now Available: Amazon ElastiCache Global Datastore for Redis

In-memory data stores are widely used for application scalability, and developers have long appreciated their benefits for storing frequently accessed data, whether volatile or persistent. Systems like Redis help decouple databases and backends from incoming traffic, shedding most of the traffic that would have otherwise reached them, and reducing application latency for users.

Obviously, managing these servers is a critical task, and great care must be taken to keep them up and running no matter what. In a previous job, my team had to move a cluster of physical cache servers across hosting suites: one by one, they connected them to external batteries, unplugged external power, unracked them, and used an office trolley (!) to roll them to the other suite where they racked them again! It happened without any service interruption, but we all breathed a sigh of relief once this was done… Lose cache data on a high-traffic platform, and things get ugly. Fast. Fortunately, cloud infrastructure is more flexible! To help minimize service disruption should an incident occur, we have added many high-availability features to Amazon ElastiCache, our managed in-memory data store for Memcached and Redis: cluster mode, multi-AZ with automatic failover, etc.

As Redis is often used to serve low latency traffic to global users, customers have told us that they’d love to be able to replicate Amazon ElastiCache clusters across AWS regions. We listened to them, got to work, and today, we’re very happy to announce that this replication capability is now available for Redis clusters.

Introducing Amazon ElastiCache Global Datastore For Redis
In a nutshell, Amazon ElastiCache Global Datastore for Redis lets you replicate a cluster in one region to clusters in up to two other regions. Customers typically do this in order to:

  • Bring cached data closer to their users, in order to reduce network latency and improve application responsiveness.
  • Build disaster recovery capabilities, should a region be partially or totally unavailable.

Setting up a global datastore is extremely easy. First, you pick a cluster to be the primary cluster receiving writes from applications: this can either be a new cluster, or an existing cluster provided that it runs Redis 5.0.6 or above. Then, you add up to two secondary clusters in other regions which will receive updates from the primary.

This setup is available for all Redis configurations except single-node clusters; of course, you can convert a single-node cluster to a replication group, and then use it as a primary cluster.

Last but not least, clusters that are part of a global datastore can be modified and resized as usual (adding or removing nodes, changing node type, adding or removing shards, adding or removing replica nodes).

Let’s do a quick demo.

Replicating a Redis Cluster Across Regions
Let me show you how to build from scratch a three-cluster global datastore: the primary cluster will be located in the us-east-1 region, and the two secondary clusters will be located in the us-west-1 and us-west-2 regions. For the sake of simplicity, I’ll use the same default configuration for all clusters: three cache.r5.large nodes, multi-AZ, one shard.

Heading out to the AWS Console, I click on ‘Global Datastore’, and then on ‘Create’ to create my global datastore. I’m asked if I’d like to create a new cluster supporting the datastore, or if I’d rather use an existing cluster. I go for the former, and create a cluster named global-ds-1-useast1.

I click on ‘Next’, and fill in details for a secondary cluster hosted in the us-west-1 region. I unimaginatively name it global-ds-1-uswest1.

Then, I add another secondary cluster in the us-west-2 region, named global-ds-1-uswest2: I go to ‘Global Datastore’, click on ‘Add Region’, and fill in cluster details.

A little while later, all three clusters are up, and have been associated with the global datastore.

Using the redis-cli client running on an Amazon Elastic Compute Cloud (EC2) instance hosted in the us-east-1 region, I can quickly connect to the cluster endpoint and check that it’s indeed operational.

[us-east-1-instance] $ redis-cli -h $US_EAST_1_CLUSTER_READWRITE
> ping
PONG
> set paris france
OK
> set berlin germany
OK
> set london uk
OK
> keys *
1) "london"
2) "berlin"
3) "paris"
> get paris
"france"

This looks fine. Using an EC2 instance hosted in the us-west-1 region, let’s now check that the data we stored in the primary cluster has been replicated to the us-west-1 secondary cluster.

[us-west-1-instance] $ redis-cli -h $US_WEST_1_CLUSTER_READONLY
> keys *
1) "london"
2) "berlin"
3) "paris"
> get paris
"france"

Nice. Now let’s add some more data on the primary cluster…

> hset Parsifal composer "Richard Wagner" date 1882 acts 3 language "German"
> hset DonGiovanni composer "W.A. Mozart" date 1787 acts 2 language "Italian"
> hset Tosca composer "Giacomo Puccini" date 1900 acts 3 language "Italian"

…and check as quickly as possible on the secondary cluster.

> keys *
1) "DonGiovanni"
2) "london"
3) "berlin"
4) "Parsifal"
5) "Tosca"
6) "paris"
> hget Parsifal composer
"Richard Wagner"

That was fast: by the time I switched to the other terminal and ran the command, the new data was already there. That’s not really surprising, since typical network latency for cross-region traffic ranges from 60 to 200 milliseconds depending on the regions involved.

Now, what would happen if something went wrong with our primary cluster hosted in us-east-1? Well, we could easily promote one of the secondary clusters to full read/write capabilities.

For good measure, I also remove the us-east-1 cluster from the global datastore. Once this is complete, the global datastore looks like this.

Now, using my EC2 instance in the us-west-1 region, and connecting to the read/write endpoint of my cluster, I add more data…

[us-west-1-instance] $ redis-cli -h $US_WEST_1_CLUSTER_READWRITE
> hset Lohengrin composer "Richard Wagner" date 1850 acts 3 language "German"

… and check that it’s been replicated to the us-west-2 cluster.

[us-west-2-instance] $ redis-cli -h $US_WEST_2_CLUSTER_READONLY
> hgetall Lohengrin
1) "composer"
2) "Richard Wagner"
3) "date"
4) "1850"
5) "acts"
6) "3"
7) "language"
8) "German"

It’s all there. Global datastores make it really easy to replicate Amazon ElastiCache data across regions!

Now Available!
This new global datastore feature is available today in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). Please give it a try and send us feedback, either on the AWS forum for Amazon ElastiCache, or through your usual AWS support contacts.

Julien;

AWS Online Tech Talks for March 2020

Register for March 2020 webinars

Join us for live, online presentations led by AWS solutions architects and engineers. AWS Online Tech Talks cover a range of topics and expertise levels, and feature technical deep dives, demonstrations, customer examples, and live Q&A with AWS experts.

Note – All sessions are free and in Pacific Time. Can’t join us live? Access webinar recordings and slides on our On-Demand Portal.

Tech talks this month, by category, are:

March 23, 2020 | 9:00 AM – 10:00 AM PT – Build and Deploy Full-Stack Containerized Applications with the New Amazon ECS CLI – ​Learn to build, deploy and operate your containerized apps, all from the comfort of your terminal, with the new ECS CLI V2.​​​

March 23, 2020 | 11:00 AM – 12:00 PM PT – Understanding High Availability and Disaster Recovery Features for Amazon RDS for Oracle – ​Learn when and how to use the High Availability (HA) and Disaster Recovery (DR) features to meet recovery point objective (RPO) and recovery time objective (RTO) requirements for mission critical Oracle databases running on Amazon RDS.​

March 23, 2020 | 1:00 PM – 2:00 PM PT – Intro to AWS IAM Access Analyzer – ​Learn how you can use IAM Access Analyzer to identify resource policies that don’t comply with your organization’s security requirements.​

March 24, 2020 | 9:00 AM – 10:00 AM PT – Introducing HTTP APIs: A Better, Cheaper, Faster Way to Build APIs – ​Building APIs? Save up to 70% and reduce latency by 60% using HTTP APIs, now generally available from Amazon API Gateway.​

March 24, 2020 | 11:00 AM – 12:00 PM PT – Move to Managed Windows File Storage – ​Move to Managed: Learn about simplifying your Windows file shares with Amazon FSx for Windows File Server​.

March 24, 2020 | 1:00 PM – 2:00 PM PT – Advanced Architectures with AWS Transit Gateway – ​Learn how to use AWS Transit Gateway with centralized security appliances to improve network security and manageability​.

March 25, 2020 | 9:00 AM – 10:00 AM PT – How a U.S. Government Agency Accelerated Its Migration to GovCloud – ​Learn how a government agency mitigated risk and accelerated its migration of 200 VMs to GovCloud using CloudEndure Migration.​​

March 25, 2020 | 11:00 AM – 12:00 PM PT – Machine Learning-Powered Contact Center Analytics with Contact Lens for Amazon Connect – Every engagement your customer has with your contact center can provide your team with powerful insights. Register for this tech talk to learn how the ML-powered analytics in Contact Lens for Amazon Connect help you deeply understand your customer conversations.

March 25, 2020 | 1:00 PM – 2:00 PM PT – Monitoring Serverless Applications Using CloudWatch ServiceLens – ​In this tech talk, we will provide an introduction to CloudWatch ServiceLens and how it enables you to visualize and analyze the health, performance, and availability of your microservice based applications in a single place.

March 26, 2020 | 9:00 AM – 10:00 AM PT – End User Computing on AWS: Secure Solutions for the Agile Enterprise – ​Learn how tens of thousands of companies are accelerating their cloud migration and increasing agility with Amazon End-User Computing services.

March 26, 2020 | 11:00 AM – 12:00 PM PT – Bring the Power of Machine Learning to Your Fight Against Online Fraud – ​Learn how machine learning can help you automatically start detecting online fraud faster with Amazon Fraud Detector.​

March 26, 2020 | 1:00 PM – 2:00 PM PT – Bring-Your-Own-License (BYOL) on AWS Made Easier – ​Learn how to bring your own licenses (BYOL) to AWS for optimizing Total Cost of Ownership (TCO) with a new, simplified BYOL experience.​​

March 27, 2020 | 9:00 AM – 10:00 AM PT – Intro to Databases for Games: Hands-On with Amazon DynamoDB and Amazon Aurora – Making a game and need to integrate a database? We’ll show you how!​

March 27, 2020 | 11:00 AM – 12:00 PM PT – Accelerate Your Analytics by Migrating to a Cloud Data Warehouse – ​Learn how to accelerate your analytics by migrating to a cloud data warehouse​​​.​

March 30, 2020 | 9:00 AM – 10:00 AM PT – Optimizing Lambda Performance for Your Serverless Applications – ​Learn how to optimize the performance of your Lambda functions through new features like improved VPC networking and Provisioned Concurrency.​​​

March 30, 2020 | 11:00 AM – 12:00 PM PT – Protecting Your Web Application Using AWS Managed Rules for AWS WAF – ​Learn about the new AWS WAF experience and how you can leverage AWS Managed Rules to protect your web application.​

March 30, 2020 | 1:00 PM – 2:00 PM PT – Migrating ASP.NET applications to AWS with Windows Web Application Migration Assistant – ​Join this tech talk to learn how to migrate ASP.NET web applications into a fully managed AWS Elastic Beanstalk environment.​

March 31, 2020 | 9:00 AM – 10:00 AM PT – Forrester Explains the Total Economic Impact™ of Working with AWS Managed Services – ​Learn about the 243% ROI your organization could expect from an investment in AWS Managed Services from the Forrester Total Economic Impact™ study.​

March 31, 2020 | 11:00 AM – 12:00 PM PT – Getting Started with AWS IoT Greengrass Solution Accelerators for Edge Computing – ​Learn how you can use AWS IoT Greengrass Solution Accelerators to quickly build solutions for industrial, consumer, and commercial use cases.

March 31, 2020 | 1:00 PM – 2:00 PM PT – A Customer’s Perspective on Building an Event-Triggered System-of-Record Application with Amazon QLDB – The UK’s Driver and Vehicle Licensing Agency (DVLA) shares its perspective on building an event-triggered system-of-record application with Amazon QLDB.

April 1, 2020 | 9:00 AM – 10:00 AM PT – Containers for Game Development – ​Learn the basic architectures for utilizing containers and how to operate your environment, helping you seamlessly scale and save cost while building your game backend on AWS.​

April 1, 2020 | 11:00 AM – 12:00 PM PT – Friends Don’t Let Friends Manage Data Streaming Infrastructure – ​Eliminate the headaches of managing your streaming data infrastructure with fully managed AWS services for Apache Kafka and Apache Flink.​

April 2, 2020 | 9:00 AM – 10:00 AM PT – How to Train and Tune Your Models with Amazon SageMaker – ​Learn how Amazon SageMaker provides everything you need to tune and debug models and execute training experiments​.

April 2, 2020 | 11:00 AM – 12:00 PM PT – Enterprise Transformation: Migration and Modernization Demystified – ​Learn how to lead your business transformation while executing cloud migrations and modernizations​.

April 2, 2020 | 1:00 PM – 2:00 PM PT – How to Build Scalable Web Based Applications for Less with Amazon EC2 Spot Instances – ​Learn how you can scale and optimize web based applications running on Amazon EC2 for cost and performance, all while handling peak demand.​

April 3, 2020 | 9:00 AM – 10:00 AM PT – Migrating File Data to AWS: Demo & Technical Guidance – ​In this tech talk, we’ll touch upon AWS storage destination services in brief, and demo how you can use AWS DataSync and AWS Storage Gateway to easily and securely move your file data into AWS for file services, processing, analytics, machine learning, and archiving, as well as providing on-premises access where needed.​

April 3, 2020 | 11:00 AM – 12:00 PM PT – Building Real-Time Audio and Video Calling in Your Applications with the Amazon Chime SDK – Learn how to quickly build real-time communication capabilities in your own applications for engaging customer experiences.

Materialize your Amazon Redshift Views to Speed Up Query Execution

At AWS, we take pride in building state-of-the-art virtualization technologies to simplify the management of and access to cloud services such as networks, computing resources, or object storage.

In a Relational Database Management System (RDBMS), a view is virtualization applied to tables: it is a virtual table representing the result of a database query. Views are frequently used when designing a schema: to present a subset of the data, to summarize data (such as aggregated or transformed data), or to simplify access to data spread across multiple tables. When using a data warehouse, such as Amazon Redshift, a view simplifies access to aggregated data from multiple tables for Business Intelligence (BI) tools such as Amazon QuickSight or Tableau.

Views provide ease of use and flexibility, but they do not speed up data access: the database system must evaluate the underlying query each time your application accesses the view. When performance is key, data engineers use CREATE TABLE AS (CTAS) as an alternative. A CTAS is a table defined by a query. The query is executed at table creation time, and your applications can use the result like a normal table, with the downside that the CTAS data set is not refreshed when the underlying data are updated. Furthermore, the CTAS definition is not stored in the database system: it is not possible to tell whether a table was created by a CTAS, which makes it difficult to track which CTAS tables need to be refreshed and which are current.

Today, we are introducing materialized views for Amazon Redshift. A materialized view (MV) is a database object containing the data of a query. A materialized view is like a cache for your view. Instead of building and computing the data set at run-time, the materialized view pre-computes, stores and optimizes data access at the time you create it. Data are ready and available to your queries just like regular table data.

Using materialized views in your analytics queries can speed up the query execution time by orders of magnitude because the query defining the materialized view is already executed and the data is already available to the database system.

Materialized views are especially useful for queries that are predictable and repeated over and over. Instead of performing resource-intensive queries on large tables, applications can query the pre-computed data stored in the materialized view.

When the data in the base tables change, you refresh the materialized view by issuing the Redshift SQL statement REFRESH MATERIALIZED VIEW. After issuing a refresh statement, your materialized view contains the same data as would have been returned by a regular view. Refreshes can be incremental or full (recompute). When possible, Redshift incrementally refreshes data that changed in the base tables since the materialized view was last refreshed.

Let’s see how it works. I create a sample schema to store sales information: each sales transaction, plus details about the store where the sale took place.

To view the total amount of sales per city, I create a materialized view with the CREATE MATERIALIZED VIEW SQL statement. I connect to the Redshift console, select the Query Editor, and type the following statement to create a materialized view (city_sales) joining records from two tables and aggregating sales amount (sum(sales.amount)) per city (group by city):

CREATE MATERIALIZED VIEW city_sales AS (
  SELECT st.city, SUM(sa.amount) as total_sales
  FROM sales sa, store st
  WHERE sa.store_id = st.id
  GROUP BY st.city
);

The resulting schema is below:

Now I can query the materialized view just like a regular view or table, and issue statements like “SELECT city, total_sales FROM city_sales” to get the results below. The join between the two tables and the aggregate (SUM and GROUP BY) are already computed, resulting in significantly less data to scan.
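For intuition, the work being precomputed is equivalent to the following join-and-aggregate, sketched here in plain Python (not Redshift code; the sample rows are hypothetical, chosen so the Paris total matches the demo):

```python
from collections import defaultdict

# Hypothetical sample rows mirroring the store and sales tables.
stores = [{"id": 1, "city": "Paris"}, {"id": 2, "city": "Berlin"}]
sales = [
    {"store_id": 1, "amount": 250},
    {"store_id": 1, "amount": 440},
    {"store_id": 2, "amount": 300},
]

def compute_city_sales(sales, stores):
    """Plain-Python equivalent of: SELECT city, SUM(amount) ... GROUP BY city."""
    city_of = {s["id"]: s["city"] for s in stores}  # the join (store_id -> city)
    totals = defaultdict(int)
    for row in sales:                               # the aggregate (SUM ... GROUP BY)
        totals[city_of[row["store_id"]]] += row["amount"]
    return dict(totals)

print(compute_city_sales(sales, stores))  # {'Paris': 690, 'Berlin': 300}
```

A materialized view stores the output of this computation, so queries against it never have to redo the join or the aggregation.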

When the data in the underlying base tables change, the materialized view does not automatically reflect those changes. The data stored in the materialized view can be refreshed on demand, with the latest changes from the base tables, using the SQL REFRESH MATERIALIZED VIEW command. Let’s see a practical example:

-- let's add a row in the sales base table
INSERT INTO sales (id, item, store_id, customer_id, amount)
VALUES(8, 'Gaming PC Super ProXXL', 1, 1, 3000);

SELECT city, total_sales FROM city_sales WHERE city = 'Paris';

city |total_sales|
-----|-----------|
Paris|        690|

-- the new sale is not taken into account!

-- let's refresh the materialized view
REFRESH MATERIALIZED VIEW city_sales;

SELECT city, total_sales FROM city_sales WHERE city = 'Paris';

city |total_sales|
-----|-----------|
Paris|       3690|

-- now the view has the latest sales data

The full code for this very simple demo is available as a gist.
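To build some intuition for what an incremental refresh does, here is a toy simulation in plain Python (a conceptual sketch, not how Redshift actually implements it): instead of recomputing every total from scratch, only the base-table rows inserted since the last refresh are applied to the stored aggregates.

```python
from collections import defaultdict

class ToyMaterializedView:
    """Toy model of an incrementally refreshed SUM ... GROUP BY view."""
    def __init__(self, base_rows, city_of):
        self.city_of = city_of          # store_id -> city (the join side)
        self.totals = defaultdict(int)  # the materialized data
        self.refreshed_upto = 0         # base-table rows already applied
        self.refresh(base_rows)

    def refresh(self, base_rows):
        # Apply only rows added since the last refresh (the "delta").
        for row in base_rows[self.refreshed_upto:]:
            self.totals[self.city_of[row["store_id"]]] += row["amount"]
        self.refreshed_upto = len(base_rows)

city_of = {1: "Paris"}
sales = [{"store_id": 1, "amount": 690}]
mv = ToyMaterializedView(sales, city_of)
print(mv.totals["Paris"])                      # 690

sales.append({"store_id": 1, "amount": 3000})  # a new sale lands in the base table
print(mv.totals["Paris"])                      # still 690: the view is stale
mv.refresh(sales)                              # REFRESH MATERIALIZED VIEW
print(mv.totals["Paris"])                      # 3690
```

The same staleness and refresh behavior shows up in the SQL demo above: the view returns 690 until the refresh, and 3690 afterwards.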

Materialized views are available today in all AWS Regions, at no additional cost. There is nothing to change in your existing clusters; you can start creating materialized views right away.

Happy building!

Bottlerocket – Open Source OS for Container Hosting

It is safe to say that our industry has decided that containers are now the chosen way to package and scale applications. Our customers are making great use of Amazon ECS and Amazon Elastic Kubernetes Service, with over 80% of all cloud-based containers running on AWS.

Container-based environments lend themselves to easy scale-out, and customers can run host environments that encompass hundreds or thousands of instances. At this scale, several challenges arise with the host operating system. For example:

Security – Installing extra packages simply to satisfy dependencies can increase the attack surface.

Updates – Traditional package-based update systems and mechanisms are complex and error prone, and can have issues with dependencies.

Overhead – Extra, unnecessary packages consume disk space and compute cycles, and also increase startup time.

Drift – Inconsistent packages and configurations can damage the integrity of a cluster over time.

Introducing Bottlerocket
Today I would like to tell you about Bottlerocket, a new Linux-based open source operating system that we designed and optimized specifically for use as a container host.

Bottlerocket reflects much of what we have learned over the years. It includes only the packages that are needed to make it a great container host, and integrates with existing container orchestrators. It supports Docker images and images that conform to the Open Container Initiative (OCI) image format.

Instead of a package update system, Bottlerocket uses a simple, image-based model that allows for a rapid & complete rollback if necessary. This removes opportunities for conflicts and breakage, and makes it easier for you to apply fleet-wide updates with confidence using orchestrators such as EKS.
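One way to picture the image-based model is as two on-disk image slots with an active pointer that flips atomically. This is a conceptual sketch, not Bottlerocket's actual update code:

```python
class ImageSlots:
    """Conceptual two-slot (A/B) image update model: write the new image to the
    inactive slot, flip the active pointer, and flip it back to roll back."""
    def __init__(self, initial_image):
        self.slots = {"A": initial_image, "B": None}
        self.active = "A"

    def _inactive(self):
        return "B" if self.active == "A" else "A"

    def update(self, new_image):
        target = self._inactive()
        self.slots[target] = new_image  # whole image at once, no per-package state
        self.active = target            # single atomic flip, applied on next boot

    def rollback(self):
        self.active = self._inactive()  # the previous image is still intact

host = ImageSlots("os-image-1.0")
host.update("os-image-1.1")
print(host.slots[host.active])  # os-image-1.1
host.rollback()                 # complete rollback to the prior image
print(host.slots[host.active])  # os-image-1.0
```

Because the update is a single image swap rather than many per-package operations, there is no partially-updated state to untangle, which is what makes fleet-wide updates and rollbacks predictable.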

In addition to the minimal package set, Bottlerocket uses a file system that is primarily read-only, and that is integrity-checked at boot time via dm-verity. SSH access is discouraged, and is available only as part of a separate admin container that you can enable on an as-needed basis and then use for troubleshooting purposes.

Try it Out
We’re launching a public preview of Bottlerocket today. You can follow the steps in QUICKSTART to set up an EKS cluster, and you can take a look at the GitHub repo. Try it out, report bugs, send pull requests, and let us know what you think!

Jeff;

Host Your Apps with AWS Amplify Console from the AWS Amplify CLI

Have you tried out AWS Amplify and AWS Amplify Console yet? In my opinion, they provide one of the fastest ways to get a new web application from idea to prototype on AWS. So what are they? AWS Amplify is an opinionated framework for building modern applications, with a toolchain for easily adding services like authentication (via Amazon Cognito) or storage (via Amazon Simple Storage Service (S3)) or GraphQL APIs, all via a command-line interface. AWS Amplify Console makes continuous deployment and hosting for your modern web apps easy. It supports hosting the frontend and backend assets for single page app (SPA) frameworks including React, Angular, Vue.js, Ionic, and Ember. It also supports static site generators like Gatsby, Eleventy, Hugo, VuePress, and Jekyll.

With today’s launch, hosting options available from the AWS Amplify CLI now include Amplify Console in addition to S3 and Amazon CloudFront. By using Amplify Console, you can take advantage of features like continuous deployment, instant cache invalidation, custom redirects, and simple configuration of custom domains.

Initializing an Amplify App

Let’s take a look at a quick example. We’ll be deploying a static site demo of Amazon Transcribe. I’ve already got the AWS Command Line Interface (CLI) installed, as well as the AWS Amplify CLI. I’ve forked and then cloned the sample code to my local machine. In the following gif, you can see the initialization process for an AWS Amplify app. (I sped things up a little for the gif. It might take a few seconds for your app to be created.)

Terminal session showing the "amplify init" workflow

Now that I’ve got my app initialized, I can add additional services. Let’s add some hosting via AWS Amplify Console. After choosing Amplify Console for hosting, I can pick manual deployment or continuous deployment using a git-based workflow.

Continuous Deployment

First, I’m going to set up continuous deployment so that changes to our git repo will trigger a build and deploy.

A screenshot of a terminal session adding Amplify Console to an Amplify project

The workflow for configuring continuous deployment requires a quick browser session. First, I select our git provider. The forked repo is on GitHub, so I need to authorize Amplify Console to use my GitHub account.

Screenshot of git provider selection

Once a provider is authorized, I choose the repo and branch to watch for changes.

Screenshot of repo and branch selection

AWS Amplify Console auto-detected the correct build settings, based on the contents of package.json.

Screenshot of build settings
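The detected settings map onto an amplify.yml build specification. For a typical npm-based single page app it looks roughly like this (the values here are illustrative, not copied from the sample repo):

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci          # install dependencies from package-lock.json
    build:
      commands:
        - npm run build   # run the build script from package.json
  artifacts:
    baseDirectory: build  # directory containing the built site
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/* # cache dependencies between builds
```

You can accept the auto-detected version of this file or edit it in the console if your project builds differently.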

Once I’ve confirmed the settings, the initial build and deploy will start. Then any changes to the selected git branch will result in additional builds and deploys. Now I need to finish the workflow in the CLI, and I need the ARN of the new Amplify Console app to do that. In the browser, under App Settings and then General, I copy the ARN, paste it into my terminal, and check the status.

A screenshot of a terminal window where the app ARN is being set

A quick check of the URL in my browser confirms that the app has been successfully deployed.

A screenshot of the sample app we deployed in this post

Manual Deploys

Manual deploys with Amplify Console also provide a bunch of useful features. The CLI can now manage front-end environments, making it easy to add a test or dev environment. It’s also easy to add URL redirects and rewrites, or add a username/password via HTTP Basic Auth.

Configuring manual deploys is straightforward. Just set your environment name. When it’s time to deploy, run amplify publish, and the build scripts defined during the initialization of the project will run. The generated artifact will then be uploaded automatically.

A screenshot of a terminal window where manual deploys are configured

With manual deployments, you can set up multiple frontend environments (e.g. dev and prod) directly from the CLI. To create a new dev environment, run amplify env add (name it dev) and amplify publish. This will create a second frontend environment in Amplify Console. To view all your frontend and backend environments, run amplify console from the CLI to open your Amplify Console app.
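As a terminal sketch (again, prompt text may differ by CLI version):

```shell
$ amplify env add          # create the new environment
? Enter a name for the environment: dev
$ amplify publish          # build and deploy the dev frontend
$ amplify console          # open the Amplify Console app in a browser
```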

Ever since using AWS Amplify Console for the first time a few weeks ago, it has become my go-to way to deploy applications, especially static sites. I’m excited to see the simplicity of hosting with AWS Amplify Console extended to the Amplify CLI, and I hope you are too. Happy building!

— Brandon

AWS Named as a Leader in Gartner’s Magic Quadrant for Cloud AI Developer Services

This post was originally published on this site

Last week I spoke to executives from a large AWS customer and had an opportunity to share aspects of the Amazon culture with them. I was able to talk to them about our Leadership Principles and our Working Backwards model. They asked, as customers often do, about where we see the industry in the next 5 or 10 years. This is a hard question to answer, because about 90% of our product roadmap is driven by requests from our customers. I honestly don’t know where the future will take us, but I do know that it will help our customers to meet their goals and to deliver on their vision.

Magic Quadrant for Cloud AI Developer Services
It is always good to see that our hard work continues to delight our customers, and it is also good to be recognized by Gartner and other leading analysts. Today I am happy to share that AWS has secured the top-right corner of Gartner’s Magic Quadrant for Cloud AI Developer Services, earning highest placement for Ability to Execute and furthest to the right for Completeness of Vision:

You can read the full report to learn more (registration is required).

Keep the Cat Out
As a simple yet powerful example of the AWS AI & ML services in action, check out Ben Hamm’s DeepLens-powered cat door:

AWS AI & ML Services
Building on top of the AWS compute, storage, networking, security, database, and analytics services, our lineup of AI and ML offerings is designed to serve newcomers, experts, and everyone in between. Let’s take a look at a few of them:

Amazon SageMaker – Gives developers and data scientists the power to build, train, test, tune, deploy, and manage machine learning models. SageMaker provides a complete set of machine learning components designed to reduce effort, lower costs, and get models into production as quickly as possible:

Amazon Kendra – An accurate and easy-to-use enterprise search service that is powered by machine learning. Kendra makes content from multiple, disparate sources searchable with powerful natural language queries:

Amazon CodeGuru – This service provides automated code reviews and makes recommendations that can improve application performance by identifying the most expensive lines of code. It has been trained on hundreds of thousands of internal Amazon projects and on over 10,000 open source projects on GitHub.

Amazon Textract – This service extracts text and data from scanned documents, going beyond traditional OCR by identifying the contents of fields in forms and information stored in tables. Powered by machine learning, Textract can handle virtually any type of document without the need for manual effort or custom code:

Amazon Personalize – Based on the same technology that is used at Amazon.com, this service provides real-time personalization and recommendations. To learn more, read Amazon Personalize – Real-Time Personalization and Recommendation for Everyone.

Time to Learn
If you are ready to learn more about AI and ML, check out the AWS Ramp-Up Guide for Machine Learning:

You should also take a look at our Classroom Training in Machine Learning and our library of Digital Training in Machine Learning.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Get to know the latest AWS Heroes, including the first IoT Heroes!

This post was originally published on this site

The AWS Heroes program recognizes and honors individuals who are prominent leaders in local communities, known for sharing AWS knowledge and facilitating peer-to-peer learning in a variety of ways. The AWS Heroes program grows just as the enthusiasm for all things AWS grows in communities around the world, and there are now AWS Heroes in 35 countries.

Today we are thrilled to introduce the newest AWS Heroes, including the first Heroes in Bosnia, Indonesia, Nigeria, and Sweden, as well as the first IoT Heroes:

Joshua Arvin Lat – National Capital Region, Philippines

Machine Learning Hero Joshua Arvin Lat is the CTO of Complete Business Online, Insites, and Jepto. He has achieved 9 AWS Certifications, and contributed as a certification Subject Matter Expert to help update the AWS Certified Machine Learning – Specialty exam during the Item Development Workshops. He has been one of the core leaders of the AWS User Group Philippines for the past 4-5 years and also shares knowledge at several international AWS conferences, including AWS Summit Singapore – TechFest and AWS Community Day – Melbourne.

Nofar Asselman – Tel Aviv, Israel

Community Hero Nofar Asselman is the Head of Business Development at Epsagon – an automated tracing platform for cloud microservices – where she initiated Epsagon’s partnership with AWS. Nofar is a key figure in the AWS Partner community and founded the first-ever AWS Partners Meetup Group. She is passionate about her work with AWS cloud communities, organizes meetups regularly, and participates in conferences, events, and user groups. She loves sharing insights and best practices about her AWS experiences in blog posts on Medium.

Filipe Barretto – Rio de Janeiro, Brazil

Community Hero Filipe Barretto is one of the founders of Solvimm, an AWS Consulting Partner since 2013. He organizes the AWS User Group in Rio de Janeiro, Brazil, promoting talks, hands-on labs and study groups for AWS Certifications. He also frequently speaks at universities, introducing students to Cloud Computing and AWS services. He actively participates in other AWS User Groups in Brazil, working to build a strong and bigger community in the country, and, when possible, with AWS User Groups in other Latin American countries.

Stephen Borsay – Portland, USA

IoT Hero Stephen Borsay is a degreed computer engineer and electronics hobbyist with a passion for making IoT and embedded systems understandable and enjoyable to enthusiasts of all experience levels. Stephen authors community IoT projects and develops online teaching materials focused on AWS IoT to solve problems for both professional developers and casual IoT enthusiasts. He founded the Digital Design meetup group in Portland, Oregon, which holds regular meetings focused on hands-on IoT training. He regularly posts IoT tutorials for Hackster.io, and you can find his online AWS IoT training courses on YouTube and Udemy.

Ernest Chiang – Taipei City, Taiwan

Community Hero Ernest Chiang, also known as Deng-Wei Chiang, started his AWS journey in 2008. He has been passionate about bridging AWS technology with business through AWS-related presentations at local meetups and conferences, and through blog posts. Under his leadership as Director of Product & Technology Integration at PAFERS Tech, the company has adopted many AWS services across the AWS Global and China regions since 2011.

Don Coleman – Philadelphia, USA

IoT Hero Don Coleman is the Chief Innovation Officer at Chariot Solutions, where he builds software that leverages a wide range of AWS services. His experience building IoT projects enables him to share knowledge and lead workshops on solving IoT challenges using AWS. He also enjoys speaking at conferences about devices and technology, discussing things like NFC, Bluetooth Low Energy, LoRaWAN, and AWS IoT.

Ken Collins – Norfolk, USA

Serverless Hero Ken Collins is a Staff Engineer at Custom Ink, focusing on DevOps and their Ecommerce Platform with an emphasis on emerging opportunities. With a love for the Ruby programming language and serverless, Ken continues his open source Rails work by focusing on using Rails with AWS Lambda using a Ruby gem called Lamby. Recently he wrote an ActiveRecord adapter to take advantage of Aurora Serverless with Rails on Lambda.

Ewere Diagboya – Lagos, Nigeria

Community Hero Ewere Diagboya started building desktop and web apps with PHP and VB while still in junior high school. He began his cloud journey with AWS at Terragon Group, where he grew into the DevOps and Infrastructure Lead. He later spoke at the first-ever AWS Nigeria Meetup and was the only Nigerian representative at the AWS Johannesburg Loft in 2019. He is the co-founder of DevOps Nigeria, shares videos on YouTube showcasing AWS technologies, and has a blog on Medium called MyCloudSeries.

Dzenan Dzevlan – Mostar, Bosnia and Herzegovina

Community Hero Dzenan Dzevlan is a Cloud and DevOps expert at TN-TECH and has been an AWS user since 2011. In 2016, Dzenan founded AWS User Group Bosnia and helped it grow to three user groups with more than 600 members. This AWS community is now the largest IT community in Bosnia. As part of his activities, he runs online meetups, a YouTube channel, and the sqlheisenberg.com blog (in Bosnian) to help people in the Balkans region achieve AWS certification and start working with AWS.

Ben Ellerby – London, United Kingdom

Serverless Hero Ben Ellerby is VP of Engineering for Theodo and a dedicated member of the Serverless community. He is the editor of Serverless Transformation: a blog, newsletter & podcast sharing tools, techniques and use cases for all things Serverless. Ben speaks about serverless at conferences and events around the world. In addition to speaking, he co-organizes and supports serverless events including the Serverless User Group in London and ServerlessDays London.

Gunnar Grosch – Karlstad, Sweden

Serverless Hero Gunnar Grosch is an evangelist at Opsio based in Sweden. With a focus on building reliable and robust serverless applications, Gunnar has been one of the driving forces in creating techniques and tools for using chaos engineering in serverless. He regularly and passionately speaks at events on these and other serverless topics around the world. Gunnar is also deeply involved in the community by organizing AWS User Groups and Serverless Meetups in the Nordics, as well as being an organizer of ServerlessDays Stockholm and AWS Community Day Nordics. A variety of his contributions can be found on his personal website.

Scott Liao – New Taipei City, Taiwan

Community Hero Scott Liao is a DevOps Engineer and Manager at 104 Corp. His work is predominantly focused on data center and AWS cloud solution architecture. He is interested in building hyper-scale DevOps environments for containers using AWS CloudFormation, CDK, Terraform, and various open-source tools. Scott speaks regularly at AWS-focused events including AWS User Groups, Cloud Edge Summit Taipei, DevOpsDays Taipei, and other conferences. He also shares his expertise through writing, producing content for blogs and IT magazines in Taiwan.

Austin Loveless – Denver, USA

Community Hero Austin Loveless is a Cloud Architect at Photobucket and Founder of the AWSMeetupGroup. He travels around the country, teaching people of all skill levels about AWS Cloud Technologies. He live-streams all his events on YouTube. He partners with large software companies (AWS, MongoDB, Confluent, Galvanize, Flatiron School) to help grow the meetup group and teach more people. Austin also routinely blogs on Medium under the handle AWSMeetupGroup.

Efi Merdler-Kravitz – Tel Aviv, Israel

Serverless Hero Efi Merdler-Kravitz is Director of Engineering at Lumigo.io, a monitoring and debugging platform for AWS serverless applications built on a 100% serverless backend. As an early and enthusiastic adopter of serverless technology, Efi has been racking up the air miles as a frequent speaker at serverless events around the globe, and writes regularly on the topic for the Lumigo blog. Efi began his journey into serverless as head of engineering at Coneuron, building its entire stack on Lambda, S3, API Gateway, and Firebase, while perfecting the art of helping developers transition to a serverless mindset.

Dhaval Nagar – Surat, India

Serverless Hero Dhaval Nagar is the founder and director of cloud consulting firm AppGambit, based in India. He thinks that serverless is not just another method but a big paradigm shift in modern computing that will have a major impact on future technologies. Dhaval has been building on AWS since early 2015; coincidentally, the first service that he picked up on AWS was Lambda. He has 11 AWS Certifications, is a regular speaker at AWS user groups and conferences, and frequently writes on his Medium blog. He runs the Surat AWS User Group and Serverless Group and has organized over 20 meetups since the group started in 2018.

Tomasz Ptak – London, United Kingdom

Machine Learning Hero Tomasz Ptak is a software engineer with a focus on tackling technical debt, transforming legacy products into maintainable projects, and delivering a developer experience that enables teams to achieve their objectives. He was a participant in the AWS DeepRacer League, a winner of the Virtual League’s September race, and a 2019 season finalist. He joined the AWS DeepRacer Community on day one and became one of its leaders. He runs the community blog and knowledge base, and maintains a DeepRacer log analysis tool.

Mike Rahmati – Sydney, Australia

Community Hero Mike Rahmati is Co-Founder and CTO of Cloud Conformity (acquired by Trend Micro), a leader in public cloud infrastructure security and compliance monitoring, where he helps organizations design and build cloud solutions that are Well-Architected at all times. As an active community member, Mike has designed thousands of best practices for AWS, and contributed to a number of open source AWS projects including Cloud Conformity Auto Remediation using AWS Serverless.

Namrata Shah (Nam) – New York, USA

Community Hero Nam Shah is a dynamic, passionate technical leader based in the New York/New Jersey area, focused on custom application development and cloud architecture. She has over twenty years of professional information technology consulting experience delivering complex systems. Nam loves to share her technical knowledge and frequently publishes AWS videos on her YouTube channel and occasionally posts AWS courses on Udemy.

Yan So – Seoul, South Korea

Machine Learning Hero Yan So is a senior data scientist with broad experience using big data and machine learning to solve business problems. He co-founded the Data Science Group of the AWS Korea User Group (AWSKRUG) and has hosted over 30 meetups and AI/ML hands-on labs since 2017. He regularly speaks on topics such as Amazon SageMaker Ground Truth at AWS Community Day, Zigzag’s data analytics platform at the AWS Summit Seoul, and a recommendation engine on Amazon Personalize at AWS Retail & CPG Day 2019.

Steve Teo – Singapore

Community Hero Steve Teo has been serving the AWS User Group Singapore community, which has over 5,000 members, since 2017. Having benefited from meetups at the start of his career, he makes it his personal mission to pay it forward and build the community so that others can reap the benefits and contribute back. The community in Singapore has grown to hold monthly meetups and now includes sub-chapters such as the Enterprise User Group, as well as Cloud Seeders, a member-centric cloud learning community for women, built by women. Steve also speaks at AWS APAC community conferences and shares his slides on Speakerdeck.

Hein Tibosch – Bali, Indonesia

IoT Hero Hein Tibosch is a skilled software developer specializing in embedded applications who has worked independently at his craft for over 17 years. Hein is exemplary in his community contributions to FreeRTOS, as an active committer to the FreeRTOS project and the most active customer on the FreeRTOS Community Forums. Over the last 8 years, Hein’s contributions to FreeRTOS have made a significant impact on the successful adoption of FreeRTOS by embedded developers of all technical levels and backgrounds.

You can learn all about the AWS Heroes and connect with a Hero near you by visiting the AWS Hero website.

Ross;