Category Archives: AWS

Amazon EKS Price Reduction

This post was originally published on this site

Since it launched 18 months ago, Amazon Elastic Kubernetes Service has released a staggering 62 features, 14 regions, and 4 Kubernetes versions. While developers, like me, are loving the speed of innovation and the incredible new features, today, we have an announcement that is going to bring a smile to the people in your finance department. We are reducing the price by 50%.

As of January 21st, the price is reduced from $0.20 per hour to $0.10 per hour for each Amazon EKS cluster. This new price applies to all new and existing Amazon EKS clusters.
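At on-demand rates, the per-cluster impact is easy to estimate; a quick sketch, assuming a 730-hour month:

```shell
# Approximate monthly cost per EKS cluster, before and after the reduction
old=$(awk 'BEGIN { printf "%.2f", 0.20 * 730 }')
new=$(awk 'BEGIN { printf "%.2f", 0.10 * 730 }')
echo "before: \$${old}/month  after: \$${new}/month"
# before: $146.00/month  after: $73.00/month
```

That works out to roughly $73 of savings per cluster, per month.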

Incredible Momentum
Last year, I wrote about a few of those 62 Amazon EKS features, such as Amazon EKS on AWS Fargate, EKS Windows Containers support, and Managed Node Groups for Amazon Elastic Kubernetes Service. It has been a pleasure to hear customers in the comments, in meetings, and at events tell me that features like these are enabling them to run different kinds of applications more reliably and more efficiently than ever before. I have also enjoyed watching customer feedback come in via the public containers roadmap and seeing the Amazon EKS team deliver requested features at a constant rate.

Customers are Flourishing on Amazon Elastic Kubernetes Service
Amazon EKS is used by big and small customers to run everything from simple websites to mission-critical systems and large scale machine learning jobs. Below are three examples from the many customers that are seeing tremendous value from Amazon EKS.

Snap runs 100% on K8s in the cloud and, in the last year, moved multiple parts of their app, including the core messaging architecture, to Amazon EKS as part of their move from a monolithic service-oriented architecture to microservices. In their words, “Undifferentiated Heavy Lifting is work that we have to do that doesn’t directly benefit our customers. It’s just work. Amazon EKS frees us up to worry about delivering customer value and allows developers without operational experience to innovate without having to know where their code runs.” You can learn more about Snap’s journey in this video recorded at the AWS New York Summit.

HSBC runs mission-critical, highly secure banking infrastructure on Amazon EKS and joined us on stage at AWS re:Invent 2019 to talk about why they bank on Amazon EKS.

Advalo is a predictive marketing platform company that reaches customers during the most influential moments of their purchase decisions. Edouard Devouge, Lead SRE at Advalo, says: “We are running our applications on Amazon EKS, launching up to 2,000 nodes per day and running up to 75,000 pods for microservices and Machine Learning apps, allowing us to detect purchase intent through individualized Marketing in the website and shops of our customers.”

With today’s announcement, all the benefits that these customers describe are now available at a great new price, ensuring that AWS remains the best place in the world to run your Kubernetes clusters.

Amazon Elastic Kubernetes Service Resources
Here are some resources to help you to learn how to make great use of Amazon EKS in your organization:

Effective Immediately
The 50% price reduction is available in all regions effective immediately, and you do not have to do anything to take advantage of the new price. From today onwards, you will be charged the new lower price for Amazon Elastic Kubernetes Service. So sit back, relax, and enjoy the savings.

— Martin

CloudEndure Highly Automated Disaster Recovery – 80% Price Reduction

AWS acquired CloudEndure last year. After the acquisition we began working with our new colleagues to integrate their products into the AWS product portfolio.

CloudEndure Disaster Recovery is designed to help you minimize downtime and data loss. It continuously replicates the contents of your on-premises, virtual, or cloud-based systems to a low-cost staging area in the AWS region of your choice, within the confines of your AWS account:

The block-level replication encompasses essentially every aspect of the protected system including the operating system, configuration files, databases, applications, and data files. CloudEndure Disaster Recovery can replicate any database or application that runs on supported versions of Linux or Windows, and is commonly used with Oracle and SQL Server, as well as enterprise applications such as SAP. If you do an AWS-to-AWS replication, the AWS environment within a specified VPC is replicated; this includes the VPC itself, subnets, security groups, routes, ACLs, Internet Gateways, and other items.

Here are some of the most popular and interesting use cases for CloudEndure Disaster Recovery:

On-Premises to Cloud Disaster Recovery – This model moves your secondary data center to the AWS Cloud without downtime or performance impact. You can improve your reliability, availability, and security without having to invest in duplicate hardware, networking, or software.

Cross-Region Disaster Recovery – If your application is already on AWS, you can add an additional layer of cost-effective protection and improve your business continuity by setting up cross-region disaster recovery. You can set up continuous replication between regions or Availability Zones and meet stringent RPO (Recovery Point Objective) or RTO (Recovery Time Objective) requirements.

Cross-Cloud Disaster Recovery – If you run workloads on other clouds, you can increase your overall resilience and meet compliance requirements by using AWS as your DR site. CloudEndure Disaster Recovery will replicate and recover your workloads, including automatic conversion of your source machines so that they boot and run natively on AWS.

80% Price Reduction
Recovery is quick and robust, yet cost-effective. In fact, we are reducing the price for CloudEndure Disaster Recovery by about 80% today, making it more cost-effective than ever: $0.028 per hour, or about $20 per month per server.

If you have tried to implement a DR solution in the traditional way, you know that it requires a costly set of duplicate IT resources (storage, compute, and networking) and software licenses. By replicating your workloads into a low-cost staging area in your preferred AWS Region, CloudEndure Disaster Recovery reduces compute costs by 95% and eliminates the need to pay for duplicate OS and third-party application licenses.

To learn more, watch the Disaster Recovery to AWS Demo Video:

After that, be sure to visit the new CloudEndure Disaster Recovery page!

Jeff;

In the Works – AWS Osaka Local Region Expansion to Full Region

Today, we are excited to announce that, due to high customer demand for additional services in Osaka, the Osaka Local Region will be expanded into a full AWS Region with three Availability Zones by early 2021. Like all AWS Regions, each Availability Zone will be isolated with its own power source, cooling system, and physical security, and be located far enough apart to significantly reduce the risk of a single event impacting availability, yet near enough to provide low latency for high availability applications.

We are constantly expanding our infrastructure to provide customers with sufficient capacity to grow and the necessary tools to architect a variety of system designs for higher availability and robustness. AWS now operates 22 regions and 69 Availability Zones globally.

In March 2011, we launched the AWS Tokyo Region as our fifth AWS Region with two Availability Zones. After that, we launched a third Tokyo Availability Zone in 2012 and a fourth in 2018.

In February 2018, we launched the Osaka Local Region as a new region construct that comprises an isolated, fault-tolerant infrastructure design contained in a single data center and complements an existing AWS Region. Located 400km from the Tokyo Region, the Osaka Local Region has supported customers with applications that require in-country, geographic diversity for disaster recovery purposes that could not be served with the Tokyo Region alone.

Osaka Local Region in the Future
When launched, the Osaka Region will provide the same broad range of services as other AWS Regions and will be available to all AWS customers. Customers will be able to deploy multi-region systems within Japan, and users located in western Japan will enjoy even lower latency than they have today.

If you are interested in how AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest global network performance, check out our Global Infrastructure site, which explains and visualizes it all.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS Regions as soon as I have it, so stay tuned! We are working on 4 more regions (Indonesia, Italy, South Africa, and Spain), and 13 more Availability Zones globally.

– Kame, Sr. Product Marketing Manager / Sr. Evangelist Amazon Web Services Japan

 

AWS Backup: EC2 Instances, EFS Single File Restore, and Cross-Region Backup

Since we launched AWS Backup last year, over 20,000 AWS customers have been using it to protect petabytes of data every day. AWS Backup is a fully managed, centralized backup service that simplifies the management of backups for your Amazon Elastic Block Store (EBS) volumes, your databases (Amazon Relational Database Service (RDS) or Amazon DynamoDB), AWS Storage Gateway, and your Amazon Elastic File System (EFS) filesystems.

We continuously listen to your feedback, and today we are bringing additional enterprise data capabilities to AWS Backup:

Here are the details.

EC2 Instance Backup
Backing up and restoring an EC2 instance involves more than just the instance’s individual EBS volumes. To restore an instance, you need to restore all of its EBS volumes and also recreate an identical instance: instance type, VPC, Security Group, IAM role, etc.

Today, we are adding the ability to perform backup and recovery tasks on whole EC2 instances. When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI that stores all parameters from the original EC2 instance except for two (Elastic Inference Accelerator and user data script).

Once the backup is complete, you can easily restore the full instance using the console, the API, or the AWS Command Line Interface (CLI). The API and CLI let you restore and edit all parameters; the console lets you restore and edit 16 parameters from your original EC2 instance.
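The on-demand backup itself can also be started from the CLI with `start-backup-job`; a sketch, where the vault name, instance ARN, account ID, and IAM role are placeholders to adapt to your own account:

```shell
# Start an on-demand backup of a whole EC2 instance (placeholder identifiers)
aws backup start-backup-job \
    --backup-vault-name Default \
    --resource-arn arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234efgh5678 \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
```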

To get started, I open the Backup console and choose either a backup plan or an on-demand backup. For this example, I choose On-Demand backup. I select EC2 from the list of services and select the ID of the instance I want to back up.

Note that you need to stop write activity and flush filesystem caches if you are using RAID volumes or any other technique that groups your volumes.

After a while, I see the backup available in my vault. To restore the backup, I select the backup and click Restore.

Before actually starting the restore, I can see the EC2 configuration options that have been backed up and I have the opportunity to modify any value listed before re-creating the instance.

After a few seconds, my restored instance starts and is available in the EC2 console.

Single File Restore for EFS
AWS Backup customers often want to restore an accidentally deleted or corrupted file or folder. Before today, you would need to perform a full restore of the entire filesystem, which made it difficult to meet strict Recovery Time Objectives (RTOs).

Starting today, you can restore a single file or directory from your Elastic File System filesystem. You select the backup, type the relative path of the file or directory to restore, and AWS Backup will create a new Elastic File System recovery directory at the root of your filesystem, preserving the original path hierarchy. You can restore your files to an existing filesystem or to a new filesystem.

To restore a single file from an Elastic File System backup, I choose the backup from the vault and I click Restore. On the Restore backup window, I choose between restoring the full filesystem or individual items. I enter the path relative to the root of the filesystem (not including the mount point) for the files and directories I want to restore. I also choose if I want to restore the items in the existing filesystem or in a new filesystem. Finally, I click Restore backup to start the restore job.
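The console steps above can also be expressed with the CLI; a hedged sketch using `start-restore-job` (the recovery point ARN and IAM role are placeholders, and the metadata keys shown follow my understanding of the EFS restore metadata format, so verify them against your own recovery point's metadata):

```shell
# Restore a single file from an EFS backup into the existing file system
# (ARNs are placeholders; metadata keys may need adjusting)
aws backup start-restore-job \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE-1234 \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole \
    --metadata '{"file-system-id":"fs-d1188b58","newFileSystem":"false","ItemsToRestore":"[\"/data/datafile\"]"}'
```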

Cross-region Backup
Many enterprise AWS customers have strict business continuity policies requiring a minimum distance between two copies of their backup. To help enterprises meet this requirement, we’re adding the capability to copy a backup to another Region, either on-demand when you need it or automatically, as part of a backup plan.

To initiate an on-demand copy of my backup to another Region, I use the console to browse my vaults, select the backup I want to copy, and click Copy. I choose the destination Region and the destination vault, and keep the default values for the other options. I click Copy at the bottom of the page.
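The same on-demand copy can be scripted with `start-copy-job`; a sketch with placeholder ARNs and Regions:

```shell
# Copy a recovery point from us-east-1 to a vault in eu-west-1 (placeholders)
aws backup start-copy-job \
    --recovery-point-arn arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE-1234 \
    --source-backup-vault-name Default \
    --destination-backup-vault-arn arn:aws:backup:eu-west-1:123456789012:backup-vault:Default \
    --iam-role-arn arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole
```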

The time to make the copy depends on the size of the backup. I monitor the status on the new Copy Jobs tab of the Job section:

Once the copy is finished, I switch my console to the target Region, I see the backup in the target vault and I can initiate a restore operation, just like usual.

I also can use the AWS Command Line Interface (CLI) or one of our AWS SDKs to automate or to integrate any of these processes in other applications.

Pricing
Pricing depends on the type of backup:

  • There is no additional charge for EC2 instance backup; you are charged for the storage used by all EBS volumes attached to your instance.
  • For Elastic File System single file restore, you are charged a fixed fee per restore plus the number of bytes you restore.
  • For cross-region backup, you are charged for the cross-region data transfer bandwidth and for the new warm storage space in the target Region.

These three new features are available today in all commercial AWS Regions where AWS Backup is available (you can verify services availability per Region on this web page).

As is usual with any backup system, it is best practice to regularly perform backups and backup testing. Restorable backups are the best kind of backups.

— seb

New for Amazon EFS – IAM Authorization and Access Points

When building or migrating applications, we often need to share data across multiple compute nodes. Many applications use file APIs and Amazon Elastic File System (EFS) makes it easy to use those applications on AWS, providing a scalable, fully managed Network File System (NFS) that you can access from other AWS services and on-premises resources.

EFS scales on demand from zero to petabytes with no disruptions, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity. By using it, you get strong file system consistency across 3 Availability Zones. EFS performance scales with the amount of data stored, with the option to provision the throughput you need.

Last year, the EFS team focused on optimizing costs with the introduction of the EFS Infrequent Access (IA) storage class, with storage prices up to 92% lower compared to EFS Standard. You can quickly start reducing your costs by setting a Lifecycle Management policy that moves files that haven’t been accessed for a certain number of days to EFS IA.

Today, we are introducing two new features that simplify managing access, sharing data sets, and protecting your EFS file systems:

  • IAM authentication and authorization for NFS Clients, to identify clients and use IAM policies to manage client-specific permissions.
  • EFS access points, to enforce the use of an operating system user and group, optionally restricting access to a directory in the file system.

Using IAM Authentication and Authorization
In the EFS console, when creating or updating an EFS file system, I can now set up a file system policy. This is an IAM resource policy, similar to bucket policies for Amazon Simple Storage Service (S3), and can be used, for example, to disable root access, enforce read-only access, or enforce in-transit encryption for all clients.

Identity-based policies, such as those used by IAM users, groups, or roles, can override these default permissions. These new features work on top of EFS’s current network-based access using security groups.

I select the option to disable root access by default, click on Set policy, and then select the JSON tab. Here, I can review the policy generated based on my settings, or create a more advanced policy, for example to grant permissions to a different AWS account or a specific IAM role.
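With root access disabled and the other defaults kept, the generated file system policy looks roughly like this (a sketch; the exact JSON the console produces may differ). Note that it grants ClientMount and ClientWrite to every principal but deliberately omits ClientRootAccess:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "*" },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}
```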

The following actions can be used in IAM policies to manage access permissions for NFS clients:

  • ClientMount to give permission to mount a file system with read-only access
  • ClientWrite to be able to write to the file system
  • ClientRootAccess to access files as root

I look at the policy JSON. I see that I can mount and read (ClientMount) the file system, and I can write (ClientWrite) in the file system, but since I selected the option to disable root access, I don’t have ClientRootAccess permissions.

Similarly, I can attach a policy to an IAM user or role to give specific permissions. For example, I create an IAM role to give full access to this file system (including root access) with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientRootAccess"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}

I start an Amazon Elastic Compute Cloud (EC2) instance in the same Amazon Virtual Private Cloud as the EFS file system, using Amazon Linux 2 and a security group that can connect to the file system. The EC2 instance is using the IAM role I just created.

The open-source efs-utils package is required to connect a client using IAM authentication, in-transit encryption, or both. Normally, on Amazon Linux 2, I would install efs-utils using yum, but the new version is still rolling out, so I am building the package from source by following the instructions in this repository. I’ll update this blog post when the updated package is available.

To mount the EFS file system, I use the mount command. To leverage in-transit encryption, I add the tls option. I am not using IAM authentication here, so the permissions I specified for the “*” principal in my file system policy apply to this connection.

$ sudo mkdir /mnt/shared
$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared

My file system policy disables root access by default, so I can’t create a new file as root.

$ sudo touch /mnt/shared/newfile
touch: cannot touch ‘/mnt/shared/newfile’: Permission denied

I now use IAM authentication by adding the iam option to the mount command (tls is required for IAM authentication to work).

$ sudo mount -t efs -o iam,tls fs-d1188b58 /mnt/shared

When I use this mount option, the IAM role from my EC2 instance profile is used to connect, along with the permissions attached to that role, including root access:

$ sudo touch /mnt/shared/newfile
$ ls -la /mnt/shared/newfile
-rw-r--r-- 1 root root 0 Jan  8 09:52 /mnt/shared/newfile

Here I used the IAM role to have root access. Other common use cases are to enforce in-transit encryption (using the aws:SecureTransport condition key) or create different roles for clients needing write or read-only access.
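For example, to enforce in-transit encryption, a statement in the file system policy can require TLS with a condition like this (a fragment to embed in a policy statement, in the same style as the access point condition shown later in this post):

```json
"Condition": {
    "Bool": {
        "aws:SecureTransport": "true"
    }
}
```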

EFS IAM permission checks are logged by AWS CloudTrail to audit client access to your file system. For example, when a client mounts a file system, a NewClientConnection event is shown in my CloudTrail console.

Using EFS Access Points
EFS access points allow you to easily manage application access to NFS environments, specifying a POSIX user and group to use when accessing the file system, and restricting access to a directory within a file system.

Use cases that can benefit from EFS access points include:

  • Container-based environments, where developers build and deploy their own containers (you can also see this blog post for using EFS for container storage).
  • Data science applications, that require read-only access to production data.
  • Sharing a specific directory in your file system with other AWS accounts.

In the EFS console, I create two access points for my file system, each using a different POSIX user and group:

  • /data – where I am sharing some data that must be read and updated by multiple clients.
  • /config – where I share some configuration files that must not be updated by clients using the /data access point.

I used file permissions 755 for both access points. That means that I am giving read and execute access to everyone and write access to the owner of the directory only. Permissions here are used when creating the directory. Within the directory, permissions are under full control of the user.
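The same access points can be created from the CLI; a sketch for /data, using the file system ID from this post (the access point ID returned will differ from the one I use below):

```shell
# Create the /data access point with POSIX user/group 1001 and mode 755
aws efs create-access-point \
    --file-system-id fs-d1188b58 \
    --posix-user Uid=1001,Gid=1001 \
    --root-directory 'Path=/data,CreationInfo={OwnerUid=1001,OwnerGid=1001,Permissions=755}'
```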

I mount the /data access point adding the accesspoint option to the mount command:

$ sudo mount -t efs -o tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can now create a file: I am no longer acting as root, but automatically using the user and group ID of the access point:

$ sudo touch /mnt/shared/datafile
$ ls -la /mnt/shared/datafile
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/datafile

I mount the file system again, without specifying an access point. I see that datafile was created in the /data directory, as expected considering the access point configuration. When using the access point, I was unable to access any files that were in the root or other directories of my EFS file system.

$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared/
$ ls -la /mnt/shared/data/datafile 
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/data/datafile

To use IAM authentication with access points, I add the iam option:

$ sudo mount -t efs -o iam,tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can restrict an IAM role to a specific access point by adding a Condition on the AccessPointArn to the policy:

"Condition": {
    "StringEquals": {
        "elasticfilesystem:AccessPointArn" : "arn:aws:elasticfilesystem:us-east-2:123412341234:access-point/fsap-0204ce67a2208742e"
    }
}

Using IAM authentication and EFS access points together simplifies securely sharing data for container-based architectures and multi-tenant applications, because it ensures that every application automatically gets the right operating system user and group assigned to it, optionally limiting access to a specific directory, enforcing in-transit encryption, or giving read-only access to the file system.

Available Now
IAM authorization for NFS clients and EFS access points are available in all regions where EFS is offered, as described in the AWS Region Table. There is no additional cost for using them. You can learn more about using EFS with IAM and access points in the documentation.

It’s now easier to create scalable architectures that share data and configurations. Let me know what you are going to use these new features for!

Danilo

Urgent & Important – Rotate Your Amazon RDS, Aurora, and DocumentDB Certificates

You may have already received an email or seen a console notification, but I don’t want you to be taken by surprise!

Rotate Now
If you are using Amazon Aurora, Amazon Relational Database Service (RDS), or Amazon DocumentDB and are taking advantage of SSL/TLS certificate validation when you connect to your database instances, you need to download & install a fresh certificate, rotate the certificate authority (CA) for the instances, and then reboot the instances.

If you are not using SSL/TLS connections or certificate validation, you do not need to make any updates, but I recommend that you do so in order to be ready in case you decide to use SSL/TLS connections in the future. In this case, you can use a new CLI option that rotates and stages the new certificates but avoids a restart.

The new certificate (CA-2019) is available as part of a certificate bundle that also includes the old certificate (CA-2015) so that you can make a smooth transition without getting into a chicken and egg situation.

What’s Happening?
The SSL/TLS certificates for RDS, Aurora, and DocumentDB expire and are replaced every five years as part of our standard maintenance and security discipline. Here are some important dates to know:

September 19, 2019 – The CA-2019 certificates were made available.

January 14, 2020 – Instances created on or after this date will have the new (CA-2019) certificates. You can temporarily revert to the old certificates if necessary.

February 5 to March 5, 2020 – RDS will stage (install but not activate) new certificates on existing instances. Restarting the instance will activate the certificate.

March 5, 2020 – The CA-2015 certificates will expire. Applications that use certificate validation but have not been updated will lose connectivity.

How to Rotate
Earlier this month I created an Amazon RDS for MySQL database instance and set it aside in preparation for this blog post. The RDS console lets me know that I need to perform a Certificate update.

I visit Using SSL/TLS to Encrypt a Connection to a DB Instance and download a new certificate. If my database client knows how to handle certificate chains, I can download the root certificate and use it for all regions. If not, I download a certificate that is specific to the region where my database instance resides. I decide to download a bundle that contains the old and new root certificates:

Next, I update my client applications to use the new certificates. This process is specific to each app and each database client library, so I don’t have any details to share.

Once the client application has been updated, I change the certificate authority (CA) to rds-ca-2019. I can Modify the instance in the console, and select the new CA:

I can also do this via the CLI:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019

The change will take effect during the next maintenance window. I can also apply it immediately:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019 --apply-immediately

After my instance has been rebooted (either immediately or during the maintenance window), I test my application to ensure that it continues to work as expected.

If I am not using SSL and want to avoid a restart, I use --no-certificate-rotation-restart:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
  --ca-certificate-identifier rds-ca-2019 --no-certificate-rotation-restart

The database engine will pick up the new certificate during the next planned or unplanned restart.

I can also use the RDS ModifyDBInstance API function or a CloudFormation template to change the certificate authority.
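To check which of my instances are still using the old CA, I can query the CACertificateIdentifier field; an illustrative query:

```shell
# List each instance with its current certificate authority (e.g. rds-ca-2015)
aws rds describe-db-instances \
    --query 'DBInstances[].[DBInstanceIdentifier,CACertificateIdentifier]' \
    --output table
```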

Once again, all of this must be completed by March 5, 2020 or your applications may be unable to connect to your database instance using SSL or TLS.

Things to Know
Here are a couple of important things to know:

Amazon Aurora Serverless – AWS Certificate Manager (ACM) is used to manage certificate rotations for this database engine, and no action is necessary.

Regions – Rotation is needed for database instances in all commercial AWS regions except Asia Pacific (Hong Kong), Middle East (Bahrain), and China (Ningxia).

Cluster Scaling – If you add more nodes to an existing cluster, the new nodes will receive the CA-2019 certificate if one or more of the existing nodes already have it. Otherwise, the CA-2015 certificate will be used.

Learning More
Here are some links to additional information:

Jeff;

 

Amazon at CES 2020 – Connectivity & Mobility

The Consumer Electronics Show (CES) starts tomorrow. Attendees will have the opportunity to learn about the latest and greatest developments in many areas including 5G, IoT, Advertising, Automotive, Blockchain, Health & Wellness, Home & Family, Immersive Entertainment, Product Design & Manufacturing, Robotics & Machine Intelligence, and Sports.

Amazon at CES
If you will be traveling to Las Vegas to attend CES, I would like to invite you to visit the Amazon Automotive exhibit in the Las Vegas Convention Center. Come to booth 5616 to learn about our work to help auto manufacturers and developers create the next generation of software-defined vehicles:

As you might know, this industry is working to reinvent itself, with manufacturers expanding from designing & building vehicles to a more expansive vision that encompasses multiple forms of mobility.

At the booth, you will find multiple demos that are designed to show you what is possible when you mashup vehicles, connectivity, software, apps, sensors, and machine learning in new ways.

Cadillac Customer Journey – This is an interactive, immersive demo of a data-driven shopping experience to engage customers at every touchpoint. Powered by ZeroLight and running on AWS, the demo uses 3D imagery that is generated in real time on GPU-equipped EC2 instances.

Future Mobility – This demo uses the Alexa Auto SDK and several AWS Machine Learning services to create an interactive in-vehicle assistant. It stores driver profiles in the cloud, and uses Amazon Rekognition to load the proper profile for the driver. Machine learning is used to detect repeated behaviors, such as finding the nearest coffee shop each morning.

Rivian Alexa – This full-vehicle demo showcases the deep Alexa Auto SDK integration that Rivian is using to control core vehicle functions on their upcoming R1T Electric Truck.

Smart Home / Garage – This demo ensemble showcases several of the Alexa home-to-car and car-to-home integrations, and features multiple Amazon & Alexa offerings including Amazon Pay, Fire TV, and Ring.

Karma Automotive / Blackberry QNX – Built on AWS IoT and machine learning inference models developed using Amazon SageMaker, this demo includes two use cases. The first one shows how data from Karma‘s fleet of electric vehicles is used to predict the battery state of health. The second one shows how cloud-trained models run at the edge (in the vehicle) to detect gestures that control vehicle functions.

Accenture Personalized Connected Vehicle Adventure – This demo shows how identity and personalization can be used to create unique transportation experiences. The journeys are customized using learned preferences and contextual data gathered in real time, powered by Amazon Personalize.

Accenture Data Monetization – This demo tackles data monetization while preserving customer privacy. Built around a data management reference architecture that uses Amazon QLDB and AWS Data Exchange, the demo enables consent and value exchange, with a focus on insights, predictions, and recommendations.

Denso Connected Vehicle Reference System – CVRS is an intelligent, end-to-end mobility service built on the AWS Connected Vehicle Solution. It uses a layered architecture that combines edge and cloud components, to allow mobility service providers to build innovative products without starting from scratch.

WeRide – This company runs a fleet of autonomous vehicles in China. The ML training to support the autonomy runs on AWS, as does the overall fleet management system. The demo shows how the AWS cloud supports their connected & autonomous fleet.

Dell EMC / National Instruments – This jointly developed demo focuses on the Hardware-in-Loop phase of autonomous vehicle development, where actual vehicle hardware running in real-world conditions is used.

Unity – This demo showcases a Software-in-Loop autonomous vehicle simulation built with Unity. An accurate, photorealistic representation of Berlin, Germany is used, with the ability to dynamically vary parameters such as time, weather, and scenery. Using the Unity Simulation framework and AWS, 100 permutations of each scene are generated and used as training data in parallel.

Get in Touch
If you are interested in learning more about any of these demos or if you are ready to build a connected or autonomous vehicle solution of your own, please feel free to contact us.

Jeff;

Celebrating AWS Community Leaders at re:Invent 2019

This post was originally published on this site

Even though cloud computing is a global phenomenon, location still matters when it comes to community. For example, customers regularly tell me that they are impressed by the scale, enthusiasm, and geographic reach of the AWS User Group Community. We are deeply appreciative of the work that our user group and community leaders do.

Each year, leaders of local communities travel to re:Invent in order to attend a series of events designed to meet their unique needs. They attend an orientation session, learn about We Power Tech (“Building a future of tech that is diverse, inclusive and accessible”), watch the keynotes, and participate in training sessions as part of a half-day AWS Community Leader workshop. After re:Invent wraps up, they return to their homes and use their new knowledge and skills to do an even better job of creating and sharing technical content and of nurturing their communities.

Community Leadership Grants
In order to make it possible for more community leaders to attend and benefit from re:Invent, we launched a grant program in 2018. The grants covered registration, housing, and flights and were awarded to technologists from emerging markets and underrepresented communities.

Several of the recipients went on to become AWS Heroes, and we decided to expand the program for 2019. We chose 17 recipients from 14 countries across 5 continents, with an eye toward recognizing those who are working to build inclusive AWS communities. Additionally, We Power Tech launched a separate Grant Program with Project Alloy to help support underrepresented technologists in the first five years of their careers to attend re:Invent by covering conference registration, hotel, and airfare. In total, there were 102 grantees from 16 countries.

The following attendees received Community Leadership Grants and were able to attend re:Invent:

Ahmed Samir – Riyadh, KSA (LinkedIn, Twitter) – Ahmed is a co-organizer of the AWS Riyadh User Group. He is well known for his social media accounts, in which he translates all AWS announcements into Arabic.

Veronique Robitaille – Valencia, Spain (LinkedIn, Twitter) – Veronique is an SA-certified cloud consultant in Valencia, Spain. She is the co-organizer of the AWS User Group in Valencia, and also translates AWS content into Spanish.

Dzenana Dzevlan – Mostar, Bosnia (LinkedIn) – Dzenana is an electrical engineering masters student at the University of Sarajevo, and a co-organizer of the AWS User Group in Bosnia-Herzegovina.

Magdalena Zawada – Warsaw, Poland (LinkedIn) – Magdalena is a cloud consultant and co-organizer of the AWS User Group Poland.

Hiromi Ito – Osaka, Japan (Twitter) – Hiromi runs IT communities for women in Japan and elsewhere in Asia, and also contributes to JAWS-UG in Kansai. She is the founder of the Asian Woman’s Association Meetup in Singapore.

Lena Taupier – Columbus, Ohio, USA (LinkedIn) – Lena co-organizes the Columbus AWS Meetup, was on the organizing team for the 2018 and 2019 Midwest / Chicago AWS Community Days, and delivered a lightning talk on “Building Diverse User Communities” at re:Invent.

Victor Perez – Panama City, Panama (LinkedIn) – Victor founded the AWS Panama User Group after deciding that he wanted to make AWS Cloud the new normal for the country. He also created the AWS User Group Caracas.

Hiro Nishimura – New York, USA (LinkedIn, Twitter) – Hiro is an educator at heart. She founded AWS Newbies to teach beginners about AWS, and worked with LinkedIn to create video courses to introduce cloud computing to non-engineers.

Sridevi Murugayen – Chennai, India (LinkedIn) – Sridevi is a core leader of AWS Community Day Chennai. She managed a diversity session at the Community Day, and is a regular presenter and participant in the AWS Chennai User Group.

Sukanya Mandal – Mumbai, India (LinkedIn) – Sukanya leads the PyData community in Mumbai, and also contributes to the AWS User Group there. She talked about “ML for IoT at the Edge” at the AWS Developer Lounge in the re:Invent 2019 Expo Hall.

Seohyun Yoon – Seoul, Korea (LinkedIn) – Seohyun is a founding member of the student division of the AWS Korea Usergroup (AUSG), one of the youngest active AWS advocates in Korea, and served as a judge for the re:Invent 2019 Non-Profit Hackathon for Good. Check out her hands-on AWS lab guides!

Farah Clara Shinta Rachmady – Jakarta, Indonesia (LinkedIn, Twitter) – Farah nurtures AWS Indonesia and other technical communities in Indonesia, and also organizes large-scale events & community days.

Sandy Rodríguez – Mexico City, Mexico (LinkedIn) – Sandy co-organized the AWS Mexico City User Group and focuses on making events great for attendees. She delivered a 20-minute session in the AWS Village Theater at re:Invent 2019. Her work is critical to the growth of the AWS community in Mexico.

Vanessa Alves dos Santos – São Paulo, Brazil (LinkedIn) – Vanessa is a powerful AWS advocate within her community. She helped to plan AWS Community Days Brazil and the AWS User Group in São Paulo.

The following attendees were chosen for grants, but were not able to attend due to issues with travel visas:

Ayeni Oluwakemi – Lagos, Nigeria (LinkedIn, Twitter) – Ayeni is the founder of the AWS User Group in Lagos, Nigeria. She is the organizer of AWSome Day in Nigeria, and writes for the Cloud Guru Blog.

Ewere Diagboya – Lagos, Nigeria (LinkedIn, Twitter) – Ewere is one of our most active advocates in Nigeria. He is very active in the DevOps and cloud computing community as an educator, and also organizes the DevOps Nigeria Meetup.

Minh Ha – Hanoi, Vietnam – Minh grows the AWS User Group Vietnam by organizing in-person meetups and online events. She co-organized AWS Community Day 2018, runs hackathons, and co-organized SheCodes Vietnam.

Jeff;

 

Alejandra’s Top 5 Favorite re:Invent🎉 Launches of 2019

This post was originally published on this site


While re:Invent 2019 may be well behind us, I’m still feeling elated and curious about several of the launches announced that week. Is it just me, or did some of the new feature announcements seem to bring us closer to the sci-fi futures we envisioned as kids (AWS Wavelength, anyone? And don’t get me started on Amazon Braket)?

The future might very well be here. Can you handle it?

If you can, then I’m pumped to tell you why the following 5 launches of re:Invent 2019 got me the most excited.

[CAVEAT: Out of consideration for your sanity, dear reader, we try to keep these posts to a maximum word length. After all, I wouldn’t want you to fall asleep at your keyboard during work hours! Sadly, this also means I limited myself to only sharing a set number of the cool, new launches that happened. If you’re curious to read about ALL OF THEM, you can find them here: 2019 re:Invent Announcement Summary Page.]

 

1. Amazon Braket: explore Quantum Computing

Backstory of why I picked this one…

First of all, let’s address the 🐘elephant in the room🐘 and admit that 99.9% of us don’t really know what Quantum Computing is. But we want to! Because it sounds so cool and futuristic. So let’s give it a shot…

According to The Internet, a quantum computer is any computational device that uses the quantum mechanical phenomena of superposition and entanglement to perform data operations. The basic principle of quantum computation is that quantum properties can be used to represent data and perform operations on it. Fun fact: in a “normal” computer (like your laptop), that information is stored in bits. In a quantum computer, it is stored in qubits (quantum bits).
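If qubits feel abstract, a tiny NumPy sketch (just plain linear algebra, not the Braket SDK) shows what superposition looks like: applying a Hadamard gate to the |0⟩ state yields a qubit where both measurement outcomes are equally likely.

```python
import numpy as np

# The basis state |0> as a vector (|1> would be [0, 1]).
ket0 = np.array([1.0, 0.0])

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0  # superposition: (|0> + |1>) / sqrt(2)

# Measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)  # both outcomes equally likely: [0.5 0.5]
```

A classical bit is always exactly 0 or 1; the qubit above is genuinely in both states until measured, which is what quantum algorithms exploit.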

Quantum Computing is still in its infancy. Are you wondering where it will go?

What got launched?

Amazon Braket is a new service that makes it easy for scientists, researchers, and developers to build, test, and run quantum computing algorithms.

Sounds cool, but what does that actually mean?

The way it works is that Amazon Braket provides a development environment that lets you design your own quantum algorithms from scratch or choose from a set of pre-built algorithms. Once you’ve picked your algorithm, Amazon Braket provides a simulation service that helps you troubleshoot and verify your implementation. When you’re ready, you can run your algorithm on a real quantum computer from one of our quantum hardware providers (D-Wave, IonQ, and Rigetti, among others).

So what are you waiting for? Go explore the future of quantum computing with Amazon Braket!

👉🏽Don’t forget to check out the docs: aws.amazon.com/braket
⚠Sign up to get notified when it’s released.

 

2. AWS Wavelength: ultra-low latency apps for 5G

Backstory of why I picked this one…

When I was a kid in the 80s, we were still in the early stages of the first wireless technology.

1G.

It had a lot of similarities to an old AM/FM radio. And just like with radio stations, cell phone calls ended up receiving interference from other callers ALL THE TIME. Sometimes, calls became staticky if you were too far from a cell tower.

But it’s no longer the 80s, my dear readers. It’s 2019 and we’re all the way up to 5G now.

[note: When talking about 1, 2, 3, 4, or 5G, the G stands for generation.]

What got launched?

AWS Wavelength combines the high bandwidth and single-digit millisecond latency of 5G networks with AWS compute and storage services, enabling developers to build new kinds of apps.

Phew, that was quite the brain🧠dump🗑, wasn’t it?

Sounds cool, but what does that actually mean?

Every generation of wireless technology has been defined by the speed of data transmission. So just how fast do we hope 5G will be? Well, to give you a baseline: our fastest current 4G mobile networks offer about 45 Mbps (megabits per second). But Qualcomm believes 5G could achieve browsing and download speeds about 10 to 20 times faster!

What makes this speed improvement possible is that 5G technology makes better use of the radio spectrum. It enables more devices to access the mobile internet at the same time, so it’s much better at handling thousands of devices simultaneously, without the congestion seen in previous wireless generations.

At this speed, access to low-latency services is really important. Why? Low-latency services are optimized to process a high volume of data messages with minimal delay. This is exactly what you want if your business requires near real-time access to rapidly changing data.

Enter AWS Wavelength.

AWS Wavelength brings AWS services to the edge of the 5G network. It allows you to build the next generation of ultra-low latency apps using familiar AWS services, APIs, and tools. To deploy your app to 5G, simply extend your Amazon Virtual Private Cloud (VPC) to a Wavelength Zone and then create AWS resources like Amazon Elastic Compute Cloud (EC2) instances and Amazon Elastic Block Store (EBS) volumes.
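As a rough sketch of that deployment flow (the VPC ID and zone name below are hypothetical placeholders, and the actual API call needs AWS credentials, so it is shown commented out), extending a VPC comes down to creating a subnet whose Availability Zone is a Wavelength Zone:

```python
def wavelength_subnet_params(vpc_id: str, cidr: str, zone: str) -> dict:
    """Build the create_subnet request that extends a VPC into a Wavelength Zone."""
    return {
        "VpcId": vpc_id,
        "CidrBlock": cidr,
        "AvailabilityZone": zone,  # a Wavelength Zone instead of a regular AZ
    }

# Placeholder IDs -- substitute your own VPC and a real Wavelength Zone name.
params = wavelength_subnet_params(
    "vpc-0123456789abcdef0", "10.0.2.0/24", "us-east-1-wl1-bos-wlz-1"
)

# With AWS credentials configured, the actual call would be:
# import boto3
# boto3.client("ec2").create_subnet(**params)
print(params["AvailabilityZone"])
```

Once the subnet exists, EC2 instances launched into it run at the 5G network edge, while the rest of your VPC stays where it is.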

The other neat news is that AWS is partnering with Verizon to make Wavelength available starting in 2020, and is working with other carriers like Vodafone, SK Telecom, and KDDI to expand Wavelength Zones to more locations by the end of 2020.

👉🏽Don’t forget to check out the docs: aws.amazon.com/wavelength
⚠Sign up to get notified when it’s released.

 

3. AWS DeepComposer: learn Machine Learning with a piano keyboard!

Backstory of why I picked this one…

I do not have a Machine Learning (ML) background. At all.

But I do have a piano and musical background. 🎹🎶I learnt how to play the piano at 4, and I first got into composing when I was about 12 years old. Not having a super fancy piano instructor at the time, I remember wondering how an average person could learn to compose, regardless of their musical background.

What got launched?

AWS DeepComposer is a machine learning-enabled keyboard for developers that also uses AI (Artificial Intelligence) to create original songs and melodies.

Sounds cool, but what does that actually mean?

AWS DeepComposer includes tutorials, sample code, and training data that can be used to get started building generative models, all without having to write a single line of code! This is great, because it helps encourage people new to ML to still give it a whirl.

Now, the other neat thing about AWS DeepComposer is that it opens the door for you to learn about Generative AI, one of the biggest advancements in AI technology. You’ll learn about Generative Adversarial Networks (GANs), a Generative AI technique that pits two different neural networks against each other to produce new and original digital works based on sample inputs. With AWS DeepComposer, you train and optimize GAN models to create original music. 🎶
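To make the “two networks pitted against each other” idea concrete, here is a deliberately tiny from-scratch GAN in plain NumPy (an illustration of the general technique, not DeepComposer’s actual models): a linear generator learns to imitate samples from a Gaussian by trying to fool a logistic discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b should learn to imitate data drawn from N(3, 1).
a, b = 0.1, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.01

for step in range(2000):
    x_real = rng.normal(3.0, 1.0, 64)   # real samples
    z = rng.normal(0.0, 1.0, 64)        # noise fed to the generator
    x_fake = a * z + b                  # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(round(b, 2))  # b starts at 0 and drifts toward the real data's mean
```

The adversarial loop is the whole trick: neither network ever sees a “correct answer,” yet the generator’s output distribution moves toward the real one. DeepComposer applies the same idea to music instead of numbers.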

Is that awesome, or what?

👉🏽Don’t forget to check out the docs: aws.amazon.com/deepcomposer
⚠Sign up to get notified when it’s released.

 

4. Amplify: now it’s ready for iOS and Android devs too!

Backstory of why I picked this one…

I used to be a CSS developer. Joining the Back-End world was an accident for me, since I first assumed I’d always be a Front-End developer.

Amplify makes it easy for developers to build and deploy full-stack apps that leverage the cloud. It’s a service that really helps bridge the gap between front-end and back-end development. Seeing Amplify now offer SDKs and libraries for iOS and Android devs makes it even more inclusive and exciting!

What got launched?

The Amplify Framework (an open source project for building cloud-enabled mobile and web apps) is now ready for iOS and Android developers! There are now, in preview, Amplify iOS and Amplify Android libraries for building scalable and secure cloud-powered serverless apps.

Sounds cool, but what does that actually mean?

Developers can now add capabilities of Analytics, AI/ML, API (GraphQL and REST), DataStore, and Storage to their mobile apps with these new iOS and Android Amplify libraries.

This release also includes support for the Predictions category in Amplify iOS, which allows developers to easily add and configure AI/ML use cases with very few lines of code (and no machine learning experience required!). Developers can then tackle use cases like text translation, speech-to-text, image recognition, text-to-speech, and extracting insights from text. You can even hook it up to services such as Amazon Rekognition, Amazon Translate, Amazon Polly, Amazon Transcribe, Amazon Comprehend, and Amazon Textract.

👉🏽Don’t forget to check out the docs…
📳Android: aws-amplify.github.io/docs/android/start
📱iOS: aws-amplify.github.io/docs/ios/start

 

5. EC2 Image Builder

Backstory of why I picked this one…

In my 1st year at AWS as a Developer Advocate, I got really into robotics and IoT. I’m not giving that up anytime soon, but for 2020, I’m also excited to serve more customers that are new to core AWS services. You know, things like storage, compute, containers, databases, etc.

Thus, it came as no surprise to me when this new launch caught my eye… 👀

What got launched?

EC2 Image Builder is a service that makes it easier and faster to build and maintain secure OS images. It greatly simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images.

Sounds cool, but what does that actually mean?

In the past, creating custom OS images felt way too complex and time consuming. Most dev teams had to manually update VMs or build automation scripts to maintain these images.

Can you imagine?

Today, EC2 Image Builder simplifies this process by allowing you to create custom OS images from a graphical interface in the AWS console. You can also use it to build an automated pipeline that customizes, tests, and distributes your images, in addition to keeping them secure and up-to-date. Sounds like a win-win to me. 🏆

👉🏽Don’t forget to check out the docs: aws.amazon.com/image-builder

 

¡Gracias por tu tiempo!
~Alejandra 💁🏻‍♀️ & Canela 🐾

New – Amazon Comprehend Medical Adds Ontology Linking

This post was originally published on this site

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights in unstructured text. It is very easy to use, with no machine learning experience required. You can customize Comprehend for your specific use case, for example creating custom document classifiers to organize your documents into your own categories, or custom entity types that analyze text for your specific terms. However, medical terminology can be very complex and specific to the healthcare domain.

For this reason, last year we introduced Amazon Comprehend Medical, a HIPAA-eligible natural language processing service that makes it easy to use machine learning to extract relevant medical information from unstructured text. Using Comprehend Medical, you can quickly and accurately gather information such as medical condition, medication, dosage, strength, and frequency from a variety of sources, like doctors’ notes, clinical trial reports, and patient health records.

Today, we are adding the capability of linking the information extracted by Comprehend Medical to medical ontologies.

An ontology provides a declarative model of a domain that defines and represents the concepts existing in that domain, their attributes, and the relationships between them. It is typically represented as a knowledge base, and made available to applications that need to use or share knowledge. Within health informatics, an ontology is a formal description of a health-related domain.

The ontologies supported by Comprehend Medical are:

  • ICD-10-CM, to identify medical conditions as entities and link related information such as diagnosis, severity, and anatomical distinctions as attributes of that entity. This is a diagnosis code set that is very useful for population health analytics, and for getting payments from insurance companies based on medical services rendered.
  • RxNorm, to identify medications as entities and link attributes such as dose, frequency, strength, and route of administration to that entity. Healthcare providers use these concepts to enable use cases like medication reconciliation, which is the process of creating the most accurate list possible of all medications a patient is taking.

For each ontology, Comprehend Medical returns a ranked list of potential matches. You can use confidence scores to decide which matches make sense, or what might need further review. Let’s see how this works with an example.

Using Ontology Linking
In the Comprehend Medical console, I start by providing some unstructured doctor’s notes as input:

First, I use some features that were already available in Comprehend Medical to detect medical and protected health information (PHI) entities.

Among the recognized entities (see this post for more info) are some symptoms and medications. Medications are recognized as generics or brands. Let’s see how we can connect some of these entities to more specific concepts.

I use the new feature to link those medication entities to RxNorm concepts.

In the text, only the parts mentioning medications are detected. The detailed results contain more information. For example, let’s look at one of the detected medications:

  • The first occurrence of the term “Clonidine” (on the second line of the input text above) is linked to the generic concept (on the left in the image below) in the RxNorm ontology.
  • The second occurrence of the term “Clonidine” (on the fourth line of the input text above) is followed by an explicit dosage, and is linked to a more prescriptive form that includes dosage (on the right in the image below) in the RxNorm ontology.

To look for medical conditions using ICD-10-CM concepts, I provide a different input:

The idea again is to link the detected entities, like symptoms and diagnoses, to specific concepts.

As expected, diagnoses and symptoms are recognized as entities. In the detailed results those entities are linked to the medical conditions in the ICD-10-CM ontology. For example, the two main diagnoses described in the input text are the top results, and specific concepts in the ontology are inferred by Comprehend Medical, each with its own score.

In production, you can use Comprehend Medical via its API to integrate these features into your processing workflow. All the screenshots above are visual renderings of the structured information returned by the API in JSON format. For example, this is the result of detecting medications (RxNorm concepts):

{
    "Entities": [
        {
            "Id": 0,
            "Text": "Clonidine",
            "Category": "MEDICATION",
            "Type": "GENERIC_NAME",
            "Score": 0.9933062195777893,
            "BeginOffset": 83,
            "EndOffset": 92,
            "Attributes": [],
            "Traits": [],
            "RxNormConcepts": [
                {
                    "Description": "Clonidine",
                    "Code": "2599",
                    "Score": 0.9148101806640625
                },
                {
                    "Description": "168 HR Clonidine 0.00417 MG/HR Transdermal System",
                    "Code": "998671",
                    "Score": 0.8215734958648682
                },
                {
                    "Description": "Clonidine Hydrochloride 0.025 MG Oral Tablet",
                    "Code": "892791",
                    "Score": 0.7519310116767883
                },
                {
                    "Description": "10 ML Clonidine Hydrochloride 0.5 MG/ML Injection",
                    "Code": "884225",
                    "Score": 0.7171697020530701
                },
                {
                    "Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet",
                    "Code": "884185",
                    "Score": 0.6776907444000244
                }
            ]
        },
        {
            "Id": 1,
            "Text": "Vyvanse",
            "Category": "MEDICATION",
            "Type": "BRAND_NAME",
            "Score": 0.9995427131652832,
            "BeginOffset": 148,
            "EndOffset": 155,
            "Attributes": [
                {
                    "Type": "DOSAGE",
                    "Score": 0.9910679459571838,
                    "RelationshipScore": 0.9999822378158569,
                    "Id": 2,
                    "BeginOffset": 156,
                    "EndOffset": 162,
                    "Text": "50 mgs",
                    "Traits": []
                },
                {
                    "Type": "ROUTE_OR_MODE",
                    "Score": 0.9997182488441467,
                    "RelationshipScore": 0.9993833303451538,
                    "Id": 3,
                    "BeginOffset": 163,
                    "EndOffset": 165,
                    "Text": "po",
                    "Traits": []
                },
                {
                    "Type": "FREQUENCY",
                    "Score": 0.983681321144104,
                    "RelationshipScore": 0.9999642372131348,
                    "Id": 4,
                    "BeginOffset": 166,
                    "EndOffset": 184,
                    "Text": "at breakfast daily",
                    "Traits": []
                }
            ],
            "Traits": [],
            "RxNormConcepts": [
                {
                    "Description": "lisdexamfetamine dimesylate 50 MG Oral Capsule [Vyvanse]",
                    "Code": "854852",
                    "Score": 0.8883932828903198
                },
                {
                    "Description": "lisdexamfetamine dimesylate 50 MG Chewable Tablet [Vyvanse]",
                    "Code": "1871469",
                    "Score": 0.7482635378837585
                },
                {
                    "Description": "Vyvanse",
                    "Code": "711043",
                    "Score": 0.7041242122650146
                },
                {
                    "Description": "lisdexamfetamine dimesylate 70 MG Oral Capsule [Vyvanse]",
                    "Code": "854844",
                    "Score": 0.23675969243049622
                },
                {
                    "Description": "lisdexamfetamine dimesylate 60 MG Oral Capsule [Vyvanse]",
                    "Code": "854848",
                    "Score": 0.14077001810073853
                }
            ]
        },
        {
            "Id": 5,
            "Text": "Clonidine",
            "Category": "MEDICATION",
            "Type": "GENERIC_NAME",
            "Score": 0.9982216954231262,
            "BeginOffset": 199,
            "EndOffset": 208,
            "Attributes": [
                {
                    "Type": "STRENGTH",
                    "Score": 0.7696017026901245,
                    "RelationshipScore": 0.9999960660934448,
                    "Id": 6,
                    "BeginOffset": 209,
                    "EndOffset": 216,
                    "Text": "0.2 mgs",
                    "Traits": []
                },
                {
                    "Type": "DOSAGE",
                    "Score": 0.777644693851471,
                    "RelationshipScore": 0.9999927282333374,
                    "Id": 7,
                    "BeginOffset": 220,
                    "EndOffset": 236,
                    "Text": "1 and 1 / 2 tabs",
                    "Traits": []
                },
                {
                    "Type": "ROUTE_OR_MODE",
                    "Score": 0.9981689453125,
                    "RelationshipScore": 0.999950647354126,
                    "Id": 8,
                    "BeginOffset": 237,
                    "EndOffset": 239,
                    "Text": "po",
                    "Traits": []
                },
                {
                    "Type": "FREQUENCY",
                    "Score": 0.99753737449646,
                    "RelationshipScore": 0.9999889135360718,
                    "Id": 9,
                    "BeginOffset": 240,
                    "EndOffset": 243,
                    "Text": "qhs",
                    "Traits": []
                }
            ],
            "Traits": [],
            "RxNormConcepts": [
                {
                    "Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet",
                    "Code": "884185",
                    "Score": 0.9600071907043457
                },
                {
                    "Description": "Clonidine Hydrochloride 0.025 MG Oral Tablet",
                    "Code": "892791",
                    "Score": 0.8955953121185303
                },
                {
                    "Description": "24 HR Clonidine Hydrochloride 0.2 MG Extended Release Oral Tablet",
                    "Code": "885880",
                    "Score": 0.8706559538841248
                },
                {
                    "Description": "12 HR Clonidine Hydrochloride 0.2 MG Extended Release Oral Tablet",
                    "Code": "1013937",
                    "Score": 0.786146879196167
                },
                {
                    "Description": "Chlorthalidone 15 MG / Clonidine Hydrochloride 0.2 MG Oral Tablet",
                    "Code": "884198",
                    "Score": 0.601354718208313
                }
            ]
        }
    ],
    "ModelVersion": "0.0.0"
}

Similarly, this is the output when detecting medical conditions (ICD-10-CM concepts):

{
    "Entities": [
        {
            "Id": 0,
            "Text": "coronary artery disease",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9933860898017883,
            "BeginOffset": 90,
            "EndOffset": 113,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9682672023773193
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery without angina pectoris",
                    "Code": "I25.10",
                    "Score": 0.8199513554573059
                },
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery",
                    "Code": "I25.1",
                    "Score": 0.4950370192527771
                },
                {
                    "Description": "Old myocardial infarction",
                    "Code": "I25.2",
                    "Score": 0.18753206729888916
                },
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery with unstable angina pectoris",
                    "Code": "I25.110",
                    "Score": 0.16535982489585876
                },
                {
                    "Description": "Atherosclerotic heart disease of native coronary artery with unspecified angina pectoris",
                    "Code": "I25.119",
                    "Score": 0.15222692489624023
                }
            ]
        },
        {
            "Id": 2,
            "Text": "atrial fibrillation",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9923409223556519,
            "BeginOffset": 116,
            "EndOffset": 135,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9708861708641052
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Unspecified atrial fibrillation",
                    "Code": "I48.91",
                    "Score": 0.7011875510215759
                },
                {
                    "Description": "Chronic atrial fibrillation",
                    "Code": "I48.2",
                    "Score": 0.28612759709358215
                },
                {
                    "Description": "Paroxysmal atrial fibrillation",
                    "Code": "I48.0",
                    "Score": 0.21157972514629364
                },
                {
                    "Description": "Persistent atrial fibrillation",
                    "Code": "I48.1",
                    "Score": 0.16996538639068604
                },
                {
                    "Description": "Atrial premature depolarization",
                    "Code": "I49.1",
                    "Score": 0.16715925931930542
                }
            ]
        },
        {
            "Id": 3,
            "Text": "hypertension",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9993137121200562,
            "BeginOffset": 138,
            "EndOffset": 150,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9734011888504028
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Essential (primary) hypertension",
                    "Code": "I10",
                    "Score": 0.6827990412712097
                },
                {
                    "Description": "Hypertensive heart disease without heart failure",
                    "Code": "I11.9",
                    "Score": 0.09846580773591995
                },
                {
                    "Description": "Hypertensive heart disease with heart failure",
                    "Code": "I11.0",
                    "Score": 0.09182810038328171
                },
                {
                    "Description": "Pulmonary hypertension, unspecified",
                    "Code": "I27.20",
                    "Score": 0.0866364985704422
                },
                {
                    "Description": "Primary pulmonary hypertension",
                    "Code": "I27.0",
                    "Score": 0.07662317156791687
                }
            ]
        },
        {
            "Id": 4,
            "Text": "hyperlipidemia",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9998835325241089,
            "BeginOffset": 153,
            "EndOffset": 167,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "DIAGNOSIS",
                    "Score": 0.9702492356300354
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Hyperlipidemia, unspecified",
                    "Code": "E78.5",
                    "Score": 0.8378056883811951
                },
                {
                    "Description": "Disorders of lipoprotein metabolism and other lipidemias",
                    "Code": "E78",
                    "Score": 0.20186281204223633
                },
                {
                    "Description": "Lipid storage disorder, unspecified",
                    "Code": "E75.6",
                    "Score": 0.18514418601989746
                },
                {
                    "Description": "Pure hyperglyceridemia",
                    "Code": "E78.1",
                    "Score": 0.1438658982515335
                },
                {
                    "Description": "Other hyperlipidemia",
                    "Code": "E78.49",
                    "Score": 0.13983778655529022
                }
            ]
        },
        {
            "Id": 5,
            "Text": "chills",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9989762306213379,
            "BeginOffset": 211,
            "EndOffset": 217,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.9510533213615417
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Chills (without fever)",
                    "Code": "R68.83",
                    "Score": 0.7460958361625671
                },
                {
                    "Description": "Fever, unspecified",
                    "Code": "R50.9",
                    "Score": 0.11848161369562149
                },
                {
                    "Description": "Typhus fever, unspecified",
                    "Code": "A75.9",
                    "Score": 0.07497859001159668
                },
                {
                    "Description": "Neutropenia, unspecified",
                    "Code": "D70.9",
                    "Score": 0.07332006841897964
                },
                {
                    "Description": "Lassa fever",
                    "Code": "A96.2",
                    "Score": 0.0721040666103363
                }
            ]
        },
        {
            "Id": 6,
            "Text": "nausea",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9993392825126648,
            "BeginOffset": 220,
            "EndOffset": 226,
            "Attributes": [],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.9175007939338684
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Nausea",
                    "Code": "R11.0",
                    "Score": 0.7333012819290161
                },
                {
                    "Description": "Nausea with vomiting, unspecified",
                    "Code": "R11.2",
                    "Score": 0.20183530449867249
                },
                {
                    "Description": "Hematemesis",
                    "Code": "K92.0",
                    "Score": 0.1203150525689125
                },
                {
                    "Description": "Vomiting, unspecified",
                    "Code": "R11.10",
                    "Score": 0.11658868193626404
                },
                {
                    "Description": "Nausea and vomiting",
                    "Code": "R11",
                    "Score": 0.11535880714654922
                }
            ]
        },
        {
            "Id": 8,
            "Text": "flank pain",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9315784573554993,
            "BeginOffset": 235,
            "EndOffset": 245,
            "Attributes": [
                {
                    "Type": "ACUITY",
                    "Score": 0.9809532761573792,
                    "RelationshipScore": 0.9999837875366211,
                    "Id": 7,
                    "BeginOffset": 229,
                    "EndOffset": 234,
                    "Text": "acute",
                    "Traits": []
                }
            ],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.8182812929153442
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Unspecified abdominal pain",
                    "Code": "R10.9",
                    "Score": 0.4959934949874878
                },
                {
                    "Description": "Generalized abdominal pain",
                    "Code": "R10.84",
                    "Score": 0.12332479655742645
                },
                {
                    "Description": "Lower abdominal pain, unspecified",
                    "Code": "R10.30",
                    "Score": 0.08319114148616791
                },
                {
                    "Description": "Upper abdominal pain, unspecified",
                    "Code": "R10.10",
                    "Score": 0.08275411278009415
                },
                {
                    "Description": "Jaw pain",
                    "Code": "R68.84",
                    "Score": 0.07797083258628845
                }
            ]
        },
        {
            "Id": 10,
            "Text": "numbness",
            "Category": "MEDICAL_CONDITION",
            "Type": "DX_NAME",
            "Score": 0.9659366011619568,
            "BeginOffset": 255,
            "EndOffset": 263,
            "Attributes": [
                {
                    "Type": "SYSTEM_ORGAN_SITE",
                    "Score": 0.9976192116737366,
                    "RelationshipScore": 0.9999089241027832,
                    "Id": 11,
                    "BeginOffset": 271,
                    "EndOffset": 274,
                    "Text": "leg",
                    "Traits": []
                }
            ],
            "Traits": [
                {
                    "Name": "SYMPTOM",
                    "Score": 0.7310190796852112
                }
            ],
            "ICD10CMConcepts": [
                {
                    "Description": "Anesthesia of skin",
                    "Code": "R20.0",
                    "Score": 0.767346203327179
                },
                {
                    "Description": "Paresthesia of skin",
                    "Code": "R20.2",
                    "Score": 0.13602739572525024
                },
                {
                    "Description": "Other complications of anesthesia",
                    "Code": "T88.59",
                    "Score": 0.09990577399730682
                },
                {
                    "Description": "Hypothermia following anesthesia",
                    "Code": "T88.51",
                    "Score": 0.09953102469444275
                },
                {
                    "Description": "Disorder of the skin and subcutaneous tissue, unspecified",
                    "Code": "L98.9",
                    "Score": 0.08736388385295868
                }
            ]
        }
    ],
    "ModelVersion": "0.0.0"
}

Available Now
You can use Amazon Comprehend Medical via the console, AWS Command Line Interface (CLI), or AWS SDKs. With Comprehend Medical, you pay only for what you use: you are charged monthly based on the amount of text processed and the features you use. For more information, please see the Comprehend Medical section of the Comprehend Pricing page. Ontology Linking is available in all regions where Amazon Comprehend Medical is offered, as described in the AWS Regions Table.
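For example, with the AWS SDK for Python you could call the InferICD10CM API (`client = boto3.client("comprehendmedical")`, then `client.infer_icd10_cm(Text=clinical_note)`) and get back a response shaped like the JSON above. As a minimal sketch, assuming a response in that shape, the following hypothetical helper keeps only the top-scoring ICD-10-CM concept for each detected condition:

```python
def top_icd10_codes(response):
    """Map each detected condition to its best-matching ICD-10-CM code.

    `response` is expected to look like the InferICD10CM output shown
    above: a dict with an "Entities" list, where each entity may carry
    an "ICD10CMConcepts" list of candidate codes with confidence scores.
    """
    results = []
    for entity in response.get("Entities", []):
        concepts = entity.get("ICD10CMConcepts", [])
        if not concepts:
            continue
        # Pick the candidate concept with the highest confidence score.
        best = max(concepts, key=lambda c: c["Score"])
        results.append((entity["Text"], best["Code"], best["Description"]))
    return results


# A small sample mirroring the hypertension entity from the response above.
sample = {
    "Entities": [
        {
            "Text": "hypertension",
            "ICD10CMConcepts": [
                {"Description": "Essential (primary) hypertension",
                 "Code": "I10", "Score": 0.6828},
                {"Description": "Hypertensive heart disease without heart failure",
                 "Code": "I11.9", "Score": 0.0985},
            ],
        }
    ]
}

print(top_icd10_codes(sample))
# → [('hypertension', 'I10', 'Essential (primary) hypertension')]
```

In practice you would likely apply a score threshold rather than always taking the top candidate, since low-confidence matches (like the 0.07–0.12 concepts in the sample response) may need human review.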

The new ontology linking APIs make it easy to detect medications and medical conditions in unstructured clinical text and link them to RxNorm and ICD-10-CM codes, respectively. This new feature can help you process large amounts of unstructured medical text with high accuracy while reducing cost, time, and effort.

Danilo