Tag Archives: AWS

AWS Weekly Roundup – Application Load Balancer IPv6, Amazon S3 pricing update, Amazon EC2 Flex instances, and more (May 20, 2024)

This post was originally published on this site

AWS Summit season is in full swing around the world, with last week's events in Bengaluru, Berlin, and Seoul, where my blog colleague Channy delivered one of the keynotes.

AWS Summit Seoul Keynote

Last week’s launches
Here are some launches that got my attention:

Amazon S3 will no longer charge for several HTTP error codes – A customer reported being charged for Amazon S3 API requests he didn't initiate, which resulted in AccessDenied errors. The Amazon Simple Storage Service (Amazon S3) service team updated the service so that such API requests are no longer charged. As always when talking about pricing, the exact wording is important, so please read the What's New post for the details.

Introducing Amazon EC2 C7i-flex instances – These instances deliver up to 19 percent better price performance compared to C6i instances. Using C7i-flex instances is the easiest way to get price performance benefits for a majority of compute-intensive workloads. The new instances are powered by custom 4th generation Intel Xeon Scalable processors (Sapphire Rapids) that are available only on AWS and offer 5 percent lower prices compared to C7i.

Application Load Balancer launches IPv6-only support for internet clients – Application Load Balancer now allows customers to provision load balancers without IPv4 addresses for clients that can connect using IPv6 only. To connect, clients resolve AAAA DNS records that are assigned to the Application Load Balancer. The load balancer remains dual stack for communication between the load balancer and targets. With this new capability, you have the flexibility to use either IPv4 or IPv6 for your application targets while avoiding IPv4 charges for clients that don't require it.

Amazon VPC Lattice now supports TLS Passthrough – We announced the general availability of TLS passthrough for Amazon VPC Lattice, which allows customers to enable end-to-end authentication and encryption using their existing TLS or mTLS implementations. Prior to this launch, VPC Lattice supported only HTTP and HTTPS listener protocols, which terminate TLS and perform request-level routing and load balancing based on information in HTTP headers.

Amazon DocumentDB zero-ETL integration with Amazon OpenSearch Service – This new integration provides you with advanced search capabilities, such as fuzzy search, cross-collection search and multilingual search, on your Amazon DocumentDB (with MongoDB compatibility) documents using the OpenSearch API. With a few clicks in the AWS Management Console, you can now synchronize your data from Amazon DocumentDB to Amazon OpenSearch Service, eliminating the need to write any custom code to extract, transform, and load the data.

Amazon EventBridge now supports customer managed keys (CMK) for event buses – This capability allows you to encrypt your events using your own keys instead of an AWS owned key (which is used by default). With support for CMK, you now have more fine-grained security control over your events, satisfying your company’s security requirements and governance policies.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional news items, open source projects, and Twitch shows that you might find interesting:

The Four Pillars of Managing Email Reputation – Dustin Taylor is the manager of anti-abuse and email deliverability for Amazon Simple Email Service (SES). He wrote a remarkable post exploring the Amazon SES approach to managing domain and IP reputation. Maintaining a high reputation helps ensure your messages actually reach recipients' inboxes. His post outlines how Amazon SES protects its network reputation to help you deliver high-quality email consistently. A worthy read, even if you're not sending email at scale. I learned a lot.

Build On Generative AI – Season 3 of your favorite weekly Twitch show about all things generative artificial intelligence (AI) is in full swing! Streaming every Monday at 9:00 AM US PT, my colleagues Tiffany and Darko discuss different aspects of generative AI and invite guest speakers to demo their work.

AWS open source news and updates – My colleague Ricardo writes this weekly open source newsletter, in which he highlights new open source projects, tools, and demos from the AWS Community.

Upcoming AWS events

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

Browse all upcoming AWS led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— seb

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

New compute-optimized (C7i-flex) Amazon EC2 Flex instances


The vast majority of applications don't run the CPU flat-out at 100 percent utilization continuously. Take a web application, for instance. It typically fluctuates between periods of high and low demand, but hardly ever uses a server's compute at full capacity.

CPU utilization for many common workloads that customers run in the AWS Cloud today: low-to-moderate utilization most of the time, with occasional peaks. (source: AWS Documentation)

One easy and cost-effective way to run such workloads is to use the Amazon EC2 M7i-flex instances, which we introduced last August. These are lower-priced variants of the Amazon EC2 M7i instances that offer the same next-generation specs for general purpose compute in the most popular sizes, with the added benefit of better price/performance if you don't need full compute power 100 percent of the time. This makes them a great first choice if you are looking to reduce your running costs while meeting the same performance benchmarks.

This flexibility resonated really well with customers, so today we are expanding our Flex portfolio by launching Amazon EC2 C7i-flex instances, which offer similar price/performance benefits and lower costs for compute-intensive workloads. These are lower-priced variants of the Amazon EC2 C7i instances that offer a baseline level of CPU performance with the ability to scale up to full compute performance 95 percent of the time.

C7i-flex instances
C7i-flex offers five of the most common sizes from large to 8xlarge, delivering 19 percent better price performance than Amazon EC2 C6i instances.

Instance name     vCPU  Memory (GiB)  Instance storage (GB)  Network bandwidth (Gbps)  EBS bandwidth (Gbps)
c7i-flex.large      2        4        EBS-only               up to 12.5                up to 10
c7i-flex.xlarge     4        8        EBS-only               up to 12.5                up to 10
c7i-flex.2xlarge    8       16        EBS-only               up to 12.5                up to 10
c7i-flex.4xlarge   16       32        EBS-only               up to 12.5                up to 10
c7i-flex.8xlarge   32       64        EBS-only               up to 12.5                up to 10

Should I use C7i-flex or C7i?
Both C7i-flex and C7i are compute-optimized instances powered by custom 4th Generation Intel Xeon Scalable processors, which are only available at Amazon Web Services (AWS). They offer up to 15 percent better performance over comparable x86-based Intel processors used by other cloud providers.

They both also use DDR5 memory, feature a 2:1 ratio of memory to vCPU, and are ideal for running applications such as web and application servers, databases, caches, Apache Kafka, and Elasticsearch.

So why would you use one over the other? Here are three things to consider when deciding which one is right for you.

Usage pattern
EC2 Flex instances are a great fit when you don't need to fully utilize all compute resources.

You can achieve 5 percent better price performance and 5 percent lower prices compared to C7i due to more efficient use of compute resources. Typically, this is a great fit for most applications, so C7i-flex instances should be the first choice for compute-intensive workloads.

However, if your application requires continuous high CPU usage, then you should use C7i instances instead. They are likely more suitable for workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
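
As a back-of-the-envelope illustration of the pricing trade-off, here is a short Python sketch. The hourly price is a hypothetical placeholder, not a real AWS price; actual On-Demand prices vary by size and Region.

```python
# Hypothetical price for illustration only; real On-Demand prices vary by size and Region.
c7i_price = 1.00                    # assumed hourly C7i price (hypothetical)
c7i_flex_price = c7i_price * 0.95   # C7i-flex is priced 5 percent lower

# If a workload only needs the baseline most of the time, the flex variant
# does the same work at a lower hourly cost.
hours = 730  # roughly one month of continuous running
savings = (c7i_price - c7i_flex_price) * hours
print(f"Monthly savings per instance: ${savings:.2f}")
```

The same arithmetic scales linearly with fleet size, which is why the flex variants are positioned as the default choice when utilization is not sustained.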

Instance sizes
C7i-flex instances offer the most common sizes used by a majority of workloads, going up to a maximum of 8xlarge.

If you need higher specs, you should look into the larger C7i instances, which include the 12xlarge, 16xlarge, 24xlarge, and 48xlarge sizes, as well as two bare metal options, metal-24xl and metal-48xl.

Network bandwidth
Larger sizes also offer higher network and Amazon Elastic Block Store (Amazon EBS) bandwidths, so you may need one of the larger C7i instances depending on your requirements. C7i-flex instances offer up to 12.5 Gbps of network bandwidth and up to 10 Gbps of EBS bandwidth, which should be suitable for most applications.

Things to know
Regions – Visit AWS Services by Region to check whether C7i-flex instances are available in your preferred regions.

Purchasing options – C7i-flex and C7i instances are available in On-Demand, Savings Plans, Reserved Instance, and Spot form. C7i instances are also available in Dedicated Host and Dedicated Instance form.

To learn more, visit the Amazon EC2 C7i and C7i-flex instances page.

Matheus Guimaraes

AWS Weekly Roundup: New capabilities in Amazon Bedrock, AWS Amplify Gen 2, Amazon RDS and more (May 13, 2024)


AWS Summit is in full swing around the world, with the most recent one being AWS Summit Singapore! Here is a sneak peek of the AWS staff and ASEAN community members at the Developer Lounge booth. It featured AWS Community speakers giving lightning talks on serverless, Amazon Elastic Kubernetes Service (Amazon EKS), security, generative AI, and more.

Last week’s launches
Here are some launches that caught my attention. Not surprisingly, a lot of interesting generative AI features!

Amazon Titan Text Premier is now available in Amazon Bedrock – This is the latest addition to the Amazon Titan family of large language models (LLMs) and offers optimized performance for key features like Retrieval Augmented Generation (RAG) on Knowledge Bases for Amazon Bedrock, and function calling on Agents for Amazon Bedrock.

Amazon Bedrock Studio is now available in public preview – Amazon Bedrock Studio offers a web-based experience to accelerate the development of generative AI applications by providing a rapid prototyping environment with key Amazon Bedrock features, including Knowledge Bases, Agents, and Guardrails.

Amazon Bedrock Studio

Agents for Amazon Bedrock now supports Provisioned Throughput pricing model – As agentic applications scale, they require higher input and output model throughput compared to on-demand limits. The Provisioned Throughput pricing model makes it possible to purchase model units for the specific base model.

MongoDB Atlas is now available as a vector store in Knowledge Bases for Amazon Bedrock – With MongoDB Atlas vector store integration, you can build RAG solutions to securely connect your organization’s private data sources to foundation models (FMs) in Amazon Bedrock.

Amazon RDS for PostgreSQL supports pgvector 0.7.0 – You can use this open-source PostgreSQL extension to store vector embeddings and add retrieval-augmented generation (RAG) capabilities to your generative AI applications. This release includes features that increase the number of vector dimensions you can index, reduce index size, and add support for using CPU SIMD in distance computations. Also, Amazon RDS Performance Insights now supports the Oracle Multitenant configuration on Amazon RDS for Oracle.
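
To make those distance computations concrete, here is a plain Python sketch of the two most common metrics pgvector exposes (Euclidean distance behind its `<->` operator and cosine distance behind `<=>`). The embeddings below are made-up toy vectors; in practice they would come from an embedding model.

```python
import math

def l2_distance(a, b):
    """Euclidean distance, the metric behind pgvector's <-> operator."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    """Cosine distance, the metric behind pgvector's <=> operator."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

# Toy embeddings standing in for model output.
query = [0.1, 0.9, 0.0]
docs = {"doc-a": [0.1, 0.8, 0.1], "doc-b": [0.9, 0.0, 0.2]}

# Rank stored documents by distance to the query vector (nearest first),
# which is what an indexed ORDER BY ... <=> query does server-side.
ranked = sorted(docs, key=lambda name: cosine_distance(query, docs[name]))
print(ranked)
```

The SIMD support in this release accelerates exactly these inner loops (dot products and squared differences) inside the database engine.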

Amazon EC2 Inf2 instances are now available in new regions – These instances are optimized for generative AI workloads and are generally available in the Asia Pacific (Sydney), Europe (London), Europe (Paris), Europe (Stockholm), and South America (Sao Paulo) Regions.

New Generative Engine in Amazon Polly is now generally available – The generative engine in Amazon Polly is its most advanced text-to-speech (TTS) model and currently includes two American English voices, Ruth and Matthew, and one British English voice, Amy.

AWS Amplify Gen 2 is now generally available – AWS Amplify offers a code-first developer experience for building full-stack apps using TypeScript and enables developers to express app requirements like data models, business logic, and authorization rules in TypeScript. AWS Amplify Gen 2 has added a number of features since the preview, including a new Amplify console with features such as custom domains, data management, and pull request (PR) previews.

Amazon EMR Serverless now includes performance monitoring of Apache Spark jobs with Amazon Managed Service for Prometheus – This lets you analyze, monitor, and optimize your jobs using job-specific engine metrics and information about Spark event timelines, stages, tasks, and executors. Also, Amazon EMR Studio is now available in the Asia Pacific (Melbourne) and Israel (Tel Aviv) Regions.

Amazon MemoryDB launched two new condition keys for IAM policies – The new condition keys let you create AWS Identity and Access Management (IAM) policies or Service Control Policies (SCPs) to enhance security and meet compliance requirements. Also, Amazon ElastiCache has updated its minimum TLS version to 1.2.

Amazon Lightsail now offers a larger instance bundle – This includes 16 vCPUs and 64 GB memory. You can now scale your web applications and run more compute and memory-intensive workloads in Lightsail.

Amazon Elastic Container Registry (ECR) adds pull through cache support for GitLab Container Registry – ECR customers can create a pull through cache rule that maps an upstream registry to a namespace in their private ECR registry. Once a rule is configured, images can be pulled through ECR from the GitLab Container Registry. ECR automatically creates new repositories for cached images and keeps them in sync with the upstream registry.
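
To illustrate the mapping a rule sets up, here is a toy Python sketch of how an upstream image reference is rewritten into a namespace in your private registry. The account ID, Region, and namespace below are hypothetical placeholders, not values from the announcement.

```python
# Sketch of how a pull-through cache rule maps image names; the account ID,
# Region, and "gitlab" namespace are hypothetical values for illustration.
RULES = {"registry.gitlab.com": "gitlab"}  # upstream registry -> ECR namespace

def to_ecr_uri(image, account="123456789012", region="us-east-1"):
    """Rewrite an upstream image reference into its cached ECR counterpart."""
    upstream, _, repo = image.partition("/")
    namespace = RULES[upstream]
    return f"{account}.dkr.ecr.{region}.amazonaws.com/{namespace}/{repo}"

print(to_ecr_uri("registry.gitlab.com/my-group/my-image:latest"))
```

Pulling the rewritten URI is what triggers ECR to fetch the image from the upstream registry, cache it, and keep it synchronized.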

AWS Resilience Hub expands application resilience drift detection capabilities – This new enhancement detects changes, such as the addition or deletion of resources within the application’s input sources.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional projects and blog posts that you might find interesting.

Building games with LLMs – Check out this fun experiment by Banjo Obayomi to generate Super Mario levels using different LLMs on Amazon Bedrock!

Troubleshooting with Amazon Q – Ricardo Ferreira walks us through how he solved a nasty data serialization problem while working with Apache Kafka, Go, and Protocol Buffers.

Getting started with Amazon Q in VS Code – Check out this excellent step-by-step guide by Rohini Gaonkar that covers installing the extension for features like code completion, chat, and productivity-boosting capabilities powered by generative AI.

AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community. Check out Ricardo’s page for the latest updates.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Bengaluru (May 15–16), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

Browse all upcoming AWS led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— Abhishek

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

A new generative engine and three voices are now generally available on Amazon Polly


Today, we are announcing the general availability of the generative engine of Amazon Polly with three voices: Ruth and Matthew in American English and Amy in British English. The new generative engine was trained on publicly available and proprietary data covering a variety of voices, languages, and styles. It renders context-dependent prosody, pausing, spelling, dialectal properties, foreign word pronunciation, and more with high precision.

Amazon Polly is a machine learning (ML) service that converts text to lifelike speech, known as text-to-speech (TTS) technology. Amazon Polly includes high-quality, natural-sounding human-like voices in dozens of languages, so you can select the ideal voice and distribute your speech-enabled applications in many locales or countries.

With Amazon Polly, you can select various voice options, including neural, long-form, and generative voices, which deliver ground-breaking improvements in speech quality and produce human-like, highly expressive, and emotionally adept voices. You can store speech output in standard formats like MP3 or OGG, adjust the speech rate, pitch, or volume with Speech Synthesis Markup Language (SSML) tags, and quickly deliver lifelike voices and conversational user experiences with consistently fast response times.

What’s the new generative engine?
Amazon Polly now supports four voice engines: standard, neural, long-form, and generative.

Standard TTS voices, introduced in 2016, use traditional concatenative synthesis. This method strings together the phonemes of recorded speech, producing very natural-sounding synthesized speech. However, the inevitable variations in speech and the techniques used to segment the waveforms limit the quality of the output.

Neural TTS (NTTS) voices, introduced in 2019, use a sequence-to-sequence neural network that converts a sequence of phonemes into spectrograms, and a neural vocoder that converts the spectrograms into a continuous audio signal. NTTS produces even higher-quality, more human-like voices than the standard engine.

Long-form voices, introduced in 2023, are developed with cutting-edge deep learning TTS technology and designed to captivate listeners’ attention for longer content, such as news articles, training materials, or marketing videos.

In February 2024, Amazon scientists introduced a new research TTS model called Big Adaptive Streamable TTS with Emergent abilities (BASE). With this technology, the Amazon Polly generative engine is able to create human-like, synthetically generated voices. You can use these voices as a knowledgeable customer assistant, a virtual trainer, or an experienced marketer.

Here are the new generative voices:

  • Ruth – en_US, Female, English (US). Sample prompt: "Selma was lying on the ground halfway down the steps. 'Selma! Selma!' we shouted in panic."
  • Matthew – en_US, Male, English (US). Sample prompt: "The guards were standing outside with some of our neighbours, listening to a transistor radio. 'Any good news?' I asked. 'No, we're listening to the names of people who were killed yesterday,' Bruno replied."
  • Amy – en_GB, Female, English (British). Sample prompt: "'What are you looking at?' he said as he stood over me. They got off the bus and started searching the baggage compartment. The tension on the bus was like a dark, menacing cloud that hovered above us."

You can choose from these voice options to suit your application and use case. To learn more about the generative engine, visit Generative voices in the AWS documentation.

Get started with using generative voices
You can access the new voices using the AWS Management Console, AWS Command Line Interface (AWS CLI), or the AWS SDKs.

To get started, go to the Amazon Polly console in the US East (N. Virginia) Region and choose the Text-to-Speech menu in the left pane. If you select the voice Ruth or Matthew (English, US) or Amy (English, UK), you can choose the Generative engine. Input your text and listen to or download the generated voice output.

Using the CLI, you can list the voices that use the new generative engine:

$ aws polly describe-voices --output json --region us-east-1 \
    | jq -r '.Voices[] | select(.SupportedEngines | index("generative")) | .Name'

Matthew
Amy
Ruth

Now, run the synthesize-speech CLI command to synthesize sample text to an audio file (hello.mp3), passing the generative engine and a supported voice ID as parameters.

$ aws polly synthesize-speech --output-format mp3 --region us-east-1 \
    --text "Hello. This is my first generative voice!" \
    --voice-id Matthew --engine generative hello.mp3

For more code examples using the AWS SDKs, visit Code and Application Examples in the AWS documentation. You can find Java and Python code examples, application examples such as web applications using Java or Python, and iOS and Android applications.

Now available
The new generative voices of Amazon Polly are available today in the US East (N. Virginia) Region. You pay only for what you use, based on the number of characters of text that you convert to speech. To learn more, visit our Amazon Polly Pricing page.

Give new generative voices a try in the Amazon Polly console today and send feedback to AWS re:Post for Amazon Polly or through your usual AWS Support contacts.

Channy

AWS Weekly Roundup: Amazon Q, Amazon QuickSight, AWS CodeArtifact, Amazon Bedrock, and more (May 6, 2024)


April has been packed with new releases! Last week continued that trend with many new releases supporting a variety of domains such as security, analytics, and DevOps, as well as more exciting new capabilities within generative AI.

If you missed the AWS Summit London 2024, you can now watch the sessions on demand, including the keynote by Tanuja Randery, VP & Marketing Director, EMEA, and many of the breakout sessions, which will continue to be released over the coming weeks.

Last week’s launches
Here are some of the highlights that caught my attention this week:

Manual and automatic rollback from any stage in AWS CodePipeline – You can now roll back any stage, other than Source, to any previously known good state if you use a V2 pipeline in AWS CodePipeline. You can configure automatic rollback, which will use the source changes from the most recent successful pipeline execution in the case of failure, or you can initiate a manual rollback for any stage from the console, API, or SDK and choose which pipeline execution you want to use for the rollback.

AWS CodeArtifact now supports RubyGems – Ruby community, rejoice, you can now store your gems in AWS CodeArtifact! You can integrate it with RubyGems.org, and CodeArtifact will automatically fetch any gems requested by the client and store them locally in your CodeArtifact repository. That means that you can have a centralized place for both your first-party and public gems so developers can access their dependencies from a single source.

Ruby-repo screenshot

Create a repository in AWS CodeArtifact and choose “rubygems-store” from the “Public upstream repositories” dropdown to connect your repository to RubyGems.org.

Amazon EventBridge Pipes now supports event delivery through AWS PrivateLink – You can now deliver events to an Amazon EventBridge Pipes target without traversing the public internet by using AWS PrivateLink. You can poll for events in a private subnet in your Amazon Virtual Private Cloud (VPC) without having to deploy any additional infrastructure to keep your traffic private.

Amazon Bedrock launches continue. You can now run scalable, enterprise-grade generative AI workloads with Cohere Command R & R+. And Amazon Titan Text V2 is now optimized for improving Retrieval-Augmented Generation (RAG).

AWS Trusted Advisor – Last year we launched Trusted Advisor APIs, enabling you to programmatically consume recommendations. A new API is now available that you can use to exclude resources from recommendations.

Amazon EC2 – There have been two great new launches this week for EC2 users. You can now mark your AMIs as “protected” to avoid them being deregistered by accident. You can also now easily discover your active AMIs by simply describing them.

Amazon CodeCatalyst – You can now view your Git commit history in the CodeCatalyst console.

General Availability
Many new services and capabilities became generally available this week.

Amazon Q in QuickSight – Amazon Q has brought generative BI to Amazon QuickSight, giving you the ability to build beautiful dashboards automatically simply by using natural language, and it's now generally available. To get started, head to the QuickSight Pricing page to explore all options, or start a 30-day free trial, which allows up to four users per QuickSight account to use all the new generative AI features.

With the new generative AI features enabled by Amazon Q in Amazon QuickSight you can use natural language queries to build, sort and filter dashboards. (source: AWS Documentation)

Amazon Q Business (GA) and Amazon Q Apps (Preview) – Also generally available now is Amazon Q Business, which we launched last year at AWS re:Invent 2023, with the ability to connect seamlessly with over 40 popular enterprise systems, including Microsoft 365, Salesforce, Amazon Simple Storage Service (Amazon S3), Gmail, and many more. This allows Amazon Q Business to know about your business, so your employees can generate content, solve problems, and take actions that are specific to your business.

We have also launched support for custom plug-ins, so now you can create your own integrations with any third-party application.

Q-business screenshot

With general availability of Amazon Q Business we have also launched the ability to create your own custom plugins to connect to any third-party API.

Another highlight of this release is the launch of Amazon Q Apps, which enables you to quickly generate an app from your conversation with Amazon Q Business, or by describing what you would like it to generate for you. All guardrails from Amazon Q Business apply, and it’s easy to share your apps with colleagues through an admin-managed library. Amazon Q Apps is in preview now.

Check out Channy Yun’s post for a deeper dive into Amazon Q Business and Amazon Q Apps, which guides you through these new features.

Amazon Q Developer – you can use Q Developer to completely change your developer flow. It has all the capabilities of what was previously known as Amazon CodeWhisperer, such as Q&A, diagnosing common errors, generating code including tests, and many more. Now it has expanded, so you can use it to generate SQL, and build data integration pipelines using natural language. In preview, it can describe resources in your AWS account and help you retrieve and analyze cost data from AWS Cost Explorer.

For a full list of AWS announcements, be sure to keep an eye on the ‘What’s New with AWS?‘ page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community.

Discover Claude 3 – If you're a developer looking for a good source to get started with Claude 3, then I recommend this great post from my colleague Haowen Huang: Mastering Amazon Bedrock with Claude 3: Developer's Guide with Demos.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

GOTO EDA Day London – Join us in London on May 14 to learn about event-driven architectures (EDA) for building highly scalable, fault-tolerant, and extensible applications. This conference is organized by GOTO, AWS, and partners.

Browse all upcoming AWS led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Matheus Guimaraes

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

Stop the CNAME chain struggle: Simplified management with Route 53 Resolver DNS Firewall


Starting today, you can configure your DNS Firewall to automatically trust all domains in a resolution chain (such as a CNAME, DNAME, or Alias chain).

Let’s walk through this in nontechnical terms for those unfamiliar with DNS.

Why use DNS Firewall?
DNS Firewall provides protection for outbound DNS requests from your private network in the cloud (Amazon Virtual Private Cloud (Amazon VPC)). These requests route through Amazon Route 53 Resolver for domain name resolution. Firewall administrators can configure rules to filter and regulate the outbound DNS traffic.

DNS Firewall helps to protect against multiple security risks.

Let’s imagine a malicious actor managed to install and run some code on your Amazon Elastic Compute Cloud (Amazon EC2) instances or containers running inside one of your virtual private clouds (VPCs). The malicious code is likely to initiate outgoing network connections. It might do so to connect to a command server and receive commands to execute on your machine. Or it might initiate connections to a third-party service in a coordinated distributed denial of service (DDoS) attack. It might also try to exfiltrate data it managed to collect on your network.

Fortunately, your network and security groups are correctly configured. They block all outgoing traffic except traffic to the well-known API endpoints used by your app. So far so good: the malicious code cannot dial back home using regular TCP or UDP connections.

But what about DNS traffic? The malicious code may send DNS requests to an authoritative DNS server that the attacker controls, embedding control commands or encoded data in the queries, and it can receive data back in the responses. I've illustrated the process in the following diagram.
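
As a toy illustration of that exfiltration pattern (not a working attack), the following Python sketch shows how arbitrary bytes can be smuggled into innocuous-looking DNS query names. The domain exfil.example.com is a placeholder, and nothing is sent over the network.

```python
import base64

def to_dns_queries(data: bytes, attacker_domain="exfil.example.com", chunk=32):
    """Split data into DNS-safe labels under a domain the attacker controls.
    attacker_domain is a placeholder; no queries are actually sent."""
    # Base32 keeps labels within the DNS-safe character set.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [encoded[i:i + chunk] for i in range(0, len(encoded), chunk)]
    return [f"{label}.{attacker_domain}" for label in labels]

# Each resulting name looks like an ordinary DNS query to the resolver,
# yet carries a chunk of the stolen payload to the attacker's name server.
queries = to_dns_queries(b"secret credentials")
print(queries)
```

Because each query reaches the attacker's authoritative name server regardless of what IP traffic you block, filtering at the DNS layer, which is exactly what DNS Firewall does, is the practical countermeasure.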

DNS exfiltration illustrated

To prevent these scenarios, you can use DNS Firewall to monitor and control the domains that your applications can query. You can deny access to the domains that you know to be bad and allow all other queries to pass through. Alternatively, you can deny access to all domains except those you explicitly trust.

What is the challenge with CNAME, DNAME, and Alias records?
Imagine you configured your DNS Firewall to allow DNS queries only to specific well-known domains and blocked all others. Your application communicates with alexa.amazon.com; therefore, you created a rule allowing DNS traffic to resolve that hostname.

However, the DNS system has multiple types of records. The ones of interest in this article are:

  • A records that map a DNS name to an IP address,
  • CNAME records that are synonyms for other DNS names,
  • DNAME records that provide redirection from a part of the DNS name tree to another part of the DNS name tree, and
  • Alias records that provide a Route 53 specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as Amazon CloudFront distributions and Amazon S3 buckets.

When querying alexa.amazon.com, I see it’s actually a CNAME record that points to pitangui.amazon.com, which is another CNAME record that points to tp.5fd53c725-frontier.amazon.com, which, in turn, is a CNAME to d1wg1w6p5q8555.cloudfront.net. Only the last name (d1wg1w6p5q8555.cloudfront.net) has an A record associated with an IP address 3.162.42.28. The IP address is likely to be different for you. It points to the closest Amazon CloudFront edge location, likely the one from Paris (CDG52) for me.

A similar redirection mechanism happens when resolving DNAME or Alias records.

DNS resolution for alexa.amazon.com

To allow the complete resolution of such a CNAME chain, you could be tempted to configure your DNS Firewall rule to allow all names under amazon.com (*.amazon.com), but that would fail to resolve the last CNAME that goes to cloudfront.net.

Worse, the DNS CNAME chain is controlled by the service your application connects to. The chain might change at any time, forcing you to manually maintain the list of authorized domains in your DNS Firewall rules.
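To make the chain concrete, here is a toy resolver with the CNAME chain from the alexa.amazon.com example hard-coded into it. Note how the final hop leaves *.amazon.com entirely, which is exactly why the wildcard rule fails:

```shell
# Illustrative only: a hard-coded lookup table mimicking the CNAME chain
# described above. A real resolver would query DNS, of course.
lookup_cname() {
  case "$1" in
    alexa.amazon.com)                 echo pitangui.amazon.com ;;
    pitangui.amazon.com)              echo tp.5fd53c725-frontier.amazon.com ;;
    tp.5fd53c725-frontier.amazon.com) echo d1wg1w6p5q8555.cloudfront.net ;;
    *)                                echo "" ;;  # no CNAME: end of the chain
  esac
}

name=alexa.amazon.com
next=$(lookup_cname "$name")
while [ -n "$next" ]; do
  echo "$name is a CNAME for $next"
  name=$next
  next=$(lookup_cname "$name")
done
echo "$name has an A record (and is outside *.amazon.com)"
```

Allowing only `*.amazon.com` stops this walk at the last CNAME, because the chain terminates under cloudfront.net.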

Introducing DNS Firewall redirection chain authorization
Based on this explanation, you’re now equipped to understand the new capability we’re launching today. We added a parameter to the UpdateFirewallRule API (also available in the AWS Command Line Interface (AWS CLI) and AWS Management Console) to configure DNS Firewall so that it follows and automatically trusts all the domains in a CNAME, DNAME, or Alias chain.

With this parameter, firewall administrators need to allow only the domain that applications actually query. The firewall automatically trusts all intermediate domains in the chain until it reaches the A record with the IP address.

Let’s see it in action
I start with a DNS Firewall already configured with a domain list, a rule group, and a rule that ALLOW queries for the domain alexa.amazon.com. The rule group is attached to a VPC where I have an EC2 instance started.

When I connect to that EC2 instance and issue a DNS query to resolve alexa.amazon.com, it only returns the first name in the domain chain (pitangui.amazon.com) and stops there. This is expected because pitangui.amazon.com is not authorized to be resolved.

DNS query for alexa.amazon.com is blocked at first CNAME

To solve this, I update the firewall rule to trust the entire redirection chain. I use the AWS CLI to call the update-firewall-rule API with a new parameter firewall-domain-redirection-action set to TRUST_REDIRECTION_DOMAIN.

AWS CLI to update the DNS firewall rule
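The command behind that screenshot can be sketched as follows. The rule group and domain list IDs below are invented placeholders (a firewall rule is identified by the combination of the two); substitute your own values:

```shell
# Sketch only: update an existing DNS Firewall rule to trust the whole
# redirection chain. The two IDs are placeholders, not real resources.
aws route53resolver update-firewall-rule \
    --firewall-rule-group-id rslvr-frg-0123456789abc \
    --firewall-domain-list-id rslvr-fdl-0123456789abc \
    --firewall-domain-redirection-action TRUST_REDIRECTION_DOMAIN
```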

The following diagram illustrates the setup at this stage.

DNS Firewall rule diagram

Back to the EC2 instance, I try the DNS query again. This time, it works. It resolves the entire redirection chain, down to the IP address.

DNS resolution for the full CNAME chain

Thanks to redirection chain authorization, network administrators now have an easy way to block all domains and authorize only known domains in their DNS Firewall, without having to worry about CNAME, DNAME, or Alias chains.

This capability is available at no additional cost in all AWS Regions. Try it out today!

— seb

Add your Ruby gems to AWS CodeArtifact


Ruby developers can now use AWS CodeArtifact to securely store and retrieve their gems. CodeArtifact integrates with standard developer tools like gem and bundler.

Applications often use numerous packages to speed up development by providing reusable code for common tasks like network access, cryptography, or data manipulation. Developers also embed SDKs, such as the AWS SDKs, to access remote services. These packages may come from within your organization or from third parties like open source projects. Managing packages and dependencies is integral to software development. Languages like Java, C#, JavaScript, Swift, and Python have tools for downloading and resolving dependencies, and Ruby developers typically use gem and bundler.

However, using third-party packages presents legal and security challenges. Organizations must ensure that package licenses are compatible with their projects and don’t violate intellectual property rights. They must also verify that the included code is safe to use and doesn’t carry vulnerabilities planted upstream, a tactic known as a supply chain attack. To address these challenges, organizations typically use private package servers, so that developers can use only packages that have been vetted by their security and legal teams and made available through private repositories.

CodeArtifact is a managed service that allows the safe distribution of packages to internal developer teams without managing the underlying infrastructure. CodeArtifact now supports Ruby gems in addition to npm, PyPI, Maven, NuGet, SwiftPM, and generic formats.

You can publish and download Ruby gem dependencies from your CodeArtifact repository in the AWS Cloud, working with existing tools such as gem and bundler. After storing packages in CodeArtifact, you can reference them in your Gemfile. Your build system will then download approved packages from the CodeArtifact repository during the build process.
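As a sketch, such a Gemfile reference might look like the following. The source URL is the demo repository endpoint that appears later in this post, and the gem is the demo package built below; your own endpoint and gem names will differ:

```shell
# Hypothetical Gemfile pointing bundler at a CodeArtifact repository
# (endpoint and gem taken from this post's demo; substitute your own).
cat > Gemfile <<'EOF'
source "https://stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo/"

gem "hola-codeartifact", "0.0.0"
EOF
```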

How to get started
Imagine I’m working on a package to be shared with other development teams in my organization.

In this demo, I show you how I prepare my environment, upload the package to the repository, and use this specific package build as a dependency for my project. I focus on the steps specific to Ruby packages. You can read the tutorial written by my colleague Steven to get started with CodeArtifact.

I use an AWS account that has a package repository (MyGemsRepo) and domain (stormacq-test) already configured.

CodeArtifact - Ruby repository

To let the Ruby tools access my CodeArtifact repository, I start by collecting an authentication token from CodeArtifact.

export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token \
                                     --domain stormacq-test              \
                                     --domain-owner 012345678912         \
                                     --query authorizationToken          \
                                     --output text`

export GEM_HOST_API_KEY="Bearer $CODEARTIFACT_AUTH_TOKEN"

Note that the authentication token expires after 12 hours. I must repeat this command after 12 hours to obtain a fresh token.

Then, I request the repository endpoint. I pass the domain name and domain owner (the AWS account ID). Notice the --format ruby option.

export RUBYGEMS_HOST=`aws codeartifact get-repository-endpoint \
                           --domain stormacq-test              \
                           --domain-owner 012345678912         \
                           --format ruby                       \
                           --repository MyGemsRepo             \
                           --query repositoryEndpoint          \
                           --output text`

Now that I have the repository endpoint and an authentication token, gem will use these environment variable values to connect to my private package repository.

I create a very simple project, build it, and send it to the package repository.

CodeArtifact - building and pushing a custom package

$ gem build hola.gemspec 

Successfully built RubyGem
  Name: hola-codeartifact
  Version: 0.0.0
  File: hola-codeartifact-0.0.0.gem
  
$ gem push hola-codeartifact-0.0.0.gem 
Pushing gem to https://stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo...

I verify in the console that the package is available.

CodeArtifact - Hola package is present

Now that the package is available, I can use it in my projects as usual. This involves configuring the local ~/.gemrc file on my machine. I follow the instructions provided by the console, and I make sure I replace ${CODEARTIFACT_AUTH_TOKEN} with its actual value.

CodeArtifact - console instructions to connect to the repo
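Based on those instructions, the resulting file looks roughly like the following sketch. The endpoint is this post's demo repository; the console shows the exact snippet for your own repository, and `${CODEARTIFACT_AUTH_TOKEN}` must expand to the real token value, as noted above:

```shell
# Sketch of the ~/.gemrc produced by following the console instructions.
# The unquoted heredoc expands ${CODEARTIFACT_AUTH_TOKEN} into the file.
# Warning: this overwrites any existing ~/.gemrc; merge by hand if needed.
cat > "$HOME/.gemrc" <<EOF
:sources:
- https://aws:${CODEARTIFACT_AUTH_TOKEN}@stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo/
EOF
```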

Once ~/.gemrc is correctly configured, I can install gems as usual. They will be downloaded from my private gem repository.

$ gem install hola-codeartifact

Fetching hola-codeartifact-0.0.0.gem
Successfully installed hola-codeartifact-0.0.0
Parsing documentation for hola-codeartifact-0.0.0
Installing ri documentation for hola-codeartifact-0.0.0
Done installing documentation for hola-codeartifact after 0 seconds
1 gem installed

Install from upstream
I can also associate my repository with an upstream source. It will automatically fetch gems from upstream when I request one.

To associate the repository with rubygems.org, I use the console, or I type

aws codeartifact associate-external-connection \
                   --domain stormacq-test      \
                   --repository MyGemsRepo     \
                   --external-connection public:ruby-gems-org

{
    "repository": {
        "name": "MyGemsRepo",
        "administratorAccount": "012345678912",
        "domainName": "stormacq-test",
        "domainOwner": "012345678912",
        "arn": "arn:aws:codeartifact:us-west-2:012345678912:repository/stormacq-test/MyGemsRepo",
        "upstreams": [],
        "externalConnections": [
            {
                "externalConnectionName": "public:ruby-gems-org",
                "packageFormat": "ruby",
                "status": "AVAILABLE"
            }
        ],
        "createdTime": "2024-04-12T12:58:44.101000+02:00"
    }
}

Once associated, I can pull any gems through CodeArtifact. It will automatically fetch packages from upstream when not locally available.

$ gem install rake 

Fetching rake-13.2.1.gem
Successfully installed rake-13.2.1
Parsing documentation for rake-13.2.1
Installing ri documentation for rake-13.2.1
Done installing documentation for rake after 0 seconds
1 gem installed

I use the console to verify the rake package is now available in my repo.

Things to know
There are some things to keep in mind before uploading your first Ruby packages.

Pricing and availability
CodeArtifact costs for Ruby packages are the same as for the other package formats already supported. CodeArtifact billing depends on three metrics: the storage (measured in GB per month), the number of requests, and the data transfer out to the internet or to other AWS Regions. Data transfer to AWS services in the same Region is not charged, meaning you can run your continuous integration and delivery (CI/CD) jobs on Amazon Elastic Compute Cloud (Amazon EC2) or AWS CodeBuild, for example, without incurring a charge for the CodeArtifact data transfer. As usual, the pricing page has the details.

CodeArtifact for Ruby packages is available in all 13 Regions where CodeArtifact is available.

Now, go build your Ruby applications and upload your private packages to CodeArtifact!

— seb

Amazon Q Business, now generally available, helps boost workforce productivity with generative AI


At AWS re:Invent 2023, we previewed Amazon Q Business, a generative artificial intelligence (generative AI)–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems.

With Amazon Q Business, you can deploy a secure, private, generative AI assistant that empowers your organization’s users to be more creative, data-driven, efficient, prepared, and productive. During the preview, we heard lots of customer feedback and used that feedback to prioritize our enhancements to the service.

Today, we are announcing the general availability of Amazon Q Business with many new features, including custom plugins, and a preview of Amazon Q Apps, which lets users in your organization create customized, shareable, generative AI–powered applications from a natural language description in a single step.

In this blog post, I will briefly introduce the key features of Amazon Q Business with the new features now available and take a look at the features of Amazon Q Apps. Let’s get started!

Introducing Amazon Q Business
Amazon Q Business connects seamlessly to over 40 popular enterprise data sources, including Amazon Simple Storage Service (Amazon S3), Microsoft 365, and Salesforce, and stores document and permission information. It ensures that users access content securely with their existing credentials using single sign-on, according to their permissions, and it includes enterprise-level access controls.

Amazon Q Business makes it easy for users to get answers to questions about company policies, products, business results, or code using its web-based chat assistant. You can point Amazon Q Business at your enterprise data repositories, and it will search across all of that data, summarize logically, analyze trends, and engage in dialog with users.

With Amazon Q Business, you can build secure and private generative AI assistants with enterprise-grade access controls at scale. You can also use administrative guardrails, document enrichment, and relevance tuning to customize and control responses that are consistent with your company’s guidelines.

Here are the key features of Amazon Q Business with new features now available:

End-user web experience
With the built-in web experience, you can ask a question, receive a response with in-text source citations, and then ask follow-up questions and add new information while keeping the context from the previous answer. You only get responses from data sources that you have access to.

With general availability, we’re introducing a new content creation mode in the web experience. In this mode, Amazon Q Business does not use or access the enterprise content but instead uses generative AI models built into Amazon Q Business for creative use cases such as summarization of responses and crafting personalized emails. To use the content creation mode, you can turn off Respond from approved sources in the conversation settings.

To learn more, visit Using an Amazon Q Business web experience and Customizing an Amazon Q Business web experience in the AWS documentation.

Pre-built data connectors and plugins
You can connect, index, and sync your enterprise data using over 40 pre-built data connectors or an Amazon Kendra retriever, as well as web crawling or uploading your documents directly.

Amazon Q Business ingests content using a built-in semantic document retriever. It also retrieves and respects permission information, such as access control lists (ACLs), to manage access to the data after retrieval. When the data is ingested, it is secured with a service-managed AWS Key Management Service (AWS KMS) key.

You can configure plugins to perform actions in enterprise systems, including Jira, Salesforce, ServiceNow, and Zendesk. Users can create a Jira issue or a Salesforce case while chatting in the chat assistant. You can also deploy a Microsoft Teams gateway or a Slack gateway to use an Amazon Q Business assistant in your teams or channels.

With general availability, you can build custom plugins to connect to any third-party application through APIs so that users can use natural language prompts to perform actions such as submitting time-off requests or sending meeting invites directly through Amazon Q Business assistant. Users can also search real-time data, such as time-off balances, scheduled meetings, and more.

When you choose Custom plugin, you can define an OpenAPI schema to connect your third-party application. You can upload the OpenAPI schema to Amazon S3 or paste it into the in-line schema editor in the Amazon Q Business console, which is compatible with the Swagger OpenAPI specification.
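For illustration, a custom plugin schema for the time-off example might look like this minimal sketch. Everything here, including the file name, paths, and fields, is invented; only the OpenAPI format itself is real:

```shell
# Hypothetical OpenAPI 3.0 schema for a time-off custom plugin; the API it
# describes does not exist and is shown only to illustrate the shape.
cat > timeoff-plugin.yaml <<'EOF'
openapi: 3.0.1
info:
  title: Time-off API
  version: "1.0.0"
paths:
  /timeoff-requests:
    post:
      operationId: submitTimeOffRequest
      summary: Submit a time-off request
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                startDate: { type: string, format: date }
                endDate:   { type: string, format: date }
      responses:
        "200":
          description: Request accepted
EOF
```

You would then upload this file to Amazon S3 or paste its contents into the console's in-line schema editor.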

To learn more, visit Data source connectors and Configure plugins in the AWS documentation.

Admin control and guardrails
You can configure global controls to give users the option to either generate large language model (LLM)-only responses or generate responses from connected data sources. You can specify whether all chat responses will be generated using only enterprise data or whether your application can also use its underlying LLM to generate responses when it can’t find answers in your enterprise data. You can also block specific words.

With topic-level controls, you can specify restricted topics and configure behavior rules in response to the topics, such as answering using enterprise data or blocking completely.

To learn more, visit Admin control and guardrails in the AWS documentation.

You can alter document metadata or attributes and content during the document ingestion process by configuring basic logic to specify a metadata field name, select a condition, and enter or select a value and target actions, such as update or delete. You can also use AWS Lambda functions to manipulate document fields and content, such as using optical character recognition (OCR) to extract text from images.

To learn more, visit Document attributes and types in Amazon Q Business and Document enrichment in Amazon Q Business in the AWS documentation.

Enhanced enterprise-grade security and management
Starting April 30, you will need to use AWS IAM Identity Center for user identity management of all new applications rather than using the legacy identity management. You can securely connect your workforce to Amazon Q Business applications either in the web experience or your own interface.

You can also centrally manage workforce access using IAM Identity Center alongside your existing IAM roles and policies. As the number of your accounts scales, IAM Identity Center gives you the option to use it as a single place to manage user access to all your applications. To learn more, visit Setting up Amazon Q Business with IAM Identity Center in the AWS documentation.

At general availability, Amazon Q Business is now integrated with various AWS services to securely connect and store the data and easily deploy and track access logs.

You can use AWS PrivateLink to access Amazon Q Business securely in your Amazon Virtual Private Cloud (Amazon VPC) environment using a VPC endpoint. You can use the Amazon Q Business template for AWS CloudFormation to easily automate the creation and provisioning of infrastructure resources. You can also use AWS CloudTrail to record actions taken by a user, role, or AWS service in Amazon Q Business.

Also, we support Federal Information Processing Standards (FIPS) endpoints, based on the United States and Canadian government standards and security requirements for cryptographic modules that protect sensitive information.

To learn more, visit Security in Amazon Q Business and Monitoring Amazon Q Business in the AWS documentation.

Build and share apps with new Amazon Q Apps (preview)
Today we are announcing the preview of Amazon Q Apps, a new capability within Amazon Q Business for your organization’s users to easily and quickly create generative AI-powered apps based on company data, without requiring any prior coding experience.

With Amazon Q Apps, users simply describe the app they want in natural language, or they can start from an existing conversation where Amazon Q Business helped them solve a problem. With a few clicks, Amazon Q Business will instantly generate an app that accomplishes the desired task and can be easily shared across the organization.

If you are familiar with PartyRock, you can easily use this code-free builder with the added benefit of connecting it to your enterprise data already with Amazon Q Business.

To create a new Amazon Q App, choose Apps in your web experience and enter a simple text expression for a task in the input box. You can try out samples, such as a content creator, interview question generator, meeting note summarizer, and grammar checker.

I will make a document assistant to review and correct a document using the following prompt:

You are a professional editor tasked with reviewing and correcting a document for grammatical errors, spelling mistakes, and inconsistencies in style and tone. Given a file, your goal is to recommend changes to ensure that the document adheres to the highest standards of writing while preserving the author’s original intent and meaning. You should provide a numbered list for all suggested revisions and the supporting reason.

When you choose the Generate button, a document editing assistant app will be automatically generated with two cards—one to upload a document file as an input and another text output card that gives edit suggestions.

When you choose the Add card button, you can add more cards, such as a user input, text output, file upload, or pre-configured plugin by your administrator. If you want to create a Jira ticket to request publishing a post in the corporate blog channel as an author, you can add a Jira Plugin with the result of edited suggestions from the uploaded file.

Once you are ready to share the app, choose the Publish button. You can securely share this app to your organization’s catalog for others to use, enhancing productivity. Your colleagues can choose shared apps, modify them, and publish their own versions to the organizational catalog instead of starting from scratch.

Choose Library to see all of the published Amazon Q Apps. You can search the catalog by labels and open your favorite apps.

Amazon Q Apps inherit robust security and governance controls from Amazon Q Business, including user authentication and access controls, which empower organizations to safely share apps across functions that warrant governed collaboration and innovation.

In the administrator console, you can see your Amazon Q Apps and control or remove them from the library.

To learn more, visit Amazon Q Apps in the AWS documentation.

Now available
Amazon Q Business is generally available today in the US East (N. Virginia) and US West (Oregon) Regions. We are launching two pricing subscription options.

The Amazon Q Business Lite ($3/user/month) subscription provides users access to the basic functionality of Amazon Q Business.

The Amazon Q Business Pro ($20/user/month) subscription gives users access to all features of Amazon Q Business, as well as Amazon Q Apps (preview) and Amazon Q in QuickSight (Reader Pro), which enhances business analyst and business user productivity using generative business intelligence capabilities.

You can use the free trial (50 users for 60 days) to experiment with Amazon Q Business. For more information about pricing options, visit the Amazon Q Business Plan page.

To learn more about Amazon Q Business, you can take Amazon Q Business Getting Started, a free, self-paced digital course on AWS Skill Builder, and visit the Amazon Q Developer Center for more sample code.

Give it a try in the Amazon Q Business console today! For more information, visit the Amazon Q Business product page and the User Guide in the AWS documentation. Provide feedback to AWS re:Post for Amazon Q or through your usual AWS support contacts.

Channy

AWS Weekly Roundup: Amazon Bedrock, AWS CodeBuild, Amazon CodeCatalyst, and more (April 29, 2024)


This was a busy week for Amazon Bedrock with many new features! Using GitHub Actions with AWS CodeBuild is much easier. Also, Amazon Q in Amazon CodeCatalyst can now manage more complex issues.

I was amazed to meet so many new and old friends at the AWS Summit London. To give you a quick glimpse, here’s AWS Hero Yan Cui starting his presentation at the AWS Community stage.

AWS Community at the AWS Summit London 2024

Last week’s launches
With so many interesting new features, I start with generative artificial intelligence (generative AI) and then move to the other topics. Here’s what got my attention:

Amazon Bedrock – For supported architectures such as Llama, Mistral, or Flan T5, you can now import custom models and access them on demand. Model evaluation is now generally available to help you evaluate, compare, and select the best foundation models (FMs) for your specific use case. You can now access Meta’s Llama 3 models.

Agents for Amazon Bedrock – Agent creation is now simplified, and with the new return of control capability you can define an action schema and get control back to perform those actions without needing to create a specific AWS Lambda function. Agents also added support for Anthropic Claude 3 Haiku and Sonnet to help build faster and more intelligent agents.

Knowledge Bases for Amazon Bedrock – You can now ingest data from up to five data sources and provide more complete answers. In the console, you can now chat with one of your documents without needing to set up a vector database (read more in this Machine Learning blog post).

Guardrails for Amazon Bedrock – The capability to implement safeguards based on your use cases and responsible AI policies is now available with new safety filters and privacy controls.

Amazon Titan – The new watermark detection feature is now generally available in Amazon Bedrock. With it, you can identify images generated by Amazon Titan Image Generator using the invisible watermark present in all images it generates.

Amazon CodeCatalyst – Amazon Q can now split complex issues into separate, simpler tasks that can then be assigned to a user or back to Amazon Q. CodeCatalyst now also supports approval gates within a workflow. Approval gates pause a workflow that is building, testing, and deploying code so that a user can validate whether it should be allowed to proceed.

Amazon EC2 – You can now remove an automatically assigned public IPv4 address from an EC2 instance. If you no longer need the automatically assigned public IPv4 (for example, because you are migrating to using a private IPv4 address for SSH with EC2 instance connect), you can use this option to quickly remove the automatically assigned public IPv4 address and reduce your public IPv4 costs.

Network Load Balancer – Now supports Resource Map in AWS Management Console, a tool that displays all your NLB resources and their relationships in a visual format on a single page. Note that Application Load Balancer already supports Resource Map in the console.

AWS CodeBuild – Now supports managed GitHub Actions self-hosted runners. You can configure CodeBuild projects to receive GitHub Actions workflow job events and run them on CodeBuild ephemeral hosts.

Amazon Route 53 – You can now define a standard DNS configuration in the form of a Profile, apply this configuration to multiple VPCs, and share it across AWS accounts.

AWS Direct Connect – Hosted connections now support capacities up to 25 Gbps. Before, the maximum was 10 Gbps. Higher bandwidths simplify deployments of applications such as advanced driver assistance systems (ADAS), media and entertainment (M&E), artificial intelligence (AI), and machine learning (ML).

NoSQL Workbench for Amazon DynamoDB – A revamped operation builder user interface to help you better navigate, run operations, and browse your DynamoDB tables.

Amazon GameLift – Now supports, in preview, end-to-end development of containerized workloads, including deployment and scaling on premises, in the cloud, or in hybrid configurations. You can use containers for building, deploying, and running game server packages.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

GQL, the new ISO standard for graphs, has arrived – GQL, which stands for Graph Query Language, is the first new ISO database language since the introduction of SQL in 1987.

Authorize API Gateway APIs using Amazon Verified Permissions and Amazon Cognito – Externalizing authorization logic for application APIs can yield multiple benefits. Here’s an example of how to use Cedar policies to secure a REST API.

Build and deploy a 1 TB/s file system in under an hour – Very nice walkthrough for something that used to be not so easy to do in the recent past.

Let’s Architect! Discovering Generative AI on AWS – A new episode in this amazing series of posts that provides a broad introduction to the domain and then shares a mix of videos, blog posts, and hands-on workshops.

Building scalable, secure, and reliable RAG applications using Knowledge Bases for Amazon Bedrock – This post explores the new features (including AWS CloudFormation support) and how they align with the AWS Well-Architected Framework.

Using the unified CloudWatch Agent to send traces to AWS X-Ray – With added support for the collection of AWS X-Ray and OpenTelemetry traces, you can now provision a single agent to capture metrics, logs, and traces.

The executive’s guide to generative AI for sustainability – A guide for implementing a generative AI roadmap within sustainability strategies.

AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community. Check out Ricardo’s page for the latest updates.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

GOTO EDA Day London – Join us in London on May 14 to learn about event-driven architectures (EDA) for building highly scalable, fault tolerant, and extensible applications. This conference is organized by GOTO, AWS, and partners.

Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Danilo


Guardrails for Amazon Bedrock now available with new safety filters and privacy controls


Today, I am happy to announce the general availability of Guardrails for Amazon Bedrock, first released in preview at re:Invent 2023. With Guardrails for Amazon Bedrock, you can implement safeguards in your generative artificial intelligence (generative AI) applications that are customized to your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple foundation models (FMs), improving end-user experiences and standardizing safety controls across generative AI applications. You can use Guardrails for Amazon Bedrock with all large language models (LLMs) in Amazon Bedrock, including fine-tuned models.

Guardrails for Amazon Bedrock offers industry-leading safety protection on top of the native capabilities of FMs, helping customers block as much as 85 percent more harmful content than the protection natively provided by some foundation models on Amazon Bedrock today. It is the only responsible AI capability offered by top cloud providers that enables customers to build and customize safety and privacy protections for their generative AI applications in a single solution.

Aha! is a software company that helps more than 1 million people bring their product strategy to life. “Our customers depend on us every day to set goals, collect customer feedback, and create visual roadmaps,” said Dr. Chris Waters, co-founder and Chief Technology Officer at Aha!. “That is why we use Amazon Bedrock to power many of our generative AI capabilities. Amazon Bedrock provides responsible AI features, which enable us to have full control over our information through its data protection and privacy policies, and block harmful content through Guardrails for Bedrock. We just built on it to help product managers discover insights by analyzing feedback submitted by their customers. This is just the beginning. We will continue to build on advanced AWS technology to help product development teams everywhere prioritize what to build next with confidence.”

In the preview post, Antje showed you how to use guardrails to configure thresholds to filter content across harmful categories and define a set of topics that need to be avoided in the context of your application. The Content filters feature now has two additional safety categories: Misconduct for detecting criminal activities and Prompt Attack for detecting prompt injection and jailbreak attempts. We also added important new features, including sensitive information filters to detect and redact personally identifiable information (PII) and word filters to block inputs containing profane and custom words (for example, harmful words, competitor names, and products).

Guardrails for Amazon Bedrock sits in between the application and the model. Guardrails automatically evaluates everything going into the model from the application and coming out of the model to the application to detect and help prevent content that falls into restricted categories.

You can revisit the steps in the preview release post to learn how to configure Denied topics and Content filters. Let me show you how the new features work.

New features
To start using Guardrails for Amazon Bedrock, I go to the AWS Management Console for Amazon Bedrock, where I can create guardrails and configure the new capabilities. In the navigation pane in the Amazon Bedrock console, I choose Guardrails, and then I choose Create guardrail.

I enter the guardrail Name and Description. I choose Next to move to the Add sensitive information filters step.

I use Sensitive information filters to detect sensitive and private information in user inputs and FM outputs. Based on the use case, I can select a set of entities to be either blocked in inputs (for example, an FAQ-based chatbot that doesn’t require user-specific information) or redacted in outputs (for example, conversation summarization based on chat transcripts). The sensitive information filter supports a set of predefined PII types. I can also define custom regex-based entities specific to my use case and needs.

I add two PII types (Name, Email) from the list, then add a regular expression pattern, using Booking ID as the Name and [0-9a-fA-F]{8} as the Regex pattern.
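As a sanity check, the regular expression above can be exercised locally before it goes into the guardrail. This is plain Python, independent of any AWS API, and only verifies that the pattern matches an 8-character hexadecimal booking ID:

```python
import re

# The custom entity pattern used above: an 8-character hexadecimal booking ID
booking_id_pattern = re.compile(r"[0-9a-fA-F]{8}")

assert booking_id_pattern.fullmatch("550e8408")      # valid booking ID
assert not booking_id_pattern.fullmatch("550e840")   # too short
assert not booking_id_pattern.fullmatch("zzzz8408")  # non-hex characters
```

Note that `fullmatch` is used here; inside the guardrail, the pattern is applied as a search over the text, so anchoring behavior may differ.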

I choose Next and enter custom messages that will be displayed if my guardrail blocks the input or the model response in the Define blocked messaging step. I review the configuration at the last step and choose Create guardrail.
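The same guardrail can be created programmatically instead of through the console. The sketch below assembles a configuration for the CreateGuardrail API; the parameter names follow the boto3 `bedrock` client as I understand it, so verify them against the current SDK documentation before use:

```python
# Guardrail configuration mirroring the console steps above:
# redact Name, Email, and a custom Booking ID pattern.
guardrail_config = {
    "name": "call-center-guardrail",
    "description": "Masks PII in call center transcript summaries",
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "EMAIL", "action": "ANONYMIZE"},
        ],
        "regexesConfig": [
            {
                "name": "Booking ID",
                "pattern": "[0-9a-fA-F]{8}",
                "action": "ANONYMIZE",
            }
        ],
    },
    # Messages shown when the guardrail blocks an input or a response
    "blockedInputMessaging": "Sorry, I can't answer that question.",
    "blockedOutputsMessaging": "Sorry, the model response was blocked.",
}

# With AWS credentials configured, the guardrail is created like this:
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**guardrail_config)
```

Using `ANONYMIZE` masks the matched entities in the output; `BLOCK` would reject the request outright, which fits the input-blocking use case mentioned earlier.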

I navigate to the Guardrails Overview page and choose the Anthropic Claude Instant 1.2 model using the Test section. I enter the following call center transcript in the Prompt field and choose Run.

Please summarize the below call center transcript. Put the name, email and the booking ID to the top:
Agent: Welcome to ABC company. How can I help you today?
Customer: I want to cancel my hotel booking.
Agent: Sure, I can help you with the cancellation. Can you please provide your booking ID?
Customer: Yes, my booking ID is 550e8408.
Agent: Thank you. Can I have your name and email for confirmation?
Customer: My name is Jane Doe and my email is jane.doe@gmail.com
Agent: Thank you for confirming. I will go ahead and cancel your reservation.

Guardrail action shows three instances where the guardrail came into effect. I use View trace to check the details. I notice that the guardrail detected the Name, Email, and Booking ID and masked them in the final response.
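The same test can be scripted. This sketch builds the parameters for an InvokeModel call with the guardrail attached; the `guardrailIdentifier`, `guardrailVersion`, and `trace` parameters reflect my understanding of the Bedrock Runtime API, and the guardrail ID shown is a placeholder:

```python
import json

def build_invoke_request(guardrail_id: str, guardrail_version: str, prompt: str) -> dict:
    """Assemble InvokeModel parameters with a guardrail attached."""
    return {
        "modelId": "anthropic.claude-instant-v1",
        # Placeholder identifiers; use your guardrail's ID and version
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "trace": "ENABLED",  # return the guardrail trace with the response
        "body": json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": 500,
        }),
    }

request = build_invoke_request("gr-abc123", "1", "Please summarize the transcript.")

# With AWS credentials configured:
# import boto3
# runtime = boto3.client("bedrock-runtime")
# response = runtime.invoke_model(**request)
```

Enabling the trace is the programmatic equivalent of View trace in the console: the response then includes details of which filters intervened and what was masked.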

I use Word filters to block inputs containing profane and custom words (for example, competitor names or offensive words). I check the Filter profanity box. The list of profane words is based on a global definition of profanity. Additionally, I can specify up to 10,000 phrases (with a maximum of three words per phrase) to be blocked by the guardrail. A blocked message will show if my input or the model response contains these words or phrases.

Now, I choose Custom words and phrases under Word filters and choose Edit. I use Add words and phrases manually to add a custom word CompetitorY. Alternatively, I can use Upload from a local file or Upload from S3 object if I need to upload a list of phrases. I choose Save and exit to return to my guardrail page.
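For completeness, the word filter settings above map to a word policy block in the CreateGuardrail API. The structure below follows the boto3 documentation as I understand it; verify the field names against the current SDK:

```python
# Word policy mirroring the console configuration:
# the managed profanity list plus one custom blocked word.
word_policy_config = {
    # Equivalent of the "Filter profanity" checkbox
    "managedWordListsConfig": [{"type": "PROFANITY"}],
    # Custom words and phrases, up to 10,000 entries of at most three words
    "wordsConfig": [{"text": "CompetitorY"}],
}

# This dict would be passed as wordPolicyConfig to create_guardrail,
# alongside the other policy blocks.
```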

I enter a prompt containing information about a fictional company and its competitor and add the question What are the extra features offered by CompetitorY?. I choose Run.

I use View trace to check the details. I notice that the guardrail intervened according to the policies I configured.

Now available
Guardrails for Amazon Bedrock is now available in US East (N. Virginia) and US West (Oregon) Regions.

For pricing information, visit the Amazon Bedrock pricing page.

To get started with this feature, visit the Guardrails for Amazon Bedrock web page.

For deep-dive technical content and to learn how our Builder communities are using Amazon Bedrock in their solutions, visit our community.aws website.

— Esra