In the Works – AWS Region in Taiwan

This post was originally published on this site

Today, we’re announcing that a new AWS Region will be coming to Taiwan by early 2025. The new AWS Asia Pacific (Taipei) Region will consist of three Availability Zones at launch, and will give AWS customers in Taiwan the ability to run workloads and store data that must remain in Taiwan.

Each of the Availability Zones will be physically independent of the others in the Region – close enough to support applications that need low latency, yet sufficiently distant to significantly reduce the risk that an event at an Availability Zone level might impact business continuity.

The Availability Zones in this Region will be connected together through high-bandwidth, low-latency network connections over dedicated, fully redundant fiber. This connectivity supports applications that need synchronous replication between Availability Zones for availability or redundancy. You can take a peek at the AWS Global Infrastructure page to learn more about how we design and build Regions and Availability Zones.

We are currently working on Regions in Malaysia, Mexico, New Zealand, the Kingdom of Saudi Arabia, Thailand, and the AWS European Sovereign Cloud. The AWS Cloud operates 105 Availability Zones within 33 AWS Regions around the world, with announced plans for 21 more Availability Zones and seven more Regions, including Taiwan.

AWS in Taiwan
AWS has been investing and supporting customers and partners in Taiwan for more than 10 years. To support our customers in Taiwan, we have business development teams, solutions architects, partner managers, professional services consultants, support staff, and various other roles working in our Taipei office.

Other AWS infrastructure includes two Amazon CloudFront edge locations along with access to the AWS global backbone through multiple redundant submarine cables. You can access any other AWS Region (except Beijing and Ningxia) from AWS Direct Connect locations in Taipei, operated by Chief Telecom and Chunghwa Telecom. With AWS Direct Connect, your data that would have previously been transported over the internet is delivered through a private network connection between your facilities and AWS.

You can also use AWS Outposts in Taiwan, a family of fully managed solutions delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience. With AWS Local Zones in Taipei, you can deliver applications that require single-digit millisecond latency to end users.

AWS continues to invest in upskilling students, local developers and technical professionals, nontechnical professionals, and the next generation of IT leaders in Taiwan through offerings like AWS Academy, AWS Educate, and AWS Skill Builder. Since 2017, AWS has trained more than eight million people across the Asia Pacific-Japan region on cloud skills, including more than 100,000 people in Taiwan.

To learn more, join AWS Summit 2024 Taiwan in July, an in-person event that brings the cloud computing community together to connect, collaborate, and learn about AWS.

AWS customers in Taiwan
AWS customers in Taiwan have been increasingly moving their applications to AWS and running their technology infrastructure in other AWS Regions around the world. With the addition of this new AWS Region, customers will be able to provide even lower latency to end users and use advanced technologies such as generative artificial intelligence (generative AI), Internet of Things (IoT), mobile services, and more, to drive innovation. This Region will give AWS customers the ability to run their workloads and store their content in Taiwan.

Here are some examples of customers using AWS to drive innovation:

Chunghwa Telecom is the largest integrated telecom provider in Taiwan. To improve AI data security and governance, they use Amazon Bedrock for a variety of generative AI applications, including automatically generating specifications documents for the software development lifecycle and crafting custom marketing campaigns. With Amazon Bedrock, Chunghwa Telecom is saving developer hours and has developed its first immersive, interactive virtual English teacher.

Gamania Group is a leader in the development and publication of online games in Taiwan. To maximize the value of running on AWS, the company worked with AWS Training and Certification to enhance AWS skills across all of its departments through offerings such as AWS Classroom Training, the AWS Well-Architected Framework, and AWS GameDay events. As a result, Gamania reduced the time needed to make critical operational decisions by 50 percent, lowered its time-to-market by up to 70 percent, and accelerated the launch of new games.

KKCompany Technologies is a multimedia technology group building music streaming platforms, AI-powered streaming solutions, and cloud intelligence services in Taiwan. The company specializes in generative AI, multimedia technology, and digital transformation consulting services for enterprises in Taiwan and Japan. You can find BlendVision, its cloud-based streaming solution, in AWS Marketplace.

To learn more about Taiwan customer cases, visit AWS Customer Success Stories in English or our Traditional Chinese page.

Stay Tuned
We’ll announce the opening of this and the other Regions in future blog posts, so be sure to stay tuned! To learn more, visit the AWS Region in Taiwan page in Traditional Chinese.

Channy

Simplify AWS CloudTrail log analysis with natural language query generation in CloudTrail Lake (preview)

Today, I am happy to announce the preview of generative artificial intelligence (generative AI)–powered natural language query generation in AWS CloudTrail Lake, a managed data lake for capturing, storing, accessing, and analyzing AWS CloudTrail activity logs to meet compliance, security, and operational needs. You can ask a question in natural language about activity logs (management and data events) stored in CloudTrail Lake without needing the technical expertise to write a SQL query or the time to decode the exact structure of activity events. For example, you might ask, “Tell me how many database instances are deleted without a snapshot”, and the feature converts that question into a CloudTrail Lake query, which you can run as-is or modify to get the requested event information. Natural language query generation makes exploring AWS activity logs simpler.

Now, let me show you how to start using natural language query generation.

Getting started with natural language query generation
The natural language query generator uses generative AI to produce a ready-to-use SQL query from your prompt, which you can then choose to run in the query editor of CloudTrail Lake.

In the AWS CloudTrail console, I choose Query under Lake. The query generator can only generate queries for event data stores that collect CloudTrail management and data events. I choose an event data store for my CloudTrail Lake query from the dropdown list in Event data store. In the Query generator, I enter the following prompt in the Prompt field using natural language:

How many errors were logged during the past month?

Then, I choose Generate query. The following SQL query is automatically generated:

SELECT COUNT(*) AS error_count
FROM 8a6***
WHERE eventtime >= '2024-04-21 00:00:00'
    AND eventtime <= '2024-05-21 23:59:59'
    AND (
        errorcode IS NOT NULL
        OR errormessage IS NOT NULL
    )

I choose Run to see the results.

This is interesting, but I want to know more details. I want to see which services had the most errors and why these actions were erroring out. So I enter the following prompt to request additional details:

How many errors were logged during the past month for each service and what was the cause of each error?

I choose Generate query, and the following SQL query is generated:

SELECT eventsource,
    errorcode,
    errormessage,
    COUNT(*) AS errorCount
FROM 8a6***
WHERE eventtime >= '2024-04-21 00:00:00'
    AND eventtime <= '2024-05-21 23:59:59'
    AND (
        errorcode IS NOT NULL
        OR errormessage IS NOT NULL
    )
GROUP BY 1,
    2,
    3
ORDER BY 4 DESC;

I choose Run to see the results.

In the results, I see that my account logged the most errors for Amazon S3, and the top errors relate to CORS and object-level configuration. I can continue to dig deeper by asking further questions, but for now let me give the natural language query generator another instruction. I enter the following prompt in the Prompt field:

What are the top 10 AWS services that I used in the past month? Include event name as well.

I choose Generate query, and the following SQL query is generated. This SQL statement retrieves the field names (eventSource, eventName, COUNT(*) AS event_count), restricts the rows to the past month in the WHERE clause, groups the rows by eventSource and eventName, sorts them by usage count, and limits the result to 10 rows, as I requested in natural language.

SELECT eventSource,
    eventName,
    COUNT(*) AS event_count
FROM 8a6***
WHERE eventTime >= timestamp '2024-04-21 00:00:00'
    AND eventTime <= timestamp '2024-05-21 23:59:59'
GROUP BY 1,
    2
ORDER BY 3 DESC
LIMIT 10;

Again, I choose Run to see the results.

I now have a better understanding of how many errors were logged during the past month, what service the error was for, and what caused the error. You can try asking questions in plain language and run the generated queries over your logs to see how this feature works with your data.
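The same flow can also be driven from code. The sketch below is not an official example: it assumes that boto3 exposes the preview GenerateQuery and StartQuery CloudTrail Lake operations with the parameter names shown, and the event data store ARN in the usage comment is a placeholder.

```python
# Hedged sketch: generate a CloudTrail Lake SQL query from a natural-language
# prompt, sanity-check it, and start it. API and parameter names are
# assumptions based on the preview API model and may change.

def looks_like_read_only(statement: str) -> bool:
    """Basic sanity check before running a generated statement:
    CloudTrail Lake queries are SELECT statements, so reject anything else."""
    return statement.lstrip().upper().startswith("SELECT")


def ask_cloudtrail_lake(client, event_data_store_arn: str, prompt: str) -> str:
    """Generate a query from the prompt, validate it, and run it.
    Returns the query ID to poll with get_query_results."""
    generated = client.generate_query(
        EventDataStores=[event_data_store_arn],
        Prompt=prompt,
    )
    statement = generated["QueryStatement"]
    if not looks_like_read_only(statement):
        raise ValueError(f"unexpected generated statement: {statement!r}")
    return client.start_query(QueryStatement=statement)["QueryId"]


# Usage (requires AWS credentials; the preview is in US East (N. Virginia)):
#   import boto3
#   client = boto3.client("cloudtrail", region_name="us-east-1")
#   query_id = ask_cloudtrail_lake(
#       client,
#       "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE",  # hypothetical
#       "How many errors were logged during the past month?",
#   )
```

The read-only guard is a deliberate design choice: since the statement comes from a generative model, it is worth validating before execution, just as you would review the generated query in the console.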

Join the preview
Natural language query generation is available in preview in the US East (N. Virginia) Region as part of CloudTrail Lake.

You can use natural language query generation in preview for no additional cost. CloudTrail Lake query charges apply when running the query to generate results. For more information, visit AWS CloudTrail Pricing.

To learn more and get started using natural language query generation, visit AWS CloudTrail Lake User Guide.

— Esra

Introducing Amazon GuardDuty Malware Protection for Amazon S3

Today we are announcing the general availability of Amazon GuardDuty Malware Protection for Amazon Simple Storage Service (Amazon S3), an expansion of GuardDuty Malware Protection to detect malicious file uploads to selected S3 buckets. Previously, GuardDuty Malware Protection provided agentless scanning capabilities to identify malicious files on Amazon Elastic Block Store (Amazon EBS) volumes attached to Amazon Elastic Compute Cloud (Amazon EC2) and container workloads.

Now, you can continuously evaluate new objects uploaded to S3 buckets for malware and take action to isolate or eliminate any malware found. Amazon GuardDuty Malware Protection uses multiple Amazon Web Services (AWS) developed and industry-leading third-party malware scanning engines to provide malware detection without degrading the scale, latency, and resiliency profile of Amazon S3.

With GuardDuty Malware Protection for Amazon S3, you can use built-in malware and antivirus protection on your designated S3 buckets to help you remove the operational complexity and cost overhead associated with automating malicious file evaluation at scale. Unlike many existing tools used for malware analysis, this managed solution from GuardDuty does not require you to manage your own isolated data pipelines or compute infrastructure in each AWS account and AWS Region where you want to perform malware analysis.

Your development and security teams can work together to configure and oversee malware protection throughout your organization for select buckets where newly uploaded data from untrusted entities must be scanned for malware. You can configure post-scan actions in GuardDuty, such as object tagging, to inform downstream processing, or consume the scan status information provided through Amazon EventBridge to isolate malicious uploaded objects.

Getting started with GuardDuty Malware Protection for your S3 bucket
To get started, in the GuardDuty console, select Malware Protection for S3 and choose Enable.

Enter the S3 bucket name or choose Browse S3 to select a bucket from a list of buckets that belong to the currently selected Region. Select All the objects in the S3 bucket when you want GuardDuty to scan every newly uploaded object in the selected bucket, or select Objects beginning with a specific prefix to scan only newly uploaded objects under a specific prefix.

After scanning a newly uploaded S3 object, GuardDuty can add a predefined tag with the key as GuardDutyMalwareScanStatus and the value as the scan status:

  • NO_THREATS_FOUND – No threat found in the scanned object.
  • THREATS_FOUND – Potential threat detected during scan.
  • UNSUPPORTED – GuardDuty cannot scan this object because of its size.
  • ACCESS_DENIED – GuardDuty cannot access object. Check permissions.
  • FAILED – GuardDuty could not scan the object.

When you want GuardDuty to add tags to your scanned S3 objects, select Tag objects. If you use tags, you can create policies to prevent objects from being accessed before the malware scan completes and prevent your application from accessing malicious objects.
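As a sketch of such a policy (the bucket name is a placeholder, and in practice you would scope the Deny or add exceptions so that administrators and GuardDuty's own role can still read objects), a bucket policy can deny reads of any object that does not carry the NO_THREATS_FOUND tag:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyReadsUntilScanIsClean",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:ExistingObjectTag/GuardDutyMalwareScanStatus": "NO_THREATS_FOUND"
        }
      }
    }
  ]
}
```

Because unscanned objects have no tag yet, the StringNotEquals condition also blocks access while a scan is still in progress, not just when a threat is found.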

Next, you must create and attach an AWS Identity and Access Management (IAM) role that includes the required permissions:

  • EventBridge actions to create and manage the EventBridge managed rule so that Malware Protection for S3 can listen to your S3 Event Notifications.
  • Amazon S3 and EventBridge actions to send S3 Event Notifications to EventBridge for all events in this bucket.
  • Amazon S3 actions to access the uploaded S3 object and add a predefined tag to the scanned S3 object.
  • AWS Key Management Service (AWS KMS) key actions to access the object before scanning and to put a test object on buckets that use the supported DSSE-KMS and SSE-KMS encryption.

To add these permissions, choose View permissions and copy the policy template and trust relationship template. These templates include placeholder values that you should replace with the appropriate values associated with your bucket and AWS account. You should also replace the placeholder value for the AWS KMS key ID.

Now, choose Attach permissions, which opens the IAM console in a new tab. You can choose to create a new IAM role or update an existing IAM role with the permissions from the copied templates. If you want to create or update your IAM role in advance, visit Prerequisite – Add IAM PassRole policy in the AWS documentation.

Finally, go back to the GuardDuty browser tab, choose your newly created or updated IAM role, and choose Enable.

Now, you will see Active in the Protection status column for this protected bucket.

Choose View all S3 malware findings to see the generated GuardDuty findings associated with your scanned S3 bucket. If you see the finding type S3Object:S3/MaliciousFile, GuardDuty has detected the listed S3 object as malicious. Choose the Threats detected section in the Findings details panel and follow the recommended remediation steps. To learn more, visit Remediating Malware Protection for S3 findings in the AWS documentation.

Things to know
You can set up GuardDuty Malware Protection for your S3 buckets even without GuardDuty enabled for your AWS account. However, if you enable GuardDuty in your account, you can use the full monitoring of foundational sources, such as AWS CloudTrail management events, Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, and DNS query logs, as well as malware protection features. You can also have security findings sent to AWS Security Hub and Amazon Detective for further investigation.

GuardDuty can scan files belonging to the following synchronous Amazon S3 storage classes: S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and Amazon S3 Glacier Instant Retrieval. It scans the file formats known to be used to spread or contain malware. At launch, the feature supports file sizes up to 5 GB, including archive files with up to five levels of nesting and up to 1,000 files per level after decompression.

As noted earlier, GuardDuty sends scan results to EventBridge for each protected S3 bucket. You can set up alarms and define post-scan actions, such as tagging the object or moving the malicious object to a quarantine bucket. To learn more about other monitoring options, such as Amazon CloudWatch metrics and S3 object tagging, visit Monitoring malware scan status in the AWS documentation.
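For example, quarantine automation can be triggered by an EventBridge rule that matches malicious-object scan results. The detail-type and field names below are drawn from the GuardDuty event format and should be verified against the current event schema documentation before you rely on them:

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Malware Protection Object Scan Result"],
  "detail": {
    "scanStatus": ["COMPLETED"],
    "scanResultDetails": {
      "scanResultStatus": ["THREATS_FOUND"]
    }
  }
}
```

A rule with this pattern can target, for example, an AWS Lambda function that moves the flagged object to a quarantine bucket.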

Now available
Amazon GuardDuty Malware Protection for Amazon S3 is generally available today in all AWS Regions where GuardDuty is available, excluding China Regions and GovCloud (US) Regions.

The pricing is based on the GB volume of the objects scanned and number of objects evaluated per month. This feature comes with a limited AWS Free Tier, which includes 1,000 requests and 1 GB each month, pursuant to conditions for the first 12 months of account creation for new AWS accounts, or until June 11, 2025, for existing AWS accounts. To learn more, visit the Amazon GuardDuty pricing page.

Give GuardDuty Malware Protection for Amazon S3 a try in the GuardDuty console. For more information, visit the Amazon GuardDuty User Guide and send feedback to AWS re:Post for Amazon GuardDuty or through your usual AWS support contacts.

Channy

IAM Access Analyzer Update: Extending custom policy checks & guided revocation

We are making IAM Access Analyzer even more powerful, extending custom policy checks and adding easy access to guidance that will help you to fine-tune your IAM policies. Both of these new features build on the Custom Policy Checks and the Unused Access analysis that were launched at re:Invent 2023. Here’s what we are launching:

New Custom Policy Checks – Using the power of automated reasoning, the new checks help you to detect policies that grant access to specific, critical AWS resources, or that grant any type of public access. Both of the checks are designed to be used ahead of deployment, possibly as part of your CI/CD pipeline, and will help you proactively detect updates that do not conform to your organization’s security practices and policies.

Guided Revocation – IAM Access Analyzer now gives you guidance that you can share with your developers so that they can revoke permissions that grant access that is not actually needed. This includes unused roles, roles with unused permissions, unused access keys for IAM users, and unused passwords for IAM users. The guidance includes the steps needed to either remove the extra items or to replace them with more restrictive ones.

New Custom Policy Checks
The new policy checks can be invoked from the command line or by calling an API function. The checks examine a policy document that is supplied as part of the request and return a PASS or FAIL value. In both cases, PASS indicates that the policy document properly disallows the given access, and FAIL indicates that the policy might allow some or all of the permissions. Here are the new checks:

Check No Public Access – This check operates on a resource policy and checks whether the policy grants public access to a specified resource type. For example, you can check a policy to see if it allows public access to an S3 bucket by specifying the AWS::S3::Bucket resource type. Valid resource types include DynamoDB tables and streams, EFS file systems, OpenSearch domains, Kinesis streams and stream consumers, KMS keys, Lambda functions, S3 buckets and access points, S3 Express directory buckets, S3 Outposts buckets and access points, Glacier, Secrets Manager secrets, SNS topics, SQS queues, and IAM assume role policy documents. The list of valid resource types will expand over time and can be found in the CheckNoPublicAccess documentation.

Let’s say that I have a policy which accidentally grants public access to an Amazon Simple Queue Service (Amazon SQS) queue. Here’s how I check it:

$ aws accessanalyzer check-no-public-access --policy-document file://resource.json \
  --resource-type AWS::SQS::Queue --output json

And here is the result:

{
    "result": "FAIL",
    "message": "The resource policy grants public access for the given resource type.",
    "reasons": [
        {
            "description": "Public access granted in the following statement with sid: SqsResourcePolicy.",
            "statementIndex": 0,
            "statementId": "SqsResourcePolicy"
        }
    ]
}
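For reference, a resource policy like the following hypothetical resource.json would produce the FAIL above: the wildcard principal in the SqsResourcePolicy statement grants public access to the queue (the queue ARN is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SqsResourcePolicy",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:MyQueue"
    }
  ]
}
```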

I edit the policy to remove the access grant and try again, and this time the check passes:

{
    "result": "PASS",
    "message": "The resource policy does not grant public access for the given resource type."
}

Check Access Not Granted – This check operates on a single resource policy or identity policy at a time. It also accepts a list of actions and resources, both in the form accepted in an IAM policy. The check sees if the policy grants unintended access to any of the resources in the list by way of the listed actions. For example, this check can be used to make sure that a policy does not allow a critical CloudTrail trail to be deleted:

$ aws accessanalyzer check-access-not-granted --policy-document file://ct.json \
  --access resources="arn:aws:cloudtrail:us-east-1:123456789012:trail/MySensitiveTrail" \
  --policy-type IDENTITY_POLICY --output json

IAM Access Analyzer indicates that the check fails:

{
    "result": "FAIL",
    "message": "The policy document grants access to perform one or more of the listed actions or resources.",
    "reasons": [
        {
            "description": "One or more of the listed actions or resources in the statement with index: 0.",
            "statementIndex": 0
        }
    ]
}
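For illustration, a hypothetical ct.json identity policy that would trigger this FAIL might allow the sensitive trail to be deleted (the account ID and trail name mirror the check command above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudtrail:DeleteTrail",
      "Resource": "arn:aws:cloudtrail:us-east-1:123456789012:trail/MySensitiveTrail"
    }
  ]
}
```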

I fix the policy and try again, and this time the check passes, indicating that the policy does not grant access to the listed resources:

{
    "result": "PASS",
    "message": "The policy document does not grant access to perform the listed actions or resources."
}
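Since these checks are designed to run ahead of deployment, one natural place for them is a pipeline gate. The sketch below assumes boto3's accessanalyzer client exposes check_no_public_access with camelCase parameters matching the CLI command above; verify the names against your SDK version before relying on it.

```python
# Hedged sketch: fail a CI/CD pipeline stage when a policy under review does
# not pass an IAM Access Analyzer custom policy check.

def passes(check_response: dict) -> bool:
    """Access Analyzer returns result PASS only when the policy provably
    disallows the access being checked; anything else should fail CI."""
    return check_response.get("result") == "PASS"


def policy_is_safe(client, policy_document: str, resource_type: str) -> bool:
    """Run the no-public-access check on a resource policy document."""
    response = client.check_no_public_access(
        policyDocument=policy_document,
        resourceType=resource_type,
    )
    return passes(response)


# Usage in a pipeline step (requires AWS credentials):
#   import boto3, sys
#   with open("resource.json") as f:   # the policy file from the example above
#       document = f.read()
#   client = boto3.client("accessanalyzer")
#   sys.exit(0 if policy_is_safe(client, document, "AWS::SQS::Queue") else 1)
```

Exiting non-zero on FAIL is what lets the pipeline block a deployment that does not conform to your organization's security policies.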

Guided Revocation
In my earlier post I showed you how IAM Access Analyzer discovers and lists IAM items that grant access which is not actually needed. With today’s launch, you now get guidance to help you (or your developer team) to resolve these findings. Here are the latest findings from my AWS account:

Some of these are leftovers from times when I was given early access to a service so that I could use and then blog about it; others are due to my general ineptness as a cloud admin! Either way, I need to clean these up. Let’s start with the second one, Unused access key. I click on the item and can see the new Recommendations section at the bottom:

I can follow the steps and delete the access key or I can click Archive to remove the finding from the list of active findings and add it to the list of archived ones. I can also create an archive rule that will do the same for similar findings in the future. Similar recommendations are provided for unused IAM users, IAM roles, and passwords.

Now let’s take a look at a finding of Unused permissions:

The recommendation is to replace the existing policy with a new one. I can preview the new policy side-by-side with the existing one:

As in the first example I can follow the steps or I can archive the finding.

The findings and the recommendations are also available from the command line. I generate the recommendation by specifying an analyzer and a finding from it:

$ aws accessanalyzer generate-finding-recommendation \
  --analyzer-arn arn:aws:access-analyzer-beta:us-west-2:123456789012:analyzer/MyAnalyzer \
  --id 67110f3e-05a1-4562-b6c2-4b009e67c38e

Then I retrieve the recommendation. In this example, I am filtering the output to only show the steps since the entire JSON output is fairly rich:

$ aws accessanalyzer get-finding-recommendation \
  --analyzer-arn arn:aws:access-analyzer-beta:us-west-2:123456789012:analyzer/MyAnalyzer \
  --id 67110f3e-05a1-4562-b6c2-4b009e67c38e --output json |
  jq .recommendedSteps[].unusedPermissionsRecommendedStep.recommendedAction
"CREATE_POLICY"
"DETACH_POLICY"

You can use these commands (or the equivalent API calls) to integrate the recommendations into your own tools and systems.

Available Now
The new checks and the resolution steps are available now and you can start using them today in all public AWS regions!

Jeff;

AWS adds passkey multi-factor authentication (MFA) for root and IAM users

Security is our top priority at Amazon Web Services (AWS), and today, we’re launching two capabilities to help you strengthen the security posture of your AWS accounts:

  • The ability to use passkeys as a second authentication factor for your root and IAM users.
  • Enforcement of multi-factor authentication (MFA) for the root users of some AWS accounts, starting with the management accounts of AWS Organizations.

MFA is one of the simplest and most effective ways to enhance account security, offering an additional layer of protection to help prevent unauthorized individuals from gaining access to systems or data.

MFA with passkey for your root and IAM users
Passkey is a general term used for the credentials created for FIDO2 authentication.

A passkey is a pair of cryptographic keys generated on your client device when you register for a service or a website. The key pair is bound to the web service domain and unique for each one.

The public part of the key is sent to the service and stored on their end. The private part of the key is either stored in a secured device, such as a security key, or securely shared across your devices connected to your user account when you use cloud services, such as iCloud Keychain, Google accounts, or a password manager such as 1Password.

Typically, the access to the private part of the key is protected by a PIN code or a biometric authentication, such as Apple Face ID or Touch ID or Microsoft Hello, depending on your devices.

When I try to authenticate on a service protected with passkeys, the service sends a challenge to my browser. The browser then requests that my device sign the challenge with my private key. This triggers a PIN or biometric authentication to unlock the secured storage where the private key is stored. The browser returns the signature to the service. A valid signature confirms that I own the private key matching the public key stored on the service, and the authentication succeeds.

You can read more about this process and the various standards at work (FIDO2, CTAP, WebAuthn) in the post I wrote when AWS launched support for passkeys in AWS IAM Identity Center back in November 2020.

Passkeys can be used to replace passwords. However, for this initial release, we chose to use passkeys as a second authentication factor, in addition to your password. The password is something you know, and the passkey is something you have.

Passkeys are more resistant to phishing attacks than passwords. First, it’s much harder to gain access to a private key protected by your fingerprint, face, or a PIN code. Second, passkeys are bound to a specific web domain, reducing the scope in case of unintentional disclosure.

As an end user, you will benefit from the convenience of use and easy recoverability. You can use the built-in authenticators in your phones and laptops to unlock a cryptographically secured credential to your AWS sign-in experience. And when using a cloud service to store the passkey (such as iCloud keychain, Google accounts, or 1Password), the passkey can be accessed from any of your devices connected to your passkey provider account. This helps you to recover your passkey in the unfortunate case of losing a device.

How to enable passkey MFA for an IAM user
To enable passkey MFA, I navigate to the AWS Identity and Access Management (IAM) section of the console. I select a user, and I scroll down the page to the Multi-factor authentication (MFA) section. Then, I select Assign MFA device.

Note that to help you increase resilience and account recovery, you can have multiple MFA devices enabled for a user.

Enable MFA in IAM console

On the next page, I enter an MFA device name, and I select Passkey or security key. Then, I select next.

enable MFA : select passkey

When using a password manager application that supports passkeys, it will pop up and ask if you want to generate and store a passkey using that application. Otherwise, your browser will present you with a couple of options. The exact layout of the screen depends on the operating system (macOS or Windows) and the browser you use. Here is the screen I see on macOS with a Chromium-based browser.

Enable passkey : choose method

The rest of the experience depends on your selection. iCloud Keychain will prompt you for a Touch ID to generate and store the passkey.

In the context of this demo, I want to show you how to bootstrap the passkey on another device, such as a phone. I therefore select Use a phone, tablet, or security key instead. The browser presents me with a QR code. Then, I use my phone to scan the QR code. The phone authenticates me with Face ID and generates and stores the passkey.

Passkey : scan a QR code

This QR code-based flow allows a passkey from one device to be used to sign in on another device (a phone and my laptop in my demo). It is defined by the FIDO specification and known as cross device authentication (CDA).

When everything goes well, the passkey is now registered with the IAM user.

Enable passkey : success

Note that we don’t recommend using IAM users to authenticate human beings to the AWS console. We recommend configuring single sign-on (SSO) with AWS IAM Identity Center instead.

What’s the sign-in experience?
Once MFA is enabled and configured with a passkey, I try to sign in to my account.

The user experience differs based on the operating system, browser, and device you use.

For example, on macOS with iCloud Keychain enabled, the system prompts me for a touch on the Touch ID sensor. For this demo, I registered the passkey on my phone using CDA, so the system asks me to scan a QR code with my phone. Once scanned, the phone authenticates me with Face ID to unlock the passkey, and the AWS console completes the sign-in procedure.

Authenticate with MFA and passkey

Enforcing MFA for root users
The second announcement today is that we have started to enforce the use of MFA for the root user on some AWS accounts. This change was announced last year in a blog post from Stephen Schmidt, Chief Security Officer at Amazon.

To quote Stephen:

Verifying that the most privileged users in AWS are protected with MFA is just the latest step in our commitment to continuously enhance the security posture of AWS customers.

We started with your most sensitive account: your management account for AWS Organizations. The deployment of the policy is progressive, with just a few thousand accounts at a time. Over the coming months, we will progressively deploy the MFA enforcement policy on root users for the majority of the AWS accounts.

When you don’t have MFA enabled on your root user account, and your account is updated, a new message will pop up when you sign in, asking you to enable MFA. You will have a grace period, after which the MFA becomes mandatory.

Enable MFA on root account

You can start to use passkeys for multi-factor authentication today in all AWS Regions, except in China.

We’re enforcing the use of multi-factor authentication in all AWS Regions, except for the two regions in China (Beijing, Ningxia) and for AWS GovCloud (US), because the AWS accounts in these Regions have no root user.

Now go activate passkey MFA for your root user in your accounts.

— seb

AWS Weekly Roundup: New AWS Heroes, Amazon API Gateway, Amazon Q and more (June 10, 2024)

In the last AWS Weekly Roundup, Channy reminded us that life has ups and downs. That’s just how life is, but it doesn’t mean we have to face it alone. Farouq Mousa, AWS Community Builder, is fighting brain cancer, and the daughter of Allen Helton, AWS Serverless Hero, is fighting leukemia.

If you have a moment, please visit their campaign pages and give your support.

Meanwhile, we’ve just finished a few AWS Summits in India, Korea, and Thailand. As always, I had so much fun working at the Developer Lounge with AWS Heroes, AWS Community Builders, and AWS User Group leaders. Here’s a photo of everyone there.

Last Week’s Launches
Here are some launches that caught my attention last week:

Welcome, new AWS Heroes! — Last week, we announced a new cohort of AWS Heroes, a worldwide group of AWS experts who go above and beyond to share knowledge and empower their communities.

Amazon API Gateway increased integration timeout limit — If you’re using Regional REST APIs or private REST APIs in Amazon API Gateway, you can now increase the integration timeout limit beyond the previous maximum of 29 seconds. This lets you run workloads that require longer timeouts.
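
As a minimal sketch of what raising the timeout looks like for an existing REST API integration, the change is a `replace` patch operation on the integration’s `timeoutInMillis` property, applied with the `update_integration` API. The API, resource, and method identifiers below are hypothetical placeholders:

```python
import json


def build_timeout_patch(timeout_ms: int) -> list:
    """Build the patch operation that raises an integration's
    timeoutInMillis beyond the former 29-second ceiling."""
    return [{"op": "replace", "path": "/timeoutInMillis", "value": str(timeout_ms)}]


# Example: raise the integration timeout to 90 seconds (90,000 ms).
patch_ops = build_timeout_patch(90_000)
print(json.dumps(patch_ops))

# With boto3 (not executed here), the patch would be applied as:
#   import boto3
#   apigw = boto3.client("apigateway")
#   apigw.update_integration(
#       restApiId="a1b2c3",      # hypothetical API ID
#       resourceId="r4s5t6",     # hypothetical resource ID
#       httpMethod="POST",
#       patchOperations=patch_ops,
#   )
```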

Amazon Q offers inline completion in the command line — Now, Amazon Q Developer provides real-time AI-generated code suggestions as you type in your command line. As a regular command line interface (CLI) user, I’m really excited about this.

New common control library in AWS Audit Manager — This announcement helps you save time when mapping enterprise controls into AWS Audit Manager. Check out Danilo’s post, where he explains how you can simplify risk and compliance assessment with the new common control library.

Amazon Inspector container image scanning for Amazon CodeCatalyst and GitHub Actions — If you need to integrate software vulnerability scanning into your CI/CD pipelines, you can use Amazon Inspector. The new native integration with GitHub Actions and Amazon CodeCatalyst streamlines your development pipeline.

Ingest streaming data with Amazon OpenSearch Ingestion and Amazon Managed Streaming for Apache Kafka — With this new capability, you can build more efficient data pipelines for complex analytics use cases, seamlessly indexing data from your Amazon MSK Serverless clusters into Amazon OpenSearch Service.
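
As a rough sketch of what such a pipeline definition can look like, here is a hypothetical OpenSearch Ingestion configuration with a Kafka source reading from MSK and an OpenSearch sink. Every ARN, endpoint, topic name, and role name below is a placeholder, and the exact options should be checked against the OpenSearch Ingestion documentation:

```yaml
# Hypothetical OpenSearch Ingestion pipeline: MSK Serverless -> OpenSearch.
version: "2"
msk-to-opensearch-pipeline:
  source:
    kafka:
      acknowledgments: true
      topics:
        - name: "clickstream-events"      # placeholder topic
          group_id: "osi-consumer-group"
          serde_format: json
      aws:
        region: "us-east-1"
        sts_role_arn: "arn:aws:iam::111122223333:role/osi-pipeline-role"
        msk:
          arn: "arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/placeholder"
  sink:
    - opensearch:
        hosts: ["https://search-my-domain.us-east-1.es.amazonaws.com"]
        index: "clickstream-events"
        aws:
          region: "us-east-1"
          sts_role_arn: "arn:aws:iam::111122223333:role/osi-pipeline-role"
```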

Amazon Titan Text Embeddings V2 now available in Amazon Bedrock Knowledge Bases — You can now embed your data into a vector database using Amazon Titan Text Embeddings V2, which helps you retrieve relevant information for various tasks.

Max tokens: 8,192
Languages: 100+ in pre-training
Fine-tuning supported: No
Normalization supported: Yes
Vector size: 256, 512, 1,024 (default)
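
As a minimal sketch of how those parameters map onto a request, here is the body you might pass to Bedrock’s `invoke_model` for Titan Text Embeddings V2, assuming the documented `inputText`/`dimensions`/`normalize` request shape; the input text is just an example:

```python
import json

# Request body for Amazon Titan Text Embeddings V2 on Amazon Bedrock.
# The model accepts an output dimension of 256, 512, or 1,024 and an
# optional normalization flag, matching the table above.
body = json.dumps({
    "inputText": "AWS Weekly Roundup",
    "dimensions": 512,     # one of 256, 512, 1024 (default)
    "normalize": True,
})
print(body)

# With boto3 (not executed here):
#   import boto3
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.invoke_model(
#       modelId="amazon.titan-embed-text-v2:0",
#       body=body,
#   )
#   embedding = json.loads(response["body"].read())["embedding"]
```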

From Community.aws
Here are my three personal favorite posts from community.aws:

Upcoming AWS events
Check your calendars and sign up for these AWS and AWS Community events:

  • AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Japan (June 20), Washington, DC (June 26–27), and New York (July 10).

  • AWS re:Inforce — Join us for AWS re:Inforce (June 10–12) in Philadelphia, PA. AWS re:Inforce is a learning conference focused on AWS security solutions, cloud security, compliance, and identity. Connect with the AWS teams that build the security tools and meet AWS customers to learn about their security journeys.

  • AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), New Zealand (August 15), Nigeria (August 24), and New York (August 28).

You can browse all upcoming in-person and virtual events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Donnie

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!