20 years in the AWS Cloud – how time flies!

AWS has reached its 20th anniversary! With a steady pace of innovation, AWS has grown to offer over 240 comprehensive cloud services and continues to launch thousands of new features annually for millions of customers. During this time, over 4,700 posts have been published on this blog—more than double the number that had been published when Jeff Barr wrote the 10th anniversary post.

AWS changed my life
Reflecting on what I was doing 20 years ago: I met Jeff in Seoul on March 13, 2006, when he came as the keynote speaker for the Korea NGWeb conference. At that time, Amazon was one of the pioneers of the API economy, introducing ecommerce API services. After the keynote speech, he flew home that evening, and I believe he wrote the Amazon S3 launch blog post on the flight back to the United States.

That short meeting with him brought significant changes to my life. He became my role model as a blogger, and I began building API-based services in my company and opening them to third-party developers. Later, while taking a break from work as a PhD student, I realized that for individual researchers like me, AWS Cloud services are powerful tools for conducting large-scale research projects. After returning to work, my company became one of the first AWS customers in Korea in 2014. Countless developers—myself included—have embraced cloud computing and actively used its capabilities to accomplish what was previously impossible.

Over the past decade, the technology landscape has transformed dramatically. Deep learning emerged as a breakthrough in AI, evolving through generative AI based on large language models (LLMs) to today’s agentic AI technology. Jeff wrote, “When looking into the future, you need to be able to distinguish between flashy distractions and genuine trends, while remaining flexible enough to pivot if yesterday’s niche becomes today’s mainstream technology.” This principle guides how AWS approaches innovation—we start by listening to what customers truly need. The real trend isn’t pursuing every emerging technology, but rather reimagining solutions that address customers’ most critical challenges.

20 years of AWS
For the first 10 years, Jeff selected his favorite AWS launches and blog posts: Amazon S3 and Amazon EC2 (2006), Amazon Relational Database Service and Amazon Virtual Private Cloud (2009), Amazon DynamoDB and Amazon Redshift (2012), Amazon WorkSpaces and Amazon Kinesis (2013), AWS Lambda (2014), and AWS IoT (2015).

While I also hate to play favorites, I want to choose some of my favorite AWS blog posts of the past decade.

  • Deploying containers easily (2014) – Amazon Elastic Container Service makes it straightforward for you to run any number of containers across a managed cluster of Amazon EC2 instances using powerful APIs and other tools. In 2017, we launched Amazon Elastic Kubernetes Service as a fully managed Kubernetes service and AWS Fargate as a serverless deployment option.
  • High availability database at global scale (2017) – Amazon Aurora is a modern relational database service offering performance and high availability at scale. In 2018, we launched Amazon Aurora Serverless v1, and this serverless database evolved into Amazon Aurora Serverless v2, which can scale down to zero. In 2025, we also launched Amazon Aurora DSQL, the fastest serverless distributed SQL database for always-available applications.
  • Machine learning (ML) at your fingertips (2017) – Amazon SageMaker is a fully managed end-to-end ML service that data scientists, developers, and ML experts can use to quickly build, train, and host machine learning models at scale. In 2024, we launched the next generation of Amazon SageMaker, a unified platform for data, analytics, and AI, and introduced Amazon SageMaker AI to focus specifically on building, training, and deploying AI and ML models at scale.
  • Best price performance for cloud workloads (2018) – We launched Amazon EC2 A1 instances powered by the first generation of Arm-based AWS Graviton processors, designed to deliver the best price performance for your cloud workloads. Last year, we previewed EC2 M9g instances powered by AWS Graviton5 processors. Over 90,000 AWS customers have reaped the benefits of Graviton, which supports popular AWS services such as Amazon ECS, Amazon EKS, AWS Lambda, Amazon RDS, Amazon ElastiCache, Amazon EMR, and Amazon OpenSearch Service.
  • Run AWS Cloud in your data center (2019) – AWS Outposts is a family of fully managed services delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience. Now, AWS Outposts is available in a variety of form factors, from 1U and 2U Outposts servers to 42U Outposts racks, and multiple rack deployments. Customers such as DISH, Fanduel, Morningstar, Philips, and others use Outposts in workloads requiring low latency access to on-premises systems, local data processing, data residency, and application migration with local system interdependencies.
  • Best price performance for ML workloads (2019) – We launched Amazon EC2 Inf1 instances powered by the first generation of AWS Inferentia chips designed to provide fast, low-latency inferencing. In 2022, we launched Amazon EC2 Trn1 instances powered by the first generation of AWS Trainium chips optimized for high performance AI training. Last year, we launched Amazon EC2 Trn3 UltraServers powered by Trainium3 to deliver the best token economics for next-generation generative AI applications. Customers such as Anthropic, Decart, poolside, Databricks, Ricoh, Karakuri, SplashMusic, and others are realizing performance and cost benefits of Trainium-based instances and UltraServers.
  • Build your generative AI apps on AWS (2023) – Amazon Bedrock is a fully managed service that offers a choice of industry leading AI models along with a broad set of capabilities that you need to build generative AI applications, simplifying development with security, privacy, and responsible AI. Last year, we introduced Amazon Bedrock AgentCore, an agentic platform for building, deploying, and operating effective agents securely at scale. Now, more than 100,000 customers worldwide choose Amazon Bedrock to deliver personalized experiences, automate complex workflows, and uncover actionable insights.
  • Your AI coding companion (2023) – We launched Amazon CodeWhisperer as the industry’s first cloud-based AI coding assistant service. The service delivered code generation from comments, open-source code reference tracking, and vulnerability scanning capabilities. In 2024, we rebranded the service to Amazon Q Developer and expanded its features to include a chat-based assistant in the console, project-based code generation, and code transformation tools. In 2025, this service evolved into Kiro, a new agentic AI development tool that brings structure to AI coding through spec-driven development, taking projects from prototype to production. Recently, Kiro previewed an autonomous agent, a frontier agent that works independently on development tasks, maintaining context and learning from every interaction.
  • Broaden your AI model choices (2024) – We launched Amazon Titan models, further increasing cost-effective AI model choice for text and multimodal needs in Amazon Bedrock. At AWS re:Invent 2024, we announced Amazon Nova models, which deliver frontier intelligence and industry-leading price performance. Now Amazon Nova has a portfolio of AI offerings—including Amazon Nova models; Amazon Nova Forge, a new service to build your own frontier models; and Amazon Nova Act, a new service to build agents that automate browser-based UI workflows, powered by a custom Amazon Nova 2 Lite model.

Build with AI: Your path forward
A decade ago, AWS responded to the emergence of deep learning by launching the broadest and deepest ML services, such as Amazon SageMaker, democratizing AI for a wide range of customers—from individual developers and startups to large enterprises—regardless of their technical expertise.

AI technology has advanced significantly, but building and deploying AI models and applications remains complex for many developers and organizations. AWS offers the broadest selection of AI models through Amazon Bedrock, including models from leading providers such as Anthropic and OpenAI. By using our model training and inference infrastructure, along with tools that make responsible AI both practical and scalable, you can accelerate trusted AI innovation while maintaining control of your data and costs—all built on our global infrastructure’s operational excellence.

Reinvent your idea, keep on learning, build confidently with AI you can trust, and share your successes with us! New AWS customers receive up to $200 in credits to try AWS AI for free. If you’re a student, start building with Kiro for free using 1,000 credits per month for one year.

Channy

Interesting Message Stored in Cowrie Logs, (Wed, Mar 18th)

This activity was found and reported by BACS student Adam Thorman as part of one of his assignments; I posted his final paper [1] last week. The activity appears to have occurred only on 19 Feb 2026, when at least two DShield sensors detected, in their cowrie logs, an echo command that included: "MAGIC_PAYLOAD_KILLER_HERE_OR_LEAVE_EMPTY_iranbot_was_here". My DShield sensor captured activity from source IP 64.89.161.198 between 30 Jan and 22 Feb 2026 that included port scans, a successful login via Telnet (TCP/23), and web access, all of it listed below as captured by the DShield sensor (cowrie, webhoneypot, and iptables logs).
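
If you run your own sensor and want to check for the same marker, the following Python sketch scans a cowrie JSON log for it. The log path is a placeholder for your own sensor's location, and the event fields assume cowrie's standard JSON logging, where attacker shell commands appear as cowrie.command.input events.

import json

# Placeholder path; point this at your own sensor's cowrie JSON log.
LOG_FILE = "/srv/cowrie/var/log/cowrie/cowrie.json"
MARKER = "MAGIC_PAYLOAD_KILLER_HERE_OR_LEAVE_EMPTY_iranbot_was_here"

with open(LOG_FILE) as f:
    for line in f:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip truncated or corrupted lines
        # cowrie records attacker shell commands as "cowrie.command.input" events
        if event.get("eventid") == "cowrie.command.input" and MARKER in event.get("input", ""):
            print(event.get("timestamp"), event.get("src_ip"), event.get("input"))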

Our First 2026 Heroes Cohort Is Here!

We’re thrilled to celebrate three exceptional developer community leaders as AWS Heroes. These individuals represent the heart of what makes the AWS community so vibrant. In addition to sharing technical knowledge, they build connections, forge genuine human relationships, and create pathways for others to grow. From pioneering cloud culture in mountain villages to leading cybersecurity education across continents, these Heroes demonstrate that true leadership extends beyond technical expertise to the communities we build and the lives we impact.

Maurizio Argoneto – Pignola, Italy

Community Hero Maurizio is a CTO and organizer of the AWS User Group Basilicata, recognized for his dedication to building tech ecosystems where they previously did not exist. For over a decade, he has pioneered cloud culture through a philosophy centered on genuine human connection and knowledge transfer. He founded an international tech conference in a small mountain village, creating a unique space where global experts and local talent meet, blending deep technical sessions on cloud architectures, DevOps, and web scaling with unconventional networking experiences. Beyond organizing events, Maurizio is a tireless mentor working across generations, from introducing children to coding to helping university students and professionals transition into cloud architecture. His impact is defined by a rare combination of technical leadership and inclusive community building that draws people from across Europe.

Ray Goh – Singapore

Artificial Intelligence Hero Ray Goh is a seasoned AWS machine learning and AI community leader based in Singapore and a long-standing contributor in various AWS community programs since 2018, from AWS ASEAN Cloud Warrior and AWS Dev/Cloud Alliance to being part of the pioneer batch of AWS Community Builders in 2020. He founded The Gen-C (a Generative AI Learning Community) in 2024, organizing regular public workshops at libraries across Singapore on topics ranging from LLM fine-tuning to AI agents on AWS. Ray has spoken at AWS re:Invent, AWS Summit ASEAN, AWS Community Day Hong Kong, and numerous user group meetups, and guest-authored for the AWS Machine Learning Blog. He spearheaded the world’s largest enterprise AWS DeepRacer program for DBS Bank in 2020, upskilling over 3,100 employees, and trained more than 1,300 ASEAN students in LLM techniques in 2025. His community work extends to skills-based CSR initiatives teaching AI and machine learning to women, children, and youths, with contributions featured on CNBC and Euromoney.

Sheyla Leacock – Panama City, Panama

Security Hero Sheyla Leacock is an IT security professional, mentor, technical author, and international speaker contributing to the global cloud and cybersecurity community. She has spoken at AWS Summit Mexico, AWS Summit LATAM in Peru, and led PeerTalk sessions at AWS re:Invent, while also leading the AWS User Group in Panama and regularly participating in AWS Community Days and regional meetups. Beyond AWS-focused events, she has delivered talks at more than 20 international conferences and publishes technical articles and educational content on AWS cloud computing and cybersecurity. She collaborates with universities as a guest lecturer, supporting the development of emerging technology and cybersecurity talent. Through community leadership, knowledge sharing, and education, she contributes to strengthening the AWS and cybersecurity ecosystem.

Learn More

Visit the AWS Heroes webpage if you’d like to learn more about the AWS Heroes program, or to connect with a Hero near you.

Taylor

Scans for "adminer", (Wed, Mar 18th)

A very popular target of attackers scanning our honeypots is "phpmyadmin". phpMyAdmin is a script first released in the late 90s, before many security concepts had been discovered. Its rich history of vulnerabilities made it a favorite target. Its alternative, "adminer", began appearing about a decade later (https://www.adminer.org). One of its main "selling" points was simplicity. Adminer is just a single PHP file. It requires no configuration. Copy it to your server, and you are ready to go. "adminer" has a much better security record and claims to prioritize security in its development.

IPv4 Mapped IPv6 Addresses, (Tue, Mar 17th)

Yesterday, in my diary about the scans for "/proxy/" URLs, I noted how attackers are using IPv4-mapped IPv6 addresses to possibly obfuscate their attack. These addresses are defined in RFC 4291 and are one of the many transition mechanisms used to retain some backward compatibility as IPv6 is deployed. Many modern applications use IPv6-only networking code. IPv4-mapped IPv6 addresses can be used to represent IPv4 addresses in these cases. IPv4-mapped IPv6 addresses are not used on the network, but instead, translated to IPv4 before a packet is sent.

To map an IPv4 address into IPv6, the prefix ::ffff:0:0/96 is used. This leaves the last 32 bits to represent the IPv4 address. For example, "10.5.2.1" turns into "::ffff:0a05:0201". Many applications display the last 4 bytes in decimal format to make it easier to read. For example, you will see "::ffff:10.5.2.1".
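
Python's standard ipaddress module implements this mapping, so it is an easy way to verify the example above; a minimal sketch:

import ipaddress

# Both notations parse to the same address.
a = ipaddress.IPv6Address("::ffff:0a05:0201")
b = ipaddress.IPv6Address("::ffff:10.5.2.1")
print(a == b)                                      # True
print(a.ipv4_mapped)                               # 10.5.2.1, the embedded IPv4 address
print(a in ipaddress.ip_network("::ffff:0:0/96"))  # True: inside the mapped prefix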

Whether IPv4-mapped IPv6 addresses can be used depends on the particular application. Here are a few examples, but feel free to experiment yourself:

ping6 on macOS:

% ping6 ::ffff:0a05:0201
PING6(56=40+8+8 bytes) ::ffff:10.5.2.147 --> ::ffff:10.5.2.1
ping6: sendmsg: Invalid argument
ping6: wrote ::ffff:0a05:0201 16 chars, ret=-1

Note that ping6 displays the IPv4 address in decimal format but refuses to send any packets, since they would be IPv4 packets, not IPv6.

% ping ::ffff:0a05:0201
ping: cannot resolve ::ffff:0a05:0201: Unknown host

Regular IPv4 ping fails to recognize the format for an IP address, and instead interprets it as a hostname, which fails.

ping6 on Linux does not return an error. It just appears to "hang," and no packets are emitted. Running strace shows:

sendto(3, "200{2634i:271i(20062021222324252627"..., 64, 0, {sa_family=AF_INET6, sin6_port=htons(58), sin6_flowinfo=htonl(0), inet_pton(AF_INET6, "::ffff:10.5.2.1", &sin6_addr), sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
recvmsg(3, {msg_namelen=28}, MSG_DONTWAIT|MSG_ERRQUEUE) = -1 EAGAIN (Resource temporarily unavailable)

It attempts to set up an IPv6 connection based on the "AF_INET6" argument in the inet_pton call, but this fails for the mapped IPv4 address.

ssh, on the other hand (on macOS and Linux), works just fine:

$ ssh ::ffff:0a05:0201 -p 11460
The authenticity of host '[::ffff:10.5.2.1]:11460 ([::ffff:10.5.2.1]:11460)' can't be established.

The traffic is sent properly as IPv4 traffic.

curl is kind of interesting in that it uses the IPv4-mapped IPv6 address in the Host header:

$ curl -i http://[::ffff:0a80:010b]
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Tue, 17 Mar 2026 11:32:10 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: https://[::ffff:0a80:010b]/

I tried a couple of different web servers, and they all acted the same way. Browsers like Safari and Chrome could also use these addresses. In browsers, it may be possible to evade some filters by using IPv4-mapped IPv6 addresses when simple string matching is used. Note how in URLs the IPv6 address must be enclosed in square brackets to avoid "colon confusion". 
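
To illustrate the filter-evasion point, here is a minimal sketch assuming a naive substring blocklist: the mapped notation slips past plain string matching, while normalizing the address with Python's ipaddress module catches it.

import ipaddress

BLOCKED = ipaddress.IPv4Address("169.254.169.254")

def is_blocked(host: str) -> bool:
    """Compare addresses instead of strings."""
    try:
        addr = ipaddress.ip_address(host.strip("[]"))
    except ValueError:
        return False  # not an IP literal; a real filter would resolve and re-check
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped  # unwrap ::ffff:a9fe:a9fe to 169.254.169.254
    return addr == BLOCKED

print("169.254.169.254" in "::ffff:a9fe:a9fe")  # False: the substring filter is bypassed
print(is_blocked("::ffff:a9fe:a9fe"))           # True
print(is_blocked("[::ffff:169.254.169.254]"))   # True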

Any ideas what else to test or how to possibly use or abuse these addresses? Remember that on the network, you will end up with normal IPv4 traffic, not IPv6 traffic using IPv4-mapped IPv6 addresses. 


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

/proxy/ URL scans with IP addresses, (Mon, Mar 16th)

Attempts to find proxy servers are among the most common scans our honeypots detect. Most of the time, the attacker attempts to use a host header or include the hostname in the URL to trick the proxy server into forwarding the request. In some cases, common URL prefixes like "/proxy/" are used. This weekend, I noticed a slightly different pattern in our logs:

First Seen Last Seen Count Path
2026-03-15 2026-03-16 2 /proxy/http:/[::ffff:a9fe:a9fe]/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/169.254.169.254/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/http:/169.254.169.254/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/absolute/[0:0:0:0:0:ffff:a9fe:a9fe]/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/absolute/[::ffff:a9fe:a9fe]/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/absolute/169.254.169.254/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/[0:0:0:0:0:ffff:a9fe:a9fe]/latest/dynamic/instance-identity/document
2026-03-15 2026-03-16 2 /proxy/[0:0:0:0:0:ffff:a9fe:a9fe]/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/[::ffff:a9fe:a9fe]/latest/dynamic/instance-identity/document
2026-03-15 2026-03-16 2 /proxy/[::ffff:a9fe:a9fe]/latest/meta-data/iam/security-credentials/
2026-03-15 2026-03-16 2 /proxy/169.254.169.254/latest/dynamic/instance-identity/document
2026-03-16 2026-03-16 1 /proxy/2852039166/latest/meta-data/iam/security-credentials/

The intent of these requests is to reach the cloud metadata service, which typically listens on 169.254.169.254, a non-routable link-local address. The "security-credentials" directory lists the entities with access to the service, and a follow-up request for a specific entry then returns the key material used for authentication.

The attacker does not just use the IPv4 address, but attempts to bypass some filters by using the IPv4-mapped IPv6 address. The prefix ::ffff:0:0/96, followed by the IPv4 address, is used to express IPv4 addresses in IPv6. Note that these addresses are not intended to be routable, but just like 169.254.169.254, they may work on the host itself. In addition, the attacker uses the less abbreviated form, spelling out the leading zero groups as 0:0:0:0:0. Finally, the long unsigned integer form of the IP address is used.
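
All of these notations decode to the same link-local address. A quick standard-library Python check confirms this, including the unsigned integer form 2852039166 from the last log line:

import ipaddress

forms = [
    "169.254.169.254",            # plain dotted quad
    "::ffff:a9fe:a9fe",           # IPv4-mapped IPv6, compressed
    "0:0:0:0:0:ffff:a9fe:a9fe",   # IPv4-mapped IPv6, less abbreviated
    2852039166,                   # the address as an unsigned 32-bit integer
]

for form in forms:
    addr = ipaddress.ip_address(form)
    # unwrap IPv4-mapped IPv6 addresses to their embedded IPv4 address
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    print(f"{form!s:>28} -> {addr}")  # every form prints 169.254.169.254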

The metadata service is often exploited using SSRF vulnerabilities. However, the more modern "version 2" of the metadata service attempts to prevent simple SSRF attacks by requiring two requests with different methods and specific custom headers. An SSRF vulnerability is essentially a less functional open proxy. In this case, the attacker assumes a full proxy, so the attack may not be prevented by the more modern metadata service implementation.
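
For reference, here is a minimal sketch of the IMDSv2 two-step flow using the requests library; the endpoints and headers are the documented ones, and the token TTL is an arbitrary choice. A full proxy can replay both steps, while a simple GET-only SSRF primitive cannot.

import requests

IMDS = "http://169.254.169.254"

# Step 1: a PUT request obtains a short-lived session token. This is the step
# a simple GET-based SSRF usually cannot perform.
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    timeout=2,
).text

# Step 2: the token must accompany every metadata request as a custom header.
roles = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
)
print(roles.text)  # lists role names; a follow-up GET per role returns credentials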

Modern web applications use proxies in many different forms. For example, you may have API gateways, load balancers, web application firewalls, or even still some proxies to bypass CORS constraints. Any of these is potentially vulnerable if badly configured. The above list of URLs makes a good starting point for testing your proxy implementation.


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SmartApeSG campaign uses ClickFix page to push Remcos RAT, (Sat, Mar 14th)

Introduction

This diary describes a Remcos RAT infection that I generated in my lab on Thursday, 2026-03-11. This infection was from the SmartApeSG campaign that used a ClickFix-style fake CAPTCHA page.

My previous in-depth diary about SmartApeSG (ZPHP, HANEYMANEY) was in November 2025, when I saw it push NetSupport Manager RAT. Since then, I've fairly consistently seen what appears to be Remcos RAT from this campaign.

Finding SmartApeSG Activity

As previously noted, I find SmartApeSG indicators from the Monitor SG account on Mastodon, and I use URLscan to pivot on those indicators to find compromised websites with injected SmartApeSG script.

Details

Below is an image of HTML in a page from a legitimate but compromised website that shows the injected SmartApeSG script.


Shown above: Page from a legitimate but compromised site that highlights the injected SmartApeSG script.

The injected SmartApeSG script generates a fake CAPTCHA-style "verify you are human" page, which displays ClickFix-style instructions after the user checks a box on the page. A screenshot from this infection is shown below; note the ClickFix-style script injected into the user's clipboard. Users are instructed to open a run window, paste the script into it, and hit the Enter key.


Shown above: Fake CAPTCHA page generated by a legitimate but compromised site, showing the ClickFix-style command.

I used Fiddler to reveal URLs from the HTTPS traffic, and I recorded the traffic and viewed it in Wireshark. Traffic from the infection chain is shown in the image below.


Shown above: Traffic from the infection in Fiddler and Wireshark.

After running the ClickFix-style instructions, the malware was sent as a ZIP archive and saved to disk with a .pdf file extension. This appears to be Remcos RAT in a malicious package that uses DLL side-loading to run the malware. This infection was made persistent with an update to the Windows Registry.


Shown above: Malware from the infection persistent on an infected Windows host.

Indicators of Compromise

SmartApeSG script injected into a page from a legitimate but compromised site:

  • hxxps[:]//cpajoliette[.]com/d.js

Traffic to domain hosting the fake CAPTCHA page:

  • hxxps[:]//retrypoti[.]top/endpoint/signin-cache.js
  • hxxps[:]//retrypoti[.]top/endpoint/login-asset.php?Iah0QU0N
  • hxxps[:]//retrypoti[.]top/endpoint/handler-css.js?00109a4cb788daa811

Traffic generated by running the ClickFix-style script:

  • hxxp[:]//forcebiturg[.]com/boot  <– 302 redirect to HTTPS URL
  • hxxps[:]//forcebiturg[.]com/boot  <– returned HTA file
  • hxxp[:]//forcebiturg[.]com/proc  <– 302 redirect to HTTPS URL
  • hxxps[:]//forcebiturg[.]com/proc  <– returned ZIP archive with files for Remcos RAT

Post-infection traffic for Remcos RAT:

  • 193.178.170[.]155:443 – TLSv1.3 traffic using self-signed certificate

Example of ZIP archive for Remcos RAT:

  • SHA256 hash: b170ffc8612618c822eb03030a8a62d4be8d6a77a11e4e41bb075393ca504ab7
  • File size: 92,273,195 bytes
  • File type: Zip archive data, at least v2.0 to extract, compression method=deflate
  • Example of saved file location: C:\Users\[username]\AppData\Local\Temp\594653818\594653818.pdf

Of note, the files, URLs and domains for SmartApeSG activity change on a near-daily basis, and the indicators described in this article are likely no longer current. However, the overall patterns of activity for SmartApeSG have remained fairly consistent over the past several months.


Bradley Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Twenty years of Amazon S3 and building what’s next

Twenty years ago today, on March 14, 2006, Amazon Simple Storage Service (Amazon S3) quietly launched with a modest one-paragraph announcement on the What’s New page:

Amazon S3 is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.

Even Jeff Barr’s blog post was only a few paragraphs, written before catching a plane to a developer event in California. No code examples. No demo. Very low fanfare. Nobody knew at the time that this launch would shape our entire industry.

The early days: Building blocks that just work
At its core, S3 introduced two straightforward primitives: PUT to store an object and GET to retrieve it later. But the real innovation was the philosophy behind it: create building blocks that handle the undifferentiated heavy lifting, which freed developers to focus on higher-level work.
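
Those two primitives map directly onto today's SDKs. A minimal boto3 sketch, with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# PUT: store an object under a key (the bucket name here is a placeholder).
s3.put_object(Bucket="amzn-s3-demo-bucket", Key="hello.txt", Body=b"Hello, S3!")

# GET: retrieve the same object later.
response = s3.get_object(Bucket="amzn-s3-demo-bucket", Key="hello.txt")
print(response["Body"].read().decode())  # Hello, S3!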

From day one, S3 was guided by five fundamentals that remain unchanged today.

Security means your data is protected by default. Durability is designed for 11 nines (99.999999999%), and we operate S3 to be lossless. Availability is designed into every layer, with the assumption that failure is always present and must be handled. Performance is optimized to store virtually any amount of data without degradation. Elasticity means the system automatically grows and shrinks as you add and remove data, with no manual intervention required.

When we get these things right, the service becomes so straightforward that most of you never have to think about how complex these concepts are.

S3 today: Scale beyond imagination
Throughout 20 years, S3 has remained committed to its core fundamentals even as it’s grown to a scale that’s hard to comprehend.

When S3 first launched, it offered approximately one petabyte of total storage capacity across about 400 storage nodes in 15 racks spanning three data centers, with 15 Gbps of total bandwidth. We designed the system to store tens of billions of objects, with a maximum object size of 5 GB. The initial price was 15 cents per gigabyte.

S3 key metrics illustration

Today, S3 stores more than 500 trillion objects and serves more than 200 million requests per second globally across hundreds of exabytes of data in 123 Availability Zones in 39 AWS Regions, for millions of customers. The maximum object size has grown from 5 GB to 50 TB, a 10,000-fold increase. If you stacked all of the tens of millions of S3 hard drives on top of each other, they would reach the International Space Station and almost back.

Even as S3 has grown to support this incredible scale, the price you pay has dropped. Today, AWS charges slightly over 2 cents per gigabyte. That’s a price reduction of approximately 85% since launch in 2006. In parallel, we’ve continued to introduce ways to further optimize storage spend with storage tiers. For example, our customers have collectively saved more than $6 billion in storage costs by using Amazon S3 Intelligent-Tiering as compared to Amazon S3 Standard.

Over the past two decades, the S3 API has been adopted and used as a reference point across the storage industry. Multiple vendors now offer S3 compatible storage tools and systems, implementing the same API patterns and conventions. This means skills and tools developed for S3 often transfer to other storage systems, making the broader storage landscape more accessible.

Despite all of this growth and industry adoption, perhaps the most remarkable achievement is this: the code you wrote for S3 in 2006 still works today, unchanged. Your data went through 20 years of innovation and technical advances. We migrated the infrastructure through multiple generations of disks and storage systems. All the code to handle a request has been rewritten. But the data you stored 20 years ago is still available today, and we’ve maintained complete API backward compatibility. That’s our commitment to delivering a service that continually “just works.”

The engineering behind the scale
What makes S3 possible at this scale? Continuous innovation in engineering.

Much of what follows is drawn from a conversation between Mai-Lan Tomsen Bukovec, VP of Data and Analytics at AWS, and Gergely Orosz of The Pragmatic Engineer. The in-depth interview goes further into the technical details for those who want to go deeper. In the following paragraphs, I share some examples:

At the heart of S3 durability is a system of microservices that continuously inspect every single byte across the entire fleet. These auditor services examine data and automatically trigger repair systems the moment they detect signs of degradation. S3 is designed to be lossless: the 11 nines design goal reflects how the replication factor and re-replication fleet are sized, but the system is built so that objects aren’t lost.

S3 engineers use formal methods and automated reasoning in production to mathematically prove correctness. When engineers check in code to the index subsystem, automated proofs verify that consistency hasn’t regressed. This same approach proves correctness in cross-Region replication or for access policies.

Over the past 8 years, AWS has been progressively rewriting performance-critical code in the S3 request path in Rust. Blob movement and disk storage have been rewritten, and work is actively ongoing across other components. Beyond raw performance, Rust’s type system and memory safety guarantees eliminate entire classes of bugs at compile time, an essential property given S3’s scale and correctness requirements.

S3 is built on a design philosophy: “Scale is to your advantage.” Engineers design systems so that increased scale improves attributes for all users. The larger S3 gets, the more de-correlated workloads become, which improves reliability for everyone.

Looking forward
The vision for S3 extends beyond being a storage service to becoming the universal foundation for all data and AI workloads. Our vision is simple: you store any type of data one time in S3, and you work with it directly, without moving data between specialized systems. This approach reduces costs, eliminates complexity, and removes the need for multiple copies of the same data.

Here are a few standout launches from recent years:

  • S3 Tables – Fully managed Apache Iceberg tables with automated maintenance that optimize query efficiency and reduce storage cost over time.
  • S3 Vectors – Native vector storage for semantic search and RAG, supporting up to 2 billion vectors per index with sub-100ms query latency. In only 5 months (July–December 2025), you created more than 250,000 indices, ingested more than 40 billion vectors, and performed more than 1 billion queries.
  • S3 Metadata – Centralized metadata for instant data discovery, removing the need to recursively list large buckets for cataloging and significantly reducing time-to-insight for large data lakes.

Each of these capabilities operates at S3’s cost structure, so handling multiple data types that traditionally required expensive databases or specialized systems is now economically feasible at scale.

From 1 petabyte to hundreds of exabytes. From 15 cents to 2 cents per gigabyte. From simple object storage to the foundation for AI and analytics. Through it all, our five fundamentals–security, durability, availability, performance, and elasticity–remain unchanged, and your code from 2006 still works today.

Here’s to the next 20 years of innovation on Amazon S3.

— seb

A React-based phishing page with credential exfiltration via EmailJS, (Fri, Mar 13th)

On Wednesday, a phishing message made its way into our handler inbox that contained a fairly typical low-quality lure, but turned out to be quite interesting in the end nonetheless. That is because the accompanying credential stealing web page was dynamically constructed using React and used a legitimate e-mail service for credential collection.

Introducing account regional namespaces for Amazon S3 general purpose buckets

Today, we’re announcing a new feature of Amazon Simple Storage Service (Amazon S3) that you can use to create general purpose buckets in your own account regional namespace, simplifying bucket creation and management as your data storage needs grow in size and scope. You can create general purpose bucket names across multiple AWS Regions with assurance that your desired bucket names will always be available for you to use.

With this feature, you can predictably name and create general purpose buckets in your own account regional namespace by appending your account’s unique suffix in your requested bucket name. For example, I can create the bucket mybucket-123456789012-us-east-1-an in my account regional namespace. mybucket is the bucket name prefix that I specified, then I add my account regional suffix to the requested bucket name: -123456789012-us-east-1-an. If another account tries to create buckets using my account’s suffix, their requests will be automatically rejected.

Your security teams can use AWS Identity and Access Management (AWS IAM) policies and AWS Organizations service control policies to enforce that your employees only create buckets in their account regional namespace using the new s3:x-amz-bucket-namespace condition key, helping teams adopt the account regional namespace across your organization.
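
For illustration, a minimal boto3 sketch of such a guardrail follows; the condition key comes from the announcement above, but the exact policy shape is my assumption, so verify it against the S3 User Guide before relying on it.

import json
import boto3

iam = boto3.client("iam")

# Hypothetical policy: deny bucket creation unless the request targets the
# account regional namespace. The s3:x-amz-bucket-namespace key is named in
# the announcement; the statement layout is an assumption.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireAccountRegionalNamespace",
        "Effect": "Deny",
        "Action": "s3:CreateBucket",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-bucket-namespace": "account-regional"}
        },
    }],
}

iam.create_policy(
    PolicyName="require-account-regional-buckets",
    PolicyDocument=json.dumps(policy_document),
)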

Create your S3 bucket with account regional namespace in action
To get started, choose Create bucket in the Amazon S3 console. To create your bucket in your account regional namespace, choose Account regional namespace. If you choose this option, you can create your bucket with any name that is unique to your account and region.

This configuration supports all of the same features as general purpose buckets in the global namespace. The only difference is that only your account can use bucket names with your account’s suffix. The bucket name prefix and the account regional suffix combined must be between 3 and 63 characters long.

Using the AWS Command Line Interface (AWS CLI), you can create a bucket with account regional namespace by specifying the x-amz-bucket-namespace:account-regional request header and providing a compatible bucket name.

$ aws s3api create-bucket --bucket mybucket-123456789012-us-east-1-an \
   --bucket-namespace account-regional \
   --region us-east-1

You can use the AWS SDK for Python (Boto3) to create a bucket with account regional namespace using the CreateBucket API request.

import boto3

class AccountRegionalBucketCreator:
    """Creates S3 buckets using account-regional namespace feature."""
    
    ACCOUNT_REGIONAL_SUFFIX = "-an"
    
    def __init__(self, s3_client, sts_client):
        self.s3_client = s3_client
        self.sts_client = sts_client
    
    def create_account_regional_bucket(self, prefix):
        """
        Creates an account-regional S3 bucket with the specified prefix.
        Resolves caller AWS account ID using the STS GetCallerIdentity API.
        Format: <prefix>-<account_id>-<region>-an
        """
        account_id = self.sts_client.get_caller_identity()['Account']
        region = self.s3_client.meta.region_name
        bucket_name = self._generate_account_regional_bucket_name(
            prefix, account_id, region
        )
        
        params = {
            "Bucket": bucket_name,
            "BucketNamespace": "account-regional"
        }
        if region != "us-east-1":
            params["CreateBucketConfiguration"] = {
                "LocationConstraint": region
            }
        
        return self.s3_client.create_bucket(**params)
    
    def _generate_account_regional_bucket_name(self, prefix, account_id, region):
        return f"{prefix}-{account_id}-{region}{self.ACCOUNT_REGIONAL_SUFFIX}"


if __name__ == '__main__':
    s3_client = boto3.client('s3')
    sts_client = boto3.client('sts')
    
    creator = AccountRegionalBucketCreator(s3_client, sts_client)
    response = creator.create_account_regional_bucket('test-python-sdk')
    
    print(f"Bucket created: {response}")

You can update your infrastructure as code (IaC) tools, such as AWS CloudFormation, to simplify creating buckets in your account regional namespace. AWS CloudFormation offers the pseudo parameters AWS::AccountId and AWS::Region, making it easy to build CloudFormation templates that create account regional namespace buckets.

The following example demonstrates how you can update your existing CloudFormation templates to start creating buckets in your account regional namespace:

BucketName: !Sub "amzn-s3-demo-bucket-${AWS::AccountId}-${AWS::Region}-an"
BucketNamespace: "account-regional"

Alternatively, you can use the BucketNamePrefix property in your CloudFormation template. With BucketNamePrefix, you provide only the customer-defined portion of the bucket name, and the account regional namespace suffix is added automatically based on the requesting AWS account and the specified Region.

BucketNamePrefix: 'amzn-s3-demo-bucket'
BucketNamespace: "account-regional"

Using these options, you can build a custom CloudFormation template to easily create general purpose buckets in your account regional namespace.

Things to know
You can’t rename your existing global buckets to bucket names with account regional namespace, but you can create new general purpose buckets in your account regional namespace. Also, the account regional namespace is only supported for general purpose buckets. S3 table buckets and vector buckets already exist in an account-level namespace and S3 directory buckets exist in a zonal namespace.

To learn more, visit Namespaces for general purpose buckets in the Amazon S3 User Guide.

Now available
Creating general purpose buckets in your account regional namespace in Amazon S3 is now available in 37 AWS Regions including the AWS China and AWS GovCloud (US) Regions. You can create general purpose buckets in your account regional namespace at no additional cost.

Give it a try in the Amazon S3 console today and send feedback to AWS re:Post for Amazon S3 or through your usual AWS Support contacts.

Channy