Unlock the potential of your supply chain data and gain actionable insights with AWS Supply Chain Analytics

Today, we’re announcing the general availability of AWS Supply Chain Analytics powered by Amazon QuickSight. This new feature helps you to build custom report dashboards using your data in AWS Supply Chain. With this feature, your business analysts or supply chain managers can perform custom analyses, visualize data, and gain actionable insights for your supply chain management operations.

AWS Supply Chain Analytics leverages the AWS Supply Chain data lake and embeds Amazon QuickSight authoring tools directly in the AWS Supply Chain user interface. This integration gives you a unified, configurable experience for creating custom insights, metrics, and key performance indicators (KPIs) for your operational analytics.

In addition, AWS Supply Chain Analytics provides prebuilt dashboards that you can use as-is or modify based on your needs. At launch, you will have the following prebuilt dashboards:

  1. Plan-Over-Plan Variance: Presents a comparison between two demand plans, showcasing variances in both units and values across key dimensions such as product, site, and time periods.
  2. Seasonality Analytics: Presents a year-over-year view of demand, illustrating trends in average demand quantities and highlighting seasonality patterns through heatmaps at both monthly and weekly levels.

Let’s get started
Let me walk you through the features of AWS Supply Chain Analytics.

The first step is to enable AWS Supply Chain Analytics. To do this, navigate to Settings, then select Organizations and choose Analytics. Here, I can Enable data access for Analytics.

Now I can edit existing roles or create a new role with analytics access. To learn more, visit User permission roles.

Once this feature is enabled, when I log in to AWS Supply Chain I can access the AWS Supply Chain Analytics feature by selecting either the Connecting to Analytics card or Analytics on the left navigation menu.

Here, I have an embedded Amazon QuickSight interface ready for me to use. To get started, I navigate to Prebuilt Dashboards.

Then, I can select the prebuilt dashboards I need in the Supply Chain Function dropdown list:

What I like most about these prebuilt dashboards is how easily I can get started. AWS Supply Chain Analytics prepares all the datasets, analyses, and even a dashboard for me. I select Add to begin.

Then, I navigate to the dashboard page, where I can see the results. I can also share this dashboard with my team, which improves collaboration.

If I need to include other datasets to build a custom dashboard, I can navigate to Datasets and select New dataset.

Here, I have AWS Supply Chain data lake as an existing dataset for me to use.

Next, I need to select Create dataset.

Then, I can select a table that I need to include in my analysis. In the Data section, I can see all available fields. All datasets that start with asc_ are generated by AWS Supply Chain, such as data from Demand Planning, Insights, Supply Planning, and others.

I can also find all the datasets I have ingested into AWS Supply Chain. To learn more about data entities, visit the AWS Supply Chain documentation page. One thing to note: if I have not yet ingested data into the AWS Supply Chain data lake, I need to do so before using AWS Supply Chain Analytics. To learn how to ingest data into the data lake, visit the data lake page.

At this stage, I can start my analysis. 

Now available
AWS Supply Chain Analytics is now generally available in all AWS Regions where AWS Supply Chain is offered. Give it a try and transform your operations with AWS Supply Chain Analytics.

Happy building,
— Donnie

Amazon Aurora PostgreSQL Limitless Database is now generally available

Today, we are announcing the general availability of Amazon Aurora PostgreSQL Limitless Database, a new serverless horizontal scaling (sharding) capability of Amazon Aurora. With Aurora PostgreSQL Limitless Database, you can scale beyond the existing Aurora limits for write throughput and storage by distributing a database workload over multiple Aurora writer instances while maintaining the ability to use it as a single database.

When we previewed Aurora PostgreSQL Limitless Database at AWS re:Invent 2023, I explained that it uses a two-layer architecture consisting of multiple database nodes in a DB shard group – either routers or shards – that scale based on the workload.

  • Routers – Nodes that accept SQL connections from clients, send SQL commands to shards, maintain system-wide consistency, and return results to clients.
  • Shards – Nodes that store a subset of sharded table data and full copies of reference data, and accept queries from the routers.

Three types of tables contain your data: sharded, reference, and standard.

  • Sharded tables – These tables are distributed across multiple shards. Data is split among the shards based on the values of designated columns in the table, called shard keys. They are useful for scaling the largest, most I/O-intensive tables in your application.
  • Reference tables – These tables copy data in full on every shard so that join queries can work faster by eliminating unnecessary data movement. They are commonly used for infrequently modified reference data, such as product catalogs and zip codes.
  • Standard tables – These tables are like regular Aurora PostgreSQL tables. Standard tables are all placed together on a single shard so join queries can work faster by eliminating unnecessary data movement. You can create sharded and reference tables from standard tables.

Once you have created the DB shard group and your sharded and reference tables, you can load massive amounts of data into Aurora PostgreSQL Limitless Database and query data in those tables using standard PostgreSQL queries. To learn more, visit Limitless Database architecture in the Amazon Aurora User Guide.

Getting started with Aurora PostgreSQL Limitless Database
You can use the AWS Management Console or the AWS Command Line Interface (AWS CLI) to create a new DB cluster that uses Aurora PostgreSQL Limitless Database, add a DB shard group to the cluster, and query your data.

1. Create an Aurora PostgreSQL Limitless Database Cluster
Open the Amazon Relational Database Service (Amazon RDS) console and choose Create database. For Engine options, choose Aurora (PostgreSQL Compatible) and Aurora PostgreSQL with Limitless Database (Compatible with PostgreSQL 16.4).

For Aurora PostgreSQL Limitless Database, enter a name for your DB shard group and values for minimum and maximum capacity, measured in Aurora Capacity Units (ACUs), across all routers and shards. The initial number of routers and shards in a DB shard group is determined by this maximum capacity. Aurora PostgreSQL Limitless Database scales a node up to a higher capacity when its current capacity is too low to handle the load, and scales it down when its current capacity is higher than needed.

For DB shard group deployment, choose whether to create standbys for the DB shard group: no compute redundancy, one compute standby in a different Availability Zone, or two compute standbys in two different Availability Zones.

You can set the remaining DB settings to what you prefer and choose Create database. After the DB shard group is created, it is displayed on the Databases page.

You can connect, reboot, or delete a DB shard group, or you can change the capacity, split a shard, or add a router in the DB shard group. To learn more, visit Working with DB shard groups in the Amazon Aurora User Guide.
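
You can also create the DB shard group programmatically. Here is a minimal AWS CLI sketch; the identifiers are placeholders, and you should verify the option names with aws rds create-db-shard-group help:

aws rds create-db-shard-group \
    --db-shard-group-identifier my-shard-group \
    --db-cluster-identifier my-limitless-cluster \
    --max-acu 768 \
    --compute-redundancy 1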

2. Create Aurora PostgreSQL Limitless Database tables
As shared previously, Aurora PostgreSQL Limitless Database has three table types: sharded, reference, and standard. You can convert standard tables to sharded or reference tables to distribute or replicate existing standard tables or create new sharded and reference tables.

You can use variables to create sharded and reference tables by setting the table creation mode. The tables that you create will use this mode until you set a different mode. The following examples show how to use these variables to create sharded and reference tables.

For example, create a sharded table named items with a shard key composed of the item_id and item_cat columns.

SET rds_aurora.limitless_create_table_mode='sharded';
SET rds_aurora.limitless_create_table_shard_key='{"item_id", "item_cat"}';
CREATE TABLE items(item_id int, item_cat varchar, val int, item text);

Now, create a sharded table named item_description with a shard key composed of the item_id and item_cat columns and collocate it with the items table.

SET rds_aurora.limitless_create_table_collocate_with='items';
CREATE TABLE item_description(item_id int, item_cat varchar, color_id int, ...);

You can also create a reference table named colors.

SET rds_aurora.limitless_create_table_mode='reference';
CREATE TABLE colors(color_id int primary key, color varchar);

You can find information about Limitless Database tables by using the rds_aurora.limitless_tables view, which contains information about tables and their types.

postgres_limitless=> SELECT * FROM rds_aurora.limitless_tables;

 table_gid | local_oid | schema_name | table_name  | table_status | table_type  | distribution_key
-----------+-----------+-------------+-------------+--------------+-------------+------------------
         1 |     18797 | public      | items       | active       | sharded     | HASH (item_id, item_cat)
         2 |     18641 | public      | colors      | active       | reference   |
(2 rows)

You can convert standard tables into sharded or reference tables. During the conversion, data is moved from the standard table to the distributed table, then the source standard table is deleted. To learn more, visit Converting standard tables to limitless tables in the Amazon Aurora User Guide.

3. Query Aurora PostgreSQL Limitless Database tables
Aurora PostgreSQL Limitless Database is compatible with PostgreSQL syntax for queries. You can query your Limitless Database using psql or any other connection utility that works with PostgreSQL. Before querying tables, you can load data into Aurora Limitless Database tables by using the COPY command or by using the data loading utility.
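
For example, here is a minimal sketch of loading the items table from a client-side CSV file using psql’s \copy meta-command (the file name and column list are assumptions based on the table created earlier):

\copy items (item_id, item_cat, val, item) FROM 'items.csv' WITH (FORMAT csv)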

To run queries, connect to the cluster endpoint, as shown in Connecting to your Aurora Limitless Database DB cluster. All PostgreSQL SELECT queries are performed on the router to which the client sends the query and shards where the data is located.

To achieve a high degree of parallel processing, Aurora PostgreSQL Limitless Database uses two querying methods: single-shard queries and distributed queries. The router determines whether your query is single-shard or distributed and processes it accordingly.

  • Single-shard query – A query where all the data needed for the query is on one shard. The entire operation can be performed on one shard, including any result set generated. When the query planner on the router encounters a query like this, the planner sends the entire SQL query to the corresponding shard.
  • Distributed query – A query run on a router and more than one shard. The query is received by one of the routers. The router creates and manages the distributed transaction, which is sent to the participating shards. The shards create a local transaction with the context provided by the router, and the query is run.

For an example of a single-shard query, first set the following parameters to configure the output of the EXPLAIN command.

postgres_limitless=> SET rds_aurora.limitless_explain_options = shard_plans, single_shard_optimization;
SET

postgres_limitless=> EXPLAIN SELECT * FROM items WHERE item_id = 25;

                     QUERY PLAN
--------------------------------------------------------------
 Foreign Scan  (cost=100.00..101.00 rows=100 width=0)
   Remote Plans from Shard postgres_s4:
         Index Scan using items_ts00287_id_idx on items_ts00287 items_fs00003  (cost=0.14..8.16 rows=1 width=15)
           Index Cond: (id = 25)
 Single Shard Optimized
(5 rows) 

To learn more about the EXPLAIN command, see EXPLAIN in the PostgreSQL documentation.

For an example of a distributed query, insert new items named Book and Pen into the items table.

postgres_limitless=> INSERT INTO items (item_id, item_cat, item) VALUES (25, 'office', 'Book'), (26, 'office', 'Pen');

This creates a distributed transaction across two shards. When the query runs, the router sets a snapshot time and passes the statement to the shards that own Book and Pen. The router coordinates an atomic commit across both shards and returns the result to the client.

You can use distributed query tracing, a tool to trace and correlate queries in PostgreSQL logs across Aurora PostgreSQL Limitless Database. To learn more, visit Querying Limitless Database in the Amazon Aurora User Guide.

Some SQL commands aren’t supported. For more information, see Aurora Limitless Database reference in the Amazon Aurora User Guide.

Things to know
Here are a few things that you should know about this feature:

  • Compute – You can have only one DB shard group per DB cluster, and you can set the maximum capacity of a DB shard group to between 16 and 6144 ACUs. Contact us if you need more than 6144 ACUs. The initial number of routers and shards is determined by the maximum capacity that you set when you create a DB shard group. The number of routers and shards doesn’t change when you modify the maximum capacity of a DB shard group. To learn more, see the table of the number of routers and shards in the Amazon Aurora User Guide.
  • Storage – Aurora PostgreSQL Limitless Database only supports the Amazon Aurora I/O-Optimized DB cluster storage configuration. Each shard has a maximum capacity of 128 TiB. Reference tables have a size limit of 32 TiB for the entire DB shard group. To reclaim storage space by cleaning up your data, you can use the vacuuming utility in PostgreSQL.
  • Monitoring – You can use Amazon CloudWatch, Amazon CloudWatch Logs, or Performance Insights to monitor Aurora PostgreSQL Limitless Database. There are also new statistics functions and views and wait events for Aurora PostgreSQL Limitless Database that you can use for monitoring and diagnostics.

Now available
Amazon Aurora PostgreSQL Limitless Database is available today with PostgreSQL 16.4 compatibility in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions.

Give Aurora PostgreSQL Limitless Database a try in the Amazon RDS console. For more information, visit the Amazon Aurora User Guide and send feedback to AWS re:Post for Amazon Aurora or through your usual AWS support contacts.

Channy

October 2024 Activity with Username chenzilong, (Thu, Oct 31st)

After reviewing the Top 10 Not So Common SSH Usernames and Passwords [1] published by Johannes two weeks ago, I noticed activity from one username on his list that we don't really know much about. Beginning 12 October 2024, my DShield sensor started recording one of the usernames mentioned in his diary that I had never seen before (I have over a year of data). The username chenzilong has been used with 5 different passwords, including some combinations of the same username. So far, this activity has come from 302 different IPs.

Scans for RDP Gateways, (Wed, Oct 30th)

RDP is one of the most prominent entry points into networks. Ransomware actors have taken down many large networks after initially entering via RDP. Credentials for RDP access are often traded by "initial access brokers".

I noticed today an uptick in scans for "/RDWeb/Pages/en-US/login.aspx". This path is commonly exposed by RDP gateways, and there are even well-known Google dorks that assist in finding these endpoints. The scans I observed today are spread between several hundred IP addresses, none of which "sticks out" as more frequent than the others. This could indicate a large botnet being used to scan for this endpoint.

There are three variations of this URL being used, all with the same effect of detecting the presence of an RDP gateway:

/RDWeb
/RDWeb/Pages/en-US/login.aspx
/RDWeb/Pages/

The first two are used the most, and the third (/RDWeb/Pages/) is used very little.
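
If you want to check your own web server logs for this activity, a quick sketch like the following works for a combined-format access log (adjust the log path for your server):

grep -i "/RDWeb" /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head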

This data is from a subset of our honeypots that appears particularly attractive to these scans. The idea isn't new, but it appears that some group has "rediscovered" it.

[Graph: scans per day for RDWeb, showing a significant increase over the last 3 days.]

Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing Microsoft.PowerShell.PlatyPS 1.0.0-Preview1

PlatyPS is the primary tool for creating the PowerShell help displayed using Get-Help.
PowerShell help files are stored in an XML format known as
Microsoft Assistance Markup Language (MAML). Prior to PlatyPS, the help files were hand
authored using complex tool chains. Markdown is widely used in the open source community,
supported by many editors including Visual Studio Code, and easier to author. PlatyPS
simplifies the process by allowing you to write help files in Markdown and then convert them
to MAML.

Announcing Microsoft.PowerShell.PlatyPS

We’re pleased to announce the release of Microsoft.PowerShell.PlatyPS 1.0.0-Preview1. With
this release, there are two versions of PlatyPS.

  • platyPS v0.14.2 is the current version of PlatyPS that’s used to create PowerShell help files
    in Markdown format.
  • Microsoft.PowerShell.PlatyPS is the new version of PlatyPS that includes several improvements:
    • Provides a more accurate description of a PowerShell cmdlet and its parameters
    • Increased performance – processes 1000s of Markdown files in seconds
    • Creates an object model of the help file that you can manipulate in memory
    • Provides cmdlets that you can chain together to perform complex operations

Our main goal for this release is to address long-standing issues, add more schema-driven
features, and improve validity checking along with performance. This release is a substantial
rewrite with all new cmdlets. If you have scripts that use the older version of PlatyPS, you must
rewrite them to use the new cmdlets.

In this Preview release, we focused on:

  • Rewrite in C#, leveraging markdig for parsing Markdown.
  • New Markdown schema that includes all elements needed for Get-Help, plus information that was
    previously unavailable.
  • The new cmdlets produce objects, supporting chaining cmdlets for complex operations.
  • Full serialization to YAML to support our publishing pipeline.
  • Automatic conversion of existing Markdown to the new object model.
  • Export of the object model to Markdown, YAML, and MAML.
  • The module contains the following cmdlets:
    • Compare-CommandHelp
    • Export-MamlCommandHelp
    • Export-MarkdownCommandHelp
    • Export-MarkdownModuleFile
    • Export-YamlCommandHelp
    • Export-YamlModuleFile
    • Import-MamlHelp
    • Import-MarkdownCommandHelp
    • Import-MarkdownModuleFile
    • Import-YamlCommandHelp
    • Import-YamlModuleFile
    • New-CommandHelp
    • New-MarkdownCommandHelp
    • New-UpdateableHelp
    • Test-MarkdownCommandHelp
    • Update-CommandHelp
    • Update-MarkdownCommandHelp

Microsoft.PowerShell.PlatyPS runs on:

  • Windows PowerShell 5.1+
  • PowerShell 7+ on Windows, Linux, and macOS

Installing Microsoft.PowerShell.PlatyPS

To begin working with Microsoft.PowerShell.PlatyPS 1.0.0-Preview1, download and install the
module from PSGallery.

Install-PSResource -Name Microsoft.PowerShell.PlatyPS -Prerelease
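
After installing, you can list the cmdlets the module exports to confirm it loaded (a quick check, not part of the original announcement):

Get-Command -Module Microsoft.PowerShell.PlatyPS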

Documentation to get started

For the preview1 release, the cmdlet reference is available in the GitHub repository at
Microsoft.PowerShell.PlatyPS. We’re working on publishing the documentation to the Learn
platform before the next release.

For an example of how to use the new cmdlets, see Example #1 in New-MarkdownCommandHelp.
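As a starting point, here is a minimal sketch of generating Markdown help for a module. The module name and output path are placeholders, and the parameter names follow the preview documentation, so verify them against the linked cmdlet reference:

# Generate Markdown help files for every cmdlet in a module (MyModule is a placeholder).
$commands = Get-Command -Module MyModule
New-MarkdownCommandHelp -CommandInfo $commands -OutputFolder .\docs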

Call to action

Our goal is to make it easier for you to update and maintain PowerShell help files. We value your
feedback. Stop by our GitHub repository and let us know of any issues you find.

Jason Helmick

Sr. Product Manager, PowerShell

Simplify and enhance Amazon S3 static website hosting with AWS Amplify Hosting

We are announcing an integration between AWS Amplify Hosting and Amazon Simple Storage Service (Amazon S3). Now, you can deploy static websites with content stored in your S3 buckets and serve them over a content delivery network (CDN) with just a few clicks.

AWS Amplify Hosting is a fully managed service for hosting static sites that handles various aspects of deploying a website. It gives you benefits such as custom domain configuration with SSL, redirects, custom headers, and deployment on a globally available CDN powered by Amazon CloudFront.

When deploying a static website, Amplify remembers the connection between your S3 bucket and deployed website, so you can easily update your website with a single click when you make changes to website content in your S3 bucket. Using AWS Amplify Hosting is the recommended approach for static website hosting because it offers more streamlined and faster deployment without extensive setup.

Here’s how the integration works starting from the Amazon S3 console:

Deploying a static website using the Amazon S3 console
Let’s use this new integration to host a personal website directly from my S3 bucket.

To get started, I navigate to my bucket in the Amazon S3 console. Here’s the list of all the content in that S3 bucket:

To use the new integration with AWS Amplify Hosting, I navigate to the Properties section, then I scroll down until I find Static website hosting and select Create Amplify app.

Then, it redirects me to the Amplify page and populates the details from my S3 bucket. Here, I configure my App name and the Branch name. Then, I select Save and deploy.

Within seconds, AWS Amplify has deployed my static website, and I can visit the site by selecting Visit deployed URL. If I make any subsequent changes in my S3 bucket for my static website, I need to redeploy my application in the Amplify console by selecting the Deploy updates button.

I can also use the AWS Command Line Interface (AWS CLI) for programmatic deployment. To do that, I need to get the values for required parameters, such as APP_ID and BRANCH_NAME from my AWS Amplify dashboard. Here’s the command I use for deployment:

aws amplify start-deployment --app-id APP_ID --branch-name BRANCH_NAME --source-url-type BUCKET_PREFIX --source-url s3://S3_BUCKET/S3_PREFIX
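
Not covered in the walkthrough above, but you can confirm the deployment finished by listing the most recent job for the branch (same placeholders as the previous command):

aws amplify list-jobs --app-id APP_ID --branch-name BRANCH_NAME --max-items 1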

After Amplify Hosting generates a URL for my website, I can optionally configure a custom domain for my static website. To do that, I navigate to my apps in AWS Amplify and select Custom domains in the navigation pane. Then, I select Add domain to start configuring a custom domain for my static website. Learn more about setting up custom domains in the Amplify Hosting User Guide.

In the following screenshot, I have my static website configured with my custom domain. Amplify also issues an SSL/TLS certificate for my domain so that all traffic is secured through HTTPS.

Now, I have my static site ready, and I can check it out at https://donnie.id.

Things you need to know
More available features – AWS Amplify Hosting has more features you can use for your static websites. Visit the AWS Amplify product page to learn more.

Deployment options – You can get started deploying a static website from Amazon S3 using the Amplify Hosting console, AWS CLI, or AWS SDKs.

Pricing – For pricing information, visit the Amazon S3 pricing page and the AWS Amplify pricing page.

Availability – Amplify Hosting integration with Amazon S3 is now available in AWS Regions where Amplify Hosting is available.

Start building your static website with this new integration. To learn more about Amazon S3 static website hosting with AWS Amplify, visit the AWS Amplify Hosting User Guide.

Happy building,

Donnie

Apple Updates Everything, (Mon, Oct 28th)

Today, Apple released updates for all of its operating systems. These updates include new AI features. For iOS 18 users, the only upgrade path is iOS 18.1, which includes the AI features; the same applies to users of macOS 15 Sequoia. For older operating system versions (iOS 17, macOS 13, and macOS 14), patches are available that address only the security issues.

Celebrating 10 Years of Amazon ECS: Powering a Decade of Containerized Innovation

Today, we celebrate 10 years of Amazon Elastic Container Service (ECS) and its incredible journey of pushing the boundaries of what’s possible in the cloud! What began as a solution to streamline running Docker containers on Amazon Web Services (AWS) has evolved into a cornerstone technology, offering both impressive performance and operational simplicity, including a serverless option with AWS Fargate for seamless container orchestration.

Over the past decade, Amazon ECS has become a trusted solution for countless organizations, providing the reliability and performance that customers such as SmugMug rely on to power their operations without being bogged down by infrastructure challenges. As Andrew Shieh, Principal Engineer at SmugMug, shares, Amazon ECS has been the “unsung hero” behind their seamless transition to AWS and efficient handling of massive data operations, such as migrating petabytes of photos to Amazon Simple Storage Service (Amazon S3). “The blazingly fast container spin-ups allow us to deliver awesome experiences to our customers,” he adds. It’s this kind of dependable support that has made Amazon ECS a favorite among developers and platform teams, helping them scale their solutions and innovate over the years.

In the early 2010s, as containerized services like Docker gained traction, developers started looking for efficient ways to manage and scale their applications in this new paradigm. Traditional infrastructure was cumbersome, and managing containers at scale was challenging. Amazon ECS arrived in 2014, just when developers were looking to adopt containers at scale. It offered a fully managed, and reliable solution that streamlined container orchestration on AWS. Teams could focus on building and deploying applications without the overhead of managing clusters or complex infrastructure, ushering in a new era of cloud-native development.

When the Amazon ECS team set out to build the service, their vision was clear. As Deepak Singh, the product manager who launched Amazon ECS and now VP of Next Generation Developer Experience, said at the time, “Our customers wanted a solution that was deeply integrated with AWS, that could work for them at scale and could grow as they grew.” Amazon ECS was designed to use the best of what AWS has to offer—scalability, availability, resilience, and security—to give customers the confidence to run their applications in production environments.

Evolution
Amazon ECS has consistently innovated for customers over the past decade. It marked the beginning of the container innovation journey at AWS, paving the way for a broader ecosystem of container-related services that have transformed how businesses build and manage applications.

Smartsheet proudly sings the praises of the significant impact that Amazon ECS, and especially AWS Fargate, have had on their business to date. “Our teams can deploy more frequently, increase throughput, and reduce the engineering time to deploy from hours to minutes. We’ve gone from weekly deployments to deployments that we do multiple times a day. And from what used to be hours of at least two engineers’ time, we’ve been able to shave that down to several minutes,” said Skylar Graika, distinguished engineer at Smartsheet. “Within the last year, we have been able to scale out its capacity by 50 times, and by leveraging deep integrations across AWS services, we have improved efficiencies and simplified our security and compliance process. Additionally, by adopting AWS Graviton with the Fargate deployments, we’ve seen a 20 percent reduction in cost.”

Amazon ECS played a pivotal role as the starting point for a decade of container evolution at AWS, and today it still stands as one of the most scalable and reliable container orchestration solutions, powering massive operations such as Prime Day 2024, where Amazon launched an impressive 77.24 million ECS tasks; Rufus, a shopping assistant experience powered by generative AI that uses Amazon ECS as part of its core architecture; and so many others.

Rustem Feyzkhanov, ML engineering manager at Instrumental and AWS Machine Learning Hero, is quick to recognize the increased efficiency gained from adopting the service. “Amazon ECS has become an indispensable tool in our work,” says Rustem. “Over the past years, it has simplified container management and service scaling, allowing us to focus on development rather than infrastructure. This service makes it possible for application code teams to co-own infrastructure, and that speeds up the development process.”

Timeline
Let’s have a look at some of the key milestones that have shaped the evolution of ECS, marking pivotal moments that changed how customers harness the power of containers on AWS.

2014 – Introducing Amazon EC2 Container Service! – Check out this nostalgic blog post, which marked the release of ECS in preview mode. It shows how much functionality the service already launched with, making a big impact from the get-go! Customers could already run, stop, and manage Docker containers on a cluster of Amazon Elastic Compute Cloud (EC2) instances, with built-in resource management and task scheduling. It became generally available on April 9, 2015.

2015 – Amazon ECS auto-scaling – With the introduction of added support for more Amazon CloudWatch metrics, customers could now automatically scale their clusters in and out by monitoring the CPU and memory usage in the cluster and configuring threshold values for auto scaling. I think this is a great example of how seemingly modest releases can have a huge impact for customers. Another impactful release was the introduction of Amazon ECR, a fully managed container registry that streamlines container storage and deployment.

2016 – Application Load Balancer (ALB) for ECS – The introduction of ALB for ECS provided advanced routing features for containerized applications. ALB enabled more efficient load balancing across microservices, improving traffic management and scalability for ECS workloads. Windows users also benefitted from various releases this year, including added support for Windows Server 2016 with several AMIs and beta support for Windows Server Containers.

2017 – Introducing AWS Fargate! – Fargate was a huge leap forward towards customers being able to run containers without managing the underlying infrastructure, which significantly streamlined their operations. Developers no longer had to worry about provisioning, scaling, or maintaining the EC2 instances on which their containers ran and could now focus entirely on their application logic while AWS handled the rest. This helped them to scale faster and innovate more freely, accelerating their cloud-centered journeys and transforming how they approached containerized applications.

2018 – AWS Auto Scaling – With this release, teams could now easily build scaling plans for their Amazon ECS tasks. This year also saw the release of many improvements, such as moving Amazon ECR to its own console experience outside of the Amazon ECS console, integration of Amazon ECS with AWS Cloud Map, and many others. Additionally, AWS Fargate continued to expand into Regions worldwide.

2019 – Arm-based Graviton2 instances available on Amazon ECS – AWS Graviton2 was released during a time when many businesses were turning their attention towards reprioritizing their sustainability goals. With a focus on improved performance and lower power usage, EC2 instances powered by Graviton2 were supported on Amazon ECS from day 1 of their launch. Customers could take full advantage of this groundbreaking custom chipset specially built for the cloud. Another great highlight from this year was the launch of AWS Fargate Spot, which helped customers achieve significant cost reductions.

2020 – Bottlerocket – An open-source, Linux-based operating system optimized for running containers. Designed to improve security and simplify updates, Bottlerocket helped Amazon ECS users achieve greater efficiency and stability in managing containerized workloads.

2021 – ECS Exec – Amazon ECS introduced ECS Exec in March 2021. With it, customers could run commands directly inside a running container on Amazon EC2 or AWS Fargate. This feature provided enhanced troubleshooting and debugging capabilities without requiring containers to be modified or redeployed, streamlining operational workflows. This year also saw the release of Amazon ECS Windows containers, which streamlined operations for those running them in their clusters.

2022 – Amazon ECS introduces Service Connect – The release of ECS Service Connect marked a pivotal moment for organizations running microservices architectures on Amazon ECS because it abstracted away much of the complexity involved in service-to-service networking. This dramatically streamlined management of communication between services. With a native service discovery and service mesh capability, developers could now define and manage how their services interacted with each other seamlessly, improving observability, resilience, and security without the need to manage custom networking or load balancers.

2023 – Amazon GuardDuty ECS runtime monitoring – Last year, Amazon GuardDuty introduced ECS Runtime Monitoring for AWS Fargate, enhancing security by detecting potential threats within running containers. This feature provides continuous visibility into container workloads, improving security posture without additional performance overhead.

2024 – Amazon ECS Fargate with EBS integration – In January this year, Amazon ECS and AWS Fargate added support for Amazon EBS volumes, enabling persistent storage for containers. This integration allows users to attach EBS volumes to Fargate tasks, making it much easier to deploy storage and support data-intensive applications.

Where are we now?
Amazon ECS is in an exciting place right now, enjoying a level of maturity that allows it to keep innovating while delivering huge value to both new and existing customers. This year has seen many improvements to the service, making it more secure, cost-effective, and straightforward to use.

This includes releases such as support for automatic traffic encryption using TLS in Service Connect; enhanced stopped-task error messages, which make it more straightforward to troubleshoot task launch failures; and the ability to restart containers without having to relaunch the task. The introduction of Graviton2-based instances with AWS Fargate Spot provided customers with a great opportunity to double down on their cost savings.

As usual with AWS, the Amazon ECS team are very focused on delighting customers. “With Amazon ECS and AWS Fargate, we make it really easy for you to focus on your differentiated business logic while leveraging all the powerful compute that AWS offers without having to manage it,” says Nick Coult, director of Product and Science, Serverless Compute. “Our vision with these services was, and still is, to enable you to minimize infrastructure management, write less code, architect for extensibility, and drive high performance, resilience, and security. And, we have continuously innovated in these areas with this goal in mind over the past 10 years. At Amazon ECS, we remain steadfast in our commitment to delivering agility without compromising security, empowering developers with an exceptional experience, unlocking broader, simpler integrations, and new possibilities for emerging workloads like generative AI.”

Conclusion
Looking back on its history, it’s clear to me that ECS is a testament to the AWS approach of working backwards from customer needs. From its early days of streamlining container orchestration to the transformative introduction of Fargate and Service Connect, ECS has consistently evolved to remove barriers for developers and businesses alike.

As we look to the future, I think ECS will keep pushing boundaries, enabling even more innovative and scalable solutions. I encourage everyone to continue exploring what ECS has to offer, discovering new ways to build and pushing the platform to its full potential. There’s a lot more to come, and I’m excited to see where the journey takes us.

Learning resources
If you’re new to Amazon ECS, I recommend you read the comprehensive and accessible Getting Started With Amazon ECS guide.

When you’re ready to skill up with some hands-on free training, I recommend trying this self-paced Amazon ECS workshop, which covers many aspects of the service, including many of the features mentioned in this post.

Thank you, Amazon ECS, and thank you to all of you who use this service and continue to help us make it better for you. Here’s to another 10 years of container innovation!

Self-contained HTML phishing attachment using Telegram to exfiltrate stolen credentials, (Mon, Oct 28th)

Phishing authors discovered long ago that adding HTML attachments to the messages they send out can have significant benefits for them – especially since an HTML file can contain an entire credential-stealing web page and does not need to reach out to the internet for any reason other than to send the credentials a victim enters in a login form to an attacker-controlled server[1]. Since this approach can be significantly more effective than just pointing recipients to a URL somewhere on the internet, the technique of sending out entire credential-stealing pages as attachments has become quite commonplace.
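
Because the campaign in the title exfiltrates credentials through the Telegram Bot API, defenders triaging a batch of suspicious HTML attachments can look for that callback directly; a minimal sketch (the attachment directory is a placeholder):

grep -rl "api.telegram.org/bot" ./attachments/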