One of my hunting rules triggered on some suspicious Python code, and, diving deeper, I found an interesting example of DLL side-loading. This technique involves placing a malicious DLL with the same name and export structure as a legitimate DLL in a location the application checks first, causing the application to load the malicious DLL instead of the intended one. It is a classic vulnerability that has been seen for years in many software products. The attacker also implemented simple tricks to bypass classic security controls.
Monthly Archives: March 2025
Static Analysis of GUID Encoded Shellcode, (Mon, Mar 17th)
I wanted to figure out how to statically decode the GUID encoded shellcode Xavier wrote about in his diary entry "Shellcode Encoded in UUIDs".
Here is the complete Python script:
I use re-search.py to select the GUIDs:
I then decode the hexadecimal data with my tool hex-to-bin.py. Option -H is needed to ignore all non-hexadecimal characters.
Notice that the text that resembles a User Agent String is mangled. That's because of the way GUIDs are encoded into binary data.
Take a GUID like this one:
{00112233-4455-6677-8899-AABBCCDDEEFF}
When it is encoded to binary data, the first 3 parts are encoded little-endian, and the last 2 parts are encoded big-endian, giving this byte sequence:
33 22 11 00 55 44 77 66 88 99 AA BB CC DD EE FF
I will now use my translate.py tool to reproduce this encoding (in the original Python script, this encoding is done with a Win32 API call: ctypes.windll.Rpcrt4.UuidFromStringA).
First I split the byte stream into chunks of 16 bytes (the length of a GUID) with a Python list comprehension:
Next I rewrite the GUID (data[i:i+16]) by changing the order of the first 3 parts: data[i:i+4][::-1] + data[i+4:i+6][::-1] + data[i+6:i+8][::-1] ([::-1] is the expression used to reverse a sequence in Python):
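For readers who want to reproduce this re-encoding in plain Python, outside the tools used above, here is a minimal sketch; the single GUID below is just the example from earlier, and the file handling is left out on purpose:
import uuid
# Example GUID from above; a real sample contains many GUIDs.
guids = ["00112233-4455-6677-8899-AABBCCDDEEFF"]
# uuid.UUID(...).bytes_le yields the same layout as UuidFromStringA:
# first three fields little-endian, last two fields big-endian.
shellcode = b"".join(uuid.UUID(g).bytes_le for g in guids)
print(shellcode.hex(" "))  # 33 22 11 00 55 44 77 66 88 99 aa bb cc dd ee ff
# The same result with the slicing expression used above, applied to the
# hex-decoded GUID data in chunks of 16 bytes.
data = bytes.fromhex("00112233445566778899AABBCCDDEEFF")
chunks = [data[i:i + 16] for i in range(0, len(data), 16)]
reencoded = b"".join(c[0:4][::-1] + c[4:6][::-1] + c[6:8][::-1] + c[8:16] for c in chunks)
assert reencoded == shellcode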
Now I can analyze this shellcode with my Cobalt Strike analysis tool 1768.py:
This gives me information like the IPv4 address of the C2, the port, the path, …
What I don't see is the license ID. That's because the decoded data has trailing null bytes:
These 2 trailing null bytes are the result of the GUID encoding: each GUID is 16 bytes, so the decoded data has a length which is a multiple of 16 bytes, while the shellcode has a length which is not a multiple of 16 bytes. If I drop these 2 trailing null bytes, 1768.py will detect the license ID:
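If you want to do that trim in plain Python rather than with a tool, a minimal sketch (the file names are hypothetical placeholders):
# Drop the 2 trailing null bytes added by the GUID padding.
with open("decoded_shellcode.bin", "rb") as infile:
    data = infile.read()
with open("decoded_shellcode_trimmed.bin", "wb") as outfile:
    outfile.write(data[:-2])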
Didier Stevens
Senior handler
blog.DidierStevens.com
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Mirai Bot now incorporating (malformed?) DrayTek Vigor Router Exploits, (Sun, Mar 16th)
Last October, Forescout published a report disclosing several vulnerabilities in DrayTek routers. According to Forescout, about 700,000 devices were exposed to these vulnerabilities [1]. At the time, DrayTek released firmware updates for affected routers [2]. Forescout also noted that multiple APTs were targeting these devices.
Interestingly, Forescout's report used the URL "/cgi-bin/malfunction.cgi", a URL returning a 404 status for the DrayTek routers I investigated. On the other hand, later publications by Fortinet and others used "mainfunction.cgi", which appears to be the actual vulnerable script.
Most of the attacks we are seeing just search for DrayTek routers using URLs like "/cgi-bin/mainfunction.cgi" without any arguments. These go back to the end of March 2020. Starting in June 2020, we saw the first exploit attempts for the "keyPath" vulnerability, and these attacks still flare up from time to time. The other vulnerable parameter often exploited is "cvmcfgupload". Below is a plot showing the prevalence of these two attacks, and a third one, which I saw flare up again yesterday.
This third attack is, I believe, the result of a typo, unless the attackers are looking for a completely different vulnerability. The attack URL is identical to the ones above but is missing the dash in "cgi-bin".
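If you want to check your own web or honeypot logs for these variants, here is a rough Python sketch; the log file name, format, and labels are assumptions and not part of our sensor tooling:
import re
from collections import Counter
# Most specific patterns first; each log line is counted once.
patterns = {
    "keyPath exploit": re.compile(r"/cgi-bin/mainfunction\.cgi.*keyPath", re.I),
    "cvmcfgupload exploit": re.compile(r"/cgi-bin/mainfunction\.cgi.*cvmcfgupload", re.I),
    "missing-dash variant (cgibin)": re.compile(r"/cgibin/mainfunction\.cgi", re.I),
    "other mainfunction.cgi requests": re.compile(r"/cgi-bin/mainfunction\.cgi", re.I),
}
counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        for label, pattern in patterns.items():
            if pattern.search(line):
                counts[label] += 1
                break
for label, count in counts.most_common():
    print(f"{label}: {count}")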
The goal of these attacks is the same as the others: They attempt to upload and execute copies of a bot, usually various variants of Mirai. I guess that they are adding so many vulnerabilities to these bots that a couple of ineffective exploits won't matter.
For an old vulnerability like this, it is odd to see a large spike all of a sudden, and even more curious given that the exploit will likely not work. If anybody has any insight, let me know.
The latest malformed exploit attempts to download the usual simple multi-architecture bash script:
hxxp://45[.]116.104.123/hiroz3x.sh
Next, it attempts to download the actual bot:
hxxp://45[.]116.104.123/h0r0zx00xh0r0zx00xdefault/h0r0zx00x.x86
A quick string analysis of the bot shows attempts to exploit other vulnerabilities and likely some brute force component. A Virustotal analysis can be found here:
https://www.virustotal.com/gui/file/80bfbbbe5c5b9c78e391291a087d14370e342bd0ec651d9097a8b04694e7c9b9
[1] https://www.forescout.com/resources/draybreak-draytek-research/
[2] https://www.draytek.com/support/resources/routers#version
—
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
AWS Pi Day 2025: Data foundation for analytics and AI
Every year on March 14 (3.14), AWS Pi Day highlights AWS innovations that help you manage and work with your data. What started in 2021 as a way to commemorate the fifteenth launch anniversary of Amazon Simple Storage Service (Amazon S3) has now grown into an event that highlights how cloud technologies are transforming data management, analytics, and AI.
This year, AWS Pi Day returns with a focus on accelerating analytics and AI innovation with a unified data foundation on AWS. The data landscape is undergoing a profound transformation as AI becomes central to most enterprise strategies, with analytics and AI workloads increasingly converging on much of the same data and workflows. You need an easy way to access all your data and use all your preferred analytics and AI tools in a single integrated experience. This AWS Pi Day, we’re introducing a slate of new capabilities that help you build unified and integrated data experiences.
The next generation of Amazon SageMaker: The center of all your data, analytics, and AI
At re:Invent 2024, we introduced the next generation of Amazon SageMaker, the center of all your data, analytics, and AI. SageMaker includes virtually all the components you need for data exploration, preparation and integration, big data processing, fast SQL analytics, machine learning (ML) model development and training, and generative AI application development. With this new generation of Amazon SageMaker, SageMaker Lakehouse provides you with unified access to your data and SageMaker Catalog helps you to meet your governance and security requirements. You can read the launch blog post written by my colleague Antje to learn more details.
Core to the next generation of Amazon SageMaker is SageMaker Unified Studio, a single data and AI development environment where you can use all your data and tools for analytics and AI. SageMaker Unified Studio is now generally available.
SageMaker Unified Studio facilitates collaboration among data scientists, analysts, engineers, and developers as they work on data, analytics, AI workflows, and applications. It brings familiar tools from AWS analytics and artificial intelligence and machine learning (AI/ML) services, including data processing, SQL analytics, ML model development, and generative AI application development, into a single user experience.
SageMaker Unified Studio also brings selected capabilities from Amazon Bedrock into SageMaker. You can now rapidly prototype, customize, and share generative AI applications using foundation models (FMs) and advanced features such as Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, Amazon Bedrock Agents, and Amazon Bedrock Flows to create tailored solutions aligned with your requirements and responsible AI guidelines, all within SageMaker.
Last but not least, Amazon Q Developer is now generally available in SageMaker Unified Studio. Amazon Q Developer provides generative AI powered assistance for data and AI development. It helps you with tasks like writing SQL queries, building extract, transform, and load (ETL) jobs, and troubleshooting, and is available in the Free tier and Pro tier for existing subscribers.
You can learn more about SageMaker Unified Studio in this recent blog post written by my colleague Donnie.
During re:Invent 2024, we also launched Amazon SageMaker Lakehouse as part of the next generation of SageMaker. SageMaker Lakehouse unifies all your data across Amazon S3 data lakes, Amazon Redshift data warehouses, and third-party and federated data sources. It helps you build powerful analytics and AI/ML applications on a single copy of your data. SageMaker Lakehouse gives you the flexibility to access and query your data in-place with Apache Iceberg–compatible tools and engines. In addition, zero-ETL integrations automate the process of bringing data into SageMaker Lakehouse from AWS data sources such as Amazon Aurora or Amazon DynamoDB and from applications such as Salesforce, Facebook Ads, Instagram Ads, ServiceNow, SAP, Zendesk, and Zoho CRM. The full list of integrations is available in the SageMaker Lakehouse FAQ.
Building a data foundation with Amazon S3
Building a data foundation is the cornerstone of accelerating analytics and AI workloads, enabling organizations to seamlessly manage, discover, and utilize their data assets at any scale. Amazon S3 is the world’s best place to build a data lake, with virtually unlimited scale, and it provides the essential foundation for this transformation.
I’m always astonished to learn about the scale at which we operate Amazon S3: It currently holds over 400 trillion objects and exabytes of data, and it processes a mind-blowing 150 million requests per second. Just a decade ago, not even 100 customers were storing more than a petabyte (PB) of data on S3. Today, thousands of customers have surpassed the 1 PB milestone.
Amazon S3 stores exabytes of tabular data, and it averages over 15 million requests to tabular data per second. To help you reduce the undifferentiated heavy lifting when managing your tabular data in S3 buckets, we announced Amazon S3 Tables at AWS re:Invent 2024. S3 Tables are the first cloud object store with built-in support for Apache Iceberg, and they are specifically optimized for analytics workloads, resulting in up to threefold faster query throughput and up to tenfold higher transactions per second compared to self-managed tables.
Today, we’re announcing the general availability of Amazon S3 Tables integration with Amazon SageMaker Lakehouse. Amazon S3 Tables now integrate with Amazon SageMaker Lakehouse, making it easy for you to access S3 Tables from AWS analytics services such as Amazon Redshift, Amazon Athena, Amazon EMR, AWS Glue, and Apache Iceberg–compatible engines such as Apache Spark or PyIceberg. SageMaker Lakehouse enables centralized management of fine-grained data access permissions for S3 Tables and other sources and consistently applies them across all engines.
For those of you who use a third-party catalog, have a custom catalog implementation, or only need basic read and write access to tabular data in a single table bucket, we’ve added new APIs that are compatible with the Iceberg REST Catalog standard. This enables any Iceberg-compatible application to seamlessly create, update, list, and delete tables in an S3 table bucket. For unified data management across all of your tabular data, data governance, and fine-grained access controls, you can also use S3 Tables with SageMaker Lakehouse.
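As a rough illustration of what this compatibility enables, the sketch below uses PyIceberg to connect to the S3 Tables Iceberg REST endpoint; the Region, account ID, table bucket ARN, namespace, and table name are placeholders, and the endpoint and signing properties should be checked against the S3 Tables documentation for your setup:
from pyiceberg.catalog import load_catalog
# Placeholders: Region, account ID, and table bucket name.
catalog = load_catalog(
    "s3tables",
    **{
        "type": "rest",
        "uri": "https://s3tables.us-east-1.amazonaws.com/iceberg",
        "warehouse": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket",
        "rest.sigv4-enabled": "true",
        "rest.signing-name": "s3tables",
        "rest.signing-region": "us-east-1",
    },
)
# Basic catalog operations: list namespaces and tables, inspect a table schema.
print(catalog.list_namespaces())
print(catalog.list_tables("mynamespace"))
table = catalog.load_table("mynamespace.mytable")
print(table.schema())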
To help you access S3 Tables, we’ve launched updates in the AWS Management Console. You can now create a table, populate it with data, and query it directly from the S3 console using Amazon Athena, making it easier to get started and analyze data in S3 table buckets.
The following screenshot shows how to access Athena directly from the S3 console.
When I select Query tables with Athena or Create table with Athena, it opens the Athena console on the correct data source, catalog, and database.
Since re:Invent 2024, we’ve continued to add new capabilities to S3 Tables at a rapid pace. For example, we added schema definition support to the CreateTable API, and you can now create up to 10,000 tables in an S3 table bucket. We also launched S3 Tables in eight additional AWS Regions, with the most recent being Asia Pacific (Seoul, Singapore, Sydney) on March 4, with more to come. You can refer to the S3 Tables AWS Regions page of the documentation for the list of the eleven Regions where S3 Tables are available today.
Amazon S3 Metadata, announced during re:Invent 2024, has been generally available since January 27. It’s the fastest and easiest way to discover and understand your S3 data, with automated, easily queried metadata that updates in near real time. S3 Metadata works with S3 object tags. Tags help you logically group data for a variety of reasons, such as to apply IAM policies to provide fine-grained access, specify tag-based filters to manage object lifecycle rules, and selectively replicate data to another Region. In Regions where S3 Metadata is available, you can capture and query custom metadata that is stored as object tags. To reduce the cost associated with object tags when using S3 Metadata, Amazon S3 reduced pricing for S3 object tagging by 35 percent in all Regions, making it cheaper to use custom metadata.
AWS Pi Day 2025
Over the years, AWS Pi Day has showcased major milestones in cloud storage and data analytics. This year, the AWS Pi Day virtual event will feature a range of topics designed for developers and technical decision-makers, data engineers, AI/ML practitioners, and IT leaders. Key highlights include deep dives, live demos, and expert sessions on all the services and capabilities I discussed in this post.
By attending this event, you’ll learn how you can accelerate your analytics and AI innovation. You’ll learn how you can use S3 Tables with native Apache Iceberg support and S3 Metadata to build scalable data lakes that serve both traditional analytics and emerging AI/ML workloads. You’ll also discover the next generation of Amazon SageMaker, the center for all your data, analytics, and AI, to help your teams collaborate and build faster from a unified studio, using familiar AWS tools with access to all your data whether it’s stored in data lakes, data warehouses, or third-party or federated data sources.
For those looking to stay ahead of the latest cloud trends, AWS Pi Day 2025 is an event you can’t miss. Whether you’re building data lakehouses, training AI models, building generative AI applications, or optimizing analytics workloads, the insights shared will help you maximize the value of your data.
Tune in today and explore the latest in cloud data innovation. Don’t miss the opportunity to engage with AWS experts, partners, and customers shaping the future of data, analytics, and AI.
If you missed the virtual event on March 14, you can visit the event page at any time—we will keep all the content available on-demand there!
Collaborate and build faster with Amazon SageMaker Unified Studio, now generally available
Today, we’re announcing the general availability of Amazon SageMaker Unified Studio, a single data and AI development environment where you can find and access all of the data in your organization and act on it using the best tool for the job across virtually any use case. It was introduced in preview during AWS re:Invent 2024, and my colleague Antje summarized it as follows:
SageMaker Unified Studio (preview) is a single data and AI development environment. It brings together functionality and tools from the range of standalone “studios,” query editors, and visual tools that we have today in Amazon Athena, Amazon EMR, AWS Glue, Amazon Redshift, Amazon Managed Workflows for Apache Airflow (Amazon MWAA), and the existing SageMaker Studio.
Here’s a video to see Amazon SageMaker Unified Studio in action:
SageMaker Unified Studio breaks down silos in data and tools, giving data engineers, data scientists, data analysts, ML developers and other data practitioners a single development experience. This saves development time and simplifies access control management so data practitioners can focus on what really matters to them—building data products and AI applications.
This post focuses on several important announcements that we’re excited to share:
- New capabilities for Amazon Bedrock in SageMaker Unified Studio — The integration now supports new foundation models (FMs), including Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1, enables data sourcing from Amazon Simple Storage Service (Amazon S3) folders within projects for knowledge base creation, extends guardrail functionality to flows, and provides a streamlined user management interface for domain administrators to manage model governance across multiple Amazon Web Services (AWS) accounts.
- Amazon Q Developer is now generally available in SageMaker Unified Studio — Amazon Q Developer, the most capable generative AI assistant for software development, streamlines development in Amazon SageMaker Unified Studio by providing natural language, conversational interfaces that simplify tasks like writing SQL queries, building ETL jobs, troubleshooting, and generating real-time code suggestions.
To get started, go to the Amazon SageMaker console and create a SageMaker Unified Studio domain. To learn more, visit Create an Amazon SageMaker Unified Studio domain in the AWS documentation.
New capabilities for Amazon Bedrock in SageMaker Unified Studio
The capabilities of Amazon Bedrock within Amazon SageMaker Unified Studio offer a governed collaborative environment for developers to rapidly create and customize generative AI applications. This intuitive interface caters to developers of all skill levels, providing seamless access to the high-performance FMs offered in Amazon Bedrock and advanced customization tools for collaborative development of tailored generative AI applications.
Since the preview launch, several new FMs have become available in Amazon Bedrock and are fully integrated with SageMaker Unified Studio, including Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1. These models can be used for building generative AI apps and chatting in the playground in SageMaker Unified Studio.
Here’s how you can choose Anthropic’s Claude 3.7 Sonnet in the model selection for your project.
You can also source data or documents from S3 folders within your project and select specific FMs when creating knowledge bases.
During preview, we introduced Amazon Bedrock Guardrails to help you implement safeguards for your Amazon Bedrock application based on your use cases and responsible AI policies. Now, Amazon Bedrock Guardrails is extended to Amazon Bedrock Flows with this general availability release.
Additionally, we have streamlined generative AI setup for associated accounts with a new user management interface in SageMaker Unified Studio, making it straightforward for domain administrators to grant associated account admins access to model governance projects. This enhancement eliminates the need for command line operations, streamlining the process of configuring generative AI capabilities across multiple AWS accounts.
These new features eliminate barriers between data, tools, and builders in the generative AI development process. You and your team will gain a unified development experience by incorporating the powerful generative AI capabilities of Amazon Bedrock — all within the same workspace.
Amazon Q Developer is now generally available in SageMaker Unified Studio
Amazon Q Developer is now generally available in Amazon SageMaker Unified Studio, providing data professionals with generative AI–powered assistance across the entire data and AI development lifecycle.
Amazon Q Developer integrates with the full suite of AWS analytics and AI/ML tools and services within SageMaker Unified Studio, including data processing, SQL analytics, machine learning model development, and generative AI application development, to accelerate collaboration and help teams build data and AI products faster. To get started, you can select the Amazon Q Developer icon.
For new users of SageMaker Unified Studio, Amazon Q Developer serves as an invaluable onboarding assistant. It can explain core concepts such as domains and projects, provide guidance on setting up environments, and answer your questions.
Amazon Q Developer helps you discover and understand data using powerful natural language interactions with SageMaker Catalog. What makes this implementation particularly powerful is how Amazon Q Developer combines broad knowledge of AWS analytics and AI/ML services with the user’s context to provide personalized guidance.
You can chat about your data assets through a conversational interface, asking questions such as “Show all payment related datasets” without needing to navigate complex metadata structures.
Amazon Q Developer offers SQL query generation through its integration with the built-in query editor available in SageMaker Unified Studio. Data professionals of varying skill levels can now express their analytical needs in natural language, receiving properly formatted SQL queries in return.
For example, you can ask, “Analyze payment method preferences by age group and region” and Amazon Q Developer will generate the appropriate SQL with proper joins across multiple tables.
Additionally, Amazon Q Developer is available to assist with troubleshooting and generating real-time code suggestions in SageMaker Unified Studio Jupyter notebooks, as well as with building ETL jobs.
Now available
- Availability — Amazon SageMaker Unified Studio is now available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London), and South America (São Paulo). Learn more about the availability of these capabilities on the supported Regions documentation page.
- Amazon Q Developer subscription — The free tier of Amazon Q Developer is available by default in SageMaker Unified Studio, requiring no additional setup or configuration. If you already have Amazon Q Developer Pro Tier subscriptions, you can use those enhanced capabilities within the SageMaker Unified Studio environment. For more information, visit the documentation page.
- Amazon Bedrock capabilities — To learn more about the capabilities of Amazon Bedrock in Amazon SageMaker Unified Studio, refer to this documentation page.
Start building with Amazon SageMaker Unified Studio today. For more information, visit the Amazon SageMaker Unified Studio page.
Happy building!
—
Amazon S3 Tables integration with Amazon SageMaker Lakehouse is now generally available
At re:Invent 2024, we launched Amazon S3 Tables, the first cloud object store with built-in Apache Iceberg support to streamline storing tabular data at scale, and Amazon SageMaker Lakehouse to simplify analytics and AI with a unified, open, and secure data lakehouse. We also previewed S3 Tables integration with Amazon Web Services (AWS) analytics services for you to stream, query, and visualize S3 Tables data using Amazon Athena, Amazon Data Firehose, Amazon EMR, AWS Glue, Amazon Redshift, and Amazon QuickSight.
Our customers wanted to simplify the management and optimization of their Apache Iceberg storage, which led to the development of S3 Tables. At the same time, they were using SageMaker Lakehouse to break down the data silos that impede analytics collaboration and insight generation. Pairing S3 Tables with SageMaker Lakehouse, together with built-in integration with AWS analytics services, gives them a comprehensive platform that unifies access to multiple data sources and enables both analytics and machine learning (ML) workflows.
Today, we’re announcing the general availability of Amazon S3 Tables integration with Amazon SageMaker Lakehouse to provide unified S3 Tables data access across various analytics engines and tools. You can access SageMaker Lakehouse from Amazon SageMaker Unified Studio, a single data and AI development environment that brings together functionality and tools from AWS analytics and AI/ML services. All S3 tables data integrated with SageMaker Lakehouse can be queried from SageMaker Unified Studio and engines such as Amazon Athena, Amazon EMR, Amazon Redshift, and Apache Iceberg-compatible engines like Apache Spark or PyIceberg.
With this integration, you can simplify building secure analytic workflows where you can read and write to S3 Tables and join with data in Amazon Redshift data warehouses and third-party and federated data sources, such as Amazon DynamoDB or PostgreSQL.
You can also centrally set up and manage fine-grained access permissions on the data in S3 Tables along with other data in the SageMaker Lakehouse and consistently apply them across all analytics and query engines.
S3 Tables integration with SageMaker Lakehouse in action
To get started, go to the Amazon S3 console and choose Table buckets from the navigation pane and select Enable integration to access table buckets from AWS analytics services.
Now you can create your table bucket to integrate with SageMaker Lakehouse. To learn more, visit Getting started with S3 Tables in the AWS documentation.
1. Create a table with Amazon Athena in the Amazon S3 console
You can create a table, populate it with data, and query it directly from the Amazon S3 console using Amazon Athena with just a few steps. Select a table bucket and select Create table with Athena, or you can select an existing table and select Query table with Athena.
When you want to create a table with Athena, you should first specify a namespace for your table. The namespace in an S3 table bucket is equivalent to a database in AWS Glue, and you use the table namespace as the database in your Athena queries.
Choose a namespace and select Create table with Athena. This opens the Query editor in the Athena console, where you can create a table in your S3 table bucket or query data in the table.
2. Query with SageMaker Lakehouse in the SageMaker Unified Studio
Now you can access unified data across S3 data lakes, Redshift data warehouses, and third-party and federated data sources in SageMaker Lakehouse directly from SageMaker Unified Studio.
To get started, go to the SageMaker console and create a SageMaker Unified Studio domain and project using a sample project profile: Data Analytics and AI-ML model development. To learn more, visit Create an Amazon SageMaker Unified Studio domain in the AWS documentation.
After the project is created, navigate to the project overview and scroll down to project details to note down the project role Amazon Resource Name (ARN).
Go to the AWS Lake Formation console and grant permissions for AWS Identity and Access Management (IAM) users and roles. In the Principals section, select the <project role ARN> noted in the previous paragraph. Choose Named Data Catalog resources in the LF-Tags or catalog resources section and select the table bucket name you created for Catalogs. To learn more, visit Overview of Lake Formation permissions in the AWS documentation.
When you return to SageMaker Unified Studio, you can see your table bucket project under Lakehouse in the Data menu in the left navigation pane of the project page. When you choose Actions, you can select how to query your table bucket data: with Amazon Athena, Amazon Redshift, or a JupyterLab notebook.
When you choose Query with Athena, it automatically opens the Query Editor, where you can run data query language (DQL) and data manipulation language (DML) queries on S3 tables using Athena.
Here is a sample query using Athena:
select * from "s3tablecatalog/s3tables-integblog-bucket”.”proddb"."customer" limit 10;
To query with Amazon Redshift, you should first set up Amazon Redshift Serverless compute resources for data query analysis. Then choose Query with Redshift and run SQL in the Query Editor. If you want to use a JupyterLab notebook, you should create a new JupyterLab space in Amazon EMR Serverless.
3. Join data from other sources with S3 Tables data
With S3 Tables data now available in SageMaker Lakehouse, you can join it with data from data warehouses, online transaction processing (OLTP) sources such as relational or non-relational databases, Iceberg tables, and other third-party sources to gain more comprehensive and deeper insights.
For example, you can add connections to data sources such as Amazon DocumentDB, Amazon DynamoDB, Amazon Redshift, PostgreSQL, MySQL, Google BigQuery, or Snowflake and combine data using SQL without extract, transform, and load (ETL) scripts.
Now you can run a SQL query in the Query editor to join the data in S3 Tables with the data in DynamoDB.
Here is a sample query, run in Athena, that joins the S3 table with the DynamoDB table:
select * from "s3tablescatalog/s3tables-integblog-bucket"."blogdb"."customer",
"dynamodb1"."default"."customer_ddb" where cust_id=pid limit 10;
To learn more about this integration, visit Amazon S3 Tables integration with Amazon SageMaker Lakehouse in the AWS documentation.
Now available
S3 Tables integration with SageMaker Lakehouse is now generally available in all AWS Regions where S3 Tables are available. To learn more, visit the S3 Tables product page and the SageMaker Lakehouse page.
Give S3 Tables a try in the SageMaker Unified Studio today and send feedback to AWS re:Post for Amazon S3 and AWS re:Post for Amazon SageMaker or through your usual AWS Support contacts.
As part of the annual celebration of the launch of Amazon S3, we will introduce more awesome launches for Amazon S3 and Amazon SageMaker. To learn more, join the AWS Pi Day event on March 14.
— Channy
File Hashes Analysis with Power BI from Data Stored in DShield SIEM, (Wed, Mar 12th)
I previously used Power BI [2] to analyze DShield sensor data, and this time I wanted to show how it can be used to select a certain type of data as a large dataset and export it for analysis. I ran a query in Elastic Discover and exported the results in CSV format to analyze in Power BI. The first step was to run a query in Discover and select the past 60 days with the following query: file.name : *
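For reference, here is a rough programmatic equivalent of that Discover export; the host, credentials, index pattern, and field layout of the DShield SIEM Elastic instance are assumptions, so adjust them to your environment:
import csv
from elasticsearch import Elasticsearch
es = Elasticsearch("https://localhost:9200",
                   basic_auth=("elastic", "changeme"),
                   verify_certs=False)
# Same filter as the Discover query: file.name : * over the past 60 days.
query = {
    "bool": {
        "filter": [
            {"exists": {"field": "file.name"}},
            {"range": {"@timestamp": {"gte": "now-60d/d"}}},
        ]
    }
}
with open("file_names.csv", "w", newline="") as output:
    writer = csv.writer(output)
    writer.writerow(["timestamp", "file.name"])
    for hit in es.search(index="*", query=query, size=10000)["hits"]["hits"]:
        source = hit["_source"]
        writer.writerow([source.get("@timestamp"),
                         source.get("file", {}).get("name")])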
Get started with Microsoft Desired State Configuration v3.0.0
This is the second post in a multi-part series about the new release of DSC.
Microsoft Desired State Configuration (DSC) v3.0.0 is a modern, cross-platform configuration
management framework designed to help administrators and developers declaratively define and enforce
system states. Whether you’re managing infrastructure, deploying applications, or automating system
configurations, DSC provides a flexible and scalable approach to configuration as code.
TIP
This post uses the following terminology:
- DSC refers to Desired State Configuration (DSC) v3.0.0.
- PSDSC refers to PowerShell Desired State Configuration (PSDSC) v1.1 and v2.
Installing DSC
To get started, follow these steps to install DSC on your system:
On Windows, you can install DSC from the Microsoft Store using winget. By installing from the Store or using winget, you get automatic updates for DSC.
winget search DesiredStateConfiguration
winget install --id <insert-package-id> --source msstore
On Linux and macOS, you can install DSC using the following steps:
- Download the latest release from the PowerShell/DSC repository.
- Expand the release archive.
- Add the folder containing the expanded archive contents to your PATH environment variable.
Getting started with the DSC command
The dsc command operates on a configuration document or invokes specific resources to manage settings.
Run the following command to display the dsc command help:
dsc --help
Apply configuration or invoke specific DSC resources
Usage: dsc.exe [OPTIONS] <COMMAND>
Commands:
completer Generate a shell completion script
config Apply a configuration document
resource Invoke a specific DSC resource
schema Get the JSON schema for a DSC type
help Print this message or the help of the given subcommand(s)
Options:
-l, --trace-level <TRACE_LEVEL> Trace level to use [possible values: error, warn, info, debug, trace]
-t, --trace-format <TRACE_FORMAT> Trace format to use [default: default] [possible values: default, plaintext, json]
-h, --help Print help
-V, --version Print version
Use the following command to get version information.
dsc --version
dsc 3.0.0
To learn more, see the dsc command reference documentation.
Access DSC resources with dsc resource
The dsc resource command displays or invokes a specific DSC resource. The dsc resource command contains subcommands for listing DSC resources and invoking them directly.
Use the following command to display a list of installed DSC resources.
dsc resource list
Type Kind Version Caps RequireAdapter Description
----------------------------------------------------------------------------------------------------
Microsoft.DSC.Transitional/RunCommandOnSet Resource 0.1.0 gs------ Takes a si…
Microsoft.DSC/Assertion Group 0.1.0 gs--t--- `test` wil…
Microsoft.DSC/Group Group 0.1.0 gs--t--- All resour…
Microsoft.DSC/PowerShell Adapter 0.1.0 gs--t-e- Resource a…
Microsoft.Windows/RebootPending Resource 0.1.0 g------- Returns in…
Microsoft.Windows/Registry Resource 0.1.0 gs-w-d-- Manage Win…
Microsoft.Windows/WMI Adapter 0.1.0 g------- Resource a…
Microsoft.Windows/WindowsPowerShell Adapter 0.1.0 gs--t--- Resource a…
When the command includes the adapter option, dsc checks for any resource adapters with a matching name. Classic PowerShell resources are part of the Microsoft.Windows/WindowsPowerShell adapter.
dsc resource list --adapter Microsoft.Windows/WindowsPowerShell
Partial listing
Type Kind Version Caps RequireAdapter
----------------------------------------------------------------------------------------------------
PSDesiredStateConfiguration/Archive Resource 1.1 gs--t--- Microsoft.Windo…
PSDesiredStateConfiguration/Environment Resource 1.1 gs--t--- Microsoft.Windo…
PSDesiredStateConfiguration/File Resource 1.0.0 gs--t--- Microsoft.Windo…
PSDesiredStateConfiguration/Group Resource 1.1 gs--t--- Microsoft.Windo…
PSDesiredStateConfiguration/GroupSet Resource 1.1 gs--t--- Microsoft.Windo…
PSDesiredStateConfiguration/Log Resource 1.1 gs--t--- Microsoft.Windo…
To learn more, see the dsc resource command reference documentation.
Manage a basic configuration
The dsc config command includes subcommands for managing the resource instances defined in a DSC configuration document.
The following YAML configuration document calls the classic PowerShell resource WindowsFeature
from the PSDesiredStateConfiguration module to install a Windows web server (IIS) on Windows
Server.
$schema: https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json
resources:
- name: Use Windows PowerShell resources
  type: Microsoft.Windows/WindowsPowerShell
  properties:
    resources:
    - name: Web server install
      type: PSDesiredStateConfiguration/WindowsFeature
      properties:
        Name: Web-Server
        Ensure: Present
To set a machine to the configuration, use the dsc config set subcommand. The following example shows how you can send the configuration document to DSCv3 using PowerShell:
Get-Content ./web.config.dsc.yaml | dsc config set
To learn more, see the dsc config command reference documentation.
Next steps
Learn more about Authoring Enhancements in Desired State Configuration v3.0.0.
Call to action
For more information about DSC v3.0, see the DSCv3 documentation. We value your feedback. Stop
by our GitHub repository and let us know of any issues you find.
Jason Helmick
Sr. Product Manager, PowerShell
The post Get started with Microsoft Desired State Configuration v3.0.0 appeared first on PowerShell Team.
Announcing Microsoft Desired State Configuration v3.0.0
This is the first post in a multi-part series about the new release of DSC.
We’re pleased to announce the General Availability of Microsoft’s Desired State Configuration (DSC)
version 3.0.0.
This version marks a significant evolution in cloud-native configuration management
for cross-platform environments. DSC is a declarative configuration and orchestration platform that
defines a standard way of exposing settings for applications and services. It’s a tool for managing
systems and applications by describing what they should look like rather than how to make it that
way. DSC simplifies system, service, and application management by separating what to do from how to
do it.
Benefits of DSC
- Declarative and Idempotent: DSC configuration documents are declarative JSON or YAML files that define the desired state of your system in a straightforward way. They include the instances of DSC resources that need configuration. DSC ensures the system matches that state, repeatedly if needed, without making unnecessary changes.
- Flexible: DSC Resources define how to manage state for a particular system or application component. Resources can be authored in any language, not only PowerShell.
- Cross-Platform: DSC works on Linux, macOS, and Windows without needing extra tools or dependencies.
- Integratable: Designed to be easily integrated into existing configuration solutions. DSC returns schematized JSON objects for trace messages and command output. Tool developers and script authors can easily validate and parse the output for integration with other configuration tools and frameworks. DSC simplifies how you call it by accepting JSON from stdin for all configuration and resource commands. DSC resources include a manifest that defines the resource properties as a JSON schema and how to invoke the resource. You can reuse this definition across various toolchains for tighter integration with DSC (a minimal Python integration sketch follows this list).
- Backwards compatible: This release of DSC can use all existing PowerShell 7 and Windows PowerShell DSC resources.
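As a minimal sketch of that integration story, the following Python script builds a configuration document, pipes it to dsc config test over stdin, and parses the JSON output. It assumes dsc is on your PATH and uses the Microsoft.Windows/Registry resource; the key path and property name are illustrative only:
import json
import subprocess
# Build a minimal configuration document (JSON works as well as YAML).
document = {
    "$schema": "https://raw.githubusercontent.com/PowerShell/DSC/main/schemas/2024/04/config/document.json",
    "resources": [
        {
            "name": "Example registry key",
            "type": "Microsoft.Windows/Registry",
            "properties": {"keyPath": "HKCU\\Example"},
        }
    ],
}
# Pipe the document to `dsc config test` over stdin and capture the output.
result = subprocess.run(
    ["dsc", "config", "test"],
    input=json.dumps(document),
    capture_output=True,
    text=True,
    check=True,
)
# The output is schematized JSON describing, per resource, whether the
# actual state matches the desired state.
print(json.dumps(json.loads(result.stdout), indent=2))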
With DSC, you can:
- Create configuration files that define how your environment should look.
- Write DSC resources in any programming language to manage your systems and applications.
- Invoke DSC resources to perform specific actions.
- Define a standard way for applications and services to make their settings discoverable and
usable. This means that you can discover and invoke resources directly, even without DSC.
Differences from PowerShell DSC
Windows PowerShell 5.1 includes PowerShell Desired State Configuration (PSDSC). We refer to this as
classic DSC, which encompasses PSDSC v1.1 and v2. However, DSC can use any classic DSC resources
that exist today, including the script-based and class-based PSDSC resources. You can use PSDSC
resources in DSC with both Windows PowerShell and PowerShell.
The release of DSC is a major change to the DSC platform. DSC differs from PSDSC in a few important
ways:
- DSC no longer includes or supports the Local Configuration Manager (LCM).
- DSC doesn't depend on PowerShell. You can use DSC without PowerShell installed and manage resources written in bash, python, C#, Go, or any other language.
- DSC doesn't include a local configuration manager. DSC is invoked as a command-line tool. It doesn't run as a service.
- The PSDSC configuration documents used Managed Object Format (MOF) files. Few tools were able to parse MOF files, especially on non-Windows platforms. DSC isn't compatible with MOF files, but you can still use all existing PSDSC resources.
- DSC is built on industry standards, such as JSON, JSON Schema, and YAML. These standards make DSC easier to integrate into tools and workflows compared to PSDSC.
- DSC configuration documents are defined in JSON or YAML. The configuration documents use expression functions to enable dynamic values, rather than using PowerShell code to retrieve environment variables or join strings.
- DSC supports supplying parameter values for configuration documents at runtime, either as JSON or by pointing to a parameters file, instead of generating a configuration MOF file before applying the configuration.
- Unlike PSDSC, DSC returns strongly structured output. This structured output adheres to a published JSON Schema, making it easier to understand the output and to integrate it into your own scripts, reporting, and other tooling. When you test or set resources and configurations with DSC, the output tells you how a resource is out of the desired state or what DSC changed on your system.
Features of DSC
- Groups: DSC supports a new resource kind that changes how DSC processes a list of resources. Resource authors can define their own group resources and configuration authors can use any of the built-in group resources. The DSC repository has an example that shows how you can group resources together and use the dependsOn keyword to define the order those groups are applied in a configuration.
- Assertions: Use the Microsoft.DSC/Assertion (a special group resource) to validate the environment before running the configuration. The DSC repository has an example that shows how you can use an assertion to manage a resource that should only run on a specific operating system.
- Importers: DSC supports a new resource kind that pulls in a configuration from an external source for reuse in the current configuration document. Resource authors can define their own importer resources and configuration authors can use the built-in Microsoft.DSC/Include resource. The DSC repository has an example that shows how you can use the Microsoft.DSC/Include resource to reuse a separate configuration document file, enabling you to compose a complex configuration from smaller, simpler configuration documents.
- Exporting: DSC supports a new operation that resources can implement to return the list of all existing instances of that resource. You can use the dsc resource export command to get every instance of that resource on a machine. Use the dsc config export command to look up a set of resources and return a new configuration document containing every instance of those resources.
- Configuration functions: DSC configuration documents support a set of functions that enable you to change how DSC processes the resources. The DSC repository has an example that shows how you can reference the output from one resource in the properties of another.
Support lifecycle
DSC follows semantic versioning. The first release of DSC, version 3.0.0, is a Stable release. Patch releases update the third digit of the semantic version number. For example, 3.0.1 is a patch update to 3.0.0. Stable releases receive patches for critical bugs and security vulnerabilities for three months after the next Stable release. For example, version 3.0.0 is supported for three months after 3.1.0 is released. Always update to the latest patch version of the release you're using.
Next steps
As I mentioned at the top of this post, this is the first in a series of posts about the new DSC. The subsequent posts use the following terminology:
- DSC refers to Desired State Configuration (DSC) v3.0.0
- PSDSC refers to PowerShell Desired State Configuration (PSDSC) v1.1 and v2
Now you are ready for the next post: Get Started with Desired State Configuration v3.0.0 (DSC)
Call to action
For more information about Desired State Configuration v3.0 (DSC), visit the
DSC documentation. We value your feedback. Stop by our GitHub repository and let us
know of any issues you find.
Jason Helmick
Sr. Product Manager, PowerShell
The post Announcing Microsoft Desired State Configuration v3.0.0 appeared first on PowerShell Team.
Scans for VMware Hybrid Cloud Extension (HCX) API (Bruteforcing Credentials?), (Wed, Mar 12th)
Today, I noticed increased scans for the VMware Hybrid Cloud Extension (HCX) "sessions" endpoint. These endpoints are sometimes associated with exploit attempts for various VMware vulnerabilities, either to determine if the system is running the extension or to gather additional information to aid exploitation.
The specific URL seen in these scans is
/hybridity/api/sessions
This particular request is likely used to brute force credentials. The "sessions" endpoint expects a JSON payload with the username and password, like:
{
"username": "admin",
"password": "somecomplexpassword"
}
The response will either be a 401 response if the authentication failed or a 200 response if it succeeded. A successful response includes a "sessionId", which will be used as a bearer token to authenticate additional requests.
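To see what such a request looks like, or to test authentication against your own HCX appliance, here is a small Python sketch; the host name and credentials are placeholders:
import requests
HOST = "hcx.example.internal"  # placeholder: your own HCX appliance
resp = requests.post(
    f"https://{HOST}/hybridity/api/sessions",
    json={"username": "admin", "password": "somecomplexpassword"},
    verify=False,  # appliances commonly use self-signed certificates
    timeout=10,
)
if resp.status_code == 200:
    session_id = resp.json().get("sessionId")
    print("Authentication succeeded, sessionId:", session_id)
    # The sessionId is then sent as a bearer token on subsequent requests,
    # e.g. headers={"Authorization": f"Bearer {session_id}"}.
elif resp.status_code == 401:
    print("Authentication failed (401)")
else:
    print("Unexpected response:", resp.status_code)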
So far, we see these requests mostly from one IP address, 107.173.125.163, using randomized valid user agents. The IP address was first seen in our logs yesterday and is scanning for Log4j-vulnerable systems and a few other issues. It may also attempt to brute force a few other web applications. For a complete list of requests sent by this IP address, see this page.
—
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.