Lumma Stealer infection with Sectop RAT (ArechClient2), (Fri, Apr 17th)
Introduction
This diary provides indicators from a Lumma Stealer infection that was followed by Sectop RAT (ArechClient2). I searched for cracked versions of popular copyright-protected software, and I downloaded the initial malware after following the results of one such search. This is a common distribution technique for various families of malware, and I often find Lumma Stealer this way.
In this case, the initial malware for Lumma Stealer was delivered as a password-protected 7-zip archive. The extracted malware is an inflated Windows executable (EXE) file at 806 MB. The EXE is padded with null bytes (0x00), a technique that increases the EXE size while allowing the compressed archive file to remain much smaller. The password-protected archive and inflated EXE file are designed to avoid detection.
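This padding trick can be reversed before static analysis. As an illustrative sketch (not the exact tooling used for this diary), trailing null bytes can be trimmed to recover a much smaller file for hashing and sandbox submission:

```python
def deflate_padded_exe(data: bytes, keep: int = 16) -> bytes:
    """Remove trailing null-byte padding from an inflated executable.

    Keeps a small tail of nulls ('keep') so legitimate trailing zeros
    in the real file are not clipped too aggressively.
    """
    trimmed = data.rstrip(b"\x00")
    padding = len(data) - len(trimmed)
    return data[: len(trimmed) + min(keep, padding)]

# Example: a tiny fake "EXE" body followed by heavy null padding
sample = b"MZ\x90\x00fake-exe-body" + b"\x00" * 1_000_000
deflated = deflate_padded_exe(sample)
print(len(sample), "->", len(deflated))
```

Note that deflating changes the file hash, which is why the diary lists separate SHA256 hashes for the inflated and deflated samples.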
Images from the infection

Shown above: Example of a page with instructions to download the initial malware file.

Shown above: Traffic from the infection filtered in Wireshark.

Shown above: Sectop RAT persistent on an infected Windows host.
Indicators of Compromise
Example of download link from the site advertising cracked versions of copyright-protected software:
hxxps[:]//incolorand[.]com/how-visual-patch-enhances-ui-consistency-across-releases/?utm_source={CID}&utm_term=Adobe%20Premiere%20Pro%20(2026)%20Full%20v26.0.2%20Espa%C3%B1ol%20[Mega]&utm_content={SUBID1}&utm_medium={SUBID2}
Example of URL for page with the file download instructions:
hxxps[:]//mega-nz.goldeneagletransport[.]com/Adobe_Premiere_Pro_%282026%29_Full_v26.0.2_Espa%C3%B1ol_%5BMega%5D.zip?c=ABUZ4WkRgQUA_YUCAFVTFwASAAAAAACh&s=360721
Example of URL for the file download from the above site, which impersonates MEGA:
hxxps[:]//arch.primedatahost3[.]cfd/auth/media/JvWcFd5vUoYTrImvtWQAASTh/Adobe_Premiere_Pro_(2026)_Full_v26.0.2_Espa%C3%B1ol_%5BMega%5D.zip
Downloaded file:
- SHA256 hash: c7489e3bf546c5f2d958ac833cc7dbca4368dfba03a792849bc99c48a6b2a14f
- File size: 3,888,051 bytes
- File name: adobe_premiere_pro_(2026)_full_v26.0.2_español_[mega].7z
- File type: 7-zip archive data, version 0.4
- File description: Password-protected 7-zip archive
- Password: 6919
Extracted malware:
- SHA256 hash: 4849f76dafbef516df91fecfc23a72afffaf77ade51f805eae5ad552bed88923
- File size: 806,127,604 bytes
- File name: appFile.exe
- File type: PE32 executable (GUI) Intel 80386, for MS Windows
- File description: Inflated Windows EXE file for Lumma Stealer, padded with null-bytes
Deflated malware:
- SHA256 hash: 353ddce78d58aef2083ca0ac271af93659cf0039b0b29d0d169fc015bd3610bc
- File size: 7,114,156 bytes
- File type: PE32 executable (GUI) Intel 80386, for MS Windows
- File description: Above appFile.exe with most of null-byte padding removed
- Any.Run sandbox analysis
- Triage sandbox analysis
Lumma Stealer command and control (C2) domains from Triage sandbox analysis:
- cankgmr[.]cyou
- carytui[.]vu
- decrnoj[.]club
- genugsq[.]best
- longmbx[.]click
- mushxhb[.]best
- pomflgf[.]vu
- strikql[.]shop
- ulmudhw[.]shop
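Defanged indicators like the C2 domains above need to be refanged before they can be loaded into a blocklist or DNS sinkhole. A small illustrative helper (my own sketch, not part of the diary tooling):

```python
def refang(indicator: str) -> str:
    """Convert a defanged indicator back to its routable form."""
    return (
        indicator.replace("[.]", ".")
        .replace("[:]", ":")
        .replace("hxxp", "http")
    )

# A couple of the Lumma Stealer C2 domains listed above, defanged
c2_domains = ["cankgmr[.]cyou", "strikql[.]shop"]
blocklist = [refang(d) for d in c2_domains]
print(blocklist)  # ['cankgmr.cyou', 'strikql.shop']
```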
Follow-up malware:
- SHA256 hash: d9b576eb6827f38e33eda037d2cda4261307511303254a8509eeb28048433b2f
- File size: 16,450,560 bytes
- File type: PE32+ executable (DLL) (GUI) x86-64, for MS Windows
- Retrieved from: hxxps[:]//enotsosun[.]pw/NetGui.dll
- Saved to: C:\Users\[username]\AppData\Local\Temp\16XBPQ29ZBG94TYNOA.dll
- File description: 64-bit DLL to install and run Sectop RAT (ArechClient2)
- Run method: rundll32 [file path],LoadForm
- Any.Run sandbox analysis
- Triage sandbox analysis
Example of Sectop RAT C2 traffic from an infected Windows host:
- hxxp[:]//91.92.241[.]102:9000/wmglb
- hxxp[:]//91.92.241[.]102:9000/wbinjget?q=66B553A8B94CE37C16F4EBC863D51FCC
- tcp[:]//91.92.241[.]102:443/ – encoded or otherwise encrypted traffic (not HTTPS/TLS)
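The last entry is notable because traffic on TCP port 443 is normally TLS. One simple way to flag this kind of impostor traffic is to check whether a payload begins with a TLS handshake record header; this is a heuristic sketch of my own, not a detection rule from the diary:

```python
def looks_like_tls(payload: bytes) -> bool:
    """Heuristic: does this payload begin with a TLS handshake record?

    A TLS ClientHello starts with content type 0x16 (handshake) and a
    record version whose major byte is 0x03. Traffic on port 443 that
    fails this check deserves a closer look.
    """
    return len(payload) >= 2 and payload[0] == 0x16 and payload[1] == 0x03

# First bytes of a genuine TLS ClientHello record
print(looks_like_tls(bytes([0x16, 0x03, 0x01, 0x02, 0x00])))  # True
# A custom binary C2 protocol on port 443 fails the check
print(looks_like_tls(b"\x8aA\x00\x10custom-c2-blob"))  # False
```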
—
Bradley Duncan
brad [at] malware-traffic-analysis.net
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Introducing Anthropic’s Claude Opus 4.7 model in Amazon Bedrock
Today, we’re announcing Claude Opus 4.7 in Amazon Bedrock, Anthropic’s most intelligent Opus model for advancing performance across coding, long-running agents, and professional work.
Claude Opus 4.7 is powered by Amazon Bedrock’s next generation inference engine, delivering enterprise-grade infrastructure for production workloads. Bedrock’s new inference engine has brand-new scheduling and scaling logic which dynamically allocates capacity to requests, improving availability particularly for steady-state workloads while making room for rapidly scaling services. It provides zero operator access—meaning customer prompts and responses are never visible to Anthropic or AWS operators—keeping sensitive data private.
According to Anthropic, the Claude Opus 4.7 model provides improvements across the workflows that teams run in production, such as agentic coding, knowledge work, visual understanding, and long-running tasks. Opus 4.7 works better through ambiguity, is more thorough in its problem solving, and follows instructions more precisely.
- Agentic coding: The model extends Opus 4.6’s lead in agentic coding, with stronger performance on long-horizon autonomy, systems engineering, and complex code reasoning tasks. According to Anthropic, the model records high-performance scores with 64.3% on SWE-bench Pro, 87.6% on SWE-bench Verified, and 69.4% on Terminal-Bench 2.0.
- Knowledge work: The model advances professional knowledge work, with stronger performance on document creation, financial analysis, and multi-step research workflows. The model reasons through underspecified requests, making sensible assumptions and stating them clearly, and self-verifies its output to improve quality on the first step. According to Anthropic, the model reaches 64.4% on Finance Agent v1.1.
- Long-running tasks: The model stays on track over longer horizons, with stronger performance over its full 1M token context window as it reasons through ambiguity and self-verifies its output.
- Vision: The model adds high-resolution image support, improving accuracy on charts, dense documents, and screen UIs where fine detail matters.
The model is an upgrade from Opus 4.6 but may require prompting changes and harness tweaks to get the most out of it. To learn more, visit Anthropic’s prompting guide.
Claude Opus 4.7 model in action
You can get started with the Claude Opus 4.7 model in the Amazon Bedrock console. Choose Playground under the Test menu, then choose Claude Opus 4.7 in the model selector. Now you can test your complex coding prompts with the model.

I ran the following example prompt about a technical architecture decision:
Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions.

You can also access the model programmatically using the Anthropic Messages API, calling bedrock-runtime through the Anthropic SDK or the bedrock-mantle endpoints, or keep using the Invoke and Converse APIs on bedrock-runtime through the AWS Command Line Interface (AWS CLI) and the AWS SDKs.
To get started with making your first API call to Amazon Bedrock in minutes, choose Quickstart in the left navigation pane in the console. After choosing your use case, you can generate a short-term API key to authenticate your requests for testing purposes.
When you choose an API method such as the OpenAI-compatible Responses API, you get sample code to run your prompt and make an inference request with the model.

To invoke the model through the Anthropic Claude Messages API, you can proceed as follows using anthropic[bedrock] SDK package for a streamlined experience:
from anthropic import AnthropicBedrockMantle

# AWS Region hosting the model
REGION = "us-east-1"

# Initialize the Bedrock Mantle client (uses SigV4 auth automatically)
mantle_client = AnthropicBedrockMantle(aws_region=REGION)

# Create a message using the Messages API
message = mantle_client.messages.create(
    model="anthropic.claude-opus-4-7",
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions",
        }
    ],
)

print(message.content[0].text)
You can also run the following command to invoke the model directly to bedrock-runtime endpoint using the AWS CLI and the Invoke API:
aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-opus-4-7 \
  --region us-east-1 \
  --body '{"messages": [{"role": "user", "content": "Design a distributed architecture on AWS in Python that should support 100k requests per second across multiple geographic regions."}], "max_tokens": 512, "temperature": 0.5, "top_p": 0.9}' \
  --cli-binary-format raw-in-base64-out \
  invoke-model-output.txt
For more intelligent reasoning capability, you can use Adaptive thinking with Claude Opus 4.7, which lets Claude dynamically allocate thinking token budgets based on the complexity of each request.
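As a sketch of what such a request might carry, the payload below enables adaptive thinking. The "thinking" field name and its values here are assumptions modeled on Anthropic's extended-thinking API, not confirmed parameter names; check the Messages API reference before using them:

```python
# Hypothetical Messages API payload enabling adaptive thinking.
# NOTE: the "thinking" field and its values are assumptions; consult the
# Anthropic Messages API documentation for the exact parameter names.
request = {
    "model": "anthropic.claude-opus-4-7",
    "max_tokens": 2048,
    "thinking": {"type": "adaptive"},  # let the model set its own budget
    "messages": [
        {"role": "user", "content": "Prove that 2**31 - 1 is prime."}
    ],
}
print(request["thinking"]["type"])
```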
To learn more, visit the Anthropic Claude Messages API and check out code examples for multiple use cases and a variety of programming languages.
Things to know
Let me share some important technical details that I think you’ll find useful.
- Choosing APIs: You can choose from a variety of Bedrock APIs for model inference, as well as the Anthropic Messages API. The Bedrock-native Converse API supports multi-turn conversations and Guardrails integration. The Invoke API provides direct model invocation and lowest-level control.
- Scaling and capacity: Bedrock’s new inference engine is designed to rapidly provision and serve capacity across many different models. When accepting requests, we prioritize keeping steady state workloads running, and ramp usage and capacity rapidly in response to changes in demand. During periods of high demand, requests are queued, rather than rejected. Up to 10,000 requests per minute (RPM) per account per Region are available immediately, with more available upon request.
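For reference, the Converse API mentioned above takes messages in a slightly different shape from the Anthropic Messages API: content is a list of typed blocks, and generation settings live under inferenceConfig. A minimal sketch of a Converse-style request body (the model ID is assumed from the earlier examples):

```python
# Converse API request shape: content is a list of typed blocks.
model_id = "anthropic.claude-opus-4-7"  # assumed from earlier examples

converse_request = {
    "modelId": model_id,
    "messages": [
        {
            "role": "user",
            "content": [{"text": "Summarize our Guardrails configuration."}],
        }
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
}

# With boto3 this would be sent as:
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.converse(**converse_request)
print(converse_request["messages"][0]["content"][0]["text"])
```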
Now available
Anthropic’s Claude Opus 4.7 model is available today in the US East (N. Virginia), Asia Pacific (Tokyo), Europe (Ireland), and Europe (Stockholm) Regions; check the full list of Regions for future updates. To learn more, visit the Claude by Anthropic in Amazon Bedrock page and the Amazon Bedrock pricing page.
Give Anthropic’s Claude Opus 4.7 a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.
— Channy
[Guest Diary] Compromised DVRs and Finding Them in the Wild, (Thu, Apr 16th)
Scanning for AI Models, (Tue, Apr 14th)
Starting March 10, 2026, my DShield sensor started receiving probes for various AI models such as claude, openclaw, huggingface, etc. Reviewing the data already reported by other DShield sensors to the ISC, the DShield database shows reporting of these probes started that day and has been active ever since.
AWS Weekly Roundup: Claude Mythos Preview in Amazon Bedrock, AWS Agent Registry, and more (April 13, 2026)
In my last Week in Review post, I mentioned how much time I’ve been spending on AI-Driven Development Lifecycle (AI-DLC) workshops with customers this year. A common theme in those sessions is the need for better cost visibility. Teams are moving fast with AI, but as they go from experimenting to full production, finance and leadership really need to know who is using which resources and at what cost. That’s why I was so excited to see the launch of Amazon Bedrock’s new support for cost allocation by IAM user and role this week. This lets you tag IAM principals with attributes like team or cost center and then activate those tags in your Billing and Cost Management console. The resulting cost data flows into AWS Cost Explorer and the detailed Cost and Usage Report, giving you a clear line of sight into model inference spending. Whether you’re scaling agents across teams, tracking foundation model use by department, or running tools like Claude Code on Amazon Bedrock, this new feature is a game changer for tracking and managing your AI investments. You can get all the details on setting this up in the IAM principal cost allocation documentation.
Now, let’s get into this week’s AWS news…
Headlines
Amazon Bedrock now offers Claude Mythos Preview — Anthropic’s most sophisticated AI model to date is now available on Amazon Bedrock as a gated research preview through Project Glasswing. Claude Mythos introduces a new model class focused on cybersecurity, capable of identifying sophisticated security vulnerabilities in software, analyzing large codebases, and delivering state-of-the-art performance across cybersecurity, coding, and complex reasoning tasks. Security teams can use it to discover and address vulnerabilities in critical software before threats emerge. Access is currently limited to allowlisted organizations, with Anthropic and AWS prioritizing internet-critical companies and open source maintainers.
AWS Agent Registry for centralized agent discovery and governance now in preview — AWS launched Agent Registry through Amazon Bedrock AgentCore, providing organizations with a private catalog for discovering and managing AI agents, tools, skills, MCP servers, and custom resources. The registry helps teams locate existing capabilities rather than duplicating them, with semantic and keyword search, approval workflows, and CloudTrail audit trails. It is accessible via the AgentCore Console, AWS CLI, SDK, and as an MCP server queryable from IDEs.
Last week’s launches
Here are some launches and updates from this past week that caught my attention:
- Announcing Amazon S3 Files, making S3 buckets accessible as file systems — Amazon S3 Files transforms S3 buckets into shared file systems that connect any AWS compute resource directly with your S3 data. Built on Amazon EFS technology, it delivers full file system semantics with low latency performance, caching actively used data and providing multiple terabytes per second of aggregate read throughput. Applications can access S3 data through both file system and S3 APIs simultaneously without code modifications or data migration.
- Amazon OpenSearch Service supports Managed Prometheus and agent tracing — Amazon OpenSearch Service now provides a unified observability platform that consolidates metrics, logs, traces, and AI agent tracing into a single interface. The update includes native Prometheus integration with direct PromQL query support, RED metrics monitoring, and OpenTelemetry GenAI semantic convention support for LLM execution visibility. Operations teams can correlate slow traces to logs and overlay Prometheus metrics on dashboards without switching between tools.
- Amazon WorkSpaces Advisor now available for AI-powered troubleshooting — AWS launched Amazon WorkSpaces Advisor, an AI-powered administrative tool that uses generative AI to help IT administrators troubleshoot Amazon WorkSpaces Personal deployments. It analyzes WorkSpace configurations, detects problems automatically, and provides actionable recommendations to restore service and optimize performance.
- Amazon Braket adds support for Rigetti’s 108-qubit Cepheus QPU — Amazon Braket now offers access to Rigetti’s Cepheus-1-108Q device, the first 100+ qubit superconducting quantum processor on the platform. The modular design features twelve 9-qubit chiplets with CZ gates that offer enhanced resilience to phase errors. It supports multiple frameworks including the Braket SDK, Qiskit, CUDA-Q, and PennyLane, with pulse-level control for researchers.
For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.
Other AWS news
Here are some additional posts and resources that you might find interesting:
- Building automated AWS Regional availability checks with Amazon S3 — Storage blog post on implementing automated systems for monitoring service availability across AWS Regions using Amazon S3 as core infrastructure.
- Understanding Amazon Bedrock model lifecycle — Machine learning blog post that walks through the stages foundation models go through in Bedrock from availability through deprecation, helping teams plan for model updates and manage version dependencies in production.
- Building memory intensive apps with AWS Lambda managed instances — Compute blog post exploring how Lambda managed instances extend the platform beyond lightweight workloads to support memory intensive applications while maintaining serverless benefits.
- Deploy OpenClaw on AWS: Choose the right options for your AI workload — Builder Center guide comparing four AWS deployment options for OpenClaw: Amazon Lightsail for individual developers, Amazon EC2 for startups needing deeper AWS integration, Amazon Bedrock AgentCore for serverless multiuser scenarios, and Amazon EKS for enterprises requiring VM level isolation and advanced orchestration.
- We’re bringing back the Kiro startup credits program — Kiro is relaunching its startup credits initiative, offering eligible early stage companies complimentary access to Kiro Pro+ for up to one year. The three tier program (Starter, Growth, Scale) provides 2 to 30 users based on team size, with rolling applications accepted globally.
Upcoming AWS events
Check your calendar and sign up for upcoming AWS events:
- What’s Next with AWS (April 28, Virtual) — Join this livestream at 9am PT for a candid discussion about how agentic AI is transforming how businesses operate. Featuring AWS CEO Matt Garman, SVP Colleen Aubrey, and OpenAI leaders discussing emerging agent capabilities, Amazon’s internal experiences, and new agentic solutions and platform capabilities.
Browse here for upcoming AWS led in person and virtual events, startup events, and developer focused events.
That’s all for this week. Check back next Monday for another Weekly Roundup!
~ micah
Scans for EncystPHP Webshell, (Mon, Apr 13th)
Last week, I wrote about attackers scanning for various webshells, hoping to find some that do not require authentication or others that use well-known credentials. But some attackers are paying attention and are deploying webshells with more difficult-to-guess credentials. Today, I noticed some scans for what appears to be the "EncystPHP" web shell. Fortinet wrote about this webshell back in January. It appears to be a favorite among attackers compromising vulnerable FreePBX systems.
PowerShell MSI package deprecation and preview updates
Beginning with PowerShell 7.7-preview.1 (April 2026), the MSIX package will be the primary
installation method for PowerShell on Windows. We will no longer ship the MSI installer package for
new PowerShell releases.
For existing releases, including PowerShell 7.6, we will continue to provide MSI packages. However,
MSI isn’t planned for future releases, including PowerShell 7.7 GA and beyond.
Why we’re making this change
MSIX provides a modern installation and servicing model and is supported by Windows deployment
tools. It uses a declarative model that’s more predictable and reliable than MSI, which relies on
custom actions and scripts that can lead to inconsistent behavior. MSIX supports built-in update
mechanisms with differential updates. Microsoft is investing in improving MSIX.
MSI is a legacy technology. Servicing MSI installations requires external tooling and often results
in full reinstalls. MSI doesn’t meet modern accessibility requirements, particularly for screen
reader scenarios. To be accessible, MSI must present predictable tab stops and accurate
announcements for screen readers, which it doesn’t. Accessibility is a core requirement for
PowerShell.
This decision isn’t just about modernizing packaging for its own sake. It’s about ensuring that
PowerShell installations are modern and accessible for all users, now and in the future.
Looking forward
Our goal is to provide a fully accessible, reliable, and enterprise-ready installation experience.
At this time, MSIX doesn’t support all use case scenarios that MSI enabled, such as remoting and
execution by system-level services (like Task Scheduler). We recognize this gap and are actively
working to address it.
As part of this work, we’re investing in:
- Improving MSIX support for system-level and enterprise deployment scenarios
- Ensuring accessibility requirements are fully met across all installation paths
- Providing clearer guidance and tooling for deployment at scale
We will continue to share updates as this work progresses.
Closing
We understand this change may require adjustments, especially in environments that rely heavily on
MSI-based deployment. We appreciate your patience as we make this transition.
Our focus is to ensure PowerShell remains accessible, predictable, and practical for all users.
— The PowerShell Team
Obfuscated JavaScript or Nothing, (Thu, Apr 9th)
I spotted an interesting piece of JavaScript code that was delivered via a phishing email in a RAR archive. The file was called “cbmjlzan.JS” (SHA256: a8ba9ba93b4509a86e3d7dd40fd0652c2743e32277760c5f7942b788b74c5285) and is identified as malicious by only 15 AV engines on VirusTotal[1].
Number Usage in Passwords: Take Two, (Thu, Apr 9th)
In a previous diary [1], we looked at how numbers were used within passwords submitted to honeypots. One of the items of interest was how dates, and more specifically years, were represented within the data and how that changed over time. Years and seasons often appear in passwords, especially when policies require frequent password changes. Some examples we might see today: