Proxying the Unproxyable? Sending EXE traffic to a Proxy, (Wed, May 13th)


.. if “unproxyable” is a word, that is ..

I had a recent engagement where I had to look at the network traffic generated by a Windows executable.  Unfortunately, it was all TLS, and all TLS 1.3 to boot.  So from a PCAP all I got was a whole lot of “yup, that’s encrypted”, and since it was TLS 1.3 all I really had to work with was the IP addresses – not even server names in the server hello packets to help out.  And the IP addresses involved were those “one AWS address fronting 500 DNS names” shotgun addresses, so no help there.

What I really needed was something to take specific traffic, say traffic from one executable, and redirect it to a proxy.  If that proxy happens to be Burp Suite, then Bob’s yer uncle – now I can look at the traffic!  If you’d rather use Fiddler or some other proxy, go for it; anything will work.

A few minutes of Googling, and I found Proxifier (https://www.proxifier.com/)

Proxifier allows you to set up rules, for instance “send traffic from abc.exe to proxy A”, “send traffic from def.exe to proxy B”, or “send everything else direct”, or any combination.  Rules can send traffic direct or hand it to a SOCKS or HTTPS proxy.

In my case, I was looking at a client executable and was able to follow all the API calls and data transferred; it was EXACTLY what I needed that day.

I can’t show you the client output – watching the APIs roll by was as cool as it gets though, and the proxy intercept in Burp lets you “play” with individual calls if that’s what you need.  But I can certainly show you how this works; let’s use curl as our example exe.

Let's start in Proxifier.  First you need to set up your proxy (or proxies).  In this case I'm using Burp Suite Pro running locally, so the proxy is:
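(The screenshot isn't reproduced here, but assuming Burp's default proxy listener, and assuming you add it to Proxifier as an HTTPS-type proxy – my assumptions, not values taken from the screenshot – the entry would look something like this; adjust to match your own Burp listener settings:)

Address:   127.0.0.1
Port:      8080
Protocol:  HTTPS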

Next, we’ll set up the rules:

The first rule says “anything to my own machine, send direct”.  Given how much loopback cruft happens on a typical Windows box, this rule is gold (unless that’s what you’re looking for, that is).

The second rule is “anything from curl.exe, send to the proxy we just defined” (or whatever your executable is).

You can have multiple of these rules doing different things.

The final rule is “everything else, send direct”

Now, let’s run a test with curl:


(and so on)
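If you want to reproduce a test like this yourself, any request from curl will do; a minimal sketch (pick whatever target you're comfortable sending test traffic to):

# verbose so you can watch the TLS handshake and headers scroll by
curl -v https://isc.sans.edu/

With the rules above in place, Proxifier grabs the connection from curl.exe and hands it to Burp; curl itself needs no proxy flags at all, which is the whole point.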

In Proxifier, you see the transaction happen in real time:

The top pane shows the executable, target and so on.  It’s somewhat ephemeral: it shows the live view, goes grey after the transaction completes, then disappears after a few seconds.  The bottom pane scrolls in a more “log like” manner.

Over in Burp, you see all the business that most sites have as their lead page:

Which is exactly what you need, and can't get these days from a packet capture!

What else does Proxifier do?  It also spits out a log file; you can configure what goes in the logs and where to send them:

 

You can set similar sensitivity on the live on-screen log.

All in all, this tool was a life-saver for me.  I’ve used it for a few years now and keep coming up with things that it can bail me out of!

Got a cool use for a tool like this?  Give it a try and share your experiences in our comment form below (please keep any NDAs in mind).

Do you have a similar or better tool for this?  Again, by all means share in our comment form!

===============
Rob VandenBrink
rob@coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Amazon Redshift introduces AWS Graviton-based RG instances with an integrated data lake query engine


Since 2013, Amazon Redshift has delivered the full power of a data warehouse in the cloud at a fraction of the on-premises cost. Every architectural generation—from dense compute to Amazon RA3 instances, from provisioned to Amazon Redshift Serverless—has made each query cheaper, faster, and more efficient than the last.

For over a decade, as data volumes have grown and analytics requirements have evolved, organizations have increasingly relied on both data warehouse tables for structured, frequently accessed data and data lakes for cost-effective storage of diverse datasets. Add AI agents to the mix, and they query your data warehouse at a scale that dwarfs typical human usage, leading to spiraling operational costs.

Amazon Redshift has doubled down on its core strengths to meet the demands of any workload — whether driven by humans or AI agents. For example, in March 2026, Amazon Redshift improved the performance of business intelligence (BI) dashboards and ETL workloads by speeding up new queries by up to 7 times. This significantly improves the response times of low-latency SQL queries, such as those used in near-real-time analytics applications, BI dashboards, ETL pipelines, and autonomous, goal-seeking AI agents.

Today, we’re announcing Amazon Redshift RG instances, a new instance family powered by AWS Graviton. RG instances deliver better performance, running data warehouse workloads up to 2.2x as fast as RA3 instances at 30% lower price per vCPU. Their integrated data lake query engine lets you run SQL analytics across your data warehouse and data lake from a single engine with performance up to 2.4x as fast as RA3 for Apache Iceberg and up to 1.5x as fast as RA3 for Apache Parquet. This blend of speed, cost efficiency, and an integrated data lake query engine makes Redshift RG instances well-suited to handle the high query volumes and low-latency requirements of today’s analytics and agentic AI workloads.

You can compare new RG instances and current RA3 instances:

Current RA3 instance    Recommended RG instance    vCPU                Memory                     Primary use case
ra3.xlplus              rg.xlarge                  4                   32 GB                      Small cluster, departmental analytics
ra3.4xlarge             rg.4xlarge                 12 → 16 (1.33:1)    96 GB → 128 GB (1.33:1)    Standard production workloads, medium data volumes

This approach reduces total analytics costs for customers running combined data warehouse and data lake workloads, while simplifying operations through a single system for querying both warehouse tables and Amazon Simple Storage Service (Amazon S3) data lakes. We recommend using the AWS Pricing Calculator with your specific workload patterns to estimate savings.

Getting started with Amazon Redshift RG instances
You can launch new clusters or migrate existing clusters through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS API. The integrated data lake query engine is enabled by default.

In the Amazon Redshift console, you can choose new RG instances when you create a cluster.
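If you would rather script it, a minimal AWS CLI sketch for creating a small RG cluster might look like the following (the identifier, credentials, and node count are placeholders, not values from this post; rg.xlarge is the node type from the table above):

aws redshift create-cluster \
  --cluster-identifier my-rg-cluster \
  --node-type rg.xlarge \
  --number-of-nodes 2 \
  --master-username awsuser \
  --master-user-password 'Replace-With-A-Strong-Passw0rd'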

You can migrate previous-generation clusters to RG instances, with recommended migration paths based on your cluster configuration to help you estimate costs, validate compatibility, and automate execution.

  • Elastic Resize—an in-place migration with 10-15 minutes of downtime for compatible configurations (a CLI sketch follows this list)
  • Snapshot and Restore—create an RG cluster from an RA3 snapshot. This is best for customers who want to make configuration changes during the migration
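For the elastic resize path, a CLI sketch might look like this (the cluster identifier and target node count are placeholders; confirm the supported target configurations for your cluster in the Redshift Management Guide first):

# elastic resize to an RG node type; classic resize is not requested here
aws redshift resize-cluster \
  --cluster-identifier my-ra3-cluster \
  --node-type rg.4xlarge \
  --number-of-nodes 4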

Your external tables, schemas, and query syntax—including existing Spectrum queries—remain unchanged. There is no need to recreate external tables or modify application code. To learn more, visit the Redshift Management Guide.

Amazon Redshift now executes data lake queries on cluster nodes—the same compute that processes data warehouse workloads. As a result, Amazon Redshift Spectrum is no longer required. Data lake queries stay within your VPC boundary, use existing IAM roles, and incur zero per-terabyte scanning charges. This removes the $5/TB Spectrum scanning fees that previously added to total Redshift costs.
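As a rough illustration, a data lake query now runs on the cluster's own nodes even when issued through something like the Redshift Data API; in the sketch below the external schema and table names are placeholders for objects you have already defined:

# query an existing external (data lake) table through the Data API
aws redshift-data execute-statement \
  --cluster-identifier my-rg-cluster \
  --database dev \
  --db-user awsuser \
  --sql 'SELECT count(*) FROM my_external_schema.my_iceberg_table'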

Now available
Amazon Redshift RG instances are now available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Taiwan, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, Milan, London, Paris, Spain, Stockholm), Middle East (UAE), and South America (São Paulo). For Regional availability and a future roadmap, visit the AWS Capabilities by Region. For Redshift Provisioned, you can select On-Demand Instances with hourly billing and no commitments or choose Reserved Instances for cost savings. To learn more, visit the Amazon Redshift Pricing page.

Give RG instances a try in the Redshift console and send feedback to AWS re:Post for Amazon Redshift or through your usual AWS Support contacts.

Channy

Apple Patches Everything, (Mon, May 11th)


Apple today released its typical feature update across its operating systems (iOS, iPadOS, macOS, tvOS, watchOS, visionOS). With this update, Apple patched 84 different vulnerabilities. Updates are available for the "26" series of operating systems, as well as for the previous "18" version of iOS/iPadOS, and two versions back for macOS (versions 14 and 15).

AWS Weekly Roundup: Amazon Bedrock AgentCore payments, Agent Toolkit for AWS, and more (May 11, 2026)


My most exciting news of last week: Amazon Bedrock AgentCore previewed the first managed payment capabilities enabling AI agents to autonomously access and pay for APIs, MCP servers, web content, and other agents. Built in partnership with Coinbase and Stripe, it removes the undifferentiated heavy lifting of building customized systems for billing, credential management, and compliance.

You can connect a Coinbase CDP wallet or Stripe Privy wallet as a payment connection, set session-level spending limits, and your agent transacts autonomously during execution. What excites me most is what AgentCore payments can unlock—like a research agent that can pay for real-time market data on the fly, or a coding agent calling paid APIs mid-task.

To learn more, visit the blog post, dive deeper using the documentation, and get started with the AgentCore CLI.

Last week’s launches
Here are last week’s launches that caught my attention:

  • Agent Toolkit for AWS – A production-ready suite of tools and guidance, available at no additional charge, that helps AI coding agents build on AWS with fewer errors, lower token costs, and enterprise-grade security controls. The Agent Toolkit for AWS is the successor to the MCP servers, plugins, and skills available on AWS Labs. To get started, visit the quick start guide or browse the available skills and plugins on GitHub.
  • AWS MCP Server GA – You can use a managed remote Model Context Protocol (MCP) server that gives AI agents and coding assistants secure, authenticated access to all AWS services through a small, fixed set of tools. It is part of the Agent Toolkit for AWS. To learn more, visit Seb Stormacq’s blog post.
  • Amazon WorkSpaces for AI agents (Preview) – You can use AI agents to securely access and operate desktop applications through managed WorkSpaces environments. This capability allows organizations to automate everyday workflows at scale while maintaining full enterprise-grade governance and compliance. To learn more, visit Micah Walter’s blog post.
  • Amazon EC2 M8idn/M8idb and R8idn/R8idb instances – These instances are powered by custom sixth-generation Intel Xeon Scalable processors available only on AWS and the latest sixth-generation AWS Nitro cards. These instances deliver up to 43% better compute performance per vCPU compared to previous-generation instances. M8idn/R8idn instances offer up to 600 Gbps network bandwidth, and M8idb/R8idb instances deliver up to 300 Gbps EBS bandwidth.

For a full list of AWS announcements, be sure to keep an eye on the What’s New with AWS page.

Additional updates
Here are some additional news items that you might find interesting:

  • Valkey turns two – Valkey stands as proof that open, community-driven technology innovates faster, scales further, and delivers more value than any single-vendor model. Valkey has surpassed 100 million Docker pulls (up 17x year over year) and attracted more than 225 contributors who have submitted over 1,500 pull requests, roughly double the development pace of Redis over the same period. You can also use the latest Valkey 9.0 in Amazon ElastiCache.
  • Query billion-scale vectors with SQL – You can learn how to query Amazon S3 Vectors from Amazon Aurora PostgreSQL-Compatible Edition using standard SQL, and how to combine vector similarity results with relational filters in a single query, for example, finding the most semantically similar products and then filtering by price, stock status, or tenant in one SQL statement.
  • Building an end-to-end agentic SRE using AWS DevOps Agent – Learn how to configure DevOps Agent Spaces that define an investigation scope, integrating seamlessly with Amazon CloudWatch, Splunk, GitHub, and Slack. You can also learn how to trigger automated investigations via webhooks, generate mitigation plans, and hand off agent-ready specs to coding agents like Kiro for implementation.

For a full list of AWS blog posts, be sure to keep an eye on the AWS Blogs page.

Learn more about AWS, browse and join upcoming AWS-led in-person and virtual events, startup events, and developer-focused events as well as AWS Summits and AWS Community Days. Join the AWS Builder Center to connect with builders, share solutions, and access content that supports your development.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Channy

Another Universal Linux Local Privilege Escalation (LPE) Vulnerability: Dirty Frag, (Fri, May 8th)


Less than two weeks after the public disclosure of the Copy Fail vulnerability (CVE-2026-31431), another local privilege escalation (LPE) vulnerability in the Linux kernel has been revealed. Referred to as "Dirty Frag," this vulnerability was discovered and reported by Hyunwoo Kim (@v4bel) [1]. In this diary, I will provide a brief background on Dirty Frag, and discuss its relationship to Copy Fail. I will then discuss how to mitigate Dirty Frag and outline recommended next steps for system owners.

An Adaptive Cyber Analytics UI for Web Honeypot Logs [Guest Diary], (Wed, May 6th)


[This is a Guest Diary by Eric Roldan, an ISC intern as part of the SANS.edu BACS program]

With the expansion of Large Language Models (LLMs), cybersecurity has exploded with a variety of tools for both offensive and defensive purposes. A majority of software and cyber tools are integrating Artificial Intelligence (AI) solutions into their applications, largely in the form of chatbots, automation tools built on the Model Context Protocol (MCP), or interfaces that ingest data and respond to prompts.

An overlooked and underestimated use of AI that is slowly emerging is the creation of bespoke user interfaces (UIs). Simply put, that is a UI custom-built to fit the specific needs and data provided on the spot by the user. Because these models can ingest large amounts of data, they can assemble the appropriate UI elements tailored to the ingested dataset.

Rather than the user having to adjust their queries or layouts for the logs that they are analyzing, the LLM will determine the proper UI elements to give to the user. This allows the user to focus on analyzing rather than tool setup.

Over days of web traffic on the DShield webhoneypot, there are large variations in the intent behind the interactions. Some days, some actors may focus only on scanning and recon. Other days may be heavy on credential theft or attempts to drop a web shell.

Just as a regular user would try to identify patterns and then use their discretion to pick the next appropriate 'grep', 'jq', or similar pattern-matching POSIX tool, the LLM does the same.
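For example, a first manual pass over a day of logs often looks something like the following (a sketch only; the file name and JSON field name here are assumptions, so adjust them to match your own webhoneypot output):

# hypothetical file and field names; adjust to your honeypot's JSON layout
jq -r '.url' webhoneypot-2026-05-06.json | sort | uniq -c | sort -rn | head -20
grep -c 'wp-login' webhoneypot-2026-05-06.json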

Before this type of bespoke UI existed in cyber analytics, analysts had to spend extensive time and energy understanding what to look for and how to use the appropriate tools. With LLMs able to do this heavy lifting, more analysts will be able to recognize attacks on their web servers with little to no cyber experience.

When developers have to manage feature implementations, documentation updates, meetings (which are always productive of course…), and dreaded bugs – security and active monitoring become an afterthought. To make the internet a safer place, we have to lower the barrier to entry for recognizing web attacks.

Okay enough selling you on how much potential this has, let's talk about how it actually works.

It works like this: the system reads your DShield web honeypot log file, then a Python analyzer goes through the entries and turns them into a clean summary of what happened instead of dumping raw attacker text into the AI. That summary includes things like top IPs, top URLs, time patterns, and tags for probe/attack types such as WordPress probes, SSRF, path traversal, CGI abuse, and other recognizable patterns. Then Claude looks at the cleaned summary and writes a React dashboard component that fits the shape of the attack activity for that day, so the UI can change depending on whether the logs are mostly one big campaign or a mix of background internet noise.

The safe part is that the LLM never gets the raw malicious strings directly, and the generated UI never gets to run loose in the main page. Instead, the app serves the generated dashboard through a backend API, caches it so it does not constantly change, and renders it inside a sandboxed iframe. The system validates the generated code and, if it is broken, falls back to a static dashboard. So the whole flow is basically: logs come in -> analyzer summarizes them -> Claude generates a matching UI -> frontend loads it safely and pulls chart data from the backend.

Let's take a look at some examples now! On days where there is more noise, we do not see any dominant patterns highlighted on the UI.

However, on days where there is a clear pattern from certain actors we see an immediate highlight…

Furthermore, at the very top of the UI’s dashboard it is able to recognize and highlight the attack signatures that were most obvious (or would be obvious to an experienced analyst).

There are sometimes some interesting quirks like the LLM creating a dashboard with light mode instead of dark mode.

Nonetheless it is interesting to see how the LLM adapts to each day’s attack logs. I imagine that if I could “vibe code” this idea in a few hours, it could become a full-blown platform and toolkit for major organizations and analysts. So yea… I didn’t write the code for all this madness; I simply took a problem that I constantly face when looking at attack logs – what is it that I’m actually looking for? – and created a unique bespoke UI for each day’s scenario.

Shout out to Claude Code for agentically writing the repo, which can be found here [1].

Shout out to ChatGPT for helping me write the ‘how it works’ section of this blog.

And a special shout out to my Internship mentor Guy Bruneau for helping me think bigger in terms of recognizing interesting attacks on my webhoneypot.

Be sure to subscribe to my youtube channel for more edgy tech content and cyber insights.
youtube.com/@gnarcoding

[1] https://github.com/gnarcoding/bespoke-ui-cyber-analytics/
[2] https://isc.sans.edu/tools/honeypot/
[3] https://www.sans.edu/cyber-security-programs/bachelors-degree/

———–
Guy Bruneau IPSS Inc.
My GitHub Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Modernize your workflows: Amazon WorkSpaces now gives AI agents their own desktop (preview)


Enterprises face a significant challenge when deploying AI agents: the desktop and legacy applications that power most business workflows are simply inaccessible to modern AI systems. According to a 2024 Gartner report, 75% of organizations run legacy applications that lack modern APIs, and 71% of Fortune 500 companies operate critical processes on mainframe systems without adequate programmatic access. For many organizations, this has meant choosing between delaying AI adoption or undertaking expensive and risky modernization projects.

Today, we are announcing that Amazon WorkSpaces now enables AI agents to securely operate desktop applications without requiring application modernization. The same managed virtual desktops that millions of employees use and trust can now also serve AI agents, turning WorkSpaces into infrastructure for scaling enterprise productivity, not just delivering it. Because agents operate within your existing WorkSpaces environment, there are no APIs to build, no application migrations to plan, and no new infrastructure to manage.

Some of our customers had an early opportunity to give their agents a WorkSpace. Chris Noon, Director, Nuvens Consulting shared with us, “WorkSpaces lets our clients give AI agents the same secure, governed desktop environment their employees already use — no custom API integrations, full audit trails, and enterprise-grade isolation out of the box. For regulated industries, that’s not a nice-to-have — it’s the baseline.”

Secure cloud desktop access for AI agents
With WorkSpaces, AI agents can securely access and operate desktop applications running inside managed WorkSpaces environments to complete complex business workflows. Agents authenticate through AWS Identity and Access Management (IAM) and connect via WorkSpaces, with complete audit trails available through AWS CloudTrail and Amazon CloudWatch. Because agents operate within secure WorkSpaces environments rather than on local machines, your existing security controls and compliance policies remain fully intact.

Amazon WorkSpaces supports the industry-standard Model Context Protocol (MCP), which means WorkSpaces works with any agent framework, such as LangChain, CrewAI, and Strands Agents.

Let’s try it out
To set up a WorkSpaces environment for AI agents, I started in the AWS Management Console by creating a new WorkSpaces Applications stack—the environment definition that controls how agents connect and what they’re allowed to do.

From the Amazon WorkSpaces console, I chose Create stack and configured the basics: name, fleet association, and VPC endpoints. In Step 3 of the stack creation workflow, I noticed the new AI agents section with two options. The first, No AI agent access, is the default configuration for standard WorkSpaces designed for people. The second, Add AI Agents, allows AI agents to securely access and operate applications using their own identity and permissions. I selected Add AI Agents to enable agent connections on this stack.

Workspaces Screenshot

Next, I enabled storage before configuring the agent access settings that define how agents interact with the desktop.

Workspaces screenshot

Under Agent features, I enabled three capabilities. Computer input allows the agent to click, type, and scroll within the desktop. Computer vision allows the agent to capture screenshots of the desktop, which is how it “sees” the application. Finally, screenshot storage configures where session screenshots are stored for audit and debugging.

Workspaces Screenshot

Under Desktop screen layout, I set the screen resolution to 1280×720 and image format to PNG. The resolution determines the fidelity of what the agent sees during a session—a complex application with dense UI elements might benefit from higher resolution, while a terminal-style interface works well at 720p.

Workspaces Screenshot

With my stack configured, WorkSpaces exposes a managed MCP endpoint. I pointed my agent framework to this endpoint, provided IAM credentials for authentication, and my agent began interacting with the desktop applications installed on the fleet’s image.

To see this in action, here’s an agent built with the Strands Agent SDK and Amazon Bedrock handling a prescription refill, looking up the patient record, searching for the medication, placing the order, and confirming a successful refill, all inside a sample pharmacy system with no API.

The application doesn’t know an agent is driving it. Nothing about the software was modified, rebuilt, or integrated. The agent worked with it exactly as it exists today.

Now available
This feature is available today in public preview at no additional cost in the US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland, Paris), and Asia Pacific (Tokyo, Mumbai, Sydney, Seoul, Singapore) Regions.

Get started building today using our GitHub repo, or visit the WorkSpaces page for more details.