In this diary, I’ll show you a practical example of how steganography is used to hide payloads (or other suspicious data) from security tools and security analysts’ eyes. Steganography is the art and science of concealing a secret message, file, or image within an ordinary-looking carrier, such as a digital photograph, audio clip, or text, so that the very existence of the hidden data is undetectable to casual observers (read: security people). Many online implementations of basic steganography let you embed a message (a string) into a picture[1].
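As a minimal sketch of the classic least-significant-bit (LSB) technique, the following Node.js snippet hides a UTF-8 message in the lowest bit of each byte of a buffer. The function names are illustrative, and the raw byte buffer stands in for decoded pixel data; real tools first decode an image format such as PNG or BMP before touching the pixels.

```javascript
// Minimal least-significant-bit (LSB) steganography sketch.
// A plain byte buffer stands in for decoded image pixel data.

function embed(pixels, message) {
  const bytes = Buffer.from(message, 'utf8');
  if (bytes.length * 8 > pixels.length) throw new Error('carrier too small');
  const out = Buffer.from(pixels);
  for (let i = 0; i < bytes.length * 8; i++) {
    // Take the message bits MSB-first and overwrite only the lowest
    // bit of each carrier byte.
    const bit = (bytes[i >> 3] >> (7 - (i & 7))) & 1;
    out[i] = (out[i] & 0xfe) | bit;
  }
  return out;
}

function extract(pixels, messageLength) {
  const bytes = Buffer.alloc(messageLength);
  for (let i = 0; i < messageLength * 8; i++) {
    // Reassemble the bytes MSB-first from the carrier's low bits.
    bytes[i >> 3] = (bytes[i >> 3] << 1) | (pixels[i] & 1);
  }
  return bytes.toString('utf8');
}
```

Because each carrier byte changes by at most one, the picture looks identical to the eye; the hidden data only shows up to tools that inspect the low bit plane.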
In the works – New Availability Zone in Maryland for US East (Northern Virginia) Region
The US East (Northern Virginia) Region was the first Region launched by Amazon Web Services (AWS), and it has seen tremendous growth and customer adoption over the past several years. Now hosting active customers ranging from startups to large enterprises, AWS has steadily expanded the US East (Northern Virginia) Region infrastructure and capacity. The US East (Northern Virginia) Region consists of six Availability Zones, providing customers with enhanced redundancy and the ability to architect highly available applications.
Today, we’re announcing that a new Availability Zone located in Maryland, expected to open in 2026, will be added to the US East (Northern Virginia) Region. This new Availability Zone will be connected to other Availability Zones by high-bandwidth, low-latency network connections over dedicated, fully redundant fiber. The upcoming Availability Zone in Maryland will also be instrumental in supporting the rapid growth of generative AI and advanced computing workloads in the US East (Northern Virginia) Region.
All Availability Zones are physically separated in a Region by a meaningful distance, many kilometers (km) from any other Availability Zone, although all are within 100 km (60 miles) of each other. The network performance is sufficient to accomplish synchronous replication between Availability Zones in Maryland and Virginia within the US East (Northern Virginia) Region. If your application is partitioned across multiple Availability Zones, your workloads are better isolated and protected from issues such as power outages, lightning strikes, tornadoes, earthquakes, and more.
With this announcement, AWS now has four new Regions in the works—New Zealand, Kingdom of Saudi Arabia, Taiwan, and the AWS European Sovereign Cloud—and 13 upcoming new Availability Zones.
Geographic information for the new Availability Zone
In March, we provided more granular visibility into the geographic location information of all AWS Regions and Availability Zones. We have updated the AWS Regions and Availability Zones page to reflect the new geographic information for this upcoming Availability Zone in Maryland. As shown in the following screenshot, the infrastructure for the upcoming Availability Zone will be located in Maryland, United States of America, for the US East (Northern Virginia) us-east-1 Region.
You can continue to use this geographic information to choose Availability Zones that align with your regulatory, compliance, and operational requirements.
After the new Availability Zone is launched, it will be available along with other Availability Zones in the US East (Northern Virginia) Region through the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
Stay tuned
We plan to make this new Availability Zone in the US East (Northern Virginia) Region generally available in 2026. As usual, check out the Regional news of the AWS News Blog so that you’ll be among the first to know when the new Availability Zone is open!
To learn more, visit the AWS Global Infrastructure Regions and Availability Zones page or AWS Regions and Availability Zones in the AWS documentation and send feedback to AWS re:Post or through your usual AWS Support contacts.
— Channy
How is the News Blog doing? Take this 1 minute survey!
(This survey is hosted by an external company. AWS handles your information as described in the AWS Privacy Notice. AWS will own the data gathered via this survey and will not share the information collected with survey respondents.)
Enhance real-time applications with AWS AppSync Events data source integrations
Today, we are announcing that AWS AppSync Events now supports data source integrations for channel namespaces, enabling developers to create more sophisticated real-time applications. With this new capability you can associate AWS Lambda functions, Amazon DynamoDB tables, Amazon Aurora databases, and other data sources with channel namespace handlers. With AWS AppSync Events, you can build rich, real-time applications with features like data validation, event transformation, and persistent storage of events.
With these new capabilities, developers can create sophisticated event processing workflows by transforming and filtering events using Lambda functions or save batches of events to DynamoDB using the new AppSync_JS batch utilities. The integration enables complex interactive flows while reducing development time and operational overhead. For example, you can now automatically persist events to a database without writing complex integration code.
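For context, publishing to an Events API from a client is a plain HTTPS POST. The Node.js sketch below builds such a request; the endpoint URL and API key are placeholders, and the payload shape (a channel path plus a batch of JSON-encoded event strings) reflects my reading of the Events API publish format, so check the AppSync Events documentation for your setup.

```javascript
// Sketch: build an HTTP publish request for an AppSync Events API.
// ENDPOINT and API_KEY are placeholders for your own API's values.
const ENDPOINT = 'https://example1234567890.appsync-api.us-east-1.amazonaws.com/event';
const API_KEY = 'da2-example';

function buildPublishRequest(channel, events) {
  return {
    url: ENDPOINT,
    options: {
      method: 'POST',
      headers: { 'content-type': 'application/json', 'x-api-key': API_KEY },
      // Each event in the batch is sent as a JSON-encoded string.
      body: JSON.stringify({ channel, events: events.map((e) => JSON.stringify(e)) }),
    },
  };
}

// Usage (network call commented out, since it needs a live API):
// const { url, options } = buildPublishRequest('/default/messages', [{ id: '1', text: 'hello' }]);
// await fetch(url, options);
```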
First look at data source integrations
Let’s walk through how to set up data source integrations using the AWS Management Console. First, I’ll navigate to AWS AppSync in the console and select my Event API (or create a new one).
Persisting event data directly to DynamoDB
There are multiple kinds of data source integrations to choose from. For this first example, I’ll create a DynamoDB table as a data source. I’m going to need a DynamoDB table first, so I head over to DynamoDB in the console and create a new table called event-messages. For this example, all I need to do is create the table with a partition key called id. From here, I can click Create table and accept the default table configuration before heading back to AppSync in the console.
Back in the AppSync console, I return to the Event API I set up previously, select Data Sources from the tabbed navigation panel and click the Create data source button.
After giving my Data Source a name, I select Amazon DynamoDB from the Data source drop down menu. This will reveal configuration options for DynamoDB.
Once my data source is configured, I can implement the handler logic. Here’s an example of a Publish handler that persists events to DynamoDB:
import * as ddb from '@aws-appsync/utils/dynamodb'
import { util } from '@aws-appsync/utils'

// Must match the name of the DynamoDB table created earlier
const TABLE = 'event-messages'

export const onPublish = {
  request(ctx) {
    const channel = ctx.info.channel.path
    const timestamp = util.time.nowISO8601()
    // Write the whole batch of published events in one batch put
    return ddb.batchPut({
      tables: {
        [TABLE]: ctx.events.map(({ id, payload }) => ({
          channel, id, timestamp, ...payload,
        })),
      },
    })
  },
  response(ctx) {
    // Return the stored items in the { id, payload } shape expected by subscribers
    return ctx.result.data[TABLE].map(({ id, ...payload }) => ({ id, payload }))
  },
}
To add the handler code, I go to the tabbed navigation for Namespaces, where I find a default namespace already created for me. If I click to open the default namespace, I find the button that allows me to add an Event handler just below the configuration details.
Clicking on Create event handlers brings me to a new dialog where I choose Code with data source as my configuration, and then select the DynamoDB data source as my publish configuration.
After saving the handler, I can test the integration using the built-in testing tools in the console. The default values here should work, and as you can see below, I’ve successfully written two events to my DynamoDB table.
Here are all my messages captured in DynamoDB!
Error handling and security
The new data source integrations include comprehensive error handling capabilities. For synchronous operations, you can return specific error messages that will be logged to Amazon CloudWatch, while maintaining security by not exposing sensitive backend information to clients. For authorization scenarios, you can implement custom validation logic using Lambda functions to control access to specific channels or message types.
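The pattern described above (log the full detail server-side, return only a generic message to clients) can be sketched in plain Node.js. Note that sanitizeError is a hypothetical helper for illustration, not an AppSync API; in a real handler the detailed log line is what ends up in CloudWatch.

```javascript
// Sketch of the "log the detail, return a generic message" pattern:
// full error context goes to the server-side log, while clients only
// ever see a safe, generic message.
function sanitizeError(err, logger = console) {
  logger.error('publish handler failed:', err.message); // detailed, server-side only
  return { error: 'Unable to process event' };          // safe to return to clients
}
```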
Available now
AWS AppSync Events data source integrations are available today in all AWS Regions where AWS AppSync is available. You can start using these new features through the AWS AppSync console, AWS Command Line Interface (AWS CLI), or AWS SDKs. There is no additional cost for using data source integrations: you pay only for the underlying resources you use (such as Lambda invocations or DynamoDB operations) and your existing AppSync Events usage.
To learn more about AWS AppSync Events and data source integrations, visit the AWS AppSync Events documentation and get started building more powerful real-time applications today.
Attacks against Teltonika Networks SMS Gateways, (Thu, Apr 24th)
Ever wonder where all the SMS spam comes from? If you are trying to send SMS "at scale," there are a few options: You could sign up for a messaging provider like Twilio, the AWS SNS service, or several similar services. These services offer easily scriptable and affordable ways to send SMS messages. We have previously covered how attackers attempt to steal related credentials to use these services even more cheaply (for free!).
But if you are not into cloud or SaaS, maybe you instead like to send your own SMS messages directly? Or would you like to become the next Twilio? In this case, special SMS gateways are available. One company making these gateways is Teltonika Networks. They offer a wide range of products to send and receive SMS, including devices for IoT remote management and enterprise SMS gateways.
But of course, you need to authenticate to send SMS messages. Nobody wants complex usernames and passwords. Teltonika offers simple default credentials: "user1" as the username, and "user_pass" as the password.
I am surprised it took so long for us to see scans for these well-known credentials. For example:
/cgi-bin/sms_send?username=user1&password=user_pass&number=00966549306573&text=test
This request will send an SMS reading "test" to 00966549306573, a number in Saudi Arabia. Oddly enough, every so often I see Saudi Arabian numbers used in SMS-related scans.
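When triaging honeypot logs for these scans, the interesting fields can be pulled straight out of the query string. The following Node.js helper does just that; parseSmsScan is a name I made up for illustration, and the base URL is only needed because the logged path is relative.

```javascript
// Extract the credentials, destination number, and message text from a
// logged sms_send scan request path.
function parseSmsScan(path) {
  const { searchParams } = new URL(path, 'http://honeypot.invalid');
  return {
    username: searchParams.get('username'),
    password: searchParams.get('password'),
    number: searchParams.get('number'),
    text: searchParams.get('text'),
  };
}
```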
Here are a few other passwords I have seen, all for the user "user1":
1234
admin
p8xr6tINNA0eGBIY
root
rut9xx
teltonika
test
user1
The long "random" password is interesting. It was used several times, and I am not sure whether it is some kind of "support" backdoor. The "rut9xx" password makes sense, as the model numbers for the industrial Teltonika gateways start with "RUT", like RUT140, RUT901, and RUT906.
Numbers I have seen as a recipient:
00966549306573 (Saudi Arabia)
0032493855785& (Belgium)
As usual, change default passwords, particularly for more professional equipment like this: Throw it back at the vendor (HARD!) if it comes with a default password.
—
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Honeypot Iptables Maintenance and DShield-SIEM Logging, (Wed, Apr 23rd)
New Amazon EC2 Graviton4-based instances with NVMe SSD storage
Since the launch of AWS Graviton processors in 2018, we have continued to innovate and deliver improved performance for our customers’ cloud workloads. Following the success of our Graviton3-based instances, we are excited to announce three new Amazon Elastic Compute Cloud (Amazon EC2) instance families powered by AWS Graviton4 processors with NVMe-based SSD local storage: compute optimized (C8gd), general purpose (M8gd), and memory optimized (R8gd) instances. These instances deliver up to 30% better compute performance, 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances.
Let’s look at some of the improvements that are now available in our new instances. These instances offer larger instance sizes with up to 3x more vCPUs (up to 192 vCPUs), 3x the memory (up to 1.5 TiB), 3x the local storage (up to 11.4 TB of NVMe SSD storage), 75% higher memory bandwidth, and 2x more L2 cache compared to their Graviton3-based predecessors. These features help you process larger amounts of data, scale up your workloads, improve time to results, and lower your total cost of ownership (TCO). These instances also offer up to 50 Gbps network bandwidth and up to 40 Gbps Amazon Elastic Block Store (Amazon EBS) bandwidth, a significant improvement over Graviton3-based instances. Additionally, you can now adjust the network and Amazon EBS bandwidth on these instances by up to 25% using EC2 instance bandwidth weighting configuration, providing you greater flexibility with the allocation of your bandwidth resources to better optimize your workloads.
Built on AWS Graviton4, these instances are great for storage intensive Linux-based workloads including containerized and micro-services-based applications built using Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Container Registry (Amazon ECR), Kubernetes, and Docker, as well as applications written in popular programming languages such as C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP. AWS Graviton4 processors are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications than AWS Graviton3 processors.
Instance specifications
These instances also offer two bare metal sizes (metal-24xl and metal-48xl), allowing you to right size your instances and deploy workloads that benefit from direct access to physical resources. Additionally, these instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. In addition, Graviton4 processors offer you enhanced security by fully encrypting all high-speed physical hardware interfaces.
The instances are available in 10 sizes per family, as well as two bare metal configurations each:
Instance Name | vCPUs | Memory (GiB) (C/M/R) | Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
---|---|---|---|---|---|
medium | 1 | 2/4/8* | 1 x 59 | Up to 12.5 | Up to 10 |
large | 2 | 4/8/16* | 1 x 118 | Up to 12.5 | Up to 10 |
xlarge | 4 | 8/16/32* | 1 x 237 | Up to 12.5 | Up to 10 |
2xlarge | 8 | 16/32/64* | 1 x 474 | Up to 15 | Up to 10 |
4xlarge | 16 | 32/64/128* | 1 x 950 | Up to 15 | Up to 10 |
8xlarge | 32 | 64/128/256* | 1 x 1900 | 15 | 10 |
12xlarge | 48 | 96/192/384* | 3 x 950 | 22.5 | 15 |
16xlarge | 64 | 128/256/512* | 2 x 1900 | 30 | 20 |
24xlarge | 96 | 192/384/768* | 3 x 1900 | 40 | 30 |
48xlarge | 192 | 384/768/1536* | 6 x 1900 | 50 | 40 |
metal-24xl | 96 | 192/384/768* | 3 x 1900 | 40 | 30 |
metal-48xl | 192 | 384/768/1536* | 6 x 1900 | 50 | 40 |
*Memory values are for C8gd/M8gd/R8gd respectively
Availability and pricing
M8gd, C8gd, and R8gd instances are available today in US East (N. Virginia, Ohio) and US West (Oregon) Regions. These instances can be purchased as On-Demand instances, Savings Plans, Spot instances, or as Dedicated instances or Dedicated hosts.
Get started today
You can launch M8gd, C8gd and R8gd instances today in the supported Regions through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. To learn more, check out the collection of Graviton resources to help you start migrating your applications to Graviton instance types. You can also visit the Graviton Getting Started Guide to begin your Graviton adoption journey.
It's 2025… so why are obviously malicious advertising URLs still going strong?, (Mon, Apr 21st)
While the old adage stating that “the human factor is the weakest link in the cyber security chain” will undoubtedly stay relevant in the near (and possibly far) future, the truth is that the tech industry could – and should – help alleviate the problem significantly more than it does today.
RedTail, Remnux and Malware Management [Guest Diary], (Wed, Apr 16th)
Apple Patches Exploited Vulnerability, (Wed, Apr 16th)
Today, Apple patched two vulnerabilities that had already been exploited. The vulnerabilities were exploited against iOS but also exist in macOS, tvOS, and visionOS. Apple released updates for all affected operating systems.
CVE | iOS 18.4.1 and iPadOS 18.4.1 | macOS Sequoia 15.4.1 | tvOS 18.4.1 | visionOS 2.4.1 |
---|---|---|---|---|
CVE-2025-31200: Processing an audio stream in a maliciously crafted media file may result in code execution. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on iOS. Affects CoreAudio. | x | x | x | x |
CVE-2025-31201: An attacker with arbitrary read and write capability may be able to bypass Pointer Authentication. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on iOS. Affects RPAC. | x | x | x | x |
—
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Online Services Again Abused to Exfiltrate Data, (Tue, Apr 15th)
If Attackers can abuse free online services, they will do for sure! Why spend time to deploy a C2 infrastructure if you have plenty of ways to use "official" services. Not only, they don't cost any money but the traffic can be hidden in the normal traffic; making them more difficult to detect. A very popular one was anonfiles[.]com. It was so abused that they closed in 2023![1]. A funny fact is that I still see lot of malicious scripts that refer to this domain. Of course, alternatives popped up here and there, like anonfile[.]la[2].