Simple Analysis Of A CVE-2021-40444 .docx Document, (Sat, Sep 18th)

Analysing a malicious Word document like prod.docx that exploits %%cve:2021-40444%% is not difficult.

We need to find the malicious URL in this document. As I've shown before, this is quite simple: extract all XML files from the ZIP container (.docx files are OOXML files, that's a ZIP container with (mostly) XML files) and use a regular expression to search for URLs.

This can be done with my tools zipdump.py and re-search.py:

OOXML files contain a lot of legitimate URLs, like schemas.microsoft.com. These can be filtered out with my tool re-search.py:
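
If you don't have these tools at hand, the same idea can be approximated with nothing but the Python standard library. This is only a sketch of the zipdump.py / re-search.py workflow, not a replacement for those tools, and the allowlist below is illustrative:

import re
import zipfile

# Domains that legitimately appear in OOXML files (illustrative, not exhaustive)
ALLOWLIST = ("schemas.openxmlformats.org", "schemas.microsoft.com",
             "purl.org", "www.w3.org")

URL_RE = re.compile(r"[a-z]+://[^\s\"'<>]+", re.IGNORECASE)

# A .docx file is a ZIP container of (mostly) XML files
with zipfile.ZipFile("prod.docx") as container:
    for name in container.namelist():
        text = container.read(name).decode("utf-8", errors="ignore")
        for url in sorted(set(URL_RE.findall(text))):
            # Report only URLs whose domain is not on the allowlist
            if not any(domain in url for domain in ALLOWLIST):
                print(f"{name}: {url}")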

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Malicious Calendar Subscriptions Are Back?, (Fri, Sep 17th)

Did this threat really disappear? This isn’t a brand-new technique for delivering malicious content to mobile devices, but it seems that attackers have started new waves of spam campaigns based on malicious calendar subscriptions. Being a dad, you can imagine that I have always performed security awareness with my daughters. Since they started using computers and the Internet, my message has always been the same: “Don’t be afraid to ask me; there are no stupid questions, and there is no shame if you think you did something wrong”.

A few days ago, my youngest one came to me and told me she had the impression that her iPhone was hacked. After a quick check and reassuring her, I swapped my dad's cap for my handler one and had a deeper look.

She told me that a pop-up had been displayed on the screen and that she had clicked “Ok” too quickly. It was an unwanted calendar invitation, and by accepting it, she subscribed to a spam feed. Her calendar quickly became flooded with events:

They are in French but easy to understand. They pretend to notify you about viruses found on the device and, using reminders, they keep the pressure on the victim:

If you visit the proposed link, you'll get more annoying ad pages, etc. Fortunately, this time, nothing very malicious, but, seeing the latest iOS vulnerabilities[1], this technique could be used to deliver exploits. To get rid of all those messages, you just need to unsubscribe from the calendar.

In conclusion, always read carefully all pop-ups displayed on your mobile phone (and obviously on any type of device!).

[1] https://support.apple.com/en-us/HT212807

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Introducing Amazon MSK Connect – Stream Data to and from Your Apache Kafka Clusters Using Managed Connectors

Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications. At re:Invent 2018, we announced Amazon Managed Streaming for Apache Kafka, a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data.

When you use Apache Kafka, you capture real-time data from sources such as IoT devices, database change events, and website clickstreams, and deliver it to destinations such as databases and persistent storage.

Kafka Connect is an open-source component of Apache Kafka that provides a framework for connecting with external systems such as databases, key-value stores, search indexes, and file systems. However, manually running Kafka Connect clusters requires you to plan and provision the required infrastructure, deal with cluster operations, and scale it in response to load changes.

Today, we’re announcing a new capability that makes it easier to manage Kafka Connect clusters. MSK Connect allows you to configure and deploy a connector using Kafka Connect with just a few clicks. MSK Connect provisions the required resources and sets up the cluster. It continuously monitors the health and delivery state of connectors, patches and manages the underlying hardware, and auto-scales connectors to match changes in throughput. As a result, you can focus your resources on building applications rather than managing infrastructure.

MSK Connect is fully compatible with Kafka Connect, which means you can migrate your existing connectors without code changes. You don’t need an MSK cluster to use MSK Connect. It supports Amazon MSK, Apache Kafka, and Apache Kafka compatible clusters as sources and sinks. These clusters can be self-managed or managed by AWS partners and third parties, as long as MSK Connect can privately connect to the clusters.

Using MSK Connect with Amazon Aurora and Debezium
To test MSK Connect, I want to use it to stream data change events from one of my databases. To do so, I use Debezium, an open-source distributed platform for change data capture built on top of Apache Kafka.

I use a MySQL-compatible Amazon Aurora database as the source and the Debezium MySQL connector with the setup described in this architectural diagram:

Architectural diagram.

To use my Aurora database with Debezium, I need to turn on binary logging in the DB cluster parameter group. I follow the steps in the How do I turn on binary logging for my Amazon Aurora MySQL cluster article.

Next, I have to create a custom plugin for MSK Connect. A custom plugin is a set of JAR files that contain the implementation of one or more connectors, transforms, or converters. Amazon MSK will install the plugin on the workers of the connect cluster where the connector is running.

From the Debezium website, I download the MySQL connector plugin for the latest stable release. Because MSK Connect accepts custom plugins in ZIP or JAR format, I convert the downloaded archive to ZIP format, keeping the JAR files in the main directory:

$ tar xzf debezium-connector-mysql-1.6.1.Final-plugin.tar.gz
$ cd debezium-connector-mysql
$ zip -9 ../debezium-connector-mysql-1.6.1.zip *
$ cd ..

Then, I use the AWS Command Line Interface (CLI) to upload the custom plugin to an Amazon Simple Storage Service (Amazon S3) bucket in the same AWS Region I am using for MSK Connect:

$ aws s3 cp debezium-connector-mysql-1.6.1.zip s3://my-bucket/path/
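
If you prefer to script the plugin registration instead of using the console flow described next, the AWS SDK can do it too. Here is a minimal boto3 sketch; the Region, bucket ARN, and object key are placeholders, and I'm assuming the kafkaconnect client's create_custom_plugin call:

import boto3

# Region, bucket ARN, and file key below are placeholders for this sketch
kafkaconnect = boto3.client("kafkaconnect", region_name="us-east-1")

response = kafkaconnect.create_custom_plugin(
    name="debezium-mysql-1-6-1",
    description="Debezium MySQL connector plugin",
    contentType="ZIP",
    location={
        "s3Location": {
            "bucketArn": "arn:aws:s3:::my-bucket",
            "fileKey": "path/debezium-connector-mysql-1.6.1.zip",
        }
    },
)
print(response["customPluginArn"])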

On the Amazon MSK console there is a new MSK Connect section. I look at the connectors and choose Create connector. Then, I create a custom plugin and browse my S3 buckets to select the custom plugin ZIP file I uploaded before.

I enter a name and a description for the plugin and then choose Next.

Now that the configuration of the custom plugin is complete, I start the creation of the connector. I enter a name and a description for the connector.

I have the option to use a self-managed Apache Kafka cluster or one that is managed by MSK. I select one of my MSK clusters, which is configured to use IAM authentication. The MSK cluster I select is in the same virtual private cloud (VPC) as my Aurora database. To connect, the MSK cluster and the Aurora database use the default security group for the VPC. For simplicity, I use a cluster configuration with auto.create.topics.enable set to true.

In Connector configuration, I use the following settings:

connector.class=io.debezium.connector.mysql.MySqlConnector
tasks.max=1
database.hostname=<aurora-database-writer-instance-endpoint>
database.port=3306
database.user=my-database-user
database.password=my-secret-password
database.server.id=123456
database.server.name=ecommerce-server
database.include.list=ecommerce
database.history.kafka.topic=dbhistory.ecommerce
database.history.kafka.bootstrap.servers=<bootstrap servers>
database.history.consumer.security.protocol=SASL_SSL
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.producer.security.protocol=SASL_SSL
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
include.schema.changes=true

Some of these settings are generic and should be specified for any connector. For example:

  • connector.class is the Java class of the connector.
  • tasks.max is the maximum number of tasks that should be created for this connector.

Other settings are specific to the Debezium MySQL connector:

  • The database.hostname contains the writer instance endpoint of my Aurora database.
  • The database.server.name is a logical name of the database server. It is used for the names of the Kafka topics created by Debezium.
  • The database.include.list contains the list of databases hosted by the specified server.
  • The database.history.kafka.topic is a Kafka topic used internally by Debezium to track database schema changes.
  • The database.history.kafka.bootstrap.servers contains the bootstrap servers of the MSK cluster.
  • The database.history.consumer.* and database.history.producer.* settings (eight lines in total) enable IAM authentication to access the database history topic.

In Connector capacity, I can choose between autoscaled or provisioned capacity. For this setup, I choose Autoscaled and leave all other settings at their defaults.

With autoscaled capacity, I can configure these parameters:

  • MSK Connect Unit (MCU) count per worker – Each MCU provides 1 vCPU of compute and 4 GB of memory.
  • The minimum and maximum number of workers.
  • Autoscaling utilization thresholds – The upper and lower target utilization thresholds on MCU consumption in percentage to trigger auto scaling.

There is a summary of the minimum and maximum MCUs, memory, and network bandwidth for the connector.
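
The console walkthrough above can also be expressed in code. Below is a hedged boto3 sketch of creating the connector with autoscaled capacity; the API shape reflects my understanding of the kafkaconnect client at launch, and all ARNs, subnets, and security groups are placeholders:

import boto3

kafkaconnect = boto3.client("kafkaconnect", region_name="us-east-1")

response = kafkaconnect.create_connector(
    connectorName="debezium-ecommerce",
    kafkaConnectVersion="2.7.1",
    capacity={
        "autoScaling": {
            "mcuCount": 1,  # each MCU: 1 vCPU, 4 GB of memory
            "minWorkerCount": 1,
            "maxWorkerCount": 2,
            "scaleInPolicy": {"cpuUtilizationPercentage": 20},
            "scaleOutPolicy": {"cpuUtilizationPercentage": 80},
        }
    },
    connectorConfiguration={
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        # ...plus the remaining Debezium settings shown earlier...
    },
    kafkaCluster={
        "apacheKafkaCluster": {
            "bootstrapServers": "<bootstrap servers>",
            "vpc": {
                "securityGroups": ["sg-0123456789abcdef0"],
                "subnets": ["subnet-0123456789abcdef0"],
            },
        }
    },
    kafkaClusterClientAuthentication={"authenticationType": "IAM"},
    kafkaClusterEncryptionInTransit={"encryptionType": "TLS"},
    plugins=[{"customPlugin": {
        "customPluginArn": "arn:aws:kafkaconnect:us-east-1:123456789012:custom-plugin/debezium-mysql-1-6-1/example",
        "revision": 1,
    }}],
    serviceExecutionRoleArn="arn:aws:iam::123456789012:role/msk-connect-service-role",
)
print(response["connectorArn"], response["connectorState"])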

For Worker configuration, you can use the default one provided by Amazon MSK or provide your own configuration. In my setup, I use the default one.

In Access permissions, I create an IAM role. In the trusted entities, I add kafkaconnect.amazonaws.com to allow MSK Connect to assume the role.

The role is used by MSK Connect to interact with the MSK cluster and other AWS services. For my setup, I add the permissions the connector requires.

The Debezium connector needs access to the cluster configuration to find the replication factor to use to create the history topic. For this reason, I add to the permissions policy the kafka-cluster:DescribeClusterDynamicConfiguration action (equivalent to Apache Kafka’s DESCRIBE_CONFIGS cluster ACL).

Depending on your configuration, you might need to add more permissions to the role (for example, in case the connector needs access to other AWS resources such as an S3 bucket). If that is the case, you should add permissions before creating the connector.

In Security, the settings for authentication and encryption in transit are taken from the MSK cluster.

In Logs, I choose to deliver logs to CloudWatch Logs to have more information on the execution of the connector. By using CloudWatch Logs, I can easily manage retention and interactively search and analyze my log data with CloudWatch Logs Insights. I enter the log group ARN (it’s the same log group I used before in the IAM role) and then choose Next.

I review the settings and then choose Create connector. After a few minutes, the connector is running.

Testing MSK Connect with Amazon Aurora and Debezium
Now let’s test the architecture I just set up. I start an Amazon Elastic Compute Cloud (Amazon EC2) instance to update the database and start a couple of Kafka consumers to see Debezium in action. To be able to connect to both the MSK cluster and the Aurora database, I use the same VPC and assign the default security group. I also add another security group that gives me SSH access to the instance.

I download a binary distribution of Apache Kafka and extract the archive in the home directory:

$ tar xvf kafka_2.13-2.7.1.tgz

To use IAM to authenticate with the MSK cluster, I follow the instructions in the Amazon MSK Developer Guide to configure clients for IAM access control. I download the latest stable release of the Amazon MSK Library for IAM:

$ wget https://github.com/aws/aws-msk-iam-auth/releases/download/1.1.0/aws-msk-iam-auth-1.1.0-all.jar

In the ~/kafka_2.13-2.7.1/config/ directory I create a client-config.properties file to configure a Kafka client to use IAM authentication:

# Sets up TLS for encryption and SASL for authN.
security.protocol = SASL_SSL

# Identifies the SASL mechanism to use.
sasl.mechanism = AWS_MSK_IAM

# Binds SASL client implementation.
sasl.jaas.config = software.amazon.msk.auth.iam.IAMLoginModule required;

# Encapsulates constructing a SigV4 signature based on extracted credentials.
# The SASL client bound by "sasl.jaas.config" invokes this class.
sasl.client.callback.handler.class = software.amazon.msk.auth.iam.IAMClientCallbackHandler

I add a few lines to my Bash profile to:

  • Add Kafka binaries to the PATH.
  • Add the MSK Library for IAM to the CLASSPATH.
  • Create the BOOTSTRAP_SERVERS environment variable to store the bootstrap servers of my MSK cluster.
$ cat >> ~/.bash_profile
export PATH=~/kafka_2.13-2.7.1/bin:$PATH
export CLASSPATH=/home/ec2-user/aws-msk-iam-auth-1.1.0-all.jar
export BOOTSTRAP_SERVERS=<bootstrap servers>

Then, I open three terminal connections to the instance.

In the first terminal connection, I start a Kafka consumer for a topic with the same name as the database server (ecommerce-server). This topic is used by Debezium to stream schema changes (for example, when a new table is created).

$ cd ~/kafka_2.13-2.7.1/
$ kafka-console-consumer.sh --bootstrap-server $BOOTSTRAP_SERVERS \
                            --consumer.config config/client-config.properties \
                            --topic ecommerce-server --from-beginning

In the second terminal connection, I start another Kafka consumer for a topic with a name built by concatenating the database server (ecommerce-server), the database (ecommerce), and the table (orders). This topic is used by Debezium to stream data changes for the table (for example, when a new record is inserted).

$ cd ~/kafka_2.13-2.7.1/
$ kafka-console-consumer.sh --bootstrap-server $BOOTSTRAP_SERVERS \
                            --consumer.config config/client-config.properties \
                            --topic ecommerce-server.ecommerce.orders --from-beginning

In the third terminal connection, I install a MySQL client using the MariaDB package and connect to the Aurora database:

$ sudo yum install mariadb
$ mysql -h <aurora-database-writer-instance-endpoint> -u <database-user> -p

From this connection, I create the ecommerce database and a table for my orders:

CREATE DATABASE ecommerce;

USE ecommerce;

CREATE TABLE orders (
       order_id VARCHAR(255),
       customer_id VARCHAR(255),
       item_description VARCHAR(255),
       price DECIMAL(6,2),
       order_date DATETIME DEFAULT CURRENT_TIMESTAMP
);

These database changes are captured by the Debezium connector managed by MSK Connect and are streamed to the MSK cluster. In the first terminal, consuming the topic with schema changes, I see the information on the creation of database and table:

Struct{source=Struct{version=1.6.1.Final,connector=mysql,name=ecommerce-server,ts_ms=1629202831473,db=ecommerce,server_id=1980402433,file=mysql-bin-changelog.000003,pos=9828,row=0},databaseName=ecommerce,ddl=CREATE DATABASE ecommerce,tableChanges=[]}
Struct{source=Struct{version=1.6.1.Final,connector=mysql,name=ecommerce-server,ts_ms=1629202878811,db=ecommerce,table=orders,server_id=1980402433,file=mysql-bin-changelog.000003,pos=10002,row=0},databaseName=ecommerce,ddl=CREATE TABLE orders ( order_id VARCHAR(255), customer_id VARCHAR(255), item_description VARCHAR(255), price DECIMAL(6,2), order_date DATETIME DEFAULT CURRENT_TIMESTAMP ),tableChanges=[Struct{type=CREATE,id="ecommerce"."orders",table=Struct{defaultCharsetName=latin1,primaryKeyColumnNames=[],columns=[Struct{name=order_id,jdbcType=12,typeName=VARCHAR,typeExpression=VARCHAR,charsetName=latin1,length=255,position=1,optional=true,autoIncremented=false,generated=false}, Struct{name=customer_id,jdbcType=12,typeName=VARCHAR,typeExpression=VARCHAR,charsetName=latin1,length=255,position=2,optional=true,autoIncremented=false,generated=false}, Struct{name=item_description,jdbcType=12,typeName=VARCHAR,typeExpression=VARCHAR,charsetName=latin1,length=255,position=3,optional=true,autoIncremented=false,generated=false}, Struct{name=price,jdbcType=3,typeName=DECIMAL,typeExpression=DECIMAL,length=6,scale=2,position=4,optional=true,autoIncremented=false,generated=false}, Struct{name=order_date,jdbcType=93,typeName=DATETIME,typeExpression=DATETIME,position=5,optional=true,autoIncremented=false,generated=false}]}}]}

Then, I go back to the database connection in the third terminal to insert a few records in the orders table:

INSERT INTO orders VALUES ("123456", "123", "A super noisy mechanical keyboard", "50.00", "2021-08-16 10:11:12");
INSERT INTO orders VALUES ("123457", "123", "An extremely wide monitor", "500.00", "2021-08-16 11:12:13");
INSERT INTO orders VALUES ("123458", "123", "A too sensible microphone", "150.00", "2021-08-16 12:13:14");

In the second terminal, I see the information on the records inserted into the orders table:

Struct{after=Struct{order_id=123456,customer_id=123,item_description=A super noisy mechanical keyboard,price=50.00,order_date=1629108672000},source=Struct{version=1.6.1.Final,connector=mysql,name=ecommerce-server,ts_ms=1629202993000,db=ecommerce,table=orders,server_id=1980402433,file=mysql-bin-changelog.000003,pos=10464,row=0},op=c,ts_ms=1629202993614}
Struct{after=Struct{order_id=123457,customer_id=123,item_description=An extremely wide monitor,price=500.00,order_date=1629112333000},source=Struct{version=1.6.1.Final,connector=mysql,name=ecommerce-server,ts_ms=1629202993000,db=ecommerce,table=orders,server_id=1980402433,file=mysql-bin-changelog.000003,pos=10793,row=0},op=c,ts_ms=1629202993621}
Struct{after=Struct{order_id=123458,customer_id=123,item_description=A too sensible microphone,price=150.00,order_date=1629115994000},source=Struct{version=1.6.1.Final,connector=mysql,name=ecommerce-server,ts_ms=1629202993000,db=ecommerce,table=orders,server_id=1980402433,file=mysql-bin-changelog.000003,pos=11114,row=0},op=c,ts_ms=1629202993630}

My change data capture architecture is up and running and the connector is fully managed by MSK Connect.

Availability and Pricing
MSK Connect is available in the following AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), EU (Stockholm), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon). For more information, see the AWS Regional Services List.

With MSK Connect you pay for what you use. The resources used by your connectors can be scaled automatically based on your workload. For more information, see the Amazon MSK pricing page.

Simplify the management of your Apache Kafka connectors today with MSK Connect.

Danilo

AA21-259A: APT Actors Exploiting Newly Identified Vulnerability in ManageEngine ADSelfService Plus

Original release date: September 16, 2021

Summary

This Joint Cybersecurity Advisory uses the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK®) framework, Version 8. See ATT&CK for Enterprise for referenced threat actor tactics and techniques.

This joint advisory is the result of analytic efforts between the Federal Bureau of Investigation (FBI), United States Coast Guard Cyber Command (CGCYBER), and the Cybersecurity and Infrastructure Security Agency (CISA) to highlight the cyber threat associated with active exploitation of a newly identified vulnerability (CVE-2021-40539) in ManageEngine ADSelfService Plus—a self-service password management and single sign-on solution.

CVE-2021-40539, rated critical by the Common Vulnerability Scoring System (CVSS), is an authentication bypass vulnerability affecting representational state transfer (REST) application programming interface (API) URLs that could enable remote code execution. The FBI, CISA, and CGCYBER assess that advanced persistent threat (APT) cyber actors are likely among those exploiting the vulnerability. The exploitation of ManageEngine ADSelfService Plus poses a serious risk to critical infrastructure companies, U.S.-cleared defense contractors, academic institutions, and other entities that use the software. Successful exploitation of the vulnerability allows an attacker to place webshells, which enable the adversary to conduct post-exploitation activities, such as compromising administrator credentials, conducting lateral movement, and exfiltrating registry hives and Active Directory files.

Zoho ManageEngine ADSelfService Plus build 6114, which Zoho released on September 6, 2021, fixes CVE-2021-40539. FBI, CISA, and CGCYBER strongly urge users and administrators to update to ADSelfService Plus build 6114. Additionally, FBI, CISA, and CGCYBER strongly urge organizations to ensure ADSelfService Plus is not directly accessible from the internet.

The FBI, CISA, and CGCYBER have reports of malicious cyber actors using exploits against CVE-2021-40539 to gain access [T1190] to ManageEngine ADSelfService Plus as early as August 2021. The actors have been observed using various tactics, techniques, and procedures (TTPs), including:

  • Frequently writing webshells [T1505.003] to disk for initial persistence
  • Obfuscating and Deobfuscating/Decoding Files or Information [T1027 and T1140]
  • Conducting further operations to dump user credentials [T1003]
  • Living off the land by only using signed Windows binaries for follow-on actions [T1218]
  • Adding/deleting user accounts as needed [T1136]
  • Stealing copies of the Active Directory database (NTDS.dit) [T1003.003] or registry hives
  • Using Windows Management Instrumentation (WMI) for remote execution [T1047]
  • Deleting files to remove indicators from the host [T1070.004]
  • Discovering domain accounts with the net Windows command [T1087.002]
  • Using Windows utilities to collect and archive files for exfiltration [T1560.001]
  • Using custom symmetric encryption for command and control (C2) [T1573.001]

The FBI, CISA, and CGCYBER are proactively investigating and responding to this malicious cyber activity.

  • FBI is leveraging specially trained cyber squads in each of its 56 field offices and CyWatch, the FBI’s 24/7 operations center and watch floor, which provides around-the-clock support to track incidents and communicate with field offices across the country and partner agencies.
  • CISA offers a range of no-cost cyber hygiene services to help organizations assess, identify, and reduce their exposure to threats. By requesting these services, organizations of any size could find ways to reduce their risk and mitigate attack vectors.
  • CGCYBER has deployable elements that provide cyber capability to marine transportation system critical infrastructure in proactive defense or response to incidents.

Sharing technical and/or qualitative information with the FBI, CISA, and CGCYBER helps empower and amplify our capabilities as federal partners to collect and share intelligence and engage with victims while working to unmask and hold accountable those conducting malicious cyber activities. See the Contact section below for details.

Click here for a PDF version of this report.

Technical Details

Successful compromise of ManageEngine ADSelfService Plus, via exploitation of CVE-2021-40539, allows the attacker to upload a .zip file containing a JavaServer Pages (JSP) webshell masquerading as an x509 certificate: service.cer. Subsequent requests are then made to different API endpoints to further exploit the victim’s system.

After the initial exploitation, the JSP webshell is accessible at /help/admin-guide/Reports/ReportGenerate.jsp. The attacker then attempts to move laterally using Windows Management Instrumentation (WMI), gain access to a domain controller, dump NTDS.dit and the SECURITY/SYSTEM registry hives, and, from there, continue operating with the compromised access.

Confirming a successful compromise of ManageEngine ADSelfService Plus may be difficult—the attackers run clean-up scripts designed to remove traces of the initial point of compromise and hide any relationship between exploitation of the vulnerability and the webshell.

Targeted Sectors

APT cyber actors have targeted academic institutions, defense contractors, and critical infrastructure entities in multiple industry sectors—including transportation, IT, manufacturing, communications, logistics, and finance. Illicitly obtained access and information may disrupt company operations and subvert U.S. research in multiple sectors.

Indicators of Compromise

Hashes:

068d1b3813489e41116867729504c40019ff2b1fe32aab4716d429780e666324
49a6f77d380512b274baff4f78783f54cb962e2a8a5e238a453058a351fcfbba

File paths:

C:\ManageEngine\ADSelfService Plus\webapps\adssp\help\admin-guide\reports\ReportGenerate.jsp
C:\ManageEngine\ADSelfService Plus\webapps\adssp\html\promotion\adap.jsp
C:\ManageEngine\ADSelfService Plus\work\Catalina\localhost\ROOT\org\apache\jsp\help
C:\ManageEngine\ADSelfService Plus\jre\bin\SelfSe~1.key (filename varies with an epoch timestamp of creation; extension may vary as well)
C:\ManageEngine\ADSelfService Plus\webapps\adssp\Certificates\SelfService.csr
C:\ManageEngine\ADSelfService Plus\bin\service.cer
C:\Users\Public\custom.txt
C:\Users\Public\custom.bat
C:\ManageEngine\ADSelfService Plus\work\Catalina\localhost\ROOT\org\apache\jsp\help (including subdirectories and contained files)

Webshell URL Paths:

/help/admin-guide/Reports/ReportGenerate.jsp

/html/promotion/adap.jsp

Check the log files located at C:\ManageEngine\ADSelfService Plus\logs for evidence of successful exploitation of the ADSelfService Plus vulnerability (a simple search sketch follows the list below):

  • In access* logs:
    • /help/admin-guide/Reports/ReportGenerate.jsp
    • /ServletApi/../RestApi/LogonCustomization
    • /ServletApi/../RestAPI/Connection
  • In serverOut_* logs:
    • Keystore will be created for "admin"
    • The status of keystore creation is Upload!
  • In adslog* logs:
    • Java traceback errors that include references to NullPointerException in addSmartCardConfig or getSmartCardConfig
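
A quick way to hunt for these strings across the log directories is a short script. The following Python sketch simply greps the patterns above; the log directory path matches the default install location, and the script is illustrative rather than authoritative:

import glob
import os

LOG_DIR = r"C:\ManageEngine\ADSelfService Plus\logs"

# Indicator strings from the advisory, keyed by log file pattern
INDICATORS = {
    "access*": [
        "/help/admin-guide/Reports/ReportGenerate.jsp",
        "/ServletApi/../RestApi/LogonCustomization",
        "/ServletApi/../RestAPI/Connection",
    ],
    "serverOut_*": [
        'Keystore will be created for "admin"',
        "The status of keystore creation is Upload!",
    ],
    "adslog*": ["NullPointerException"],
}

for pattern, needles in INDICATORS.items():
    for path in glob.glob(os.path.join(LOG_DIR, pattern)):
        with open(path, errors="replace") as log_file:
            for lineno, line in enumerate(log_file, 1):
                for needle in needles:
                    if needle in line:
                        print(f"{path}:{lineno}: {needle}")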

TTPs:

  • WMI for lateral movement and remote code execution (wmic.exe)
  • Using plaintext credentials acquired from compromised ADSelfService Plus host
  • Using pg_dump.exe to dump ManageEngine databases
  • Dumping NTDS.dit and SECURITY/SYSTEM/NTUSER registry hives
  • Exfiltration through webshells
  • Post-exploitation activity conducted with compromised U.S. infrastructure
  • Deleting specific, filtered log lines

Yara Rules:

rule ReportGenerate_jsp {
   strings:
      $s1 = "decrypt(fpath)"
      $s2 = "decrypt(fcontext)"
      $s3 = "decrypt(commandEnc)"
      $s4 = "upload failed!"
      $s5 = "sevck"
      $s6 = "newid"
   condition:
      filesize < 15KB and 4 of them
}

rule EncryptJSP {
   strings:
      $s1 = "AEScrypt"
      $s2 = "AES/CBC/PKCS5Padding"
      $s3 = "SecretKeySpec"
      $s4 = "FileOutputStream"
      $s5 = "getParameter"
      $s6 = "new ProcessBuilder"
      $s7 = "new BufferedReader"
      $s8 = "readLine()"
   condition:
      filesize < 15KB and 6 of them
}
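
To scan files with these rules, one option is the yara-python bindings. A minimal sketch (the rule file and sample file names are hypothetical):

import yara

# Compile the two rules above, saved to a local file
rules = yara.compile(filepath="adselfservice_webshell.yar")

# Scan a suspect file and print the names of any matching rules
for match in rules.match(filepath="ReportGenerate.jsp"):
    print(match.rule)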

Mitigations

Organizations that identify any activity related to ManageEngine ADSelfService Plus indicators of compromise within their networks should take action immediately.

Zoho ManageEngine ADSelfService Plus build 6114, which Zoho released on September 6, 2021, fixes CVE-2021-40539. FBI, CISA, and CGCYBER strongly urge users and administrators to update to ADSelfService Plus build 6114. Additionally, FBI, CISA, and CGCYBER strongly urge organizations to ensure ADSelfService Plus is not directly accessible from the internet.

Additionally, FBI, CISA, and CGCYBER strongly recommend domain-wide password resets and double Kerberos Ticket Granting Ticket (TGT) password resets if any indication is found that the NTDS.dit file was compromised.

Actions for Affected Organizations

Immediately report as an incident to CISA or the FBI (refer to Contact Information section below) the existence of any of the following:

  • Identification of indicators of compromise as outlined above.
  • Presence of webshell code on compromised ManageEngine ADSelfService Plus servers.
  • Unauthorized access to or use of accounts.
  • Evidence of lateral movement by malicious actors with access to compromised systems.
  • Other indicators of unauthorized access or compromise.

Contact Information

Recipients of this report are encouraged to contribute any additional information that they may have related to this threat.

For any questions related to this report or to report an intrusion and request resources for incident response or technical assistance, please contact:

  • To report suspicious or criminal activity related to information found in this Joint Cybersecurity Advisory, contact your local FBI field office at https://www.fbi.gov/contact-us/field-offices, or the FBI’s 24/7 Cyber Watch (CyWatch) at (855) 292-3937 or by e-mail at CyWatch@fbi.gov. When available, please include the following information regarding the incident: date, time, and location of the incident; type of activity; number of people affected; type of equipment used for the activity; the name of the submitting company or organization; and a designated point of contact.
  • To request incident response resources or technical assistance related to these threats, contact CISA at Central@cisa.gov.
  • To report cyber incidents to the Coast Guard pursuant to 33 CFR Subchapter H, Part 101.305 please contact the USCG National Response Center (NRC) Phone: 1-800-424-8802, email: NRC@uscg.mil.

Revisions

  • September 16, 2021: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Phishing 101: why depend on one suspicious message subject when you can use many?, (Thu, Sep 16th)

There are many e-mail subjects that people tend to associate with phishing due to their overuse in this area. Among the more traditional and common phishing subjects, that most people have probably seen at some point, are variations on the “Your account was hacked”, “Your mailbox is full”, “You have a postal package waiting”, “Here are urgent payment instructions” and “Important COVID-19 information” themes.

Since security awareness courses often explicitly cover these, and e-mail messages with similar subjects are therefore usually classified by users as prima facie phishing attempts, one would reasonably expect that when a threat actor decides to use any such subject line, they would at least try to make the body of the e-mail a little more believable… However, as it turns out, this is not always the case.

We recently received a phishing message at our Handler e-mail address, which I found interesting, since its authors obviously decided to go the “all in” route when it came to the use of multiple obviously suspicious message subjects, rather than trying to make their creation more believable.

“But how could a single phishing e-mail have multiple subjects”, I hear you ask, dear reader.

Well, in this case, the phishing was a variation on the “You have undelivered e-mail messages waiting” theme, but instead of a list of urgent looking, yet believable subject lines, it contained pretty much the whole aforementioned set of suspicious-at-first-glance subjects, as you may see for yourself in the following image…

Apart from this rather interesting (and slightly funny) approach on the part of its authors, the e-mail was a rather low-quality example of a phishing message, its less-than-professional origins showing, among other places, in the fact that multiple links pointed to URLs that were obviously intended for previous recipients or recipients from other domains.

The only link that did lead to a phishing page pointed to an HTML document hosted on Google Firebase Storage. When accessed, it displayed a dynamically generated login prompt and tried to load a web page hosted on the domain to which the e-mail address belonged in an iframe below this prompt, in an attempt to make the login request look more believable. This technique is fairly common[1], and it provides another good reason why it's advisable to use CSP/X-Frame-Options headers on one's web servers.
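
As an illustration of that last piece of advice (this is defensive hardening, unrelated to the phishing kit itself), here is a minimal Python sketch of a web server that sends both headers so browsers refuse to render its pages inside an iframe:

from http.server import HTTPServer, SimpleHTTPRequestHandler

class AntiFramingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Legacy anti-framing header, still widely honored by browsers
        self.send_header("X-Frame-Options", "DENY")
        # Modern CSP equivalent of X-Frame-Options: DENY
        self.send_header("Content-Security-Policy", "frame-ancestors 'none'")
        super().end_headers()

if __name__ == "__main__":
    # Serve the current directory on port 8000 with anti-framing headers
    HTTPServer(("", 8000), AntiFramingHandler).serve_forever()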

IoC
hxxps://firebasestorage[.]googleapis[.]com/v0/b/g656-6f582.appspot.com/o/hghhg.html?alt=media&token=3a94d041-9a90-4428-85ca-41779f9605a1#address@domain.tld

[1] https://isc.sans.edu/forums/diary/Slightly+broken+overlay+phishing/26586/

———–
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS IQ expansion: Connect with Experts and Consulting Firms based in the UK and France

AWS IQ launched in 2019 and has been helping customers worldwide engage thousands of AWS Certified third-party experts and consulting firms for on-demand project work. Whether you need to learn about AWS, plan your project, set up new services, migrate existing applications, or optimize your spend, AWS IQ connects you with experts and consulting firms who can help. You can share your project objectives with a description, receive responses within the AWS IQ application, approve permissions and budget, and be charged directly through AWS billing.

Until yesterday, experts had to reside in the United States to offer their hands-on help on AWS IQ. Today, I’m happy to announce that AWS Certified experts and consulting firms based in the UK and France can participate in AWS IQ.

If you are an AWS customer based in the UK or France and need to connect with local AWS experts, now you can reach out to a wider pool of experts and consulting firms during European business hours. When creating a new project, you can now indicate a preferred expert location.

As an AWS Certified expert you can now view the buyer’s preferred expert location to ensure the right fit. AWS IQ simplifies finding relevant opportunities and it helps you access a customer’s AWS environment securely. It also takes care of billing so more time is spent on solving customer problems, instead of administrative tasks. Your payments will be disbursed by AWS Marketplace in USD towards a US bank account. If you don’t already have a US bank account, you may be able to obtain one through third-party services such as Hyperwallet.

AWS IQ User Interface Update
When you create a new project request, you can select a Preferred expert or firm location: Anywhere, France, UK, or US.

Check out Jeff Barr’s launch article to learn more about the full request creation process.

You can also work on the same project with multiple experts from different locations.

When browsing experts and firms, you will find their location under the company name and reviews.

Available Today
AWS IQ is available for customers anywhere in the world (except China) for all sorts of project work, delivered by AWS experts in the United States, the United Kingdom, and France. Get started by creating your project request on iq.aws.amazon.com. Here you can discover featured experts or browse experts for a specific service such as Amazon Elastic Compute Cloud (EC2) or DynamoDB.

If you’re interested in getting started as an expert, check out AWS IQ for Experts. Your profile will showcase your AWS Certifications as well as the ratings and reviews from completed projects.

I’m excited about the expansion of AWS IQ for experts based in the UK and France, and I’m looking forward to further expansions in the future.

Alex

Hancitor campaign abusing Microsoft's OneDrive, (Wed, Sep 15th)

Introduction

Malicious spam (malspam) pushing Hancitor malware (AKA Chanitor, MAN1, Moskalvzapoe, or TA551) sometimes changes tactics when delivering malware. Since June 2021, this campaign has stopped using docs.google.com links in its malspam and begun using feedproxy.google.com to kick off an infection chain. Criminals behind Hancitor have been abusing Google services since October 2020.

These Google links redirect to a URL on another domain. This new "redirect URL" delivers a Hancitor Word document: it returns a web page with script using base64 text to generate the document, as described here. The base64 text is converted to a malicious Word document that shows up in the web browser as a file to save.
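
When analyzing a page like that, the embedded base64 text can be carved out and decoded offline. A minimal Python sketch; the file names and the minimum-length heuristic are illustrative:

import base64
import re

# Read the saved "redirect URL" page and pull out long base64-looking blobs
page = open("redirect_page.html", encoding="utf-8", errors="ignore").read()
blobs = re.findall(r"[A-Za-z0-9+/=]{500,}", page)

for index, blob in enumerate(blobs):
    try:
        data = base64.b64decode(blob)
    except ValueError:
        continue  # not valid base64 after all
    # OLE2 documents (like these Word docs) start with D0 CF 11 E0
    if data.startswith(b"\xd0\xcf\x11\xe0"):
        with open(f"carved_{index}.doc", "wb") as out:
            out.write(data)
        print(f"carved_{index}.doc: {len(data)} bytes")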

But in September of 2021, this campaign stopped using script with base64 text.  Instead, Hancitor Word docs are now hosted on Microsoft OneDrive URLs.  The Hancitor campaign is currently abusing both Google and Microsoft services.


Shown above:  Change in tactics for Hancitor malware distribution seen in September 2021.

Previous method: script with base64 text

See below for images of traffic from a "redirect URL" that returned script with base64 text to generate a Hancitor Word document.


Shown above:  Script with base64 text used to generate Hancitor Word doc (part 1 of 2).


Shown above:  Script with base64 text used to generate Hancitor Word doc (part 2 of 2).

New method: OneDrive URLs

Instead of script using base64 text to generate a Hancitor Word doc, these "redirect URLs" now present script with OneDrive URLs to deliver a Word doc.  See the images below from Tuesday 2021-09-14.


Shown above:  Script from "redirect URLs" now have OneDrive links.


Shown above:  Manually using the OneDrive URL to download a Hancitor Word doc.

Final words

A packet capture of the infection traffic, 18 email examples, some malware samples, and a list of IOCs from a Hancitor infection on Tuesday 2021-09-14 are available here.  Another Hancitor run has also occurred today on Wednesday 2021-09-15.

We continue to see criminals abusing services offered by companies like Google, Microsoft, and other big names.  While the malicious links can be quickly reported and taken off-line, criminals merely return to establish new URLs using the same services.

This is a cycle we see over and over again.  As long as it remains cost-effective for criminals to operate this way, they will continue to abuse these services.

Hancitor is just one of many campaigns that routinely engage in such abuse.

Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

How AWS is Supporting Nonprofits, Governments, and Communities Impacted by Hurricane Ida


AWS Disaster Response Team volunteer Paul Fries runs cable to provide connectivity in a police station that has been converted into a supply distribution center and housing for National Guard troops and FEMA.

In the aftermath of Hurricane Ida, the AWS Disaster Response Team is providing personnel and resources to support our customers, including the Information Technology Disaster Resource Center (ITDRC), the St. Bernard Parish government, and Crisis Cleanup.

In 2018, Amazon Web Services launched the AWS Disaster Response Program to support governments and nonprofits actively responding to natural disasters. Guided by the belief that technology has the power to solve the world’s most pressing issues, the AWS Disaster Response Team has been working around the clock to provide pro bono assistance to public sector organizations responding to Hurricane Ida’s widespread damage.

The team brings in technical expertise in the form of solutions architects, machine learning practitioners, experts in edge computing, and others across AWS. This allows AWS to support organizations responding to disasters and help them overcome the unique challenges they face.

Establishing Connectivity

When Hurricane Ida hit the United States on August 29th, 2021, it caused power outages and major disruptions to infrastructures for water, power, cellular, and commercial communications. In the South, many were left homeless or without electricity, gas, or water by Ida’s winds and flooding. In New York and New Jersey, hundreds of thousands of people lost power. The AWS Disaster Response team activated its response protocol, including technical volunteer support on the ground and financial support to disaster relief nonprofit ITDRC for its efforts to establish local internet connectivity and cell phone charging stations. This critical support – including almost 100 sites in New Orleans, Houma, and other heavily impacted areas of Louisiana – is enabling public safety agencies, responding relief organizations, social services, and community members to communicate and coordinate as they respond to and recover from Ida.

Increasing Capacity

AWS volunteers supported disaster relief nonprofit Crisis Cleanup by providing technical AWS service architecture guidance, enabling them to increase their capacity to provide call-backs to community members requesting assistance with tasks like cutting fallen trees and tarping roofs that were damaged by the storm. AWS volunteers are also helping Crisis Cleanup by conducting call-backs, bolstering their volunteer base to reach a greater number of people more quickly than otherwise would have been possible.

Building Resiliency

To support the St. Bernard Parish government, located just outside of New Orleans, AWS assisted with the deployment of Amazon WorkMail to facilitate more resilient communication across teams after their email server was knocked offline by the power outage. Using Amazon WorkMail helped enable government personnel to communicate more effectively by utilizing the cloud instead of traditional infrastructure.

The AWS Disaster Response Team continues to field requests for support following Hurricane Ida, and is also working with other organizations to support their resilience planning and disaster preparedness efforts.

Learn More

Check out the AWS Disaster Response page for more information. To donate cash and supplies to organizations such as Feeding America and Save the Children, visit the Amazon page on Hurricane Ida or just say, “Alexa, I want to donate to Hurricane Ida relief.”

Microsoft September 2021 Patch Tuesday, (Tue, Sep 14th)

This month we got patches for <> vulnerabilities. Of these, <> are critical, <> were previously disclosed and <> is being exploited according to Microsoft.

As expected, Microsoft released the patch for the zero-day affecting MSHTML that could allow an attacker to execute remote code on an affected system. According to the advisory, an attacker could craft a malicious ActiveX control to be used by a Microsoft Office document that hosts the browser rendering engine. The attacker would then have to convince the user to open the malicious document. The CVSS for this vulnerability is 8.80 (out of 10).

See my dashboard for a more detailed breakout: https://patchtuesdaydashboard.com/

September Early Security Releases

Chromium: CVE-2021-30606 Use after free in Blink
  %%cve:2021-30606%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30607 Use after free in Permissions
  %%cve:2021-30607%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30608 Use after free in Web Share
  %%cve:2021-30608%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30609 Use after free in Sign-In
  %%cve:2021-30609%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30610 Use after free in Extensions API
  %%cve:2021-30610%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30611 Use after free in WebRTC
  %%cve:2021-30611%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30612 Use after free in WebRTC
  %%cve:2021-30612%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30613 Use after free in Base internals
  %%cve:2021-30613%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30614 Heap buffer overflow in TabStrip
  %%cve:2021-30614%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30615 Cross-origin data leak in Navigation
  %%cve:2021-30615%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30616 Use after free in Media
  %%cve:2021-30616%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30617 Policy bypass in Blink
  %%cve:2021-30617%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30618 Inappropriate implementation in DevTools
  %%cve:2021-30618%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30619 UI Spoofing in Autofill
  %%cve:2021-30619%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30620 Insufficient policy enforcement in Blink
  %%cve:2021-30620%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30621 UI Spoofing in Autofill
  %%cve:2021-30621%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30622 Use after free in WebApp Installs
  %%cve:2021-30622%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30623 Use after free in Bookmarks
  %%cve:2021-30623%% | Disclosed: No | Exploited: No
Chromium: CVE-2021-30624 Use after free in Autofill
  %%cve:2021-30624%% | Disclosed: No | Exploited: No
Microsoft Edge (Chromium-based) Elevation of Privilege Vulnerability
  %%cve:2021-26436%% | Disclosed: No | Exploited: No | Exploitability (old/current): Less Likely / Less Likely | Severity: Important | CVSS Base: 6.1 | CVSS Temporal: 5.3
  %%cve:2021-36930%% | Disclosed: No | Exploited: No | Exploitability (old/current): Less Likely / Less Likely | Severity: Important | CVSS Base: 5.3 | CVSS Temporal: 4.6
Microsoft Edge (Chromium-based) Tampering Vulnerability
  %%cve:2021-38669%% | Disclosed: No | Exploited: No | Exploitability (old/current): Less Likely / Less Likely | Severity: Important | CVSS Base: 6.4 | CVSS Temporal: 5.6
Microsoft Edge for Android Information Disclosure Vulnerability
  %%cve:2021-26439%% | Disclosed: No | Exploited: No | Severity: Moderate | CVSS Base: 4.6 | CVSS Temporal: 4.0
Microsoft Edge for Android Spoofing Vulnerability
  %%cve:2021-38641%% | Disclosed: No | Exploited: No | Exploitability (old/current): Less Likely / Less Likely | Severity: Important | CVSS Base: 6.1 | CVSS Temporal: 5.3
Microsoft Edge for iOS Spoofing Vulnerability
  %%cve:2021-38642%% | Disclosed: No | Exploited: No | Exploitability (old/current): Less Likely / Less Likely | Severity: Important | CVSS Base: 6.1 | CVSS Temporal: 5.3
Microsoft MSHTML Remote Code Execution Vulnerability
  %%cve:2021-40444%% | Disclosed: Yes | Exploited: Yes | Exploitability (old/current): Detected / Detected | Severity: Important | CVSS Base: 8.8 | CVSS Temporal: 7.9


Renato Marinho
Morphus Labs | LinkedIn | Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Shipping to Elasticsearch Microsoft DNS Logs, (Sat, Sep 11th)

This parser takes the logs from a Windows 2012 R2 and/or 2019 server (C:\DNSLogs\windns.log) and parses them into usable metadata, which can be monitored and queried via an ELK dashboard. The logs have been mapped using the DNS ECS fields documented here [1].

→ First step is to load the Microsoft DNS templates [3][4] via Kibana Dev Tools to create the microsoft.dns Index Management and Index Lifecycle Policy. Follow the instructions at the top of each template.

→ Second step is to install Logstash (if not already done) and add to Logstash [2] this configuration file (i.e. /etc/logstash/conf.d/logstash-filter-ms-dns.conf) and start the logstash service.

This configuration file also contains the option of resolving the IP addresses to hostname and should be adjusted to reflect the local internal network. Edit logstash-filter-ms-dns.conf and change 192.168.25 to reflect the local network:

# This filter drops the internal domain and the internal IP range (in-addr.arpa)
# to reduce the amount of noise
filter {
  if [log][file][path] =~ "windns.log" {
    if [dns.question.name] =~ /168.192.in-addr.arpa/ {
      drop {}
    }
    if [dns.question.name] =~ /erin.ca/ {
      drop {}
    }
  }
}

# DNS lookup for dns.resolved_ip values that start with 192.168.25

filter {
  if [log][file][path] =~ "windns.log" and [dns.resolved_ip] =~ /192.168.25/ {
    mutate {
      copy => { "dns.resolved_ip" => "source.registered_domain" }
      convert   => { "geoip.postal_code" => "string" }
      # copy dns.question.name to related.host
      copy => { "dns.question.name" => "related.hosts" }
    }
    dns {
      reverse => [ "source.registered_domain" ]
      action => "replace"
    }
  }
}
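
For clarity, here is a small Python sketch that mirrors what the two filter blocks above do; it is an illustration of the logic, not part of the pipeline:

import socket

INTERNAL_PREFIX = "192.168.25."

def enrich(event: dict):
    """Mirror of the Logstash logic: drop internal noise, then
    reverse-resolve internal resolved IPs to a hostname."""
    name = event.get("dns.question.name", "")
    # Drop internal reverse-lookup zones and the internal domain
    if "168.192.in-addr.arpa" in name or "erin.ca" in name:
        return None
    # Copy dns.question.name to related.hosts, as the mutate filter does
    event["related.hosts"] = name
    resolved = event.get("dns.resolved_ip", "")
    if resolved.startswith(INTERNAL_PREFIX):
        try:
            # Equivalent of the dns { reverse => ... } filter
            event["source.registered_domain"] = socket.gethostbyaddr(resolved)[0]
        except OSError:
            pass
    return event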

→ Third step is to log in to the Windows server and set up file-based DNS debug logging.

Windows DNS Debug Logging is used to collect the logs. Queries are logged one per line. Enable DNS debug logging using these steps:

  • Create directory: C:\DNSLogs
  • Open the DNS Management console (dnsmgmt.msc).
  • Right-click on the DNS Server name and choose Properties from the context menu.
  • Under the Debug Logging tab, enable Log packets for debugging and configure as per the picture below.
  • File path and name: C:\DNSLogs\windns.log
  • Maximum size (bytes): 5000000
  • Apply the changes
  • Verify the file C:\DNSLogs\windns.log is now receiving data

→ Fourth step is to install filebeat on the Windows server (C:\Program Files\filebeat), configure it as a service, and change the filebeat.yml configuration to contain only the following information. Change the IP address in this file to the IP address of the logstash service (192.168.25.23), then install the filebeat service:

# This filebeat shipper is used for
# Microsoft DNS logs

# 9 Jan 2021
# Version: 1.0

filebeat.inputs:

# Filebeat input for Windows DNS logs

- type: log
  paths:
    - "C:/DNSLogs/windns.log"
  include_lines: ["^[0-9]{4}-"]
  fields_under_root: true

#==================== Queued Event ====================
#queue.mem:
#  events: 4096
#  flush.min_events: 512
#  flush.timeout: 5s

#queue.disk:
#  path: "/op/filebeat/diskqueue"
#  max_size: 10GB

#==================== Output Event ====================
output.logstash:
  hosts: ["192.168.25.23:5044"]

At this point, the logs should start going to ELK. From the Windows server, verify the connection has been established to logstash by running at the command line: netstat -an | findstr 5044

In the Elasticsearch server, under Stack Management → Index Management, look for a new index matching microsoft.dns-* (something like microsoft.dns-2021.09.10-000001), which should start collecting the Microsoft DNS metadata.

→ Last step is to load the dashboard [5] into Elasticsearch: under Stack Management → Saved Objects, import the file Microsoft_DNS_7.14_v1.ndjson. This will load the new dashboard and the Index Pattern.

Look under the Dashboard tab for Microsoft DNS Server [Microsoft DNS Tag]. This is a sample of the top part of the dashboard:

[1] https://www.elastic.co/guide/en/ecs/current/ecs-dns.html
[2] https://handlers.sans.edu/gbruneau/elk/logstash-filter-ms-dns.conf
[3] https://handlers.sans.edu/gbruneau/elk/Windows_DNS_ilm_policy.txt
[4] https://handlers.sans.edu/gbruneau/elk/Windows_DNS_template.txt
[5] https://handlers.sans.edu/gbruneau/elk/Microsoft_DNS_7.14_v1.ndjson
[6] https://isc.sans.edu/forums/diary/Secure+Communication+using+TLS+in+Elasticsearch/26902/
[7] https://www.elastic.co/guide/en/ecs/master/ecs-field-reference.html
[8] https://handlers.sans.edu/gbruneau/elastic.htm

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
