Amazon Detective – Rapid Security Investigation and Analysis

This post was originally published on this site

Almost five years ago, I blogged about a solution that automatically analyzes AWS CloudTrail data to generate alerts upon sensitive API usage. It was a simple and basic solution for security analysis and automation. But demanding AWS customers have multiple AWS accounts and collect data from multiple sources, and simple searches based on regular expressions are not enough to conduct in-depth analysis of suspected security-related events. Today, when a security issue is detected, such as compromised credentials or unauthorized access to a resource, security analysts cross-analyze several data logs to understand the root cause of the issue and its impact on the environment. In-depth analysis often requires scripting and ETL to connect the dots between data generated by multiple siloed systems. It requires skilled data engineers to answer basic questions such as “is this normal?” Analysts use Security Information and Event Management (SIEM) tools, third-party libraries, and data visualization tools to validate, compare, and correlate data to reach their conclusions. To further complicate matters, new AWS accounts and new applications are constantly introduced, forcing analysts to reestablish baselines of normal behavior and to understand new patterns of activity every time they evaluate a new security issue.

Amazon Detective is a fully managed service that empowers users to automate the heavy lifting involved in processing large quantities of AWS log data to determine the cause and impact of a security issue. Once enabled, Detective automatically begins distilling and organizing data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs into a graph model that summarizes the resource behaviors and interactions observed across your entire AWS environment.

At re:Invent 2019, we announced a preview of Amazon Detective. Today, it is our pleasure to announce its availability for all AWS customers.

Amazon Detective uses machine learning models to produce graphical representations of your account behavior and helps you to answer questions such as “is this an unusual API call for this role?” or “is this spike in traffic from this instance expected?” You do not need to write code, configure anything, or tune your own queries.

To get started with Amazon Detective, I open the AWS Management Console, I type “detective” in the search bar and I select Amazon Detective from the provided results to launch the service. I enable the service and I let the console guide me to configure “member” accounts to monitor and the “master” account in which to aggregate the data. After this one-time setup, Amazon Detective immediately starts analyzing AWS telemetry data and, within a few minutes, I have access to a set of visual interfaces that summarize my AWS resources and their associated behaviors such as logins, API calls, and network traffic. I search for a finding or resource from the Amazon Detective Search bar and, after a short while, I am able to visualize the baseline and current value for a set of metrics.

I select the resource type and ID and start to browse the various graphs.

I can also investigate an Amazon GuardDuty finding by using the native integrations within the GuardDuty and AWS Security Hub consoles. I click the “Investigate” link from any GuardDuty finding and jump directly into an Amazon Detective console that provides related details, context, and guidance to investigate and respond to the issue. In the example below, GuardDuty reports an unauthorized access that I decide to investigate:

Amazon Detective console opens:

I scroll down the page to check the graph of failed API calls. I click a bar in the graph to get the details, such as the IP addresses where the calls originated:

Once I know the source IP addresses, I click New behavior: AWS role and observe where these calls originated from to compare with the automatically discovered baseline.

Amazon Detective works across your AWS accounts: it is a multi-account solution that aggregates data and findings from up to 1,000 AWS accounts into a single security-owned “master” account, making it easy to view behavioral patterns and connections across your entire AWS environment.

There are no agents, sensors, or additional software to deploy in order to use the service. Amazon Detective retrieves, aggregates, and analyzes data from Amazon GuardDuty, AWS CloudTrail, and Amazon Virtual Private Cloud Flow Logs. Amazon Detective collects existing logs directly from AWS without touching your infrastructure, so it has no impact on the cost or performance of your workloads.

Amazon Detective can be administered via the AWS Management Console or via the Amazon Detective management APIs. The management APIs enable you to build Amazon Detective into your standard account registration, enablement, and deployment processes.
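As a sketch of what that API-driven enablement might look like, here is a minimal Python example using boto3's `detective` client. The account IDs and email addresses are placeholders, and this is an illustrative outline rather than a complete onboarding workflow (member accounts still have to accept the invitation).

```python
# Sketch: enabling Amazon Detective programmatically via boto3.
# Account IDs and email addresses below are placeholders.

def member_accounts(accounts):
    """Format a {account_id: email} mapping into the Accounts parameter
    that Detective's CreateMembers API expects."""
    return [
        {"AccountId": account_id, "EmailAddress": email}
        for account_id, email in sorted(accounts.items())
    ]

def enable_detective(accounts):
    """Create a behavior graph in the calling (master) account and
    invite the given member accounts. Requires AWS credentials."""
    import boto3  # imported here so the helper above stays testable offline

    detective = boto3.client("detective")
    # Create the behavior graph owned by this (master) account.
    graph_arn = detective.create_graph()["GraphArn"]
    # Invite member accounts whose telemetry should feed the graph.
    detective.create_members(
        GraphArn=graph_arn,
        Message="Please accept this Amazon Detective invitation.",
        Accounts=member_accounts(accounts),
    )
    return graph_arn

# Example (needs credentials in the master account):
# enable_detective({"111122223333": "security-team@example.com"})
```

A script like this can be dropped into an account-vending pipeline so Detective is enabled as part of standard account registration.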

Amazon Detective is a regional service. I activate the service in each AWS Region in which I want to analyze findings. All data is processed in the AWS Region where it is generated. Amazon Detective maintains data analytics and log summaries in the behavior graph for a 1-year rolling period from the date of log ingestion. This allows for visual analysis and deep dives over a large data set covering a long period of time. When I disable the service, all data is expunged to ensure none remains.

There are no additional charges or upfront commitments required to use Amazon Detective. We charge per GB of data ingested from AWS CloudTrail, Amazon Virtual Private Cloud Flow Logs, and Amazon GuardDuty findings. Amazon Detective offers a 30-day free trial. As usual, check the pricing page for the details.

Amazon Detective is available in these 14 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Canada (Central), and South America (São Paulo).

You can start to use it today.

— seb

Backblaze Pushes Past 1 Exabyte of Data Stored

Backblaze, a data protection and cloud storage company, has announced that it is storing more than 1 exabyte of customer data. That’s an achievement in itself, but with 125,000 hard drives under management, does this now justify some active data optimisation? An exabyte certainly sounds like a lot of data. There’s a handy visualisation blog post on the Backblaze website …

The post Backblaze Pushes Past 1 Exabyte of Data Stored appeared first on Architecting IT.

Vembu BDR Suite 4.1 for VMware and Hyper-V

Vembu BDR Suite 4.1 brings some new features, which we’ll talk about today. If you don’t know Vembu Backup and Disaster Recovery Suite, you can check one of our older articles or the full product review here: Vembu BDR Suite 4 Product Review. Vembu BDR Suite can back up virtual and physical environments running VMware vSphere, […]

Read the full post Vembu BDR Suite 4.1 for VMware and Hyper-V at ESX Virtualization.

Cumulative Update for vRealize Automation 7.6 Fails with Error related to Timezone

I was trying to apply patch version HF4 and the precheck was failing with these errors:

  • TimezoneMisMatch – Timezone on the node is ABC Time and the required timezone is XYZ Time
  • ManagementAgent – ManagementAgent is not running, last update was 8 hours ago

Seems like this is a known issue while applying the cumulative patch […]

PowerCLI: Upload to Content Library using PowerCLI

Summary:
Basically, I was looking to upload an ISO or OVF to a content library from my local system, without using the web client.  I couldn’t find an example that uploaded from a local system; every example had vCenter pull the file from somewhere else.

Details:
So I took VMware’s example and added “PUSH” functionality to upload from my local system to the content library.  Learned some interesting things in the process.  Mainly related to OVF uploads.  The content library service, or possibly something else, parses the OVF looking for the related files.  Then the Content Library is instructed to essentially wait for those other files to be uploaded before it closes the upload task.  Kinda interesting.

Anyway, the script is PowerShell Core based, so it is compatible across all platforms.

Links:

Kwampirs Targeted Attacks Involving Healthcare Sector, (Tue, Mar 31st)

There is no honor among thieves. Even after some ransomware gangs claimed to cease targeting the healthcare sector, attacks continue to happen. But ransomware isn’t alone. Last week, the FBI updated an advisory regarding the Kwampirs malware, pointing out the healthcare sector as one of its targets. Kwampirs isn’t picky in its targeting. It has been observed going after various sectors (financial, energy, software supply chain, and healthcare, among others). One differentiator of Kwampirs is its modular structure. After penetrating a particular target network, the malware will load appropriate modules based on the targets it encounters. In general terms, Kwampirs is a “Remote Admin Tool” (RAT). It provides access to the target and can be used to execute additional payloads of the attacker’s choosing.

The modular nature makes it difficult to enumerate the capabilities of the tool. Likely, addons are developed continuously as new capabilities are required to penetrate a particular network.

Kwampirs exhibits several behaviors that put it in the “Advanced Persistent Threat (APT)” category:

  • It is patient. Kwampirs does not launch fast “hit and run” attacks. Instead, it can infiltrate a network and communicate only daily, asking for updates. It took some networks three years to detect Kwampirs.
  • Kwampirs infiltrates software vendors and uses them to spread to customers. These supply chain attacks are well suited to target specific industries.
  • It does not have a clear financial motive, like stealing PII or payment card data. The malware has not been observed destroying or encrypting data for ransom.

Kwampirs will likely enter your network undetected as part of a software update from a trusted vendor. Anti-malware solutions will detect past versions. But do not put too much trust in anti-malware to detect the next version, which is likely tailored to your organization.

There are a few indicators that have been observed in the past, and it is certainly important to verify that your network is not already infected. See the prior FBI bulletins for more details and Yara signatures.

But of course, this behavior is going to change. For future versions of this (and other threats), it is useful to abstract these signatures:

  • Check for new services popping up in your network. Do not look just for specific names like “WmiApSrvEx”, but investigate any service that you haven’t seen before.
  • New processes. This is tricky and may be too noisy.
  • New files being added to system folders. Again, don’t focus on the specific names.
  • Kwampirs will also propagate through administrative shares. Deception techniques are an excellent option to catch this type of behavior.
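The first of these checks boils down to diffing a host's current service list against a recorded baseline. Here is a minimal Python sketch of that idea; how you collect the service list (sc query, WMI, an EDR inventory) is environment-specific, and the service names below are illustrative, not real indicators.

```python
# Sketch: flag services that were absent from a previously recorded
# baseline. The service names below are illustrative placeholders.

def new_services(baseline, current):
    """Return services present now that did not exist in the baseline."""
    return sorted(set(current) - set(baseline))

# Hypothetical inventories for one host:
baseline = ["Dhcp", "Dnscache", "EventLog", "Spooler"]
current = ["Dhcp", "Dnscache", "EventLog", "Spooler", "WmiApSrvEx"]

for svc in new_services(baseline, current):
    print(f"investigate new service: {svc}")
```

The point is to alert on *any* delta rather than matching known-bad names, which survives the malware being re-tailored.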

Of course, I always like network detection techniques to identify malicious behavior. For Kwampirs, this may be a bit tricky, but it depends on what exact version you encounter. Some versions apparently will connect to an IP address directly, skipping DNS. Outbound connections without a DNS lookup returning the target IP should be one of your standard signatures. In the past, Kwampirs used some odd domain names that may stick out. For example, it used the “tk” top-level domain, which has sadly become almost an indicator of compromise in itself. Declaring yourself authoritative for .tk and redirecting queries to a sensor is an excellent way of detecting these and many other exploits. I probably wouldn’t spend too much time looking for the specific hostnames listed in the FBI advisory. These hostnames tend to be very ephemeral, and they are not going to “last” very long. But a historical search of your DNS logs (did I mention Zeek?) may be appropriate.
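The "outbound connection without a preceding DNS lookup" signature amounts to correlating the IPs returned in DNS answers against the destinations in your connection logs. This is an offline Python sketch of that correlation (in practice you would run it over something like Zeek's dns.log and conn.log); the IP addresses are made-up examples, and the private-range check is deliberately simplified.

```python
# Sketch: flag outbound connections to IPs that never appeared in a DNS
# answer. In practice, correlate your DNS logs (e.g. Zeek dns.log) with
# your connection logs (e.g. Zeek conn.log); these records are examples.

# Simplified private-address check (172.16.0.0/12 actually spans 172.16-172.31).
PRIVATE_PREFIXES = ("10.", "192.168.", "172.16.")

def connections_without_dns(dns_answers, connections):
    """dns_answers: set of IPs seen in DNS responses.
    connections: iterable of outbound destination IPs.
    Returns external destinations that were never resolved via DNS."""
    return sorted(
        dst
        for dst in set(connections)
        if dst not in dns_answers and not dst.startswith(PRIVATE_PREFIXES)
    )

dns_answers = {"93.184.216.34"}  # IPs handed out by your resolvers
connections = ["93.184.216.34", "203.0.113.7", "10.0.0.5"]

for ip in connections_without_dns(dns_answers, connections):
    print(f"suspicious direct-to-IP connection: {ip}")
```

Internal destinations are filtered out because east-west traffic legitimately skips DNS far more often than traffic leaving the network.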

If you find anything interesting, please let us know. Refer to the FBI advisories I uploaded here for more detailed IOCs. 

[1] https://isc.sans.edu/diaryimages/Kwampirs_PIN_20200330-001.pdf
[2] https://isc.sans.edu/diaryimages/FLASH-CP-000111-MW_downgraded_version.pdf
[3] https://isc.sans.edu/diaryimages/FLASH-CP-000118-MW_downgraded_version.pdf


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

vSpeaking Podcast Ep 150: What’s New in vSphere 7

This month VMware announced vSphere 7, touting it as the biggest innovation since the launch of ESXi. This is a pretty significant release. So far the Virtually Speaking podcast has covered parts of the release in two previous episodes (vSphere with Kubernetes and vSphere Lifecycle Manager […]