Skyline Advisor Proactive Findings – January Edition

VMware Skyline releases new Proactive Findings every month. Findings are prioritized by trending issues in VMware Support, issues raised through Post Escalation review, security vulnerabilities, and issues raised by VMware engineering and customers. For the month of January, we released 28 new Findings. Of these, there are 14 Findings based on trending issues, 12 … Continued

The post Skyline Advisor Proactive Findings – January Edition appeared first on VMware Support Insider.

Malicious ISO Embedded in an HTML Page, (Fri, Jan 28th)

I spotted an interesting phishing email. As usual, the message was delivered with a malicious attachment, in this case a simple HTML page called “Order_Receipt.html” (SHA256:a0989ec9ad1b74c5e8dedca4a02dcbb06abdd86ec05d1712bfc560bf209e3b39) with a low VT score of 5/59[1]! Because it is a text file, it looks less suspicious. When the page is opened in the victim's browser, it displays a simple message and prompts the victim to download an ISO file:

New – Amazon EC2 X2iezn Instances Powered by the Fastest Intel Xeon Scalable CPU for Memory-Intensive Workloads

Electronic Design Automation (EDA) workloads require high computing performance and a large memory footprint. These workloads are sensitive to CPU performance and clock speed, since faster per-core performance allows more jobs to be completed on a smaller number of cores. At AWS re:Invent 2020, we launched Amazon EC2 M5zn instances, which use second-generation Intel Xeon Scalable (Cascade Lake) processors with an all-core turbo clock frequency of up to 4.5 GHz, the fastest of any cloud instance.

Our customers have enjoyed the high single-threaded performance, high-speed networking, and balanced memory-to-vCPU ratio of EC2 M5zn instances. They have asked for instances that will leverage these features while also providing them a greater memory footprint per vCPU.

Today, we are launching Amazon EC2 X2iezn instances, which use the same Intel Xeon Scalable processors as M5zn instances, with an all-core turbo clock frequency of 4.5 GHz, the fastest of any cloud instance, and up to 1.5 TiB of memory for EDA workloads. These instances are capable of delivering up to 55 percent better price-performance per vCPU compared to X1e instances.

X2iezn instances offer 32 GiB of memory per vCPU and support up to 48 vCPUs and 1536 GiB of memory. Built on the AWS Nitro System, they deliver up to 100 Gbps of networking bandwidth and 19 Gbps of dedicated Amazon EBS bandwidth to improve performance for EDA applications.

You might have noticed the suffixes in the instance type: “i” indicates that the instances use an Intel processor, “e” in the memory-optimized instance family indicates extended memory, “z” indicates high-frequency processors, and “n” indicates higher network bandwidth of up to 100 Gbps.

X2iezn instances are VPC-only, HVM-only, and EBS-optimized, with support for Optimize CPUs. As you can see below, the memory-to-vCPU ratio on these instances is comparable to that of previous-generation X1e instances:

Instance Name    vCPUs  RAM (GiB)  Network Bandwidth (Gbps)  EBS-Optimized Bandwidth (Gbps)
x2iezn.2xlarge       8        256  Up to 25                  3.17
x2iezn.4xlarge      16        512  Up to 25                  4.75
x2iezn.6xlarge      24        768  50                        9.5
x2iezn.8xlarge      32       1024  75                        12
x2iezn.12xlarge     48       1536  100                       19
x2iezn.metal        48       1536  100                       19

Many customers will be able to benefit from using X2iezn instances to improve performance and efficiency for their EDA workloads. Here are some examples:

  • Annapurna Labs tested the X2iezn instances with Calibre’s Design Rule Checking, which showed a 40 percent faster runtime compared to X1e instances and a 25 percent faster runtime compared to R5d instances.
  • Astera Labs is a fabless, cloud-based semiconductor company developing purpose-built CXL, PCIe, and Ethernet connectivity solutions for data-centric systems. They were able to see performance gains of up to 25 percent compared to similar EDA workloads running on R5 instances.
  • Cadence tested the X2iezn instances using their Pegasus True Cloud feature, which allows designers to run physical verification jobs in the cloud, and observed a 50 percent performance improvement over R5 instances. They see X2iezn instances as an excellent environment for testing EDA workloads.
  • NXP Semiconductors worked with AWS to run their Calibre and Spectre workloads on Amazon EC2 X2iezn instances and measured 10-15 percent higher performance on X2iezn instances compared to their on-premises Xeon Gold 6254 servers, which have a max turbo frequency of 4.0 GHz.
  • Siemens EDA worked with AWS to test the new HPC/EDA-focused Amazon EC2 X2iezn instances with Calibre, the industry leader in performance and sign-off, evaluating advanced-node DRC workloads. They demonstrated performance improvements of up to 14 percent when using the 4.5 GHz all-core turbo frequency of X2iezn instances for all VMs in the run. They also demonstrated a heterogeneous server configuration that uses X2iezn as the primary node and lower-memory VMs for remote compute, providing an 11 percent speedup and attractive value. These results confirmed that X2iezn is a good fit for primary-server EDA workloads for Calibre physical and circuit verification applications.
  • Synopsys IC Validator provides highly scalable, high-performance physical verification signoff. Using IC Validator’s elastic CPU management technology on X2iezn instances, Synopsys achieved a 15 percent performance improvement, scalability to thousands of cores, and 30 percent better efficiency compared to R5d instances.

Things to Know
Here are some fun facts about the X2iezn instances:

Optimizing CPU—You can disable Intel Hyper-Threading Technology for workloads that perform well with single-threaded CPUs, like some HPC applications.
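
For example, here is a minimal sketch of launching an instance with one thread per core (the AMI, key pair, and subnet IDs are placeholders; on x2iezn.6xlarge, 24 vCPUs correspond to 12 physical cores):

# Launch an x2iezn.6xlarge with Hyper-Threading disabled (one thread per core).
aws ec2 run-instances \
    --instance-type x2iezn.6xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key \
    --subnet-id subnet-0123456789abcdef0 \
    --cpu-options "CoreCount=12,ThreadsPerCore=1"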

NUMA—You can make use of non-uniform memory access (NUMA) on x2iezn.12xlarge instances. This advanced feature is worth exploring if you have a deep understanding of your application’s memory access patterns.
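
As a sketch, on a Linux instance you could inspect the topology and pin a job to a single NUMA node with numactl ("eda_job" is a placeholder for your application):

# Show the NUMA nodes, their CPUs, and their memory.
numactl --hardware

# Run a job with its CPUs and memory bound to NUMA node 0.
numactl --cpunodebind=0 --membind=0 ./eda_job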

Available Now
Amazon EC2 X2iezn instances are now available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. You can use On-Demand Instances, Reserved Instances, Savings Plans, and Spot Instances. Dedicated Instances and Dedicated Hosts are also available.

To learn more, visit our EC2 X2i Instances page, and please send feedback to the AWS forum for EC2 or through your usual AWS Support contacts.

Channy

Over 20 thousand servers have their iLO interfaces exposed to the internet, many with outdated and vulnerable versions of FW, (Wed, Jan 26th)

Integrated Lights-Out (iLO) is a low-level server management system intended for out-of-band configuration, which Hewlett Packard Enterprise embeds in some of its servers[1]. Besides its use for maintenance, administrators often rely on it for emergency access to a server when everything "above it" (the hypervisor or OS) fails or is unreachable. Since these kinds of platforms/interfaces are quite sensitive from a security standpoint, access to them should always be limited to the relevant administrator groups, and their firmware should always be kept up to date.

New – Replication for Amazon Elastic File System (EFS)

Amazon Elastic File System (Amazon EFS) allows EC2 instances, AWS Lambda functions, and containers to share access to a fully-managed file system. First announced in 2015 and generally available in 2016, Amazon EFS delivers low-latency performance for a wide variety of workloads and can scale to thousands of concurrent clients or connections. Since the 2016 launch we have continued to listen and to innovate, and have added many new features and capabilities in response to your feedback. These include on-premises access via Direct Connect (2016), encryption of data at rest (2017), provisioned throughput and encryption of data in transit (2018), an infrequent access storage class (2019), IAM authorization & access points (2020), lower-cost one zone storage classes (2021), and more.

Introducing Replication
Today I am happy to announce that you can now use replication to automatically maintain copies of your EFS file systems for business continuity or to help you to meet compliance requirements as part of your disaster recovery strategy. You can set this up in minutes for new or existing EFS file systems, with replication either within a single AWS region or between two AWS regions in the same AWS partition.

Once configured, replication begins immediately. All replication traffic stays on the AWS global backbone, and most changes are replicated within a minute, with an overall Recovery Point Objective (RPO) of 15 minutes for most file systems. Replication does not consume any burst credits and it does not count against the provisioned throughput of the file system.

Configuring Replication
To configure replication, I open the Amazon EFS Console, view the file system that I want to replicate, and select the Replication tab:

I click Create replication, choose the desired destination region, and select the desired storage (Regional or One Zone). I can use the default KMS key for encryption or I can choose another one. I review my settings and click Create replication to proceed:

Replication begins right away and I can see the new, read-only file system immediately:

A new CloudWatch metric, TimeSinceLastSync, is published when the initial replication is complete, and periodically after that:
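
The metric can also be read from the command line. Here is a rough sketch, assuming the metric is published in the AWS/EFS namespace with a FileSystemId dimension (the file system ID and time window are placeholders):

aws cloudwatch get-metric-statistics \
    --namespace AWS/EFS \
    --metric-name TimeSinceLastSync \
    --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
    --statistics Maximum \
    --period 300 \
    --start-time 2022-01-27T00:00:00Z \
    --end-time 2022-01-27T01:00:00Z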

The replica is created in the selected region. I create any necessary mount targets and mount the replica on an EC2 instance:

EFS tracks modifications to the blocks (currently 4 MB) that are used to store files and metadata, and replicates the changes at a rate of up to 300 MB per second. Because replication is block-based, it is not crash-consistent; if you need crash-consistency you may want to take a look at AWS Backup.

After I have set up replication, I can change the lifecycle management, intelligent tiering, throughput mode, and automatic backup setting for the destination file system. The performance mode is chosen when the file system is created, and cannot be changed.

Initiating a Fail-Over
If I need to fail over to the replica, I simply delete the replication. I can do this from either side (source or destination), by clicking Delete and confirming my intent:

I enter delete, and click Delete replication to proceed:

The former read-only replica is now a writable file system that I can use as part of my recovery process. To fail back, I create a replica in the original location, wait for replication to finish, and then delete the replication.

I can also use the command line and the EFS APIs to manage replication. For example:

create-replication-configuration / CreateReplicationConfiguration – Establish replication for an existing file system.

describe-replication-configurations / DescribeReplicationConfigurations – See the replication configuration for a source or destination file system, or for all replication configurations in an AWS account. The data returned for a destination file system also includes LastReplicatedTimestamp, the time of the last successful sync.

delete-replication-configuration / DeleteReplicationConfiguration – End replication for a file system.
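
As a minimal sketch of these commands (the file system ID and destination Region are placeholders):

# Start replicating an existing file system to a Regional destination in us-west-2.
aws efs create-replication-configuration \
    --source-file-system-id fs-0123456789abcdef0 \
    --destinations "Region=us-west-2"

# Check the replication configuration, including LastReplicatedTimestamp on the destination.
aws efs describe-replication-configurations \
    --file-system-id fs-0123456789abcdef0

# End replication (fail over), which makes the replica writable.
aws efs delete-replication-configuration \
    --source-file-system-id fs-0123456789abcdef0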

Available Now
This new feature is available now and you can start using it today in the AWS US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), and GovCloud Regions.

You pay the usual storage fees for the original and replica file systems and any applicable cross-region or intra-region data transfer charges.

Jeff;

Mixed VBA & Excel4 Macro In a Targeted Excel Sheet, (Sat, Jan 22nd)

Yesterday, Nick, one of our readers, shared with us a very interesting Excel sheet and asked us to check if it was malicious. Guess what? Of course it was, and he agreed to be mentioned in a diary. Thanks to him! This time, we also have the context and how the file was used. It was delivered to the victim, and this person was called beforehand to make them more comfortable with the file. A perfect example of a social engineering attack. The Excel sheet contains details of a real-estate project. It is called "Penthouse_8271.xls" and, once opened, you see this:

Amazon GuardDuty Enhances Detection of EC2 Instance Credential Exfiltration

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon Simple Storage Service (Amazon S3). Informed by a multitude of public and AWS-generated data feeds and powered by machine learning, GuardDuty analyzes billions of events in pursuit of trends, patterns, and anomalies that are recognizable signs that something is amiss. You can enable it with a click and see the first findings within minutes.

Today, we are adding to GuardDuty the ability to detect when your Amazon Elastic Compute Cloud (Amazon EC2) instance credentials are being used from another AWS Account. EC2 instance credentials are the temporary credentials made available through the EC2 metadata service to any applications running on an instance, when an AWS Identity and Access Management (IAM) role is attached to it.

What Are the Risks?
When your workloads deployed on EC2 instances access AWS services, they use an access key, a secret access key, and a session token. The secure mechanism to pass access key credentials to your workloads is to define the permissions required by your workload, create one or several IAM policies with the permissions, attach the policies to an IAM role and, finally, attach the role to the instance.
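
As an illustrative sketch of that flow (the role, profile, policy, and instance IDs below are placeholders, and trust-policy.json is assumed to allow ec2.amazonaws.com to assume the role):

# Create the role and give it the permissions the workload needs (read-only S3 here).
aws iam create-role --role-name app-role \
    --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name app-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Wrap the role in an instance profile and attach it to the instance.
aws iam create-instance-profile --instance-profile-name app-profile
aws iam add-role-to-instance-profile --instance-profile-name app-profile --role-name app-role
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=app-profile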

Any process running on an EC2 instance with a role attached can retrieve the security credentials by calling the EC2 metadata service:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/role_name
{
  "Code" : "Success",
  "LastUpdated" : "2021-09-05T18:24:45Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AS...J5",
  "SecretAccessKey" : "r1...9m",
  "Token" : "IQ...z5Q==",
  "Expiration" : "2021-09-06T00:44:06Z"
}

These credentials are limited in time and in scope. They are valid for a maximum of six hours. They are limited to the scope of the permissions attached to the IAM role associated with the EC2 instance.

All AWS SDKs are able to retrieve and renew such credentials automatically. No additional code is necessary in your application.

Now imagine that your application running on the EC2 instance is compromised and a malicious actor manages to access the instance’s metadata service. The malicious actor would extract the credentials. These credentials have the permissions you defined in the IAM role attached to the instance. Depending on your application, attackers might be able to exfiltrate data from S3 or DynamoDB, to start or terminate EC2 instances, or even to create new IAM users or roles.

Since the launch of GuardDuty, it has detected when such credentials are used from IP addresses outside of AWS. Smart attackers might therefore use the stolen credentials from another AWS account in order to stay out of GuardDuty's sight. Starting today, GuardDuty also detects when the credentials are used from other AWS accounts, inside the AWS network.

What Alerts Are Generated?
There are legitimate reasons why the source IP address communicating with AWS service APIs might be different from the EC2 instance IP address. Think about complex network topologies that route traffic to one or multiple VPCs, with AWS Transit Gateway or AWS Direct Connect, for example. In addition, multi-Region configurations, or not using AWS Organizations, make it non-trivial to determine whether the AWS account using the credentials belongs to you or not. Large companies have implemented their own solutions to detect such security compromises, but these types of solutions are not easy to build and maintain. Only a handful of organizations have the resources required to tackle this challenge. When they do so, they distract their engineering efforts from their core business. This is why we decided to address this.

Starting today, GuardDuty generates alerts when it detects a misuse of EC2 instance credentials. When the credentials are used from an affiliated account, the alert is labeled as medium-severity. Otherwise, a high-severity alert is generated. Affiliated accounts are accounts monitored by the same GuardDuty administrator account, also known as GuardDuty member accounts. They might be part of your organization or not.

In Practice
To see how it works, let's capture and exfiltrate a set of EC2 credentials from one of my EC2 instances. I use SSH to connect to one of my instances, and I use curl to retrieve the credentials, as shown earlier:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/role_name
{
  "Code" : "Success",
  "LastUpdated" : "2021-09-05T18:24:45Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AS...J5",
  "SecretAccessKey" : "r1...9m",
  "Token" : "IQ...z5Q==",
  "Expiration" : "2021-09-06T00:44:06Z"
}

The instance has an IAM role with permissions that allow it to read S3 buckets in this AWS account. I copy and paste the credentials. Then I connect to another EC2 instance running in a different AWS account, not affiliated with the same GuardDuty administrator account. I use SSH to connect to that other instance, and then I configure the AWS CLI with the compromised credentials. I attempt to access a private S3 bucket.


# first verify I do not have access 
[ec2-user@ip-1-1-0-79 ~]$ aws s3 ls s3://my-private-bucket

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

# then I configure the CLI using the compromised credentials
[ec2-user@ip-1-1-0-79 ~]$ aws configure
AWS Access Key ID [None]: AS...J5
AWS Secret Access Key [None]: r1...9m
Default region name [None]: us-east-1
Default output format [None]:

[ec2-user@ip-1-1-0-79 ~]$ aws configure set aws_session_token IQ...z5Q==

# Finally, I attempt to access S3 again
[ec2-user@ip-1-1-0-79 ~]$ aws s3 ls s3://my-private-bucket
                     PRE folder1/
                     PRE folder2/
                     PRE folder3/
2021-01-22 16:37:48 6148 .DS_Store

Shortly after, I use the AWS Management Console to access GuardDuty in the AWS account where I stole the credentials. I can verify a high-severity alert was generated.

GuardDuty EC2 credentials exfiltration alarm
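
The finding can also be retrieved with the CLI. Here is a sketch (the detector ID is a placeholder, and the finding type shown is the one I would expect for in-AWS credential exfiltration):

# List findings that match the credential-exfiltration finding type.
aws guardduty list-findings \
    --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
    --finding-criteria '{"Criterion":{"type":{"Equals":["UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS"]}}}'

# Fetch the details of a finding returned by the previous call.
aws guardduty get-findings \
    --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
    --finding-ids <finding-id>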

And So What?
Attackers may extract credentials when they have remote code execution (RCE), local presence on the instance, or by exploiting application-level vulnerabilities like Server Side Request Forgery (SSRF) and XML External Entity (XXE) injection. There are multiple methods to mitigate RCE or local access, including rebuilding the instances from a secured and patched AMI to eliminate remote access, rotating access credentials, and so on. When the vulnerability is at the application level, you or the application vendor are required to patch the application code to eliminate the vulnerability.

When you receive an alert indicating a risk of compromised credentials, the first thing to do is to verify the account ID. Is it one of your company accounts or not? During the analysis, when the business case allows, you may terminate the compromised instances or shut down the application. This prevents the attacker from extracting renewed instance credentials upon expiration. When in doubt, contact the AWS Trust & Safety team using the Report Amazon AWS abuse form or by contacting abuse@amazonaws.com. Provide all the necessary information, including the suspicious AWS account ID, logs in plaintext, and so on, when you submit your request.

Availability
This new ability is available in all AWS Regions at no additional cost. It is enabled by default when GuardDuty is already enabled on your AWS account.

Otherwise, enable GuardDuty now, and start the 30-day trial period.

— seb

When PowerShellGet v1 fails to install the NuGet Provider

Recently, a number of users have encountered a particular bug with PowerShellGet 1.0.0.1 in Windows PowerShell. This bug occurs when you try to use a PowerShellGet cmdlet that is dependent on PackageManagement, including cmdlets such as Find-Module, Install-Module, Save-Module, etc. Running any of these cmdlets will prompt you to install the NuGet provider that both PowerShellGet and PackageManagement depend on. The prompt offers two ways to install the provider: you can enter 'Y' to have PowerShellGet automatically install the provider, or you can run 'Install-PackageProvider' yourself. Both of these options fail.

Unable to find package provider

But, again, even when attempting to explicitly install the package provider, the command fails.
Unable to install package provider

The earliest version of PackageManagement (version 1.0.0.1) did not ship with the NuGet provider, so any use of PowerShellGet also required that the NuGet provider be bootstrapped or explicitly installed. Understandably, it can be a source of deep frustration when the tool you use to install packages is dependent on a package that it cannot install.

The underlying issue here is that the remote endpoint used to bootstrap the provider requires TLS 1.2 and the client may not have it enabled. The two options below should help you resolve any issues encountered when attempting to install the NuGet provider and get back up and running again with PowerShellGet!

How to find the versions you’re using

You can find out what version of PowerShellGet and PackageManagement you’re using by running:

Get-Module PowerShellGet, PackageManagement -ListAvailable

The output will be ordered by priority, so if multiple paths are displayed, the first path listed will be the one that gets referenced during an import.

If the version of PackageManagement you’re using is 1.0.0.1 then this issue will likely apply to you.

Option 1: Change your TLS version to 1.2

The easiest thing to do here is to update the TLS version on your machine. It’s highly recommended to use this option, but if necessary you can manually install PackageManagement as outlined under Option 2.

If you only want to update the current PowerShell session you can run:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

After running this command, you can run

[Net.ServicePointManager]::SecurityProtocol

to validate that TLS 1.2 is being used.
Once TLS 1.2 is enabled, you can successfully run the original command.

If you prefer to update your client so that you don’t need to run the command above in every PowerShell session, you can follow the instructions laid out here.
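
One commonly documented approach (a sketch; it assumes 64-bit Windows with .NET Framework 4.x and must be run from an elevated prompt) is to enable strong cryptography for the .NET Framework in the registry:

# Make .NET Framework applications (including Windows PowerShell) default to strong crypto / TLS 1.2.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord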

Option 2: Manually update PackageManagement

You can also update PackageManagement to a version that ships with the NuGet provider, that is, PackageManagement 1.1.0.0 or later.

Simply go to the PackageManagement package page on the PowerShell Gallery and under “Installation Options”, click on the “Manual Download” tab and then “Download the raw nupkg file”.

Manual Download Options

You can then go to your downloads folder and unzip the .nupkg.

You can then move the folder into your modules path. To find out what this specific path is, follow the steps specified in “How to find the versions you’re using” above. Use the first path listed.

Get-Module PackageManagement

Create a folder under the PackageManagement directory listed here. This new folder should have the same name as the PackageManagement version that you downloaded. For example, in the case above, under "C:\Program Files\WindowsPowerShell\Modules\PackageManagement" you would create a directory named "1.1.0.0". You can then place the contents of the unzipped nupkg into this newly created version directory.
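
As a rough PowerShell sketch of those steps (run from an elevated prompt; it assumes you downloaded packagemanagement.1.1.0.0.nupkg to your Downloads folder and that your modules path is the default one shown above):

$version = '1.1.0.0'
$nupkg   = "$env:USERPROFILE\Downloads\packagemanagement.$version.nupkg"
$dest    = "C:\Program Files\WindowsPowerShell\Modules\PackageManagement\$version"

# A .nupkg is just a zip archive; give it a .zip extension so Expand-Archive accepts it.
Copy-Item $nupkg "$env:TEMP\PackageManagement.zip"
New-Item -ItemType Directory -Path $dest -Force | Out-Null
Expand-Archive -Path "$env:TEMP\PackageManagement.zip" -DestinationPath $dest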

After doing this, start a fresh session of PowerShell or run:

Import-Module PackageManagement -Force

You’re good to go

After completing either option 1 or option 2, you should find your issue resolved. If not, feel free to reach out via GitHub or Twitter.

The post When PowerShellGet v1 fails to install the NuGet Provider appeared first on PowerShell Team.

RedLine Stealer Delivered Through FTP, (Thu, Jan 20th)

Here is a piece of malicious Python script that injects a RedLine[1] stealer into its own process. Process injection has been a common attacker technique for a long time. The difference, in this case, is that the payload is delivered through FTP! This is pretty unusual because FTP is used less and less today, for multiple reasons: no encryption by default, and it is complex to filter because of its passive/active modes. Support for FTP has even been disabled by default in Chrome starting with version 95! But FTP remains a common protocol in the IoT/Linux landscape, with malware families like Mirai. My honeypots still collect a lot of Mirai samples from FTP servers. I don't understand why the attacker chose this protocol because, in most corporate environments, FTP is not allowed by default (and should definitely not be!).

Phishing e-mail with…an advertisement?, (Tue, Jan 18th)

Authors of phishing and malspam messages like to use various techniques to make their creations appear as legitimate as possible in the eyes of the recipients. To this end, they often try to make their messages look like reports generated by security tools[1], responses to previous communication initiated by the recipient[2], or instructions from someone at the recipient's organization[3], just to name a few. Most such techniques have been with us for a long time; however, last week I came across one that I don't believe I've ever seen before: the inclusion of what may be thought of as an advertisement in the body of the message.