Deobfuscating Scripts: When Encodings Help, (Sun, Apr 30th)

This post was originally published on this site

I found this sample on MalwareBazaar, tagged as unknown.

Taking a look with my tool file-magic.py:

It's UTF16 LE text. This is confirmed when taking a look at the malware file inside the ZIP container with zipdump.py:

Notice the FFFE BOM.

zipdump.py can convert utf16 text to utf8 text with option translate (-t utf16):

A search for the name in the first comment already gives me an indication of what this might be.

Taking a look at the encoded strings at the end of the file (grep var) with base64dump.py gives me this:

split and join: the strings are split according to a given separator, and then joined together again. The result is that the "separator" has been removed.

The "separators" are the last 2 strings in the red box above.
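To illustrate the split/join trick in isolation, here is a minimal Python sketch of the same idea (the sample itself is JavaScript; the separator and payload below are invented placeholders, not taken from this sample):

# The obfuscator scatters a known "separator" string through the real payload;
# at runtime the script removes it again by splitting on the separator and
# joining the pieces back together.
separator = "@@JUNK@@"                       # placeholder, not the sample's separator
obfuscated = "WScr@@JUNK@@ipt.Sh@@JUNK@@ell"

deobfuscated = "".join(obfuscated.split(separator))
print(deobfuscated)                          # WScript.Shell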

The \u notation (like \u2193) is a Unicode code point notation supported in JavaScript: that string represents different arrow symbols, which are inserted into the script for obfuscation. I could search and replace these arrows, but there's an easier method. Since these are non-ANSI characters, I can just convert the UTF-16 text to latin (ANSI), like this (i=utf16 means the input is utf16, o=latin means the output should be latin).

As can be seen, this throws an error, because the latin-1 codec cannot handle these arrow characters. The trick is to let the latin-1 codec ignore (drop) these characters: latin:ignore. Like this:

And now all the arrow symbols are gone. I'm left with another obfuscation string that should be removed (!…!). Since this is an ANSI string, I can just remove it with a search and replace using sed:

This is a RAT written in JavaScript, probably a variant of hworm/wshrat:

The C2 can be observed in the configuration part of the code:

When an ANSI script is obfuscated with non-ANSI characters (in UTF-16), one can do a (partial) deobfuscation by converting the script back to ANSI and throwing away all non-ANSI characters.
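As a rough illustration of that generic trick in plain Python (the file names and the obfuscation marker are placeholders; Didier's own tools wrap the same codec behaviour):

# Decode the UTF-16 script (the FFFE BOM selects little-endian automatically),
# then keep only the characters that survive a Latin-1 (ANSI) round trip,
# dropping the Unicode arrow symbols and similar non-ANSI decoration.
with open("obfuscated.js", "rb") as f:
    text = f.read().decode("utf-16")

ansi_only = text.encode("latin-1", errors="ignore").decode("latin-1")

# Remaining ANSI-only junk (like the !...! marker) can then be removed with a
# plain string replace, equivalent to the sed step above.
cleaned = ansi_only.replace("!marker!", "")   # placeholder marker string

with open("deobfuscated.js", "w", encoding="latin-1") as f:
    f.write(cleaned)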

 

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Introducing Athena Provisioned Capacity

This post was originally published on this site

Today we launch the ability to provision capacity to run your Amazon Athena queries.

Athena is a query service that makes it simple to analyze data in Amazon Simple Storage Service (Amazon S3) data lakes and 30 different data sources, including on-premises data sources or other cloud systems, using standard SQL queries. Athena is serverless, so there is no infrastructure to manage, and–until today–you pay only for the queries that you run. Starting today, you can get dedicated capacity for your queries and use new workload management features to prioritize, control, and scale your most important queries, paying only for the capacity you provision.

At AWS, 90 percent of the new services and features are driven by your direct feedback. Many Athena customers told us that, when running a large volume of queries, you sometimes experience queuing, which might slow down some applications or business processes. To work around this, you typically create a query prioritization mechanism to prioritize mission-critical queries over less critical, interactive, or exploratory queries. This prioritization mechanism helps get the highest-priority queries to run first, at the price of building and maintaining code or business processes outside of Athena itself. You also told us it is difficult to forecast your Athena costs. Athena charges by the volume of data scanned, which is often difficult to predict as it depends on the size of your data set, the construction of the user queries, and the storage format for the data.

We heard this feedback, and today, we introduce the capability to provision dedicated query processing capacity at scale. With provisioned capacity, you provision a dedicated set of compute resources to run your queries. This always-on capacity can serve your business-critical queries with near-zero latency and no queuing. It gives you control over workload performance characteristics such as cost, concurrency, and query prioritization. Similar to provisioned capacity for other AWS services, you pay only for the capacity provisioned, not for the actual usage. With provisioned capacity, your Athena bills are predictable, and you do not have to limit user queries to stay within your monthly budget. I’ll share more about the billing model down below.

Behind the scenes, Athena maintains a large pool of compute in each AWS Region where it operates. You can think of this as one large pool of compute, divided logically across customers. When you reserve capacity in Athena, the capacity is held for your exclusive use. You can choose which queries run on the capacity you provisioned and which run on Athena’s multi-tenant, on-demand capacity. Multiple queries can share the capacity you provisioned. You may add additional capacity units at any time, based on your evolving business requirements. You may also adjust the provisioned capacity down after a minimum period of 8 hours.

The unit of capacity is a Data Processing Unit (DPU). A single DPU is equivalent to four vCPUs and 16 GB of RAM. The minimum capacity you may provision is 24 DPUs for 8 hours. This new provisioned capacity for Athena is ideal for those of you running any volume of queries, but the sweet spot to start using provisioned capacity is when you spend $100 or more per month on Athena.

The number of DPUs you need depends on your goals and analysis patterns. For example, if you need queries to start immediately and without queuing, you should provision enough DPUs to meet your peak concurrent query demand. Provisioning fewer DPUs than your peak demand is allowed, but may result in queuing. When this occurs, queries are held in a queue and executed when capacity is available. If your goal is to run queries within a fixed budget, you can use the AWS Pricing Calculator to determine the number of DPUs that meets your budget. Lastly, remember that data size, storage format, and query construction influence the number of DPUs a query requires. You can increase query performance by compressing, partitioning, and converting your data into columnar formats. Athena’s documentation provides you with guidelines to determine how much capacity you might require to run multiple queries at the same time.

How Does It Work?
Getting started is a three-step process. I navigate to the Athena page in the AWS Management Console and select Capacity Reservations on the left-side navigation menu.
(The console you see in this demo is based on the new Cloudscape open-source design system; you might still see the traditional design in your AWS account.)

Athena Capacity Reservation landing page in the console

I select the Create capacity reservation button at the top right of the page.

On the Create capacity reservation page, I enter a Capacity reservation name and the number of DPUs I want to provision.

Athena Capacity Reservation - Create Reservation

I select Review to review my choices, and I select Create capacity reservation to create my reservation. After a brief period of time, the capacity reservation status becomes ✅ Active.

Athena Capacity Reservation - Status

The third and last step is to create a workgroup and assign the workgroup to the provisioned capacity. A workgroup is an Athena mechanism allowing you to separate users, teams, applications, or workloads to set limits on the amount of data each query or the entire workgroup can process and to track costs.

Queries belonging to the assigned workgroup will run on the capacity you provisioned. Capacity may be shared with multiple workgroups as long as they all use the same Athena engine version. This concept, depicted in the diagram below, is surfaced through a capacity allocation policy, which defines how capacity is assigned over workgroups. This gives you the flexibility to run queries with more or less capacity, depending on your business needs.

Athena Capacity Reservation - shared workgroups

To create a workgroup, I navigate to the Workgroups section of the Athena page. Then, I select Create workgroup.

Athena Capacity Reservation - Create Workgroup

I make sure the analytics engine selected in the reservation matches the one in the workgroup.

Athena Capacity Reservation - Select analytics engine

Then, I go back to the capacity reservation I just created, and I select Add workgroups to add the workgroup I just created.

Athena Capacity Reservation - Add workgroup

That’s it! Now that the configuration is ready, I can run my queries. Existing queries will run unmodified on the provisioned capacity. I make sure to select the workgroup I just created when I run queries. I choose a workgroup on the top right side of the query editor, or use the --work-group argument on the AWS command line, such as:

aws athena start-query-execution --work-group AWSNewsBlog

Athena Capacity Reservation - Select workgroup
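For scripted setups, the same steps can be driven through the Athena API. The following boto3 sketch is a hedged example: the reservation name, workgroup name, DPU count, and output bucket are placeholders, and the calls reflect my reading of the capacity reservation API (CreateCapacityReservation, PutCapacityAssignmentConfiguration, StartQueryExecution) rather than an official walkthrough:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# 1. Provision dedicated capacity (24 DPUs is the minimum).
athena.create_capacity_reservation(Name="newsblog-reservation", TargetDpus=24)

# 2. Route an existing workgroup's queries onto that capacity.
athena.put_capacity_assignment_configuration(
    CapacityReservationName="newsblog-reservation",
    CapacityAssignments=[{"WorkGroupNames": ["AWSNewsBlog"]}],
)

# 3. Queries started in that workgroup now run on the provisioned capacity.
athena.start_query_execution(
    QueryString="SELECT 1",
    WorkGroup="AWSNewsBlog",
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)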

Availability and Pricing
As I explained in the introduction, we charge for the number of DPUs you provisioned and the duration. The minimum duration is 8 hours, and after that, billing is per minute. You can release the provisioned capacity at any time. Cancellations within the minimum duration period are billed for the full term, and capacity is deallocated as soon as all currently running queries are terminated.

Queries run from a workgroup assigned to a provisioned capacity are not billed for the amount of data scanned. You effectively pay a flat rate depending on the provisioned capacity, not the usage. If you have excess capacity, you can reduce the number of DPUs you provisioned or add workgroups to consume the excess capacity.

As usual, the Athena pricing page has all the details.

Athena provisioned capacity is available today in US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Singapore, Sydney, Tokyo), and Europe (Ireland, Stockholm) AWS Regions.

Go and provision your Athena capacity today!

— seb

Quick IOC Scan With Docker, (Fri, Apr 28th)

This post was originally published on this site

When investigating an incident, you must perform initial tasks quickly. There is one tool in my arsenal that I use to quickly scan for interesting IOCs ("Indicators of Compromise"). This tool is called Loki[1], the free version of the Thor scanner. I like this tool because you can scan a computer (processes & files) or a specific directory (files only) for suspicious content. The tool has many interesting YARA rules, but you can always add your own to increase the detection capabilities.

Loki is delivered as a package with an executable for the Windows environment, but it is developed in Python. Therefore, why not create a Docker image ready to scan your pieces of evidence?

Here is a simple Dockerfile to build a container:

FROM ubuntu:latest
RUN apt update
RUN apt -y install git
RUN apt -y install python3-pip libssl-dev
WORKDIR /opt
RUN git clone https://github.com/Neo23x0/Loki.git
WORKDIR /opt/Loki
RUN chmod a+x loki.py
RUN pip install -r requirements.txt
RUN ln -s /usr/bin/python3 /usr/bin/python
ENTRYPOINT [ "/usr/bin/python", "/opt/Loki/loki.py" ]
CMD ["--help"]

After building the image (for example with docker build -t loki .), you can scan any directory:

remnux@remnux:/MalwareZoo/Evidences$ docker run --rm -it -v $(pwd):/evidences loki -p /evidences --noprocscan

Just give no arguments to get some help:

remnux@remnux:/MalwareZoo/Evidences$ docker run --rm -it loki
usage: loki.py [-h] [-p path] [-s kilobyte] [-l log-file] [-r remote-loghost] [-t remote-syslog-port] [-a alert-level] [-w warning-level] [-n notice-level] [--allhds] [--alldrives]
               [--printall] [--allreasons] [--noprocscan] [--nofilescan] [--vulnchecks] [--nolevcheck] [--scriptanalysis] [--rootkit] [--noindicator] [--dontwait] [--intense] [--csv]
               [--onlyrelevant] [--nolog] [--update] [--debug] [--maxworkingset MAXWORKINGSET] [--syslogtcp] [--logfolder log-folder] [--nopesieve] [--pesieveshellc] [--python PYTHON]
               [--nolisten] [--excludeprocess EXCLUDEPROCESS] [--force] [--version]

Loki - Simple IOC Scanner

options:
  -h, --help            show this help message and exit
  -p path               Path to scan
  -s kilobyte           Maximum file size to check in KB (default 5000 KB)
  -l log-file           Log file
  -r remote-loghost     Remote syslog system
  -t remote-syslog-port
                        Remote syslog port
  -a alert-level        Alert score
  -w warning-level      Warning score
  -n notice-level       Notice score
  --allhds              Scan all local hard drives (Windows only)
  --alldrives           Scan all drives (including network drives and removable media)
  --printall            Print all files that are scanned
  --allreasons          Print all reasons that caused the score
  --noprocscan          Skip the process scan
  --nofilescan          Skip the file scan
  --vulnchecks          Run the vulnerability checks
  --nolevcheck          Skip the Levenshtein distance check
  --scriptanalysis      Statistical analysis for scripts to detect obfuscated code (beta)
  --rootkit             Skip the rootkit check
  --noindicator         Do not show a progress indicator
  --dontwait            Do not wait on exit
  --intense             Intense scan mode (also scan unknown file types and all extensions)
  --csv                 Write CSV log format to STDOUT (machine processing)
  --onlyrelevant        Only print warnings or alerts
  --nolog               Don't write a local log file
  --update              Update the signatures from the "signature-base" sub repository
  --debug               Debug output
  --maxworkingset MAXWORKINGSET
                        Maximum working set size of processes to scan (in MB, default 100 MB)
  --syslogtcp           Use TCP instead of UDP for syslog logging
  --logfolder log-folder
                        Folder to use for logging when log file is not specified
  --nopesieve           Do not perform pe-sieve scans
  --pesieveshellc       Perform pe-sieve shellcode scan
  --python PYTHON       Override default python path
  --nolisten            Dot not show listening connections
  --excludeprocess EXCLUDEPROCESS
                        Specify an executable name to exclude from scans, can be used multiple times
  --force               Force the scan on a certain folder (even if excluded with hard exclude in LOKI's code)
  --version             Shows welcome text and version of loki, then exit

Because we run Ubuntu in the container, you can, of course, mount disk images from loop devices directly in the container and scan them:

remnux@remnux:/MalwareZoo/Evidences$ docker run --rm -it --privileged --entrypoint bash loki
root@d0256e7ad441:/opt/Loki# mount -o ro,loop,offset=1048576 /dev/loop1 /mnt
root@d0256e7ad441:/opt/Loki# python ./loki.py -p /mnt --noprocscan

This Docker container works perfectly on my MacBook. No need to boot a Windows VM to scan a disk image…

[1] https://github.com/Neo23x0/Loki

Xavier Mertens (@xme)
Xameco
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS.edu Research Journal: Volume 3 , (Thu, Apr 27th)

This post was originally published on this site

One of my privileges as dean of research for the SANS.edu college is the ability to work with some of our graduate students as they complete their research projects. More recently, I have also been lucky to advise many of our undergraduate students as they participate in our Internet Storm Center internship. You may have seen me highlight some of the work done by our students in diaries or on the daily podcast. At times, I have been able to interview some of our students for podcast episodes.

SANS.edu college research journal

Yesterday, SANS.edu released the third volume of our research journal, summarizing the best papers completed by students over the last year. Each student is assigned a member of our research committee to assist them as they conduct the research. Thanks to this research committee, our writing center, and all the other resources assisting our students in creating this fantastic work. To be included in the journal, papers must be graded with an "A."

When selecting research topics, students are asked to investigate solutions to current, relevant problems. Papers not only present the solution but also prove that the solution works. Our students are asked to conduct experiments to test solutions and to show how they apply to the problem they are supposed to address. 

In line with our "SANS promise," the research papers, just like any SANS class, should provide you with information you can apply "the next day at work." This year, we are also highlighting some of the work of our undergraduate interns.

The SANS.edu college research journal is available for download here: https://www.sans.edu/cyber-security-research.

Please let me know if you find it useful or if there is anything we should improve in future editions. (email me at jullrich – sans.edu ).


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

VMware Skyline Advisor Pro Proactive Findings – April 2023 Edition

This post was originally published on this site

VMware Skyline Advisor Pro releases new proactive Findings every month. Findings are prioritized by trending issues in VMware Technical Support, issues raised through post-escalation review, security vulnerabilities, issues raised by VMware engineering, and issues nominated by customers. For the month of April, we released 45 new Findings. Of these, there are 34 Findings based … Continued

The post VMware Skyline Advisor Pro Proactive Findings – April 2023 Edition appeared first on VMware Support Insider.

Strolling through Cyberspace and Hunting for Phishing Sites, (Wed, Apr 26th)

This post was originally published on this site

From time to time, and as much as my limited time permits, I explore the Internet and my DShield logs to see if I can uncover any interesting artifacts that suggest nefarious behaviour. Time-driven events such as tax filing are also considered when I perform such hunting activities. I recently discovered one such site masquerading as the Inland Revenue Authority of Singapore (IRAS) and observed some interesting points.

Calculating CVSS Scores with ChatGPT, (Tue, Apr 25th)

This post was originally published on this site

Everybody appears to be set to use ChatGPT for evil. After all, what is the fun in making the world a better place if, instead, you can make fun of a poor large-scale language model whose developers only hinted at what it could mean to be good?

Having not given up on machines finally taking over to beat the "humane" into "humanity," I recently looked at some ways to use ChatGPT more defensively.

An issue I have been struggling with is vendors like Apple providing very terse and unstructured vulnerability summaries. You may have seen my attempt to create a more structured version of them and to assign severities to these vulnerabilities. Given that there are often dozens of vulnerabilities, and given the limitations of my human form, the severity I assign is more of a "best guess." So I figured I would try to automate this with ChatGPT, and the initial results are not bad.

For example, let's take the last Apple vulnerability, CVE-2023-28206. This was an already exploited ("0-Day") privilege escalation vulnerability. 

ChatGPT delivers the following analysis:

Given the limited information, I think the score of 8.8 and the accompanying analysis aren't bad. Personally, I would probably have rated it a bit lower.
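As a rough sketch of what such an automation could look like, the snippet below asks the OpenAI chat API to score an advisory text; the model choice, prompt wording, and pre-1.0 openai library interface are my assumptions, not necessarily what was used here:

import openai  # requires openai < 1.0 for the ChatCompletion interface; set openai.api_key first

# Advisory text paraphrased from Apple's published description of CVE-2023-28206.
ADVISORY = ("CVE-2023-28206: An app may be able to execute arbitrary code with kernel "
            "privileges. Apple is aware of a report that this issue may have been "
            "actively exploited.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": "You are a vulnerability analyst. Return a CVSS v3.1 "
                                      "base score, the vector string, and a short rationale."},
        {"role": "user", "content": ADVISORY},
    ],
)

print(response["choices"][0]["message"]["content"])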

I will probably add this to my Apple vulnerability parser and use this the next time Apple releases an update 🙂

 

 


Johannes B. Ullrich, Ph.D., Dean of Research, SANS.edu
 

 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Choose Korean in AWS Support as Your Preferred Language

This post was originally published on this site

Today, we are announcing the general availability of AWS Support in Korean as your preferred language, in addition to English, Japanese, and Chinese.

As the number of customers speaking Korean grows, AWS Support is invested in providing the best support experience possible. You can now communicate with AWS Support engineers and agents in Korean when you create a support case at the AWS Support Center.

Now all customers can receive account and billing support in Korean by email, phone, and live chat at no additional cost during the supported hours. Depending on your Support plan, customers subscribed to the Enterprise, Enterprise On-Ramp, or Business Support plans can receive personalized technical support in Korean 24 hours a day, 7 days a week. Customers subscribed to the Developer Support plan can receive technical support during business hours, generally defined as 9:00 AM to 6:00 PM in the customer's country as set in the My Account console, excluding holidays and weekends. These times may vary in countries with multiple time zones.

We also added the localized user interface of the AWS Support Center in Korean, in addition to Japanese and Chinese. AWS Support Center will be displayed in the language you select from the dropdown of available languages in Unified Settings of your AWS Account.

Here is a new AWS Support Center page in Korean:

You can also access customer service, AWS documentation, technical papers, and support forums in Korean.

Getting Started with Your Supported Language in AWS Support
To get started with AWS Support in your supported language, create a Support case in AWS Support Center. In the final step in creating a Support case, you can choose a supported language, such as English, Chinese (中文), Korean (한국어), or Japanese (日本語) as your Preferred contact language.

When you choose Korean, the contact options shown are customized to your Support plan.

For example, in the case of Basic Support plan customers, you can choose Web to get support via email, Phone, or Live Chat when available. AWS customers with account and billing inquiries can receive support in Korean from our customer service representatives with proficiency in Korean at no additional cost during business hours defined as 09:00 AM to 06:00 PM Korean Standard Time (GMT+9), excluding holidays and weekends.

For technical support, you may choose Web, Phone, or Live Chat depending on your Support plan to get in touch with support staff with proficiency in Korean, in addition to English, Japanese, and Chinese.

Here is a screen in Korean to get technical support in the Enterprise Support plan:

When you create a support case in your preferred language, the case will be routed to support staff with proficiency in the language indicated in your preferred language selection. To learn more, see Getting started with AWS Support in the AWS documentation.

Now Available
AWS Support in Korean is available today, in addition to English, Japanese, and Chinese. Give it a try, learn more about AWS Support, and send feedback to your usual AWS Support contacts.

Channy

This article was translated into Korean (한국어) in the AWS Korea Blog.

Amazon S3 Compatible Storage on AWS Snowball Edge Compute Optimized Devices Now Generally Available

This post was originally published on this site

We have added a collection of purpose-built services to the AWS Snow Family for customers, such as Snowball Edge in 2016 and Snowcone in 2020. These services run compute-intensive workloads and store data in edge locations with denied, disrupted, intermittent, or limited network connectivity, and transfer large amounts of data from on-premises, rugged, or mobile environments.

Each new service is optimized for space- or weight-constrained environments, portability, and flexible networking options. For example, Snowball Edge devices have three options for device configurations. AWS Snowball Edge Compute Optimized provides a suitcase-sized, secure, and rugged device that customers can deploy in rugged and tactical edge locations to run their compute applications. Customers modernize their edge applications in the cloud using AWS compute and storage services such as Amazon Simple Storage Service (Amazon S3), and then deploy these applications on Snow devices at the edge.

We heard from customers that they also needed access to a local object store to run applications at the edge, such as 5G mobile core and real-time data analytics, to process end-user transactions, and that they had limited storage infrastructure availability in these environments. Although the Amazon S3 Adapter for Snowball enables basic storage and retrieval of objects on a Snow device, customers wanted access to a broader set of Amazon S3 APIs, including flexibility at scale, local bucket management, object tagging, and S3 event notifications.

Today, we’re announcing the general availability of Amazon S3 compatible storage on Snow for our Snowball Edge Compute Optimized devices. This makes it easy for you to store data and run applications with local S3 buckets that require low latency processing at the edge.

With Amazon S3 compatible storage on Snow, you can use an expanded set of Amazon S3 APIs to easily build applications on AWS and deploy them on Snowball Edge Compute Optimized devices. This eliminates the need to re-architect applications for each deployment. You can manage applications requiring Amazon S3 compatible storage across the cloud, on-premises, and at the edge in connected and disconnected environments with a consistent experience.

Moreover, you can use AWS OpsHub, a graphical user interface, to manage your Snow Family services and Amazon S3 compatible storage on the devices at the edge or remotely from a central location. You can also use the Amazon S3 SDK or the AWS Command Line Interface (AWS CLI) to create and manage S3 buckets, get S3 event notifications using MQTT, and receive local service notifications using SMTP, just as you do in AWS Regions.

With Amazon S3 compatible storage on Snow, we are now able to address various use cases in limited network environments, giving customers secure, durable local object storage. For example, customers in the intelligence community and in industrial IoT deploy applications such as video analytics in rugged and mobile edge locations.

Getting Started with S3 Compatible Storage on Snowball Edge Compute Optimized
To order new Amazon S3 enabled Snowball Edge devices, create a job in the AWS Snow Family console. You can replace an existing Snow device or cluster with new replacement devices that support S3 compatible storage.

In Step 1 – Job type, input your job name and choose Local compute and storage only. In Step 2 – Compute and storage, choose your preferred Snowball Edge Compute Optimized device.

Select Amazon S3 compatible storage, a new option for S3 compatible storage. The current S3 Adapter solution is on a deprecation path, and we recommend migrating workloads to use Amazon S3 compatible storage on Snow.

When you select Amazon S3 compatible storage, you can configure Amazon S3 compatible storage capacity for a single device or for a cluster. The Amazon S3 storage capacity depends on the quantity and type of Snowball Edge device.

  • For single-device deployment, you can provision granular Amazon S3 capacity up to a maximum of 31 TB on a Snowball Edge Compute Optimized device.
  • For a cluster setup, all storage capacity on a device is allocated to Amazon S3 compatible storage on Snow. You can provision a maximum of 500 TB on a 16 node cluster of Snowball Edge Compute Optimized devices.

When you provide all necessary job details and create your job, you can see the status of the delivery of your device in the job status section.

Manage S3 Compatible Storage on Snow with OpsHub
Once your device arrives at your site, power it on and connect it to your network. To manage your device, download, install, and launch the OpsHub application on your laptop. After installation, you can unlock the device and start managing it and using supported AWS services locally.

OpsHub provides a dashboard that summarizes key metrics, such as storage capacity and active instances on your device. It also provides a selection of AWS services that are supported on the Snow Family devices.

Log in to OpsHub, then choose Manage Storage. This takes you to the Amazon S3 compatible storage on Snow landing page.

For Start service setup type, choose Simple if your network uses dynamic host configuration protocol (DHCP). With this option, the virtual network interface cards (VNICs) are created automatically on each device when you start the service. When your network uses static IP addresses, you need to create VNICs for each device manually, so choose the Advanced option.

Once the service starts, you’ll see its status is active with a list of endpoints. The following example shows the service activated in a single device:

Choose Create bucket if you want to create a new S3 bucket on your device. Otherwise, you can upload files to your selected bucket. Newly uploaded objects have destination URLs such as s3-snow://test123/test_file, with a bucket name that is unique within the device or cluster.

You can also use bucket lifecycle rules to define when objects should be deleted, based on age or date. Choose Create lifecycle rule in the Management tab to add a new lifecycle rule.

You can select either Delete objects or Delete incomplete multipart uploads as the rule action. Configure the rule trigger that schedules deletion based on a specific date or the object’s age. In this example, I configure objects to be deleted two days after they are uploaded.

You can also use the Amazon S3 SDK/CLI for all API operations supported by S3 for Snowball Edge. To learn more, see API Operations Supported on Amazon S3 for Snowball Edge in the AWS documentation.
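As a hedged sketch, the boto3 calls below create a bucket, upload an object, and set the kind of expiration rule shown above against the device's s3-snow endpoint; the endpoint URL, certificate path, profile name, and bucket name are placeholders, and exact parameter support on Snow may differ from the Region API:

import boto3

# Placeholders: take the real s3-snow endpoint, certificate, and credentials
# from OpsHub or the describe-service output on your Snowball Edge device.
session = boto3.Session(profile_name="snowball-edge")
s3 = session.client("s3",
                    endpoint_url="https://10.0.0.5:443",
                    verify="/path/to/device-certificate.pem")

s3.create_bucket(Bucket="test123")
s3.put_object(Bucket="test123", Key="test_file", Body=b"hello from the edge")

# Expire objects two days after upload, similar to the OpsHub lifecycle rule above.
s3.put_bucket_lifecycle_configuration(
    Bucket="test123",
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-after-two-days",
        "Status": "Enabled",
        "Filter": {},
        "Expiration": {"Days": 2},
    }]},
)

for obj in s3.list_objects_v2(Bucket="test123").get("Contents", []):
    print(obj["Key"], obj["Size"])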

Things to know
Keep these things in mind regarding additional features and considerations when you use Amazon S3 compatible storage on Snow:

  • Capacity: If you fully utilize Amazon S3 capacity on your device or cluster, your write (PUT) requests return an insufficient capacity error. Read (GET) operations continue to function normally. To monitor the available Amazon S3 capacity, you can use the OpsHub S3 on the Snow page or use the describe-service CLI command. Upon detecting insufficient capacity on the Snow device or cluster, you must free up space by deleting data or transferring data to an S3 bucket in the Region or another on-premises device.
  • Resiliency: Amazon S3 compatible storage on Snow stores data redundantly across multiple disks on each Snow device and multiple devices in your cluster, with built-in protection against correlated hardware failures. In the event of a disk or device failure within the quorum range, Amazon S3 compatible storage on Snow continues to operate until hardware is replaced. Additionally, Amazon S3 compatible storage on Snow continuously scrubs data on the device to make sure of data integrity and recover any corrupted data. For workloads that require local storage, the best practice is to back up your data to further protect your data stored on Snow devices.
  • Notifications: Amazon S3 compatible storage on Snow continuously monitors the health status of the device or cluster. Background processes respond to data inconsistencies and temporary failures to heal and recover data to make sure of resiliency. In the case of nonrecoverable hardware failures, Amazon S3 compatible storage on Snow can continue operations and provides proactive notifications through emails, prompting you to work with AWS to replace failed devices. For connected devices, you have the option to enable the “Remote Monitoring” feature, which will allow AWS to monitor service health online and proactively notify you of any service issues.
  • Security: Amazon S3 compatible storage on Snow supports encryption using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) or customer-provided keys (SSE-C) and authentication and authorization using Snow IAM actions namespace (s3:*) to provide you with distinct controls for data stored on your Snow devices. Amazon S3 compatible storage on Snow doesn’t support object-level access control list and bucket policies. Amazon S3 compatible storage on Snow defaults to Bucket Owner is Object Owner, making sure that the bucket owner has control over objects in the bucket.

Now Available
Amazon S3 compatible storage on Snow is now generally available for AWS Snowball Edge Compute Optimized devices in all AWS Commercial and GovCloud Regions where AWS Snow is available.

To learn more, see the AWS Snowball Edge Developer Guide and send feedback to AWS re:Post for AWS Snowball or through your usual AWS support contacts.

Channy