Amazon FSx for Windows File Server – Storage Size and Throughput Capacity Scaling

Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible over the Server Message Block (SMB) protocol. It is built on Windows Server and delivers a wide range of administrative features, such as user quotas, end-user file restore, and Microsoft Active Directory integration, consistent with operating an on-premises Microsoft Windows file server. Today, we are happy to announce two new features: storage capacity scaling and throughput capacity scaling. Storage capacity scaling lets you increase your file system size as your data set grows, so you don’t need to worry about growing data sets when creating the file system. Throughput capacity scaling is bidirectional, letting you adjust throughput up or down dynamically to fine-tune performance and reduce costs. This is useful for cyclical workloads or for one-time bursts to achieve a time-sensitive goal such as a data migration.

When we create a file system, we specify Storage Capacity and Throughput Capacity.

Storage capacity can be specified between 32 GiB and 65,536 GiB for SSD, and between 2,000 GiB and 65,536 GiB for HDD. Every Amazon FSx file system also has a throughput capacity, which you configure when the file system is created. The throughput capacity determines the speed at which the file server hosting your file system can serve file data to clients accessing it. Higher levels of throughput capacity also come with more memory for caching data on the file server and support higher levels of IOPS.

With this release, you can scale up storage capacity and scale throughput capacity up or down on your file system with the click of a button within the AWS Management Console, or by using the AWS Software Development Kit (SDK) or Command Line Interface (CLI) tools. The file system stays online while scaling is in progress, and you retain full access to it throughout storage scaling. While scaling throughput, Amazon FSx for Windows switches out the file servers backing your file system, so you’ll see an automatic failover and failback on multi-AZ file systems.

So, let’s take a quick tour of the new features, starting with the AWS Management Console.

Scaling via the AWS Management Console

Before we begin, we assume that AWS Managed Microsoft AD (via AWS Directory Service) and Amazon FSx for Windows File Server are already set up. You can find a walkthrough guide here. From the Actions drop-down, we can select Update storage capacity or Update throughput capacity.

We can assign new storage capacity by Percentage or Absolute value.

With throughput scaling, we can select the desired capacity from the drop-down list.

The Status then changes to In Progress, and we still have access to the file system.

Scaling Storage Capacity and Throughput Capacity via CLI

First, we need a CLI environment. I prefer to work in AWS Cloud9, but you can use whatever you like. We need to know the file system ID in order to scale it. Type the command below:

aws fsx --endpoint-url <endpoint> describe-file-systems

The endpoint differs among AWS Regions; you can find the full list here. The response is long and detailed, and the file system ID appears near the top.
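
To pull out just the IDs, the response can also be filtered with the CLI’s --query option, like this:

aws fsx --endpoint-url <endpoint> describe-file-systems --query 'FileSystems[].FileSystemId'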

Let’s change the storage capacity with the command below:

aws fsx --endpoint-url <endpoint> update-file-system --file-system-id=<FileSystemId> --storage-capacity <new capacity>

The <new capacity> is a number of GiB, up to 65,536, and must be at least 10% larger than the current capacity. Once we run the command, the new capacity is available for use within minutes. Once the new storage capacity is available on our file system, Amazon FSx begins storage optimization, the process of migrating the file system’s data to the new, larger disks. If needed, we can accelerate the storage optimization process at any time by temporarily increasing the file system’s throughput capacity. Performance impact is minimal while Amazon FSx performs these operations in the background, and we always have full access to our file system.
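
For example, to grow a hypothetical 300 GiB file system by just over the 10% minimum (the file system ID below is made up), the command would look like this:

aws fsx --endpoint-url <endpoint> update-file-system --file-system-id=fs-0123456789abcdef0 --storage-capacity 330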

If you enter the following command, you’ll see that the file system update is “IN_PROGRESS” and storage optimization is “PENDING” near the bottom of the response.

aws fsx --endpoint-url <endpoint> describe-file-systems

After the storage optimization process begins, the same command shows the storage optimization action as “IN_PROGRESS”.

We can also go further and run throughput scaling at the same time. Type the command below:

aws fsx --endpoint-url <endpoint> update-file-system --file-system-id=<FileSystemId> --windows-configuration ThroughputCapacity=<new capacity>

The <new capacity> must be one of 8, 16, 32, 64, 128, 256, 512, 1024, or 2048 (MB/s) and, unlike storage capacity, can be either higher or lower than the current value.
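
For instance, setting a (made-up) file system to 64 MB/s would look like this:

aws fsx --endpoint-url <endpoint> update-file-system --file-system-id=fs-0123456789abcdef0 --windows-configuration ThroughputCapacity=64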

Now, we can see that throughput scaling and storage optimization are both in progress. Again, we still have full access to the file system.
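
One way to watch both operations from the CLI is to filter the response down to the AdministrativeActions field, which tracks each update and its status:

aws fsx --endpoint-url <endpoint> describe-file-systems --query 'FileSystems[].AdministrativeActions'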

When we need more than 65,536 GiB of capacity, we can use Microsoft’s Distributed File System (DFS) Namespaces to group multiple file systems under a single namespace.
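
As a rough sketch of that grouping (the namespace path and file system DNS name below are hypothetical), the DFS Namespaces PowerShell cmdlets can add each file system as a folder target under one namespace:

New-DfsnFolder -Path "\\corp.example.com\files\data2" -TargetPath "\\fs-2.corp.example.com\share"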

Available Today

Storage capacity scaling and throughput capacity scaling are available today in all AWS Regions where Amazon FSx for Windows File Server is available. This support applies to new file systems starting today, and will be expanded to all file systems in the coming weeks. Check our documentation for more details.

– Kame

XLMMacroDeobfuscator: An Update, (Mon, Jun 1st)

XLMMacroDeobfuscator is an open-source tool to deobfuscate Excel 4 macros. I wrote diary entries about it here and here.

In my first diary entry, I remarked that I also had to install a missing Python module. This is no longer the case with the latest versions: I just install it with a single pip command.
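
That single command is just the usual install from PyPI:

pip install XLMMacroDeobfuscator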

The author also commented on my diary entry, suggesting the use of a couple of options to yield a cleaner output ready for grepping.

Like this:
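
A sketch of what that looks like, assuming the --no-indent and --output-formula-format options from the tool’s README (malware.xlsm is a placeholder filename):

xlmdeobfuscator --file malware.xlsm --no-indent --output-formula-format "[[INT-FORMULA]]"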

Indeed, this provides cleaner output when grepping for http URLs, for example:
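
Building on the same hypothetical invocation:

xlmdeobfuscator --file malware.xlsm --no-indent --output-formula-format "[[INT-FORMULA]]" | grep http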

And this output can also be used to extract the relevant macros, with inverted greps for RUN, GOTO, …, like this:
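
Again as a sketch; the exact statements worth filtering out vary by sample:

xlmdeobfuscator --file malware.xlsm --no-indent --output-formula-format "[[INT-FORMULA]]" | grep -v -e RUN -e GOTO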

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Learning the Basics of VMware Horizon 7.12 – Part 1 – Introduction

Recently, my employer asked me to learn Horizon 7 for potential projects. The last time I looked at VMware Horizon (Horizon) was in the summer of 2015 for a vendor project. Horizon has changed a lot, for the better, in the last five years. I spent two weeks installing, configuring, breaking, and repeating the process. […]

VMware vExpert 2020 Round 2 Application Now Live!

Round 2 of the vExpert 2020 application is now live! It will run from 1st Jun to 19th Jun. Do not miss this deadline.

Start preparing now and submit your application.
If you need more help or are not sure what information to include, join us in our VMUG SG session coming 3rd Jun, 1300 Singapore Time here.
Here is also a post I did for those who are looking at applying, to give you all the information I can think of.
So head over to https://vexpert.vmware.com/ and submit your application.
All the best and good luck!

How To Apply to become VMware vExpert and Why?

The vExpert Application Program by VMware is just opening its second-half application for 2020, starting today, 1st June. It’s a unique opportunity for IT admins to get in touch with this amazing community of people, learn more about VMware products, get more recognition in your job, and access […]

Read the full post How To Apply to become VMware vExpert and Why? at ESX Virtualization.

vExpert Applications are Open – Don’t Miss Out!

vExpert Applications are Open! Don’t miss out on the opportunity to join this amazing program & community. Applications will be open from June 1st, 2020 to July 19th, 2020 and the awards will be announced on July 17th. Apply for vExpert 2020. What the Program is About: The vExpert Program is about contributing

The post vExpert Applications are Open – Don’t Miss Out! appeared first on VMware vExpert Blog.

Windows 10 Built-in Packet Sniffer – PktMon, (Sun, May 31st)

With the October 2018 Update, Microsoft released a built-in packet sniffer for Windows 10, located at C:\Windows\system32\PktMon.exe. At ISC we like packets, and this is one of multiple ways to capture packets and send us a copy for analysis. Rob previously published another way of capturing packets in Windows here. If Windows 10 were compromised, this application would be a prime target for malicious actors, so in an enterprise it needs to be monitored, protected, or removed.

In order to collect packets you need to launch a Windows 10 command prompt as admin before using PktMon.

The first thing to do is figure out what PktMon can do. If you execute PktMon filter add help, it lists all possible options: filtering by MAC address, data link, VLAN, protocol, IPv4/IPv6, and services.

For example, let’s capture SSL traffic on port 443. The filter will look like this:

PktMon filter add -p 443

To view the filter list:
PktMon filter list

To remove the same filter when done:
PktMon filter remove -p 443

To clear the whole filter list (and capture on all ports again):
PktMon filter remove list

To list the interfaces available for packet capture on Windows 10, use PktMon comp list. This list can contain several interfaces (e.g., wireless, VPN, Ethernet).

Start PktMon with -p 0 to capture the entire packet (the default is only the first 128 bytes), -c 10 to capture from the Ethernet interface with Id 10, and --etw to save the packets to a log file with Event Tracing for Windows (the default filename is PktMon1.etl):
pktmon start --etw -p 0 -c 10

Stopping PktMon prints the traffic statistics from the interface and leaves a PktMon1.etl file on the drive where PktMon was started.
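
The stop command itself is simply:

pktmon stop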

The file PktMon1.etl can be converted to text:

pktmon format PktMon1.etl -o https.txt

14:08:19.937939100 MAC Dest 0x000C2986BE53, MAC Src 0x247703FD6DE8, EtherType IPv4 , VlanId 0, IP Dest 192.168.25.181, IP Src 192.168.25.165, Protocol UDP , Port Dest 62594, Port Src 3389, TCPFlags 0, PktGroupId 1125899906842838, PktCount 1, Appearance 1, Direction Tx , Type Ethernet , Component 95, Edge 1, Filter 0

Finally, reset all counters back to 0 and get ready for the next packet capture:

PktMon reset
All counters reset to 0.

Microsoft Network Monitor is dated and no longer actively supported by Microsoft, but until the next release of PktMon in Windows 10 2004 adds support for conversion to pcapng, it can be used to open and read these packet capture files, or they can be read as text as previously demonstrated.
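
Alternatively, Microsoft’s open-source etl2pcapng utility (https://github.com/microsoft/etl2pcapng) can convert the ETL capture to pcapng for tools like Wireshark; a sketch of its usage:

etl2pcapng.exe PktMon1.etl PktMon1.pcapng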

[1] https://docs.microsoft.com/en-us/windows/win32/etw/about-event-tracing
[2] https://isc.sans.edu/forums/diary/Packet+Editor+and+Builder+by+Colasoft/24682/
[3] https://www.microsoft.com/en-in/download/details.aspx?id=4865
[4] https://isc.sans.edu/forums/diary/No+Wireshark+No+TCPDump+No+Problem/19409/
[5] https://docs.microsoft.com/en-us/windows/whats-new/whats-new-windows-10-version-2004

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

YARA v4.0.1, (Sat, May 30th)

A couple of weeks ago, YARA 4.0.0 was released with support for BASE64 strings.

If you plan on using/testing this new feature, be sure to use the latest YARA version 4.0.1 (a bugfix version).
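
A minimal rule to try the feature with might look like this (a sketch; the rule name, string, and sample filename are made up):

rule b64_http
{
    strings:
        $a = "http" base64
    condition:
        $a
}

Saved as b64.yar, it can be run against a sample with:

yara b64.yar sample.bin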

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

The Impact of Researchers on Our Data, (Fri, May 29th)

Researchers have been using various tools to perform internet-wide scans for many years. Some publish their data continuously, for example to notify users of infected or misconfigured systems. Others use the data to feed internal proprietary systems, or publish occasional research papers.

We have been offering a feed of IP addresses used by researchers. This feed is available via our API. To get the complete list, use:

https://isc.sans.edu/api/threatcategory/research
(add ?json or ?tab to get it in JSON or a tab delimited format)
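
For example, to fetch the list as JSON from the command line:

curl 'https://isc.sans.edu/api/threatcategory/research?json'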

We also have feeds for specific research groups (see https://isc.sans.edu/api ). 

Some of the research groups I have seen recently:

  • Shodan: Probably the best-known group. Shodan publishes the results of its scans at shodan.io.
  • Shadowserver: Also relatively well known. Shadowserver doesn’t make much of its data public. But it will make data available to ISPs to flag vulnerable/infected systems. You can find more about shadowserver at https://www.shadowserver.org/
  • Stretchoid: I just recently came across this group, and do not know much about them. They have a very minimal web page with an opt-out form: http://www.stretchoid.com/
  • Onyphe: A company selling threat feeds. See https://www.onyphe.io/
  • CyberGreen: See https://www.cybergreen.net/ . A bit like Shadowserver in that it is organized as a not-for-profit collaborative effort. Some data is made public, but more in aggregate form.

The next question: Should you block these IP addresses? Well… my simple honest answer: Probably not, but it all depends.

Shodan, for example (I put them in the research category), will publish the data it collects, and an attacker may use Shodan to find a vulnerable system instead of performing their own scan. There are anecdotal stories of that happening, and I have seen pentesters do this. But we had a SANS Technology Institute student perform some research to measure the impact of Shodan, and he did not find a significant change in attack traffic depending on whether an IP was listed or not [1]. On the other hand, he also found that many IP addresses that appear to be used by Shodan are not identified as such via a reverse DNS lookup. Our list will likely miss a lot of them.

But then again, it probably doesn’t hurt (right… our lists are “perfect”? Probably not). And blocking these scans at the perimeter may cut down on some of the noise.

So what is the impact? Here is some data I pulled from yesterday. We had a total of about 260k IP addresses reported to us, and they generated about 30 million reports, so on average a single source generates about 117 reports. The one researcher exceeding this number significantly is Shodan, with about 5,176 reports per source. Remember that Shodan will hit multiple target ports; it also uses a relatively small set of published source IPs.

As far as the number of reports goes, Stretchoid is actually the “winner,” with Shodan second and Shadowserver third. CyberGreen, with a total of 100 reports (compared to Stretchoid’s 164k), hardly shows up. This may in part be due to us missing a lot of the CyberGreen addresses. I will have to look into that again.

What about the legality and ethics of these scans? The legality of port scans has often been discussed, and I am not a legal expert, so I will not weigh in on that. In my opinion, an ethical researcher should have a well-published “opt-out” option. IP addresses should reverse-resolve to a hostname that provides additional information about the organization performing the scan. Scans should also be careful not to cause any damage. A famous example is an old (very old) vulnerability in Cisco routers where an empty UDP packet to port 500 caused the router to crash. Researchers should not go beyond a simple connection attempt (using a valid payload) and a banner “grab”. These scans should not attempt to log in, and rate-limiting has to be done carefully; in particular, if IP addresses are scanned sequentially, several of these IPs may point to the same server.

Anything else you have seen researchers do that you liked or didn’t like? There are more researchers than I listed here. I need to add more to the feed. Also, not all of them scan continuously, and the data I am showing here is only from yesterday.


Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.