AWS re:Inforce 2019 – Security, Identity, and Compliance

AWS re:Inforce, our new conference dedicated to cloud security, opens in Boston on June 25th. We’re expecting about 8,000 attendees, making this bigger than the first re:Invent! Just like re:Invent, re:Inforce is a learning conference for builders.

With over 300 breakout sessions (intermediate, advanced, and expert) spanning four tracks and a virtual Capture The Flag event, attendees will walk away knowing how to use cloud-based infrastructure in a secure and compliant manner. The re:Inforce agenda also includes a healthy collection of bootcamps, chalk talks, workshops, full-day hands-on labs, builder sessions, leadership sessions, and the Security Jam.

Diving deeper into the session offerings, a wide range of services will be considered – including (to name a few) AWS WAF, AWS Firewall Manager, AWS KMS, AWS Secrets Manager, AWS Lambda, AWS Control Tower, Amazon SageMaker, Amazon GuardDuty, AWS CloudTrail, Amazon Macie, Amazon RDS, Amazon Aurora, AWS Identity and Access Management, Amazon EKS, and Amazon Inspector. You will have the opportunity to learn about a broad variety of important topics including building secure APIs, encryption, privileged access, auditing the cloud, open source, DevSecOps, building a security culture, hiring/staffing, and privacy by design as well as specific compliance regimes such as PCI, NIST, SOC, FedRAMP, and HIPAA.

To learn more about re:Inforce, read the FAQ, check out the Venue & Hotel info, and review the Code of Conduct.

Register Now & Save $100
If you register now and use code RFSAL19, you can save $100, while supplies last.

Jeff;

 

 

Using PSScriptAnalyzer to check PowerShell version compatibility

PSScriptAnalyzer version 1.18 was released recently, and ships with powerful new rules that can check PowerShell scripts for incompatibilities with other PowerShell versions and environments.

In this blog post, the first in a series, we’ll see how to use these new rules to check a script for problems running on PowerShell 3, 5.1 and 6.

Wait, what’s PSScriptAnalyzer?

PSScriptAnalyzer is a module providing static analysis (or linting) and some dynamic analysis (based on the state of your environment) for PowerShell. It’s able to find problems and fix bad habits in PowerShell scripts as you create them, similar to the way the C# compiler will give you warnings and find errors in C# code before it’s executed.

If you use the VSCode PowerShell extension, you might have seen the “green squigglies” and problem reports that PSScriptAnalyzer generates for scripts you author:

Image of PSScriptAnalyzer linting in VSCode with green squigglies

You can install PSScriptAnalyzer to use on your own scripts with:

Install-Module PSScriptAnalyzer -Scope CurrentUser

PSScriptAnalyzer works by running a series of rules on your scripts, each of which independently assesses some issue. For example, AvoidUsingCmdletAliases checks that aliases aren’t used in scripts, and MisleadingBacktick checks that backticks at the ends of lines aren’t followed by whitespace.
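
As a quick illustration (the snippet here is just a throwaway example, but -ScriptDefinition and -IncludeRule are standard Invoke-ScriptAnalyzer parameters), you can watch a rule fire by linting a small snippet inline:

# 'gci' and '%' are aliases, so AvoidUsingCmdletAliases reports both of them
Invoke-ScriptAnalyzer -ScriptDefinition 'gci *.log | % { $_.Name }'

# Restrict the run to a single rule if that's all you care about
Invoke-ScriptAnalyzer -ScriptDefinition 'gci *.log | % { $_.Name }' -IncludeRule PSAvoidUsingCmdletAliases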

For more information, see the PSScriptAnalyzer deep dive blog series.

Introducing the compatibility check rules

The new compatibility checking functionality is provided by three new rules:

  • PSUseCompatibleSyntax, which checks whether a syntax used in a script will work in other PowerShell versions.
  • PSUseCompatibleCommands, which checks whether commands used in a script are available in other PowerShell environments.
  • PSUseCompatibleTypes, which checks whether .NET types and static methods/properties are available in other PowerShell environments.

The syntax check rule simply requires a list of PowerShell versions you want to target, and will tell you if a syntax used in your script won’t work in any of those versions.

The command and type checking rules are more sophisticated and rely on profiles (catalogs of commands and types available) from commonly used PowerShell platforms. They require configuration to use these profiles, which we’ll go over below.

For this post, we’ll look at configuring and using PSUseCompatibleSyntax and PSUseCompatibleCommands to check that a script works with different versions of PowerShell. We’ll look at PSUseCompatibleTypes in a later post, although it’s configured very similarly to PSUseCompatibleCommands.

Working example: a small PowerShell script

Imagine we have a small (and contrived) archival script saved to ./archiveScript.ps1:

# Import helper module to get folders to archive
Import-Module -FullyQualifiedName @{ ModuleName = 'ArchiveHelper'; ModuleVersion = '1.1' }

$paths = Get-FoldersToArchive -RootPath 'C:\Documents\DocumentsToArchive'
$archiveBasePath = '\\ArchiveServer\DocumentArchive'

# Dictionary to collect hashes
$hashes = [System.Collections.Generic.Dictionary[string, string]]::new()
foreach ($path in $paths)
{
    # Get the hash of the file and turn it into a base64 string
    $hash = (Get-FileHash -LiteralPath $path).Hash

    # Add this file to the hash catalog
    $hashes[$hash] = $path

    # Now give the archive a unique name and zip it up
    $name = Split-Path -LeafBase $path
    Compress-Archive -LiteralPath $path -DestinationPath (Join-Path $archiveBasePath "$name-$hash.zip")
}

# Add the hash catalog to the archive directory
ConvertTo-Json $hashes | Out-File -LiteralPath (Join-Path $archiveBasePath "catalog.json") -NoNewline

This script was written in PowerShell 6.2, and we’ve tested that it works there. But we also want to run it on other machines, some of which run PowerShell 5.1 and some of which run PowerShell 3.0.

Ideally we will test it on those other platforms, but it would be nice if we could try to iron out as many bugs as possible ahead of time.

Checking syntax with PSUseCompatibleSyntax

The first and easiest rule to apply is PSUseCompatibleSyntax. We’re going to create some settings for PSScriptAnalyzer to enable the rule, and then run analysis on our script to get back any diagnostics about compatibility.

Running PSScriptAnalyzer is straightforward. It comes as a PowerShell module, so once it’s installed on your module path you just invoke it on your file with Invoke-ScriptAnalyzer, like this:

Invoke-ScriptAnalyzer -Path ./archiveScript.ps1

A very simple invocation like this one will run PSScriptAnalyzer using its default rules and configurations on the script you point it to.
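
If you’re curious what rules are available (the compatibility rules discussed below ship with the module but are not enabled by default), the module also exports Get-ScriptAnalyzerRule:

# List every rule the module ships, with its severity
Get-ScriptAnalyzerRule | Select-Object RuleName, Severity

# Show just the compatibility rules introduced in this post
Get-ScriptAnalyzerRule -Name PSUseCompatible*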

However, because they require more targeted configuration, the compatibility rules are not enabled by default. Instead, we need to supply some settings to run the syntax check rule. In particular, PSUseCompatibleSyntax requires a list of the PowerShell versions you are targeting with your script.

$settings = @{
    Rules = @{
        PSUseCompatibleSyntax = @{
            # This turns the rule on (setting it to false will turn it off)
            Enable = $true

            # List the targeted versions of PowerShell here
            TargetVersions = @(
                '3.0',
                '5.1',
                '6.2'
            )
        }
    }
}

Invoke-ScriptAnalyzer -Path ./archiveScript.ps1 -Settings $settings

Running this will present us with the following output:

RuleName                            Severity     ScriptName Line  Message
--------                            --------     ---------- ----  -------
PSUseCompatibleSyntax               Warning      archiveScr 8     The constructor syntax
                                                 ipt.ps1          '[System.Collections.Generic.Dictionary[string,
                                                                  string]]::new()' is not available by default in
                                                                  PowerShell versions 3,4

This is telling us that the [dictionary[string, string]]::new() syntax we used won’t work in PowerShell 3. Better than that, in this case the rule has actually proposed a fix:

$diagnostics = Invoke-ScriptAnalyzer -Path ./archiveScript.ps1 -Settings $settings
$diagnostics[0].SuggestedCorrections
File              : C:\Users\roholt\Documents\Dev\sandbox\VersionedScript\archiveScript.ps1
Description       : Use the 'New-Object @($arg1, $arg2, ...)' syntax instead for compatibility with PowerShell versions 3,4
StartLineNumber   : 8
StartColumnNumber : 11
EndLineNumber     : 8
EndColumnNumber   : 73
Text              : New-Object 'System.Collections.Generic.Dictionary[string,string]'
Lines             : {New-Object 'System.Collections.Generic.Dictionary[string,string]'}
Start             : Microsoft.Windows.PowerShell.ScriptAnalyzer.Position
End               : Microsoft.Windows.PowerShell.ScriptAnalyzer.Position

The suggested correction is to use New-Object instead. The way this is suggested might seem slightly unhelpful here with all the position information, but we’ll see later why this is useful.

This dictionary example is a bit artificial of course (since a hashtable would come more naturally), but having a spanner thrown into the works in PowerShell 3 or 4 because of a ::new() is not uncommon. The PSUseCompatibleSyntax rule will also warn you about classes, workflows and using statements depending on the versions of PowerShell you’re authoring for.
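
As a quick illustration of that, you could lint a class-based snippet against the same targets. This reuses the $settings hashtable defined above; if your PSScriptAnalyzer version doesn’t accept -Settings together with -ScriptDefinition, save the snippet to a file and use -Path instead:

# Classes were introduced in PowerShell 5.0, so targeting 3.0 should produce a warning
Invoke-ScriptAnalyzer -ScriptDefinition 'class LogEntry { [string] $Message }' -Settings $settings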

We’re not going to make the suggested change just yet, since there’s more to show you first.

Checking command usage with PSUseCompatibleCommands

We now want to check the commands. Because command compatibility is a bit more complicated than syntax (since the availability of commands depends on more than what version of PowerShell is being run), we have to target profiles instead.

Profiles are catalogs of information taken from stock machines running common PowerShell environments. The ones shipped in PSScriptAnalyzer can’t always match your working environment perfectly, but they come pretty close (there’s also a way to generate your own profile, detailed in a later blog post). In our case, we’re trying to target PowerShell 3.0, PowerShell 5.1 and PowerShell 6.2 on Windows. We have the first two profiles, but in the last case we’ll need to target 6.1 instead. These targets are very close, so warnings will still be pertinent to using PowerShell 6.2. Later when a 6.2 profile is made available, we’ll be able to switch over to that.

We need to look under the PSUseCompatibleCommands documentation for a list of profiles available by default. For our desired targets we pick:

  • PowerShell 6.1 on Windows Server 2019 (win-8_x64_10.0.14393.0_6.1.3_x64_4.0.30319.42000_core)
  • PowerShell 5.1 on Windows Server 2019 (win-8_x64_10.0.17763.0_5.1.17763.316_x64_4.0.30319.42000_framework)
  • PowerShell 3.0 on Windows Server 2012 (win-8_x64_6.2.9200.0_3.0_x64_4.0.30319.42000_framework)

The long names on the right are canonical profile identifiers, which we use in the settings:

$settings = @{
    Rules = @{
        PSUseCompatibleCommands = @{
            # Turns the rule on
            Enable = $true

            # Lists the PowerShell platforms we want to check compatibility with
            TargetProfiles = @(
                'win-8_x64_10.0.14393.0_6.1.3_x64_4.0.30319.42000_core',
                'win-8_x64_10.0.17763.0_5.1.17763.316_x64_4.0.30319.42000_framework',
                'win-8_x64_6.2.9200.0_3.0_x64_4.0.30319.42000_framework'
            )
        }
    }
}

Invoke-ScriptAnalyzer -Path ./archiveScript.ps1 -Settings $settings

There might be a delay the first time you execute this because the rules have to load the catalogs into a cache. Each catalog of a PowerShell platform contains details of all the modules and .NET assemblies available to PowerShell on that platform, which can be as many as 1700 commands with 15,000 parameters and 100 assemblies with 10,000 types. But once it’s loaded, further compatibility analysis will be fast. We get output like this:

RuleName                            Severity     ScriptName Line  Message
--------                            --------     ---------- ----  -------
PSUseCompatibleCommands             Warning      archiveScr 2     The parameter 'FullyQualifiedName' is not available for
                                                 ipt.ps1          command 'Import-Module' by default in PowerShell version
                                                                  '3.0' on platform 'Microsoft Windows Server 2012
                                                                  Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 12    The command 'Get-FileHash' is not available by default in
                                                 ipt.ps1          PowerShell version '3.0' on platform 'Microsoft Windows
                                                                  Server 2012 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 18    The parameter 'LeafBase' is not available for command
                                                 ipt.ps1          'Split-Path' by default in PowerShell version
                                                                  '5.1.17763.316' on platform 'Microsoft Windows Server
                                                                  2019 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 18    The parameter 'LeafBase' is not available for command
                                                 ipt.ps1          'Split-Path' by default in PowerShell version '3.0' on
                                                                  platform 'Microsoft Windows Server 2012 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 19    The command 'Compress-Archive' is not available by
                                                 ipt.ps1          default in PowerShell version '3.0' on platform
                                                                  'Microsoft Windows Server 2012 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 23    The parameter 'NoNewline' is not available for command
                                                 ipt.ps1          'Out-File' by default in PowerShell version '3.0' on
                                                                  platform 'Microsoft Windows Server 2012 Datacenter'

This is telling us that:

  • Import-Module doesn’t support -FullyQualifiedName in PowerShell 3.0;
  • Get-FileHash doesn’t exist in PowerShell 3.0;
  • Split-Path doesn’t have -LeafBase in PowerShell 5.1 or PowerShell 3.0;
  • Compress-Archive isn’t available in PowerShell 3.0, and;
  • Out-File doesn’t support -NoNewline in PowerShell 3.0.

One thing you’ll notice is that the Get-FoldersToArchive function is not being warned about. This is because the compatibility rules are designed to ignore user-provided commands; a command will only be marked as incompatible if it’s present in some profile and not in one of your targets.

Again, we can change the script to fix these warnings, but before we do, I want to show you how to make this a more continuous experience; as you change your script, you want to know if the changes you make break compatibility, and that’s easy to do with the steps below.

Using a settings file for repeated invocation

The first thing we want is to make the PSScriptAnalyzer invocation more automated and reproducible. A nice step toward this is taking the settings hashtable we made and turning it into a declarative data file, separating out the “what” from the “how”.

PSScriptAnalyzer will accept a path to a PSD1 in the -Settings parameter, so all we need to do is turn our hashtable into a PSD1 file, which we’ll make ./PSScriptAnalyzerSettings.psd1. Notice we can merge the settings for both PSUseCompatibleSyntax and PSUseCompatibleCommands:

# PSScriptAnalyzerSettings.psd1
# Settings for PSScriptAnalyzer invocation.
@{
    Rules = @{
        PSUseCompatibleCommands = @{
            # Turns the rule on
            Enable = $true

            # Lists the PowerShell platforms we want to check compatibility with
            TargetProfiles = @(
                'win-8_x64_10.0.14393.0_6.1.3_x64_4.0.30319.42000_core',
                'win-8_x64_10.0.17763.0_5.1.17763.316_x64_4.0.30319.42000_framework',
                'win-8_x64_6.2.9200.0_3.0_x64_4.0.30319.42000_framework'
            )
        }
        PSUseCompatibleSyntax = @{
            # This turns the rule on (setting it to false will turn it off)
            Enable = $true

            # Simply list the targeted versions of PowerShell here
            TargetVersions = @(
                '3.0',
                '5.1',
                '6.2'
            )
        }
    }
}

Now we can run the PSScriptAnalyzer again on the script using the settings file:

Invoke-ScriptAnalyzer -Path ./archiveScript.ps1 -Settings ./PSScriptAnalyzerSettings.psd1

This gives the output:

RuleName                            Severity     ScriptName Line  Message
--------                            --------     ---------- ----  -------
PSUseCompatibleCommands             Warning      archiveScr 1     The parameter 'FullyQualifiedName' is not available for
                                                 ipt.ps1          command 'Import-Module' by default in PowerShell version
                                                                  '3.0' on platform 'Microsoft Windows Server 2012
                                                                  Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 9     The command 'Get-FileHash' is not available by default in
                                                 ipt.ps1          PowerShell version '3.0' on platform 'Microsoft Windows
                                                                  Server 2012 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 12    The parameter 'LeafBase' is not available for command
                                                 ipt.ps1          'Split-Path' by default in PowerShell version '3.0' on
                                                                  platform 'Microsoft Windows Server 2012 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 12    The parameter 'LeafBase' is not available for command
                                                 ipt.ps1          'Split-Path' by default in PowerShell version
                                                                  '5.1.17763.316' on platform 'Microsoft Windows Server
                                                                  2019 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 13    The command 'Compress-Archive' is not available by
                                                 ipt.ps1          default in PowerShell version '3.0' on platform
                                                                  'Microsoft Windows Server 2012 Datacenter'
PSUseCompatibleCommands             Warning      archiveScr 16    The parameter 'NoNewline' is not available for command
                                                 ipt.ps1          'Out-File' by default in PowerShell version '3.0' on
                                                                  platform 'Microsoft Windows Server 2012 Datacenter'
PSUseCompatibleSyntax               Warning      archiveScr 6     The constructor syntax
                                                 ipt.ps1          '[System.Collections.Generic.Dictionary[string,
                                                                  string]]::new()' is not available by default in
                                                                  PowerShell versions 3,4

Now we don’t depend on any variables anymore, and we have a separate specification of the analysis we want. Using this, you could run the check in a continuous integration environment, for example, to verify that changes to scripts don’t break compatibility.
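
As a rough sketch of what that could look like in CI (the file paths are the ones from this post, and the fail-on-anything policy is just an assumption; adjust to taste), a build step might simply fail whenever the analyzer reports something:

$diagnostics = Invoke-ScriptAnalyzer -Path ./archiveScript.ps1 -Settings ./PSScriptAnalyzerSettings.psd1
if ($diagnostics) {
    # Print the findings in the build log, then fail the step
    $diagnostics | Format-Table RuleName, Severity, Line, Message -AutoSize | Out-String | Write-Host
    throw "PSScriptAnalyzer reported $($diagnostics.Count) compatibility issue(s)."
}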

But what we really want is to know that PowerShell scripts stay compatible as you edit them. That’s what the settings file is building to, and also where it’s easiest to make the changes you need to make your script compatible. For that, we want to integrate with the VSCode PowerShell extension.

Integrating with VSCode for on-the-fly compatibility checking

As explained at the start of this post, the VSCode PowerShell extension has built-in support for PSScriptAnalyzer. In fact, as of version 1.12.0, the PowerShell extension ships with PSScriptAnalyzer 1.18, meaning you don’t need to do anything other than create a settings file to do compatibility analysis.

We already have our settings file ready to go from the last step, so all we have to do is point the PowerShell extension to the file in the VSCode settings.

You can open the settings with Ctrl+, (use Cmd instead of Ctrl on macOS). In the Settings view, we want PowerShell > Script Analysis: Settings Path. In the settings.json view this is "powershell.scriptAnalysis.settingsPath". Entering a relative path here will find a settings file in our workspace, so we just put ./PSScriptAnalyzerSettings.psd1:

VSCode settings GUI with PSScriptAnalyzer settings path configured to "./PSScriptAnalyzerSettings.psd1"

In the settings.json view this will look like:

"powershell.scriptAnalysis.settingsPath": "./PSScriptAnalyzerSettings.psd1"

Now, opening the script in VSCode we see “green squigglies” for compatibility warnings:

VSCode window containing script, with green squigglies underneath incompatible code

In the Problems pane, you’ll get a full description of all the incompatibilities:

VSCode problems pane, listing and describing identified compatibility issues

Let’s fix the syntax problem first. If you remember, PSScriptAnalyzer supplies a suggested correction to this problem. VSCode integrates with PSScriptAnalyzer’s suggested corrections and can apply them if you click on the lightbulb or with Ctrl+Space when the region is under the cursor:

VSCode suggesting New-Object instead of ::new() syntax

Applying this change, the script is now:

Import-Module -FullyQualifiedName @{ ModuleName = 'ArchiveHelper'; ModuleVersion = '1.1' }

$paths = Get-FoldersToArchive -RootPath 'C:\Documents\DocumentsToArchive'
$archivePath = '\\ArchiveServer\DocumentArchive'

$hashes = New-Object 'System.Collections.Generic.Dictionary[string,string]'
foreach ($path in $paths)
{
    $hash = (Get-FileHash -LiteralPath $path).Hash
    $hashes[$hash] = $path
    $name = Split-Path -LeafBase $path
    Compress-Archive -LiteralPath $path -DestinationPath (Join-Path $archivePath "$name-$hash.zip")
}

ConvertTo-Json $hashes | Out-File -LiteralPath (Join-Path $archivePath "catalog.json") -NoNewline

The other incompatibilities don’t have corrections; for now PSUseCompatibleCommands knows what commands are available on each platform, but not what to substitute with when a command isn’t available. So we just need to apply some PowerShell knowledge:

  • Instead of Import-Module -FullyQualifiedName @{...} we use Import-Module -Name ... -Version ...;
  • Instead of Get-FileHash, we’re going to need to use .NET directly and write a function;
  • Instead of Split-Path -LeafBase, we can use [System.IO.Path]::GetFileNameWithoutExtension();
  • Instead of Compress-Archive we’ll need to use more .NET methods in a function, and;
  • Instead of Out-File -NoNewline we can use New-Item -Value.

We end up with something like this (the specific implementation is unimportant, but we have something that will work in all versions):

Import-Module -Name ArchiveHelper -Version '1.1'

function CompatibleGetFileHash
{
    param(
        [string]
        $LiteralPath
    )

    try
    {
        $hashAlg = [System.Security.Cryptography.SHA256]::Create()
        $file = [System.IO.File]::Open($LiteralPath, 'Open', 'Read')
        $file.Position = 0
        $hashBytes = $hashAlg.ComputeHash($file)
        return [System.BitConverter]::ToString($hashBytes).Replace('-', '')
    }
    finally
    {
        $file.Dispose()
        $hashAlg.Dispose()
    }
}

function CompatibleCompressArchive
{
    param(
        [string]
        $LiteralPath,

        [string]
        $DestinationPath
    )

    if ($PSVersionTable.PSVersion.Major -le 3)
    {
        # PSUseCompatibleTypes identifies that [System.IO.Compression.ZipFile]
        # isn't available by default in PowerShell 3 and we have to do this.
        # We'll cover that rule in the next blog post.
        Add-Type -AssemblyName System.IO.Compression.FileSystem -ErrorAction Ignore
    }

    [System.IO.Compression.ZipFile]::CreateFromDirectory(
        $LiteralPath,
        $DestinationPath,
        'Optimal',
        <# includeBaseDirectory #> $true)
}

$paths = Get-FoldersToArchive -RootPath 'C:\Documents\DocumentsToArchive'
$archivePath = '\\ArchiveServer\DocumentArchive'

$hashes = New-Object 'System.Collections.Generic.Dictionary[string,string]'
foreach ($path in $paths)
{
    $hash = CompatibleGetFileHash -LiteralPath $path
    $hashes[$hash] = $path
    $name = [System.IO.Path]::GetFileNameWithoutExtension($path)
    CompatibleCompressArchive -LiteralPath $path -DestinationPath (Join-Path $archivePath "$name-$hash.zip")
}

$jsonStr = ConvertTo-Json $hashes
New-Item -Path (Join-Path $archivePath "catalog.json") -Value $jsonStr

You should notice that as you type, VSCode displays new analysis of what you’re writing and the green squigglies drop away. When we’re done we get a clean bill of health for script compatibility:

VSCode window with script and problems pane, with no green squigglies and no problems

This means you’ll now be able to use this script across all the PowerShell versions you need to target. Better, you now have a configuration in your workspace so as you write more scripts, there is continual checking for compatibility. And if your compatibility targets change, all you need to do is change your configuration file in one place to point to your desired targets, at which point you’ll get analysis for your updated target platforms.

Summary

Hopefully in this blog post you got some idea of the new compatibility rules that come with PSScriptAnalyzer 1.18.

We’ve covered how to set up and use the syntax compatibility checking rule, PSUseCompatibleSyntax, and the command checking rule, PSUseCompatibleCommands, both using a hashtable configuration and a settings PSD1 file.

We’ve also looked at using the compatibility rules with the PowerShell extension for VSCode, where they ship by default from version 1.12.0.

If you’ve got the latest release of the PowerShell extension for VSCode (1.12.1), you’ll be able to set your configuration file and instantly get compatibility checking.

In the next blog post, we’ll look at how these rules and PSUseCompatibleTypes (which checks whether .NET types and static methods are available on target platforms) can be used to help you write scripts that work cross-platform across Windows and Linux, using both Windows PowerShell and PowerShell Core.


Rob Holt

Software Engineer

PowerShell Team


Safely validating usernames with Amazon Cognito

Guest post by AWS Community Hero Larry Ogrodnek. Larry is an independent consultant focused on cloud architecture, DevOps, serverless, and general software development on AWS. He’s always ready to talk about AWS over coffee, and enjoys development and helping other developers.

How are users identified in your system? Username? Email? Is it important that they’re unique?

You may be surprised to learn, as I was, that in Amazon Cognito both usernames and emails are treated as case-sensitive. So “JohnSmith” is different than “johnsmith,” who is different than “jOhNSmiTh.” The same goes for email addresses—it’s even specified in the SMTP RFC that users “smith” and “Smith” may have different mailboxes. That is crazy!

I recently added custom signup validation for Amazon Cognito. In this post, let me walk you through the implementation.

The problem with uniqueness

Determining uniqueness is even more difficult than just dealing with case insensitivity. Like many of you, I’ve received emails based on Internationalized Domain Name homograph attacks. A site is registered for “example.com” but with Cyrillic character “a,” attempting to impersonate a legitimate site and collect information. This same type of attack is possible for user name registration. If I don’t check for this, someone may be able to impersonate another user on my site.

Do you have reservations?

Does my application have user-generated messages or content? Besides dealing with uniqueness, I may want to reserve certain names. For example, if I have user-editable information at user.myapp.com or myapp.com/user, what if someone registers “signup” or “faq” or “support”? What about “billing”?

It’s possible that a malicious user could impersonate my site and use it as part of an attack. Similar attacks are also possible if users have any kind of inbox or messaging. In addition to reserving usernames, I should also separate out user content to its own domain to avoid confusion. I remember GitHub reacting to something similar when it moved user pages from github.com to github.io in 2013.

James Bennett wrote about these issues in great detail in his excellent post, Let’s talk about usernames. He describes the types of validation performed in his django-registration application.

Integrating with Amazon Cognito

Okay, so now that you know a little bit more about this issue, how do I handle this with Amazon Cognito?

Well, I’m in luck, because Amazon Cognito lets me customize much of my authentication workflow with AWS Lambda triggers.

To add username or email validation, I can implement a pre-sign-up Lambda trigger, which lets me perform custom validation and accept or deny the registration request.

It’s important to note that I can’t modify the request. To perform any kind of case or name standardization (for example, forcing lower case), I have to do that on the client. I can only validate that it was done in my Lambda function. It would be handy if this was something available in the future.

To declare a sign-up as invalid, all I have to do is return an error from the Lambda function. In Python, this is as simple as raising an exception. If my validation passes, I just return the event, which already includes the fields that I need for a generic success response. Optionally, I can auto-verify some fields.

To enforce that my frontend is sending usernames standardized as lowercase, all I need is the following code:

def run(event, context):
  user = event['userName']
  if not user.islower():
    raise Exception("Username must be lowercase")
  return event

Adding unique constraints and reservations

I’ve extracted the validation checks from django-registration into a Python module named username-validator to make it easier to perform these types of uniqueness checks in Lambda:

pip install username-validator

In addition to detecting confusing homoglyphs, it also includes a standard set of reserved names like “www”, “admin”, “root”, “security”, “robots.txt”, and so on. You can provide your own additions for application-specific reservations, as well as perform individual checks.

To add this additional validation and some custom reservations, I update the function as follows:

from username_validator import UsernameValidator

MY_RESERVED = [
  "larry",
  "aws",
  "reinvent"
]

validator = UsernameValidator(additional_names=MY_RESERVED)

def run(event, context):
  user = event['userName']

  if not user.islower():
    raise Exception("Username must be lowercase")

  validator.validate_all(user)

  return event

Now, if I attach that Lambda function to the Amazon Cognito user pool as a pre-sign-up trigger and try to sign up for “aws”, I get a 400 error, along with some text that I could include in the signup form. Other attributes, including email (if used), are available under event['request']['userAttributes']. For example:

{ "request": {
    "userAttributes": {"name": "larry", "email": "larry@example.com" }
  }
}
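
If you’d rather wire the trigger up from a shell than the console, a minimal sketch looks like the following. The pool ID and function ARN are placeholders, this assumes the AWS CLI is configured, and note that update-user-pool replaces the pool’s existing Lambda configuration, so include any other triggers you already use:

# Hypothetical identifiers; substitute your own user pool ID and Lambda ARN
$userPoolId   = 'us-east-1_EXAMPLE'
$preSignUpArn = 'arn:aws:lambda:us-east-1:123456789012:function:validate-username'

aws cognito-idp update-user-pool --user-pool-id $userPoolId --lambda-config "PreSignUp=$preSignUpArn"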

What’s next?

I can validate other attributes in the same way. Or, I can add other custom validation by adding additional checks and raising an exception, with a custom message if it fails.

In this post, I covered why it’s important to think about identity and uniqueness, and demonstrated how to add additional validations to user signups in Amazon Cognito.

Now you know more about controlling signup validation with a custom Lambda function. I encourage you to check out the other user pool workflow customizations that are possible with Lambda triggers.

Protecting Against Ransomware

Original release date: April 11, 2019

What is ransomware?

Ransomware is a type of malware threat actors use to infect computers and encrypt computer files until a ransom is paid. (See Protecting Against Malicious Code for more information on malware.) After the initial infection, ransomware will attempt to spread to connected systems, including shared storage drives and other accessible computers.

If the threat actor’s ransom demands are not met (i.e., if the victim does not pay the ransom), the files or encrypted data will usually remain encrypted and unavailable to the victim. Even after a ransom has been paid to unlock encrypted files, threat actors will sometimes demand additional payments, delete a victim’s data, refuse to decrypt the data, or decline to provide a working decryption key to restore the victim’s access. The Federal Government does not support paying ransomware demands. (See the FBI’s ransomware article.)

How does ransomware work?

Ransomware identifies the drives on an infected system and begins to encrypt the files within each drive. Ransomware generally adds an extension to the encrypted files, such as .aaa, .micro, .encrypted, .ttt, .xyz, .zzz, .locky, .crypt, .cryptolocker, .vault, or .petya, to show that the files have been encrypted—the file extension used is unique to the ransomware type.

Once the ransomware has completed file encryption, it creates and displays a file or files containing instructions on how the victim can pay the ransom. If the victim pays the ransom, the threat actor may provide a cryptographic key that the victim can use to unlock the files, making them accessible.

How is ransomware delivered?

Ransomware is commonly delivered through phishing emails or via “drive-by downloads.” Phishing emails often appear as though they have been sent from a legitimate organization or someone known to the victim and entice the user to click on a malicious link or open a malicious attachment. A “drive-by download” is a program that is automatically downloaded from the internet without the user’s consent or often without their knowledge. It is possible the malicious code may run after download, without user interaction. After the malicious code has been run, the computer becomes infected with ransomware.

What can I do to protect my data and networks?

  • Back up your computer. Perform frequent backups of your system and other important files, and verify your backups regularly. If your computer becomes infected with ransomware, you can restore your system to its previous state using your backups.  
  • Store your backups separately. Best practice is to store your backups on a separate device that cannot be accessed from a network, such as on an external hard drive. Once the backup is completed, make sure to disconnect the external hard drive, or separate device from the network or computer. (See the Software Engineering Institute’s page on Ransomware).
  • Train your organization. Organizations should ensure that they provide cybersecurity awareness training to their personnel. Ideally, organizations will have regular, mandatory cybersecurity awareness training sessions to ensure their personnel are informed about current cybersecurity threats and threat actor techniques. To improve workforce awareness, organizations can test their personnel with phishing assessments that simulate real-world phishing emails.

What can I do to prevent ransomware infections?

  • Update and patch your computer. Ensure your applications and operating systems (OSs) have been updated with the latest patches. Vulnerable applications and OSs are the target of most ransomware attacks. (See Understanding Patches and Software Updates.)
  • Use caution with links and when entering website addresses. Be careful when clicking directly on links in emails, even if the sender appears to be someone you know. Attempt to independently verify website addresses (e.g., contact your organization’s helpdesk, search the internet for the sender organization’s website or the topic mentioned in the email). Pay attention to the website addresses you click on, as well as those you enter yourself. Malicious website addresses often appear almost identical to legitimate sites, often using a slight variation in spelling or a different domain (e.g., .com instead of .net). (See Using Caution with Email Attachments.)
  • Open email attachments with caution. Be wary of opening email attachments, even from senders you think you know, particularly when attachments are compressed files or ZIP files.
  • Keep your personal information safe. Check a website’s security to ensure the information you submit is encrypted before you provide it. (See Protecting Your Privacy.)
  • Verify email senders. If you are unsure whether or not an email is legitimate, try to verify the email’s legitimacy by contacting the sender directly. Do not click on any links in the email. If possible, use a previous (legitimate) email to ensure the contact information you have for the sender is authentic before you contact them.
  • Inform yourself. Keep yourself informed about recent cybersecurity threats and up to date on ransomware techniques. You can find information about known phishing attacks on the Anti-Phishing Working Group website. You may also want to sign up for CISA product notifications, which will alert you when a new Alert, Analysis Report, Bulletin, Current Activity, or Tip has been published.
  • Use and maintain preventative software programs. Install antivirus software, firewalls, and email filters—and keep them updated—to reduce malicious network traffic. (See Understanding Firewalls for Home and Small Office Use.)

How do I respond to a ransomware infection?

  • Isolate the infected system. Remove the infected system from all networks, and disable the computer’s wireless, Bluetooth, and any other potential networking capabilities. Ensure all shared and networked drives are disconnected whether wired or wireless.  
  • Turn off other computers and devices. Power-off and segregate (i.e., remove from the network) the infected computer(s). Power-off and segregate any other computers or devices that shared a network with the infected computer(s) that have not been fully encrypted by ransomware. If possible, collect and secure all infected and potentially infected computers and devices in a central location, making sure to clearly label any computers that have been encrypted. Powering-off and segregating infected computers and computers that have not been fully encrypted may allow for the recovery of partially encrypted files by specialists. (See Before You Connect a New Computer to the Internet for tips on how to make a computer more secure before you reconnect it to a network.)
  • Secure your backups. Ensure that your backup data is offline and secure. If possible, scan your backup data with an antivirus program to check that it is free of malware.

What do I do if my computer is infected with ransomware?

Author: CISA

This product is provided subject to this Notification and this Privacy & Use policy.

The Next Release of PowerShell – PowerShell 7

Recently, the PowerShell Team shipped the Generally Available (GA) release of PowerShell Core 6.2. Since that release, we’ve already begun work on the next iteration!

We’re calling the next release PowerShell 7, the reasons for which will be explained in this blog post.

Why 7 and not 6.3?

PowerShell Core usage has grown significantly in the last two years. In particular, the bulk of our growth has come from Linux usage, an encouraging statistic given our investment in making PowerShell viable cross-platform.

Chart: PowerShell Core usage over time, with most of the growth coming from Linux

However, we also can clearly see that our Windows usage has not been growing as significantly, surprising given that PowerShell was popularized on the Windows platform. We believe that this could be occurring because existing Windows PowerShell users have existing automation that is incompatible with PowerShell Core because of unsupported modules, assemblies, and APIs. These folks are unable to take advantage of PowerShell Core’s new features, increased performance, and bug fixes. To address this, we are renewing our efforts towards a full replacement of Windows PowerShell 5.1 with our next release.

This means that Windows PowerShell and PowerShell Core users will be able to use the same version of PowerShell to automate across Windows, Linux, and macOS, and that on Windows, PowerShell 7 users will have a very high level of compatibility with the Windows PowerShell modules they rely on today.

We’re also going to take the opportunity to simplify our references to PowerShell in documentation and product pages, dropping the “Core” in “PowerShell 7”. The PSEdition will still reflect Core, but this will only be a technical distinction in APIs and documentation where appropriate.

Note that the major version does not imply that we will be making significant breaking changes. While we took the opportunity to make some breaking changes in 6.0, many of those were compromises to ensure our compatibility on non-Windows platforms. Prior to that, Windows PowerShell historically updated its major version based on new versions of Windows rather than on Semantic Versioning.

.NET Core 3.0

PowerShell Core 6.1 brought compatibility with many built-in Windows PowerShell modules, and our estimation is that PowerShell 7 can attain compatibility with 90+% of the inbox Windows PowerShell modules by leveraging changes in .NET Core 3.0 that bring back many APIs required by modules built on .NET Framework, allowing them to work with the .NET Core runtime. For example, we expect Out-GridView to come back (for Windows only, though)!

A significant effort for PowerShell 7 is porting the PowerShell Core 6 code base to .NET Core 3.0 and also working with Windows partner teams to validate their modules against PowerShell 7.

Support Lifecycle Changes

Currently, PowerShell Core is under the Microsoft Modern Lifecycle Policy. This means that PowerShell Core 6 is fix-forward: we produce servicing releases for security fixes and critical bug fixes, and you must install the latest stable version within 6 months of a new minor version release.

In PowerShell 7, we will align more closely with the .NET Core support lifecycle, enabling PowerShell 7 to have both LTS (Long Term Servicing) and non-LTS releases.

We will still have monthly Preview releases to get feedback early.

When do I get PowerShell 7?

The first Preview release of PowerShell 7 will likely be in May. Be aware, however, that this depends on completing integration and validation of PowerShell with .NET Core 3.0.

Since PowerShell 7 is aligned with the .NET Core timeline, we expect the generally available (GA) release to be some time after the GA of .NET Core 3.0.

What about shipping in Windows?

We are planning on eventually shipping PowerShell 7 in Windows as a side-by-side feature with Windows PowerShell 5.1, but we still need to work out some of the details on how you will manage this inbox version of PowerShell 7.

And since the .NET Core timeline doesn’t align with the Windows timeline, we can’t say right now when it will show up in a future version of Windows 10 or Windows Server.

What other features will be in PowerShell 7?

We haven’t closed on our feature planning yet, but expect another blog post relatively soon with a roadmap of our current feature level plans for PowerShell 7.

Steve Lee
https://twitter.com/Steve_MSFT
Principal Engineering Manager
PowerShell Team


Docker, Amazon ECS, and Spot Fleets: A Great Fit Together

Guest post by AWS Container Hero Tung Nguyen. Tung is the president and founder of BoltOps, a consulting company focused on cloud infrastructure and software on AWS. He also enjoys writing for the BoltOps Nuts and Bolts blog.

EC2 Spot Instances allow me to use spare compute capacity at a steep discount. Using Amazon ECS with Spot Instances is probably one of the best ways to run my workloads on AWS. By using Spot Instances, I can save 50–90% on Amazon EC2 instances. You would think that folks would jump at a huge opportunity like a Black Friday sale. However, most folks either seem not to know about Spot Instances or are hesitant. This may be due to some fallacies about Spot.

Spot Fallacies

With the Spot model, AWS can remove instances at any time. It can be due to a maintenance upgrade; high demand for that instance type; older instance type; or for any reason whatsoever.

Hence the first fear and fallacy that people quickly point out with Spot:

What do you mean that the instance can be replaced at any time? Oh no, that must mean that within 20 minutes of launching the instance, it gets killed.

I felt the same way too initially. The actual Spot Instance Advisor website states:

The average frequency of interruption across all Regions and instance types is less than 5%.

From my own usage, I have seen instances run for weeks. Need proof? Here’s a screenshot from an instance in one of our production clusters.

If you’re wondering how many days that is….

Yes, that is 228 continuous days. You might not get these same long uptimes, but it disproves the fallacy that Spot Instances are usually interrupted within 20 minutes from launch.

Spot Fleets

With Spot Instances, I place a single request for a specific instance in a specific Availability Zone. With Spot Fleets, instead of requesting a single instance type, I can ask for a variety of instance types that meet my requirements. For many workloads, as long as the CPU and RAM are close enough, many instance types do just fine.

So, I can spread my instance bets across instance types and multiple zones with Spot Fleets. Using Spot Fleets makes the system dramatically more robust, on top of the already mentioned low interruption rate. Also, I can run an On-Demand cluster to provide additional safeguard capacity.

ECS and Spot Fleets: A Great Fit Together

This is one of my favorite ways to run workloads because it gives me a scalable system at a ridiculously low cost. The technologies are such a great fit together that one might think they were built for each other.

  1. Docker provides a consistent, standard binary format to deploy. If it works in one Docker environment, then it works in another. Containers can be pulled down in seconds, making them an excellent fit for Spot Instances, where containers might move around during an interruption.
  2. ECS provides a great ecosystem to run Docker containers. ECS supports a feature called container instance draining that allows me to tell ECS to relocate the Docker containers to other EC2 instances.
  3. Spot Instances fire off a two-minute warning signal letting me know when it’s about to terminate the instance.

These are the necessary pieces I need for building an ECS cluster on top of Spot Fleet. I use the two-minute warning to trigger ECS container instance draining, and ECS automatically moves containers to another instance in the fleet.
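
To make that concrete, here is a minimal sketch of an interruption watcher you could run on each container instance. It’s written in PowerShell purely for illustration (a shell script works just as well), assumes the AWS CLI is available on the instance, and uses placeholder values for the cluster name and container instance ARN; the CloudFormation template below wires this up in its own way, so treat this as an outline of the idea rather than the template’s implementation:

# Placeholders: supply these from your bootstrap/user-data logic
$clusterName          = 'development'
$containerInstanceArn = $env:CONTAINER_INSTANCE_ARN

while ($true) {
    try {
        # The instance metadata service only serves this path once an
        # interruption has been scheduled; before that, the call fails
        $action = Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/spot/instance-action'
        if ($action) {
            # Drain this container instance so ECS reschedules its tasks
            # onto other instances in the fleet before termination
            aws ecs update-container-instances-state `
                --cluster $clusterName `
                --container-instances $containerInstanceArn `
                --status DRAINING
            break
        }
    }
    catch {
        # No interruption notice yet; keep polling
    }
    Start-Sleep -Seconds 5
}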

Here’s a CloudFormation template that achieves this: ecs-ec2-spot-fleet. Because the focus is on understanding Spot Fleets, the VPC is designed to be simple.

The template specifies two instance types in the Spot Fleet: t3.small and t3.medium with 2 GB and 4 GB of RAM, respectively. The template weights the t3.medium twice as much as the t3.small. Essentially, the Spot Fleet TargetCapacity value equals the total RAM to provision for the ECS cluster. So if I specify 8, the Spot Fleet service might provision four t3.small instances or two t3.medium instances. The cluster adds up to at least 8 GB of RAM.

To launch the stack, I run the following command:

aws cloudformation create-stack --stack-name ecs-spot-demo --template-body file://ecs-spot-demo.yml --capabilities CAPABILITY_IAM

The CloudFormation stack launches container instances and registers them to an ECS cluster named development by default. I can change this with the EcsCluster parameter. For more information on the parameters, see the README and the template source.

When I deploy the application, the deploy tool creates the ECS cluster itself. Here are the Spot Instances in the EC2 console.

Deploy the demo app

After the Spot cluster is up, I can deploy a demo app on it. I wrote a tool called Ufo that is useful for these tasks:

  1. Build the Docker image.
  2. Register the ECS task definition.
  3. Register and deploy the ECS service.
  4. Create the load balancer.

Docker should be installed as a prerequisite. First, I create an ECR repo and set some variables:

ECR_REPO=$(aws ecr create-repository --repository-name demo/sinatra | jq -r '.repository.repositoryUri')
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values="demo vpc" | jq -r '.Vpcs[].VpcId')

Now I’m ready to clone the demo repo and deploy a sample app to ECS with ufo.

git clone https://github.com/tongueroo/demo-ufo.git demo
cd demo
ufo init --image $ECR_REPO --vpc-id $VPC_ID
ufo current --service demo-web
ufo ship # deploys to ECS on the Spot Fleet cluster

Here’s the ECS service running:

I then grab the Elastic Load Balancing endpoint from the console or with ufo ps.

$ ufo ps
Elb: develop-Elb-12LHJWU4TH3Q8-597605736.us-west-2.elb.amazonaws.com
$

Now I test with curl:

$ curl develop-Elb-12LHJWU4TH3Q8-597605736.us-west-2.elb.amazonaws.com
42

The application returns “42,” the meaning of life, successfully. That’s it! I now have an application running on ECS with Spot Fleet Instances.

Parting thoughts

One additional advantage of using Spot is that it encourages me to think about my architecture in a highly available manner. The Spot “constraints” ironically result in much better sleep at night as the system must be designed to be self-healing.

Hopefully, this post opens the world of running ECS on Spot Instances to you. It’s a core of part of the systems that BoltOps has been running on its own production system and for customers. I still get excited about the setup today. If you’re interested in Spot architectures, contact me at BoltOps.

One last note: Auto Scaling groups also support running multiple instance types and purchase options. Jeff mentions in his post that weight support is planned for a future release. That’s exciting, as it may streamline the usage of Spot with ECS even further.

Learn about AWS Services & Solutions – April AWS Online Tech Talks

AWS Tech Talks

Join us this April to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain

May 2, 2019 | 11:00 AM – 12:00 PM PT – How to Build an Application with Amazon Managed Blockchain – Learn how to build an application on Amazon Managed Blockchain with the help of demo applications and sample code.

Compute

April 29, 2019 | 1:00 PM – 2:00 PM PT – How to Optimize Amazon Elastic Block Store (EBS) for Higher Performance – Learn how to optimize performance and spend on your Amazon Elastic Block Store (EBS) volumes.

May 1, 2019 | 11:00 AM – 12:00 PM PT – Introducing New Amazon EC2 Instances Featuring AMD EPYC and AWS Graviton Processors – See how new Amazon EC2 instance offerings that feature AMD EPYC processors and AWS Graviton processors enable you to optimize performance and cost for your workloads.

Containers

April 23, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on AWS App Mesh – Learn how AWS App Mesh makes it easy to monitor and control communications for services running on AWS.

March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Databases

April 23, 2019 | 1:00 PM – 2:00 PM PT – Selecting the Right Database for Your Application – Learn how to develop a purpose-built strategy for databases, where you choose the right tool for the job.

April 25, 2019 | 9:00 AM – 10:00 AM PT – Mastering Amazon DynamoDB ACID Transactions: When and How to Use the New Transactional APIs – Learn how Amazon DynamoDB’s new transactional APIs simplify the developer experience of making coordinated, all-or-nothing changes to multiple items both within and across tables.

DevOps

April 24, 2019 | 9:00 AM – 10:00 AM PT – Running .NET applications with AWS Elastic Beanstalk Windows Server Platform V2 – Learn about the easiest way to get your .NET applications up and running on AWS Elastic Beanstalk.

Enterprise & Hybrid

April 30, 2019 | 11:00 AM – 12:00 PM PT – Business Case Teardown: Identify Your Real-World On-Premises and Projected AWS Costs – Discover tools and strategies to help you as you build your value-based business case.

IoT

April 30, 2019 | 9:00 AM – 10:00 AM PT – Building the Edge of Connected Home – Learn how AWS IoT edge services are enabling smarter products for the connected home.

Machine Learning

April 24, 2019 | 11:00 AM – 12:00 PM PT – Start Your Engines and Get Ready to Race in the AWS DeepRacer League – Learn more about reinforcement learning, how to build a model, and compete in the AWS DeepRacer League.

April 30, 2019 | 1:00 PM – 2:00 PM PT – Deploying Machine Learning Models in Production – Learn best practices for training and deploying machine learning models.

May 2, 2019 | 9:00 AM – 10:00 AM PT – Accelerate Machine Learning Projects with Hundreds of Algorithms and Models in AWS Marketplace – Learn how to use third party algorithms and model packages to accelerate machine learning projects and solve business problems.

Networking & Content Delivery

April 23, 2019 | 9:00 AM – 10:00 AM PT – Smart Tips on Application Load Balancers: Advanced Request Routing, Lambda as a Target, and User Authentication – Learn tips and tricks about important Application Load Balancers (ALBs) features that were recently launched.

Productivity & Business Solutions

April 29, 2019 | 11:00 AM – 12:00 PM PT – Learn How to Set up Business Calling and Voice Connector in Minutes with Amazon Chime – Learn how Amazon Chime Business Calling and Voice Connector can help you with your business communication needs.

May 1, 2019 | 1:00 PM – 2:00 PM PT – Bring Voice to Your Workplace – Learn how you can bring voice to your workplace with Alexa for Business.

Serverless

April 25, 2019 | 11:00 AM – 12:00 PM PT – Modernizing .NET Applications Using the Latest Features on AWS Development Tools for .NET – Get a deep dive and demonstration of the latest updates to the AWS SDK and tools for .NET to make development even easier, more powerful, and more productive.

May 1, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Improving Data Processing Workloads with AWS Step Functions’ Service Integrations – Learn how innovative customers like SkyWatch are coordinating AWS services using AWS Step Functions to improve productivity.

Storage

April 24, 2019 | 1:00 PM – 2:00 PM PT – Amazon S3 Glacier Deep Archive: The Cheapest Storage in the Cloud – See how Amazon S3 Glacier Deep Archive offers the lowest cost storage in the cloud, at prices significantly lower than storing and maintaining data in on-premises magnetic tape libraries or archiving data offsite.

The PowerShell Gallery is now more Accessible

Over the past few months, the team has been working hard to make the PowerShell Gallery as accessible as possible. This blog details why it matters and what work has been done.

Why making the PowerShell Gallery more accessible was a priority

Accessible products change lives and allow everyone to be included. Accessibility is also a major component of striving toward Microsoft’s mission to “Empower every person and every organization on the planet to achieve more.” Improvements in accessibility mean improvements in usability, which makes the experience better for everyone. While doing accessibility testing for the Gallery, for example, we found that it was confusing for users to distinguish between “deleting” and “unlisting” packages. Clearly naming this action in the UI makes the process of unlisting a package clearer for all package owners.

The steps taken to make the PowerShell Gallery more accessible

The first part of the process focused on bug generation and resolution. We used scanning technology to ensure that the Gallery’s alerts and helper texts were configured properly and were compatible with screen-reading technology. Specifically, we used Keros scanning, Microsoft’s premier accessibility tool, to identify accessibility issues, and then triaged and fixed the detected issues.

For the second part of the process, we undertook a scenario-focused accessibility study in which blind or visually impaired IT professionals worked through core scenarios for using the Gallery: finding packages, publishing packages, managing packages, and getting support. The majority of the scenarios focused on searching for packages, as we believe this is the primary way customers interact with the Gallery. After the study concluded, we reviewed the results and watched recordings of the participants navigating the scenarios. This allowed us to focus on our lowest-performing scenarios and address specific usability issues. After making these improvements, we underwent a review by accessibility experts to ensure high usability and accessibility.

Usability Improvements

  • Screen Reader Compatibility: Screen reader technologies make consuming web content accessible, so we undertook a thorough review and improvement pass to ensure that the Gallery provides accurate, consistent, and helpful information to screen readers. Some examples of areas we improved:
    • Accurate headers
    • Clearly labeled tables
    • Helpful tooltips
    • Labeled graph node points
  • Improved ARIA Tags: Accessible Rich Internet Applications (ARIA) is a specification that makes web content more accessible by passing helpful information to assistive technologies such as screen readers. We thoroughly reviewed and enhanced our ARIA tags to make sure they were as helpful as possible. One improvement we made, for example, was an ARIA description explaining how to use tags in the Gallery search bar.
  • Renamed UI elements to be more descriptive: Through our review we noticed we were generating confusion by labeling the unlist button as “delete”, and we worked to fix these types of issues.
  • Filters: We added operating system filters to make it easier to find compatible packages.
  • Results description: We made searching for packages more straightforward by displaying the total number of results and pages.
  • Page Scrolling: We made searching for packages easier by adding multi-page scrolling.

Reporting Issues

Our goal is to make the Gallery completely user-friendly. If you encounter any issues in the PowerShell Gallery that make it less accessible or usable, we would love to hear about it on our GitHub page. Please file an issue letting us know what we can do to make the Gallery even more accessible.


LiveFyre commenting will no longer be available on the PowerShell Gallery

This post was originally published on this site

Commenting on the PowerShell Gallery is provided by LiveFyre, a third-party comment system. LiveFyre is no longer supported by Adobe, so we are unable to service issues as they arise. We have received reports of authentication failing for Twitter and Microsoft AAD, and unfortunately we are unable to bring back those services. Because we cannot predict when more issues will occur, and we cannot fix them as they arise, we must deprecate the use of LiveFyre on the PowerShell Gallery. As of May 1st, 2019, LiveFyre commenting will no longer be available on the PowerShell Gallery. Unfortunately, we are unable to migrate comments off of LiveFyre, so comment history will be lost.

How will package consumers be able to get support?

The other existing channels for getting support and contacting package owners will still be available on the Gallery. The left pane of the package page is the best place to get support. If you are looking to contact the package owner, select “Contact Owners” on the package page. If you are looking to contact Gallery support, use the “Report” button. If the package owner has provided a link to their project site in their module manifest, that link is also available in the left pane and can be a good avenue for support. For more information on getting package support, please see our documentation.

Questions

We appreciate your understanding as we undergo this transition.
Please direct any questions to sysmith@microsoft.com.


New – Advanced Request Routing for AWS Application Load Balancers

This post was originally published on this site

AWS Application Load Balancers have been around since the summer of 2016! They support content-based routing, work well for serverless & container-based applications, and are highly scalable. Many AWS customers are using the existing host and path-based routing to power their HTTP and HTTPS applications, while also taking advantage of other ALB features such as port forwarding (great for container-based applications), health checks, service discovery, redirects, fixed responses, and built-in authentication.

Advanced Request Routing
The host-based routing feature allows you to write rules that use the Host header to route traffic to the desired target group. Today we are extending and generalizing this feature, giving you the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the query string, and the source IP address. We are also making the rules and conditions more powerful; rules can have multiple conditions (AND’ed together), and each condition can specify a match on multiple values (OR’ed).

You can use this new feature to simplify your application architecture, eliminate the need for a proxy fleet for routing, and to block unwanted traffic at the load balancer. Here are some use cases:

  • Separate bot/crawler traffic from human traffic.
  • Assign customers or groups of customers to cells (distinct target groups) and route traffic accordingly.
  • Implement A/B testing.
  • Perform canary or blue/green deployments.
  • Route traffic to microservice handlers based on method (PUTs to one target group and GETs to another, for example).
  • Implement access restrictions based on IP address or CDN.
  • Selectively route traffic to on-premises or in-cloud target groups.
  • Deliver different pages or user experiences to various types and categories of devices.

Using Advanced Request Routing
You can use this feature with your existing Application Load Balancers by simply editing your existing rules. I will start with a simple rule that returns a fixed, plain-text response (the examples in this post are for testing and illustrative purposes; I am sure that yours will be more practical and more interesting):

I can use curl to test it:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

I click Insert Rule to set up some advanced request routing:

Then I click Add condition and examine the options that are available to me:

I select Http header, and create a condition that looks for a cookie named user with value jeff. Then I create an action that returns a fixed response:

I click Save, wait a few seconds for the change to take effect, and then issue a pair of requests:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

$ curl --cookie "user=jeff" http://TestALB-156468799.elb.amazonaws.com
Hello Jeff
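
The same rule can also be created from the command line. Here is a minimal sketch using the elbv2 CLI, assuming a placeholder listener ARN, account ID, and rule priority; the wildcards around user=jeff are my own addition so that the match tolerates other cookies in the header (verify the exact condition syntax against the ELB documentation):

$ aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/TestALB/0123456789abcdef/0123456789abcdef \
    --priority 10 \
    --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"Cookie","Values":["*user=jeff*"]}}]' \
    --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Hello Jeff"}}]'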

I can also create a rule that matches one or more CIDR blocks of IP addresses:

$ curl http://TestALB-156468799.elb.amazonaws.com
Hello EC2 Instance

I can match on the query string (this is very useful for A/B testing):

$ curl http://TestALB-156468799.elb.amazonaws.com?ABTest=A
A/B test, option A selected 

I can also use a wildcard if all I care about is the presence of a particular field name:

I can match a standard or custom HTTP method. Here, I will invent one called READ:

$ curl --request READ http://TestALB-156468799.elb.amazonaws.com
Custom READ method invoked
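
For reference, here is a rough sketch of how the conditions behind the examples above look when written as JSON for the elbv2 API or CLI (for example, saved to a file and passed as --conditions file://conditions.json). The shapes follow my reading of the RuleCondition structure and the values are purely illustrative; conditions within a single rule are AND'ed, the Values inside a single condition are OR'ed, and a wildcard Value can be used when only the presence of a query-string key matters:

$ cat > conditions.json <<'EOF'
[
  { "Field": "source-ip",
    "SourceIpConfig": { "Values": ["10.0.0.0/16", "192.0.2.0/24"] } },
  { "Field": "query-string",
    "QueryStringConfig": { "Values": [ { "Key": "ABTest", "Value": "A" } ] } },
  { "Field": "http-request-method",
    "HttpRequestMethodConfig": { "Values": ["READ"] } }
]
EOF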

I have a lot of flexibility (not new, but definitely worth reviewing) when it comes to the actions:

Forward to routes the request to a target group (a set of EC2 instances, a Lambda function, or a list of IP addresses).

Redirect to generates a 301 (permanent) or 302 (found) response, and can also be used to switch between HTTP and HTTPS.

Return fixed response generates a static response with any desired response code, as I showed you earlier.

Authenticate uses Amazon Cognito or an OIDC provider to authenticate the request (applicable to HTTPS listeners only).
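
As a concrete illustration of the Redirect to action, here is a sketch of the action JSON for sending HTTP traffic to HTTPS while preserving the host, path, and query string (the #{host}, #{path}, and #{query} placeholders are the load balancer's substitution variables; treat the specific values as an example rather than a prescription). The file could then be passed as --actions file://actions.json:

$ cat > actions.json <<'EOF'
[
  { "Type": "redirect",
    "RedirectConfig": {
      "Protocol": "HTTPS",
      "Port": "443",
      "Host": "#{host}",
      "Path": "/#{path}",
      "Query": "#{query}",
      "StatusCode": "HTTP_301"
    } }
]
EOF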

Things to Know
Here are a few other things that you should know about this cool and powerful new feature:

Metrics – You can look at the Rule Evaluations and HTTP fixed response count CloudWatch metrics to learn more about activity related to your rules (learn more):

Programmatic Access – You can also create, modify, examine, and delete rules using the ALB API and CLI (CloudFormation support will be ready soon).
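
For example, a minimal sketch of rule management from the CLI might look like this (the <listener-arn> and <rule-arn> values are placeholders for the ARNs of your own listener and rules):

$ aws elbv2 describe-rules --listener-arn <listener-arn>
$ aws elbv2 modify-rule --rule-arn <rule-arn> \
    --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Updated response"}}]'
$ aws elbv2 delete-rule --rule-arn <rule-arn>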

Rule Matching – The rules are powered by string matching, so test well and double-check that your rules are functioning as intended. The matched_rule_priority and actions_executed fields in the ALB access logs can be helpful when debugging and testing (learn more).

Limits – Each ALB can have up to 100 rules, not including the defaults. Each rule can reference up to 5 values and can use up to 5 wildcards. The number of conditions is limited only by the number of unique values that are referenced.

Available Now
Advanced request routing is available now in all AWS regions at no extra charge (you pay the usual prices for the Application Load Balancer).

Jeff;