Category Archives: PowerShell

SecretManagement Module Preview Design Changes

This article is intended primarily for SecretManagement extension vault developers.
It covers design changes that an extension vault implementer must be aware of.

It has been about 6 months since we published a pre-release alpha version of SecretManagement.
Since that time we have received a bunch of great feedback from the community.
There have been a number of incremental changes and improvements (including changing the name of the module).
But it also became clear that the original vision and design suffered some shortcomings.
Consequently, we re-thought the design and have been experimenting with various changes.
For this preview release, we now have what we feel is a better and simpler design providing a consistent cross-platform experience.


Design Issues

One problem with the previous alpha release was that it depended on the Windows Credential Manager (CredMan) for the built-in local vault.
The built-in local vault is a secure local store that is always available, regardless of whether any extension vaults are registered.
CredMan is Windows platform only, and so the pre-release version of SecretManagement worked only on Windows.
The plan was to eventually find similar third party solutions for non-Windows platforms.
However it turns out that CredMan is pretty unique, and there are no equivalent solutions on non-Windows platforms.
In addition community members pointed out that CredMan only works for interactive log-in accounts, and this means SecretManagement pre-release would not work with Windows built-in accounts or over PowerShell remoting.

Another problem was the decision to allow extension vaults to be written either as a binary C# implementing type or a PowerShell module.
These two forms of an extension vault are quite different, and the code became complex trying to accommodate them both.
The result was that extension vaults written in the two forms would have different behavior.

An original design goal was to abstract both management of secrets and management of secret vaults.
But we found that the way we abstracted secret vaults was too restrictive to accommodate the wide variety of existing local and cloud based secure store solutions we saw.
With this new design we are focusing on abstracting the management of secrets.
The purpose of SecretManagement is to provide scripts a common way to access secrets from widely different secret store solutions.
So the new design leaves it to the individual vault solutions how they are installed, configured, and authenticated.

Built-In Local Vault

The built-in local vault served two purposes.
It provided an out-of-the-box secure local store, and also provided a way to store sensitive registration data for extension vaults.
But using CredMan as the local store limited SecretManagement to Windows platform, without any good similar solutions on other platforms.
Nevertheless, a CredMan based extension vault is still useful for many Windows platform scenarios, and so we now have a CredMan based extension vault example available in the GitHub repo.

The original design allowed extension vault implementations to store additional vault registry information securely in the built-in local vault.
But we never saw a case where this was needed.
Secure store solutions have their own way to authenticate a user and don’t need to store sensitive data.

Consequently, the built-in local vault has been removed from SecretManagement.
All storage mechanisms are now extension vaults only.
But SecretManagement without an extension vault is not very useful, so we are also publishing a preview release of a new cross-platform secure store extension vault.
You can read more about the new SecretStore Extension Vault in the section below.

We have also added the ability to designate a registered extension vault as a Default vault.
The default vault works like the built-in vault did.
If no vault is specified when adding a secret, it is added to the default vault.
Also the default vault is searched before any other registered vault, when retrieving secrets.
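
For example, the following sketch (the vault and secret names are illustrative) shows the default vault in action:

Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault

# No -Vault parameter needed: the secret is added to the default vault
Set-Secret -Name DbPassword -Secret 'MyPassword'

# The default vault is searched first when retrieving
Get-Secret -Name DbPassword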

Implementing Type

Support for an extension vault based on an implementing type has been removed.
This greatly simplifies SecretManagement and provides a more uniform development experience.

Now, extension vaults are just PowerShell modules that implement five required functions.
When an extension vault is registered as a module, it is verified to have the correct form and to implement the expected functions.
The extension vaults can be PowerShell script modules or binary modules, and the SecretManagement GitHub repo has examples of both forms.

Extension Vault User Interaction

Previously, an extension vault communicated to the user indirectly by passing results and error messages up to SecretManagement.
However, this greatly complicated writing extension vaults around existing secret store solutions on various platforms.
It was very difficult or impossible to support different forms of authentication.
For example, an extension vault couldn’t prompt the user directly for some action such as providing a password.

But now extension vaults are just PowerShell modules and their exposed functions (cmdlets) are called on the UI thread.
So each extension vault can now write to the pipeline, error or data streams, and can directly prompt users for actions.

This means an extension vault now has much more leeway in how it communicates to the user.
But it is also responsible for ensuring a consistent user experience with other vaults.
An extension vault should take care to write only expected data to the pipeline, and to write error data only for abnormal conditions specific to the vault implementation.
Common error conditions, such as a secret not being found, should be left to SecretManagement.

Extension Vault Module Format

Now that SecretManagement calls extension vault functions on the UI thread, there have been some changes in how an extension module resides on file.
Each extension vault module implements the same five functions.
SecretManagement needs to disambiguate the function calls, and also ensure that these functions do not appear in the user command space.

To do this, an extension vault module must conform to a specific file structure.
The following is an example of the required file structure for an extension vault named TestVault:
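
    TestVault
    ├── TestVault.psd1
    ├── TestStoreImplementation.dll
    └── TestVault.Extension
        ├── TestVault.Extension.psd1
        └── TestVault.Extension.psm1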


The TestVault module contains a normal TestVault.psd1 PowerShell module manifest file.
It also has an optional TestStoreImplementation.dll binary that implements the actual store.
The manifest file looks like this:

    @{
        ModuleVersion = '1.0'
        RootModule = '.\TestStoreImplementation.dll'
        NestedModules = @('.\TestVault.Extension')
        CmdletsToExport = @('Set-TestStoreConfiguration','Get-TestStoreConfiguration')
    }

The manifest file exports two optional cmdlets used to configure the store.
But the only required field is NestedModules, which tells PowerShell to load the TestVault.Extension submodule as a nested module of TestVault.

It is the TestVault.Extension module that contains the required function implementations.
It must be named with the parent module name plus the .Extension part appended to it.
The TestVault.Extension hides the SecretManagement required functions from the user, and also uniquely qualifies the functions for this specific extension vault module.
The TestVault.Extension sub module contains the expected .psd1 and .psm1 files.
The manifest file looks like:

    @{
        ModuleVersion = '1.0'
        RootModule = '.\TestVault.Extension.psm1'
        RequiredAssemblies = '..\TestStoreImplementation.dll'
        FunctionsToExport = @('Set-Secret','Get-Secret','Remove-Secret','Get-SecretInfo','Test-SecretVault')
    }

The TestVault.Extension module is dependent on the TestStoreImplementation.dll binary file, which implements the actual store.
But this is optional, and a simpler .Extension module implementation could implement everything within PowerShell script.
One example of this might be a cloud based extension vault that is accessed through PowerShell cmdlets or REST APIs.
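
For illustration, a minimal script-only TestVault.Extension.psm1 might look like the following sketch, where an in-memory hashtable stands in for a real secure store (the function signatures follow this preview's conventions and may evolve):

# WARNING: illustrative only; a real extension vault must store secrets securely
$script:store = @{}

function Set-Secret {
    param ($Name, $Secret, $AdditionalParameters)
    $script:store[$Name] = $Secret
    return $true
}

function Get-Secret {
    param ($Name, $AdditionalParameters)
    return $script:store[$Name]
}

function Remove-Secret {
    param ($Name, $AdditionalParameters)
    $script:store.Remove($Name)
    return $true
}

function Get-SecretInfo {
    param ($Filter, $AdditionalParameters)
    return $script:store.Keys -like $Filter
}

function Test-SecretVault {
    param ($AdditionalParameters)
    return $true
}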

For more extension vault module examples, see the SecretManagement GitHub repo.

SecretStore Extension Vault

The Microsoft.PowerShell.SecretStore extension vault module is a secure local store for SecretManagement.
It is open source and currently available as a preview version on PowerShell Gallery.
SecretStore is based on .NET Core cryptographic APIs, and works on all PowerShell supported platforms.
Secret data is stored at rest in encrypted form on the file system and in memory, and is only decrypted when returned in response to a user request.
SecretStore works within the scope of the current user, and is configurable.

By default SecretStore is configured to require a password, as this provides the strongest security.
When a password is provided, it remains valid in the current PowerShell session until the PasswordTimeout time elapses.
When the password becomes invalid, the store can be configured either to prompt the user again or to throw an exception.
The user can provide the password through an interactive prompt or with the Unlock-SecretStore cmdlet.
Unlock-SecretStore is intended for automation scenarios where user interaction is not possible.
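
For example, an unattended script might unlock the store before reading secrets (a sketch; the path variable is hypothetical, and the password would normally come from a secured source):

# A SecureString password exported earlier with Export-CliXml
$password = Import-CliXml -Path $securePasswordPath
Unlock-SecretStore -Password $password
Get-Secret -Name CIDeployToken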

PS C:\> Get-SecretStoreConfiguration

      Scope PasswordRequired PasswordTimeout DoNotPrompt
      ----- ---------------- --------------- -----------
CurrentUser             True             900       False

SecretStore can be configured to not require a password, but this is less secure.
Secrets are still encrypted on file and in memory.
But the encryption key is stored on file and protected only by the platform file system.
It is strongly encouraged that SecretStore be configured to require a password.

The following cmdlets are provided to manage SecretStore:

  • Get-SecretStoreConfiguration
  • Set-SecretStoreConfiguration
  • Unlock-SecretStore
  • Update-SecretStorePassword
  • Reset-SecretStore
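
For example, to adjust the configuration shown above (a sketch; the parameter name mirrors the configuration property, but check Get-Help Set-SecretStoreConfiguration for the exact parameters in your version):

Set-SecretStoreConfiguration -PasswordTimeout 3600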

Paul Higinbotham
PowerShell Team

SecretManagement Preview 3

We are excited to announce two modules are now available on the PowerShell Gallery:

  • Microsoft.PowerShell.SecretManagement (preview 3)
  • Microsoft.PowerShell.SecretStore (preview 1)

To install the modules, and register the SecretStore vault, open any PowerShell console and run:

Install-Module Microsoft.PowerShell.SecretManagement -AllowPrerelease
Install-Module Microsoft.PowerShell.SecretStore -AllowPrerelease
Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault

NOTE: This version of the module has a major breaking change: if you have used past preview versions of the SecretManagement module, secrets previously stored will not be accessible in this version. Please migrate your secrets before updating. Your old secrets will not be removed by updating, so they can still be accessed by explicitly importing an earlier version of the module.

This blog will explain what to expect from these modules, how to try them out, and what is coming next.

Re-Introducing SecretManagement

The SecretManagement module helps users manage secrets by providing a common set of cmdlets to interface with secrets across vaults. This module supports an extensible model where local and remote vaults can be registered and unregistered for use in accessing and retrieving secrets. The module provides the following cmdlets for accessing secrets and managing secret vaults:

  • Get-Secret
  • Get-SecretInfo
  • Get-SecretVault
  • Register-SecretVault
  • Remove-Secret
  • Set-Secret
  • Test-SecretVault
  • Unregister-SecretVault
SecretManagement is valuable in heterogeneous environments where you may want to separate the specifics of the vault from a common script which needs secrets. SecretManagement is also a convenience feature that allows users to simplify their interactions with various vaults by only needing to learn a single set of cmdlets.

Some key scenarios we have heard from PowerShell users are:

  • Sharing a script across my org (or Open Source) without knowing the platform/local vault of all the users
  • Running my deployment script in local, test and production with the change of only a single parameter (-Vault)
  • Changing the backend of the authentication method to meet specific security or organizational needs without needing to update all my scripts

SecretManagement Preview 3

Since our first two preview releases we have done significant investigation to maximize the usability, supportability, and extensibility of the module. Through this process we have re-designed the module in a couple of impactful ways:

  • Separated the SecretManagement module from a built-in default vault
  • Simplified the experience for authoring vaults to give more flexibility to authors–for more information on that check out this vault-author focused blog post

Separating SecretManagement from a built-in vault

In this release we have separated the interface for accessing secrets and registering vaults from any vault implementation. We made this decision based on feedback that there are trade-offs between security, usability, and specificity for any vault that would be built into the SecretManagement module. We opted for a configurable model that allows the user to install and set as default the vault that best matches their requirements.

Consequently, vault registration metadata is stored on the file system with the expectation that it is not sensitive data. As a part of this design we leave secure vault authentication up to the vault extensions. While some vault implementations may take advantage of user context, others may require interactive authentication from the user (or use another method).

We recognize that the value of the SecretManagement interface comes from the available underlying vaults. In other words, if there are no vault extensions to register, the module is useless. With this in mind, we wrote (and published a preview of) a local, cross-platform vault called SecretStore. We hope this vault not only proves useful for SecretManagement users but also serves as an example for extension vault authors looking to build off of existing vaults. We have also provided an example implementation of a Windows Credential Manager vault in our GitHub repository.

SecretStore Preview 1

SecretStore is a cross-platform local extension vault which is available as a preview on the PowerShell Gallery. We designed this vault as a best attempt at creating a vault that is available wherever PowerShell is, is usable in popular PowerShell scenarios (like automation and remoting), and utilizes common security practices. This vault does have security limitations, and we recommend exploring alternative vaults if your data is highly sensitive.

Features of SecretStore

The SecretStore vault stores secrets locally on file for the current user, and uses .NET Core cryptographic APIs to encrypt file contents. This extension vault is configurable and works on all supported PowerShell platforms on Windows, Linux, and macOS. The following cmdlets are provided to manage SecretStore:

  • Get-SecretStoreConfiguration
  • Set-SecretStoreConfiguration
  • Unlock-SecretStore
  • Update-SecretStorePassword
  • Reset-SecretStore

For more information on the design of the SecretStore please refer to this design document.


Getting started with SecretStore and SecretManagement

After you have installed the modules, open any PowerShell console to register the vault and create your first secret.

# First register the vault; the Name parameter is a friendly name and can be anything you choose
PS C:\> Register-SecretVault -Name SecretStore -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault
# Now you can create a secret; you will also need to provide a password for the SecretStore vault
PS C:\> Set-Secret -Name TestSecret -Secret "TestSecret"
Vault SecretStore requires a password.
Enter password:
# Run Get-Secret to retrieve the secret; using the -AsPlainText switch returns it as a readable string
PS C:\> Get-Secret -Name TestSecret -AsPlainText
# To see metadata about all of your secrets you can run Get-SecretInfo
PS C:\> Get-SecretInfo

Name                       Type VaultName
----                       ---- ---------
TestSecret               String SecretStore
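
A script can then consume the secret without embedding it in plain text (a sketch; the URL and header usage are illustrative):

PS C:\> $token = Get-Secret -Name TestSecret -AsPlainText
PS C:\> Invoke-RestMethod -Uri 'https://example.com/api' -Headers @{ Authorization = "Bearer $token" }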


What’s Next

Based on user feedback we plan to move towards a General Availability (GA) release for both modules. Through this time, we hope to work with vault extension authors to build out the vault ecosystem. The timing of these releases is highly dependent on feedback from this release.

Feedback and Support

To get support or give feedback on the SecretManagement module please open an issue in the SecretManagement repository. To get support or give feedback on the SecretStore module please open an issue in the SecretStore repository.


Sydney Smith
PowerShell Team



PowerShell 7.1 Preview 6

Today, we are releasing the sixth preview of the PowerShell 7.1 release!

Later in this blog post I'll provide details on some important changes you should be aware of as you start trying out this preview.

PowerShell 7.1 Roadmap update

We are planning the last preview release around the first week of September.
This will allow us to get in the last of our significant code changes before we start minimizing risk (and code churn) for our Release Candidate.

We expect our first Release Candidate in the second half of September.
Release Candidates are “go-live” meaning they are supported in production.
If there is a critical security or blocking issue, we will provide an updated servicing release for a
Release Candidate.
We would like users to start deploying the Release Candidate so that we can address any major issues
as we head towards a General Availability release at the end of this year.

Unlike preview releases, Release Candidates will not have Experimental Features enabled by default.
Some Experimental Features will be taken out of experimental state and their designs will be considered
final while others will stay in experimental state and will need to be explicitly enabled to continue use.

As a reminder, Experimental Features are not considered design complete.
This means that the design may change based on real world usage feedback and such changes are not
considered breaking changes.
Production code should not depend on experimental features.

Noteworthy Changes

There are some changes in PowerShell 7.1 Preview 6 that are worth going into in detail, and some of them are breaking changes.

Rename -FromUnixTime to -UnixTimeSeconds on Get-Date to allow Unix time input

Unix time is the number of seconds since January 1st, 1970 at 00:00:00 UTC.
In Preview 2 of PowerShell 7.1, we merged a community submitted pull request to add -FromUnixTime to the Get-Date cmdlet.
However, it was discovered later that the prefix From was inconsistent with existing parameters on that cmdlet.
So a change was accepted to simply call it -UnixTimeSeconds, which accepts a value in Unix time, similar to how
-Date accepts a value in date-time format.

Get-Date -UnixTimeSeconds 1597615564
Sunday, August 16, 2020 3:06:04 PM
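
Going the other way, the -UFormat '%s' format specifier emits a date as Unix time, so the two can round-trip (a quick sketch):

$date = Get-Date -UnixTimeSeconds 1597615564
Get-Date -Date $date -UFormat %s   # 1597615564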

This feature was added by community member aetos382!

Make $ErrorActionPreference not affect stderr output of native commands

Native commands support three streams: stdin, stdout, and stderr.
stdin is for sending input to a native command.
stdout is for any output from a native command you might want to pass further down the pipeline.
stderr (despite “error” being in the name) is used for all other output.

stderr can include error messages, but also text content equivalent to PowerShell verbose, debug, and information streams
as well as progress information or help text.

The de facto standard way to determine whether a native command has succeeded or failed is to look at its exit code
(in PowerShell, this is exposed via the $LASTEXITCODE automatic variable).
Since stderr typically contains non-error information, it is not a reliable way to detect errors from native commands.

Text output from stderr is wrapped in ErrorRecords tagged to indicate that they come from stderr.
Prior to this change, because these are ErrorRecord instances, they would get added to the $Error automatic variable,
which is a running log of errors.
If you set $ErrorActionPreference to a value that affects execution (like Stop or Inquire), then any text written
to stderr could impact your script's execution.

With this change, stderr is just treated like an information stream (although it is still stream number 2 for redirection) and
is not affected by $ErrorActionPreference. This also means stderr output does not show up in $Error, nor does it
cause $? to be $false to indicate that the last command failed.
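
In practice, this means scripts should check the exit code explicitly; a quick sketch (git is just a stand-in for any native command):

$ErrorActionPreference = 'Stop'
# Text the native command writes to stderr no longer stops the script
git fetch
if ($LASTEXITCODE -ne 0) {
    throw "git fetch failed with exit code $LASTEXITCODE"
}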

There is an existing RFC to allow script execution to be impacted
by native command exit codes, which we hope to include in a future version of PowerShell.

Allow explicitly specified named parameter to supersede the same one from hashtable splatting

Splatting is a PowerShell feature that allows passing parameters to commands as a hashtable.
For example, if you have a script that calls the same or different cmdlets multiple times where the parameters and their
values are the same, then instead of typing it multiple times you can define it once and pass the same hashtable
to each cmdlet.
If a parameter or value changes, you would then just update the hashtable.

$myParams = @{
  Uri = ''
  Method = 'Get'
  FollowRelLink = $true
}

Invoke-RestMethod @myParams

Note: The above example shows how to pass the -FollowRelLink switch using splatting

In some scripts, you may want to override one of the values in the hashtable for a specific instance.
Previously, you would get an error that the parameter was already specified:

Invoke-RestMethod @myParams -Uri
Invoke-RestMethod: Cannot bind parameter because parameter 'Uri' is specified more than once. To provide multiple values to parameters that can accept multiple values, use the array syntax. For example, "-parameter value1,value2,value3".

With this change, the above example now works as one would expect in that the -Uri parameter value will be the one explicitly
specified in the command line and not the one in the hashtable.
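
To make the behavior concrete, here is a self-contained sketch (the URLs are placeholders):

$myParams = @{
    Uri    = 'https://example.com/api/a'
    Method = 'Get'
}
# The explicit -Uri wins over the one supplied via the hashtable
Invoke-RestMethod @myParams -Uri 'https://example.com/api/b'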

Experimental Feature: PSManageBreakpointsInRunspace

PowerShell has powerful built-in scripting debugging capabilities.
However, previously the debugging cmdlets would only allow you to manage breakpoints in the current PowerShell session (called a runspace).
Community member Kirk Munro added the ability to specify a different runspace to the breakpoint management cmdlets!

$job = Start-Job { $pid; Start-Sleep -Seconds 5; Write-Verbose "Hello" }

# wait until the job process starts
while ($job.ChildJobs[0].Runspace.RunspaceStateInfo.State -ne 'Opened') {
    Start-Sleep -Milliseconds 50
}

Set-PSBreakpoint -Command Write-Verbose -Action { break } -Runspace $job.ChildJobs[0].Runspace

# wait until the job is waiting at the breakpoint
while ($job.State -ne 'AtBreakpoint') {
    Start-Sleep -Seconds 1
}
# get the job process id to debug it later
$jobpid = $job | Receive-Job
$job | Get-Job

Id Name PSJobTypeName State        HasMoreData Location  Command
-- ---- ------------- -----        ----------- --------  -------
7  Job7 BackgroundJob AtBreakpoint True        localhost Start-Sleep -Seconds 5;…

# Connect to the running job process
Enter-PSHostProcess -id $jobpid

# Find the runspace where the running script is stopped at the breakpoint and attach debugger to it
Get-Runspace | Where-Object { $_.Debugger.InBreakpoint -eq $true } | Debug-Runspace
Debugging Runspace: RemoteHost
To end the debugging session type the 'Detach' command at the debugger prompt, or type 'Ctrl+C' otherwise.

Hit Command breakpoint on 'Write-Verbose'

At line:1 char:32
+  $pid; Start-Sleep -Seconds 5; Write-Verbose "Hello"
+                                ~~~~~~~~~~~~~~~~~~~~~

Here you see that execution stopped exactly where we set the breakpoint.

Note: As the script is stopped at the breakpoint, it will wait indefinitely for a debugger to be attached

Built-in Pager for Get-Help

As part of additional work to improve the PowerShell help experience (more on this later!), we wrote our own pager to provide a consistent
experience across platforms.

This is a work in progress, so it is currently a bit limited in functionality.
Contributions and issues are certainly welcome in the Pager repo.

You can try it out using the new -Paged switch with Get-Help:

Get-Help -Paged Get-Command

PSReadLine will leverage this pager for its new “Dynamic Help” feature, where you can use a keyboard combination to get help while in the middle
of typing out a command line, and then return to exactly where you left off!

New -MaskInput switch for Read-Host

This was a community requested feature to hide the input the user types.
Normally, you would use -AsSecureString for sensitive information that you would not want echoed onto the screen or end up in logs/transcripts.
However, there are scenarios where you don’t want to deal with a SecureString and prefer to get plain text.
This switch will still mask the input the user types, but returns a string.

$secret = Read-Host -MaskInput -Prompt "Tell me a secret"
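
For comparison, both switches mask what the user types, but they return different types (a quick sketch):

$secure = Read-Host -AsSecureString -Prompt "Password"    # returns a SecureString
$plain  = Read-Host -MaskInput -Prompt "Tell me a secret" # returns a String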


There are, of course, many bug fixes and smaller improvements not mentioned in this blog post provided by both the community as well
as the PowerShell team.
We have a few more important changes to get into the PowerShell 7.1 release before we hit our first Release Candidate in September.

Early this month, the majority of PowerShell usage (not including Windows PowerShell) switched from PowerShell Core 6.2 to PowerShell 7.0!
We hit a high point of 4.5 million startups of pwsh in a single day!
See more data at our public PowerBI dashboard.
Reminder that PowerShell Core 6.2 support ends next month!

Semantic Highlighting in the PowerShell Preview extension for Visual Studio Code

Hi everyone!
I’m Justin and I am currently an intern on the PowerShell team.
One of my projects was to add PowerShell semantic highlighting support in VS Code, allowing for more accurate highlighting in the editor.
I’m excited to share that the first iteration has been released.

Getting started

Great news!
You don’t have to do anything to get this feature except for making sure you have at least the
v2020.7.0 version of the
PowerShell Preview extension for Visual Studio Code.


You have to use a theme that supports Semantic Highlighting.
All the inbox themes support it, as does the PowerShell ISE theme, but it's not guaranteed that every theme will.
If you don't see any difference in highlighting,
the theme you're using probably doesn't support it.
Consider opening an issue asking the author of the theme you're using to support Semantic Highlighting.

For theme authors: Supporting Semantic Highlighting

If you are a theme author, make sure to add "semanticHighlighting": true to the
theme.json file of your VS Code theme.

For a more complete guide to supporting Semantic Highlighting in your theme,
please see the Semantic Highlight Guide in the VS Code documentation.
The rest of this blog post will discuss the shortcomings of the old syntax
highlighting mechanism and how semantic highlighting addresses those issues.

Syntax Highlighting

Currently, the syntax highlighting support for PowerShell scripts in VS Code leverages
TextMate grammars, which are mappings
of regular expressions to tokens. For instance, to identify control keywords, something like
the following would be used:

    {
        name = 'keyword.control.untitled';
        match = '\b(if|while|for|return)\b';
    }

However, there are some limitations with regular expressions and their ability to recognize different syntax patterns.
Since TextMate grammars rely on these expressions,
there are many complex and context-dependent tokens these grammars are unable to parse,
leading to inconsistent or incorrect highlighting.
Just skim through the issues in the
EditorSyntax repo,
our TextMate grammar.

Here are a few examples where syntax highlighting fails in
tokenizing a PowerShell script.

Syntax Highlighting Bugs

Semantic Highlighting

To solve those cases
(and many other ones)
we use the PowerShell tokenizer,
which describes the tokens more accurately than regular expressions can,
while also always being up-to-date with the language grammar.
The only problem is that the tokens generated by the PowerShell tokenizer do not align perfectly with the semantic token types predefined by VS Code.
The semantic token types provided by VS Code are:

  • namespace
  • type, class, enum, interface, struct, typeParameter
  • parameter, variable, property, enumMember, event
  • function, member, macro
  • label
  • comment, string, keyword, number, regexp, operator

On the other hand, there are over 100
PowerShell token kinds
and also many
token flags
that can modify those types.

The main task (aside from setting up a semantic tokenization
handler) was to create a mapping from PowerShell tokens to VS Code semantic token types. The
result of enabling semantic highlighting can be seen below.

Semantic Highlighting Examples

If we compare the semantic highlighting to the highlighting in PowerShell ISE, we can see they are quite similar (in tokenization, not color).

PowerShell ISE Screenshot

Next Steps

Although semantic highlighting does a better job than syntax highlighting in identifying tokens,
there remain some cases that can still be improved at the PowerShell layer.

In Example 5, for instance, while the enum does have better highlighting, the name and members
of the enum are highlighted identically. This occurs because PowerShell tokenizes them
all the same way (as identifiers with a token flag denoting that they are member names), meaning that the semantic highlighting has no way to differentiate them.

How to Provide Feedback

If you experience any issues or have comments on improvement, please raise an issue in
PowerShell/vscode-powershell. Since this was
just released, any feedback will be greatly appreciated.

Justin Chen
PowerShell Team

PSScriptAnalyzer 1.19.1

PSScriptAnalyzer 1.19.1 is now available on the PowerShell Gallery. This minor update fixes a few user-reported bugs and introduces a new rule (disabled by default) that warns against using double quotes for constant strings. To install this version of the module, open any PowerShell console and run:

Install-Module PSScriptAnalyzer -Force -Repository PSGallery

New Rule

AvoidUsingDoubleQuotesForConstantString (disabled by default)

Severity Level: Information

Rule Description

When the value of a string is constant (i.e., nothing is interpolated into it, unlike e.g. "$PID-$(hostname)"), single quotes should be used to express its constant nature. This not only makes the intent clearer (the string is a constant), it also makes it easier to use special characters such as $ within the string without having to escape them. There are exceptions where double-quoted strings are more readable, though: for example, when the string value itself has to contain a single quote (which in a single-quoted string would have to be escaped by doubling it), or when escape sequences such as `n are already being used. The rule does not warn in those cases, as it is up to the author to decide what is more readable.

Rule Example


# Violates the rule (double quotes around a constant string):
$constantValue = "I Love PowerShell"

# Preferred:
$constantValue = 'I Love PowerShell'
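
Because the rule is disabled by default, it must be enabled explicitly, for example via a settings hashtable (a sketch using the module's settings format):

Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -Settings @{
    IncludeDefaultRules = $true
    Rules = @{
        PSAvoidUsingDoubleQuotesForConstantString = @{ Enable = $true }
    }
}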

Bug Fixes

  • Fixed a bug that caused incorrect formatting in hashtables
  • Fixed an ArgumentException when the same variable name is used in 2 different sessions
  • Fixed bugs with UseConsistentWhitespace
  • Fixed an issue where the CommandInfoParameters property throws due to a runspace-affinity problem in the PowerShell engine
  • Fixed a token check with new lines
  • Fixed a CheckParameter bug when using an interpolated string

For the full list of changes please refer to the change log.

What’s next for PSScriptAnalyzer

We are currently working on a major rearchitecture of PSScriptAnalyzer. We are calling this project PSScriptAnalyzer 2.0 and hope to have previews available in fall 2020.

To understand the goals of the project and track our progress please refer to this roadmap. Within our GitHub repository we are using the issue tag “Consider-2.0” to track bugs and features that we are hoping to resolve in the release. As we get closer to releasing versions of this rearchitecture we will shift to a “Resolved-2.0” tag to mark issues that are expected to be resolved by the time PSScriptAnalyzer 2.0 becomes generally available.

Now is a great time to give us feedback around the priorities for improvements and features/use cases of the module as we are still in the early stages of development. If you use PSScriptAnalyzer to build PowerShell tools, we would especially love your feedback on our API design.

Feedback and Support

The best place to get support on issues you may encounter or to give feedback/feature requests is through our GitHub Repository.

Sydney Smith
PowerShell Team

PowerShellGet 3.0 Preview 6 Release

Since our last blog post we have released two preview versions (previews 4 and 6) of the PowerShellGet 3.0 module. These releases introduced publish functionality and repository wildcard search, and fixed error handling in Find-PSResource when a repository is not accessible. Note that preview 5 was unlisted due to a packaging error discovered post-publish.

To install the latest version of the module, open any PowerShell console and run:

Install-Module PowerShellGet -Force -AllowPrerelease -Repository PSGallery

Highlights of the releases

Preview 4 (3.0.0-beta4) Highlights

New Feature

Wildcard search for the -Repository parameter in Find-PSResource. This allows the user to return results from all registered PSRepositories instead of just their prioritized repository. To use this feature add -Repository '*' to your call to Find-PSResource.
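
For example (the module name is illustrative):

Find-PSResource -Name PSReadLine -Repository '*'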

Bug Fix

Fixed poor error handling for cases when a repository is not accessible in Find-PSResource.

Preview 6 (3.0.0-beta6) Highlight

New Feature

The cmdlet Publish-PSResource was introduced which allows users to publish PowerShell resources to any registered PSRepository.
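
For example, publishing a local module folder to a registered repository might look like the following sketch (the path and key are placeholders, and parameter casing may vary between previews):

Publish-PSResource -Path .\MyModule -Repository PSGallery -APIKey $myApiKey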

What’s next

We have 3 planned upcoming releases for the module:

  • The Preview 7 release will focus on update functionality, along with several bug fixes that have been reported by users through our preview releases.
  • The Release Candidate (RC) release will resolve any remaining bugs not resolved in our Preview 6 release.
  • The 3.0 General Availability (GA) release will be the same as the RC version so long as no blocking or high-risk bugs are found in the release candidate. If there are any blocking or high-risk bugs, we will release another release candidate before GA.

Migration to PowerShellGet 3.0

We hope to ship the latest preview of PowerShellGet 3.0 in the next preview of PowerShell 7.1 (preview 6). The goal for this version of PowerShellGet, which will ship in PowerShell 7.1 preview 6, is to contain a compatibility module that will enable scripts with PowerShellGet 2.x cmdlets (e.g. Install-Module) to be run using the PowerShellGet 3.0 module. This means that users will likely be able to keep using PowerShellGet 2.x cmdlets in their scripts with PowerShell 7.1 without updating them. It is important to note, as well, that on systems which contain any other version of PowerShell, the PowerShellGet 2.x module will still be available and used.

We hope to ship PowerShellGet 3.0 with a compatibility layer into PowerShell 7.1 as the sole version of PowerShellGet in the package. However, we will only do this if we reach GA, with a high bar for release quality, in time for the PowerShell 7.1 release candidate.

How to get support and give feedback

We cannot overstate how critical user feedback is at this stage in the development of the module. Feedback from preview releases helps inform design decisions without incurring a breaking change once the module is generally available and used in production. To help us make key decisions around the behavior of the module, please give us feedback by opening issues in our GitHub repository.

Sydney Smith
PowerShell Team

Native Commands in PowerShell – A New Approach – Part 2

In my last post I went through some strategies for executing native executables and having them participate more fully in the PowerShell environment. In this post, I'll be going through a couple of experiments I've done with the Kubernetes kubectl utility.

Is there a better way?

It may be possible to create a framework that inspects the help of an application and automatically creates the code that calls the underlying application. This framework could also handle mapping the output to objects more suitable for the PowerShell environment.

Possibilities in wrapping

The aspect that makes this possible is that some commands have consistently structured help that describes how the application can be used. If this is the case, then we can iteratively call the help, parse it, and automatically construct much of the infrastructure needed to allow these native applications to be incorporated into the PowerShell environment.

First Experiment – Microsoft.PowerShell.Kubectl Module

I created a wrapper to take the output of kubectl api-resources and create functions for each returned resource. This way, instead of running kubectl get pod, I could run Get-KubectlPod (a much more PowerShell-like experience). I also wanted the functions to return objects that I could then use with other PowerShell tools (Where-Object, ForEach-Object, etc.). To do this, I needed a way to map the JSON output of the kubectl tool to PowerShell objects. I decided that it was reasonable to use a more declarative approach that maps a property in the JSON to a PowerShell class member.
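
The core of that mapping can be sketched in a few lines (assuming kubectl is on the path; the projected property names are illustrative):

(kubectl get namespaces -o json | ConvertFrom-Json).items | ForEach-Object {
    [pscustomobject]@{
        Name   = $_.metadata.name
        Status = $_.status.phase
        Age    = $_.metadata.creationTimestamp
    }
}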

There were some problems that I wanted to solve with this first experiment:

  • wrap kubectl api-resources in a function
    • automatically create object output from kubectl api-resources
  • Auto-generate functions for each resource that could be retrieved (only resource get for now)
    • only support name as a parameter
  • Auto-generate the conversion of output to objects to look similar to the usual kubectl output

When it came to wrapping kubectl api-resources I took the static approach rather than auto-generation. First, because it was my first attempt and I was still finding my feet. Second, because this is one of the kubectl commands that does not emit JSON. So, I took the path of parsing the output of kubectl api-resources -o wide. One concern was that I wasn't sure whether the table changes width based on the screen width. I calculated column positions based on the fields I knew to be present and then parsed the line using the offsets. You can see the code in the function get-kuberesource and the constructor for the PowerShell class KubeResource. My plan was that these resources would drive the auto-generation of the Kubernetes resource functions.

Now that I have retrieved the resources, I can auto-generate a specific resource function for calling kubectl get <resource>. At the time, I wanted some flexibility in the creation of these proxy functions, so I provided a way to include a specific implementation, if desired (see the $proxyFunctions hashtable). I'm not sure that's needed now, but we'll get to that later. The problem is that while the resource data can be returned as JSON, that JSON has absolutely no relation to the way the data is represented in the kubectl get pod table. Of course, in PowerShell we can create formatting to present any member of an object (JSON or otherwise), but I like to be sure that the properties seen in a table are properties that I can use with Where-Object, etc. Since I want to return the data as objects, I created classes for a couple of resources by hand but thought there might be a better way.

I determined that when you get data from Kubernetes, the table (both normal and wide) output is created on the server. This means the mapping of the properties of the JSON object to the table columns is defined in the server code. It's possible to request data as custom columns, but you need to provide the value for each column using a JSON path expression. So, it's not possible to automatically generate those tables. However, I thought it might be possible to provide a configuration file that could be read to automatically generate a PowerShell class. The configuration file would need to define the mapping between the property in the table and the properties of the object. The file would include the name of the column and the expression to get the value for the object. This allows a user to retrieve the JSON object and construct their custom object without touching the programming logic of the module, only a configuration file. I created the ResourceConfiguration.json file to encapsulate all the resources that I had access to and provide a way to customize the object members where desired.

here’s an example:

    "TypeName": "namespaces",
    "Fields": [
        "PropertyName": "NAME",
        "PropertyReference": "$o.metadata.NAME"
        "PropertyName": "STATUS",
        "PropertyReference": "$o.status.phase"
        "PropertyName": "AGE",
        "PropertyReference": "$o.metadata.creationTimeStamp"

This JSON is converted into a PowerShell class whose constructor takes the JSON object and assigns the values to the members, according to the PropertyReference. The module automatically attaches the original JSON to a hidden member originalObject so if you want to inspect all the data that’s available, you can. The module also automatically generates a proxy function so you can get the data:

function Get-KubeNamespace
{
  param ()
  (Invoke-KubeCtl -Verb get -resource namespaces).Foreach({[namespaces]::new($_)})
}

This function is then exported so it’s available in the module. When used, it behaves very close to the original:

PS> Get-KubeNamespace

Name                 Status Age
----                 ------ ---
default              Active 5/6/2020 6:13:07 PM
default-mem-example  Active 5/14/2020 8:14:45 PM
docker               Active 5/6/2020 6:14:25 PM
kube-node-lease      Active 5/6/2020 6:13:05 PM
kube-public          Active 5/6/2020 6:13:05 PM
kube-system          Active 5/6/2020 6:13:05 PM
kubernetes-dashboard Active 5/18/2020 8:44:01 PM
openfaas             Active 5/6/2020 6:51:22 PM
openfaas-fn          Active 5/6/2020 6:51:22 PM

PS> kubectl get namespaces --all-namespaces

NAME                   STATUS   AGE
default                Active   26d
default-mem-example    Active   18d
docker                 Active   26d
kube-node-lease        Active   26d
kube-public            Active   26d
kube-system            Active   26d
kubernetes-dashboard   Active   14d
openfaas               Active   26d
openfaas-fn            Active   26d

But importantly, I can use the output with Where-Object and ForEach-Object, or change the format to a list, etc.

PS> Get-KubeNamespace |? name -match "faas"

Name        Status Age
----        ------ ---
openfaas    Active 5/6/2020 6:51:22 PM
openfaas-fn Active 5/6/2020 6:51:22 PM

Second Experiment – Module KubectlHelpParser

I wanted to see if I could read the help content from kubectl in a way that would enable me to auto-generate a complete proxy of the kubectl command, including general parameters, command-specific parameters, and help. It turns out that kubectl help is regular enough that this is quite possible.

When retrieving help, kubectl provides subcommands that also have structured help. I created a recursive parser that allowed me to retrieve all of the help for all of the available kubectl commands. This means that if an additional command is provided in the future, and the help for that command follows the existing pattern for help, this parser will be able to generate a command for it.
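
The shape of that recursion can be sketched roughly like this (a simplified illustration with a hypothetical function name; the real module's parsing is far more involved):

function Get-KubeHelp {
    param ([string[]]$CommandPath = @())
    # Capture the help text for this command path, e.g. 'kubectl set env --help'
    $cliArgs = $CommandPath + '--help'
    $help = & kubectl @cliArgs 2>$null
    # Naively treat two-space-indented words as subcommand names
    $subCommands = $help |
        Select-String -Pattern '^\s{2}([a-z][\w-]*)\s{2,}' |
        ForEach-Object { $_.Matches[0].Groups[1].Value }
    $children = foreach ($sub in $subCommands) {
        Get-KubeHelp -CommandPath ($CommandPath + $sub)
    }
    [pscustomobject]@{
        Command     = $CommandPath -join ' '
        HelpText    = $help -join "`n"
        SubCommands = $children
    }
}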

PS> kubectl --help
kubectl controls the Kubernetes cluster manager.

 Find more information at:

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
. . .

PS> kubectl set --help

Configure application resources

 These commands help you make changes to existing application resources.

Available Commands:
  env            Update environment variables on a pod template
  . . .
  subject        Update User, Group or ServiceAccount in a RoleBinding/ClusterRoleBinding

  kubectl set SUBCOMMAND [options]

PS> kubectl set env --help

Update environment variables on a pod template.

 List environment variable definitions in one or more pods, pod templates. Add, update, or remove container environment
variable definitions in one or more pod templates (within replication controllers or deployment configurations). View or
modify the environment variable definitions on all containers in the specified pods or pod templates, or just those that
match a wildcard.

 If "--env -" is passed, environment variables can be read from STDIN using the standard env syntax.

 Possible resources include (case insensitive):

  pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)

  # Update deployment 'registry' with a new environment variable
  kubectl set env deployment/registry STORAGE_DIR=/local
  . . .
  # Set some of the local shell environment into a deployment config on the server
  env | grep RAILS_ | kubectl set env -e - deployment/registry

      --all=false: If true, select all resources in the namespace of the specified resource types
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in
the template. Only applies to golang and jsonpath output formats.
  . . .
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The
template format is golang templates [].

  kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

The main function of the module will recursively collect the help for all of the commands and construct an object representation that I hope can then be used to generate the proxy functions. This is still very much a work in progress, but it is definitely showing promise. Here’s an example of what it can already do.

PS> import-module ./KubeHelpParser.psm1
PS> $res = get-kubecommands
PS> $res.subcommands[3].subcommands[0]
Command             : set env
CommandElements     : {, set, env}
Description         : Update environment variables on a pod template.

                       List environment variable definitions in one or more pods, pod templates. Add, update, or remove container environment variable definitions in one or more pod templates (within replication controllers or deployment configurations). View or modify the environment variable definitions
                      on all containers in the specified pods or pod templates, or just those that match a wildcard.

                       If "--env -" is passed, environment variables can be read from STDIN using the standard env syntax.

                       Possible resources include (case insensitive):

                        pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)
Usage               : kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N [options]
SubCommands         : {}
Parameters          : {[Parameter(Mandatory=$False)][switch]${All}, [Parameter(Mandatory=$False)][switch]${NoAllowMissingTemplateKeys}, [Parameter(Mandatory=$False)][System.String]${Containers} = "*", [Parameter(Mandatory=$False)][switch]${WhatIf}…}
MandatoryParameters : {}
Examples            : {kubectl set env deployment/registry STORAGE_DIR=/local, kubectl set env deployment/sample-build --list, kubectl set env pods --all --list, kubectl set env deployment/sample-build STORAGE_DIR=/data -o yaml…}
PS> $res.subcommands[3].subcommands[0].usage
Usage                                                               supportsFlags hasOptions
-----                                                               ------------- ----------
kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N [options]         False       True
PS> $res.subcommands[3].subcommands[0].examples
Description                                                   Command
-----------                                                   -------
Update deployment 'registry' with a new environment variable  kubectl set env deployment/registry STORAGE_DIR=/local
. . .

PS> $res.subcommands[3].subcommands[0].parameters.Foreach({$_.tostring()})

[Parameter(Mandatory=$False)][System.String]${Containers} = "*"
. . .

There are still a lot of open questions and details to work out here:

  • how are mandatory parameters determined?
  • how do we keep a map of used parameters?
  • does parameter order matter?
  • can reasonable debugging be provided?
  • do we have to “boil the ocean” to provide something useful?

I believe it may be possible to create a more generic framework which would allow a larger number of native executables to be more fully incorporated into the PowerShell ecosystem. These are just the first steps in the investigation, but it looks very promising.

Call To Action

First, I’m really interested in knowing whether a framework that can auto-generate functions to wrap a native executable is useful. The obvious response might be “of course”, but how much of a solution is really needed to provide value? Second, I would really like to know if there are specific tools you would like us to investigate for this sort of treatment. If it is possible to make this a generic framework, I would love to have more examples of tools that would be beneficial to you and that would test our ability to handle them.

James Truher
Software Engineer
PowerShell Team

Resolving PowerShell Module Assembly Dependency Conflicts

When writing a PowerShell module, especially a binary module (i.e. one written in a language like C# and loaded into PowerShell as an assembly/DLL), it’s natural to take dependencies on other packages or libraries to provide functionality.

Taking dependencies on other libraries is usually desirable for code reuse. However, PowerShell always loads assemblies into the same context, and this can present issues when a module’s dependencies conflict with already-loaded DLLs, preventing two otherwise unrelated modules from being used together in the same PowerShell session. If you’ve been hit by this yourself, you would have seen an error message like this:

Assembly load conflict error message.

In this blog post, we’ll look at some of the ways dependency conflicts can arise in PowerShell, and some of the ways to mitigate dependency conflict issues. Even if you’re not a module author, there are some tricks in here that might help you with dependency conflicts arising in modules that you use.


Why do dependency conflicts occur?

In .NET, dependency conflicts arise when two versions of the same assembly are loaded into the same Assembly Load Context (this term means slightly different things on different .NET platforms, which we’ll cover later). This conflict issue is a common problem that occurs essentially everywhere in software where versioned dependencies are used. Because there are no simple solutions (and because many dependency resolution frameworks try to solve the problem in different ways), this situation is often called “dependency hell”.

Conflict issues are compounded by the fact that a project almost never deliberately or directly depends on two versions of the same dependency, but instead depends on two different dependencies that each require a different version of the same dependency.

For example, say your .NET application, DuckBuilder, brings in two dependencies, to perform parts of its functionality and looks like this:

Two dependencies of DuckBuilder rely on different versions of Newtonsoft.Json

Because Contoso.ZipTools and Fabrikam.FileHelpers both depend on Newtonsoft.Json, but different versions, there may be a dependency conflict, depending on how each dependency is loaded.

Conflicting with PowerShell’s dependencies

In PowerShell, the dependency conflict issue is exacerbated because PowerShell modules, and PowerShell’s own dependencies, are all loaded into the same shared context. This means the PowerShell engine and all loaded PowerShell modules must not have conflicting dependencies.

One scenario in which this can cause issues is where a module has a dependency that conflicts with one of PowerShell’s own dependencies. A classic example of this is Newtonsoft.Json:

FictionalTools module depends on newer version of Newtonsoft.Json than PowerShell

In this example, the module FictionalTools depends on Newtonsoft.Json version 12.0.3, which is newer than the Newtonsoft.Json 11.0.2 that ships in the example PowerShell. (To be clear, this is an example; PowerShell 7 currently ships with Newtonsoft.Json 12.0.3.)

Because the module depends on a newer version of the assembly, it won’t accept the version that PowerShell already has loaded, but because PowerShell has already loaded a version of the assembly, the module can’t load its own using the conventional load mechanism.

Conflicting with another module’s dependencies

Another common scenario in PowerShell is that a module is loaded that depends on one version of an assembly, and then another module is loaded later that depends on a different version of that assembly.

This often looks like the following:

Two PowerShell modules require different versions of the Microsoft.Extensions.Logging dependency

In the above case, the FictionalTools module requires a newer version of Microsoft.Extensions.Logging than the FilesystemManager module.

Let’s imagine these modules load their dependencies by placing the dependency assemblies in the same directory as the root module assembly and allowing .NET to implicitly load them by name. If we are running PowerShell 7 (on top of .NET Core 3.1), we can load and run FictionalTools and then load and run FilesystemManager without issue. However, if in a new session we load and run FilesystemManager and then FictionalTools, we will encounter a FileLoadException from the FictionalTools command, because it requires a newer version of Microsoft.Extensions.Logging than the one loaded, but cannot load it because an assembly of the same name has already been loaded.

PowerShell and .NET

PowerShell runs on the .NET platform, and since we’re discussing assembly dependency conflicts, we must discuss .NET. .NET is ultimately responsible for resolving and loading assembly dependencies, so we must understand how .NET operates here to understand dependency conflicts.

We must also confront the fact that different versions of PowerShell run on different .NET implementations, which respectively have their own mechanisms for assembly resolution.

In general, PowerShell 5.1 and below run on .NET Framework, while PowerShell 6 and above run on .NET Core. These two flavours of .NET load and handle assemblies somewhat differently, meaning resolving dependency conflicts can vary depending on the underlying .NET platform.

Assembly Load Contexts

In this post, we’ll use the term “Assembly Load Context” (ALC) frequently. An Assembly Load Context is a .NET concept that essentially means a runtime namespace into which assemblies are loaded and within which assemblies’ names are unique. This concept allows assemblies to be uniquely resolved by name in each ALC.

Assembly reference loading in .NET

The semantics of assembly loading (whether versions clash, whether that assembly’s dependencies are handled, etc.) depend on both the .NET implementation (.NET Core vs .NET Framework) and the API or .NET mechanism used to load a particular assembly.

Rather than go into deep detail describing these here, there is a list of links in the Further reading section that go into great detail on how .NET assembly loading works in each .NET implementation.

In this blog post, we’ll refer to the following mechanisms:

  • Implicit assembly loading (effectively Assembly.Load(AssemblyName)), when .NET implicitly tries to load an assembly by name from a static assembly reference in .NET code.
  • Assembly.LoadFrom(), a plugin-oriented loading API that will add handlers to resolve dependencies of the loaded DLL (but which may not do so the way we want).
  • Assembly.LoadFile(), a bare-bones loading API intended to load only the assembly asked for with no handling of its dependencies.

The way these APIs work has changed in subtle ways between .NET Core and .NET Framework, so it’s worth reading through the further reading links.
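
As a rough sketch of the latter two APIs (the paths here are hypothetical; this is an illustration, not a working plugin system):

using System;
using System.Reflection;

class LoadApiDemo
{
    static void Main()
    {
        // Assembly.LoadFrom(): plugin-oriented; also hooks up resolution
        // for dependencies sitting next to the loaded DLL
        Assembly fromAsm = Assembly.LoadFrom(@"C:\plugins\MyPlugin\MyPlugin.dll");
        Console.WriteLine(fromAsm.FullName);

        // Assembly.LoadFile(): loads exactly this file and nothing else;
        // resolving its dependencies is left entirely to the caller
        Assembly fileAsm = Assembly.LoadFile(@"C:\tools\MyTool\MyTool.dll");
        Console.WriteLine(fileAsm.FullName);
    }
}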

Differences in .NET Framework vs .NET Core

Importantly, Assembly Load Contexts and other assembly resolution mechanisms have changed between .NET Framework and .NET Core.

In particular .NET Framework has the following features:

  • The Global Assembly Cache, for machine-wide assembly resolution
  • Application Domains, which work like in-process sandboxes for assembly isolation, but also present a serialization layer to contend with
  • A limited assembly load context model, explained in depth here, that has a fixed set of assembly load contexts, each with their own behaviour:
    • The default load context, where assemblies are loaded by default
    • The load-from context, for loading assemblies manually at runtime
    • The reflection-only context, for safely loading assemblies to read their metadata without running them
    • The mysterious void that assemblies loaded with Assembly.LoadFile(string path) and Assembly.Load(byte[] asmBytes) live in

.NET Core (and .NET 5+) has eschewed this complexity for a simpler model:

  • No Global Assembly Cache; applications bring all their own dependencies (PowerShell, as the plugin host, complicates this slightly for modules: its dependencies in $PSHOME are shared with all modules). This removes an exogenous factor for dependency resolution in applications, making dependency resolution more reproducible.
  • Only one Application Domain, and no ability to create new ones. The Application Domain concept lives on purely to be the global state of the .NET process.
  • A new, extensible Assembly Load Context model, where assembly resolution can be effectively namespaced by putting it in a new ALC. .NET Core processes begin with a single default ALC into which all assemblies are loaded (except for those loaded with Assembly.LoadFile(string) and Assembly.Load(byte[])), but are free to create and define their own custom ALCs with their own loading logic. When an assembly is loaded, the first ALC it is loaded into is responsible for resolving its dependencies, creating opportunities to create powerful .NET plugin loading mechanisms.

In both implementations, assemblies are loaded lazily when a method requiring their type is run for the first time.

For example, here are two versions of the same code that load a dependency at different times.

The first will always load its dependency when Program.GetRange() is called, because the dependency reference is lexically present within the method:

using System.Collections.Generic;
using Dependency.Library;

public static class Program
{
    public static List<int> GetRange(int limit)
    {
        var list = new List<int>();
        for (int i = 0; i < limit; i++)
        {
            if (i >= 20)
            {
                // Dependency.Library will be loaded when GetRange is run,
                // because the dependency call occurs directly within the method.
                // (DependencyApi.GetNumber is a stand-in for the imaginary library's API.)
                list.Add(DependencyApi.GetNumber(i));
                continue;
            }

            list.Add(i);
        }

        return list;
    }
}
The second will load its dependency only if the limit parameter is 20 or more, because of the internal indirection through a method:

using System.Collections.Generic;
using Dependency.Library;

public static class Program
{
    public static List<int> GetNumbers(int limit)
    {
        var list = new List<int>();
        for (int i = 0; i < limit; i++)
        {
            if (i >= 20)
            {
                // Dependency.Library is only referenced within
                // the UseDependencyApi() method,
                // so will only be loaded when limit >= 20
                UseDependencyApi();
                break;
            }

            list.Add(i);
        }

        return list;
    }

    private static void UseDependencyApi()
    {
        // Once UseDependencyApi() is called, Dependency.Library is loaded.
        // (DependencyApi.Use is a stand-in for the imaginary library's API.)
        DependencyApi.Use();
    }
}

This is a good thing for .NET to do, since it minimizes memory use and filesystem reads, making programs more resource efficient.

Unfortunately, a side effect of this is that if an assembly load is going to fail, we won’t necessarily know when the program is first loaded, only when the code path that tries to load the assembly is actually run.

It can also set up timing conditions for assembly load conflicts; if two parts of the same program will try to load different versions of the same assembly, which one is loaded depends on which code path is run first.

For PowerShell this means that the following factors can affect an assembly load conflict:

  • Which module was loaded first?
  • Was the cmdlet/code path that uses the dependency library run?
  • Does PowerShell load a conflicting dependency at startup or only under certain code paths?
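
When debugging these situations, it can help to see what has already been loaded. Here is a small diagnostic sketch (my own, using only standard AppDomain APIs) that lists every loaded copy of a given assembly:

using System;
using System.Linq;

class LoadedAssemblyDump
{
    static void Main()
    {
        // List every currently loaded assembly named "Newtonsoft.Json",
        // with its version and the file it was loaded from
        var matches = AppDomain.CurrentDomain.GetAssemblies()
            .Where(a => a.GetName().Name == "Newtonsoft.Json");

        foreach (var asm in matches)
        {
            Console.WriteLine($"{asm.GetName().Version} <- {asm.Location}");
        }
    }
}

From PowerShell, the equivalent is to filter the output of [System.AppDomain]::CurrentDomain.GetAssemblies() by assembly name.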

Quick fixes and their limitations

In some cases it’s possible to make small adjustments to your module and fix things with a minimum of fuss. But these solutions tend to come with caveats, so that while they may apply to your module, they won’t work for every module.

Change your dependency version

The simplest way to avoid dependency conflicts is to agree on a common dependency version. This may be possible when:

  • Your conflict is with a direct dependency of your module and you control the version
  • Your conflict is with an indirect dependency, but you can configure your direct dependencies to use a workable indirect dependency version
  • You know the conflicting version and can rely on it not changing

A good example of this last scenario is with the Newtonsoft.Json package. This is a dependency of PowerShell 6 and above, and isn’t used in Windows PowerShell, meaning a simple way to resolve versioning conflicts is to target the lowest version of Newtonsoft.Json across the PowerShell versions you wish to target.

For example, PowerShell 6.2.6 and PowerShell 7.0.2 both currently use Newtonsoft.Json version 12.0.3. So, to create a module targeting Windows PowerShell, PowerShell 6 and PowerShell 7, you would target Newtonsoft.Json 12.0.3 as a dependency and include it in your built module. When the module is loaded in PowerShell 6 or 7, PowerShell’s own Newtonsoft.Json assembly will already be loaded, but the version will be at least the required one for your module, meaning resolution will succeed. In Windows PowerShell, the assembly will not be already present in PowerShell, and so will be loaded from your module instead.

Generally, when targeting a concrete PowerShell package, like Microsoft.PowerShell.Sdk or System.Management.Automation, NuGet should be able to resolve the right dependency versions required. It’s only in the case of targeting both Windows PowerShell and PowerShell 6+ that dependency versioning becomes more difficult, either because of needing to target multiple frameworks, or due to targeting PowerShellStandard.Library.

Circumstances where pinning to a common dependency version won’t work include:

  • The conflict is with an indirect dependency, and there’s no configuration of your dependencies that can use the common version
  • The other dependency version is likely to change often, meaning settling on a common version will only be a short-term fix
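
Where pinning does apply, you can make version mismatches fail fast and visibly instead of surfacing later as confusing load errors. Here is a hedged sketch (the class name and version numbers are illustrative) using the IModuleAssemblyInitializer interface that the next section introduces:

using System;
using System.Linq;
using System.Management.Automation;

// Illustrative only: at import time, check whether an older Newtonsoft.Json
// is already loaded than the version this module was built against.
public class NewtonsoftVersionCheck : IModuleAssemblyInitializer
{
    private static readonly Version s_minVersion = new Version(12, 0, 0);

    public void OnImport()
    {
        var loaded = AppDomain.CurrentDomain.GetAssemblies()
            .FirstOrDefault(a => a.GetName().Name == "Newtonsoft.Json");

        if (loaded != null && loaded.GetName().Version < s_minVersion)
        {
            throw new InvalidOperationException(
                $"Newtonsoft.Json {loaded.GetName().Version} is already loaded; " +
                $"this module requires {s_minVersion} or later.");
        }
    }
}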

Add an AssemblyResolve event handler to create a dynamic binding redirect

When changing your own dependency version isn’t possible, or is too hard, another way to make your module play nicely with other dependencies is to set up a dynamic binding redirect by registering an event handler for the AssemblyResolve event.

As with a static binding redirect, the point here is to force all consumers of a dependency to use the same actual assembly. This means we need to intercept calls to load the dependency and always redirect them to the version we want.

This looks something like this:

// Register the event handler as early as you can in your code.
// A good option is to use the IModuleAssemblyInitializer interface
// that PowerShell provides to run code early on when your module is loaded.

using System;
using System.IO;
using System.Management.Automation;
using System.Reflection;

// This class will be instantiated on module import and the OnImport() method run.
// Make sure that:
//  - the class is public
//  - the class has a public, parameterless constructor
//  - the class implements IModuleAssemblyInitializer
public class MyModuleInitializer : IModuleAssemblyInitializer
{
    public void OnImport()
    {
        AppDomain.CurrentDomain.AssemblyResolve += DependencyResolution.ResolveNewtonsoftJson;
    }
}

// Clean up the event handler when the module is removed
// to prevent memory leaks.
// Like IModuleAssemblyInitializer, IModuleAssemblyCleanup allows
// you to register code to run when a module is removed (with Remove-Module).
// Make sure it is also public with a public parameterless constructor
// and implements IModuleAssemblyCleanup.
public class MyModuleCleanup : IModuleAssemblyCleanup
{
    public void OnRemove(PSModuleInfo psModuleInfo)
    {
        AppDomain.CurrentDomain.AssemblyResolve -= DependencyResolution.ResolveNewtonsoftJson;
    }
}

internal static class DependencyResolution
{
    private static readonly string s_modulePath = Path.GetDirectoryName(
        Assembly.GetExecutingAssembly().Location);

    public static Assembly ResolveNewtonsoftJson(object sender, ResolveEventArgs args)
    {
        // Parse the assembly name
        var assemblyName = new AssemblyName(args.Name);

        // We only want to handle the dependency we care about.
        // In this example it's Newtonsoft.Json.
        if (!assemblyName.Name.Equals("Newtonsoft.Json"))
        {
            return null;
        }

        // Generally the version of the dependency you want to load is the higher one,
        // since it's the most likely to be compatible with all dependent assemblies.
        // The logic here assumes our module always ships the version we want to load.
        // Also note the use of Assembly.LoadFrom() here rather than Assembly.LoadFile().
        return Assembly.LoadFrom(Path.Combine(s_modulePath, "Newtonsoft.Json.dll"));
    }
}

Note that you can technically register a scriptblock within PowerShell to handle an AssemblyResolve event, but:

  • An AssemblyResolve event may be triggered on any thread, something that PowerShell will be unable to handle.
  • Even when events are handled on the right thread, PowerShell’s scoping concepts mean that the event handler scriptblock must be written carefully to not depend on variables defined outside it.

There are situations where redirecting to a common version will not work:

  • When the dependency has made breaking changes between versions, or when dependent code relies on a functionality otherwise not available in the version you redirect to
  • When your module isn’t loaded before the conflicting dependency. If you register an AssemblyResolve event handler after the dependency has been loaded, it will never be fired. If you’re trying to prevent dependency conflicts with another module, this may be an issue, since the other module may be loaded first.

Use the dependency out of process

This solution is more for module users than module authors, but is a possible solution when confronted with a module that won’t work due to an existing dependency conflict.

Dependency conflicts occur because two versions of the same assembly are loaded into the same .NET process. So a simple solution is to load them into different processes, as long as you can still use the functionality from both together.

In PowerShell, there are several ways to achieve this.

Invoke PowerShell as a subprocess

A simple way to run a PowerShell command out of the current process is to just start a new PowerShell process directly with the command call:

pwsh -c 'Invoke-ConflictingCommand'

The main limitation here is that reconstructing structured results from the subprocess’s output can be trickier or more error prone than the other options.

The PowerShell job system

The PowerShell job system also runs commands out of process, by sending commands to a new PowerShell process and returning the results:

$result = Start-Job { Invoke-ConflictingCommand } | Receive-Job -Wait

In this case, you just need to be sure that any variables and state are passed in correctly, for example with the $using: scope modifier.

The job system can also be slightly cumbersome when running small commands.

PowerShell remoting

When it’s available, PowerShell remoting can be a very ergonomic way to run commands out of process. With remoting you can create a fresh PSSession in a new process, call its commands over PowerShell remoting and then use the results locally with, for example, the other module with the conflicting dependencies.

An example might look like this:

# Create a local PowerShell session
# where the module with conflicting assemblies will be loaded
$s = New-PSSession

# Import the module with the conflicting dependency via remoting,
# exposing the commands locally
Import-Module -PSSession $s -Name ConflictingModule

# Run a command from the module with the conflicting dependencies
Invoke-ConflictingCommand

Implicit remoting to Windows PowerShell

Another option in PowerShell 7 is to use the -UseWindowsPowerShell flag on Import-Module. This will import the module through a local remoting session into Windows PowerShell:

Import-Module -Name ConflictingModule -UseWindowsPowerShell

Be aware, of course, that modules may not work with Windows PowerShell, or may work differently.

When not to use out-of-process invocation

As a module author, out-of-process command invocation is harder to bake into a module, and may have edge cases that cause issues. In particular, remoting and jobs may not be available in all environments where your module needs to work. However, the general principle of moving the implementation out of process and allowing the PowerShell module to be a thinner client may still be applicable.
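
To illustrate that thin-client idea, here is a hypothetical sketch (all names invented, error handling omitted) of a binary cmdlet that runs the conflicting work in a child pwsh process and streams its output back as strings:

using System.Diagnostics;
using System.Management.Automation;

// Hypothetical "thin client" cmdlet: the conflicting dependency only ever
// loads in the child process, never in the caller's PowerShell process.
[Cmdlet(VerbsLifecycle.Invoke, "IsolatedWork")]
public class InvokeIsolatedWorkCommand : Cmdlet
{
    protected override void EndProcessing()
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "pwsh",
            Arguments = "-NoProfile -Command Invoke-ConflictingCommand",
            RedirectStandardOutput = true,
            UseShellExecute = false,
        };

        using (var process = Process.Start(startInfo))
        {
            string line;
            while ((line = process.StandardOutput.ReadLine()) != null)
            {
                // Output crosses the process boundary as text
                WriteObject(line);
            }

            process.WaitForExit();
        }
    }
}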

Of course, for module users too there will be cases where out-of-process invocation won’t work:

  • When PowerShell remoting is unavailable, for example if you don’t have privileges to use it or it is not enabled.
  • When a particular .NET type is needed from output, for example to run methods on, or as input to another command. Commands run over PowerShell remoting emit deserialized output, meaning they return psobjects rather than strongly-typed .NET objects. This means that method calls and strongly typed APIs will not work with the output of commands imported over remoting.

More robust solutions

The solutions above all share the issue that there are scenarios and modules for which they won’t work. However, they also have the virtue of being relatively simple to implement correctly.

These next solutions we discuss are generally more robust, but also take somewhat more work to implement correctly and can introduce subtle bugs if not written carefully.

Loading through .NET Core Assembly Load Contexts

Assembly Load Contexts (ALCs) as an implementable API were introduced in .NET Core 1.0 to specifically address the need to load multiple versions of the same assembly into the same runtime.

Within .NET Core and .NET 5, they offer what is far and away the most robust solution to the problem of needing to load conflicting versions of an assembly. However, custom ALCs are not available in .NET Framework. This means that this solution will only work in PowerShell 6 and above.

Currently, the best example of using an ALC for dependency isolation in PowerShell is in PowerShell Editor Services (the language server for the PowerShell extension for Visual Studio Code), where an ALC is used to prevent PowerShell Editor Services’ own dependencies from clashing with those in PowerShell modules.

Implementing module dependency isolation with an ALC is conceptually difficult, but we will work through a minimal example here.

Imagine we have a simple module, only intended to work in PowerShell 7, whose source is laid out like this:

+ AlcModule.psd1
+ src/
    + TestAlcModuleCommand.cs
    + AlcModule.csproj

The cmdlet implementation looks like this:

using System.Management.Automation;
using Shared.Dependency;

namespace AlcModule
{
    [Cmdlet(VerbsDiagnostic.Test, "AlcModule")]
    public class TestAlcModuleCommand : Cmdlet
    {
        protected override void EndProcessing()
        {
            // Here's where our dependency gets used
            Dependency.Use();
            // Something trivial to make our cmdlet do *something*
            WriteObject("done!");
        }
    }
}

The (heavily simplified) manifest looks like this:

@{
    Author = 'Me'
    ModuleVersion = '0.0.1'
    RootModule = 'AlcModule.dll'
    CmdletsToExport = @('Test-AlcModule')
    PowerShellVersion = '7.0'
}

And the csproj looks like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Shared.Dependency" Version="1.0.0" />
    <PackageReference Include="Microsoft.PowerShell.Sdk" Version="7.0.1" PrivateAssets="all" />
  </ItemGroup>
</Project>

When we build this module, the generated output has the following layout:

  + AlcModule.psd1
  + AlcModule.dll
  + Shared.Dependency.dll

In this example, our problem lies in the Shared.Dependency.dll assembly, which is our imaginary conflicting dependency. This is the dependency that we need to put behind an ALC so that we can use the module-specific version.

We seek to re-engineer the module so that:

  • Module dependencies are only ever loaded into our custom ALC, and not into PowerShell’s ALC, so there can be no conflict. Moreover, as we add more dependencies to our project, we don’t want to continuously add more code to keep loading working; instead we want reusable, generic dependency resolution logic.
  • Loading the module will still work as normal in PowerShell, meaning cmdlets and other types the PowerShell module system needs to see will be defined within PowerShell’s own ALC.

To reconcile these two requirements, we must break our module up into two assemblies:

  • A cmdlets assembly, AlcModule.Cmdlets.dll, which will contain definitions of all the types that PowerShell’s module system needs to load the module correctly. Namely any implementations of the Cmdlet base class and our IModuleAssemblyInitializer-implementing class, which will set up the event handler for AssemblyLoadContext.Default.Resolving to properly load AlcModule.Engine.dll through our custom ALC. Any types that are meant to be publicly exposed to PowerShell must also be defined here, since PowerShell 7 deliberately hides types defined in assemblies loaded in other ALCs. Finally, this assembly will also need to be where our custom ALC definition lives. Beyond that, as little code should live in this as possible.
  • An engine assembly, AlcModule.Engine.dll, which handles all the actual implementation of the module. Types from this will still be available in the PowerShell ALC, but it will initially be loaded through our custom ALC, and its dependencies will only ever be loaded into the custom ALC. Effectively, this becomes a “bridge” between the two ALCs.

Using this bridge concept, our new assembly situation effectively will look like this:

Diagram representing AlcModule.Engine.dll bridging the two ALCs

To make sure the default ALC’s dependency probing logic will not resolve the dependencies to be loaded into the custom ALC, we need to separate these two parts of the module into different directories. So our new module layout will look like this:

  + AlcModule.psd1
  + AlcModule.Cmdlets.dll
  + Dependencies/
  |    + AlcModule.Engine.dll
  |    + Shared.Dependency.dll

To see how our implementation now changes, we’ll start with the implementation of AlcModule.Engine.dll:

using Shared.Dependency;

namespace AlcModule.Engine
{
    public class AlcEngine
    {
        public static void Use()
        {
            // Call into the conflicting dependency here
            Dependency.Use();
        }
    }
}

This is a fairly straightforward container for the dependency, Shared.Dependency.dll, but you should think of it as the .NET API for your functionality, which the cmdlets living in the other assembly will wrap for PowerShell.

The cmdlet in AlcModule.Cmdlets.dll now looks like this:

// Reference our module's Engine implementation here
using AlcModule.Engine;
using System.Management.Automation;

namespace AlcModule.Cmdlets
{
    [Cmdlet(VerbsDiagnostic.Test, "AlcModule")]
    public class TestAlcModuleCommand : Cmdlet
    {
        protected override void EndProcessing()
        {
            AlcEngine.Use();
            WriteObject("done!");
        }
    }
}
At this point, if we were to load AlcModule and run Test-AlcModule, we would hit a FileNotFoundException when the default ALC tries to load AlcModule.Engine.dll to run EndProcessing(). This is good, since it means the default ALC can’t find the dependencies we want to hide.

So now we need to add the magic to AlcModule.Cmdlets.dll that helps it know how to resolve AlcModule.Engine.dll. First we must define our custom ALC that will resolve assemblies from our module’s Dependencies directory:

using System.IO;
using System.Reflection;
using System.Runtime.Loader;

namespace AlcModule.Cmdlets
{
    internal class AlcModuleAssemblyLoadContext : AssemblyLoadContext
    {
        private readonly string _dependencyDirPath;

        public AlcModuleAssemblyLoadContext(string dependencyDirPath)
        {
            _dependencyDirPath = dependencyDirPath;
        }

        protected override Assembly Load(AssemblyName assemblyName)
        {
            // We do the simple logic here of looking for an assembly
            // of the given name in the configured dependency directory
            string assemblyPath = Path.Combine(
                _dependencyDirPath,
                $"{assemblyName.Name}.dll");

            if (File.Exists(assemblyPath))
            {
                // The ALC must use inherited methods to load assemblies;
                // Assembly.Load*() won't work here
                return LoadFromAssemblyPath(assemblyPath);
            }

            // Let the default resolution process continue for anything else
            return null;
        }
    }
}

Then, we need to hook our custom ALC up to the default ALC’s Resolving event (which is the ALC version of the AssemblyResolve event on Application Domains) that will be fired when EndProcessing() is called to try and find AlcModule.Engine.dll:

using System.IO;
using System.Management.Automation;
using System.Reflection;
using System.Runtime.Loader;

namespace AlcModule.Cmdlets
{
    public class AlcModuleResolveEventHandler : IModuleAssemblyInitializer, IModuleAssemblyCleanup
    {
        // Get the path of the dependency directory.
        // In this case we find it relative to the AlcModule.Cmdlets.dll location
        private static readonly string s_dependencyDirPath = Path.GetFullPath(
            Path.Combine(
                Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
                "Dependencies"));

        private static readonly AlcModuleAssemblyLoadContext s_dependencyAlc =
            new AlcModuleAssemblyLoadContext(s_dependencyDirPath);

        public void OnImport()
        {
            // Add the Resolving event handler here
            AssemblyLoadContext.Default.Resolving += ResolveAlcEngine;
        }

        public void OnRemove(PSModuleInfo psModuleInfo)
        {
            // Remove the Resolving event handler here
            AssemblyLoadContext.Default.Resolving -= ResolveAlcEngine;
        }

        private static Assembly ResolveAlcEngine(
            AssemblyLoadContext defaultAlc,
            AssemblyName assemblyToResolve)
        {
            // We only want to resolve the AlcModule.Engine.dll assembly here.
            // Because this will be loaded into the custom ALC,
            // all of *its* dependencies will be resolved
            // by the logic we defined for that ALC's implementation.
            // Note that we are safe in our assumption that the name is enough
            // to distinguish our assembly here, since it's unique to our module.
            // There should be no other AlcModule.Engine.dll on the system.
            if (!assemblyToResolve.Name.Equals("AlcModule.Engine"))
            {
                return null;
            }

            // Allow our ALC to handle the directory discovery concept.
            // This is where AlcModule.Engine.dll is loaded into our custom ALC
            // and then passed through into PowerShell's ALC,
            // becoming the bridge between both
            return s_dependencyAlc.LoadFromAssemblyName(assemblyToResolve);
        }
    }
}

With the new implementation laid out, we can now take a look at the sequence of calls that occurs when the module is loaded and Test-AlcModule is run:

Sequence diagram of calls using the custom ALC to load dependencies

Some points of interest are:

  • The IModuleAssemblyInitializer is run first when the module loads and sets the Resolving event
  • We don’t even load the dependencies until Test-AlcModule is run and its EndProcessing() method is called
  • When EndProcessing() is called, the default ALC does not find AlcModule.Engine.dll and fires the Resolving event
  • Our event handler here is what does the magic of hooking up the custom ALC to the default ALC, and only loads AlcModule.Engine.dll
  • Then, within AlcModule.Engine.dll, when AlcEngine.Use() is called, the custom ALC again kicks in to resolve Shared.Dependency.dll. Specifically it always loads our Shared.Dependency.dll, since it never conflicts with anything in the default ALC and only looks in our Dependencies directory.

Assembling the implementation, our new source code layout looks like this:

+ AlcModule.psd1
+ src/
  + AlcModule.Cmdlets/
  | + AlcModule.Cmdlets.csproj
  | + TestAlcModuleCommand.cs
  | + AlcModuleAssemblyLoadContext.cs
  | + AlcModuleInitializer.cs
  + AlcModule.Engine/
  | + AlcModule.Engine.csproj
  | + AlcEngine.cs

AlcModule.Cmdlets.csproj looks like:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <ProjectReference Include="..\AlcModule.Engine\AlcModule.Engine.csproj" />
    <PackageReference Include="Microsoft.PowerShell.Sdk" Version="7.0.1" PrivateAssets="all" />
  </ItemGroup>
</Project>

AlcModule.Engine.csproj looks like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Shared.Dependency" Version="1.0.0" />
  </ItemGroup>
</Project>

And so when we build the module, our strategy is:

  • Build AlcModule.Engine
  • Build AlcModule.Cmdlets
  • Copy everything from AlcModule.Engine into the Dependencies dir, and remember what we copied
  • Copy everything from AlcModule.Cmdlets that wasn’t in AlcModule.Engine into the base module dir

Since the module layout here is so crucial to dependency separation, here’s a build script to use from the source root:

param(
    # The .NET build configuration
    [ValidateSet('Debug', 'Release')]
    [string]
    $Configuration = 'Debug'
)

# Convenient reusable constants
$mod = "AlcModule"
$netcore = "netcoreapp3.1"
$copyExtensions = @('.dll', '.pdb')

# Source code locations
$src = "$PSScriptRoot/src"
$engineSrc = "$src/$mod.Engine"
$cmdletsSrc = "$src/$mod.Cmdlets"

# Generated output locations
$outDir = "$PSScriptRoot/out/$mod"
$outDeps = "$outDir/Dependencies"

# Build AlcModule.Engine
Push-Location $engineSrc
dotnet publish -c $Configuration
Pop-Location

# Build AlcModule.Cmdlets
Push-Location $cmdletsSrc
dotnet publish -c $Configuration
Pop-Location

# Ensure out directory exists and is clean
Remove-Item -Path $outDir -Recurse -ErrorAction Ignore
New-Item -Path $outDir -ItemType Directory
New-Item -Path $outDeps -ItemType Directory

# Copy manifest
Copy-Item -Path "$PSScriptRoot/$mod.psd1" -Destination $outDir

# Copy each Engine asset and remember it
$deps = [System.Collections.Generic.HashSet[string]]::new()
Get-ChildItem -Path "$engineSrc/bin/$Configuration/$netcore/publish/" |
    Where-Object { $_.Extension -in $copyExtensions } |
    ForEach-Object { [void]$deps.Add($_.Name); Copy-Item -Path $_.FullName -Destination $outDeps }

# Now copy each Cmdlets asset, not taking any found in Engine
Get-ChildItem -Path "$cmdletsSrc/bin/$Configuration/$netcore/publish/" |
    Where-Object { -not $deps.Contains($_.Name) -and $_.Extension -in $copyExtensions } |
    ForEach-Object { Copy-Item -Path $_.FullName -Destination $outDir }

So finally, we have a general way to use an Assembly Load Context to isolate our module’s dependencies that remains robust over time and as more dependencies are added.

Hopefully this example is informative, but naturally a fuller example would be more helpful. For that, go to this GitHub repository I created to give a full demonstration of how to migrate a module to use an ALC, while keeping that module working in .NET Framework and also using .NET Standard and PowerShell Standard to simplify the core implementation.

Assembly resolve handler for side-by-side loading with Assembly.LoadFile()

Another, possibly simpler, way to achieve side-by-side assembly loading is to hook up an AssemblyResolve event to Assembly.LoadFile(). Using Assembly.LoadFile() has the advantage of being the simplest solution to implement and working with both .NET Core and .NET Framework.

To show this, let’s take a look at a quick example of an implementation that combines ideas from our dynamic binding redirect example, and from the Assembly Load Context example above.

We’ll call this module LoadFileModule, which has a trivial command Test-LoadFile [-Path] <path>. This module takes a dependency on the CsvHelper assembly (version 15.0.5).

As with the ALC module, we must first split up the module into two pieces. First the part that does the actual implementation, LoadFileModule.Engine:

using System.Globalization;
using System.IO;
using CsvHelper;

namespace LoadFileModule.Engine
{
    public class Runner
    {
        public void Use(string path)
        {
            // Here's where we use the CsvHelper dependency
            using (var reader = new StreamReader(path))
            using (var csvReader = new CsvReader(reader, CultureInfo.InvariantCulture))
            {
                // Imagine we do something useful here...
            }
        }
    }
}

And then the part that PowerShell will load directly, LoadFileModule.Cmdlets. We use a strategy very similar to the ALC module, where we isolate dependencies in a separate directory. But this time, we must load all of our dependency assemblies with a resolve event, rather than just LoadFileModule.Engine.dll.

using System;
using System.Collections.Generic;
using System.IO;
using System.Management.Automation;
using System.Reflection;
using LoadFileModule.Engine;

namespace LoadFileModule.Cmdlets
{
    // Actual cmdlet definition
    [Cmdlet(VerbsDiagnostic.Test, "LoadFile")]
    public class TestLoadFileCommand : Cmdlet
    {
        [Parameter(Mandatory = true)]
        public string Path { get; set; }

        protected override void EndProcessing()
        {
            // Here's where we use LoadFileModule.Engine
            new Runner().Use(Path);
        }
    }

    // This class controls our dependency resolution
    public class ModuleContextHandler : IModuleAssemblyInitializer, IModuleAssemblyCleanup
    {
        // We catalog our dependencies here to ensure we don't load anything else
        private static readonly IReadOnlyDictionary<string, int> s_dependencies = new Dictionary<string, int>
        {
            { "CsvHelper", 15 },
        };

        // Set up the path to our dependency directory within the module
        private static readonly string s_dependenciesDirPath = System.IO.Path.GetFullPath(
            System.IO.Path.Combine(
                System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
                "Dependencies"));

        // This makes sure we only try to resolve dependencies
        // once the engine assembly has been requested
        private static bool s_engineLoaded = false;

        public void OnImport()
        {
            // Set up our event when the module is loaded
            AppDomain.CurrentDomain.AssemblyResolve += HandleResolveEvent;
        }

        public void OnRemove(PSModuleInfo psModuleInfo)
        {
            // Unset the event when the module is unloaded
            AppDomain.CurrentDomain.AssemblyResolve -= HandleResolveEvent;
        }

        private static Assembly HandleResolveEvent(object sender, ResolveEventArgs args)
        {
            var asmName = new AssemblyName(args.Name);

            // First we want to know if this is the top-level assembly
            if (asmName.Name.Equals("LoadFileModule.Engine"))
            {
                s_engineLoaded = true;
                return Assembly.LoadFile(
                    System.IO.Path.Combine(s_dependenciesDirPath, "LoadFileModule.Engine.dll"));
            }

            // Otherwise, if that assembly has been loaded,
            // we must try to resolve its dependencies too
            if (s_engineLoaded
                && s_dependencies.TryGetValue(asmName.Name, out int requiredMajorVersion)
                && asmName.Version.Major == requiredMajorVersion)
            {
                string asmPath = System.IO.Path.Combine(s_dependenciesDirPath, $"{asmName.Name}.dll");
                return Assembly.LoadFile(asmPath);
            }

            return null;
        }
    }
}

Finally, the layout of the module is also similar to the ALC module:

  + LoadFileModule.Cmdlets.dll
  + LoadFileModule.psd1
  + Dependencies/
  |  + LoadFileModule.Engine.dll
  |  + CsvHelper.dll

With this structure in place, LoadFileModule now supports being loaded alongside other modules with a dependency on CsvHelper.

Because our handler will apply to all AssemblyResolve events across the Application Domain, we’ve needed to make some specific design choices here:

  • We only start handling general dependency loading after LoadFileModule.Engine.dll has been loaded, to narrow the window in which our event may interfere with other loading.
  • We push LoadFileModule.Engine.dll out into the Dependencies directory, so that it’s loaded by a LoadFile() call rather than by PowerShell. This means its own dependency loads will always raise an AssemblyResolve event, even if another CsvHelper.dll (for example) is loaded in PowerShell, meaning we have an opportunity to find the correct dependency.
  • We are forced to code a dictionary of dependency names and versions into our handler, since we must try only to resolve our specific dependencies for our module. In our particular case, it would be possible to use the ResolveEventArgs.RequestingAssembly property to work out whether CsvHelper is being requested by our module, but this wouldn’t work for dependencies of dependencies (for example if CsvHelper itself raised an AssemblyResolve event). There are other, harder ways to do this, such as comparing requested assemblies to the metadata of assemblies in the Dependencies directory, but here we’ve made this checking more explicit both to highlight the issue and to simplify the example.

Essentially this is the key difference between the LoadFile() strategy and the ALC strategy: the AssemblyResolve event must service all loads in the Application Domain, making it much harder to keep our dependency resolution from being tangled with other modules. However, the lack of ALCs in .NET Framework makes this option one worth understanding (even just for .NET Framework, while using an ALC in .NET Core).

Custom Application Domains

A final (and by some measures, extreme) option for assembly isolation is the use of custom Application Domains. Application Domains (used in the plural) are only available in .NET Framework, and are used to provide in-process isolation between parts of a .NET application. One of their uses is to isolate assembly loads from each other within the same process.

However, Application Domains are serialization boundaries, meaning that objects in one Application Domain cannot be referenced and used directly by objects in another Application Domain. This can be worked around by implementing MarshalByRefObject, but when you don’t control the types (as is often the case with dependencies), it’s not possible to force an implementation here, meaning that the only solution is to make large architectural changes. The serialization boundary also has serious performance implications.

Because Application Domains have this serious limitation, are complicated to implement, and only work in .NET Framework, I won’t give an example of how you might use them here. While they’re worth mentioning as a possibility, they aren’t an investment I would recommend.

If you’re interested in trying to use a custom Application Domain, the links in the Further reading section may help.

Solutions for dependency conflicts that don’t work for PowerShell

Finally, I should address some possibilities that come up when researching .NET dependency conflicts that can look promising, but generally won’t work for PowerShell.

These solutions have the common theme that they are changes in deployment configurations in an environment where you control the application and possibly the entire machine. This is because they are oriented toward scenarios like web servers and other applications deployed to server environments, where the environment is intended to host the application and is free to be configured by the deploying user.

They also tend to be very much .NET Framework focused, meaning they will not work with PowerShell 6 or above.

Note that if you know that your module will only be used in environments you have total control over, and only with Windows PowerShell 5.1 and below, some of these may be options.

In general however, modules should not modify global machine state like this, as it can break configurations, causing powershell.exe, other modules, or other dependent applications to stop working or fail in unexpected ways.

Static binding redirect with app.config to force using the same dependency version

.NET Framework applications can take advantage of an app.config file to configure some of the application’s behaviours declaratively. It’s possible to write an app.config entry that configures assembly binding to redirect assembly loading to a particular version.

Two issues with this for PowerShell are:

  • .NET Core does not support app.config, so this is a powershell.exe-only solution.
  • powershell.exe is a shared application that lives under the System32 directory. This means it’s likely your module won’t be able to modify its contents on many systems, and even if it can, modifying the app.config could break an existing configuration or affect the loading of other modules.

Setting codebase with app.config

For the same reasons as with setting binding redirects in app.config, trying to configure the app.config codebase setting is generally not going to work in PowerShell modules.

Installing your dependencies to the Global Assembly Cache (GAC)

Another way to resolve dependency version conflicts in .NET Framework is to install dependencies to the GAC, so that different versions can be loaded side-by-side from the GAC.

Again for PowerShell modules, the chief issues here are:

  • The GAC only applies to .NET Framework, so this will not help in PowerShell 6 and above.
  • Installing assemblies to the GAC is a modification of global machine state and may cause side-effects in other applications or to other modules. It may also be difficult to do correctly, even when your module has the required access privileges (and getting it wrong could cause serious issues in other .NET applications machine-wide).

Solving the issue in PowerShell itself…

After reading through all of this and seeing the complexity not just of implementing an isolation solution, but making it work with the PowerShell module system, you may wonder why PowerShell hasn’t put a solution to this problem into the module system yet.

While it’s not something we’re planning to implement imminently, the facility of Assembly Load Contexts available in .NET 5 makes this something worth considering long term.

However, this would represent a large change in the module system, which is a very critical (and already complex) component of PowerShell. In addition, the diversity of PowerShell modules and module scenarios presents a serious challenge in terms of PowerShell correctly isolating modules consistently and without creating edge cases.

In particular, dependencies will be exposed if they are part of a module’s API. For example, if a PowerShell module converts YAML strings to objects and uses YamlDotNet to return objects of type YamlDotNet.YamlDocument, then its dependencies are exposed to the global context.

This can lead to type identity issues when instances of the exposed type are used with APIs expecting the same type (but from a different assembly); specifically you may see confusing messages like “Type YamlDotNet.YamlDocument cannot be cast to type YamlDotNet.YamlDocument“, because even though the two types have the same name, .NET regards them as coming from different assemblies and therefore being different.

In order to safely isolate a dependency used in the module API, API types need to either be a type already found in PowerShell and its public dependencies, like PSObject or System.Xml.XmlDocument, or a new type defined by the module to be an intermediary between the PowerShell context and the dependency context.
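
As a sketch of that second option (all names here are hypothetical), a module wrapping YamlDotNet might expose an intermediary type of its own rather than YamlDotNet.YamlDocument:

using System.Collections.Generic;

namespace MyYamlModule
{
    // Hypothetical intermediary type: callers only ever see this class
    // and standard .NET types, never YamlDotNet's own types, so the
    // YamlDotNet assembly can stay isolated in the module's load context.
    public sealed class PsYamlDocument
    {
        public PsYamlDocument(IReadOnlyDictionary<string, object> values)
        {
            Values = values;
        }

        public IReadOnlyDictionary<string, object> Values { get; }

        // Inside the module, the dependency's document type would be
        // converted into this wrapper before being written to the pipeline.
    }
}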

It’s likely that any module isolation strategy could break some module, and that module authors will need, to some extent, to understand the implications of dependency isolation and be responsible for deciding whether their module can have its dependencies isolated.

So with that in mind, I encourage you to contribute to the discussion on GitHub. Also take a look at the RFC proposed to address the issue.

Further reading

There’s plenty more to read on the topic of .NET assembly version dependency conflicts. Here are some nice jumping off points:

Final notes

In this blog post, we looked over various ways to solve the issue of having module dependency conflicts in PowerShell, identifying strategies that won’t work, simple strategies that sometimes work, and more complex strategies that are more robust.

In particular we looked at how to implement an Assembly Load Context in .NET Core to make module dependency isolation much easier in PowerShell 6 and up.

This is fairly complicated subject matter, so don’t worry if it doesn’t click immediately. Feel free to read the material here and linked, experiment with implementations (try stepping through it in a debugger), and also get in touch with us on GitHub or Twitter.


Rob Holt

Software Engineer

PowerShell Team

The post Resolving PowerShell Module Assembly Dependency Conflicts appeared first on PowerShell.

PowerShellGet 3.0 Preview 3

This post was originally published on this site

PowerShellGet 3.0 preview 3 is now available on the PowerShell Gallery. The focus of this release is the -RequiredResource parameter for Install-PSResource, which now accepts JSON strings, hashtables, or .json files as input. For a full list of the issues addressed by this release, please refer to this GitHub project.

Note: For background information on PowerShellGet 3.0 please refer to the blog post for the first preview release and the second preview release.

How to install the module

To install this version of PowerShellGet open any PowerShell console and run: Install-Module -Name PowerShellGet -AllowPrerelease -Force

New Features of this release

New Features

  • RequiredResource parameter for Install-PSResource
  • RequiredResourceFile parameter for Install-PSResource
  • IncludeXML parameter in Save-PSResource

Bug Fix

  • Resolved paths in Install-PSResource and Save-PSResource

What is next

GitHub is the best place to track the bugs/feature requests related to this module. We have used a combination of projects and labels on our GitHub repo to track issues for this upcoming release. We are using the label Resolved-3.0 to mark issues that we plan to resolve before we release the module as GA (generally available). To track issues/features to particular preview and GA releases, we are using GitHub projects which are titled to reflect the release.

The focus feature for our next preview release (preview 4) is publishing functionality.

How to Give Feedback and Get Support

We cannot overstate how critical user feedback is at this stage in the development of the module. Feedback from preview releases helps inform design decisions without incurring breaking changes once the module is generally available and used in production. To help us make key decisions around the behavior of the module, please give us feedback by opening issues in our GitHub repository.

Sydney Smith, PowerShell Team

The post PowerShellGet 3.0 Preview 3 appeared first on PowerShell.

PowerShell 7 Video Series

This post was originally published on this site

As a part of our PowerShell 7 release, the PowerShell Team put together a series of videos explaining and demoing aspects of the release. The intent of these videos was for User Groups to host events celebrating and discussing PowerShell 7, however, in light of the current guidance against group gatherings, we have decided to make these videos available publicly on the PowerShell Team YouTube channel.

The content is broken down into 10 videos with a total runtime of 58 minutes. We have included links to the videos below, along with links to other related resources.

Helpful Links from the Videos

We hope you enjoy these videos! Please subscribe to the PowerShell Team YouTube channel for more video content from the PowerShell Team.


The post PowerShell 7 Video Series appeared first on PowerShell.