LiveFyre commenting will no longer be available on the PowerShell Gallery

This post was originally published on this site

Commenting on the PowerShell Gallery is provided by LiveFyre, a third-party comment system. LiveFyre is no longer supported by Adobe, and therefore we are unable to service issues as they arise. We have received reports of authentication failing for Twitter and Microsoft AAD, and unfortunately we are unable to bring back those services. As we cannot predict when more issues will occur, and we cannot fix issues as they arise, we must deprecate the use of LiveFyre on the PowerShell Gallery. As of May 1st, 2019, LiveFyre commenting will no longer be available on the PowerShell Gallery. Unfortunately, we are unable to migrate comments off of LiveFyre, so comment history will be lost.

How will package consumers be able to get support?

The other existing channels for getting support and contacting package owners will still be available on the Gallery. The left pane of the package page is the best place to get support. If you are looking to contact the package owner, select “Contact Owners” on the package page. If you are looking to contact Gallery support, use the “Report” button. If the package owner has provided a link to their project site in their module manifest, a link to their site is also available in the left pane and can be a good avenue for support. For more information on getting package support, please see our documentation.

Questions

We appreciate your understanding as we undergo this transition.
Please direct any questions to sysmith@microsoft.com.

The post LiveFyre commenting will no longer be available on the PowerShell Gallery appeared first on PowerShell.

New – Advanced Request Routing for AWS Application Load Balancers

AWS Application Load Balancers have been around since the summer of 2016! They support content-based routing, work well for serverless & container-based applications, and are highly scalable. Many AWS customers are using the existing host and path-based routing to power their HTTP and HTTPS applications, while also taking advantage of other ALB features such as port forwarding (great for container-based applications), health checks, service discovery, redirects, fixed responses, and built-in authentication.

Advanced Request Routing
The host-based routing feature allows you to write rules that use the Host header to route traffic to the desired target group. Today we are extending and generalizing this feature, giving you the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the query string, and the source IP address. We are also making the rules and conditions more powerful; rules can have multiple conditions (AND’ed together), and each condition can specify a match on multiple values (OR’ed).
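Expressed through the AWS CLI, a rule's conditions are a JSON list. The sketch below (the header values and CIDR blocks are illustrative, not from the post) combines an HTTP header match with a source IP match; both conditions must hold (AND), while the values inside each condition are alternatives (OR):

```json
[
  {
    "Field": "http-header",
    "HttpHeaderConfig": {
      "HttpHeaderName": "User-Agent",
      "Values": ["*Googlebot*", "*bingbot*"]
    }
  },
  {
    "Field": "source-ip",
    "SourceIpConfig": {
      "Values": ["10.0.0.0/16", "192.168.0.0/24"]
    }
  }
]
```

A file like this can be passed to aws elbv2 create-rule via --conditions file://conditions.json, together with --listener-arn, --priority, and --actions.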

You can use this new feature to simplify your application architecture, eliminate the need for a proxy fleet for routing, and to block unwanted traffic at the load balancer. Here are some use cases:

  • Separate bot/crawler traffic from human traffic.
  • Assign customers or groups of customers to cells (distinct target groups) and route traffic accordingly.
  • Implement A/B testing.
  • Perform canary or blue/green deployments.
  • Route traffic to microservice handlers based on method (PUTs to one target group and GETs to another, for example).
  • Implement access restrictions based on IP address or CDN.
  • Selectively route traffic to on-premises or in-cloud target groups.
  • Deliver different pages or user experiences to various types and categories of devices.

Using Advanced Request Routing
You can use this feature with your existing Application Load Balancers by simply editing your existing rules. I will start with a simple rule that returns a fixed, plain-text response (the examples in this post are for testing and illustrative purposes; I am sure that yours will be more practical and more interesting):

I can use curl to test it:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

I click Insert Rule to set up some advanced request routing:

Then I click Add condition and examine the options that are available to me:

I select Http header, and create a condition that looks for a cookie named user with value jeff. Then I create an action that returns a fixed response:

I click Save, wait a few seconds for the change to take effect, and then issue a pair of requests:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

$ curl --cookie "user=jeff" http://TestALB-156468799.elb.amazonaws.com
Hello Jeff

I can also create a rule that matches one or more CIDR blocks of IP addresses:

$ curl http://TestALB-156468799.elb.amazonaws.com
Hello EC2 Instance

I can match on the query string (this is very useful for A/B testing):

$ curl http://TestALB-156468799.elb.amazonaws.com?ABTest=A
A/B test, option A selected 

I can also use a wildcard if all I care about is the presence of a particular field name:

I can match a standard or custom HTTP method. Here, I will invent one called READ:

$ curl --request READ http://TestALB-156468799.elb.amazonaws.com
Custom READ method invoked

I have a lot of flexibility (not new, but definitely worth reviewing) when it comes to the actions:

Forward to routes the request to a target group (a set of EC2 instances, a Lambda function, or a list of IP addresses).

Redirect to generates a 301 (permanent) or 302 (found) response, and can also be used to switch between HTTP and HTTPS.

Return fixed response generates a static response with any desired response code, as I showed you earlier.

Authenticate uses Amazon Cognito or an OIDC provider to authenticate the request (applicable to HTTPS listeners only).
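As a sketch of the action side, here is the JSON form of a redirect action that upgrades HTTP requests to HTTPS with a permanent redirect; the #{...} placeholders preserve the original host, path, and query:

```json
[
  {
    "Type": "redirect",
    "RedirectConfig": {
      "Protocol": "HTTPS",
      "Port": "443",
      "Host": "#{host}",
      "Path": "/#{path}",
      "Query": "#{query}",
      "StatusCode": "HTTP_301"
    }
  }
]
```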

Things to Know
Here are a couple of other things that you should know about this cool and powerful new feature:

Metrics – You can look at the Rule Evaluations and HTTP fixed response count CloudWatch metrics to learn more about activity related to your rules (learn more):

Programmatic Access – You can also create, modify, examine, and delete rules using the ALB API and CLI (CloudFormation support will be ready soon).

Rule Matching – The rules are powered by string matching, so test well and double-check that your rules are functioning as intended. The matched_rule_priority and actions_executed fields in the ALB access logs can be helpful when debugging and testing (learn more).

Limits – Each ALB can have up to 100 rules, not including the defaults. Each rule can reference up to 5 values and can use up to 5 wildcards. The number of conditions is limited only by the number of unique values that are referenced.

Available Now
Advanced request routing is available now in all AWS regions at no extra charge (you pay the usual prices for the Application Load Balancer).

Jeff;

 

AWS App Mesh – Application-Level Networking for Cloud Applications

AWS App Mesh helps you to run and monitor HTTP and TCP services at scale. You get a consistent way to route and monitor traffic, giving you insight into problems and the ability to re-route traffic after failures or code changes. App Mesh uses the open source Envoy proxy, giving you access to a wide range of tools from AWS partners and the open source community.

Services can run on AWS Fargate, Amazon EC2, Amazon ECS, Amazon Elastic Container Service for Kubernetes, or Kubernetes. All traffic in and out of each service goes through the Envoy proxy so that it can be routed, shaped, measured, and logged. This extra level of indirection lets you build your services in any desired language without having to use a common set of communication libraries.

App Mesh Concepts
Before we dive in, let’s review a couple of important App Mesh concepts and components:

Service Mesh – A logical boundary for network traffic between the services that reside within it. A mesh can contain virtual services, virtual nodes, virtual routers, and routes.

Virtual Service – An abstraction (logical name) for a service that is provided directly (by a virtual node) or indirectly (through a virtual router). Services within a mesh use the logical names to reference and make use of other services.

Virtual Node – A pointer to a task group (an ECS service or a Kubernetes deployment) or a service running on one or more EC2 instances. Each virtual node can accept inbound traffic via listeners, and can connect to other virtual nodes via backends. Also, each node has a service discovery configuration (currently a DNS name) that allows other nodes to discover the IP addresses of the tasks, pods, or instances.

Virtual Router – A handler for one or more virtual services within a mesh. Each virtual router listens for HTTP traffic on a specific port.

Route – Routes use prefix-based matching on URLs to route traffic to virtual nodes, with optional per-node weights. The weights can be used to test new service versions in production while gradually increasing the amount of traffic that they handle.

Putting it all together, each service mesh contains a set of services that can be accessed by URL paths specified by routes. Within the mesh, services refer to each other by name.

I can access App Mesh from the App Mesh Console, the App Mesh CLI, or the App Mesh API. I’ll show you how to use the Console and take a brief look at the CLI.

Using the App Mesh Console
The console lets me create my service mesh and the components within it. I open the App Mesh Console and click Get started:

I enter the name of my mesh and my first virtual service (I can add more later), and click Next:

I define the first virtual node:

I can click Additional configuration to specify service backends (other services that this one can call) and logging:

I define my node’s listener via protocol (HTTP or TCP) and port, set up an optional health check, and click Next:

Next, I define my first virtual router and a route for it:

I can apportion traffic across several virtual nodes (targets) on a percentage basis, and I can use prefix-based routing for incoming traffic:

I review my choices and click Create mesh service:

The components are created in a few seconds and I am just about ready to go:

The final step is to update my task definitions (Amazon ECS or AWS Fargate) or pod specifications (Amazon EKS or Kubernetes) to reference the Envoy container image and the proxy container image. If my service is running on an EC2 instance, I will need to deploy Envoy there.

Using the AWS App Mesh Command Line
App Mesh lets you specify each type of component in a simple JSON form and provides you with command-line tools to create each one (create-mesh, create-virtual-service, create-virtual-node, and create-virtual-router). For example, I can define a virtual router in a file:

{
  "meshName": "mymesh",
  "spec": {
        "listeners": [
            {
                "portMapping": {
                    "port": 80,
                    "protocol": "http"
                }
            }
        ]
    },
  "virtualRouterName": "serviceA"
}

And create it with one command:

$ aws appmesh create-virtual-router --cli-input-json file://serviceA-router.json
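A route can be defined the same way. This sketch (the virtual node names are hypothetical) splits traffic 90/10 between two versions of a service, which is the mechanism behind the gradual rollouts described above:

```json
{
  "meshName": "mymesh",
  "virtualRouterName": "serviceA",
  "routeName": "serviceA-route",
  "spec": {
    "httpRoute": {
      "match": {
        "prefix": "/"
      },
      "action": {
        "weightedTargets": [
          { "virtualNode": "serviceA-v1", "weight": 90 },
          { "virtualNode": "serviceA-v2", "weight": 10 }
        ]
      }
    }
  }
}
```

It can then be created with aws appmesh create-route --cli-input-json file://serviceA-route.json.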

Now Available
AWS App Mesh is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions.

Jeff;

New – AWS Deep Learning Containers

We want to make it as easy as possible for you to learn about deep learning and to put it to use in your applications. If you know how to ingest large datasets, train existing models, build new models, and to perform inferences, you’ll be well-equipped for the future!

New Deep Learning Containers
Today I would like to tell you about the new AWS Deep Learning Containers. These Docker images are ready to use for deep learning training or inferencing using TensorFlow or Apache MXNet, with other frameworks to follow. We built these containers after our customers told us that they are using Amazon EKS and ECS to deploy their TensorFlow workloads to the cloud, and asked us to make that task as simple and straightforward as possible. While we were at it, we optimized the images for use on AWS with the goal of reducing training time and increasing inferencing performance.

The images are pre-configured and validated so that you can focus on deep learning, setting up custom environments and workflows on Amazon ECS, Amazon Elastic Container Service for Kubernetes, and Amazon Elastic Compute Cloud (EC2) in minutes! You can find them in AWS Marketplace and Elastic Container Registry, and use them at no charge. The images can be used as-is, or can be customized with additional libraries or packages.

Multiple Deep Learning Containers are available, with names based on the following factors (not all combinations are available):

  • Framework – TensorFlow or MXNet.
  • Mode – Training or Inference. You can train on a single node or on a multi-node cluster.
  • Environment – CPU or GPU.
  • Python Version – 2.7 or 3.6.
  • Distributed Training – Availability of the Horovod framework.
  • Operating System – Ubuntu 16.04.

Using Deep Learning Containers
In order to put an AWS Deep Learning Container to use, I create an Amazon ECS cluster with a p2.8xlarge instance:

$ aws ec2 run-instances --image-id ami-0ebf2c738e66321e6 \
  --count 1 --instance-type p2.8xlarge \
  --key-name keys-jbarr-us-east ...

I verify that the cluster is running, and check that the ECS Container Agent is active:

Then I create a task definition in a text file (gpu_task_def.txt):

{
  "requiresCompatibilities": [
    "EC2"
  ],
  "containerDefinitions": [
    {
      "command": [
        "tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=saved_model_half_plus_two_gpu  --model_base_path=/models/saved_model_half_plus_two_gpu"
      ],
      "entryPoint": [
        "sh",
        "-c"
      ],
      "name": "EC2TFInference",
      "image": "841569659894.dkr.ecr.us-east-1.amazonaws.com/sample_tf_inference_images:gpu_with_half_plus_two_model",
      "memory": 8111,
      "cpu": 256,
      "resourceRequirements": [
        {
          "type": "GPU",
          "value": "1"
        }
      ],
      "essential": true,
      "portMappings": [
        {
          "hostPort": 8500,
          "protocol": "tcp",
          "containerPort": 8500
        },
        {
          "hostPort": 8501,
          "protocol": "tcp",
          "containerPort": 8501
        },
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/TFInference",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "volumes": [],
  "networkMode": "bridge",
  "placementConstraints": [],
  "family": "Ec2TFInference"
}

I register the task definition and capture the revision number (3):

Next, I create a service using the task definition and revision number:

I use the console to navigate to the task:

Then I find the external binding for port 8501:

Then I run three inferences (this particular model was trained on the function y = ax + b, with a = 0.5 and b = 2):

$ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://xx.xxx.xx.xx:8501/v1/models/saved_model_half_plus_two_gpu:predict
{
    "predictions": [2.5, 3.0, 4.5
    ]
}

As you can see, the inference predicted the values 2.5, 3.0, and 4.5 when given inputs of 1.0, 2.0, and 5.0. This is a very, very simple example but it shows how you can use a pre-trained model to perform inferencing in ECS via the new Deep Learning Containers. You can also launch a model for training purposes, perform the training, and then run some inferences.
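Since the demo model simply computes y = 0.5x + 2, the expected predictions can be sanity-checked locally without the cluster. A small shell sketch (not part of the original walkthrough):

```shell
# The half_plus_two demo model computes y = 0.5*x + 2;
# reproduce its predictions locally to verify the inference output above.
half_plus_two() {
  awk -v x="$1" 'BEGIN { printf "%.1f\n", 0.5 * x + 2 }'
}

half_plus_two 1.0   # 2.5
half_plus_two 2.0   # 3.0
half_plus_two 5.0   # 4.5
```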

Jeff;

New – Concurrency Scaling for Amazon Redshift – Peak Performance at All Times

Amazon Redshift is a data warehouse that can expand to exabyte-scale. Today, tens of thousands of AWS customers (including NTT DOCOMO, Finra, and Johnson & Johnson) use Redshift to run mission-critical BI dashboards, analyze real-time streaming data, and run predictive analytics jobs.

A challenge arises when the number of concurrent queries grows at peak times. When a multitude of business analysts all turn to their BI dashboards, or long-running data science workloads compete with other workloads for resources, Redshift will queue queries until enough compute resources become available in the cluster. This ensures that all of the work gets done, but it can mean that performance is impacted at peak times. Two options present themselves:

  • Overprovision the cluster to meet peak needs. This option addresses the immediate issue, but wastes resources and costs more than necessary.
  • Optimize the cluster for typical workloads. This option forces you to wait longer for results at peak times, possibly delaying important business decisions.

New Concurrency Scaling
Today I would like to offer a third option. You can now configure Redshift to add more query processing power on an as-needed basis. This happens transparently and in a matter of seconds, and provides you with fast, consistent performance even as the workload grows to hundreds of concurrent queries. Additional processing power is ready in seconds and does not need to be pre-warmed or pre-provisioned. You pay only for what you use, with per-second billing, and you also accumulate one hour of concurrency scaling cluster credits every 24 hours while your main cluster is running. The extra processing power is removed when it is no longer needed, making this a perfect way to address the bursty use cases that I described above.

You can allocate the burst power to specific users or queues, and you can continue to use your existing BI and ETL applications. Concurrency Scaling Clusters are used to handle many forms of read-only queries, with additional flexibility in the works; read about Concurrency Scaling to learn more.

Using Concurrency Scaling
This feature can be enabled for an existing cluster in minutes! We recommend starting with a fresh Redshift Parameter Group for testing purposes, so I start by creating one:

Then I edit my cluster’s Workload Management Configuration, select the new parameter group, set the Concurrency Scaling Mode to auto, and click Save:
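Under the hood, this maps to the cluster parameter group's wlm_json_configuration parameter. A minimal sketch of a queue definition with Concurrency Scaling enabled (the queue settings shown are illustrative, not from this walkthrough):

```json
[
  {
    "query_group": [],
    "user_group": [],
    "concurrency_scaling": "auto",
    "query_concurrency": 5
  }
]
```

A configuration like this can be applied with aws redshift modify-cluster-parameter-group.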

I will use the Cloud Data Warehouse Benchmark Derived From TPC-DS as a source of test data and test queries. I download the DDL, customize it with my AWS credentials, and use psql to connect to my cluster and create the test data:

sample=# create database sample;
CREATE DATABASE
sample=# \connect sample
psql (9.2.24, server 8.0.2)
WARNING: psql version 9.2, server version 8.0.
         Some psql features might not work.
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
You are now connected to database "sample" as user "awsuser".
sample=# \i ddl.sql

The DDL creates the tables and populates them using data stored in an S3 bucket:

sample=# \dt
                 List of relations
 schema |          name          | type  |  owner
--------+------------------------+-------+---------
 public | call_center            | table | awsuser
 public | catalog_page           | table | awsuser
 public | catalog_returns        | table | awsuser
 public | catalog_sales          | table | awsuser
 public | customer               | table | awsuser
 public | customer_address       | table | awsuser
 public | customer_demographics  | table | awsuser
 public | date_dim               | table | awsuser
 public | dbgen_version          | table | awsuser
 public | household_demographics | table | awsuser
 public | income_band            | table | awsuser
 public | inventory              | table | awsuser
 public | item                   | table | awsuser
 public | promotion              | table | awsuser
 public | reason                 | table | awsuser
 public | ship_mode              | table | awsuser
 public | store                  | table | awsuser
 public | store_returns          | table | awsuser
 public | store_sales            | table | awsuser
 public | time_dim               | table | awsuser
 public | warehouse              | table | awsuser
 public | web_page               | table | awsuser
 public | web_returns            | table | awsuser
 public | web_sales              | table | awsuser
 public | web_site               | table | awsuser
(25 rows)

Then I download the queries and open up a bunch of PuTTY windows so that I can generate a meaningful load for my Redshift cluster:

I run an initial set of parallel queries and then ramp up over time. I can see them in the Cluster Performance tab for my cluster:

I can see the additional processing power come online as needed, and then go away when no longer needed, in the Database Performance tab:

As you can see, my cluster scales as needed in order to handle all of the queries as expeditiously as possible. The Concurrency Scaling Usage shows me how many seconds of additional processing power I have consumed (as I noted earlier, each cluster accumulates a full hour of concurrency credits every 24 hours).

I can use the parameter max_concurrency_scaling_clusters to control the number of Concurrency Scaling Clusters that can be used (the default limit is 10, but you can request an increase if you need more).

Available Today
You can start making use of Concurrency Scaling Clusters today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions, with more to come later this year.

Jeff;

 

New AMD EPYC-Powered Amazon EC2 M5ad and R5ad Instances

Last year I told you about our New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances. Built on the AWS Nitro System, these instances are powered by custom AMD EPYC processors running at 2.5 GHz. They are priced 10% lower than comparable EC2 M5 and R5 instances, and give you a new opportunity to balance your instance mix based on cost and performance.

Today we are adding M5ad and R5ad instances, both powered by custom AMD EPYC 7000 series processors and built on the AWS Nitro System.

M5ad and R5ad Instances
These instances add high-speed, low latency local (physically connected) block storage to the existing M5a and R5a instances that we launched late last year.

M5ad instances are designed for general purpose workloads such as web servers, app servers, dev/test environments, gaming, logging, and media processing. They are available in 6 sizes:

Instance Name   vCPUs  RAM      Local Storage         EBS-Optimized Bandwidth  Network Bandwidth
m5ad.large      2      8 GiB    1 x 75 GB NVMe SSD    Up to 2.120 Gbps         Up to 10 Gbps
m5ad.xlarge     4      16 GiB   1 x 150 GB NVMe SSD   Up to 2.120 Gbps         Up to 10 Gbps
m5ad.2xlarge    8      32 GiB   1 x 300 GB NVMe SSD   Up to 2.120 Gbps         Up to 10 Gbps
m5ad.4xlarge    16     64 GiB   2 x 300 GB NVMe SSD   2.120 Gbps               Up to 10 Gbps
m5ad.12xlarge   48     192 GiB  2 x 900 GB NVMe SSD   5 Gbps                   10 Gbps
m5ad.24xlarge   96     384 GiB  4 x 900 GB NVMe SSD   10 Gbps                  20 Gbps

R5ad instances are designed for memory-intensive workloads: data mining, in-memory analytics, caching, simulations, and so forth. The R5ad instances are available in 6 sizes:

Instance Name   vCPUs  RAM      Local Storage         EBS-Optimized Bandwidth  Network Bandwidth
r5ad.large      2      16 GiB   1 x 75 GB NVMe SSD    Up to 2.120 Gbps         Up to 10 Gbps
r5ad.xlarge     4      32 GiB   1 x 150 GB NVMe SSD   Up to 2.120 Gbps         Up to 10 Gbps
r5ad.2xlarge    8      64 GiB   1 x 300 GB NVMe SSD   Up to 2.120 Gbps         Up to 10 Gbps
r5ad.4xlarge    16     128 GiB  2 x 300 GB NVMe SSD   2.120 Gbps               Up to 10 Gbps
r5ad.12xlarge   48     384 GiB  2 x 900 GB NVMe SSD   5 Gbps                   10 Gbps
r5ad.24xlarge   96     768 GiB  4 x 900 GB NVMe SSD   10 Gbps                  20 Gbps

Again, these instances are available in the same sizes as the M5d and R5d instances, and the AMIs work on either, so go ahead and try both!

Here are some things to keep in mind about the local NVMe storage on the M5ad and R5ad instances:

Naming – You don’t have to specify a block device mapping in your AMI or during the instance launch; the local storage will show up as one or more devices (/dev/nvme*1 on Linux) after the guest operating system has booted.

Encryption – Each local NVMe device is hardware encrypted using the XTS-AES-256 block cipher and a unique key. Each key is destroyed when the instance is stopped or terminated.

Lifetime – Local NVMe devices have the same lifetime as the instance they are attached to, and do not stick around after the instance has been stopped or terminated.

M5ad and R5ad instances are available in the US East (N. Virginia), US West (Oregon), US East (Ohio), and Asia Pacific (Singapore) Regions in On-Demand, Spot, and Reserved Instance form.

Jeff;

 

New Amazon S3 Storage Class – Glacier Deep Archive

Many AWS customers collect and store large volumes (often a petabyte or more) of important data but seldom access it. In some cases raw data is collected and immediately processed, then stored for years or decades just in case there’s a need for further processing or analysis. In other cases, the data is retained for compliance or auditing purposes. Here are some of the industries and use cases that fit this description:

Financial – Transaction archives, activity & audit logs, and communication logs.

Health Care / Life Sciences – Electronic medical records, images (X-Ray, MRI, or CT), genome sequences, records of pharmaceutical development.

Media & Entertainment – Media archives and raw production footage.

Physical Security – Raw camera footage.

Online Advertising – Clickstreams and ad delivery logs.

Transportation – Vehicle telemetry, video, RADAR, and LIDAR data.

Science / Research / Education – Research input and results, including data relevant to seismic tests for oil & gas exploration.

Today we are introducing a new and even more cost-effective way to store important, infrequently accessed data in Amazon S3.

Amazon S3 Glacier Deep Archive Storage Class
The new Glacier Deep Archive storage class is designed to provide durable and secure long-term storage for large amounts of data at a price that is competitive with off-premises tape archival services. Data is stored across 3 or more AWS Availability Zones and can be retrieved in 12 hours or less. You no longer need to deal with expensive and finicky tape drives, arrange for off-premises storage, or worry about migrating data to newer generations of media.

Your existing S3-compatible applications, tools, code, scripts, and lifecycle rules can all take advantage of Glacier Deep Archive storage. You can specify the new storage class when you upload objects, alter the storage class of existing objects manually or programmatically, or use lifecycle rules to arrange for migration based on object age. You can also make use of other S3 features such as Storage Class Analysis, Object Tagging, Object Lock, and Cross-Region Replication.

The existing S3 Glacier storage class allows you to access your data in minutes (using expedited retrieval) and is a good fit for data that requires faster access. To learn more about the entire range of options, read Storage Classes in the S3 Developer Guide. If you are already making use of the Glacier storage class and rarely access your data, you can switch to Deep Archive and begin to see cost savings right away.

Using Glacier Deep Archive Storage – Console
I can switch the storage class of an existing S3 object to Glacier Deep Archive using the S3 Console. I locate the file and click Properties:

Then I click Storage class:

Next, I select Glacier Deep Archive and click Save:

I cannot download the object or edit any of its properties or permissions after I make this change:

In the unlikely event that I need to access this 2013-era video, I select it and choose Restore from the Actions menu:

Then I specify the number of days to keep the restored copy available, and choose either bulk or standard retrieval:
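The same restore can be requested from the command line. A sketch of the request body (the seven-day window and Bulk tier are illustrative choices):

```json
{
  "Days": 7,
  "GlacierJobParameters": {
    "Tier": "Bulk"
  }
}
```

This can be passed to aws s3api restore-object --bucket my-bucket --key my-video.mov --restore-request file://restore.json (the bucket and key names here are hypothetical).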

Using Glacier Deep Archive Storage – Lifecycle Rules
I can also use S3 lifecycle rules. I select the bucket and click Management, then select Lifecycle:

Then I click Add lifecycle rule and create my rule. I enter a name (ArchiveOldMovies), and can optionally use a path or tag filter to limit the scope of the rule:

Next, I indicate that I want the rule to apply to the Current version of my objects, and specify that I want my objects to transition to Glacier Deep Archive 30 days after they are created:
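The equivalent lifecycle configuration in JSON form, as accepted by aws s3api put-bucket-lifecycle-configuration (the movies/ prefix is an assumption for illustration):

```json
{
  "Rules": [
    {
      "ID": "ArchiveOldMovies",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "movies/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ]
    }
  ]
}
```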

Using Glacier Deep Archive – CLI / Programmatic Access
I can use the CLI to upload a new object and set the storage class:

$ aws s3 cp new.mov s3://awsroadtrip-videos-raw/ --storage-class DEEP_ARCHIVE

I can also change the storage class of an existing object by copying it over itself:

$ aws s3 cp s3://awsroadtrip-videos-raw/new.mov s3://awsroadtrip-videos-raw/new.mov --storage-class DEEP_ARCHIVE

If I am building a system that manages archiving and restoration, I can opt to receive notifications on an SNS topic, an SQS queue, or a Lambda function when a restore is initiated and/or completed:

Other Access Methods
You can also use the Tape Gateway configuration of AWS Storage Gateway to create a Virtual Tape Library (VTL) and configure it to use Glacier Deep Archive for storage of archived virtual tapes. This will allow you to move your existing tape-based backups to the AWS Cloud without making any changes to your existing backup workflows. You can retrieve virtual tapes archived in Glacier Deep Archive to S3 within twelve hours. With Tape Gateway and S3 Glacier Deep Archive, you no longer need on-premises physical tape libraries, and you don’t need to manage hardware refreshes and rewrite data to new physical tapes as technologies evolve. For more information, visit the Test Your Gateway Setup with Backup Software page of the Storage Gateway User Guide.

Now Available
The S3 Glacier Deep Archive storage class is available today in all commercial regions and in both AWS GovCloud regions. Pricing varies by region, and the storage cost is up to 75% less than for the existing S3 Glacier storage class; visit the S3 Pricing page for more information.

Jeff;

PowerShell ScriptAnalyzer Version 1.18.0 Released

PSScriptAnalyzer (PSSA) 1.18.0 is now available on the PSGallery and brings a lot of improvements in the following areas:

  • Better compatibility analysis of commands, types and syntax across different platforms and versions of PowerShell
  • Better formatting and customization. New capabilities are:
    • Multi-line pipeline indentation styles
    • Cmdlet casing for better consistency and readability
    • Consistent whitespace inside braces and pipes
  • Custom rules can now be suppressed and preserve the RuleSuppressionID
  • Better DSC support by being able to understand different syntaxes of Import-DscResource
  • Better user experience: Invoke-ScriptAnalyzer now accepts pipeline input, and tab completion was added for the returned objects when they are piped to the next command in the pipeline
  • Better handling of parsing errors by emitting them as a diagnostic record with a new Severity type
  • Improved Performance: Expect it to be about twice as fast in most cases and even more when re-analyzing a file. More on this below
  • Fixes and enhancements to the engine, rules, and documentation

There are some minor breaking changes, such as requiring a minimum PowerShell Core version of 6.1, as 6.0 has reached the end of its support lifecycle. With this, it was possible to update the version of Newtonsoft.Json used to 11.0.2. On Windows PowerShell, the minimum required runtime was raised from 4.5.0 to 4.5.2, which is the lowest version still supported by Microsoft; Windows Update will have taken care of upgrading to this patched version anyway, so no disruption is expected. We have also replaced the old command data files of PowerShell 6.0 with a newer version for the UseCompatibleCmdlets rule.

Formatting

New rules and new features/customization of existing rules were added; in an upcoming release of the PowerShell vscode extension, those new features will also be configurable from within the extension.

New PSUseConsistentWhitespace options

The PSUseConsistentWhitespace rule has 2 new configuration options that are both enabled by default:

  • CheckInnerBrace: Checks for a space after the opening brace and a space before the closing brace. E.g. if ($true) { foo } instead of if ($true) {foo}.
  • CheckPipe: Checks if a pipe is surrounded on both sides by a space. E.g. foo | bar instead of foo|bar.

In an upcoming update of the PowerShell vscode extension, this feature will be configurable in the editor via the settings powershell.WhitespaceInsideBrace and powershell.WhitespaceAroundPipe.
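A minimal settings file enabling the two new checks might look like this (both are on by default, so this is only illustrative; the file name is an assumption):

```powershell
# PSScriptAnalyzerSettings.psd1 - pass via Invoke-ScriptAnalyzer -Settings
@{
    Rules = @{
        PSUseConsistentWhitespace = @{
            Enable          = $true
            CheckInnerBrace = $true   # require a space inside { ... }
            CheckPipe       = $true   # require a space on both sides of |
        }
    }
}
```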

New PipelineIndentation option for PSUseConsistentIndentation

The PSUseConsistentIndentation rule was fixed to handle multi-line pipelines (before, the behavior was a bit ill-defined), and as part of that we decided to expose 3 choices for a new configuration option called PipelineIndentation. This allows PSSA to cater to different user tastes on whether to increase indentation after a pipeline in multi-line statements. The options are:

  • IncreaseIndentationForFirstPipeline (default): Indent once after the first pipeline and keep this indentation. Example:
foo |
    bar |
    baz
  • IncreaseIndentationAfterEveryPipeline: Indent further after every pipeline. Example:
foo |
    bar |
        baz
  • NoIndentation: Do not increase indentation. Example:
foo |
bar |
baz

In an upcoming update of the PowerShell vscode extension, this feature will be configurable in the editor via the powershell.codeFormatting settings.
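In a settings file, the new option could be configured like this sketch (value shown is the default; the other two string values are the alternatives listed above):

```powershell
@{
    Rules = @{
        PSUseConsistentIndentation = @{
            Enable              = $true
            # One of: 'IncreaseIndentationForFirstPipeline' (default),
            # 'IncreaseIndentationAfterEveryPipeline', 'NoIndentation'
            PipelineIndentation = 'IncreaseIndentationForFirstPipeline'
        }
    }
}
```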

New PSUseConsistentCasing rule

By popular request, this rule can correct the casing of cmdlet names. It can correct e.g. get-azadapplicaTION to Get-AzADApplication, which not only makes code more consistent but also improves readability in most cases. In an upcoming update of the PowerShell vscode extension, this feature will be configurable in the editor via the powershell.useCorrectCasing setting.
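As a hedged sketch, assuming the rule is exposed to Invoke-ScriptAnalyzer under the name PSUseCorrectCasing (inferred from the editor setting name; the script path is hypothetical):

```powershell
# Auto-correct cmdlet casing in a file via the -Fix switch.
Invoke-ScriptAnalyzer -Path .\script.ps1 -Fix -Settings @{
    Rules = @{ PSUseCorrectCasing = @{ Enable = $true } }
}
```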

Compatibility Analysis

The UseCompatibleCmdlets rule requires JSON files in the Settings folder of PSSA’s installation; each file name maps directly to a compatibility configuration. In the new version we have replaced the JSON files for PowerShell 6.0 with files for 6.1 and also added new files for e.g. ARM on Linux (Raspbian) and for PowerShell 2.0, which is still used by some despite being deprecated. If desired, you can always add custom JSON files to the Settings folder and they will just work, keyed by file name, without the need to re-compile. To generate a custom JSON file for your environment, you can use the New-CommandDataFile.ps1 script.
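A rough sketch of that workflow (the destination path is an assumption and will vary with your PSSA install location):

```powershell
# Run the generator on the target platform/PowerShell version, then copy the
# resulting JSON into PSScriptAnalyzer's Settings folder so UseCompatibleCmdlets
# can pick it up by file name - no re-compilation needed.
.\New-CommandDataFile.ps1
$settingsDir = Join-Path (Split-Path (Get-Module PSScriptAnalyzer -ListAvailable)[0].Path) 'Settings'
Copy-Item .\*.json -Destination $settingsDir
```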

To further extend the analysis, 3 more rules were added.

These rules do not follow the definition style of the UseCompatibleCmdlets rule. For usage and examples, please refer to the documentation of the 3 new rules; a more detailed blog post about them will follow in the future.

Better DSC Support

Invoke-ScriptAnalyzer has a -SaveDscDependency switch that downloads the required module from the PSGallery to allow parsing of DSC files. In order to do that, it has to parse calls to Import-DscResource correctly. Previously it could neither take the version into account nor parse the hashtable syntax (Import-DscResource -ModuleName (@{ModuleName="SomeDscModule1";ModuleVersion="1.2.3.4"})). We added support for both. But because there can be different variations of the first form (different parameter order, not using named parameters, etc.), please use it in the form Import-DscResource -ModuleName MyModuleName -ModuleVersion 1.2.3.4.
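For illustration, the recommended parse-friendly form inside a configuration (module name, version, and node are all hypothetical):

```powershell
Configuration Sample {
    # Named parameters in this order are the form -SaveDscDependency
    # is guaranteed to understand.
    Import-DscResource -ModuleName MyDscModule -ModuleVersion 1.2.3.4

    Node 'localhost' {
        # DSC resources from MyDscModule would go here
    }
}
```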

Custom Rules

We added the capability to suppress violations from custom rules the same way you can already suppress the built-in rules. It is worth noting, though, that the rule name of a custom rule has to be of the format CustomRuleModuleFileNameCustomRuleName; this uniquely identifies the rule, since 2 custom rule modules could emit a rule of the same name.
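A hypothetical sketch of such a suppression (module and rule names are invented; the combined identifier follows the format described above):

```powershell
function Get-Widget {
    # Suppress a violation emitted by the custom rule 'Measure-AvoidWidget'
    # from the custom rule module file 'MyRuleModule'.
    [System.Diagnostics.CodeAnalysis.SuppressMessageAttribute(
        'MyRuleModuleMeasure-AvoidWidget', '')]
    param()
}
```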

When a custom rule emits a DiagnosticRecord, the engine has to translate all properties of the object, as it has to be re-created when emitted via Invoke-ScriptAnalyzer. We already added translation of the SuggestedCorrections property in the last release (1.17.1) to allow auto-correction in the editor or via the -Fix switch. However, we found that customers also want to use the RuleSuppressionID property in their custom rules, so we added translation for it as well.

Engine Improvements

PSScriptAnalyzer is highly multi-threaded, executing each rule (excluding custom or DSC rules) in parallel, each in its own thread. But there are some global resources, such as a CommandInfo cache, that must be accessed under a lock. Caching and lock granularity have been improved, reducing contention and leading to much better performance. You can expect PSScriptAnalyzer to be about twice as fast for ‘cold runs’ (where Invoke-ScriptAnalyzer has not been called before) and magnitudes faster when re-analyzing the same file. As a user, this means you will see squiggles faster when opening a new file in VS Code and get faster updates while editing a file, with reduced CPU consumption in the background. We have more optimizations planned in this area, so you can expect further improvements of a similar scale in future versions, and we hope to release future versions more frequently as well.
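A rough way to observe the cold-run vs. cached-run difference yourself (the script path is hypothetical, and the exact speedup will vary by machine and file):

```powershell
# First call pays the cold-start cost; the second benefits from the
# improved caching and should complete much faster.
Measure-Command { Invoke-ScriptAnalyzer -Path .\MyScript.ps1 }  # cold run
Measure-Command { Invoke-ScriptAnalyzer -Path .\MyScript.ps1 }  # cached run
```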

Miscellaneous Fixes

We received reports of some functionality not working under the Turkish culture and made a fix for it; as part of that, we reviewed some culture-sensitive code paths and made sure they work better across all cultures. The bug was very specific to the Turkish culture, so we are confident that PSSA should work well with other cultures too.

The Changelog has more details on the various fixes that were made to other rules.

On behalf of the Script Analyzer team,

Chris Bergmeister, Project Maintainer
Jim Truher, Senior Software Engineer, Microsoft

The post PowerShell ScriptAnalyzer Version 1.18.0 Released appeared first on PowerShell.

New – Gigabit Connectivity Options for Amazon Direct Connect

This post was originally published on this site

AWS Direct Connect gives you the ability to create private network connections between your datacenter, office, or colocation environment and AWS. The connections start at your network and end at one of 91 AWS Direct Connect locations and can reduce your network costs, increase throughput, and deliver a more consistent experience than an Internet-based connection. In most cases you will need to work with an AWS Direct Connect Partner to get your connection set up.

As I prepared to write this post, I learned that my understanding of AWS Direct Connect was incomplete, and that the name actually encompasses three distinct models. Here’s a summary:

Dedicated Connections are available with 1 Gbps and 10 Gbps capacity. You use the AWS Management Console to request a connection, after which AWS will review your request and either follow up via email to request additional information or provision a port for your connection. Once AWS has provisioned a port for you, the remaining time to complete the connection by the AWS Direct Connect Partner will vary between days and weeks. A Dedicated Connection is a physical Ethernet port dedicated to you. Each Dedicated Connection supports up to 50 Virtual Interfaces (VIFs). To get started, read Creating a Connection.

Hosted Connections are available with 50 to 500 Mbps capacity, and connection requests are made via an AWS Direct Connect Partner. After the AWS Direct Connect Partner establishes a network circuit to your premises, capacity to AWS Direct Connect can be added or removed on demand by adding or removing Hosted Connections. Each Hosted Connection supports a single VIF; you can obtain multiple VIFs by acquiring multiple Hosted Connections. The AWS Direct Connect Partner provisions the Hosted Connection and sends you an invite, which you must accept (with a click) in order to proceed.

Hosted Virtual Interfaces are also set up via AWS Direct Connect Partners. A Hosted Virtual Interface has access to all of the available capacity on the network link between the AWS Direct Connect Partner and an AWS Direct Connect location. The network link between the AWS Direct Connect Partner and the AWS Direct Connect location is shared by multiple customers and could possibly be oversubscribed. Due to the possibility of oversubscription in the Hosted Virtual Interface model, we no longer allow new AWS Direct Connect Partner service integrations using this model and recommend that customers with workloads sensitive to network congestion use Dedicated or Hosted Connections.

Higher Capacity Hosted Connections
Today we are announcing Hosted Connections with 1, 2, 5, or 10 Gbps of capacity. These capacities will be available through a select set of AWS Direct Connect Partners who have been specifically approved by AWS. We are also working with AWS Direct Connect Partners to implement additional monitoring of the network link between the AWS Direct Connect Partners and AWS.

Most AWS Direct Connect Partners support adding or removing Hosted Connections on demand. Suppose that you archive a massive amount of data to Amazon Glacier at the end of every quarter, and that you already have a pair of resilient 10 Gbps circuits from your AWS Direct Connect Partner for use by other parts of your business. You could then create a pair of resilient 1, 2, 5, or 10 Gbps Hosted Connections at the end of the quarter, upload your data to Glacier, and then delete the Hosted Connections.

You pay AWS for the port-hour charges while the Hosted Connections are in place, along with any associated data transfer charges (see the Direct Connect Pricing page for more info). Check with your AWS Direct Connect Partner for the charges associated with their services. You get a cost-effective, elastic way to move data to the cloud while creating Hosted Connections only when needed.

Available Now
The new higher capacity Hosted Connections are available through select AWS Direct Connect Partners after they are approved by AWS.

Jeff;

PS – As part of this launch, we are reducing the prices for the existing 200, 300, 400, and 500 Mbps Hosted Connection capacities by 33.3%, effective March 1, 2019.

 

Serial port locked after Win10 Pro VM/Service restart

This post was originally published on this site

ESXi 6.7 on HP ProLiant DL380 G9 and Win10 Pro VM.

 

Physical serial port assigned to VM and works fine under control of a service. It is assigned to device /dev/char/serial/uart0.

Runs for days without a glitch.

 

If I restart the VM or the service to update a piece of code or a configuration, the serial port is not available anymore, no matter how many times I do resets.

 

To recover, I need to:

– inhibit the serial port in the VM’s Device Manager

– restart the service

– activate the serial port in the VM’s Device Manager

– restart the service again.

– then it works … until I need to restart the service another time …

 

Does anyone have an idea what the cause may be … and, most importantly, what the solution is?

 

Thanks