LiveFyre commenting will no longer be available on the PowerShell Gallery

Commenting on the PowerShell Gallery is provided by LiveFyre, a third-party comment system. LiveFyre is no longer supported by Adobe, so we are unable to service issues as they arise. We have received reports of authentication failing for Twitter and Microsoft AAD, and unfortunately we are unable to restore those services. Because we cannot predict when more issues will occur, and we cannot fix issues as they arise, we must deprecate the use of LiveFyre on the PowerShell Gallery. As of May 1st, 2019, LiveFyre commenting will no longer be available on the PowerShell Gallery. Unfortunately, we are unable to migrate comments off of LiveFyre, so comment history will be lost.

How will package consumers be able to get support?

The other existing channels for getting support and contacting package owners will still be available on the Gallery. The left pane of the package page is the best place to get support. If you are looking to contact the package owner, select “Contact Owners” on the package page; if you are looking to contact Gallery support, use the “Report” button. If the package owner has provided a link to their project site in their module manifest, that link also appears in the left pane and can be a good avenue for support. For more information on getting package support, please see our documentation.

Questions

We appreciate your understanding as we undergo this transition.
Please direct any questions to sysmith@microsoft.com.

VMware Releases Security Updates

Original release date: March 29, 2019

VMware has released security updates to address vulnerabilities in multiple products. An attacker could exploit some of these vulnerabilities to take control of an affected system.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the VMware Security Advisories VMSA-2019-0004 and VMSA-2019-0005 and apply the necessary updates.


Special Webcast: Best Practices for Network Security Resilience – March 28, 2019 10:30am US/Eastern

Speakers: Jon Oltsik and Keith Bromley

It’s not a question of IF your network will be breached, but WHEN. News broadcasts over the last several years have shown that most enterprise networks will be hacked at some point. In addition, it usually takes IT departments months to notice an intrusion: over six months, according to the Ponemon Institute. This gives hackers plenty of time to find and exfiltrate whatever information they want. There are some clear things that you can do to minimize your corporate risk and the potential costs of a breach. One new approach is to create a resilient security architecture model. The intent of this model is to create a solution that gets the network back up and running as fast as possible after a breach has occurred. While prevention should always be a key security architecture goal, a resilient architecture focuses on recognizing the breach, investigating it, and then remediating the damage as quickly as possible.

At this webinar you’ll learn:

  • What network security resilience is
  • The benefits of this security technique
  • Examples of visibility and security solutions that you can implement to reduce the time to remediation

Cisco Releases Security Update for Cisco IOS XE

Original release date: March 28, 2019

Cisco has released a security update to address a vulnerability in Cisco IOS XE. An attacker could exploit this vulnerability to obtain sensitive information.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the Cisco Security Advisory and apply the necessary update.


Analyst Webcast: Taming the Endpoint Chaos Within: A Review of Panda Security Adaptive Defense 360 – March 27, 2019 11:00am US/Eastern

Speakers: Justin Henderson and Rui Lopes

The idea of 100% prevention often seems out of reach, but there are solutions that combine artificial intelligence and managed services in an effort to enhance organizations’ abilities to stop attackers before they damage endpoints, and to seamlessly investigate and reduce the attack surface.

To survive in a world in which attackers deploy automated malware and targeted adversarial attacks, organizations need endpoint platforms that provide automated protection and detection mechanisms. However, it is important—and often difficult—to establish a balance between automation of protection/detection and ease of maintaining such a solution.

In this webcast, SANS Analyst Justin Henderson reviews Panda Adaptive Defense 360, which is reported to provide cybersecurity at scale, and its capability to improve prevention of security incidents. Attendees will learn to:

  • Realize the significance of using endpoint protection, detection and response capabilities in concert to stop attackers before they gain a foothold in an endpoint
  • Appreciate the value of 100% attestation in reducing the number of incidents that require investigation
  • Understand the progression of endpoint protection from auditing to hardening to locking down the endpoint
  • Investigate attacks on endpoints through visualization tools
  • Make the most of options to lock down infected endpoints

Register today to be among the first to receive the associated whitepaper written by SANS Analyst and endpoint security expert Justin Henderson.

About Panda Security

Panda Security is the leading Spanish multinational offering advanced cybersecurity solutions and services, as well as management and monitoring tools.

Consistently maintaining a spirit of innovation, Panda has marked several historical milestones in the industry. Currently, the development of advanced cybersecurity technologies has become the core of its business.

New – Advanced Request Routing for AWS Application Load Balancers

AWS Application Load Balancers have been around since the summer of 2016! They support content-based routing, work well for serverless & container-based applications, and are highly scalable. Many AWS customers are using the existing host and path-based routing to power their HTTP and HTTPS applications, while also taking advantage of other ALB features such as port forwarding (great for container-based applications), health checks, service discovery, redirects, fixed responses, and built-in authentication.

Advanced Request Routing
The host-based routing feature allows you to write rules that use the Host header to route traffic to the desired target group. Today we are extending and generalizing this feature, giving you the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the query string, and the source IP address. We are also making the rules and conditions more powerful; rules can have multiple conditions (AND’ed together), and each condition can specify a match on multiple values (OR’ed).

You can use this new feature to simplify your application architecture, eliminate the need for a proxy fleet for routing, and block unwanted traffic at the load balancer. Here are some use cases:

  • Separate bot/crawler traffic from human traffic.
  • Assign customers or groups of customers to cells (distinct target groups) and route traffic accordingly.
  • Implement A/B testing.
  • Perform canary or blue/green deployments.
  • Route traffic to microservice handlers based on method (PUTs to one target group and GETs to another, for example).
  • Implement access restrictions based on IP address or CDN.
  • Selectively route traffic to on-premises or in-cloud target groups.
  • Deliver different pages or user experiences to various types and categories of devices.

Using Advanced Request Routing
You can use this feature with your existing Application Load Balancers by simply editing your existing rules. I will start with a simple rule that returns a fixed, plain-text response (the examples in this post are for testing and illustrative purposes; I am sure that yours will be more practical and more interesting):

I can use curl to test it:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

I click Insert Rule to set up some advanced request routing:

Then I click Add condition and examine the options that are available to me:

I select Http header, and create a condition that looks for a cookie named user with value jeff. Then I create an action that returns a fixed response:

I click Save, wait a few seconds for the change to take effect, and then issue a pair of requests:

$ curl http://TestALB-156468799.elb.amazonaws.com
Default rule reached!

$ curl --cookie "user=jeff" http://TestALB-156468799.elb.amazonaws.com
Hello Jeff

I can also create a rule that matches one or more CIDR blocks of IP addresses:

$ curl http://TestALB-156468799.elb.amazonaws.com
Hello EC2 Instance

I can match on the query string (this is very useful for A/B testing):

$ curl http://TestALB-156468799.elb.amazonaws.com?ABTest=A
A/B test, option A selected 

I can also use a wildcard if all I care about is the presence of a particular field name:
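
In JSON form (as accepted by the ALB API and CLI), such a condition might look something like the following sketch, which carries over the ABTest field from the example above and matches any value:

[
  {
    "Field": "query-string",
    "QueryStringConfig": {
      "Values": [
        { "Key": "ABTest", "Value": "*" }
      ]
    }
  }
]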

I can match a standard or custom HTTP method. Here, I will invent one called READ:

$ curl --request READ http://TestALB-156468799.elb.amazonaws.com
Custom READ method invoked

I have a lot of flexibility (not new, but definitely worth reviewing) when it comes to the actions:

Forward to routes the request to a target group (a set of EC2 instances, a Lambda function, or a list of IP addresses).

Redirect to generates a 301 (permanent) or 302 (found) response, and can also be used to switch between HTTP and HTTPS.

Return fixed response generates a static response with any desired response code, as I showed you earlier.

Authenticate uses Amazon Cognito or an OIDC provider to authenticate the request (applicable to HTTPS listeners only).

Things to Know
Here are a couple of other things that you should know about this cool and powerful new feature:

Metrics – You can look at the Rule Evaluations and HTTP fixed response count CloudWatch metrics to learn more about activity related to your rules (learn more):

Programmatic Access – You can also create, modify, examine, and delete rules using the ALB API and CLI (CloudFormation support will be ready soon).
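
For example, the cookie-based rule shown earlier could be created from the command line. Here is a sketch, with a hypothetical listener ARN:

$ aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/TestALB/0123456789abcdef/0123456789abcdef \
    --priority 10 \
    --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"Cookie","Values":["*user=jeff*"]}}]' \
    --actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"200","ContentType":"text/plain","MessageBody":"Hello Jeff"}}]'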

Rule Matching – The rules are powered by string matching, so test well and double-check that your rules are functioning as intended. The matched_rule_priority and actions_executed fields in the ALB access logs can be helpful when debugging and testing (learn more).

Limits – Each ALB can have up to 100 rules, not including the defaults. Each rule can reference up to 5 values and can use up to 5 wildcards. The number of conditions is limited only by the number of unique values that are referenced.

Available Now
Advanced request routing is available now in all AWS regions at no extra charge (you pay the usual prices for the Application Load Balancer).

Jeff;

AWS App Mesh – Application-Level Networking for Cloud Applications

AWS App Mesh helps you to run and monitor HTTP and TCP services at scale. You get a consistent way to route and monitor traffic, giving you insight into problems and the ability to re-route traffic after failures or code changes. App Mesh uses the open source Envoy proxy, giving you access to a wide range of tools from AWS partners and the open source community.

Services can run on AWS Fargate, Amazon EC2, Amazon ECS, Amazon Elastic Container Service for Kubernetes, or Kubernetes. All traffic in and out of each service goes through the Envoy proxy so that it can be routed, shaped, measured, and logged. This extra level of indirection lets you build your services in any desired language without having to use a common set of communication libraries.

App Mesh Concepts
Before we dive in, let’s review a couple of important App Mesh concepts and components:

Service Mesh – A logical boundary for network traffic between the services that reside within it. A mesh can contain virtual services, virtual nodes, virtual routers, and routes.

Virtual Service – An abstraction (logical name) for a service that is provided directly (by a virtual node) or indirectly (through a virtual router). Services within a mesh use the logical names to reference and make use of other services.

Virtual Node – A pointer to a task group (an ECS service or a Kubernetes deployment) or a service running on one or more EC2 instances. Each virtual node can accept inbound traffic via listeners, and can connect to other virtual nodes via backends. Also, each node has a service discovery configuration (currently a DNS name) that allows other nodes to discover the IP addresses of the tasks, pods, or instances.

Virtual Router – A handler for one or more virtual services within a mesh. Each virtual router listens for HTTP traffic on a specific port.

Route – Routes use prefix-based matching on URLs to route traffic to virtual nodes, with optional per-node weights. The weights can be used to test new service versions in production while gradually increasing the amount of traffic that they handle.

Putting it all together, each service mesh contains a set of services that can be accessed by URL paths specified by routes. Within the mesh, services refer to each other by name.
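
As a concrete illustration, here is a sketch of a virtual node definition in the JSON form accepted by the API and CLI (the mesh, node, and DNS names here are hypothetical):

{
  "meshName": "mymesh",
  "virtualNodeName": "serviceB-v1",
  "spec": {
    "listeners": [
      { "portMapping": { "port": 80, "protocol": "http" } }
    ],
    "serviceDiscovery": {
      "dns": { "hostname": "serviceB-v1.mymesh.local" }
    },
    "backends": [
      { "virtualService": { "virtualServiceName": "serviceA.mymesh.local" } }
    ]
  }
}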

I can access App Mesh from the App Mesh Console, the App Mesh CLI, or the App Mesh API. I’ll show you how to use the Console and take a brief look at the CLI.

Using the App Mesh Console
The console lets me create my service mesh and the components within it. I open the App Mesh Console and click Get started:

I enter the name of my mesh and my first virtual service (I can add more later), and click Next:

I define the first virtual node:

I can click Additional configuration to specify service backends (other services that this one can call) and logging:

I define my node’s listener via protocol (HTTP or TCP) and port, set up an optional health check, and click Next:

Next, I define my first virtual router and a route for it:

I can apportion traffic across several virtual nodes (targets) on a percentage basis, and I can use prefix-based routing for incoming traffic:

I review my choices and click Create mesh service:

The components are created in a few seconds and I am just about ready to go:

The final step is to update my task definitions (Amazon ECS or AWS Fargate) or pod specifications (Amazon EKS or Kubernetes) to reference the Envoy container image and the proxy container image. If my service is running on an EC2 instance, I will need to deploy Envoy there.

Using the AWS App Mesh Command Line
App Mesh lets you specify each type of component in a simple JSON form and provides you with command-line tools to create each one (create-mesh, create-virtual-service, create-virtual-node, and create-virtual-router). For example, I can define a virtual router in a file:

{
  "meshName": "mymesh",
  "spec": {
        "listeners": [
            {
                "portMapping": {
                    "port": 80,
                    "protocol": "http"
                }
            }
        ]
    },
  "virtualRouterName": "serviceA"
}

And create it with one command:

$ aws appmesh create-virtual-router --cli-input-json file://serviceA-router.json
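
Routes follow the same pattern. Here is a sketch of a route that sends 90% of traffic to one hypothetical virtual node and 10% to another, created with the matching create-route command:

{
  "meshName": "mymesh",
  "virtualRouterName": "serviceA",
  "routeName": "serviceA-route",
  "spec": {
    "httpRoute": {
      "match": { "prefix": "/" },
      "action": {
        "weightedTargets": [
          { "virtualNode": "serviceA-v1", "weight": 90 },
          { "virtualNode": "serviceA-v2", "weight": 10 }
        ]
      }
    }
  }
}

$ aws appmesh create-route --cli-input-json file://serviceA-route.json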

Now Available
AWS App Mesh is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions.

Jeff;

Cisco Releases Security Updates for Multiple Products

Original release date: March 27, 2019

Cisco has released security updates to address vulnerabilities in multiple Cisco products. A remote attacker could exploit some of these vulnerabilities to take control of an affected system.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and administrators to review the Cisco Security Advisory page and apply the necessary updates.


New – AWS Deep Learning Containers

We want to make it as easy as possible for you to learn about deep learning and to put it to use in your applications. If you know how to ingest large datasets, train existing models, build new models, and perform inferences, you’ll be well-equipped for the future!

New Deep Learning Containers
Today I would like to tell you about the new AWS Deep Learning Containers. These Docker images are ready to use for deep learning training or inferencing using TensorFlow or Apache MXNet, with other frameworks to follow. We built these containers after our customers told us that they are using Amazon EKS and ECS to deploy their TensorFlow workloads to the cloud, and asked us to make that task as simple and straightforward as possible. While we were at it, we optimized the images for use on AWS with the goal of reducing training time and increasing inferencing performance.

The images are pre-configured and validated so that you can focus on deep learning, setting up custom environments and workflows on Amazon ECS, Amazon Elastic Container Service for Kubernetes, and Amazon Elastic Compute Cloud (EC2) in minutes! You can find them in AWS Marketplace and Elastic Container Registry, and use them at no charge. The images can be used as-is, or can be customized with additional libraries or packages.

Multiple Deep Learning Containers are available, with names based on the following factors (not all combinations are available; see the pull example after the list):

  • Framework – TensorFlow or MXNet.
  • Mode – Training or Inference. You can train on a single node or on a multi-node cluster.
  • Environment – CPU or GPU.
  • Python Version – 2.7 or 3.6.
  • Distributed Training – Availability of the Horovod framework.
  • Operating System – Ubuntu 16.04.
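
Putting those factors together, logging in to the registry and pulling an image might look something like this sketch (the registry ID, repository name, and tag are illustrative; check AWS Marketplace or ECR for the exact values):

$ $(aws ecr get-login --no-include-email --region us-east-1 --registry-ids 763104351884)
$ docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:1.13-gpu-py36-ubuntu16.04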

Using Deep Learning Containers
In order to put an AWS Deep Learning Container to use, I create an Amazon ECS cluster with a p2.8xlarge instance:

$ aws ec2 run-instances --image-id ami-0ebf2c738e66321e6 \
  --count 1 --instance-type p2.8xlarge \
  --key-name keys-jbarr-us-east ...

I verify that the cluster is running, and check that the ECS Container Agent is active:

Then I create a task definition in a text file (gpu_task_def.txt):

{
  "requiresCompatibilities": [
    "EC2"
  ],
  "containerDefinitions": [
    {
      "command": [
        "tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=saved_model_half_plus_two_gpu  --model_base_path=/models/saved_model_half_plus_two_gpu"
      ],
      "entryPoint": [
        "sh",
        "-c"
      ],
      "name": "EC2TFInference",
      "image": "841569659894.dkr.ecr.us-east-1.amazonaws.com/sample_tf_inference_images:gpu_with_half_plus_two_model",
      "memory": 8111,
      "cpu": 256,
      "resourceRequirements": [
        {
          "type": "GPU",
          "value": "1"
        }
      ],
      "essential": true,
      "portMappings": [
        {
          "hostPort": 8500,
          "protocol": "tcp",
          "containerPort": 8500
        },
        {
          "hostPort": 8501,
          "protocol": "tcp",
          "containerPort": 8501
        },
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/TFInference",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "volumes": [],
  "networkMode": "bridge",
  "placementConstraints": [],
  "family": "Ec2TFInference"
}

I register the task definition and capture the revision number (3):
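
If you prefer the CLI to the console for this step, the registration might look something like this sketch (the --query expression just extracts the revision number):

$ aws ecs register-task-definition --cli-input-json file://gpu_task_def.txt \
    --query 'taskDefinition.revision'
3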

Next, I create a service using the task definition and revision number:
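
A CLI sketch of the same step, with hypothetical cluster and service names:

$ aws ecs create-service --cluster default \
    --service-name tf-inference \
    --task-definition Ec2TFInference:3 \
    --desired-count 1 \
    --launch-type EC2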

I use the console to navigate to the task:

Then I find the external binding for port 8501:

Then I run three inferences (this particular model was trained on the function y = ax + b, with a = 0.5 and b = 2):

$ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://xx.xxx.xx.xx:8501/v1/models/saved_model_half_plus_two_gpu:predict
{
    "predictions": [2.5, 3.0, 4.5
    ]
}

As you can see, the inference predicted the values 2.5, 3.0, and 4.5 when given inputs of 1.0, 2.0, and 5.0. This is a very, very simple example but it shows how you can use a pre-trained model to perform inferencing in ECS via the new Deep Learning Containers. You can also launch a model for training purposes, perform the training, and then run some inferences.

Jeff;

New – Concurrency Scaling for Amazon Redshift – Peak Performance at All Times

Amazon Redshift is a data warehouse that can expand to exabyte-scale. Today, tens of thousands of AWS customers (including NTT DOCOMO, Finra, and Johnson & Johnson) use Redshift to run mission-critical BI dashboards, analyze real-time streaming data, and run predictive analytics jobs.

A challenge arises when the number of concurrent queries grows at peak times. When a multitude of business analysts all turn to their BI dashboards, or when long-running data science workloads compete with other workloads for resources, Redshift queues queries until enough compute resources become available in the cluster. This ensures that all of the work gets done, but it can mean that performance is impacted at peak times. Two options present themselves:

  • Overprovision the cluster to meet peak needs. This option addresses the immediate issue, but wastes resources and costs more than necessary.
  • Optimize the cluster for typical workloads. This option forces you to wait longer for results at peak times, possibly delaying important business decisions.

New Concurrency Scaling
Today I would like to offer a third option. You can now configure Redshift to add more query processing power on an as-needed basis. This happens transparently and in a matter of seconds, and provides you with fast, consistent performance even as the workload grows to hundreds of concurrent queries. Additional processing power is ready in seconds and does not need to be pre-warmed or pre-provisioned. You pay only for what you use, with per-second billing, and you also accumulate one hour of concurrency scaling cluster credits every 24 hours while your main cluster is running. The extra processing power is removed when it is no longer needed, making this a perfect way to address the bursty use cases that I described above.

You can allocate the burst power to specific users or queues, and you can continue to use your existing BI and ETL applications. Concurrency Scaling Clusters are used to handle many forms of read-only queries, with additional flexibility in the works; read about Concurrency Scaling to learn more.

Using Concurrency Scaling
This feature can be enabled for an existing cluster in minutes! We recommend starting with a fresh Redshift Parameter Group for testing purposes, so I start by creating one:

Then I edit my cluster’s Workload Management Configuration, select the new parameter group, set the Concurrency Scaling Mode to auto, and click Save:

I will use the Cloud Data Warehouse Benchmark Derived From TPC-DS as a source of test data and test queries. I download the DDL, customize it with my AWS credentials, and use psql to connect to my cluster and create the test data:

sample=# create database sample;
CREATE DATABASE
sample=# \connect sample;
psql (9.2.24, server 8.0.2)
WARNING: psql version 9.2, server version 8.0.
         Some psql features might not work.
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
You are now connected to database "sample" as user "awsuser".
sample=# \i ddl.sql

The DDL creates the tables and populates them using data stored in an S3 bucket:

sample=# \dt
                 List of relations
 schema |          name          | type  |  owner
--------+------------------------+-------+---------
 public | call_center            | table | awsuser
 public | catalog_page           | table | awsuser
 public | catalog_returns        | table | awsuser
 public | catalog_sales          | table | awsuser
 public | customer               | table | awsuser
 public | customer_address       | table | awsuser
 public | customer_demographics  | table | awsuser
 public | date_dim               | table | awsuser
 public | dbgen_version          | table | awsuser
 public | household_demographics | table | awsuser
 public | income_band            | table | awsuser
 public | inventory              | table | awsuser
 public | item                   | table | awsuser
 public | promotion              | table | awsuser
 public | reason                 | table | awsuser
 public | ship_mode              | table | awsuser
 public | store                  | table | awsuser
 public | store_returns          | table | awsuser
 public | store_sales            | table | awsuser
 public | time_dim               | table | awsuser
 public | warehouse              | table | awsuser
 public | web_page               | table | awsuser
 public | web_returns            | table | awsuser
 public | web_sales              | table | awsuser
 public | web_site               | table | awsuser
(25 rows)

Then I download the queries and open up a bunch of PuTTY windows so that I can generate a meaningful load for my Redshift cluster:

I run an initial set of parallel queries and then ramp up over time. I can see them in the Cluster Performance tab for my cluster:

I can see the additional processing power come online as needed, and then go away when no longer needed, in the Database Performance tab:

As you can see, my cluster scales as needed in order to handle all of the queries as expeditiously as possible. The Concurrency Scaling Usage shows me how many seconds of additional processing power I have consumed (as I noted earlier, each cluster accumulates a full hour of concurrency credits every 24 hours).

I can use the parameter max_concurrency_scaling_clusters to control the number of Concurrency Scaling Clusters that can be used (the default limit is 10, but you can request an increase if you need more).
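
For example, here is a sketch of lowering the limit to 5 on a hypothetical parameter group named cs-test:

$ aws redshift modify-cluster-parameter-group \
    --parameter-group-name cs-test \
    --parameters ParameterName=max_concurrency_scaling_clusters,ParameterValue=5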

Available Today
You can start making use of Concurrency Scaling Clusters today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions, with more to come later this year.

Jeff;