Improving NIC and switch performance for vSAN (and other IP storage)

This is going to be a short post collecting a few tricks to unlock some bottlenecks in storage networking that may grow over time: Unfortunately, a lot of troubleshooting of networking performance stops earlier than it should. Two common incomplete troubleshooting workflows I’ve seen: Someone checks that network utilization on a host isn’t near the […]

The post Improving NIC and switch performance for vSAN (and other IP storage) appeared first on Virtual Ramblings.

New Automation Features In AWS Systems Manager

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

With this new release, it just got even more powerful. We have added additional capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organisation, making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button.

When I click on this, I am asked what document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button.

This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts.

The console gives me two options for editing a document: A YAML editor or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step.

So, let’s take a look by building and running a simple automation. When I create a document using the Builder tool, the first thing required is a document name.

Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to describe what your document does; this is valuable since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems.

Optionally, I am asked to provide parameters for my document. These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.
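For reference, if the same document were created in the YAML editor rather than the Builder, the parameter declarations would look roughly like the sketch below; the defaults and descriptions are illustrative placeholders, not values from the original walkthrough.

# Hypothetical YAML equivalent of the three parameters created in the Builder
parameters:
  imageId:
    type: String
    description: (Required) The AMI to launch the instance from.
  tagValue:
    type: String
    default: ssm-automation-demo
    description: (Optional) Value of the tag applied to the new instance.
  instanceType:
    type: String
    default: t2.micro
    description: (Optional) The instance type to launch.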

When someone executes my document, the scripts it contains will interact with AWS services. A document runs with the permissions of the user executing it for most of its actions, with the option of providing an Assume Role. However, for documents that use the Run a Script action, the role is required whenever the script calls any AWS API.

You can set the Assume role globally in the Builder tool; however, I like to add a parameter called assumeRole to my document, which gives anyone executing it the ability to provide a different one.

You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox. (I have called my parameter assumeRole, but you could call it whatever you like; just make sure that the name you give the parameter is what you put inside the double curly braces, e.g. {{yourParamName}}.)
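In the document’s YAML view, this wiring amounts to a single top-level property that references the parameter. A minimal sketch, assuming the parameter is declared alongside the others:

# Hypothetical sketch: top-level assumeRole wired to a document parameter
assumeRole: '{{ assumeRole }}'
parameters:
  assumeRole:
    type: String
    description: (Optional) The ARN of the role that the automation assumes when calling AWS APIs.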

Once my document is set up, I need to create its first step. A document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or the failure of a step. In this example, however, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description, and this description can also include Markdown. You also need to select an Action Type; for this example I will choose Run a script.

With the ‘Run a script’ action type, I get to run a script in Python or PowerShell without needing any infrastructure of my own to run it. It’s important to realise that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. On the preferences page, you can configure an Amazon CloudWatch log group of your choice to which the script output is sent.

In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.
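The exact script isn’t reproduced here, but a minimal sketch of what it could look like follows. The handler name, the event key names, and the tag key/value assignment are assumptions based on how the step and the resulting instance are referred to later in the post.

def launch_ec2_instance(events, context):
    import boto3

    ec2 = boto3.client('ec2')

    # Script-level parameters, wired up from the global document parameters.
    image_id = events['image_id']
    tag_value = events['tag_value']
    instance_type = events['instance_type']

    # Launch a single instance and tag it so it is easy to identify later.
    res = ec2.run_instances(
        ImageId=image_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'LaunchedBySsmAutomation', 'Value': tag_value}]
        }]
    )

    instance_id = res['Instances'][0]['InstanceId']
    print('[INFO] Launched instance', instance_id)

    # The returned dictionary becomes the step output, selected via $.Payload.
    return {'InstanceId': instance_id}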

To pass parameters into the script, in the Additional Inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that again I have used the double curly brace syntax to reference the global parameters, e.g. {{imageId}}.
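Assuming the script reads its inputs under the names image_id, tag_value, and instance_type, the Input value box would contain a YAML mapping along these lines:

image_id: '{{ imageId }}'
tag_value: '{{ tagValue }}'
instance_type: '{{ instanceType }}'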

In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.
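A sketch of what that output definition could look like, assuming the script returns a dictionary with an InstanceId key and that the output is named payload (the name referenced by the next step):

outputs:
  - Name: payload
    Selector: $.Payload.InstanceId
    Type: String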

Next, I will add a second step to my document. This time I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step; this is an example of how the execution steps can be chained together to use the outputs of earlier steps.

def poll_instance(events, context):
    import boto3
    import time

    ec2 = boto3.client('ec2')

    instance_id = events['InstanceId']

    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)

    instance_status = "null"

    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])

        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue

        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']

        print('[INFO] Polling get status of the instance', instance_status)

        if instance_status == 'ok':
            break

        time.sleep(10)

    return {'Status': instance_status, 'InstanceId': instance_id}

To pass the parameters into the second step, notice that I use the double curly brace syntax to reference the output of a previous step. The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step, launchEc2Instance, followed by the name of the output parameter, payload.
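Concretely, assuming the output defined earlier resolves to the instance ID, the Input value box for the second step’s InputPayload would contain something like this (a hypothetical sketch, mirroring the wiring shown above):

InstanceId: '{{ launchEc2Instance.payload }}'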

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python.

You will note on the first line that I have to Install the AWSPowerShell.NetCore module and use the -Force switch before I can start interacting with AWS services.

All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance.

It’s important to note that I have to pass the parameters from the LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation; from this screen, I click on the Execute automation button. I then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab.

If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enables others to understand what it is trying to achieve.

When I click Next, I am asked to provide any Input parameters for my document. I add the imageId and ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need to have permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances.

Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and whether each step succeeded or failed. I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with the tag LaunchedBySsmAutomation.

These new features can be found today in all regions inside the AWS Systems Manager console so you can start using them straight away.

Happy Automating!

— Martin;

Some packet-fu with Zeek (previously known as bro), (Mon, Nov 11th)

During an incident response process, one of the fundamental variables to consider is speed. If a network capture is being made in which we can presumably find evidence of who is causing an incident and how, every second counts in order to get ahead of the attacker in the cyber kill chain sequence.

To obtain results quickly, we need to use a passive approach to the analysis of network traffic. Zeek is a powerful tool for these scenarios: it can process network traffic for application-level protocols (DCE-RPC, DHCP, DNP3, DNS, FTP, HTTP, IMAP, IRC, KRB, MODBUS, MQTT, MYSQL, NTLM, NTP, POP3, RADIUS, RDP, RFB, SIP, SMB, SMTP, SOCKS, SSH, SSL, SYSLOG, TUNNELS, XMPP), perform pattern searches, and provides a powerful scripting language to process whatever the incident responder might require.

Zeek scripts work through events. We can find a summary of all possible events that can be used at https://docs.zeek.org/en/stable/scripts/base/bif/event.bif.zeek.html. Next, we will review the ones covered by the examples in this diary:

  • new_connection: This event is raised every time a new connection is detected.
  • zeek_done: This event is raised when the packet input is exhausted.
  • protocol_confirmation: This event is raised when Zeek was able to confirm the protocol inside a specific connection.

We will cover three simple use cases in this diary:

  • Top talkers by source IP connection and new connections performed.
  • Top talkers by source IP and destination port, with new connections performed.
  • Number of connections confirmed by Zeek for a specific IP address with a specific protocol.

Top talkers by source IP connection

The following script implements the use case:

global attempts: table[addr] of count &default=0; 
event new_connection (c: connection)
{
    local source = c$id$orig_h;
    local n = ++attempts[source];
}

event zeek_done ()
{
    local toplog=open("toptalkers.log");
    for (k in attempts)
        print toplog,fmt("%s %s",attempts[k],k);
    close(toplog);
}

 

Let’s go through the script in detail:

  • We will store the results in the attempts table, which is indexed by IP addresses of type addr and counts the occurrences with values of type count.
  • Using the new_connection event, we traverse the capture counting source IP addresses that generate new connections.
  • Once the packet input is exhausted, using the zeek_done event we create the toptalkers.log file and write the information in the attempts table separated by blank spaces.
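
To try this, assuming the script is saved as toptalkers.zeek and the traffic capture as capture.pcap (both names are just placeholders), it can be run offline against the capture with:

zeek -r capture.pcap toptalkers.zeek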

Let’s see a snippet of the output:

We can get a sorted output:
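For example, since the connection count is the first column of the log, a standard Unix sort will order it by number of connections (a hypothetical command, not part of the original diary):

sort -rn toptalkers.log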

Top talkers by source IP and destination port, with new connections performed

The following script implements the use case:

global attempts: table[addr,port] of count &default=0; 
event new_connection (c: connection)
{
    local source = c$id$orig_h;
    local the_port = c$id$resp_p;
    local n = ++attempts[source,the_port];
}

event zeek_done ()
{
    local toplog=open("toptalkers.log");
    for ([k,l] in attempts)
        print toplog,fmt("%s %s %s",attempts[k,l],k,l);
    close(toplog);
}

 

Let’s review the differences from the previous one:

  • The table now includes ports; therefore, a new type is included in the declaration: port.
  • Within the new_connection event, the source IP, the destination port, and a counter for each occurrence of this pair in the network capture are stored.
  • Once packet processing is finished, the information is written to the file toptalkers.log.

Let’s see a snippet of the script’s output:

We can get a sorted output:

Number of connections confirmed by Zeek for a specific IP address with a specific protocol

The following script implements the use case:

global attempts: table[addr,Analyzer::Tag] of count &default=0; 
event protocol_confirmation (c: connection, the_type: Analyzer::Tag, aid:count)
{
    local source = c$id$orig_h;
    local n = ++attempts[source,the_type];
}

event zeek_done ()
{
    local toplog=open("toptalkers.log");
    for ([k,l] in attempts)
        print toplog,fmt("%s,%s,%s",k,l,attempts[k,l]);
    close(toplog);
}

 

We can see some new aspects:

  • The Analyzer::Tag attribute: when the protocol_confirmation event is raised, this attribute holds the protocol that Zeek confirmed to be in use inside the connection.
  • Information is stored in the table and then saved to a file within the zeek_done event.

Let’s see a snippet of the script’s output:

In my next diaries, I will cover other interesting use cases with Zeek using its built-in frameworks.

Manuel Humberto Santander Peláez
SANS Internet Storm Center – Handler
Twitter: @manuelsantander
Web: http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

What’s New with VMware vCloud Availability 3.5?

Our team has officially released VMware vCloud Availability 3.5 this week. This is a jam-packed release that has quite a bit of value based on Cloud Provider feedback and customer consumption. Download Page | Release Notes. In this post, I’m going to highlight the new technical additions that are now part of the VMware vCloud Availability … Continue reading “What’s New with VMware vCloud Availability 3.5?”

The post What’s New with VMware vCloud Availability 3.5? appeared first on Clouds, etc..

Special Webcast: How to Perform a Security Investigation in Amazon Web Services (AWS) – November 14, 2019 1:00pm US/Eastern

Speakers: Kyle Dickinson and Nam Le

Do you have a plan in place describing how to conduct an effective investigation in AWS? What security controls, techniques, and data sources can you leverage when investigating and containing an incident in the cloud? Learn how to leverage different technologies to determine the source and timeline of the event, as well as the systems targeted, in order to define a reliable starting point from which to begin your investigations.

In this recorded webcast, SANS instructor and cloud security expert Kyle Dickinson explains how to conduct an investigation in AWS from preparation through completion.

Attendees at this webcast will learn about:

  • Prerequisites for performing an effective investigation
  • Services that enable an investigation
  • How to plan an investigation
  • Steps to completing an investigation in AWS

Register for this webcast to be among the first to receive the associated whitepaper written by incident response and forensics expert Kyle Dickinson.

vExpert Cloud Management October 2019 Blog Digest

  VMware’s vExpert program is made up of industry professionals from around the vCommunity. The vExpert title is awarded to individuals who dedicate their time outside of work to share their experiences and useful information about VMware products with the rest of the community. In July of 2019, we spun off a sub-program called vExpert

The post vExpert Cloud Management October 2019 Blog Digest appeared first on VMware Cloud Management.

VMware and the 2019 Tianfu Cup PWN Contest

We wanted to post a quick acknowledgement that VMware will have representatives in attendance at the 2019 Tianfu Cup PWN Contest in Chengdu, China to review any vulnerabilities that may be demonstrated during the contest. We would like to thank the organizers for inviting us to attend. Stay tuned for further updates. As always please sign up

The post VMware and the 2019 Tianfu Cup PWN Contest appeared first on Security & Compliance Blog.