Analyst Webcast: Common and Best Practices for Security Operations Centers: Panel Discussion – July 11, 2019 1:00pm US/Eastern

This post was originally published on this site

Speakers: Chris Crowley and John Pescatore

This webcast digs more deeply into the results of the SANS 2019 SOC Survey. A panel moderated by SANS Director of Emerging Technologies John Pescatore, and composed of survey author Chris Crowley and representatives from ExtraHop and ThreatConnect, will touch on key themes that emerged from analysis of the survey results.

Register for the survey results webcast on July 10 at 1:00 PM Eastern to be among the first to receive the associated whitepaper, written by Chris Crowley with advice from John Pescatore.

AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available

This post was originally published on this site

Managing your Infrastructure as Code provides great benefits and is often a stepping stone for the successful adoption of DevOps practices. Instead of relying on manually performed steps, administrators and developers can use configuration files to automate the provisioning of the compute, storage, network, and application services their applications require.

For example, defining your Infrastructure as Code makes it possible to:

  • Keep infrastructure and application code in the same repository
  • Make infrastructure changes repeatable and predictable across different environments, AWS accounts, and AWS regions
  • Replicate production in a staging environment to enable continuous testing
  • Replicate production in a performance test environment that you use just for the time required to run a stress test
  • Release infrastructure changes using the same tools as code changes, so that deployments include infrastructure updates
  • Apply software development best practices to infrastructure management, such as code reviews, or deploying small changes frequently

Configuration files used to manage your infrastructure are traditionally implemented as YAML or JSON text files, but this approach forgoes most of the advantages of modern programming languages. With YAML in particular, it can be very difficult to detect a file that was truncated while being transferred to another system, or a line that went missing when copying and pasting from one template to another.

Wouldn’t it be better if you could use the expressive power of your favorite programming language to define your cloud infrastructure? For this reason, last year we introduced the AWS Cloud Development Kit (CDK) in developer preview: an extensible, open-source software development framework to model and provision your cloud infrastructure using familiar programming languages.

I am super excited to share that the AWS CDK for TypeScript and Python is generally available today!

With the AWS CDK you can design, compose, and share your own custom components that incorporate your unique requirements. For example, you can create a component that sets up your own standard VPC, with its associated routing and security configurations, or a standard CI/CD pipeline for your microservices using tools like AWS CodeBuild and CodePipeline.
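
For instance, a reusable component could be sketched in Python like this (a minimal sketch; the StandardVpc class name and its settings are illustrative, not a published construct):

from aws_cdk import core, aws_ec2

class StandardVpc(core.Construct):
    """A shareable component wrapping an opinionated VPC setup."""
    def __init__(self, scope: core.Construct, id: str) -> None:
        super().__init__(scope, id)
        # Encode your organization's standard networking choices once,
        # then reuse the component across stacks and teams
        self.vpc = aws_ec2.Vpc(self, "Vpc", max_azs=2)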

Personally, I really like that by using the AWS CDK you can build your application, including the infrastructure, in your IDE, using the same programming language and with the autocompletion and parameter suggestions that modern IDEs have built in, without having to mentally switch between one tool or technology and another. The AWS CDK makes it really fun to quickly code up your AWS infrastructure, configure it, and tie it together with your application code!

How the AWS CDK works
Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs into stacks, which you can deploy into an AWS environment, and apps, collections of one or more stacks.
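
As a minimal sketch of this hierarchy (in Python, with illustrative names), an app containing one stack with a single construct looks roughly like this:

from aws_cdk import core, aws_s3

class MyStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # A single construct inside this stack: an S3 bucket
        aws_s3.Bucket(self, "MyBucket")

app = core.App()         # an app is a collection of one or more stacks
MyStack(app, "MyStack")  # a stack composes constructs
app.synth()              # emit the CloudFormation template(s)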

How to use the AWS CDK
We continuously add new features based on the feedback of our customers. That means that when creating an AWS resource, you often have to specify many options and dependencies. For example, if you create a VPC you have to think about how many Availability Zones (AZs) to use and how to configure subnets to give private and public access to the resources that will be deployed in the VPC.

To make it easy to define the state of AWS resources, the AWS Construct Library exposes the full richness of many AWS services with sensible defaults that you can customize as needed. In the case above, the VPC construct creates, by default, public and private subnets in each of the VPC’s AZs, using 3 AZs if you don’t specify otherwise.
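
For example, in Python, accepting the defaults is a one-liner, and overriding the number of AZs is a single parameter (a sketch assuming the aws_ec2 module is imported as above; max_azs is the CDK Python parameter name):

# All defaults: public and private subnets across up to 3 AZs
myVpc = aws_ec2.Vpc(self, "MyVPC")

# Override only what you need, e.g. 2 AZs instead of the default 3
mySmallerVpc = aws_ec2.Vpc(self, "MySmallerVPC", max_azs=2)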

For creating and managing CDK apps, you can use the AWS CDK Command Line Interface (CLI), a command-line tool that requires Node.js and can be installed quickly with:

npm install -g aws-cdk

After that, you can use the CDK CLI with different commands:

  • cdk init to initialize a new CDK project in the current directory, in one of the supported programming languages
  • cdk synth to print the CloudFormation template for the app
  • cdk deploy to deploy the app in your AWS account
  • cdk diff to compare what is in the project files with what has been deployed

Just run cdk to see more of the available commands and options.

You can easily include the CDK CLI in your deployment automation workflow, for example using Jenkins or AWS CodeBuild.

Let’s use the AWS CDK to build two sample projects, using different programming languages.

An example in TypeScript
For the first project I am using TypeScript to define the infrastructure:

cdk init app --language=typescript

Here’s a simplified view of what I want to build, leaving aside the details of the public/private subnets in the VPC. There is an online frontend writing messages to a queue, and an asynchronous backend consuming messages from that queue:

Inside the stack, the following TypeScript code defines the resources I need, and their relations:

  • First I define the VPC and an Amazon ECS cluster in that VPC. By using the defaults provided by the AWS Construct Library, I don’t need to specify any parameter here.
  • Then I use an ECS pattern that in a few lines of code sets up an Amazon SQS queue and an ECS service running on AWS Fargate to consume the messages in that queue.
  • The ECS pattern library provides higher-level ECS constructs which follow common architectural patterns, such as load balanced services, queue processing, and scheduled tasks.
  • A Lambda function receives the name of the queue (created by the ECS pattern) as an environment variable and is granted permission to send messages to the queue.
  • The code of the Lambda function and the Docker image are passed as assets. Assets allow you to bundle files or directories from your project and use them with Lambda or ECS.
  • Finally, an Amazon API Gateway endpoint provides an HTTPS REST interface to the function.
// At the top of the stack file, import the AWS Construct Library modules used below:
import { Duration } from "@aws-cdk/core";
import * as ec2 from "@aws-cdk/aws-ec2";
import * as ecs from "@aws-cdk/aws-ecs";
import * as ecs_patterns from "@aws-cdk/aws-ecs-patterns";
import * as lambda from "@aws-cdk/aws-lambda";
import * as apigateway from "@aws-cdk/aws-apigateway";

// A VPC with the library's default configuration
const myVpc = new ec2.Vpc(this, "MyVPC");

const myCluster = new ecs.Cluster(this, "MyCluster", {
  vpc: myVpc
});

const myQueueProcessingService = new ecs_patterns.QueueProcessingFargateService(
  this, "MyQueueProcessingService", {
    cluster: myCluster,
    memoryLimitMiB: 512,
    image: ecs.ContainerImage.fromAsset("my-queue-consumer")
  });

const myFunction = new lambda.Function(
  this, "MyFrontendFunction", {
    runtime: lambda.Runtime.NODEJS_10_X,
    timeout: Duration.seconds(3),
    handler: "index.handler",
    code: lambda.Code.asset("my-front-end"),
    environment: {
      QUEUE_NAME: myQueueProcessingService.sqsQueue.queueName
    }
  });

myQueueProcessingService.sqsQueue.grantSendMessages(myFunction);

const myApi = new apigateway.LambdaRestApi(
  this, "MyFrontendApi", {
    handler: myFunction
  });

I find this code very readable and easier to maintain than the corresponding JSON or YAML. By the way, cdk synth in this case outputs more than 800 lines of plain CloudFormation YAML.

An example in Python
For the second project I am using Python:

cdk init app --language=python

I want to build a Lambda function that is executed every 10 minutes.

When you initialize a CDK project in Python, a virtualenv is set up for you. You can activate the virtualenv and install your project requirements with:

source .env/bin/activate

pip install -r requirements.txt

Note that Python autocompletion may not work with some editors, like Visual Studio Code, if you don’t start the editor from an active virtualenv.

Inside the stack, here’s the Python code defining the Lambda function and the CloudWatch Events rule that invokes the function periodically as its target:

# At the top of the stack file, import the CDK modules used below:
from aws_cdk import core, aws_lambda, aws_events, aws_events_targets

myFunction = aws_lambda.Function(
    self, "MyPeriodicFunction",
    code=aws_lambda.Code.asset("src"),
    handler="index.main",
    timeout=core.Duration.seconds(30),
    runtime=aws_lambda.Runtime.PYTHON_3_7,
)

myRule = aws_events.Rule(
    self, "MyRule",
    schedule=aws_events.Schedule.rate(core.Duration.minutes(10)),
)
myRule.add_target(aws_events_targets.LambdaFunction(myFunction))

Again, this is easy to understand even if you don’t know the details of the AWS CDK. For example, durations include the time unit, so you never have to wonder whether they are expressed in seconds, milliseconds, or days. The output of cdk synth in this case is more than 90 lines of plain CloudFormation YAML.

Available Now
There is no charge for using the AWS CDK; you pay only for the AWS resources that the tool deploys.

To quickly get hands-on with the CDK, start with this awesome step-by-step online tutorial!

More examples of CDK projects, using different programming languages, are available in this repository:

https://github.com/aws-samples/aws-cdk-examples

You can find more information on writing your own constructs here.

The AWS CDK is open source and we welcome your contribution to make it an even better tool:

https://github.com/awslabs/aws-cdk

Check out our source code on GitHub, start building your infrastructure today using TypeScript or Python, or try different languages in developer preview, such as C# and Java, and give us your feedback!

Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications

This post was originally published on this site

Many AWS customers also make great use of SaaS (Software as a Service) applications. For example, they use Zendesk to manage customer service & support tickets, PagerDuty to handle incident response, and SignalFx for real-time monitoring. While these applications are quite powerful on their own, they are even more so when integrated into a customer’s own systems, databases, and workflows.

New Amazon EventBridge
In order to support this increasingly common use case, we are launching Amazon EventBridge today. Building on the powerful event processing model that forms the basis for CloudWatch Events, EventBridge makes it easy for our customers to integrate their own AWS applications with SaaS applications. The SaaS applications can be hosted anywhere, and simply publish events to an event bus that is specific to each AWS customer. The asynchronous, event-based model is fast, clean, and easy to use. The publisher (SaaS application) and the consumer (code running on AWS) are completely decoupled, and are not dependent on a shared communication protocol, runtime environment, or programming language. You can use simple Lambda functions to handle events that come from a SaaS application, and you can also route events to a wide variety of other AWS targets. You can store incident or ticket data in Amazon Redshift, train a machine learning model on customer support queries, and much more.
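
To make the consumer side concrete, here is a minimal sketch of a Lambda function handling a partner event in Python (the payload fields are illustrative; each partner defines its own event schema):

def handler(event, context):
    # EventBridge invokes the function with the partner event as input
    source = event.get("source")            # e.g. "aws.partner/..." for SaaS events
    detail_type = event.get("detail-type")  # the partner-defined event name
    detail = event.get("detail", {})        # the partner-defined payload
    print(f"Received {detail_type} from {source}: {detail}")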

Everything that you already know (and hopefully love) about CloudWatch Events continues to apply, with one important change. In addition to the existing default event bus that accepts events from AWS services, calls to PutEvents, and from other authorized accounts, each partner application that you subscribe to will also create an event source that you can then associate with an event bus in your AWS account. You can select any of your event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.

As part of today’s launch we are also opening up a partner program. The integration process is simple and straightforward, and generally requires less than one week of developer time.

All About Amazon EventBridge
Here are some terms that you need to know in order to understand how to use Amazon EventBridge:

Partner – An organization that has integrated their SaaS application with EventBridge.

Customer – An organization that uses AWS, and that has subscribed to a partner’s SaaS application.

Partner Name – A unique name that identifies an Amazon EventBridge partner.

Partner Event Bus – An event bus that is used to deliver events from a partner to AWS.

EventBridge can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), or via the AWS SDKs. There are distinct commands and APIs for partners and for customers. Here are some of the most important ones:

Partners – CreatePartnerEventSource, ListPartnerEventSourceAccounts, ListPartnerEventSources, PutPartnerEvents.

Customers – ListEventSources, ActivateEventSource, CreateEventBus, ListEventBuses, PutRule, PutTargets.

Amazon EventBridge for Partners & Customers
As I noted earlier, the integration process is simple and straightforward. You need to allow your customers to enter an AWS account number and to select an AWS region. With that information in hand, you call CreatePartnerEventSource in the desired region, inform the customer of the event source name and tell them that they can accept the invitation to connect, and wait for the status of the event source to change to ACTIVE. Then, each time an event of interest to the customer occurs, you call PutPartnerEvents and reference the event source.
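
Expressed against the API, the partner-side flow could be sketched in Python with boto3 as follows (the event source name, account number, and event fields are all illustrative):

import boto3

events = boto3.client("events")

# Create an event source in the customer's account, in the agreed region
events.create_partner_event_source(
    Name="aws.partner/examplecorp.com/12345/default",  # hypothetical name
    Account="111122223333",                            # the customer's account
)

# Once the event source is ACTIVE, publish events as they occur
events.put_partner_events(Entries=[{
    "Source": "aws.partner/examplecorp.com/12345/default",
    "DetailType": "TicketCreated",
    "Detail": '{"ticket_id": "42"}',
}])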

The process is just as simple on the customer side. You accept the invitation to connect by calling CreateEventBus to create an event bus associated with the event source. You add rules and targets to the event bus, and prepare your Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events. You can use DeactivateEventSource and ActivateEventSource to control the flow.
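
Here is the corresponding customer-side sketch with boto3 (again with illustrative names; creating the bus with EventSourceName both associates and activates the source):

import boto3

events = boto3.client("events")

# Discover pending partner event sources shared with this account
name = events.list_event_sources()["EventSources"][0]["Name"]

# Accept the invitation: create an event bus associated with the source
events.create_event_bus(Name=name, EventSourceName=name)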

Here’s the overall flow (diagram created using SequenceDiagram):

Each partner has the freedom to choose the events that are relevant to their application, and to define the data elements that are included with each event.

Using EventBridge
Starting from the EventBridge Console, I click Partner event sources, find the partner of interest, and click it to learn more:

Each partner page contains additional information about the integration. I read the info, and click Set up to proceed:

The page provides me with a simple, three-step procedure to set up my event source:

After the partner creates the event source, I return to Partner event sources and I can see that the Zendesk event source is Pending:

I click the pending event source, review the details, and then click Associate with event bus:

I have the option to allow other AWS accounts, my Organization, or another Organization to access events on the event bus that I am about to create. After I have confirmed that I trust the origin and have added any additional permissions, I click Associate:

My new event bus is now available, and is listed as a Custom event bus:

I click Rules, select the event bus, and see the rules (none so far) associated with it. Then I click Create rule to make my first rule:

I enter a name and a description for my first rule:

Then I define a pattern, choosing Zendesk as the Service name:

Next, I select a Lambda function as my target:

I can also choose from many other targets:

After I create my rule, it will be activated in response to activities that occur within my Zendesk account. The initial set of events includes TicketCreated, CommentCreated, TagsChanged, AgentAssignmentChanged, GroupAssignmentChanged, FollowersChanged, EmailCCsChanged, CustomFieldChanged, and StatusChanged. Each event includes a rich set of properties; you’ll need to consult the documentation to learn more.
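
If you prefer the API to the console, a rule like the one above could be sketched with boto3 as follows (the bus name, rule name, and function ARN are illustrative):

import boto3

events = boto3.client("events")
bus = "aws.partner/zendesk.com/12345/default"  # hypothetical partner event bus

# Match only TicketCreated events arriving on the partner event bus
events.put_rule(
    Name="zendesk-ticket-created",
    EventBusName=bus,
    EventPattern='{"detail-type": ["TicketCreated"]}',
)

# Invoke a Lambda function for each matching event
events.put_targets(
    Rule="zendesk-ticket-created",
    EventBusName=bus,
    Targets=[{
        "Id": "ticket-handler",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:TicketHandler",
    }],
)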

Partner Event Sources
We are launching with ten partner event sources, with more to come:

  • Datadog
  • Zendesk
  • PagerDuty
  • Whispir
  • Saviynt
  • Segment
  • SignalFx
  • SugarCRM
  • OneLogin
  • Symantec

If you have a SaaS application and you are ready to integrate, read more about EventBridge Partner Integration.

Now Available
Amazon EventBridge is available now and you can start using it today in all public AWS regions in the aws partition. Support for the AWS regions in China, and for the Asia Pacific (Osaka) Local Region, is in the works.

Pricing is based on the number of events published to the event buses in your account, billed at $1 for every million events. There is no charge for events published by AWS services.

Jeff;

PS – As you can see from this post, we are paying even more attention to the overall AWS event model, and have a lot of interesting goodies on the drawing board. With this launch, CloudWatch Events has effectively earned a promotion to a top-level service, and I’ll have a lot more to say about that in the future!

Special Webcast: The Critical Security Controls and Some Recent Data Breaches – July 10, 2019 3:30pm US/Eastern

This post was originally published on this site

Speakers: Chris Christianson

The Center for Internet Security’s Critical Security Controls is a prioritized list of best practices for computer security that gives organizations a strategy for defense. We’ll discuss what we know about some recent data breaches and how a lack of effective controls contributed, in part, to each breach.

Analyst Webcast: Common and Best Practices for Security Operations Centers: Results of the 2019 SOC Survey – July 10, 2019 1:00pm US/Eastern

This post was originally published on this site

Speakers: Chris Crowley and John Pescatore

The 2019 SANS Security Operations Center (SOC) Survey is focused on providing objective data to security leaders who are looking to establish a SOC or optimize an existing one. This webcast will capture common and best practices, provide defendable metrics that can be used to justify SOC resources to management, and highlight the key areas that SOC managers should prioritize to increase the effectiveness and efficiency of security operations.

Attendees at this webcast will learn:

  • What types of SOC infrastructures are used most frequently
  • How SOCs interact with network operations centers and incident response teams
  • What activities typically define a SOC and how many of them are outsourced
  • Which SOC-related technologies organizations are most satisfied with
  • How organizations use metrics to evaluate SOC performance
  • What challenges inhibit integration and utilization of a centralized SOC model

Register for this webcast and be among the first to receive the associated whitepaper written by SOC expert and SANS Senior Instructor Chris Crowley, with advice from SANS Director of Emerging Technology John Pescatore.

Special Webcast: Leveraging Your SIEM to Implement Security Best Practices – July 9, 2019 10:30am US/Eastern

This post was originally published on this site

Speakers: Hannah Coakley

The biggest challenge that security analytics addresses is the volume and diversity of information that must be analyzed at any given point to assist security professionals in detecting, responding to, and mitigating cyber threats. But how do you leverage that data to implement security best practices?

InsightIDR is a single solution that not only provides visibility across your traditional on-premises environment, but also extends monitoring to your remote endpoints and cloud services. Join us to learn how InsightIDR provides visibility into your network, and to hear useful metrics for implementing security best practices.

In this webcast, we will discuss:

  • How to make security analytics more consumable
  • The data sources you need to collect and analyze
  • How InsightIDR leverages pre-built analytics to detect top attack vectors

Amazon Aurora PostgreSQL Serverless – Now Generally Available

This post was originally published on this site

The database is usually the most critical part of a software architecture, and managing databases, especially relational ones, has never been easy. For this reason, we created Amazon Aurora Serverless, an auto-scaling version of Amazon Aurora that automatically starts up, shuts down, and scales up or down based on your application workload.

The MySQL-compatible edition of Aurora Serverless has been available for some time now. I am pleased to announce that the PostgreSQL-compatible edition of Aurora Serverless is generally available today.

Before moving on to the details, let me take the opportunity to congratulate the Amazon Aurora development team, which has just won the 2019 Association for Computing Machinery’s (ACM) Special Interest Group on Management of Data (SIGMOD) Systems Award!

When you create a database with Aurora Serverless, you set the minimum and maximum capacity. Your client applications transparently connect to a proxy fleet that routes the workload to a pool of resources that are automatically scaled. Scaling is very fast because resources are “warm” and ready to be added to serve your requests.

Aurora Serverless does not change how Aurora manages storage. The storage layer is independent of the compute resources used by the database, and there is no need to provision storage in advance. The minimum storage is 10 GB and, based on your database usage, Amazon Aurora storage automatically grows, up to 64 TB, in 10 GB increments with no impact on database performance.

Creating an Aurora Serverless PostgreSQL Database
Let’s start an Aurora Serverless PostgreSQL database and see the automatic scalability at work. From the Amazon RDS console, I choose to create a database using Amazon Aurora as the engine. Currently, Aurora Serverless is compatible with PostgreSQL version 10.5. Selecting that version makes the serverless option available.

I give the new DB cluster an identifier, choose my master username, and let Amazon RDS generate a password for me. I will be able to retrieve my credentials during database creation.

I can now select the minimum and maximum capacity for my database, in terms of Aurora Capacity Units (ACUs), and in the additional scaling configuration I choose to pause compute capacity after 5 minutes of inactivity. Based on my settings, Aurora Serverless automatically creates scaling rules based on thresholds for CPU utilization, connections, and available memory.
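
The same setup could be sketched programmatically with boto3 (a sketch only; the identifier, credentials handling, and capacity values are illustrative):

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-postgres",
    Engine="aurora-postgresql",
    EngineMode="serverless",
    MasterUsername="postgres",
    MasterUserPassword="REPLACE_WITH_A_SECRET",  # don't hardcode real credentials
    ScalingConfiguration={
        "MinCapacity": 2,              # in Aurora Capacity Units (ACUs)
        "MaxCapacity": 64,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,  # pause after 5 minutes of inactivity
    },
)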

Testing Some Load on the Database
To generate some load on the database, I am using sysbench on an EC2 instance. A couple of Lua scripts bundled with sysbench help generate an online transaction processing (OLTP) workload:

  • The first script, parallel_prepare.lua, generates 100,000 rows per table for 24 tables.
  • The second script, oltp.lua, generates a workload against that data using 64 worker threads.

By using those scripts, I start generating load on my database cluster. As you can see from this graph, taken from the RDS console monitoring tab, the serverless database capacity grows and shrinks to follow my requirements. The metric shown on this graph is the number of ACUs used by the database cluster. First it scales up to accommodate the sysbench workload. When I stop the load generator, it scales down and then pauses.

Available Now
Aurora Serverless PostgreSQL is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). With Aurora Serverless, you pay on a per-second basis for the database capacity you use when the database is active, plus the usual Aurora storage costs.

For more information on Amazon Aurora, I recommend this great post explaining why and how it was created:

Amazon Aurora ascendant: How we designed a cloud-native relational database

It’s never been so easy to use a relational database in production. I am so excited to see what you are going to use it for!

Special Webcast: Simplify Your Office 365 Migration – July 9, 2019 3:30pm US/Eastern

This post was originally published on this site

Speakers: Sibrina Subedar

By migrating to Microsoft Office 365, organizations gain robust, cloud-based reliability and enhanced security. However, even with O365’s security model, most enterprises are leaving the front door open. Lack of extended email security, such as DMARC enforcement, is allowing phishing and BEC attacks to grow exponentially: $7.5B in losses in just the last 18 months, according to FBI reports.

But it’s not just about security: in the cloud era, you need to know which services are in use and authorized to send email, which are not, and which are compatible with Office 365 prior to your migration.

During this webinar, we will help you get your cloud house in order:

  • Learn more about how to secure your O365 deployment against phishing attacks while increasing deliverability and trust using domain-based authentication.
  • Get tools to identify and mitigate shadow IT email services.
  • Learn how to automate your domain lockdown without investing a lot of time and resources.

Privacy and Mobile Device Apps

This post was originally published on this site

Original release date: July 9, 2019

What are the risks associated with mobile device apps?

Applications (apps) on your smartphone or other mobile devices can be convenient tools to access the news, get directions, pick up a ride share, or play games. But these tools can also put your privacy at risk. When you download an app, it may ask for permission to access personal information—such as email contacts, calendar inputs, call logs, and location data—from your device. Apps may gather this information for legitimate purposes—for example, a ride-share app will need your location data in order to pick you up. However, you should be aware that app developers will have access to this information and may share it with third parties, such as companies who develop targeted ads based on your location and interests.

How can you avoid malicious apps and limit the information apps collect about you?

Before installing an app

  • Avoid potentially harmful apps (PHAs) – Reduce the risk of downloading PHAs by limiting your download sources to official app stores, such as your device’s manufacturer or operating system app store. Do not download from unknown sources or install untrusted enterprise certificates. Additionally—because malicious apps have been known to slip through the security of even reputable app stores—always read the reviews and research the developer before downloading and installing an app.
  • Be savvy with your apps – Before downloading an app, make sure you understand what information the app will access. Read the permissions the app is requesting and determine whether the data it is asking to access is related to the purpose of the app. Read the app’s privacy policy to see if, or how, your data will be shared. Consider forgoing the app if the policy is vague about with whom it shares your data or if the permissions request seems excessive.

On already installed apps

  • Review app permissions – Review the permissions each app has. Ensure your installed apps only have access to the information they need, and remove unnecessary permissions from each app. Consider removing apps with excessive permissions. Pay special attention to apps that have access to your contact list, camera, storage, location, and microphone.
  • Limit location permissions – Some apps have access to the mobile device’s location services and thus have access to the user’s approximate physical location. For apps that require access to location data to function, consider limiting this access to when the app is in use only.
  • Keep app software up to date – Apps with out-of-date software may be at risk of exploitation of known vulnerabilities. Protect your mobile device from malware by installing app updates as they are released.
  • Delete apps you do not need – To avoid unnecessary data collection, uninstall apps you no longer use.
  • Be cautious when signing in to apps with social network accounts – Some apps are integrated with social network sites—in these cases, the app can collect information from your social network account and vice versa. Ensure you are comfortable with this type of information sharing before you sign into an app via your social network account. Alternatively, use your email address and a unique password to sign in.

What additional steps can you take to secure data on your mobile devices?

  • Limit activities on public Wi-Fi networks – Public Wi-Fi networks at places such as airports and coffee shops present an opportunity for attackers to intercept sensitive information. When using a public or unsecured wireless connection, avoid using apps and websites that require personal information, e.g., a username and password. Additionally, turn off the Bluetooth setting on your devices when not in use. (See Cybersecurity for Electronic Devices.)
  • Be cautious when charging – Avoid connecting your smartphone to any computer or charging station that you do not control, such as a charging station at an airport terminal or a shared computer at a library. Connecting a mobile device to a computer using a USB cable can allow software running on that computer to interact with the phone in ways you may not anticipate. For example, a malicious computer could gain access to your sensitive data or install new software. (See Holiday Traveling with Personal Internet-Enabled Devices.)
  • Protect your device from theft – Having physical access to a device makes it easier for an attacker to extract or corrupt information. Do not leave your device unattended in public or in easily accessible areas. (See Protecting Portable Devices: Physical Security.)
  • Protect your data if your device is stolen – Ensure your device requires a password or biometric identifier to access it, so that if it is stolen, thieves will have limited access to its data. (See Choosing and Protecting Passwords.) If your device is stolen, immediately contact your service provider to protect your data. (See the Federal Communications Commission’s Consumer Guide: Protect Your Smart Device.)

Author: CISA

This product is provided subject to this Notification and this Privacy & Use policy.

Special Webcast: Leading Change for CISOs – July 3, 2019 10:30am US/Eastern

This post was originally published on this site

Speakers: Lance Spitzner

Board members, executives, and CISOs around the world are all realizing the same thing: cybersecurity is no longer just an IT problem, but an organizational one. The challenge has become how CISOs can go beyond technology and embed security at an organizational level. The concept is called Change Management, and other fields have been leveraging it for decades. It’s time to apply lessons from Change Management leaders such as John Kotter, Simon Sinek, or Chip and Dan Heath.

Learn how to become a far more effective CISO as you create change and embed security at an organizational level. Key things you will learn include:

– What culture and Change Management are, and how they impact security

– The 8 proven steps to organizational change

– How to leverage accelerators at an organizational level

– Why emotion often trumps numbers and statistics, and how to leverage that to your advantage