All posts by David

HTTPS Support for All Internal Services, (Fri, Apr 16th)

This post was originally published on this site

SSL/TLS has been in the spotlight for a while, with deprecated protocols[1] and free certificates for everybody[2]. The landscape keeps pushing more and more people to switch to encrypted communications, and this is good! As Johannes explained yesterday[3], Chrome 90 now defaults to "https://" when you type a bare hostname in the navigation bar. Yesterday's diary covered deploying your own internal CA to generate certificates and move everything to secure communications. This is a good approach: by deploying your own root CA, you add an extra string to your security bow, namely SSL interception and inspection.

But sometimes, you could face other issues:

  • If you have guests on your network, they won't have the root CA installed and will receive annoying certificate warnings
  • If you have very old devices or "closed" devices (like all kinds of IoT gadgets), it could be difficult to switch them to HTTPS.

On my network, I'm still using Let's Encrypt, but to generate certificates for internal hostnames. To avoid reconfiguring "old devices", I'm concentrating all the traffic behind a Traefik[4] reverse proxy. Here is my setup:

My IoT devices and equipment (printers, cameras, lights) are connected to a dedicated VLAN with restricted capabilities. The URLs to access them can run on top of HTTP or HTTPS and use standard or exotic ports. A Traefik reverse proxy is installed on the IoT VLAN and is reachable from clients only through TCP/443. Access to the "services" is provided through easy-to-remember URLs (https://service-a.internal.mydomain.be, etc).

From an HTTP point of view, Traefik is deployed in a standard way (in a Docker container in my case). The following configuration is added to let it handle the certificates:

# Enable ACME
certificatesResolvers:
  le:
    acme:
      email: xavier@<redacted>.be
      storage: /etc/traefik/acme.json
      dnsChallenge:
        provider: ovh
        delayBeforeCheck: 10
        resolvers:
         - "8.8.8.8:53"
         - "8.8.4.4:53"

There is one major requirement for this setup: you need a valid domain name (read: a publicly registered domain) to generate internal URLs (in my case, "mydomain.be"), and the domain must be hosted at a provider that offers an API to manage the DNS zone (in my case, OVH). This is required by the DNS authentication mechanism that we will use. Every new certificate generation will require a specific DNS record to be created through the API:

_acme-challenge.service-a.internal.mydomain.be

The subdomain is whatever you prefer ("internal", "dmz", …), so be imaginative!

Traefik can detect all services running in Docker containers and generate certificates for them on the fly (a hedged label example follows the per-service example below). For other services, like IoT devices, you just create a new config in Traefik, per service:

http:
  services:
    service_cam1:
      loadBalancer:
        servers:
          - url: "https://172.16.0.155:8443/"

  routers:
    router_cam1:
      rule: Host("cam1.internal.mydomain.be")
      entryPoints: [ "websecure" ]
      service: service_cam1
      tls:
        certResolver: le
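
For the Docker-hosted services mentioned above, detection is typically driven by container labels rather than a configuration file. A hedged docker-compose sketch (the service name and image are hypothetical):

# docker-compose.yml fragment
services:
  service-a:
    image: example/service-a
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.service-a.rule=Host(`service-a.internal.mydomain.be`)"
      - "traefik.http.routers.service-a.entrypoints=websecure"
      - "traefik.http.routers.service-a.tls.certresolver=le"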

You can instruct Traefik to monitor new configuration files and automatically load them:

# Enable automatic reload of the config
providers:
  file:
    directory: /etc/traefik/hosts/
    watch: true

Now you are ready to deploy all your internal HTTPS URLs and map them to your gadgets!

Of course, you have to maintain an internal DNS zone with records pointing to your Traefik instance.
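
A hedged sketch of such records in a BIND-style zone file (the Traefik IP address is hypothetical):

; Point the friendly internal names at the Traefik reverse proxy
service-a.internal.mydomain.be.  IN  A  172.16.0.10
cam1.internal.mydomain.be.       IN  A  172.16.0.10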

Warning: Some services accessed through this kind of setup may require configuration tuning. For example, search for parameters like "base URL" and change them to reflect the URL that you're using with Traefik. More details about ACME support are available here[5] (with a list of all supported DNS providers).

[1] https://isc.sans.edu/forums/diary/Old+TLS+versions+gone+but+not+forgotten+well+not+really+gone+either/27260
[2] https://letsencrypt.org
[3] https://isc.sans.edu/forums/diary/Why+and+How+You+Should+be+Using+an+Internal+Certificate+Authority/27314/
[4] https://traefik.io
[5] https://doc.traefik.io/traefik/https/acme/

Xavier Mertens (@xme)
Senior ISC Handler – Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Digital Inheritance

This post was originally published on this site

What happens to our digital presence when we die or become incapacitated? Many of us have or know we should have a will and checklists of what loved ones need to know in the event of our passing. But what about all of our digital data and online accounts? Consider creating some type of digital will, often called a "Digital Inheritance" plan.

Why and How You Should be Using an Internal Certificate Authority, (Thu, Apr 15th)

This post was originally published on this site

Yesterday, Google released Chrome 90, and with that "HTTPS" is becoming the default protocol if you enter just a hostname into the URL bar without specifying the protocol [1]. This is the latest indication that the EFF's "HTTPS Everywhere" initiative is succeeding [2][3]. Browsers are more and more likely to push users to encrypted content. While I applaud this trend, it does have a downside for small internal sites that often make it difficult to configure proper certificates. In addition, browsers are becoming pickier as to which certificates they accept. For example, in the "good old days", I could set up internal certificates that were valid for 10 years and not have to worry about them expiring. Currently, browsers will reject certificates valid for more than 13 months (398 days) [4].

Luckily, there is a solution: The "ACME" protocol popularized by the Let's Encrypt initiative makes it relatively painless to renew certificates. Sadly, not all software supports it, and in particular, IoT devices often do not support it [5].

Why Run Your Own Certificate Authority

Let's step back for a moment, and look at the certificate authorities. Why do you want to run your own? There are a couple of reasons that made me use my own certificate authority:

  1. Privacy: Public certificate authorities maintain certificate transparency logs. These logs are made public and are easily searchable. I do not want my internal hostnames to show up in these logs [6].
  2. Flexibility: Sometimes, I do not want to play by the rules that public certificate authorities have to play by. I still have a pretty nice security camera, which I don't want to toss, that only supports 1024-bit private keys. Verifying an internal hostname can also be difficult if you are using a nonpublic top-level domain, or if the host is not reachable for certificate validation (you will need to use DNS validation, which requires a cooperating DNS host).

How to Get Started With Your Private Certificate Authority

I found the easiest way to set up your own certificate authority (and be able to use the ACME protocol) is smallstep [7]. Smallstep is often used for SSH keys, but it is also a very capable certificate authority and easily runs in a virtual machine or container. When I started to use smallstep, it required a bit of work (a patch) to be compatible with macOS: certificates obtained via ACME were missing the "CommonName" that macOS (and the RFC) require. But I believe this issue has since been fixed. Today, the "Getting Started Guide" is all you should need.

The setup process will do all the hard work for you. You will get a CA certificate and an intermediate certificate and should be ready to go in no time. Just make sure to import the CA certificate into your clients and trust it. (I import the intermediate certificate as well to avoid issues with servers that do not include the intermediate certificate.)
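
As a rough sketch, and assuming a recent version of the step CLI (check the Getting Started Guide for the exact flags), the initialization looks like this:

# Create the root and intermediate CA plus the CA configuration
step ca init

# Add an ACME provisioner so clients like certbot can request certificates
step ca provisioner add acme --type ACME

# Start the CA
step-ca $(step path)/config/ca.json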

The certificate authority doesn't necessarily have to be online all the time, but for ACME to work best and for your systems to be able to automatically renew certificates, you may just want to keep it running.

Using Your Own Certificate Authority with "certbot"

"certbot" is the most popular ACME client these days. All you need to do to use it with smallstep is to point it at your own smallstep server:

certbot certonly -d example.com --server https://internal-ca-hostname:8443/acme/acme/directory

By default, smallstep listens on port 8443. The system you run certbot on needs to trust the smallstep CA, or the connection will fail.

For internal verification, I also prefer DNS validation over the default HTTP validation. You often deal with devices that have odd web server configurations, so you cannot easily spin up a stand-alone web server or use the nginx/apache plugins, and the home directory is not always writeable (or even present). DNS makes for a nice alternative. To use DNS, it is easiest if you run an internal authoritative DNS server for the respective zone and enable dynamic updates. Certbot has a "dns-rfc2136" module that supports authenticated dynamic DNS updates [9].
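
A hedged example of how this fits together (the TSIG key name, secret, and DNS server address are placeholders; check the plugin documentation for the exact option names):

# /etc/letsencrypt/rfc2136.ini: TSIG key used for authenticated dynamic updates
dns_rfc2136_server = 10.0.0.53
dns_rfc2136_name = certbot-key.
dns_rfc2136_secret = <base64 TSIG secret>
dns_rfc2136_algorithm = HMAC-SHA512

# Request a certificate using DNS validation against the internal zone
certbot certonly --dns-rfc2136 \
  --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
  -d host.internal.example.com \
  --server https://internal-ca-hostname:8443/acme/acme/directory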

A Lot of Moving Parts…

So here is a quick "To Do" list of everything you need in the rough order you should set it up:

  1. Register a domain for internal use (or use a subdomain of one you already own). Do NOT use .local internally.
  2. Setup an internal authoritative DNS server
  3. Enable authenticated dynamic DNS on that DNS server and allow updates from your internal IPs using specific keys.
  4. Install smallstep
  5. Install certbot and the rfc2136 module
  6. Run certbot to get your new certificates
  7. Symlink the certificates to the location where your system expects them
  8. Use certbot renewal hooks to do additional operations on certificates as needed (e.g. if you need to create Java keystores or restart services; a hedged example follows this list)
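
For step 8, a hypothetical deploy hook (the reloaded service is just an example):

# Runs the hook only after a successful issuance/renewal; certbot also
# supports scripts dropped into /etc/letsencrypt/renewal-hooks/deploy/
certbot certonly -d host.internal.example.com \
  --server https://internal-ca-hostname:8443/acme/acme/directory \
  --deploy-hook "systemctl reload nginx"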

Enjoy!

[1] https://blog.chromium.org/2021/03/a-safer-default-for-navigation-https.html
[2] https://www.eff.org/https-everywhere
[3] https://transparencyreport.google.com/https/overview?hl=en
[4] https://support.apple.com/en-us/HT211025
[5] https://tools.ietf.org/html/rfc8555
[6] https://transparencyreport.google.com/https/certificates?hl=en
[7] https://smallstep.com/docs/step-ca
[8] https://eff.org/certbot
[9] https://tools.ietf.org/html/rfc2136


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Modern Apps Live – Learn Serverless, Containers and more During May

This post was originally published on this site

Modern Apps Live is a series of events about modern application development that will be live-streaming on Twitch in May. Session topics include serverless, containers, and mobile and front-end development.

If you’re not familiar, modern applications are those that:

  • Can scale quickly to millions of users.
  • Have global availability.
  • Manage a lot of data (we’re talking exabytes of data).
  • Respond in milliseconds.

These applications are built using a combination of microservices architectures, serverless operational models, and agile developer processes. Modern applications allow organizations to innovate faster and reduce risk, time to market, and total cost of ownership.

Modern Apps Live is a series of four virtual events.

If you’re a developer, solutions architect, or IT and DevOps professional who wants to build and design modern applications, these sessions are for you, whether you’re just starting out or a more experienced cloud practitioner. There will be time in each session for Q&A. AWS experts will be in the Twitch chat, ready to answer your questions.

If you cannot attend all four events, here are some sessions you definitely shouldn’t miss:

  • Keynotes are a must! AWS leadership will share product roadmaps and make some important announcements.
  • There are security best practices sessions scheduled on May 4 and May 19, during the Container Day x Kubecon and Serverless Live events. Security should be a top priority when you develop modern applications.
  • If you’re just getting started with serverless, don’t miss the “Building a Serverless Application Backend” session on May 19. Eric Johnson and Justin Pirtle will show you how to pick the right serverless pattern for your workload and share information about security, observability, and simplifying your deployments.
  • On May 25, in the “API modernization with GraphQL” session, Brice Pelle will show how you can use GraphQL in your client applications.
  • Bring any burning questions about containers to the “Open Q&A and Whiteboarding” session during the Container Day x DockerCon event on May 26.

For more information or to register for the events, see the Modern Apps Live webpage.

I hope to see you there.

Marcia

AQUA (Advanced Query Accelerator) – A Speed Boost for Your Amazon Redshift Queries

This post was originally published on this site

Amazon Redshift already provides up to 3x better price-performance at any scale than any other cloud data warehouse. We do this by designing our own hardware and by using Machine Learning (ML).

For example, we launched the SSD-based RA3 nodes for Amazon Redshift at the end of 2019 (Amazon Redshift Update – Next-Generation Compute Instances and Managed, Analytics-Optimized Storage) and added additional node sizes last April (Amazon Redshift update – ra3.4xlarge Nodes), and last December (Amazon Redshift Launches RA3.xlplus Nodes With Managed Storage). In addition to high-bandwidth networking, RA3 nodes incorporate a sophisticated data management model. As I said when we launched the RA3 nodes:

There’s a cache of large-capacity, high-performance SSD-based storage on each instance, backed by S3, for scale, performance, and durability. The storage system uses multiple cues, including data block temperature, data block age, and workload patterns, to manage the cache for high performance. Data is automatically placed into the appropriate tier, and you need not do anything special to benefit from the caching or the other optimizations.

Our customers use RA3 nodes to maintain very large data sets and are seeing great results. From digital interactive entertainment to tracking impressions and performance for media buys, Amazon Redshift and RA3 nodes help our customers to store and query data at world scale, with up to 32 PB of data in a single data warehouse.

On the downside, it turns out that advances in storage performance have outpaced those in CPU performance, even as data warehouses continue to grow. The combination of large amounts of data (often accessed by queries that mandate a full scan), and limits on network traffic, can result in a situation where network and CPU bandwidth become limiting factors.

We can do something about that…

Introducing AQUA
Today we are making the ra3.4xl and ra3.16xl nodes even more powerful with the addition of AQUA (Advanced Query Accelerator). Building on the caches that I told you about earlier, and taking advantage of the AWS Nitro System and custom FPGA-based acceleration, AQUA pushes the computation needed to handle reduction and aggregation queries closer to the data. This reduces network traffic, offloads work from the CPUs in the RA3 nodes, and allows AQUA to improve the performance of those queries by up to 10x, at no extra cost and without any code changes. AQUA also makes use of a fast, high-bandwidth connection to Amazon Simple Storage Service (S3).

You can watch this video to learn a lot more about how AQUA uses the custom-designed hardware in the AQUA nodes to accelerate queries. The benefit comes about in several different ways. Each node performs the reduction and aggregation operations in parallel with the others. In addition to getting the n-fold speedup due to parallelism, the amount of data that must be sent to and processed on the compute nodes is generally far smaller (often just 5% of the original). Here’s a diagram that shows how all of the elements come together to accelerate queries:

If you are already using ra3.4xl or ra3.16xl nodes to host your data warehouse, you can start using AQUA in minutes. You simply enable AQUA for your clusters, restart them, and benefit from vastly improved performance for your reduction and aggregation queries. If you are ready to move into the future with RA3 and AQUA, you can create a new RA3-based cluster from a snapshot of your existing one, or you can use Classic resize to do an in-place upgrade.
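
If you prefer the CLI over the console, enabling AQUA should look roughly like the following (a sketch that assumes the AquaConfigurationStatus setting is exposed by the AWS CLI this way; check the current CLI reference for the exact command):

# Enable AQUA on an existing RA3 cluster, then restart the cluster
aws redshift modify-aqua-configuration \
    --cluster-identifier test-cluster \
    --aqua-configuration-status enabled

aws redshift reboot-cluster --cluster-identifier test-cluster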

Using AQUA
I don’t happen to have a data warehouse! I used a snapshot provided by the Redshift team to create a pair of clusters. The first one (prod-cluster) does not have AQUA enabled, and the second one (test-cluster) does:

To create the AQUA-enabled cluster, I simply choose Turn on on the Cluster configuration page:

My queries will use the lineitem table, which has over 18 billion rows:

I create a session on each cluster and disable the Redshift result cache:
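
Disabling the result cache is done with a session-level parameter, along these lines:

-- Force each run to execute the query instead of returning a cached result
SET enable_result_cache_for_session TO off;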

And then I run the same query on both clusters:

select sum(l_orderkey), count(*) from lineitem where
l_comment similar to 'slyly %' or
l_comment similar to 'plant %' or
l_comment similar to 'fina %' or
l_comment similar to 'quick %' or
l_comment similar to 'slyly %' or
l_comment similar to 'quickly %' or
l_comment similar to ' %about%' or
l_comment similar to ' final%' or
l_comment similar to ' %final%' or
l_comment similar to ' breach%' or
l_comment similar to ' egular%' or
l_comment similar to ' %closely%' or
l_comment similar to ' closely%' or
l_comment similar to ' %idea%' or
l_comment similar to ' idea%' ; 

If you take a look at the diagram above (and perhaps watch the video), you can see why AQUA can handle queries of this type very efficiently. Instead of sequentially scanning all 18 billion or so rows on the compute nodes, AQUA distributes the collection of similar to expressions to multiple AQUA nodes where they are run in parallel.

The query on the cluster that has AQUA enabled finishes in less than a minute:

The query on the cluster that does not have AQUA enabled finishes in a little under 4 minutes:

As is always the case with databases, complex data, and equally complex queries, your mileage will vary. For example, you could imagine a query that did a complex JOIN of rows SELECTed from multiple tables, where each SELECT would benefit from AQUA, and the overall speedup could be even greater. As you can see from the simple query that I used for this post, AQUA can dramatically reduce query time and perhaps even enable some new types of somewhat real-time queries that were simply not possible or practical in the past.

Things to Know
Here are a couple of interesting facts about AQUA:

Cluster Version – Your clusters must be running Redshift version 1.0.24421 or later in order to be able to make use of AQUA. To learn more about how to enable and disable AQUA, read Managing an AQUA Cluster.

Relevant Queries – AQUA is designed to deliver up to 10X performance on queries that perform large scans, aggregates, and filtering with LIKE and SIMILAR TO predicates. Over time we expect to add support for additional queries.

Security – All data cached by AQUA is encrypted using your keys. After performing a filtering or aggregation operation, AQUA compresses the results, encrypts them, and returns them to Redshift.

Regions – AQUA is available today in the US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), and Asia Pacific (Tokyo) Regions, and will be coming to Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Singapore) in the first half of 2021.

Pricing – As I mentioned earlier, there’s no additional charge for AQUA.

Try AQUA Today
If you are using ra3.4xl or ra3.16xl nodes to power your Redshift cluster, you can enable AQUA, restart the cluster, and run some test queries within minutes. Take AQUA for a spin and let me know what you think!

Jeff;

 

Example of Cleartext Cobalt Strike Traffic (Thanks Brad), (Mon, Apr 12th)

This post was originally published on this site

Brad has a large collection of malware traffic (thanks Brad 🙂 ).

I've been searching his collection for a particular sample, and I found one:

2019-07-25 – HANCITOR-STYLE AMADEY MALSPAM PUSHES PONY & COBALT STRIKE

What is special about this sample? Let me show you.

Open Brad's pcap file with Wireshark, go to File / Export Objects / HTTP …, and sort the table by size, descending:

You see 3 filenames (H7mp, H7mp, OLIx). These are the Cobalt Strike beacons with a "checksum8 URL":

See my diary entry "Finding Metasploit & Cobalt Strike URLs" for more information.

Save one of the files to disk (the 3 files are identical) and analyze it with my tool 1768.py:

Notice that the value of CryptoScheme is 1: this means that this beacon (and the C2) will not encrypt their transmitted data with AES, because encryption is disabled in trial versions.

And since the beacon communicates over HTTP, we can see cleartext traffic directly with Wireshark. For example, filter on "submit.php" requests:
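
A display filter along these lines does the trick (a minimal example):

http.request.uri contains "submit.php"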

And follow the HTTP stream of the first request:

We can see strings that look like IPv4 config information: internal IPv4 address (10.7.25.101), network mask (255.255.255.0), MTU (1500) and MAC address (00:08:02:1C:47:AE).

Isn't that nice? 🙂

If you like looking through pcap files, like we handlers do, I invite you to find more unencrypted Cobalt Strike traffic in Brad's pcap file, and share your comments here.

 

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Building an IDS Sensor with Suricata & Zeek with Logs to ELK, (Sat, Apr 10th)

This post was originally published on this site

Over the past several years I have used multiple pre-built sensors based on readily available ISO images (rockNSM, SO, OPNSense, etc.), but what I was really looking for was just a sensor to parse traffic (i.e. Zeek) and send IDS alerts (Suricata) to ELK. To speed up the deployment of each sensor, I created a basic CentOS 7 server VM where I copied all the scripts and files I need to get Suricata & Zeek up and running. Since my ELK cluster is the recipient of these logs, it includes Elastic filebeat. I saved all the important scripts and changes into two tarballs (installation and sensor). The sensor tarball has a copy of the softflowd (netflow) binary that can be used to capture netflow data.

Using this document as a template to build the sensor, download and extract the installation tarball on the sensor to install Suricata & Zeek, as well as the Elasticsearch applications filebeat, metricbeat and packetbeat if using ELK to analyze the traffic. Refer to the document to configure each of the ELK applications.

There are two tarballs: the first, installation.tgz, sets up all the scripts listed below to install the software, and the second preconfigures some of the sensor configuration files (Suricata, Zeek, softflowd, Filebeat, Metricbeat & Packetbeat).

  • $ wget https://handlers.sans.edu/gbruneau/scripts/installation.tgz
  • $ wget https://handlers.sans.edu/gbruneau/scripts/sensor.tgz
  • Extract the tarball with the scripts as follow: $ sudo tar zxvf installation.tgz -C /
  • Install Suricata: $ sudo yum -y install suricata
  • Install Zeek: $ sudo yum -y install zeek

After Suricata & Zeek have been installed, if you plan to send the logs to Elasticsearch, install filebeat (metricbeat & packetbeat are optional).

  • Install Filebeat: $ sudo yum -y install filebeat (metricbeat and packetbeat)

The sensor.tgz tarball has Zeek configured to save the logs in JSON format, which is supported by most commercial products like ELK, RSA NetWitness, Splunk, etc.

  • Extract this tarball after installing all the packages: $ sudo tar zxvf sensor.tgz -C /

If the packet capture interface is something other than ens160 (check with ifconfig), update the following files (a hedged sketch follows this list):

  • /opt/zeek/etc/node.cfg
  • /etc/suricata/suricata.yaml
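
For reference, the interface is set in those two files roughly as follows (a sketch; ens192 is just an example interface name, and your suricata.yaml capture section may differ):

# /opt/zeek/etc/node.cfg
[zeek]
type=standalone
host=localhost
interface=ens192

# /etc/suricata/suricata.yaml (af-packet capture section)
af-packet:
  - interface: ens192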

If using packetbeat:

  • /etc/packetbeat/packetbeat.yml

If using softflowd (make script executable: chmod 755 /etc/rc.local):

  • /etc/rc.local

Enable Suricata & Zeek to start on reboot:

  • $ sudo systemctl enable suricata
  • $ sudo systemctl enable zeek

Update Suricata's rules:

  • $ sudo /usr/bin/suricata-update update --reload-command "/usr/bin/systemctl kill -s USR2 suricata"

Let's start some services:

  • $ sudo systemctl start suricata
  • $ sudo systemctl status suricata
  • $ sudo systemctl start zeek
  • $ sudo systemctl status zeek

Last, configure the Elasticsearch output section of filebeat (metricbeat & packetbeat are optional) to send the logs to the server; a hedged sketch follows the file list below. To make sure nothing is missed when configuring the Elastic applications, review the document Logging Data to Elasticsearch, which contains all the steps to configure these Elastic Beats.

  • /etc/filebeat/filebeat.yml
  • /etc/metricbeat/metricbeat.yml
  • /etc/packetbeat/packetbeat.yml
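
The Elasticsearch output section in filebeat.yml looks roughly like this (hostname and credentials are placeholders; the same pattern applies to metricbeat and packetbeat):

# /etc/filebeat/filebeat.yml (output section)
output.elasticsearch:
  hosts: ["https://elk.internal.example.com:9200"]
  username: "beats_writer"
  password: "<password>"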

If using any of the Beats, enable them to start on reboot:

  • $ sudo systemctl enable filebeat
  • $ sudo systemctl enable metricbeat
  • $ sudo systemctl enable packetbeat

Let's start Filebeat:

  • $ sudo systemctl start filebeat
  • $ sudo systemctl status filebeat

Note: Because Suricata logs are sent to ELK with filebeat, there is an hourly cronjob that deletes the previous hour's logs from the /nsm/suricata directory to keep it clean; as a result, only a minimal /nsm/suricata partition is needed, as documented in [4].
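
A hypothetical shape for such a cleanup job (an illustration only, not the exact job used here):

# /etc/cron.d/suricata-cleanup: hourly removal of already-shipped Suricata logs
0 * * * * root find /nsm/suricata -type f -mmin +60 -delete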

Since I use VMs as sensors, I exported this sensor template as an OVA, which requires minimal configuration changes for the next deployment.

[1] https://github.com/irino/softflowd
[2] https://handlers.sans.edu/gbruneau/elastic.htm
[3] https://handlers.sans.edu/gbruneau/elk/TLS_elasticsearch_configuration.pdf
[4] https://handlers.sans.edu/gbruneau/elk/Building_Custom_IDS_Sensor.pdf
[5] https://handlers.sans.edu/gbruneau/scripts/installation.tgz
[6] https://handlers.sans.edu/gbruneau/scripts/sensor.tgz

———–
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.