SPF and DMARC use on GOV domains in different ccTLDs, (Fri, Dec 30th)

Although e-mail is one of the cornerstones of modern interpersonal communication, its underlying Simple Mail Transfer Protocol (SMTP) is far from what we might call “robust” or “secure”[1]. By itself, the protocol lacks any security features related to ensuring (among other factors) the integrity or authenticity of transferred data or the identity of their sender, and creating a “spoofed” e-mail is therefore quite easy. This poses a significant issue, especially when one considers that most ordinary people don’t tend to question the validity of official-looking messages if it appears that they were sent from a respectable/well-known domain.
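SPF and DMARC policies are published as DNS TXT records, so checking whether a given domain has them in place is straightforward. Below is a minimal sketch using the third-party dnspython package; the domain name is a placeholder, and this is only an illustration of the kind of lookup involved, not the diary's actual methodology.

```python
# Minimal sketch: look up SPF and DMARC TXT records for a domain.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def get_txt(name):
    """Return the TXT record strings for a DNS name, or an empty list."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.gov"  # placeholder domain, not from the diary
spf = [t for t in get_txt(domain) if t.lower().startswith("v=spf1")]
dmarc = [t for t in get_txt(f"_dmarc.{domain}") if t.lower().startswith("v=dmarc1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```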

Opening the Door for a Knock: Creating a Custom DShield Listener, (Thu, Dec 29th)

There are a variety of services listening for connections on DShield honeypots [1]. Different systems scanning the internet can connect to these listening services due to exceptions in the firewall. Any attempted connections blocked by the firewall are logged and can be analyzed later. This can be useful to see TCP port connection attempts, but its usefulness is limited: without the ability to complete the SYN, SYN-ACK, ACK handshake, other protocol data may not be sent.
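To illustrate the idea, here is a minimal sketch of a custom TCP listener that completes the handshake and records whatever the scanner sends first. The port number and log format are arbitrary assumptions; this is not the DShield honeypot's actual listener code.

```python
# Minimal sketch of a custom TCP listener: accept connections (completing the
# handshake) and log the first bytes the remote side sends. Port and output
# format are arbitrary examples.
import datetime
import socket

LISTEN_PORT = 8000  # arbitrary example port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()          # TCP handshake completes here
        with conn:
            conn.settimeout(5)
            try:
                data = conn.recv(4096)     # capture any payload the scanner sends
            except socket.timeout:
                data = b""
            print(f"{datetime.datetime.utcnow().isoformat()} "
                  f"{addr[0]}:{addr[1]} {data[:200]!r}")
```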

Exchange OWASSRF Exploited for Remote Code Execution, (Thu, Dec 22nd)

According to a post by Rapid7, they have observed Exchange Server 2013, 2016, and 2019 being actively exploited through "a chaining of CVE-2022-41080 and CVE-2022-41082 to bypass URL rewrite mitigations that Microsoft provided for ProxyNotShell allowing for remote code execution (RCE) via privilege escalation via Outlook Web Access (OWA)."[1]

Linux File System Monitoring & Actions, (Tue, Dec 20th)

There can be multiple reasons to keep an eye on a critical/suspicious file or directory. For example, you could track an attacker and wait for someone to access the credentials captured by a phishing kit installed on a compromised server. You could deploy an EDR solution or an OSSEC agent that implements FIM ("File Integrity Monitoring")[1]. Upon a file change, an action can be triggered. Nice, but what if you would like a quick, agentless solution (in the scope of an incident, for example)?
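As a quick, agentless illustration of the idea, here is a minimal Python sketch that polls a file and triggers an action when its content changes. The watched path and the "action" are placeholders, and the diary's own approach may well rely on inotify-based tooling instead.

```python
# Agentless polling sketch: hash a watched file periodically and react when
# it changes. Path, interval, and the triggered action are illustrative only.
import hashlib
import time

WATCHED = "/var/www/phishing-kit/logs/creds.txt"   # hypothetical path

def digest(path):
    """SHA-256 of the file content, or None if the file does not exist."""
    try:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        return None

last = digest(WATCHED)
while True:
    time.sleep(5)
    current = digest(WATCHED)
    if current != last:
        print(f"[!] {WATCHED} changed (or was created/removed)")
        # trigger whatever action you need here: alert, copy the file, etc.
        last = current
```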

Hunting for Mastodon Servers, (Mon, Dec 19th)

Since Elon Musk took control of Twitter, there has been considerable interest in alternatives to the micro-blogging network. Without certainty about Twitter's future, many people switched to the Mastodon[1] network. Most of the ISC Handlers are now present on this decentralized network. For example, I’m reachable via @xme@infosec.exchange[2]. You can find our addresses on the Contact page[3].

Infostealer Malware with Double Extension, (Sun, Dec 18th)

Got this file attachment this week pretending to be from HSBC Global Payments and Cash Management. The attachment payment_copy.pdf.z is a RAR archive, which is somewhat unusual for this type of file extension, and when extracted, the file comes out with a double extension, pdf.exe. The file is an infostealer trojan and is detected by multiple scanning engines.
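Double extensions like this are easy to flag programmatically. The following is a small, generic sketch; the filename list and the set of "dangerous" final extensions are illustrative assumptions, not taken from the diary.

```python
# Flag attachments whose filename hides an executable behind a double
# extension (e.g. "payment_copy.pdf.exe"). Example names and extension set
# are assumptions for illustration.
import pathlib

SUSPICIOUS_FINAL = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd"}

def has_double_extension(name: str) -> bool:
    """True if the name carries two or more extensions ending in a risky one."""
    suffixes = pathlib.PurePosixPath(name.lower()).suffixes
    return len(suffixes) >= 2 and suffixes[-1] in SUSPICIOUS_FINAL

for name in ["payment_copy.pdf.exe", "invoice.pdf", "report.docx.scr"]:
    print(name, "->", "suspicious" if has_double_extension(name) else "ok")
```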

New – Bring ML Models Built Anywhere into Amazon SageMaker Canvas and Generate Predictions

Amazon SageMaker Canvas provides business analysts with a visual interface to solve business problems using machine learning (ML) without writing a single line of code. Since we introduced SageMaker Canvas in 2021, many users have asked us for an enhanced, seamless collaboration experience that enables data scientists to share trained models with their business analysts with a few simple clicks.

Today, I’m excited to announce that you can now bring ML models built anywhere into SageMaker Canvas and generate predictions.

New – Bring Your Own Model into SageMaker Canvas
As a data scientist or ML practitioner, you can now seamlessly share models built anywhere, within or outside Amazon SageMaker, with your business teams. This removes the heavy lifting for your engineering teams to build a separate tool or user interface to share ML models and collaborate between the different parts of your organization. As a business analyst, you can now leverage ML models shared by your data scientists within minutes to generate predictions.

Let me show you how this works in practice!

In this example, I share with my marketing analyst an ML model that has been trained to identify customers who are potentially at risk of churning. First, I register the model in the SageMaker model registry. SageMaker model registry lets you catalog models and manage model versions. I create a model group called 2022-customer-churn-model-group and then select Create model version to register my model.

Amazon SageMaker Model Registry

To register your model, provide the location of the inference image in Amazon ECR, as well as the location of your model.tar.gz file in Amazon S3. You can also add model endpoint recommendations and additional model information. Once you’ve registered your model, select the model version and select Share.
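The same registration step can also be scripted. Below is a rough boto3 sketch under the assumption that you already have an inference image in Amazon ECR and a model.tar.gz in Amazon S3; the image URI, S3 path, and group description are placeholders rather than values from this post.

```python
# Rough boto3 sketch of registering a model version in the SageMaker model
# registry. The ECR image URI, S3 model path, and description are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_model_package_group(
    ModelPackageGroupName="2022-customer-churn-model-group",
    ModelPackageGroupDescription="Customer churn models shared with Canvas",  # assumed
)

sm.create_model_package(
    ModelPackageGroupName="2022-customer-churn-model-group",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn-inference:latest",
            "ModelDataUrl": "s3://my-bucket/churn/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    ModelApprovalStatus="Approved",
)
```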

Amazon SageMaker Studio - Share models from model registry with SageMaker Canvas users

You can now choose the SageMaker Canvas user profile(s) within the same SageMaker domain you want to share your model with. Then, provide additional model details, such as information about training and validation datasets, the ML problem type, and model output information. You can also add a note for the SageMaker Canvas users you share the model with.

Amazon SageMaker Studio - Share a model from Model Registry with SageMaker Canvas users

Similarly, you can now also share models trained in SageMaker Autopilot and models available in SageMaker JumpStart with SageMaker Canvas users.

The business analysts will receive an in-app notification in SageMaker Canvas that a model has been shared with them, along with any notes you added.

Amazon SageMaker Canvas - Received model from SageMaker Studio

My marketing analyst can now open, analyze, and start using the model to generate ML predictions in SageMaker Canvas.

Amazon SageMaker Canvas - Imported model from SageMaker Studio

Select Batch prediction to generate ML predictions for an entire dataset or Single prediction to create predictions for a single input. You can download the results in a .csv file.

Amazon SageMaker Canvas - Generate Predictions

New – Improved Model Sharing and Collaboration from SageMaker Canvas with SageMaker Studio Users
We also improved the sharing and collaboration capabilities from SageMaker Canvas with data science and ML teams. As a business analyst, you can now select which SageMaker Studio user profile(s) you want to share your standard-build models with.

Your data scientists or ML practitioners will receive a similar in-app notification in SageMaker Studio once a model has been shared with them, along with any notes from you. In addition to just reviewing the model, SageMaker Studio users can now also, if needed, update the data transformations in SageMaker Data Wrangler, retrain the model in SageMaker Autopilot, and share back the updated model. SageMaker Studio users can also recommend an alternate model from the list of models in SageMaker Autopilot.

Once SageMaker Studio users share back a model, you receive another notification in SageMaker Canvas that an updated model has been shared back with you. This collaboration between business analysts and data scientists will help democratize ML across organizations by bringing transparency to automated decisions, building trust, and accelerating ML deployments.

Now Available
The enhanced, seamless collaboration capabilities for Amazon SageMaker Canvas, including the ability to bring your ML models built anywhere, are available today in all AWS Regions where SageMaker Canvas is available with no changes to the existing SageMaker Canvas pricing.

Start collaborating and bring your ML model to Amazon SageMaker Canvas today!

— Antje

Support Requests: Change Management (and How To Test)

Changes shouldn’t lead to crisis, and therefore we have Change Management. It is a powerful tool for avoiding new and unexpected issues. It will also provide audit trails and situational ownership during an event. Frequently during the life of a VMware Support Request, we have to make a change in order to resolve a …

The post Support Requests: Change Management (and How To Test) appeared first on VMware Support Insider.

Support Requests: How to Get Help from VMware Support

Having spent 10+ years in VMware Support, I would like to share a few bits of advice with all my customers. This is the first post in a series where I’ll go over some fundamentals which will help our newer customers. And I’ll also talk about some core issues that every business can run …

The post Support Requests: How to Get Help from VMware Support appeared first on VMware Support Insider.

Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023

Starting in April of 2023 we will be making two changes to Amazon Simple Storage Service (Amazon S3) to put our latest best practices for bucket security into effect automatically. The changes will begin to go into effect in April and will be rolled out to all AWS Regions within weeks.

Once the changes are in effect for a target Region, all newly created buckets in the Region will by default have S3 Block Public Access enabled and access control lists (ACLs) disabled. Both of these options are already console defaults and have long been recommended as best practices. The options will become the default for buckets that are created using the S3 API, S3 CLI, the AWS SDKs, or AWS CloudFormation templates.

As a bit of history, S3 buckets and objects have always been private by default. We added Block Public Access in 2018 and the ability to disable ACLs in 2021 in order to give you more control, and have long been recommending the use of AWS Identity and Access Management (IAM) policies as a modern and more flexible alternative.

In light of this change, we recommend a deliberate and thoughtful approach to the creation of new buckets that rely on public access or ACLs, and believe that most applications do not need either one. If your application turns out to be one that does, then you will need to make the changes that I outline below (be sure to review your code, scripts, AWS CloudFormation templates, and any other automation).

What’s Changing
Let’s take a closer look at the changes that we are making:

S3 Block Public Access – All four of the bucket-level settings described in this post will be enabled for newly created buckets: BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets.

A subsequent attempt to set a bucket policy or an access point policy that grants public access will be rejected with a 403 Access Denied error. If you need public access for a new bucket you can create it as usual and then delete the public access block by calling DeletePublicAccessBlock (you will need s3:PutBucketPublicAccessBlock permission in order to call this function; read Block Public Access to learn more about the functions and the permissions).
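If you do need a public bucket after the change takes effect, the workflow described above might look roughly like this with boto3; the bucket name and region are placeholders.

```python
# Rough boto3 sketch: create a bucket, then remove the default public access
# block only if the application genuinely needs public access.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="my-public-assets-example")  # placeholder bucket name

# Requires the s3:PutBucketPublicAccessBlock permission on the bucket.
s3.delete_public_access_block(Bucket="my-public-assets-example")
```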

ACLs Disabled – The Bucket owner enforced setting will be enabled for newly created buckets, making bucket ACLs and object ACLs ineffective, and ensuring that the bucket owner is the object owner no matter who uploads the object. If you want to enable ACLs for a bucket, you can set the ObjectOwnership parameter to ObjectWriter in your CreateBucket request or you can call DeleteBucketOwnershipControls after you create the bucket. You will need s3:PutBucketOwnershipControls permission in order to use the parameter or to call the function; read Controlling Ownership of Objects and Creating a Bucket to learn more.
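Similarly, here is a rough boto3 sketch of the two options mentioned above for keeping ACLs usable on a new bucket; the bucket names are placeholders, and both calls require the s3:PutBucketOwnershipControls permission.

```python
# Rough boto3 sketch of the two ways to re-enable ACLs on a newly created
# bucket once the new defaults are in effect. Bucket names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Option 1: opt in to ACLs at creation time via ObjectOwnership.
s3.create_bucket(Bucket="acl-bucket-example-1", ObjectOwnership="ObjectWriter")

# Option 2: create the bucket first, then remove the ownership controls.
s3.create_bucket(Bucket="acl-bucket-example-2")
s3.delete_bucket_ownership_controls(Bucket="acl-bucket-example-2")
```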

Stay Tuned
We will publish an initial What’s New post when we start to deploy this change and another one when the deployment has reached all AWS Regions. You can also run your own tests to detect the change in behavior.

Jeff;