Building a .freq file with Public Domain Data Sources, (Fri, Jul 31st)

This diary started out as a frequency analysis of zone files for domains that expire before May 2023. Our intent was to look for the frequency of randomly generated names across all Generic Top-Level Domains (gTLDs). This exercise quickly turned into “create the .freq file” for the analysis.

First, we create our .freq file

python3 ./freq.py --create bookmag.freq

The name will make sense as sources are revealed.

My first pass was to download a few famous books from the Gutenberg project (e.g., Sherlock Holmes, A Tale of Two Cities, War and Peace), following the example from Mark [2] [4]. The frequency analysis on that first attempt did not match up to my randomly chosen test strings (not truly random, as my brain was the random generator; read into that what you will :)).

This got me thinking: could I compile some strange sources and LOTS of data to create a better frequency baseline? More data == better, right? (Well, not always, but in this case, maybe)…

This gave me an idea: why not put all of the venerable PHRACK [8] mags in my freq file… To the “Intertubes”, Batman…

Now, when you download “ALL” the TGZ files, there are a few steps to get them into your new bookmag.freq. The first is to uncompress them. And YES, I downloaded them by clicking on them; after about phrack15 or 16, wget -r or curl came to mind. I stuck with clicking and stayed polite. Thanks, Phrack, for leaving these rare gems up!
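
If you would rather script the retrieval, a loop like the following works. This is a minimal Python sketch; the URL pattern and issue range are assumptions, so verify them against phrack.org, and keep the delay in to stay polite:

import time
import urllib.request

# Hypothetical issue range and URL pattern -- verify against phrack.org.
for issue in range(1, 70):
    url = f"http://phrack.org/archives/tgz/phrack{issue}.tar.gz"
    try:
        urllib.request.urlretrieve(url, f"phrack{issue}.tar.gz")
        print(f"fetched {url}")
    except Exception as exc:
        print(f"skipped {url}: {exc}")
    time.sleep(5)  # pause between requests; be polite to the archive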

Once you get them unpacked, you can crawl through them pretty easily as noted below:

for fname in `ls ../phrack*/*`
do
echo $fname
python3 ./freq.py --normalfile $fname bookmag.freq
done

Now our bookmag.freq has all of Phrack as part of its analysis baseline.

On to the Gutenberg portion, because now the curiosity of “Can I put EVEN MORE literary works in this thing?” came to mind… Go Big or Go <insert somewhere not home here, as we are all stuck at home>…

So, after locating a DVD ISO of the Gutenberg project (not listed here, but it is easy to find) and uncompressing all the ZIP files:

NOTE: Files were transferred from the root of the ISO to a working “RAW” Directory.

for fname in `find . -name "*.ZIP"`
do
unzip $fname -d ../textdocs
done

Here is where zsh barked at me cause the command line was “TOOOOOO LOOOOOONG”: there is a limit (the kernel’s ARG_MAX) on the size of an argument list, so looping over an ls of that many files fails.

I settled for manually reducing the load on my poor ‘ls’ command, for example by narrowing the glob to ‘0d*.txt’:

for fname in `ls ../textdocs/0d*.txt`
do
echo $fname
python3 ./freq.py --normalfile $fname bookmag.freq
done

../textdocs/0ddc809a.txt
../textdocs/0ddcc10.txt
../textdocs/0ddcd09.txt
../textdocs/0ddcl10.txt
../textdocs/0drvb10.txt
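
If you'd rather not babysit glob chunks, the same loop can be driven from Python itself, which iterates the directory instead of expanding one giant argument list. A minimal sketch, assuming freq.py sits in the current directory as above:

import subprocess
from pathlib import Path

# pathlib yields one path at a time, so ARG_MAX never comes into play.
for path in sorted(Path("../textdocs").glob("*.txt")):
    print(path)
    subprocess.run(
        ["python3", "./freq.py", "--normalfile", str(path), "bookmag.freq"],
        check=True,
    )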

Now that we have a robust .freq file we can start testing.
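
For a quick sanity check, a sketch like the following can score a couple of strings against the new baseline. This assumes Mark's freq.py [2] is importable from the working directory and exposes the FreqCounter class with load() and probability(); if your copy differs, the --measure command-line option does the same job:

from freq import FreqCounter

fc = FreqCounter()
fc.load("bookmag.freq")

# Higher scores look like natural language; lower scores look random.
print(fc.probability("sherlock"))      # dictionary-ish: higher score
print(fc.probability("xkqzvhtrwpdj"))  # keyboard mash: lower score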

Let us know what sources you decide to use!

References
[1] https://isc.sans.edu/diary/Detecting+Random+-+Finding+Algorithmically+chosen+DNS+names+%28DGA%29/19893
[2] https://github.com/MarkBaggett/freq
[3] http://dev.gutenberg.org/
[4] https://www.youtube.com/watch?v=FpfOzcRpzs8
[5] https://github.com/sans-blue-team/freq.py/blob/master/freq.py
[6] https://wiki.sans.blue/Tools/pdfs/freq.py.pdf
[7] https://isc.sans.edu/diary/freq.py+super+powers%3F/19903
[8] http://phrack.org/

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Semantic Highlighting in the PowerShell Preview extension for Visual Studio Code

Hi everyone!
I’m Justin and I am currently an intern on the PowerShell team.
One of my projects was to add PowerShell semantic highlighting support in VS Code, allowing for more accurate highlighting in the editor.
I’m excited to share that the first iteration has been released.

Getting started

Great news!
You don’t have to do anything to get this feature except for making sure you have at least the
v2020.7.0 version of the
PowerShell Preview extension for Visual Studio Code.

IMPORTANT

You have to use a theme that supports semantic highlighting.
All the built-in themes support it, as does the PowerShell ISE theme, but not every theme is guaranteed to.
If you don’t see any difference in highlighting,
the theme you’re using probably doesn’t support it.
Open an issue against the theme you’re using, asking for semantic highlighting support.

For theme authors: Supporting Semantic Highlighting

If you are a theme author, make sure to add "semanticHighlighting": true to the
theme JSON file of your VS Code theme.

For a more complete guide to supporting semantic highlighting in your theme,
please look at the Semantic Highlight Guide in the VS Code docs:
https://code.visualstudio.com/api/language-extensions/semantic-highlight-guide

The rest of this blog post will discuss the shortcomings of the old syntax
highlighting mechanism and how semantic highlighting addresses those issues.

Syntax Highlighting

Currently, the syntax highlighting support for PowerShell scripts in VS Code leverages
TextMate grammars, which are mappings
of regular expressions to tokens. For instance, to identify control keywords, something like
the following would be used:

{
    name = 'keyword.control.untitled';
    match = '\b(if|while|for|return)\b';
}

However, there are some limitations with regular expressions and their ability to recognize different syntax patterns.
Since TextMate grammars rely on these expressions,
there are many complex and context-dependent tokens these grammars are unable to parse,
leading to inconsistent or incorrect highlighting.
Just skim through the issues in the
EditorSyntax repo,
our TextMate grammar.

Here are a few examples where syntax highlighting fails in
tokenizing a PowerShell script.

[Screenshot: syntax highlighting bugs]

Semantic Highlighting

To solve those cases
(and many other ones)
we use the PowerShell tokenizer
which describes the tokens more accurately than regular expressions can,
while also always being up-to-date with the language grammar.
The only problem is that the tokens generated by the PowerShell tokenizer do not align perfectly to the semantic token types predefined by VS Code.
The
semantic token types provided by VS Code are:

  • namespace
  • type, class, enum, interface, struct, typeParameter
  • parameter, variable, property, enumMember, event
  • function, member, macro
  • label
  • comment, string, keyword, number, regexp, operator

On the other hand, there are over 100
PowerShell token kinds
and also many
token flags
that can modify those types.

The main task (aside from setting up a semantic tokenization
handler) was to create a mapping from PowerShell tokens to VS Code semantic token types,
sketched below. The result of enabling semantic highlighting can be seen in the screenshot that follows.
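
Conceptually, that mapping is just a lookup table. The following toy sketch is illustrative only: the real implementation lives in PowerShellEditorServices and is written in C#, and the token-kind names here are a simplified subset:

# Toy mapping from PowerShell token kinds to VS Code semantic token types.
POWERSHELL_TO_SEMANTIC_TOKEN = {
    "Variable":      "variable",
    "Parameter":     "parameter",
    "Comment":       "comment",
    "StringLiteral": "string",
    "Number":        "number",
    "Identifier":    "function",  # e.g. command names
}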

[Screenshot: semantic highlighting examples]

If we compare the semantic highlighting to the highlighting in PowerShell ISE, we can see they are quite similar (in tokenization, not color).

[Screenshot: PowerShell ISE]

Next Steps

Although semantic highlighting does a better job than syntax highlighting in identifying tokens,
there remain some cases that can still be improved at the PowerShell layer.

In Example 5, for instance, while the enum does have better highlighting, the name and members
of the enum are highlighted identically. This occurs because PowerShell tokenizes
them all the same way (as identifiers with a token flag denoting that they are member names), meaning that the semantic highlighting has no way to differentiate them.

How to Provide Feedback

If you experience any issues or have suggestions for improvement, please raise an issue in
PowerShell/vscode-powershell. Since this was
just released, any feedback will be greatly appreciated.

Justin Chen
PowerShell Team

The post Semantic Highlighting in the PowerShell Preview extension for Visual Studio Code appeared first on PowerShell.

New – Using Amazon GuardDuty to Protect Your S3 Buckets

As we anticipated in this post, the anomaly and threat detection for Amazon Simple Storage Service (S3) activities that was previously available in Amazon Macie has now been enhanced, and its cost reduced by over 80%, as part of Amazon GuardDuty. This expands GuardDuty threat detection coverage beyond workloads and AWS accounts to also help you protect your data stored in S3.

This new capability enables GuardDuty to continuously monitor and profile S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities such as requests coming from an unusual geo-location, disabling of preventative controls such as S3 Block Public Access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection, machine learning, and continuously updated threat intelligence. For your reference, here’s the full list of GuardDuty S3 threat detections.

When threats are detected, GuardDuty produces detailed security findings to the console and to Amazon EventBridge, making alerts actionable and easy to integrate into existing event management and workflow systems, or trigger automated remediation actions using AWS Lambda. You can optionally deliver findings to an S3 bucket to aggregate findings from multiple regions, and to integrate with third party security analysis tools.
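
As a toy illustration of that hook, a Lambda function subscribed to the EventBridge “GuardDuty Finding” detail type might start like this (field names follow the published finding format; the remediation itself is left as an exercise):

# Hypothetical Lambda handler for GuardDuty findings arriving via EventBridge.
def handler(event, context):
    detail = event.get("detail", {})
    finding_type = detail.get("type", "unknown")  # e.g. "Policy:S3/BucketBlockPublicAccessDisabled"
    severity = detail.get("severity", 0.0)
    region = detail.get("region", "unknown")
    print(f"GuardDuty finding {finding_type} (severity {severity}) in {region}")
    # Kick off remediation here, e.g. re-enable S3 Block Public Access.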

If you are not using GuardDuty yet, S3 protection will be on by default when you enable the service. If you are using GuardDuty, you can simply enable this new capability with one click in the GuardDuty console or through the API. For simplicity, and to optimize your costs, GuardDuty has now been integrated directly with S3. In this way, you don’t need to manually enable or configure S3 data event logging in AWS CloudTrail to take advantage of this new capability. GuardDuty also intelligently processes only the data events that can be used to generate threat detections, significantly reducing the number of events processed and lowering your costs.

If you are part of a centralized security team that manages GuardDuty across your entire organization, you can manage all accounts from a single account using the integration with AWS Organizations.

Enabling S3 Protection for an AWS Account
I already have GuardDuty enabled for my AWS account in this region. Now, I want to add threat detection for my S3 buckets. In the GuardDuty console, I select S3 Protection and then Enable. That’s it. To be more protected, I repeat this process for all regions enabled in my account.
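
If you'd rather script it, here is a minimal boto3 sketch (assuming credentials are configured); it walks every region where GuardDuty is offered and flips the S3 data source on each existing detector:

import boto3

for region in boto3.session.Session().get_available_regions("guardduty"):
    gd = boto3.client("guardduty", region_name=region)
    try:
        detector_ids = gd.list_detectors()["DetectorIds"]
    except Exception:
        continue  # region disabled for this account
    for detector_id in detector_ids:
        gd.update_detector(
            DetectorId=detector_id,
            DataSources={"S3Logs": {"Enable": True}},
        )
        print(f"{region}: S3 protection enabled on {detector_id}")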

After a few minutes, I start seeing new findings related to my S3 buckets. I can select each finding to get more information on the possible threat, including details on the source actor and the target action.

After a few days, I select the Usage section of the console to monitor the estimated monthly costs of GuardDuty in my account, including the new S3 protection. I can also see which S3 buckets contribute the most to those costs. Well, it turns out I didn’t have lots of traffic on my buckets recently.

Enabling S3 Protection for an AWS Organization
To simplify management of multiple accounts, GuardDuty uses its integration with AWS Organizations to allow you to delegate an account to be the administrator for GuardDuty for the whole organization.

Now, the delegated administrator can enable GuardDuty for all accounts in the organization in a region with one click. You can also set Auto-enable to ON to automatically include new accounts in the organization. If you prefer, you can add accounts by invitation. You can then go to the S3 Protection page under Settings to enable S3 protection for the entire organization.

When selecting Auto-enable, the delegated administrator can also choose to enable S3 protection automatically for new member accounts.
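
The equivalent API call from the delegated administrator account looks roughly like this (a sketch; it assumes the account has already been delegated GuardDuty administrator for the organization):

import boto3

gd = boto3.client("guardduty")
detector_id = gd.list_detectors()["DetectorIds"][0]

# Auto-enable GuardDuty, including the S3 data source, for new member accounts.
gd.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnable=True,
    DataSources={"S3Logs": {"AutoEnable": True}},
)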

Available Now
As always, with Amazon GuardDuty, you only pay for the quantity of logs and events processed to detect threats. This includes API control plane events captured in CloudTrail, network flow captured in VPC Flow Logs, DNS request and response logs, and with S3 protection enabled, S3 data plane events. These sources are ingested by GuardDuty through internal integrations when you enable the service, so you don’t need to configure any of these sources directly. The service continually optimizes logs and events processed to reduce your cost, and displays your usage split by source in the console. If configured in multi-account, usage is also split by account.

There is a 30-day free trial for the new S3 threat detection capabilities. This also applies to accounts that already have GuardDuty enabled and add the new S3 protection capability. During the trial, the estimated cost based on your S3 data event volume is shown in the GuardDuty console Usage tab. In this way, while you evaluate these new capabilities at no cost, you can understand what your monthly spend would be.

GuardDuty for S3 protection is available in all regions where GuardDuty is offered. For regional availability, please see the AWS Region Table. To learn more, please see the documentation.

Danilo

Announcing the New AWS Community Builders Program!

This post was originally published on this site

We continue to be amazed by the enthusiasm for AWS knowledge sharing in technical communities. Many experienced AWS advocates are passionate about helping others build on AWS by sharing their challenges, success stories, and code. Others who are newer to AWS are showing a similar enthusiasm for community building and are asking how they can get more involved in community activities. These builders are seeking better ways to connect with one another, share best practices, and receive resources & mentorship to help improve community knowledge sharing.

To help address these points, we are excited to announce the new AWS Community Builders Program which offers technical resources, mentorship, and networking opportunities to AWS enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. As of today, this program is open for anyone to apply to join!

Members of the program will receive:

  • Access to AWS product teams and information about new services and features
  • Mentorship from AWS subject matter experts on a variety of topics, including content creation, community building, and securing speaking engagements
  • AWS Promotional Credits and other helpful resources to support content creation and community-based work

Any individual who is passionate about building on AWS can apply to join the AWS Community Builders program. The application process is open to AWS builders worldwide, and the program seeks applicants from all regions, demographics, and underrepresented communities.

While there is no single specific criterion for being accepted into the program, applications will generally be reviewed for evidence and accuracy of technical content, such as blog posts, open source contributions, presentations, online knowledge sharing, and community organization efforts, such as hosting AWS Community Days, AWS User Groups, or other community-based events. Equally important, the program seeks individuals from diverse backgrounds who are enthusiastic about getting more involved in these types of activities! The program will accept a limited number of applicants per year.

Please apply to be an AWS Community Builder today. To learn more, you can get connected via a variety of community resources.

Channy and Jason

Python Developers: Prepare!!!, (Thu, Jul 30th)

I know… tried it several times… growing up is hard. So instead, you decided to become a “Red Teamer” (aka Pentesters…). You got the hoodie, and you acquired a taste for highly caffeinated energy drinks. Now the only thing left: Learning to write a script. So like all the other “kids,” you learn Python and start writing and publishing tools (Yes… all the world needed was DNS covert channel tool #32773… you realize you could have written that as a bash oneliner?).

So what follows comes from a reluctant occasional Python coder.

So instead of learning a real language like Perl, you figured Python is it. Lately, in my ongoing quest to avoid growing up, I jumped from Perl onto the Python bandwagon to also be one of the cool kids. Like most developers, I lean heavily on Stack Overflow and other tools like GitHub to find sample code that I am sure millions of others have reviewed (isn’t that the point of Open Source?) and made sure is well-written, secure code.

But as I learned more about Python, I noticed a dangerous trend: people all of a sudden forgot that preparing SQL statements is a thing. I think the problem is in part that Python makes it so easy to mix up prepared and unprepared statements.

Compare these two snippets:

sql = """SELECT count(*) FROM users where id=%s"""
vars = ('642063 OR 1=1',)
cursor.execute(sql, vars)
count = cursor.fetchone()
print(count[0])

Returns: 1

sql = """SELECT count(*) FROM users where id=%s"""
vars = ('642063 OR 1=1',)
cursor.execute(sql % vars)
count = cursor.fetchone()
print(count[0])

Returns: 123237

The difference is a single character in the cursor.execute() line: a “,” vs. a “%”.

A “,” passes the variables as a second parameter, and the driver quotes them safely. A “%” is the string-formatting operator: it alters the SQL string itself, which is then the only parameter, so it is really no better than concatenating strings. And yes, the “{}” notation is not much better. You can specify more restrictive formats, but that falls apart quickly if you are dealing with arbitrary strings.

Unlike Perl’s amazing DBI module, Python is a bit inconsistent between different databases, so double-check this if you are using SQLite or PostgreSQL.
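
Here is a quick self-contained illustration of that inconsistency using sqlite3, which wants “?” placeholders (qmark style) rather than the “%s” (format style) used in the snippets above:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER)")
cur.execute("INSERT INTO users (id) VALUES (?)", (642063,))

# The injection attempt is treated as one literal value, not as SQL.
cur.execute("SELECT count(*) FROM users WHERE id=?", ("642063 OR 1=1",))
print(cur.fetchone()[0])  # 0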

And if you would like to read about this from someone who appears to know how to code in Python, I found this blog post that I thought was pretty good and went into more detail: https://www.btelligent.com/en/blog/best-practice-for-sql-statements-in-python/


Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Nakivo Backup and Replication v10 released with support for vSphere 7

Nakivo released version 10 of Backup and Replication last week. I personally use Nakivo Backup and Replication in my lab environment because it can be installed on my NAS device. This saves the hardware resources of a backup VM (appliance or Windows/Linux), and the deployment is super fast. In version 10 of Nakivo Backup and Replication … Read more

The post Nakivo Backup and Replication v10 released with support for vSphere 7 appeared first on ivobeerens.nl.

NAKIVO Backup & Replication v10 released with VMware vSphere 7.0 support, Linux workstation backup, and backup to Wasabi

Turn Your QNAP, Synology, ASUSTOR, WD, or Netgear NAS into a VM Backup Appliance, claiming to improve backup performance by up to 2X and offload your IT infrastructure

Last year, I wrote about NAKIVO v8.5.2 with detailed walk-through install and configure videos (see that detailed article here), and it included Windows Server backups too:

This year, major new functionality has been added yet again to NAKIVO’s product line. While it might take me some time to add some of my experiences testing v10 in my vSphere/ESXi 7.0b home lab to this article, the UI is similar enough to the information above that it should give you a very good sense of just how easy this very special backup appliance VM is to install and use in your VMware home lab. You’ll be kicking off your first automated daily backup jobs in very little time. But remember: always test the restore process too, to start building trust in any backup product. See also NAKIVO’s official videos below to help you get started.

Back to what just happened, NAKIVO’s big v10 announcement this week:

  • NAKIVO Backup & Replication v10 Adds vSphere 7 Support and Other Features

    Sparks, NV, United States – July 27, 2020

    The latest version of NAKIVO Backup & Replication introduces Backup to Wasabi, vSphere 7 support, Full P2V Recovery and more.

    NAKIVO Inc., a fast-growing software company dedicated to protecting virtual, physical, cloud and SaaS environments, announced today the release of NAKIVO Backup & Replication v10.

    NAKIVO Backup & Replication provides businesses with the tools they need to protect their entire IT infrastructure—from VMware, Hyper-V and Nutanix AHV VMs and Amazon EC2 instances to physical servers and workstations, Oracle databases, and Microsoft Office 365 application data. The introduction of Backup to Wasabi in v10 gives customers the power to leverage scalable cloud storage while retaining the option to store confidential data on local or off-site storage devices and tape media.

Product Page

Release Notes

I’m particularly keen on finally having full vSphere 7 support, along with a new option to recover physical machine backups into VMs. Nice! There’s a lot in there to check out; below I’ve included just a small excerpt, and the rest is at:

New Features
VMware vSphere v7.0 Support

NAKIVO Backup & Replication now supports VMware vSphere v7.0

Physical Machine Recovery to VMware VM

To protect mixed physical and virtual IT environments, NAKIVO Backup & Replication offers the Physical to Virtual Machine Recovery feature. For details, refer to Physical to Virtual Machine Recovery.

User Interface Enhancement (Facelift)

NAKIVO Backup & Replication interface has been enhanced to improve user experience with the product. Refer to Web Interface Components to learn what has been changed.

Backup to Wasabi Hot Cloud Storage

NAKIVO Backup & Replication allows you to create Backup Repositories in Wasabi buckets for backing up and storing virtual and physical machines.

NFR for IT Pros

I’ve not been able to test whether this still works for v10. Give it a try and drop a comment below about how it goes, especially if you get to it before I do, and I’ll update this article accordingly.


Note that NAKIVO offers NFR (Not For Resale) to many IT Professionals, which I explain in more detail here:

If you are a VMUG member, VMware vExpert, VCP, VSP, VTSP, or VCI you can receive a FREE two-socket Not For Resale (NFR) license of NAKIVO Backup & Replication for one year and use it in your home or work lab.

The NFR licenses are available for non-production use only, including educational, lab testing, evaluation, training, and demonstration purposes.

  • Apply for NFR download access here.
    I’m not sure what their selection criteria are, or how long it takes for NAKIVO to get back to you. The latest version filename is currently:
    NAKIVO_Backup_Replication_VA_v10.0.0_Full_Solution_NFR.ova

Download trial


The OVA Appliance file is currently named:
NAKIVO_Backup_Replication_VA_v10.0.0_Full_Solution_TRIAL.ova

TinkerTry’d

This part is in progress.

Pricing

Their Pricing page seems to indicate that protection of physical servers is only available on a subscription-license basis, with perpetual-license options for VMware and Hyper-V only. That said, the physical-machine agent doesn’t appear to be explicitly called out there, so this is subject to change; I’d recommend you revisit the pricing page or use the online chat to inquire.

Video

If time permits, I’ll do a v10 version; meanwhile, here are my recent v8.5.2 videos.

Overview videos by NAKIVO:


See also at TinkerTry

nakivo-v9-adds-physical-windows-server-backups

nakivo-852-works-with-vmware-vsphere-67u2

first-look-at-synology-1618-plus-10-gb-nas

clone-esxi-with-usb-image-tool


Full Disclosure: “TinkerTry.com, LLC” is registered as a NAKIVO Bronze Partner, mostly to help get notified of latest news and code releases. I get no special early-access, anybody can sign up for the betas. All TinkerTry site advertising goes through middle-man BuySellAds, and NAKIVO has run ads on TinkerTry through BuySellAds off and on the past few years. NAKIVO does know if you found their affiliate link from my site, which means the possibility of reseller commissions if you eventually upgrade to one of their paid offerings. Here’s their pricing.

Integrating VMware Cloud Notification Gateway with VMware Event Broker Appliance (VEBA)

I previously wrote about the VMware Cloud Notification Gateway (NGW), which provides curated notifications delivered to VMware Cloud on AWS users. By default, NGW supports several types of notification channels, such as email, the VMware Cloud Console UI, the VMware Cloud Activity Log, vRealize Log Intelligence Cloud (vRLIC), and the vSphere UI when using the vCenter Cloud […]

Why VMware Cloud Director?

Why VMware Cloud Director? VMware Cloud Director isn’t about IaaS anymore. It’s a pervasive cloud fabric stretching across any VMware endpoint, bringing true cloud capability to a Cloud Provider’s Software-Defined Datacenter. With a brand new UI, deep integration with the VMware SDDC, extensibility, and a plethora of services, vCloud Director is … Continue reading Why VMware Cloud Director?