Creating an isolated network between two virtual machines and facing an issue while assigning static IPs


Issue: I was following KB https://kb.vmware.com/s/article/2043160 and ran into problems while assigning a static IP to a VM with two network adapters.

 

The script I was using:

 

# Deploy two VMs from a template, add a second NIC on an isolated port group,
# and apply an OS customization spec: DHCP on the public NIC, static IP on the private NIC.

$vmNameTemplate = "VM-{0:D3}"

$cluster = Get-Cluster -Name "Test Cluster"
$template = Get-Template "New_Template"
$vmList = @()

# Isolated vSwitch (no uplinks) and port group for the private network
New-VirtualSwitch -VMHost 'esxi3.gsslabs.org' -Name vSwitch8
New-VirtualPortGroup -Name "Private" -VirtualSwitch vSwitch8 -VLanId 0

$vmhost = Get-Cluster 'Test Cluster' | Get-VMHost -Name 'esxi3.gsslabs.org'
$myDatastoreCluster = 'Datastore'

for ($i = 1; $i -le 2; $i++) {
    $vmName = $vmNameTemplate -f $i
    $vmList += New-VM -Name $vmName -Datastore $myDatastoreCluster -VMHost $vmhost -Template $template | Start-VM
}

# Create a custom spec
$staticIpList = Import-Csv C:\deploy.csv
$sCust = @{
    Name           = "Win2008r"
    OSType         = 'Windows'
    Type           = 'Persistent'
    FullName       = "Test"
    OrgName        = "VMware1"
    NamingScheme   = 'VM'
    AutoLogonCount = 1
    ChangeSID      = $true
    Confirm        = $false
    Workgroup      = "VMware"
}

New-OSCustomizationSpec @sCust

# Drop the default NIC mapping and work with a non-persistent clone of the spec
$nicMappings = Get-OSCustomizationNicMapping -OSCustomizationSpec Win2008r
Remove-OSCustomizationNicMapping -OSCustomizationNicMapping $nicMappings
$specClone = New-OSCustomizationSpec -Spec Win2008r -Type NonPersistent

for ($i = 0; $i -lt $vmList.Count; $i++) {
    $vm = Get-VM -Name $vmList[$i]
    $ip = $staticIpList[$i].ip

    # Second adapter on the isolated port group
    New-NetworkAdapter -VM $vmList[$i] -NetworkName "Private" -Type "E1000" -StartConnected
    $publicNIC  = $vmList[$i] | Get-NetworkAdapter | Where-Object { $_.NetworkName -eq "VM Network" }
    $privateNIC = $vmList[$i] | Get-NetworkAdapter | Where-Object { $_.NetworkName -eq "Private" }

    # One NIC mapping per adapter, keyed by MAC address
    New-OSCustomizationNicMapping -OSCustomizationSpec $specClone -IpMode UseDhcp -NetworkAdapterMac $publicNIC.MacAddress
    New-OSCustomizationNicMapping -OSCustomizationSpec $specClone -IpMode UseStaticIP -IpAddress $ip -SubnetMask "255.255.255.0" -DefaultGateway "192.168.0.1" -Dns "192.168.0.10" -NetworkAdapterMac $privateNIC.MacAddress

    $nicMappings = Get-OSCustomizationNicMapping -OSCustomizationSpec $specClone | Where-Object { $_.Position -eq 1 }
    $nicMappings | Set-OSCustomizationNicMapping -IpMode UseDhcp -NetworkAdapterMac $publicNIC.MacAddress

    $nicMapping = Get-OSCustomizationNicMapping -OSCustomizationSpec $specClone | Where-Object { $_.Position -eq 2 }
    $nicMapping | Set-OSCustomizationNicMapping -IpMode UseStaticIP -IpAddress $ip -SubnetMask "255.255.255.0" -DefaultGateway "192.168.0.1" -Dns "192.168.0.10" -NetworkAdapterMac $privateNIC.MacAddress

    $vmCust = Get-OSCustomizationSpec -Name $specClone
    #New-NetFirewallRule -DisplayName "Allow inbound ICMPv4" -Direction Inbound -Protocol ICMPv4 -IcmpType 8 -Action Allow
    Set-VM -VM $vmList[$i] -OSCustomizationSpec $vmCust -Confirm:$false
}
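
For reference, the script only consumes an ip column from the CSV, so C:\deploy.csv is simply (the addresses are examples):

ip
192.168.0.11
192.168.0.12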

 

Any assistance that I can get would be much appreciated.

Thank you

Semantic Highlighting in the PowerShell Preview extension for Visual Studio Code


Hi everyone!
I’m Justin and I am currently an intern on the PowerShell team.
One of my projects was to add PowerShell semantic highlighting support in VS Code allowing for more accurate highlighting in the editor.
I’m excited to share that the first iteration has been released.

Getting started

Great news!
You don’t have to do anything to get this feature except for making sure you have at least the
v2020.7.0 version of the
PowerShell Preview extension for Visual Studio Code.

IMPORTANT

You have to use a theme that supports Semantic Highlighting.
All the inbox themes support it, as does the PowerShell ISE theme, but it's not guaranteed that every theme will.
If you don't see any difference in highlighting,
the theme you're using probably doesn't support it.
Open an issue against the theme you're using, asking for Semantic Highlighting support.

For theme authors: Supporting Semantic Highlighting

If you are a theme author, make sure to add "semanticHighlighting": true to the
theme.json file of your VS Code theme.
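
A minimal sketch of a theme file with the flag set (the surrounding properties are illustrative):

{
    "name": "My Theme",
    "type": "dark",
    "semanticHighlighting": true,
    "tokenColors": []
}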

For a more complete guide to supporting Semantic Highlighting in your theme,
please look at the Semantic Highlight Guide in the VS Code documentation.

The rest of this blog post will discuss the shortcomings of the old syntax
highlighting mechanism and how semantic highlighting addresses those issues.

Syntax Highlighting

Currently, the syntax highlighting support for PowerShell scripts in VS Code leverages
TextMate grammars, which are mappings
of regular expressions to tokens. For instance, to identify control keywords, something like
the following would be used:

{
    name = 'keyword.control.untitled';
    match = '\b(if|while|for|return)\b';
}

However, there are some limitations with regular expressions and their ability to recognize different syntax patterns.
Since TextMate grammars rely on these expressions,
there are many complex and context-dependent tokens these grammars are unable to parse,
leading to inconsistent or incorrect highlighting.
Just skim through the issues in the
EditorSyntax repo,
our TextMate grammar.

Here are a few examples where syntax highlighting fails in
tokenizing a PowerShell script.

Syntax Highlighting Bugs

Semantic Highlighting

To solve those cases
(and many others)
we use the PowerShell tokenizer
which describes the tokens more accurately than regular expressions can,
while also always being up-to-date with the language grammar.
The only problem is that the tokens generated by the PowerShell tokenizer do not align perfectly to the semantic token types predefined by VS Code.
The
semantic token types provided by VS Code are:

  • namespace
  • type, class, enum, interface, struct, typeParameter
  • parameter, variable, property, enumMember, event
  • function, member, macro
  • label
  • comment, string, keyword, number, regexp, operator

On the other hand, there are over 100
PowerShell token kinds
and also many
token flags
that can modify those types.
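
To see that raw material, here is a minimal sketch (not the extension's actual code) that runs the PowerShell tokenizer over a one-liner and applies a toy mapping to VS Code semantic token types; the mapping table is purely illustrative:

$script = 'if ($x -gt 1) { Get-ChildItem -Path C:\ }'
$tokens = $null
$errors = $null
# ParseInput returns the AST; here we only need the token stream.
$null = [System.Management.Automation.Language.Parser]::ParseInput($script, [ref]$tokens, [ref]$errors)

foreach ($token in $tokens) {
    $flags = $token.TokenFlags
    $semanticType =
        if ($token.Kind -eq 'Variable') { 'variable' }
        elseif ($token.Kind -eq 'Number') { 'number' }
        elseif ($token.Kind -eq 'Parameter') { 'parameter' }
        elseif ($flags -band [System.Management.Automation.Language.TokenFlags]::CommandName) { 'function' }
        elseif ($flags -band [System.Management.Automation.Language.TokenFlags]::Keyword) { 'keyword' }
        else { 'other' }
    '{0,-12} {1,-15} -> {2}' -f $token.Kind, $token.Text, $semanticType
}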

The main task (aside from setting up a semantic tokenization
handler) was to create a mapping from PowerShell tokens to VS Code semantic token types. The
result of enabling semantic highlighting can be seen below.

Semantic Highlighting Examples

If we compare the semantic highlighting to the highlighting in PowerShell ISE, we can see they are quite similar (in tokenization, not color).

PowerShell ISE Screenshot

Next Steps

Although semantic highlighting does a better job than syntax highlighting in identifying tokens,
there remain some cases that can still be improved at the PowerShell layer.

In Example 5, for instance, while the enum does have better highlighting, the name and members
of the enum are highlighted identically. This occurs because PowerShell tokenizes them
all the same way (as identifiers with a token flag denoting that they are member names),
meaning that the semantic highlighting has no way to differentiate them.

How to Provide Feedback

If you experience any issues or have comments on improvement, please raise an issue in
PowerShell/vscode-powershell. Since this was
just released, any feedback will be greatly appreciated.

Justin Chen
PowerShell Team


New – Using Amazon GuardDuty to Protect Your S3 Buckets


As we anticipated in this post, the anomaly and threat detection for Amazon Simple Storage Service (S3) activities that was previously available in Amazon Macie has now been enhanced and reduced in cost by over 80% as part of Amazon GuardDuty. This expands GuardDuty threat detection coverage beyond workloads and AWS accounts to also help you protect your data stored in S3.

This new capability enables GuardDuty to continuously monitor and profile S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities such as requests coming from an unusual geo-location, disabling of preventative controls such as S3 block public access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. To detect possibly malicious behavior, GuardDuty uses a combination of anomaly detection, machine learning, and continuously updated threat intelligence. For your reference, here’s the full list of GuardDuty S3 threat detections.

When threats are detected, GuardDuty produces detailed security findings to the console and to Amazon EventBridge, making alerts actionable and easy to integrate into existing event management and workflow systems, or trigger automated remediation actions using AWS Lambda. You can optionally deliver findings to an S3 bucket to aggregate findings from multiple regions, and to integrate with third party security analysis tools.

If you are not using GuardDuty yet, S3 protection will be on by default when you enable the service. If you are already using GuardDuty, you can simply enable this new capability with one click in the GuardDuty console or through the API. For simplicity, and to optimize your costs, GuardDuty has now been integrated directly with S3. In this way, you don’t need to manually enable or configure S3 data event logging in AWS CloudTrail to take advantage of this new capability. GuardDuty also intelligently processes only the data events that can be used to generate threat detections, significantly reducing the number of events processed and lowering your costs.
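
For the API route, a rough sketch using the AWS CLI from PowerShell (assuming a configured CLI and an existing detector in the region; the query just grabs the first detector):

$detectorId = aws guardduty list-detectors --query 'DetectorIds[0]' --output text
aws guardduty update-detector --detector-id $detectorId --data-sources 'S3Logs={Enable=true}'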

If you are part of a centralized security team that manages GuardDuty across your entire organization, you can manage all accounts from a single account using the integration with AWS Organizations.

Enabling S3 Protection for an AWS Account
I already have GuardDuty enabled for my AWS account in this region. Now, I want to add threat detection for my S3 buckets. In the GuardDuty console, I select S3 Protection and then Enable. That’s it. To be more protected, I repeat this process for all regions enabled in my account.

After a few minutes, I start seeing new findings related to my S3 buckets. I can select each finding to get more information on the possible threat, including details on the source actor and the target action.

After a few days, I select the Usage section of the console to monitor the estimated monthly costs of GuardDuty in my account, including the new S3 protection. I can also see which S3 buckets are contributing the most to those costs. Well, it turns out I didn’t have lots of traffic on my buckets recently.

Enabling S3 Protection for an AWS Organization
To simplify management of multiple accounts, GuardDuty uses its integration with AWS Organizations to allow you to delegate an account to be the administrator for GuardDuty for the whole organization.

Now, the delegated administrator can enable GuardDuty for all accounts in the organization in a region with one click. You can also set Auto-enable to ON to automatically include new accounts in the organization. If you prefer, you can add accounts by invitation. You can then go to the S3 Protection page under Settings to enable S3 protection for the entire organization.

When selecting Auto-enable, the delegated administrator can also choose to enable S3 protection automatically for new member accounts.
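
A rough CLI sketch of that flow (the account ID is a placeholder; run the first command from the organization's management account and the second as the delegated administrator; the --data-sources shape for S3 auto-enable is my assumption of the parameter):

aws guardduty enable-organization-admin-account --admin-account-id 111122223333
aws guardduty update-organization-configuration --detector-id $detectorId --auto-enable --data-sources 'S3Logs={AutoEnable=true}'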

Available Now
As always, with Amazon GuardDuty, you only pay for the quantity of logs and events processed to detect threats. This includes API control plane events captured in CloudTrail, network flow captured in VPC Flow Logs, DNS request and response logs, and with S3 protection enabled, S3 data plane events. These sources are ingested by GuardDuty through internal integrations when you enable the service, so you don’t need to configure any of these sources directly. The service continually optimizes logs and events processed to reduce your cost, and displays your usage split by source in the console. If configured in multi-account, usage is also split by account.

There is a 30-day free trial for the new S3 threat detection capabilities. This applies as well to accounts that already have GuardDuty enabled and add the new S3 protection capability. During the trial, the estimated cost based on your S3 data event volume is calculated in the GuardDuty console Usage tab. In this way, while you evaluate these new capabilities at no cost, you can understand what your monthly spend would be.

GuardDuty for S3 protection is available in all regions where GuardDuty is offered. For regional availability, please see the AWS Region Table. To learn more, please see the documentation.

Danilo

vSphere client could not connect to VC


I was trying to upgrade vCenter from version 5.5 U3E to 6.5 U3. The upgrade process failed, so we rolled back the vCenter server from snapshots and restored the SQL Server database from backup. After the restore, the inventory is empty and I am unable to connect with a domain ID. In the SQL database I found two schemas in use: one is dbo and the other is VMW. When I query the database directly, the vCenter data is all present in the VMW schema, but vCenter can’t read the data from VMW. Can we move/alter/copy the data from the VMW schema to the default schema, i.e. dbo? Will that resolve the issue?

I have also created a new Windows 2012 R2 server and installed a fresh copy of vCenter 5.5 U3E, pointing it at the existing external SQL database via ODBC. The test connection succeeds, but vCenter still can’t read data from the database.

The configuration is as below:

  1. vCenter Server – version 5.5 U3E, installed on a Windows 2008 R2 server
  2. SQL Server – MS SQL 2008 R2 Enterprise, Service Pack 2
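
On the schema question, here is a hedged sketch (PowerShell with the SqlServer module) of the two approaches I am considering trying, on a restored copy of the database first, since vCenter is particular about its schema. Server, database, and login names are placeholders; vpxuser is only the typical vCenter login name.

# Option A: leave the data where it is and point the vCenter login's default schema at VMW.
$params = @{ ServerInstance = 'SQLSRV01'; Database = 'VCDB' }   # placeholder names
Invoke-Sqlcmd @params -Query "ALTER USER [vpxuser] WITH DEFAULT_SCHEMA = VMW;"

# Option B: generate and run ALTER SCHEMA ... TRANSFER statements for every object in VMW.
$stmts = Invoke-Sqlcmd @params -Query @"
SELECT 'ALTER SCHEMA dbo TRANSFER VMW.' + QUOTENAME(name) AS stmt
FROM sys.objects
WHERE schema_id = SCHEMA_ID('VMW') AND type IN ('U','V','P','FN')
"@
foreach ($s in $stmts) { Invoke-Sqlcmd @params -Query $s.stmt }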

 

Is there a specific setting to disable/manage the “Prevent cross site tracking” for iOS Safari in WS1


Our app development team is trying to test some security features in certain betas that they are developing. They are looking to temporarily bypass the “Prevent Cross-Site Tracking” setting, which is currently greyed out in the Safari settings.

Is this a default setting that is applied once a device is enrolled and supervised?

 

Thank you

Low Read Latency, High Write Latency in SQL DB


I’ve been struggling with this one for a while. We have several VMs running SQL 2017 Enterprise (on Win 2012 R2) that experience high write latency (ranging from 15-45ms) but have great read latency (2-4ms). I’ve looked at all of the real-time stats, from the OS (perfmon) to the hypervisor (ESXTOP) back to the storage itself, and never see write latency higher than 5ms; the ESXTOP GAVG is a great stat for this. In general, read latency would be higher in most systems due to having to search for random data, whereas writes are typically asynchronous and simply handed off to be written to disk. There are no queueing issues with the SCSI controllers either; the data files, tempdb, and log files are separated on different virtual controllers and have separate datastores. Plus, any queuing would also affect read latency. SQL is running on the latest compatibility level, and I don’t see any write latency for tempdb or the transaction log, only the data files!

 

Anyone else have this issue where SQL reports high latency but you can’t actually see any from the OS back to the storage? Surely the latency must be somewhere within SQL, but I can’t find it. Thanks in advance!
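
For reference, here is roughly how to pull the per-file view of what SQL itself reports, via the sys.dm_io_virtual_file_stats DMV (a sketch; the instance name is a placeholder, and the counters are cumulative since instance start, so compare two samples for current rates):

Invoke-Sqlcmd -ServerInstance 'SQLVM01' -Query @"
SELECT DB_NAME(vfs.database_id) AS db_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;
"@ | Format-Table -AutoSize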

 

Configuration:

Compute – Cisco UCS, M5 blades

VMware – vSphere 6.0 U2 (upgrading soon)

VM SCSI Controllers – LSI for OS disk, Paravirtual for all others (data files, log, tempdb) with separate datastores

VMDKs – Thick Prov Eager Zero (Data drives)

Storage – All Flash via iSCSI (dedicated storage vlans)

Network – All Cisco Nexus 10GB+

SQL – 2017 CU19

Cannot access Sandisk SSD in Windows10 on VMWare Fusion


I am using a 2017 iMac and VMWare Fusion to host a virtual Windows 10 OS.

My goal is to install the Windows OS onto an SSD so I can boot directly from the external drive on my iMac.

 

I’m following this method: How to install Windows 10 “Boot Camp” on a Mac External Drive the EASY way! (2020 edition) – YouTube

 

I run into problems when I connect my Sandisk Extreme Portable 500G to the iMac.

The Windows chime indicates it recognises the disk being connected; however, I cannot access it in any way.

I’m not too familiar with Windows, but after hunting in “This PC” and “Computer Management” I can’t see the drive anywhere.
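
(One check that can be run inside the Windows guest from an elevated PowerShell prompt, in case the disk is enumerated but simply has no drive letter, is a sketch like this:)

Get-Disk | Format-Table Number, FriendlyName, BusType, OperationalStatus, PartitionStyle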

 

If I change the USB setting on VMWare to 1 or 2, it gives me an error the moment I start the VM.

If I use 3.1, I get no error and a chime telling me the disk is recognised, but it is still invisible…

 

Any thoughts are greatly appreciated. I’ve been trying several things from this and other forums but with no success.

Run Docker for Windows and corresponding containers within VMWare Windows 10 session using the facilities provided by Visual Studio/Visual Studio Code


I want to run Docker For Windows within a VMWare Windows 10 Pro session. Right now, attempting to even start Docker for Windows in a virtual session results in an error message.

“Invalid Operation Exception:

Failed to deploy distro docker-desktop to …\AppData\Local\Docker\wsl\distro: exit code: -1

stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS”

 

I checked: the Virtual Machine Platform feature is turned on in both the host and the VM, and virtualization is enabled in the BIOS.

 

Below is the TLDR description of what I’m trying to accomplish and why.

 

I have searched for that scenario and come up either empty, with very outdated (as in years too old) methodologies or with respondents who wish to change the OP’s world view by telling them “You’re doing it wrong, change your methodology to this…” whatever “this” might happen to be. Or worse, asking “why do you want to do it that way? Do it this way instead…”. In all cases the respondents failed to fully understand the OP’s original question, which amounts to: “Can I run Docker for Windows within a Windows 10 Pro VMWare virtual session and, if so, how?”.

 

I understand the difference between Docker and VMWare, that’s not the issue. I’ve been a VMWare user since their version 3 so I get how it works and what it provides. Am I a VMWare guru? Nope, that’s part of the charm of VMWare – usually it “just works” and I don’t have to care about the machinery beneath. In the course of my learning, I see that, properly configured, Docker is exactly the same – it “just works” – awesome!

 

I am a developer; I use Visual Studio (VS) as my dev environment. Microsoft has gone to great lengths to make VS work with Docker. Marvelous! Love it! BUT, the presumption by Microsoft is that VS is running on the host machine or, and I haven’t tried this yet, running under their hypervisor (Hyper-V). I’d never used that hypervisor: when it first came out there were *way* too many problems with it and, again, VMWare just worked. I never saw a reason to move off the VMWare platform, and doing so now would be a major change for me, one I would very much like to avoid.

 

(update): I *have* tried running Docker for Windows within Microsoft’s Hyper-V environment. With a little fiddling (as in a few PowerShell commands) it WORKS JUST FINE! The concept is billed as “nested virtualization” but turning on a few Windows features and executing a PowerShell command or two and Docker for Windows starts just fine within the Hyper-V virtual session and I can execute both Windows and Linux containers.
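
For anyone curious, the fiddling was along these lines, run on the Hyper-V host with the VM shut down (the VM name is a placeholder, and the exact set of tweaks needed may vary by build):

Set-VMProcessor -VMName 'Win10-Dev' -ExposeVirtualizationExtensions $true
Set-VMMemory -VMName 'Win10-Dev' -DynamicMemoryEnabled $false
Get-VMNetworkAdapter -VMName 'Win10-Dev' | Set-VMNetworkAdapter -MacAddressSpoofing On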

 

My situation is this. Because I travel a great deal, I rotate between my own personal machine at home, a company desktop machine, and a laptop when traveling, usually issued by DevOps whenever I have to go offsite, so the laptop isn’t “mine”; it’s whatever happens to be available at the time. Changing that scenario is way above my pay-grade; company policy is something I cannot affect.

 

My solution up to this point has been to use VMWare to create a session, keep the session on a portable SSD drive, and simply ensure that I have either VMWare Player or Workstation available to me; company DevOps has yielded that much, as I’m not affecting their “host” machine configuration. It then becomes a simple matter to tune my development environment to suit me, with all the additional bells and whistles and custom configurations and other goodies I might desire. Since it’s my environment, with strictly controlled access to the corporate network as designated and configured by company DevOps, they don’t care what’s on my session, so I avoid all sorts of political red tape if I wish to try a new version of Windows (yep, even Windows updates come through them; we had to fight a major battle to get them to allow us Windows 10, 2004 and WSL 2!), try a new tool, install a new version of VS (updates to which now come out about every two weeks!), etc.

In short, by using VMWare sessions, my dev environment is exactly what I want it to be, totally under my control, as I am the admin on the session and thus able to control group policy, etc., and it’s PORTABLE! *Easily* portable! I plug in the drive and voila! Back to work, and it matters not one whit which machine I might be working on!

 

Here is the kicker: We finally get to move to .NET Core and all the goodies associated with it, and our C-level management has finally acknowledged the existence of Docker, so now we get to play. Trouble is, when I went to install Docker for Windows into my Windows 10 Pro 2004 session (all updates applied, WSL 2 enabled with both Debian and Alpine) running on the latest version of VMWare for Windows with full VMWare Tools installed, I received the following error message:

“System.InvalidOperationException: Failed to deploy distro docker-desktop to …\AppData\Local\Docker\wsl\distro: exit code: -1 stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.”

 

I checked, and both “Virtual Machine Platform” and “Containers” are enabled in “Windows Features” for both the session and the host machine, and virtualization is turned on for the host machine; that’s about as far as I can go, since there is no “virtualization” setting in the VMWare BIOS for the session. I even took the risk and turned on Hyper-V for the session (I haven’t tried that for the host yet; I’m waiting for a response here before I take that plunge, despite the articles on MS and VMWare working together to make that “just work”).
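
(One thing I have not yet tried: Workstation exposes a “Virtualize Intel VT-x/EPT or AMD-V/RVI” processor option, which amounts to a vhv.enable line in the VM’s .vmx file. A sketch, with the guest powered off and a placeholder path:)

$vmx = 'D:\VMs\Win10-2004\Win10-2004.vmx'   # placeholder path to the VM's config file
Add-Content -Path $vmx -Value 'vhv.enable = "TRUE"'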

 

So, finally, the question is this: Is there a method of making Docker Desktop for Windows work within a VMWare session running Windows 10, 2004 with WSL 2, such that I might take advantage of all the goodies provided by Visual Studio and Visual Studio Code for Docker? And if so, will someone please provide a link to the instructions for configuring my environment, virtual or otherwise?

Conditions in Blueprint Code


New to vRA8 and still learning the syntax for blueprints as code. The code below takes the OS input from the user and changes the customization spec to be used based on that input. That works great when you only have two OSes to pick from. But how do I modify something like that to work with more than two OSes? It seems like conditions can only be A or B based on a match. Is there a way to rework that to work for multiple OS choices?

customizationSpec: ${input.OS == "Windows 2016" ? "Win2016" : "Win2019"}
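
One pattern I have seen suggested (unverified on my side, and the third spec name is just illustrative) is chaining the ternaries:

customizationSpec: ${input.OS == "Windows 2016" ? "Win2016" : (input.OS == "Windows 2019" ? "Win2019" : "Win2022")}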

Announcing the New AWS Community Builders Program!


We continue to be amazed by the enthusiasm for AWS knowledge sharing in technical communities. Many experienced AWS advocates are passionate about helping others build on AWS by sharing their challenges, success stories, and code. Others who are newer to AWS are showing a similar enthusiasm for community building and are asking how they can get more involved in community activities. These builders are seeking better ways to connect with one another, share best practices, and receive resources & mentorship to help improve community knowledge sharing.

To help address these points, we are excited to announce the new AWS Community Builders Program which offers technical resources, mentorship, and networking opportunities to AWS enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. As of today, this program is open for anyone to apply to join!

Members of the program will receive:

  • Access to AWS product teams and information about new services and features
  • Mentorship from AWS subject matter experts on a variety of topics, including content creation, community building, and securing speaking engagements
  • AWS Promotional Credits and other helpful resources to support content creation and community-based work

Any individual who is passionate about building on AWS can apply to join the AWS Community Builders program. The application process is open to AWS builders worldwide, and the program seeks applicants from all regions, demographics, and underrepresented communities.

While there is no single specific criterion for being accepted into the program, applications will generally be reviewed for evidence and accuracy of technical content, such as blog posts, open source contributions, presentations, and online knowledge sharing, as well as community organization efforts, such as hosting AWS Community Days, AWS User Groups, or other community-based events. Equally important, the program seeks individuals from diverse backgrounds who are enthusiastic about getting more involved in these types of activities! The program will accept a limited number of applicants per year.

Please apply to be an AWS Community Builder today. To learn more, you can get connected via a variety of community resources.

Channy and Jason