Fun in the lab with VMware and a Raspberry Pi

This post was originally published on this site

Note: These instructions are only intended for use in a lab environment for non-production use. Do not use the steps described here to further contribute to the number of insecure IoT devices populating our world. Assume everything described is an unsupported configuration. Finally, be mindful of the Windows 10 IoT prototyping restrictions as well as … Continue reading Fun in the lab with VMware and a Raspberry Pi

vRealize Automation 8.0 – Wildcard SSL certificate support and deployment issues – LCMVRAVACONFIG590003

This post was originally published on this site

Ok, so I’m just going to call it out straight away: when using wildcard SSL certificates with vRealize Automation 8.0, read the release notes. I did not, and I caused myself quite a few headaches with the deployment, which you can read about further in this post. Cannot set wildcard certs for certain domain names, specifically … Continue reading vRealize Automation 8.0 – Wildcard SSL certificate support and deployment issues – LCMVRAVACONFIG590003

The post vRealize Automation 8.0 – Wildcard SSL certificate support and deployment issues – LCMVRAVACONFIG590003 appeared first on @Saintdle.

AppVolumes Replication – Writables and Appstacks

This post was originally published on this site

Hello Everyone,

 

I’m currently working on completing our Cloud Pod setup, and the last piece I have to configure is App Volumes replication between our two sites. Ideally I want users assigned to home sites, with 50% of users on Site 1 and 50% on Site 2. I have standalone vCenters and App Volumes Managers in each site. For storage we have Pure, and I have already enabled async replication. In App Volumes I created a storage group and was able to import all AppStacks. I then used the PowerShell script and replicated all entitlements with no problems. At some point I would like to automate this, because right now I’m snapshotting the replicated AppStack datastore in Site 1, sending it across to Site 2, and then manually restoring it to a non-attachable datastore so that App Volumes replicates it via the storage group and imports it.
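
A rough sketch of how that cross-site check could be scheduled is below. It relies on the unofficial /cv_api REST interface that the community entitlement scripts use, so the endpoint paths, manager FQDNs, and credentials are assumptions to verify against your own App Volumes version:

# Rough sketch only: /cv_api is the unofficial App Volumes Manager REST
# interface used by community scripts, not a supported API. Verify the
# endpoints against your App Volumes version. Requires PowerShell 7 for
# -SkipCertificateCheck if the managers use self-signed certificates.
function Connect-AvManager {
    param([string]$Server, [pscredential]$Credential)
    $body = @{
        username = $Credential.UserName
        password = $Credential.GetNetworkCredential().Password
    }
    # -SessionVariable keeps the authentication cookie for later calls
    Invoke-RestMethod -Method Post -Uri "https://$Server/cv_api/sessions" `
        -Body $body -SessionVariable avSession -SkipCertificateCheck | Out-Null
    return $avSession
}

$cred    = Get-Credential
$s1      = Connect-AvManager -Server 'avm-site1.lab.local' -Credential $cred
$s2      = Connect-AvManager -Server 'avm-site2.lab.local' -Credential $cred
$stacks1 = Invoke-RestMethod -Uri 'https://avm-site1.lab.local/cv_api/appstacks' -WebSession $s1 -SkipCertificateCheck
$stacks2 = Invoke-RestMethod -Uri 'https://avm-site2.lab.local/cv_api/appstacks' -WebSession $s2 -SkipCertificateCheck

# Report AppStacks visible at Site 1 that Site 2 has not imported yet
$stacks1 | Where-Object { $_.name -notin $stacks2.name } | Select-Object name, status

The array-side snapshot and replication step stays Pure-specific, so it is not covered by this sketch.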

 

As for writable volumes, I was trying to accomplish the same thing the same way; however, it looks like storage groups either don’t apply to writables or simply don’t work for them. I looked at the Horizon multi-site reference architecture guide, and the only thing I can find in it is the following:

 

“In use cases where writable volumes are being used, there are a few additional steps:
1. Mount the replicated datastore that contains the writable volumes.
2. Perform a rescan of that datastore. If the datastore was the default writable volume location, App Volumes Manager automatically picks up the user entitlements after the old assignment information has been cleaned up.
3. (Optional) If the datastore is not the default writable volume location, perform an Import Writables operation from the App Volumes Manager at Site 2.”
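
Step 2 of that procedure can at least be scripted with PowerCLI; a minimal sketch is below, assuming placeholder vCenter, cluster, and datastore names. Step 1 (mounting the replicated volume) is array-specific, and step 3 remains an App Volumes Manager operation.

# Minimal PowerCLI sketch for step 2: rescan the Site 2 hosts so the newly
# mounted replicated datastore becomes visible. Server, cluster, and
# datastore names are placeholders.
Connect-VIServer -Server 'vcsa-site2.lab.local'

Get-Cluster -Name 'Site2-Cluster' | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null

# Confirm the writable-volume datastore is now visible
Get-Datastore -Name 'appvol-writables-replica*'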

 

 

What I’m seeing, though, is that the storage group doesn’t really work for writables the way it does for AppStacks: it doesn’t replicate them between datastores. Also, if I manually drop a writable VMDK and its metadata file into the default writable location, the manager never imports it automatically as mentioned in the article.

 

At the end of the day, what I want to achieve is two sites with AppStacks and writables available at both sites at all times. A user at Site 2 might never have to log in to Site 1, but in case of a disaster I would like everything synchronized and available in both places.

 

Can someone share how this could be achieved, or if you are currently doing it, maybe share your setup?

 

Thank you

VMware Workstation 15.5 choppy USB audio

This post was originally published on this site

I am using Windows 10 as the guest OS (as well as the host), and when I connect any USB audio device to the guest OS the audio is choppy.

I have followed other topics in this community but have found no working solution so far.

 

The issue happens with all types of USB audio devices (dongles, Jabra headsets, etc.).

Audio is choppy and distorted.

Reinstalling VMware Tools did not help.

USB 3.0 support is enabled in the VM.

 

Any suggestions?

Windows 10 guest "sudden death" on VMware Fusion 11.5 on macOS Catalina

This post was originally published on this site

Hi to all,

I am experiencing a Windows 10 guest “sudden death”. The VM quits abruptly and all I see is the VMware “play” triangle on a black background, as if I had shut down the guest.

I have two Windows 10 guests on VMware Fusion 11.5 on macOS Catalina, each running full screen on a dedicated display (I have a three-screen setup).

This tends to happen shortly after I restart the whole system (Mac + VMs): sometimes on one Windows 10 VM, sometimes on the other, and sometimes on both (but not at the same time).

 

I am just furious about this. In the middle of work, the VM simply dies, as if someone had pulled the cord. All unsaved data is lost, and my fear is that the VM file(s) will eventually get corrupted.

 

Because this has happened several times, I activated the “Hang/Crash” troubleshooting option.

This is the log of the last moments of my VM:

 

2019-10-29T07:35:04+01:00[+0.000]| keyboard| W115: Caught signal 11 -- tid 9398 (addr 625BC7714)

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: rip 0x7fff67082dc6 rsp 0x70000e729de0 rbp 0x70000e729de0

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: rax 0x625bc7714 rbx 0x7fa716dedb68 rcx 0x1 rdx 0x1 rsi 0x7fa716dedb68 rdi 0x625bc7714

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125:         r8 0x0 r9 0x0 r10 0x114e4cc48 r11 0xffff805f0edd9bac r12 0x625bc7714 r13 0x0 r14 0x625bc7700 r15 0x1

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729DE0 : 0x000070000e729e20 0x00007fff32951003

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729DF0 : 0x0000000000001700 0x0000000625bc6000

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729E00 : 0x00007fa716dc2d30 0x00007fa746c7c4d0

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729E10 : 0x0000000000010000 0x00007fa717038b70

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729E20 : 0x000070000e729e70 0x0000000114e4306c

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729E30 : 0xaaaaaaaaaaaaaaaa 0x00007fa716dedb30

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729E40 : 0x00007fa716df7390 0x0000000000000000

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SIGNAL: stack 70000E729E50 : 0x00007fa716ddb5b0 0x00007fa716dc2d30

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace:

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace[0] 000070000e729820 rip=00000001101e1cb0 rbx=000070000e7298b8 rbp=000070000e729880 r12=000070000e729de0 r13=00000000000001f6 r14=000000000000000b r15=00000001105b2ee3

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace[1] 000070000e729890 rip=00007fff67085b1d rbx=000070000e729d28 rbp=000070000e729890 r12=36dca36955e0aa1c r13=0000000000000000 r14=0000000625bc7700 r15=0000000000000001

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SymBacktrace[0] 000070000e729820 rip=00000001101e1cb0 in function (null) in object /Applications/VMware Fusion.app/Contents/Library/vmware-vmx-debug loaded at 000000010f7b5000

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SymBacktrace[1] 000070000e729890 rip=00007fff67085b1d in function _sigtramp in object /usr/lib/system/libsystem_platform.dylib loaded at 00007fff67081000

2019-10-29T07:35:04+01:00[+0.000]| keyboard| E105: PANIC: Unexpected signal: 11.

2019-10-29T07:35:04+01:00[+0.000]| keyboard| E105: PANIC: Loop on signal 11.

2019-10-29T07:35:04+01:00[+0.000]| keyboard| E105: Panic loop

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace:

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace[0] 000070000e728540 rip=000000010f802930 rbx=000070000e728540 rbp=000070000e728a20 r12=000070000e728ed0 r13=00000000000001f6 r14=00007fff93cbd070 r15=000070000e728f38

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace[1] 000070000e728a30 rip=00000001101e1e1f rbx=0000000000000160 rbp=000070000e728a90 r12=000070000e728ed0 r13=00000000000001f6 r14=000000000000000b r15=000070000e728f38

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: Backtrace[2] 000070000e728aa0 rip=00007fff67085b1d rbx=000070000e728f38 rbp=000070000e728aa0 r12=36dca36955e0b80c r13=0000000000000000 r14=0000000625bc7700 r15=0000000000000001

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SymBacktrace[0] 000070000e728540 rip=000000010f802930 in function (null) in object /Applications/VMware Fusion.app/Contents/Library/vmware-vmx-debug loaded at 000000010f7b5000

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SymBacktrace[1] 000070000e728a30 rip=00000001101e1e1f in function (null) in object /Applications/VMware Fusion.app/Contents/Library/vmware-vmx-debug loaded at 000000010f7b5000

2019-10-29T07:35:04+01:00[+0.000]| keyboard| I125: SymBacktrace[2] 000070000e728aa0 rip=00007fff67085b1d in function _sigtramp in object /usr/lib/system/libsystem_platform.dylib loaded at 00007fff67081000

 

I also found this in the log, just 40 seconds earlier:

 

2019-10-29T07:34:23.562+01:00| vcpu-1| I125: TOOLS call to Ïltú  failed.

2019-10-29T07:34:23.581+01:00| vcpu-1| I125: TOOLS call to n%mtú  failed.

2019-10-29T07:34:23.598+01:00| vcpu-1| I125: TOOLS call to Nîmqú  failed.

2019-10-29T07:34:23.606+01:00| vcpu-0| I125: TOOLS call to ßltú  failed.

2019-10-29T07:34:23.622+01:00| vcpu-0| I125: TOOLS call to Ïltú  failed.

2019-10-29T07:34:23.636+01:00| vcpu-1| I125: TOOLS call to œçltú  failed.

 

These weird names look strange to me, but I don’t know whether they could have something to do with my issue.

I have also triggered “Collect Support Information” and am willing to send the bundle to VMware, in case someone from Customer Support needs it (it’s a 412 MB .zip).

 

Has this happened to anyone else before?

What can I do to avoid this umpteenth crappy bug of this disgraceful beta-like software?

 

My gear is the following:

  • iMac (27-inch, Late 2013)
  • Processor: 3.4 GHz Quad-Core Intel Core i5
  • Memory: 32 GB 1600 MHz DDR3
  • Graphics: NVIDIA GeForce GTX 775M 2 GB
  • 2 external DELL monitors
  • VMware Fusion 11.5
  • MacOS Catalina 10.15 (19A602)
  • 2 Windows 10 VMs (both Version 1903, OS Build 18362.418)
  • Keyboard: Microsoft Digital Media Pro
  • Mouse: Logitech RX1000

Bridging the Gap Between NHS and Public Cloud with VMware Cloud on AWS

This post was originally published on this site

Following on from How VMware is Accelerating NHS Cloud Adoption, this post dives into more detail around how the UK National Health Service (NHS) can use VMware Cloud on AWS to bridge the gap between existing investments and Public Cloud. Part 1: How VMware is Accelerating NHS Cloud Adoption Part 2: Bridging the Gap Between NHS […]

NSX-T Data Center 2.4 Management and Control Plane agents

This post was originally published on this site


In the previous article I illustrated the NSX-T DC 2.4 management plane and central control plane, which are now converged into a single NSX Manager node.



MPA (Management Plane Agent): This agent is located on each transport node and communicates with the NSX Manager.


NETCPA (Network Control Plane Agent): It provides communication between the central control plane and the hypervisor.

The management plane and the central control plane (CCP) run on the same virtual appliance, but they perform different functions; their technical aspects are covered below.

The NSX cluster can scale to a maximum of three NSX Manager nodes, each running both the management plane and the CCP.


Communication process

The nsx-mpa agent on each transport node communicates with the NSX Manager over a RabbitMQ channel on port 5671.

The CCP communicates with each transport node through nsx-proxy on port 1235.

The task of the NSX Manager is to push configuration to the CCP. The CCP then configures the data plane through nsx-proxy, which is one of the components of the LCP (Local Control Plane).
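
As a quick lab check of both channels, you can confirm from a machine on the management network that the NSX Manager node is listening on those ports. A minimal sketch, with a placeholder manager FQDN:

# Lab check: confirm the NSX Manager node is reachable on the RabbitMQ
# channel (5671, used by nsx-mpa) and the nsx-proxy/CCP channel (1235).
# 'nsxmgr-01.lab.local' is a placeholder FQDN.
$nsxManager = 'nsxmgr-01.lab.local'
foreach ($port in 5671, 1235) {
    $result = Test-NetConnection -ComputerName $nsxManager -Port $port -WarningAction SilentlyContinue
    '{0} port {1} reachable: {2}' -f $nsxManager, $port, $result.TcpTestSucceeded
}

On an ESXi transport node, the output of esxcli network ip connection list can be filtered for the same ports to confirm the connections are established.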



*MPA (Management Plane Agent)
*NETCPA (Network Control Plane Agent)
*CCP (Central Control Plane)
*LCP (Local Control Plane)

Happy learning…. 🙂

VMware vCenter Server Appliance – Backup and Restore Vulnerability

This post was originally published on this site

VMware has released a new security advisory VMSA-2019-0018 (VMware vCenter Server Appliance updates address sensitive information disclosure vulnerability in backup and restore functions). This advisory documents the remediation of one issue, rated with a severity of moderate. Sensitive information disclosure vulnerabilities resulting from a lack of certificate validation during the File-Based Backup and Restore operations […]

The post VMware vCenter Server Appliance – Backup and Restore Vulnerability appeared first on CloudHat.eu.