BUG: Passthrough HBA Controller with VMware 7.0


System: Dell R730 with Dell HBA330 Controller (LSI 3008) on VMware ESXi 7.0b, build 16324942.

Goal: Passthrough Controller to VM

Issue: Passthrough not possible with disks attached to HBA330

 

Steps:

1. Boot ESXi 7.0 with the HBA330 controller and no disks attached

2. vSphere client:

  • select host
  • Configure->Hardware->PCI-Devices
  • “CONFIGURE PASSTHROUGH”
  • select HBA330 controller
  • HBA330 is listed in “Passthrough-enabled devices”

3. vSphere client

  • select VM
  • Context menu: Edit Settings
  • “ADD NEW DEVICE”
  • PCI Device
  • select the passthrough device HBA330
  • boot VM

4. VM (e.g. FreeBSD)

  • camcontrol devlist: controller shown
  • attach disks to controller (hot swap bay): disks recognized and accessible:

[Screenshot: HBA330 - Passthrough Disks.jpg]

[Screenshot: HBA330 - Passthrough Disks - camcontrol.jpg]

BUG:

After rebooting the HOST (with disks attached), the PCI device is no longer passed through:

VM-Settings:

[Screenshot: HBA330 - no Passthrough after reboot - VM.jpg]

Host->PCI-Devices:

[Screenshot: HBA330 - no Passthrough after reboot - Host.jpg]

No passthrough is possible with disks attached:

[Screenshot: HBA330 - no Passthrough with disks attached - Host.jpg]

Rebooting the host again does not help. Only detaching the disks before boot enables passthrough again.
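In case it helps anyone reproduce this, the state can also be cross-checked from the ESXi shell. A rough sketch — the pcipassthru esxcli namespace is, as far as I know, new in 7.0; the PCI address below is an example, and the exact flags should be confirmed with --help:

# list PCI devices and note the HBA330's address
esxcli hardware pci list

# show which devices are currently enabled for passthrough
esxcli hardware pci pcipassthru list

# re-enable passthrough for the controller; 0000:02:00.0 is an example address
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true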

macOS 10.15.6, Fusion 12.0.0.0: Transport (VMDB) error -14: Pipe connection has been broken


Fusion 12 still seems to be trying to load a kernel extension. I get monitor errors at boot: it tries to load ‘vmmon’, fails, and then throws “Transport (VMDB) error -14: Pipe connection has been broken” afterwards.

 

I have uninstalled, reinstalled, and re-downloaded Fusion 12, plus both VMs I am trying to load. Nothing seems to work.

 

All system approvals have been completed in Security & Privacy. Still, nothing works.

 

I tried to manually load the vmmon.kext kernel extension, but macOS would not allow it due to the read-only system partition.

 

Fusion 12 claims it does not use kernel extensions, but the errors say otherwise.
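For what it’s worth, this is how I’ve been checking what is actually loaded. My understanding is that Fusion 12 only drops kernel extensions on Big Sur and still uses them on Catalina, so vmmon should normally show up here:

# list any VMware kernel extensions currently loaded
kextstat | grep -i vmware

# list system extensions that have been approved/activated (Catalina 10.15+)
systemextensionsctl list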

 

Can anyone help? I am about at my wits’ end on this one.

 

Thanks,

How to compact/expand a VMware disk from the command line?


[Linux Mint 19.3, VMware Player v16]

 

I can’t seem to find any recent documentation describing how to compact or expand a VMware disk from the command line. It’s simple enough from the GUI, but I’d like to be able to schedule regular maintenance on my VMs.

 

Is this still possible? I was able to find this, but only for very old products, and it referenced CLI tools that don’t exist anymore.
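For reference, the closest I have found so far is vmware-vdiskmanager, which still seems to ship with the Linux builds of Workstation/Player. The paths and sizes below are examples, and the VM must be powered off with no snapshots:

# compact (shrink) a growable virtual disk
vmware-vdiskmanager -k ~/vmware/MyVM/MyVM.vmdk

# expand the virtual disk to 60 GB (the guest partition still has to be grown separately)
vmware-vdiskmanager -x 60GB ~/vmware/MyVM/MyVM.vmdk

# defragment a growable virtual disk
vmware-vdiskmanager -d ~/vmware/MyVM/MyVM.vmdk

If that binary is present, cron could then drive the regular maintenance I’m after.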

Workstation 16 Pro DirectX11 Gaming support results?


One of the primary things I look for in virtualization software is the ability to run games in VMs. Up to now this has really been limited to older games (DX9, basically, since the DX10.1 support in WS 15.5 was incomplete when I tested it, so I skipped the WS 15 upgrade). I am hoping others here can share their experience with Workstation 16 and DX11 games. Any issues? Performance hits? What about higher resolutions (1K, 2K, 4K gaming)?

 

I know DX11 is still not cutting-edge (VMware seems to be about 10+ years behind on graphics support), but it’s a far cry better than DX9.
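For anyone comparing results, these are the .vmx tweaks I plan to benchmark with; mks.enable3d matches the GUI checkbox, and the graphics-memory value is just the size I intend to try:

# enable 3D acceleration for the VM (equivalent to the GUI checkbox)
mks.enable3d = "TRUE"

# graphics memory for the virtual GPU, in KB (2 GB here); more should help at 2K/4K
svga.graphicsMemoryKB = "2097152"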

Customization spec failing after OSOT


We are deploying a persistent pool for some of our users who require customized software and more RAM.

 

We have updated our golden image and re-ran the new OSOT (VMware OS Optimization Tool) Template+LGPO utility to maximize performance.

 

We previously had a vCenter customization spec that worked great for making persistent machines. When we run it against the images now, the logs show the error “Error opening file C:\WINDOWS\Panther\unattend.xml. The system cannot find the path specified”, and I see no unattend.xml in that location.
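For reference, this is the quick check I ran inside the guest to confirm (plain PowerShell; it just verifies the folder and looks for any answer files):

# confirm the Panther folder exists and look for any unattend/answer files in it
Test-Path 'C:\WINDOWS\Panther'
Get-ChildItem 'C:\WINDOWS\Panther' -Filter 'unattend*.xml' -ErrorAction SilentlyContinue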

 

If I clone a copy of the image from before the OSOT was run, the customization spec works like a charm. But a lot of work has gone into the image since the OSOT was run, so we’d rather not repeat all that and miss out on the performance benefits the OSOT template offers.

 

Any thoughts/things to look at/advice?

How to attach shared RAC disks to other nodes programmatically?


I have some Oracle RAC nodes that have been built with all shared disks, ASM, etc. I need to attach the shared disks (IndependentPersistent) to the other nodes in the RAC.

I think I have a pretty good idea of how this should work.  This is what I have so far:

$vm1 = Get-VM -Name PRD-DB01
$vm2 = Get-VM -Name PRD-DB02
$scsi2 = (Get-ScsiController -VM $vm2 | Where-Object{$_.extensiondata.busNumber -eq 2})
$scsi3 = (Get-ScsiController -VM $vm2 | Where-Object{$_.extensiondata.busNumber -eq 3})

$sharedHD = (Get-HardDisk -VM $vm1 | Where-Object {$_.Persistence -eq "IndependentPersistent"})

$sharedHD | FT -AutoSize

CapacityGB Persistence           Filename
---------- -----------           --------
80.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_2.vmdk
80.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_4.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_6.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_8.vmdk
80.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_3.vmdk
80.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_5.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_7.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_9.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_10.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_12.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_11.vmdk
75.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_13.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_14.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_16.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_15.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_17.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_18.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_20.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_19.vmdk
15.000     IndependentPersistent [XIO-ORACLE-VM-01] PRD-DB01/PRD-DB01_21.vmdk

So I have all of the disk info from the first node. The first four disks need to go to $scsi2, the next four to $scsi3, and so on, alternating back and forth.
I’ve tried this:

PS C:\> foreach ($disk in $sharedHD[0,1,2,3]) {
New-HardDisk -VM $vm2 -DiskPath $disk.Filename -Persistence $disk.Persistence -Controller $scsi2
}

CapacityGB Persistence           Filename
---------- -----------           --------
80.000     IndependentPersis… …RACLE-VM-01] PRD-DB01/PRD-DB01_2.vmdk
New-HardDisk : 9/19/2020 9:36:06 AM    New-HardDisk        Invalid configuration for device '0'.
At line:2 char:1
+ New-HardDisk -VM $vm2 -DiskPath $disk.Filename -Persistence $disk.Per ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-HardDisk], InvalidDeviceSpec
    + FullyQualifiedErrorId : Client20_VirtualDeviceServiceImpl_AttachHardDisk_ViError,VMware.VimAutomation.ViCore.Cmdlets.Commands.VirtualDevice.NewHardDisk

New-HardDisk : 9/19/2020 9:36:12 AM    New-HardDisk        Invalid configuration for device '0'.
At line:2 char:1
+ New-HardDisk -VM $vm2 -DiskPath $disk.Filename -Persistence $disk.Per ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-HardDisk], InvalidDeviceSpec
    + FullyQualifiedErrorId : Client20_VirtualDeviceServiceImpl_AttachHardDisk_ViError,VMware.VimAutomation.ViCore.Cmdlets.Commands.VirtualDevice.NewHardDisk

New-HardDisk : 9/19/2020 9:36:16 AM    New-HardDisk        Invalid configuration for device '0'.
At line:2 char:1
+ New-HardDisk -VM $vm2 -DiskPath $disk.Filename -Persistence $disk.Per ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-HardDisk], InvalidDeviceSpec
    + FullyQualifiedErrorId : Client20_VirtualDeviceServiceImpl_AttachHardDisk_ViError,VMware.VimAutomation.ViCore.Cmdlets.Commands.VirtualDevice.NewHardDisk
 

So the first disk attaches to the correct adapter, but the remaining ones fail. The VM also shows the newly attached disk as ‘No sharing’, even though the source disk was created as multi-writer.

I have tried manually specifying the adapter as $scsi2:1, but that didn’t work either.
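For reference, the approach I am going to try next drops down to the vSphere API, since that lets me set the unit number and the multi-writer flag explicitly, neither of which New-HardDisk exposes. This is an untested sketch; the unit numbering and the temporary device keys are my assumptions:

# Untested sketch: attach the existing shared VMDKs via ReconfigVM so that
# UnitNumber and multi-writer sharing can be set explicitly.
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$unit = 0
foreach ($disk in $sharedHD[0..3]) {
    if ($unit -eq 7) { $unit++ }              # unit 7 is reserved for the controller

    $devSpec = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $devSpec.Operation = 'add'                # attaching an existing file: no FileOperation
    $devSpec.Device = New-Object VMware.Vim.VirtualDisk
    $devSpec.Device.Key = -($unit + 1)        # temporary negative key for a new device
    $devSpec.Device.ControllerKey = $scsi2.ExtensionData.Key
    $devSpec.Device.UnitNumber = $unit

    $backing = New-Object VMware.Vim.VirtualDiskFlatVer2BackingInfo
    $backing.FileName = $disk.Filename
    $backing.DiskMode = 'independent_persistent'
    $backing.Sharing = 'sharingMultiWriter'   # what the GUI shows as 'Multi-writer'
    $devSpec.Device.Backing = $backing

    $spec.DeviceChange += $devSpec
    $unit++
}
$vm2.ExtensionData.ReconfigVM($spec)

The same loop would then be repeated for the next group of four disks against $scsi3, alternating as described above.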

Workstation 16 Pro Upgrade


I am seeing that shared VMs were lost in the new version. It is generally too soon to consider updating anyway; the release is brand new and will still need patches.

 

I am curious about the general opinion of this version compared to 15.5.6 build 16341506. The latest Workstation 15 does everything perfectly right now and supports all the operating systems I run. Other than losing features, what is the benefit of even considering an upgrade?

 

Thanks.

bootstop: Host has booted / unexpected reboot: how to debug?


Hello guys,

 

For some time now I’ve been having problems with unexpected reboots on my ESXi host.

 

I have looked at this article:

https://kb.vmware.com/s/article/1019238#:~:text=If%20your%20VMware%20ESXi%20host,faulty%20components%2C%20and%20heating%20issues.

It did not turn up anything special.

 

My guess is that the last reboot happened between 2020-09-19T08:42:24.481Z and 2020-09-19T08:51:14.365Z.

I don’t see anything in “hostd.log” :

 

2020-09-19T08:41:51.482Z verbose hostd[2099022] [Originator@6876 sub=Default opID=esxui-8c0c-17bc user=root] AdapterServer: target='vmodl.query.PropertyCollector:ha-property-collector', method='waitForUpdatesEx'
2020-09-19T08:41:51.560Z verbose hostd[2099058] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest.disk, 7. Sent notification immediately.
2020-09-19T08:41:51.561Z verbose hostd[2099131] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest.disk, 7. Sent notification immediately.
2020-09-19T08:41:53.849Z verbose hostd[2099059] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest, 7. Sent notification immediately.
2020-09-19T08:41:53.849Z verbose hostd[2099059] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: summary.guest, 7. Sent notification immediately.
2020-09-19T08:41:54.484Z verbose hostd[2099023] [Originator@6876 sub=Default opID=esxui-1cf5-17bd user=root] AdapterServer: target='vmodl.query.PropertyCollector:ha-property-collector', method='waitForUpdatesEx'
2020-09-19T08:42:20.553Z verbose hostd[2099518] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest.disk, 2. Sent notification immediately.
2020-09-19T08:42:20.615Z verbose hostd[2099024] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest, 2. Sent notification immediately.
2020-09-19T08:42:20.615Z verbose hostd[2099024] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: summary.guest, 2. Sent notification immediately.
2020-09-19T08:42:21.485Z verbose hostd[2099059] [Originator@6876 sub=Default opID=esxui-ae15-17be user=root] AdapterServer: target='vmodl.query.PropertyCollector:ha-property-collector', method='waitForUpdatesEx'
2020-09-19T08:42:21.561Z verbose hostd[2099062] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest.disk, 7. Sent notification immediately.
2020-09-19T08:42:21.561Z verbose hostd[2099515] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest.disk, 7. Sent notification immediately.
2020-09-19T08:42:23.833Z verbose hostd[2099517] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: guest, 7. Sent notification immediately.
2020-09-19T08:42:23.833Z verbose hostd[2099517] [Originator@6876 sub=PropertyProvider] RecordOp ASSIGN: summary.guest, 7. Sent notification immediately.
2020-09-19T08:42:24.481Z verbose hostd[2099522] [Originator@6876 sub=Default opID=esxui-d074-17bf user=root] AdapterServer: target='vmodl.query.PropertyCollector:ha-property-collector', method='waitForUpdatesEx'
2020-09-19T08:51:14.365Z - time the service was last started, Section for VMware ESX, pid=2098958, version=6.7.0, build=16316930, option=Release
2020-09-19T08:51:14.365Z warning -[2098958] [Originator@6876 sub=Default] Failed to load vsansvc configuration file: N7Vmacore22AlreadyExistsExceptionE(Already Exists)
--> [context]zKq7AVICAgAAAAL6+AAJLQAALE42bGlidm1hY29yZS5zbwAAsL4bAL6dFwCeShcBbX7JaG9zdGQAAYc9yQGbs2ICfRkCbGliYy5zby42AAGt1mI=[/context]
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Default] Supported VMs 334
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Handle checker] Setting system limit of 3740
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Handle checker] Set system limit to 3740
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Default] Setting malloc mmap threshold to 32 k
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Default] getrlimit(RLIMIT_NPROC): curr=4096 max=8192
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Default] Glibc malloc guards disabled.
2020-09-19T08:51:14.365Z info -[2098958] [Originator@6876 sub=Default] Initialized SystemFactory

 

Same for the “vmksummary.log”:

 

2020-09-19T07:00:00Z heartbeat: up 3d20h57m50s, 2 VMs; [] []
2020-09-19T08:00:00Z heartbeat: up 3d21h57m50s, 2 VMs; [] []
2020-09-19T08:51:23Z bootstop: Host has booted
2020-09-19T09:00:01Z heartbeat: up 0d0h12m8s, 1 VM; [] []

 

 

What else can I look at to understand and solve this?

 

The host is a Dell R710 with the latest BIOS/firmware applied.

Current ESXi version: 6.7.0 Update 3 (build 16316930); the same reboot problem occurred with previous versions.

I have only a few VMs on this ESXi host; the reboots happen even when only 2 VMs are running.
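For reference, my next step is to look for a kernel core dump and scan the vmkernel logs around the gap, roughly like this (paths as they appear on my 6.7 host):

# did the host leave a core dump from a PSOD?
ls -lh /var/core/
esxcli system coredump partition get
esxcli system coredump file list

# scan the current and rotated vmkernel logs for panics, NMIs or machine checks
grep -iE 'panic|nmi|mce' /var/run/log/vmkernel.log
zcat /var/run/log/vmkernel.*.gz | grep -iE 'panic|nmi|mce'

Since the host is an R710, the iDRAC System Event Log is probably also worth checking for power or hardware events that ESXi never gets a chance to log.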

 

Any ideas?

Thanks !

 

Best regards,

Bob

 

NB: Logs attached.

Storage connectivity losses: consequences at VM level


Hello everyone,

 

I’m looking for information about the consequences of an iSCSI storage failover (Active (I/O) path dead) at the VM level:

 

  • How does a VM (Debian/CentOS/Windows, at the guest OS level only) react to a storage loss of a few seconds (1-5 s) during a storage failover?
  • Do you know of a white paper that focuses on path status and configuration optimization on ESXi?

 

I have found many documents that explain in detail how redundancy works, but nothing along the lines of “Hey, we lost an array IRL, the path changed in less than 5 seconds, but 150 VMs got weird after the failover, and we know why! (BTW, we know how to simulate it.)”

 

Array vendors have many specifications and configuration recommendations depending on the array hardware, but the context of my question is restricted to VMware and the VMs only.
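For context, the one guest-level knob I have found so far is the Linux SCSI command timeout, which VMware Tools normally raises to 180 s precisely so that short path failovers stall I/O instead of failing it (sda below is an example device):

# check the current SCSI command timeout for a guest disk (Linux)
cat /sys/block/sda/device/timeout

# raise it to 180 s so a 1-5 s path failover is absorbed rather than surfaced as an I/O error
echo 180 | sudo tee /sys/block/sda/device/timeout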

 

Thanks in advance guys.

vCenter HA has an invalid configuration. Remove vCenter HA to destroy the current cluster configuration and set up vCenter HA again.


Today I rebuilt my vCenter HA setup.

VMware vCenter 6.7 U3

Have 1 datacenter

1 cluster

8 ESXi hosts, also 6.7 U3

 

All three nodes are up and running: Active, Passive, and Witness.

 

I was just going to check on things when I noticed this error:

 

vCenter HA has an invalid configuration. Remove vCenter HA to destroy the current cluster configuration and set up vCenter HA again.

 

Why did this happen? 

 

What can I do to fix this?

 

This was a rebuild: I lost my primary node to a disk failure weeks ago, was told to remove vCenter HA and then redeploy, and that’s what I did.

 

How can I find out what the invalid configuration is?
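For reference, my plan is to start with the vCenter HA logs on the Active node. My understanding from the KB articles (please correct me if this is wrong for 6.7) is that a stale configuration can also be force-removed from the shell with destroy-vcha before reconfiguring:

# from the Active node's appliance bash shell: the VCHA logs usually name the failing check
less /var/log/vmware/vcha/vcha.log

# per VMware KBs (verify for your build): force-remove a stale vCenter HA
# configuration, then set vCenter HA up again from the vSphere client
destroy-vcha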

 

Thank you

 

Tom