Hello,
I have just migrated a vSAN cluster from 6.7 U3 to 7.0 U1.
Apart from a big network issue with the Mellanox ConnectX-4 Lx cards that I'm trying to solve by upgrading the driver/firmware (the cards currently run at only 2 Gbit/s, so vSAN is super slow), I have noticed that the vCenter appliance (which has some thick-provisioned disks even on vSAN) has two 1.4 TB disks, VMDK7 and VMDK13, that are thick despite being 99% free:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 5.9G 0 5.9G 0% /dev
tmpfs 5.9G 932K 5.9G 1% /dev/shm
tmpfs 5.9G 1.2M 5.9G 1% /run
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/sda3 46G 5.2G 39G 12% /
/dev/sda2 120M 27M 85M 24% /boot
/dev/mapper/lifecycle_vg-lifecycle 98G 3.6G 90G 4% /storage/lifecycle
/dev/mapper/vtsdblog_vg-vtsdblog 15G 73M 14G 1% /storage/vtsdblog
/dev/mapper/core_vg-core 25G 45M 24G 1% /storage/core
/dev/mapper/vtsdb_vg-vtsdb 1.4T 108M 1.4T 1% /storage/vtsdb
/dev/mapper/archive_vg-archive 49G 1.2G 46G 3% /storage/archive
/dev/mapper/db_vg-db 9.8G 232M 9.1G 3% /storage/db
/dev/mapper/updatemgr_vg-updatemgr 98G 908M 93G 1% /storage/updatemgr
/dev/mapper/netdump_vg-netdump 985M 2.5M 915M 1% /storage/netdump
/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 37M 9.3G 1% /storage/imagebuilder
/dev/mapper/autodeploy_vg-autodeploy 9.8G 37M 9.3G 1% /storage/autodeploy
tmpfs 5.9G 7.2M 5.9G 1% /tmp
/dev/mapper/dblog_vg-dblog 15G 105M 14G 1% /storage/dblog
/dev/mapper/seat_vg-seat 1.4T 349M 1.4T 1% /storage/seat
/dev/mapper/log_vg-log 9.8G 1.7G 7.6G 19% /storage/log
tmpfs 1.0M 0 1.0M 0% /var/spool/snmp
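For reference, the two oversized filesystems can be picked out of the df listing above with a quick filter (the awk one-liner is just my way of filtering; the data is pasted from the appliance). As far as I can tell, /storage/seat holds the Stats/Events/Alarms/Tasks data and /storage/vtsdb is the time-series database partition, but I'd welcome a correction on that:

```shell
# Subset of the df output from the appliance above (device, size, used,
# avail, use%, mountpoint) -- only the terabyte-sized volumes and one
# normal-sized one for contrast.
df_output='/dev/mapper/vtsdb_vg-vtsdb 1.4T 108M 1.4T 1% /storage/vtsdb
/dev/mapper/seat_vg-seat 1.4T 349M 1.4T 1% /storage/seat
/dev/mapper/db_vg-db 9.8G 232M 9.1G 3% /storage/db'

# Print the mountpoint of every filesystem whose size is reported in T.
echo "$df_output" | awk '$2 ~ /T$/ { print $6 }'
# prints:
# /storage/vtsdb
# /storage/seat
```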
How did that happen? And, more importantly, how can I fix it safely?
Thanks
Manuel