Scenario:
– a standalone ESXi host with a 2.8 TB datastore in Australia has to be cloned to Brazil
– problems: no physical access is possible, and access via iDRAC is unreliable
Why clone a complete datastore at all?
– after an “incident” one VM has disappeared and is hiding among 20 other working VMs – that’s why we need to move the raw volume and can’t move single VMs
Current plan:
– set up an iSCSI target in the vicinity of the standalone host
– connect to the iSCSI target and create a second datastore on it (see the esxcli sketch below)
– dd the complete original datastore by reading the device /dev/disks/naa.number:3, split into 10 GB chunks piped through gzip onto the second datastore
– upload the gz files (the loop below produces 268, numbered 0–267) to a webserver reachable from the Internet
– download the gz files at the location in Brazil, check the md5sums and reassemble the datastore (a reassembly sketch follows below)
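For the ESXi side of the connection, a minimal esxcli sketch; the adapter name vmhba33 and the target address 192.168.0.50 are placeholders, and the datastore itself can then be created in the host client:
# enable the software iSCSI initiator on the standalone host
esxcli iscsi software set --enabled=true
# point dynamic discovery at the nearby iSCSI target (placeholder address)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.50:3260
# rescan so the new LUN shows up as a device
esxcli storage core adapter rescan --adapter=vmhba33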
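And for the Brazil side, a rough reassembly sketch, assuming the chunks were downloaded to /tmp/parts and the target device of at least 2.8 TB shows up as /dev/sdb (both placeholders):
#!/bin/sh
PARTS="/tmp/parts"   # download directory - placeholder
DISK="/dev/sdb"      # target device - placeholder
x=0
while [ $x -le 267 ]
do
# seek counts in units of bs, mirroring the skip offsets used
# in Australia, so part x lands at x * 10 GiB on the target
zcat $PARTS/dump-$x.gz | dd of=$DISK bs=1M seek=$(( $x * 10240 )) conv=notrunc
x=$(( $x + 1 ))
done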
The script that creates the split dd dumps in Australia looks like this:
#!/bin/sh
DUMP="/dev/disks/naa.6842b2b06bce1b001fa1a45e08787efd:3"
TARGET="/vmfs/volumes/iscsi/parts"
RANGEMIN=0
RANGEMAX=267
x=$RANGEMIN
while [ $x -le $RANGEMAX ]
do
echo "dd part number $x"
echo "dd part number $x" >> $TARGET/clone-log.txt
echo "dd part number $x" >> $TARGET/error-log.txt
# skip counts in units of bs, so part x starts at x * 10 GiB;
# dd's stderr goes into the error log, and note that $? below is
# gzip's exit status, not dd's, since gzip ends the pipeline
dd if=$DUMP bs=1M count=10240 skip=$(( $x * 10240 )) 2>> $TARGET/error-log.txt | gzip -c > $TARGET/dump-$x.gz
echo "gzip exit status: $?" >> $TARGET/error-log.txt
if [ ! -e $TARGET/dump-$x.gz ]
then
echo "$TARGET/dump-$x.gz does not exist" >> $TARGET/error-log.txt
fi
echo "md5sum part number $x"
# run md5sum inside $TARGET so the .md5 file contains only the
# filename - that way "md5sum -c" works after the download in Brazil
( cd $TARGET && md5sum dump-$x.gz > dump-$x.md5 )
if [ ! -e $TARGET/dump-$x.md5 ]
then
echo "$TARGET/dump-$x.md5 does not exist" >> $TARGET/error-log.txt
fi
x=$(( $x + 1 ))
done
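Since the .md5 files are written with bare filenames, checking all chunks after the download in Brazil should boil down to one pass over the checksum files, something like:
cd /tmp/parts          # wherever the chunks were downloaded - placeholder
cat dump-*.md5 | md5sum -c
Any line reported as FAILED identifies a chunk to re-download before reassembly.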
Any suggestions on how to improve speed, improve the error messages or for additional dd parameters are welcome.
In case one of the 10 GB slices fails because of I/O errors I plan to address those slices separately via GNU ddrescue from a Linux system (a sketch follows below).
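As a rough sketch of that fallback, assuming the volume shows up as /dev/sdb on the Linux system and slice 42 is the broken one (both placeholders):
#!/bin/sh
x=42                              # number of the failed slice - placeholder
DISK="/dev/sdb"                   # source device as seen from Linux - placeholder
OFFSET=$(( $x * 10737418240 ))    # x * 10 GiB in bytes
# retry-read exactly the 10 GiB window of slice x; the map file lets
# ddrescue resume and records any sectors that stay unreadable; the
# output position must be forced to 0, as it otherwise defaults to
# the input position and would produce a mostly sparse file
ddrescue --input-position=$OFFSET --output-position=0 --size=10737418240 $DISK dump-$x.raw dump-$x.map
# recompress so the slice drops into the upload scheme as dump-$x.gz
gzip -c dump-$x.raw > dump-$x.gz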