The previous post talked about vSphere Integrated Containers (VIC) and their benefits. VIC offers a robust solution that lets teams get containers up and running quickly in their existing vSphere infrastructure. This environment can be useful for migrating existing apps to containers or for in-house development.
In a traditional container environment, containers run as processes within the container host. vSphere Integrated Containers instead leverages the native constructs of vSphere: each container-based application is provisioned as its own container VM running a very minimal Linux kernel with just enough code to run a Docker image. Isolation is pushed down to the hypervisor layer, which is much better at handling it, so containers cannot be accessed from other containers.
This isolation allows IT teams to deliver a container environment without having to build a separate, specialized container infrastructure stack. By deploying every container image as a vSphere virtual machine (VM), vSphere Integrated Containers lets these workloads take advantage of key vSphere availability and performance features such as vSphere HA, vMotion, DRS and more, while still presenting a Docker-compatible API to developers of container-based applications.
The VIC engine is the mechanism that provides this Docker API for the container VMs. It handles the provisioning and management of VMs in vSphere clusters using the Docker binary image format. It lets vSphere admins pre-allocate certain amounts of compute, networking and storage and offer them to developers as a self-service endpoint with a familiar Docker-compatible API. Developers who already know Docker can keep developing in containers and deploy them alongside traditional VM-based workloads on vSphere clusters.
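For example, once a VCH is deployed, developers can point a standard, unmodified Docker client at its API endpoint; the IP address and port below are placeholders for whatever your own VCH reports:

```
# Point a standard Docker client at the VCH's Docker API endpoint.
# 192.168.10.50:2376 is a placeholder for your VCH endpoint address.
docker -H 192.168.10.50:2376 --tls info
```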
Virtual Container Host (VCH) Deployment Options:
- Deploy Virtual Container Hosts in the vSphere Client
- Deploy Virtual Container Hosts using the vic-machine CLI utility (see the sketch after this list)
- Deploy a Virtual Container Host through the vRealize Automation portal (vRA/vRO)
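As a rough sketch of the vic-machine option, a minimal create command looks something like the following; the target, credentials, cluster, datastore and port group names are all placeholders for your own environment:

```
# Minimal VCH deployment with the vic-machine CLI (Linux binary shown).
# Every value below is a placeholder for your own vCenter, cluster,
# datastore and port groups.
./vic-machine-linux create \
    --target vcsa.lab.local \
    --user 'administrator@vsphere.local' \
    --password 'VMware1!' \
    --thumbprint <vcenter-certificate-thumbprint> \
    --name vch01 \
    --compute-resource Cluster01 \
    --image-store datastore1 \
    --bridge-network vch01-bridge \
    --public-network VM-Network \
    --no-tlsverify
```

The steps below walk through the same choices in the vSphere Client wizard; where a CLI equivalent exists, it is shown as additional flags for this create command.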
Optionally, we can also forward the logs to a syslog or vRealize Log Insight server.
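With the CLI, this maps to an extra flag on the create command; the collector address below is a placeholder:

```
# Forward VCH logs to an external syslog or vRealize Log Insight collector.
--syslog-address tcp://loginsight.lab.local:514
```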
Choose a cluster where you want to deploy a VCH
Provide the compute details
By default, the VCH endpoint VM gets 1 vCPU and 2 GB of memory. In my experience, it works seamlessly with 2 vCPUs and 8 GB of memory.
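If you deploy with the CLI, the endpoint VM size can be set at creation time; the values below reflect the 2 vCPU / 8 GB sizing mentioned above:

```
# Size the VCH endpoint VM (defaults are 1 vCPU and 2048 MB).
--endpoint-cpu 2 \
--endpoint-memory 8192
```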
Provide storage/datastore information. I recommend enabling anonymous volumes, which creates a default volume store path. If you don't, you may need to create and attach a volume manually after the deployment.
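On the CLI, anonymous volumes require a volume store labelled "default"; the datastore and path below are placeholders:

```
# A volume store labelled "default" backs anonymous volumes
# (e.g. volumes declared in a Dockerfile VOLUME instruction).
--volume-store datastore1/vch01-volumes:default
```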
Configure networks – you must provide a bridge network and a public network.
NOTE: You must create a dedicated bridge network port group for EACH VCH. If you reuse the same port group, you will end up with duplicate IPs on the C-VMs/container VMs.
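The CLI equivalents are the bridge and public network flags; the port group names below are placeholders, and the bridge port group must be dedicated to this one VCH:

```
# Bridge network: a dedicated port group used only by this VCH.
# Public network: the port group used for container traffic to/from the outside.
--bridge-network vch01-bridge \
--public-network VM-Network
```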
Security options – in a closed/POC/test setup you can also turn off client certificate verification.
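For a lab or POC deployed from the CLI, the usual shortcut is to keep server-side TLS but skip client certificate verification; for production you would supply proper certificates instead:

```
# Lab/POC only: keep TLS on the Docker API but skip client certificate checks.
--no-tlsverify
```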
Registry access – leave the default values unless your network has restrictions on downloading images from registries.
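If you do use a private or self-signed registry, the create command accepts registry options; the certificate path and registry address below are placeholders:

```
# Trust a private registry's CA certificate, or allow an insecure registry.
--registry-ca /path/to/registry-ca.crt \
--insecure-registry registry.lab.local:5000
```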
Provide the operations user details – this account is used for ongoing operations in your setup, such as deploying the container VMs and accessing the ESXi host logs.
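On the CLI, the operations user is supplied at creation time; the account name and password below are placeholders:

```
# Account the VCH uses for its ongoing vSphere operations after deployment.
--ops-user 'vch-ops@vsphere.local' \
--ops-password 'VMware1!'
```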
In the next step, review and submit the request. Once the deployment is successful, you should see the details below with the correct Docker API IP address.
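The same details can be retrieved later with vic-machine inspect; the target, credentials and VCH name below are placeholders:

```
# Show the VCH's state, Docker API endpoint and admin portal address.
./vic-machine-linux inspect \
    --target vcsa.lab.local \
    --user 'administrator@vsphere.local' \
    --thumbprint <vcenter-certificate-thumbprint> \
    --name vch01
```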
In the vSphere Client, we should see a resource pool created with the same name as the VCH.
NOTE: Each VCH creates its own resource pool, where all the C-VMs/container VMs are grouped.
Once you have the Docker API IP details, navigate to the VIC administration portal and add the VCH to a project.
Name the project you want to add the VCH to.
Add the host to the project
Add members who are entitled to access the project
Add the VCH host
Once the host is added, it is listed under the Infrastructure tab. We can add multiple hosts to this project.
Choose the newly created project
Hosts are added to the project, and we are ready to spin up our first C-VM/container.
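As a quick end-to-end test, a first container VM can be started against the VCH's Docker endpoint; the IP address and image below are just examples:

```
# Start a first container VM against the VCH and confirm it is running.
docker -H 192.168.10.50:2376 --tls run -d --name web nginx
docker -H 192.168.10.50:2376 --tls ps
```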