Part 7 – Deploy VCH to vCenter Server

The Virtual Container Host (VCH) is the key component of VIC: it is the resource pool in which all the container VMs and the VCH Endpoint VM reside. A VCH is deployed using the vic-machine utility, which can be run from any Windows, Mac, or Linux machine. The binaries need to be downloaded to the machine that runs these commands. Below are the prerequisites for deploying a VCH.

  • Download the binaries from the VIC appliance.
  • Enable DRS on the cluster that is going to host the VCH.
  • All hosts share a common datastore to enable easy migration between hosts; we are using vsanDatastore here.
  • Create a vDS with a port group named ‘vic-bridge’. Ensure this port group is used by only one VCH; a new port group is needed for each additional VCH. This isolates the container VMs.
  • All ESXi hosts are licensed with Enterprise Plus.
  • DHCP is enabled on the network, or a network profile with a range of IPs is assigned to the port group.

[Screenshot: vic29]

From the machine where the binaries were downloaded, run the below command:

vic-machine-windows.exe create --target vcsa.vmmaster.vic --user "administrator@vsphere.vic" --password VMware1! --bridge-network vic-bridge --image-store vsanDatastore --no-tlsverify --thumbprint CC:E1:DA:C8:93:4F:60:F9:13:EC:38:38:10:DF:54:CD:FC:61:44:AE

[Screenshots: vic30, vic31]

A successful deploy of the VCH displays a message with details on how to connect to the Docker API on the VCH. Observe that a new resource pool is created and that there is a VCH Endpoint VM for each VCH deployed.
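
If you need those connection details again later, the deployed VCH can be queried with the same utility's inspect subcommand; a minimal sketch, assuming the VCH kept the default name virtual-container-host used elsewhere in this lab:

```shell
# Re-display the Docker API endpoint and other details of an existing VCH
vic-machine-windows.exe inspect --target vcsa.vmmaster.vic --user "administrator@vsphere.vic" --password VMware1! --thumbprint CC:E1:DA:C8:93:4F:60:F9:13:EC:38:38:10:DF:54:CD:FC:61:44:AE --name virtual-container-host
```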

 

[Screenshot: vic32]

The same can be viewed in the HTML 5 client.

[Screenshot: vic280]

If the VCH deploy did not succeed, use the below command to delete the partially deployed resource pool and any other configuration:

vic-machine-windows.exe delete --target administrator@vsphere.vic:VMware1!@vcsa.vmmaster.vic --thumbprint CC:E1:DA:C8:93:4F:60:F9:13:EC:38:38:10:DF:54:CD:FC:61:44:AE --name virtual-container-host --force

To verify the VCH installation, run the below from any Docker client on the network:

docker -H 10.7.7.10:2376 --tls info

[Screenshot: vic33]
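
Beyond docker info, the endpoint can be exercised end to end by starting a container against it; a sketch assuming the same VCH address from this lab and internet access to pull from Docker Hub:

```shell
# Run a container on the VCH and confirm it shows up
docker -H 10.7.7.10:2376 --tls run -d --name web nginx
docker -H 10.7.7.10:2376 --tls ps
```

Each container started this way appears in vCenter as its own container VM under the VCH resource pool.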

Hope this was informative. Thanks!


Part 6 – Opening Ports on ESXi Hosts

Port 2377 is used for communication between the VCH and the ESXi hosts. The name of the firewall rule is vSPC; if the rule has been disabled for some reason, the firewall must be configured using other methods, such as the web client or the CLI.
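
On the CLI side, the vSPC ruleset can be checked and re-enabled from an SSH session on a host with esxcli; a sketch (run per host; the ruleset name is as on stock ESXi):

```shell
# Show the current state of the vSPC ruleset
esxcli network firewall ruleset list --ruleset-id vSPC
# Enable it so the host can open outbound connections to the VCH on port 2377
esxcli network firewall ruleset set --ruleset-id vSPC --enabled true
```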

Opening port 2377 for outgoing connections on the ESXi hosts allows them to make inbound connections to port 2377 on the VCH. Download the VCH binaries from the management portal; the bundle contains the vic-machine utility needed for the VCH installation.

[Screenshot: vic26]

Unzip the files to view the binaries and the vic-machine utility.

[Screenshot: vic27]

Navigate to the download location of the binaries and run the below command from an elevated command prompt. Make sure to use the right vic-machine binary for the OS you use.

vic-machine-windows.exe update firewall --target vcsa.vmmaster.vic --user "administrator@vsphere.vic" --password VMware1! --compute-resource My_Cluster --thumbprint CC:E1:DA:C8:93:4F:60:F9:13:EC:38:38:10:DF:54:CD:FC:61:44:AE --allow

Since we are going to deploy the VCH to a cluster managed by vCenter, we execute the firewall update command against the vCenter Server. This opens the port on all the ESXi hosts in the cluster.

[Screenshot: vic28]

Hope this is informative. Thanks!

Part 5 – Install Client Plugins on VCSA for VIC

The next step in the installation is to install the client plugin for VIC. The VIC plugin is much more integrated with the HTML 5 client than with the web client.

If you have an environment variable set from a previous VCH installation, the plugin installation will fail. Ensure it is deleted.

Open a browser to the VIC appliance IP on port 9443 to view the exact file name. Make a note of the VIC tar file name, including the version; in my case it is vic_1.2.1.tar.gz.

[Screenshot: vic20]

Enable shell and SSH on the VCSA, open a PuTTY session as root, and run the below commands. The first command downloads the tar file and the second extracts it. Then navigate to the vic/ui/VCSA directory.

curl -kL http://10.7.7.14/files/vic_1.2.1.tar.gz -o vic_1.2.1.tar.gz
tar -zxf vic_1.2.1.tar.gz

[Screenshot: vic21]

Run the install.sh script to start the plugin installation and enter the details as prompted. Have the vCenter thumbprint handy.
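
In this lab that boils down to two commands from the directory extracted earlier; the script then prompts for the vCenter details and thumbprint:

```shell
cd vic/ui/VCSA
./install.sh
```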

[Screenshot: vic22]

After the installation is complete, restart the web client and HTML 5 services.
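
On the VCSA this can be done from the same shell session; a sketch using the appliance's service-control utility (service names as on VCSA 6.5):

```shell
# Restart the Flex web client and the HTML 5 client
service-control --stop vsphere-client vsphere-ui
service-control --start vsphere-client vsphere-ui
```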

[Screenshot: vic23]

Refresh the web client and the HTML 5 client to view the vSphere Integrated Containers plugin.

[Screenshot: vic24]

[Screenshot: vic25]

Hope this was informative. Thanks!

Part 4 – Obtain vCenter Certificate Thumbprint

After deploying the appliance, the next step in the installation would be to install the plugin in the web client, but in order to do that we need to obtain the vCenter certificate thumbprint. We can obtain this either by connecting to the VCSA using SSH or, in the GUI, by connecting to the PSC on port 5480. Enable shell on the VCSA appliance. Here is how.

SSH to the VCSA appliance, log in as root, execute the below command, and make a note of the fingerprint.

openssl x509 -in /etc/vmware-vpx/ssl/rui.crt -fingerprint -sha1 -noout

[Screenshot: vic16]
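
The command prints a single line of the form SHA1 Fingerprint=CC:E1:..., and everything after the equals sign is the thumbprint that vic-machine expects. To see the format without touching the VCSA, the same invocation can be run against any certificate; here a throwaway self-signed one stands in for the VCSA's rui.crt:

```shell
# Create a short-lived self-signed certificate purely for illustration
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 -subj "/CN=vcsa.vmmaster.vic" 2>/dev/null
# Same fingerprint command as used on the VCSA against rui.crt
openssl x509 -in /tmp/demo.crt -fingerprint -sha1 -noout
```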

Alternatively, from the PSC appliance management interface, below is how.

[Screenshot: vic17]

Click the _MACHINE_CERT entry and then Show Details.

[Screenshot: vic18]

This is the same as the fingerprint you obtained in the shell session.

[Screenshot: vic19]

Hope this was informative. Thanks!

Part 3 – Deploy VIC Appliance

Now it’s time to deploy the VIC appliance and do some basic configuration. Start the VM deployment using the Deploy OVF Template option in the web client.

Give the VM a name

[Screenshot: vic1]

Now select the compute for the VM.

[Screenshot: vic2]

Use thin provisioning, as this is a lab, and accept the EULA.

[Screenshot: vic3]

[Screenshot: vic4]

Select the datastore you want the VM deployed to. This need not be shared storage and can be local storage, but in production environments it will ideally be shared storage.

[Screenshot: vic5]

Select the VM Network

[Screenshot: vic6]

Configure the OVF with the right IP settings; these configurations are applied when the VM starts.

[Screenshot: vic7]

[Screenshot: vic8]

Review the settings and hit Finish

[Screenshot: vic10]
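
For repeatable deployments, the same OVA can also be pushed non-interactively with VMware's ovftool instead of the web client wizard. A rough sketch only: the OVA filename, datacenter name, and the source network name in the --net mapping are placeholders you would adapt to your environment (running ovftool against the OVA with no target lists its networks and properties):

```shell
ovftool --acceptAllEulas --diskMode=thin --name=vic-appliance --datastore=vsanDatastore --net:"Network"="VM Network" vic-v1.2.1.ova "vi://administrator@vsphere.vic@vcsa.vmmaster.vic/Datacenter/host/My_Cluster"
```

The wizard's guest IP settings would still need to be supplied as OVF properties via --prop: flags, whose exact keys come from that same probe.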

Wait until you see the below screen on the VM console.

[Screenshot: vic11]

Open the configured IP in a browser, provide the vCenter and PSC (if external) details, and hit Continue.

[Screenshot: vic12]

Launch the management portal and observe the options for managing projects, users, and registries.

[Screenshot: vic13]

Hope this was informative. Thanks!

Part 2 – VIC Lab Setup

It is quite simple to set up a home lab for VIC; it is like any basic vSphere lab. All you need is 2-3 ESXi hosts, vCenter, a vDS, and a shared datastore, and it is important that your lab has internet connectivity, as we will be pulling Docker images from Docker Hub. A simple way to set up the lab is a vSAN environment, using VyOS for internet connectivity if, like me, you are using Workstation.

[Screenshot: vic02]

Hope this was informative. Thanks!

Part 1 – vSphere Integrated Containers

A lot is being discussed about containers these days, and containers definitely seem to be the future, especially for those moving into the public cloud and looking for auto-scaling. VMware now has vSphere Integrated Containers (VIC), which allows containers to run on your existing infrastructure alongside your virtual workloads, all at no extra cost; all you need is an Enterprise Plus license for support. Containers run on top of the hypervisor as virtual machines would and support all the features we love.

I wanted to try and learn about containers and, being a VMware admin, I wanted to use my existing home lab to do it; so here I am getting things ready for vSphere Integrated Containers. VMware HOL is definitely a good place to get an understanding of the basics of VIC, but to get hands-on implementation experience in a real environment, I thought it would be good to set up a complete lab for myself and play around with it. I would also like to blog about it step by step, so it becomes a good reference for me and helps others. I plan to keep each post short, with less than a 10-minute read time.

Containers

Containers are like VMs, typically running on a Linux host, except that you don’t have to install a full-blown OS for each container as you do for a VM. Containers are very lightweight and portable and are easy to run and destroy. Unlike VMs, there is no delay while the OS boots and the application comes up; it is just the application starting in a bubble on an already-running OS. So containers are fast! It is almost impossible to talk about containers without mentioning Docker. With the release of VIC 1.2, it is now possible to run Docker containers on vSphere infrastructure without disrupting your existing VM workloads. Below is a nice comparison of containers and VMs.

[Figure: comparison of containers and VMs (vic0)]
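
If containers themselves are new to you, the speed difference is easy to feel on any machine with plain Docker installed (nothing VIC-specific here):

```shell
# Start a one-shot container: no OS boot, just a process in a bubble
docker run --rm alpine echo "hello from a container"
```

The first run pulls a few megabytes of image, and the container is up and gone in seconds, versus minutes to boot a full VM.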

VIC allows a VMware admin to create container hosts against which developers can run the Docker CLI, giving admins control over resources and letting developers create and destroy containers as needed. The three components of VIC are:

vSphere Integrated Containers Appliance – The appliance needed to install vSphere Integrated Containers. This is the component that binds all the other components together and acts as the management plane for VIC.

vSphere Integrated Containers Engine Bundle – An abstraction of the Docker API, fully integrated with vSphere and responsible for creating container VMs.

Virtual Container Host (VCH) – A Photon OS based Linux environment/resource pool used to run Docker containers. This is an isolated environment managed by the vic-machine utility. Each VCH resource pool has exactly one associated VCH Endpoint VM.

[Screenshot: vic01]

With a basic understanding of VIC, let’s move on to setting up our lab.

Hope this was informative. Thanks!