This is still a work in progress and not ready for production, but upstream OpenShift Origin already has experimental support for running OpenShift Origin using system containers. The “latest” Docker images for origin, node and openvswitch, the three components we need, are automatically pushed to docker.io, so we can use those for our test. The rhel7/etcd system container image is instead pulled from the Red Hat registry.
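As an optional sanity check before the installation, the images can be pulled manually into the ostree-backed storage used by system containers. This only verifies that the registries are reachable; the installer pulls the images itself, and the Red Hat registry hostname below is an assumption:

```sh
# Optional: pre-pull the system container images into ostree storage.
atomic pull --storage ostree docker.io/openshift/origin
atomic pull --storage ostree docker.io/openshift/node
atomic pull --storage ostree docker.io/openshift/openvswitch
atomic pull --storage ostree registry.access.redhat.com/rhel7/etcd
```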
```ruby
BOX_IMAGE = "fedora/25-atomic-host"
NODE_COUNT = 2

# Workaround for https://github.com/openshift/openshift-ansible/pull/3413
# (which is not yet merged while writing this)
SCRIPT = "sed -i -e 's|^Defaults.*secure_path.*$|Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin|g' /etc/sudoers"

Vagrant.configure("2") do |config|
  config.vm.define "master" do |subconfig|
    subconfig.vm.hostname = "master"
    subconfig.vm.network :private_network, ip: "10.0.0.10"
  end

  (1..NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network :private_network, ip: "10.0.0.#{10 + i}"
    end
  end

  config.vm.synced_folder "/tmp", "/vagrant", disabled: true
  config.vm.provision :shell, :inline => SCRIPT
  config.vm.box = BOX_IMAGE

  config.vm.provider "libvirt" do |v|
    v.memory = 1024
    v.cpus = 2
  end

  config.vm.provision "shell" do |s|
    ssh_pub_key = File.readlines(ENV['HOME'] + "/.ssh/id_rsa.pub").first.strip
    s.inline = <<-SHELL
      mkdir -p /home/vagrant/.ssh /root/.ssh
      echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
      echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
      lvextend -L10G /dev/atomicos/root
      xfs_growfs -d /dev/mapper/atomicos-root
    SHELL
  end
end
```
The Vagrantfile will provision three virtual machines based on the
`fedora/25-atomic-host` image. One machine will be used for the
master, the other two as nodes. I am using static IPs for them so
that it is easier to refer to them from the Ansible playbook and so
that no DNS configuration is required.
The machines can finally be provisioned with Vagrant:
```sh
# vagrant up --provider libvirt
```
At this point you should be able to log in to the VMs as root using your ssh key:
```sh
for host in 10.0.0.{10,11,12}; do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        root@$host "echo yes I could login on $host"
done
```
Our VMs are ready. Let's install OpenShift!
This is the inventory file used for openshift-ansible; store it in a file named `origin.inventory`:
```ini
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=root
ansible_become=yes
ansible_ssh_user=vagrant
containerized=true
openshift_image_tag=latest
openshift_release=latest
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_install_examples=False
deployment_type=origin

########################## SYSTEM CONTAINERS ###########################
system_images_registry=docker.io
use_system_containers=True
########################################################################

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}

# host group for masters
[masters]
10.0.0.10 openshift_hostname=10.0.0.10

# host group for etcd, should run on a node that is not schedulable
[etcd]
10.0.0.10 openshift_ip=10.0.0.10

# host group for worker nodes, we list master node here so that
# openshift-sdn gets installed. We mark the master node as not
# schedulable.
[nodes]
10.0.0.11 openshift_hostname=10.0.0.11 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'router':'true'}"
10.0.0.12 openshift_hostname=10.0.0.12 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'registry':'true'}"
```
The new configuration required to run system containers is clearly
visible in the inventory file: `use_system_containers=True` tells
the installer to use system containers, and
`system_images_registry` specifies the registry the system
container images must be pulled from.
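To make the effect of these settings concrete: with system containers enabled, the playbooks install each component through the `atomic` command instead of running it as a Docker container. Roughly, and only as an illustration (the container name below is made up, and the installer handles all of this for us):

```sh
# Sketch of what the installer automates per component: pull the image
# into ostree storage, then install it as a system container, i.e. an
# ostree-backed container run by runc under systemd rather than by the
# Docker daemon.  "origin-node" is an illustrative name.
atomic pull --storage ostree docker.io/openshift/node
atomic install --system --name origin-node docker.io/openshift/node
```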
And we can finally run the installer, using python3, from the
directory where we forked openshift-ansible:
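A minimal sketch of the invocation, assuming the standard `playbooks/byo/config.yml` playbook from the openshift-ansible checkout and python3 installed at `/usr/bin/python3` on the VMs (both are assumptions about your setup):

```sh
# Run from the openshift-ansible checkout; origin.inventory is the
# inventory file created above.  The interpreter override is needed
# because Atomic Host only ships python3.
ansible-playbook -i origin.inventory \
    -e 'ansible_python_interpreter=/usr/bin/python3' \
    playbooks/byo/config.yml
```

Once the playbook completes, we can log in to the cluster with one of the htpasswd users defined in the inventory: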
```sh
$ oc login --insecure-skip-tls-verify=false 10.0.0.10:8443 -u user -p OriginUser

Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>
```
```sh
$ oc new-project test

Now using project "test" on server "https://10.0.0.10:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.
```
Are we really running on system containers? Let's check it out on the master and one of the nodes:
The upstream `atomic` command has a breaking change: with future
versions of `atomic` we will need `-f backend=ostree` to filter
system containers, since ostree is a storage backend, not a runtime.
```sh
for host in 10.0.0.{10,11}; do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        root@$host "atomic containers list --no-trunc -f runtime=ostree"
done
```
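With a newer `atomic`, the same check would therefore use the backend filter mentioned above:

```sh
# Same check with future versions of atomic, where system containers
# are filtered by storage backend rather than by runtime.
for host in 10.0.0.{10,11}; do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        root@$host "atomic containers list --no-trunc -f backend=ostree"
done
```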