It is still a work in progress and not ready for production, but the upstream version of OpenShift Origin already has experimental support for running it with system containers. The "latest" Docker images for origin, node and openvswitch, the three components we need, are automatically pushed to docker.io, so we can use these for our test. The rhel7/etcd system container image, instead, is pulled from the Red Hat registry.
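To get a feel for what the installer will do for us on every host, this is roughly what installing a single system container by hand looks like with the atomic CLI (just a sketch, not part of the walkthrough; the unit name and all the configuration the image needs are handled by openshift-ansible):

# pull the image into the local OSTree storage rather than into the Docker storage
atomic pull --storage ostree docker.io/openshift/origin:latest
# install it as a system container: the image is checked out from OSTree and a
# runc spec plus a systemd unit are generated, so it can be managed like a service
atomic install --system --name origin-master docker.io/openshift/origin:latest
systemctl start origin-master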
This demo is based on the blog posts www.projectatomic.io/blog/2016/12/part1-install-origin-on-f25-atomic-host/ and www.projectatomic.io/blog/2016/12/part2-install-origin-on-f25-atomic-host/, with some differences in how the VMs are provisioned and, of course, system containers running instead of Docker containers.
The files used for the provisioning and the configuration can also be found here: https://github.com/giuseppe/atomic-openshift-system-containers, if you find that easier than copying/pasting from a web browser.
In order to give it a try, we need the latest version of openshift-ansible for the installation. Let’s use a known commit that worked for me.
$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible
$ git checkout a395b2b4d6cfd65e1a2fb45a75d72a0c1d9c65bc
To provision the VMs for the OpenShift cluster, I’ve used this simple Vagrantfile:
BOX_IMAGE = "fedora/25-atomic-host"
NODE_COUNT = 2

# Workaround for https://github.com/openshift/openshift-ansible/pull/3413 (which is not yet merged while writing this)
SCRIPT = "sed -i -e 's|^Defaults.*secure_path.*$|Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin|g' /etc/sudoers"

Vagrant.configure("2") do |config|
  config.vm.define "master" do |subconfig|
    subconfig.vm.hostname = "master"
    subconfig.vm.network :private_network, ip: "10.0.0.10"
  end

  (1..NODE_COUNT).each do |i|
    config.vm.define "node#{i}" do |subconfig|
      subconfig.vm.hostname = "node#{i}"
      subconfig.vm.network :private_network, ip: "10.0.0.#{10 + i}"
    end
  end

  config.vm.synced_folder "/tmp", "/vagrant", disabled: true
  config.vm.provision :shell, :inline => SCRIPT
  config.vm.box = BOX_IMAGE

  config.vm.provider "libvirt" do |v|
    v.memory = 1024
    v.cpus = 2
  end

  config.vm.provision "shell" do |s|
    ssh_pub_key = File.readlines(ENV['HOME'] + "/.ssh/id_rsa.pub").first.strip
    s.inline = <<-SHELL
      mkdir -p /home/vagrant/.ssh /root/.ssh
      echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
      echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
      lvextend -L10G /dev/atomicos/root
      xfs_growfs -d /dev/mapper/atomicos-root
    SHELL
  end
end
The Vagrantfile will provision three virtual machines based on the `fedora/25-atomic-host` box. One machine will be used as the master, the other two will be used as nodes. I am using static IPs for them so that it is easier to refer to them from the Ansible playbook and no DNS configuration is required.
The machines can finally be provisioned with vagrant as:
# vagrant up --provider libvirt
At this point you should be able to log into the VMs as root using your SSH key:
for host in 10.0.0.{10,11,12}; do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@$host "echo yes I could login on $host"
done
yes I could login on 10.0.0.10
yes I could login on 10.0.0.11
yes I could login on 10.0.0.12
Our VMs are ready. Let's install OpenShift!
This is the inventory file used for openshift-ansible; store it in a file named origin.inventory:
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_user=root
ansible_become=yes
ansible_ssh_user=vagrant
containerized=true
openshift_image_tag=latest
openshift_release=latest
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'
openshift_install_examples=False
deployment_type=origin
###########################################################
#######SYSTEM CONTAINERS###################################
###########################################################
system_images_registry=docker.io
use_system_containers=True
###########################################################
###########################################################
###########################################################
# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1', 'user': '$apr1$.gw8w9i1$ln9bfTRiD6OwuNTG5LvW50'}
# host group for masters
[masters]
10.0.0.10 openshift_hostname=10.0.0.10
# host group for etcd, should run on a node that is not schedulable
[etcd]
10.0.0.10 openshift_ip=10.0.0.10
# host group for worker nodes. The master could also be listed here,
# marked as not schedulable, so that openshift-sdn gets installed on it
# as well.
[nodes]
10.0.0.11 openshift_hostname=10.0.0.11 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'router':'true'}"
10.0.0.12 openshift_hostname=10.0.0.12 openshift_schedulable=true openshift_node_labels="{'region': 'primary', 'registry':'true'}"
The new configuration required to run system containers is clearly visible in the inventory file: `use_system_containers=True` tells the installer to use system containers, while `system_images_registry` specifies the registry from which the system container images must be pulled.
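Just as a sketch of an equivalent invocation, the same two variables could also be passed to ansible-playbook on the command line with -e instead of being stored in the inventory (the inventory approach above is what I actually used):

ansible-playbook -v -i origin.inventory \
    -e 'ansible_python_interpreter=/usr/bin/python3' \
    -e use_system_containers=True \
    -e system_images_registry=docker.io \
    ./playbooks/byo/config.yml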
And we can finally run the installer, using python3, from the directory where we cloned openshift-ansible:
$ ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' -v -i origin.inventory ./playbooks/byo/config.yml
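If the playbook fails early with connection errors, it can help to first check that Ansible can actually reach every host in the inventory with an ad-hoc ping (a standard Ansible sanity check, not part of the original posts):

$ ansible OSEv3 -i origin.inventory -e 'ansible_python_interpreter=/usr/bin/python3' -m ping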
After some time, if everything went well, OpenShift should be installed.
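A quick way to verify it (assuming, as openshift-ansible normally does, that the admin kubeconfig has been set up for root on the master) is to ask the master for its view of the cluster:

$ ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@10.0.0.10 oc get nodes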
To copy the oc client to the local machine I’ve used this command from the directory with the Vagrantfile:
$ scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i .vagrant/machines/master/libvirt/private_key vagrant@10.0.0.10:/usr/local/bin/oc /usr/local/bin/
As a non-root user, let's log into the cluster:
$ oc login --insecure-skip-tls-verify=false 10.0.0.10:8443 -u user -p OriginUser
Login successful.
You don't have any projects. You can try to create a new project, by running
oc new-project <projectname>
$ oc new-project test
Now using project "test" on server "https://10.0.0.10:8443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.
$ oc new-app https://github.com/giuseppe/hello-openshift-plus.git
--> Found Docker image 1f8ec11 (6 days old) from Docker Hub for "fedora"
* An image stream will be created as "fedora:latest" that will track the source image
* A Docker build using source code from https://github.com/giuseppe/hello-openshift-plus.git will be created
* The resulting image will be pushed to image stream "hello-openshift-plus:latest"
* Every time "fedora:latest" changes a new build will be triggered
* This image will be deployed in deployment config "hello-openshift-plus"
* Ports 8080, 8888 will be load balanced by service "hello-openshift-plus"
* Other containers can access this service through the hostname "hello-openshift-plus"
* WARNING: Image "fedora" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "fedora" created
imagestream "hello-openshift-plus" created
buildconfig "hello-openshift-plus" created
deploymentconfig "hello-openshift-plus" created
service "hello-openshift-plus" created
--> Success
Build scheduled, use 'oc logs -f bc/hello-openshift-plus' to track its progress.
Run 'oc status' to view your app.
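While waiting for the build, the progress can be followed with the usual oc commands (nothing here is specific to system containers):

$ oc logs -f bc/hello-openshift-plus   # follow the build
$ oc get pods -w                       # watch the pods until the deployment is running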
After some time, we can see our service running:
$ oc get service
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
hello-openshift-plus   172.30.204.140   <none>        8080/TCP,8888/TCP   46m
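The service ClusterIP is only reachable from hosts attached to the cluster SDN, so a quick way to test it is to curl it from one of the nodes (the IP below is the one reported by oc get service in my run, yours will differ):

$ ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@10.0.0.11 curl -s http://172.30.204.140:8080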
Are we really running system containers? Let's check it out on the master and one node.
Note that the upstream atomic command has a breaking change: with future versions of atomic we will need -f backend=ostree to filter system containers, as ostree is clearly not a runtime.
for host in 10.0.0.{10,11}; do
    ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@$host "atomic containers list --no-trunc -f runtime=ostree"
done
CONTAINER ID    IMAGE                                            COMMAND                                       CREATED            STATE     RUNTIME
etcd            192.168.1.13:5000/rhel7/etcd                     /usr/bin/etcd-env.sh /usr/bin/etcd            2017-02-23 11:01   running   ostree
origin-master   192.168.1.13:5000/openshift/origin:latest        /usr/local/bin/system-container-wrapper.sh    2017-02-23 11:10   running   ostree
CONTAINER ID    IMAGE                                            COMMAND                                       CREATED            STATE     RUNTIME
origin-node     192.168.1.13:5000/openshift/node:latest          /usr/local/bin/system-container-wrapper.sh    2017-02-23 11:17   running   ostree
openvswitch     192.168.1.13:5000/openshift/openvswitch:latest   /usr/local/bin/system-container-wrapper.sh    2017-02-23 11:18   running   ostree
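With a newer atomic, per the note above, the same check becomes:

atomic containers list --no-trunc -f backend=ostree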
And to finally destroy the cluster:
vagrant destroy