Project Atomic is now sunset

The Atomic Host platform has been replaced by CoreOS. Users of Atomic Host are encouraged to join the CoreOS community on the Fedora CoreOS communication channels.

The documentation contained below and throughout this site has been retained for historical purposes, but can no longer be guaranteed to be accurate.

Project News

Subatomic cluster install with Kickstart

Look, new case! 3D printed, thanks to Spot Callaway.

[Photo: the new Subatomic Cluster case]

In my previous install of the Subatomic Cluster, I simply did the manual Anaconda install. However, since this cluster is for testing, I wanted a way to re-install it rapidly so that I can test out various builds of Atomic. This time, I was installing CentOS Atomic so that I could test things out on CentOS Atomic Continuous.

I also wanted to fix the disk allocation. Due to various limitations, the initial root partition for a new Atomic Host is of fixed size (3GB) regardless of the amount of disk space available. I wanted to increase that to 1/3 of the 64GB size of each SSD.

Enter Kickstart, the standard installation automation system used by Fedora, CentOS, RHEL, and other Linux distributions. I was more familiar with Kickstart as part of a PXEboot network install, but re-installing the cluster required something simpler: in this case, a Kickstart file served over the network, combined with editing the boot line at install time. Since the Kickstart documentation is extensive enough to be confusing, here are some simple examples.

First, I created an atomic-ks.cfg file on my laptop; see below for the full file. I’ve annotated it with comments so that you can understand what it’s doing and use it as a base for your own installs. I then served the file on the local network using python -m SimpleHTTPServer, as shown below.
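Serving the file takes nothing more than Python’s built-in web server (a minimal sketch; the directory name is just an example):

# serve the directory containing atomic-ks.cfg on port 8000
cd ~/kickstarts
python -m SimpleHTTPServer 8000
# the file is now reachable at http://<laptop-ip>:8000/atomic-ks.cfg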

# usual setup
install
reboot
lang en_US.UTF-8
keyboard us
timezone --utc America/Los_Angeles
selinux --enforcing

# clear the disk and create a new mbr partition for boot
zerombr
clearpart --all --initlabel
bootloader --location=mbr --boot-drive=sda
reqpart --add-boot

# create a new logical volume and group for everything else
part pv.01 --grow --ondisk=sda
volgroup atomicos pv.01

# add a 20GB XFS partition for root
logvol / --size=20000 --fstype="xfs" --name=root --vgname=atomicos

# add a 2GB swap partition
logvol swap --fstype swap --name=lv_swap --vgname=atomicos --size=2048

# disable cloud-init, enable ntp, docker and ssh
services --disabled="cloud-init,cloud-config,cloud-final,cloud-init-local" --enabled="systemd-timesyncd,network,sshd,docker"

# set up OSTree to pull a tree from the USB key
ostreesetup --osname="centos-atomic-host" --remote="centos-atomic-host" --url="file:///install/ostree" --ref="centos-atomic-host/7/x86_64/standard" --nogpg

# create static network interface, for Kubernetes setup.  Requires changing this line
# for each machine
network --device=link --bootproto=static --ip=192.168.1.102 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.1

# create sudo user.
user --name=atomic --groups=wheel --password=atomic

# reset ostree to upstream
%post --erroronfail
rm -f /etc/ostree/remotes.d/centos-atomic-host.conf
ostree remote add --set=gpg-verify=true centos-atomic-host 'http://mirror.centos.org/centos/7/atomic/x86_64/repo'
%end

I then booted each Minnowboard off the USB key. There was one manual step: I had to edit the GRUB boot menu and tell it to use the Kickstart file. When the boot menu came up, I selected Install CentOS 7, pressed e, and edited the linuxefi boot line:

linuxefi /images/pxeboot/vmlinuz inst.ks=http://192.168.1.105:8000/atomic-ks.cfg inst.stage2=hd:LABEL=Centos-Atomic-Host-7-x86_64 quiet

After that, it’s all automatic. Anaconda will partition, install, and boot the system.
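Once a node has rebooted, a quick check over SSH confirms the deployed tree (using the sudo user and static IP defined in this Kickstart file):

# log in as the user created by the Kickstart file and inspect the deployment
ssh atomic@192.168.1.102 rpm-ostree status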

Want to see the Subatomic Cluster running? Join us at ContainerDays Austin or KubeCon.

Thanks to Dusty Mabe and Matthew Micene for helping me create this Kickstart config and troubleshoot it.

View article »

Docker Brno -- Summer is OVER

Summer is over and school is back in session. These events mark a change of seasons, a change in lifestyle, and a return to the meetups of Docker Brno. Tomáš Tomeček guided 45 of us through presentations by three speakers as well as a news and updates presentation.

Tomáš Tomeček

Tomáš started us off with a news and updates presentation about recent changes in Docker (Slides). He briefly covered a lot of the features in the latest releases of Docker, versions 1.12.0 and 1.12.1.

These versions include the new orchestration components bundled into the daemon. The addition of the components is particularly controversial and has caused some people to wonder why they are part of docker-engine.

Along with the orchestration components, a new abstraction called the service API was added, backed by load balancing using IPVS in the Linux kernel. Additional features include a plugin API, a new HEALTHCHECK instruction, and the --live-restore daemon flag, which keeps your containers running while the daemon itself is stopped or upgraded.
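As a rough illustration of the new service API on a Docker 1.12 host (the service name and image here are arbitrary):

# turn on swarm mode
docker swarm init
# create a replicated service, load balanced via IPVS
docker service create --name web --replicas 2 --publish 80:80 nginx
# list running services
docker service ls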

Josef Karásek

Josef Karásek presented “Rolling Down the Upgrade River doesn’t need to be a White Water Experience.” This demonstration of rolling updates used a Java application running in docker containers on OpenShift Origin.

The demo was a “canary-style” rolling upgrade, allowing an application to be upgraded in-place, on a live service, with no interruption for client sessions. While the demo used a monolithic application, many of the Twelve-Factor App principles were satisfied.

In both a show of demo-bravery and zero-to-hero magic, he started his demo with a clean install of OpenShift Origin. This was done using the new oc cluster up command, which started a local single-node OpenShift environment on his laptop. His secondary goal was to show how he could go from nothing to a fully launched Java application in less than 15 minutes, including build time and downloads.
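The bootstrap itself is a single command, assuming the oc client from OpenShift Origin 1.3 or later and a running Docker daemon:

# stand up a local, single-node OpenShift Origin cluster
oc cluster up
# tear it down again after the demo
oc cluster down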

To build the demo application, he performed the following actions in the web console and with the CLI, alternating between them to show off OpenShift during the build process (a rough CLI sketch follows the list).

  1. Created a project to hold a git forge. The demo environment lives behind NAT, so he needed a local git forge that could send a webhook to the rest of OpenShift. This project contains one container that provides Gogs - Go Git Service.
  2. Created a second project to hold the actual application. Into this project he loaded:

    1. A Java application based on a JBoss EAP Quickstart example. The application is built using maven and is able to create and greet users and store session IDs in a replicated cache. The greeting page displays the cached session key information and reports what node is serving it. The session key was stored in a cache replicated over all EAP nodes. The application ran on a tiny cluster of two EAP servers (on a laptop!).
    2. A Postgres database to store user information.
  3. Configured Image Streams and other administrative components of OpenShift so that new builds can be automatically triggered and deployed. This would normally be done by the operations team and not the developer.
  4. Added the URL for the webhook to Gogs.
  5. Started the application and let it build.
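On the CLI, those steps boil down to a handful of commands. The sketch below is only an approximation: the project name, Git URL, builder detection, and template parameters are placeholders rather than the demo’s exact values.

# project for the application
oc new-project demo
# build and deploy the Java application straight from its Git repository in Gogs
oc new-app http://gogs.example.com/demo/eap-app.git --name=eap-app
# add a PostgreSQL database from the bundled ephemeral template
oc new-app postgresql-ephemeral --param=POSTGRESQL_USER=demo --param=POSTGRESQL_PASSWORD=demo --param=POSTGRESQL_DATABASE=userdb
# expose the application outside the cluster
oc expose service eap-app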

While the build was finishing, he talked about how there are models for using OpenShift that include full CI/CD systems, like Jenkins. These models allow code changes to be built, tested, merged and deployed automatically. Today, he changed the code and merged it by hand because he was on his laptop and had memory constraints.

Then it was demo breaking time! Karásek scaled the application to two replicas and showed how a specific pod was assigned to serve it. A “pod” is a Kubernetes abstraction that represents one or more related containers. The containers are managed as a single group for administrative purposes, including replication. In this example, each pod consists of one Java application container. Once we were convinced that the HAProxy router used by OpenShift would not allow us to be served by any other pod, he deleted the pod. The other pod was able to pick up the session without a user visible failure because of the auto-spawn capabilities of OpenShift and the session ID cache.
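In oc terms, that part of the demo looks roughly like this (the deployment config and pod names are illustrative):

# scale the application to two replicas
oc scale dc/eap-app --replicas=2
# find the pod currently serving the session, then delete it
oc get pods
oc delete pod eap-app-1-abcde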

Next, it was time for a code change. A quick git clone later and the code was modified and pushed to the Gogs service. Less than a second later, OpenShift reacted to the git webhook notification and kicked off a new build of the code. Using the web console and oc get pod, we watched the build progress. When complete, the new pods seamlessly and invisibly replaced the original ones with zero downtime.

This demonstration provided insight into how an existing application can be migrated to containers to gain scale-out and management features from an orchestrator like OpenShift Origin in a way that preserves all of the hard-won existing functionality. Take a look at the demo script and code and try it yourself.

We took breaks between every talk and enjoyed the fine facilities provided by kiwi.com. They arranged for the use of their wine cellar for the meetup and a large supply of beverages and food for the attendees.

Vadim Rutkovsky

Vadim Rutkovsky was next with his presentation, Ansible Container: Look mom, no Dockerfile! (Slides) His need for a new way to build containers was driven by his use of Grafana. He started with a container from DockerHub, but quickly hit some limitations that meant he needed a custom-built version.

This should be easy to do as the Dockerfiles are online next to the containers. Unfortunately, the Dockerfile in question, while successfully able to build a container, was crazy-pants and not easy to maintain or modify. In particular its handling of plugins was not elegant.

This got him thinking about traditional application installation concepts, and he decided to use Ansible Container. Ansible Container has the ability to build Docker images and orchestrate containers using only Ansible playbooks + shell + docker-compose. It allows the container builder to leverage the power of Ansible features like vars, templates and roles.

Getting started is easy thanks to the ansible-container init command, which generates these basic files:

  • main.yml: that describes the images
  • container.yml: that describes orchestration
  • requirements.txt: which can load additional Ansible modules, if required

A huge win came with the main.yml file structure because the container could be built using traditional application and system installation idioms.

A build using Ansible Container creates a “builder image” which allows building and deploying one or more images. Ansible Container can then launch the containers using docker-compose, or it can create a playbook and ship it to Kubernetes and OpenShift.
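A minimal end-to-end run looks something like the following, assuming Ansible Container is installed from pip; the shipit engine argument reflects the project’s documentation at the time:

# scaffold main.yml, container.yml and requirements.txt
ansible-container init
# build the images inside the builder container
ansible-container build
# run the containers locally (docker-compose under the hood)
ansible-container run
# generate a playbook that deploys the application to Kubernetes
ansible-container shipit kube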

The project is fairly new and the next round of features will include build caching, detached execution, custom volumes and build variables, and rkt and OCI support. Full documentation is online as well as an active community in #ansible-container on Freenode.

Tomáš Král

Tomáš Král presented the final talk of the evening, Kompose: from your local machine to the cloud with one command. (Slides) Kompose can convert a Docker Compose file into a full Kubernetes or OpenShift configuration. It is an open source Go project supported by Skippbox, Google, and Red Hat.

Král’s demo used the Go guestbook application, which he had decomposed into two containerized services. First he started the application using a pair of docker run commands, one per service. Next he showed and used a Docker Compose file that was equivalent to the same pair of commands. Kompose showed up at this point and, with one command, allowed us to deploy the application to a local Minikube cluster.
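That one command is roughly the following (a sketch assuming a docker-compose.yml in the current directory and kubectl pointed at the Minikube cluster):

# convert the Compose file into Kubernetes manifests
kompose convert -f docker-compose.yml
# or convert and deploy directly to the current Kubernetes context
kompose up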

As a final demo step, he made a live deployment to OpenShift 3 Online (dev-preview) to show how to go from a Docker Compose file on your local machine to a live production deployment in the cloud.

Kompose allows you to easily move from a development environment using Docker Compose or an application delivered with a distributed application bundle (DAB) file to a production quality environment based on Kubernetes and OpenShift. The output of Kompose allows you to quickly bootstrap to the rich Kubernetes and OpenShift environments with a standard configuration that can then be tuned and configured. Download the demo code and script and try it out.

This meetup was a fantastic event showing off some really cool technology. I want to thank our speakers, attendees and sponsors for making this such an awesome event. I personally walked away motivated to play with both Ansible Container and Kompose to solve some challenges in my tech-life.

The meetup was made possible through the generosity of our sponsors: kiwi.com, who provided space and refreshments, and Red Hat, who provided administrative support and funding.

Our next meetup will be on 1 December 2016. We are looking for speakers and hope you’ll contact us at @DockerBrno or on our meetup page. If you’re not local to Brno and are interested in talking, contact us too. We may be able to invite and sponsor you.

View article »

Running Kubernetes and Friends in Containers on CentOS Atomic Host

The atomic hosts from CentOS and Fedora earn their atomic namesake by providing for atomic, image-based system updates via rpm-ostree, and atomic, image-based application updates via docker containers.

This system vs application division isn’t set in stone, however. There’s room for system components to move across from the somewhat rigid world of ostree commits to the freer-flowing container side.

In particular, the key atomic host components involved in orchestrating containers across multiple hosts, such as flannel, etcd and kubernetes, could run instead in containers, making life simpler for those looking to test out newer or different versions of these components, or to swap them out for alternatives.
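For instance, rather than installing etcd from the host tree, it can be pulled and run as a container. This is only a sketch; the image tag and flags are illustrative rather than the article’s exact setup:

# run etcd as a container on the host network
docker run -d --name etcd --net=host \
  quay.io/coreos/etcd:v3.0.12 \
  /usr/local/bin/etcd \
    --listen-client-urls http://0.0.0.0:2379 \
    --advertise-client-urls http://127.0.0.1:2379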

The devel tree of CentOS Atomic Host, which features a trimmed-down system image that leaves out kubernetes and related system components, is a great place to experiment with alternative methods of running these components, and swapping between them.

Read More »

Introduction to System Containers

As part of our effort to reduce the number of packages that are shipped with the Atomic Host image, we faced the problem of how to containerize services that are needed before Docker itself is running. The result: system containers, a way to run containers in production using read-only images.

System containers use different technologies: OSTree for storage, Skopeo to pull images from a registry, runC to run the containers, and systemd to manage their life cycle.
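In practice, installing a system container looks something like this sketch with the atomic CLI; the image name is a placeholder and the exact flags may vary by release:

# pull a system container image and install it as a systemd-managed service
atomic install --system --name etcd registry.example.com/containers/etcd
# manage it like any other unit
systemctl start etcd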

Read More »

New CentOS Atomic Host with Package Layering Support

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (tree version 7.20160818), featuring support for rpm-ostree package layering.

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2, or Amazon Machine Image. Check out the CentOS wiki for download links and installation instructions, or read on to learn more about what’s new in this release.
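Package layering lets you add extra RPMs on top of the base tree. A quick sketch (the package choice is arbitrary, and older rpm-ostree releases call the subcommand pkg-add):

# layer an extra package onto the base OSTree commit
rpm-ostree install tmux
# reboot into the new deployment that includes the layered package
systemctl reboot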

Read More »