Project Atomic is now sunset

The Atomic Host platform has been replaced by CoreOS. Users of Atomic Host are encouraged to join the CoreOS community on the Fedora CoreOS communication channels.

The documentation contained below and throughout this site has been retained for historical purposes, but can no longer be guaranteed to be accurate.

Project News

Project Atomic Docker Patches

Project Atomic’s version of the Docker-based container runtime has been carrying a series of patches on top of the upstream Docker project for a while now. Each patch we carry adds significant effort as we continue to track upstream, so we would prefer to carry no patches at all. We always strive to get our patches merged upstream, and to do it in the open.

This post and the accompanying document describe the patches we are currently carrying:

  • Explanation on types of patches.
  • Description of patches.
  • Links to GitHub discussions, and pull requests for upstreaming the patches to Docker.

Some people have asserted that our repo is a fork of the upstream Docker project.

What Does It Mean To Be a Fork?

I have been in open source for a long time, and my definition of a fork might be dated. I think of a fork as a hostile action taken by one group to get others to use and contribute to their version of an upstream project and ignore the original. For example, LibreOffice forking off of OpenOffice or, going way back, Xorg forking off of XFree86.

Nowadays, GitHub has changed the meaning. When a software repository exists on GitHub or a similar platform, everyone who wants to contribute has to hit the fork button and start building their patches. As of this writing, Docker on GitHub has 9,860 forks, including ours. By this definition, however, every package that a distribution ships with patches is a fork. Red Hat ships the Linux kernel with patches, and I have never heard that called a fork. But by that logic, any upstream project shipped with patches would be one.

The Docker upstream even relies on Ubuntu carrying patches for AUFS that were never merged into the upstream kernel. Since Red Hat-based distributions don’t carry the AUFS patches, we contributed the support for Devicemapper, OverlayFS, and Btrfs backends, which are fully supported in the upstream kernel. This is what enterprise distributions should do: attempt to ship packages configured in a way that they can be supported for a long time.

At the end of the day, we continue to track the changes made to the upstream Docker Project and re-apply our patches to that project. We believe this is an important distinction to allow freedom in software to thrive while continually building stronger communities. It’s very different than a hostile fork that divides communities—we are still working very hard to maintain continuity around unified upstreams.

How Can I Find Out About Patches for a Particular Version of Docker?

All of the patches we ship are described in the README.md file on the appropriate branch of our repository. If you want to look at the patches for Docker 1.12 you would look at the 1.12 branch.

You can then look on the patches list page for information about these patches.

What Kind of Patches does Project Atomic Include?

Here is a quick overview of the kinds of patches we carry, and then guidance on finding information on specific patches.

Upstream Fixes

The Docker Project upstream tends to fix issues in the next version of Docker. This means that if a user finds an issue in Docker 1.11 and we provide a fix upstream, the patch gets merged into the master branch, and it will probably not get backported to Docker 1.11.

Since Docker is releasing at such a rapid rate, they tell users to just install Docker 1.12 when it is available. This is fine for people who want to be on the bleeding edge, but in a lot of cases the newer version of Docker comes with new issues along with the fixes.

For example, Docker 1.11 split the Docker daemon into three parts: Docker daemon, containerd, and runc. We did not feel this was stable enough to ship to enterprise customers right when it came out, yet it had multiple fixes for the Docker 1.10 version. Many users want to only get new fixes to their existing software and not have to re-certify their apps every two months.

Another issue with supporting stable software with rapidly changing dependencies is that developers on the stable projects must spend time ensuring that their product remains stable every time one of their dependencies is updated. Because this is an expensive process, dependencies end up being updated only infrequently. Instead, we cherry-pick fixes from upstream Docker and ship them on older versions, so that we get the benefit of the bug fixes without the cost of updating the entire dependency. This is the same approach we take to add capabilities to the Linux kernel, a practice that has proven very valuable to our users.

Proposed Patches for Upstream

We carry patches that we know our users require right now, but have not yet been merged into the upstream project. Every patch that we add to the Project Atomic repository also gets proposed to the upstream Docker repository.

These sorts of patches remain in the Project Atomic repository briefly while they are being considered upstream, or indefinitely if the upstream community rejects them. If we don’t agree with upstream Docker and feel our users need these patches, we continue to carry them. In some cases we have worked out alternative solutions, like building authorization plugins.

For example, users of RHEL images are not supposed to push these images to public web sites. We wanted a way to prevent users from accidentally pushing RHEL-based images to Docker Hub, so we originally created a patch to block the push. When authorization plugins were added, we created a plugin to protect users from pushing RHEL content to a public registry like Docker Hub, and no longer had to carry the custom patch.

Detailed List of Patches

Want to know more about specific patches? You can find the current table and list of patches on our new patches list page.


Vagrant Service Manager 1.3.0 Released

This version of vagrant-service-manager introduces support for displaying Kubernetes configuration information. This enables users to access the Kubernetes server that runs inside the ADB virtual machine from their host machine.

This version also includes binary installation support for Kubernetes. This support is extended to users of the Red Hat Container Development Kit. For information about client binary installation, see the previous release announcement Client Binary Installation Now Included in the ADB.

The full list of features in this version is:

  • Configuration information for Kubernetes provided as part of the env command
  • Client binary installation support for Kubernetes added to the ADB
  • Client binary installation support for OpenShift, Kubernetes and Docker in the Red Hat Container Development Kit
  • Auto-detection of a previously downloaded oc executable binary on Windows operating systems
  • Unit and acceptance tests for the Kubernetes service
  • Option to enable Kubernetes from a Vagrantfile with the following command:
  config.servicemanager.services = 'kubernetes'

1. Install the Kubernetes client binary

Run the following command to install the Kubernetes client binary, kubectl:

$ vagrant service-manager install-cli kubernetes
# Binary now available at /home/budhram/.vagrant.d/data/service-manager/bin/kubernetes/1.2.0/kubectl
# run binary as:
# kubectl <command>
export PATH=/home/budhram/.vagrant.d/data/service-manager/bin/kubernetes/1.2.0:$PATH

# run following command to configure your shell:
# eval "$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli kubernetes | tr -d '\r')"

Run the following command to configure your shell:

$ eval "$(VAGRANT_NO_COLOR=1 vagrant service-manager install-cli kubernetes | tr -d '\r')"

2. Enable access to the Kubernetes server that runs inside the ADB

Run the following command to display the environment variables for Kubernetes:

$ vagrant service-manager env kubernetes
# Set the following environment variables to enable access to the
# kubernetes server running inside of the vagrant virtual machine:
export KUBECONFIG=/home/budhram/.vagrant.d/data/service-manager/kubeconfig

# run following command to configure your shell:
# eval "$(vagrant service-manager env kubernetes)"

Run the following command to configure your shell:

$ eval "$(vagrant service-manager env kubernetes)"

For a full list of changes in version 1.3.0, see the release log.


Creating OCI configurations with the ocitools generate library

OCI runc is a cool new tool for running containers on Linux machines. It follows the OCI container runtime specification. As of Docker 1.11, it is the main mechanism that Docker uses for launching containers.

The really cool thing is that you can use runc without even using Docker. First you create a rootfs on your disk: a directory that includes all of your software and usually follows the basic layout of /. There are several tools that can create a rootfs, including dnf or the atomic command. Once you have a rootfs, you need to create a config.json file which runc will read. config.json has all of the specifications for running a container: things like which namespaces to use, which capabilities your container gets, and what runs as PID 1 inside it. It is somewhat similar to the output of docker inspect.
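For orientation, here is a heavily trimmed sketch of what a config.json can look like. The field names follow the OCI runtime specification, but the values are illustrative, and a real file produced by runc or ocitools carries many more settings (mounts, capabilities, cgroup resources, and so on):

```json
{
  "ociVersion": "1.0.0-rc1",
  "root": { "path": "rootfs", "readonly": true },
  "process": {
    "terminal": true,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "sh" ],
    "cwd": "/"
  },
  "hostname": "mycontainer",
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" }
    ]
  }
}
```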

Creating and editing the config.json is not for the faint of heart, so we developed a command line tool called ocitools generate that can do the hard work of creating the config.json file.

Creating OCI Configurations

This post will guide you through the steps of creating OCI configurations using the ocitools generate library for the Go programming language.

There are four steps to create an OCI configuration using the ocitools generate library:

  1. Import the ocitools generate library into your project;
  2. Create an OCI specification generator;
  3. Modify the specification by calling different methods of the specification generator;
  4. Save the specification.


Download and Get Involved with Fedora Atomic 24

This week, the Fedora Project released updated images for its Fedora 24-based Atomic Host. Fedora Atomic Host is a leading-edge operating system designed around Kubernetes and Docker containers.

Fedora Atomic Host images are updated roughly every two weeks, rather than on the main six-month Fedora cadence. Because development is moving quickly, only the latest major Fedora release is supported.

Note: Due to an issue with the image-building process, the current Fedora Atomic Host images include an older version of the system tree. Be sure to run atomic host upgrade to get the latest set of components. The next two-week media refresh will include an up-to-date tree.
