Summer is approaching, and it’s time for camp! Container.Camp, that is.
Okay, that’s a bit of a hokey lead-in. The good news is that I’ll try to avoid that sort of thing during my talk at Container.Camp this Friday.
The Atomic Host platform is now replaced by CoreOS. Users of Atomic Host are encouraged to join the CoreOS community on the Fedora CoreOS communication channels.
The documentation contained below and throughout this site has been retained for historical purposes, but can no longer be guaranteed to be accurate.
Having problems with ping in your containers? It might not be what you think!
We received a bug report the other day with the following comment:
On a RHEL 7 host (registered and subscribed), I can use Yum to install additional packages from docker run ... or in a Dockerfile. If I install the iputils package (and any dependencies), the basic ping command does not work. However, if I use the busybox image from the public Docker index, its ping command works perfectly fine (on the same RHEL 7 host).
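The excerpt above doesn't include the resolution, but the report can be reproduced with a minimal Dockerfile along these lines. The base image and the setcap line are my additions, not part of the original report; on RHEL/CentOS 7, ping carries file capabilities rather than a setuid bit, and if those capabilities are lost in the container image, restoring them is one commonly reported workaround:

```
# Illustrative reproduction -- base image and setcap workaround are
# assumptions, not taken from the bug report:
FROM centos:centos7
RUN yum -y install iputils libcap && yum clean all
# On RHEL/CentOS 7, /usr/bin/ping normally carries cap_net_raw as a
# file capability instead of being setuid root; restore it if lost:
RUN setcap cap_net_raw+ep /usr/bin/ping
CMD ping -c 3 127.0.0.1
```

Alternatively, granting the capability at run time (docker run --cap-add=NET_RAW) has been reported to have the same effect.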
A couple of interesting Atomic releases to take a look at this week. The Fedora Project has released Fedora 22 alpha, which includes the Cloud edition Atomic Host images, as well as the Server and Workstation editions. We also have a few new test images from the CentOS Atomic SIG to check out – including Vagrant boxes.
The Fedora release includes the standard raw and qcow2 images, as well as new Vagrant boxes for VirtualBox and libvirt/KVM.
As an alpha release, this should give an excellent preview of the Fedora 22 release, but it's entirely possible it will have some interesting bugs as well. Please do give it a spin and let us know if you run into any problems! If you have questions, feel free to ask on cloud@lists.fedoraproject.org, or report any bugs you might find.
We also have some news on the CentOS front, with new images and Vagrant boxes for libvirt and VirtualBox:
Note that the most recent CentOS qcow2 images do not include the LVM backing storage, but that is included in the Vagrant boxes.
Also worth noting: the CentOS SIG images are not rebuilds of Red Hat Enterprise Linux Atomic Host, which was released last week. The CentOS SIG products are still works in progress, and should be treated as such.
Questions or comments on the CentOS Atomic work? Ping us on centos-devel or atomic-devel, or ask in #atomic on Freenode.
Red Hat announced the general availability of Red Hat Enterprise Linux Atomic Host earlier today. This pulls together work from Project Atomic and makes it ready for organizations that are looking to package and run applications based on Red Hat Enterprise Linux (RHEL) 6 and 7 as containers.
This release includes all the components (Docker, Kubernetes, Flannel, systemd, etc.) that you need to deploy a container-based architecture in an environment based on RHEL.
Not quite sure about the benefits of containers? The RHEL folks are hosting a virtual event on March 12th. This features Tim Yeaton and Lars Herrman from Red Hat, and principal analyst Dave Bartoletti from Forrester. Odds are, if you're following Project Atomic, you're already pretty hip to the benefits of containers, but if not, this should answer many of your questions.
Interested in trying RHEL Atomic Host? Head over to the Red Hat Customer Portal to read up on the offering and grab the installation media; you can download and test it without a RHEL subscription.
The fun doesn’t stop with the RHEL Atomic Host release, of course. We are still working on getting Fedora Atomic Host ready for the Fedora 22 release, and the CentOS SIG is continuing to work on CentOS Atomic Host as well. Naturally, the work in Fedora and CentOS will benefit future RHEL Atomic Host releases.
The alpha for the Fedora 22 Cloud edition, including Atomic, should be released in the next week or two (the schedule currently calls for 10 March). A new CentOS image should be out in the next day or two, including some additional image types!
Have questions, or want to get involved in Project Atomic? Here’s where to find us:
And, as always, ping me directly (jzb, at RedHat.com) if you can’t find what you need elsewhere!
Atomic hosts are meant to be as slim as possible, with a bare minimum of applications and services built-in, and everything else running in containers. However, what counts as your bare minimum is sure to differ from mine, particularly when we’re running our Atomic hosts in different environments.
For instance, I’m frequently testing and using Atomic hosts on my oVirt installation, where it’s handy to have oVirt’s guest agent running, which reports useful information about what’s going on inside an oVirt-hosted VM. If you aren’t using oVirt, though, there’s no reason to carry this package around in what’s supposed to be a svelte image.
I could build my own Atomic host and include the ovirt-guest-agent-common package, but I’d rather stick with upstream. Containerization is the solution for running extra software on Atomic, but since the guest agent needs to see and interact with the host itself, we need a Containers Unbound sort of approach. Fortunately, Dan Walsh has blogged about this very issue, in a post about the Super Privileged Container concept:
I define an SPC as a container that runs with security turned off (--privileged) and turns off one or more of the namespaces or ‘volume mounts in’ parts of the host OS into the container. This means it is exposed to more of the Host OS.
I started with a Dockerfile defining my ovirt-guest-agent container:
FROM centos:centos7
MAINTAINER Jason Brooks <jbrooks@redhat.com>
RUN yum -y update; \
yum -y install epel-release; \
yum -y install ovirt-guest-agent-common; \
yum clean all
CMD /usr/bin/python /usr/share/ovirt-guest-agent/ovirt-guest-agent.py
For the CMD line, I took the command that you’ll find running on a VM with the guest agent active. I also tested with a longer CMD, in which I strung together all of the commands you find in the systemd service file for the guest agent, including the ExecStartPre commands:
CMD /sbin/modprobe virtio_console; \
/bin/touch /run/ovirt-guest-agent.pid; \
/bin/chown ovirtagent:ovirtagent /run/ovirt-guest-agent.pid; \
/usr/bin/python /usr/share/ovirt-guest-agent/ovirt-guest-agent.py
Both CMD lines seemed to work in my tests, but this could stand some more testing.
Dan’s post includes a variety of examples of host resources that a super privileged container may need to access, and the docker run arguments required to enable them. After experimenting with different run commands, the simplest set of arguments required appeared to be:
sudo docker run --privileged -dt --name ovirt-agent --net=host \
-v /dev/virtio-ports/com.redhat.rhevm.vdsm:/dev/virtio-ports/com.redhat.rhevm.vdsm \
-v /dev/virtio-ports/com.redhat.spice.0:/dev/virtio-ports/com.redhat.spice.0 \
-v /dev/virtio-ports/org.qemu.guest_agent.0:/dev/virtio-ports/org.qemu.guest_agent.0 \
$USER/ovirt-guest
I wanted the ovirt-agent container to start up following a reboot, so I followed the advice of the docker docs and made myself a systemd service file to handle the job:
[Unit]
Description=ovirt guest agent container
Author=Me
After=docker.service
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a ovirt-agent
ExecStop=/usr/bin/docker stop -t 2 ovirt-agent
[Install]
WantedBy=multi-user.target
I saved this file at /etc/systemd/system/ovirt-agent.service and ran sudo systemctl enable ovirt-agent to direct systemd to start it up following a reboot.
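To sanity-check the setup, a short verification sequence (assuming the container and unit names used above) might look like:

```
# Start the unit now rather than waiting for a reboot, then inspect it:
sudo systemctl start ovirt-agent
sudo systemctl status ovirt-agent
# The agent's output should show it connecting to the virtio channel:
sudo docker logs ovirt-agent
```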
If you have questions (or better yet, suggestions) regarding this post, I’d love to hear them. Ping me at jbrooks in #atomic on freenode irc or @jasonbrooks on Twitter, or send a message to the Project Atomic mailing list. Also, be sure to check out the Project Atomic Q&A site.