Project Atomic is now sunset

The Atomic Host platform has been replaced by CoreOS. Users of Atomic Host are encouraged to join the CoreOS community on the Fedora CoreOS communication channels.

The documentation contained below and throughout this site has been retained for historical purposes, but can no longer be guaranteed to be accurate.

Project News

Containerization and Deployment of an Application on Atomic Host with an Ansible Playbook

This mini-tutorial describes how to build a Docker image and deploy a containerized application on an Atomic host using an Ansible playbook.

Building a Docker image for an application and running a container (or a cluster of containers) is nothing new. The idea here is to automate the whole process, and that is where Ansible playbooks come into play.

Note: You can use any cloud- or workstation-based image to carry out the following task.

How to automate the containerization and deployment process for a simple Flask application

First, let’s create a simple Flask Hello-World application. This is the directory structure of the entire application. You can copy these files from the repository trishnaguha/fedora-cloud-ansible:

flask-helloworld/
├── ansible
│   ├── ansible.cfg
│   ├── inventory
│   └── main.yml
├── Dockerfile
└── flask-helloworld
    ├── hello_world.py
    ├── static
    │   └── style.css
    └── templates
        ├── index.html
        └── master.html

hello_world.py:

from flask import Flask, render_template

APP = Flask(__name__)

@APP.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    APP.run(debug=True, host='0.0.0.0')
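
Before containerizing anything, you can sanity-check the app directly with Python (assuming Flask is installed on your workstation; the development server listens on port 5000 by default):

$ python flask-helloworld/hello_world.py
# ...then, from another terminal:
$ curl http://localhost:5000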

static/style.css:

body {
  background: #F8A434;
  font-family: 'Lato', sans-serif;
  color: #FDFCFB;
  text-align: center;
  position: relative;
  bottom: 35px;
  top: 65px;
}
.description {
  position: relative;
  top: 55px;
  font-size: 50px;
  letter-spacing: 1.5px;
  line-height: 1.3em;
  margin: -2px 0 45px;
}

templates/master.html:

<!doctype html>
<html>
<head>
    {% block head %}
    <title>{% block title %}{% endblock %}</title>
    {% endblock %}
                                <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
                                <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-T8Gy5hrqNKT+hzMclPo118YTQO6cYprQmhrYwIiQ/3axmI1hQomh7Ud2hPOy8SP1" crossorigin="anonymous">
                                <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
                                <link href='http://fonts.googleapis.com/css?family=Lato:400,700' rel='stylesheet' type='text/css'>

</head>
<body>
<div id="container">
    {% block content %}
    {% endblock %}</div>
</body>
</html>

templates/index.html:

{% extends "master.html" %}

{% block title %}Welcome to Flask App{% endblock %}

{% block content %}
<div class="description">

Hello World</div>
{% endblock %}

Here’s the Dockerfile to build the image. Remember to put your name and email after MAINTAINER:

FROM fedora
MAINTAINER YOUR NAME HERE <your@email.address>

RUN dnf -y update && dnf -y install python-flask python-jinja2 && dnf clean all
RUN mkdir -p /app

COPY files/ /app/
WORKDIR /app

ENTRYPOINT ["python"]
CMD ["hello_world.py"]

That’s everything we need to build the container. Now let’s automate it.

Ansible playbook for our application

Create the inventory file:

[atomic]
<IP_ADDRESS_OF_HOST> ansible_ssh_private_key_file=<'PRIVATE_KEY_FILE'>

Replace IP_ADDRESS_OF_HOST with the IP address of the atomic/remote host and ‘PRIVATE_KEY_FILE’ with your private key file.
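
For example, with a hypothetical address and key path, the inventory would look like:

[atomic]
192.168.121.10 ansible_ssh_private_key_file='~/.ssh/id_rsa'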

Create ansible.cfg file:

[defaults]
inventory=inventory
remote_user=<USER>

[privilege_escalation]
become_method=sudo
become_user=root

Replace USER with the user of your remote host (Atomic).

Create the Playbook main.yml file:

---
- name: Deploy Flask App
  hosts: atomic
  become: yes

  vars:
    src_dir: [Source Directory]
    dest_dir: [Destination Directory]

  tasks:
    - name: Create Destination Directory
      file:
        path: "{{ dest_dir }}/files"
        state: directory
        recurse: yes

    - name: Copy Dockerfile to host
      copy:
        src: "{{ src_dir }}/Dockerfile"
        dest: "{{ dest_dir }}"

    - name: Copy Application to host
      copy:
        src: "{{ src_dir }}/flask-helloworld/"
        dest: "{{ dest_dir }}/files/"

    # Each task runs in its own process, so a separate `command: cd ...` task
    # has no effect on later tasks; use the command module's chdir argument instead.
    - name: Build Docker Image
      command: docker build --rm -t fedora/flask-app:test -f "{{ dest_dir }}/Dockerfile" "{{ dest_dir }}"
      args:
        chdir: "{{ dest_dir }}"

    - name: Run Docker Container
      command: docker run -d --name helloworld -p 5000:5000 fedora/flask-app:test
...

Replace [Source Directory] in the src_dir field in main.yml with the /path/to/src_dir on the current host.

Replace [Destination Directory] in the dest_dir field in main.yml with the /path/to/dest_dir on the remote Atomic host.

To run the playbook, make sure you are in the ansible directory and issue:

$ ansible-playbook main.yml
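
If you want to vet the playbook before touching the remote host, ansible-playbook can check it without executing anything:

$ ansible-playbook --syntax-check main.yml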

To verify that the application is running, curl localhost on the remote Atomic host:

$ curl http://localhost:5000

You can also manage containers running on the remote host using Cockpit. See Manage-Containers-with-Cockpit to learn how.

Here is the repository that contains playbooks to deploy containers on Atomic Host: trishnaguha/fedora-cloud-ansible.

View article »

How container registries prevent information leakage

Recently people have been reporting unexpected errors when doing a skopeo copy versus a docker pull (see bug reports 1347805, 235, and 27281).

Skopeo is a command-line tool that performs various operations on container images and container image registries, including pulling images to the host. It is also used under the covers by the atomic command-line tool.

This post explains why those weird errors can come up when pulling images.

Let’s see what happens when a user tries to pull an image from the docker hub and the image doesn’t exist:

$ docker pull thisimagedoesntexist
Using default tag: latest
Trying to pull repository docker.io/library/thisimagedoesntexist ...
Pulling repository docker.io/library/thisimagedoesntexist
Error: image library/thisimagedoesntexist:latest not found

We get an ‘image not found’, as expected, right?

Let’s try the same with skopeo copy:

$ skopeo --tls-verify=false copy docker://thisimagedoesntexist oci:what
FATA[0002] Error initializing image from source docker://thisimagedoesntexist:latest: unauthorized: authentication required

What?

Why are we getting an unauthorized error message?

Let’s see what’s really happening under the hood:

The docker daemon:

  1. Attempts to contact a V2 registry
  2. The V2 registry returns 'unauthorized: authentication required'
  3. The daemon falls back and tries to pull the same image from a V1 registry
  4. Attempts to contact a V1 registry
  5. The V1 registry isn't deployed, so we get a 404
  6. The docker command line interprets the 404 as 'image not found'

Skopeo:

  1. Attempts to contact a V2 registry
  2. The V2 registry returns 'unauthorized: authentication required'
  3. Skopeo errors out and shows the 'unauthorized: authentication required' message

Why is docker trying to contact a V1 registry?

Docker still supports the old V1 registry API (remember docker-registry?). Some registry deployments use both V1 and V2 registries. When the docker engine fails to get a V2 image, it falls back and tries to contact a V1 registry that may have the image.
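
If you would rather have the docker engine fail the way skopeo does, the V1 fallback can be switched off; docker daemons of this era accept a --disable-legacy-registry flag:

$ dockerd --disable-legacy-registry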

Yes, but:

Why does skopeo return 'unauthorized'?

The V2 registry API is designed to prevent information leaks about private repositories (GitHub does the same, if you’re wondering).

From the first example above, library/thisimagedoesntexist could be a private repository/image (or not!). The registry can’t tell you that the repository/image doesn’t exist; it can only tell you that you’re not authorized to access it.
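
You can watch the registry do this by querying the V2 API directly with curl: an unauthenticated manifest request for a repository you can't see comes back as a 401, never a 404 (registry-1.docker.io is the hub's V2 endpoint):

$ curl -s -o /dev/null -w '%{http_code}\n' \
    https://registry-1.docker.io/v2/library/thisimagedoesntexist/manifests/latest
401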

In fact, if you have a private repository/image on the docker hub and try to pull it with skopeo, you still get 'unauthorized' (unless you’re logged in, of course). Skopeo only supports V2 registries; since V1 registries are being phased out, we decided not to add V1 support to skopeo.

Let’s see some examples with a private image named runcom/what:

If runcom/what is a private image and I’m not logged in:

$ skopeo --tls-verify=false copy docker://runcom/what oci:what
FATA[0001] Error initializing image from source docker://runcom/what:latest: unauthorized: authentication required

As you can see, it doesn’t tell me whether the image exists on the registry; it only tells me that I’m not authorized to pull it.

Now, if runcom/what is a private image and I’m logged in:

$ skopeo --tls-verify=false copy docker://runcom/what oci:what
FATA[0002] Error initializing image from source docker://runcom/what:latest: manifest unknown: manifest unknown

The above error is indeed an 'image not found' (i.e., a 404 from the V2 registry). Since I’m logged in, I’m entitled to know whether the image exists on the registry.
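
The same distinction is visible with plain curl. The docker hub issues bearer tokens from auth.docker.io; once the registry can tie a request to an account that is allowed to see the repository, it can answer honestly with a 404. A sketch (replace user:password with valid hub credentials):

$ TOKEN=$(curl -s -u user:password \
    'https://auth.docker.io/token?service=registry.docker.io&scope=repository:runcom/what:pull' \
    | python -c 'import sys, json; print(json.load(sys.stdin)["token"])')
$ curl -s -o /dev/null -w '%{http_code}\n' \
    -H "Authorization: Bearer $TOKEN" \
    https://registry-1.docker.io/v2/runcom/what/manifests/latest
404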

Let’s see what happens with docker instead when I’m not logged in:

$ docker pull runcom/what                                                     
Using default tag: latest
Trying to pull repository docker.io/runcom/what ...
Pulling repository docker.io/runcom/what
Error: image runcom/what:latest not found

Well, remember that the docker engine falls back to V1? Let’s have a look at the docker engine logs to understand why it fell back to V1. We see the following error messages:

Oct 24 16:51:19 localhost.localdomain docker[1408]: time='2016-10-24T16:51:19.548329131+02:00' level=debug msg='GET https://registry-1.docker.io/v2/runcom/what/manifests/latest'
Oct 24 16:51:20 localhost.localdomain docker[1408]: time='2016-10-24T16:51:20.113460151+02:00' level=error msg='Attempting next endpoint for pull after error: unauthorized: authentication required'

Great. Exactly like skopeo, before falling back to V1, docker correctly tells us that we’re unauthorized to pull the image (note that it says nothing about whether the image exists on the docker hub!).

If I try to pull the same image while logged in, I get the same 'image not found' error, but this time I can spot the following in the logs:

Oct 24 16:54:11 localhost.localdomain docker[1408]: time='2016-10-24T16:54:11.706002616+02:00' level=debug msg='GET https://registry-1.docker.io/v2/runcom/what/manifests/latest'
Oct 24 16:54:12 localhost.localdomain docker[1408]: time='2016-10-24T16:54:12.283006158+02:00' level=error msg='Attempting next endpoint for pull after error: manifest unknown: manifest unknown'

The errors in the logs are exactly the same as the ones we get with skopeo; the difference is that docker falls back and tries a V1 registry while skopeo doesn’t.

That means skopeo is providing the correct error message from the V2 registry, while docker reports 'image not found' because it hides the real unauthorized error from the V2 registry and only shows the V1 error. The docker command line can thus give you bogus information: the image may well be stored in the V2 registry, yet docker reports that it does not exist when you are not logged in. When upstream docker eventually drops backward support for V1, it will report the same error that skopeo does.

I hope this post sheds some light on why these errors differ between docker and skopeo.

View article »

Better ways of handling logging in containers

Traditional logging systems, like syslog, do not quite work by default with containers. This is especially true if the container is running without an init system.

STDOUT/STDERR messages in the journal

I recently received a bugzilla report complaining about logging inside of a docker container.

First, the user complained about all of STDOUT/STDERR showing up in the journal. This can be configured in the docker daemon using the --log-driver parameter:

man dockerd
...

  --log-driver="json-file|syslog|journald|gelf|fluentd|
   awslogs|splunk|etwlogs|gcplogs|none"
  Default driver for container logs. Default is json-file.
  Warning: docker logs command works only for json-file logging driver.

Red Hat based operating systems use --log-driver=journald by default, because we believe log files should be permanently stored on the host system. The upstream docker default is json-file. With json-file, the logs are removed when an admin removes the container using docker rm. Another problem with the json-file logger is that tools that maintain logs won’t work with it. We were having problems with containers’ logs filling up the system, and users not knowing what was using up the space.
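
The driver can also be overridden per container at run time, which is a quick way to compare the two. With the journald driver, entries are tagged with container metadata, so journalctl can filter on the container name:

$ docker run --name hello --log-driver=journald fedora echo hello
$ journalctl CONTAINER_NAME=hello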

If you don’t like our default, including the STDOUT/STDERR messages being recorded in the journal, you can edit /etc/sysconfig/docker and change the log-driver.
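
On Fedora/RHEL/CentOS the daemon flags live in the OPTIONS variable of /etc/sysconfig/docker. A sketch of switching to the upstream default (your existing OPTIONS line will likely carry other flags that should be kept):

# in /etc/sysconfig/docker:
OPTIONS='--selinux-enabled --log-driver=json-file'

$ sudo systemctl restart docker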

The bugzilla report then went on to ask about getting syslog and journal messages from the container. Where do messages generated inside the container end up?

syslog and journal log messages silently dropped by default

One big problem with standard docker containers is that any message a service writes to syslog or directly to the journal is silently dropped by default. Docker does not record any logs unless the messages are written to STDOUT/STDERR; there is no logging service running inside the container to catch them.

Running a logging system inside of the container

If you want a proper logging setup, I would suggest that you investigate running systemd inside a container. This sets up systemd as PID 1, but also runs journald inside the container, so syslog and journal messages are handled the same way as they are on the host.

A lot of people do not want to run a full init system inside their containers. Another option is to have services running on the host listen for these messages: an administrator can volume mount the host’s sockets into the container.

Let’s look into this.

Getting messages out of the container to the host logging system

The bug reporter went on to show that volume mounting /dev/log from the host into the container did not successfully get log messages from the container to the host journal. Messages sent to syslog arrived, but not those sent to the journal:

# docker run -ti -v /dev/log:/dev/log fedora sh
container# dnf -y install systemd-python
...
container# python <<< "from systemd import journal; journal.send('journald Hello')"
container# logger "logger Hello"
container# exit

# journalctl -b | grep Hello
Oct 19 09:53:28 dhcp-10-19-62-196.boston.devel.redhat.com root[16787]: logger Hello

Notice that the journald Hello message does not show up in the journal, but the logger message does. The difference is that syslog messages from the logger command are written to /dev/log, where journald on the host is listening; when it sees the message arrive on the bind-mounted /dev/log, it records it in the journal.

The python journal.send call attempts to write journald Hello to the /run/systemd/journal/socket socket. This socket does not exist inside the container, so the python code silently drops the message.
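
You can confirm the socket's absence from inside a stock container (a hypothetical session):

# docker run --rm fedora ls -l /run/systemd/journal/socket
ls: cannot access /run/systemd/journal/socket: No such file or directory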

The following example works for me, bind mounting the host’s journal socket as well:

# docker run -ti -v /dev/log:/dev/log -v /var/run/systemd/journal/socket:/var/run/systemd/journal/socket fedora sh
container# dnf -y install systemd-python
...
container# python <<< "from systemd import journal; journal.send('journald Hello')"
container# logger "logger Hello"
container# exit

# journalctl -b | grep Hello
Oct 19 09:57:51 dhcp-10-19-62-196.boston.devel.redhat.com python[17523]: journald Hello
Oct 19 09:57:53 dhcp-10-19-62-196.boston.devel.redhat.com root[16787]: logger Hello

The journal.send call above connects to /run/systemd/journal/socket, and since we leaked it into the container, the message gets to the host’s journal.

Note: SELinux was in enforcing mode for all of these tests. We allow container processes to communicate with the journal/syslog sockets on the host by default.

Conclusion

Handling log messages inside containers can be difficult; most users just ignore this and rely on the applications to read/write STDOUT/STDERR. Getting syslog and journal messages out of containers requires an application to be listening on /dev/log and /run/systemd/journal/socket. The application that listens can either run inside the container, or you can take advantage of volume mounts to listen from outside the container.

View article »

New CentOS Atomic Host with Optional Docker 1.12

Last week, the CentOS Atomic SIG released an updated version of CentOS Atomic Host (tree version 7.20161006), which offers users the option of substituting the host’s default docker 1.10 container engine with a more recent, docker 1.12-based version, provided via the docker-latest package.
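
The switch itself is a systemd-level swap plus repointing the client; roughly the following, though the CentOS wiki has the authoritative steps and the sysconfig details may differ by release:

$ sudo systemctl disable docker --now
$ sudo systemctl enable docker-latest --now
$ sudo sed -i '/DOCKERBINARY/s/^#//g' /etc/sysconfig/docker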

CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2, or Amazon Machine Image. Check out the CentOS wiki for download links and installation instructions, or read on to learn more about what’s new in this release.

Read More »