Getting Started with Performance Analysis of Docker

Docker introduces some intriguing usability, packaging and deployment patterns.  These new patterns offer the potential to effect massive improvements to the enterprise application development and operations specialties.  Containers also offer the promise of bare-metal performance along with some amount of isolation.  But can they deliver on that promise?

Since the early part of January, the Performance Engineering Group at Red Hat has run a huge number of microbenchmarks, benchmarks and application workloads in Docker containers.  The output of that effort has been a steady stream of lessons learned and advice/guidance given to our product architects and developers.  How dense can we go?  How fast can it go?  Are these defaults “sane”?  What NOT to do…etc.

Disclaimer:  as anyone who has worked with Docker knows, it’s a project under heavy development.  I mention that because this blog post includes code snippets and observations that are tied to specific experiments and Docker/kernel versions.  YMMV, the answer of course is “it depends”, and so on.

Performance tests we’ve pointed at Docker containers

We’ve done a whole bunch of R&D testing with bleeding edge, “niche” hardware and software to push and pull Docker containers in completely unnatural ways.   Based on our choice of benchmarks, you can see that the initial approach was to calculate the precise overhead of containers as compared to bare metal (Red Hat’s Project Atomic will support bare-metal deployment of containers).  Of course we are also gathering numbers with VMs for comparison, and with containers inside VMs (which might be the end-game, who knows…) via OpenStack etc.

Starting at the core, and working our way to the heaviest, pushing all the relevant subsystems to their limits:

  • In-house timing syscall benchmarks (including vdso), libMicro
  • Linpack, single and double precision, Streams
  • Various incantations of sysbench (oltp and cpu)
  • iozone, smallfile, spinning disk, ssd and NAND flash
  • netperf on 10g and 40g, SR-IOV (pipework)
  • OpenvSwitch with VXLAN offload-capable NICs
  • Traditional “large” applications, i.e. business analytics
  • Addressing single-host vertical scalability limits by fixing the Linux kernel and fiddling some bits in Docker.
  • Using OpenvSwitch to get past the spanning-tree limitations of # of ports per bridged-interface.

All of these mine-sweeping experiments (lots more to come!) have allowed us to find and fix plenty of issues and document best-practices that we hope will lead to a great customer experience.

BTW if you’re interested in serious, low level, Enterprise-grade performance analysis and tuning for Linux containers (or in general!), let’s have a chat @DockerCon … I’ll be one of the guys in a Project Atomic T-shirt :-)

Unique Docker Philosophies

  • Ease of use:  Docker automates the use of existing Linux kernel technologies into an easily consumable format.  Setup and administration of traditionally disjoint subsystems (cgroups, namespaces, iptables, selinux) are encapsulated by Docker.
  • Packaging:  Docker specifies an image/packaging format that allows an application to be packaged with its full userspace requirements.  No longer is there a necessary interaction between system-level packages (other than the kernel) and the containerized application.  The application sees only what is provided inside the container.  This can be, for example, a specific version of gcc or php that differs from what the host OS provides.  I keep drawing an analogy to BIND “views”.

Performance interests aside, those are the 2 main selling points for me, and their benefits cannot be overstated.

Surprise, we added some enterprise-y stuff

Docker learns about systemd

Red Hat has taught Docker to use systemd, rather than sysvinit.  I mention this because (depending on who you’re talking to) it may be controversial.  But I believe that the true promise of containers on Linux relies on specific capabilities that systemd provides: at a minimum, dbus messaging, remote capabilities, the cgroups API, and remote journaling.

Docker systemd unit-file override:

  • systemd supports “.d”-style overrides for installed unit-files.  This is the correct way to customize the defaults for any systemd unit-file.  Overrides go in /etc/systemd/system/.
  • I need an override for my testing because I want to use my own bridge device, and I want to play with the MTU as well.  By default, Docker creates a bridge called docker0 and assigns IP addresses from its pool; that’s useful for development, but not for production.  For production, I expect folks will want to set up their own bridge (or pass through a device, macvlan, whatever).
  • Assuming you have a bridge that you want to use, create a new systemd unit override file called /etc/systemd/system/docker.service.  Here is an example where I’ve set Docker to use a bridge named ‘br1’ and added ‘-D’ to enable debug logging for the Docker daemon.  br1 is on my test network, on an IP range that I control.  Finally, I’ve bumped the MTU to 9000 for some throughput tests…
ExecStart=/usr/bin/docker -d --selinux-enabled -H fd:// -b br1 -D --mtu=9000
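If you prefer the “.d”-style drop-in mentioned above, a minimal sketch would look something like this (the file name override.conf is arbitrary; note that ExecStart= must be cleared before it can be redefined in a drop-in):

# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker -d --selinux-enabled -H fd:// -b br1 -D --mtu=9000

Either way, run systemctl daemon-reload and then systemctl restart docker to pick up the change.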

Also Stephen Tweedie spotted unnecessary memory consumption in systemd mount/umount handling, which was fixed in record time by Lennart Poettering :-)

Docker learns about SELinux

Red Hat has brought SELinux support to Docker.  If you’ve been using Red Hat products for any length of time, you know security is a first order concern for us.  Look at the stats for critical CVE response time…adding SELinux support to Docker should come as no surprise :-)  Shout out to the wizards in Red Hat’s Security Response Team, btw.

After the initial bring-up, SELinux support has been fairly painless for us in the Performance Group.  Dan Walsh is doing a talk called “SELinux and Docker” at DockerCon next week (June 10, 2pm, actually).  To give you a sense of how serious Red Hat is about containers and Docker, I should also mention Red Hat’s CTO Brian Stevens is doing one of the keynotes and we’re Platinum sponsoring.  Here’s the very high level picture:

[Image: Red Hat Project Atomic introduction, a high-level overview diagram]

Dockerfile for Performance Analysis

What is a Dockerfile?

A Dockerfile is a plain-text recipe of instructions (FROM, RUN, ENV and so on) that docker build reads, top to bottom, to assemble a container image layer by layer.

Why create a Dockerfile specifically for Performance Analysis?

  • One of the core principles of Docker images is that they are absolutely as small as possible.  This is because when a user wants to use your container image, they must pull it over the network.  Docker hosts a registry at http://index.docker.io.  Folks may stand up their own internal registries as well, where bandwidth is a bit less of a concern and images can contain site-specific customizations, intellectual property, licensed software, etc.
  • Our engineers have been working hard to reduce the base image size.  Therefore, the base images include the smallest usable package set, plus the necessary tooling/package management utilities (yum) to pull in anything else the user needs inside their containers.  Think @core on steroids.
  • Because of the size constraints on the base image, we have to layer on our usual set of Performance Analysis tools via Dockerfile rather than kickstart.
  • A very common request I get from the field is for a precise list of performance analysis packages/tools that I would recommend in their base RHEL images.  So I put a slide in the Summit deck this year:

[Image: helpful_utilities, a slide listing the recommended performance analysis packages]

Example Dockerfile

It’s not all that complicated, but it includes lots of helpful utilities for characterizing workloads running inside containers.  You might notice that sysstat is missing; that’s because I monitor that information on the host.  This is one critical differentiation between virtualization and containers:  the VCPUs of a KVM guest exist as processes in the host, while with containers, the actual containerized binary shows up in the process list of the host.  Note:  the PID namespace ensures isolation of process tables between containers.

FROM rhel7:latest
MAINTAINER perf <perf@domain.com>

# Layer the performance analysis toolbox on top of the minimal base image
RUN yum install -q -y bc blktrace btrfs-progs ethtool gcc git gnuplot hwloc iotop iproute iputils less mailx man-db netsniff-ng net-tools numactl numactl-devel openssh-clients openssh-server passwd perf procps-ng psmisc screen strace tcpdump vim-enhanced wget xauth which

# Pull in our benchmark wrappers and helper scripts
RUN git clone http://whatever/project.git

ENV HOME /root
ENV USER root
WORKDIR /root
EXPOSE 22

You might also notice that I’m installing numactl and hwloc.  That’s because recent versions of Docker provide access to the host’s sysfs hardware topology tables, allowing you to apply similar tuning techniques to containerized processes as you would on bare metal.  We had some pretty funny test automation explosions when sysfs hardware topology was not exposed :-)  Side note: you can’t tune IRQ affinity from a non-privileged container, but luckily irqbalance really does a great job these days (it even knows about PCI locality).  Privileged containers CAN program IRQ affinity.
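A quick sanity check that a container really does see the host’s hardware topology (hwloc and numactl come from the Dockerfile above):

container# numactl --hardware

On a healthy setup this prints the host’s NUMA node/CPU/memory layout, exactly as it would on bare metal.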

CPU and memory affinity is another important differentiation between VMs and containers.   In a container, core1 is core1 on the host, core2 is core2, etc. (depending on your cgroups config).  With VMs you apply specific vcpupin/numatune/emulatorpin commands to ensure that VCPU threads and their memory utilize specific CPUs/memory banks.  The process of properly applying affinity to KVM guests is well-documented in Red Hat’s Virtualization Tuning and Optimization Guide.  Naturally, when we characterize VMs and containers inside VMs, we often apply much of that.
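Because the containerized binary is just another host process, you can affine it from the host with plain old taskset.  A rough sketch (the container command, core numbers, and the inspect template are illustrative; your Docker version’s inspect syntax may differ):

host# CID=$(docker run -d r7perf sleep 600)
host# PID=$(docker inspect --format '{{.State.Pid}}' $CID)
host# taskset -pc 2,3 $PID

That last command restricts the containerized process to host cores 2 and 3, no libvirt XML required.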

How to build a container with the Performance Dockerfile

# time docker build --no-cache=true -t r7perf --rm=true - < Dockerfile_r7perf

# docker run -it r7perf bash

root@7d7b16277784: / # exit

How do I add my benchmark/tool/workload to this Docker container?

  • Ideally, a pre-configured set of scripts would be committed to your own git repo, and pulled into this container automatically in the Dockerfile (RUN git clone http://whatever/project.git).  This is our approach.
  • Add a RUN command to the Dockerfile that uses yum, wget, git or similar to pull in, install and configure your software (see the sketch after this list).
  • Run a container interactively, then pull down the benchmark manually.  This is our fallback for some of the more challenging/complex benchmarks and under-load analysis.
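For the second approach, the RUN line might look something like the following (the URL and paths are hypothetical placeholders):

# Fetch, unpack and build a benchmark at image build time
RUN wget -q http://whatever/bench/mybench.tar.gz && \
    tar -xzf mybench.tar.gz -C /opt && \
    make -C /opt/mybench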

How to get a benchmark running inside a Docker container

Let’s take, for example, sysbench.

  • I’ve built RPMs for sysbench for RHEL6 and RHEL7 and committed them to our git repository.  I’ve also committed my driver script, called run-sysbench.sh.  (This isn’t mandatory, but using git makes things a LOT easier; a minimal sketch of such a driver script appears after this list.)
    • You can add a RUN statement to the Dockerfile that wget’s your benchmark/tarball from somewhere, or a RUN that does another git clone of some other repository.
    • However you would normally transfer your code to a new machine, you can do the same thing in the Dockerfile.
  • Once the container build is complete, launch a container, and kick off your workload.  run-sysbench.sh could be any driver/wrapper script that you’ve got.
host# docker run -it --privileged r7perf bash

container# yum install -y bench/sysbench/rhel7/*rpm mariadb-server mariadb ; cd bench/sysbench

container# ./run-sysbench.sh oltp docker

...run-sysbench.sh completes and spits out an output/logfile that it copies off the container (rsync, ftp, whatever).
  • That’s it.  When the script finishes and you’ve copied off the results (part of run-sysbench.sh), you can ‘exit’ the container.
  • Astute observers will have noticed that I snuck ‘--privileged’ onto the command line above.  That is because my run-sysbench.sh wants to drop_caches, and that’s not something permitted to a container by default.  As an alternative, instead of using privileges, a container could ssh into its host machine as root and drop_caches from there.  See the Docker source (daemon/execdriver/lxc/init.go) for the additional capabilities afforded to “privileged” containers.
  • Fun example:  create 100 containers running apache, in 14 seconds :-)
# time for i in $(seq 100) ; do docker run -d r7perf /usr/sbin/httpd -DFOREGROUND ; done

43bd1efc8fd4d8cedcced29cedf7176286077661a4df02c27756b3959a9fa75f
de1cc33c8f73d9ebce8676ab52da5e1da9518c649af87688f4a89dbda197c7cb
...

real 0m14.159s
user 0m0.386s
sys 0m0.386s
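As promised above, here is a minimal sketch of what a driver script like run-sysbench.sh might contain.  Every detail is a hypothetical stand-in (database startup, sysbench arguments, results host); it just illustrates the prepare/run/collect pattern:

#!/bin/bash
# run-sysbench.sh TEST TAG -- hypothetical driver sketch, not the real script
TEST=${1:-oltp}                            # e.g. "oltp" or "cpu"
TAG=${2:-docker}                           # label for this run
LOG=/tmp/sysbench-${TEST}-${TAG}-$(date +%s).log

sync; echo 3 > /proc/sys/vm/drop_caches    # needs --privileged (see above)
mysqld_safe &                              # systemd is not PID 1 in here, so start the DB directly
sleep 5

sysbench --test=${TEST} --mysql-user=root prepare >> ${LOG} 2>&1
sysbench --test=${TEST} --mysql-user=root --max-time=60 run >> ${LOG} 2>&1
sysbench --test=${TEST} --mysql-user=root cleanup >> ${LOG} 2>&1

scp ${LOG} perf@resultshost:/results/      # copy the results off the container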

It’s not very often that a new technology comes up that creates a whole new column for performance characterization.  But containers have done just that, and so it’s been quite the undertaking.  There are still many test variations to run, but so far we’re encouraged.

That said, I have to keep reminding myself that performance isn’t always the first concern for everyone (*gasp*).  The packaging, development and deployment workflow that breaks the ties between host userspace and container userspace has frankly been a breath of fresh air.

Performance Analysis and Tuning Videos from Red Hat Summit 2014

This year’s Red Hat Summit took place at the Moscone Center in downtown San Francisco.  Red Hat’s Performance Engineering team had its opportunity to showcase our contributions to products and customers with presentations on performance tuning for RHEL, databases, and Red Hat Storage (with behind-the-scenes/support data for many other talks).

Summit is always exciting, because as a company, Red Hat finally gets to reveal what we’ve been cooking.  For example, you may have seen Jim Whitehurst announce during his keynote a RHEL variant for containers called Red Hat Enterprise Linux Atomic Host, via the open source Project Atomic.  Having witnessed the internal development velocity and the excitement from customers/partners at Summit around Atomic in particular, I am just so happy for our extremely hard-working development teams, who are doing everything out in the open, the “Red Hat Way”, as it absolutely should be.

Red Hat made so many announcements, I’d encourage you to look at their Twitter feed to catch it all.

This year marked my 2nd turn as a partner in the Performance Analysis and Tuning presentation.  If you haven’t attended a Summit before, this 2-part session is typically (this year included) one of the most highly anticipated and attended sessions.  Our A/V team has already posted the videos for both parts:  Part 1 and Part 2.

Red Hat also announced the imminent availability of the Red Hat Enterprise Linux 7 Release Candidate.  The RC includes quite a few performance improvements and important fixes (including this one, which I mentioned during one of the perf talks).  To complement the RC, our docs team has also refreshed the official RHEL7 Documentation, which means I don’t have to keep pointing people to my blog to figure out nohz_full anymore :-)

If you haven’t tried the RHEL7 beta, I’d strongly encourage you to look at the RC when it hits RHN.  It’s also probably best that you do a fresh install.

From helping characterize RHEL7, to OpenStack, Red Hat Storage, OpenShift and Docker, it’s been just an insane few years.   The most fun I’ve had in my career, too.   #opensource rocks!

nohz_full=godmode ?

Starting with some background…what is the kernel timer tick (aka the LOC interrupt), and what does it do?

The kernel timer tick is an interrupt triggered at a periodic interval (based on the kernel compile option CONFIG_HZ).  The tick is what keeps track of kernel statistics such as CPU and memory usage, and it provides for scheduler fairness through its load balancer.  It also does timekeeping, i.e. it keeps gettimeofday updated.

When the tick fires (as often as every millisecond, based on the value of CONFIG_HZ), it will get scheduled ahead of whatever’s currently running on a CPU core.  In other words, whatever was running (with all of its valuable data cache-hot) will be interrupted by the tick.  The CPU’s L1 instruction and data caches (the smallest yet fastest) are invalidated, somewhere around 1000 times a second (if the task was 100% CPU-bound, which the majority are not).

This is not an All Is Lost scenario, but certain workloads might see a 1-3% hit attributable to this interference.  It can also cause noticeable jitter, especially since what happens inside the tick is not deterministic: the total time the tick runs is not a predictable/constant value.

That was a mouthful, so let me dissect it a bit by describing various kernel config options that control how often this tick fires.

Prior to the introduction of the “tickless kernel” in kernel 2.6.21, the timer tick ran on every core at the rate of CONFIG_HZ (i.e. 1000/sec).  This provided a decent balance of throughput and latency.  It had the side-effect of waking up every core constantly, which wasn’t necessary when nr_running=0 (a per-core attribute…see /proc/sched_debug).  When the scheduler says there’s nothing to run on a core, the tick can be disabled there, which saves power by not waking the CPU up from a deeper c-state.  Actually it saves lots of power; Linux has become quite a responsible citizen in this regard.

In summary:

RHEL5 – CONFIG_HZ=1000
- No Tickless support
- Ticks 1000/sec on every CPU no matter what
RHEL6 – CONFIG_HZ=1000, CONFIG_NO_HZ=y
- Tickless when nr_running = 0
- Ticks 1000/sec when nr_running > 0
RHEL7 – CONFIG_HZ=1000, CONFIG_NO_HZ=y, CONFIG_NO_HZ_FULL=y, etc.
- Opt-in support for nohz_full
- Tickless when nr_running <= 1
- Ticks 1000/s when nr_running > 1

Note: for RHEL7, you will need 3.10.0-68 or later.
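You can confirm how your running kernel was built with something along these lines:

# grep -E 'CONFIG_HZ=|CONFIG_NO_HZ' /boot/config-$(uname -r)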

Red Hat’s Frederic Weisbecker has been working with other industry leaders such as Paul McKenney from IBM (and many others) to implement a feature called Full NO HZ. During the development phase, it has changed names several times (i.e. adaptive tickless). These days the kernel cmdline option to toggle it is nohz_full, so that’s what I’m calling it.

This feature requires yet another slew of kernel config options, along with some userspace gymnastics (that I’ll detail later) to get everything lined up.  So far the use-cases for disabling the tick have been embedded applications, HPC/scientific workloads, and the financial guys who need real-time characteristics.

It makes sense, then, to have these features compiled in but defaulted to OFF so that these folks can opt in.  As you’ll see, it’s not really necessary for everyone, nor do most workloads expose the tick as the “top-talker” in traces.  But several can, and it was for those customers that the feature was developed.

nohz_full has the following characteristics:

  • Stop interrupting userspace when nr_running=1 (see /proc/sched_debug).
    • If runqueue depth is 1, then the scheduler should have nothing to do on that core.
  • Move all timekeeping to non-latency-sensitive cores.
  • Mark certain cores as nohz_full cores via the cmdline.  In this example, the system has 2 sockets, 8 cores each, 16 cores total, with hyperthreading (logical cores) disabled.  I want to dump everything I can over to core 0, leaving cores 1-15 for my performance-critical application:
Kernel cmdline: nohz_full=1-15 isolcpus=1-15 selinux=0 audit=0

# dmesg|grep dyntick
dmesg: [ 0.000000] NO_HZ: Full dynticks CPUs: 1-15.
  • In addition to the nohz_full cmdline option, the user must move the RCU threads themselves:
 # for i in `pgrep rcu` ; do taskset -pc 0 $i ; done

Frederic has written a small harness that uses kernel tracepoints and the ftrace interface to test and debug during this feature’s development.  It’s available here:

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git

That harness spits out something like this:

root@localhost: ~/dynticks-testing # cat trace.1
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 10392/10392 #P:16
 #
 # _-----=> irqs-off
 # / _----=> need-resched
 # | / _---=> hardirq/softirq
 # || / _--=> preempt-depth
 # ||| / delay
 # TASK-PID CPU# |||| TIMESTAMP FUNCTION
 # | | | |||| | |
 <idle>-0 [001] d... 1565.585643: tick_stop: success=yes msg=
 user_loop-10409 [001] d.h. 1565.586320: hrtimer_expire_entry: hrtimer=ffff881fbfa2ec80 function=tick_sched_timer now=1565474000583
 user_loop-10409 [001] d... 1565.586327: tick_stop: success=yes msg=
 user_loop-10409 [001] d.h. 1566.586352: hrtimer_expire_entry: hrtimer=ffff881fbfa2ec80 function=tick_sched_timer now=1566474000281
 user_loop-10409 [001] d.h. 1567.586384: hrtimer_expire_entry: hrtimer=ffff881fbfa2ec80 function=tick_sched_timer now=1567474000282
 user_loop-10409 [001] d.h. 1568.586417: hrtimer_expire_entry: hrtimer=ffff881fbfa2ec80 function=tick_sched_timer now=1568474000280
 user_loop-10409 [001] d.h. 1569.586449: hrtimer_expire_entry: hrtimer=ffff881fbfa2ec80 function=tick_sched_timer now=1569474000280
 user_loop-10409 [001] d.h. 1570.586482: hrtimer_expire_entry: hrtimer=ffff881fbfa2ec80 function=tick_sched_timer now=1570474000275

What we’re looking for are the tick_stop messages, which indicate that the tick has been stopped on that core.   Note:  there is still one tick per second in the current upstream code, to maintain scheduler stats for load balancing.   The above output is from a system tuned according to the specifics in this blog post.  It was also necessary to configure the system BIOS for low latency; individual OEMs typically publish whitepapers on this topic.

I mentioned that certain statistical accounting is done inside the tick.  One such accounting that is user-controllable is vm.stat_interval (which defaults to 1, i.e. once per second).  You will see that even with nohz_full, vm.stat_interval will pop at that interval.  Frederic’s test harness accounts for this by setting vm.stat_interval to 120 (sysctl -w vm.stat_interval=120), then running the test for 10 seconds.  If you run the test for 120+ seconds, you will see vmstat_update fire (and possibly other things, like xfs):

kworker/1:0-141 [001] .... 2693.850191: workqueue_execute_start: work struct ffff881fbfa304a0: function vmstat_update

kworker/1:0-141   [001] ....  2713.458820: workqueue_execute_start: work struct ffff881f90e07c28: function xfs_log_worker [xfs]

This feature is a massive improvement in terms of cache efficiency.  To see what I mean, try running this test harness without the kernel cmdline options :-)

To get rid of the xfs_log_worker interference, you can use the tunable workqueues feature of the kernel’s bdi-flush writeback threads.  If, as in the above example, you are using core 0 as your “housekeeping CPU”, then you could affine the bdi-flush threads to core 0 like so:

# echo 1 > /sys/bus/workqueue/devices/writeback/cpumask

It takes a hexadecimal CPU mask, so 1 is actually core 0; to allow cores 0 and 1, for example, you would echo 3.

At this point whenever the kernel wants to write dirty pages, it will wake up these bdi-flush threads as normal, but now they will wake up with the affinity that you programmed in.  Keep in mind that a single core might not be enough to do the writeback and whatever else the kernel needs to do, because bdi-flush threads, like any IO thread, block.  You might need to use 2+ cores.  Keep an eye out for CPU congestion or blocking on the housekeeping core (mpstat or similar).
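mpstat makes that easy; for example, to watch the housekeeping core once per second:

# mpstat -P 0 1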

Also note that by default in RHEL7, bdi-flush threads are NUMA-affined to be PCI-local to your storage adapter (whether it’s a local SCSI/SATA card or an HBA).  That’s a change from RHEL6, where bdi-flush threads had no affinity by default.  You can disable the default NUMA affinity and return to the RHEL6 behavior like so:

# echo 0 > /sys/bus/workqueue/devices/writeback/numa

The 2 “echo” commands above do not persist across reboots.

Now…if you run turbostat while in this configuration, you will see that the timekeeping core (core 0 in this case) is kept busy enough (because it is now ticking at the CONFIG_HZ rate) that it stays in C-state 0.  That’s less than palatable, and it was later fixed by Paul McKenney with CONFIG_NO_HZ_FULL_SYSIDLE.  When that’s set, the timekeeping core is no longer pegged.  Godmode???
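For example, the following reports per-core C-state residency over a 10 second window (turbostat ships in RHEL7’s kernel-tools package):

# turbostat sleep 10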

Here’s another way to examine the tick’s behavior:

# perf stat -C 1 -e irq_vectors:local_timer_entry sleep 1

9 irq_vectors:local_timer_entry

pig is a program written by my co-worker Bill Gray.  It’s used as an artificial load generator.   Below, it spins on the CPU for 1 second.  Unfortunately it’s not packaged for RHEL, but any simple CPU-spinning busy loop works just as well (see the one-liner below).
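For example, a stand-in for pig using nothing but bash and coreutils:

# taskset -c 1 timeout 1 bash -c 'while :; do :; done'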

So here is the trace without the cmdline options.  You can see that the tick fires roughly 1000 times during the 1-second run, which is the expected out-of-the-box behavior.

# perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 /root/pig -s 1

1005 irq_vectors:local_timer_entry

Then reboot with nohz_full=1-15 rcu_nocbs=1-15 and isolate core 1 from userspace tasks and IRQs.  You could do this with isolcpus=1-15, too:

# tuna -c 1 -i ; tuna -q '*' -c 1 -i

The same pig run ends up with only a handful of interruptions! Oink!

# perf stat -C 1 -e irq_vectors:local_timer_entry taskset -c 1 /root/pig -s 1

4 irq_vectors:local_timer_entry

Here’s yet another (less granular) way to see what’s going on:

# watch -n1 -d "cat /proc/interrupts|egrep 'LOC|CPU'"

Now that you’ve validated your configuration, it’s time to run your applications and see if this feature gives you any boost.  If you’ve got the right NICs, try out the busy polling socket option, too.
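If you would rather experiment system-wide than modify your application to set SO_BUSY_POLL per socket, recent kernels expose two sysctls (values in microseconds; this assumes a kernel and NIC driver with busy-polling support):

# sysctl -w net.core.busy_read=50
# sysctl -w net.core.busy_poll=50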

Here is some further reading on the topic, including a video of Frederic Weisbecker from LinuxCon where he covers this feature in detail.

https://www.kernel.org/doc/Documentation/timers/NO_HZ.txt
http://lwn.net/Articles/549580/
http://www.youtube.com/watch?v=G3jHP9kNjwc