Junichi Nomura | 30 Oct 05:51 2014

Using shell "if" to conditionalize snippet inclusion is error-prone?


In Beaker 0.18, the "rhts_post" snippet has the following construct:

  if [ -f /etc/sysconfig/readahead ] ; then
  {% snippet 'readahead_sysconfig' %}

This is error-prone: if we have a site-local "readahead_sysconfig"
snippet which is empty, the generated shell script will be broken.
(I have actually been using an empty "readahead_sysconfig" snippet.)
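The breakage is easy to reproduce outside Beaker. The following is purely an illustration (not Beaker code): it feeds the shape of the generated script, with the snippet body rendered away, to "sh -n" and shows that the shell refuses to parse it.

```python
# Illustrative only (not Beaker code): what rhts_post generates when the
# readahead_sysconfig snippet renders to nothing -- an "if" with an empty
# body, which POSIX shell rejects as a syntax error.
import subprocess

broken = """\
if [ -f /etc/sysconfig/readahead ] ; then
fi
"""

# "sh -n" parses the script without executing it.
result = subprocess.run(["sh", "-n"], input=broken,
                        capture_output=True, text=True)
print(result.returncode)  # non-zero: the generated script is broken
```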

So it might be better to move the if/then/fi into the snippet, as in the diff below.

Same might apply to "virt_console_post".

Jun'ichi Nomura, NEC Corporation

diff --git a/Server/bkr/server/snippets/readahead_sysconfig b/Server/bkr/server/snippets/readahead_sysconfig
index 830f20b..393c215 100644
--- a/Server/bkr/server/snippets/readahead_sysconfig
+++ b/Server/bkr/server/snippets/readahead_sysconfig
 @@ -1,3 +1,4 @@
+if [ -f /etc/sysconfig/readahead ] ; then
     cat >>/etc/sysconfig/readahead <<EOF

 # readahead conflicts with auditd, see bug 561486 for detailed explanation.
 @@ -8,3 +9,4 @@
(Continue reading)

Amit Saha | 22 Oct 04:28 2014

Atomic Host + cloud init

Came across this today: http://www.projectatomic.io/blog/2014/10/getting-started-with-cloud-init/

I think this should make it possible to use Atomic images in Beaker as well,
now that we have the cloud install task.


Amit Saha 
SED, Hosted & Shared Services
Red Hat, Inc.
Beaker-devel mailing list
Beaker-devel@lists.fedorahosted.org
Amit Saha | 9 Oct 02:14 2014

Upcoming feature: Support for provisioning Atomic hosts

Hi all,

We recently merged support for provisioning Atomic hosts [1, 2] in Beaker (the "develop" branch). 
The main difference from running tests on a "traditional" distro is that the tests are executed in 
a Docker container instead of on bare metal.

Here is an example job:

This assumes that you have an Atomic host "MyAtomicHost" already imported into your Beaker
installation, and that it has identified itself as supporting "rpm-ostree" via the
"has_rpmostree" distro feature variable [3].

The key ksmeta variables from the above recipe are:

* harness_docker_base_image=registry.hub.docker.com/fedora:20

This specifies the Docker base image for the container.

* ostree_repo_url=http://link/to/ostree/repo/

The URL of the rpm-ostree repository.

* ostree_ref=my-atomic-host/20/x86_64/standard

The rpm-ostree remote ref.
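Since the example job itself isn't reproduced above, here is a hypothetical recipe fragment showing how those ksmeta variables might fit together. The distro name, URLs, and surrounding XML are placeholders for illustration, not a verified Beaker job:

```xml
<recipe ks_meta="harness_docker_base_image=registry.hub.docker.com/fedora:20 ostree_repo_url=http://link/to/ostree/repo/ ostree_ref=my-atomic-host/20/x86_64/standard">
  <distroRequires>
    <distro_name op="=" value="MyAtomicHost"/>
  </distroRequires>
  <task name="/my/example/task"/>
</recipe>
```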

This builds upon the recently merged "Running test harness in a Docker container" feature [4].

[1] http://www.projectatomic.io 
(Continue reading)

Amit Saha | 3 Oct 08:22 2014

Upcoming feature: Running Test harness in a Docker Container

Hi all,

We recently merged a new feature into the "develop" branch which allows
running the test harness in a Docker container [1]. 

During testing, I used the "restraint" [2] test harness instead of Beaker's
default harness, "beah". "restraint" is not available from Fedora's repos,
so its repository has to be added in the job XML.

Here is an example recipe which runs the test harness in a container running Fedora 20:


The key points in the above recipe for this feature are:

- The "contained_harness" ksmeta variable: this tells Beaker that we want to run the test harness
  in a Docker container.
- The restraint repo is added using the <repo/> element.
- The task to be run is added in the <task/> elements.
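Since the example recipe isn't reproduced above, a hypothetical fragment tying those points together might look like the following. The repo URL, distro name, and task name are placeholders, not a verified job definition:

```xml
<recipe ks_meta="contained_harness">
  <repos>
    <repo name="restraint" url="http://example.com/restraint/repo/"/>
  </repos>
  <distroRequires>
    <distro_name op="=" value="Fedora-20"/>
  </distroRequires>
  <task name="/my/example/task"/>
</recipe>
```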

Since this recipe does not specify which distro image to use for the test harness container,
we default to Fedora 20. 

To specify a different image, use the "harness_docker_base_image" ksmeta
variable. For example, the following recipe will run the test harness in a CentOS 7 container while
the host system is running Fedora 20: https://gist.github.com/amitsaha/aebe0e782a47063e9270

As mentioned in [1], a privileged Docker container is used to run the test harness, and by default
we use "systemd" to initialize the test harness. Hence it is the harness's responsibility to
register/install itself in the right target. 
(Continue reading)

Amit Saha | 22 Sep 07:44 2014

Importing CentOS tree extras?

I have been trying to run a few jobs using CentOS 7. I noticed that
http://mirror.aarnet.edu.au/pub/centos/7/os/x86_64/.treeinfo makes no mention
of the "extras" repository at http://mirror.aarnet.edu.au/pub/centos/7/extras/x86_64/

Should we do this for CentOS as we do for "Fedora everything"? That is, check whether it
exists, and if it does, add it as well?
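A sketch of what that check could look like, using invented helper names (this is not Beaker's actual importer code): derive the "extras" URL from the os tree URL, then probe for repodata before adding it.

```python
# Hypothetical sketch, not Beaker's importer: derive the "extras" repo URL
# from a CentOS os tree URL, then probe for it before adding it.
from urllib.error import URLError
from urllib.request import Request, urlopen

def extras_url_for(os_tree_url):
    """http://.../centos/7/os/x86_64/ -> http://.../centos/7/extras/x86_64/"""
    return os_tree_url.replace("/os/", "/extras/")

def extras_repo_exists(os_tree_url, timeout=10):
    """Return True if the derived extras repo serves repodata, else False."""
    url = extras_url_for(os_tree_url) + "repodata/repomd.xml"
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout):
            return True
    except (URLError, OSError):
        return False

print(extras_url_for("http://mirror.aarnet.edu.au/pub/centos/7/os/x86_64/"))
```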

Nick Coghlan | 19 Sep 10:01 2014

Two possible paths forward for scheduler improvements: Mesos and PyGMO

We're currently contemplating two paths to a "better scheduler".

1. Kill the bespoke scheduler, and replace it with Mesos.

We still don't know just how big the ramifications of that would be, but
if anyone wants to try out Mesos, this is probably the place to start:

(That may involve waiting until Fedora 21 is actually released, but
that's not too far away)

2. Improve the bespoke scheduler with PyGMO

The heart of the current scheduler is the "schedule_queued_recipes"
function. That essentially treats the recipe queue and the idle systems
as a 2-D matrix, and tries to map one to the other. However, it does so
incrementally on a recipe-by-recipe basis, which makes it difficult to
determine a "best fit" option that tries to get entire recipe sets
running immediately, minimises the amount of unused RAM or disk space, etc.
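To make the "best fit" idea concrete, here is a toy cost function over a whole-queue assignment. The field names, weights, and penalty are invented for illustration; the real schedule_queued_recipes works recipe-by-recipe rather than scoring the whole assignment at once.

```python
# Toy illustration of a whole-queue cost function (invented names/weights):
# lower cost means more recipes running with less unused RAM.
QUEUED_PENALTY = 100.0  # cost of leaving a recipe in the queue

def assignment_cost(assignment, recipes, systems):
    """assignment[i] is the index of the idle system given to recipes[i],
    or None if that recipe stays queued."""
    cost = 0.0
    for recipe_idx, system_idx in enumerate(assignment):
        if system_idx is None:
            cost += QUEUED_PENALTY
        else:
            wasted_ram = systems[system_idx]["ram_mb"] - recipes[recipe_idx]["ram_mb"]
            cost += wasted_ram / 1024.0  # GiB of unused RAM on that system
    return cost

recipes = [{"ram_mb": 1024}, {"ram_mb": 4096}]
systems = [{"ram_mb": 8192}, {"ram_mb": 4096}]
# Give the big recipe the exact-fit system; run the small one on the big box.
print(assignment_cost([0, 1], recipes, systems))  # (8192-1024)/1024 + 0 = 7.0
```

An optimiser like PyGMO would then search the space of assignments for the one minimising this cost (or a vector of such objectives), instead of committing to each recipe greedily.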

At PyCon New Zealand, I was introduced to a multi-objective optimiser
library published by the European Space Agency: https://esa.github.io/pygmo/

Whereas switching to Mesos would be a big architectural change, adopting
PyGMO to make the current scheduler *better* might be feasible by
switching "schedule_queued_recipes" to an approach where it:

1. Reads the current recipe queue and idle system sets from the database
2. Organises them into a format suitable for handing over to PyGMO
3. Runs PyGMO over the data set with an appropriate cost function to be
(Continue reading)

Matt Jia | 15 Sep 06:56 2014

Provisioning guest recipes with cloud images

Hi folks,

I am working on bug [1] and have created a task to provision guest recipes with cloud images. 

An example job can be found here [2]; it uses a RHEL7 image. There is one issue: the hostname
on the guest recipe is not resolved properly from the DHCP server; it only shows the IP address,
and this may require further investigation. Other than that, everything looks fine.

The source code is located at my personal repo[3]. If you have any questions, please let me know.

Cheers, Matt Jia

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1108455
[2] https://beaker-devel.app.eng.bos.redhat.com/jobs/5912
[3] https://github.com/matt8754/beaker-cloud-init
Dan Callaghan | 12 Sep 09:40 2014

Beaker 0.18.1 released

The Beaker 0.18.1 maintenance release is now available from
beaker-project.org [1].

This release fixes some issues relating to the custom distro support 
introduced in Beaker 0.18, plus some other general doc improvements. 
Full details are in the release notes [2].

[1] https://beaker-project.org/releases/
[2] https://beaker-project.org/docs/whats-new/release-0.18.html#beaker-0-18-1


Dan Callaghan <dcallagh@redhat.com>
Software Engineer, Hosted & Shared Services
Red Hat, Inc.
Dan Callaghan | 4 Sep 08:57 2014

Beaker 0.18.0 released

On behalf of the Beaker development team, I'm pleased to announce that 
Beaker 0.18.0 is now available from the Beaker web site [1].

As always, the release notes [2] have the full story, but the highlights 
in this release are:

* an improved usage reminder e-mail system
* a new --host-filter option for workflow commands, for pre-defined 
  <hostRequires/> XML snippets
* better support for "custom" distros (that is, Fedora- or 
  RHEL-compatible distros which are named something else)

If you are dealing with custom distros in your Beaker installation, 
please note that there are some changes to the implementation details of 
the kickstart templates which may affect any custom templates or 
snippets you have. The release notes describe some potential issues; if 
you have any other questions, we can help.

The detailed list of all changes made since Beaker 0.17.3 is also 
available [3].

[1] https://beaker-project.org/releases/
[2] https://beaker-project.org/docs/whats-new/release-0.18.html
[3] https://git.beaker-project.org/cgit/beaker/log/?qt=range&q=beaker-0.17.3..beaker-0.18.0&showmsg=1


Dan Callaghan <dcallagh@redhat.com>
Software Engineer, Hosted & Shared Services
Red Hat, Inc.
(Continue reading)

Nick Coghlan | 1 Sep 06:41 2014

Accessing entire host OS from a container

From the Docker 1.1 release notes

* / is now allowed as source of --volumes. This means you can bind-mount
your whole system in a container if you need to.

This seems like a potentially useful feature when it comes to running
test harnesses in a container.



Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane

HSS Provisioning Architect
Nick Coghlan | 19 Aug 09:18 2014

Future directions for the Beaker test harness

A few weeks back, Amit put together a proof of concept for running the
test harness in a container, rather than directly on the host.

That proof of concept relies on restraint, the new reference harness,
which is intended to eventually replace beah.

At the same time, I don't think restraint is currently getting the level
of review and testing that it needs to mature into a plausible
replacement for beah as the default harness.

I think Amit's proposed patch provides a possible way forward:

1. Accept the initial approach where restraint is the *only* supported
harness when running inside a container. Specifying both
"contained_harness" and "harness" as ks_meta variables should be an
error at this point (side note: 'harness' should also be documented
along with all the other ks_meta variables, with a link to

2. Recommend publishing both beah *and* restraint in the harness repos
for Beaker installations. This will not only make restraint available
for container based testing, but also make it readily available via
"harness=restraint" for normal testing, without needing to add a custom
repo definition.

3. Once we have container based testing working reliably with restraint,
drop the restriction against using alternative harnesses in containers.

(Continue reading)