Nick Coghlan | 19 Nov 01:48 2014

Group SSH keys on manually provisioned systems

Is there a way to get a whole group's SSH keys onto a system without
going through the scheduler? Or is manual provisioning currently
restricted to adding only the SSH key of the user provisioning the system?

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane

HSS Provisioning Architect
Amit Saha | 17 Nov 03:17 2014

Beaker inventory: smolt & lshw comparison

Hi all,

As part of the migration to "lshw" for Beaker's inventory task, I ran a comparison of the "smolt"
based task with the "lshw" based task.

The "raw" data is available in the "comparison.html" file under each arch sub-directory at:
https://amitksaha.fedorapeople.org/beaker-inventory-comparison/comparison_data/

Here are the summarized differences:

** i686/AMD/athlon (32-bit system)

CPU flags
=========

* lshw categorises "fpu_exception" as a CPU flag, and adds "wp" as a CPU flag too.
  (This is because we treat everything lshw lists as a CPU capability as a flag.)

Others
======

* USBIDs are retrieved by lshw (with some repetition); smolt didn't retrieve any
* The same PCIIDs are retrieved by lshw, however one device is repeated (for PCIID as well)
* lshw is not able to detect the system model (same as smolt)
* lshw sets the arch to i686, whereas smolt sets it to i386
* lshw fails to get the system vendor (same as smolt)
* lshw fails to get the FORMFACTOR
* CPUVENDOR obtained by smolt is AuthenticAMD; lshw sets it to "Advanced Micro Devices [AMD]"

(Continue reading)

Nick Coghlan | 10 Nov 13:34 2014

Subtest support in unittest2!

Robert Collins has backported one of my favourite Python 3 features to
unittest2: subtests!

See the Python 3 docs [1] for details, but the basic idea is to let you
easily split up a data-driven test such that:

1. All iterations execute, even if some of the checks fail
2. Each failure is reported separately, with relevant details you provide

For example:

===================
import unittest


class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)
===================

Will check all values from 0 to 5, and report separate failures for 1, 3
and 5.

The only other particularly notable new feature is the addition of the
"assertLogs" context manager to test cases, which makes it easier to
check that logging within the current process is performed correctly as
part of unit tests.
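
For example (a minimal sketch adapted from the standard library docs;
the "LEVEL:logger:message" strings are the documented default formatting
of the captured output):

===================
import logging
import unittest  # with the backport, "import unittest2 as unittest" instead


class LoggingTest(unittest.TestCase):

    def test_logging(self):
        # Fails if nothing is logged at INFO or above on the "foo" logger
        with self.assertLogs('foo', level='INFO') as cm:
            logging.getLogger('foo').info('first message')
            logging.getLogger('foo.bar').error('second message')
        # cm.output holds the captured records as "LEVEL:logger:message"
        self.assertEqual(cm.output, ['INFO:foo:first message',
                                     'ERROR:foo.bar:second message'])
===================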
(Continue reading)

Nick Coghlan | 10 Nov 06:27 2014

Requesting reservation of a previously submitted recipe?

It occurred to me today that, with the server side reservation system
added in 0.17, the reservation request actually becomes something that
could be made configurable for a previously submitted recipe.

The kinds of cases where that seemed potentially interesting to me were:

* After manually kicking a recipe that stalled for some reason (the case
that prompted the idea)
* After noticing something odd in the execution of a long running recipe
* Switching from "always" to "only on failure" if you forget to make it
conditional (or vice-versa)
* Adding or removing the reservation request if you simply made a
mistake at submission time

There'd be a bit of fiddling involved - new CLI commands to manage it, new
web UI elements on the job details page.

If we did it at all, perhaps it would be best left until after the
results page redesign?

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane

HSS Provisioning Architect
(Continue reading)

Junichi Nomura | 30 Oct 05:51 2014

Using shell "if" to conditionalize snippet inclusion is error-prone?

Hi,

In Beaker 0.18, the "rhts_post" snippet has the following construct:

  if [ -f /etc/sysconfig/readahead ] ; then
  {% snippet 'readahead_sysconfig' %}
  fi

This is error-prone because, if we have a site-local "readahead_sysconfig"
snippet which is empty, the generated shell script will be broken (an "if"
block with an empty body is a shell syntax error).
(Actually, I've been using an empty "readahead_sysconfig" snippet.)

So it might be better to move the if/then/fi into the snippet, like below.

The same might apply to "virt_console_post".

---
Jun'ichi Nomura, NEC Corporation

diff --git a/Server/bkr/server/snippets/readahead_sysconfig b/Server/bkr/server/snippets/readahead_sysconfig
index 830f20b..393c215 100644
--- a/Server/bkr/server/snippets/readahead_sysconfig
+++ b/Server/bkr/server/snippets/readahead_sysconfig
@@ -1,3 +1,4 @@
+if [ -f /etc/sysconfig/readahead ] ; then
     cat >>/etc/sysconfig/readahead <<EOF

 # readahead conflicts with auditd, see bug 561486 for detailed explanation.
@@ -8,3 +9,4 @@
 READAHEAD_COLLECT="no"
(Continue reading)

Amit Saha | 22 Oct 04:28 2014

Atomic Host + cloud init

Came across this today: http://www.projectatomic.io/blog/2014/10/getting-started-with-cloud-init/

I think this should make it possible to use Atomic images in Beaker as well,
now that we have the cloud install task.

-- 
Amit Saha 
SED, Hosted & Shared Services
Red Hat, Inc.
Amit Saha | 9 Oct 02:14 2014

Upcoming feature: Support for provisioning Atomic hosts

Hi all,

We recently merged support for provisioning Atomic hosts [1, 2] in Beaker ("develop" branch).
The main difference from running tests on a "traditional" distro is that the tests are executed in
a Docker container instead of on bare metal.

Here is an example job:
https://gist.github.com/amitsaha/dd7cdb16ed862cb259fd

This assumes that you have an Atomic host "MyAtomicHost" already imported in your Beaker installation,
and that it has identified itself as supporting "rpm-ostree" by using the "has_rpmostree" distro feature
variable [3].

The key ksmeta variables from the above recipe are:

* harness_docker_base_image=registry.hub.docker.com/fedora:20

This specifies the Docker base image for the container.

* ostree_repo_url=http://link/to/ostree/repo/ 

The URL of the rpm-ostree repository.

* ostree_ref=my-atomic-host/20/x86_64/standard

The rpm-ostree remote ref.

This builds upon the recently merged "Running test harness in a Docker container" feature [4].

[1] http://www.projectatomic.io 
(Continue reading)

Amit Saha | 3 Oct 08:22 2014

Upcoming feature: Running Test harness in a Docker Container

Hi all,

We recently merged a new feature into the "develop" branch which allows
running the test harness in a Docker container [1].

During testing, I used the "restraint" [2] test harness instead of Beaker's default
test harness, "beah". "restraint" is not available from Fedora's repos, so its
repository has to be added in the job XML.

Here is an example recipe which runs the test harness in a container running Fedora 20:

https://gist.github.com/amitsaha/fc7c686e897a2d921a2f

The key points in the above recipe for this feature are:

- "contained_harness" ksmeta variable: This tells beaker that we want to run the test harness
  in a Docker container.
- The restraint repo is added using the <repo/> element.
- The task to be run is added in the <task/> elements.

Since this recipe does not specify which distro image to use for the test harness container,
we default to Fedora 20. 

To specify a different image, use the "harness_docker_base_image" ksmeta
variable. For example, the following recipe will run the test harness in a CentOS 7 container while
the host system is running Fedora 20: https://gist.github.com/amitsaha/aebe0e782a47063e9270

As mentioned in [1], a privileged Docker container is used to run the test harness, and by default
we use "systemd" to initialize the test harness. Hence it is the harness's responsibility to
register/install itself into the right target.
(Continue reading)

Amit Saha | 22 Sep 07:44 2014

Importing CentOS tree extras?

I have been trying to run a few jobs using CentOS 7. I noticed that
http://mirror.aarnet.edu.au/pub/centos/7/os/x86_64/.treeinfo makes no mention
of the "extras" repository at http://mirror.aarnet.edu.au/pub/centos/7/extras/x86_64/

Should we do this for CentOS as we do for "Fedora everything"? That is, check if the repository
exists, and if it does, add it as well?
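
To illustrate what I mean, here is a rough sketch (hypothetical code, not the
actual Beaker importer logic; the extras path is just guessed from the mirror
layout above):

===================
# Rough sketch only - checks whether a sibling "extras" repo exists next to
# the os/ tree and, if so, reports the URL that could be added as a repo.
import urllib2  # on Python 3 this would be urllib.request / urllib.error


def extras_repo_url(os_url):
    # e.g. .../centos/7/os/x86_64/ -> .../centos/7/extras/x86_64/
    return os_url.replace('/os/', '/extras/')


def has_extras_repo(os_url):
    try:
        urllib2.urlopen(extras_repo_url(os_url) + 'repodata/repomd.xml')
        return True
    except urllib2.URLError:
        return False


if __name__ == '__main__':
    os_url = 'http://mirror.aarnet.edu.au/pub/centos/7/os/x86_64/'
    if has_extras_repo(os_url):
        print('extras repo available at: ' + extras_repo_url(os_url))
===================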

Best,
Amit.
Nick Coghlan | 19 Sep 10:01 2014

Two possible paths forward for scheduler improvements: Mesos and PyGMO

We're currently contemplating two paths to a "better scheduler".

1. Kill the bespoke scheduler, and replace it with Mesos.

We still don't know just how big the ramifications of that would be, but
if anyone wants to try out Mesos, this is probably the place to start:
https://timothysc.github.io/blog/2014/09/08/mesos-breeze/

(That may involve waiting until Fedora 21 is actually released, but
that's not too far away)

2. Improve the bespoke scheduler with PyGMO

The heart of the current scheduler is the "schedule_queued_recipes"
function. That essentially treats the recipe queue and the idle systems
as a 2-D matrix, and tries to map one to the other. However, it does so
incrementally on a recipe-by-recipe basis, which makes it difficult to
determine a "best fit" option that tries to get entire recipe sets
running immediately, minimises the amount of unused RAM or disk space, etc.
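
To make the contrast concrete, the current behaviour amounts to something
like the following (a deliberately simplified sketch, not the real
"schedule_queued_recipes" code; the "satisfies"/"requirements" names are
made up for illustration):

===================
# Simplified sketch of the incremental approach (NOT the real code):
# each recipe grabs the first idle system that can run it, so there is
# no attempt at a globally "best fit" across the whole queue.
def schedule_queued_recipes(recipe_queue, idle_systems):
    assignments = []
    for recipe in recipe_queue:
        for system in list(idle_systems):
            if system.satisfies(recipe.requirements):
                assignments.append((recipe, system))
                idle_systems.remove(system)
                break
    return assignments
===================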

At PyCon New Zealand, I was introduced to a multi-objective optimiser
library published by the European Space Agency: https://esa.github.io/pygmo/

Whereas switching to Mesos would be a big architectural change, adopting
PyGMO to make the current scheduler *better* might be feasible by
switching "schedule_queued_recipes" to an approach where it:

1. Reads the current recipe queue and idle system sets from the database
2. Organises them into a format suitable for handing over to PyGMO
3. Runs PyGMO over the data set with an appropriate cost function to be
(Continue reading)

Matt Jia | 15 Sep 06:56 2014

Provisioning guest recipes with cloud images

Hi folks,

I am working on bug [1] and have created a task to provision guest recipes with cloud images.

An example job can be found here [2]; it uses a RHEL 7 image. There is one issue: the hostname
of the guest recipe is not resolved properly from the DHCP server. It only shows the IP address and
may require further investigation. Other than that, everything is looking fine.

The source code is located in my personal repo [3]. If you have any questions, please let me know.

Cheers,
Matt Jia

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1108455
[2] https://beaker-devel.app.eng.bos.redhat.com/jobs/5912
[3] https://github.com/matt8754/beaker-cloud-init