Running your own Ceph integration tests with OpenStack

Author: Loic Dachary
Source: Planet OpenStack

The Ceph lab has hundreds of machines continuously running integration and upgrade tests. For instance, when a pull request modifies the Ceph core, it goes through a run of the rados suite before being merged into master. The Ceph lab has between 100 and 3,000 jobs in its queue at all times, and it is convenient to be able to run integration tests on an independent infrastructure to:

  • run a failed job and verify a patch fixes it
  • run a full suite prior to submitting a complex modification
  • verify the upgrade path from a given Ceph version to another
  • etc.

If an OpenStack account (a tenant, in OpenStack parlance) is not available, it is possible to rent one in a few minutes. For instance, OVH provides a Horizon dashboard showing how many instances are being used to run integration tests.

The OpenStack usage is billed monthly, and the accumulated costs are displayed on the customer dashboard.
The installation steps have been tested on Ubuntu 14.04, with a dedicated instance created from the Horizon dashboard.

After following the installation instructions on the dedicated instance, the integration tests should be run to verify that the setup actually works.

The teuthology workers process jobs. A single teuthology worker runs one job at a time and uses three to five virtual machines per job. Running two workers in parallel is the maximum that fits within the default OpenStack quota provided by OVH (10 instances).
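The quota arithmetic behind the two-worker limit can be sketched as follows (the numbers come from the quota and the per-job VM count above):

```shell
# With a 10-instance quota and a worst case of 5 VMs per job,
# integer division gives the number of workers that always fit.
QUOTA=10
MAX_VMS_PER_JOB=5
echo $(( QUOTA / MAX_VMS_PER_JOB ))
```

The division uses the worst case of five VMs per job; jobs that only need three VMs leave a little headroom, but two workers is the count that never exceeds the quota.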

teuthology-worker --tube openstack -l /tmp --archive-dir /usr/share/nginx/html
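To run two workers in parallel, the command above is simply started twice. A minimal sketch, printing the invocations rather than executing them (drop the leading echo to actually launch the workers; the session names worker1/worker2 are arbitrary choices, not a teuthology convention):

```shell
# Print the screen commands that would start two detached workers,
# one per allowed parallel job under the default 10-instance quota.
for i in 1 2; do
  echo screen -dmS "worker$i" teuthology-worker --tube openstack \
    -l /tmp --archive-dir /usr/share/nginx/html
done
```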

The firefly upgrade suite can then be run with:

teuthology-suite \
  --filter=ubuntu_14.04 \
  --suite upgrade/firefly \
  --suite-branch firefly \
  --machine-type openstack \
  --ceph firefly \
  ~/src/ceph-qa-suite_master/machine_types/vps.yaml \
  $(pwd)/teuthology/test/integration/archive-on-error.yaml

The vps.yaml file contains settings suitable for virtual machines. The archive-on-error.yaml file makes it so that only failed jobs are archived, which saves roughly 500MB per successful job (useful when disk space is limited). The “screen” utility can be used to group the workers together with the shell used to run the suites.
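The content of archive-on-error.yaml is not reproduced in the post. Judging by its name and effect, it plausibly contains a single job setting along these lines (the exact key is an assumption; check the file in the teuthology repository):

```yaml
# Assumed content: archive job artifacts only when the job fails.
archive-on-error: true
```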
