Whether you want to contribute to the project or just want the latest improvements first, installing Avocado from source is the way to go. This post is a step-by-step guide to getting Avocado up and running from the official git repository.
Avocado is developed on the latest Fedora, using Python 2.7. In this post, I’m using a Fedora 26 VM so I have a system where I can install everything from scratch.
$ cat /etc/system-release
Fedora release 26 (Twenty Six)
Besides the python and git packages, Avocado has a number of dependencies that are automatically satisfied when it is installed from RPM. To use Avocado from source, you have to satisfy those dependencies manually.
$ dnf install -y python2 git gcc python-devel python-pip libvirt-devel libffi-devel openssl-devel libyaml-devel redhat-rpm-config xz-devel
Getting the Code and Installing Avocado
First, clone the repository:
$ mkdir git
$ cd git
$ git clone git://github.com/avocado-framework/avocado.git
$ cd avocado
Now you have to satisfy the Avocado python dependencies:
$ sudo make requirements
And last, you have to install Avocado. There are a few options here, the simplest and least intrusive one being “make develop”, which basically runs “make clean” and “python setup.py develop”, and then loops over all the available optional_plugins, running “python setup.py develop” for each of them:
$ make develop
At this point, your system should be able to run Avocado tests. You can run this simple example test to check your installation:
$ avocado run passtest.py
JOB ID     : 9b1938300f1a0a4a9ed335f3836ed67768a1b495
JOB LOG    : /home/apahim/avocado/job-results/job-2017-09-09T11.56-9b19383/job.log
 (1/1) passtest.py:PassTest.test: PASS (0.02 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 0.18 s
JOB HTML   : /home/apahim/avocado/job-results/job-2017-09-09T11.56-9b19383/results.html
Writing your First Test
Avocado can run anything that is executable as a test. That allows you to use the Avocado runner to run any binary created from any language. But to take advantage of all the features Avocado has to offer, you have to create your tests using the Avocado Test API.
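To illustrate the first point, a test of this kind only has to be executable and signal its result through the exit status: 0 means PASS, anything else means FAIL. Here is a minimal sketch of such a test written in Python (the file name and body are hypothetical; any language would do):

```python
#!/usr/bin/env python
# exit_status_test.py - a minimal executable test: exit status 0 means PASS.


def main():
    # A real test would exercise something here and return a non-zero
    # value (or raise) on failure.
    return 0


if __name__ == '__main__':
    import sys
    sys.exit(main())
```

Make the file executable (chmod +x) and point the Avocado runner at it, just like any other test reference.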
The Avocado Test API is a Python API which inherits from the Python unittest framework and adds a lot of exciting features on top of it. Here’s our initial test file, using the Avocado Test API:
from avocado import Test


class MyFirstTest(Test):

    def setUp(self):
        pass

    def test(self):
        pass

    def tearDown(self):
        pass
The test above does basically nothing, but it’s the basic structure for what’s to come. Running it, you will see:
$ avocado run my_first_test.py
JOB ID     : 5bcd69ffb444aa7ff41cf358a8561b096b85ebab
JOB LOG    : /home/apahim/avocado/job-results/job-2017-09-09T12.22-5bcd69f/job.log
 (1/1) my_first_test.py:MyFirstTest.test: PASS (0.03 s)
RESULTS    : PASS 1 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 1.02 s
JOB HTML   : /home/apahim/avocado/job-results/job-2017-09-09T12.22-5bcd69f/results.html
Creating Variants and Using the Parameters
Avocado has a concept/feature called the “varianter”. The easiest way to understand it is to think of it as a generator of parameter combinations (or a parameter matrix, if you prefer).
To use it, you have to create a YAML file with the keys/values you want to make available to your test. You can create multiple instances of a given key and flag to Avocado (using the YAML tag “!mux”) that you want it to create all the combinations of that specific branch of keys.
Maybe one example makes it easier to visualize:
time_options: !mux
    default:
        timeout: 60
        step: 5
    nice:
        timeout: 30
        step: 10
    aggressive:
        timeout: 120
        step: 1
arch_options: !mux
    default:
        arch: x86_64
    aarch64:
        arch: aarch64
In this example, “timeout” and “step” can each assume 3 distinct values, but they are placed together. On the other hand (or… branch), we have “arch”, which can assume 2 distinct values. Notice that both branches are tagged with “!mux”, so Avocado will create one variant for each combination of parameters, merging the branches together. Here are the resulting variants:
$ avocado variants -m params_simple.yaml -c
Multiplex variants (6):

Variant default-default-00aa:    /run/time_options/default, /run/arch_options/default
    /run/arch_options/default:arch    => x86_64
    /run/time_options/default:step    => 5
    /run/time_options/default:timeout => 60

Variant aarch64-default-e497:    /run/time_options/default, /run/arch_options/aarch64
    /run/arch_options/aarch64:arch    => aarch64
    /run/time_options/default:step    => 5
    /run/time_options/default:timeout => 60

Variant default-nice-278f:    /run/time_options/nice, /run/arch_options/default
    /run/arch_options/default:arch => x86_64
    /run/time_options/nice:step    => 10
    /run/time_options/nice:timeout => 30

Variant aarch64-nice-7eec:    /run/time_options/nice, /run/arch_options/aarch64
    /run/arch_options/aarch64:arch => aarch64
    /run/time_options/nice:step    => 10
    /run/time_options/nice:timeout => 30

Variant default-aggressive-bd67:    /run/time_options/aggressive, /run/arch_options/default
    /run/arch_options/default:arch       => x86_64
    /run/time_options/aggressive:step    => 1
    /run/time_options/aggressive:timeout => 120

Variant aarch64-aggressive-fe51:    /run/time_options/aggressive, /run/arch_options/aarch64
    /run/arch_options/aarch64:arch       => aarch64
    /run/time_options/aggressive:step    => 1
    /run/time_options/aggressive:timeout => 120
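The multiplexing above can be sketched in plain Python as the Cartesian product of the two !mux branches. This is only an illustration of the idea, not how the varianter is actually implemented:

```python
import itertools

# The values from the two !mux branches of params_simple.yaml
time_options = {
    'default': {'timeout': 60, 'step': 5},
    'nice': {'timeout': 30, 'step': 10},
    'aggressive': {'timeout': 120, 'step': 1},
}
arch_options = {
    'default': {'arch': 'x86_64'},
    'aarch64': {'arch': 'aarch64'},
}

# Each variant merges one entry from each branch into one parameter set
variants = []
for (t_name, t_params), (a_name, a_params) in itertools.product(
        time_options.items(), arch_options.items()):
    params = dict(t_params)
    params.update(a_params)
    variants.append((t_name, a_name, params))

print(len(variants))  # 6, matching the "avocado variants" output
```

Three time_options entries times two arch_options entries gives the six variants listed above.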
As you can see, that YAML file provides 6 variants for the test, each one with a unique combination of parameters. The “avocado variants” command is used only for visualization. To use that YAML file with our test, let’s adapt the test to access the parameters:
from avocado import Test


class MyFirstTest(Test):

    def setUp(self):
        self.timeout = self.params.get('timeout')
        self.step = self.params.get('step')
        self.arch = self.params.get('arch')

    def test(self):
        self.assertEqual(self.timeout, 30)

    def tearDown(self):
        pass
OK, the test still does not test anything useful, but we are in example territory here. Let’s run it all together:
$ avocado run my_first_test.py -m params_simple.yaml
JOB ID     : a654045e63321dfa1b57aa9cac33710a57189d2d
JOB LOG    : /home/apahim/avocado/job-results/job-2017-09-09T12.50-a654045/job.log
 (1/6) my_first_test.py:MyFirstTest.test;default-default-00aa: FAIL (0.04 s)
 (2/6) my_first_test.py:MyFirstTest.test;aarch64-default-e497: FAIL (0.05 s)
 (3/6) my_first_test.py:MyFirstTest.test;default-nice-278f: PASS (0.02 s)
 (4/6) my_first_test.py:MyFirstTest.test;aarch64-nice-7eec: PASS (0.01 s)
 (5/6) my_first_test.py:MyFirstTest.test;default-aggressive-bd67: FAIL (0.04 s)
 (6/6) my_first_test.py:MyFirstTest.test;aarch64-aggressive-fe51: FAIL (0.04 s)
RESULTS    : PASS 2 | ERROR 0 | FAIL 4 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 1.40 s
JOB HTML   : /home/apahim/avocado/job-results/job-2017-09-09T12.50-a654045/results.html
Our test was executed 6 times, once per variant, and each time it had access to a different set of parameters. As you can see, the test PASSed only when the parameter “timeout” was equal to 30.
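Why 2 of the 6 variants PASS: only the “nice” branch sets timeout to 30, and it is combined with both architectures. A quick pure-Python check of that reasoning (illustrative only, outside of Avocado):

```python
# timeout value per time_options branch, and the two architectures
timeouts = {'default': 60, 'nice': 30, 'aggressive': 120}
archs = ['x86_64', 'aarch64']

# Each (time_option, arch) pair is one variant; the test asserts timeout == 30
results = {(name, arch): ('PASS' if timeout == 30 else 'FAIL')
           for name, timeout in timeouts.items()
           for arch in archs}

passed = sum(1 for r in results.values() if r == 'PASS')
print(passed)  # 2: the "nice" branch on both architectures
```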
Now you have a coding environment and some working (and trivial) examples to start with.
You can find extra coding tips in the contribution guide: http://avocado-framework.readthedocs.io/en/latest/ContributionGuide.html