Testing the toolchain using LAVA
The Toolchain WG currently runs an extensive test suite hosted in Michael Hope's personal laboratory.
The LAVA team will work with the TCWG team to ensure this setup is migrated, as-is, into the LAVA lab. We will also work to ensure that all the resulting toolchain build and test data is properly submitted to LAVA, with the dashboard providing the facilities the Toolchain WG needs to analyze this data.
It is expected that the reporting facilities in the LAVA dashboard will be driven and partly implemented by the Toolchain WG itself using the available extension mechanisms.
== Validation lab next steps ==
We now have the toolchain build system running on boards hosted by the
validation team. Possible next steps:
* Start capturing test results using LAVA (see the job sketch after this list)
* Start regular benchmarks
* Capture benchmark results using LAVA
* Shift components to Jenkins or LAVA
* Shift the x86 builds first?
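For capturing test results with LAVA, a job definition along these lines could drive a toolchain test run on one board type and push the results into the dashboard. This is a sketch only: the device type, image URL, test name and bundle stream below are placeholders, and the exact action names and parameters should be checked against the scheduler version actually deployed in the lab.

{{{
import json

# A minimal sketch; everything here is a placeholder, not the real
# lab configuration.
job = {
    "job_name": "tcwg-gcc-testsuite",
    "device_type": "panda",  # one template per hardware type needed
    "timeout": 18000,
    "actions": [
        {"command": "deploy_linaro_image",
         "parameters": {"image": "http://example.org/images/nano-tcwg.img.gz"}},
        {"command": "boot_linaro_image"},
        {"command": "lava_test_run",
         "parameters": {"test_name": "gcc-testsuite",
                        # filled in per run, e.g. with the tarball URL
                        "test_options": "TARBALL_URL_PLACEHOLDER"}},
        {"command": "submit_results",
         "parameters": {"server": "http://validation.linaro.org/lava-server/RPC2/",
                        "stream": "/anonymous/tcwg/"}},
    ],
}

print(json.dumps(job, indent=2))
}}}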
Every time we have a merge request, we want to test it and make sure there are no regressions.
 - They are doing this today on seabright.
Many tests will fail; they are interested in the delta against the baseline, not the total number of failures (see the comparison sketch after this list).
This runs a test of every commit to the branch:
 * 1 test on v7
 * 1 test on v5 (but they don't have v5 hardware, so they run the v5 tests on v7)
 * 1 test each on x86 and x86_64
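Since the interesting signal is the delta, the baseline comparison can start as a simple set difference over DejaGnu .sum output. A minimal sketch, assuming plain .sum files from the two runs are at hand:

{{{
import re

RESULT_RE = re.compile(
    r"^(PASS|FAIL|XPASS|XFAIL|UNRESOLVED|UNTESTED|UNSUPPORTED): (.*)$")

def read_sum(path):
    """Map each test case in a DejaGnu .sum file to its outcome."""
    results = {}
    with open(path) as f:
        for line in f:
            match = RESULT_RE.match(line.rstrip())
            if match:
                results[match.group(2)] = match.group(1)
    return results

def regressions(baseline, candidate):
    """Tests that newly went bad: not failing before, failing now."""
    bad = ("FAIL", "XPASS", "UNRESOLVED")
    return sorted(name for name, outcome in candidate.items()
                  if outcome in bad and baseline.get(name) not in bad)

# Usage:
#   delta = regressions(read_sum("baseline/gcc.sum"),
#                       read_sum("candidate/gcc.sum"))
}}}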
1. Create scripts for the test suites that are run right now.
2. Create a results parser that will let us get those results into the LAVA dashboard, or at least make sure the existing one works (a conversion sketch follows this list).
3. Poll Launchpad, find the merge request we need to pull, create a tarball, and create a job in LAVA with that tarball location as a parameter (a template for this testing gets filled in; see the polling sketch after the ACTION items below). This could be part of the extension listed below.
 - For the merge request, we have multiple templates that kick off tests on the different hardware types needed (v7, v5, x86, x86_64).
4. Create a LAVA extension for visualizing results and comparing test runs to the baseline (the merge request should let us figure out what HEAD was, and we can compare against that).
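For step 2, if the existing parser turns out not to fit, converting a .sum file into a dashboard bundle is mostly mechanical. A sketch; the bundle format version string and the exact field set should be verified against the deployed dashboard:

{{{
import datetime
import json
import re
import uuid

RESULT_MAP = {"PASS": "pass", "XFAIL": "pass",
              "FAIL": "fail", "XPASS": "fail",
              "UNRESOLVED": "unknown",
              "UNTESTED": "skip", "UNSUPPORTED": "skip"}

def sum_to_bundle(sum_path, test_id="gcc-testsuite"):
    """Turn a DejaGnu .sum file into a dashboard bundle (sketch)."""
    test_results = []
    with open(sum_path) as f:
        for line in f:
            match = re.match(r"^(\w+): (.*)$", line.rstrip())
            if match and match.group(1) in RESULT_MAP:
                # test_case_id may need sanitizing to satisfy the
                # dashboard's identifier rules.
                test_results.append({"test_case_id": match.group(2),
                                     "result": RESULT_MAP[match.group(1)]})
    return {
        # Format string is indicative; check what the dashboard accepts.
        "format": "Dashboard Bundle Format 1.3",
        "test_runs": [{
            "test_id": test_id,
            "analyzer_assigned_uuid": str(uuid.uuid4()),
            "analyzer_assigned_date":
                datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
            "time_check_performed": False,
            "test_results": test_results,
        }],
    }

# json.dump(sum_to_bundle("gcc.sum"), open("bundle.json", "w"), indent=2)
}}}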
ACTION: mock-up needed for the visualization
ACTION: investigate using the ssh client to run this test in a chroot; they do the 32-bit testing on a 64-bit machine in a 32-bit personality chroot
ACTION: Zygmunt to see what we can reuse from tarmac
 - Email notification of jobs starting could also be done with tarmac.
 - There should be two comments on the merge proposal: one when starting, and one when this part finishes and the job is about to be created; also send a link to the scheduler job so they can cancel it if they see something wrong.
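A rough sketch of step 3's polling and submission halves, using launchpadlib to find open merge proposals and the scheduler's XML-RPC interface to file the jobs. The branch name, template file, device type names and credentials handling are all illustrative, and the tarball build itself is elided:

{{{
import json
import xmlrpclib  # Python 2, matching the era's lab tooling
from launchpadlib.launchpad import Launchpad

# Illustrative names: replace the branch, template file and
# scheduler URL with the real ones.
BRANCH = "~linaro-toolchain-dev/gcc-linaro/4.6"
SCHEDULER = "http://validation.linaro.org/lava-server/RPC2/"

def pending_proposals(lp):
    """Merge proposals targeting the branch that still need testing."""
    branch = lp.branches.getByUniqueName(unique_name=BRANCH)
    return [mp for mp in branch.landing_candidates
            if mp.queue_status == "Needs review"]

def submit_jobs(tarball_url):
    """Fill the job template once per hardware type and submit each job."""
    server = xmlrpclib.ServerProxy(SCHEDULER)
    # One job per hardware type needed (v5 tests run on v7 boards).
    for device_type in ("panda", "beagle", "x86", "x86_64"):
        with open("job-template.json") as f:
            job = json.load(f)
        job["device_type"] = device_type
        # Point the test action at the tarball built from the proposal;
        # index 2 is the lava_test_run action in the template sketched earlier.
        job["actions"][2]["parameters"]["test_options"] = tarball_url
        job_id = server.scheduler.submit_job(json.dumps(job))
        # The tarmac-style comment on the proposal would link to
        # .../scheduler/job/<job_id> here, so a bad run can be cancelled.
        print("%s: job %s" % (device_type, job_id))

lp = Launchpad.login_with("tcwg-ci", "production")
for proposal in pending_proposals(lp):
    print("would test: %s" % proposal.web_link)
}}}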
Hardware considerations:
 * USB sticks, and scheduler support for tagging the machines that have them
 * Do we need swap enabled on those machines?
== Benchmarking ==
Needs a customized rootfs that is consistent and stripped down; make sure there are no extraneous services running.
A project for building the toolchain, keeping attachments of the binaries that we built, etc.
They can submit a branch they have; we pull it and build it.
The resulting artifact can be used in the benchmarking run.
IMPORTANT: for private tests we also need private scheduler jobs, so that the raw output is not public either (see the submission sketch at the end of this section).
We benchmark on A8 and A9, and eventually on A15.
They can build the toolchain and point us at it, or we can build it as part of the run; if we build it, the artifact from the build project above should be fed into the benchmark run.
Next we want to run the benchmark with build parameters.
Some of their benchmarks need to be private because of licensing, and the results need to be kept private too.
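On the privacy point, the dashboard distinguishes anonymous, public and private bundle streams, so benchmark results can go to a private stream, with the scheduler job itself carrying a matching restriction. A minimal sketch of the submission end, assuming the XML-RPC dashboard.put call; the user, token and stream path are placeholders:

{{{
import xmlrpclib  # Python 2, matching the era's lab tooling

# Placeholders: the user, token and stream name are not the real lab setup.
DASHBOARD = "http://tcwg-bot:SECRET-TOKEN@validation.linaro.org/lava-server/RPC2/"
PRIVATE_STREAM = "/private/personal/tcwg-bot/benchmarks/"

def put_private_bundle(bundle_json, filename="benchmark-run.json"):
    """Upload a result bundle to a private stream so raw numbers stay hidden."""
    server = xmlrpclib.ServerProxy(DASHBOARD)
    return server.dashboard.put(bundle_json, filename, PRIVATE_STREAM)

# Usage:
#   with open("bundle.json") as f:
#       put_private_bundle(f.read())
}}}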