Orchestration of test run execution
There are some design flaws in the logic that manages test runs. They make the code inflexible and make it difficult to extend or change the current ostf-adapter workflow with fuel-health tests.
One of the problems caused by this is dependent test execution: tests that operate on the same resource pool interfere with each other when run simultaneously. There are also data dependencies between test runs, so they must run in separate memory spaces.
The main goals of the needed refactoring are therefore:
1) make dependent test runs (those that operate on a shared resource pool) execute successively, but in separate subprocesses;
2) execute independent test runs simultaneously.
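The two goals above can be sketched as a small scheduler: runs that share a resource pool are grouped into one chain that executes successively, while independent chains run simultaneously, each in its own subprocess (which also gives each chain a separate memory space). This is a minimal illustration, not the actual ostf-adapter code; all names here are hypothetical.

```python
from multiprocessing import Process

def run_test(name):
    # Placeholder for actually launching a fuel-health test run.
    print("running", name)

def build_chains(test_runs):
    # Group test runs that share a resource pool into one chain.
    chains = {}
    for name, pool in test_runs:
        chains.setdefault(pool, []).append(name)
    return chains

def run_chain(chain):
    # Dependent runs execute successively within the chain.
    for name in chain:
        run_test(name)

def schedule(test_runs):
    # Each chain gets its own subprocess; independent chains run
    # simultaneously and do not share memory.
    procs = [Process(target=run_chain, args=(chain,))
             for chain in build_chains(test_runs).values()]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    schedule([("sanity", "pool_a"), ("smoke", "pool_a"), ("ha", "pool_b")])
```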
To achieve this, the following major changes to the adapter logic are needed:
1) test running must be performed by a standalone programmatic agent that communicates with the other parts of the adapter via AMQP;
2) add an indicator of dependency between test runs. This information will be stored in the db on the test set entity and used by the agent to form execution chains for test runs.
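The dependency indicator in (2) could be as simple as an optional reference from one test set to the test set it depends on; the agent would then walk those references to form an execution chain. The following is a hedged sketch of that chain-building step; the `depends_on` field and function names are assumptions for illustration, not the actual ostf-adapter schema.

```python
def build_execution_chain(test_sets, start):
    """Return the execution chain ending with ``start``: all of its
    (transitive) dependencies first, then ``start`` itself.

    ``test_sets`` maps a test set id to a dict with an optional
    ``depends_on`` key naming another test set id (hypothetical schema).
    """
    chain = []
    seen = set()
    current = start
    while current is not None:
        if current in seen:
            # Guard against malformed data in the db.
            raise ValueError("dependency cycle involving %r" % current)
        seen.add(current)
        chain.append(current)
        current = test_sets[current].get("depends_on")
    # We collected from the dependent set back to its root dependency,
    # so reverse to get execution order.
    chain.reverse()
    return chain
```

For example, if `ha` depends on `smoke`, which depends on `sanity`, the agent would produce the chain `sanity, smoke, ha` and run those sets successively.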
Blueprint information
- Status:
- Complete
- Approver:
- Tatyanka
- Priority:
- Medium
- Drafter:
- Artem Roma
- Direction:
- Approved
- Assignee:
- Artem Roma
- Definition:
- Approved
- Series goal:
- Accepted for 5.0.x
- Implementation:
- Implemented
- Milestone target:
- 5.0
- Started by:
- Artem Roma
- Completed by:
- Artem Roma
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
Add test_set dependency to models
Addressed by: https:/
Cleanup management via system signal handling