Juju Charm Testing
[GOAL]
- Test a charm on promotion to the charm store.
- Gate charm commits on charm-testing success.
- Provide a mechanism for users to test their charms.
- Make charm tests faster to run.
- Make charm tests faster to develop.
- Make charm tests more robust.
- Make charm test results reliable.
[RATIONALE]
We have very few charms with automated tests. Developing charm tests is slow because running them is slow, especially when running them repeatedly during development. Test results are also often flaky. What can we do to make this faster and more reliable, both in Juju core and in the ecosystem tools?
Communicating the current health of a charm on a given provider is essential to Juju users deploying charmed services into a cloud. Developers should also be made aware if their charm is failing due to a recent commit, a change by a cloud vendor, or promotion to a new Ubuntu release. There is a need to provide a process and mechanism to validate the quality of a charm in both unit testing and workload testing. This blueprint is to plan work relating to ad-hoc charm testing, automated charm testing, and continuous integration testing.
Blueprint information
- Status:
- Started
- Approver:
- Antonio Rosales
- Priority:
- Essential
- Drafter:
- Ubuntu Server
- Direction:
- Approved
- Assignee:
- Marco Ceppi
- Definition:
- New
- Series goal:
- Accepted for saucy
- Implementation:
- Good progress
- Milestone target:
- ubuntu-13.10
- Started by:
- Antonio Rosales
- Completed by:
Whiteboard
Discussion:
* Bug submission on charm failure.
* Define a process for how charm maintainers respond to test failures and the resulting bugs. Can a user run a manual test and submit the result back to the bug report to update the testing status to green?
* Make the auto charm tester more resilient against provider failures, and improve Jenkins usage.
  Simulate a provider failure and verify recovery: $ juju ssh <MACHINE> sudo shutdown now
* Define work items to execute auto charm testing on the Go implementation of Juju.
* Continuous integration (will also help with gating charm commits).
* Juju testing blogging.
* Juju testing communication to the Juju lists.
* Work on integrating/fixing Charm Runner (graph testing / dependency / environment set-up testing).
Add a Jenkins workflow to run a charm or a set of charms in the following LXC environments:
- raring container on raring host
- raring container on precise host
- precise container on raring host
- precise container on precise host
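The series matrix above is every container/host combination of the two releases. A minimal sketch of how a Jenkins job might enumerate it; the function name and the printed command line are illustrative only, not part of any existing tool:

```python
# Sketch: enumerate the LXC series matrix a Jenkins workflow would
# iterate over. Each pair is (container series, host series).
from itertools import product

SERIES = ["precise", "raring"]

def lxc_matrix(series=SERIES):
    """Return (container, host) pairs for every series combination."""
    return list(product(series, repeat=2))

for container, host in lxc_matrix():
    # Hypothetical placeholder for the real deploy/test invocation.
    print("would test: %s container on %s host" % (container, host))
```

Each pair would map to one Jenkins matrix cell that bootstraps the host series and deploys the charm into a container of the container series.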
Two modes of testing:
- Unit (does the charm start and report ready?)
- Workload (exercise the charm's relations and push data through them)
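The "unit" mode above amounts to checking that every deployed unit reaches the started state. A minimal sketch, assuming the JSON shape produced by juju 1.x `juju status --format json` and taking the JSON as a string so the logic can run without a live environment; `parse_unit_states` and `charm_started` are hypothetical helper names:

```python
# Sketch of a "unit" mode check: did the charm's units all reach the
# "started" agent state? The status JSON is passed in as text rather
# than captured from a live `juju status` call.
import json

def parse_unit_states(status_json):
    """Return {unit_name: agent_state} from a juju status JSON dump."""
    status = json.loads(status_json)
    states = {}
    for service in status.get("services", {}).values():
        for name, unit in service.get("units", {}).items():
            states[name] = unit.get("agent-state", "unknown")
    return states

def charm_started(status_json):
    """True when at least one unit exists and all units are started."""
    states = parse_unit_states(status_json)
    return bool(states) and all(s == "started" for s in states.values())
```

A Jenkins job could poll this until it returns True or a timeout expires, then move on to the workload tests.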
Reference Links:
* Charm Test Spec [html] https:/
* Charm Test Spec [source] http://
* CharmTester Charm http://
* Charm Runner: https:/
* Jenkins Charm Testing: https:/
[USER STORIES]
William is a Juju user who wishes to know a charm's current stability.
Saul is patching a charm and wants to ensure his changes work with the current tests.
Laura is a charm maintainer and wants to add tests to ensure her charm is stable.
Kara is a charm maintainer and needs to know when her charm is broken.
Lee is a charmer who, while reviewing charm submissions, needs to know if the changes break backwards compatibility with currently deployed services.
Gaius is a charm maintainer from an upstream project and needs an easy way to learn how to write tests for his charm.
[ASSUMPTIONS]
- Charm tester/charm tester control will work with gojuju for at least graph testing
[RISKS]
- Relying solely on "graph" testing may result in inaccurate test results due to the lack of embedded tests
- Making tests too complicated may result in low adoption rate of embedded testing
[IN SCOPE]
[OUT OF SCOPE]
[USER ACCEPTANCE]
[RELEASE NOTE/BLOG]
[NOTES]
UDS 1305 notes: http://
=== UDS 1303 Notes ===
Pad: http://
Question:
Is there a way in the charm metadata to explicitly state provider support?
- Example: Ceph: does the cloud provider have block storage support?
- More broadly stated: does the cloud provider have the capabilities the charm needs?
Idea:
- In the charm testing status, be able to show that a charm failure can be the result of the provider not offering the needed capabilities, i.e. the Ceph charm fails on a provider because it does not support object storage.
- Make interface usage more verbose in the charm description.
- Need a rule/spec on how an interface should be implemented.
- Need to investigate possible enforcement of interfaces.
- **Have the testing framework iterate through the operational deployment requirements.
Interfaces doc link broken:
-Example: http://
https:/
Meta-language testing (http://
Language suggestions:
http://
http://
Charm-Runner integration:
- https:/
Wrap Go/Py juju client status:
- https:/
---
[notes from cloudsprint 2013-05]
Topics to cover
Current Testing
Current todos
Experiences from IS
Ideas
Review the charm policy to include:
- Charms must pass their tests
- Charms must have tests
We want embedded tests!
All tests live in the charm
Functional tests
- /test (in the charm)
Integration tests
- /test.d (in the charm)
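A runner for embedded tests like these could simply execute every executable file under the charm's test directories, treating a non-zero exit status as a failure. This is a hypothetical sketch of that convention, not the actual charm-tools implementation:

```python
# Sketch: discover and run embedded charm tests. Every executable
# file under the /test and /test.d directories of a charm is run in
# sorted order; exit status 0 means pass.
import os
import stat
import subprocess

def discover_tests(charm_dir, subdirs=("test", "test.d")):
    """Return sorted paths of executable test files in the charm."""
    found = []
    for sub in subdirs:
        path = os.path.join(charm_dir, sub)
        if not os.path.isdir(path):
            continue
        for name in sorted(os.listdir(path)):
            full = os.path.join(path, name)
            is_exec = os.stat(full).st_mode & stat.S_IXUSR
            if os.path.isfile(full) and is_exec:
                found.append(full)
    return found

def run_tests(charm_dir):
    """Run each discovered test; map test path -> pass/fail bool."""
    return {t: subprocess.call([t]) == 0 for t in discover_tests(charm_dir)}
```

Keeping the contract this thin (executable file, exit status) is what makes tests language-agnostic: a charmer can write them in shell, Python, or anything else.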
How to make it low-entry for charmers to add tests:
- charm create tests (have charm-tools generate a stub _simple_ test)
- leverage libraries, and possibly a declarative deployment-testing language
- Sidnei mocks all the juju calls (U1 testing):
  - have a library that stubs this for you
  - pull this into the charm-helper library
- leverage Go-watch
- leverage charm testing with charm upgrade
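The "stub the juju calls" idea above can be sketched with unittest.mock: hook code that shells out to juju tools such as relation-get is patched at the subprocess boundary, so tests run without a provider or a live environment. `get_relation_host` here is a hypothetical hook helper for illustration, not charm-helper API:

```python
# Sketch: unit-test charm hook code by mocking the juju CLI calls it
# makes, instead of deploying to a real provider.
import subprocess
from unittest import mock

def get_relation_host():
    """Hypothetical hook helper: read the remote unit's address.

    In a real hook this runs inside a juju unit agent context, where
    relation-get is on the PATH.
    """
    out = subprocess.check_output(["relation-get", "private-address"])
    return out.decode().strip()

def test_get_relation_host():
    # Patch the subprocess boundary so no juju environment is needed.
    with mock.patch("subprocess.check_output",
                    return_value=b"10.0.3.1\n") as co:
        assert get_relation_host() == "10.0.3.1"
        co.assert_called_once_with(["relation-get", "private-address"])
```

A shared library of such stubs (pulled into charm-helpers, as suggested above) would let every charm author test hook logic this way without writing the mocking boilerplate themselves.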
Work Items
[marcoceppi] Integration Testing includes framework that charm authors can write tests against (embedded in the charm) (amulet): INPROGRESS
[marcoceppi] Jenkins testing on new merge proposal, on success it is a candidate for review: POSTPONED
[marcoceppi] Develop Juju test plugin (ie juju test): DONE
[marcoceppi] Modify charm tools to capture a stub integration test: INPROGRESS
Include charm-helper library for all charmers: DONE
Dependency tree
* Blueprints in grey have been implemented.