I’d like to propose a tactical change and focus our energies on collecting test cases for xAPI conformance. If there’s one thing that years of working in specs and standards have taught me, it’s that it’s all too easy to get overwhelmed by the mass of minutiae. There are over 460 line-item requirements documented on the idea board, and there are even more implicit requirements, “should” and “may” requirements, and requirements that aren’t even listed yet. The initial release of ADL’s conformance test suite for Learning Record Stores has over 1300 unique tests in it. To sum it up: there’s a lot of minutiae, and it’s daunting.
To tackle this, we need to make the task a bit smaller and look at a bigger picture than granular line-item requirements. Pick one of the APIs (“resources” is, I think, the more appropriate term) and share the ways you would test that particular API for conformance. Writing up test cases spares the working group from such a narrow, isolated focus, and it should make it easier for the DISC team to determine which requirements any given test case covers. We can then also identify whether the spec supports the requirement, whether the spec actually says something contrary to the test case, or whether the spec is ambiguous on the test case and it merits more conversation as a community.
As a reminder for those not so deep in the weeds of the xAPI spec, there are five different resources identified in xAPI, plus a whole lot of error codes (a sample test sketch follows the list below):
- Error Codes
- Statement API
  - PUT Statements
  - POST Statements
  - GET Statements
  - Voided Statements
- Document API
- State API
- Activity Profile API
- Agent Profile API
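To make that concrete, here is a minimal sketch of what two such conformance checks might look like in code, written in Python with the requests library. The LRS URL and credentials are placeholders you would substitute with your own, and the expected status codes follow xAPI 1.0.x: 204 No Content for a successful PUT of a statement, and 400 Bad Request when the version header is missing.

```python
import uuid

import requests

# Hypothetical LRS endpoint and credentials -- substitute your own.
LRS = "https://lrs.example.com/xapi"
AUTH = ("username", "password")
VERSION_HEADER = {"X-Experience-API-Version": "1.0.3"}


def minimal_statement(statement_id):
    """Build a minimal valid statement: actor, verb, and object are required."""
    return {
        "id": statement_id,
        "actor": {"mbox": "mailto:learner@example.com"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced"},
        "object": {"id": "http://example.com/activities/conformance-demo"},
    }


def test_put_statement_returns_204():
    """PUT Statements: storing a statement under a client-supplied
    statementId should return 204 No Content."""
    sid = str(uuid.uuid4())
    resp = requests.put(
        f"{LRS}/statements",
        params={"statementId": sid},
        json=minimal_statement(sid),
        headers=VERSION_HEADER,
        auth=AUTH,
    )
    assert resp.status_code == 204, resp.text


def test_missing_version_header_is_rejected():
    """Error codes: a request without the X-Experience-API-Version
    header should be rejected with 400 Bad Request."""
    resp = requests.get(f"{LRS}/statements", auth=AUTH)
    assert resp.status_code == 400, resp.status_code
```

Run these with pytest (or adapt them to the harness of your choice); the point is that each function maps one requirement to one observable pass/fail outcome.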
Formatting test cases
One suggestion from today’s meeting was to use the IETF’s Request for Comments (RFC) approach. That seems pretty heavy-duty given the timetable we’re under, but I think we can still share a common format for submitting test cases.
I’m open to suggestions, but to offer a general model, let’s consider the following format when submitting a test case:
Title: A short, descriptive title for the test case.
Description: Verify [specific functionality] using/entering/selecting [tool, method, API request, etc.] with [conditions] to [what is returned, shown, or demonstrated]
Assumptions:
Preconditions:
Test data: Variables and their values
Steps to be executed: Keep the test steps clear and concise – number them when appropriate.
Expected result:
Actual result:
Pass/Fail:
Comments:
Keep in mind while writing test cases that they should be simple and easy to understand.
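For example, a hypothetical submission for the Statement API (the details here are illustrative, not an official test case) might read:

Title: PUT Statements returns 409 for a conflicting statement ID
Description: Verify the Statement API, using a PUT request with a statementId that already exists in the LRS and a non-matching statement body, returns 409 Conflict.
Assumptions: The LRS is reachable and the test client has valid credentials.
Preconditions: No statement with the chosen UUID exists in the LRS before the test begins.
Test data: statementId = a fixed UUID; two statement bodies that differ only in their verb.
Steps to be executed:
1. PUT a minimal valid statement to /statements?statementId=&lt;UUID&gt; and confirm 204 No Content.
2. PUT a second statement with a different verb to the same statementId.
3. Record the response code.
Expected result: 409 Conflict.
Actual result: (recorded at execution time)
Pass/Fail: (recorded at execution time)
Comments: The 409 response is a “should” requirement in the spec, which makes this exactly the kind of case worth flagging for community discussion.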
Thoughts? Questions? Please comment below!