Members of the POET group have completed several plugin reviews. I have linked two here (I tried to attach them, but the attachment limit is 100K) to show everything we’re currently looking at.
Just as a refresher, POET is an organization of Moodle users who pool each member’s resources to help vet plugins for their organizations’ use. It exists to “force” coordination and resource sharing among its members. We want to work openly in the community, contribute our efforts, and help improve the QA of Moodle plugins.
Our current reviews don’t completely fit into the format on the plugins database, so I haven’t posted them there yet. The purpose of our reviews is to determine whether a plugin is acceptable to install on the sites we manage. We have created a number of status criteria to indicate this (a sketch of one possible encoding follows the list):
Review in progress (obvious I think),
Failed (enough problems to recommend not using it yet),
Accepted (has minor problems, but still safe to use),
Approved (passed all tests and should be used),
Certified (future plan that would include guarantees of maintenance).
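For illustration only, here is one way this taxonomy might be encoded if we automate parts of the workflow. This is purely my own sketch; none of these names exist in the Plugins database or in any POET tooling:

```python
# Illustrative sketch of the POET status taxonomy; all names are assumptions.
from enum import Enum

class ReviewStatus(Enum):
    IN_PROGRESS = "Review in progress"
    FAILED = "Failed"        # enough problems to recommend against use
    ACCEPTED = "Accepted"    # minor problems, but still safe to use
    APPROVED = "Approved"    # passed all tests and should be used
    CERTIFIED = "Certified"  # future plan: includes maintenance guarantees
```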
Once we review a plugin, we intend to send the details to the maintainer, especially when there are items that need to be fixed. We also hope to help where we can and to contribute fixes back when needed.
During this process, we discovered that the criteria for evaluating plugins are not well defined for most plugin types, and may be outdated for the ones that are defined. Our process is heavily tilted toward activity modules, and was based on our own processes as well as those defined on the Plugin validation page. We will continue to adjust and improve these tests to validate against the current requirements for all plugin types.
We used a Moodle Database activity to manage our reviews. This may prove inadequate, however, unless we create a new form for each plugin type. We’ll continue to experiment with this to find a workable solution.
Another issue that came up is that the current review structure in the Moodle Plugins database requires reviews to be tied directly to a version. This is problematic because new versions are often released that contain only bug fixes and don’t really affect the existing review. It would be better if parts of the review, such as the functional and usability portions, were tied to the release branch and changed only when necessary. This will likely mean breaking the review up into separate pieces, and it would be greatly helped by automating the more mundane portions; a rough sketch of that split follows.
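To make that split concrete, here is a minimal sketch, in Python and purely as an assumption about how the data could be organized (the real Plugins database is PHP and uses none of these names): slow-changing review parts hang off a release branch, while automated checks hang off each version.

```python
# Minimal sketch: branch-level review parts vs. per-version automated checks.
# All names are illustrative assumptions, not the Plugins database schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BranchReview:
    """Parts that rarely change across bug-fix releases on one branch."""
    branch: str                  # e.g. "MOODLE_27_STABLE"
    functional_notes: str = ""
    usability_notes: str = ""

@dataclass
class VersionChecks:
    """Automated, per-version results (the 'mundane' portions)."""
    version: str                 # the plugin's version string
    code_style_passed: bool = False
    install_test_passed: bool = False

@dataclass
class PluginReview:
    plugin: str                  # frankenstyle component name
    status: str                  # one of the POET statuses above
    branch_reviews: List[BranchReview] = field(default_factory=list)
    version_checks: List[VersionChecks] = field(default_factory=list)
```

Under this shape, a bug-fix release would only append a new VersionChecks entry; the branch-level review would stand until something substantive changes.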
We will add more “subjective” parts to the reviews as well. Ideally we want the “users” in our organizations to review how a plugin is used in real learning environments: what it does best, how well it performs, and so on. We have a start on this but have not completed the process yet. These parts fit easily into the current Moodle Plugins database review categories (“General”, “Usability”, “Accessibility”, “Performance”).
I’d like to keep the conversation going about what we are trying to do, how we can do it better, what else the community would like from this process, and whether any of it can be incorporated into the existing “moodle.org” plugins mechanisms. I know that David Mudrak is actively working on improving the Plugins database and adding more community management around it, and we hope to be part of that in constructive and beneficial ways.
Some initial questions for thought:
Should I post (cut and paste, or upload) the reviews into the “Review” box for the appropriate plugin? Or wait until we get a better system?
What do you think of the actual status categories (Failed, Accepted, etc.)?
What parts of the testing process can be replaced by the current Moodle Plugins database automated validation processes? What tests are known to have passed when a plugin has been approved for inclusion in the Moodle Plugins database?
What other existing tools are available now, in a usable form, that can provide some of this testing (for example “Code-checker”, which may or may not be completely up to date)? Could the tools the HQ integrators use for testing be applicable here?
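As one example of the kind of automation I have in mind, here is a rough Python sketch that shells out to PHP_CodeSniffer with the Moodle coding standard (which is what Code-checker wraps). It assumes phpcs and the “moodle” standard are installed on the machine; the plugin path is illustrative:

```python
# Rough sketch: automate the coding-style portion of a review with phpcs.
# Assumes PHP_CodeSniffer and the Moodle coding standard are installed.
import json
import subprocess

def check_coding_style(plugin_path: str) -> dict:
    """Run phpcs on a plugin directory and return its JSON report."""
    result = subprocess.run(
        ["phpcs", "--standard=moodle", "--report=json", plugin_path],
        capture_output=True, text=True,
    )
    # phpcs exits non-zero when it finds violations, so parse regardless.
    return json.loads(result.stdout)

report = check_coding_style("mod/questionnaire")  # illustrative path
print(report["totals"])  # overall error/warning counts for the plugin
```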