Members of the POET group have completed several plugin reviews. I have linked two here (I tried to attach, but the attachment limit is 100K), to show everything we’re currently looking at.
Just as a refresher, POET is an organization made up of Moodle users wishing to take advantage of each member's resources to help approve plugins for their organizations' use. It exists to "force" organization and shared resource use among its members. We want to work openly in the community, sharing our efforts to help improve the QA of Moodle plugins.
Our current reviews don't completely fit into the format on the plugins database, so I haven't posted them there yet. The purpose of our reviews is to determine whether a plugin is acceptable to be installed on the sites we manage. We have created a number of status criteria designed to help indicate this:
Review in progress (obvious I think),
Failed (enough problems to recommend not using it yet),
Accepted (has minor problems, but still safe to use),
Approved (passed all tests and should be used),
Certified (future plan that would include guarantees of maintenance).
Once we review a plugin, it is our intent to send our details to the maintainer, especially when there are items that need to be fixed. We also hope to be able to help when needed and to contribute fixes back.
During this process, we discovered that the criteria for evaluating plugins are not well defined for most plugin types, and might be outdated for the ones that are defined. Our process is heavily tilted toward activity modules, and was based on our own processes as well as the ones defined on the Plugin validation page. We will continue to adjust and improve these tests to validate against the current requirements for all plugins.
We used a Moodle Database activity plugin to manage our reviews. This may prove inadequate, however, unless we create a new form for each type of plugin. We'll continue to experiment with this to come up with an adequate solution.
Another issue that came up is that the current review structure in the Moodle Plugins database requires that reviews be tied directly to a version. This is somewhat problematic as often new versions are released that only contain bug fixes, and don’t really impact the existing review. It would be better if parts of the review, such as functional and usability, get tied to the release branch and only change when necessary. This will likely mean breaking the review up into separate pieces, and will be greatly improved by automated tests on the more mundane portions.
We will add more “subjective” parts to the reviews as well. Ideally we want to involve the “users” of our organizations to provide a review of how the plugin is used in real learning environments, what it does best, how well it performs, etc. We have a start on this but have not completed the processes yet. And these fit easily into the current Moodle Plugins database review functions (“General”, “Usability”, “Accessibility”, “Performance”).
I’d like to keep the conversation going about what we are trying to do, how we can do it better, what else the community would like from this process and whether any of this can be incorporated better into the existing “moodle.org” plugins mechanisms. I know that David Mudrak is actively working on improving the Plugins database, and adding more and more community management around these and we hope to be a part of that in constructive and beneficial ways.
Some initial questions for thought:
Should I post (cut and paste, or upload) the reviews into the “Review” box for the appropriate plugin? Or wait until we get a better system?
What do you think of the actual status categories (Failed, Accepted, etc.)?
What parts of the testing process can be replaced by the current Moodle Plugins database automated validation processes? What tests are known to have passed when a plugin has been approved for inclusion in the Moodle Plugins database?
What other existing tools are available now, in a usable form, that can provide some of this testing (for example “Code-checker”, which may or may not be completely up to date)? Could the tools the HQ integrators use for testing be applicable here?
Too many questions for now Mike, I am impaired in my typing arm after rotator cuff surgery.
But a quick reply:
- Looks amazing. I am cautiously optimistic. I work with some people who will probably directly benefit from this, i.e. they will save Moodle providers money and time, and their own time. I hope David is OK with this. Such is the world of community around OS.
- Q1. Keep it simple. I think get a standard 2 sentence blurb for the reviews page about POET (for those who come to the review page cold) and include the link to the review.
- Q2. "Certified (future plan that would include guarantees of maintenance)" is a superb category. The biggest problems I have with some of the plugins in the database are a) non-responsiveness from developers as to plans, b) hidden knowledge in Git needed to know how to install, c) not knowing IF it works with version X until I try, and d) lack of reply to questions. Not all plugins obviously. So I like this category.
- Q3,4, above my pay grade.
- I'm going to contact a few of the creators of plugins where the functionality is 'cannot do without' and see if they can submit. One example for me is Course Menu (https://moodle.org/plugins/pluginversions.php?plugin=block_course_menu), a simple section-to-section navigation menu.
Hi Derek -
In your first point, you talk about "the reviews page". Do you mean the one posted at POET or the one on the Moodle Plugins database? I definitely would prefer that this all "live" in the Moodle Plugins database.
And for suggesting plugins to review, please let us know which ones are important to you.
These are great, very detailed reviews, but perhaps too much information for the average user.
I think a brief version of what works, problems seen etc might be more effective than the full version. As a Moodle admin, I am not as interested in the code as I should be - I just want to know - is it useful, does it work as intended, what bugs show up on the site as a result...
Thanks for the feedback Emma.
Using the certificate review as an example, my intent was to have the box on the right side provide the summary you are looking for. Can you take a look and suggest a different or better layout, and any other information you want in there?
I would add some usability comments too. Such as the fact that image layout is pretty tricky to accomplish and takes some playing around with image size and location...
Some of those things that administrators might run into if setting it up or hear from teachers or managers if they are setting it up...
I have to say I am so glad that they opened up the reviews area - I have been whining about that for some time now!!
Should I post the reviews into the “Review” box ...
The sections Structure Review and Basic Code Review seem to be mostly covered by existing automated validation mechanisms (explicit and implicit). On their own, they are more relevant for the plugin developers themselves rather than end users / admins.
Could the tools the HQ integrators use for testing be applicable here?
It has already been happening in the Plugins directory for some time now. The Prechecker results published there provide a report of what would be raised if that plugin were hypothetically being integrated into Moodle core. There is a plan to provide this report automatically for each version (not just during the approval review), together with more planned improvements (such as the ability to see a code diff between two versions, etc.).
Hi David -
Thanks for your feedback.
You mentioned that the "sections Structure Review and Basic Code Review seem to be mostly covered by existing automated validation mechanisms" (emphasis mine). Can you provide documentation on what is absolutely guaranteed to be covered by the approval tests before a plugin is allowed to be included in the database? The main reason we wanted to publish the checklist of what was tested is so that anyone can see what it took for a plugin to get its rating. If parts of these are already tested and required before the plugin gets published in the database, that will reduce the need for us to test them again.
And for those tests, once a plugin has been approved, are the same tests run and required for all new version submissions? Knowing this and publishing this would be a huge benefit.
You said, "[o]n their own, they are more relevant for the plugin developers themselves rather than end users / admins". This is true only if it is true that plugins cannot be published in the Moodle database without passing these same tests. If they can fail any of the tests, then really the end users need to know this.
You said, "[t]he Prechecker results published there provide a report of what would be raised if that plugin was hypothetically being integrated into the Moodle core". Using the Certificate module as an example, where do I see the "prechecker results"? It sounds like these could go a long way to solving these issues.
What I meant is that some of these validation checks are done automatically when the ZIP is uploaded into the plugins directory. The results are available to the plugin author and the approval team members (we can make them public; there has been no demand for them). See the example of such output attached - the point here is to make sure the ZIP is a valid implementation of the given plugin type.
Some of these checks are implicit, by which I mean that when we check e.g. the ability to duplicate an activity module, it implicitly includes checking for moodle2 backup & restore support. Another level of validation checks happens (again implicitly) when we actually install and test the plugin on our machines. Things like the absence of expected capabilities and/or capability name strings raise debugging warnings that we report into the plugin's bug tracker.
I guess my point was (and sorry if I am not good at explaining things clearly) that listing criteria like "Activity module - lib.php file present" and many others from that section do not actually add much relevant info, because the absence of these files automatically implies that such an activity module simply cannot work. What we do in approval reviews is not simply checking that such files are present; we also look at how they are implemented.
I just wanted to clarify that there is actually a big overlap of our approval tests and your review tests. Which is good, because it shows us we do not miss anything important. It is natural that there are differences in both scope and depth of tests because both reviews have different goals.
The primary documentation for approval reviews is the plugin contribution checklist I already mentioned elsewhere.
Currently, the CiBoT prechecker results are provided via a comment link during the approval. As I said, making them available for all versions (including those approved a long time ago) is something I am planning to work on.
That is indeed helpful. But I would like to be able to clearly document what is guaranteed to have been tested and passed for every plugin in the database. If we have that, then those checks do not need to be rechecked, and reviews can focus on other things.
Can you provide a list of the tests that are executed for each plugin and plugin type, and which items must be passed for it to be available in the database? You mention the checklist, but the output you posted above also shows other tests. I also presume there are tests that are plugin-type specific? If that list clearly shows the required tests, and all of the plugins that are in the database have had to pass those tests, then that is a huge step.
One of the reasons I think it might be better to post the test results themselves, for each version, is that your tests might change over time. If you post the results, you can change those tests without invalidating the old results.
You mentioned that posting the test results for all plugins, including historical ones, is something that you plan to do. Is there an anticipated date for that? Is that something I can help with?
Can you provide a list of the tests that are executed for each plugin and plugin type, and which items must be passed for it to be available in the database?
Again, I think our plugin contribution checklist summarizes the criteria quite clearly. All detected findings are reported in the plugin's public bug tracker and can be referred to.
You mention the checklist, but the output you posted above also shows other tests.
I do not consider them "other". I believe they all relate to what the checklist covers.
I also presume there are tests that are plugin-type specific?
Generally, I have this feeling that what you are calling for is a well-defined list of every single thing we look at when evaluating a plugin. I do not think it is that simple. On the contrary, I believe that any such list is likely to turn into just a long list of more or less formal criteria. If anything, I can imagine something like a list of reasons why a plugin is implicitly rejected - this is what the Apple Store Review Guidelines do, for example. But even that does not give any guarantee of approval.
What we are aiming at during the plugin approval review is a more holistic review of the plugin. The final verdict is also affected by our overall impression (from both coding and usage perspective), if and how the author communicates with us in reaction to raising issues, whether we see a value of the plugin for the community, potential for further development and improvements etc. The Mission section at https://docs.moodle.org/dev/Plugins_guardians states that at the general / conceptual level.
From this perspective, the current checklist looks like a good compromise between being objective and focused enough, and still something actionable that actually guides and helps the developers to produce better code.
Hi David -
So I've gone through what is in the "Plugin contribution checklist" and cross-referenced it with what we're checking for initially as well. It doesn't appear that you are testing as many things as we are testing. For example, the checklist mentions that it tests whether the plugin installs smoothly. For basic functional testing, we also test:
- if any setup function works properly,
- if an upgrade, does it properly upgrade from an older version,
- it can be added to the proper context without error (e.g. activity added to a course),
- backup and restore work properly (if valid for the plugin type),
- it can be removed from its context properly,
- any context it is added to can be deleted properly (e.g. delete a course with the activity plugin),
- it can be uninstalled, ...
and others. The checklist does not mention any of those.
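As a rough illustration, the lifecycle checks listed above could be driven by an ordered harness that stops at the first failure, since later steps are meaningless once an earlier one breaks. Everything below is hypothetical: none of these names come from Moodle tooling, and the stub checks stand in for whatever automation (Behat, PHPUnit, shell scripts) would actually perform each step.

```python
# Hypothetical sketch of running the basic functional lifecycle checks in order.
# All names here are placeholders, not part of any existing Moodle tooling.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class LifecycleResult:
    passed: List[str] = field(default_factory=list)
    failed: str = ""  # name of the first failing step, if any

# Each step is (name, check); a check returns True on success.
Step = Tuple[str, Callable[[], bool]]

def run_lifecycle(steps: List[Step]) -> LifecycleResult:
    """Run the checks in order; stop at the first failure."""
    result = LifecycleResult()
    for name, check in steps:
        if not check():
            result.failed = name
            return result
        result.passed.append(name)
    return result

# Example with stub checks standing in for real install/upgrade/etc. tests.
steps: List[Step] = [
    ("install", lambda: True),
    ("upgrade_from_older_version", lambda: True),
    ("add_to_context", lambda: True),         # e.g. add activity to a course
    ("backup_and_restore", lambda: True),
    ("remove_from_context", lambda: True),
    ("delete_parent_context", lambda: True),  # e.g. delete the whole course
    ("uninstall", lambda: True),
]

result = run_lifecycle(steps)
print(result.passed)
```

The ordering is the point: a failed "install" makes every later check moot, which matches the idea that qualitative review should only start once the mundane rejection tests pass.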
If these are things that the current plugin database tests for, publishing that these have been tested would reduce this need.
If these are not tested for currently, then we are prepared to help create those tests. We'd like to have them work automatically, and it would be great if the tests could be available for developers to submit to before submitting the plugin.
You mentioned, "I can imagine something like a list of reasons why a plugin is implicitly rejected". I agree. The automated tests should be able to help "reject" a plugin, rather than give it a stamp of approval. And doing that would be a great step forward.
Once we get past all of the mundane "rejection" tests, we are then able to go to the next step. But there is no point doing further qualitative functional testing if the plugin doesn't pass the basic tests. Beyond that, we want to get our group to do reviews that test things for usefulness in various learning settings, and categorize them to types of solutions (video capture, reporting, etc.). And we want to have all of this work ideally located on moodle.org, in the plugins database.
Is it possible for you to share the plugins submissions workflow as well as the update workflow? Ideally with the automated test/approval scripts? We'd like to help work on this whole process if we can.
Also, is there a way to query the plugins database to get version- and plugin-type-specific information more easily? Something like https://download.moodle.org/api/1.3/pluglist.php but with queries for Moodle versions and/or plugin types?
It would be good to be able to get a list of all plugins that are available for 2.7, 2.8 and 2.9 for example.
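Until such query parameters exist, one workaround is to fetch the full pluglist.php payload once and filter it client-side. This is only a sketch: the JSON field names used below ("plugins", "component", "versions", "supportedmoodles", "release") are my assumptions about the shape of the 1.3 API output and should be verified against a real response.

```python
# Hedged sketch: filter an assumed pluglist.php-style payload by Moodle release.
# The field names are assumptions, not confirmed API documentation.

import json

def plugins_for_release(payload: dict, release: str) -> list:
    """Return component names of plugins with at least one version
    declaring support for the given Moodle release (e.g. "2.7")."""
    matches = []
    for plugin in payload.get("plugins", []):
        for version in plugin.get("versions", []):
            supported = {m.get("release") for m in version.get("supportedmoodles", [])}
            if release in supported:
                matches.append(plugin["component"])
                break  # one supporting version is enough
    return matches

# Sample payload in the assumed shape (not real API output).
sample = json.loads("""
{"plugins": [
  {"component": "block_course_menu",
   "versions": [{"supportedmoodles": [{"release": "2.7"}, {"release": "2.8"}]}]},
  {"component": "mod_certificate",
   "versions": [{"supportedmoodles": [{"release": "2.9"}]}]}
]}
""")

print(plugins_for_release(sample, "2.7"))  # ['block_course_menu']
```

With a real payload this would answer the "all plugins available for 2.7, 2.8 and 2.9" question with three calls, at the cost of downloading the whole list first.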
I have a few questions - as an active reviewer.
First, what obligations do developers have to either proactively work on or resolve the found issues?
Second, what happens in the case where one auditor finds a plugin "passing" but another auditor finds the same version (or a later one) to have issues that were overlooked?
Third, for plugins which use subscription services (Poodll 3, Kaltura, etc.), do we have documentation to explain what the objective of POET is so that they can "opt in" to have their code reviewed?
Do we have (or will we have) formal material to give to developers regarding who POET is and what our objectives are?
To add to the review, we also need a section for plugins which acquire data from internet sources: does the plugin check for a connection first, has the administrator authorized the connection, is the source secure, does the source have mitigation for MITM attacks, and does the plugin validate the data in case it contains malicious or incomplete content? I am sure there are more items than this.
A number of plugins, such as Banner/LMB (even Configurable Reports), make calls to outside data sources with or without the end user's knowledge, and without being able to 100% trust the source.
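A minimal sketch of the kind of client-side precautions those review questions point at: enforce HTTPS, restrict hosts to an administrator-approved list, bound the response size, and fail closed on anything unexpected. The helper names are hypothetical and not taken from any Moodle plugin; real plugins would implement this in PHP (typically via Moodle's curl wrapper), but the checks translate directly.

```python
# Hypothetical sketch of "safe remote fetch" checks a reviewer might look for.
# Names and limits are illustrative, not from any real plugin.

import urllib.parse
import urllib.request

MAX_BYTES = 1_000_000  # refuse oversized responses rather than trust the source

def check_source_url(url: str, allowed_hosts: set) -> None:
    """Fail closed: only HTTPS, and only hosts the administrator authorized."""
    parts = urllib.parse.urlparse(url)
    if parts.scheme != "https":
        raise ValueError("refusing non-HTTPS source: %s" % url)
    if parts.hostname not in allowed_hosts:
        raise ValueError("host not authorized by administrator: %s" % parts.hostname)

def fetch_remote_data(url: str, allowed_hosts: set) -> bytes:
    """Fetch with certificate verification (urllib's default for HTTPS),
    a timeout, and a hard size cap; callers must still validate the payload."""
    check_source_url(url, allowed_hosts)
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = resp.read(MAX_BYTES + 1)
    if len(data) > MAX_BYTES:
        raise ValueError("response larger than %d bytes" % MAX_BYTES)
    return data
```

The URL check can be unit-tested offline; the MITM concern is addressed by the HTTPS-only rule combined with the certificate verification urllib performs by default, and the last question on the list (malicious or incomplete payloads) is left to the caller's own validation.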
This is open source.
Developers have no actual obligation to fix anything. However, many developers will surprise you, and will welcome your feedback and fix the issues you identify faster than you would believe.
One can also turn it around: what obligation do POET members, who are mostly commercial companies making money off the freely given work of the plugin developers, have to fix any issues they find, and contribute the fixes back?
@Tim: I'm not entirely sure the second part of your post contributes anything. It's time we get past the 'profit is bad' worries. If people hire developers, they need to pay them, and if people want to provide stuff under GPL they are free to do so. Worry about other aspects of 'making money from GPL' surely is a debate from last decade, even last century.
@Dustin: re your question 'Do we (or will we have) formal material to give to developers regarding who POET is and what our objectives are?' check out the POET group website: http://poetgroup.org/
If there are still questions, then ask here I guess.
Also, regarding your question "First, what obligations do developers have to either proactively work or resolve the found issues": Tim is right - none. They have no obligation to do anything.
Just out of curiosity Dustin, what reviews have you done? I couldn't find a way to find any reviews. I also did my first review tonight. Quite pleased to find out you can edit them!!
"what obligation do POET members, who are mostly commercial companies making money of the freely given work of the plugin developers, have to fix any issues they find, and contribute the fixes back?"
None at all. In the same way that if one day I see Mr Dougiamas driving a new Bentley, wearing a top hat and monocle and smoking a fat cigar, surrounded by super models*, I won't feel short-changed if my code/work has contributed to that success (not that it would have much anyway).
*and possibly stroking a white persian cat.
You said, "I couldn't find a way to find any reviews."
There is a way, but it's well hidden. When you are on the Moodle plugins database page, in the "Administration" menu at the bottom right, there is a "Plugins administration" / "Reports" submenu. In there, you will see a "Reviewed plugins" report. Here is the direct URL: https://moodle.org/plugins/report/index.php?report=reviews
Hi Tim -
You asked, "what obligation do POET members, who are mostly commercial companies making money of the freely given work of the plugin developers, have to fix any issues they find, and contribute the fixes back?"
I'm not really sure I understand the point of this question. To answer the part about fixing issues, the first phase of this project is to come up with a publicly visible testing and review process, ideally that is integrated into moodle.org's plugin database system. It would really be up to the developers of a plugin with identified issues as to whether or not they choose to use the information and fix the problems. Down the road, if we can get enough support, POET could evolve to provide developer help to fix and improve plugins, but this is not scheduled now.
This part of your statement - "who are mostly commercial companies making money of the freely given work of the plugin developers" - confuses me. Aren't Moodle plugins freely available to any Moodle users, including schools who charge for admissions, for-profit training companies, commercial organizations, etc? I'm not sure how this plays into the question you asked?
Hi Dustin -
You asked: "First, what obligations do developers have to either proactively work or resolve the found issues."
They don't. That's part of the problem, which is why we want to have publicly visible tests that can show the current state of any of the plugin submissions. We're working on those tests now, and are hoping to get access to the ones used by the plugins database currently, so we can minimize duplication. We are looking into a possible "certification" category that would involve an obligation on the part of the developer to respond and maintain their plugin for a defined time period. We have not worked out the mechanism of how that would work yet.
You said: "Second, in the case where an auditor finds a plugin "passing" however another other auditors finds the same (or a later version) to have issues that were overlooked."
We want to ensure that all of the tests and review requirements are applied to all versions of the plugin in the same way. These tests and review processes will be continuously improved as well. So for your case, the old version would show the tests it passed; the new version would show its tests as well.
You said: "Third, for plugins which use subscription services (Poodll 3, Kalthura, etc) do we have documentation to explain what the objective of POET is so that they can 'opt-in' to have their code reviewed."
I'm not sure I understand the question. The objective of POET is the same for these as for any other plugin: to help create a standard of quality for all plugins available to Moodle. Plugins that require subscription services are still plugins.
You asked: "Do we (or will we have) formal material to give to developers regarding what who POET is and what our objectives are?"
Yes. And again, our goal is to have what we build be part of what is used at Moodle.
Also, thank you for your suggestions for how to review and approve plugins that work beyond Moodle.
One of the issues I have faced as an independent plugin developer is that many Moodle partners make it difficult (i.e. expensive) to install a new plugin. A big part of the expense is the need for a code review. Figures of US$10,000 amongst the big partners are not unknown, though most partners are in the range of US$800. That's just based on my experience. Feel free to reality-check me.
The need for a code review is also an obstacle to a premium Moodle plugins database, because the plugins definitely need a robust review, but who does the review, who grants the "certification", and who pays for it, etc.?
So I think the idea of a POET-backed independent plugin review platform, and one not limited to plugins in the Moodle plugins database, is great. Can I take it that the idea is that if a plugin is reviewed by Partner A according to a standardised test, then partners B, C and D no longer need to do their own code review? That would be great. But it would all fall down, as far as I am concerned, if each partner still insisted on their own code review and associated costs.
The other thing is that it might be nice to give the developer a heads up in advance of a review going out. Nobody really wants to be shamed for something they are doing for free. If the developer is charging, well, it might be different, but I think it would still be a nice courtesy.
Hi Justin -
One of the primary goals is to come up with a system "trusted" enough so that anyone can use those reviews and tests in place of their own. The organizations that originated this did it for exactly the reason you stated: so that they would not have to spend extra resources approving plugins.
To achieve that, we need to come up with open, visible, trusted processes and results. If we succeed, then anyone can use them and trust them.
As far as giving developers a heads up, again, we're trying to be as open as we can. We would like the automated tests to be as complete as they can be, and then made available for developers to test against on their own, before submitting their work to the database. That way, they can get their work into the state they want beforehand. But once something is being distributed to the community, it is important that the distribution has open, visible tests and reviews.
We do want to be able to send those results to the developer, and down the road, even offer to provide fixes.
Hope this helps.