As a user of Moodle for a few years now, I find the biggest gap in documentation (and particularly training at my institution) is in the area of how best to use Moodle to add value to my instruction.
In software engineering, features have been modeled hierarchically according to how much value they add. At the lowest level, logging in to the system is a necessary feature, but it hardly adds value for the user. Similarly, most users figure out the login process on their own and don't require much training.
A great way to judge if a software feature is adding value is the so-called "manager" test: If you told your manager you did feature X a certain number of times, would she be impressed? Logging in 50 times a day isn't going to get you a promotion.
In Moodle, there is plenty of documentation on how to accomplish lower-level goals. Take the Quiz module as an example: there are tons of resources showing users how to create a question, build a quiz, monitor its statistics, etc. Moodle is feature-rich at this level--there are lots of activities. But just as with logging in, it's not certain that I'm adding value by creating 200 questions, or 200 quizzes for that matter. Quizzes alone won't let me attain my higher-level goals as an instructor.
That said, there is some help for higher-level goals around quizzes. For example, https://docs.moodle.org/29/en/Effective_quiz_practices mentions practices that add value (although the page would be stronger if its assertions cited references from pedagogical research, much as Wikipedia requires).
To illustrate my point about goals being at different levels, I'll borrow the goal-level model from Alistair Cockburn, an expert in software requirements. I've placed part of the quiz example I cited onto this model:
So, in short, it would be useful for instructors (and trainers) if Moodle's documentation were organized to always tie the "user goals" to some "summary goals." I admit I've mostly used assignments and quizzes, and I've shied away from the other Moodle activities because it's not clear to me how they add value.
This model would also help in setting priorities when people ask for new features (or improvements to clunky old ones): an argument could (should) be made about which summary goal a new feature supports. In other words, prioritize new features that add value for instructors.
Of course my perspective is (selfishly) one of an instructor. The same model could (should) be done for learners.
Does it make sense to create the same hierarchy in the Moodle docs (even though the Summary Goals level might be mostly empty for now)?
I found some higher-level help for Peer Evaluations (which I would see as the higher-level goal of the Workshop activity):
- This comes from a best-practices guide for Blackboard: http://www.unh.edu/it/old/media/blackboard/bb91-docs/BestPracticesforUsingSelfandPeerAssessments.pdf
- Peer assessments in online learning: http://www.brown.edu/about/administration/sheridan-center/teaching-learning/course-design/learning-technology/peer-assessment-online
- Peer evaluations for English (essays): http://www.nclrc.org/essentials/assessing/peereval.htm
- In my field (software engineering), so-called artifact checklists are often used for evaluation. The criteria are usually fairly objective and could be used in a peer assessment: http://www.site.uottawa.ca/~tcl/seg2105/coursenotes/ClassDiagramChecklist.html
- Finally, several references mention that it's good to walk through an example peer assessment before students assess each other. It's a good way to make sure the assessment criteria are clear and understood by all.
The Moodle FAQ for workshops mentions some of these points (e.g., using examples), but it is mostly about the technical configuration of setting up the activity in Moodle rather than the content needed to make the activity good.