The technical documentation is now available at http://docs.moodle.org/dev/Outcomes_Technical_Specification. We're looking forward to presenting at the upcoming Moodle Quarterly Developer Meeting!
Future major features
Moodlerooms' proposed Outcomes changes
Good to see you folks moving ahead with speed, but I must also say I am concerned about two things I see in the Outcomes Technical Specification: 1) Outcomes no longer appearing in the gradebook, and 2) Outcomes no longer supporting scales. Perhaps I don't understand where this is going, but it seems likely to me that these two changes will make the Outcomes module virtually unusable for K-12 schools in the U.S.
The move is on in a very, very big way toward Standards Based Reporting and Assessment in the U.S., but the usual approach is to assess and report student achievement on a scale - typically something like: 1 Not Present; 2 Area of Concern; 3 Progressing; 4 Mastered. I don't see how a pass/fail approach to data collection can support this model, nor how pass/fail can be used to meaningfully report student performance on essays or other performance-based assessments. Also, in the end the teacher's assessment of a student's performance on such a scale (perhaps by looking at the mean, mode, median, most recent, or a combination of these) IS the grade in the gradebook. I'm not seeing how outcome tracking can be separated from the gradebook.
I am attaching a short article by Thomas Guskey (one of the leaders in SBG in the U.S.) which provides a basic example of how Standards Based Reporting and Assessment is being implemented in U.S. schools.
I realize that standards or criteria based grading is being implemented differently around the world, but would hope the new Outcomes module will not exclude K-12 teachers and schools in the U.S.
All the best,
Doug
"I [...] would hope the new Outcomes module will not exclude K-12 teachers and schools in the U.S."
... or in any other country in the world.
I think you are confusing outcome marking with progress tracking. We are not trying to answer at which point the student ultimately mastered the criteria, just whether they have. The overwhelming feedback we have received (including from US K-12 institutions) is to remove the dependency on scales and the display of duplicated outcomes in the gradebook in order to reduce the confusion they cause. We are not going to force the instructor to assess each student for each graded activity for each related outcome (for each attempt!) as is currently done with Outcomes. That simply inundates the gradebook with a mass of data that is only marginally useful for monitoring progress, when that data could be collected separately.
The instructor will know whether the student has achieved an outcome, so we give them the ability to mark that achievement and give them the applicable data to make the assessment. Using the definitions from your article, the "Outcome Attempt" would translate to the Product criteria, which the instructor could access from the marking screens. They would also have access to resource views against supplementary materials or attempts against ungraded activities, which could be used as Process criteria. Finally, we have recommendation plugins that can analyze the student's attempts and make an informed recommendation about his or her progress towards the outcome. And at the end of the course, each student is marked as having Met or Not Met an outcome, which they will carry with them across the site.
So, I think that we are all shooting for the same goal. I'll add to Kris's comment to describe a little bit more of the background of this, and ways to accomplish what you are trying to accomplish.
First of all, the reason that we went the way that we did was that, while you could do grading as you describe in the old outcomes system, it had some real limitations, specifically around tying outcomes to quiz questions. There are many times where a quiz question ties tightly to one outcome, and whether or not the student gets that question (or a group of like-mapped questions) correct is a very solid indicator of competence. We have to balance this with the more subjective nature of other types of grading, such as assignments, where a paper could cover more than one outcome, and the grading is more subjective. For this, we are doing tight integration with rubrics (advanced grading). So, in the example that you mentioned, you could create a rubric that was mapped against outcomes and had the scale that you mentioned (1 Not Present; 2 Area of Concern; 3 Progressing; 4 Mastered). Upon grading that activity, you would have the data that you need. This is different from a quiz question, where it is a much more binary condition. We want to make sure to support both paths.
Additionally, the ability to do coverage reporting, as well as "Completion" reporting, where you can see whether or not your students have read or participated in the background materials tied to specific Outcomes should be a huge win for remediation of possible issues.
As to whether or not to display them in the gradebook, I think there are arguments for both, and I'd love to hear more dialog on it. Currently, the Outcomes link in the Settings area takes you to the gradebook's outcomes page, but you can also get there through the gradebook itself. We could potentially leave both entry points. I know that Kris and Mark were looking at the technical hooks needed and whether we should leave both.
By the way, I've made a bunch of updates to the spec over the weekend. Take a look. I'm going to make a separate post with notes about the changes and a link to the change log. Thanks for being so active in this thread.
If you'd like to have a more one-on-one discussion with me about some of these issues, let me know.
We've made a couple of notable improvements to the technical design which bear mention.
1. Instead of mapping the outcome/content association to a course, we map it directly to an activity. We realized we were making it harder on ourselves by giving the system the flexibility to map to anything in the course. While that might be nice in theory, in actuality a user's performance will be tied to Activity Completion and Grades, concepts that are fundamental to activities but not other plugins/objects. This works out quite nicely since questions and advanced grading criteria both relate to an activity (when implemented), and we get a major bonus of identifying the content item's relevant context within a course - it's no longer Question A in Course B. It's Question A, part of the Final Exam (or Practice Exam) in Course B. The Functional Specs are currently being updated to reflect this.
2. Outcome attempts now capture additional data points for mingrade, maxgrade, and rawgrade. This isn't a major change, we are simply updating the spec to capture what we said we would to make it more useful for instructors and recommendation plugins.
You can see the specific changes made by clicking on the following link: http://docs.moodle.org/dev/index.php?title=Outcomes_Technical_Specification&action=historysubmit&diff=37983&oldid=37930
Hi Kris and Phil,
Sorry about the tardiness of my response. I do want to thank you for taking time to explain the philosophical and actual approaches being pursued in this new Outcomes module. I continue to be very excited about what a tremendous contribution it will make to student learning.
I've spent a lot of time considering the proposed data structure and didn't want to reply until I'd had a chance to really think through its implications. Although, in the end, I still think many schools will want to report student progress and final standard mastery on a scale rather than a simple yes or no basis, the granularity with which you are capturing performance data is what is really important. If the data is there (and God bless you for making it so), then those who want to scale it for reporting can do so via a plugin. Great work. This is really exciting.
This granularity of data capture will also allow for the creation of plugins that report student progress with a degree of specificity that will truly provide assessment for learning (what teachers and students need to guide student progress in real time) as well as final assessment of learning (what parents and universities care about). I can't wait to start using it (and probably writing a plugin or two).
Keep up the great work!
Kris asked my thoughts about web services.
During the dev meeting I initially misunderstood and thought people would want to distribute/publish outcomes (either with MNet or with the Moodle hub repository). That's why I suggested implementing web services. From the specs, it is an import/export feature for admins, so there is no need for web services on that account.
However it still would be nice to have some web service functions for two reasons:
* it shows where your lib.php API went wrong. When you think as a developer of a teacher/student app or admin tool, you sometimes find out that your API functions do either too much or not enough. Issues related to parameters, capabilities, return values, overuse of Moodle forms, etc. - the web service functions will expose the problems of your API. I believe you'll save quite some maintenance time and some future development time if you write web services from the start.
* currently we are pushing for web services to make Moodle more open.
If you have some time to create some web services, I wrote this documentation on how to contribute a web service to core. I'll be happy to answer any specific questions you have regarding web services.
The technical documentation has been updated with some clarifications and refinements. The main highlights are that the tables have been renamed for clarity and an ERD-like diagram has been added to show the relations between the new outcome tables and core tables.
I went through the wiki and this thread again, with a focus on the technical specification. I have a couple of comments and questions in random order (as they came to my mind).
Firstly, please rename pages like "Capabilities and Roles", "Migration and Technical Issues", "Specifically Excluded Use Cases" and similar in a way that makes it clear they are related to this project. Using the prefix "Outcomes Specification/" (including the trailing slash) is what I would do. Without it, they just pollute the dev wiki and may confuse developers looking for documentation (imagine a developer looking for dev docs on the capabilities and roles system in Moodle, for example).
I don't like the userid column in the outcome_sets table. If it should be a foreign key to the user table, it can't have 'default 0', imho. If the value is to be optional (as I understood the description of the field), just keep it declared as the foreign key with the NULL value allowed. I can't see any reason for 'default 0' (especially for foreign keys). The same may apply to other columns.
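The objection to 'default 0' on an optional foreign key can be illustrated with a minimal sketch. This is SQLite run from Python, not Moodle's XMLDB, and the column set is trimmed down to the fields mentioned in the thread; the example rows are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enable FK enforcement in SQLite
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY)")

# userid is an *optional* foreign key: declare it nullable rather than
# 'default 0', because no user row with id 0 exists to satisfy the constraint.
conn.execute("""
    CREATE TABLE outcome_sets (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        userid INTEGER NULL REFERENCES user (id)
    )
""")

conn.execute("INSERT INTO user (id) VALUES (42)")
conn.execute("INSERT INTO outcome_sets (name, userid) VALUES ('Site-wide set', NULL)")
conn.execute("INSERT INTO outcome_sets (name, userid) VALUES ('Personal set', 42)")

# With enforcement on, a dangling default of 0 would simply be rejected:
try:
    conn.execute("INSERT INTO outcome_sets (name, userid) VALUES ('Broken', 0)")
except sqlite3.IntegrityError:
    print("userid=0 violates the foreign key")
```

NULL cleanly expresses "no owning user", while 0 is a dangling reference that every consumer of the table would have to special-case.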
I would like to hear more about your concept of the outcome_areas table. 'Areas' are used in several subsystems in Moodle core, such as the Files API or advanced grading methods. They represent a sort of unified system for locating and addressing places in Moodle that other components can hook into (as in: a file is attached to a post, an image is embedded in a text field, an advanced grading form is used for assessing submissions in the Assignment module, etc.). Experience has proved that what works is what I call "the holy four" (I wanted to call it the "fantastic four" but some puny film studios trademarked it).
In short, it is good to use the combination of contextid + componentname + areaname + optional itemid to address hookable places in Moodle. So if there is a file attached to the student's submission in the Workshop module, we can easily associate the file with the context (contextid of the workshop module instance), componentname ("mod_workshop"), area ("submission_attachment") and itemid (the submissionid). Similarly, the rubric form used to assess submissions in the assignment module is hooked to an area identified by context (context of the assignment module instance), component ("mod_assign") and area ("submissions").
Looking at your ERD, your tables outcome_areas and outcome_used_areas do not fit my mental model well. For one thing, I'm missing the contextid there. Could you please provide an example/illustration of these tables filled with some data and explain what the data mean? Perhaps some pseudo-SQL queries?
WRT Phil's answer above, I have not found any detailed information on how you plan to integrate Outcomes with advanced grading methods. The spec just claims that there will be an API for that. But what will that API look like, at least roughly?
What I would expect is that every advanced grading form is able to be associated with zero, one or many outcomes. When a form is reused in another activity or shared as a template, these mappings are preserved by default unless they are explicitly removed (note that advanced grading forms are copied for each individual activity, which is different from how Questions are implemented).
How would such association be stored in the database?
First, thank you so much for taking the time to read over the wiki docs. I know it takes some time to digest it all, so it is very much appreciated.
DM1 - OK, we can do that.
DM2 - Sounds good.
DM3 - That's fine, null will be used instead of zero. Please see change here. Let me know if I missed any.
DM4 - I was also trying to stick a contextid into the outcome_areas table, but it wasn't making sense for the associations that we are making. E.g., what contextid would you use for a question (remember, questions can exist at the site or course level)? The same question applies to a grading form. If you can think of a way to include it that would be beneficial, I'm very much interested.
Example data for a grading form would be component=gradingform_rubric, area=criteria, itemid=gradingform_rubric_criteria.id, and this would represent an outcome (or outcomes) being associated with a rubric criterion. Once the grading form was associated with an activity, a record would be added to outcome_used_areas with the cmid of the activity in question and the id of our record from outcome_areas. The outcome_used_areas table is important for coverage reports, and because some content (like questions) can be used in multiple activities and we want to be able to distinguish between those uses.
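As a concrete illustration of the example data above, the two tables might be populated like this. This is a simplified SQLite sketch run from Python: the columns are only those mentioned in this thread, the ids are invented, and one area record is deliberately linked to two course modules to show why outcome_used_areas is needed (in practice that kind of sharing applies to questions rather than rubrics, which are copied per activity).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Simplified sketches of the two tables under discussion.
    CREATE TABLE outcome_areas (
        id INTEGER PRIMARY KEY,
        component TEXT NOT NULL,  -- e.g. 'gradingform_rubric'
        area TEXT NOT NULL,       -- e.g. 'criteria'
        itemid INTEGER NOT NULL   -- e.g. gradingform_rubric_criteria.id
    );
    CREATE TABLE outcome_used_areas (
        id INTEGER PRIMARY KEY,
        outcomeareaid INTEGER NOT NULL REFERENCES outcome_areas (id),
        cmid INTEGER NOT NULL     -- course module where the content is used
    );

    -- An outcome association hooked to one rubric criterion...
    INSERT INTO outcome_areas (id, component, area, itemid)
        VALUES (1, 'gradingform_rubric', 'criteria', 77);
    -- ...and that content used by two different course modules.
    INSERT INTO outcome_used_areas (outcomeareaid, cmid) VALUES (1, 501);
    INSERT INTO outcome_used_areas (outcomeareaid, cmid) VALUES (1, 502);
""")

# A coverage-style query: which course modules use this outcome area?
uses = conn.execute("""
    SELECT ua.cmid
      FROM outcome_areas a
      JOIN outcome_used_areas ua ON ua.outcomeareaid = a.id
     WHERE a.component = 'gradingform_rubric' AND a.area = 'criteria'
     ORDER BY ua.cmid
""").fetchall()
print(uses)  # [(501,), (502,)]
```

The join is what lets a coverage report distinguish "Question A in the Final Exam" from "Question A in the Practice Exam", per the point made earlier in the thread.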
DM5 - I don't yet know how the API will look, but the idea is that it'll be designed to accomplish the goals I have laid out, and more. I assure you it will be flexible and reusable, and it will be developed out in the open for all to review and critique.
What you are expecting is exactly what I was thinking. Since grading forms are loosely defined, meaning the plugin can do whatever it likes, the integration of outcomes would not be automatic. So it is up to the grading form to determine how it would like to use outcomes. For example, what we would like to do for rubric is to associate outcomes at the per-criterion level. Another grading form may want to do it via another mechanism, or just allow a single association to the grading form as a whole. Please see my response to DM4 for example data. If a grading form template is used, then the outcome associations would be copied along with the rest of the grading form.
DM4 - every question clearly belongs to a context: question.category -> question_categories.id; question_categories.contextid -> context.id.
So, there is an obvious value to add to outcome_areas.contextid for questions. That is useful metadata. It is not necessary for anything else in your spec, but I still think it should be added. (It may be useful for future reporting or maintenance features you have not thought about yet and that won't be implemented in the first version.)
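The chain above (question.category -> question_categories.id, question_categories.contextid -> context.id) can be written as a straightforward join. This is a sketch over heavily simplified stand-ins for the core tables, run in SQLite from Python; the ids are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Minimal stand-ins for the core tables involved.
    CREATE TABLE context (id INTEGER PRIMARY KEY);
    CREATE TABLE question_categories (
        id INTEGER PRIMARY KEY,
        contextid INTEGER NOT NULL REFERENCES context (id)
    );
    CREATE TABLE question (
        id INTEGER PRIMARY KEY,
        category INTEGER NOT NULL REFERENCES question_categories (id)
    );

    INSERT INTO context VALUES (30);                 -- e.g. a course context
    INSERT INTO question_categories VALUES (7, 30);  -- category in that context
    INSERT INTO question VALUES (1001, 7);           -- question in that category
""")

# Resolve the context a question belongs to by following the chain:
# question.category -> question_categories.id -> question_categories.contextid
row = conn.execute("""
    SELECT ctx.id
      FROM question q
      JOIN question_categories qc ON qc.id = q.category
      JOIN context ctx ON ctx.id = qc.contextid
     WHERE q.id = 1001
""").fetchone()
print(row[0])  # 30
```

Since the context is always recoverable this way, storing it in outcome_areas would be denormalized convenience metadata rather than a structural necessity, which matches the point being made.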